# Mean independent and correlated variables

Given two real random variables $$X$$ and $$Y$$, we say that:

1. $$X$$ and $$Y$$ are independent if the events $$\{X \le x\}$$ and $$\{Y \le y\}$$ are independent for any $$x,y$$,
2. $$X$$ is mean independent of $$Y$$ if the conditional mean $$\mathbb{E}(X | Y=y)$$ equals the (unconditional) mean $$\mathbb{E}(X)$$ for all $$y$$ such that the probability that $$Y = y$$ is not zero,
3. $$X$$ and $$Y$$ are uncorrelated if $$\mathbb{E}(XY)=\mathbb{E}(X)\mathbb{E}(Y)$$.

Assuming the necessary integrability hypotheses, we have the implications $$1. \implies 2. \implies 3.$$.
The second implication follows from mean independence together with the law of iterated expectations: $$\mathbb{E}(XY) = \mathbb{E}\big(\mathbb{E}(XY|Y)\big) = \mathbb{E}\big(\mathbb{E}(X|Y)\,Y\big) = \mathbb{E}\big(\mathbb{E}(X)\,Y\big) = \mathbb{E}(X)\mathbb{E}(Y).$$

Yet neither of the converses of these two implications holds.

## Mean-independence without independence

Let $$\theta \sim \mathrm{Unif}(0,2\pi)$$ and $$(X,Y)=\big(\cos(\theta),\sin(\theta)\big)$$.

Then for all $$y \in [-1,1]$$, conditionally on $$Y=y$$, $$X$$ follows a uniform distribution on the two-point set $$\{-\sqrt{1-y^2},\sqrt{1-y^2}\}$$, so: $\mathbb{E}(X|Y=y)=0=\mathbb{E}(X).$ Likewise, we have $$\mathbb{E}(Y|X) = 0 = \mathbb{E}(Y)$$.

Yet $$X$$ and $$Y$$ are not independent. Indeed, $$\mathbb{P}(X>0.75)>0$$  and $$\mathbb{P}(Y>0.75) > 0$$, but  $$\mathbb{P}(X>0.75, Y>0.75) = 0$$ because $$X^2+Y^2 = 1$$ and $$0.75^2 + 0.75^2 > 1$$.
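Both properties can be illustrated by a quick Monte Carlo sketch; the Python/NumPy snippet below is an illustration added here, not part of the original argument. Conditional means of $$X$$ on thin bands $$Y \approx y$$ hover around $$0$$, while the joint event $$\{X>0.75,\ Y>0.75\}$$ never occurs.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
x, y = np.cos(theta), np.sin(theta)

# Mean independence: E(X | Y in a thin band around y) is close to 0.
for y0 in (-0.5, 0.0, 0.9):
    band = np.abs(y - y0) < 0.01
    print(y0, x[band].mean())  # all close to 0

# Not independent: both marginal events have positive probability,
# but x^2 + y^2 = 1 forbids both coordinates exceeding 0.75 at once.
print((x > 0.75).mean(), (y > 0.75).mean())  # both positive
print(((x > 0.75) & (y > 0.75)).sum())       # 0 occurrences
```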

## Uncorrelation without mean-independence

A simple counterexample is $$(X,Y)$$ uniformly distributed on the vertices of a regular polygon centered at the origin that is not symmetric with respect to either axis.

For example, let $$(X, Y)$$ be uniformly distributed on $\big\{(1,3), (-3,1), (-1,-3), (3,-1)\big\}.$

Then $$\mathbb{E}(XY) = 0$$ and $$\mathbb{E}(X)=\mathbb{E}(Y)=0$$, so $$X$$ and $$Y$$ are uncorrelated.

Yet $$\mathbb{E}(X|Y=1) = -3$$ and $$\mathbb{E}(X|Y=3)=1$$, so we don’t have $$\mathbb{E}(X|Y) = \mathbb{E}(X)$$. Likewise, we don’t have $$\mathbb{E}(Y|X) = \mathbb{E}(Y)$$.
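The four-point distribution is small enough to check by exhaustive enumeration. The following Python snippet (an illustration added here, using exact rational arithmetic) recomputes the moments and the conditional means:

```python
from fractions import Fraction

# Uniform distribution on the four vertices.
points = [(1, 3), (-3, 1), (-1, -3), (3, -1)]
p = Fraction(1, 4)

E_X  = sum(p * x for x, _ in points)      # 0
E_Y  = sum(p * y for _, y in points)      # 0
E_XY = sum(p * x * y for x, y in points)  # (3 - 3 + 3 - 3)/4 = 0: uncorrelated

# Conditional means: each value of Y is attained at a single vertex,
# so E(X | Y=y) is just the x-coordinate of that vertex.
E_X_given_Y = {y: x for x, y in points}
print(E_X, E_Y, E_XY)                      # 0 0 0
print(E_X_given_Y[1], E_X_given_Y[3])      # -3 1, both != E(X) = 0
```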

# Determinacy of random variables

The question of determinacy (or uniqueness) in the moment problem is whether the moments of a real-valued random variable uniquely determine its distribution. If the random variable is a.s. bounded, uniqueness is a consequence of the Weierstrass approximation theorem.

For unbounded random variables, the moments need not determine the distribution. Carleman’s condition states that if two positive random variables $$X, Y$$ have the same finite moments of all orders and $$\sum\limits_{n \ge 1} \frac{1}{\sqrt[2n]{\mathbb{E}(X^n)}} = +\infty$$, then $$X$$ and $$Y$$ have the same distribution. In this article we describe random variables with different laws but sharing the same moments, on $$\mathbb R_+$$ and on $$\mathbb N$$.
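To get a feel for Carleman’s condition, consider the standard exponential distribution, whose $$n$$-th moment is $$n!$$; this example and the code are an illustration added here, not from the original article. The terms $$1/(n!)^{1/2n}$$ decay like $$\sqrt{e/n}$$, so the series diverges and the exponential distribution is moment-determinate:

```python
import math

# Carleman terms 1 / E(X^n)^(1/(2n)) for X ~ Exp(1), where E(X^n) = n!.
# Computed via lgamma to avoid overflow: (n!)^(-1/(2n)) = exp(-lgamma(n+1)/(2n)).
def carleman_term(n):
    return math.exp(-math.lgamma(n + 1) / (2 * n))

for n in (1, 10, 100, 1000, 10000):
    print(n, carleman_term(n))  # decays roughly like sqrt(e/n)
```

Since $$\sum 1/\sqrt{n}$$ diverges, the Carleman sum diverges as well.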

### Continuous case on $$\mathbb{R}_+$$

In the article “A non-zero function orthogonal to all polynomials”, we described a function $$f$$ orthogonal to all polynomials, in the sense that $\forall k \ge 0,\ \displaystyle{\int_0^{+\infty}} x^k f(x)\,dx = 0. \tag{O}$

This function was $$f(u) = \sin\big(u^{\frac{1}{4}}\big)e^{-u^{\frac{1}{4}}}$$. This inspires us to define random variables $$U$$ and $$V$$ with values in $$\mathbb R_+$$ by the densities: $\begin{cases} f_U(u) &= \frac{1}{24}e^{-u^{1/4}}\\ f_V(u) &= \frac{1}{24}e^{-u^{1/4}} \big( 1 + \sin(u^{1/4})\big) \end{cases}$

Both functions are nonnegative. A direct computation gives $$\displaystyle{\int_0^{+\infty}} f_U = 1$$, and since $$f$$ is orthogonal to the constant function equal to one (case $$k=0$$ of $$(\mathrm O)$$), $$\displaystyle{\int_0^{+\infty}} f_V = \displaystyle{\int_0^{+\infty}} f_U = 1$$: both are indeed densities. One can verify that $$U$$ and $$V$$ have moments of all orders, and $$\mathbb{E}(U^k) = \mathbb{E}(V^k)$$ for all $$k \in \mathbb N$$ by the orthogonality relation $$(\mathrm O)$$ above, since $$\mathbb{E}(V^k) - \mathbb{E}(U^k) = \frac{1}{24}\int_0^{+\infty} u^k f(u)\,du = 0$$.
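The equality of moments can also be checked numerically. The sketch below (added here as an illustration, using only the Python standard library and a hand-rolled Simpson rule) applies the substitution $$u = t^4$$, $$du = 4t^3\,dt$$, under which the $$k$$-th moment becomes $$\frac{1}{6}\int_0^{+\infty} t^{4k+3} e^{-t}\big(1 + \sin t\big)\,dt$$ for $$V$$, and the same integral without the $$\sin t$$ term for $$U$$:

```python
import math

def integral(g, a=0.0, b=120.0, n=100_000):
    """Composite Simpson's rule on [a, b]; the e^{-t} tail beyond b is negligible."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def moment(k, extra):
    # k-th moment after the substitution u = t^4:
    # (1/6) * int_0^inf t^(4k+3) e^(-t) (1 + extra(t)) dt
    return integral(lambda t: t ** (4 * k + 3) * math.exp(-t) * (1.0 + extra(t)) / 6.0)

for k in range(3):
    m_U = moment(k, lambda t: 0.0)           # density f_U
    m_V = moment(k, lambda t: math.sin(t))   # density f_V
    print(k, m_U, m_V)  # equal up to quadrature error
```

For $$U$$ the moments come out as $$\Gamma(4k+4)/6$$ (in particular $$1$$ for $$k=0$$), and the $$\sin$$ contribution vanishes in each case.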

### Discrete case on $$\mathbb N$$

In this section we define two random variables $$X$$ and $$Y$$ with values in $$\mathbb N$$ having the same moments. Let’s take an integer $$q \ge 2$$ and set for all $$n \in \mathbb{N}$$: $\begin{cases} \mathbb{P}(X=q^n) &= e^{-q}q^n \cdot \frac{1}{n!} \\ \mathbb{P}(Y=q^n) &= e^{-q}q^n\left(\frac{1}{n!} + \frac{(-1)^n}{(q-1)(q^2-1)\cdots (q^n-1)}\right) \end{cases}$

Both quantities are nonnegative, and for any $$k \ge 0$$, both $$\mathbb{P}(X=q^n)$$ and $$\mathbb{P}(Y=q^n)$$ are $$O_{n \to \infty}\left(\frac{1}{q^{kn}}\right)$$, so $$X$$ and $$Y$$ have finite moments of all orders. We are going to prove that for all $$k \ge 1$$, $$u_k = \sum \limits_{n=0}^{+\infty} \frac{(-1)^n q^{kn}}{(q-1)(q^2-1)\cdots (q^n-1)}$$ is equal to $$0$$; this guarantees both that the masses of $$Y$$ sum to one and that $$X$$ and $$Y$$ share the same moments.
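Before the proof, the claim can be probed numerically for a particular value of $$q$$; the snippet below (an illustration added here, with $$q = 2$$ chosen arbitrarily) computes partial sums of $$u_k$$ in exact rational arithmetic and shows them collapsing towards $$0$$:

```python
from fractions import Fraction

def partial_sums(q, k, N):
    """Partial sums of u_k = sum_{n>=0} (-1)^n q^(kn) / prod_{j=1}^n (q^j - 1)."""
    s, denom, out = Fraction(0), Fraction(1), []
    for n in range(N + 1):
        if n > 0:
            denom *= q ** n - 1   # extend the product (q-1)(q^2-1)...(q^n-1)
        s += Fraction((-1) ** n * q ** (k * n)) / denom
        out.append(s)
    return out

# The denominators grow like q^(n(n+1)/2), much faster than q^(kn),
# so the alternating partial sums shrink extremely fast.
for k in (1, 2):
    sums = partial_sums(2, k, 25)
    print(k, [float(x) for x in sums[-3:]])  # vanishingly small
```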
