All posts by Jean-Pierre Merx

Separability of a vector space and its dual

Let’s recall that a topological space is separable when it contains a countable dense set. A link between separability and the dual space is the following theorem:

Theorem: If the dual \(X^*\) of a normed vector space \(X\) is separable, then so is the space \(X\) itself.

Proof outline: let \(\{f_n\}\) be a countable dense set in the unit sphere \(S_*\) of \(X^*\). For any \(n \in \mathbb{N}\) one can find \(x_n\) in the unit ball of \(X\) such that \(f_n(x_n) \ge \frac{1}{2}\). We claim that the countable set \(F = \mathrm{Span}_{\mathbb{Q}}(x_0,x_1,\dots)\) is dense in \(X\). If not, we could find \(x \in X \setminus \overline{F}\) and, according to the Hahn-Banach theorem, there would exist a linear functional \(f \in X^*\) such that \(f\vert_{\overline{F}} = 0\) and \(\Vert f \Vert=1\). But then for all \(n \in \mathbb{N}\), since \(x_n \in \overline{F}\), \(\Vert f_n-f \Vert \ge \vert f_n(x_n)-f(x_n)\vert = \vert f_n(x_n) \vert \ge \frac{1}{2}\). A contradiction since \(\{f_n\}\) is supposed to be dense in \(S_*\).

We prove that the converse is not true, i.e. a normed vector space can be separable while its dual is not.

Introducing some normed vector spaces

Given a closed interval \(K \subset \mathbb{R}\) and a set \(A \subset \mathbb{R}\), we define the four following spaces. The first three are endowed with the supremum norm, the last one with the \(\ell^1\) norm.

  • \(\mathcal{C}(K,\mathbb{R})\), the space of continuous functions from \(K\) to \(\mathbb{R}\), is separable as the polynomial functions with coefficients in \(\mathbb{Q}\) are dense and countable.
  • \(\ell^{\infty}(A, \mathbb{R})\) is the space of bounded real-valued functions defined on \(A\).
  • \(c_0(A, \mathbb{R}) \subset \ell^{\infty}(A, \mathbb{R})\) is the subspace of elements \(u\) of \(\ell^{\infty}(A, \mathbb{R})\) vanishing at infinity, i.e. such that \(\{a \in A : \vert u(a) \vert \ge \epsilon\}\) is finite for every \(\epsilon > 0\).
  • \(\ell^1(A, \mathbb{R})\) is the space of summable functions on \(A\): \(u \in \mathbb{R}^{A}\) is in \(\ell^1(A, \mathbb{R})\) iff \(\sum \limits_{a \in A} |u_a| < +\infty\).

When \(A = \mathbb{N}\), we find the usual sequence spaces. It should be noted that \(c_0(A, \mathbb{R})\) and \(\ell^1(A, \mathbb{R})\) are separable iff \(A\) is countable (otherwise the subset \(\big\{x \mapsto 1_{\{a\}}(x),\ a \in A \big\}\) is uncountable, and discrete), and that \(\ell^{\infty}(A, \mathbb{R})\) is separable iff \(A\) is finite (otherwise the subset \(\{0,1\}^A\) is uncountable, and discrete).
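The discreteness argument can be checked on a toy example: two distinct \(0/1\)-valued functions on \(A\) differ at some point, so their sup-norm distance is exactly \(1\), and no countable set can be dense among uncountably many such functions. A minimal sketch (the finite index set `A` below is just a stand-in for an arbitrary one):

```python
# Sketch: two distinct 0/1-valued functions on an index set A differ at some
# point, so their sup-norm distance is exactly 1.
def sup_dist(u, v, A):
    return max(abs(u(a) - v(a)) for a in A)

A = range(5)                      # finite stand-in for the index set A
u = lambda a: 1 if a == 2 else 0  # indicator of {2}
v = lambda a: 1 if a == 3 else 0  # indicator of {3}
assert sup_dist(u, v, A) == 1     # distinct indicators are at distance 1
```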


Determinacy of random variables

The question of the determinacy (or uniqueness) in the moment problem consists in finding whether the moments of a real-valued random variable determine its distribution uniquely. If we assume the random variable to be a.s. bounded, uniqueness is a consequence of the Weierstrass approximation theorem.

Given the moments, the distribution need not be unique for unbounded random variables. Carleman’s condition states that if two positive random variables \(X, Y\) have the same finite moments of all orders and \(\sum\limits_{n \ge 1} \frac{1}{\sqrt[2n]{\mathbb{E}(X^n)}} = +\infty\), then \(X\) and \(Y\) have the same distribution. In this article we describe random variables with different laws but sharing the same moments, on \(\mathbb R_+\) and on \(\mathbb N\).

Continuous case on \(\mathbb{R}_+\)

In the article a non-zero function orthogonal to all polynomials, we described a function \(f\) orthogonal to all polynomials in the sense that \[
\forall k \ge 0,\ \displaystyle{\int_0^{+\infty}} x^k f(x)dx = 0 \tag{O}.\]

This function was \(f(u) = \sin\big(u^{\frac{1}{4}}\big)e^{-u^{\frac{1}{4}}}\). This inspires us to define \(U\) and \(V\) with values in \(\mathbb R^+\) by: \[\begin{cases}
f_U(u) &= \frac{1}{24}e^{-\sqrt[4]{u}}\\
f_V(u) &= \frac{1}{24}e^{-\sqrt[4]{u}} \big( 1 + \sin(\sqrt[4]{u})\big)
\end{cases}\]
Both functions are nonnegative. A direct computation gives \(\displaystyle{\int_0^{+\infty}} f_U = 1\), and since \(f\) is orthogonal to the constant map equal to one, \(\displaystyle{\int_0^{+\infty}} f_V = \displaystyle{\int_0^{+\infty}} f_U = 1\): they are indeed densities. One can verify that \(U\) and \(V\) have moments of all orders and that \(\mathbb{E}(U^k) = \mathbb{E}(V^k)\) for all \(k \in \mathbb N\), according to the orthogonality relation \((\mathrm O)\) above.
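These identities can also be observed numerically. Below is a small sketch (not part of the proof): after the substitution \(u = t^4\), the \(k\)-th moments of \(U\) and \(V\) reduce to integrals of \(\frac{1}{6}t^{4k+3}e^{-t}\) and \(\frac{1}{6}t^{4k+3}e^{-t}(1+\sin t)\), which we approximate with a trapezoidal rule (the truncation point and step count are arbitrary choices):

```python
import math

# Sketch: approximate E(W^k) for W = U, V after the substitution u = t^4,
# i.e. integrate (1/6) t^{4k+3} e^{-t} (1 + [sin t]) over [0, t_max].
def moment(k, with_sin, t_max=60.0, n=120_000):
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0          # trapezoidal end weights
        g = t ** (4 * k + 3) * math.exp(-t) / 6.0
        if with_sin:                             # extra factor for f_V
            g *= 1.0 + math.sin(t)
        total += w * g * h
    return total

print(moment(0, False), moment(0, True))  # both close to 1: densities
print(moment(1, False), moment(1, True))  # first moments agree (7!/6 = 840)
```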

Discrete case on \(\mathbb N\)

In this section we define two random variables \(X\) and \(Y\) with values in \(\mathbb N\) having the same moments. Let’s take an integer \(q \ge 2\) and set for all \(n \in \mathbb{N}\): \[\begin{aligned}
\mathbb{P}(X=q^n) &= e^{-q}q^n \cdot \frac{1}{n!} \\
\mathbb{P}(Y=q^n) &= e^{-q}q^n\left(\frac{1}{n!} + \frac{(-1)^n}{(q-1)(q^2-1)\cdots (q^n-1)}\right)
\end{aligned}\]

Both quantities are nonnegative, and for any \(k \ge 0\), both \(\mathbb{P}(X=q^n)\) and \(\mathbb{P}(Y=q^n)\) are \(O_{n \to \infty}\left(\frac{1}{q^{kn}}\right)\), so all moments are finite. We are going to prove that for all \(k \ge 1\), \( u_k = \sum \limits_{n=0}^{+\infty} \frac{(-1)^n q^{kn}}{(q-1)(q^2-1)\cdots (q^n-1)}\) is equal to \(0\).
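Before the proof, a quick numeric sanity check (a sketch with \(q = 2\), an arbitrary choice): both families of probabilities are nonnegative and sum to \(1\), and the first moments agree with \(\mathbb{E}(X) = e^{q^2-q}\). Truncating the sums at \(N = 60\) terms is far beyond double precision:

```python
from math import exp, factorial

def qpoch(q, n):
    """(q-1)(q^2-1)...(q^n-1); the empty product (n = 0) is 1."""
    p = 1
    for j in range(1, n + 1):
        p *= q**j - 1
    return p

def p_x(n, q):
    return exp(-q) * q**n / factorial(n)

def p_y(n, q):
    return exp(-q) * q**n * (1 / factorial(n) + (-1)**n / qpoch(q, n))

q, N = 2, 60
assert all(p_y(n, q) >= 0 for n in range(N))        # nonnegative weights
total_x = sum(p_x(n, q) for n in range(N))
total_y = sum(p_y(n, q) for n in range(N))
mean_x = sum(q**n * p_x(n, q) for n in range(N))    # E(X)
mean_y = sum(q**n * p_y(n, q) for n in range(N))    # E(Y)
print(total_x, total_y)   # both close to 1
print(mean_x, mean_y)     # both close to e^{q^2 - q} = e^2
```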


A semi-continuous function with a dense set of points of discontinuity

Let’s come back to Thomae’s function \(f : \mathbb{R} \longrightarrow \mathbb{R}\) which is defined as: \[
f(x) = \begin{cases}
0 & \text{ if } x \in \mathbb{R} \setminus \mathbb{Q} \\
\frac{1}{q} & \text{ if } x = \frac{p}{q} \text{ in lowest terms and } q > 0
\end{cases}\]

We proved here that the right-sided and left-sided limits of \(f\) vanish at all points. Therefore \(\limsup\limits_{x \to a} f(x) \le f(a)\) at every point \(a\), which proves that \(f\) is upper semi-continuous on \(\mathbb R\). However \(f\) is continuous at all \(a \in \mathbb R \setminus \mathbb Q\) and discontinuous at all \(a \in \mathbb Q\).
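A small computational sketch with Python's `fractions.Fraction`, which reduces \(p/q\) to lowest terms automatically: the value of \(f\) only depends on the reduced denominator, and near a fixed irrational point the rationals where \(f\) is large are sparse (the point \(1/\pi\) and the cut-offs below are arbitrary choices for the illustration):

```python
import math
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    # Fraction stores p/q in lowest terms, so f(p/q) = 1/q is immediate.
    return Fraction(1, x.denominator)

assert thomae(Fraction(2, 4)) == Fraction(1, 2)   # value of the reduced form 1/2

# Rationals within 1e-3 of 1/pi with denominator <= 500: all of them have a
# fairly large denominator, so f stays small near the irrational point 1/pi.
close = [Fraction(p, q) for q in range(1, 501) for p in range(1, q)
         if math.gcd(p, q) == 1 and abs(p / q - 1 / math.pi) < 1e-3]
print(max(thomae(x) for x in close))   # a small value
```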

A prime ideal that is not a maximal ideal

Every maximal ideal is a prime ideal. The converse is true in a principal ideal domain (PID), i.e. every nonzero prime ideal is maximal in a PID, but this is not true in general. Let’s produce a counterexample.

\(R= \mathbb Z[x]\) is a ring. \(R\) is not a PID, as can be shown by considering the ideal \(I\) generated by the set \(\{2,x\}\), i.e. the polynomials with even constant term. \(I\) cannot be generated by a single element \(p\). If it were, \(p\) would divide \(2\), i.e. \(p=\pm 1\) or \(p=\pm 2\). We can’t have \(p=\pm 1\) as that would mean \(R = I\), while \(3 \notin I\). We can’t have \(p=\pm 2\) either, as that would imply the contradiction \(x \notin I\). The ideal \(J = (x)\) is a prime ideal as \(R/J \cong \mathbb Z\) is an integral domain. Since \(\mathbb Z\) is not a field, \(J\) is not a maximal ideal.
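The isomorphism \(R/J \cong \mathbb Z\) is realized by evaluation at \(0\). A sketch with integer polynomials encoded as coefficient lists `[c0, c1, ...]` (a representation chosen only for the illustration): the constant-term map respects both operations, and its kernel is exactly \((x)\), the polynomials with zero constant term.

```python
def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def ev0(p):
    # evaluation at 0, i.e. the constant term: the quotient map onto Z
    return p[0]

P, Q = [2, 0, 1], [-1, 3]                       # 2 + x^2 and -1 + 3x
assert ev0(poly_add(P, Q)) == ev0(P) + ev0(Q)   # additive
assert ev0(poly_mul(P, Q)) == ev0(P) * ev0(Q)   # multiplicative
assert ev0([0, 5, 7]) == 0                      # 5x + 7x^2 lies in the kernel (x)
```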

Totally disconnected compact set with positive measure

Let’s build a totally disconnected compact set \(K \subset [0,1]\) such that \(\mu(K) >0\) where \(\mu\) denotes the Lebesgue measure.

In order to do so, let \(r_1, r_2, \dots\) be an enumeration of the rationals. To each rational \(r_i\) associate the open interval \(U_i = (r_i - 2^{-i-2}, r_i + 2^{-i-2})\). Then take \[
\displaystyle V = \bigcup_{i=1}^\infty U_i \text{ and } K = [0,1] \cap V^c.\] Clearly \(K\) is bounded and closed, therefore compact. Moreover \(K\) contains no rational, hence no interval with more than one point, so \(K\) is totally disconnected. As Lebesgue measure is subadditive we have \[
\mu(V) \le \sum_{i=1}^\infty \mu(U_i) = \sum_{i=1}^\infty 2^{-i-1} = 1/2.\] This implies \[
\mu(K) = \mu([0,1]) - \mu([0,1] \cap V) \ge 1/2.\] In a further article, we’ll build a totally disconnected compact set \(K^\prime\) of \([0,1]\) with a predefined measure \(m \in [0,1)\).
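The bound \(\mu(V) \le 1/2\) can be illustrated numerically. The sketch below enumerates the rationals of \([0,1]\) by increasing denominator (an arbitrary choice of enumeration), covers the \(i\)-th one by an interval of length \(2^{-i-1}\), and measures how much of \([0,1]\) the union covers: at most \(1/2\), whatever the enumeration.

```python
import math

# first rationals of [0,1], ordered by denominator (arbitrary enumeration)
rationals = [p / q for q in range(1, 60) for p in range(0, q + 1)
             if math.gcd(p, q) == 1]

# cover the i-th rational by an open interval of length 2^{-i-1}
intervals = sorted((r - 2.0 ** (-i - 2), r + 2.0 ** (-i - 2))
                   for i, r in enumerate(rationals, start=1))

covered = 0.0
cur_lo, cur_hi = intervals[0]
for lo, hi in intervals[1:]:
    if lo <= cur_hi:                  # overlapping: extend the current run
        cur_hi = max(cur_hi, hi)
    else:                             # disjoint: count the part inside [0,1]
        covered += max(0.0, min(cur_hi, 1.0) - max(cur_lo, 0.0))
        cur_lo, cur_hi = lo, hi
covered += max(0.0, min(cur_hi, 1.0) - max(cur_lo, 0.0))
print(covered)   # at most 1/2, so the complement K keeps measure >= 1/2
```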

Converse of fundamental theorem of calculus

The fundamental theorem of calculus asserts that for a continuous real-valued function \(f\) defined on a closed interval \([a,b]\), the function \(F\) defined for all \(x \in [a,b]\) by
\[F(x)=\int _{a}^{x}\!f(t)\,dt\] is uniformly continuous on \([a,b]\), differentiable on the open interval \((a,b)\) and \[
F^\prime(x) = f(x)\]
for all \(x \in (a,b)\).

The converse of the fundamental theorem of calculus is not true, as we see below.

Consider the function defined on the interval \([0,1]\) by \[
f(x)= \begin{cases}
2x\sin(1/x) - \cos(1/x) & \text{ for } x \neq 0 \\
0 & \text{ for } x = 0 \end{cases}\] \(f\) is integrable as it is continuous on \((0,1]\) and bounded on \([0,1]\). Then \[
F(x)= \begin{cases}
x^2 \sin \left( 1/x \right) & \text{ for } x \neq 0 \\
0 & \text{ for } x = 0 \end{cases}\] \(F\) is differentiable on \([0,1]\). It is clear for \(x \in (0,1]\). \(F\) is also differentiable at \(0\) as for \(x \neq 0\) we have \[
\left\vert \frac{F(x) - F(0)}{x-0} \right\vert = \left\vert \frac{F(x)}{x} \right\vert \le \left\vert x \right\vert.\] Consequently \(F^\prime(0) = 0\).

However \(f\) is not continuous at \(0\) as it does not have a right limit at \(0\).
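A numeric sketch of these facts: a central difference reproduces \(F^\prime(x) = f(x)\) away from \(0\), the difference quotient of \(F\) at \(0\) is bounded by \(\vert x \vert\), and \(f\) keeps taking values near \(-1\) and \(+1\) arbitrarily close to \(0\) (the sample points below are arbitrary):

```python
import math

def f(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) if x != 0 else 0.0

def F(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

# F'(x) = f(x) away from 0: central difference at x = 0.1
x, h = 0.1, 1e-6
print((F(x + h) - F(x - h)) / (2 * h), f(x))   # the two values agree closely

# the difference quotient of F at 0 is bounded by |x|, so F'(0) = 0
print(abs(F(1e-8) / 1e-8))                     # at most 1e-8

# f has no right limit at 0: values near -1 and +1 on two sequences -> 0
print(f(1 / (2 * math.pi * 10**6)), f(1 / ((2 * 10**6 + 1) * math.pi)))
```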

Four elements rings

A group with four elements is isomorphic to either the cyclic group \(\mathbb Z_4\) or to the Klein four-group \(\mathbb Z_2 \times \mathbb Z_2\). Those groups are commutative. Endowed with the usual additive and multiplicative operations, \(\mathbb Z_4\) and \(\mathbb Z_2 \times \mathbb Z_2\) are commutative rings.

Are all rings with four elements also isomorphic to either \(\mathbb Z_4\) or \(\mathbb Z_2 \times \mathbb Z_2\)? The answer is negative. Let’s provide two additional examples of commutative rings with four elements not isomorphic to \(\mathbb Z_4\) or \(\mathbb Z_2 \times \mathbb Z_2\).

The first one is the field \(\mathbb F_4\). \(\mathbb F_4\) is a commutative ring with four elements. It is not isomorphic to \(\mathbb Z_4\) or \(\mathbb Z_2 \times \mathbb Z_2\) as both of those rings have zero divisors, while a field has none. Indeed we have \(2 \cdot 2 = 0\) in \(\mathbb Z_4\) and \((1,0) \cdot (0,1)=(0,0)\) in \(\mathbb Z_2 \times \mathbb Z_2\).

A second one is the ring \(R\) of the matrices \(\begin{pmatrix}
x & 0\\
y & x\end{pmatrix}\) where \(x,y \in \mathbb Z_2\). One can easily verify that \(R\) is a commutative subring of the ring \(M_2(\mathbb Z_2)\). It is not isomorphic to \(\mathbb Z_4\) as its characteristic is \(2\). Nor is it isomorphic to \(\mathbb Z_2 \times \mathbb Z_2\), as \(\begin{pmatrix}
0 & 0\\
1 & 0\end{pmatrix}\) is a non-zero matrix solution of the equation \(X^2=0\), while \((0,0)\) is the only solution of that equation in \(\mathbb Z_2 \times \mathbb Z_2\).
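These verifications are finite and easy to automate. A sketch encoding the matrix \(\begin{pmatrix} x & 0\\ y & x\end{pmatrix}\) as the pair \((x,y)\), with entries mod \(2\):

```python
from itertools import product

def add(a, b):
    # matrix addition of [[x,0],[y,x]] encoded as the pair (x, y), mod 2
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def mul(a, b):
    # matrix product: [[x1,0],[y1,x1]] [[x2,0],[y2,x2]] -> (x1 x2, x1 y2 + y1 x2)
    return ((a[0] * b[0]) % 2, (a[0] * b[1] + a[1] * b[0]) % 2)

R = list(product((0, 1), repeat=2))
one = (1, 0)

assert all(mul(a, b) == mul(b, a) for a in R for b in R)   # commutative
assert all(mul(one, a) == a for a in R)                    # unit element
assert all(add(a, a) == (0, 0) for a in R)                 # characteristic 2, unlike Z4
assert mul((0, 1), (0, 1)) == (0, 0)                       # non-zero nilpotent, unlike Z2 x Z2
```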

One can prove that the four rings mentioned above are the only commutative rings with four elements up to isomorphism.

Counterexamples around series (part 2)

We follow the article counterexamples around series (part 1) providing additional funny series examples.

If \(\sum u_n\) converges and \((u_n)\) is non-increasing then \(u_n = o(1/n)\)?

This is true. Let’s prove it.
The hypotheses imply that \((u_n)\) converges to zero; as \((u_n)\) is non-increasing, it follows that \(u_n \ge 0\) for all \(n \in \mathbb N\). As \(\sum u_n\) converges, its tails satisfy \[
\displaystyle \lim\limits_{n \to \infty} \sum_{k=\lceil n/2 \rceil}^{n} u_k = 0.\] Hence for \(\epsilon \gt 0\), one can find \(N \in \mathbb N\) such that \[
\epsilon \ge \sum_{k=\lceil n/2 \rceil}^{n} u_k \ge \frac{1}{2} (n u_n) \ge 0\] for all \(n \ge N\). Therefore \(n u_n \to 0\), i.e. \(u_n = o(1/n)\), which concludes the proof.
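A numeric illustration of the inequality used above (a sketch with the convergent, non-increasing choice \(u_n = 1/n^2\), an arbitrary example): the half-tail dominates \(\frac{1}{2} n u_n\), and both tend to \(0\).

```python
import math

def u(n):
    return 1.0 / n**2          # a convergent, non-increasing example

for n in (10, 100, 1000, 10000):
    tail = sum(u(k) for k in range(math.ceil(n / 2), n + 1))
    assert tail >= n * u(n) / 2    # half-tail dominates (1/2) n u_n
    print(n, tail, n * u(n))       # both columns shrink toward 0
```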

\(\sum u_n\) convergent is equivalent to \(\sum u_{2n}\) and \(\sum u_{2n+1}\) convergent?

This is not true, as we can see by taking \(u_n = \frac{(-1)^n}{n}\). \(\sum u_n\) converges according to the alternating series test. However for \(n \in \mathbb N\) \[
\sum_{k=1}^n u_{2k} = \sum_{k=1}^n \frac{1}{2k} = \frac{1}{2} \sum_{k=1}^n \frac{1}{k}.\] Hence \(\sum u_{2n}\) diverges as the harmonic series diverges.
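The two behaviours can be observed numerically (a sketch; the cut-offs are arbitrary):

```python
import math

def alt_partial(N):
    # partial sum of the convergent alternating series sum (-1)^n / n
    return sum((-1) ** n / n for n in range(1, N + 1))

def even_partial(N):
    # partial sum of the even-indexed terms u_{2k} = 1/(2k)
    return sum(1.0 / (2 * k) for k in range(1, N + 1))

print(alt_partial(10**5))                        # close to -ln 2
print(even_partial(10**2), even_partial(10**4))  # keeps growing like (1/2) ln N
```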

\(\sum u_n\) absolutely convergent is equivalent to \(\sum u_{2n}\) and \(\sum u_{2n+1}\) absolutely convergent?

This is true and the proof is left to the reader.

If \(\sum u_n\) is a positive convergent series then \((\sqrt[n]{u_n})\) is bounded?

This is true. If not, there would be a subsequence \((u_{\phi(n)})\) such that \(\sqrt[\phi(n)]{u_{\phi(n)}} \ge 2\), which means \(u_{\phi(n)} \ge 2^{\phi(n)}\) for all \(n \in \mathbb N\) and implies that the sequence \((u_n)\) is unbounded, in contradiction with the convergence of the series \(\sum u_n\).

If \((u_n)\) is strictly positive with \(u_n = o(1/n)\) then \(\sum (-1)^n u_n\) converges?

It does not hold as we can see with \[
u_n=\begin{cases} \frac{1}{n \ln n} & n \equiv 0 [2] \\
\frac{1}{2^n} & n \equiv 1 [2] \end{cases}\] Then for \(n \in \mathbb N\) \[
\sum_{k=1}^{2n} (-1)^k u_k \ge \sum_{k=1}^n \frac{1}{2k \ln 2k} - \sum_{k=1}^{2n} \frac{1}{2^k} \ge \sum_{k=1}^n \frac{1}{2k \ln 2k} - 1.\] Since \(\sum \frac{1}{2k \ln 2k}\) diverges, as can be proven using the integral test with the function \(x \mapsto \frac{1}{2x \ln 2x}\), \(\sum (-1)^n u_n\) also diverges.
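A numeric sketch of the divergence: the partial sums drift upward roughly like \(\frac{1}{2}\ln \ln N\), slowly but without bound (the cut-offs below are arbitrary).

```python
import math

def u(n):
    # 1/(n ln n) for even n, 1/2^n for odd n
    return 1.0 / (n * math.log(n)) if n % 2 == 0 else 2.0 ** (-n)

def S(N):
    # partial sum of sum (-1)^n u_n
    return sum((-1) ** n * u(n) for n in range(1, N + 1))

print(S(10**3), S(10**5))   # slowly but steadily increasing
```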

Group homomorphism versus ring homomorphism

A ring homomorphism is a function between two rings which respects the structure. Let’s provide examples of functions between rings which respect the addition or the multiplication but not both.

An additive group homomorphism that is not a ring homomorphism

We consider the ring \(\mathbb R[x]\) of real polynomials and the derivation \[
\begin{array}{crcl}
D : & \mathbb R[x] & \longrightarrow & \mathbb R[x] \\
& P & \longmapsto & P^\prime \end{array}\] \(D\) is an additive homomorphism as for all \(P,Q \in \mathbb R[x]\) we have \(D(P+Q) = D(P) + D(Q)\). However, \(D\) does not respect the multiplication as \[
D(x^2) = 2x \neq 1 = D(x) \cdot D(x).\] More generally, \(D\) satisfies the Leibniz rule \[
D(P \cdot Q) = P \cdot D(Q) + Q \cdot D(P).\]
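The sketch below encodes polynomials as coefficient lists `[c0, c1, ...]` (a representation chosen only for the illustration) to check additivity, the failure of multiplicativity, and the Leibniz rule:

```python
def D(p):
    # derivative of c0 + c1 x + c2 x^2 + ... given as [c0, c1, c2, ...]
    return [i * c for i, c in enumerate(p)][1:] or [0]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x = [0, 1]                              # the polynomial x
assert D(pmul(x, x)) == [0, 2]          # D(x^2) = 2x ...
assert pmul(D(x), D(x)) == [1]          # ... but D(x) * D(x) = 1
P, Q = [1, 2, 3], [0, 0, 5]             # 1 + 2x + 3x^2 and 5x^2
assert D(padd(P, Q)) == padd(D(P), D(Q))                      # additive
assert D(pmul(P, Q)) == padd(pmul(P, D(Q)), pmul(Q, D(P)))    # Leibniz rule
```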

A multiplicative group homomorphism that is not a ring homomorphism

The function \[
\begin{array}{crcl}
f : & \mathbb R & \longrightarrow & \mathbb R \\
& x & \longmapsto & x^2 \end{array}\] is a multiplicative homomorphism of the monoid \((\mathbb R, \cdot)\), and of the group \((\mathbb R^*, \cdot)\). However \(f\) does not respect the addition: for instance \(f(1+1) = 4 \neq 2 = f(1) + f(1)\).
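A one-line check (sketch):

```python
# x -> x^2 preserves products but not sums
def f(x):
    return x * x

assert f(3 * 4) == f(3) * f(4)      # multiplicative
assert f(1 + 1) != f(1) + f(1)      # 4 != 2: not additive
```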