Bounded functions and infimum, supremum

According to the extreme value theorem, a continuous real-valued function \(f\) on a closed and bounded interval \([a,b]\) must attain a maximum and a minimum, each at least once.

Let’s see what can happen for discontinuous functions. Below, we consider maps defined on \([0,1]\).

First let’s look at \[
f(x)=\begin{cases}
x &\text{ if } x \in (0,1)\\
1/2 &\text{otherwise}
\end{cases}\] \(f\) is bounded on \([0,1]\) and continuous on the open interval \((0,1)\), but discontinuous at \(0\) and at \(1\). The infimum of \(f\) on \([0,1]\) is \(0\), its supremum is \(1\), and \(f\) attains neither of these values. However, for \(0 < a < b < 1\), \(f\) attains its infimum and supremum on \([a,b]\), as \(f\) is continuous on this interval.
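As a quick numerical illustration (not part of the argument), one can sample \(f\) on a fine grid of \([0,1]\): the sampled values get arbitrarily close to \(0\) and \(1\) without ever reaching them.

```python
def f(x):
    """f(x) = x on the open interval (0, 1), and 1/2 at the endpoints."""
    return x if 0 < x < 1 else 0.5

# Sample f on a fine grid of [0, 1]; the extreme sampled values approach 0 and 1
# but never equal them, since f only takes values in (0, 1).
grid = [k / 10**6 for k in range(10**6 + 1)]
values = [f(x) for x in grid]
print(min(values), max(values))  # 1e-06 0.999999
```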

A bounded function that doesn’t attain its infimum or supremum on any \([a,b] \subseteq [0,1]\)

The function \(g\) defined on \([0,1]\) by \[
g(x)=\begin{cases}
0 & \text{ if } x \notin \mathbb Q \text{ or if } x = 0\\
\frac{(-1)^q (q-1)}{q} & \text{ if } x = \frac{p}{q} \neq 0 \text{, with } p, q \text{ relatively prime}
\end{cases}\] is bounded, as for \(x \in \mathbb Q \cap [0,1]\) we have \[
\left\vert g(x) \right\vert < 1.\] Hence \(g\) takes its values in the open interval \((-1,1)\); in particular \(g\) never attains the values \(-1\) and \(1\). We prove that the infimum of \(g\) is \(-1\) and its supremum \(1\) on every interval \([a,b]\) with \(0 < a < b < 1\). Consider \(\varepsilon > 0\) and an odd prime \(q\) such that \[
q > \max\left(\frac{1}{\varepsilon}, \frac{1}{b-a}\right).\] This is possible as there are infinitely many prime numbers. As \(0 < \frac{1}{q} < b-a\), the interval \((a,b)\) contains a multiple of \(\frac{1}{q}\): there exists a natural number \(p\) such that \(\frac{p}{q} \in (a,b)\). Since \(0 < p < q\) and \(q\) is prime, \(p\) and \(q\) are relatively prime. We have \[ -1 < g \left(\frac{p}{q} \right) = \frac{(-1)^q (q-1)}{q} = - \frac{q-1}{q} = -1 + \frac{1}{q} < -1 +\varepsilon\] as \(q\) is an odd prime with \(q > \frac{1}{\varepsilon}\). This proves that the infimum of \(g\) on \([a,b]\) is \(-1\), and it is not attained. By a similar argument, using fractions with large even denominators, for which \((-1)^q = 1\), one can prove that the supremum of \(g\) on \([a,b]\) is \(1\), and it is not attained either.
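As a sanity check, here is a small Python sketch (my own illustration, not part of the proof) that evaluates \(g\) at all rationals with denominator at most \(q_{\max}\) inside a sample interval, here \((1/3, 1/2)\): the minimum approaches \(-1\) and the maximum approaches \(1\) as \(q_{\max}\) grows.

```python
from fractions import Fraction

def g(x: Fraction) -> Fraction:
    """g as defined above; Fraction stores p/q in lowest terms."""
    if x == 0:
        return Fraction(0)
    q = x.denominator
    return Fraction((-1) ** q * (q - 1), q)

a, b = Fraction(1, 3), Fraction(1, 2)
for qmax in (10, 100, 1000):
    vals = [g(Fraction(p, q)) for q in range(2, qmax + 1)
            for p in range(1, q) if a < Fraction(p, q) < b]
    print(qmax, float(min(vals)), float(max(vals)))
# the minimum decreases towards -1 and the maximum increases towards 1,
# although |g| < 1 everywhere
```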

On limits at infinity of functions and their derivatives

We consider continuously differentiable real functions defined on \((0,\infty)\) and the limits \[
\lim\limits_{x \to \infty} f(x) \text{ and } \lim\limits_{x \to \infty} f^\prime(x).\]

A map \(f\) such that \(\lim\limits_{x \to \infty} f(x) = \infty\) and \(\lim\limits_{x \to \infty} f^\prime(x) = 0\)

Consider the map \(f : x \mapsto \sqrt{x}\). It is clear that \(\lim\limits_{x \to \infty} f(x) = \infty\). As \(f^\prime(x) = \frac{1}{2 \sqrt{x}}\), we also have \(\lim\limits_{x \to \infty} f^\prime(x) = 0\), as announced.
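A quick numerical check (illustration only):

```python
from math import sqrt

# f(x) = sqrt(x) grows without bound while f'(x) = 1/(2*sqrt(x)) tends to 0.
for x in (1e2, 1e4, 1e6, 1e8):
    print(f"x = {x:.0e}:  f(x) = {sqrt(x):10.1f}   f'(x) = {1 / (2 * sqrt(x)):.1e}")
```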

A bounded map \(g\) having no limit at infinity such that \(\lim\limits_{x \to \infty} g^\prime(x) = 0\)

One idea is to take an oscillating map whose wavelength is increasing to \(\infty\). Let’s take the map \(g : x \mapsto \cos \sqrt{x}\). \(g\) doesn’t have a limit at \(\infty\) as for \(n \in \mathbb N\), we have \(g(n^2 \pi^2) = \cos n \pi = (-1)^n\). However, the derivative of \(g\) is \[
g^\prime(x) = - \frac{\sin \sqrt{x}}{2 \sqrt{x}},\] and as \(\vert g^\prime(x) \vert \le \frac{1}{2 \sqrt{x}}\) for all \(x \in (0,\infty)\), we have \(\lim\limits_{x \to \infty} g^\prime(x) = 0\).
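Again a quick numerical sketch (illustration only), showing the oscillation of \(g\) at the points \(n^2 \pi^2\) together with the decreasing bound on \(\vert g^\prime \vert\):

```python
from math import cos, sqrt, pi

# g(n^2 * pi^2) = cos(n*pi) = (-1)^n, so g keeps oscillating between -1 and 1,
# while the bound 1/(2*sqrt(x)) on |g'(x)| tends to 0.
for n in range(1, 7):
    x = (n * pi) ** 2
    print(f"n = {n}:  g(x) = {cos(sqrt(x)):+.3f}   bound on |g'(x)| = {1 / (2 * sqrt(x)):.2e}")
```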

Limit points of real sequences

Let’s start by recalling an important theorem of real analysis:

THEOREM. A necessary and sufficient condition for the convergence of a real sequence is that it is bounded and has a unique limit point.

As a consequence of the theorem, a sequence having a unique limit point is divergent if it is unbounded. An example of such a sequence is the sequence \[
u_n = \frac{n}{2}(1+(-1)^n),\] whose initial values (for \(n \ge 1\)) are \[
0, 2, 0, 4, 0, 6, 0, 8, \dots\] \((u_n)\) is an unbounded sequence whose unique limit point is \(0\).
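As a quick check, here is a small Python sketch (my own illustration) that lists the first terms of \((u_n)\): the odd-indexed terms are all \(0\) while the even-indexed terms grow without bound.

```python
# First terms of u_n = (n/2) * (1 + (-1)^n) for n = 1, ..., 12:
# the odd-indexed terms vanish, the even-indexed terms are unbounded.
u = [n / 2 * (1 + (-1) ** n) for n in range(1, 13)]
print(u)  # [0.0, 2.0, 0.0, 4.0, 0.0, 6.0, 0.0, 8.0, 0.0, 10.0, 0.0, 12.0]
```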

Let’s now look at sequences having more complicated limit points sets.

A sequence whose set of limit points is the set of natural numbers

Consider the sequence \((v_n)\) whose initial terms are \[
1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, \dots\] \((v_n)\) is defined as follows \[
v_n=\begin{cases}
1 &\text{ for } n= 1\\
n - \frac{k(k+1)}{2} &\text{ for } \frac{k(k+1)}{2} \lt n \le \frac{(k+1)(k+2)}{2}
\end{cases}\] \((v_n)\) is well defined as the sequence \((\frac{k(k+1)}{2})_{k \in \mathbb N}\) is strictly increasing with first term equal to \(1\). \((v_n)\) is a sequence of natural numbers. As \(\mathbb N\) is a set of isolated points of \(\mathbb R\), we have \(V \subseteq \mathbb N\), where \(V\) is the set of limit points of \((v_n)\). Conversely, let’s take \(m \in \mathbb N\). For \(k + 1 \ge m\), we have \(\frac{k(k+1)}{2} + m \le \frac{(k+1)(k+2)}{2}\), hence \[
v_{\frac{k(k+1)}{2} + m} = m\] for every \(k \ge m - 1\), which proves that \(m\) is a limit point of \((v_n)\). Finally, the set of limit points of \((v_n)\) is the set of natural numbers.
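A short Python sketch (illustration only) can reproduce the first terms of \((v_n)\) directly from the definition and exhibit infinitely many indices at which a given value, say \(3\), occurs:

```python
def v(n: int) -> int:
    """Compute v_n from the definition: v_1 = 1 and
    v_n = n - k(k+1)/2 whenever k(k+1)/2 < n <= (k+1)(k+2)/2."""
    if n == 1:
        return 1
    k = 1
    while (k + 1) * (k + 2) // 2 < n:
        k += 1
    return n - k * (k + 1) // 2

print([v(n) for n in range(1, 16)])
# [1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5]

# the value m = 3 occurs at the index k(k+1)/2 + 3 for every k >= 2
print([v(k * (k + 1) // 2 + 3) for k in range(2, 9)])  # [3, 3, 3, 3, 3, 3, 3]
```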


A linear map without any minimal polynomial

Given an endomorphism \(T\) on a finite-dimensional vector space \(V\) over a field \(\mathbb F\), the minimal polynomial \(\mu_T\) of \(T\) is well defined as the generator (unique up to units in \(\mathbb F\)) of the ideal:\[
I_T= \{p \in \mathbb F[t]\ ; \ p(T)=0\}.\]

For infinite-dimensional vector spaces, the minimal polynomial might not be defined. Let’s provide an example.

We take the real polynomials \(V = \mathbb R [t]\) as a real vector space and consider the derivative map \(D : P \mapsto P^\prime\). Let’s prove that \(D\) doesn’t have any minimal polynomial. By contradiction, suppose that \[
\mu_D(t) = a_0 + a_1 t + \dots + a_n t^n \text{ with } a_n \neq 0\] is the minimal polynomial of \(D\), which means that for all \(P \in \mathbb R[t]\) we have \[
a_0 P + a_1 P^\prime + \dots + a_n P^{(n)} = 0.\] Taking for \(P\) the polynomial \(t^n\) we get \[
a_0 t^n + n a_1 t^{n-1} + \dots + n! a_n = 0,\] which is impossible: the constant term \(n! \, a_n\) is nonzero, hence \(a_0 t^n + n a_1 t^{n-1} + \dots + n! a_n\) cannot be the zero polynomial.

We conclude that \(D\) doesn’t have any minimal polynomial.
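For readers who like to experiment, here is a small sympy sketch (my own illustration, with arbitrarily chosen coefficients) that applies a candidate polynomial in \(D\) to \(t^n\) and displays the nonzero constant term \(n!\,a_n\):

```python
import sympy as sp

t = sp.symbols('t')

def apply_poly_of_D(coeffs, P):
    """Apply a_0*Id + a_1*D + ... + a_n*D^n to the polynomial P, where D = d/dt."""
    return sp.expand(sum(a * sp.diff(P, t, i) for i, a in enumerate(coeffs)))

# A hypothetical candidate a_0 + a_1 t + a_2 t^2 + a_3 t^3 with a_3 != 0,
# applied to P = t^3: the result has constant term 3! * a_3, hence is nonzero.
coeffs = [2, -1, 0, 5]                 # a_0 = 2, a_1 = -1, a_2 = 0, a_3 = 5
print(apply_poly_of_D(coeffs, t**3))   # 2*t**3 - 3*t**2 + 30
```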

A non-linear map preserving the Euclidean norm

Let \(V\) be a real vector space endowed with a Euclidean norm \(\Vert \cdot \Vert\).

A bijective map \( T : V \to V\) that preserves the inner product \(\langle \cdot, \cdot \rangle\) is linear. Also, the Mazur-Ulam theorem states that a surjective map \( T : V \to V\) which is an isometry (\( \Vert T(x)-T(y) \Vert = \Vert x-y \Vert \) for all \(x,y \in V\)) and fixes the origin (\(T(0) = 0\)) is linear.

What about a map that preserves the norm (\(\Vert T(x) \Vert = \Vert x \Vert\) for all \(x \in V\))? Such a map need not be linear, as the following example shows:\[
\begin{array}{l|rcll}
T : & V & \longrightarrow & V \\
& x & \longmapsto & x & \text{if } \Vert x \Vert \neq 1\\
& x & \longmapsto & -x & \text{if } \Vert x \Vert = 1\end{array}\]

It is clear that \(T\) preserves the norm. However \(T\) is not linear as soon as \(V\) is not the zero vector space. In that case, consider \(x_0\) such that \(\Vert x_0 \Vert = 1\). We have:\[
\begin{cases}
T(2 x_0) &= 2 x_0 \text{ as } \Vert 2 x_0 \Vert = 2\\
\text{while}\\
T(x_0) + T(x_0) = -x_0 + (-x_0) &= - 2 x_0
\end{cases}\] Hence \(T(x_0 + x_0) \neq T(x_0) + T(x_0)\): \(T\) is not additive, and therefore not linear.
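Here is a tiny numpy sketch (illustration only) of this example in \(\mathbb R^2\): \(T\) preserves the norm on sample vectors, yet fails additivity at a unit vector \(x_0\).

```python
import numpy as np

def T(x: np.ndarray) -> np.ndarray:
    """Return -x on the unit sphere and x elsewhere; either way the norm is kept."""
    return -x if np.isclose(np.linalg.norm(x), 1.0) else x

x0 = np.array([1.0, 0.0])                                  # a unit vector of the plane
print(np.linalg.norm(T(x0)), np.linalg.norm(T(3 * x0)))    # 1.0 3.0 : norms preserved
print(T(2 * x0), T(x0) + T(x0))                            # [2. 0.] vs [-2. -0.] : not additive
```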