Tag Archives: linear-algebra

Complex matrix without a square root

Consider for \(n \ge 2\) the linear space \(\mathcal M_n(\mathbb C)\) of complex matrices of dimension \(n \times n\). The question we deal with is the following: does a matrix \(T \in \mathcal M_n(\mathbb C)\) always have a square root \(S \in \mathcal M_n(\mathbb C)\), i.e. a matrix such that \(S^2=T\)?

First, one can note that if \(T\) is similar to \(V\) with \(T = P^{-1} V P\) and \(V\) has a square root \(U\), then \(T\) also has a square root, as \(V=U^2\) implies \(T=\left(P^{-1} U P\right)^2\).

Diagonalizable matrices

Suppose that \(T\) is similar to a diagonal matrix \[
D=\begin{bmatrix}
d_1 & 0 & \dots & 0 \\
0 & d_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & d_n
\end{bmatrix}\] Any complex number has two square roots, except \(0\) which has only one. Therefore, each \(d_i\) has at least one square root \(d_i^\prime\) and the matrix \[
D^\prime=\begin{bmatrix}
d_1^\prime & 0 & \dots & 0 \\
0 & d_2^\prime & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & d_n^\prime
\end{bmatrix}\] is a square root of \(D\). Continue reading Complex matrix without a square root
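As a quick illustration, the sympy sketch below carries out this construction on a hypothetical diagonalizable matrix \(T\) (chosen here for the example, not taken from the post): diagonalize, take a square root of each diagonal entry, and conjugate back.

```python
import sympy as sp

# Hypothetical diagonalizable matrix with distinct eigenvalues 4 and 9
T = sp.Matrix([[4, 1], [0, 9]])
P, D = T.diagonalize()  # T = P * D * P**-1

# Take one square root of each diagonal entry of D
D_root = sp.diag(*[sp.sqrt(D[i, i]) for i in range(D.shape[0])])

# Conjugating back gives a square root of T, as in the argument above
S = P * D_root * P.inv()
assert sp.simplify(S**2 - T) == sp.zeros(2, 2)
```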

A linear map without any minimal polynomial

Given an endomorphism \(T\) on a finite-dimensional vector space \(V\) over a field \(\mathbb F\), the minimal polynomial \(\mu_T\) of \(T\) is well defined as the generator (unique up to units in \(\mathbb F\)) of the ideal:\[
I_T= \{p \in \mathbb F[t]\ ; \ p(T)=0\}.\]

For an endomorphism of an infinite-dimensional vector space, the minimal polynomial need not exist. Let’s provide an example.

We take the real polynomials \(V = \mathbb R [t]\) as a real vector space and consider the derivative map \(D : P \mapsto P^\prime\). Let’s prove that \(D\) doesn’t have any minimal polynomial. By contradiction, suppose that \[
\mu_D(t) = a_0 + a_1 t + \dots + a_n t^n \text{ with } a_n \neq 0\] is the minimal polynomial of \(D\), which means that for all \(P \in \mathbb R[t]\) we have \[
a_0 P + a_1 P^\prime + \dots + a_n P^{(n)} = 0.\] Taking for \(P\) the polynomial \(t^n\) we get \[
a_0 t^n + n a_1 t^{n-1} + \dots + n! a_n = 0,\] which is impossible: the constant term \(n! a_n\) is nonzero, hence \(a_0 t^n + n a_1 t^{n-1} + \dots + n! a_n\) cannot be the zero polynomial.

We conclude that \(D\) doesn’t have any minimal polynomial.
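The computation above is easy to replicate in sympy, say for the hypothetical degree \(n = 3\) with symbolic coefficients \(a_0, \dots, a_3\):

```python
import sympy as sp

t = sp.symbols('t')
n = 3
a = sp.symbols('a0:4')  # symbolic coefficients a_0, ..., a_3

# Apply a_0 * P + a_1 * P' + ... + a_n * P^(n) to the test polynomial P = t^n
P = t**n
expr = sum(a[k] * sp.diff(P, t, k) for k in range(n + 1))
print(sp.expand(expr))  # a0*t**3 + 3*a1*t**2 + 6*a2*t + 6*a3: constant term 3!*a3 != 0
```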

A linear map having all numbers as eigenvalue

Consider a linear map \(\varphi : E \to E\) where \(E\) is a linear space over the field \(\mathbb C\) of the complex numbers. When \(E\) is a finite-dimensional vector space of dimension \(n \ge 1\), the number of eigenvalues is finite: the eigenvalues are the roots of the characteristic polynomial \(\chi_\varphi\) of \(\varphi\), a complex polynomial of degree \(n \ge 1\). Therefore the set of eigenvalues of \(\varphi\) is non-empty and has at most \(n\) elements.

Things are different when \(E\) is an infinite dimensional space.

A linear map having all numbers as eigenvalue

Let’s consider the linear space \(E=\mathcal C^\infty([0,1])\) of complex-valued functions defined on the segment \([0,1]\) and having derivatives of all orders. \(E\) is an infinite-dimensional space: it contains all the polynomial maps.

On \(E\), we define the linear map \[\begin{array}{l|rcl}
\varphi : & \mathcal C^\infty([0,1]) & \longrightarrow & \mathcal C^\infty([0,1]) \\
& f & \longmapsto & f^\prime \end{array}\]

The set of eigenvalues of \(\varphi\) is all of \(\mathbb C\). Indeed, for \(\lambda \in \mathbb C\) the map \(t \mapsto e^{\lambda t}\) is nonzero and satisfies \(\varphi(e^{\lambda t}) = \lambda e^{\lambda t}\); it is therefore an eigenvector associated to the eigenvalue \(\lambda\).
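A one-line sympy check of this eigenvalue equation with a symbolic \(\lambda\):

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
f = sp.exp(lam * t)  # nonzero for every t, hence a genuine eigenvector candidate
assert sp.simplify(sp.diff(f, t) - lam * f) == 0  # phi(f) = lambda * f
```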

A linear map having no eigenvalue

On the same linear space \(E=\mathcal C^\infty([0,1])\), we now consider the linear map \[\begin{array}{l|rcl}
\psi : & \mathcal C^\infty([0,1]) & \longrightarrow & \mathcal C^\infty([0,1]) \\
& f & \longmapsto & x f \end{array}\]

Suppose that \(\lambda \in \mathbb C\) is an eigenvalue of \(\psi\) and \(h \in E\) an eigenvector associated to \(\lambda\). As an eigenvector, \(h\) is not the zero function: there exists \(x_0 \in [0,1]\) such that \(h(x_0) \neq 0\). Even better, as \(h\) is continuous, \(h\) is non-vanishing on \(J \cap [0,1]\) where \(J\) is an open interval containing \(x_0\). On \(J \cap [0,1]\) we have the equality \[
(\psi(h))(x) = x h(x) = \lambda h(x).\] Hence \(x=\lambda\) for all \(x \in J \cap [0,1]\), which is impossible as \(J \cap [0,1]\) contains more than one point. This contradiction proves that \(\psi\) has no eigenvalue.
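As a sanity check, one can let sympy solve the pointwise eigenvalue equation \(x h(x) = \lambda h(x)\) for the unknown value \(h(x)\); this is a toy verification, matching the argument above: away from \(x = \lambda\) an eigenvector would have to vanish.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
h = sp.Function('h')

# Pointwise eigenvalue equation for the multiplication map psi: x*h(x) = lambda*h(x)
solutions = sp.solve(sp.Eq(x * h(x), lam * h(x)), h(x))
print(solutions)  # [0]: wherever x != lambda, an eigenvector would have to vanish
```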

Two matrices A and B for which AB and BA have different minimal polynomials

We consider here the algebra of matrices \(\mathcal{M}_n(\mathbb F)\) of dimension \(n \ge 1\) over a field \(\mathbb F\).

It is well known that for \(A,B \in \mathcal{M}_n(\mathbb F)\), the characteristic polynomial \(p_{AB}\) of the product \(AB\) is equal to the characteristic polynomial \(p_{BA}\) of the product \(BA\). What about the minimal polynomial?

Unlike the characteristic polynomials, the minimal polynomial \(\mu_{AB}\) of \(AB\) may differ from the minimal polynomial \(\mu_{BA}\) of \(BA\).

Consider the two matrices \[
A=\begin{pmatrix}
0 & 1\\
0 & 0\end{pmatrix} \text{, }
B=\begin{pmatrix}
0 & 0\\
0 & 1\end{pmatrix}\] which make sense over any field: \(\mathbb R\), \(\mathbb C\), or even a field of positive characteristic.

One can verify that \[
AB=A=\begin{pmatrix}
0 & 1\\
0 & 0\end{pmatrix} \text{, }
BA=\begin{pmatrix}
0 & 0\\
0 & 0\end{pmatrix}\]

As \(BA\) is the zero matrix, its minimal polynomial is \(\mu_{BA}=X\). Regarding \(\mu_{AB}\), we have \((AB)^2=A^2=0\), hence \(\mu_{AB}\) divides \(X^2\). Moreover \(\mu_{AB}\) cannot be equal to \(X\) as \(AB \neq 0\). Finally \(\mu_{AB}=X^2\) and we verify that \[X^2=\mu_{AB} \neq \mu_{BA}=X.\]
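All these identities can be checked mechanically; here is a short sympy verification of the defining relations:

```python
import sympy as sp

A = sp.Matrix([[0, 1], [0, 0]])
B = sp.Matrix([[0, 0], [0, 1]])
AB, BA = A * B, B * A

assert AB == A and BA == sp.zeros(2, 2)                   # BA = 0, so mu_BA = X
assert AB != sp.zeros(2, 2) and AB**2 == sp.zeros(2, 2)   # AB != 0 but (AB)^2 = 0, so mu_AB = X^2

X = sp.symbols('X')
# The characteristic polynomials nevertheless coincide, as expected
assert AB.charpoly(X).as_expr() == BA.charpoly(X).as_expr() == X**2
```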

Two non similar matrices having same minimal and characteristic polynomials

Consider a square matrix \(A\) of dimension \(n \ge 1\) over a field \(\mathbb F\), i.e. \(A \in \mathcal M_n(\mathbb F)\). The results discussed below are true for any field \(\mathbb F\), in particular for \(\mathbb F = \mathbb R\) or \(\mathbb F = \mathbb C\).

A polynomial \(P \in \mathbb F[X]\) is called a vanishing polynomial for \(A\) if \(P(A) = 0\). If the matrix \(B\) is similar to \(A\) (which means that \(B=Q^{-1} A Q\) for some invertible matrix \(Q\)) and the polynomial \(P\) vanishes at \(A\), then \(P\) also vanishes at \(B\). This is easy to prove, as we have \(P(B)=P(Q^{-1} A Q)=Q^{-1} P(A) Q\).

In particular, two similar matrices have the same minimal and characteristic polynomials.
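To illustrate, here is a small sympy sketch with a hypothetical pair \(A\), \(Q\) (chosen for the example): \(P\) is taken to be the characteristic polynomial of \(A\), which vanishes at \(A\) by Cayley-Hamilton, and it then vanishes at \(B = Q^{-1} A Q\) as well.

```python
import sympy as sp

A = sp.Matrix([[2, 1], [0, 3]])   # hypothetical matrix, characteristic polynomial X^2 - 5X + 6
Q = sp.Matrix([[1, 1], [1, 2]])   # any invertible matrix
B = Q.inv() * A * Q               # B is similar to A

def P(M):
    # Evaluate P(X) = X^2 - 5X + 6 at a 2x2 matrix
    return M**2 - 5*M + 6*sp.eye(2)

assert P(A) == sp.zeros(2, 2)                 # P vanishes at A
assert sp.simplify(P(B)) == sp.zeros(2, 2)    # hence also at B = Q^{-1} P(A) Q = 0
```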

Is the converse true? Are two matrices having the same minimal and characteristic polynomials similar? Continue reading Two non similar matrices having same minimal and characteristic polynomials

One matrix having several interesting properties

We consider a vector space \(V\) of dimension \(2\) over a field \(\mathbb{K}\). The matrix:
\[A=\left( \begin{array}{cc}
0 & 1 \\
0 & 0 \end{array} \right)\] has several wonderful properties!

Only zero as eigenvalue, but minimal polynomial of degree \(2\)

Zero is the only eigenvalue. The corresponding eigenspace is \(\mathbb{K} e_1\), where \((e_1,e_2)\) is the standard basis. The minimal polynomial of \(A\) is \(\mu_A(X)=X^2\). Continue reading One matrix having several interesting properties
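These claims about \(A\) can be checked mechanically with a short sympy verification:

```python
import sympy as sp

A = sp.Matrix([[0, 1], [0, 0]])
print(A.eigenvals())   # {0: 2}: zero is the only eigenvalue
print(A.eigenvects())  # a single eigenvector direction, spanned by e_1
assert A != sp.zeros(2, 2) and A**2 == sp.zeros(2, 2)  # A != 0 but A^2 = 0, so mu_A = X^2
```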

A module without a basis

Let’s start by recalling some background about modules.

Suppose that \(R\) is a ring and \(1_R\) is its multiplicative identity. A left \(R\)-module \(M\) consists of an abelian group \((M, +)\) and an operation \(R \times M \rightarrow M\) such that for all \(r, s \in R\) and \(x, y \in M\), we have:

  1. \(r \cdot (x+y)= r \cdot x + r \cdot y\) (\( \cdot\) is left-distributive over \(+\))
  2. \((r +s) \cdot x= r \cdot x + s \cdot x\) (\( \cdot\) is right-distributive over \(+\))
  3. \((rs) \cdot x= r \cdot (s \cdot x)\)
  4. \(1_R \cdot x= x \)

\(+\) is the symbol for addition in both \(R\) and \(M\).
If \(K\) is a field, a \(K\)-module \(M\) is exactly a \(K\)-vector space. It is well known that every vector space \(V\) has a basis, i.e. a subset of linearly independent vectors that spans \(V\).
Unlike a vector space, a module doesn’t always have a basis. Continue reading A module without a basis
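For a foretaste, here is one standard example (not necessarily the one developed in the full post): \(\mathbb Z/2\mathbb Z\) as a \(\mathbb Z\)-module. The nonzero scalar \(2\) annihilates every element, so no element can belong to a linearly independent family, and the module, being nonzero, has no basis.

```python
# Z/2Z as a Z-module: the scalar action is r . x = (r * x) mod 2.
# The nonzero scalar 2 kills every element, so no singleton {x} is
# linearly independent and no basis can exist (a standard example,
# possibly different from the one in the full post).
for x in range(2):           # the elements 0 and 1 of Z/2Z
    assert (2 * x) % 2 == 0  # 2 . x = 0 for every x
print("2 annihilates every element of Z/2Z")
```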

A vector space not isomorphic to its double dual

On this page \(\mathbb{F}\) refers to a field. Given any vector space \(V\) over \(\mathbb{F}\), the dual space \(V^*\) is defined as the set of all linear functionals \(f: V \to \mathbb{F}\). The dual space \(V^*\) itself becomes a vector space over \(\mathbb{F}\) when equipped with the following addition and scalar multiplication:
\[\left\{
\begin{array}{lll}(\varphi + \psi)(x) & = & \varphi(x) + \psi(x) \\
(a \varphi)(x) & = & a (\varphi(x)) \end{array} \right. \] for all \(\varphi, \psi \in V^*\), \(x \in V\), and \(a \in \mathbb{F}\).
There is a natural homomorphism \(\Phi\) from \(V\) into the double dual \(V^{**}\), defined by \((\Phi(v))(\varphi) = \varphi(v)\) for all \(v \in V\), \(\varphi \in V^*\). This map \(\Phi\) is always injective: if \(v \neq 0\), extend \(\{v\}\) to a basis of \(V\) and take the functional \(\varphi\) sending \(v\) to \(1\) and the other basis vectors to \(0\); then \((\Phi(v))(\varphi) = 1 \neq 0\). Continue reading A vector space not isomorphic to its double dual
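Below is a minimal sympy sketch of the injectivity argument in the finite-dimensional case \(V = \mathbb F^2\), with functionals represented as row vectors (a toy setup for the example, not the post's notation):

```python
import sympy as sp

# Toy finite-dimensional model: V = F^2, a functional is a 1x2 row vector
# acting on column vectors by matrix multiplication.
v1, v2 = sp.symbols('v1 v2')
v = sp.Matrix([v1, v2])

# Coordinate functionals e_1^*, e_2^*
functionals = [sp.Matrix([[1, 0]]), sp.Matrix([[0, 1]])]

# Phi(v) evaluates each functional at v; if Phi(v) = 0 then every value below
# is zero, forcing v1 = v2 = 0, i.e. v = 0. Hence Phi is injective.
print([(phi * v)[0] for phi in functionals])  # [v1, v2]
```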