
Section 3.1 Generalized Eigenspaces

In this section we will define a new type of invariant subspace and explore its key properties. This generalization of eigenvalues and eigenspaces will allow us to move from diagonal matrix representations of diagonalizable matrices to nearly diagonal matrix representations of arbitrary matrices.

Subsection 3.1.1 Kernels of Powers of Linear Transformations

With Section [provisional cross-reference: section-jordan-canonical-form] as our goal, we will become increasingly interested in kernels of powers of linear transformations, which will go a long way toward helping us understand the structure of a linear transformation, or its matrix representation. We need the next theorem to help us understand generalized eigenspaces, though its specialization in Theorem [provisional cross-reference: theorem-about-powers-nilpotent] to nilpotent linear transformations will be the real workhorse.

There are several items to verify in the conclusion as stated: the kernels of the powers of \(T\) form an increasing chain of subspaces, that chain stabilizes at some power \(m\leq n=\dimension{V}\text{,}\) and once two consecutive kernels are equal, all subsequent kernels are equal. First, we show that \(\krn{T^k}\subseteq\krn{T^{k+1}}\) for any \(k\text{.}\) Choose \(\vect{z}\in\krn{T^k}\text{.}\) Then

\begin{equation*} \lteval{T^{k+1}}{\vect{z}} =\lteval{T}{\lteval{T^k}{\vect{z}}} =\lteval{T}{\zerovector} =\zerovector \end{equation*}

so \(\vect{z}\in\krn{T^{k+1}}\text{.}\)

Second, we demonstrate the existence of a power \(m\) where consecutive powers result in equal kernels. A by-product will be the condition that \(m\) can be chosen so that \(m\leq n\text{.}\) To the contrary, suppose that

\begin{equation*} \set{\zerovector}=\krn{T^0}\subsetneq\krn{T^1}\subsetneq\krn{T^2}\subsetneq\cdots \subsetneq\krn{T^{n-1}}\subsetneq\krn{T^n}\subsetneq\krn{T^{n+1}}\subsetneq\cdots \end{equation*}

Since \(\krn{T^k}\subsetneq\krn{T^{k+1}}\text{,}\) Theorem PSSD implies that \(\dimension{\krn{T^{k+1}}}\geq\dimension{\krn{T^k}}+1\text{.}\) Repeated application of this observation yields

\begin{align*} \dimension{\krn{T^{n+1}}}&\geq\dimension{\krn{T^n}}+1\\ &\geq\dimension{\krn{T^{n-1}}}+2\\ &\geq\dimension{\krn{T^{n-2}}}+3\\ &\vdots\\ &\geq\dimension{\krn{T^{0}}}+(n+1)\\ &=n+1 \end{align*}

As \(\krn{T^{n+1}}\) is a subspace of \(V\text{,}\) which has dimension \(n\text{,}\) its dimension cannot exceed \(n\text{,}\) so this is a contradiction.

The contradiction yields the existence of an integer \(k\) such that \(\krn{T^k}=\krn{T^{k+1}}\text{,}\) so we can define \(m\) to be the smallest such integer. From the argument above about dimensions resulting from a strictly increasing chain of subspaces, we conclude that \(m\leq n\text{.}\)

It remains to show that once two consecutive kernels are equal, then all of the remaining kernels are equal. More formally, if \(\krn{T^m}=\krn{T^{m+1}}\text{,}\) then \(\krn{T^m}=\krn{T^{m+j}}\) for all \(j\geq 1\text{.}\) The proof is by induction on \(j\text{.}\) The base case (\(j=1\)) is precisely our defining property for \(m\text{.}\)

For the induction step, our hypothesis is that \(\krn{T^m}=\krn{T^{m+j}}\text{.}\) We want to establish that \(\krn{T^m}=\krn{T^{m+j+1}}\text{.}\) At the outset of this proof we showed that \(\krn{T^m}\subseteq\krn{T^{m+j+1}}\text{.}\) So we need only show the subset inclusion in the opposite direction. To wit, choose \(\vect{z}\in\krn{T^{m+j+1}}\text{.}\) Then

\begin{equation*} \lteval{T^{m+j}}{\lteval{T}{\vect{z}}} =\lteval{T^{m+j+1}}{\vect{z}} =\zerovector \end{equation*}

so \(\lteval{T}{\vect{z}}\in\krn{T^{m+j}}=\krn{T^m}\text{.}\) Thus

\begin{equation*} \lteval{T^{m+1}}{\vect{z}}=\lteval{T^{m}}{\lteval{T}{\vect{z}}}=\zerovector \end{equation*}

so \(\vect{z}\in\krn{T^{m+1}}=\krn{T^m}\text{,}\) as desired.

As an illustration of Theorem 3.1.1, consider the linear transformation \(\ltdefn{T}{\complex{10}}{\complex{10}}\) defined by \(\lteval{T}{\vect{x}}=A\vect{x}\text{.}\)

\begin{equation*} A=\begin{bmatrix} -27 & 17 & -12 & -1 & 24 & 16 & 26 & -2 & -1 & -3 \\ -66 & 45 & -55 & 11 & 73 & 44 & 45 & -6 & 15 & 1 \\ -85 & 58 & -65 & 13 & 94 & 56 & 61 & -6 & 16 & -4 \\ -81 & 58 & -55 & 4 & 83 & 52 & 70 & -7 & 6 & -6 \\ -33 & 21 & -22 & 6 & 37 & 21 & 23 & -1 & 6 & -4 \\ -38 & 28 & -25 & 1 & 39 & 25 & 34 & -3 & 1 & -4 \\ 20 & -15 & 23 & -8 & -28 & -15 & -8 & 1 & -9 & 0 \\ 41 & -30 & 39 & -12 & -53 & -29 & -23 & 2 & -13 & 3 \\ 58 & -43 & 43 & -6 & -64 & -38 & -47 & 4 & -6 & 6 \\ -69 & 46 & -47 & 7 & 72 & 44 & 54 & -5 & 9 & -4\end{bmatrix} \end{equation*}

This linear transformation is engineered to illustrate the full generality of the theorem. The kernels of the powers (null spaces of the matrix powers) increase, with the nullity incrementing first by twos, then by ones, until we top out at the maximum nullity of \(8\) at the \(m=6\) power, well below the maximum possible value of \(n=10\text{.}\)

\begin{align*} \dimension{\krn{T^0}}&=0 & \dimension{\krn{T^1}}&=2 & \dimension{\krn{T^2}}&=4\\ \dimension{\krn{T^3}}&=5 & \dimension{\krn{T^4}}&=6 & \dimension{\krn{T^5}}&=7\\ \dimension{\krn{T^6}}&=8 & \dimension{\krn{T^7}}&=8 \end{align*}

It is somewhat interesting to row-reduce the powers of \(A\text{,}\) since the null spaces of these powers are the kernels of the powers of \(T\text{.}\) These are best done with software, but here are two examples, first a mid-range power, then an extreme power.

\begin{align*} A^4&\rref\begin{bmatrix} 1 & 0 & 0 & 0 & -1 & -3 & 3 & -3 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & -4 & 7 & -5 & 0 & -1 \\ 0 & 0 & 1 & 0 & 0 & -1 & 2 & -1 & 0 & -1 \\ 0 & 0 & 0 & 1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}\\ A^{50}&\rref\begin{bmatrix}1 & 0 & -3 & \frac{2}{3} & -\frac{1}{3} & -\frac{2}{3} & -\frac{7}{3} & -\frac{2}{3} & \frac{2}{3} & \frac{7}{3} \\ 0 & 1 & -5 & 1 & 1 & 0 & -2 & -1 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix} \end{align*}

Once we reach the sixth power the kernels stop changing, and so, by the uniqueness of reduced row-echelon form, the row-reduced matrices stop changing as well.
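If you would like to check these nullities yourself, a few lines in a computer algebra system suffice. Here is one possible sketch using SymPy (the text does not prescribe a tool, so take this only as an illustration); it enters the matrix \(A\) above and reports the nullity of each power.

from sympy import Matrix

# the 10 x 10 matrix A from the example above
A = Matrix([
    [-27,  17, -12,  -1,  24,  16,  26, -2,  -1, -3],
    [-66,  45, -55,  11,  73,  44,  45, -6,  15,  1],
    [-85,  58, -65,  13,  94,  56,  61, -6,  16, -4],
    [-81,  58, -55,   4,  83,  52,  70, -7,   6, -6],
    [-33,  21, -22,   6,  37,  21,  23, -1,   6, -4],
    [-38,  28, -25,   1,  39,  25,  34, -3,   1, -4],
    [ 20, -15,  23,  -8, -28, -15,  -8,  1,  -9,  0],
    [ 41, -30,  39, -12, -53, -29, -23,  2, -13,  3],
    [ 58, -43,  43,  -6, -64, -38, -47,  4,  -6,  6],
    [-69,  46, -47,   7,  72,  44,  54, -5,   9, -4],
])

# nullity of A**k equals the dimension of the kernel of T**k
for k in range(8):
    print(k, len((A**k).nullspace()))

The output should reproduce the dimensions listed above, stabilizing at \(8\) from the sixth power onward.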

Subsection 3.1.2 Generalized Eigenspaces

These are the two main definitions of this section.

Definition 3.1.3. Generalized Eigenvector.

Suppose that \(\ltdefn{T}{V}{V}\) is a linear transformation. Suppose further that for \(\vect{x}\neq\zerovector\text{,}\) \(\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\) for some \(k>0\text{.}\) Then \(\vect{x}\) is a generalized eigenvector of \(T\) with eigenvalue \(\lambda\text{.}\)

Definition 3.1.4. Generalized Eigenspace.

Suppose that \(\ltdefn{T}{V}{V}\) is a linear transformation. Define the generalized eigenspace of \(T\) for \(\lambda\) as

\begin{equation*} \geneigenspace{T}{\lambda}=\setparts{\vect{x}}{\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\text{ for some }k\geq 0} \end{equation*}

So the generalized eigenspace is composed of generalized eigenvectors, plus the zero vector. As the name implies, the generalized eigenspace is a subspace of \(V\text{.}\) But more topically, it is an invariant subspace of \(V\) relative to \(T\text{.}\)

First we establish that \(\geneigenspace{T}{\lambda}\) is a subspace of \(V\text{.}\) Note that \(\lteval{\left(T-\lambda I_V\right)^0}{\zerovector}=\zerovector\text{,}\) so by Theorem LTTZZ we have \(\zerovector\in\geneigenspace{T}{\lambda}\text{.}\)

Suppose that \(\vect{x},\,\vect{y}\in\geneigenspace{T}{\lambda}\text{.}\) Then there are integers \(k,\,\ell\) such that \(\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\) and \(\lteval{\left(T-\lambda I_V\right)^\ell}{\vect{y}}=\zerovector\text{.}\) Set \(m=k+\ell\text{,}\)

\begin{align*} \lteval{\left(T-\lambda I_V\right)^m}{\vect{x}+\vect{y}}&=\lteval{\left(T-\lambda I_V\right)^m}{\vect{x}}+ \lteval{\left(T-\lambda I_V\right)^m}{\vect{y}}\\ &=\lteval{\left(T-\lambda I_V\right)^{k+\ell}}{\vect{x}}+ \lteval{\left(T-\lambda I_V\right)^{k+\ell}}{\vect{y}}\\ &=\lteval{\left(T-\lambda I_V\right)^{\ell}}{\lteval{\left(T-\lambda I_V\right)^{k}}{\vect{x}}}+\\ &\quad\quad\lteval{\left(T-\lambda I_V\right)^{k}}{\lteval{\left(T-\lambda I_V\right)^{\ell}}{\vect{y}}}\\ &= \lteval{\left(T-\lambda I_V\right)^{\ell}}{\zerovector}+ \lteval{\left(T-\lambda I_V\right)^{k}}{\zerovector} \\ &=\zerovector+\zerovector=\zerovector \end{align*}

So \(\vect{x}+\vect{y}\in\geneigenspace{T}{\lambda}\text{.}\)

Suppose that \(\vect{x}\in\geneigenspace{T}{\lambda}\) and \(\alpha\in\complexes\text{.}\) Then there is an integer \(k\) such that \(\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\text{.}\)

\begin{equation*} \lteval{\left(T-\lambda I_V\right)^k}{\alpha\vect{x}}=\alpha\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\alpha\zerovector=\zerovector\text{.} \end{equation*}

So \(\alpha\vect{x}\in\geneigenspace{T}{\lambda}\text{.}\) By Theorem TSS, \(\geneigenspace{T}{\lambda}\) is a subspace of \(V\text{.}\)

Now we show that \(\geneigenspace{T}{\lambda}\) is invariant relative to \(T\text{.}\) Suppose that \(\vect{x}\in\geneigenspace{T}{\lambda}\text{.}\) Then by Definition 3.1.4 there is an integer \(k\) such that \(\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\text{.}\) The following argument is due to Zoltan Toth.

\begin{align*} \lteval{\left(T-\lambda I_V\right)^k}{\lteval{T}{\vect{x}}}&=\lteval{\left(T-\lambda I_V\right)^k}{\lteval{T}{\vect{x}}} - \lambda\zerovector\\ &=\lteval{\left(T-\lambda I_V\right)^k}{\lteval{T}{\vect{x}}} - \lambda\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}\\ &=\lteval{\left(T-\lambda I_V\right)^k}{\lteval{T}{\vect{x}}} - \lteval{\left(T-\lambda I_V\right)^k}{\lambda\vect{x}}\\ &=\lteval{\left(T-\lambda I_V\right)^k}{\lteval{T}{\vect{x}}-\lambda\vect{x}}\\ &=\lteval{\left(T-\lambda I_V\right)^k}{\lteval{\left(T-\lambda I_V\right)}{\vect{x}}}\\ &=\lteval{\left(T-\lambda I_V\right)^{k+1}}{\vect{x}}\\ &=\lteval{\left(T-\lambda I_V\right)}{\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}}\\ &=\lteval{\left(T-\lambda I_V\right)}{\zerovector}=\zerovector \end{align*}

This qualifies \(\lteval{T}{\vect{x}}\) for membership in \(\geneigenspace{T}{\lambda}\) by Definition 3.1.4, so \(\geneigenspace{T}{\lambda}\) is invariant relative to \(T\text{.}\)

Before we compute some generalized eigenspaces, we state and prove one theorem that will make it much easier to create a generalized eigenspace, since it will allow us to use tools we already know well, and will remove some of the ambiguity of the clause “for some \(k\)” in the definition.

To establish the set equality, first suppose that \(\vect{x}\in\geneigenspace{T}{\lambda}\text{.}\) Then there is an integer \(k\) such that \(\lteval{\left(T-\lambda I_V\right)^k}{\vect{x}}=\zerovector\text{.}\) This is equivalent to the statement that \(\vect{x}\in\krn{\left(T-\lambda I_V\right)^k}\text{.}\) No matter what the value of \(k\) is, Theorem 3.1.1 gives

\begin{equation*} \vect{x}\in\krn{\left(T-\lambda I_V\right)^k}\subseteq\krn{\left(T-\lambda I_V\right)^n}. \end{equation*}

So, \(\geneigenspace{T}{\lambda}\subseteq\krn{\left(T-\lambda I_V\right)^n}\text{.}\)

For the opposite inclusion, suppose \(\vect{y}\in\krn{\left(T-\lambda I_V\right)^n}\text{.}\) Then \(\lteval{\left(T-\lambda I_V\right)^n}{\vect{y}}=\zerovector\text{,}\) so \(\vect{y}\in\geneigenspace{T}{\lambda}\) and thus \(\krn{\left(T-\lambda I_V\right)^n}\subseteq\geneigenspace{T}{\lambda}\text{.}\) So we have the desired equality of sets.

Theorem 3.1.6 allows us to compute generalized eigenspaces as a single kernel (or null space of a matrix representation) without considering all possible powers \(k\text{.}\) We can simply consider the case where \(k=n\text{.}\) It is worth noting that the “regular” eigenspace is a subspace of the generalized eigenspace since

\begin{equation*} \eigenspace{T}{\lambda}=\krn{\left(T-\lambda I_V\right)^1}\subseteq\krn{\left(T-\lambda I_V\right)^n}=\geneigenspace{T}{\lambda} \end{equation*}

where the subset inclusion is a consequence of Theorem 3.1.1.

Also, there is no such thing as a “generalized eigenvalue.” If \(\lambda\) is not an eigenvalue of \(T\text{,}\) then the kernel of \(T-\lambda I_V\) is trivial, and therefore subsequent powers of \(T-\lambda I_V\) also have trivial kernels (Theorem 3.1.1 gives \(m=0\)). So if we defined generalized eigenspaces for scalars that are not eigenvalues, they would always be trivial. Alright, we know enough now to compute some generalized eigenspaces. We will record some information about algebraic and geometric multiplicities of eigenvalues (Definition AME, Definition GME) as we go, since these observations will be of interest in light of some future theorems.

In order to gain some experience with generalized eigenspaces, we construct one and then also construct a matrix representation for the restriction to this invariant subspace.

Consider the linear transformation \(\ltdefn{T}{\complex{5}}{\complex{5}}\) defined by \(\lteval{T}{\vect{x}}=A\vect{x}\text{,}\) where

\begin{equation*} A=\begin{bmatrix} -22 & -24 & -24 & -24 & -46 \\ 3 & 2 & 6 & 0 & 11 \\ -12 & -16 & -6 & -14 & -17 \\ 6 & 8 & 4 & 10 & 8 \\ 11 & 14 & 8 & 13 & 18 \end{bmatrix} \end{equation*}

One of the eigenvalues of \(A\) is \(\lambda=2\text{,}\) with geometric multiplicity \(\geomult{T}{2}=1\) and algebraic multiplicity \(\algmult{T}{2}=3\text{.}\) We get the generalized eigenspace according to Theorem 3.1.6,

\begin{align*} W&= \geneigenspace{T}{2} =\krn{\left(T-2I_{\complex{5}}\right)^5}\\ &=\spn{\set{ \colvector{-2\\1\\1\\0\\0},\, \colvector{0\\-1\\0\\1\\0},\, \colvector{-4\\2\\0\\0\\1}}} =\spn{\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3}} \end{align*}
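If you are following along with software, here is one way (a SymPy sketch, purely as an illustration) to obtain this generalized eigenspace as the null space promised by Theorem 3.1.6. The basis SymPy returns spans the same subspace \(W\text{,}\) though it may list different vectors than the ones above.

from sympy import Matrix, eye

A = Matrix([
    [-22, -24, -24, -24, -46],
    [  3,   2,   6,   0,  11],
    [-12, -16,  -6, -14, -17],
    [  6,   8,   4,  10,   8],
    [ 11,  14,   8,  13,  18],
])

# generalized eigenspace for lambda = 2 is the kernel of (A - 2I)^5
W_basis = ((A - 2*eye(5))**5).nullspace()
for w in W_basis:
    print(w.T)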

By Theorem 3.1.5, we know \(W\) is invariant relative to \(T\text{,}\) so we can employ Definition LTR to form the restriction, \(\ltdefn{\restrict{T}{W}}{W}{W}\text{.}\)

We will compute a matrix representation of the restriction of \(T\) to \(W\text{,}\) \(\restrict{T}{W}\text{,}\) since we will do this frequently in subsequent examples. For a basis of \(W\) we will use \(C=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3}\text{.}\) Notice that \(\dimension{W}=3\text{,}\) so our matrix representation will be a square matrix of size 3. Applying Definition MR, we compute

\begin{align*} \vectrep{C}{\lteval{T}{\vect{w}_1}} &=\vectrep{C}{A\vect{w}_1}\\ &=\vectrep{C}{\colvector{-4\\2\\2\\0\\0}} =\vectrep{C}{ 2\colvector{-2\\1\\1\\0\\0}+ 0\colvector{0\\-1\\0\\1\\0}+ 0\colvector{-4\\2\\0\\0\\1} } =\colvector{2\\0\\0}\\ \vectrep{C}{\lteval{T}{\vect{w}_2}} &=\vectrep{C}{A\vect{w}_2}\\ &=\vectrep{C}{\colvector{0\\-2\\2\\2\\-1}} =\vectrep{C}{ 2\colvector{-2\\1\\1\\0\\0}+ 2\colvector{0\\-1\\0\\1\\0}+ (-1)\colvector{-4\\2\\0\\0\\1} } =\colvector{2\\2\\-1}\\ \vectrep{C}{\lteval{T}{\vect{w}_3}} &=\vectrep{C}{A\vect{w}_3}\\ &=\vectrep{C}{\colvector{-6\\3\\-1\\0\\2}} =\vectrep{C}{ (-1)\colvector{-2\\1\\1\\0\\0}+ 0\colvector{0\\-1\\0\\1\\0}+ 2\colvector{-4\\2\\0\\0\\1} } =\colvector{-1\\0\\2} \end{align*}

So the matrix representation of \(\restrict{T}{W}\) relative to \(C\) is

\begin{equation*} \matrixrep{\restrict{T}{W}}{C}{C}= \begin{bmatrix} 2 & 2 & -1\\ 0 & 2 & 0\\ 0 & -1 & 2 \end{bmatrix} \end{equation*}
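Continuing the SymPy sketch above (with the same matrix \(A\)), the coordinate computations can be automated. The helper below is not from the text; it recovers coordinates relative to \(C\) by solving a small linear system, which is legitimate because the columns of \(P\) are linearly independent and each vector \(A\vect{w}_i\) lies in their span.

from sympy import Matrix

# basis C = {w1, w2, w3}, as the columns of P
w1 = Matrix([-2, 1, 1, 0, 0])
w2 = Matrix([0, -1, 0, 1, 0])
w3 = Matrix([-4, 2, 0, 0, 1])
P = Matrix.hstack(w1, w2, w3)

def coords(b):
    # exact coordinates of b relative to C (normal equations; P has full column rank)
    return (P.T * P).inv() * (P.T * b)

# columns are the coordinate vectors of T(w1), T(w2), T(w3)
M = Matrix.hstack(*[coords(A * w) for w in (w1, w2, w3)])
print(M)   # should agree with the 3 x 3 representation displayed above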

The question arises: how do we use a \(3\times 3\) matrix to compute with vectors from \(\complex{5}\text{?}\) To answer this question, consider the randomly chosen vector

\begin{equation*} \vect{w}=\colvector{-4\\4\\4\\-2\\-1} \end{equation*}

First check that \(\vect{w}\in\geneigenspace{T}{2}\text{.}\) There are two ways to do this. First, verify that

\begin{equation*} \lteval{\left(T-2I_{\complex{5}}\right)^5}{\vect{w}}=\left(A-2I_5\right)^5\vect{w}=\zerovector \end{equation*}

meeting Definition 3.1.3 (with \(k=5\)). Or, express \(\vect{w}\) as a linear combination of the basis \(C\) for \(W\text{,}\) to wit, \(\vect{w}=4\vect{w}_1-2\vect{w}_2-\vect{w}_3\text{.}\)

Now compute \(\lteval{\restrict{T}{W}}{\vect{w}}\) directly

\begin{equation*} \lteval{\restrict{T}{W}}{\vect{w}} =\lteval{T}{\vect{w}} =A\vect{w} =\colvector{-10\\9\\5\\-4\\0} \end{equation*}

It was necessary to verify that \(\vect{w}\in\geneigenspace{T}{2}\text{.}\) If we trust our work so far, then this output we just computed will also be an element of \(W\text{,}\) but it would be wise to check this anyway (using either of the methods we used for \(\vect{w}\)). We'll wait.

Now we will repeat this sample computation, but instead using the matrix representation of \(\restrict{T}{W}\) relative to \(C\text{.}\)

\begin{align*} \lteval{\restrict{T}{W}}{\vect{w}}&=\vectrepinv{C}{\matrixrep{\restrict{T}{W}}{C}{C}\vectrep{C}{\vect{w}}}\\ &=\vectrepinv{C}{\matrixrep{\restrict{T}{W}}{C}{C}\vectrep{C}{4\vect{w}_1-2\vect{w}_2-\vect{w}_3}}\\ &=\vectrepinv{C}{\begin{bmatrix}2 & 2 & -1 \\ 0 & 2 & 0 \\ 0 & -1 & 2\end{bmatrix} \colvector{4\\-2\\-1}}=\vectrepinv{C}{\colvector{5\\-4\\0}}\\ &=5\colvector{-2\\1\\1\\0\\0}+ (-4)\colvector{0\\-1\\0\\1\\0}+ 0\colvector{-4\\2\\0\\0\\1}=\colvector{-10\\9\\5\\-4\\0} \end{align*}

This matches the previous computation. Notice how the “action” of \(\restrict{T}{W}\) is accomplished by a \(3\times 3\) matrix multiplying a column vector of size 3.
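The whole sample computation can also be replayed in the SymPy sketch (again, just one way to check the arithmetic, with \(A\text{,}\) \(P\text{,}\) \(M\) and coords as defined earlier): first confirm membership of \(\vect{w}\) in the generalized eigenspace, then compare the direct computation with the one routed through the \(3\times 3\) representation.

from sympy import Matrix, eye

w = Matrix([-4, 4, 4, -2, -1])

# membership: (A - 2I)^5 sends w to the zero vector
print(((A - 2*eye(5))**5) * w)

# the action of the restriction, two ways
print(A * w)                    # directly in C^5
print(P * (M * coords(w)))      # via coordinates relative to C, then back

The last two lines should print the same vector computed above.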

If you would like more practice with these sorts of computations, mimic the above using the other eigenvalue of \(T\text{,}\) which is \(\lambda=-2\text{.}\) The generalized eigenspace has dimension 2, so the matrix representation of the restriction to the generalized eigenspace will be a \(2\times 2\) matrix.

Our next two examples compute a complete set of generalized eigenspaces for a linear transformation.

In Example 1.4.2 we presented two invariant subspaces of \(\complex{4}\text{.}\) There was some mystery about just how these were constructed, but we can now reveal that they are generalized eigenspaces. Example 1.4.2 featured \(\ltdefn{T}{\complex{4}}{\complex{4}}\) defined by \(\lteval{T}{\vect{x}}=A\vect{x}\) with \(A\) given by

\begin{equation*} A=\begin{bmatrix} -8 & 6 & -15 & 9 \\ -8 & 14 & -10 & 18 \\ 1 & 1 & 3 & 0 \\ 3 & -8 & 2 & -11 \end{bmatrix} \end{equation*}

A matrix representation of \(T\) relative to the standard basis (Definition SUV) will equal \(A\text{.}\) So we can analyze \(A\) with the techniques of Chapter E. Doing so, we find two eigenvalues, \(\lambda=1,\,-2\text{,}\) with multiplicities,

\begin{align*} \algmult{T}{1}&=2 & \geomult{T}{1}&=1 &\algmult{T}{-2}&=2 & \geomult{T}{-2}&=1 \end{align*}
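These multiplicities can be verified with software. As a sketch (using SymPy, an arbitrary choice of tool), the algebraic multiplicities come from the characteristic polynomial and the geometric multiplicities are the nullities of \(A-\lambda I_4\text{.}\)

from sympy import Matrix, eye

A = Matrix([
    [-8,   6, -15,   9],
    [-8,  14, -10,  18],
    [ 1,   1,   3,   0],
    [ 3,  -8,   2, -11],
])

# eigenvalues with algebraic multiplicities (1 and -2, each of multiplicity 2)
print(A.eigenvals())

# geometric multiplicity is the dimension of the eigenspace
for lam in (1, -2):
    print(lam, len((A - lam*eye(4)).nullspace()))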

To apply Theorem 3.1.6 we subtract each eigenvalue from the diagonal entries of \(A\text{,}\) raise the result to the power \(\dimension{\complex{4}}=4\text{,}\) and compute a basis for the null space.

\begin{align*} \left(A-(-2)I_4\right)^4&=\begin{bmatrix} 648 & -1215 & 729 & -1215 \\ -324 & 486 & -486 & 486 \\ -405 & 729 & -486 & 729 \\ 297 & -486 & 405 & -486 \end{bmatrix} \rref \begin{bmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix}\\ \geneigenspace{T}{-2}&=\spn{\set{ \colvector{-3\\-1\\1\\0},\, \colvector{0\\-1\\0\\1}}}\\ \left(A-(1)I_4\right)^4&= \begin{bmatrix} 81 & -405 & -81 & -729 \\ -108 & -189 & -378 & -486 \\ -27 & 135 & 27 & 243 \\ 135 & 54 & 351 & 243 \end{bmatrix} \rref \begin{bmatrix} 1 & 0 & 7/3 & 1 \\ 0 & 1 & 2/3 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix}\\ \geneigenspace{T}{1}&=\spn{\set{ \colvector{-7\\-2\\3\\0},\, \colvector{-1\\-2\\0\\1}}} \end{align*}

In Example 1.4.2 we concluded that these two invariant subspaces formed a direct sum of \(\complex{4}\text{;}\) only at that time, they were called \(W\) and \(X\text{.}\) Now we can write

\begin{equation*} \complex{4}=\geneigenspace{T}{1}\ds\geneigenspace{T}{-2} \end{equation*}

This is no accident. Notice that the dimension of each of these invariant subspaces is equal to the algebraic multiplicity of the associated eigenvalue. Not an accident either. (See the upcoming Theorem 3.1.10.)
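As a quick software check of this direct sum (a sketch in SymPy, with the matrix \(A\) entered as in the previous snippet), compute both generalized eigenspaces and verify that the union of their bases has rank \(4\text{,}\) so together they form a basis of \(\complex{4}\text{.}\)

from sympy import Matrix, eye

# bases of the generalized eigenspaces, as returned by SymPy
G1 = ((A - 1*eye(4))**4).nullspace()    # generalized eigenspace for 1
G2 = ((A + 2*eye(4))**4).nullspace()    # generalized eigenspace for -2

# stacking all four vectors as columns gives a matrix of rank 4,
# so the two subspaces form a direct sum decomposition of C^4
print(Matrix.hstack(*(G1 + G2)).rank())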

Define the linear transformation \(\ltdefn{S}{\complex{6}}{\complex{6}}\) by \(\lteval{S}{\vect{x}}=B\vect{x}\) where

\begin{equation*} B=\begin{bmatrix} 2 & -4 & 25 & -54 & 90 & -37 \\ 2 & -3 & 4 & -16 & 26 & -8 \\ 2 & -3 & 4 & -15 & 24 & -7 \\ 10 & -18 & 6 & -36 & 51 & -2 \\ 8 & -14 & 0 & -21 & 28 & 4 \\ 5 & -7 & -6 & -7 & 8 & 7\end{bmatrix} \end{equation*}

Then \(B\) will be the matrix representation of \(S\) relative to the standard basis and we can use the techniques of Chapter E applied to \(B\) in order to find the eigenvalues of \(S\text{.}\)

\begin{align*} \algmult{S}{3}&=2 & \geomult{S}{3}&=1 & \algmult{S}{-1}&=4 & \geomult{S}{-1}&=2 \end{align*}

To find the generalized eigenspaces of \(S\) we need to subtract an eigenvalue from the diagonal elements of \(B\text{,}\) raise the result to the power \(\dimension{\complex{6}}=6\) and compute the null space. Here are the results for the two eigenvalues of \(S\text{,}\)

\begin{align*} \left(B-3I_6\right)^6&= \begin{bmatrix} 64000 & -152576 & -59904 & 26112 & -95744 & 133632 \\ 15872 & -39936 & -11776 & 8704 & -29184 & 36352 \\ 12032 & -30208 & -9984 & 6400 & -20736 & 26368 \\ -1536 & 11264 & -23040 & 17920 & -17920 & -1536 \\ -9728 & 27648 & -6656 & 9728 & -1536 & -17920 \\ -7936 & 17920 & 5888 & 1792 & 4352 & -14080 \end{bmatrix}\\ \rref& \begin{bmatrix} 1 & 0 & 0 & 0 & -4 & 5 \\ 0 & 1 & 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & 0 & -1 & 1 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \geneigenspace{S}{3}&=\spn{\set{ \colvector{4\\1\\1\\2\\1\\0},\, \colvector{-5\\-1\\-1\\-1\\0\\1}}}\\ \left(B-(-1)I_6\right)^6&= \begin{bmatrix} 6144 & -16384 & 18432 & -36864 & 57344 & -18432 \\ 4096 & -8192 & 4096 & -16384 & 24576 & -4096 \\ 4096 & -8192 & 4096 & -16384 & 24576 & -4096 \\ 18432 & -32768 & 6144 & -61440 & 90112 & -6144 \\ 14336 & -24576 & 2048 & -45056 & 65536 & -2048 \\ 10240 & -16384 & -2048 & -28672 & 40960 & 2048 \end{bmatrix}\\ \rref& \begin{bmatrix} 1 & 0 & -5 & 2 & -4 & 5 \\ 0 & 1 & -3 & 3 & -5 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \geneigenspace{S}{-1}&=\spn{\set{ \colvector{5\\3\\1\\0\\0\\0},\, \colvector{-2\\-3\\0\\1\\0\\0},\, \colvector{4\\5\\0\\0\\1\\0},\, \colvector{-5\\-3\\0\\0\\0\\1} }} \end{align*}

If we take the union of the two bases for these two invariant subspaces we obtain the set

\begin{equation*} C=\set{ \colvector{4\\1\\1\\2\\1\\0},\, \colvector{-5\\-1\\-1\\-1\\0\\1},\, \colvector{5\\3\\1\\0\\0\\0},\, \colvector{-2\\-3\\0\\1\\0\\0},\, \colvector{4\\5\\0\\0\\1\\0},\, \colvector{-5\\-3\\0\\0\\0\\1} } \end{equation*}

You can check that this set is linearly independent (right now we have no guarantee this will happen). Once this is verified, we have a basis for \(\complex{6}\text{.}\) This is enough for us to apply Theorem 1.2.3 and conclude that

\begin{equation*} \complex{6}=\geneigenspace{S}{3}\ds\geneigenspace{S}{-1} \end{equation*}

This is no accident. Notice that the dimension of each of these invariant subspaces is equal to the algebraic multiplicity of the associated eigenvalue. Not an accident either. (See Theorem 3.1.10.)
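The linear independence check mentioned above is easy to delegate to software. Here is a minimal sketch (SymPy again, as an illustration): place the six vectors of \(C\) into the columns of a matrix and confirm that its rank is \(6\text{.}\)

from sympy import Matrix

C_cols = [
    Matrix([4, 1, 1, 2, 1, 0]),   Matrix([-5, -1, -1, -1, 0, 1]),   # from the generalized eigenspace for 3
    Matrix([5, 3, 1, 0, 0, 0]),   Matrix([-2, -3, 0, 1, 0, 0]),     # from the generalized eigenspace for -1
    Matrix([4, 5, 0, 0, 1, 0]),   Matrix([-5, -3, 0, 0, 0, 1]),
]

# rank 6 means the six vectors are linearly independent, hence a basis of C^6
print(Matrix.hstack(*C_cols).rank())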

Our principal interest in generalized eigenspaces is the following important theorem, which has been presaged by the two previous examples.

We will provide a complete proof soon. For now, we give an outline.

We know that a decomposition of the domain of a linear transformation into invariant subspaces will give a block diagonal matrix representation. But it cuts both ways. If there is a similarity transformation to a block diagonal matrix, then the columns of the nonsingular matrix used for the similarity will form a basis that can be partitioned into bases of invariant subspaces whose direct sum is the domain (Theorem SCB). So we outline a sequence of similarity transformations that converts any square matrix to the appropriate block diagonal form.

  1. Begin with the eigenvalues of the matrix, ordered so that equal eigenvalues are adjacent.

  2. Determine the upper triangular matrix with these eigenvalues on the diagonal and similar to the original matrix as guaranteed by Theorem UTMR.

  3. Suppose that the entry in row \(i\) and column \(j\) in the “upper half” (so \(j\gt i\)) has the value \(a\text{.}\) Suppose further that the diagonal entries (eigenvalues) \(\lambda_i\) and \(\lambda_j\) are different.

    Define \(S\) to be the identity matrix, with the addition of the entry \(\frac{a}{\lambda_j-\lambda_i}\) in row \(i\) and column \(j\text{.}\) Then a similarity transformation by \(S\) will place a zero in row \(i\) and column \(j\text{.}\) Here is where we begin to understand being careful about equal and different eigenvalues. (A small numerical sketch of this step appears after this list.)

  4. The similarity transformation of the previous step will change other entries of the matrix, but only in row \(i\) to the right of the entry of interest and column \(j\) above the entry of interest.

  5. Begin in the bottom row, going only as far right as needed to get different eigenvalues, and “zero out” the rest of the row. Move up a row, work left to right, “zeroing out” as much of the row as possible. Continue moving up a row at a time, then move left to right in the row. The restriction to using different eigenvalues will cut a staircase pattern.

  6. You should understand that the blocks left on the diagonal correspond to runs of equal eigenvalues on the diagonal. So each block has a size equal to the algebraic multiplicity of the eigenvalue.
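To see the key step of item 3 concretely, here is a tiny numerical sketch in SymPy (the matrix is an invented example, not from the text). A \(3\times 3\) upper triangular matrix with eigenvalues \(3,\,3,\,-1\) on the diagonal has its entry in row 2, column 3 zeroed out by a similarity transformation, while an entry above it in column 3 changes, exactly as described in item 4.

from sympy import Matrix, eye

# an invented upper triangular matrix; the diagonal entries 3 and -1 in
# positions (2,2) and (3,3) are different eigenvalues
U = Matrix([
    [3, 5,  2],
    [0, 3,  7],
    [0, 0, -1],
])

i, j = 1, 2                            # zero-based indices for row 2, column 3
a = U[i, j]
S = eye(3)
S[i, j] = a / (U[j, j] - U[i, i])      # the entry a / (lambda_j - lambda_i)

print(S.inv() * U * S)                 # the (2,3) entry is now 0; the (1,3) entry has changed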

Now, given any linear transformation, we can find a decomposition of the domain into a collection of invariant subspaces. And, as we have seen, such a decomposition will provide a basis for the domain so that a matrix representation relative to this basis will have a block diagonal form. Besides a decomposition into invariant subspaces, this proof has a bonus for us.

Coming soon: as a consequence of the proof, or by counting dimensions with an inequality on geometric dimension.

We illustrate the use of this decomposition in building a block diagonal matrix representation.

In Example 3.1.9 we computed the generalized eigenspaces of the linear transformation \(\ltdefn{S}{\complex{6}}{\complex{6}}\) defined by \(\lteval{S}{\vect{x}}=B\vect{x}\) where

\begin{equation*} B=\begin{bmatrix} 2 & -4 & 25 & -54 & 90 & -37 \\ 2 & -3 & 4 & -16 & 26 & -8 \\ 2 & -3 & 4 & -15 & 24 & -7 \\ 10 & -18 & 6 & -36 & 51 & -2 \\ 8 & -14 & 0 & -21 & 28 & 4 \\ 5 & -7 & -6 & -7 & 8 & 7\end{bmatrix} \end{equation*}

We also recognized that these generalized eigenspaces provided a vector space decomposition.

From these generalized eigenspaces, we found the basis

\begin{align*} C&=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4,\,\vect{v}_5,\,\vect{v}_6}\\ &=\set{\colvector{4\\1\\1\\2\\1\\0},\, \colvector{-5\\-1\\-1\\-1\\0\\1},\, \colvector{5\\3\\1\\0\\0\\0},\, \colvector{-2\\-3\\0\\1\\0\\0},\, \colvector{4\\5\\0\\0\\1\\0},\, \colvector{-5\\-3\\0\\0\\0\\1}} \end{align*}

of \(\complex{6}\text{,}\) where \(\set{\vect{v}_1,\,\vect{v}_2}\) is a basis of \(\geneigenspace{S}{3}\) and \(\set{\vect{v}_3,\,\vect{v}_4,\,\vect{v}_5,\,\vect{v}_6}\) is a basis of \(\geneigenspace{S}{-1}\text{.}\)

We can employ \(C\) in the construction of a matrix representation of \(S\) (Definition MR). Here are the computations,

\begin{align*} \vectrep{C}{\lteval{S}{\vect{v}_1}} &=\vectrep{C}{\colvector{11\\3\\3\\7\\4\\1}} =\vectrep{C}{4\vect{v}_1+1\vect{v}_2} =\colvector{4\\1\\0\\0\\0\\0}\\ \vectrep{C}{\lteval{S}{\vect{v}_2}} &=\vectrep{C}{\colvector{-14\\-3\\-3\\-4\\-1\\2}} =\vectrep{C}{(-1)\vect{v}_1+2\vect{v}_2} =\colvector{-1\\2\\0\\0\\0\\0}\\ \vectrep{C}{\lteval{S}{\vect{v}_3}} &=\vectrep{C}{\colvector{23\\5\\5\\2\\-2\\-2}} =\vectrep{C}{5\vect{v}_3+2\vect{v}_4+(-2)\vect{v}_5+(-2)\vect{v}_6} =\colvector{0\\0\\5\\2\\-2\\-2}\\ \vectrep{C}{\lteval{S}{\vect{v}_4}} &=\vectrep{C}{\colvector{-46\\-11\\-10\\-2\\5\\4}} =\vectrep{C}{(-10)\vect{v}_3+(-2)\vect{v}_4+5\vect{v}_5+4\vect{v}_6} =\colvector{0\\0\\-10\\-2\\5\\4}\\ \vectrep{C}{\lteval{S}{\vect{v}_5}} &=\vectrep{C}{\colvector{78\\19\\17\\1\\-10\\-7}} =\vectrep{C}{17\vect{v}_3+1\vect{v}_4+(-10)\vect{v}_5+(-7)\vect{v}_6} =\colvector{0\\0\\17\\1\\-10\\-7}\\ \vectrep{C}{\lteval{S}{\vect{v}_6}} &=\vectrep{C}{\colvector{-35\\-9\\-8\\2\\6\\3}} =\vectrep{C}{(-8)\vect{v}_3+2\vect{v}_4+6\vect{v}_5+3\vect{v}_6} =\colvector{0\\0\\-8\\2\\6\\3} \end{align*}

These column vectors are the columns of the matrix representation, so we obtain

\begin{equation*} \matrixrep{S}{C}{C}=\begin{bmatrix}4 & -1 & 0 & 0& 0 & 0\\ 1 & 2 & 0 & 0& 0 & 0\\ 0 & 0 & 5 & -10& 17 & -8\\ 0 & 0 & 2 & -2& 1 & 2\\ 0 & 0 & -2 & 5& -10 & 6\\ 0 & 0 & -2 & 4& -7 & 3\end{bmatrix} \end{equation*}

As before, the key feature of this representation is the \(2\times 2\) and \(4\times 4\) blocks on the diagonal. They arise from generalized eigenspaces and their sizes are equal to the algebraic multiplicities of the eigenvalues.
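The entire representation can also be produced in one step with software, since a change of basis is a similarity transformation: if \(P\) has the vectors of \(C\) as its columns, then \(\matrixrep{S}{C}{C}=P^{-1}BP\) (Theorem SCB). Here is a sketch in SymPy, just one way to confirm the block diagonal form.

from sympy import Matrix

B = Matrix([
    [ 2,  -4,  25, -54,  90, -37],
    [ 2,  -3,   4, -16,  26,  -8],
    [ 2,  -3,   4, -15,  24,  -7],
    [10, -18,   6, -36,  51,  -2],
    [ 8, -14,   0, -21,  28,   4],
    [ 5,  -7,  -6,  -7,   8,   7],
])

# columns of P are the basis vectors v1, ..., v6 of C
P = Matrix([
    [4, -5,  5, -2,  4, -5],
    [1, -1,  3, -3,  5, -3],
    [1, -1,  1,  0,  0,  0],
    [2, -1,  0,  1,  0,  0],
    [1,  0,  0,  0,  1,  0],
    [0,  1,  0,  0,  0,  1],
])

# change of basis: block diagonal with a 2 x 2 block and a 4 x 4 block
print(P.inv() * B * P)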