12.25

Definition: Suppose \(T \in L(V, V)\). A subspace \(U\) of \(V\) is called invariant under \(T\) if \(T(u) \in U \ \forall u \in U\), that is \(T|_U : U \to U\).

  1. Suppose \(T \in L(V, V)\) and \(U\) is a subspace of \(V\). Prove that:

    • If \(U \subseteq \text{Null}(T)\), then \(U\) is invariant under \(T\).
      Proof:
      Suppose \(u \in U \subseteq \text{Null}(T)\), then \(T(u) = 0 \in U\) (every subspace contains the element \(0\)).
      So \(T(u) \in U \ \forall u \in U\), which implies \(U\) is invariant under \(T\).
    • If \(\text{Range}(T) \subseteq U\), then \(U\) is invariant under \(T\).

    Proof:

    Suppose \(u \in U\). Then \(T(u) \in \text{Range}(T)\) by definition of the range, and \(\text{Range}(T) \subseteq U\) by hypothesis.

    So \(T(u) \in U \ \forall u \in U\), which implies \(U\) is invariant under \(T\).

  2. Suppose \(S, T \in L(V, V)\) are such that \(S \circ T = T \circ S\). Prove that \(\text{Null}(S)\) and \(\text{Range}(S)\) are invariant under \(T\).

    • Null(S):
      Let \(u \in \text{Null}(S)\), so \(S(u) = 0\). We want to show that \(T(u) \in \text{Null}(S)\).
      Note that \(S(T(u)) = S \circ T(u) = T \circ S(u) = T(0) = 0\). Hence, \(T(u) \in \text{Null}(S)\).

    • Range(S):
      Let \(u \in \text{Range}(S)\). Then there exists \(v \in V\) such that \(u = S(v)\).
      \(T(u) = T(S(v)) = S(T(v)) \in \text{Range}(S)\), since \(S(T(v))\) is the image of \(T(v)\) under \(S\).

    So \(T(u) \in \text{Range}(S) \ \forall u \in \text{Range}(S)\).
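
    To see this concretely, here is a small numerical sketch (my own illustration, not part of the exercise): \(S\) is a singular matrix, \(T\) is a polynomial in \(S\) (so \(S\) and \(T\) commute), and we check that \(T\) sends a null-space vector of \(S\) back into \(\text{Null}(S)\) and a range vector back into \(\text{Range}(S)\).

    ```python
    import numpy as np

    # Hypothetical example: S is singular, and T = S^2 + 2S + I commutes with S.
    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])          # rank 1, so Null(S) is nontrivial
    T = S @ S + 2 * S + np.eye(2)       # any polynomial in S commutes with S

    # Null(S) is spanned by (2, -1); check that T maps it back into Null(S).
    u = np.array([2.0, -1.0])
    print(S @ u)                        # ~ (0, 0): u is in Null(S)
    print(S @ (T @ u))                  # ~ (0, 0): T(u) is again in Null(S)

    # Range(S) is spanned by (1, 2); check that T maps a range vector into Range(S).
    w = S @ np.array([1.0, 0.0])        # w = (1, 2) is in Range(S)
    Tw = T @ w
    print(Tw, Tw[1] / Tw[0])            # ratio ~ 2, so T(w) is parallel to (1, 2)
    ```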


Let’s focus on the simplest non-trivial invariant subspaces: those of dimension 1.

Let \(v \in V, v \neq 0\), and consider \(U = \langle v \rangle = \{\lambda v : \lambda \in \mathbb{F}\}\).

So now, if \(T \in \mathcal{L}(V, V)\) and \(U\) is invariant under \(T\), then \(T(u)\in U\ \forall u\in U.\)

In particular, \(T(v) \in U = \langle v \rangle\) \(\Rightarrow T(v)=\lambda v\quad\text{for some }\lambda\in\mathbb{F}.\)

Definition: Suppose \(T \in \mathcal{L}(V, V)\). \(\lambda \in \mathbb{F}\) is called an eigenvalue of \(T\) if \(\exists v \in V, v \neq 0\) such that \(T(v) = \lambda v\).

\(v\) is called an eigenvector of \(T\) corresponding to \(\lambda\).

Note:

\(T(v) = \lambda v \iff T(v) - \lambda v = 0 \iff (T - \lambda I)(v) = 0 \iff v \in \text{Null}(T - \lambda I).\)
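
As a quick computational illustration of this note (my own sketch, with an assumed example matrix): for a map given by a matrix \(A\) and a candidate \(\lambda\), the eigenvectors are exactly the nonzero vectors of \(\text{Null}(A - \lambda I)\), which we can compute numerically.

```python
import numpy as np

# Assumed example: A represents T(x, y) = (2x, 5y), and we test lambda = 2.
A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
lam = 2.0

# Null(A - lam*I) via the SVD: right-singular vectors with ~0 singular value.
M = A - lam * np.eye(2)
_, s, Vt = np.linalg.svd(M)
null_basis = Vt[s < 1e-10]          # rows spanning Null(A - lam*I)
print(null_basis)                   # ~ [[1, 0]] (up to sign): eigenvector (1, 0)

v = null_basis[0]
print(A @ v, lam * v)               # A v = lam * v, as in the note above
```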

  1. Let \(T : \mathbb{R}^2 \to \mathbb{R}^2\), \(T(x, y) = (3y, x)\). Find the eigenvalues of \(T\) and the associated eigenvectors.

    We want \(\lambda \in \mathbb{R}\) such that \(T(x, y) = \lambda (x, y)\) for \((x, y) \neq (0, 0)\):

    \((3y,x)=(\lambda x,\lambda y)\;\iff\;\begin{cases}3y=\lambda x\\ x=\lambda y\end{cases}\;\iff\;\begin{cases}3y=\lambda^2y\\ x=\lambda y\end{cases}\) Note that \(y\neq0\) (otherwise \(x=\lambda y=0\) and \((x,y)=(0,0)\)), so we may divide by \(y\):

    \(\;\iff\;\begin{cases}\lambda^2=3\\ x=\lambda y\end{cases}\;\iff\;\begin{cases}\lambda=\pm\sqrt3\\ x=\pm\sqrt3y\end{cases}\)

    Therefore, the eigenvalues of \(T\) are \(\sqrt{3}\) and \(-\sqrt{3}\), and the associated eigenvector spaces are:

    \(\{\ (\sqrt{3}y, y) : y \in \mathbb{R}, y \neq 0 \ \} = \langle (\sqrt{3}, 1) \rangle \setminus \{(0, 0)\}\)

    \(\{\ (-\sqrt{3}y, y) : y \in \mathbb{R}, y \neq 0 \ \} = \langle (-\sqrt{3}, 1) \rangle \setminus \{(0, 0)\}\) respectively.
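
    As a sanity check (my own addition, using the matrix of \(T\) in the standard basis, whose columns are \(T(e_1)\) and \(T(e_2)\)), numpy recovers the same eigenvalues and eigenvector directions:

    ```python
    import numpy as np

    # Matrix of T(x, y) = (3y, x) in the standard basis.
    A = np.array([[0.0, 3.0],
                  [1.0, 0.0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)                      # ~ [ 1.732, -1.732 ] = [sqrt(3), -sqrt(3)]

    # Columns of `vecs` are normalized eigenvectors; rescale so the second entry is 1.
    for lam, v in zip(vals, vecs.T):
        print(lam, v / v[1])         # ~ (sqrt(3), 1) and (-sqrt(3), 1)
    ```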

  2. Let \(T : \mathbb{R}^3 \to \mathbb{R}^3\), \(T(x, y, z) = (2y, 0, 5z)\).

    Find the eigenvalues of \(T\) and the associated eigenvector spaces.

    We want \(T(x, y, z) = \lambda (x, y, z)\)\(\iff(2y, 0, 5z) = (\lambda x, \lambda y, \lambda z)\)\(\;\iff\;\begin{cases}2y=\lambda x\\ 0=\lambda y\\ 5z=\lambda z\end{cases}\)

    We analyze two cases:


    Case 1: \(\lambda = 0\)

    \(2y = 0 \implies y = 0, \quad 5z = 0 \implies z = 0\)

    So \(\lambda = 0\) is an eigenvalue with the associated eigenvector space: \(\{(x,0,0):x\in\mathbb{R}\setminus\{0\}\}=\langle(1,0,0)\rangle\setminus\left\lbrace\left(0,0,0\right)\right\rbrace\)


    Case 2: \(\lambda \neq 0\)

    Then \(0=\lambda y\) forces \(y=0\), and \(2y=\lambda x\) gives \(\lambda x=0\), so \(x=0\). Since \((x,y,z)\neq(0,0,0)\), we need \(z\neq0\), and \(5z=\lambda z\) then gives \(\lambda=5\).

    So \(\lambda = 5\) is an eigenvalue with the associated eigenvector space: \(\{(0,0,z):z\in\mathbb{R}\setminus\{0\}\}=\langle(0,0,1)\rangle\setminus\left\lbrace\left(0,0,0\right)\right\rbrace\)
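
    Again as a numerical sanity check (my own addition, using the matrix of \(T\) in the standard basis):

    ```python
    import numpy as np

    # Matrix of T(x, y, z) = (2y, 0, 5z) in the standard basis.
    A = np.array([[0.0, 2.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 5.0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)                      # ~ [0, 0, 5]: the eigenvalues 0 and 5 found above
    print(vecs)                      # eigenvector directions along (1, 0, 0) and (0, 0, 1)
    ```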

  3. Show that \(T \in \mathcal{L}(\mathbb{C}^\infty, \mathbb{C}^\infty)\), defined by \(T(z_1, z_2, z_3, \dots) = (0, z_1, z_2, \dots)\) has no eigenvalues.

    Note that \(T(z) = \lambda z \iff (0, z_1, z_2, \dots) = (\lambda z_1, \lambda z_2, \lambda z_3, \dots)\)\(\iff 0 = \lambda z_1, \quad z_1 = \lambda z_2, \quad z_2 = \lambda z_3, \dots\)

    We have two cases:

    • \(\lambda = 0\): Since we have \(z_1 = \lambda z_2 \implies z_1 = 0\), \(z_2 = \lambda z_3 \implies z_2 = 0\), \(\dots\), so \(z_i = 0 \, \forall i \in \mathbb{N}\).

    Then \(\lambda = 0\) is not an eigenvalue, since an eigenvector cannot be \(0\).

    • \(\lambda \neq 0\): From the first equation, \(0 = \lambda z_1\), so \(z_1 = 0\), and the system becomes:

    \(\begin{cases}0=z_1=\lambda z_2\\ z_2=\lambda z_3\\ \ldots\end{cases}\). Since \(\lambda \neq 0\), the first equation gives \(z_2 = 0\): \(\begin{cases}0=z_1\\ 0=z_2=\lambda z_3\\ \ldots\end{cases}\Rightarrow z_3=0\), and we continue in this way for all \(i\).

    Thus, \(z_i = 0 \, \forall i \in \mathbb{N}\). So \(z = 0\).

    Therefore, \(T\) has no eigenvalues (any eigenvector would have to be \(0\)).


    Another way to have no eigenvalues: consider \(T: \mathbb{R}^2 \to \mathbb{R}^2\), \(T(x, y) = (-3y, x)\).

    In the same way as before, we get:

    \(\lambda^2 = -3 \iff \lambda = \pm \sqrt{-3} \quad (\lambda \in \mathbb{C})\)

    So \(T\) has no eigenvalues, because \(\pm\sqrt{-3}\) do not lie in \(\mathbb{R}\) (our field).
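
    Numerically (my own check, via the standard-basis matrix of \(T\)), the eigenvalues come out purely imaginary, so none of them lie in \(\mathbb{R}\):

    ```python
    import numpy as np

    # Matrix of T(x, y) = (-3y, x) in the standard basis.
    A = np.array([[0.0, -3.0],
                  [1.0,  0.0]])

    print(np.linalg.eigvals(A))      # ~ [1.732j, -1.732j] = ±i*sqrt(3), not real
    ```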

  4. Suppose \(S, T \in \mathcal{L}(V, V)\) and \(S\) is invertible.

    • Prove that \(T\) and \(S^{-1} T S\) have the same eigenvalues.

    We have that \(\lambda\) is an eigenvalue of \(S^{-1} T S\):

    \(\iff \exists v \in V, v \neq 0 \text{ such that } S^{-1} T S (v) = \lambda v\)

    \(\iff \exists v \in V, v \neq 0 \text{ such that } T S (v) = S (\lambda v) = \lambda S(v)\)

    \(\iff \exists w \in V, w \neq 0 \text{ such that } T (w) = \lambda w, \quad (w = S(v) \text{ for } v \neq 0).\)

    Since \(S\) is invertible, \(S(v) \neq 0\), and therefore \(\lambda\) is an eigenvalue of \(T\).

    • What is the relationship between the eigenvectors of \(T\) and the eigenvectors of \(S^{-1} T S\)?

    By what we’ve done in exercise (a):

    \(S^{-1} T S (v) = \lambda v \iff T(S(v)) = \lambda S(v)\).

    Therefore, \(v\) is an eigenvector of \(S^{-1} T S\)\(\iff S(v) \text{ is an eigenvector of } T.\)
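
    A quick numerical illustration of both parts (my own sketch, with arbitrarily chosen matrices): \(A\) and \(S^{-1} A S\) have the same eigenvalues, and if \(v\) is an eigenvector of \(S^{-1} A S\), then \(S v\) is an eigenvector of \(A\).

    ```python
    import numpy as np

    # Arbitrary example matrices: A plays the role of T, and S is invertible.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    S = np.array([[1.0, 1.0],
                  [1.0, 2.0]])            # det = 1, so S is invertible
    B = np.linalg.inv(S) @ A @ S          # B = S^{-1} A S

    print(np.sort(np.linalg.eigvals(A)))  # ~ [2, 3]
    print(np.sort(np.linalg.eigvals(B)))  # same eigenvalues, ~ [2, 3]

    vals, vecs = np.linalg.eig(B)
    lam, v = vals[0], vecs[:, 0]          # an eigenpair of B = S^{-1} A S
    w = S @ v                             # then w = S v should be an eigenvector of A
    print(A @ w - lam * w)                # ~ (0, 0)
    ```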

  5. Suppose \(T \in \mathcal{L}(V, V)\) is invertible.

    • Suppose \(\lambda \in \mathbb{F}\), \(\lambda \neq 0\). Prove that \(\lambda\) is an eigenvalue of \(T \iff \frac{1}{\lambda}\) is an eigenvalue of \(T^{-1}\).

    \(\lambda\) is an eigenvalue of \(T\)\(\iff \exists v \in V, v \neq 0 \text{ such that } T(v) = \lambda v\)

    \(\iff \exists v \in V, v \neq 0 \text{ such that } v = T^{-1}(\lambda v) = \lambda T^{-1}(v)\)

    \(\iff \exists v \in V, v \neq 0 \text{ such that } T^{-1}(v) = \frac{1}{\lambda}v\)

    \(\iff \frac{1}{\lambda} \text{ is an eigenvalue of } T^{-1}.\)


    • Prove that \(T\) and \(T^{-1}\) have the same eigenvectors.

    Note that \(\lambda = 0\) is an eigenvalue of neither \(T\) nor \(T^{-1}\), since if \(\lambda = 0\) were an eigenvalue:

    \(\implies \exists v \in V, v \neq 0 \text{ such that } T(v) = 0 \cdot v = 0\)\(\implies v \in \text{Null}(T) = \{0\}, \text{ which is a contradiction.}\)

    Now, if \(\lambda \neq 0\), by exercise (a): \(T(v) = \lambda v \iff T^{-1}(v) = \frac{1}{\lambda} v.\)

    So \(v\) is an eigenvector of \(T\) associated to \(\lambda\)\(\;\iff\;v\text{ is an eigenvector of }T^{-1}\text{ associated to }\frac{1}{\lambda}\)
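
    As a concrete check (my own sketch, with an assumed invertible matrix): the eigenvalues of \(A^{-1}\) are the reciprocals of those of \(A\), with the same eigenvectors.

    ```python
    import numpy as np

    # Assumed invertible example matrix.
    A = np.array([[2.0, 1.0],
                  [0.0, 5.0]])
    A_inv = np.linalg.inv(A)

    print(np.linalg.eigvals(A))          # ~ [2, 5]
    print(np.linalg.eigvals(A_inv))      # ~ [0.5, 0.2] = 1/2 and 1/5

    vals, vecs = np.linalg.eig(A)
    for lam, v in zip(vals, vecs.T):
        # the same v is an eigenvector of A^{-1}, with eigenvalue 1/lam
        print(np.allclose(A_inv @ v, (1 / lam) * v))   # True, True
    ```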

  6. Suppose \(T \in \mathcal{L}(V, V)\) and \(u, v\) are eigenvectors of \(T\) such that \(u + v\) is also an eigenvector of \(T\).

    Prove that \(u\) and \(v\) are eigenvectors of \(T\) associated to the same eigenvalue.

    Note: The reverse implication also holds: if \(T(u) = \lambda u\) and \(T(v) = \lambda v\), then \(T(u + v) = \lambda (u + v)\), so \(u+v\) is also an eigenvector (provided \(u + v \neq 0\)).

    Proof:

    Let \(\lambda_1\) and \(\lambda_2\) be the eigenvalues associated to \(u\) and \(v\) respectively.

    \(\implies T(u) + T(v) = \lambda_1 u + \lambda_2 v\)\(\implies T(u + v) = \lambda_1 u + \lambda_2 v \quad \text{(1)}.\)

    By our hypothesis, \(u + v\) is also an eigenvector of \(T\): let \(\lambda_3\) be the associated eigenvalue.

    \(\implies T(u + v) = \lambda_3 (u + v) \quad \text{(2)}.\)

    By (1) and (2),

    \(\lambda_1 u + \lambda_2 v = \lambda_3 (u + v).\)\(\implies (\lambda_1 - \lambda_3) u + (\lambda_2 - \lambda_3) v = 0 \quad \text{(3)}.\)

    Remember: Eigenvectors of \(T\) corresponding to distinct eigenvalues are always linearly independent.

    Therefore, if \(\lambda_1 \neq \lambda_2\), then \(u\) and \(v\) are linearly independent, and by (3): \(\lambda_1 - \lambda_3 = 0 \quad \text{and} \quad \lambda_2 - \lambda_3 = 0,\) i.e. \(\lambda_1 = \lambda_3 = \lambda_2\), contradicting \(\lambda_1 \neq \lambda_2\).

    So \(\lambda_1 = \lambda_2\), and since \(u + v \neq 0\), (3) then gives \(\lambda_3 = \lambda_1 = \lambda_2\) as well.

    Thus, \(u\) and \(v\) are eigenvectors associated with the same eigenvalue \(\lambda_1 = \lambda_2 = \lambda_3\).
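
    A small numerical illustration of the statement (my own example, with a diagonal map): the sum of eigenvectors with distinct eigenvalues is not an eigenvector, while the sum of eigenvectors sharing an eigenvalue is.

    ```python
    import numpy as np

    # Assumed example: a diagonal map with eigenvalues 2, 2 and 3.
    A = np.diag([2.0, 2.0, 3.0])

    def is_eigenvector(A, w):
        """Check whether the nonzero vector w satisfies A w = lam * w for some lam."""
        lam = (A @ w) @ w / (w @ w)      # candidate lam (Rayleigh quotient)
        return np.allclose(A @ w, lam * w)

    e1, e2, e3 = np.eye(3)
    print(is_eigenvector(A, e1 + e2))    # True: e1 and e2 share the eigenvalue 2
    print(is_eigenvector(A, e1 + e3))    # False: eigenvalues 2 and 3 differ
    ```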