9.12 Matrices tutorial
Linear Systems
A linear system of equations in \(n\) variables is a set of \(m\) equations of the form:
\(a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1\)
\(a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2\)
\(\vdots\)
\(a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m\)
(S)
where the \(a_{ij}\), with \(1 \leq i \leq m\) and \(1 \leq j \leq n\), are the coefficients and the \(b_i\) are the constant terms.
Solution
A solution of the linear system \((S)\) is a collection of \(n\) numbers \(s_1, s_2, \dots, s_n\) such that
\(a_{11} s_1 + a_{12} s_2 + \cdots + a_{1n} s_n = b_1\)
\(a_{21} s_1 + a_{22} s_2 + \cdots + a_{2n} s_n = b_2\)
\(\vdots\)
\(a_{m1} s_1 + a_{m2} s_2 + \cdots + a_{mn} s_n = b_m\)
Example
\(s_1=-1,\ s_2=3\) is a solution of the linear system
\(3x_1 + 2x_2 = 3\)
\(-x_1 + x_2 = 4\)
since \(3(-1) + 2(3) = 3\) and \(-(-1) + 3 = 4\).
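As a quick numerical cross-check (a small sketch assuming NumPy; the variable names are illustrative), we can substitute the candidate solution and compare both sides of each equation:

```python
import numpy as np

# Coefficients and right-hand sides of the 2x2 example system.
A = np.array([[3.0, 2.0],
              [-1.0, 1.0]])
b = np.array([3.0, 4.0])

# Candidate solution s1 = -1, s2 = 3.
s = np.array([-1.0, 3.0])

# A @ s computes a_i1*s1 + a_i2*s2 for each equation i.
print(np.allclose(A @ s, b))  # True, so (s1, s2) = (-1, 3) solves the system
```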
Some definitions
A general solution of a linear system is the set of all the solutions.
Two linear systems are equivalent if they have the same general solution.
A linear system is called consistent if it has at least one solution, and is called inconsistent if it has no solution.
Remark
Given a linear system, exactly one of the following holds: (1) The linear system has a unique solution (consistent). (2) The linear system has infinitely many solutions (consistent). (3) The linear system has no solution (inconsistent).
Given a linear system
\(a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\)
\(a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2\)
\(\vdots\)
\(a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m\)
we associate two matrices, the coefficient matrix: \(\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}_{m \times n}\)
and the augmented matrix: \(\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}_{m \times (n+1)}\)
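In code, these are simply the array of coefficients and the same array with the right-hand sides appended as an extra column. A small sketch with NumPy, using the two-variable example above (variable names are illustrative):

```python
import numpy as np

# Coefficients a_ij and right-hand sides b_i of the system
#   3x1 + 2x2 = 3
#   -x1 +  x2 = 4
coeff = np.array([[3.0, 2.0],
                  [-1.0, 1.0]])        # m x n coefficient matrix
b = np.array([3.0, 4.0])

# Augmented matrix: append b as the (n+1)-th column, giving an m x (n+1) matrix.
augmented = np.column_stack([coeff, b])
print(augmented)
```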
Elementary row operations
Given a matrix, it is possible to perform the following operations called elementary row operations:
- Interchange two rows, \(r_i \leftrightarrow r_j\).
- Multiply a row by a nonzero constant, \(cr_i\) with \(c \neq 0\).
- Add a multiple of one row to another, \(cr_i + r_j\).
Example
\(\begin{pmatrix}6 & 2 & 1 & -1\\ 4 & 2 & 2 & 2\\ -2 & 2 & 1 & 7\end{pmatrix}\xrightarrow{r_1\leftrightarrow r_2}\begin{pmatrix}4 & 2 & 2 & 2\\ 6 & 2 & 1 & -1\\ -2 & 2 & 1 & 7\end{pmatrix}\xrightarrow{\frac12 r_1}\begin{pmatrix}2 & 1 & 1 & 1\\ 6 & 2 & 1 & -1\\ -2 & 2 & 1 & 7\end{pmatrix}\xrightarrow{-3r_1+r_2}\begin{pmatrix}2 & 1 & 1 & 1\\ 0 & -1 & -2 & -4\\ -2 & 2 & 1 & 7\end{pmatrix}\)
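The same three operations can be written as row assignments on an array. A small sketch with NumPy that reproduces the steps above:

```python
import numpy as np

M = np.array([[6.0, 2.0, 1.0, -1.0],
              [4.0, 2.0, 2.0, 2.0],
              [-2.0, 2.0, 1.0, 7.0]])

M[[0, 1]] = M[[1, 0]]    # r1 <-> r2: interchange rows 1 and 2
M[0] = 0.5 * M[0]        # (1/2) r1: multiply row 1 by a nonzero constant
M[1] = -3 * M[0] + M[1]  # -3 r1 + r2: add a multiple of row 1 to row 2

print(M)
# [[ 2.  1.  1.  1.]
#  [ 0. -1. -2. -4.]
#  [-2.  2.  1.  7.]]
```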
Remark
The importance of elementary row operations is that, when applied to the augmented matrix of a linear system, they produce the augmented matrix of an equivalent linear system.
Leading entry
The first nonzero entry in each nonzero row of a matrix is called the leading entry of that row.
Row echelon form
A matrix is said to be in row echelon form if it satisfies: (1) Every row of zeros is at the bottom. (2) The leading entry of each nonzero row lies in a column strictly to the left of the leading entry of the row below it.
Example:
Then \(A\) and \(B\) are in row echelon form but \(C\) is not.
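Since row echelon form is a purely mechanical condition, it can also be tested in code. Below is a small sketch (the helper name and the sample matrices are illustrative choices, not the \(A\), \(B\), \(C\) of the example) that checks the two conditions of the definition:

```python
import numpy as np

def is_row_echelon(M, tol=1e-12):
    """Check the two row echelon conditions: zero rows at the bottom,
    and leading entries moving strictly to the right as we go down."""
    last_lead = -1
    seen_zero_row = False
    for row in np.asarray(M, dtype=float):
        nonzero = np.nonzero(np.abs(row) > tol)[0]
        if nonzero.size == 0:
            seen_zero_row = True   # zero row: everything below must also be zero
            continue
        if seen_zero_row:
            return False           # nonzero row found below a zero row
        lead = nonzero[0]          # column of the leading entry
        if lead <= last_lead:
            return False           # leading entry did not move to the right
        last_lead = lead
    return True

# Illustrative matrices.
print(is_row_echelon([[2, 1, 1], [0, 3, -1], [0, 0, 5]]))  # True
print(is_row_echelon([[1, 2, 0], [0, 0, 4], [0, 0, 0]]))   # True
print(is_row_echelon([[0, 1, 2], [3, 0, 1], [0, 0, 0]]))   # False
```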
Gauss elimination method for solving linear systems
A linear system can be solved by following these steps:
(1) Write the augmented matrix of the linear system.
(2) Use elementary row operations to convert the augmented matrix to a matrix in row echelon form.
(3) Use backward substitution to solve the linear system corresponding to the matrix in row echelon form (which is equivalent to the initial system).
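A minimal sketch of these three steps in code (using NumPy; the function name and structure are illustrative, and partial pivoting is added for numerical stability):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b: build the augmented matrix, reduce it to row echelon
    form with elementary row operations, then back-substitute.
    Assumes the system has a unique solution."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    M = np.column_stack([A, b])               # step 1: augmented matrix

    # Step 2: forward elimination (with partial pivoting for stability).
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # choose a pivot row
        M[[k, p]] = M[[p, k]]                 # interchange rows k and p
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]   # add a multiple of row k to row i

    # Step 3: backward substitution on the row echelon form.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# The 2-variable example from the beginning of the section:
print(gauss_solve([[3, 2], [-1, 1]], [3, 4]))   # [-1.  3.]
```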
Example
Solve the following linear system using the Gauss elimination method.
\(\begin{aligned}x_2 - x_3 &= 3 \\ -2x_1 + 4x_2 - x_3 &= 1 \\ -2x_1 + 5x_2 - 4x_3 &= -2\end{aligned}\)
Solution:
\(\begin{pmatrix} 0 & 1 & -1 & 3 \\ -2 & 4 & -1 & 1 \\ -2 & 5 & -4 & -2 \end{pmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{pmatrix} -2 & 4 & -1 & 1 \\ 0 & 1 & -1 & 3 \\ -2 & 5 & -4 & -2 \end{pmatrix} \xrightarrow{-R_1 + R_3} \begin{pmatrix} -2 & 4 & -1 & 1 \\ 0 & 1 & -1 & 3 \\ 0 & 1 & -3 & -3 \end{pmatrix} \xrightarrow{-R_2 + R_3} \begin{pmatrix} -2 & 4 & -1 & 1 \\ 0 & 1 & -1 & 3 \\ 0 & 0 & -2 & -6 \end{pmatrix}\)
\(\begin{aligned} -2x_1 + 4x_2 - x_3 &= 1 \ (E1) \\ x_2 - x_3 &= 3 \ (E2) \\ -2x_3 &= -6 \ (E3) \end{aligned}\)
From \((E_3)\) we have \(x_3 = 3\). Substituting \(x_3 = 3\) in \((E_2)\) we have \(x_2 - 3 = 3 \implies x_2 = 6\).
Substituting \(x_2 = 6\) and \(x_3 = 3\) in \((E_1)\) we have \(-2x_1 + 24 - 3 = 1 \implies -2x_1 = -20 \implies x_1 = 10\).
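As a quick cross-check (assuming NumPy is available), the library solver returns the same triple:

```python
import numpy as np

A = np.array([[0.0, 1.0, -1.0],
              [-2.0, 4.0, -1.0],
              [-2.0, 5.0, -4.0]])
b = np.array([3.0, 1.0, -2.0])

print(np.linalg.solve(A, b))   # [10.  6.  3.]
```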
One more example
\(\begin{aligned} x_1 + 2x_2 &= -7 \\ -x_1 - x_2 &= 1 \\ 2x_1 + x_2 &= 5 \end{aligned}\)
Solution:
\(\begin{pmatrix} 1 & 2 & -7 \\ -1 & -1 & 1 \\ 2 & 1 & 5 \end{pmatrix} \xrightarrow{R_1 + R_2, \ -2R_1 + R_3} \begin{pmatrix} 1 & 2 & -7 \\ 0 & 1 & -6 \\ 0 & -3 & 19 \end{pmatrix} \xrightarrow{3R_2 + R_3} \begin{pmatrix} 1 & 2 & -7 \\ 0 & 1 & -6 \\ 0 & 0 & 1 \end{pmatrix}\)
From the last row we obtain the equation \(0x_1 + 0x_2 = 1\), i.e. \(0 = 1\), which is impossible, and therefore the linear system has no solution.
Theorem
A linear system is consistent if and only if the row echelon form of its augmented matrix has no row of the form \((0 \ 0 \ \dots \ 0 \ \ c)\) with \(c \neq 0\).
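Computationally, an equivalent criterion is that the coefficient matrix and the augmented matrix have the same rank; a row \((0 \ 0 \ \dots \ 0 \ \ c)\) with \(c \neq 0\) is exactly what makes the rank of the augmented matrix jump. A small sketch with NumPy, applied to the inconsistent system above (the helper name is illustrative):

```python
import numpy as np

def is_consistent(A, b):
    """Consistent iff rank(A) == rank([A | b]); a rank jump corresponds to a
    row (0 0 ... 0 | c) with c != 0 in the row echelon form."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

# The 3-equation, 2-unknown example above:
A = [[1, 2], [-1, -1], [2, 1]]
b = [-7, 1, 5]
print(is_consistent(A, b))   # False: the system has no solution
```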
Upper and lower triangular matrix
A matrix \(A = (a_{ij})_{n \times n}\) is called an upper triangular matrix if \(a_{ij} = 0\) for \(i > j\), that is, if all entries below the main diagonal are 0.
\(A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ 0 & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{pmatrix}\)
A matrix \(A = (a_{ij})_{n \times n}\) is called a lower triangular matrix if \(a_{ij} = 0\) for \(i < j\), that is, if all entries above the main diagonal are 0.
\(A = \begin{pmatrix} a_{11} & 0 & \dots & 0 \\ a_{21} & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}\)
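These definitions translate directly into a comparison of a matrix with its upper or lower triangular part; a small sketch using NumPy's `np.triu` and `np.tril`:

```python
import numpy as np

def is_upper_triangular(A):
    A = np.asarray(A)
    return np.array_equal(A, np.triu(A))   # entries below the diagonal are all 0

def is_lower_triangular(A):
    A = np.asarray(A)
    return np.array_equal(A, np.tril(A))   # entries above the diagonal are all 0

print(is_upper_triangular([[2, -3], [0, 3]]))   # True
print(is_lower_triangular([[2, -3], [0, 3]]))   # False
```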
- Find an upper triangular matrix \(A\) such that \(A^3 = \begin{pmatrix} 8 & -57 \\ 0 & 27 \end{pmatrix}\).
Writing \(A=\begin{pmatrix}a & b\\ 0 & d\end{pmatrix}\), we get \(A^3=\begin{pmatrix}a^3 & b(a^2+ad+d^2)\\ 0 & d^3\end{pmatrix}\), so \(a^3 = 8\), \(d^3 = 27\) and \(b(4+6+9) = -57\), which gives \(a=2\), \(d=3\), \(b=-3\):
\(A=\begin{pmatrix}2 & -3\\ 0 & 3\end{pmatrix}\)
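A quick numerical check of this answer (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2, -3],
              [0, 3]])
print(np.linalg.matrix_power(A, 3))
# [[  8 -57]
#  [  0  27]]
```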
- Let \(A = \begin{pmatrix} -5 & -3 \\ 2 & 1 \end{pmatrix}\). Find \(A^{-1}\), \((A^T)^{-1}\) and \((7A)^{-1}\).
Since \(\det A = (-5)(1) - (-3)(2) = 1\), we have \(A^{-1}=\begin{pmatrix}1 & 3\\ -2 & -5\end{pmatrix}\).
Using \(\left(A^{T}\right)^{-1}=\left(A^{-1}\right)^{T}\) and \((7A)^{-1}=\frac17 A^{-1}\):
\(\left(A^{T}\right)^{-1}=\begin{pmatrix}1 & -2\\ 3 & -5\end{pmatrix}\), \((7A)^{-1}=\begin{pmatrix}\frac17 & \frac37\\ -\frac27 & -\frac57\end{pmatrix}\)
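These answers can be cross-checked numerically with the same two identities (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[-5.0, -3.0],
              [2.0, 1.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                                         # [[ 1.  3.] [-2. -5.]]
print(np.allclose(np.linalg.inv(A.T), A_inv.T))      # True: (A^T)^{-1} = (A^{-1})^T
print(np.allclose(np.linalg.inv(7 * A), A_inv / 7))  # True: (7A)^{-1} = (1/7) A^{-1}
```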
- Let \(A\) and \(B\) be square matrices of the same size. Show that \(AB = BA\) if and only if \((A + B)(A - B) = A^2 - B^2\).
\(\Rightarrow\) Suppose that \(AB = BA\), then:
\((A + B)(A - B) = A^2 - AB + BA - B^2 = A^2 - B^2\)
\(\Leftarrow\) Suppose that \((A + B)(A - B) = A^2 - B^2\), then:
\(A^2 - AB + BA - B^2 = A^2 - B^2\)
\(-AB + BA = 0\)
\(AB = BA\)
- Let \(A\) and \(B\) be square matrices of the same size. Show that \((AB)^T + \left[ A(B^T - B) \right]^T = BA^T\).
\(\begin{aligned}(AB)^T + \left[ A(B^T - B) \right]^T &= B^{T}A^{T}+(B^{T}-B)^{T}A^{T}\\ &= B^{T}A^{T}+BA^{T}-B^{T}A^{T}\\ &= BA^{T}\end{aligned}\)
- A matrix \(A \in M_{n \times n}(\mathbb{K})\) is called orthogonal if \(A^T = A^{-1}\). Prove that the matrix \(A = \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}\) is orthogonal.
\(A^T = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\), and since \(\det A = \cos^2\theta + \sin^2\theta = 1\), \(A^{-1} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\), so \(A^T = A^{-1}\).
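Numerically, the claim can be spot-checked for a sample angle by verifying \(A^T A = I_2\) (a small sketch; the angle is an arbitrary choice):

```python
import numpy as np

theta = 0.7  # any sample angle
A = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))     # True: A^T A = I
print(np.allclose(A.T, np.linalg.inv(A)))  # True: A^T = A^{-1}
```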
- A matrix \(A \in M_{n \times n}(\mathbb{K})\) is called idempotent if \(A^2 = A\). If \(A = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\), prove that \(A\) and \(I_2 - A\) are idempotent.
\(A^2 = \frac{1}{4} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \frac{1}{4} \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = A \checkmark\)
\(I_2-A=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}-\frac12\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}=\frac12\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}\), and \((I_2-A)^2=\frac14\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}=\frac14\begin{pmatrix}2 & -2\\ -2 & 2\end{pmatrix}=\frac12\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}=I_2-A \checkmark\)
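Both claims can also be confirmed numerically (a short sketch assuming NumPy):

```python
import numpy as np

A = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])
B = np.eye(2) - A

print(np.allclose(A @ A, A))   # True: A is idempotent
print(np.allclose(B @ B, B))   # True: I_2 - A is idempotent
```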