Linear independence
We will work with row vectors in \(\mathbb{R}^n\). Consider a set of \(m\) such vectors. If one of these vectors can be expressed as a linear combination of the others, the vectors are said to be linearly dependent; if none of them can be, they are said to be linearly independent. We show that the determinant of a square matrix is zero if its row vectors are linearly dependent (the converse is also true, but its proof is beyond this module).
Definition (Linear dependence)
We say that a finite sequence \(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_m\) of vectors is linearly dependent if there are real numbers \(\lambda_1, \lambda_2, \cdots, \lambda_m\), not all 0 (i.e., at least one is non-zero), such that
\[\sum_{i=1}^{m}\lambda_i\mathbf{v}_i = \mathbf{0}.\]
Why this terminology? Suppose \(\lambda_1 \neq 0\). Then we can rearrange the above equation into
\[\mathbf{v}_1 = -(1/\lambda_1)(\lambda_2\mathbf{v}_2 + \cdots + \lambda_m\mathbf{v}_m).\]
That is, \(\mathbf{v}_1\) can be expressed as a linear combination of the other vectors. In this sense, the vector \(\mathbf{v}_1\) depends on the other vectors.
Example. \((2, -1, 0), (-3, 2, 1),\) and \((7, -4, -1)\) are linearly dependent because
\[2(2, -1, 0) - 1\cdot(-3, 2, 1) -1\cdot(7, -4, -1) = (0, 0, 0).\] □
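As a quick sanity check of this example, here is a minimal numerical sketch (it assumes NumPy is available; the variable names are ours):

```python
import numpy as np

# The three row vectors from the example above.
v1 = np.array([2, -1, 0])
v2 = np.array([-3, 2, 1])
v3 = np.array([7, -4, -1])

# The coefficients 2, -1, -1 witness the linear dependence:
# 2*v1 - v2 - v3 is the zero vector.
print(2 * v1 - v2 - v3)  # [0 0 0]
```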
Definition (Linear independence)
If a finite sequence \(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_m\) of vectors is not linearly dependent, we say that it is linearly independent.
Remark. Therefore, \(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_m\) are linearly independent if and only if
\[\sum_{i=1}^{m}\lambda_i\mathbf{v}_i = \mathbf{0}\]
implies \[\lambda_1 = \lambda_2 = \cdots = \lambda_m = 0.\] □
Example. \(\mathbf{e}_1 = (1, 0)\) and \(\mathbf{e}_2 = (0, 1)\) are linearly independent vectors. (Verify this!) □
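If you prefer to check such claims numerically, one possible sketch (not part of the module, and assuming NumPy) is to stack the vectors as rows of a matrix: the rank equals the number of vectors exactly when the only way to combine them into the zero vector is with all coefficients equal to 0.

```python
import numpy as np

e1 = np.array([1, 0])
e2 = np.array([0, 1])

# Rank 2 (= number of vectors) means the only solution of
# l1*e1 + l2*e2 = 0 is l1 = l2 = 0.
M = np.vstack([e1, e2])
print(np.linalg.matrix_rank(M))  # 2 -> linearly independent
```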
Theorem
Suppose that \(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_m\) are linearly independent. If there is a vector \(\mathbf{u}\) such that
\[\mathbf{u} = \sum_{i=1}^{m}\beta_i \mathbf{v}_i\]
for some \(\beta_1, \beta_2, \cdots, \beta_m\), then this representation is unique.
Remark. What this uniqueness means is the following. If \(\mathbf{u}\) can be represented in terms of another linear combination of \(\mathbf{v}_i\), say,
\[\mathbf{u} = \sum_{i=1}^{m}\gamma_i \mathbf{v}_i,\]
then, we have \(\beta_1 = \gamma_1, \beta_2 = \gamma_2, \cdots, \beta_m = \gamma_m.\) □
Proof. Suppose there is an alternative representation
\[\mathbf{u} = \sum_{i=1}^{m}\gamma_i \mathbf{v}_i.\]
Subtracting this from the original representation, we have
\[\mathbf{0} = \mathbf{u} -\mathbf{u} = \sum_{i=1}^{m}(\beta_i - \gamma_i)\mathbf{v}_i.\]
Since \(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_m\) are linearly independent by assumption, it follows that \(\beta_i - \gamma_i = 0\), and hence \(\beta_i = \gamma_i\) for all \(i = 1, 2, \cdots, m\) as required. ■
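To make the uniqueness concrete, here is a small numerical sketch (the vectors \(\mathbf{v}_1, \mathbf{v}_2\) and the coefficients are made up for illustration): when the \(\mathbf{v}_i\) are linearly independent, the coefficients of \(\mathbf{u}\) are the unique solution of a linear system.

```python
import numpy as np

# Two linearly independent row vectors (made up for illustration).
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, -1.0])
u = 2 * v1 + 5 * v2            # built with coefficients (2, 5)

# u = b1*v1 + b2*v2 is the linear system A @ [b1, b2] = u,
# where the columns of A are v1 and v2.
A = np.column_stack([v1, v2])
beta = np.linalg.solve(A, u)   # unique solution since det(A) != 0
print(beta)                    # [2. 5.]
```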
Linear independence and matrix determinant
Theorem
Let \[ A = \begin{pmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_n \end{pmatrix} \]
be an \(n\times n\) matrix where \(\mathbf{a}_1, \mathbf{a}_2,\cdots, \mathbf{a}_n\) are \(n\)-dimensional row vectors. If these row vectors are linearly dependent, then \(\det A = 0\).
Proof. Since the row vectors are linearly dependent, at least one of them can be expressed as a linear combination of the others. Without loss of generality, assume \(\mathbf{a}_n = \sum_{i=1}^{n-1}\lambda_i\mathbf{a}_i\) for some real numbers \(\lambda_1, \cdots, \lambda_{n-1}\).
Then, using the multilinearity of the determinant in each row, we have \[ \begin{eqnarray} \begin{vmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_{n-1}\\ \mathbf{a}_n \end{vmatrix} &=& \begin{vmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_{n-1}\\ \lambda_1\mathbf{a}_1 + \cdots +\lambda_{n-1}\mathbf{a}_{n-1} \end{vmatrix}\\ &=& \sum_{i=1}^{n-1} \begin{vmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_{n-1}\\ \lambda_i\mathbf{a}_i \end{vmatrix}\\ &=& \sum_{i=1}^{n-1} \lambda_i \begin{vmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_{n-1}\\ \mathbf{a}_i \end{vmatrix}. \end{eqnarray} \]
In the last sum, each determinant has two identical rows (the last row \(\mathbf{a}_i\) duplicates one of \(\mathbf{a}_1,\cdots, \mathbf{a}_{n-1}\)). A determinant with two identical rows is 0, so every term vanishes and \(\det A = 0\). ■
See also: More on determinants for a review of the properties of determinants used in this proof.
Example. Consider the matrix determinant \[ \begin{vmatrix} 2 & -1 & 0\\ -3 & 2 & 1\\ 7 & -4 & -1 \end{vmatrix}. \] We know that the row vectors are linearly dependent (see the example above), and hence, the determinant is zero. □
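A numerical check of this example (a sketch assuming NumPy; floating-point arithmetic returns a value that is zero up to rounding):

```python
import numpy as np

A = np.array([[ 2, -1,  0],
              [-3,  2,  1],
              [ 7, -4, -1]])

# The rows are linearly dependent, so the determinant is 0
# (up to floating-point rounding error).
print(np.linalg.det(A))
```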
Corollary
Let
\[A = \begin{pmatrix} \mathbf{a}_1\\ \mathbf{a}_2\\ \vdots\\ \mathbf{a}_n \end{pmatrix} \]
be an \(n\times n\) matrix where \(\mathbf{a}_1, \mathbf{a}_2,\cdots, \mathbf{a}_n\) are \(n\)-dimensional row vectors. If \(\det A \neq 0\), then these row vectors are linearly independent.
Proof. This is the contrapositive of the above theorem. ■
The converse is also true: if the row vectors are linearly independent, then \(\det A \neq 0\). However, proving this is beyond the scope of this module; see a textbook on Linear Algebra (the proof requires the notion of the rank of a matrix).
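For completeness, a small numerical illustration of the converse (the matrix below is made up; this is a sketch assuming NumPy, not a proof): rows that are linearly independent give full rank and a non-zero determinant.

```python
import numpy as np

# A made-up 3x3 matrix whose rows are linearly independent.
B = np.array([[ 2.0, -1.0, 0.0],
              [-3.0,  2.0, 1.0],
              [ 1.0,  0.0, 4.0]])

# Full rank (rank = n) corresponds to linearly independent rows;
# for a square matrix this goes together with det(B) != 0.
print(np.linalg.matrix_rank(B))  # 3
print(np.linalg.det(B))          # 3.0 (non-zero)
```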