Linear independence

We will work with row vectors in $\mathbb{R}^n$. Consider a set of $m$ vectors. If one of these vectors can be expressed as a linear combination of the others, the vectors are said to be linearly dependent. If none of them can be expressed as a linear combination of the others, they are said to be linearly independent. We show that the determinant of a matrix is zero if its row vectors are linearly dependent (the converse also holds, although we do not prove it here).



Definition (Linear dependence)

We say that a finite sequence $v_1, v_2, \ldots, v_m$ of vectors is linearly dependent if there are real numbers $\lambda_1, \lambda_2, \ldots, \lambda_m$ which are not all 0 (i.e., at least one is non-zero), such that

$$\sum_{i=1}^{m} \lambda_i v_i = 0.$$

Why this terminology? Suppose $\lambda_1 \neq 0$. Then we can rearrange the above equation into

$$v_1 = -\frac{1}{\lambda_1}\left(\lambda_2 v_2 + \cdots + \lambda_m v_m\right).$$

That is, $v_1$ can be expressed as a linear combination of the other vectors. In this sense, the vector $v_1$ depends on the other vectors.

Example. $(2,1,0)$, $(3,2,1)$, and $(7,4,1)$ are linearly dependent because

$$2(2,1,0) + 1(3,2,1) - 1(7,4,1) = (0,0,0). \quad \square$$
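The dependence in this example is easy to check numerically. A minimal sketch (assuming NumPy is available):

```python
import numpy as np

# The three vectors from the example above.
v1 = np.array([2, 1, 0])
v2 = np.array([3, 2, 1])
v3 = np.array([7, 4, 1])

# The non-trivial combination 2*v1 + 1*v2 - 1*v3 yields the zero
# vector, which witnesses the linear dependence.
print(2 * v1 + v2 - v3)  # [0 0 0]
```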

Definition (Linear independence)

If a finite sequence $v_1, v_2, \ldots, v_m$ of vectors is not linearly dependent, we say that it is linearly independent.

Remark. Therefore, $v_1, v_2, \ldots, v_m$ are linearly independent if and only if

$$\sum_{i=1}^{m} \lambda_i v_i = 0$$

implies $\lambda_1 = \lambda_2 = \cdots = \lambda_m = 0$.

Example. $e_1 = (1,0)$ and $e_2 = (0,1)$ are linearly independent vectors. (Verify this!) $\square$
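To see why, note that $\lambda_1 e_1 + \lambda_2 e_2 = (\lambda_1, \lambda_2)$, which is the zero vector only when both coefficients are zero. A quick numerical sketch (assuming NumPy):

```python
import numpy as np

e1 = np.array([1, 0])
e2 = np.array([0, 1])

# Any combination lam1*e1 + lam2*e2 equals (lam1, lam2), so it can
# be the zero vector only when lam1 = lam2 = 0.
lam1, lam2 = 3, -5  # an arbitrary non-trivial choice
print(lam1 * e1 + lam2 * e2)  # [ 3 -5]
```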

Theorem

Suppose that $v_1, v_2, \ldots, v_m$ are linearly independent. If there is a vector $u$ such that

$$u = \sum_{i=1}^{m} \beta_i v_i$$

for some $\beta_1, \beta_2, \ldots, \beta_m$, then this representation is unique.

Remark. What this uniqueness means is the following. If $u$ can also be represented by another linear combination of the $v_i$, say,

$$u = \sum_{i=1}^{m} \gamma_i v_i,$$

then we have $\beta_1 = \gamma_1, \beta_2 = \gamma_2, \ldots, \beta_m = \gamma_m$.

Proof. Suppose there is an alternative representation

$$u = \sum_{i=1}^{m} \gamma_i v_i.$$

Then, subtracting one representation from the other, we have

$$0 = u - u = \sum_{i=1}^{m} (\beta_i - \gamma_i) v_i.$$

Since $v_1, v_2, \ldots, v_m$ are linearly independent by assumption, it follows that $\beta_i - \gamma_i = 0$, and hence $\beta_i = \gamma_i$ for all $i = 1, 2, \ldots, m$, as required. $\blacksquare$
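Uniqueness can be illustrated numerically: with linearly independent vectors, the coefficients representing a given $u$ are the unique solution of a linear system. A sketch with concrete vectors of my own choosing (assuming NumPy):

```python
import numpy as np

# Two linearly independent vectors in R^2 (example values).
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
u = np.array([3.0, 2.0])

# Writing u = b1*v1 + b2*v2 is the linear system B @ beta = u,
# where the columns of B are v1 and v2. Independence makes B
# invertible, so the solution beta is unique.
B = np.column_stack([v1, v2])
beta = np.linalg.solve(B, u)
print(beta)  # [1. 2.]
```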

Linear independence and matrix determinant

Theorem

Let
$$A = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$
be an $n \times n$ matrix where $a_1, a_2, \ldots, a_n$ are $n$-dimensional row vectors. If these row vectors are linearly dependent, then $\det A = 0$.

Proof. Since the row vectors are linearly dependent, at least one of them can be expressed as a linear combination of the others. Without loss of generality, assume $a_n = \sum_{i=1}^{n-1} \lambda_i a_i$ for some real numbers $\lambda_1, \ldots, \lambda_{n-1}$.

Then, by the properties of determinants (linearity in each row), we have
$$\begin{vmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ a_n \end{vmatrix}
= \begin{vmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ \lambda_1 a_1 + \cdots + \lambda_{n-1} a_{n-1} \end{vmatrix}
= \sum_{i=1}^{n-1} \begin{vmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ \lambda_i a_i \end{vmatrix}
= \sum_{i=1}^{n-1} \lambda_i \begin{vmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ a_i \end{vmatrix}.$$

In the last sum, each determinant contains two identical rows (the last row $a_i$ is equal to one of $a_1, \ldots, a_{n-1}$). Therefore, by the properties of determinants, all the terms are 0, and hence $\det A = 0$. $\blacksquare$

See also: Review More on determinants for the properties of determinants.

Example. Consider the matrix determinant
$$\begin{vmatrix} 2 & 1 & 0 \\ 3 & 2 & 1 \\ 7 & 4 & 1 \end{vmatrix}.$$
We know that the row vectors are linearly dependent (see the example above), and hence the determinant is zero. $\square$
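This can be confirmed numerically; the computed value is zero up to floating-point rounding. A minimal sketch (assuming NumPy):

```python
import numpy as np

A = np.array([[2, 1, 0],
              [3, 2, 1],
              [7, 4, 1]], dtype=float)

# Row 3 equals 2*(row 1) + (row 2), so the rows are linearly
# dependent and the determinant vanishes.
print(np.linalg.det(A))  # 0.0 up to rounding error
```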

Corollary 

Let

$$A = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$

be an $n \times n$ matrix where $a_1, a_2, \ldots, a_n$ are $n$-dimensional row vectors. If $\det A \neq 0$, then these row vectors are linearly independent.

Proof. This is the contrapositive of the above theorem. $\blacksquare$

The converse is also true: if the row vectors are linearly independent, then $\det A \neq 0$. However, proving this is beyond the scope of this module (we would need the notion of the rank of a matrix); see a textbook on linear algebra.
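As an illustration of the corollary: if we replace the dependent third row of the earlier example with one that breaks the dependence, the determinant becomes non-zero, so the rows must be linearly independent. A sketch (assuming NumPy; the replacement row is my own choice):

```python
import numpy as np

# Replace the dependent third row (7, 4, 1) with (0, 0, 1).
A = np.array([[2, 1, 0],
              [3, 2, 1],
              [0, 0, 1]], dtype=float)

# det A = 1 != 0, so by the corollary the rows are linearly
# independent.
print(np.linalg.det(A))  # 1.0 (up to rounding)
```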

