Properties of matrix determinants
We study the properties of matrix determinants. These properties can be exploited to (sometimes greatly) simplify the computation of determinants.
Before studying the properties of determinants, we introduce the important notion of linearity.
Definition (Linear function)
Let $V$ and $W$ be vector spaces, both over the field $K$. The function $f: V \to W$ is said to be linear if the following hold.
- For all $u, v \in V$, $f(u + v) = f(u) + f(v)$,
- For all $a \in K$ and $v \in V$, $f(av) = af(v)$.
Remark. These two conditions can be summarized as
$$f(au + bv) = af(u) + bf(v) \quad \text{for all } a, b \in K,\; u, v \in V.$$
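As a quick numerical illustration (not part of the notes; it assumes NumPy and the example map $f(\mathbf{x}) = A\mathbf{x}$, which is linear), we can check both conditions, and their combined form, on random vectors:

```python
import numpy as np

# Sanity check (illustrative only): the map f(x) = A x is linear.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
f = lambda x: A @ x

u, v = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.0, -1.5

additive = np.allclose(f(u + v), f(u) + f(v))        # f(u + v) = f(u) + f(v)
homogeneous = np.allclose(f(a * u), a * f(u))        # f(a u) = a f(u)
combined = np.allclose(f(a * u + b * v), a * f(u) + b * f(v))
print(additive, homogeneous, combined)
```

All three checks agree up to floating-point tolerance.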
Matrix determinants are linear functions of each of the row (or column) vectors; since they are linear in each of several arguments, they are called multilinear.
Lemma (Determinants are linear)
Let
$$A = \begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_n \end{pmatrix}$$
be an $n \times n$ matrix such that each $\mathbf{a}_i$ is an $n$-dimensional row vector.
- If $\mathbf{a}_i = c\mathbf{b}_i$ for a particular $i$, we have
$$\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ c\mathbf{b}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix} = c\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{b}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix}.$$
- If $\mathbf{a}_i = \mathbf{b}_i + \mathbf{c}_i$, then
$$\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{b}_i + \mathbf{c}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix} = \det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{b}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix} + \det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{c}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix}.$$
Proof.
- By the definition of determinant, noting that each term of the expansion contains exactly one factor from the $i$-th row, $\mathbf{a}_i = c\mathbf{b}_i$ implies that every term, and hence the whole sum, is multiplied by $c$.
- (exercise).
■
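A quick numerical check of both parts of the lemma (illustrative only, using NumPy's determinant rather than our definition):

```python
import numpy as np

# Check linearity of det in a single row (row index i is arbitrary).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
i, c = 2, 3.0

# Part 1: scaling row i scales the determinant by c.
Ac = A.copy()
Ac[i] *= c
scale_ok = np.isclose(np.linalg.det(Ac), c * np.linalg.det(A))

# Part 2: splitting row i as b + d splits the determinant additively.
b, d = rng.standard_normal(4), rng.standard_normal(4)
Ab, Ad, Abd = A.copy(), A.copy(), A.copy()
Ab[i], Ad[i], Abd[i] = b, d, b + d
sum_ok = np.isclose(np.linalg.det(Abd), np.linalg.det(Ab) + np.linalg.det(Ad))
print(scale_ok, sum_ok)
```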
Lemma (Anti-symmetry of determinants)
- If two rows are swapped, the sign of a determinant changes:
$$\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_j \\ \vdots \\ \mathbf{a}_i \\ \vdots \\ \mathbf{a}_n \end{pmatrix} = -\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_i \\ \vdots \\ \mathbf{a}_j \\ \vdots \\ \mathbf{a}_n \end{pmatrix}.$$
- If two rows are the same, the determinant is 0. That is, if $\mathbf{a}_i = \mathbf{a}_j$ for some $i \neq j$, then
$$\det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_n \end{pmatrix} = 0.$$
Proof.
1. We prove this by mathematical induction on $n$. If $n = 1$, there is nothing to prove (no rows to be swapped). If $n = 2$,
$$\begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - da = -(ad - bc) = -\begin{vmatrix} a & b \\ c & d \end{vmatrix},$$
and so the claim holds. Suppose the claim holds for $(n-1) \times (n-1)$ matrices. Now consider an $n \times n$ matrix $A$ and let $A'$ be the matrix that is the same as $A$ except that its $i$-th and $j$-th rows are swapped. By the definition of determinant,
$$\det A = \sum_{k=1}^{n} (-1)^{1+k} a_{1k} \det A_{1k},$$
where each $A_{1k}$ is an $(n-1) \times (n-1)$ matrix obtained by removing the first row and $k$-th column of $A$. We may assume $1 < i < j$. If we swap the $i$-th and $j$-th rows of $A$, then the corresponding rows in each of $A_{11}, \ldots, A_{1n}$ are swapped, which we denote by $A'_{11}, \ldots, A'_{1n}$. By the inductive hypothesis, we have
$$\det A'_{1k} = -\det A_{1k} \quad (k = 1, \ldots, n),$$
so that
$$\det A' = \sum_{k=1}^{n} (-1)^{1+k} a_{1k} \det A'_{1k} = -\sum_{k=1}^{n} (-1)^{1+k} a_{1k} \det A_{1k} = -\det A,$$
which completes the proof.
2. Suppose that two rows are identical, say, $\mathbf{a}_i = \mathbf{a}_j$ with $i \neq j$. Then swapping these rows does not change the determinant. On the other hand, by Part 1, the determinants before and after the swap should have opposite signs. Thus, this determinant must be 0.
■
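Both parts of this lemma can be checked numerically as well (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Part 1: swapping two rows flips the sign of the determinant.
S = A.copy()
S[[1, 3]] = S[[3, 1]]  # swap rows 1 and 3
swap_ok = np.isclose(np.linalg.det(S), -np.linalg.det(A))

# Part 2: a repeated row forces the determinant to be 0.
R = A.copy()
R[3] = R[1]  # duplicate row 1 into row 3
repeat_ok = np.isclose(np.linalg.det(R), 0.0)
print(swap_ok, repeat_ok)
```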
Let us summarize the properties of determinants.
- $\det$ is a function $K^{n \times n} \to K$. It can also be considered as a function $(K^n)^n \to K$, where each argument corresponds to a row (or column) vector of the matrix.
- $\det$ is a multilinear function. This means that it is linear in each row (or column), as we saw in the above lemma.
- Swapping the values of two arguments (i.e., rows) changes the sign of the value of the function.
- If two arguments (i.e., rows) take the same value, then the value of the function is 0.
Now let us have a look at the determinant of the $n \times n$ matrix $A = (a_{ij})$ again,
$$\det A = \det\begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_n \end{pmatrix}.$$
If we define the following unit row vectors
$$\mathbf{e}_1 = (1, 0, \ldots, 0), \quad \mathbf{e}_2 = (0, 1, \ldots, 0), \quad \ldots, \quad \mathbf{e}_n = (0, 0, \ldots, 1),$$
then we can write, for example, the first row as
$$\mathbf{a}_1 = a_{11}\mathbf{e}_1 + a_{12}\mathbf{e}_2 + \cdots + a_{1n}\mathbf{e}_n,$$
and so on. Therefore, by applying the above lemmas repeatedly, we have
$$\det A = \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_n=1}^{n} a_{1j_1} a_{2j_2} \cdots a_{nj_n} \det\begin{pmatrix} \mathbf{e}_{j_1} \\ \vdots \\ \mathbf{e}_{j_n} \end{pmatrix}.$$
If any two of the indices $j_1, \ldots, j_n$ coincide, the corresponding determinant has two identical rows and hence vanishes, so only the terms in which $(j_1, \ldots, j_n)$ is a permutation of $(1, \ldots, n)$ survive. Note the order of the unit vectors in each term: they cover all the permutations. We can swap the rows in each term so that it becomes the identity matrix (with an appropriate change of signs), and note that $\det I = 1$, and we obtain the familiar result:
$$\det A = \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$$
where the sum is over all permutations $\sigma$ of $(1, \ldots, n)$ and $\operatorname{sgn}(\sigma) = \pm 1$ is the sign of $\sigma$.
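The permutation expansion of the determinant can be implemented directly and compared against a library routine. This is a sketch for illustration (the helper names `sgn` and `det_leibniz` are ours, not from the notes), and it is exponential in $n$, so it is only practical for small matrices:

```python
import numpy as np
from itertools import permutations

def sgn(p):
    """Sign of a permutation (given as a tuple) via inversion counting."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    """Sum over all permutations sigma of sgn(sigma) * prod_i A[i, sigma(i)]."""
    n = A.shape[0]
    return sum(sgn(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
leibniz_ok = np.isclose(det_leibniz(A), np.linalg.det(A))
print(leibniz_ok)
```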
Theorem (The determinant of a product is the product of determinants)
Let $A, B \in K^{n \times n}$. Then
$$\det(AB) = \det A \det B.$$
Proof. Let $C = AB$ and let $\mathbf{c}_i$ denote the $i$-th row of $C$ and $\mathbf{b}_i$ the $i$-th row of $B$. Also, let $\mathbf{e}_i$ denote the $i$-th row of the identity matrix $I$. Then
$$\mathbf{c}_i = \sum_{j=1}^{n} a_{ij} \mathbf{b}_j,$$
so, expanding by multilinearity exactly as we did above with the unit vectors $\mathbf{e}_j$, we can write
$$\det(AB) = \sum_{\sigma} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)} \det\begin{pmatrix} \mathbf{b}_{\sigma(1)} \\ \vdots \\ \mathbf{b}_{\sigma(n)} \end{pmatrix},$$
where the summation is over all permutations $\sigma$ (terms with a repeated index vanish because two rows coincide). Rearranging the order of $\mathbf{b}_{\sigma(1)}, \ldots, \mathbf{b}_{\sigma(n)}$ to $\mathbf{b}_1, \ldots, \mathbf{b}_n$ changes the sign by $\operatorname{sgn}(\sigma)$:
$$\det\begin{pmatrix} \mathbf{b}_{\sigma(1)} \\ \vdots \\ \mathbf{b}_{\sigma(n)} \end{pmatrix} = \operatorname{sgn}(\sigma) \det B.$$
Now, noting that
$$\det A = \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$$
we have
$$\det(AB) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)} \det B = \det A \det B.$$
■
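A numerical spot check of the theorem (illustrative only):

```python
import numpy as np

# Check det(AB) = det(A) det(B) on random 5x5 matrices.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
product_ok = np.isclose(np.linalg.det(A @ B),
                        np.linalg.det(A) * np.linalg.det(B))
print(product_ok)
```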
Remark. In the above proof, equalities such as
$$\det\begin{pmatrix} \mathbf{b}_{\sigma(1)} \\ \vdots \\ \mathbf{b}_{\sigma(n)} \end{pmatrix} = \operatorname{sgn}(\sigma) \det B$$
assume the following facts:
- The permutation from $(1, 2, \ldots, n)$ to $(\sigma(1), \sigma(2), \ldots, \sigma(n))$ can be achieved by swapping two entries finitely many times.
- The number of such swap operations is not unique (e.g., you can swap the same two elements 0, 2, 4, or any even number of times to obtain the same permutation), but its parity (whether the number of swaps is odd or even) is unique. If the parity is odd, $\operatorname{sgn}(\sigma) = -1$; if the parity is even, $\operatorname{sgn}(\sigma) = +1$.
□
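The parity claim can be illustrated in code: two quite different procedures for counting swaps generally give different counts, yet always agree in parity. This sketch (function names are ours) compares a swap-sorting count with an inversion count over all permutations of a small set:

```python
from itertools import permutations

def sign_by_sorting(p):
    """Sort p with explicit transpositions (cycle sort) and count them.
    The count depends on the procedure, but its parity does not."""
    p, swaps = list(p), 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            swaps += 1
    return -1 if swaps % 2 else 1

def sign_by_inversions(p):
    """Sign via the number of inversions (pairs out of order)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

# The two methods agree on the sign for every permutation of {0, 1, 2, 3}.
all_agree = all(sign_by_sorting(p) == sign_by_inversions(p)
                for p in permutations(range(4)))
print(all_agree)
```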
See also: Permutation group (Wikipedia)
Theorem (Determinant of transposed matrix)
For any square matrix $A$, we have
$$\det(A^\top) = \det A.$$
Proof. Exercise. ■
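As a hint that the exercise is worth attempting, a numerical check (illustrative only):

```python
import numpy as np

# Check det(A^T) = det(A) on a random 4x4 matrix.
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
transpose_ok = np.isclose(np.linalg.det(A.T), np.linalg.det(A))
print(transpose_ok)
```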
Definition (Adjugate matrix)
Let $A = (a_{ij})$ be an $n \times n$ matrix. We define the adjugate matrix $\tilde{A} = (\tilde{a}_{ij})$ of $A$ by
$$\tilde{a}_{ij} = (-1)^{i+j} \det A_{ji},$$
where $A_{ji}$ is the matrix obtained by removing the $j$-th row and the $i$-th column from $A$ (note the transposed indices).
Example. For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, its adjugate matrix is
$$\tilde{A} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
□
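The definition translates directly into code. This is an illustrative sketch (the function name `adjugate` is ours); it checks the $2 \times 2$ example above on the matrix $\begin{pmatrix}1 & 2 \\ 3 & 4\end{pmatrix}$:

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: entry (i, j) is (-1)**(i+j) times the
    determinant of A with row j and column i removed (transposed indices)."""
    n = A.shape[0]
    adj = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# For [[a, b], [c, d]] the adjugate is [[d, -b], [-c, a]].
adj_ok = np.allclose(adjugate(A), np.array([[4.0, -2.0], [-3.0, 1.0]]))
print(adj_ok)
```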
Theorem
Let $A$ be an $n \times n$ matrix and $\tilde{A}$ its adjugate.
- $A\tilde{A} = \tilde{A}A = (\det A) I$.
- $A$ has a right inverse if and only if $\det A \neq 0$.
- If $A$ has a right inverse $B$, then $B$ is also a left inverse of $A$, and $B$ is also the unique inverse on either side.
Proof.
- Let $C = A\tilde{A} = (c_{ij})$. Then
$$c_{ij} = \sum_{k=1}^{n} a_{ik} \tilde{a}_{kj} = \sum_{k=1}^{n} (-1)^{j+k} a_{ik} \det A_{jk}.$$
If $i = j$, then this is the cofactor expansion of $\det A$ along the $i$-th row, so $c_{ii} = \det A$. If $i \neq j$, this can be regarded as a determinant of some matrix. That matrix is almost the same as $A$ except that the $j$-th row is replaced with the $i$-th row of $A$, so that it has two rows with the same value. Hence this determinant is 0. Thus,
$$c_{ij} = \begin{cases} \det A & (i = j), \\ 0 & (i \neq j). \end{cases}$$
That is, $A\tilde{A} = (\det A) I$. The case for $\tilde{A}A$ is similar.
- Let the right inverse of $A$ be $B$. Then we have $AB = I$. Thus $\det A \det B = \det(AB) = \det I = 1$, and therefore $\det A \neq 0$. Conversely, if $\det A \neq 0$, then $B = \frac{1}{\det A}\tilde{A}$ is a right inverse of $A$ by Part 1.
- Suppose $AB = I$. Multiplying both sides by $\frac{1}{\det A}\tilde{A}$ from the left yields $B = \frac{1}{\det A}\tilde{A}$, which is also a left inverse of $A$ by Part 1. Hence every right inverse equals $\frac{1}{\det A}\tilde{A}$, so the inverse is unique. The case for left inverses is similar.
■
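Part 1 and the resulting inverse formula can be verified numerically. This sketch recomputes the adjugate from cofactors (the helper name `adjugate` is ours) and checks $A\tilde{A} = \tilde{A}A = (\det A) I$ and $A \cdot \frac{1}{\det A}\tilde{A} = I$ on a random matrix:

```python
import numpy as np

def adjugate(A):
    """Cofactor-based adjugate: entry (i, j) removes row j and column i."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, j, 0), i, 1))
                      for j in range(n)] for i in range(n)])

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)

identity_ok = (np.allclose(A @ adjugate(A), d * np.eye(3))
               and np.allclose(adjugate(A) @ A, d * np.eye(3)))
inverse_ok = np.allclose(A @ (adjugate(A) / d), np.eye(3))
print(identity_ok, inverse_ok)
```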
The unique inverse of $A$ is denoted by $A^{-1}$.
Theorem
Let $A, B$ be $n \times n$ matrices. If their inverses exist, then
$$(AB)^{-1} = B^{-1} A^{-1}.$$
Proof.
$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I.$$
Similarly, $(B^{-1}A^{-1})(AB) = I$. ■
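A final numerical spot check (illustrative only; random Gaussian matrices are invertible with probability 1):

```python
import numpy as np

# Check (AB)^{-1} = B^{-1} A^{-1} on random 4x4 matrices.
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
inv_product_ok = np.allclose(np.linalg.inv(A @ B),
                             np.linalg.inv(B) @ np.linalg.inv(A))
print(inv_product_ok)
```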