
In this chapter, we will introduce the determinant, which is a number associated with a square matrix. The determinant has many important uses. One such use is that the system of $n$ linear equations $\mx{A}\vc{x}=\vc{b}$ in $n$ unknowns has one and only one solution if and only if the determinant of $\mx{A}$ is nonzero. There are many other uses and interpretations of the determinant, as we will see later on in the chapter.

Example 7.1:
The Fundamental Matrix

In Interactive Illustration 5.2, we saw a 3D reconstruction of a number of so called feature points from a set of images. The problem of calculating both the 3D points and, at the same time, how the camera has moved between the images is an interesting one. A first step towards solving it is understanding how to solve the problem for just two images. One technique involves the so called *fundamental matrix* that relates corresponding points in two images. The determinant is used to derive this constraint. Assume that two different images of the same scene are given. A point in the left image with coordinates $(x_1,y_1)$ and a point in the right image with coordinates $(x_2,y_2)$ can only be in correspondence if

\begin{equation} \vc{u}_1^\T \mx{F} \vc{u}_2 = 0, \end{equation} | (7.1) |

where

\begin{equation} \vc{u}_1 = \begin{pmatrix} x_1\\ y_1\\ 1 \end{pmatrix} \ \ \mathrm{and}\ \ \vc{u}_2 = \begin{pmatrix} x_2\\ y_2\\ 1 \end{pmatrix} \end{equation} | (7.2) |

and $\mx{F}$ is a $3 \times 3$ matrix related to the *relative motion* between the two camera positions.
In the example figure, $\mx{F}= \left(\begin{array}{ccc} 1.055 & 58.10 & -58.96 \\ 58.60 & -0.044 & -336.0 \\ -55.52 & 228.3 & 110.5 \end{array}\right)$.
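As a sketch of how the constraint (7.1) is evaluated in practice, the left-hand side can be computed directly. The matrix $\mx{F}$ below is the one from the example figure; the point coordinates in the last line are made-up values, used only to illustrate the computation:

```python
# Evaluating the epipolar constraint u1^T F u2 = 0 from Eq. (7.1).
# F is the example matrix from the text; the image points below are
# hypothetical values, chosen only to illustrate the computation.

F = [[  1.055,  58.10,   -58.96],
     [ 58.60,   -0.044, -336.0 ],
     [-55.52,  228.3,    110.5 ]]

def epipolar_residual(x1, y1, x2, y2, F):
    """Return u1^T F u2 with u = (x, y, 1) in homogeneous coordinates."""
    u1 = [x1, y1, 1.0]
    u2 = [x2, y2, 1.0]
    Fu2 = [sum(F[i][j] * u2[j] for j in range(3)) for i in range(3)]
    return sum(u1[i] * Fu2[i] for i in range(3))

# Two points can be in correspondence only if the residual is (close to) zero.
r = epipolar_residual(100.0, 120.0, 90.0, 130.0, F)
```

In a real system, the residual is compared against a small threshold rather than exact zero, since measured image coordinates are noisy.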


The determinant is a function from a matrix $\mx{A}$ to a value $\det(\mx{A})$. We will later see that the determinant can be defined using three simple properties. However, before delving into the details, we will first look at explicit expressions for the determinant in dimensions one, two, and three. For $1 \times 1$ matrices

\begin{equation} \mx{A} = \left( \begin{array}{r} a \end{array} \right) \end{equation} | (7.3) |

the determinant is simply

\begin{equation} \det(\mx{A}) = a . \end{equation} | (7.4) |

For a $2 \times 2$ matrix

\begin{equation} \mx{A} = \left( \begin{array}{rr} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array} \right) \end{equation} | (7.5) |

the determinant is defined as

\begin{equation} \det(\mx{A}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21} . \end{equation} | (7.6) |

The matrix can also be written in terms of its two column vectors,

\begin{equation} \mx{A} = \begin{pmatrix} \vc{a}_{,1} & \vc{a}_{,2} \end{pmatrix} \end{equation} | (7.7) |
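The closed-form expression (7.6) translates directly into code; a minimal sketch:

```python
def det2(a11, a12, a21, a22):
    """Determinant of a 2x2 matrix with entries a11, a12, a21, a22, Eq. (7.6)."""
    return a11 * a22 - a12 * a21

# For the identity matrix the determinant is 1, and swapping
# the two columns flips the sign of the determinant.
```

For example, `det2(1, 0, 0, 1)` returns `1`, while `det2(0, 1, 1, 0)` returns `-1`.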


For a $3 \times 3$ matrix

\begin{equation} \mx{A} = \left( \begin{array}{rrr} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{array} \right) \end{equation} | (7.8) |

the corresponding column vector form is

\begin{equation} \mx{A} = \left( \begin{array}{rrr} \vc{a}_{,1} & \vc{a}_{,2} & \vc{a}_{,3} \end{array} \right) \end{equation} | (7.9) |

In the following, we relax our notation a little, so that a column vector $\vc{a}_{,i}$ of a matrix $\mx{A}$ can also be denoted $\vc{a}_i$. There are several different ways one could define the determinant. Which definition one uses is a technicality; the end result is the same. Here we define the determinant for a general $n \times n$ matrix $\mx{A}$ using three basic properties, according to the following definition.

Definition 7.1:
Determinant

The determinant $\det$ is a scalar function of a square matrix $\mx{A}$ fulfilling the following three properties:

It turns out that there is one and only one function that fulfills these three properties.
The proof of the existence and uniqueness of the determinant is a bit
technical and is of less importance than the properties of the determinant.
In order not to obscure the view we leave these proofs for Section 7.3.
From these three basic properties follow many other useful properties, which we
summarize in Theorem 7.1.
In summary, the three defining properties are:

\begin{equation} \begin{array}{llr} (i) & \det( \mx{I}) = 1 & \spc\text{(determinant of unit matrix)} \\ (ii) & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_i & \ldots \end{vmatrix} = 0 & \spc\text{(zero if two columns are equal)} \\ (iii) & \begin{vmatrix} \ldots & \lambda_1 \vc{a}_1 + \lambda_2 \vc{a}_2 & \ldots \end{vmatrix} = \lambda_1 \begin{vmatrix} \ldots & \vc{a}_1 & \ldots \end{vmatrix} + \lambda_2 \begin{vmatrix} \ldots & \vc{a}_2 & \ldots \end{vmatrix} & \spc\text{(linear in each column)} \\ \end{array} \end{equation} | (7.10) |

Theorem 7.1:
Properties of the determinant

\begin{equation} \begin{array}{llr} (iv) & \begin{vmatrix} \ldots & \vc{0} & \ldots \end{vmatrix} = 0 & \spc\text{(zero if one column is zero)} \\ (v) & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} = - \begin{vmatrix} \ldots & \vc{a}_j & \ldots & \vc{a}_i & \ldots \end{vmatrix} & \spc\text{(swapping columns)} \\ (vi) & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} = \begin{vmatrix} \ldots & \vc{a}_i+\lambda \vc{a}_j & \ldots & \vc{a}_j & \ldots \end{vmatrix} & \spc\text{(adding to another column)} \\ (vii) & \det( \mx{A}) = \det( \mx{A}^\T) & \spc\text{(transpose)} \\ (viii) & \det( \mx{A} \mx{B}) = \det( \mx{A}) \det( \mx{B}) & \spc\text{(product)} \\ (ix) & \det( \mx{A}^{-1}) = \frac{1}{\det( \mx{A})} & \spc\text{(inverse)} \\ \end{array} \end{equation} | (7.11) |

Property $(iv)$ follows from the linearity (property $(iii)$ in the definition). Setting $\lambda_2 = 0$ in property $(iii)$ gives

\begin{equation} \begin{vmatrix} \ldots & \lambda_1 \vc{a}_1 & \ldots \end{vmatrix} = \lambda_1 \begin{vmatrix} \ldots & \vc{a}_1 & \ldots \end{vmatrix} \end{equation} | (7.12) |

and taking $\lambda_1 = 0$ then shows that

\begin{equation} \begin{vmatrix} \ldots & \vc{0} & \ldots \end{vmatrix} = \begin{vmatrix} \ldots & 0\ \vc{a}_1 & \ldots \end{vmatrix} = 0 \ \begin{vmatrix} \ldots & \vc{a}_1 & \ldots \end{vmatrix} = 0. \end{equation} | (7.13) |

To show property $(v)$, study the determinant

\begin{equation} \begin{vmatrix} \ldots & \vc{a}_i + \vc{a}_j & \ldots & \vc{a}_i + \vc{a}_j & \ldots \end{vmatrix} , \end{equation} | (7.14) |

which is zero according to property $(ii)$, since two columns are equal. Expanding it using the linearity (property $(iii)$) gives

\begin{equation} \begin{array}{ll} 0 = & \begin{vmatrix} \ldots & \vc{a}_i + \vc{a}_j & \ldots & \vc{a}_i + \vc{a}_j & \ldots \end{vmatrix} = \\ & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_i & \ldots \end{vmatrix} + \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} + \\ + & \begin{vmatrix} \ldots & \vc{a}_j & \ldots & \vc{a}_i & \ldots \end{vmatrix} + \begin{vmatrix} \ldots & \vc{a}_j & \ldots & \vc{a}_j & \ldots \end{vmatrix} . \end{array} \end{equation} | (7.15) |

The first and the last of these four determinants vanish, since they have two equal columns, and we are left with

\begin{equation} 0 = \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} + \begin{vmatrix} \ldots & \vc{a}_j & \ldots & \vc{a}_i & \ldots \end{vmatrix} , \end{equation} | (7.16) |

which is property $(v)$. Property $(vi)$ also follows from the linearity together with property $(ii)$:

\begin{equation} \begin{array}{ll} & \begin{vmatrix} \ldots & \vc{a}_i + \lambda \vc{a}_j & \ldots & \vc{a}_j & \ldots \end{vmatrix} = \\ & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} + \lambda \begin{vmatrix} \ldots & \vc{a}_j & \ldots & \vc{a}_j & \ldots \end{vmatrix} = \\ & \begin{vmatrix} \ldots & \vc{a}_i & \ldots & \vc{a}_j & \ldots \end{vmatrix} \end{array} , \end{equation} | (7.17) |

where the middle determinant vanishes because it has two equal columns.

$\square$

On the surface, statement $(vi)$ can seem counterintuitive, since the determinant is connected to the area or volume spanned by the column vectors, and adding something to a column vector can make it longer. How come the area stays the same even though one of the vectors becomes longer? The reason is illustrated in Interactive Illustration 7.4.

Interactive Illustration 7.4: The column vector $\vc{a}_{,1}$ of $\mx{A}$ is replaced by $\vc{a}_{,1} + \lambda \vc{a}_{,2}$, forming a new matrix $\mx{A'}$. The parallelogram is sheared as $\lambda$ changes, but its area is unchanged, so $\det(\mx{A'}) = \det(\mx{A})$.

Example 7.2:

What is the determinant of

\begin{equation} \begin{vmatrix} 1 & 23 & 2 \\ 5 & 12 & 10 \\ 4 & 72 & 8 \end{vmatrix} ? \end{equation} | (7.18) |

Notice that the third column is two times $(1,5,4)$, which is precisely the first column. Using the linearity on column three, we have

\begin{equation} \begin{vmatrix} 1 & 23 & 2 \\ 5 & 12 & 10 \\ 4 & 72 & 8 \end{vmatrix} = 2 \begin{vmatrix} 1 & 23 & 1 \\ 5 & 12 & 5 \\ 4 & 72 & 4 \end{vmatrix} . \end{equation} | (7.19) |

Now, since the last determinant has two equal columns, we see that

\begin{equation} \begin{vmatrix} 1 & 23 & 2 \\ 5 & 12 & 10 \\ 4 & 72 & 8 \end{vmatrix} = 2 \begin{vmatrix} 1 & 23 & 1 \\ 5 & 12 & 5 \\ 4 & 72 & 4 \end{vmatrix} = 0 . \end{equation} | (7.20) |

This subsection contains the proof that there actually is a function fulfilling the properties in Definition 7.1, that there is only one such function, and also a formula for calculating the determinant. To complete this proof, we need to introduce the notion of a permutation of $n$ numbers. The main idea of permutations is quite simple and is best exemplified by using properties $(i)$-$(vi)$ on a general $2 \times 2$ matrix.

Example 7.3:

We study the determinant of a general $2 \times 2$ matrix

\begin{equation} \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} . \end{equation} | (7.21) |

Since the first column can be written as the following linear combination

\begin{equation} \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} = a_{11} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + a_{21} \begin{bmatrix} 0 \\ 1\end{bmatrix} \end{equation} | (7.22) |

it follows from the linearity of the determinant in the first column that

\begin{equation} \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} = a_{11} \begin{vmatrix} 1 & a_{12}\\ 0 & a_{22} \end{vmatrix} + a_{21} \begin{vmatrix} 0 & a_{12}\\ 1 & a_{22} \end{vmatrix} . \end{equation} | (7.23) |

Using the analogous trick on the second column gives

\begin{equation} \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} = a_{11} \left (a_{12} \underbrace{\begin{vmatrix} 1 & 1\\ 0 & 0 \end{vmatrix}}_{0} + a_{22}\underbrace{\begin{vmatrix} 1 & 0\\ 0 & 1 \end{vmatrix}}_{1} \right )+ a_{21} \left( a_{12}\underbrace{\begin{vmatrix} 0 & 1\\ 1 & 0 \end{vmatrix}}_{-1} + a_{22} \underbrace{\begin{vmatrix} 0 & 0\\ 1& 1 \end{vmatrix}}_{0} \right). \end{equation} | (7.24) |

In the above equation, the first and the last determinants are zero because they have two identical columns.
The second determinant is $1$ according to property $(i)$ of the definition. The third determinant is $-1$ since

\begin{equation} \begin{vmatrix} 0 & 1\\ 1 & 0 \end{vmatrix} = - \begin{vmatrix} 1 & 0\\ 0 & 1 \end{vmatrix} = -1 , \end{equation} | (7.25) |

where we used property $(v)$ to swap two columns and then property $(i)$.
Thus we have shown that the definition of the determinant gives

\begin{equation} \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{21} a_{12} , \end{equation} | (7.26) |

which coincides with our earlier definition.

In this example, we calculated a $2 \times 2$ determinant by expanding each column.
By doing this for a general $n \times n$ matrix, one obtains an explicit expression for the determinant.
This expression is best expressed using the concept of a permutation of $n$ numbers.

Definition 7.2:

A permutation $p$ of $n$ elements is a bijection from the set $\{ 1, 2, \ldots, n \}$ to itself.

A permutation $p$ is specified by how it acts on the set, i.e., by the numbers $p(1), p(2), \ldots, p(n)$.
Since $p$ is a bijection, each number $\{ 1, 2, \ldots, n \}$ occurs precisely once in the list $p(1), p(2), \ldots, p(n)$.
In other words the numbers are re-arranged or permuted. Sometimes the notation $p = [p(1), p(2), \ldots, p(n)] $ is used.

Definition 7.3:

A transposition is a permutation where two elements change places and the other elements are fixed.


Example 7.4:

The permutation $p = [1, 2, 4, 3]$ is a transposition, since the elements $3$ and $4$ change place and the other elements, i.e., $1$ and $2$, are fixed.


Definition 7.4:

The inverse $p^{-1}$ of a permutation $p$ is defined as the inverse of the corresponding function.


Example 7.5:

Let $p = [2, 3, 4, 1]$ be a permutation of $4$ numbers. This means that $p = [p(1), p(2), p(3), p(4)] = [2, 3, 4, 1]$. What is the inverse $p^{-1}$? Since $p(1)=2$, we have $p^{-1}(2) = 1$. Thus $p^{-1} = [?, 1, ?, ?]$. Since $p(2)=3$, we have $p^{-1}(3)=2$. Thus $p^{-1} = [?, 1, 2, ?]$. Similarly, $p(3)=4$ and $p(4)=1$ give $p^{-1}(4)=3$ and $p^{-1}(1)=4$. This gives $p^{-1} = [4, 1, 2, 3]$.

If $p$ is a permutation of $n$ elements, there are ${n \choose 2}$ pairs of numbers $(p(i),p(j))$ with $i < j$. Some of these are in the correct order, i.e., $(p(i),p(j))$ are in the same order as $(i,j)$. If, on the other hand, $(p(i),p(j))$ are in the wrong order as compared to $(i,j)$, then the pair is called an *inversion*.
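The procedure in Example 7.5 can be written as a small function; a sketch that represents a permutation as the list $[p(1), \ldots, p(n)]$ of 1-based values:

```python
def invert_permutation(p):
    """Inverse of a permutation given as a list [p(1), ..., p(n)] of 1-based values."""
    n = len(p)
    inv = [0] * n
    for i, v in enumerate(p, start=1):  # p(i) = v  implies  p^{-1}(v) = i
        inv[v - 1] = i
    return inv

# Example 7.5: p = [2, 3, 4, 1] has inverse [4, 1, 2, 3].
```

Each value $v = p(i)$ tells us directly where to place $i$ in the inverse, so a single pass over the list suffices.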

Definition 7.5:

If a pair $(p(i),p(j))$ with $i < j$ is in the wrong order, i.e., $p(i) > p(j)$, then the pair is called an *inversion*.

Here ${n \choose 2}$ is the *binomial coefficient*, ${n \choose k} = \frac{n!}{k! (n-k)!}$. One of the many interesting properties of the binomial coefficient is that there are ${n \choose k}$ ways to choose $k$ out of a set of $n$ elements.

Definition 7.6:

A permutation is called *even* if the number of inversions is even, and *odd* if the number of inversions is odd.


Definition 7.7:

The signature $\sigma$ of a permutation $p$, here denoted $\sigma(p)$ is


\begin{equation} \sigma(p) = \begin{cases} +1 &\mbox{if $p$ is an even permutation} \\ -1 &\mbox{if $p$ is an odd permutation} . \end{cases} \end{equation} | (7.27) |

Theorem 7.2:
Properties of the signature

For permutations we have

\begin{equation} \begin{array}{llr} (i) & \sigma(p) = \sigma(p^{-1}) & \\ (ii) & \sigma(p \circ q) = \sigma(p) \sigma(q)& \\ (iii) & \sigma(p) = -1 & \spc\text{(if $p$ is a transposition)} \end{array} \end{equation} | (7.28) |

A permutation $p$ of $n$ elements can be represented as an $n \times n$ matrix $\mx{E}_p$, which is zero everywhere except for the elements at positions $(p(1),1), (p(2),2), \ldots, (p(n),n)$, which are set to one. The determinant of $\mx{E}_p$ is particularly simple to calculate: we have $\det(\mx{E}_p) = \sigma(p)$.
We are now ready to state a theorem that gives an explicit formula for the determinant in general dimensions.
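Counting inversions gives a direct way to compute the signature (Definitions 7.5-7.7); a small sketch:

```python
def signature(p):
    """Signature of a permutation given as [p(1), ..., p(n)]: +1 if the
    number of inversions is even, -1 if it is odd (Definition 7.7)."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

# The transposition [1, 2, 4, 3] has one inversion, so its signature is -1,
# in line with property (iii) of Theorem 7.2.
```

For instance, `signature([2, 3, 4, 1])` and `signature([4, 1, 2, 3])` agree, illustrating property $(i)$ for the permutation of Example 7.5 and its inverse.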

Theorem 7.3:

The determinant of an $n \times n$ matrix $\mx{A}$ is

\begin{equation} \det(\mx{A}) = \sum_p \sigma(p) a_{p(1),1} a_{p(2),2} \ldots a_{p(n),n} , \end{equation} | (7.29) |

where the sum is taken over all permutations $p$ and $\sigma(p)$ denotes the signature of the permutation $p$.
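Formula (7.29) can be transcribed directly into code. The sum has $n!$ terms, so this is only practical for small matrices, but it is a faithful sketch of the theorem (the signature is computed by counting inversions, as in Definition 7.7):

```python
from itertools import permutations

def sign(p):
    # +1 for an even number of inversions, -1 for an odd number.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """det(A) = sum_p sigma(p) * a_{p(1),1} * a_{p(2),2} * ... * a_{p(n),n}, Eq. (7.29)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):  # p[col] plays the role of p(col+1), 0-based
        term = sign(p)
        for col in range(n):
            term *= A[p[col]][col]
        total += term
    return total
```

For a $2 \times 2$ matrix this reproduces $a_{11} a_{22} - a_{12} a_{21}$, and for the matrix of Example 7.2 it returns $0$.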

If we carried out the same expansion as in Example 7.3 for a general $n \times n$ matrix, then each time we expand a column, we would obtain $n$ times as many terms. After expanding all $n$ columns, we would get the determinant as a sum of $n^n$ terms. Some of these terms vanish because the corresponding determinants have two equal columns. In the example, the first and the last of the four terms vanished. After removing these, we get

\begin{equation} \det(\mx{A}) = \sum_p a_{p(1),1} a_{p(2),2} \ldots a_{p(n),n} \det(\mx{E}_p), \end{equation} | (7.30) |

Using $\det(\mx{E}_p) = \sigma(p)$, this becomes

\begin{equation} \det(\mx{A}) = \sum_p a_{p(1),1} a_{p(2),2} \ldots a_{p(n),n} \sigma(p). \end{equation} | (7.31) |

This shows that if a function with the three defining properties exists, it must be given by this formula, which proves uniqueness. For existence, one verifies that the formula fulfills the properties of Definition 7.1. Properties $(i)$ and $(iii)$ follow directly. For property $(ii)$, assume that two columns, say the first and the second, are equal, and pair each permutation $p$ with $q = p$ composed with the transposition of the first two elements. The two corresponding terms in the sum are

\begin{equation} a_{p(1),1} a_{p(2),2} \ldots a_{p(n),n} \sigma(p) + a_{p(2),1} a_{p(1),2} \ldots a_{p(n),n} \sigma(q) . \end{equation} | (7.32) |

Since $\sigma(q) = -\sigma(p)$ by Theorem 7.2, and the two columns are equal, this sum equals

\begin{equation} a_{p(1),1} a_{p(2),2} \ldots a_{p(n),n} \sigma(p) - a_{p(2),2} a_{p(1),1} \ldots a_{p(n),n} \sigma(p) = 0 . \end{equation} | (7.33) |

Hence the terms cancel pairwise and the determinant is zero.

$\square$

Thus the determinants of orders $1$, $2$, and $3$ consist of $1$, $2$, and $6$ terms, in accordance with the previous examples. There are relatively easy patterns that one can use to remember the $2 \times 2$ and $3 \times 3$ determinants. For the $2 \times 2$ case, write the matrix and think of a cross: the elements on the diagonal should be multiplied, and then the product of the other diagonal is subtracted, according to the following illustration.

\begin{equation} \begin{array}{cccccc} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + & \,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\, - \\ \!\!\!\!\!\!\!\!\!\! \searrow & \,\,\,\,\,\,\,\, \swarrow \\ a_{11} & a_{12}\\ a_{21} & a_{22} \end{array}, \end{equation} | (7.34) |

For the $3 \times 3$ case, the corresponding pattern, sometimes called the rule of Sarrus, is obtained by repeating the columns of the matrix to the right: the three down-right diagonals are added and the three down-left diagonals are subtracted.

\begin{equation} \begin{array}{cccccc} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+ &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + &\,\,\,\,\,\,\,\,\,\,\,\, - &\,\, - & - \\ \!\!\!\!\!\!\!\!\!\!\searrow &\!\!\!\!\!\!\!\!\! \searrow &\!\!\!\!\!\!\!\!\!\! \searrow & \!\swarrow &\!\!\!\!\!\!\!\!\! \swarrow &\!\!\!\!\!\!\!\!\!\!\! \swarrow \\ a_{11} & a_{12} & a_{13} &\!\!\!\!\!\!\!\!\! a_{11} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{12} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{13} \\ a_{21} & a_{22} & a_{23} &\!\!\!\!\!\!\!\!\! a_{21} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{22} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{23} \\ a_{31} & a_{32} & a_{33} &\!\!\!\!\!\!\!\!\! a_{31} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{32} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! a_{33} \\ \end{array} \end{equation} | (7.35) |

In this section, we present theorems about the determinant of a transposed matrix, of a product of matrices, and of the inverse of a matrix.

Theorem 7.4:

The determinant does not change under transposition, i.e.,


\begin{equation} \det(\mx{A}) = \det(\mx{A}^\T) . \end{equation} | (7.36) |

For a matrix $\mx{A}=[a_{ij}]$, the transposed matrix is $\mx{A}^\T = [b_{ij}]$, where $b_{ij}=a_{ji}$. According to Theorem 7.3, we have

\begin{equation} \det(\mx{A}^T) = \sum_{p} b_{p(1),1} b_{p(2),2} \ldots b_{p(n),n} \sigma(p) = \sum_{p} a_{1,p(1)} a_{2,p(2)} \ldots a_{n,p(n)} \sigma(p) . \end{equation} | (7.37) |

By reordering the factors in each term according to row index and substituting $q = p^{-1}$, which according to Theorem 7.2 fulfills $\sigma(q) = \sigma(p)$, we get

\begin{equation} \det(\mx{A}^\T) = \sum_{p} a_{1,p(1)} a_{2,p(2)} \ldots a_{n,p(n)} \sigma(p) = \sum_{q} a_{q(1),1} a_{q(2),2} \ldots a_{q(n),n} \sigma(q) = \det(\mx{A}) , \end{equation} | (7.38) |

which proves the theorem.

$\square$

Theorem 7.5:

The determinant of a matrix product is a product of corresponding determinants, i.e.,


\begin{equation} \det(\mx{A} \mx{B}) = \det(\mx{A}) \det(\mx{B}). \end{equation} | (7.39) |

The matrix product $\mx{A} \mx{B}$ can be written

\begin{equation} \mx{A} \mx{B} = \begin{pmatrix} \mx{A} \vc{b}_1 & \mx{A} \vc{b}_2 & \ldots & \mx{A} \vc{b}_n \end{pmatrix} , \end{equation} | (7.40) |

where

\begin{equation} \mx{A} \vc{b}_j = \sum_i \vc{a}_i b_{ij}, \end{equation} | (7.41) |

i.e., each column of $\mx{A} \mx{B}$ is a linear combination of the columns $\vc{a}_i$ of $\mx{A}$. Using the linearity of the determinant in each column, and discarding all terms where two columns are equal (property $(ii)$), we get

\begin{equation} \det(\mx{A} \mx{B}) = \det \begin{pmatrix} \sum_i b_{i1} \vc{a}_i & \sum_i b_{i2} \vc{a}_i & \ldots & \sum_i b_{in} \vc{a}_i \end{pmatrix} = \sum_p b_{p(1),1} b_{p(2),2} \cdots b_{p(n),n} \det \begin{pmatrix} \vc{a}_{p(1)} & \vc{a}_{p(2)} & \ldots & \vc{a}_{p(n)} \end{pmatrix} . \end{equation} | (7.42) |

Sorting the columns back into their natural order by repeated swaps (property $(v)$) gives

\begin{equation} \det \begin{pmatrix} \vc{a}_{p(1)} & \vc{a}_{p(2)} & \ldots & \vc{a}_{p(n)} \end{pmatrix} = \sigma(p) \det(\mx{A}) . \end{equation} | (7.43) |

Thus

\begin{equation} \det(\mx{A} \mx{B}) = \sum_p b_{p(1),1} b_{p(2),2} \cdots b_{p(n),n} \sigma(p) \det(\mx{A}) = \det(\mx{A}) \det(\mx{B}) , \end{equation} | (7.44) |

which proves the theorem.

$\square$

Using Theorem 7.5, we can prove the following useful result.

Theorem 7.6:

The determinant of the inverse of a matrix is the inverse of the determinant, i.e.,


\begin{equation} \det(\mx{A}^{-1}) = \frac{1}{\det(\mx{A})}. \end{equation} | (7.45) |

Using Theorem 7.5 on the identity $\mx{A} \mx{A}^{-1} = \mx{I}$ gives

\begin{equation} \det( \mx{A} \mx{A}^{-1} ) = \det( \mx{A} )\det( \mx{A}^{-1} ) = \det( \mx{I} ) = 1 \end{equation} | (7.46) |

and thus

\begin{equation} \det(\mx{A}^{-1}) = \frac{1}{\det(\mx{A})}. \end{equation} | (7.47) |

$\square$

In Interactive Illustration 7.5, Theorem 7.6 is shown for $2\times 2$ matrices. For example, if you make the area of the parallelogram of $\mx{A}$ smaller, then the area of the parallelogram of $\mx{A}^{-1}$ becomes larger and vice versa.

Interactive Illustration 7.5: The parallelograms spanned by the column vectors of $\mx{A}$ and of $\mx{A}^{-1}$. Making the area of one parallelogram smaller makes the area of the other larger, and vice versa.
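Properties $(vii)$-$(ix)$ of Theorem 7.1 are easy to sanity-check numerically. A sketch for $2 \times 2$ matrices, using the explicit formula (7.6) and the closed-form inverse:

```python
def det2(M):
    # Determinant of a 2x2 matrix, Eq. (7.6).
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # Product of two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[0.0, 1.0], [4.0, 2.0]]

assert det2(transpose2(A)) == det2(A)                        # property (vii)
assert abs(det2(matmul2(A, B)) - det2(A) * det2(B)) < 1e-9   # property (viii)
assert abs(det2(inv2(A)) - 1.0 / det2(A)) < 1e-9             # property (ix)
```

A numerical check like this is, of course, not a proof, but it is a quick way to catch sign errors when implementing determinant formulas.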

We start this section with the definition of a *minor*.

Definition 7.8:
Minor

Assume that $\mx{A}$ is an $n \times n$ matrix. The minor, denoted $D_{ij}$, is the determinant of the $(n-1) \times (n-1)$ matrix that you obtain after removing row $i$ and column $j$ in $\mx{A}.$


Example 7.6:

Let the matrix $\mx{A}$ be

\begin{equation} \mx{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6\\ 7 & 8 & 9 \end{pmatrix} . \end{equation} | (7.48) |

What are the minors $D_{1,2}$ and $D_{3,1}$?
The minor $D_{1,2}$ is obtained after removing row $1$ and column $2$. Thus

\begin{equation} D_{1,2} = \det \begin{pmatrix} 4 & 6\\ 7 & 9 \end{pmatrix} . \end{equation} | (7.49) |

The minor $D_{3,1}$ is obtained after removing row $3$ and column $1$. Thus

\begin{equation} D_{3,1} = \det \begin{pmatrix} 2 & 3 \\ 5 & 6 \end{pmatrix} . \end{equation} | (7.50) |
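Extracting the submatrix behind a minor is a matter of deleting one row and one column; a sketch with 1-based indices $i$ and $j$, as in the text:

```python
def submatrix(A, i, j):
    """The matrix A with row i and column j removed (i and j are 1-based).
    The minor D_ij of Definition 7.8 is the determinant of this matrix."""
    return [[A[r][c] for c in range(len(A)) if c != j - 1]
            for r in range(len(A)) if r != i - 1]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# Example 7.6: removing row 1 and column 2 leaves [[4, 6], [7, 9]].
```

Calling `submatrix(A, 3, 1)` likewise gives `[[2, 3], [5, 6]]`, matching (7.50).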

Theorem 7.7:
Expansion Along a Row/Column

The determinant of an $n \times n$ matrix $\mx{A}$ can be computed using expansion along row $i$ according to

\begin{equation} \det(\mx{A}) = \sum_j (-1)^{i+j} a_{ij} D_{ij} . \end{equation} | (7.51) |

The determinant can also be computed using expansion along column $j$ according to

\begin{equation} \det(\mx{A}) = \sum_i (-1)^{i+j} a_{ij} D_{ij} . \end{equation} | (7.52) |

Column $j$ of $\mx{A}$ can be written as a linear combination of the standard basis vectors $\vc{i}_k$,

\begin{equation} \vc{a}_{,j} = a_{1j} \vc{i}_1 + a_{2j} \vc{i}_2 + \ldots + a_{nj} \vc{i}_n . \end{equation} | (7.53) |

Using the linearity of the determinant in column $j$ then gives

\begin{equation} \det(\mx{A}) = \sum_k a_{kj} \det(\mx{B}_k) , \end{equation} | (7.54) |

where $\mx{B}_k$ denotes the matrix obtained from $\mx{A}$ by replacing column $j$ with $\vc{i}_k$. Consider one such matrix $\mx{B}_i$. By $i-1$ row swaps and $j-1$ column swaps, each of which changes the sign of the determinant, the single one in column $j$ can be moved to the upper left corner, and the remaining elements of the first row can then be set to zero by subtracting multiples of the first column (property $(vi)$). This shows that $\det(\mx{B}_i) = (-1)^{i+j} \det(\mx{C}_i)$, where $\mx{C}_i$ has the block form

\begin{equation} \mx{C}_i = \begin{pmatrix} 1 & 0\\ 0 & D_{ij} \end{pmatrix} , \end{equation} | (7.55) |

in which, with a slight abuse of notation, $D_{ij}$ denotes the submatrix of $\mx{A}$ with row $i$ and column $j$ removed. According to Theorem 7.3,

\begin{equation} \det(\mx{C}_i) = \sum_p c_{p(1),1} c_{p(2),2} \ldots c_{p(n),n} \sigma(p). \end{equation} | (7.56) |

Since the first column of $\mx{C}_i$ is zero except for $c_{11} = 1$, only permutations with $p(1)=1$ contribute. These correspond exactly to the permutations $q$ of the remaining $n-1$ elements, with the same signature, acting on the elements $d_{kl}$ of the submatrix. Hence

\begin{equation} \det(\mx{C}_i) = \sum_{p,p(1)=1} c_{p(1),1} c_{p(2),2} \ldots c_{p(n),n} \sigma(p) = \sum_{p,p(1)=1} c_{p(2),2} \ldots c_{p(n),n} \sigma(p) = \sum_{q} d_{q(1),1} \ldots d_{q(n-1),n-1} \sigma(q) = D_{ij} , \end{equation} | (7.57) |

where the last equality is Definition 7.8 of the minor. Combining these steps gives $\det(\mx{A}) = \sum_i (-1)^{i+j} a_{ij} D_{ij}$, which is the expansion along column $j$. The expansion along a row follows by applying this result to $\mx{A}^\T$ and using Theorem 7.4.

$\square$

Example 7.7:

Calculate

\begin{equation} \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6\\ 7 & 8 & 9 \end{vmatrix} \end{equation} | (7.58) |

using expansion along row $1$. We get

\begin{equation} \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6\\ 7 & 8 & 9 \end{vmatrix} = 1 \begin{vmatrix} 5 & 6\\ 8 & 9 \end{vmatrix} - 2 \begin{vmatrix} 4 & 6\\ 7 & 9 \end{vmatrix} +3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1 (45-48) - 2 (36-42) + 3 (32-35) = -3 + 12 - 9 = 0 \end{equation} | (7.59) |

Calculating the same determinant using expansion along column $2$ gives

\begin{equation} \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6\\ 7 & 8 & 9 \end{vmatrix} = -2 \begin{vmatrix} 4& 6\\ 7 & 9 \end{vmatrix} +5 \begin{vmatrix} 1 & 3\\ 7 & 9 \end{vmatrix} -8 \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2 (36-42) +5 (9-21) -8 (6-12) = 12 -60 +48 = 0. \end{equation} | (7.60) |

Expansion along a row or column is particularly useful if there are a lot of zeros in that row or column. This is illustrated in the following example.
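Theorem 7.7 also gives a recursive algorithm for computing any determinant. A minimal sketch that always expands along the first row (each minor is formed by deleting row $1$ and one column):

```python
def det(A):
    """Determinant via expansion along row 1: sum_j (-1)^(1+j) a_{1j} D_{1j}."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1 and column j+1
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

This reproduces the value $0$ from Example 7.7. Note that the recursion generates $n!$ terms in total, so for large matrices other methods (such as Gaussian elimination) are preferred in practice.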

Example 7.8:

Calculate

\begin{equation} \begin{vmatrix} 1 & 2 & 3 & 3 & \pi \\ 4 & 5 & 0 & 4 & -912\\ 0 & 0 & 0 & 0 & 1 \\ 7 & 8 & 0 & 5 & -\pi^2 \\ 0 & 2 & 0 & 0 & 77 \end{vmatrix}. \end{equation} | (7.61) |

We start by expanding along row $3$. This gives

\begin{equation} \begin{vmatrix} 1 & 2 & 3 & 3 & \pi \\ 4 & 5 & 0 & 4 & -912\\ 0 & 0 & 0 & 0 & 1 \\ 7 & 8 & 0 & 5 & -\pi^2 \\ 0 & 2 & 0 & 0 & 77 \end{vmatrix} = 0 + 0 + 0 + 0 + 1 \begin{vmatrix} 1 & 2 & 3 & 3 \\ 4 & 5 & 0 & 4 \\ 7 & 8 & 0 & 5 \\ 0 & 2 & 0 & 0 \end{vmatrix}. \end{equation} | (7.62) |

We continue by expanding along column $3$:

\begin{equation} \begin{vmatrix} 1 & 2 & 3 & 3 & \pi \\ 4 & 5 & 0 & 4 & -912\\ 0 & 0 & 0 & 0 & 1 \\ 7 & 8 & 0 & 5 & -\pi^2 \\ 0 & 2 & 0 & 0 & 77 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 3 & 3 \\ 4 & 5 & 0 & 4 \\ 7 & 8 & 0 & 5 \\ 0 & 2 & 0 & 0 \end{vmatrix} = 3 \begin{vmatrix} 4 & 5 & 4 \\ 7 & 8 & 5 \\ 0 & 2 & 0 \end{vmatrix} + 0 + 0 + 0 . \end{equation} | (7.63) |

Finally, we use expansion along row $3$ of the remaining $3 \times 3$ determinant:

\begin{equation} \begin{vmatrix} 1 & 2 & 3 & 3 & \pi \\ 4 & 5 & 0 & 4 & -912\\ 0 & 0 & 0 & 0 & 1 \\ 7 & 8 & 0 & 5 & -\pi^2 \\ 0 & 2 & 0 & 0 & 77 \end{vmatrix} = 3 \begin{vmatrix} 4 & 5 & 4 \\ 7 & 8 & 5 \\ 0 & 2 & 0 \end{vmatrix} = 3 \cdot (-2) \begin{vmatrix} 4 & 4 \\ 7 & 5 \\ \end{vmatrix} = -6 (20-28) = 48 \end{equation} | (7.64) |

Example 7.9:
The determinant of a triangular matrix

The determinant of a triangular matrix is the product of the diagonal elements. This is easy to see using expansion along rows or columns. For example, what is the determinant of

\begin{equation} \begin{vmatrix} 11 & 0 & 0 \\ 1 & 17 & 0 \\ 1 & 2 & 42 \end{vmatrix} ? \end{equation} | (7.65) |

Using repeated expansions along the first row, we get

\begin{equation} \begin{vmatrix} 11 & 0 & 0 \\ 1 & 17 & 0 \\ 1 & 2 & 42 \end{vmatrix} = 11 \begin{vmatrix} 17 & 0 \\ 2 & 42 \end{vmatrix} = 11 \cdot 17 \begin{vmatrix} 42 \end{vmatrix} = 11 \cdot 17 \cdot 42 . \end{equation} | (7.66) |

Thus the determinant is the product of the diagonal elements.
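The claim is easy to check in code; a sketch using the explicit cofactor expansion of a $3 \times 3$ determinant along the first row:

```python
def det3(A):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

T = [[11, 0, 0], [1, 17, 0], [1, 2, 42]]
# For this triangular matrix, det3(T) equals 11 * 17 * 42.
```

Since the zeros above the diagonal kill all but one term in each expansion, only the diagonal product survives.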

Example 7.10:
Adding one column to another

Adding (or subtracting) a multiple of one column to another column does not change the determinant, according to property $(vi)$. This can sometimes be useful to simplify the calculations. Here is an example:

\begin{equation} \begin{vmatrix} 1 & 1 & 1\\ 1 & 2 & 2 \\ 1 & 3 & 9 \end{vmatrix} = \begin{vmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 1 & 3 & 6 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 2 & 6 \end{vmatrix} = 1 \cdot 1 \cdot 6 = 6 . \end{equation} | (7.67) |

In the first step, we subtracted column $2$ from column $3$. Then we subtracted column $1$ from the new column $2$. In the final step, we used the fact that the determinant of a triangular matrix is the product of the diagonal elements
(see Example 7.9).

In this section, we will introduce the *adjoint matrix*, which, as we will see, is closely related to the inverse of a matrix.

Definition 7.9:
Adjoint matrix

The adjoint matrix $\adj( \mx{A} )$ is the matrix whose element at position $(i,j)$ is $(-1)^{i+j} D_{ji}$, where, similar to the previous section, $D_{ij}$ is the determinant of the matrix $\mx{A}$ after removing row $i$ and column $j$.

Notice that we used $D_{ji}$ and not $D_{ij}$ in the definition.

Example 7.11:

When calculating the adjoint of a $2 \times 2$ matrix

\begin{equation} \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} , \end{equation} | (7.68) |

we first need to calculate the determinants $D_{ij}$.
Here, $D_{11}$ is the determinant of the matrix obtained when removing row $1$ and column $1$. This
gives $D_{11}=a_{22}$. In the same way, $D_{12} = a_{21}$, $D_{21} = a_{12}$, and $D_{22} = a_{11}$.
Thus the adjoint is

\begin{equation} \adj(\mx{A}) = \begin{pmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{pmatrix} . \end{equation} | (7.69) |

Here, for example, $\adj(\mx{A})_{12} = -D_{21} = -a_{12}$.
Notice that the two elements on the diagonal changed place and that the two elements off the main diagonal changed sign.

The adjoint matrix is closely related to the inverse, through the following theorem.

Theorem 7.8:

If $\mx{A}$ is a square matrix and $\adj(\mx{A})$ is its adjoint matrix, then

\begin{equation} \mx{A} (\adj(\mx{A})) = (\adj(\mx{A})) \mx{A} = \det(\mx{A}) \mx{I} . \end{equation} | (7.70) |

If $\det(\mx{A}) \neq 0$, then $\mx{A}$ is invertible and the inverse is given by

\begin{equation} \mx{A}^{-1} = \frac{1}{\det \mx{A}} \adj \mx{A} . \end{equation} | (7.71) |

Study the product $\mx{B} = (\adj (\mx{A})) \mx{A}$. The diagonal elements $b_{jj}$ of $\mx{B}$ are then

\begin{equation} b_{jj} = \sum_i (\adj (\mx{A}))_{ji} a_{ij} = \sum_i (-1)^{i+j} D_{ij} a_{ij} = \det \mx{A}, \end{equation} | (7.72) |

where the last equality is the expansion of $\det(\mx{A})$ along column $j$ (Theorem 7.7).

For the off-diagonal elements ($i \neq j$), we get

\begin{equation} b_{ij} = \sum_k (\adj (\mx{A}))_{ik} a_{kj} = \sum_k (-1)^{k+i} D_{ki} a_{kj} = 0 , \end{equation} | (7.73) |

since this is the expansion along column $i$ of the determinant of the matrix obtained from $\mx{A}$ by replacing column $i$ with column $j$, and this matrix has two equal columns. The product $\mx{A} (\adj(\mx{A}))$ is handled analogously, using expansion along rows.

$\square$

Example 7.12:

The inverse of a $2 \times 2$ matrix is particularly easy to calculate using Theorem 7.8. We saw in Example 7.11 that for a general $2 \times 2$ matrix

\begin{equation} \mx{A} = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} , \end{equation} | (7.74)

the adjoint is

\begin{equation} \adj(\mx{A}) = \begin{pmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{pmatrix} . \end{equation} | (7.75)

According to Theorem 7.8, we have

\begin{equation} \mx{A}^{-1} = \frac{1}{\det \mx{A}} \adj \mx{A} = \frac{1}{a_{11} a_{22} - a_{12} a_{21}} \begin{pmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{pmatrix}, \end{equation} | (7.76)

which is the same as in Theorem 6.3 from the previous chapter.
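Formula (7.76) can be tried out directly. The following is a minimal sketch (the function name `inverse_2x2` is our own): swap the diagonal elements, negate the off-diagonal ones, and divide by the determinant, then compare against a library inverse.

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjoint formula (7.76)."""
    a11, a12 = A[0]
    a21, a22 = A[1]
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular")
    # Swap the diagonal, negate the off-diagonal, divide by det.
    return np.array([[a22, -a12],
                     [-a21, a11]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(inverse_2x2(A))                                  # [[ 0.6 -0.7] [-0.2  0.4]]
print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))   # True
```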

In this section, Cramer's rule will be introduced. It is useful when you want the solution $\vc{x}$ to $\mx{A}\vc{x}=\vc{y}$ without needing to know the inverse matrix of $\mx{A}$.

Theorem 7.9:

If $\mx{A}$ is a square matrix and $\det \mx{A} \neq 0$, then the solution $\vc{x}$ to the matrix equation $\mx{A}\vc{x} = \vc{y}$ has coordinates

\begin{equation} x_i =\frac{ \det \begin{pmatrix} \vc{a}_1 & \ldots & \vc{a}_{i-1} & \vc{y} & \vc{a}_{i+1} & \ldots & \vc{a}_n \end{pmatrix} }{\det \mx{A}} . \end{equation} | (7.77)

In other words, by exchanging column $i$ in $\mx{A}$ with the right hand side $\vc{y}$, and then calculating the determinant of this matrix and dividing by the determinant of $\mx{A}$, we directly obtain coordinate $i$ of the solution vector $\vc{x}$. It almost seems too easy.

Study the determinant

\begin{equation} \det \begin{pmatrix} \vc{a}_1 & \ldots & \vc{a}_{i-1} & \vc{y} & \vc{a}_{i+1} & \ldots & \vc{a}_n \end{pmatrix} . \end{equation} | (7.78)

Since $\vc{x}$ solves $\mx{A}\vc{x} = \vc{y}$, the right hand side is a linear combination of the columns of $\mx{A}$,

\begin{equation} \vc{y} = \sum_j x_j \vc{a}_j . \end{equation} | (7.79)

Inserting this into (7.78) and using the linearity of the determinant in column $i$ gives

\begin{equation} \det \begin{pmatrix} \vc{a}_1 & \ldots & \vc{a}_{i-1} & \vc{y} & \vc{a}_{i+1} & \ldots & \vc{a}_n \end{pmatrix} = \sum_j x_j \det \begin{pmatrix} \vc{a}_1 & \ldots & \vc{a}_{i-1} & \vc{a}_{j} & \vc{a}_{i+1} & \ldots & \vc{a}_n \end{pmatrix} = x_i \det \mx{A} , \end{equation} | (7.80)

since every term with $j \neq i$ contains a repeated column and therefore vanishes. Dividing by $\det \mx{A} \neq 0$ gives (7.77).

$\square$

Example 7.13:

Study the matrix equation

\begin{equation} \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} y_1\\ y_2 \end{pmatrix} . \end{equation} | (7.81)

If $\det(\mx{A}) = a_{11} a_{22} - a_{12} a_{21} \neq 0 $, then the solution is unique and

\begin{equation} x_1 = \frac{ \begin{vmatrix} y_1 & a_{12}\\ y_2 & a_{22} \end{vmatrix} }{ \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} } = \frac{y_1 a_{22} - y_2 a_{12}}{a_{11} a_{22} - a_{12} a_{21}} \end{equation} | (7.82)

and

\begin{equation} x_2 = \frac{ \begin{vmatrix} a_{11} & y_1 \\ a_{21} & y_2 \end{vmatrix} }{ \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} } = \frac{y_2 a_{11} - y_1 a_{21}}{a_{11} a_{22} - a_{12} a_{21}}. \end{equation} | (7.83)

In Interactive Illustration 7.6, we solve for $\vc{z}$ in a system of equations $\mx{A}\vc{z}=\vc{b}$ using Cramer's rule.
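Cramer's rule translates directly into a few lines of code. The sketch below is an illustrative NumPy implementation of (7.77) (the function name `cramer_solve` is our own): for each coordinate $i$, replace column $i$ of $\mx{A}$ by $\vc{y}$, take the determinant, and divide by $\det \mx{A}$.

```python
import numpy as np

def cramer_solve(A, y):
    """Solve A x = y by Cramer's rule (7.77)."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(y))
    for i in range(len(y)):
        Ai = A.copy()
        Ai[:, i] = y   # exchange column i with the right hand side
        x[i] = np.linalg.det(Ai) / detA
    return x

# The system 2 x1 + x2 = 5, x1 + 3 x2 = 10 has solution (1, 3).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])
print(cramer_solve(A, y))                                      # [1. 3.]
print(np.allclose(cramer_solve(A, y), np.linalg.solve(A, y)))  # True
```

Note that computing $n+1$ determinants is far more expensive than Gaussian elimination for large $n$; Cramer's rule shines for small systems and for symbolic formulas such as (7.82) and (7.83).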


Theorem 7.10:

The following equivalences hold for all square matrices $\mx{A}$:

\begin{equation} \begin{array}{ll} (i) & \spc\text{The column vectors of}\spc \mx{A} \spc\text{form a basis} \\ (ii) & \spc\text{The row vectors of}\spc \mx{A} \spc\text{form a basis} \\ (iii) & \spc\text{The matrix equation} \spc \mx{A} \vc{x} = \vc{0} \spc\text{has only the solution} \spc \vc{x}=\vc{0} \\ (iv) & \spc\text{The matrix equation}\spc \mx{A} \vc{x} = \vc{y} \spc\text{has a solution for every} \spc \vc{y} \\ (v) & \spc\text{The matrix} \spc \mx{A} \spc\text{is invertible} \\ (vi) & \spc\det \mx{A} \neq 0 \\ \end{array} \end{equation} | (7.84)

From Theorem 6.9 we know that $(i), (ii), (iii), (iv), (v)$ are equivalent. Here we only need to prove that these are also equivalent to $(vi)$.

If $\det \mx{A} \neq 0$ then the matrix is invertible due to Theorem 7.8. On the other hand, if the matrix $\mx{A}$ is invertible, then the product rule for determinants gives $\det \mx{A} \det \mx{A}^{-1} = \det(\mx{A}\mx{A}^{-1}) = \det \mx{I} = 1$, so both $\det \mx{A} \neq 0$ and $\det \mx{A}^{-1} \neq 0$.

$\square$
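The equivalence between $(iii)$ and $(vi)$ can be seen numerically. In the following sketch (our own illustration, with a matrix chosen to be singular by construction), the second column is twice the first, so the columns do not form a basis, the determinant vanishes, and $\mx{A}\vc{x}=\vc{0}$ has a non-zero solution.

```python
import numpy as np

# The second column is 2 times the first, so the columns are
# linearly dependent and cannot form a basis.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 1.0],
              [3.0, 6.0, 2.0]])

# (vi) fails: det A = 0 (up to floating-point rounding).
print(np.isclose(np.linalg.det(A), 0.0))   # True

# (iii) fails: A x = 0 has the non-zero solution x = (2, -1, 0),
# i.e. 2 * (first column) - (second column) = 0.
x = np.array([2.0, -1.0, 0.0])
print(np.allclose(A @ x, 0.0))             # True
```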

Example 7.14:
Three points on a line

Use the determinant to derive a constraint to check that three points $(x_1,y_1)$, $(x_2,y_2)$ and $(x_3,y_3)$ are co-linear, i.e., that they lie on a line.

If the three points are co-linear, then there exists a line, say $ax+by+c = 0$, such that the three points all lie on that line. In other words we have

\begin{equation} a x_1+b y_1+c = 0 \\ a x_2+b y_2+c = 0 \\ a x_3+b y_3+c = 0 . \end{equation} | (7.85)

Rewrite this as a matrix-vector equation

\begin{equation} \begin{pmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{pmatrix} \begin{pmatrix} a\\ b\\ c \end{pmatrix} = 0 . \end{equation} | (7.86)

This is an equation of type $\mx{A}\vc{s}=0$, where $\vc{s} = (a, b, c)$. According to Theorem 7.10 this has only the solution $a=b=c=0$ if $\det{\mx{A}} \neq 0$, which means that the points do not lie on a line. If on the other hand $\det{\mx{A}} = 0$, then there is at least one non-zero solution to the problem. Each non-zero solution $\vc{s}$ corresponds to a line. Notice, however, that a rescaled solution $\lambda \vc{s}$ corresponds to the same line as $\vc{s}$.

Answer: Three points are co-linear if and only if

\begin{equation} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = 0 . \end{equation} | (7.87)
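The constraint (7.87) gives a one-line co-linearity test. Below is a small sketch (the function name `collinear` and the tolerance are our own choices; with floating-point coordinates the determinant is compared against a tolerance rather than exact zero).

```python
import numpy as np

def collinear(p1, p2, p3, tol=1e-9):
    """Three points lie on a line iff the determinant (7.87) vanishes."""
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(M)) < tol

print(collinear((0, 0), (1, 1), (2, 2)))   # True: all on the line y = x
print(collinear((0, 0), (1, 1), (2, 3)))   # False
```

Geometrically, the determinant is twice the signed area of the triangle spanned by the three points, so the test checks that this triangle is degenerate.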

Example 7.15:
The Fundamental Matrix

In Interactive Illustration 5.2, we saw that the position of a 2D point in the first image is constrained by the position of the corresponding 2D point in the other image. The reason is that these 2D points are both projections of the same 3D point.

There is a relatively simple model for the projection of a 3D point with coordinates $(u_1,u_2,u_3)$ onto the 2D image point $\vc{v} = (v_1,v_2)$. The model becomes simpler by using so called *homogeneous* coordinates. Points are then represented as column vectors with an additional coordinate set to one. In this representation the 3D point is represented as

\begin{equation} \vc{u} = \begin{pmatrix} u_1\\ u_2\\ u_3\\ 1 \end{pmatrix} \end{equation} | (7.88)

and the image point is represented as

\begin{equation} \vc{v} = \begin{pmatrix} v_1\\ v_2\\ 1 \end{pmatrix} . \end{equation} | (7.89)

The model for the projection is then

\begin{equation} \lambda \begin{pmatrix} v_1\\ v_2\\ 1 \end{pmatrix} = \begin{pmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{pmatrix} \begin{pmatrix} u_1\\ u_2\\ u_3\\ 1 \end{pmatrix} \end{equation} | (7.90)

or

\begin{equation} \lambda \vc{v} = \mx{P} \vc{u} \end{equation} | (7.91)

using matrix and vector notation. In this model equation, $\mx{P}$ is the so called *camera matrix*. It contains the parameters of the *position* and *orientation* of the camera, and $\lambda$ is a scalar. Because of the $\lambda$ in the equation, the model is non-linear. Nevertheless, methods from linear algebra are crucial for the analysis and understanding of the problem.

Assume that two images are given, where $\mx{P}_L$ is the camera matrix corresponding to the first image and $\mx{P}_R$ is the camera matrix for the second image. Assume that one 3D point with homogeneous coordinate vector $\vc{u}$ is seen in both images. In the first image, the homogeneous coordinate vector is $\vc{v}_L$ and in the other image the vector is $\vc{v}_R$. The camera matrix equations for these two images are

\begin{equation} \lambda_L \vc{v}_L = \mx{P}_L \vc{u}\ \ \ \ \mathrm{and} \ \ \ \ \lambda_R \vc{v}_R = \mx{P}_R \vc{u}. \end{equation} | (7.92)

These two matrix equations can be written as a joint matrix equation, i.e.,

\begin{equation} \begin{pmatrix} \mx{P}_L & \vc{v}_L & \vc{0} \\ \mx{P}_R & \vc{0} & \vc{v}_R \end{pmatrix} \begin{pmatrix} \vc{u}\\ -\lambda_L \\ -\lambda_R \end{pmatrix} = \begin{pmatrix} \vc{0}\\ \vc{0} \end{pmatrix} . \end{equation} | (7.93)

Notice that the matrices and vectors are made out of *blocks* of other matrices or vectors. One often talks of block matrices. The matrix equation is of type $\mx{A} \vc{x} = \vc{0}$. Here $\mx{A}$ is a square matrix of size $6 \times 6$. Notice that the matrix equation has the solution $\begin{pmatrix}\vc{u}\\-\lambda_L \\-\lambda_R \end{pmatrix}$, which is not the zero vector. According to Theorem 7.10, this means that $\det \mx{A} = 0$. If we expand the determinant first along the fifth column and then along the sixth column, we obtain

\begin{equation} \vc{v}_L^\T \mx{F} \vc{v}_R = 0. \end{equation} | (7.94)

Here the elements of $\mx{F}$ are determinants of $4\times 4$ matrices involving the elements of the two camera matrices.
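The key step of the argument, that the block matrix in (7.93) is singular whenever the two image points correspond to the same 3D point, can be checked numerically. In the sketch below, the camera matrices and the 3D point are hypothetical values chosen only for this illustration; we project the point into both images, assemble the $6 \times 6$ block matrix, and verify that it has a non-zero null vector and zero determinant.

```python
import numpy as np

# Hypothetical camera matrices and a 3D point, chosen only for this
# illustration (any cameras that see the point would do).
P_L = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
P_R = np.array([[1.0, 0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 1.0, 0.0, 2.0]])
u = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous 3D point, as in (7.88)

# Project into both images: lambda * v = P u, with last coordinate 1.
wL, wR = P_L @ u, P_R @ u
lam_L, lam_R = wL[2], wR[2]
vL, vR = wL / lam_L, wR / lam_R

# Assemble the 6x6 block matrix of equation (7.93).
M = np.block([[P_L, vL.reshape(3, 1), np.zeros((3, 1))],
              [P_R, np.zeros((3, 1)), vR.reshape(3, 1)]])

# (u, -lambda_L, -lambda_R) is a non-zero null vector of M ...
x = np.concatenate([u, [-lam_L, -lam_R]])
print(np.allclose(M @ x, 0.0))            # True
# ... so by Theorem 7.10 its determinant must vanish.
print(np.isclose(np.linalg.det(M), 0.0))  # True
```

For non-corresponding image points the determinant is generically non-zero, which is exactly what the fundamental matrix constraint (7.94) encodes.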