
For an entirely matte (i.e., non-glossy) surface, the apparent brightness is the same no matter where the surface is viewed from. This means that photons arriving at the receiving surface are absorbed for an instant, and then shot out in arbitrary directions. This is often modeled with Lambert's law, which states that the outgoing intensity is proportional to the cosine of the angle between the normal at the receiver and the vector that goes from the receiver to the location of the light source. This situation is shown to the right, where the normal is $\vc{n}$, and the vector that is directed towards the light source (yellow circle) is called $\vc{l}$. Lambert's law states that the "brightness leaving the surface" is proportional to

\begin{equation} \cos \theta, \end{equation} | (4.1) |

where $\theta$ is the angle between $\vc{n}$ and $\vc{l}$. If both vectors are normalized, i.e., of unit length, the cosine can be computed using the dot product as

\begin{equation} \cos \theta = \vc{n} \cdot \vc{l}. \end{equation} | (4.2) |

Lambert's law can be used to compute very simple shading on a three-dimensional object. A light source can be placed anywhere, and thus used to calculate $\vc{l}$. However, the normal, $\vc{n}$, is also needed. Let us assume that the three-dimensional object consists of triangles. We have already seen in Chapter 3 that the "normal" of a line can be calculated, and we also saw that from a plane equation, the normal of the plane could be derived as well. However, this chapter describes a tool that is much simpler to use for computing the normal, namely the vector product. This is also sometimes called the cross product. An example of a three-dimensional object consisting of triangles is shown in Interactive Illustration 4.2.
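As a small sketch of how Lambert's law might be evaluated in code (the function name `lambert` and the tuple representation of vectors are our own choices, not from the text; light arriving from behind the surface is clamped to zero):

```python
import math

def lambert(normal, light_dir):
    """Diffuse intensity according to Lambert's law (Equations 4.1-4.2).

    Both vectors are normalized first, so that their dot product
    equals cos(theta); a negative cosine (light from behind the
    surface) contributes nothing.
    """
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A light straight above a surface with normal (0, 0, 1) gives full intensity:
print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, 2.0)))  # 1.0
```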

In order to present the vector product, we first need to define orientation and "handedness". The reason is that the vector product of two vectors, $\vc{u}$ and $\vc{v}$, is orthogonal to both $\vc{u}$ and $\vc{v}$. However, there is one remaining degree of freedom for the vector product, i.e., either it points in one direction so that it is orthogonal to both $\vc{u}$ and $\vc{v}$, or it points in the opposite direction.

We start by presenting orientation in two dimensions, i.e., in the plane. As shown to the right, positive orientation is defined as anti-clockwise, while negative orientation is defined as clockwise. This is analogous to how angles are defined, which may already be familiar to the reader, namely, a positive angle $\alpha$ starts from the $x$-axis and goes anti-clockwise, whereas a negative angle goes clockwise, as can be seen in the second step of Figure 4.3.

The concept of orientation can also be applied to vectors in the plane. Two vectors, $\vc{u}$ and $\vc{v}$, are positively oriented if $\vc{u}$ can be rotated in positive orientation (see Figure 4.3) so that the smallest angle, $[\vc{u},\vc{v}]$, becomes zero. Note that the vectors in Figure 4.4 are moveable, so the reader can move them and see when the orientation changes by watching the text at the bottom. Note that if $\vc{u}$ and $\vc{v}$ are positively oriented, then $\vc{v}$ and $\vc{u}$ must be negatively oriented, and vice versa. In addition, if $\vc{u}$ and $\vc{v}$ are parallel, then they are neither positively nor negatively oriented.

Next, the concept of orientation is extended to higher dimensions, e.g., orientation of vectors in three dimensions. For this, we need three vectors, which we call $\vc{u}$, $\vc{v}$, and $\vc{w}$. Similar to two-dimensional orientation, the order of the vectors is important. Assume that $\vc{u}$, $\vc{v}$, and $\vc{w}$ all start at a common origin.
Now, the vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$, are said to be positively oriented if they follow the right-hand rule, i.e., if the fingers of the right hand are curled from $\vc{u}$ towards $\vc{v}$ through the smallest angle, then the thumb points towards the same side of the plane spanned by $\vc{u}$ and $\vc{v}$ as $\vc{w}$. Such a triple of vectors is also said to form a right-handed system. Otherwise, the vectors are negatively oriented, and form a left-handed system.

For the remainder of this book, we will mostly use right-handed systems. Also, recall that the main axes are defined as

\begin{align} \vc{e}_x &= (1,0,0), \\ \vc{e}_y &= (0,1,0), \\ \vc{e}_z &= (0,0,1). \end{align} | (4.3) |

Note that cyclic shifts of a positively oriented triple of vectors remain positively oriented, e.g.,

\begin{align} &\vc{e}_x, \vc{e}_y, \vc{e}_z, \\ &\vc{e}_y, \vc{e}_z, \vc{e}_x, \\ &\vc{e}_z, \vc{e}_x, \vc{e}_y \\ \end{align} | (4.4) |

are all positively oriented orderings of the main axes.

In this chapter, we will simply start with the definition of the vector product, and later see what it is useful for.

Definition 4.1:
Vector Product

The vector product of two vectors, $\vc{u}$ and $\vc{v}$, in three dimensions is defined as a new vector denoted by $\vc{u} \times \vc{v}$, which has the following properties:

(1) $\vc{u} \times \vc{v}$ is orthogonal to both $\vc{u}$ and $\vc{v}$.

(2) $\ln{\vc{u} \times \vc{v}}= \ln{\vc{u}}\,\ln{\vc{v}} \sin [\vc{u},\vc{v}]$.

(3) The vectors $\vc{u}$, $\vc{v}$, and $\vc{u} \times \vc{v}$, are positively oriented.

From (2), it follows that the vector product is $\vc{0}$ if either $\vc{u}=\vc{0}$ or $\vc{v}=\vc{0}$, because a vector of 0 length must be the $\vc{0}$ vector. Similarly, from (2), the vector product is $\vc{0}$ if $[\vc{u},\vc{v}]=0$. Also, since $[\vc{u},\vc{u}]=0$, it holds that $\vc{u} \times \vc{u}=\vc{0}$.

Note that the definition of the vector product, so far, is rather abstract. Soon, we will see that it has many important uses.
However, first, we note that the length of the vector product, $\ln{\vc{u} \times \vc{v}}=$ $\ln{\vc{u}}\,\ln{\vc{v}} \sin [\vc{u},\vc{v}]$, can be interpreted geometrically.
As shown to the right, $\ln{\vc{v}} \sin [\vc{u},\vc{v}]$ is the length of the height of the triangle formed by
$\vc{u}$ and $\vc{v}$, but it is also the height of the parallelogram spanned by $\vc{u}$ and $\vc{v}$.
The area, $a$, of the parallelogram is $a = bh$, where $b$ is the base and $h$ is the height. This can be expressed
in terms of the vectors, $\vc{u}$ and $\vc{v}$, that is,

\begin{equation} a = bh = \ln{\vc{u}}\, \ln{\vc{v}} \sin [\vc{u},\vc{v}]. \end{equation} | (4.5) |

Comparing this with property (2) of Definition 4.1, the area of the parallelogram is simply the length of the vector product, i.e.,

\begin{equation} a = \ln{\vc{u} \times \vc{v}}. \end{equation} | (4.6) |
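As a numerical sketch of Equation (4.6), the area of a parallelogram can be computed as the length of the vector product. The snippet below anticipates the component formula that is derived later, in Theorem 4.2; the helper names `cross` and `parallelogram_area` are our own:

```python
import math

def cross(u, v):
    # Component formula for the vector product (derived later, in Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def parallelogram_area(u, v):
    # Equation (4.6): the area equals the length of u x v.
    m = cross(u, v)
    return math.sqrt(sum(c * c for c in m))

# A base of length 2 along the x-axis and a height of 3 gives area 6:
print(parallelogram_area((2.0, 0.0, 0.0), (1.0, 3.0, 0.0)))  # 6.0
```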


Similar to the dot product, there is also a set of rules for the vector product. These are summarized below.

Theorem 4.1:
Vector Product Rules

\begin{align} \begin{array}{llr} (1) :&\,\,\, \vc{u} \times \vc{v} = -\vc{v} \times \vc{u} & \spc\text{(anti-commutativity)} \\ (2) :&\,\,\, \vc{u} \times (\vc{v} + \vc{w}) = \vc{u} \times \vc{v} + \vc{u} \times \vc{w} & \spc\text{(distributivity)} \\ (3) :&\,\,\, (\vc{u} + \vc{v}) \times \vc{w} = \vc{u} \times \vc{w} + \vc{v} \times \vc{w} & \spc\text{(distributivity)}\\ (4) :&\,\,\, k(\vc{u} \times \vc{v}) = (k\vc{u}) \times \vc{v} = \vc{u} \times (k\vc{v}) & \spc\text{(associativity)} \end{array} \end{align} | (4.7) |

(1) This follows immediately from Definition 4.1, i.e., we know that $\vc{u}$, $\vc{v}$, and $\vc{u} \times \vc{v}$ are positively oriented, and if we change the order of $\vc{u}$ and $\vc{v}$, the vector product must point in the opposite direction.

(2) We have seen in Interactive Illustration 4.9 that the vector product can also be thought of as a scaled projection. Furthermore, in Theorem 3.1, rule (3) says $\vc{u} \cdot (\vc{v} +\vc{w})=$ $\vc{u} \cdot \vc{v} + \vc{u} \cdot \vc{w}$. This was proved by showing that the sum of projections is equal to the projection of the sum. Since rule (2) for the vector product can be thought of as a sum of scaled projections on one side, and a scaled projection of a sum on the other, we conclude that (2) must be true. It may help to draw a figure similar to Interactive Illustration 4.9 with both $\vc{v}$ and $\vc{w}$ being projected onto the plane with $\vc{u}$ as a normal, and also look at the projection of $\vc{v} + \vc{w}$. There are more formal proofs of this, but this is an intuitive line of reasoning that we believe helps the understanding.

(3) Given (1) and (2), this is quite straightforward to prove, i.e.,

\begin{align} (\vc{u} + \vc{v}) \times \vc{w} & \overset{(1)}{=} -\vc{w} \times (\vc{u} + \vc{v}) \\ & \overset{(2)}{=} -\vc{w} \times \vc{u} -\vc{w} \times\vc{v} \\ & \overset{(1)}{=} \vc{u} \times \vc{w} +\vc{v} \times\vc{w}, \\ \end{align} | (4.8) |

(4) This follows from the definition of the vector product, and scaling rules for vectors.

$\square$

Example 4.1:
Using the Vector Product Rules

In this example, we will use the rules from Theorem 4.1 to simplify two expressions. We start with $(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v})$, and use the same convention as previously, where we place the rule in parentheses above the equal sign,

\begin{align} (\vc{u} + \vc{v}) \times (\vc{u} - \vc{v}) & \overset{(2,4)}{=} (\vc{u} + \vc{v}) \times \vc{u} - (\vc{u} + \vc{v}) \times \vc{v} \\ & \overset{(3)}{=} \underbrace{\vc{u} \times \vc{u}}_{=\vc{0}} + \vc{v} \times \vc{u} - \vc{u}\times \vc{v} - \underbrace{\vc{v} \times \vc{v}}_{=\vc{0}} \\ & \overset{(1)}{=} -2\vc{u} \times \vc{v}. \end{align} | (4.9) |

In the second expression, we simply change the order of the terms in the cross product above, i.e., we simplify $(\vc{u} - \vc{v}) \times (\vc{u} + \vc{v})$. This has a slightly different outcome, i.e.,

\begin{align} (\vc{u} - \vc{v}) \times (\vc{u} + \vc{v}) & \overset{(2,4)}{=} (\vc{u} - \vc{v}) \times \vc{u} + (\vc{u} - \vc{v}) \times \vc{v} \\ & \overset{(3)}{=} \underbrace{\vc{u} \times \vc{u}}_{=\vc{0}} - \vc{v} \times \vc{u} + \vc{u}\times \vc{v} - \underbrace{\vc{v} \times \vc{v}}_{=\vc{0}} \\ & \overset{(1)}{=} 2\vc{u} \times \vc{v}. \end{align} | (4.10) |

Since we simply changed the order of the terms in the vector product, we could have used the result from Equation (4.9), together with rule (1) for vector products, to see that the result would be exactly the same, but negated.

Next, let us see in Interactive Illustration 4.10 what this means geometrically.

Finally, we notice that the three identities

\begin{align} \vc{e}_1 \times \vc{e}_2 &= \vc{e}_3, \\ \vc{e}_2 \times \vc{e}_3 &= \vc{e}_1, \\ \vc{e}_3 \times \vc{e}_1 &= \vc{e}_2, \\ \end{align} | (4.11) |

hold for a positively oriented, orthonormal basis, while rule (1) gives the reversed orderings with negated signs, i.e.,

\begin{align} \vc{e}_2 \times \vc{e}_1 &= -\vc{e}_3, \\ \vc{e}_3 \times \vc{e}_2 &= -\vc{e}_1, \\ \vc{e}_1 \times \vc{e}_3 &= -\vc{e}_2. \\ \end{align} | (4.12) |

Definition 4.1 is somewhat abstract. However, it turns out that there is a very direct way of computing the vector product in an orthonormal basis in three dimensions. This is summarized in the following theorem.

Theorem 4.2:
Vector Product in Orthonormal Basis

For three-dimensional vectors, $\vc{u}$ and $\vc{v}$, and for a positively oriented and orthonormal basis, the vector product is


\begin{equation} \vc{u} \times \vc{v} = (u_y v_z - u_z v_y, \, u_z v_x - u_x v_z, \, u_x v_y- u_y v_x). \end{equation} | (4.13) |

Given that $\vc{u} = u_x\vc{e}_1 + u_y\vc{e}_2 + u_z\vc{e}_3$ and $\vc{v} = v_x\vc{e}_1 + v_y\vc{e}_2 + v_z\vc{e}_3$, the vector product can be expressed as below, where the rules in Theorem 4.1 are used,

\begin{align} \vc{u} \times \vc{v} =& (u_x\vc{e}_1 + u_y\vc{e}_2 + u_z\vc{e}_3) \times (v_x\vc{e}_1 + v_y\vc{e}_2 + v_z\vc{e}_3)\\ =& u_x v_x \underbrace{\vc{e}_1 \times \vc{e}_1}_{=\vc{0}} + u_x v_y \vc{e}_1 \times \vc{e}_2 + u_x v_z \vc{e}_1 \times \vc{e}_3 + \\ & u_y v_x \vc{e}_2 \times \vc{e}_1 + u_y v_y \underbrace{\vc{e}_2 \times \vc{e}_2}_{=\vc{0}} + u_y v_z \vc{e}_2 \times \vc{e}_3 + \\ & u_z v_x \vc{e}_3 \times \vc{e}_1 + u_z v_y \vc{e}_3 \times \vc{e}_2 + u_z v_z \underbrace{\vc{e}_3 \times \vc{e}_3}_{=\vc{0}} \\ =& (u_x v_y - u_y v_x)\vc{e}_1 \times \vc{e}_2 + (u_x v_z - u_z v_x) \vc{e}_1 \times \vc{e}_3 + (u_y v_z - u_z v_y)\vc{e}_2 \times \vc{e}_3. \end{align} | (4.14) |

Using the basis vector identities from Equations (4.11) and (4.12), this simplifies to

\begin{align} \vc{u} \times \vc{v} =& (u_x v_y - u_y v_x)\underbrace{\vc{e}_1 \times \vc{e}_2}_{=\vc{e}_3} + (u_x v_z - u_z v_x)\underbrace{\vc{e}_1 \times \vc{e}_3}_{=-\vc{e}_2} + (u_y v_z - u_z v_y)\underbrace{\vc{e}_2 \times \vc{e}_3}_{=\vc{e}_1} \\ =& (u_y v_z - u_z v_y)\vc{e}_1 + (u_z v_x -u_x v_z)\vc{e}_2 + (u_x v_y - u_y v_x)\vc{e}_3. \end{align} | (4.15) |

$\square$

Using Theorem 4.2 to compute the vector product is extremely useful, but the formula may be difficult to remember. One way to make it easier to remember is called Sarrus' rule. The basis vectors, $\vc{e}_1$, $\vc{e}_2$, and $\vc{e}_3$, are written twice on a row, and on the row below, the $x$-, $y$-, and $z$-components of $\vc{u}$ are written, also twice. On the third row, the same is done for $\vc{v}$. This is shown below:

\begin{equation} \begin{array}{cccccc} + & + & + & - & - & - \\ \searrow & \searrow & \searrow & \swarrow & \swarrow & \swarrow \\ \vc{e}_1 & \vc{e}_2 & \vc{e}_3 & \vc{e}_1 & \vc{e}_2 & \vc{e}_3 \\ u_x & u_y & u_z & u_x & u_y & u_z \\ v_x & v_y & v_z & v_x & v_y & v_z \\ \end{array} \end{equation} | (4.16) |

The terms along the diagonals pointing down to the right are added, and the terms along the diagonals pointing down to the left are subtracted, i.e.,

\begin{equation} \vc{u}\times \vc{v} = +\vc{e}_1 u_y v_z +\vc{e}_2 u_z v_x +\vc{e}_3 u_x v_y -\vc{e}_1 u_z v_y -\vc{e}_2 u_x v_z -\vc{e}_3 u_y v_x, \end{equation} | (4.17) |

which is the same as Equation (4.13).

Another way is to write down the vector components in rows, where $\vc{u}$'s components are on top of $\vc{v}$'s components, that is,

\begin{equation} \begin{array}{ccc} u_x & u_y & u_z \\ v_x & v_y & v_z \end{array} \end{equation} | (4.18) |

The first component of the vector product is then obtained by hiding the first column and computing the determinant of the remaining $2\times 2$ matrix, i.e., the difference of the two diagonal products,

\begin{equation} \left| \begin{array}{cc} u_y & u_z \\ v_y & v_z \end{array} \right| = u_y v_z - u_z v_y. \end{equation} | (4.19) |

Repeating this for the second and third components (note that the second component changes sign), the vector product can be written as

\begin{equation} \vc{u}\times \vc{v} = \Biggl( +\left| \begin{array}{cc} u_y & u_z \\ v_y & v_z \end{array} \right|, \, -\left| \begin{array}{cc} u_x & u_z \\ v_x & v_z \end{array} \right|, \, +\left| \begin{array}{cc} u_x & u_y \\ v_x & v_y \end{array} \right| \Biggr). \end{equation} | (4.20) |
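The component formula of Theorem 4.2 translates directly into code. The sketch below uses our own function name `cross`, with each component annotated with the corresponding $2\times 2$ determinant:

```python
def cross(u, v):
    """Vector product in a positively oriented, orthonormal basis (Equation 4.13)."""
    return (u[1] * v[2] - u[2] * v[1],   # +(uy*vz - uz*vy)
            u[2] * v[0] - u[0] * v[2],   # -(ux*vz - uz*vx)
            u[0] * v[1] - u[1] * v[0])   # +(ux*vy - uy*vx)

# The basis identities of Equation (4.11):
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
# Anti-commutativity, rule (1) of Theorem 4.1:
print(cross((1, 2, 3), (4, 5, 6)), cross((4, 5, 6), (1, 2, 3)))  # (-3, 6, -3) (3, -6, 3)
```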

There are two types of triple products, i.e., products of three vectors, that are based on the vector product. One of them generates a vector and is therefore called the vector triple product, which is the topic of Section 4.6. The other, which is the topic of this section, is called the scalar triple product, since it generates a scalar. The definition follows below.

Definition 4.2:
Scalar Triple Product

The scalar triple product of three vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$, is

\begin{equation} (\vc{u} \times \vc{v}) \cdot \vc{w}. \end{equation} | (4.21) |

Interestingly, this expression is also how the determinant (which is the topic of Chapter 7) of a $3\times 3$ matrix is calculated.

As it turns out, the scalar triple product can be used to compute the volume of a parallelepiped spanned by three vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$. This is summarized in the following theorem:

Theorem 4.3:
Signed Volume of Parallelepiped

The scalar triple product, $(\vc{u} \times \vc{v}) \cdot \vc{w}$, can be used to calculate the volume $V$ of a parallelepiped spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$, as

\begin{align} V = +(\vc{u} \times \vc{v}) \cdot \vc{w}, & \hspace{5pt} \text{if the vectors are positively oriented}, \\ V = -(\vc{u} \times \vc{v}) \cdot \vc{w}, & \hspace{5pt} \text{if the vectors are negatively oriented}. \\ \end{align} | (4.22) |

This means that the volume is always the absolute value of the scalar triple product, i.e., $|(\vc{u} \times \vc{v}) \cdot \vc{w}|$. The volume is zero if any of the vectors is the zero vector, if two (or more) of the vectors are parallel, or if all three vectors lie in the same plane.

With the help of the figure of a parallelepiped, spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$, to the right, this becomes fairly straightforward. As we know from Figure 4.7 and from the definition of the vector product (Definition 4.1), the length of the vector product is the area of the parallelogram (yellow in the figure). That is, the area of the base parallelogram is $\ln{\vc{u} \times \vc{v}}$, where the direction of the vector product depends on the orientation of $\vc{u}$ and $\vc{v}$. Also, from the definition of the dot product (Definition 3.1), we know that $\vc{a}\cdot \vc{w} =$$ \ln{\vc{a}}\,\ln{\vc{w}} \cos[\vc{a},\vc{w}]$. Now, we introduce $\vc{a} = \vc{u} \times \vc{v}$, with the consequence that

\begin{align} (\vc{u} \times \vc{v})\cdot \vc{w} =& \vc{a} \cdot \vc{w} = \ln{\vc{a}}\,\ln{\vc{w}} \cos[\vc{a},\vc{w}] \\ =& \underbrace{\ln{\vc{u} \times \vc{v}}}_{\text{base area}} \, \underbrace{\ln{\vc{w}} \cos[\vc{u} \times \vc{v},\vc{w}]}_{\text{height with sign}}. \end{align} | (4.23) |

$\square$

As can be seen, it is a simple matter to test whether three vectors are positively oriented, i.e., whether they form a right-handed system. This is done by checking whether $(\vc{u}\times \vc{v}) \cdot \vc{w} > 0$. Also, note that for a right-handed orthonormal basis in three dimensions, we have $\ln{\vc{e}_1}=1$, $\ln{\vc{e}_2}=1$, and $\ln{\vc{e}_3}=1$, and $\vc{e}_1 \cdot \vc{e}_2 = 0$, $\vc{e}_1 \cdot \vc{e}_3 = 0$, and $\vc{e}_2 \cdot \vc{e}_3 = 0$. Furthermore, it must hold that $(\vc{e}_1 \times \vc{e}_2)\cdot \vc{e}_3 > 0$. In fact, due to the vectors being normalized and pairwise orthogonal, we have

\begin{gather} \underbrace{(\vc{e}_1 \times \vc{e}_2)}_{\vc{e}_3} \cdot \vc{e}_3 = \vc{e}_3\cdot \vc{e}_3=1, \end{gather} | (4.24) |

i.e., the volume of the unit cube spanned by the basis vectors is one.

Note that since the scalar triple product computes the volume (with sign) of the parallelepiped, we can actually change the order of the vectors in the expression for the scalar triple product as long as we keep them in a positively oriented system. Hence, we have

\begin{equation} (\vc{u} \times \vc{v}) \cdot \vc{w} = (\vc{v} \times \vc{w}) \cdot \vc{u} = (\vc{w} \times \vc{u}) \cdot \vc{v}. \end{equation} | (4.25) |

However, if two of the vectors change places, so that the system becomes negatively oriented, the sign of the scalar triple product is negated, i.e.,

\begin{equation} (\vc{u} \times \vc{v}) \cdot \vc{w} = -(\vc{v} \times \vc{u}) \cdot \vc{w} = -(\vc{w} \times \vc{v}) \cdot \vc{u} = -(\vc{u} \times \vc{w}) \cdot \vc{v}. \end{equation} | (4.26) |
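A quick numerical check of the cyclic identities (4.25) and the sign flips (4.26); the helper names are our own, and `cross` uses the component formula of Theorem 4.2:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scalar_triple(u, v, w):
    # Definition 4.2: (u x v) . w
    return dot(cross(u, v), w)

u, v, w = (1, 2, 3), (4, 5, 6), (7, 8, 10)
# Cyclic permutations leave the value unchanged (Equation 4.25):
print(scalar_triple(u, v, w), scalar_triple(v, w, u), scalar_triple(w, u, v))  # -3 -3 -3
# Swapping two vectors negates it (Equation 4.26):
print(scalar_triple(v, u, w))  # 3
```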

As the name implies, the vector triple product is a product of three vectors. Before we present the vector triple product, we present a simplified version of it, where two of the vectors are the same:

Theorem 4.4:
Simplified Vector Triple Product

When the first two terms in the vector triple product are the same, we have


\begin{equation} \vc{u} \times (\vc{u} \times \vc{v}) = (\vc{u} \cdot \vc{v})\vc{u} - (\vc{u} \cdot \vc{u})\vc{v}. \end{equation} | (4.27) |

As can be seen in the figure to the right, $\vc{u}$, $\vc{u}\times \vc{v}$, and $\vc{u}\times (\vc{u}\times \vc{v})$ form a right-handed system, and they are all mutually orthogonal. This means that we can create a vector, which we call $\vc{a}$, by projecting $\vc{v}$ onto the line spanned by $\vc{u}\times (\vc{u}\times \vc{v})$, which is the same as removing from $\vc{v}$ its projection onto $\vc{u}$:

\begin{equation} \vc{a} = \vc{v} - \proj{\vc{u}}{\vc{v}}. \end{equation} | (4.28) |

The squared length of $\vc{u}\times (\vc{u}\times \vc{v})$ is obtained by applying property (2) of Definition 4.1 twice, where we use the fact that $\vc{u}$ is orthogonal to $\vc{u}\times \vc{v}$, i.e.,

\begin{align} \ln{\vc{u}\times (\vc{u}\times \vc{v})}^2 =& \ln{\vc{u}}^2 \, \underbrace{ \ln{\vc{u}}^2 \ln{\vc{v}}^2 \sin^2[\vc{u}, \vc{v}] }_{ \ln{\vc{u} \times \vc{v}}^2} \\ =&\ln{\vc{u}}^4\ln{\vc{v}}^2 \bigl(1-\cos^2[\vc{u}, \vc{v}]\bigr) \\ =&\ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^4\ln{\vc{v}}^2 \cos^2[\vc{u}, \vc{v}] \\ =& \ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^2 (\vc{u} \cdot \vc{v})^2. \end{align} | (4.29) |

Writing out the projection in Equation (4.28) gives

\begin{equation} \vc{a} = \vc{v} - \frac{\vc{u}\cdot \vc{v}}{\vc{u}\cdot \vc{u}}\vc{u}. \end{equation} | (4.30) |

Next, we scale $\vc{a}$ by the factor $\vc{u}\cdot \vc{u}$, and call the result $\vc{b}$, which is parallel to $\vc{a}$, i.e.,

\begin{equation} \vc{b} = (\vc{u}\cdot \vc{u})\vc{v} - (\vc{u}\cdot \vc{v})\vc{u}. \end{equation} | (4.31) |

The squared length of $\vc{b}$ is

\begin{align} \ln{\vc{b}}^2 &= (\vc{u}\cdot \vc{u})^2 (\vc{v}\cdot\vc{v}) -2(\vc{u}\cdot \vc{u})(\vc{u}\cdot \vc{v})^2 + (\vc{u}\cdot \vc{v})^2(\vc{u} \cdot \vc{u}) \\ &=(\vc{u}\cdot \vc{u})^2 (\vc{v}\cdot\vc{v}) -(\vc{u}\cdot \vc{u})(\vc{u}\cdot \vc{v})^2 \\ &= \ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^2(\vc{u}\cdot \vc{v})^2, \end{align} | (4.32) |

which is identical to the squared length of $\vc{u}\times (\vc{u}\times \vc{v})$ in Equation (4.29). Since $\vc{u}\times (\vc{u}\times \vc{v})$ is parallel to $\vc{b}$ but, as can be seen in the figure, points in the opposite direction, it follows that $\vc{u}\times (\vc{u}\times \vc{v}) = -\vc{b} = (\vc{u} \cdot \vc{v})\vc{u} - (\vc{u} \cdot \vc{u})\vc{v}$.

$\square$

Next, the full vector triple product, sometimes called Lagrange's formula, or the triple product expansion, is presented in the theorem below.

Theorem 4.5:
Vector Triple Product

The vector triple product of $\vc{u}$, $\vc{v}$, and $\vc{w}$ is


\begin{equation} \vc{u} \times (\vc{v} \times \vc{w}) = (\vc{u} \cdot \vc{w})\vc{v} - (\vc{u} \cdot \vc{v})\vc{w}. \end{equation} | (4.33) |

We assume that the vectors $\vc{v}$ and $\vc{w}$ are not parallel, because otherwise the product will be the zero vector. Hence, $\vc{u}$ can be expressed in terms of $\vc{v}$ and $\vc{w}$:

\begin{equation} \vc{u} = a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}), \end{equation} | (4.34) |

where $a$, $b$, and $c$ are scalars. This is possible since $\vc{v}$, $\vc{w}$, and $\vc{v} \times \vc{w}$ constitute a basis in three dimensions.

Taking the dot product between $\vc{u}$ and $\vc{v}$ gives

\begin{align} \vc{u}\cdot \vc{v} & = (a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}))\cdot \vc{v} \\ & = a(\vc{v}\cdot\vc{v}) + b(\vc{w}\cdot \vc{v}) + c\underbrace{(\vc{v} \times \vc{w})\cdot \vc{v}}_{=0}\\ & = a(\vc{v}\cdot\vc{v}) + b(\vc{w}\cdot \vc{v}) . \end{align} | (4.35) |

Similarly, taking the dot product between $\vc{u}$ and $\vc{w}$ gives

\begin{equation} \vc{u}\cdot \vc{w} = a(\vc{v}\cdot\vc{w}) + b(\vc{w}\cdot \vc{w}). \end{equation} | (4.36) |

Finally, the vector triple product can be expanded using Theorem 4.4, i.e.,

\begin{align} \vc{u} \times (\vc{v} \times \vc{w}) &= (a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}))\times (\vc{v} \times \vc{w}) \\ &= a\vc{v}\times (\vc{v} \times \vc{w}) + b\vc{w} \times (\vc{v} \times \vc{w}) + c\underbrace{(\vc{v} \times \vc{w})\times (\vc{v} \times \vc{w})}_{=\vc{0}} \\ &= a\underbrace{\vc{v}\times (\vc{v} \times \vc{w})}_{(\vc{v} \cdot \vc{w})\vc{v} - (\vc{v} \cdot \vc{v})\vc{w}} - b\underbrace{\vc{w} \times (\vc{w} \times \vc{v})}_{(\vc{w} \cdot \vc{v})\vc{w} - (\vc{w} \cdot \vc{w})\vc{v}} \\ &= a\bigl((\vc{v} \cdot \vc{w})\vc{v} - (\vc{v} \cdot \vc{v})\vc{w}\bigr) - b\bigl((\vc{w} \cdot \vc{v})\vc{w} - (\vc{w} \cdot \vc{w})\vc{v}\bigr) \\ &=\bigl(\underbrace{a (\vc{v} \cdot \vc{w}) +b (\vc{w} \cdot \vc{w}}_{\vc{u}\cdot\vc{w}})\bigr)\vc{v} - \bigl(\underbrace{a(\vc{v} \cdot \vc{v}) + b(\vc{w} \cdot \vc{v})}_{\vc{u}\cdot\vc{v}}\bigr)\vc{w}\\ &= (\vc{u}\cdot\vc{w})\vc{v} - (\vc{u}\cdot\vc{v})\vc{w}, \end{align} | (4.37) |

$\square$
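As a numerical sanity check of Theorem 4.5, both sides of Lagrange's formula can be evaluated on an arbitrary integer example; the helper names below are our own:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple_lhs(u, v, w):
    # u x (v x w)
    return cross(u, cross(v, w))

def triple_rhs(u, v, w):
    # (u . w) v - (u . v) w
    return tuple(dot(u, w) * b - dot(u, v) * c for b, c in zip(v, w))

u, v, w = (1, 2, 3), (-2, 0, 5), (4, 1, -1)
print(triple_lhs(u, v, w), triple_rhs(u, v, w))  # both sides: (-58, -13, 28)
```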

We can use the vector triple product to prove that the vector product is not associative, i.e., that $\vc{a} \times (\vc{b} \times \vc{c})$ and $(\vc{a} \times \vc{b}) \times \vc{c}$ differ in general. Theorem 4.5 directly gives

\begin{equation} \vc{a} \times (\vc{b} \times \vc{c}) = (\vc{a}\cdot\vc{c})\vc{b} - (\vc{a}\cdot \vc{b})\vc{c}. \end{equation} | (4.38) |

Applying Theorem 4.5 to $\vc{c} \times (\vc{a} \times \vc{b})$ instead gives

\begin{equation} \vc{c} \times (\vc{a} \times \vc{b}) = (\vc{c}\cdot\vc{b})\vc{a} - (\vc{c}\cdot \vc{a})\vc{b}. \end{equation} | (4.39) |

and hence, by rule (1) of Theorem 4.1,

\begin{equation} (\vc{a} \times \vc{b})\times \vc{c} = -(\vc{c}\cdot\vc{b})\vc{a} + (\vc{c}\cdot \vc{a})\vc{b}. \end{equation} | (4.40) |

If the vector product were associative, the right-hand sides of Equations (4.38) and (4.40) would always be equal. Writing the dot products as scalars $k_1, \ldots, k_4$, this requirement has the form

\begin{equation} k_1 \vc{b} + k_2 \vc{c} = k_3 \vc{b} + k_4 \vc{a}, \end{equation} | (4.41) |

which can be rewritten as

\begin{equation} k_2 \vc{c} = (k_3-k_1) \vc{b} + k_4 \vc{a}, \end{equation} | (4.42) |

i.e., $\vc{c}$ would have to be a linear combination of $\vc{a}$ and $\vc{b}$ whenever $k_2 \neq 0$, which does not hold in general. Hence, the vector product is not associative.

The vector triple product theorem can also be used to show that the vector product satisfies the Jacobi identity, as shown below.

Theorem 4.6:
Jacobi Identity for Vector Products

The vector product satisfies the following identity:

\begin{equation} \vc{u} \times (\vc{v} \times \vc{w}) + \vc{v} \times (\vc{w} \times \vc{u}) + \vc{w} \times (\vc{u} \times \vc{v}) =\vc{0}. \end{equation} | (4.43) |

We simply use Theorem 4.5 to prove the Jacobi identity:

\begin{gather} \vc{u} \times (\vc{v} \times \vc{w}) + \vc{v} \times (\vc{w} \times \vc{u}) + \vc{w} \times (\vc{u} \times \vc{v}) = \\ (\vc{u} \cdot \vc{w})\vc{v} - (\vc{u} \cdot \vc{v})\vc{w} + (\vc{v} \cdot \vc{u})\vc{w} - (\vc{v} \cdot \vc{w})\vc{u} + (\vc{w} \cdot \vc{v})\vc{u} - (\vc{w} \cdot \vc{u})\vc{v}=\vc{0}, \end{gather} | (4.44) |

$\square$

This chapter ends with some examples.

Example 4.2:
Orientation of Two Two-Dimensional Vectors

Given two two-dimensional vectors, $\vc{u}$ and $\vc{v}$, in an orthonormal basis, we would like to determine how they are oriented. See Figure 4.4 for a small interactive figure on this topic. As we have seen, this is simple to do with three-dimensional vectors, i.e., we can simply use Theorem 4.2 and then investigate in which direction the vector product points. So, this can be done by augmenting the two-dimensional vectors with one more component, namely a $z$-component, which is set to zero. This means that we have $\vc{u}' = (u_x, u_y, 0)$ and $\vc{v}' = (v_x, v_y, 0)$. The vector product is then:

\begin{equation} \vc{u}' \times \vc{v}' = (u_x, u_y, 0) \times (v_x, v_y, 0) = (0,0, u_x v_y - u_y v_x). \end{equation} | (4.45) |

Note that both the $x$- and the $y$-component are zero. This can be realized from the fact that both $\vc{u}'$ and $\vc{v}'$ lie in the $xy$-plane (i.e., their $z$-components are zero), and therefore, the vector product must be orthogonal to the $xy$-plane.

Hence, to determine the orientation of $\vc{u}$ and $\vc{v}$, simply compute $s = u_x v_y - u_y v_x$. If $s>0$, the vectors are positively oriented; if $s<0$, they are negatively oriented; otherwise, they are parallel.
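The orientation test can be sketched as a small function; the name `orientation` and the sign convention $(+1, -1, 0)$ are our own choices:

```python
def orientation(u, v):
    """Return +1 if the 2D vectors are positively oriented, -1 if
    negatively oriented, and 0 if they are parallel (Equation 4.45)."""
    s = u[0] * v[1] - u[1] * v[0]
    return (s > 0) - (s < 0)

print(orientation((1, 0), (0, 1)))  # 1   (quarter turn anti-clockwise)
print(orientation((0, 1), (1, 0)))  # -1  (quarter turn clockwise)
print(orientation((2, 2), (1, 1)))  # 0   (parallel)
```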

Example 4.3:
Normal and Plane Equation of a Triangle

Given a triangle with three vertices, $A$, $B$, and $C$, we want to compute the normal of the plane that the triangle lies in. First construct two edge vectors as:

\begin{align} \vc{u} =& B - A, \\ \vc{v} =& C - A. \\ \end{align} | (4.46) |

The normal, $\vc{n}$, is then simply the vector product of the edge vectors, i.e.,

\begin{equation} \vc{n} = \vc{u} \times \vc{v}. \end{equation} | (4.47) |

In fact, this is the same technique that was used when computing Lambertian shading in Figure 4.2.

Now that we have the normal, $\vc{n}$, of the plane, and we know that $A$, $B$, and $C$ all lie in the plane of the triangle, it is possible to find the plane equation (Section 3.6.2) of the triangle. The plane equation is simply

\begin{gather} \vc{n} \cdot (P-A) = 0 \\ \Longleftrightarrow \\ (\vc{u} \times \vc{v})\cdot (P-A) = 0, \end{gather} | (4.48) |

where $P$ is any point on the plane, and $A$ is a point that we already know lies in the plane. We could have chosen any point, e.g., $B$ or $C$. Note that since the normal is expressed as a vector product, the entire plane equation is expressed as a scalar triple product.
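This example can be sketched in code; the triangle below lies in the plane $z=1$, and the helper names are our own:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

A, B, C = (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)
n = cross(sub(B, A), sub(C, A))   # Equation (4.47)
print(n)                          # (0.0, 0.0, 1.0)

# Each vertex satisfies the plane equation n . (P - A) = 0 (Equation 4.48):
print([dot(n, sub(P, A)) for P in (A, B, C)])  # [0.0, 0.0, 0.0]
```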

Example 4.4:
Volume of Tetrahedron

Given a tetrahedron that is defined by four points, $A$, $B$, $C$, and $D$, we seek the volume of this geometric shape. First, the three edge vectors from $A$ are constructed as:

\begin{align} \vc{u} =& B - A, \\ \vc{v} =& C - A, \\ \vc{w} =& D - A. \\ \end{align} | (4.49) |

The volume of the tetrahedron is one sixth of the absolute value of the scalar triple product of these three vectors, i.e., you can fit six tetrahedra inside the parallelepiped spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$. The reader is encouraged to try to show this with pen and paper. The volume of the tetrahedron is then expressed as

\begin{equation} \frac{1}{6} | (\vc{u} \times \vc{v}) \cdot \vc{w} |, \end{equation} | (4.50) |

where we need the absolute value of the scalar triple product in case $\vc{u}$, $\vc{v}$, and $\vc{w}$ are defined in a negatively oriented system.
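Equation (4.50) can be sketched as a short function; the helper names are our own, and the test case is the tetrahedron with corners at the origin and the three unit axis points:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def tetrahedron_volume(A, B, C, D):
    # Equation (4.50): one sixth of the absolute value of the scalar triple product.
    u, v, w = sub(B, A), sub(C, A), sub(D, A)
    return abs(dot(cross(u, v), w)) / 6

print(tetrahedron_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1/6
```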

Example 4.5:
Orthonormal Basis from Two Vectors

Assume that we have two non-parallel vectors, $\vc{u}$ and $\vc{v}$, and that we wish to produce an orthonormal basis, where $\vc{u}$ is parallel to one of the basis vectors. The first basis vector is simply $\vc{u}$ normalized, that is,

\begin{equation} \vc{e}_1 = \frac{1}{\ln{\vc{u}}} \vc{u}. \end{equation} | (4.51) |

Since the angle between $\vc{u}$ and $\vc{v}$ can be anything, we cannot use a normalized version of $\vc{v}$ directly as a basis vector. However, we can instead use the normalized vector product

\begin{equation} \vc{e}_2 = \frac{1}{\ln{\vc{u}\times \vc{v}}} (\vc{u}\times \vc{v}). \end{equation} | (4.52) |

Finally, $\vc{e}_3$ is created from $\vc{e}_1$ and $\vc{e}_2$ using the vector product as

\begin{equation} \vc{e}_3 = \vc{e}_1 \times \vc{e}_2. \end{equation} | (4.53) |

As can be seen, the third basis vector can be created directly using a simplified vector triple product (Theorem 4.4)

\begin{equation} \vc{e}_3 = \vc{e}_1 \times \vc{e}_2 = \frac{1}{\ln{\vc{u}} \, \ln{\vc{u}\times \vc{v}}} \vc{u} \times (\vc{u}\times \vc{v}). \end{equation} | (4.54) |

The basis is then $\{\vc{e}_1$, $\vc{e}_2$, $\vc{e}_3\}$.
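The construction in Equations (4.51)-(4.53) can be sketched as follows; the function name `basis_from` is our own, and the input vectors must be non-parallel:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def basis_from(u, v):
    # Equations (4.51)-(4.53); u and v must not be parallel.
    e1 = normalize(u)               # u normalized
    e2 = normalize(cross(u, v))     # normal of the u-v plane, normalized
    e3 = cross(e1, e2)              # unit length by construction
    return e1, e2, e3

e1, e2, e3 = basis_from((2.0, 0.0, 0.0), (1.0, 1.0, 0.0))
print(e1, e2, e3)  # (1.0, 0.0, 0.0) (0.0, 0.0, 1.0) (0.0, -1.0, 0.0)
```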

In Section 4.1, we needed to compute a normalized normal vector of a triangle. A normal vector of a triangle is a vector that is orthogonal to the plane of the triangle. Now that we know how the vector product works, this is a simple matter. Assume that the vertices of a triangle are called $P_1$, $P_2$, and $P_3$, i.e., they are three-dimensional points. Let us choose $P_3$ as a reference point from where we compute edge vectors, that is, vectors from $P_3$ to $P_1$ and $P_2$. This can be expressed as

\begin{align} \vc{e}_1 &= P_1 - P_3, \\ \vc{e}_2 &= P_2 - P_3. \end{align} | (4.55) |

A normal vector, $\vc{m}$, of the triangle is then obtained as the vector product of the edge vectors, i.e.,

\begin{equation} \vc{m} = \vc{e}_1 \times \vc{e}_2. \end{equation} | (4.56) |

Finally, since $\vc{m}$ is not necessarily of unit length, the normalized normal, $\vc{n}$, is obtained as

\begin{equation} \vc{n} = \frac{\vc{m}}{\ln{\vc{m}}}. \end{equation} | (4.57) |