© 2015-2017 Jacob Ström, Kalle Åström, and Tomas Akenine-Möller

# Chapter 4: The Vector Product

For an entirely matte (i.e., non-glossy) surface, the apparent brightness is the same no matter where the surface is viewed from. This means that photons arriving at the receiving surface are absorbed for an instant, and then shot out in arbitrary directions. This is often modeled with Lambert's law, which states that the outgoing intensity is proportional to the cosine of the angle between the normal at the receiver and the vector that goes from the receiver to the location of the light source. This situation is shown to the right, where the normal is $\vc{n}$, and the vector that is directed towards the light source (yellow circle) is called $\vc{l}$. Lambert's law states that the "brightness leaving the surface" is proportional to
 $$\cos \theta,$$ (4.1)
which can be expressed as
 $$\cos \theta = \vc{n} \cdot \vc{l}$$ (4.2)
given that $\ln{\vc{n}}=1$ and $\ln{\vc{l}}=1$, i.e., they are normalized. Loosely speaking, this means that if the light source direction, $\vc{l}$, is perfectly aligned with the normal, $\vc{n}$, then the brightness is maximized. However, as the angle between $\vc{l}$ and $\vc{n}$ becomes bigger, the brightness decreases.
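As a concrete illustration, Lambert's law in Equation (4.2) can be evaluated numerically. The following is a minimal Python sketch; the helper names `normalize` and `lambert` are our own, and the clamping of negative values to zero is a common practical convention that is not part of Equation (4.2):

```python
import math

def normalize(v):
    # Scale a 3D vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, to_light):
    # Lambert's law: brightness proportional to cos(theta) = n . l for
    # normalized vectors; clamped to zero when the surface faces away
    # from the light (an assumption on our part, not stated in Eq. (4.2)).
    n = normalize(normal)
    l = normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

print(lambert((0, 0, 1), (0, 0, 5)))  # 1.0: light straight along the normal
print(lambert((0, 0, 1), (1, 0, 0)))  # 0.0: light at a grazing angle
```

Note how only the directions matter: the light vector is normalized, so its length does not affect the result.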

Lambert's law can be used to compute very simple shading on a three-dimensional object. A light source can be placed anywhere, and thus used to calculate $\vc{l}$. However, the normal, $\vc{n}$, is also needed. Let us assume that the three-dimensional object consists of triangles. We have already seen in Chapter 3 that the "normal" of a line can be calculated, and also that the normal of a plane can be derived from the plane equation. However, this chapter describes a tool that is much simpler to use for computing the normal, namely the vector product. This is also sometimes called the cross product. An example of a three-dimensional object consisting of triangles is shown in Interactive Illustration 4.2.
Interactive Illustration 4.2: A continuous model of a torus has been tessellated, i.e., turned into a representation consisting of quadrilaterals, where each quadrilateral consists of two triangles. Recall that the camera view position can be changed by clicking the right mouse button, or pressing two fingers on an iPad. As you will see, the shading changes as a result.

In order to present the vector product, we first need to define orientation and "handedness". The reason is that the vector product of two vectors, $\vc{u}$ and $\vc{v}$, is orthogonal to both $\vc{u}$ and $\vc{v}$. However, one degree of freedom remains for the vector product, i.e., it can point in either of the two opposite directions that are orthogonal to both $\vc{u}$ and $\vc{v}$. We start by presenting orientation in two dimensions, i.e., in the plane. As shown to the right, positive orientation is defined as anti-clockwise, while negative orientation is defined as clockwise. This is analogous to how angles are defined, which may already be familiar to the reader, namely, a positive angle $\alpha$ starts from the $x$-axis and goes anti-clockwise, whereas a negative angle goes clockwise, as can be seen in the second step of Figure 4.3.

The concept of orientation can also be applied to vectors in the plane. Two vectors, $\vc{u}$ and $\vc{v}$, are positively oriented if $\vc{u}$ can be rotated in positive orientation (see Figure 4.3) so that the smallest angle, $[\vc{u},\vc{v}]$, becomes zero. Note that the vectors in Figure 4.4 are moveable, so the reader can move them and see when the orientation changes by watching the text at the bottom. Note that if $\vc{u}$ and $\vc{v}$ are positively oriented, then $\vc{v}$ and $\vc{u}$ must be negatively oriented, and vice versa. In addition, if $\vc{u}$ and $\vc{v}$ are parallel, then they are neither positively nor negatively oriented.

Next, the concept of orientation is extended to higher dimensions, e.g., orientation of vectors in three dimensions. For this, we need three vectors, and let us call them $\vc{u}$, $\vc{v}$, and $\vc{w}$. Similar to two-dimensional orientation, the order of the vectors is important. Assume that $\vc{u}$, $\vc{v}$, and $\vc{w}$ are all starting at a common origin. Now, the vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$, are said to be positively oriented if you imagine that you are sitting at the tip of $\vc{w}$, and looking towards the origin, and if $\vc{u}$ and $\vc{v}$ are positively oriented as seen from that position. The same is true for a set of negatively oriented vectors, except that $\vc{u}$ and $\vc{v}$ must be negatively oriented as seen from that position. This is illustrated in Figure 4.5.
Interactive Illustration 4.5: Orientation in 3D. To the left, the vectors $\vc{u}$, $\vc{v}$, and $\vc{w}$ are negatively oriented, while to the right, the vectors are positively oriented. If you imagine that you are at the tip of the $\vc{w}$ vector, looking down towards the shared origin of $\vc{u}$, $\vc{v}$, and $\vc{w}$, then you have a positively oriented set of vectors if $\vc{u}$ and $\vc{v}$ are positively oriented as seen from that position.
A positively oriented set of vectors is also called a right-handed system, while a negatively oriented set is called a left-handed system. A right-handed system is shown in Figure 4.6, where it becomes clear why it is called a right-handed system: such a system can easily be created with the fingers of your right hand. Just imagine that your thumb is the first vector ($\vc{u}$), your index finger is the second vector ($\vc{v}$), and finally, your middle finger is the third vector ($\vc{w}$). In contrast, a negatively oriented system can be formed in exactly the same way with your left hand.

For the remainder of this book, we will mostly use right-handed systems. Also, recall that the main axes are defined as
 \begin{align} \vc{e}_x &= (1,0,0), \\ \vc{e}_y &= (0,1,0), \\ \vc{e}_z &= (0,0,1). \end{align} (4.3)
Looking at the right part of Figure 4.5, you can see that $\vc{e}_x$, $\vc{e}_y$, and $\vc{e}_z$ could be replaced by $\vc{u}$, $\vc{v}$, and $\vc{w}$. Hence, $\vc{e}_x$, $\vc{e}_y$, and $\vc{e}_z$ form a right-handed system. By looking at the illustrations, one can also see that it is possible to shift the order of the vectors, while keeping it a right-handed system. Hence the following three systems are all right-handed,
 \begin{align} &\vc{e}_x, \vc{e}_y, \vc{e}_z \\ & \,\,\,\,\,\swarrow \swarrow \\ &\vc{e}_y, \vc{e}_z, \vc{e}_x \\ & \,\,\,\,\,\swarrow \swarrow \\ &\vc{e}_z, \vc{e}_x, \vc{e}_y, \\ \end{align} (4.4)
where the arrows simply show how the left-shifting is done from one row to the next. Note that the axis that falls out on the left reappears in the rightmost position. All other permutations (three in total) of the axes are negatively oriented, i.e., they form left-handed systems.

In this chapter, we will simply start with the definition of the vector product, and later see what it is useful for.

Definition 4.1: Vector Product
The vector product of two vectors, $\vc{u}$ and $\vc{v}$, in three dimensions is defined as a new vector denoted by $\vc{u} \times \vc{v}$, which has the following properties:

(1) $\vc{u} \times \vc{v}$ is orthogonal to both $\vc{u}$ and $\vc{v}$.
(2) $\ln{\vc{u} \times \vc{v}}= \ln{\vc{u}}\,\ln{\vc{v}} \sin [\vc{u},\vc{v}]$.
(3) The vectors $\vc{u}$, $\vc{v}$, and $\vc{u} \times \vc{v}$, are positively oriented.

From (2), it follows that the vector product is $\vc{0}$ if either $\vc{u}=\vc{0}$ or $\vc{v}=\vc{0}$, because a vector of length 0 must be the $\vc{0}$ vector. Similarly, from (2), the vector product is $\vc{0}$ if $[\vc{u},\vc{v}]=0$, since $\sin 0 = 0$. Also, since $[\vc{u},\vc{u}]=0$, it holds that $\vc{u} \times \vc{u}=\vc{0}$.
Note that the definition of the vector product, so far, is rather abstract. Soon, we will see that it has many important uses. However, first, we note that the length of the vector product, $\ln{\vc{u} \times \vc{v}}=$ $\ln{\vc{u}}\,\ln{\vc{v}} \sin [\vc{u},\vc{v}]$, can be interpreted geometrically. As shown to the right, $\ln{\vc{v}} \sin [\vc{u},\vc{v}]$ is the height of the triangle formed by $\vc{u}$ and $\vc{v}$, and it is also the height of the parallelogram spanned by $\vc{u}$ and $\vc{v}$. The area, $a$, of the parallelogram is $a = bh$, where $b$ is the base and $h$ is the height. This can be expressed in terms of the vectors, $\vc{u}$ and $\vc{v}$, that is,
 $$a = bh = \ln{\vc{u}}\, \ln{\vc{v}} \sin [\vc{u},\vc{v}].$$ (4.5)
Hence, we see that the area of a parallelogram is simply the length of the cross product, i.e.,
 $$a = \ln{\vc{u} \times \vc{v}}.$$ (4.6)
Note that the area of a triangle spanned by $\vc{u}$ and $\vc{v}$ is simply half of that of the parallelogram. In fact, this is what we have seen already in Example 3.7.
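To make Equations (4.5) and (4.6) concrete, here is a small numeric check in Python. It uses the component formula for the vector product, which is derived later in Theorem 4.2; the helper names are ours:

```python
import math

def cross(u, v):
    # Component formula for the vector product (derived later in Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Parallelogram with base 3 and height 2, lying in the xy-plane.
u, v = (3, 0, 0), (1, 2, 0)
print(length(cross(u, v)))      # 6.0: the parallelogram area, Equation (4.6)
print(length(cross(u, v)) / 2)  # 3.0: the area of the triangle spanned by u and v
```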

The vector product is shown in Interactive Illustration 4.8.
Interactive Illustration 4.8: This interactive figure shows the vector product, $\vc{u} \times \vc{v}$, between two vectors $\vc{u}$ and $\vc{v}$. The camera can be moved around, and both $\vc{u}$ and $\vc{v}$ can be moved. Note that the area of the light yellow parallelogram is equal to the length of $\vc{u} \times \vc{v}$. The reader is encouraged to play around with the vectors to investigate what happens when $\vc{u}$ and $\vc{v}$ are parallel, and to look at the direction of $\vc{u} \times \vc{v}$, when $\vc{u}$ and $\vc{v}$ change order.
Another way to visualize the vector product is shown in Interactive Illustration 4.9.
Interactive Illustration 4.9: This interactive illustration shows, in a different way, how the vector product can be thought of. As usual, we are interested in the vector product, $\vc{u} \times \vc{v}$, and we start by just showing $\vc{u}$. By definition, $\vc{u} \times \vc{v}$ is orthogonal to $\vc{u}$, and hence, we can draw the conclusion that $\vc{u} \times \vc{v}$ must lie in the light blue plane shown in the figure.
Interactive Illustration 4.9: Finally, the vector product, $\hid{\vc{u} \times \vc{v}}$, is shown, and note that it is merely the projection of $\hid{\vc{v}}$ onto the plane, but rotated so that it becomes orthogonal to that projection. This is so, since $\hid{\vc{u} \times \vc{v}}$ is orthogonal to both $\hid{\vc{u}}$ and $\hid{\vc{v}}$. Note also that in this case $\hid{\ln{\vc{u}}=1}$ in order to simplify the illustration. With an arbitrary length of $\hid{\vc{u}}$, the blue vector (the vector product) would have been scaled by a factor $\hid{\ln{\vc{u}}}$.
As seen in Interactive Illustration 4.9, one can actually think of the vector product as a projection. A third way to think of the vector product is to imagine that $\vc{u}$ is the normal of one plane, and $\vc{v}$ is the normal of another plane. If $\vc{u}$ and $\vc{v}$ are not parallel, then the vector product must be parallel to the line of intersection between these two planes. The reader is encouraged to draw this on paper to verify that it is in fact true.

Similar to the dot product, there is also a set of rules for the vector product. These are summarized below.

Theorem 4.1: Vector Product Rules
 \begin{align} \begin{array}{llr} (1) :&\,\,\, \vc{u} \times \vc{v} = -\vc{v} \times \vc{u} & \spc\text{(anti-commutativity)} \\ (2) :&\,\,\, \vc{u} \times (\vc{v} + \vc{w}) = \vc{u} \times \vc{v} + \vc{u} \times \vc{w} & \spc\text{(distributivity)} \\ (3) :&\,\,\, (\vc{u} + \vc{v}) \times \vc{w} = \vc{u} \times \vc{w} + \vc{v} \times \vc{w} & \spc\text{(distributivity)}\\ (4) :&\,\,\, k(\vc{u} \times \vc{v}) = (k\vc{u}) \times \vc{v} = \vc{u} \times (k\vc{v}) & \spc\text{(scalar associativity)} \end{array} \end{align} (4.7)

(1) This follows immediately from Definition 4.1, i.e., we know that $\vc{u}$, $\vc{v}$, and $\vc{u} \times \vc{v}$ are positively oriented, and if we change the order of $\vc{u}$ and $\vc{v}$, the vector product will point in the opposite direction.
(2) We have seen in Interactive Illustration 4.9 that the vector product can also be thought of as a scaled projection. Furthermore, in Theorem 3.1, rule (3) says $\vc{u} \cdot (\vc{v} +\vc{w})=$ $\vc{u} \cdot \vc{v} + \vc{u} \cdot \vc{w}$. This was proved by showing that the sum of projections is equal to the projection of the sum. Since (2) for the vector product can be thought of as a sum of scaled projections, and a scaled projection of a sum, we conclude that (2) must be true. It may help to draw a figure similar to Interactive Illustration 4.9 with both $\vc{v}$ and $\vc{w}$ being projected onto the plane with $\vc{u}$ as a normal, and also to look at the projection of $\vc{v} + \vc{w}$. There are more formal proofs of this, but this is an intuitive reasoning that we believe helps the understanding.
(3) Given (1) and (2), this is quite straightforward to prove, i.e.,
 \begin{align} (\vc{u} + \vc{v}) \times \vc{w} & \overset{(1)}{=} -\vc{w} \times (\vc{u} + \vc{v}) \\ & \overset{(2)}{=} -\vc{w} \times \vc{u} -\vc{w} \times\vc{v} \\ & \overset{(1)}{=} \vc{u} \times \vc{w} +\vc{v} \times\vc{w}, \\ \end{align} (4.8)
which is what we wanted to prove.
(4) This follows from the definition of the vector product, and scaling rules for vectors.
$\square$

Example 4.1: Using the Vector Product Rules
In this example, we will use the rules from Theorem 4.1 to simplify two expressions. We start with $(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v})$, and use the same convention as previously, where we place the rule number in parentheses above the equals sign,
 \begin{align} (\vc{u} + \vc{v}) \times (\vc{u} - \vc{v}) & \overset{(2,4)}{=} (\vc{u} + \vc{v}) \times \vc{u} - (\vc{u} + \vc{v}) \times \vc{v} \\ & \overset{(3)}{=} \underbrace{\vc{u} \times \vc{u}}_{=\vc{0}} + \vc{v} \times \vc{u} - \vc{u}\times \vc{v} - \underbrace{\vc{v} \times \vc{v}}_{=\vc{0}} \\ & \overset{(1)}{=} -2\vc{u} \times \vc{v}. \end{align} (4.9)
In the second expression, we simply change the order of the terms in the cross product above, i.e., we simplify $(\vc{u} - \vc{v}) \times (\vc{u} + \vc{v})$. This has a slightly different outcome, i.e.,
 \begin{align} (\vc{u} - \vc{v}) \times (\vc{u} + \vc{v}) & \overset{(2,4)}{=} (\vc{u} - \vc{v}) \times \vc{u} + (\vc{u} - \vc{v}) \times \vc{v} \\ & \overset{(3)}{=} \underbrace{\vc{u} \times \vc{u}}_{=\vc{0}} - \vc{v} \times \vc{u} + \vc{u}\times \vc{v} - \underbrace{\vc{v} \times \vc{v}}_{=\vc{0}} \\ & \overset{(1)}{=} 2\vc{u} \times \vc{v}. \end{align} (4.10)
Since we simply changed the order of the terms in the vector product, we could instead have used the result from Equation (4.9) together with rule (1) for vector products to see that the result would be exactly the same, but negated.
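The identity in Equation (4.9) can also be verified numerically. The following Python sketch (the helper functions are ours) checks $(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v}) = -2\,\vc{u} \times \vc{v}$ for one pair of integer vectors, where the arithmetic is exact:

```python
def cross(u, v):
    # Component formula for the vector product (Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

u, v = (1, 2, 3), (-2, 1, 4)
lhs = cross(add(u, v), sub(u, v))
rhs = tuple(-2 * c for c in cross(u, v))  # -2 u x v, as in Equation (4.9)
print(lhs, rhs, lhs == rhs)  # (-10, 20, -10) (-10, 20, -10) True
```

Of course, one numeric example does not replace the algebraic simplification above, but it is a quick way to catch sign mistakes.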
Next, let us see in Interactive Illustration 4.10 what this means geometrically.
Interactive Illustration 4.10: In this illustration, we will explore $(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v})$, and in particular, the length: $\ln{(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v})}$. Recall that $\ln{\vc{u} \times \vc{v}}$ is the area of the parallelogram spanned by $\vc{u}$ and $\vc{v}$. We first show the two vectors, $\vc{u}$ and $\vc{v}$, which are both moveable. Click/press Forward to continue.
Interactive Illustration 4.10: In this final step of the illustration, the remaining four triangular areas have been painted with cyan color. It is quite clear that the area of the four green triangles (of the parallelogram in the middle) is the same as the area of the four cyan triangles. Hence, it is quite clear that $\hid{\ln{(\vc{u} + \vc{v}) \times (\vc{u} - \vc{v})}=}$ $\hid{2\ln{\vc{u} \times \vc{v}}}$, which is essentially what we found out in the beginning of this example by simplifying expressions.
Finally, we notice that the three identities
 \begin{align} \vc{e}_1 \times \vc{e}_2 &= \vc{e}_3, \\ \vc{e}_2 \times \vc{e}_3 &= \vc{e}_1, \\ \vc{e}_3 \times \vc{e}_1 &= \vc{e}_2, \\ \end{align} (4.11)
hold for any right-handed orthonormal basis, $\vc{e}_1$, $\vc{e}_2$, and $\vc{e}_3$. Here the aid of the figure to the right may be useful. Also, since $\vc{u}\times \vc{v} = -\vc{v}\times \vc{u}$, it follows that
 \begin{align} \vc{e}_2 \times \vc{e}_1 &= -\vc{e}_3, \\ \vc{e}_3 \times \vc{e}_2 &= -\vc{e}_1, \\ \vc{e}_1 \times \vc{e}_3 &= -\vc{e}_2. \\ \end{align} (4.12)
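The basis identities in Equations (4.11) and (4.12) are easy to confirm with the component formula from Theorem 4.2 below; the Python `cross` helper here is our own:

```python
def cross(u, v):
    # Component formula for the vector product (Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
# Equation (4.11): products in cyclic order give the third basis vector.
print(cross(e1, e2) == e3, cross(e2, e3) == e1, cross(e3, e1) == e2)  # True True True
# Equation (4.12): reversing the order negates the result.
print(cross(e2, e1) == (0, 0, -1))  # True
```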

Definition 4.1 is somewhat abstract. However, it turns out that there is a very direct way of computing the vector product in an orthonormal basis in three dimensions. This is summarized in the following theorem.

Theorem 4.2: Vector Product in Orthonormal Basis
For three-dimensional vectors, $\vc{u}$ and $\vc{v}$, and for a positively oriented and orthonormal basis, the vector product is
 $$\vc{u} \times \vc{v} = (u_y v_z - u_z v_y, \, u_z v_x - u_x v_z, \, u_x v_y- u_y v_x).$$ (4.13)

Given that $\vc{u} = u_x\vc{e}_1 + u_y\vc{e}_2 + u_z\vc{e}_3$ and $\vc{v} = v_x\vc{e}_1 + v_y\vc{e}_2 + v_z\vc{e}_3$, the vector product can be expressed as below, where the rules in Theorem 4.1 are used,
 \begin{align} \vc{u} \times \vc{v} =& (u_x\vc{e}_1 + u_y\vc{e}_2 + u_z\vc{e}_3) \times (v_x\vc{e}_1 + v_y\vc{e}_2 + v_z\vc{e}_3)\\ =& u_x v_x \underbrace{\vc{e}_1 \times \vc{e}_1}_{=\vc{0}} + u_x v_y \vc{e}_1 \times \vc{e}_2 + u_x v_z \vc{e}_1 \times \vc{e}_3 + \\ & u_y v_x \vc{e}_2 \times \vc{e}_1 + u_y v_y \underbrace{\vc{e}_2 \times \vc{e}_2}_{=\vc{0}} + u_y v_z \vc{e}_2 \times \vc{e}_3 + \\ & u_z v_x \vc{e}_3 \times \vc{e}_1 + u_z v_y \vc{e}_3 \times \vc{e}_2 + u_z v_z \underbrace{\vc{e}_3 \times \vc{e}_3}_{=\vc{0}} \\ =& (u_x v_y - u_y v_x)\vc{e}_1 \times \vc{e}_2 + (u_x v_z - u_z v_x) \vc{e}_1 \times \vc{e}_3 + (u_y v_z - u_z v_y)\vc{e}_2 \times \vc{e}_3. \end{align} (4.14)
With some help from Equation (4.11) and Equation (4.12), this can be rewritten as
 \begin{align} \vc{u} \times \vc{v} =& (u_x v_y - u_y v_x)\underbrace{\vc{e}_1 \times \vc{e}_2}_{=\vc{e}_3} + (u_x v_z - u_z v_x)\underbrace{\vc{e}_1 \times \vc{e}_3}_{=-\vc{e}_2} + (u_y v_z - u_z v_y)\underbrace{\vc{e}_2 \times \vc{e}_3}_{=\vc{e}_1} \\ =& (u_y v_z - u_z v_y)\vc{e}_1 + (u_z v_x -u_x v_z)\vc{e}_2 + (u_x v_y - u_y v_x)\vc{e}_3. \end{align} (4.15)
This concludes the proof.
$\square$
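As a sanity check of Theorem 4.2, the component formula can be implemented directly and tested against property (1) of Definition 4.1 (orthogonality). This is a minimal Python sketch; the helper names `cross` and `dot` are our own:

```python
def cross(u, v):
    # Component formula from Theorem 4.2 (orthonormal, right-handed basis).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 2, 3), (4, -1, 2)
n = cross(u, v)
print(n)                     # (7, 10, -9)
# Property (1) of Definition 4.1: the result is orthogonal to both inputs.
print(dot(n, u), dot(n, v))  # 0 0
```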

Using Theorem 4.2 to compute the vector product is extremely useful, but the formula may be difficult to remember. One way to make it easier to remember is called Sarrus' rule. The basis vectors, $\vc{e}_1$, $\vc{e}_2$, and $\vc{e}_3$, are written twice on a row, and on the row below, the $x$-, $y$-, and $z$-components of $\vc{u}$ are written down, also twice. Then, on the third row, the same is done for $\vc{v}$. This is shown below:
 $$\begin{array}{cccccc} + & + & + & - & - & - \\ \searrow & \searrow & \searrow & \swarrow & \swarrow & \swarrow \\ \vc{e}_1 & \vc{e}_2 & \vc{e}_3 & \vc{e}_1 & \vc{e}_2 & \vc{e}_3 \\ u_x & u_y & u_z & u_x & u_y & u_z \\ v_x & v_y & v_z & v_x & v_y & v_z \\ \end{array}$$ (4.16)
Next, you follow the arrows and multiply the terms on the respective diagonals, and add them together with the sign above each arrow. This results in
 $$\vc{u}\times \vc{v} = +\vc{e}_1 u_y v_z +\vc{e}_2 u_z v_x +\vc{e}_3 u_x v_y -\vc{e}_1 u_z v_y -\vc{e}_2 u_x v_z -\vc{e}_3 u_y v_x,$$ (4.17)
which simply is $\vc{u}\times \vc{v}=(u_y v_z-u_z v_y,\, u_z v_x-u_x v_z,\, u_x v_y-u_y v_x)$, i.e., Theorem 4.2.

Another way is to write down the vector components in rows, where $\vc{u}$'s components are on top of $\vc{v}$'s components, that is,
 $$\begin{array}{ccc} u_x & u_y & u_z \\ v_x & v_y & v_z \end{array}$$ (4.18)
To find the $x$-component of $\vc{u} \times \vc{v}$, simply strike out the first column, multiply the terms on the southeast diagonal, and subtract the product on the southwest diagonal,
 $$\left| \begin{array}{cc} u_y & u_z \\ v_y & v_z \end{array} \right| = u_y v_z - u_z v_y.$$ (4.19)
Note the vertical lines around the remaining terms above. This is the notation for the determinant, which is the topic of Chapter 7. The same is then done to find the $y$-component and the $z$-component, with the exception that there is a minus sign before the $y$-component. This is summarized as
 $$\vc{u}\times \vc{v} = \Biggl( +\left| \begin{array}{cc} u_y & u_z \\ v_y & v_z \end{array} \right|, \, -\left| \begin{array}{cc} u_x & u_z \\ v_x & v_z \end{array} \right|, \, +\left| \begin{array}{cc} u_x & u_y \\ v_x & v_y \end{array} \right| \Biggr).$$ (4.20)

There are two types of triple products, i.e., products with three terms, which are based on vector products. One of them generates a vector and is therefore called the vector triple product, which is the topic of Section 4.6. The other, which is the topic of this section, is called the scalar triple product, since it generates a scalar. The definition follows below.

Definition 4.2: Scalar Triple Product
The scalar triple product of three vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$, is
 $$(\vc{u} \times \vc{v}) \cdot \vc{w}.$$ (4.21)

Interestingly, this expression is also how the determinant (which is the topic of Chapter 7) of a $3\times 3$ matrix is calculated. As it turns out, the scalar triple product can be used to compute the volume of a parallelepiped spanned by three vectors, $\vc{u}$, $\vc{v}$, and $\vc{w}$.
This is summarized in the following theorem:

Theorem 4.3: Signed Volume of Parallelepiped
The scalar triple product, $(\vc{u} \times \vc{v}) \cdot \vc{w}$, can be used to calculate the volume, $V$, of a parallelepiped spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$, as
 \begin{align} V = +(\vc{u} \times \vc{v}) \cdot \vc{w}, & \hspace{5pt} \text{if the vectors are positively oriented}, \\ V = -(\vc{u} \times \vc{v}) \cdot \vc{w}, & \hspace{5pt} \text{if the vectors are negatively oriented}. \\ \end{align} (4.22)
This means that the volume is always the absolute value of the scalar triple product, i.e., $|(\vc{u} \times \vc{v}) \cdot \vc{w}|$. The volume is zero if any of the vectors is the zero vector, if two (or more) of the vectors are parallel, or if all three vectors lie in the same plane.

With the help of the figure of a parallelepiped, spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$, to the right, this becomes fairly straightforward. As we know from Figure 4.7 and from the definition of the vector product (Definition 4.1), the length of the vector product is the area of the parallelogram (yellow in the figure). That is, the area of the base parallelogram is $\ln{\vc{u} \times \vc{v}}$, where the direction of the vector product depends on the orientation of $\vc{u}$ and $\vc{v}$. Also, from the definition of the dot product (Definition 3.1), we know that $\vc{a}\cdot \vc{w} = \ln{\vc{a}}\,\ln{\vc{w}} \cos[\vc{a},\vc{w}]$. Now, we introduce $\vc{a} = \vc{u} \times \vc{v}$, with the consequence that
 \begin{align} (\vc{u} \times \vc{v})\cdot \vc{w} =& \vc{a} \cdot \vc{w} = \ln{\vc{a}}\,\ln{\vc{w}} \cos[\vc{a},\vc{w}] \\ =& \underbrace{\ln{\vc{u} \times \vc{v}}}_{\text{base area}} \, \underbrace{\ln{\vc{w}} \cos[\vc{u} \times \vc{v},\vc{w}]}_{\text{height with sign}}. \end{align} (4.23)
As can be seen, the final expression is a product of two factors, where the first is the area of the base parallelogram, and the second is the height with a sign. If we disregard the sign for a moment, we have the volume of the parallelepiped. The sign comes from the cosine term, i.e., $\cos[\vc{u} \times \vc{v},\vc{w}]$. Now, if the smallest angle, $[\vc{u} \times \vc{v},\vc{w}]$, between $\vc{u} \times \vc{v}$ and $\vc{w}$ is less than $\pi/2$, then the cosine term is positive, and if it is bigger than $\pi/2$, it is negative. Since $\vc{u}$, $\vc{v}$, and $\vc{u} \times \vc{v}$ form a right-handed system, the cosine term must be positive when $\vc{u}$, $\vc{v}$, and $\vc{w}$ are positively oriented, and vice versa. This concludes the proof.
$\square$

As can be seen, it is a simple matter to test whether three vectors are positively oriented, i.e., whether they form a right-handed system: simply check whether $(\vc{u}\times \vc{v}) \cdot \vc{w} > 0$. Also, note that for a right-handed orthonormal basis in three dimensions, we have $\ln{\vc{e}_1}=1$, $\ln{\vc{e}_2}=1$, and $\ln{\vc{e}_3}=1$, and $\vc{e}_1 \cdot \vc{e}_2 = 0$, $\vc{e}_1 \cdot \vc{e}_3 = 0$, and $\vc{e}_2 \cdot \vc{e}_3 = 0$. Furthermore, it must hold that $(\vc{e}_1 \times \vc{e}_2)\cdot \vc{e}_3 > 0$. In fact, since the vectors are normalized and pairwise orthogonal, we have
 \begin{gather} \underbrace{(\vc{e}_1 \times \vc{e}_2)}_{\vc{e}_3} \cdot \vc{e}_3 = \vc{e}_3\cdot \vc{e}_3=1, \end{gather} (4.24)
for a right-handed orthonormal basis.
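The orientation test described above is easy to express in code. The following Python sketch (the function names are ours) computes the scalar triple product and uses its sign to classify the handedness of three vectors:

```python
def cross(u, v):
    # Component formula for the vector product (Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scalar_triple(u, v, w):
    # (u x v) . w: the signed volume of the parallelepiped spanned by u, v, w.
    return dot(cross(u, v), w)

u, v, w = (2, 0, 0), (0, 3, 0), (0, 0, 4)
print(scalar_triple(u, v, w))  # 24: positive, so u, v, w form a right-handed system
print(scalar_triple(v, u, w))  # -24: swapping two vectors flips the orientation
```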

Note that since the scalar triple product computes the volume (with sign) of the parallelepiped, we can change the order of the vectors in the expression for the scalar triple product as long as their cyclic order is kept, since that preserves the orientation. Hence, we have
 $$(\vc{u} \times \vc{v}) \cdot \vc{w} = (\vc{v} \times \vc{w}) \cdot \vc{u} = (\vc{w} \times \vc{u}) \cdot \vc{v}.$$ (4.25)
It is also possible to change signs in these expressions by using rule (1) of Theorem 4.1, $\vc{u} \times \vc{v} = -\vc{v} \times \vc{u}$:
 $$(\vc{u} \times \vc{v}) \cdot \vc{w} = -(\vc{v} \times \vc{u}) \cdot \vc{w} = -(\vc{w} \times \vc{v}) \cdot \vc{u} = -(\vc{u} \times \vc{w}) \cdot \vc{v}.$$ (4.26)
Note that for an orthonormal basis that is also right-handed, i.e., a positively oriented system, it must hold that $(\vc{e}_1\times \vc{e}_2) \cdot \vc{e}_3 = 1$.

As the name implies, the vector triple product is a product of three vectors. Before we present the full vector triple product, we present a simplified version of it, where two of the vectors are the same:

Theorem 4.4: Simplified Vector Triple Product
When the first two vectors in the vector triple product are the same, we have
 $$\vc{u} \times (\vc{u} \times \vc{v}) = (\vc{u} \cdot \vc{v})\vc{u} - (\vc{u} \cdot \vc{u})\vc{v}.$$ (4.27)

As can be seen in the figure to the right, $\vc{u}$, $\vc{u}\times \vc{v}$, and $\vc{u}\times (\vc{u}\times \vc{v})$ form a right-handed system, and they are all mutually orthogonal to each other. This means that we can create a vector, which we call $\vc{a}$, by projecting $\vc{v}$ onto $\vc{u}\times (\vc{u}\times \vc{v})$:
 $$\vc{a} = \vc{v} - \proj{\vc{u}}{\vc{v}}.$$ (4.28)
As can be seen, to avoid using $\vc{u}\times (\vc{u}\times \vc{v})$, we have subtracted the projection of $\vc{v}$ onto $\vc{u}$ from $\vc{v}$. By construction (check Figure 4.13), $\vc{a}$ and $\vc{u}\times (\vc{u}\times \vc{v})$ are parallel but point in opposite directions. Hence, we need to scale $\vc{a}$ by a factor in order to make it the same length as $\vc{u}\times (\vc{u}\times \vc{v})$. The squared length of $\vc{u}\times (\vc{u}\times \vc{v})$ is
 \begin{align} \ln{\vc{u}\times (\vc{u}\times \vc{v})}^2 =& \ln{\vc{u}}^2 \, \underbrace{ \ln{\vc{u}}^2 \ln{\vc{v}}^2 \sin^2[\vc{u}, \vc{v}] }_{ \ln{\vc{u} \times \vc{v}}^2} \\ =&\ln{\vc{u}}^4\ln{\vc{v}}^2 \bigl(1-\cos^2[\vc{u}, \vc{v}]\bigr) \\ =&\ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^4\ln{\vc{v}}^2 \cos^2[\vc{u}, \vc{v}] \\ =& \ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^2 (\vc{u} \cdot \vc{v})^2. \end{align} (4.29)
On the first line above, we see the length of $\vc{u} \times \vc{v}$ times $\ln{\vc{u}}$. This is because $\vc{u}$ and $\vc{u} \times \vc{v}$ are already orthogonal, and hence the $\sin$ term equals 1. Next, we write out the entire projection formula in the expression for $\vc{a}$,
 $$\vc{a} = \vc{v} - \frac{\vc{u}\cdot \vc{v}}{\vc{u}\cdot \vc{u}}\vc{u}.$$ (4.30)
If we scale the expression above by $\vc{u}\cdot \vc{u}$, we get
 $$\vc{b} = (\vc{u}\cdot \vc{u})\vc{v} - (\vc{u}\cdot \vc{v})\vc{u}.$$ (4.31)
Now, let us look at the squared length of this vector,
 \begin{align} \ln{\vc{b}}^2 &= (\vc{u}\cdot \vc{u})^2 (\vc{v}\cdot\vc{v}) -2(\vc{u}\cdot \vc{u})(\vc{u}\cdot \vc{v})^2 + (\vc{u}\cdot \vc{v})^2(\vc{u} \cdot \vc{u}) \\ &=(\vc{u}\cdot \vc{u})^2 (\vc{v}\cdot\vc{v}) -(\vc{u}\cdot \vc{u})(\vc{u}\cdot \vc{v})^2 \\ &= \ln{\vc{u}}^4\ln{\vc{v}}^2 - \ln{\vc{u}}^2(\vc{u}\cdot \vc{v})^2, \end{align} (4.32)
which is the same squared length as in Equation (4.29). Since $\vc{b} = (\vc{u}\cdot \vc{u})\vc{a}$ points in the opposite direction of $\vc{u}\times (\vc{u}\times \vc{v})$ and has the same length, it follows that $\vc{u}\times (\vc{u}\times \vc{v}) = -\vc{b} = (\vc{u} \cdot \vc{v})\vc{u} - (\vc{u} \cdot \vc{u})\vc{v}$, which proves the theorem.
$\square$
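Theorem 4.4 can be spot-checked numerically. The following small Python sketch (the helpers are ours) compares the two sides of Equation (4.27) for one pair of integer vectors, where the arithmetic is exact:

```python
def cross(u, v):
    # Component formula for the vector product (Theorem 4.2).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def scale(k, v):
    return tuple(k * a for a in v)

u, v = (1, -2, 2), (3, 1, 0)
lhs = cross(u, cross(u, v))
# Right hand side of Equation (4.27): (u . v)u - (u . u)v.
rhs = sub(scale(dot(u, v), u), scale(dot(u, u), v))
print(lhs, rhs, lhs == rhs)  # (-26, -11, 2) (-26, -11, 2) True
```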

Next, the full vector triple product, sometimes called Lagrange's formula or the triple product expansion, is presented in the theorem below.

Theorem 4.5: Vector Triple Product
The vector triple product of $\vc{u}$, $\vc{v}$, and $\vc{w}$ is
 $$\vc{u} \times (\vc{v} \times \vc{w}) = (\vc{u} \cdot \vc{w})\vc{v} - (\vc{u} \cdot \vc{v})\vc{w}.$$ (4.33)

We assume that the vectors $\vc{v}$ and $\vc{w}$ are not parallel, because otherwise $\vc{v} \times \vc{w} = \vc{0}$, and both sides of Equation (4.33) are easily seen to be the zero vector. Hence, $\vc{v}$, $\vc{w}$, and $\vc{v} \times \vc{w}$ are linearly independent, and $\vc{u}$ can be expressed in terms of them:
 $$\vc{u} = a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}),$$ (4.34)
for some values, $a$, $b$, and $c$. Let us simplify the expression for the dot products, $\vc{u}\cdot \vc{v}$ and $\vc{u}\cdot \vc{w}$,
 \begin{align} \vc{u}\cdot \vc{v} & = (a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}))\cdot \vc{v} \\ & = a(\vc{v}\cdot\vc{v}) + b(\vc{w}\cdot \vc{v}) + c\underbrace{(\vc{v} \times \vc{w})\cdot \vc{v}}_{=0}\\ & = a(\vc{v}\cdot\vc{v}) + b(\vc{w}\cdot \vc{v}) . \end{align} (4.35)
In the same way, we get
 $$\vc{u}\cdot \vc{w} = a(\vc{v}\cdot\vc{w}) + b(\vc{w}\cdot \vc{w}).$$ (4.36)
Now, let us use Equation (4.34) and Theorem 4.4, and simplify $\vc{u} \times (\vc{v} \times \vc{w})$, i.e.,
 \begin{align} \vc{u} \times (\vc{v} \times \vc{w}) &= (a\vc{v} + b\vc{w} + c(\vc{v} \times \vc{w}))\times (\vc{v} \times \vc{w}) \\ &= a\vc{v}\times (\vc{v} \times \vc{w}) + b\vc{w} \times (\vc{v} \times \vc{w}) + c\underbrace{(\vc{v} \times \vc{w})\times (\vc{v} \times \vc{w})}_{=\vc{0}} \\ &= a\underbrace{\vc{v}\times (\vc{v} \times \vc{w})}_{(\vc{v} \cdot \vc{w})\vc{v} - (\vc{v} \cdot \vc{v})\vc{w}} - b\underbrace{\vc{w} \times (\vc{w} \times \vc{v})}_{(\vc{w} \cdot \vc{v})\vc{w} - (\vc{w} \cdot \vc{w})\vc{v}} \\ &= a\bigl((\vc{v} \cdot \vc{w})\vc{v} - (\vc{v} \cdot \vc{v})\vc{w}\bigr) - b\bigl((\vc{w} \cdot \vc{v})\vc{w} - (\vc{w} \cdot \vc{w})\vc{v}\bigr) \\ &= \underbrace{\bigl(a(\vc{v} \cdot \vc{w}) + b(\vc{w} \cdot \vc{w})\bigr)}_{\vc{u}\cdot\vc{w}}\vc{v} - \underbrace{\bigl(a(\vc{v} \cdot \vc{v}) + b(\vc{w} \cdot \vc{v})\bigr)}_{\vc{u}\cdot\vc{v}}\vc{w}\\ &= (\vc{u}\cdot\vc{w})\vc{v} - (\vc{u}\cdot\vc{v})\vc{w}, \end{align} (4.37)
which is what we wanted to prove. Note that Theorem 4.4 was used twice on the third row, and that Equation (4.35) and Equation (4.36) were used in the next-to-last row.

$\square$
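As a sanity check, the identity in Theorem 4.5 can also be verified numerically. The following Python sketch compares both sides of Equation (4.33) for one concrete choice of vectors; the helper functions and the test vectors are my own, not from the book.

```python
# Verify u x (v x w) = (u . w) v - (u . v) w for one set of vectors,
# using plain Python lists as three-dimensional vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def triple(u, v, w):
    """Left-hand side: u x (v x w), computed directly."""
    return cross(u, cross(v, w))

def triple_expanded(u, v, w):
    """Right-hand side: (u . w) v - (u . v) w."""
    return [dot(u, w) * vi - dot(u, v) * wi for vi, wi in zip(v, w)]

u, v, w = [1.0, 2.0, 3.0], [4.0, 0.0, -1.0], [2.0, 5.0, 7.0]
assert triple(u, v, w) == triple_expanded(u, v, w)
```

For these vectors, both sides evaluate to $(130, -5, -40)$, as a quick hand calculation confirms.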

We can use the vector triple product to prove that the vector product is not associative, i.e., that in general $(\vc{a} \times \vc{b}) \times \vc{c} \neq \vc{a} \times (\vc{b} \times \vc{c})$. Using Theorem 4.5 with $\vc{u} = \vc{a}$, $\vc{v} = \vc{b}$, and $\vc{w} = \vc{c}$ gives
 $$\vc{a} \times (\vc{b} \times \vc{c}) = (\vc{a}\cdot\vc{c})\vc{b} - (\vc{a}\cdot \vc{b})\vc{c}.$$ (4.38)
We can now use the theorem again, this time using $\vc{v} = \vc{a}$, $\vc{w} = \vc{b}$ and $\vc{u} = \vc{c}$, which gives
 $$\vc{c} \times (\vc{a} \times \vc{b}) = (\vc{c}\cdot\vc{b})\vc{a} - (\vc{c}\cdot \vc{a})\vc{b}.$$ (4.39)
In this second expression, we can change the order of $\vc{c}$ and $(\vc{a}\times \vc{b})$ if we also change the sign (see first rule of Theorem 4.1). This means that
 $$(\vc{a} \times \vc{b})\times \vc{c} = -(\vc{c}\cdot\vc{b})\vc{a} + (\vc{c}\cdot \vc{a})\vc{b}.$$ (4.40)
Now, for the vector product to be associative, i.e., $(\vc{a} \times \vc{b}) \times \vc{c} = \vc{a} \times (\vc{b} \times \vc{c})$, the right hand sides of Equation (4.38) and Equation (4.40) must be equal. Note that the dot products in parentheses are scalars, so setting the right hand sides equal gives an equation of the form
 $$k_1 \vc{b} + k_2 \vc{c} = k_3 \vc{b} + k_4 \vc{a},$$ (4.41)
for some scalars $k_1, k_2, k_3$, and $k_4$. Rearranging gives
 $$k_2 \vc{c} = (k_3-k_1) \vc{b} + k_4 \vc{a},$$ (4.42)
which means that $\vc{c}$ lies in the plane spanned by $\vc{a}$ and $\vc{b}$ (assuming $k_2 \neq 0$), and in general, that is not the case. Therefore, the vector product is not associative.
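A single counterexample also settles the matter. The following Python sketch, with a cross product helper and vectors of my own choosing, shows two parenthesizations giving different results:

```python
# A concrete counterexample to associativity of the vector product.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b, c = [1, 0, 0], [1, 1, 0], [0, 0, 1]
left = cross(cross(a, b), c)    # (a x b) x c
right = cross(a, cross(b, c))   # a x (b x c)
assert left != right            # the two parenthesizations disagree
```

Here $(\vc{a} \times \vc{b}) \times \vc{c}$ is the zero vector, while $\vc{a} \times (\vc{b} \times \vc{c})$ is not.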

The vector triple product theorem can also be used to show that the vector product satisfies the Jacobi identity, as shown below.

Theorem 4.6: Jacobi Identity for Vector Products
The vector triple product satisfies the following:
 $$\vc{u} \times (\vc{v} \times \vc{w}) + \vc{v} \times (\vc{w} \times \vc{u}) + \vc{w} \times (\vc{u} \times \vc{v}) = \vc{0}.$$ (4.43)

We simply use Theorem 4.5 on each term to prove the Jacobi identity:
 \begin{gather} \vc{u} \times (\vc{v} \times \vc{w}) + \vc{v} \times (\vc{w} \times \vc{u}) + \vc{w} \times (\vc{u} \times \vc{v}) = \\ (\vc{u} \cdot \vc{w})\vc{v} - (\vc{u} \cdot \vc{v})\vc{w} + (\vc{v} \cdot \vc{u})\vc{w} - (\vc{v} \cdot \vc{w})\vc{u} + (\vc{w} \cdot \vc{v})\vc{u} - (\vc{w} \cdot \vc{u})\vc{v} = \vc{0}, \end{gather} (4.44)
since $\vc{u}\cdot\vc{v} = \vc{v}\cdot\vc{u}$, which concludes the proof.
$\square$
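The cancellation above can be spot-checked numerically as well. This small Python sketch (helpers and vectors are mine) sums the three triple products of Equation (4.43) for one choice of vectors:

```python
# Numerical spot-check of the Jacobi identity for one choice of vectors.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

u, v, w = [1, 2, 3], [4, 0, -1], [2, 5, 7]
total = add(add(cross(u, cross(v, w)),
                cross(v, cross(w, u))),
            cross(w, cross(u, v)))
assert total == [0, 0, 0]   # the three terms cancel pairwise
```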

This chapter ends with some examples.

Example 4.2: Orientation of Two Two-Dimensional Vectors
Given two two-dimensional vectors, $\vc{u}$ and $\vc{v}$, in an orthonormal basis, we would like to determine how they are oriented. See Figure 4.4 for a small interactive figure on this topic. As we have seen, this is simple to do with three-dimensional vectors, since we can use Theorem 4.2 and investigate in which direction the vector product points. Hence, this can be done by augmenting the two-dimensional vectors with one more component, namely a $z$-component, which is set to zero. This means that we have $\vc{u}' = (u_x, u_y, 0)$ and $\vc{v}' = (v_x, v_y, 0)$. The vector product is then:
 $$\vc{u}' \times \vc{v}' = (u_x, u_y, 0) \times (v_x, v_y, 0) = (0,0, u_x v_y - u_y v_x).$$ (4.45)
Note that both the $x$- and the $y$-component are zero. This can be realized from the fact that both $\vc{u}'$ and $\vc{v}'$ lie in the $xy$-plane (i.e., their $z$-components are zero), and therefore, the vector product must be orthogonal to the $xy$-plane.
Hence, to determine the orientation of $\vc{u}$ and $\vc{v}$, simply compute $s = u_x v_y - u_y v_x$. If $s>0$, they are positively oriented; if $s<0$, they are negatively oriented; otherwise ($s=0$), they are parallel.
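The orientation test can be written as a few lines of Python. The function name below is my own; the formula is exactly the $z$-component of Equation (4.45).

```python
# Orientation of two 2D vectors via the z-component of the augmented
# vector product, as in Example 4.2.

def orientation(u, v):
    """Return +1 if (u, v) are positively oriented, -1 if negatively
    oriented, and 0 if the two 2D vectors are parallel."""
    s = u[0] * v[1] - u[1] * v[0]   # z-component of u' x v'
    return (s > 0) - (s < 0)

assert orientation([1, 0], [0, 1]) == 1    # positively oriented
assert orientation([0, 1], [1, 0]) == -1   # negatively oriented
assert orientation([2, 4], [1, 2]) == 0    # parallel
```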

Example 4.3: Normal and Plane Equation of a Triangle
Given a triangle with three vertices, $A$, $B$, and $C$, we want to compute the normal of the plane that the triangle lies in. First construct two edge vectors as:
 \begin{align} \vc{u} &= B - A, \\ \vc{v} &= C - A. \end{align} (4.46)
The normal, $\vc{n}$, is then simply the vector product of the edge vectors, i.e.,
 $$\vc{n} = \vc{u} \times \vc{v}.$$ (4.47)
In fact, this is the same technique that was used when computing Lambertian shading in Figure 4.2.

Now that we have the normal, $\vc{n}$, of the plane, and we know that $A$, $B$, and $C$ all lie in the plane of the triangle, it is possible to find the plane equation (Section 3.6.2) of the triangle. The plane equation is simply
 \begin{gather} \vc{n} \cdot (P-A) = 0 \\ \Longleftrightarrow \\ (\vc{u} \times \vc{v})\cdot (P-A) = 0, \end{gather} (4.48)
where $P$ is any point on the plane, and $A$ is a point that we already know lies in the plane. We could have chosen any point, e.g., $B$ or $C$. Note that since the normal is expressed as a vector product, the entire plane equation is expressed as a scalar triple product.
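Example 4.3 translates directly into code. The sketch below (function names are mine) computes the normal of a concrete triangle and uses the scalar triple product form of the plane equation as a point-in-plane test:

```python
# Triangle normal n = u x v and the plane equation (u x v) . (P - A) = 0.

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

A, B, C = [0, 0, 0], [1, 0, 0], [0, 1, 0]
n = cross(sub(B, A), sub(C, A))      # n = u x v
assert n == [0, 0, 1]

def on_plane(P):
    """Scalar triple product form of the plane equation."""
    return dot(n, sub(P, A)) == 0

assert on_plane([5, -3, 0])          # any point with z = 0 lies in the plane
assert not on_plane([0, 0, 1])
```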

Example 4.4: Volume of Tetrahedron
Given a tetrahedron that is defined by four points, $A$, $B$, $C$, and $D$, we seek the volume of this geometrical shape. First, the three edge vectors from $A$ are constructed as:
 \begin{align} \vc{u} &= B - A, \\ \vc{v} &= C - A, \\ \vc{w} &= D - A. \end{align} (4.49)
The volume of the tetrahedron is one sixth of the absolute value of the scalar triple product of these three vectors; that is, six such tetrahedra fit inside the parallelepiped spanned by $\vc{u}$, $\vc{v}$, and $\vc{w}$. The reader is encouraged to try to show this with pen and paper. The volume of the tetrahedron is then expressed as
 $$\frac{1}{6} | (\vc{u} \times \vc{v}) \cdot \vc{w} |,$$ (4.50)
where we need the absolute value of the scalar triple product in case $\vc{u}$, $\vc{v}$, and $\vc{w}$ form a negatively oriented system.
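Equation (4.50) is a one-liner in code. In the Python sketch below (function names are my own), a "corner" tetrahedron with three unit edges along the coordinate axes gives the expected volume of $1/6$:

```python
# Volume of a tetrahedron as |(u x v) . w| / 6, as in Example 4.4.

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def tetra_volume(A, B, C, D):
    u, v, w = sub(B, A), sub(C, A), sub(D, A)
    return abs(dot(cross(u, v), w)) / 6.0

# Three unit edges along the axes: the spanned parallelepiped is the unit
# cube, so the tetrahedron has volume 1/6.
assert tetra_volume([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]) == 1 / 6
```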

Example 4.5: Orthonormal Basis from Two Vectors
Assume that we have two non-parallel vectors, $\vc{u}$ and $\vc{v}$, and that we wish to produce an orthonormal basis, where $\vc{u}$ is parallel to one of the basis vectors, that is
 $$\vc{e}_1 = \frac{1}{\ln{\vc{u}}} \vc{u}.$$ (4.51)
The first basis vector is simply $\vc{u}$ normalized. Since the angle between $\vc{u}$ and $\vc{v}$ can be anything, we cannot use a normalized version of $\vc{v}$ directly as a basis vector. However, we can instead use the normalized vector product, which is orthogonal to both $\vc{u}$ and $\vc{v}$:
 $$\vc{e}_2 = \frac{1}{\ln{\vc{u}\times \vc{v}}} (\vc{u}\times \vc{v}).$$ (4.52)
Finally, $\vc{e}_3$ is created from $\vc{e}_1$ and $\vc{e}_2$ using the vector product as
 $$\vc{e}_3 = \vc{e}_1 \times \vc{e}_2.$$ (4.53)
As can be seen, the third basis vector can be created directly using a simplified vector triple product (Theorem 4.4)
 $$\vc{e}_3 = \vc{e}_1 \times \vc{e}_2 = \frac{1}{\ln{\vc{u}} \, \ln{\vc{u}\times \vc{v}}} \vc{u} \times (\vc{u}\times \vc{v}).$$ (4.54)
The basis is then $\{\vc{e}_1, \vc{e}_2, \vc{e}_3\}$.
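Equations (4.51) through (4.53) can be sketched as a short Python function (the function names are mine, not from the book), after which the orthonormality of the result can be checked with dot products:

```python
# Build an orthonormal basis from two non-parallel vectors u and v,
# following Equations (4.51)-(4.53).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(a):
    length = math.sqrt(dot(a, a))
    return [x / length for x in a]

def basis_from(u, v):
    e1 = normalize(u)              # Equation (4.51)
    e2 = normalize(cross(u, v))    # Equation (4.52)
    e3 = cross(e1, e2)             # Equation (4.53)
    return e1, e2, e3

e1, e2, e3 = basis_from([2.0, 0.0, 0.0], [1.0, 1.0, 0.0])
for a in (e1, e2, e3):
    assert abs(dot(a, a) - 1.0) < 1e-12    # unit length
assert abs(dot(e1, e2)) < 1e-12            # pairwise orthogonal
assert abs(dot(e1, e3)) < 1e-12
assert abs(dot(e2, e3)) < 1e-12
```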

In Section 4.1, we needed to compute a normalized normal vector of a triangle. A normal vector of a triangle is a vector that is orthogonal to the plane of the triangle. Now that we know how the vector product works, this is a simple matter. Assume that the vertices of a triangle are called $P_1$, $P_2$, and $P_3$, i.e., they are three-dimensional points. Let us choose $P_3$ as a reference point from where we compute edge vectors, that is, vectors from $P_3$ to $P_1$ and $P_2$. This can be expressed as
 \begin{align} \vc{e}_1 &= P_1 - P_3, \\ \vc{e}_2 &= P_2 - P_3. \end{align} (4.55)
Now, the non-normalized normal vector is simply the vector product between $\vc{e}_1$ and $\vc{e}_2$, i.e.,
 $$\vc{m} = \vc{e}_1 \times \vc{e}_2.$$ (4.56)
Normalizing $\vc{m}$ gives the normalized normal vector, i.e.,
 $$\vc{n} = \frac{\vc{m}}{\ln{\vc{m}}}.$$ (4.57)
Note that the direction of the normal vector depends on the order of the points; swapping two of them makes the normal point in the opposite direction.
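The steps in Equations (4.55) through (4.57) can be collected into one small Python function; the function name is my own, and the last assertion illustrates how reordering the vertices flips the normal:

```python
# Normalized normal vector of a triangle with vertices P1, P2, P3,
# following Equations (4.55)-(4.57).
import math

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def triangle_normal(P1, P2, P3):
    m = cross(sub(P1, P3), sub(P2, P3))        # m = e1 x e2, non-normalized
    length = math.sqrt(sum(x * x for x in m))  # ||m||
    return [x / length for x in m]             # n = m / ||m||

assert triangle_normal([1, 0, 0], [0, 1, 0], [0, 0, 0]) == [0.0, 0.0, 1.0]
# Swapping two vertices flips the normal:
assert triangle_normal([0, 1, 0], [1, 0, 0], [0, 0, 0]) == [0.0, 0.0, -1.0]
```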
