This chapter is about a powerful tool called the dot product.
It is one of the essential building blocks in
computer graphics, and Interactive Illustration 3.1 below shows
a computer graphics program called a ray tracer. The idea of a ray tracer is to generate an image
of a set of geometrical objects (in the case below, there are only spheres). These are lit by
a number of light sources, located at three-dimensional positions. The user must also
set up a virtual camera, i.e., a camera position and field of view, and the camera's direction (i.e., where
it looks). The ray tracer then traces rays from the camera position in the camera direction, through
a set of pixels in the image plane of the camera. The program then finds the closest geometric object,
and determines whether any light reaches that point directly from the light sources (otherwise, it will be in shadow).
Reflection rays may also be traced in order to create reflective objects (such as the
middle sphere in Interactive Illustration 3.1).
Try out the ray tracing program by pressing Start below the illustration.
Interactive Illustration 3.1:
A ray tracing program can create images like the one above. Note that a "Coarse rendering pass" happens
first, which quickly creates an image with lower quality. This is to maintain interactivity. The user
may click/touch and move the mouse/finger (left/right), while clicking/pressing in order to re-render the scene from
another viewpoint. After the coarse rendering pass, a "Refinement rendering pass" occurs, which
renders the pixels once again, but at much higher quality.
This removes the jagged edges (often called the jaggies) on the sphere silhouettes, for example.
In that mode, there is a white pixel showing
the current point of progress (which starts at the top, and works downwards, from the left to the right).
In the ray tracing program above, the dot product was used to calculate the intersection between
a ray and a sphere, and the dot product was also used to measure the length to the intersection points.
In addition, the law of reflection, which is used to calculate reflective objects, was implemented using
dot products. Both these topics will be explained again in Section 3.7,
when the reader knows how the dot product works.
In general, the dot product is really about metrics, i.e., how to measure angles and lengths of vectors.
Two short sections on angles and length follow, and then comes the major section in this chapter,
which defines and motivates the dot product, and also includes, for example,
rules and properties of the dot product in Section 3.2.3.
Section 3.3 introduces the concept of an orthonormal basis
and in Section 3.4, a set of often-useful inequalities is presented.
Section 3.5 shows some examples of how to use the dot product. Then follows
a section about lines and planes, and finally, there is a follow-up section
on ray tracing.
The smallest angle between one vector, $\vc{u}$, and another vector, $\vc{v}$,
is denoted by $[\vc{u},\vc{v}]$.
To the right, the smallest angles between pairs of vectors are illustrated with green arcs,
and in one case, the angle is illustrated with a green square. In this latter case,
the angle is $90^\circ$ or $\pi/2$ radians, i.e., $[\vc{u},\vc{v}]=\pi/2$.
This is also denoted $\vc{u} \perp \vc{v}$, which illustrates that the vectors are orthogonal,
which is the same as perpendicular.
Note that the angle is $0$ in the lower left corner, and $\pi$ radians in the middle
in the bottom row. In both these cases, one can say that the vectors are
collinear since they lie on a shared line, and they are in fact also parallel.
In the lower left corner, the vectors are parallel and have the same direction, while
in the middle in the bottom row, the vectors are parallel but have opposite directions.
When two vectors are parallel, this is denoted $\vc{u}\, || \,\vc{v}$.
Also note that if $\vc{u}$ and $\vc{v}$ are parallel, then it must hold that
$\vc{v} = k \vc{u}$ for some value of $k$.
We are now ready for the definition of the dot product itself:
Definition 3.1: Dot Product
The dot product between two vectors, $\vc{u}$ and $\vc{v}$, is denoted $\vc{u}\cdot \vc{v}$, and
is defined as the scalar value
\begin{equation}
\vc{u}\cdot \vc{v} = \left\{
\begin{array}{ll}
\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}], & \text{if } \vc{u}\neq \vc{0} \text{ and } \vc{v}\neq \vc{0},\\
0, & \text{if } \vc{u}=\vc{0} \text{ or } \vc{v}=\vc{0}.
\end{array}
\right.
\end{equation}
Recall from Chapter 2 that $||\vc{v}||$ denotes the length of the
vector $\vc{v}$. Since the length of a vector is always positive, the
dot product will be positive if and only if $\cos[\vc{u},\vc{v}]$ is
positive. From this we can deduce the following rules about the dot
product (for $\vc{u}\neq\vc{0}$ and $\vc{v}\neq\vc{0}$):
\begin{equation}
\begin{array}{lcl}
\vc{u}\cdot \vc{v} > 0 & \Longleftrightarrow & [\vc{u},\vc{v}] < \pi/2,\\
\vc{u}\cdot \vc{v} < 0 & \Longleftrightarrow & [\vc{u},\vc{v}] > \pi/2,\\
\vc{u}\cdot \vc{v} = 0 & \Longleftrightarrow & [\vc{u},\vc{v}] = \pi/2.
\end{array}
\end{equation}
Especially the last one is important: if the two vectors $\vc{u}$ and
$\vc{v}$ are orthogonal (perpendicular) to each other, then $\vc{u}\cdot \vc{v} = 0$. As we will see later in this chapter, orthogonality
is a useful feature which often simplifies calculations.
Note that the dot product produces a scalar value, and therefore, it is sometimes called the scalar product.
Example 3.1: Simple dot product example Assume we have two vectors, $\vc{u}$ and $\vc{v}$. The length of $\vc{u}$ is $4$, and the length of $\vc{v}$ is $3$.
The angle between them is $\frac{\pi}{4}$. Calculate the dot product $\vc{u} \cdot \vc{v}$.
Neither of the vectors has zero length. According to Definition 3.1,
the scalar product of $\vc{u}$ and $\vc{v}$ is
\begin{equation}
\vc{u} \cdot \vc{v} = \ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}] = 4 \cdot 3 \cdot \cos\frac{\pi}{4} = 12 \cdot \frac{\sqrt{2}}{2} = 6\sqrt{2}.
\end{equation}
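This calculation is easy to reproduce numerically. Below is a small Python sketch (illustrative only, not part of the book's material) that evaluates the geometric definition directly:

```python
import math

def dot_from_lengths(len_u, len_v, angle):
    # Geometric definition of the dot product: |u| |v| cos[u, v].
    return len_u * len_v * math.cos(angle)

# Example 3.1: |u| = 4, |v| = 3, and the angle is pi/4.
print(dot_from_lengths(4.0, 3.0, math.pi / 4))  # 6 * sqrt(2), approximately 8.485
```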
A unit vector is a vector whose length is 1, that is, $\vc{v}$ is a unit vector if $\ln{\vc{v}}=1$.
From every non-zero vector, $\vc{v}$, a unit vector can be created. This is called normalizing
the vector, and the process is called normalization.
A normalized vector $\vc{n}$ is created from $\vc{v}$ by dividing $\vc{v}$ by its length, $\ln{\vc{v}}$, that is,
\begin{equation}
\vc{n} = \frac{1}{\ln{\vc{v}}}\vc{v} = \frac{\vc{v}}{\ln{\vc{v}}}.
\end{equation}
Next, we need to show that $\vc{n}$ is indeed a unit vector, i.e., $\ln{\vc{n}}=1$. Let us denote $l=1/\ln{\vc{v}}$,
and simplify the expression for the length of $\vc{n}$ as
$\ln{\vc{n}} = \ln{l\vc{v}} = \abs{l}\,\ln{\vc{v}}=\frac{1}{\ln{\vc{v}}}\ln{\vc{v}}=1$.
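In code, normalization is one division per component. A minimal Python sketch (illustrative only), using the component length formula that is derived later in this chapter:

```python
import math

def normalize(v):
    # Divide every component by the vector's length; valid only for non-zero v.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

n = normalize((3.0, 4.0))
print(n)  # (0.6, 0.8), which indeed has length 1
```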
Note that if both vectors are normalized, i.e., $\ln{\vc{u}}=\ln{\vc{v}}=1$,
the dot product simplifies to $\vc{u} \cdot \vc{v} = \cos[\vc{u},\vc{v}]$.
This fact is very useful in shading computations for computer graphics, where the cosine of the angle between two vectors is often needed.
In fact, normalized vectors were used extensively in Interactive Illustration 3.1 for
the tracing of rays and for the shading there.
From trigonometry, it is known that in a right triangle, the cosine of one of the smaller angles
is related to the hypotenuse and the length of one of the shorter sides as
\begin{equation}
\cos \theta = \frac{a}{c},
\end{equation}
where $c$ is the length of the hypotenuse, and $a$ is the length of the shorter side that makes an angle, $\theta$,
with the hypotenuse. This is also illustrated to the right.
So in these very specific situations, it is possible to calculate an angle given that the two lengths,
$a$ and $c$, are known. Alternatively, one side length may be calculated given that the angle
and the other side length are known.
Assume that one vector, $\vc{u}$, shall be projected orthogonally onto another vector, $\vc{v}$, in order
to create a new vector, $\vc{w}$. This is illustrated to the right.
Since $\vc{u}$ and $\vc{w}$ make up a triangle with
a right angle, the following must hold: $\cos [\vc{u},\vc{v}] = \ln{\vc{w}} / \ln{\vc{u}}$, which
is simply an application of Equation (3.5) above. This in turn means that the length of $\vc{w}$
is $\ln{\vc{w}} = \ln{\vc{u}}\cos [\vc{u},\vc{v}]$. If $\vc{v}$ has the length one, i.e., $\ln{\vc{v}}=1$,
then the projected vector can be computed as
\begin{equation}
\vc{w} = \ln{\vc{w}}\,\vc{v} = \ln{\vc{u}}\cos [\vc{u},\vc{v}]\,\vc{v}.
\end{equation}
To convince yourself that $\ln{\vc{w}}\vc{v}$ really is equal to
$\vc{w}$, note that it both has the right direction (the direction of
$\vc{v}$, since $\ln{\vc{w}}$ is just a scalar) and it has the right
length (namely $\ln{\vc{w}}$, since the length of $\vc{v}$ equals
one).
However, it would be nice to handle cases where $\vc{v}$
does not have length one as well.
This can be done by normalizing the vector $\vc{v}$, i.e., multiplying it with $\frac{1}{\ln{\vc{v}}}$, which will give it length one.
If $\vc{v}$ is replaced by
$\frac{1}{\ln{\vc{v}}}\vc{v}$ in Equation (3.6) above, the following is obtained
\begin{equation}
\vc{w} = \ln{\vc{u}}\cos [\vc{u},\vc{v}]\, \frac{\vc{v}}{\ln{\vc{v}}} = \frac{\ln{\vc{u}}\cos [\vc{u},\vc{v}]}{\ln{\vc{v}}}\vc{v}.
\end{equation}
Here, we have used the fact that the normalization does not change the
direction, and we can hence continue using $\cos[\vc{u},\vc{v}]$ instead of $\cos[\vc{u}, \frac{\vc{v}}{\ln{\vc{v}}}]$.
Next, multiply both the numerator and the denominator by
$\ln{\vc{v}}$, which leads to the general orthogonal projection formula
\begin{equation}
\vc{w} = \frac{\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}]}{\ln{\vc{v}}^2}\vc{v}.
\end{equation}
The reason for this last step is that the numerator, $\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}]$, now
becomes equal to the definition of the dot product. Thus we can write the formula even shorter as
Definition 3.2: Orthogonal Projection
If $\vc{v}$ is a non-zero vector, then the orthogonal projection of $\vc{u}$ onto $\vc{v}$ is denoted $\proj{\vc{v}}{\vc{u}}$,
and is defined by
\begin{equation}
\proj{\vc{v}}{\vc{u}} = \frac{\vc{u} \cdot \vc{v}}{ \ln{\vc{v}}^2 } \vc{v}.
\end{equation}
Note that if $\ln{\vc{v}}=1$, i.e., $\vc{v}$ is normalized, then the expression for projection gets simpler:
$\proj{\vc{v}}{\vc{u}} = (\vc{u} \cdot \vc{v})\vc{v}$.
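Definition 3.2 translates directly into code. The sketch below (illustrative Python, assuming the component-wise dot product formula of Definition 3.4) projects one vector onto another:

```python
def dot(u, v):
    # Component-wise dot product in an orthonormal basis (Definition 3.4).
    return sum(a * b for a, b in zip(u, v))

def project(u, v):
    # proj_v(u) = (u . v / |v|^2) v, defined only for non-zero v.
    k = dot(u, v) / dot(v, v)
    return tuple(k * c for c in v)

# Projecting onto the x-axis keeps only the x-component.
print(project((2.0, 3.0), (1.0, 0.0)))  # (2.0, 0.0)
```

A handy sanity check is that the difference $\vc{u} - \proj{\vc{v}}{\vc{u}}$ is always orthogonal to $\vc{v}$.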
Theorem 3.1: Dot Product Rules
The following is a set of useful rules when using dot products.
\begin{align}
\begin{array}{llr}
(i) & \vc{u} \cdot \vc{v} = \vc{v} \cdot \vc{u} & \spc\text{(commutativity)} \\
(ii) & k(\vc{u} \cdot \vc{v}) = (k\vc{u}) \cdot \vc{v} & \spc\text{(associativity)} \\
(iii) & \vc{v} \cdot (\vc{u} +\vc{w}) = \vc{v} \cdot \vc{u} + \vc{v} \cdot \vc{w} & \spc\text{(distributivity)} \\
(iv) & \vc{v} \cdot \vc{v} = \ln{\vc{v}}^2 \geq 0, \text{ with equality only when } \vc{v}=\vc{0}. & \spc\text{(squared length)} \\
\end{array}
\end{align}
Each of the rules is proved in turn below.
$(i)$ From Definition 3.1, we know that $\vc{u} \cdot \vc{v} = \ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}]$, and
$\vc{v} \cdot \vc{u} = \ln{\vc{v}}\ \ln{\vc{u}} \cos[\vc{v},\vc{u}]$, which are the same since
$[\vc{u},\vc{v}]$ and $[\vc{v},\vc{u}]$ both represent the smallest angles between
$\vc{u}$ and $\vc{v}$.
$(ii)$ Again, from Definition 3.1,
$k(\vc{u} \cdot \vc{v}) =$ $k\ln{\vc{u}}\,\ln{\vc{v}} \cos[\vc{u},\vc{v}]$ for the left hand side of the equal sign, while
$(k\vc{u}) \cdot \vc{v} =$ $\ln{k\vc{u}}\,\ln{\vc{v}} \cos[k\vc{u},\vc{v}]$ for the right hand side.
If $k>0$ then $\cos[k\vc{u},\vc{v}]=$ $\cos[\vc{u},\vc{v}]$, and $\ln{k\vc{u}}=k\ln{\vc{u}}$, which proves the equality
for $k>0$. For $k<0$, the left hand side can be rewritten as
$k\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}] = - \abs{k}\,\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}]$.
The right hand side can be rewritten as
$\ln{k\vc{u}}\,\ln{\vc{v}} \cos[k\vc{u},\vc{v}] = \abs{k}\ln{\vc{u}}\,\ln{\vc{v}} \cos(\pi-[\vc{u},\vc{v}])$,
where the last step comes from the fact that $[k\vc{u},\vc{v}]=[-\vc{u},\vc{v}]=\pi-[\vc{u},\vc{v}]$
for negative $k$. From trigonometry, we know that $\cos (\pi-[\vc{u},\vc{v}]) = -\cos [\vc{u},\vc{v}]$,
and so the right hand side becomes $-\abs{k}\,\ln{\vc{u}}\ \ln{\vc{v}} \cos[\vc{u},\vc{v}]$, which proves the rule
for $k<0$.
Finally, for $k=0$ both sides of the equal sign are trivially zero.
$(iii)$
First, we assume that $\vc{v}\neq \vc{0}$ (in case of $\vc{v}=\vc{0}$, the rule is trivially true).
A geometrical proof will be used for this rule, where we exploit that the sum of the projections
is equal to the projection of the sum.
This can be expressed as
\begin{equation}
\proj{\vc{v}}{\vc{u}} + \proj{\vc{v}}{\vc{w}} = \proj{\vc{v}}{(\vc{u}+\vc{w})}.
\end{equation}
The expression on the left hand side of the equal sign is shown in the top part
of Figure 3.5 (note that the $\vc{u}$ and $\vc{w}$ vectors can be moved in the figure),
and in the bottom part of the same figure, the expression on the right hand side of the equal sign is visualized.
Using Definition 3.2, this can be rewritten as
\begin{equation}
\frac{\vc{u}\cdot\vc{v}}{\ln{\vc{v}}^2}\vc{v} + \frac{\vc{w}\cdot\vc{v}}{\ln{\vc{v}}^2}\vc{v} = \frac{(\vc{u}+\vc{w})\cdot\vc{v}}{\ln{\vc{v}}^2}\vc{v}.
\end{equation}
Note that in the last step, there are three scalars, all of which are multiplied by $\vc{v}$.
Hence, it is possible to remove the vector, $\vc{v}$, and only keep the scalars, which gives us:
$\vc{u}\cdot\vc{v} + \vc{w}\cdot\vc{v} =$ $(\vc{u}+\vc{w})\cdot\vc{v}$.
This in turn can be rewritten as $\vc{v}\cdot\vc{u} + \vc{v}\cdot\vc{w} =$ $\vc{v}\cdot(\vc{u}+\vc{w})$
using rule $(i)$, which together proves rule $(iii)$.
$(iv)$ From Definition 3.1, $\vc{v}\cdot\vc{v}=\ln{\vc{v}}\,\ln{\vc{v}}\cos[\vc{v},\vc{v}]=\ln{\vc{v}}^2$,
since $\cos[\vc{v},\vc{v}]=\cos 0 = 1$.
This concludes the proofs.
$\square$
The rules in Theorem 3.1 are intuitive, since they are the same as for scalar addition and scalar multiplication.
In Section 3.5, several examples will be presented on how to use these rules.
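For concrete vectors, the rules can also be checked numerically using the component formula of Definition 3.4; a small Python sketch (illustrative only):

```python
def dot(u, v):
    # Component-wise dot product in an orthonormal basis.
    return sum(a * b for a, b in zip(u, v))

u, v, w, k = (1.0, 2.0), (3.0, 1.0), (-1.0, 4.0), -2.0
ku = tuple(k * c for c in u)
uw = tuple(a + b for a, b in zip(u, w))

print(dot(u, v) == dot(v, u))               # (i) commutativity: True
print(k * dot(u, v) == dot(ku, v))          # (ii) associativity: True
print(dot(v, uw) == dot(v, u) + dot(v, w))  # (iii) distributivity: True
print(dot(v, v) >= 0)                       # (iv) squared length: True
```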
Example 3.2: Law of Cosines
Sometimes, the law of cosines can be a bit hard to remember, but it is actually very simple
to derive the formula with the help of the dot product. The geometrical situation is shown
in Figure 3.6 to the right, where two vectors,
$\vc{u}$ and $\vc{v}$, start at the same point, and
the difference, $\vc{u} - \vc{v}$, is the vector from the endpoint of $\vc{v}$ to the endpoint of $\vc{u}$,
i.e., $\vc{w} = \vc{u} - \vc{v}$.
The one thing to remember is that the law of cosines starts with the squared length of $\vc{w}$, and then
the expression is developed with the rules for the dot product, where we first use rule $(iv)$ to obtain
\begin{equation}
\ln{\vc{w}}^2 = \vc{w}\cdot\vc{w} = (\vc{u}-\vc{v})\cdot(\vc{u}-\vc{v}) = \vc{u}\cdot\vc{u} - 2\,\vc{u}\cdot\vc{v} + \vc{v}\cdot\vc{v} = \ln{\vc{u}}^2 + \ln{\vc{v}}^2 - 2\ln{\vc{u}}\,\ln{\vc{v}}\cos[\vc{u},\vc{v}].
\end{equation}
With $a=\ln{\vc{v}}$, $b=\ln{\vc{u}}$, $c=\ln{\vc{w}}$, and $\theta=[\vc{u},\vc{v}]$, this is the familiar $c^2 = a^2 + b^2 - 2ab\cos\theta$.
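The derivation can be spot-checked numerically; a Python sketch (illustrative only) comparing both sides of the law of cosines for concrete vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

u, v = (3.0, 1.0), (1.0, 2.0)
w = tuple(a - b for a, b in zip(u, v))  # w = u - v
cos_angle = dot(u, v) / (norm(u) * norm(v))
lhs = norm(w) ** 2
rhs = norm(u) ** 2 + norm(v) ** 2 - 2.0 * norm(u) * norm(v) * cos_angle
print(abs(lhs - rhs) < 1e-12)  # True
```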
This section will describe a simple way of calculating the dot
product. Assume we have two three-dimensional vectors $\vc{u} = (u_1, u_2, u_3)$ and $\vc{v} = (v_1, v_2, v_3)$ expressed
in the same basis, $\{\vc{e}_1, \vc{e}_2, \vc{e}_3\}$. Expanding both vectors in the basis and applying the rules of Theorem 3.1 gives
\begin{align}
\vc{u}\cdot\vc{v} &= (u_1\vc{e}_1 + u_2\vc{e}_2 + u_3\vc{e}_3) \cdot (v_1\vc{e}_1 + v_2\vc{e}_2 + v_3\vc{e}_3) \nonumber \\
&= u_1 v_1 (\vc{e}_1\cdot\vc{e}_1) + u_1 v_2 (\vc{e}_1\cdot\vc{e}_2) + u_1 v_3 (\vc{e}_1\cdot\vc{e}_3) \nonumber \\
&\ \ \ + u_2 v_1 (\vc{e}_2\cdot\vc{e}_1) + u_2 v_2 (\vc{e}_2\cdot\vc{e}_2) + u_2 v_3 (\vc{e}_2\cdot\vc{e}_3) \nonumber \\
&\ \ \ + u_3 v_1 (\vc{e}_3\cdot\vc{e}_1) + u_3 v_2 (\vc{e}_3\cdot\vc{e}_2) + u_3 v_3 (\vc{e}_3\cdot\vc{e}_3).
\end{align}
This looks long and complicated. However, assume that $\vc{e}_i \cdot \vc{e}_j$ is
$0$ when $i \neq j$. This is equivalent to saying that
each basis axis is orthogonal to every other basis axes. Then all but
three terms would go away.
Assume further that
$\vc{e}_i \cdot \vc{e}_i = 1$ for all $i$. This is equivalent to
saying that the length of each basis vector should be one. The
remaining terms could then be simplified too, yielding
\begin{equation}
\vc{u}\cdot\vc{v} = u_1 v_1 + u_2 v_2 + u_3 v_3.
\end{equation}
Now we are ready to define an orthonormal basis in any dimension.
Definition 3.3: Orthonormal Basis
For an $n$-dimensional orthonormal basis, consisting of the set of basis vectors, $\{\vc{e}_1, \dots, \vc{e}_n\}$,
the following holds
\begin{equation}
\vc{e}_i \cdot \vc{e}_j = \left\{
\begin{array}{ll}
1, & \text{if } i = j,\\
0, & \text{if } i \neq j.
\end{array}
\right.
\end{equation}
This simply means that the basis vectors are of unit length, i.e., they are normalized, and that they are pairwise orthogonal.
We also generalize the simplified dot product to vectors of any dimensionality:
Definition 3.4: Dot Product Calculation in Orthonormal Basis
In any orthonormal basis, the dot product between two $n$-dimensional vectors, $\vc{u}$ and $\vc{v}$, can be calculated as
\begin{equation}
\vc{u}\cdot\vc{v} = \sum_{i=1}^{n} u_i v_i,
\end{equation}
which is a sum of component-wise multiplications. The two- and three-dimensional dot products are calculated as
\begin{align}
\mathrm{two\ dimensions\ } &:\ \ \vc{u}\cdot\vc{v} = u_xv_x + u_yv_y, \\
\mathrm{three\ dimensions\ } &:\ \ \vc{u}\cdot\vc{v} = u_xv_x + u_yv_y +u_zv_z. \\
\end{align}
Note that the two different ways of indexing the components of a vector have been used above, e.g., recall that
$\vc{v}=(v_1, v_2, v_3) = (v_x, v_y, v_z)$.
For vectors in $\R^1$, $\R^2$, and $\R^3$, there is a natural notion of what an angle is. We use this notion to define the scalar product according to Definition 3.1. If the basis is orthonormal, we obtain the above simple formula
$\vc{u} \cdot \vc{v} = u_1 v_1 + u_2 v_2 + u_3 v_3 $
for calculating the scalar product. For vectors in higher dimensions, there is no notion of what an angle is. In the case of an orthonormal basis, the solution here is to use the simple formula for the scalar product from Definition 3.4 and infer the notion of angle from the scalar product. The angle between two non-zero vectors $\vc{u}$ and $\vc{v}$ then becomes as follows.
Definition 3.5: Angles in Higher Dimensions
The angle $[\vc{u}, \vc{v}]$ between two non-zero vectors, $\vc{u} = (u_1, u_2, \ldots, u_n)$ and $\vc{v} = (v_1, v_2, \ldots, v_n)$ in $\R^n$ is defined as
\begin{equation}
[\vc{u}, \vc{v}] = \arccos\left( \frac{\vc{u}\cdot\vc{v}}{\ln{\vc{u}}\ \ln{\vc{v}}} \right).
\end{equation}
Now, let us illustrate how the simple dot product evaluation (Definition 3.4) works with a simple example:
Example 3.3: Simple Calculation In the orthonormal basis shown below in the figure, $\vc{u} = (1,2)$ and $\vc{v} = (3,1.5)$.
The task is to calculate $\ln{\vc{u}}\ \ln{\vc{v}}\cos[\vc{u}, \vc{v}]$.
We recognize that $\ln{\vc{u}}\ \ln{\vc{v}}\cos[\vc{u}, \vc{v}]$
equals the dot product $\vc{u} \cdot \vc{v}$. We also
use the fact that the basis is orthonormal, which means we
can use the simplified formula (Definition 3.4) to calculate
$\vc{u} \cdot \vc{v}$, that is,
\begin{equation}
\vc{u} \cdot \vc{v} = u_x v_x + u_y v_y = 1 \cdot 3 + 2 \cdot 1.5 = 6.
\end{equation}
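In code, Definition 3.4 is a single line; an illustrative Python sketch reproducing the numbers of this example:

```python
def dot(u, v):
    # Sum of component-wise products (Definition 3.4), any dimension.
    return sum(a * b for a, b in zip(u, v))

print(dot((1.0, 2.0), (3.0, 1.5)))  # 1*3 + 2*1.5 = 6.0
```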
Interactive Illustration 3.7:
Since $\vc{e}_1$ and $\vc{e}_2$ are orthogonal and both
have length 1, we can use the simple formula to calculate
$\vc{u} \cdot \vc{v}$. Note that this figure is, in fact, not interactive.
Example 3.4: Angle Calculation In an orthonormal basis, $\vc{u} = (1,2)$ and $\vc{v} = (3,1)$. Calculate the smallest angle between $\vc{u}$ and $\vc{v}$.
The smallest angle $[\vc{u}, \vc{v}]$ is part of the dot product $\ln{\vc{u}}\ \ln{\vc{v}}\cos[\vc{u}, \vc{v}]$.
Thus, if we calculate the dot product and divide by the length of $\vc{u}$ and $\vc{v}$, the cosine of smallest angle between
$\vc{u}$ and $\vc{v}$ is obtained.
Since the basis is orthonormal, we can use the simple way of calculating the various dot products needed:
\begin{equation}
\cos[\vc{u},\vc{v}] = \frac{\vc{u}\cdot\vc{v}}{\ln{\vc{u}}\ \ln{\vc{v}}} = \frac{1\cdot 3 + 2\cdot 1}{\sqrt{1^2+2^2}\,\sqrt{3^2+1^2}} = \frac{5}{\sqrt{5}\sqrt{10}} = \frac{5}{\sqrt{50}} = \frac{1}{\sqrt{2}},
\end{equation}
which means that $[\vc{u},\vc{v}] = \pi/4$.
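The same steps can be packaged as a small function; an illustrative Python sketch that recovers the angle of Example 3.4:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def angle(u, v):
    # Smallest angle between two non-zero vectors, via the dot product.
    return math.acos(dot(u, v) / math.sqrt(dot(u, u) * dot(v, v)))

print(angle((1.0, 2.0), (3.0, 1.0)))  # pi/4, approximately 0.7854
```

For nearly parallel vectors, rounding can push the quotient slightly outside $[-1,1]$, so production code often clamps it before calling `acos`.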
To get a more intuitive feeling for what the dot product provides, the reader is encouraged to play
with Interactive Illustration 3.8 below.
Interactive Illustration 3.8:
The dot product between two vectors (which the reader can move around), $\vc{u}$ and $\vc{v}$
in the standard basis,
is shown here together with
the terms. Recall that $\vc{u}\cdot\vc{v} = \ln{\vc{u}}\,\ln{\vc{v}} \cos [\vc{u},\vc{v}]$.
Pay attention to the sign of the dot product and $\cos [\vc{u},\vc{v}]$ when the angle between
$\vc{u}$ and $\vc{v}$ changes from less than $\pi/2$ to greater than $\pi/2$. Note also
that when $\vc{u}$ and $\vc{v}$ are unit vectors, i.e., $\ln{\vc{u}}=\ln{\vc{v}}=1$,
then $\vc{u}\cdot\vc{v} = \cos [\vc{u},\vc{v}]$. Another insight can be gained when one
vector is moved so that $\vc{u}=\vc{v}$.
As we have already seen, having the vectors in an orthonormal basis simplifies the calculation of their dot product.
In this section, it will become clear that
it also simplifies the calculation of the length, which is denoted $\ln{\vc{v}}$, of a vector.
The length is also called the magnitude or the norm.
Recall rule $(iv)$ in Theorem 3.1, which says $\vc{v}\cdot\vc{v} = \ln{\vc{v}}^2$.
If the vector has coordinates $(v_x, v_y)$ in an orthonormal basis, we can use the simple formula (Definition 3.4)
for the dot product and we have $\ln{\vc{v}}^2 = v_x^2 + v_y^2$. In the top figure to the right, we have
drawn the vector $\vc{v}$. Since the values $v_x$ and $v_y$ are the coordinates of $\vc{v}$, they are also equal to the lengths
of the dashed lines. Note how this figure is analogous to the triangle below it, where $c = \ln{\vc{v}}$,
$a = v_x$, and $b = v_y$. Hence, the expression $\ln{\vc{v}}^2 = v_x^2 + v_y^2$ agrees with the Pythagorean theorem, which states that $c^2 = a^2 + b^2$.
Therefore, the length of a vector in an orthonormal basis can be calculated as
\begin{equation}
\ln{\vc{v}} = \sqrt{v_x^2 + v_y^2}.
\end{equation}
Likewise, for a three-dimensional vector, $\vc{v} = (v_x, v_y, v_z)$, in an orthonormal basis,
the dot product $\vc{v} \cdot \vc{v} = \ln{\vc{v}}^2$ can be calculated as $v_x^2 + v_y^2 + v_z^2$,
and hence the vector length is calculated as
\begin{equation}
\ln{\vc{v}} = \sqrt{v_x^2 + v_y^2 + v_z^2}.
\end{equation}
Also in this case, it is possible to use the Pythagorean theorem to get a geometrical explanation of why this formula is correct, as is
shown in Interactive Illustration 3.10.
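The length formulas work unchanged in any dimension; an illustrative Python sketch:

```python
import math

def length(v):
    # |v| = sqrt(v . v) in an orthonormal basis, for any number of components.
    return math.sqrt(sum(c * c for c in v))

print(length((3.0, 4.0)))       # 5.0
print(length((1.0, 2.0, 2.0)))  # sqrt(1 + 4 + 4) = 3.0
```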
Interactive Illustration 3.10:
We want to calculate the length of the vector $\vc{v}$. We first place its tail in the origin, which means its tip will be in the coordinates $(v_x, v_y, v_z)$.
Interactive Illustration 3.10:
Next, create a second vector $(v_x, v_y, 0)$, where the last coordinate is set to zero. By placing it with its tail in the origin as well, we see that it will lie in the plane spanned by the $x$- and $y$-coordinate axes.
Interactive Illustration 3.10:
If we look at the vector that goes from the origin to $(v_x, v_y, 0)$, it is simple to see that it forms a right triangle. Since the shorter sides are $v_x$ and $v_y$ respectively, the Pythagorean theorem gives that this vector has length $\sqrt{v_x^2 + v_y^2}$.
Interactive Illustration 3.10:
This is even easier to see if we look at it straight from the tip of the $z$-coordinate axis.
Interactive Illustration 3.10:
The two red arrows and the dashed red line also form a right triangle, which is perhaps even easier to see in the next step. As we discovered in the previous step, the short side at the top has the length $\sqrt{v_x^2 + v_y^2}$. The other short side is the vector $(0, 0, v_z)$, which trivially has the length $|v_z|$. The hypotenuse is the vector $\vc{v}$ itself, and the Pythagorean theorem thus gives that its length squared must be $||\vc{v}||^2 = (\sqrt{v_x^2 + v_y^2})^2 + v_z^2$ and hence $||\vc{v}||$ must be $\sqrt{v_x^2 + v_y^2 + v_z^2}$.
Interactive Illustration 3.10:
Here, we have changed the viewpoint to a point closer to the tip of the $y$-axis. This makes it easier to see that the three red lines make up a right triangle.
Note that it is often convenient to know what happens to the length of a vector when it is scaled by a factor, $k$.
Using the definition of the dot product, we get
\begin{equation}
\ln{k\vc{v}}^2 = (k\vc{v})\cdot(k\vc{v}) = k^2 (\vc{v}\cdot\vc{v}) = k^2 \ln{\vc{v}}^2, \ \ \text{i.e.,}\ \ \ln{k\vc{v}} = \abs{k}\,\ln{\vc{v}}.
\end{equation}
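This scaling rule is easy to confirm numerically; an illustrative Python sketch:

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

v, k = (3.0, 4.0), -2.0
kv = tuple(k * c for c in v)
# |k v| = |k| |v|: both sides are 10.0 here.
print(length(kv), abs(k) * length(v))
```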
The following are some very useful inequalities in mathematics. They are part of the dot product chapter
since they are easy to prove using the definition of the dot product.
Theorem 3.2: Cauchy-Schwarz Inequality
If $\vc{u}$ and $\vc{v}$ are vectors in $\R^n$, then the following holds
\begin{equation}
\abs{\vc{u} \cdot \vc{v}} \leq \ln{\vc{u}}\ \ln{\vc{v}}.
\end{equation}
For geometric vectors, the absolute value of the dot product (Definition 3.1) gives us
$\abs{\vc{u} \cdot \vc{v}} = \ln{\vc{u}}\ \ln{\vc{v}}\, \abs{\cos[\vc{u},\vc{v}]}$,
which proves the theorem since $\abs{\cos[\vc{u},\vc{v}]} \leq 1$.
For vectors in higher dimensions, the definition of the dot product in an orthonormal basis is
$\vc{u}\cdot \vc{v} = \sum_i u_i v_i$ (Definition 3.4). To prove the Cauchy-Schwarz inequality in this case, we need to prove that
\begin{equation}
\Bigl( \sum_{i=1}^{n} u_i v_i \Bigr)^2 \leq \Bigl( \sum_{i=1}^{n} u_i^2 \Bigr) \Bigl( \sum_{i=1}^{n} v_i^2 \Bigr).
\end{equation}
To do so, consider the polynomial
\begin{equation}
p(z) = \sum_{i=1}^{n} (u_i z + v_i)^2 = \Bigl( \sum_{i=1}^{n} u_i^2 \Bigr) z^2 + 2 \Bigl( \sum_{i=1}^{n} u_i v_i \Bigr) z + \sum_{i=1}^{n} v_i^2,
\end{equation}
which clearly is greater than or equal to zero for each $z$, since it is a sum of squares.
We know that a polynomial $p(z) = az^2 + bz + c$
has the two solutions
\begin{equation}
z = \frac{-b \pm \sqrt{b^2-4ac}}{2a} .
\end{equation}
If the so called discriminant $b^2-4ac$ is positive then there are two distinct real roots and the polynomial takes on both positive and negative values. Since $p(z)$ is non-negative, we must have $b^2-4ac \leq 0$.
Now, with $a = \sum_i u_i^2$, $b = 2\sum_i u_i v_i$, and $c = \sum_i v_i^2$, the condition $b^2 - 4ac \leq 0$ becomes
\begin{equation}
4\Bigl( \sum_{i=1}^{n} u_i v_i \Bigr)^2 - 4\Bigl( \sum_{i=1}^{n} u_i^2 \Bigr)\Bigl( \sum_{i=1}^{n} v_i^2 \Bigr) \leq 0,
\end{equation}
which is exactly the inequality we set out to prove.
$\square$
Theorem 3.3: Triangle Inequality
If $\vc{u}$ and $\vc{v}$ are vectors, then the following holds
\begin{equation}
\ln{\vc{u} + \vc{v}} \leq \ln{\vc{u}} + \ln{\vc{v}}.
\end{equation}
By squaring both sides, and developing the expressions, the left hand side becomes
$\ln{\vc{u} + \vc{v}}^2 = $
$(\vc{u} + \vc{v})\cdot (\vc{u} + \vc{v}) = $
$\vc{u} \cdot \vc{u} + \vc{v} \cdot \vc{v} + 2\vc{u}\cdot \vc{v} = $
$\ln{\vc{u}}^2 + \ln{\vc{v}}^2 + 2\ln{\vc{u}}\,\ln{\vc{v}} \cos [\vc{u},\vc{v}]$.
The squared right hand side becomes
$(\ln{\vc{u}} + \ln{\vc{v}})^2 = $
$\ln{\vc{u}}^2 + \ln{\vc{v}}^2 +2 \ln{\vc{u}}\,\ln{\vc{v}}$,
which proves the theorem since $\cos [\vc{u},\vc{v}] \leq 1$.
$\square$
Note that the triangle inequality is easy to understand geometrically, as shown in the figure to the right.
It is clear from the figure that the sum of the lengths of $\vc{u}$ and $\vc{v}$ must be greater than or equal to
the length of $\vc{u}+\vc{v}$. In fact, equality can only happen
when $\vc{u}$ and $\vc{v}$ are parallel and point in the same direction.
The reader is encouraged to move the $\vc{u}$ vector so that happens.
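Both inequalities of this section can be spot-checked over many random vectors; an illustrative Python sketch:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

random.seed(1)
for _ in range(1000):
    u = [random.uniform(-1.0, 1.0) for _ in range(5)]
    v = [random.uniform(-1.0, 1.0) for _ in range(5)]
    s = [a + b for a, b in zip(u, v)]
    # Cauchy-Schwarz: |u . v| <= |u| |v| (small tolerance for rounding).
    assert abs(dot(u, v)) <= norm(u) * norm(v) + 1e-12
    # Triangle inequality: |u + v| <= |u| + |v|.
    assert norm(s) <= norm(u) + norm(v) + 1e-12
print("both inequalities held for all samples")
```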
In this section, some useful examples will be shown,
where we point out which rules are used to
reach the result. The rule is put on top of the equal sign, for example,
\begin{equation}
\vc{v} \cdot (\vc{u} + \vc{w}) \overset{(iii)}{=} \vc{v} \cdot \vc{u} + \vc{v} \cdot \vc{w},
\end{equation}
implies that rule $(iii)$ (Theorem 3.1) was used to
arrive at the right hand side of the equal sign.
Next follows an example where the law of parallelograms is derived.
Example 3.5: Law of Parallelograms
Assume we have two vectors, $\vc{u}$ and $\vc{v}$, starting in the same point. When performing
vector addition (Section 2.2), we have seen that a parallelogram can be created
in order to form the vector addition. The two diagonals in this
parallelogram are: $\vc{u}-\vc{v}$ and $\vc{u}+\vc{v}$ as can be seen
in Figure 3.12 to the right.
Now, the sum of the squared lengths of the diagonals is
\begin{align}
\ln{\vc{u}+\vc{v}}^2 + \ln{\vc{u}-\vc{v}}^2 &= (\vc{u}+\vc{v})\cdot(\vc{u}+\vc{v}) + (\vc{u}-\vc{v})\cdot(\vc{u}-\vc{v}) \nonumber \\
&= \vc{u}\cdot\vc{u} + 2\,\vc{u}\cdot\vc{v} + \vc{v}\cdot\vc{v} + \vc{u}\cdot\vc{u} - 2\,\vc{u}\cdot\vc{v} + \vc{v}\cdot\vc{v} \nonumber \\
&= 2\ln{\vc{u}}^2 + 2\ln{\vc{v}}^2,
\end{align}
that is, the sum of the squared lengths of the two diagonals equals the sum of the squared lengths of the four sides.
Example 3.6: Polarization Identity The following is closely related to Example 3.5.
In that example, we showed all the steps and the respective rules, but what is convenient about
the dot product rules is that they behave as expected. Hence, we will be briefer in
this example. Note that the only difference from the starting equation in Example 3.5 is that a plus sign becomes a minus sign, that is,
\begin{equation}
\ln{\vc{u}+\vc{v}}^2 - \ln{\vc{u}-\vc{v}}^2 = 4\,\vc{u}\cdot \vc{v},
\end{equation}
which means that $\vc{u}\cdot \vc{v} = \frac{1}{4}\bigl( \ln{ \vc{u} + \vc{v} }^2 - \ln{ \vc{u} - \vc{v} }^2 \bigr)$.
This is also a pretty amazing result.
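The identity means that dot products can be recovered from lengths alone; an illustrative Python sketch:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1.0, 2.0), (3.0, 1.0)
plus = tuple(a + b for a, b in zip(u, v))
minus = tuple(a - b for a, b in zip(u, v))
# u . v = (|u + v|^2 - |u - v|^2) / 4, using |w|^2 = w . w.
print((dot(plus, plus) - dot(minus, minus)) / 4.0, dot(u, v))  # 5.0 5.0
```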
Example 3.7: Triangle Area using Dot Products
In this example, the area of a triangle defined by three points, $A$, $B$, and $C$,
as shown to the right, will be derived. The final area formula will
be expressed using dot products.
We will use the edge vectors, $\vc{u} = B-A$ and $\vc{v} = C-A$, in the following derivation.
Recall that triangle
area is often computed as $bh/2$, where $b$ is the length of the base,
and $h$ is the height of the triangle. In the figure to the right, we
have $b=\ln{\vc{u}}$, and from trigonometry, the height must be
\begin{equation}
h = \ln{\vc{v}} \sin [\vc{u},\vc{v}].
\end{equation}
The triangle area, $a$, is then
\begin{equation}
a = \frac{bh}{2} =\frac{1}{2} \underbrace{\ln{\vc{u}}}_{b} \,\underbrace{\ln{\vc{v}} \sin [\vc{u},\vc{v}]}_{h}.
\end{equation}
Since triangle area always is positive, we will square it, and use some trigonometry
($\sin^2 \phi + \cos^2 \phi =1$) to expand the expression into using dot products as
\begin{equation}
a^2 = \frac{1}{4} \ln{\vc{u}}^2 \ln{\vc{v}}^2 \sin^2 [\vc{u},\vc{v}] = \frac{1}{4} \ln{\vc{u}}^2 \ln{\vc{v}}^2 \bigl( 1 - \cos^2 [\vc{u},\vc{v}] \bigr) = \frac{1}{4} \Bigl( (\vc{u}\cdot\vc{u})(\vc{v}\cdot\vc{v}) - (\vc{u}\cdot\vc{v})^2 \Bigr),
\end{equation}
that is, $a = \frac{1}{2}\sqrt{ (\vc{u}\cdot\vc{u})(\vc{v}\cdot\vc{v}) - (\vc{u}\cdot\vc{v})^2 }$.
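The resulting formula only needs dot products of the edge vectors, which makes it convenient in code; an illustrative Python sketch:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangle_area(a, b, c):
    # Area from the corner points via the edge vectors u = B - A and v = C - A.
    u = tuple(p - q for p, q in zip(b, a))
    v = tuple(p - q for p, q in zip(c, a))
    return 0.5 * math.sqrt(dot(u, u) * dot(v, v) - dot(u, v) ** 2)

# A right triangle with legs 4 and 3.
print(triangle_area((0.0, 0.0), (4.0, 0.0), (0.0, 3.0)))  # 6.0
```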
Lines and planes are common and important geometrical entities, useful in many situations, such as when determining whether a ray (i.e., a line) from a virtual eye
looking through a pixel center hits a geometrical object, such as a sphere. More broadly,
lines and planes are often used in computational geometry, computer vision, computer graphics,
computer-aided design (CAD), etc.
Throughout this book, straight lines will mostly be used, and therefore, the shorter term line is often
used instead. A straight line can be described with a starting point, $S$, and a direction, $\vc{d}$,
as illustrated to the right.
To describe a line, it may be convenient to have a representation that can generate all possible points
on the line. First, start with the point, $P$, to the right, and create a vector from $S$ to $P$,
i.e., $\overrightarrow{SP}$. If $P$ is located on the line, described by $S$ and $\vc{d}$, then
$\overrightarrow{SP}$ must be parallel to $\vc{d}$. In fact, a scalar, $t_1$, must exist which makes
$t_1\vc{d}$ exactly as long as $\overrightarrow{SP}$, which is expressed as
\begin{equation}
\overrightarrow{SP} = t_1 \vc{d}.
\end{equation}
Since $P$ is located in the direction of $\vc{d}$, we also know that $t_1>0$. For $Q$, which
also lies on the line, there is another scalar, $t_2$, which satisfies: $\overrightarrow{SQ} = t_2 \vc{d}$.
Since $Q$ is located in the opposite direction of $\vc{d}$, it is clear that $t_2<0$.
For $R$, on the other hand, there is no scalar, $t_3$, that satisfies a similar relation. That is,
$\overrightarrow{SR} \neq t_3 \vc{d}$ for all values of $t_3$.
The only difference between $P$ and $Q$ is that they use different scalars, $t_1$ and $t_2$.
Hence, it makes sense that any point on the line can be described by using a particular scalar, $t$.
Therefore, we use $P(t)$ to denote a function of a scalar, $t$, which returns a point, $P$. That is,
for different values of $t$, different points, $P$, will be generated. The relation $\overrightarrow{SP} = t\vc{d}$ can be rewritten
as
\begin{equation}
P(t) = S + t\vc{d}.
\end{equation}
Note that all points, $P(t)$, on the line are described by starting at $S$ and then adding a scaled direction vector, $t\vc{d}$,
to reach $P(t)$. Some examples: $P(0) = S$, $P(1) = S + \vc{d}$, and $P(-2.5) = S -2.5\vc{d}$.
This type of parameterized line is summarized in the following definition.
Definition 3.6: A Parameterized Line
A line parameterized by $t\in \R$ can be described by a starting point, $S$, and a direction vector, $\vc{d}$.
All points, $P(t)$, on a line can be described by
\begin{equation}
P(t) = S + t\vc{d}.
\tag{3.57}
\end{equation}
Note that $\vc{d}\neq \vc{0}$, otherwise, only a single point, $S$, will be generated (i.e., no line).
One often says that the line above is on explicit form, which simply means that the points, $P(t)$,
on the line can be generated directly from the expression. Lines on explicit form in one, two, and three
dimensions can be found in Interactive Illustration 3.15 below.
Interactive Illustration 3.15:
This interactive illustration shows lines on the form: $P(t) = S + t\vc{d}$. Note that the slider can be influenced
in order to change the value of $t$, which in turn alters the point, $P(t)$. First, a line is shown in one dimension,
which here is assumed to be the $x$-axis. The starting point, $S$, of the line can be moved, and in addition,
the length of the direction vector, $\vc{d}$, can be changed. Notice what happens to the speed of the
point, $P$, when the length of $\vc{d}$ changes and the slider is pulled.
Click Forward to see a two-dimensional line.
Interactive Illustration 3.15:
A two-dimensional line, $P(t) = S + t\vc{d}$, is illustrated here. Note that negative values of $t$ move the
point, $P$, to "behind" $S$, which is the starting point of the line. On the other hand, positive values of $t$
move $P$ so that it lies in the direction of $\vc{d}$, which is the line's direction vector.
Interactive Illustration 3.15:
Finally, a three-dimensional line is shown. Note that the dashed lines are only there to help understand
the geometric context, i.e., to help see how far it is to the $xz$-plane, etc.
Note that taking the length of both sides of a line, $P(t)-S = t\vc{d}$, gives $\ln{P(t)-S} = \ln{t\vc{d}}$.
When the line direction is normalized, i.e., $\ln{\vc{d}}=1$, then $\abs{t} = \ln{P(t)-S}$. This can be
very useful when computing intersections between, for example, a line and a sphere, as will be seen in
Section 3.7.
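As a small illustration, the explicit form is easy to evaluate in code. The following Python sketch (the function name `line_point` is ours, not from the text) generates points on a line and checks the distance property for a normalized direction:

```python
import math

def line_point(S, d, t):
    """Evaluate P(t) = S + t*d for a 2D starting point S and direction d."""
    return (S[0] + t * d[0], S[1] + t * d[1])

S = (1.0, 2.0)
d = (0.6, 0.8)                      # normalized: ||d|| = 1

P = line_point(S, d, 2.5)
# With ||d|| = 1, |t| equals the distance ||P(t) - S||.
dist = math.hypot(P[0] - S[0], P[1] - S[1])
assert abs(dist - 2.5) < 1e-12
assert line_point(S, d, 0.0) == S   # P(0) = S
```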
Recall that a two-dimensional point, $S$, has two scalar components, $(s_x, s_y)$, and similarly,
a two-dimensional vector, $\vc{d}$, has two scalar components, $(d_x,d_y)$. Now, note
that a two-dimensional line, $P(t)=S + t\vc{d}$, can be expressed in terms of the scalar components
of the vectors and points as
\begin{equation}
\left\{
\begin{array}{l}
p_x(t) = s_x + t d_x, \\
p_y(t) = s_y + t d_y.
\end{array}
\right.
\end{equation}
The following gives a taste of Chapter 5 on Gaussian elimination. Multiply the top row above
by $d_y$ and the bottom row by $d_x$. This leads to the following, where the parameter $t$ has been dropped from
$p_x$ and $p_y$ for clarity,
\begin{equation}
\begin{array}{l}
d_y p_x = d_y s_x + t\, d_x d_y, \\
d_x p_y = d_x s_y + t\, d_x d_y.
\end{array}
\end{equation}
From algebra, it is known that the same term can be subtracted from both sides of an equation.
Hence, it is legal to subtract the bottom row from the top row, which leads to
\begin{equation}
d_y p_x - d_x p_y = d_y s_x - d_x s_y,
\end{equation}
and as can be seen, the $t$-terms have disappeared from the expression.
This can further be rewritten as
\begin{equation}
\begin{array}{c}
d_y p_x - d_x p_y + s_y d_x - s_x d_y = 0 \\
\Longleftrightarrow \\
a p_x + b p_y + c = 0,
\end{array}
\tag{3.61}
\end{equation}
where $a=d_y$, $b=-d_x$, and $c=s_y d_x - s_x d_y$.
This equation may be familiar to some, especially if we use
$x=p_x$ and $y=p_y$, which gives
\begin{equation}
a x + b y + c = 0.
\tag{3.62}
\end{equation}
For $b\neq 0$, one can further rewrite the expression above as
$y =(-ax -c)/b=$
$kx+m$, where $k=-a/b$ and $m=-c/b$. This expression of a line is definitely familiar
to most people, where $k$ describes how much $y$ changes when $x$ is increased by 1,
and $m$ is the $y$-value at $x=0$.
Note, however, that $a x + b y + c = 0$ is a more general description of a two-dimensional
line, since it also makes it possible to describe vertical lines, which is not possible
with $y=kx+m$.
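The derivation above translates directly into code. The Python sketch below (the function name is ours) computes the implicit coefficients $a$, $b$, and $c$ from $S$ and $\vc{d}$, and verifies that points generated by the explicit form satisfy $a p_x + b p_y + c = 0$:

```python
def implicit_coefficients(S, d):
    """Return (a, b, c) with a = d_y, b = -d_x, c = s_y*d_x - s_x*d_y,
    so that a*x + b*y + c = 0 describes the line P(t) = S + t*d."""
    return d[1], -d[0], S[1] * d[0] - S[0] * d[1]

S, d = (2.0, 1.0), (1.0, 3.0)
a, b, c = implicit_coefficients(S, d)

# Every point generated by the explicit form satisfies the implicit form.
for t in (-1.0, 0.0, 2.5):
    x, y = S[0] + t * d[0], S[1] + t * d[1]
    assert abs(a * x + b * y + c) < 1e-12
```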
It is important to note that the implicit form of the line, $a p_x + b p_y + c = 0$, and
the explicit form of the line, $P(t) = S + t\vc{d}$, describe the same line
exactly, since the former expression was derived from the latter.
Next, we will show that if the basis is orthonormal, the explicit form can
be rewritten using dot products.
For that purpose,
we introduce $\vc{n} = (n_x,n_y) = (a,b) = (d_y,-d_x)$, which leads to
\begin{equation}
a p_x + b p_y + c = \vc{n} \cdot (P - S) = 0,
\end{equation}
where $P=(p_x, p_y)$ and $S=(s_x, s_y)$. Here, we have used the fact that
$(d_y, -d_x) \cdot (p_x, p_y) = d_y p_x + (-d_x) p_y$ when vectors are
described in an orthonormal basis, as described in
Definition 3.4.
As can be seen, if we take the vector from $S$ to any point, $P$, on the line, then
its dot product with $\vc{n}$ must be zero if $P$ is to be on the line.
Interestingly, we see that
$\vc{n}\cdot\vc{d} =$
$(d_y,-d_x)\cdot (d_x,d_y) = $
$d_y d_x - d_x d_y = $
$0$,
i.e., $\vc{n}$ is orthogonal to the line direction, $\vc{d}$.
Therefore, $\vc{n}$ is often called the normal of the line. Note also, that
$\vc{n} \cdot (P - S) = 0$ is said to be in implicit form, since it does not
easily generate points on the line. However, it is straightforward to test
whether a point, $P$, lies on the line.
This leads to the following definition of a two-dimensional line on implicit form.
Definition 3.7: Two-dimensional Implicit Line
A line can be represented on implicit form by using a starting point, $S$, and a normal vector, $\vc{n}$.
All points, $P$, on the line can then be described by
\begin{equation}
\vc{n} \cdot (P - S) = 0.
\end{equation}
Note that $\vc{n}\neq \vc{0}$, otherwise, all points, $P$, lie on the line.
It should be noted that Definition 3.7 still holds in the case of a non-orthogonal basis. That is, the line can still
be written as $\vc{n} \cdot (P-S) = 0$ where $\vc{n}$ is a normal
vector to the line. However, it is no longer straightforward to
calculate the coordinates of $\vc{n}$, since, in general, it is no
longer equal to $\vc{n} = (d_y, -d_x)$.
As seen above, there are two different types of mathematical representations for two-dimensional lines.
The same cannot be done for a one-dimensional or three-dimensional line. However, as will be seen in
Section 3.6.2, a three-dimensional plane equation has two similar representations, both
an implicit and an explicit one.
Let us return to two-dimensional lines on the form $\vc{n} \cdot (P - S) = 0$, which
says that all points, $P$, that lie on the line, represented by $S$ and $\vc{d}$,
fulfill this expression, i.e., the dot product is equal to zero.
What happens if $P$ is not on the line? Well, the dot product will not be zero, of course,
but can something else be read into that result? It turns out it can be quite useful.
To demonstrate this, a scalar function, $e$, of $P$ is created as
\begin{equation}
e(P) = \vc{n} \cdot (P - S).
\end{equation}
This function is sometimes called an edge equation (in computer graphics) or a signed distance
function (the latter is explained later in this section).
Since $e(P)$ is defined using a dot product, it is known from Section 3.4 that $e(P)$ is positive
when the angle $[\vc{n}, P-S] < \pi/2$, that $e(P)$ is negative when $[\vc{n}, P-S] > \pi/2$,
and that $e(P)=0$ only when $[\vc{n}, P-S] = \pi/2$. However, $e(P)=0$ also means that
$P$ lies on the line defined by $S$ and $\vc{d}$. Therefore, when $e(P)>0$, we say that $P$
is in the positive half-space of the line, and when $e(P)<0$, $P$ is in the negative half-space,
i.e., the line divides the entire two-dimensional plane into two half-spaces.
This is shown in Interactive Illustration 3.16.
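The half-space test is a one-liner in code. A minimal Python sketch (the function and variable names are ours):

```python
def edge(P, S, n):
    """Edge equation e(P) = n . (P - S) in two dimensions."""
    return n[0] * (P[0] - S[0]) + n[1] * (P[1] - S[1])

S = (0.0, 0.0)
n = (0.0, 1.0)                        # the line is the x-axis, normal along +y
assert edge((5.0, 0.0), S, n) == 0.0  # on the line
assert edge((1.0, 2.0), S, n) > 0.0   # positive half-space
assert edge((1.0, -2.0), S, n) < 0.0  # negative half-space
```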
Interactive Illustration 3.16:
The edge equation, $e(P) = \vc{n} \cdot (P - S)$, is visualized here. Recall that the line
is represented by a starting point, $S$, and a normal vector, $\vc{n}$, both of which the
reader can move around by clicking/touching and moving while pressing.
Note that the line, where $e(P)=0$, is dashed in this illustration.
The circles with plus and minus signs in them show which side of the line is the positive and negative half-space.
In particular, the reader is encouraged to move the point $P$ so that it lies
exactly on the dashed line, and at the same time, keep an eye on the calculation
for $e(P)$ in the bottom left corner. Furthermore, the reader should move $P$ so
that it is located in the positive half-space of the line, and into the negative half-space.
As a final exercise, it is possible to normalize $\vc{n}$, i.e., make sure that $\ln{\vc{n}}=1$,
by remembering the definition of the dot product, and to make $P-S$ coincide with $\vc{n}$.
Once $\ln{\vc{n}}=1$, $e(P)$ will show the orthogonal, signed distance from the dashed line to $P$.
Note that $\ln{P - S} \cos [\vc{n}, P-S]$ is actually the orthogonal distance from $P$ to the line,
with the caveat that the "distance" is signed. This means that if $P$ is in the positive half-space,
then the signed distance is positive, and if $P$ is in the negative half-space, the signed distance is negative.
Also note that if $\vc{n}$ is normalized, then $e(P)$ is exactly the signed orthogonal projection distance function,
i.e., $e(P) = \ln{P - S} \cos [\vc{n}, P-S]$.
Example 3.8: Study the line that passes through the point $S = (2,1)$ and has normal $\vc{n} = (3,4)$. What is the distance from the point $P=(x,y)$ to the line? What is the distance from $P = (1,1)$ to the line?
In the previous paragraph, we saw that the distance can be calculated using the signed distance function $e(P) = \vc{n} \cdot (P - S)$, provided that $\vc{n}$ is normalized. In the discussion, we also saw that the
signed distance function can be written in the so-called affine form
\begin{equation}
e(P) = \vc{n} \cdot (P - S) = ax + by + c .
\end{equation}
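To complete the example, the numbers can be worked out as follows (this completion is ours, derived directly from the definitions above):

```latex
% With S = (2,1) and n = (3,4):
e(P) = \vc{n} \cdot (P - S) = 3(x-2) + 4(y-1) = 3x + 4y - 10,
% so a = 3, b = 4, and c = -10. Since \ln{\vc{n}} = \sqrt{3^2+4^2} = 5
% is not 1, the signed distance is e(P)/\ln{\vc{n}}:
\mathrm{dist}(P) = \frac{3x + 4y - 10}{5}.
% For P = (1,1), this gives (3 + 4 - 10)/5 = -3/5, i.e., P lies in the
% negative half-space of the line, at distance 3/5 from it.
```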
Example 3.9: Game Rendering When playing a computer game, there is usually a graphics processor that draws all the
graphics, and this graphics processor is highly optimized for drawing triangles. The color of each pixel
inside the triangle can be computed using a short program, referred to as a shader. This makes
the visual experience quite rich, as can be seen in Figure 3.17.
The piece of hardware
in the graphics processor that tests if a pixel is inside a triangle uses edge equations. Since the triangle
consists of three edges (or lines), three edge equations, $e_i(P)$, $i\in\{1,2,3\}$, are created.
If $e_1(P) \geq 0$ and $e_2(P) \geq 0$ and $e_3(P) \geq 0$, then the pixel whose center position is at $P$
is considered to be inside the triangle.
The hardware designers of such graphics processors and game developers use a lot of linear algebra
in their daily work.
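An inside test of the kind described above can be sketched as follows in Python (this is our own minimal version; real graphics processors use carefully tuned fixed-point variants). Note that the sign convention depends on the vertex winding order; here the vertices are assumed to be given in an order for which all three edge normals, $\vc{n} = (d_y, -d_x)$, point into the triangle:

```python
def edge(P, A, B):
    """Edge equation for the directed edge A -> B, evaluated at P,
    using the normal n = (d_y, -d_x) from the text."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    return dy * (P[0] - A[0]) - dx * (P[1] - A[1])

def inside(P, V1, V2, V3):
    """True if all three edge equations are >= 0 at P."""
    return (edge(P, V1, V2) >= 0 and
            edge(P, V2, V3) >= 0 and
            edge(P, V3, V1) >= 0)

V1, V2, V3 = (0.0, 0.0), (0.0, 4.0), (4.0, 0.0)
assert inside((1.0, 1.0), V1, V2, V3)       # interior point
assert not inside((3.0, 3.0), V1, V2, V3)   # outside the hypotenuse
```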
Interactive Illustration 3.17:
A screen shot from a computer game called Mirror's Edge Catalyst by DICE. Note that every object
rendered in this image consists of a set of triangles, where the color of each
pixel inside the triangle has been computed using a short program written by the game developer. To determine
whether a pixel is inside a triangle, the graphics processor often uses three
edge equations (one for each edge of the triangle).
(Copyright 2015 Electronic Arts Inc.)
Planes in three dimensions (and higher) are similar to lines in two dimensions in that
they both split their domain into two half-spaces. In Section 3.6.1, we saw that a two-dimensional
line splits the $xy$-plane into a positive half-space and a negative half-space. There
are also two types of representations (one implicit and one explicit) for planes, similar
to lines in two dimensions.
To define a plane on explicit form, i.e., similar to $P(t) = S + t\vc{d}$ for lines,
a starting point, $S$, and two direction vectors, $\vc{d}_1$ and $\vc{d}_2$, are needed.
The direction vectors must not be parallel, i.e., $\vc{d}_1 \neq k \vc{d}_2$ for all values of $k$.
Note that this excludes both the case where the vectors point in the same direction and the case where
they point in opposite directions.
The direction vectors both lie in the plane. Hence, if a point, $P$, is to
lie in the plane, it must hold that
\begin{equation}
\overrightarrow{SP} = t_1\vc{d}_1 + t_2\vc{d}_2
\end{equation}
for some values of the scalars $t_1$ and $t_2$.
Similar to lines, this expression is rewritten as below, where the two
scalars, $t_1$ and $t_2$, have been set as parameters to $P$, i.e.,
\begin{equation}
P(t_1,t_2) = S + t_1\vc{d}_1 + t_2\vc{d}_2.
\end{equation}
This form is illustrated to the right as well.
As can be seen, this is very similar to vector addition. The direction vectors, $\vc{d}_1$ and $\vc{d}_2$,
are scaled by $t_1$ and $t_2$, respectively. The resulting vectors are added together with the starting
point, $S$. This leads to the following definition.
Definition 3.8: A Parameterized Plane
A plane, parameterized by $t_1\in\R$ and $t_2\in\R$, can be described by a starting point, $S$, and two
direction vectors, $\vc{d}_1$ and $\vc{d}_2$.
All points, $P(t_1,t_2)$, on the plane can be described by
\begin{equation}
P(t_1,t_2) = S + t_1\vc{d}_1 + t_2\vc{d}_2.
\end{equation}
If one of $\vc{d}_1$ or $\vc{d}_2$ is $\vc{0}$, or if $\vc{d}_1$ and $\vc{d}_2$ are parallel (either in
the same direction or opposite), then this degenerates into a parameterized line equation.
Interestingly, there is also an implicit form of the plane equation. First, let us write out the plane
equation on component form, as shown below, where $d_{1,y}$ means the $y$-component of $\vc{d}_1$, and so on,
\begin{equation}
\left\{
\begin{array}{l}
p_x = s_x + t_1 d_{1,x} + t_2 d_{2,x}, \\
p_y = s_y + t_1 d_{1,y} + t_2 d_{2,y}, \\
p_z = s_z + t_1 d_{1,z} + t_2 d_{2,z}.
\end{array}
\right.
\tag{3.73}
\end{equation}
Recall that for two-dimensional lines, the parameterized line, $P(t)=S+t\vc{d}$, could be transformed
into a line on implicit form, $\vc{n}\cdot (P-S)=0$. This was done by eliminating $t$, and the same
can be done for Equation (3.73) above, where both $t_1$ and $t_2$ can be eliminated. This can be done
using Gaussian elimination, and there is an example of this in Chapter 5.
Here, we simply state that Equation (3.73) above can be transformed into
\begin{equation}
\vc{n} \cdot (P - S) = 0,
\end{equation}
and refer to
Chapter 5 for more details on how the elimination of $t_1$ and $t_2$ was
done. Note that $\vc{n}$ is the normal of the plane, i.e., it is perpendicular to any vector
that lies in the plane. There is also a relationship between
$\vc{n}$ and the pair $\vc{d}_1$ and $\vc{d}_2$, namely the cross product, which is the topic of
Chapter 4.
Definition 3.9: Implicit Plane Equation
A plane can be represented on implicit form by using a starting point, $S$, and a normal vector, $\vc{n}$.
All points, $P$, in the plane can then be described by
\begin{equation}
e(P) = \vc{n} \cdot (P - S) = 0,
\end{equation}
where $e(P)=0$ when $P$ lies in the plane.
Also, $e(P)>0$ when $P$ lies on the same side of the plane as
the point $S+\vc{n}$, and that is called the positive half-space.
Similarly, if $e(P)<0$, then $P$ lies on the same side of the plane as
the point $S-\vc{n}$. That part of space is called the negative half-space.
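The implicit plane equation makes half-space classification straightforward in code. A minimal Python sketch (the function name is ours):

```python
def plane_eval(P, S, n):
    """e(P) = n . (P - S) for 3D points; the sign selects the half-space."""
    return sum(n[i] * (P[i] - S[i]) for i in range(3))

S = (0.0, 0.0, 0.0)
n = (0.0, 1.0, 0.0)                             # the xz-plane, normal +y
assert plane_eval((3.0, 0.0, -2.0), S, n) == 0  # in the plane
assert plane_eval((0.0, 5.0, 0.0), S, n) > 0    # positive half-space
assert plane_eval((0.0, -5.0, 0.0), S, n) < 0   # negative half-space
```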
Example 3.10: Orthogonal Projection of a Point onto a Plane In this example, we will show how a point, $P$, can be projected orthogonally onto a plane defined
by a normal, $\vc{n}$, and a starting point, $S$. First, this process is shown in
Interactive Illustration 3.19.
Interactive Illustration 3.19:
This illustration will show how a point, $P$ (gray circle), is projected orthogonally down
to a plane defined by
a normal vector, $\vc{n}$, and a starting point, $S$. The point can be moved around in this illustration.
Click/touch Forward to commence the illustration.
Interactive Illustration 3.19:
First, a vector from $S$ to the point, $P$, is created, i.e., $\vc{v}=P-S$.
Interactive Illustration 3.19:
The vector $\vc{v}=P-S$ is then projected onto the normal, $\vc{n}$, i.e.,
$\proj{\vc{n}}{\vc{v}}$ is created.
Interactive Illustration 3.19:
The projected vector $\proj{\vc{n}}{\vc{v}}$ is then subtracted from $\vc{v}$, which creates a point in the plane.
This point will appear in the next step of this illustration. Press/touch Forward.
Interactive Illustration 3.19:
Finally, the point $P$ has been projected orthogonally to the plane, and the projected point, $Q$,
is shown as a red circle.
The expression for the projected point, $Q$, is simply
\begin{equation}
Q = P - \proj{\vc{n}}{\vc{v}},
\tag{3.77}
\end{equation}
where $\vc{v}=P-S$.
Since the projection formula is $\proj{\vc{n}}{\vc{v}} = \bigl( (\vc{v} \cdot \vc{n})/(\ln{\vc{n}}^2) \bigr)\vc{n}$,
i.e., a scalar times $\vc{n}$, we know that $\vc{n}$ is the only vector that is used in creating $Q$ (except for vectors
involved in computing scalar values). Therefore, we know that $Q$ will be projected along a direction orthogonal to the plane.
In fact, since the projection of $\vc{v}$ onto $\vc{n}$ was used to move the point $P$, the point $Q$
must also lie in the plane. However, we can also prove this by entering $Q$ into the plane equation,
i.e., testing whether $\vc{n} \cdot (Q-S)=0$. This is done below:
\begin{equation}
\vc{n}\cdot(Q-S) = \vc{n}\cdot\bigl(\vc{v} - \proj{\vc{n}}{\vc{v}}\bigr) = \vc{n}\cdot\vc{v} - \frac{\vc{v}\cdot\vc{n}}{\ln{\vc{n}}^2}\,(\vc{n}\cdot\vc{n}) = \vc{n}\cdot\vc{v} - \vc{v}\cdot\vc{n} = 0.
\end{equation}
Note that projecting a point (or vector) onto a plane is also similar to computing the reflection vector,
which is the topic of Example 3.13 below.
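The projection derived in this example can be sketched in Python as follows (the function names are ours; $\vc{n}$ need not be normalized):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_point_onto_plane(P, S, n):
    """Q = P - proj_n(v) with v = P - S, following Example 3.10."""
    v = [P[i] - S[i] for i in range(3)]
    k = dot(v, n) / dot(n, n)
    return tuple(P[i] - k * n[i] for i in range(3))

S, n = (0.0, 0.0, 0.0), (0.0, 2.0, 0.0)   # plane y = 0 (n not normalized)
Q = project_point_onto_plane((1.0, 2.0, 3.0), S, n)
assert Q == (1.0, 0.0, 3.0)
assert dot(n, [Q[i] - S[i] for i in range(3)]) == 0.0  # Q lies in the plane
```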
Next follows an example, where both line equations and plane equations are used.
Example 3.11: Shadow Projection on a Plane In this example, we will assume that there is a light source located at $L$, and
there is also a triangle with three vertex positions, $V_i$, $i\in \{1,2,3\}$.
The triangle will cast a shadow onto a plane, defined by a starting point, $S$,
and a normal, $\vc{n}$, i.e., the plane equation is: $\vc{n} \cdot (P-S)=0$ for all points, $P$,
lying in the plane.
The entire process is illustrated in Interactive Illustration 3.20, and after the reader has
explored it, the math will be derived.
Interactive Illustration 3.20:
This illustration shows how a shadow, projected onto a plane, can be calculated.
Initially, there is a light source (located at $L$), illustrated as a yellow circle,
a triangle with three vertices, $V_i$, and a ground plane. The plane
is described as: $\vc{n}\cdot(P-S)=0$.
Click/touch Forward to commence the illustration.
Interactive Illustration 3.20:
First, direction vectors, $\vc{d}_i$, are created from the light source, $L$,
to the triangle vertices, $V_i$.
Interactive Illustration 3.20:
The intersection points are then calculated using dot products with the mathematical
representations of a line (for the ray from the light source to the triangle vertices)
and a plane. The derivation for this is shown below this illustration.
Click/touch Forward to make the shadow appear.
Interactive Illustration 3.20:
Finally, a shadow triangle is shown.
Note that the light source position can be moved around in this illustration.
Be careful, however: in certain situations, unpredictable illustrations may occur.
Still, it is interesting to move the light source so that it is located under the
triangle. The current calculations still produce a shadow (sometimes called an anti-shadow),
even though this would not be physically correct.
To calculate where the shadow "lands" on the plane, we need to create one ray per vertex.
All three rays will start at the light source position, $L$, and the direction per vertex
will be: $\vc{d}_i = V_i - L$, i.e., the ray direction is formed by the line segment from
$L$ to $V_i$. Hence, the line equations for the rays will be
\begin{equation}
R_i(t) = L + t \vc{d}_i, \text{ where } \vc{d}_i = V_i - L, \text{ for } i\in \{1,2,3\}.
\tag{3.79}
\end{equation}
Now, what we really are looking for is when these rays "hit" the plane, i.e., we need to set up
an expression which uses the line equations, $R_i(t)$, and the plane equation, $\vc{n}\cdot(P-S)=0$
at the same time. Since the only points, $P$, that lie in the plane fulfill $\vc{n}\cdot(P-S)=0$,
and because we also want those points $P$ to lie on the line equation, we can simply replace
$P$ by $R_i(t)$ in the plane equation and simplify the expression. This is done below.
\begin{equation}
\begin{array}{c}
\left.
\begin{array}{l}
R_i(t) = L + t \vc{d}_i \\
\vc{n}\cdot(P-S)=0
\end{array}
\right\} \Longrightarrow \vc{n}\cdot(R_i(t)-S) =0 \\
\Longleftrightarrow \\
\vc{n}\cdot(L + t \vc{d}_i-S) = 0 \\
\Longleftrightarrow \\
\vc{n}\cdot (L-S) + t (\vc{n}\cdot\vc{d}_i) = 0
\end{array}
\tag{3.80}
\end{equation}
As can be seen, this is a first-degree polynomial in $t$, which has the following solution (where
we have now added the subscript $i$ to $t$ in order to clearly show that there is one solution
per triangle vertex), i.e.,
\begin{equation}
t_i = -\frac{\vc{n}\cdot(L-S)}{\vc{n}\cdot\vc{d}_i}.
\end{equation}
Division by zero must be avoided, so let us take a closer look at the denominator, $\vc{n}\cdot\vc{d}_i$,
which is zero only when $\vc{n} \perp \vc{d}_i$. This makes sense, because if the ray direction is
parallel to the plane, the ray cannot hit the plane at all. Alternatively, the ray may
lie exactly in the plane, in which case there are an infinite number of solutions.
However, this would mean that the light source would be located in the (ground) plane
as well as the triangle vertex, which is not a situation that is likely to happen (at least in real life).
Anyway, the points of intersection are calculated as $R_i(t_i) = L + t_i\vc{d}_i$. The shadow triangle
is then formed from $R_1$, $R_2$, and $R_3$, which is exactly what was done in order to create
Interactive Illustration 3.20.
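The shadow computation above can be sketched in Python as follows (the function names are ours):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shadow_point(L, V, S, n):
    """Intersect the ray R(t) = L + t*d, d = V - L, with the plane
    n . (P - S) = 0, i.e., t = -n.(L - S) / (n.d).  Assumes n.d != 0."""
    d = [V[i] - L[i] for i in range(3)]
    t = -dot(n, [L[i] - S[i] for i in range(3)]) / dot(n, d)
    return tuple(L[i] + t * d[i] for i in range(3))

# Light above the ground plane y = 0; one triangle vertex halfway down.
L = (0.0, 4.0, 0.0)
S, n = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
assert shadow_point(L, (1.0, 2.0, 0.0), S, n) == (2.0, 0.0, 0.0)
```

Running this per vertex $V_i$ yields the three corners of the shadow triangle.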
In the introduction of this chapter (Section 3.1), a graphics program,
called a ray tracer, is shown in Interactive Illustration 3.1.
Such a program is rather straightforward to write, once some knowledge about linear algebra has been obtained.
At the core of a ray tracer, there is a visibility function that determines which object a ray can "see".
An example is shown to the right. In a ray tracer, one creates a virtual viewer, which has a position, and
is looking in a certain direction. The ray tracer then computes an image from that position in that direction.
The view position is the starting point of the blue ray to the right. A set of rays is then created. In the simplest
case, one ray per pixel in the image plane is created. It is then up to the ray tracing program to examine
the relevant objects in the scene, and compute whether a ray through a pixel hits an object, and also to find the closest object.
For the ray in Figure 3.21, we can see that the ray hits two of the three circles. However, since the
yellow circle is closer, the pixel is colored yellow. For the pixel above, however, the corresponding ray that
goes through that pixel will hit the green circle, and the pixel is therefore colored green.
To generate images with shadows, reflections, and refraction, more rays may be shot from the first intersection
point between the yellow circle and the ray.
In this section, two examples, which relate to ray tracing, will be shown. The first shows how to compute
the intersections between a three-dimensional ray and a three-dimensional sphere.
The second example shows how a vector can be reflected in a surface, whose normal is known.
Both of these examples use the dot product as a major tool.
Example 3.12: Ray-Sphere Intersection A sphere can be defined by a radius, $r$, and a center point, $C$. The sphere surface is then described
as all points, $P$, whose distance from the center, $C$, is equal to the radius, $r$. This can be expressed as
\begin{equation}
\ln{P - C} = r.
\tag{3.82}
\end{equation}
As seen in Section 3.6.1, a three-dimensional line, which we also call a ray here, is
often parameterized
by a parameter, $t$, and it has an origin or starting point, $S$, and a direction, $\vc{d}$.
The ray on explicit form is then (see Definition 3.6)
\begin{equation}
R(t) = S + t\vc{d}.
\tag{3.83}
\end{equation}
Now, if $R(t)$ and $P$ are the same, then the ray hits the sphere in that point.
Therefore, we replace $P$ with the ray equation, $R(t)$, and simplify to get
\begin{gather}
\ln{P - C} = r \\
\Longleftrightarrow \\
\ln{S + t\vc{d} - C} = r \\
\Longleftrightarrow \\
(S + t\vc{d} - C) \cdot (S + t\vc{d} - C)= r^2 \\
\Longleftrightarrow \\
t^2(\vc{d}\cdot\vc{d}) + 2t(\vc{d}\cdot(S-C)) + (S-C)\cdot(S-C)-r^2 = 0 \\
\Longleftrightarrow \\
at^2 + 2bt +c =0,
\tag{3.84}
\end{gather}
where $a=\vc{d}\cdot\vc{d}$, $b=\vc{d}\cdot(S-C)$, and $c=(S-C)\cdot(S-C)-r^2$.
As can be seen, this turned into a second degree polynomial, which can be solved analytically, i.e.,
\begin{equation}
t = \frac{-b \pm \sqrt{b^2 -ac}}{a}.
\tag{3.85}
\end{equation}
Note that if $\vc{d}$ is normalized, i.e., $\ln{\vc{d}}=1$, then $t$ is the distance from the origin, $S$,
along the ray to the intersection point(s) between the sphere and the ray. However,
it must also hold that $b^2 -ac \geq 0$, otherwise $t$ will become a complex number, and there is
no straightforward interpretation of a complex distance $t$ along a ray. Hence, the ray does
not intersect the sphere when $b^2 -ac < 0$. As can be seen, there can be two solutions, $t_1$ and $t_2$,
and these correspond to the entry point and exit point, i.e., the ray first intersects the sphere
in an entry point, and the ray can exit the sphere in another point. These points are computed
as $R(t_1)$ and $R(t_2)$. If $t_1=t_2$ the ray just touches the sphere in a single point.
This is all shown in Interactive Illustration 3.22 below.
Interactive Illustration 3.22:
A ray, $R(t) = S + t\vc{d}$ is tested for intersection against a circle.
The ray direction and the circle center can be moved by clicking/touching, pressing and moving around.
As we saw above, the derivation ended up in a second-degree polynomial, which may have at most two
solutions, $t_1$ and $t_2$. These can be used to create two points, $R(t_1)$ and $R(t_2)$, which
are the two points of intersection. The two points (when they exist) are shown as a red and a green filled circle.
The reader is encouraged to explore what happens to the intersection points when the ray origin is
inside the circle, and also attempt to make $R(t_1)$ and $R(t_2)$ come as close as possible to each other.
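The ray-sphere derivation in Example 3.12 can be sketched in Python as follows (the function names are ours):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(S, d, C, r):
    """Solve a*t^2 + 2*b*t + c = 0 with a = d.d, b = d.(S-C), and
    c = (S-C).(S-C) - r^2.  Returns (t1, t2), or None when the
    discriminant b^2 - a*c is negative (the ray misses the sphere)."""
    sc = [S[i] - C[i] for i in range(3)]
    a, b, c = dot(d, d), dot(d, sc), dot(sc, sc) - r * r
    disc = b * b - a * c
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    return ((-b - root) / a, (-b + root) / a)

S, d, C, r = (0.0, 0.0, -5.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 1.0
assert ray_sphere(S, d, C, r) == (4.0, 6.0)           # entry and exit
assert ray_sphere((0.0, 5.0, -5.0), d, C, r) is None  # miss
```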
Example 3.13: Law of Reflection As seen in the image in the introduction (Section 3.1) of
this chapter, the reflected image in a sphere
can be computed. To be able to do that, one needs to compute the reflected vector, which
can be done using dot products. We also need the law of reflection, which says that
the angle of incidence is equal to the angle of reflection.
This is shown for three-dimensional vectors in Interactive Illustration 3.23.
Interactive Illustration 3.23:
This illustration will show how the reflected vector, $\vc{r}$, can be computed, given that we know
the incident vector, $\vc{i}$, and the normal vector, $\vc{n}$, at the hit point.
Here, the incident ray, $\vc{i}$, points towards a gray point.
Click/touch Forward to
commence the illustration.
Interactive Illustration 3.23:
The normal vector, $\vc{n}$, at the point where the incident ray hits the plane is shown as a black arrow.
Interactive Illustration 3.23:
In this step, we have simply added a hemicircle, in order to more easily see the plane
that the vectors, $\vc{i}$ and $\vc{n}$, lie in. This is easy to verify by moving the view point (right button,
or swipe with two fingers).
Interactive Illustration 3.23:
To compute the reflection vector, we start by moving the incident vector, $\vc{i}$, so its starting point
is at the hit point, which in this case is the origin.
Interactive Illustration 3.23:
Next, we project $\vc{i}$ onto $\vc{n}$ using the projection formula: $\proj{\vc{n}}{\vc{i}}$.
This gives us the cyan vector.
Interactive Illustration 3.23:
This projected vector is negated, and added to $\vc{i}$.
Interactive Illustration 3.23:
To reach the reflected vector, we simply add the negated projected vector once more.
Interactive Illustration 3.23:
Here, the reflected vector, $\vc{r},$ is shown in red. Click/touch Forward once more to see the vectors that
have been added to reach the reflected vector, $\vc{r}$.
Interactive Illustration 3.23:
Here the orange vectors show which vectors have been added to construct the reflected vector, $\vc{r}$, which is red
in the figure.
Now that we have seen how the reflected vector (Interactive Illustration 3.23)
is constructed geometrically,
let us see how the reflected vector is expressed mathematically, i.e.,
\begin{equation}
\vc{r} = \vc{i} - 2\,\proj{\vc{n}}{\vc{i}} = \vc{i} - 2(\vc{i}\cdot\vc{n})\vc{n},
\end{equation}
where the last equality holds when $\vc{n}$ is normalized.
Note that $\vc{r}$ must lie in the same plane as spanned by $\vc{i}$ and $\vc{n}$,
since $2(\vc{i}\cdot\vc{n})$ is a scalar and hence only a scaled version of $\vc{n}$ has been added to $\vc{i}$.
Let us also show that the incident angle is equal to the reflected angle, i.e.,
$[-\vc{i},\vc{n}] = [\vc{r},\vc{n}]$ (note the minus sign on $\vc{i}$, which is needed since $\vc{i}$
points in towards the point, while $\vc{r}$ points away).
For simplicity, let us assume that $\vc{i}$ and $\vc{n}$ are normalized.
This means that $\cos [-\vc{i},\vc{n}] = -\vc{i} \cdot \vc{n}$. The dot product between the reflected
vector, $\vc{r}$, and the normal vector, $\vc{n}$, can be expressed and simplified as
\begin{equation}
\vc{r}\cdot\vc{n} = \bigl(\vc{i} - 2(\vc{i}\cdot\vc{n})\vc{n}\bigr)\cdot\vc{n}
= \vc{i}\cdot\vc{n} - 2(\vc{i}\cdot\vc{n})(\vc{n}\cdot\vc{n}) = -\vc{i}\cdot\vc{n},
\end{equation}
where we used $\vc{n}\cdot\vc{n} = \ln{\vc{n}}^2 = 1$.
As can be seen, the cosines of the incident angle and the reflected angle are the same, and
since only the smallest (and positive) angle between two vectors is measured by the
dot product, and the cosine is unique on $[0,\pi]$, the angles must be the same as well.
This can be proven also for incident and normal vectors of arbitrary lengths. This is, however, left as
an exercise.
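The construction above can also be checked numerically. The following Python sketch (illustrative only; the helper name `reflect` is ours, not from the text) implements $\vc{r} = \vc{i} - 2(\vc{i}\cdot\vc{n})\vc{n}$ for a normalized $\vc{n}$, and verifies that the incident and reflected angles agree:

```python
import math

def reflect(i, n):
    # Reflect the incident vector i about the normalized normal n:
    # r = i - 2 (i . n) n. Both i and n are 3-tuples; n must be unit length.
    d = sum(a * b for a, b in zip(i, n))
    return tuple(a - 2.0 * d * b for a, b in zip(i, n))

# Example: a ray falling at 45 degrees onto the plane y = 0.
i = (1.0, -1.0, 0.0)
n = (0.0, 1.0, 0.0)
r = reflect(i, n)          # (1.0, 1.0, 0.0)

# The incident and reflected angles agree: cos[-i, n] = cos[r, n].
# Reflection preserves length, so |r| = |i|.
len_i = math.sqrt(sum(a * a for a in i))
cos_in = -sum(a * b for a, b in zip(i, n)) / len_i
cos_rn = sum(a * b for a, b in zip(r, n)) / len_i
```

Here the equality of `cos_in` and `cos_rn` is exactly the statement $\cos[-\vc{i},\vc{n}] = \cos[\vc{r},\vc{n}]$ proven above.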
Popup-help:
A basis (bases in plural) is a set of linearly independent (basis) vectors, such that each vector in the space
can be described as a linear combination of the basis vectors.
If the basis is $\{\vc{e}_1,\vc{e}_2,\vc{e}_3\}$, then any vector $\vc{v}$ in the space can be described by
three numbers $(x,y,z)$ such that $\vc{v} = x\vc{e}_1 +y\vc{e}_2 +z\vc{e}_3$. Often the basis is implicit,
in which case we write $\vc{v} = (x,y,z)$, or to make it possible to distinguish between vector elements from
one vector $\vc{v}$ to another $\vc{u}$, we may use $\vc{v}=(v_x,v_y,v_z)$ and $\vc{u}=(u_x,u_y,u_z)$ as notation.
Popup-help:
The Cauchy-Schwarz inequality states that $(\vc{u} \cdot \vc{v})^2 \leq \ln{\vc{u}}^2\,\ln{\vc{v}}^2$ for vectors in $\R^n$.
Popup-help:
A mapping $F$ is a rule, written $F: N \rightarrow M$,
that for every item in one set $N$
provides one item in another set $M$. Here $N$ is the domain and $M$ is the codomain.
Popup-help:
The dot product, also called scalar product, is a scalar value denoted $\vc{u}\cdot \vc{v}$:
\begin{equation}
\vc{u}\cdot \vc{v} = \ln{\vc{u}}\,\ln{\vc{v}} \cos[\vc{u},\vc{v}],
\end{equation}
where $[\vc{u},\vc{v}]$ is the smallest angle between $\vc{u}$ and $\vc{v}$.
The vectors are orthogonal (perpendicular) to each other if $\vc{u} \cdot \vc{v} = 0$, and vice versa.
We also have $\vc{u} \cdot \vc{v}>0 \Leftrightarrow 0 < [\vc{u},\vc{v}] < \pi/2$ and
$\vc{u} \cdot \vc{v}<0 \Leftrightarrow \pi/2 < [\vc{u},\vc{v}] \leq \pi$.
In an orthonormal basis, we have $\vc{u}\cdot\vc{v} = \sum_{i=1}^{n} u_i v_i$, where $n$ is the number
of dimensions. For three dimensions, for example, we have $\vc{u}\cdot\vc{v} = u_xv_x + u_yv_y +u_zv_z$.
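As a concrete sketch of these facts in code (the helper name `dot` is ours), the component-wise formula recovers both the sign test and the angle via $\vc{u}\cdot\vc{v} = \ln{\vc{u}}\,\ln{\vc{v}}\cos[\vc{u},\vc{v}]$:

```python
import math

def dot(u, v):
    # Component-wise dot product in an orthonormal basis.
    return sum(a * b for a, b in zip(u, v))

u = (1.0, 2.0, 2.0)
v = (2.0, 0.0, 0.0)
d = dot(u, v)              # 2.0 > 0, so the angle [u, v] is acute
# Recover the angle itself from u . v = |u| |v| cos[u, v].
angle = math.acos(d / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))))
```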
Popup-help:
Gaussian elimination is the process where a system of linear equations is reduced so that each row
has one fewer unknown than the row above, until the bottommost row reads $kx_n=l$, i.e., $x_n=l/k$. After that, the rest of the
unknowns can be recovered by back substitution.
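For the smallest interesting case, a $2\times 2$ system, the two steps (eliminate, then back-substitute) can be sketched as follows; this is an illustrative snippet with a function name of our own choosing, not the book's algorithm:

```python
def solve2x2(a, b, c, d, e, f):
    # Solve  a x + b y = e
    #        c x + d y = f
    # by Gaussian elimination: remove x from row 2, then back-substitute.
    m = c / a                       # multiplier applied to row 1
    d2, f2 = d - m * b, f - m * e   # row 2 after elimination: d2 * y = f2
    y = f2 / d2                     # bottommost row gives y directly
    x = (e - b * y) / a             # back substitution into row 1
    return x, y

# Example: x + 2y = 5 and 3x + 4y = 11 has solution x = 1, y = 2.
x, y = solve2x2(1.0, 2.0, 3.0, 4.0, 5.0, 11.0)
```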
Popup-help:
On parameterized form, a line is defined by a starting point, $S$, a line direction, $\vc{d}$, and a scalar
parameter, $t$, such that any point on the line can be obtained from the line equation, i.e.,
$P(t) = S + t\vc{d}$. This representation works in any dimension.
On implicit form, a line is defined by a starting point, $S$, and a normal vector, $\vc{n}$. For any point, $P$,
on the line, it holds that $\vc{n}\cdot (P-S) = 0$. This representation works only in two dimensions.
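Both representations can be sketched in a few lines of Python (illustrative helpers of our own naming); note that a point generated by the parameterized form satisfies the implicit form when $\vc{n}\perp\vc{d}$:

```python
def line_point(S, d, t):
    # Parameterized form: P(t) = S + t d, valid in any dimension.
    return tuple(s + t * di for s, di in zip(S, d))

def on_line_implicit(n, S, P, eps=1e-9):
    # 2D implicit form: n . (P - S) = 0 exactly when P lies on the line.
    return abs(sum(ni * (pi - si) for ni, si, pi in zip(n, S, P))) < eps

S = (1.0, 1.0)
d = (2.0, 1.0)           # line direction
n = (-1.0, 2.0)          # a normal: n . d = 0
P = line_point(S, d, 3.0)   # (7.0, 4.0)
```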
Popup-help:
A system of equations is called linear if it only contains polynomial terms of the zeroth and first order,
that is, either constants or first-order terms, such as $9x$, $-2y$, and $0.5z$.
Popup-help:
A normalized vector, $\vc{u}$, is created from another vector, $\vc{v}$, as shown below:
\begin{equation}
\vc{u} = \frac{1}{\ln{\vc{v}}}\vc{v}.
\end{equation}
A normalized vector is also called a unit vector, since its length is 1, i.e., $\ln{\vc{u}}=1$.
Popup-help:
A normal, $\vc{n}$, is part of the description of the two-dimensional line and the three-dimensional plane on implicit form.
A plane equation is
\begin{equation}
\vc{n}\cdot (P-S)=0,
\end{equation}
where $\vc{n}$ is a vector that is orthogonal to the plane, $P$ is any point on the plane,
and $S$ is a known point on the plane.
Similarly, a line is $\vc{n}\cdot (P-S)=0$, where $\vc{n}$ is a vector orthogonal to the line, $P$ is any point on the line, and
$S$ is a particular point on the line. The plane can also be written as $ax+by+cz+d=0$ and a line can be written
as $ax+by+c=0$.
Popup-help:
Orthogonality is when, for example, two lines make a right angle. We say that the lines are orthogonal (to each other).
Two vectors are orthogonal when they make an angle of $90^\circ$ or $\pi/2$ radians, i.e., $[\vc{u},\vc{v}]=\pi/2$.
This is also denoted $\vc{u} \perp \vc{v}$, which illustrates that the vectors are orthogonal,
which is the same as perpendicular.
Note also that the dot product is zero if two vectors are orthogonal, i.e., $\vc{u}\cdot\vc{v}=0$.
An orthogonal matrix $\mx{B}$ is a square matrix whose column vectors constitute an orthonormal basis.
Popup-help:
Loosely speaking, orthogonal projection is such that a right angle (orthogonal) is formed between the
projected object and the vector from the projected object to the object itself.
More strictly, if $\vc{v}$ is a non-zero vector, then the orthogonal projection of $\vc{u}$ onto $\vc{v}$ is denoted $\proj{\vc{v}}{\vc{u}}$,
and is defined by
Note that if $\ln{\vc{v}}=1$, i.e., $\vc{v}$ is normalized, then the expression for projection gets simpler:
$\proj{\vc{v}}{\vc{u}} = (\vc{u} \cdot \vc{v})\vc{v}$.
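The projection formula translates directly into code. This Python sketch (the helper name `project` is ours) computes $\proj{\vc{v}}{\vc{u}} = \frac{\vc{u}\cdot\vc{v}}{\ln{\vc{v}}^2}\vc{v}$:

```python
def project(u, v):
    # Orthogonal projection of u onto the non-zero vector v:
    # proj_v(u) = (u . v) / |v|^2 * v.
    uv = sum(a * b for a, b in zip(u, v))
    vv = sum(b * b for b in v)     # |v|^2 = v . v
    return tuple(uv / vv * b for b in v)

# Projecting (3, 4) onto the x-axis keeps only the x component.
p = project((3.0, 4.0), (1.0, 0.0))   # (3.0, 0.0)
```

The residual $\vc{u}-\proj{\vc{v}}{\vc{u}}$, here $(0,4)$, is orthogonal to $\vc{v}$, which is the defining right angle of the projection.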
Popup-help:
An $n$-dimensional orthonormal basis consists of $n$ basis vectors $\vc{e}_i$, such that
$\vc{e}_i\cdot \vc{e}_j$ is 0 if $i\neq j$ and 1 if $i=j$. The standard basis is orthonormal.
In two dimensions, we can see that this is true
since $\vc{e}_1\cdot\vc{e}_2=(1,0)\cdot(0,1)=0$ and $\vc{e}_i\cdot\vc{e}_i=1$.
Popup-help:
A parallelogram is a quadrilateral (polygon with four vertices), where the opposite sides are of equal length.
This makes the opposite angles inside the parallelogram equal as well, and the opposite sides are indeed also parallel.
Popup-help:
On parameterized form (also called explicit form), a plane equation is given by $P(t_1, t_2) = S + t_1 \vc{d}_1 + t_2\vc{d}_2$,
where $S$ is a point in the plane and $\vc{d}_1$ and $\vc{d}_2$ are two non-parallel non-zero vectors
on the plane. Using the parameters $t_1$ and $t_2$, any point $P(t_1,t_2)$ can be reached.
On implicit form, a plane equation is given by $\vc{n}\cdot(P-S)=0$, where $\vc{n}$ is the normal of
the plane and $S$ is a point on the plane. Only for points, $P$, that lie in the plane is the
formula equal to $0$.
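The two plane forms can be sketched together in Python (illustrative helpers of our own naming); a point generated by the parameterized form satisfies the implicit form when $\vc{n}$ is orthogonal to both $\vc{d}_1$ and $\vc{d}_2$:

```python
def plane_point(S, d1, d2, t1, t2):
    # Parameterized form: P(t1, t2) = S + t1 d1 + t2 d2.
    return tuple(s + t1 * a + t2 * b for s, a, b in zip(S, d1, d2))

def on_plane_implicit(n, S, P, eps=1e-9):
    # Implicit form: n . (P - S) = 0 exactly when P lies in the plane.
    return abs(sum(ni * (pi - si) for ni, si, pi in zip(n, S, P))) < eps

# The plane z = 1, spanned by the x and y directions.
S = (0.0, 0.0, 1.0)
d1 = (1.0, 0.0, 0.0)
d2 = (0.0, 1.0, 0.0)
n = (0.0, 0.0, 1.0)      # orthogonal to both d1 and d2
P = plane_point(S, d1, d2, 2.0, 3.0)   # (2.0, 3.0, 1.0)
```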
Popup-help:
Projection is a mapping from one set onto a smaller set (i.e., a subset) of that set. For example, if a 3D point
is projected to the closest point on a plane, then the starting set was all 3D points, which was reduced to
the points in the plane by the projection.
Popup-help:
The triangle inequality states that $\ln{\vc{u} + \vc{v}} \leq \ln{\vc{u}} + \ln{\vc{v}}$ for vectors in $\R^n$.
Popup-help:
A unit vector, $\vc{u}$, is a vector such that its length is 1, i.e., $||\vc{u}||=1$. The process of creating a unit
vector from another vector is called normalization, and a unit vector is therefore often also called a normalized vector.
To generate a unit vector from an arbitrary vector, $\vc{v}$, simply divide by its length: $\vc{u} = \frac{1}{\ln{\vc{v}}}\vc{v}$.
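Normalization is a one-liner in code; this Python sketch (the helper name `normalize` is ours) divides a vector by its length:

```python
import math

def normalize(v):
    # Divide v by its length |v| to obtain a unit vector.
    length = math.sqrt(sum(a * a for a in v))
    return tuple(a / length for a in v)

# The vector (3, 4) has length 5, so its normalized form is (0.6, 0.8).
u = normalize((3.0, 4.0))
```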
Popup-help:
Adding two vectors, $\vc{u}$ and $\vc{v}$, forms a new vector, which is denoted $\vc{u}+\vc{v}$.
This is done by translating $\vc{u}$ so that its tail point is at the tip point of $\vc{v}$,
or vice versa, since vector addition is commutative, i.e., $\vc{u}+\vc{v} = \vc{v}+\vc{u}$.
Popup-help:
The length of a vector, $\vc{v}$, is denoted $\ln{\vc{v}}$. In an orthonormal basis,
the squared length is $\ln{\vc{v}}^2 = \vc{v}\cdot\vc{v}$, i.e., $\ln{\vc{v}} = \sqrt{\vc{v}\cdot\vc{v}}$.
For a three-dimensional vector, for example, this simplifies to $\ln{\vc{v}} = \sqrt{v_x^2 + v_y^2 + v_z^2}$.
Popup-help:
The vector product, aka cross product, between two three-dimensional vectors, $\vc{u}$ and $\vc{v}$, is another
three-dimensional vector, denoted $\vc{u} \times \vc{v}$, which is orthogonal to both $\vc{u}$ and $\vc{v}$,
and the length is $\ln{\vc{u}}\ln{\vc{v}}\sin [\vc{u},\vc{v}]$. Also, the vectors $\vc{u}$, $\vc{v}$,
and $\vc{u} \times \vc{v}$ are positively oriented.
In an orthonormal basis, the vector product simplifies to:
$\vc{u} \times \vc{v} = (u_y v_z - u_z v_y, \, u_z v_x - u_x v_z, \, u_x v_y- u_y v_x)$.
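The component formula for the cross product is easily sketched in Python (the helper name `cross` is ours); for the standard basis, $\vc{e}_1 \times \vc{e}_2 = \vc{e}_3$, matching the positive orientation:

```python
def cross(u, v):
    # Cross product of two 3D vectors in an orthonormal basis;
    # the result is orthogonal to both u and v.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

w = cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))   # (0.0, 0.0, 1.0)
```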
Popup-help:
The zero vector, $\vc{0}$, has length 0, and can be created from one point, $P$, as $\vc{0}=\overrightarrow{PP}.$