- Source: Comparison of vector algebra and geometric algebra
Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations.
Both vector algebra and geometric algebra apply in any number of dimensions and any signature, notably 3+1 spacetime as well as 2 dimensions.
Basic concepts and operations
Geometric algebra (GA) is an extension or completion of vector algebra (VA). The reader is herein assumed to be familiar with the basic concepts and operations of VA, and this article will mainly concern itself with operations in $\mathcal{G}_3$, the GA of 3D space (nor is this article intended to be mathematically rigorous). In GA, vectors are not normally written boldface as the meaning is usually clear from the context.
The fundamental difference is that GA provides a new product of vectors called the "geometric product". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2, and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated $I$.
The ungeneralized 3D vector form of the geometric product is
$$ab = a \cdot b + a \wedge b,$$
that is, the sum of the usual dot (inner) product and the outer (exterior) product (this last is closely related to the cross product and will be explained below).
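To make the geometric product concrete, here is a minimal numerical sketch (not part of the source article): a Euclidean geometric product on $\mathcal{G}_3$ multivectors stored as 8-component arrays indexed by basis-blade bitmask. The helper names (`gp`, `reorder_sign`, `vec`) are ours, chosen for illustration.

```python
import numpy as np

# Multivectors of G_3 as length-8 arrays indexed by blade bitmask:
# index 0 -> scalar, 1 -> e1, 2 -> e2, 4 -> e3,
# 3 -> e1e2, 5 -> e1e3, 6 -> e2e3, 7 -> e1e2e3 (the pseudoscalar I).

def reorder_sign(a, b):
    """Sign from reordering the product of basis blades a and b
    (bitmasks) into canonical order; each swap of adjacent basis
    vectors contributes a factor of -1."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    """Geometric product in Euclidean G_3 (e_i^2 = +1): shared basis
    vectors annihilate (XOR of bitmasks), with the reordering sign."""
    out = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def vec(v):
    """Embed an R^3 vector as a grade-1 multivector."""
    out = np.zeros(8)
    out[[1, 2, 4]] = v
    return out

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)
ab = gp(vec(a), vec(b))

# Grade-0 part is the dot product; grade-2 part is the wedge a ^ b.
assert np.isclose(ab[0], np.dot(a, b))
wedge = {3: a[0]*b[1] - a[1]*b[0],   # e1e2 coefficient
         5: a[0]*b[2] - a[2]*b[0],   # e1e3 coefficient
         6: a[1]*b[2] - a[2]*b[1]}   # e2e3 coefficient
assert all(np.isclose(ab[k], c) for k, c in wedge.items())

# The pseudoscalar squares to -1, as discussed below.
I = np.zeros(8); I[7] = 1.0
assert np.isclose(gp(I, I)[0], -1.0)
```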
In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and pseudoscalar respectively exist naturally as subspaces of the algebra.
For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternately considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the exterior product and the geometric product.
Translations between formalisms
Here are some comparisons between standard $\mathbb{R}^3$ vector relations and their corresponding exterior product and geometric product equivalents. All the exterior and geometric product equivalents here are good for more than three dimensions, and some also for two. In two dimensions the cross product is undefined even if what it describes (like torque) is perfectly well defined in a plane without introducing an arbitrary normal vector outside of the space.
Many of these relationships only require the introduction of the exterior product to generalize, but since that may not be familiar to somebody with only a background in vector algebra and calculus, some examples are given.
= Cross and exterior products =
$\mathbf{u} \times \mathbf{v}$ is perpendicular to the plane containing $\mathbf{u}$ and $\mathbf{v}$. $\mathbf{u} \wedge \mathbf{v}$ is an oriented representation of the same plane.
We have the pseudoscalar $I = e_1 e_2 e_3$ (right-handed orthonormal frame), and so
$$e_1 I = I e_1 = e_2 e_3$$
returns a bivector, and
$$I(e_2 \wedge e_3) = I e_2 e_3 = -e_1$$
returns a vector perpendicular to the $e_2 \wedge e_3$ plane.
This yields a convenient definition for the cross product of traditional vector algebra:
$$u \times v = -I(u \wedge v)$$
(this is antisymmetric). Relevant is the distinction between polar and axial vectors in vector algebra, which is natural in geometric algebra as the distinction between vectors and bivectors (elements of grade two).
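A quick numerical check of this duality (our sketch, with helper names ours): under $-I$, the bivector coefficients $(c_{12}, c_{13}, c_{23})$ of $u \wedge v$ map to the vector $(c_{23}, -c_{13}, c_{12})$, which should reproduce numpy's cross product.

```python
import numpy as np

def wedge3(u, v):
    """Bivector coefficients (c12, c13, c23) of u ^ v in R^3,
    where cij = u_i v_j - u_j v_i."""
    return np.array([u[0]*v[1] - u[1]*v[0],
                     u[0]*v[2] - u[2]*v[0],
                     u[1]*v[2] - u[2]*v[1]])

def dual(b):
    """-I applied to a bivector: e2e3 -> e1, e1e3 -> -e2, e1e2 -> e3."""
    c12, c13, c23 = b
    return np.array([c23, -c13, c12])

rng = np.random.default_rng(1)
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(dual(wedge3(u, v)), np.cross(u, v))
```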
The $I$ here is a unit pseudoscalar of Euclidean 3-space, which establishes a duality between the vectors and the bivectors, and is named so because of the expected property
$$I^2 = (e_1e_2e_3)^2 = e_1e_2e_3e_1e_2e_3 = -e_1e_2e_1e_3e_2e_3 = e_1e_1e_2e_3e_2e_3 = -e_3e_2e_2e_3 = -1$$
The equivalence of the $\mathbb{R}^3$ cross product and the exterior product expression above can be confirmed by direct multiplication of $-I = -e_1e_2e_3$ with a determinant expansion of the exterior product
$$u \wedge v = \sum_{1 \le i < j \le 3} (u_i v_j - v_i u_j)\, e_i \wedge e_j = \sum_{1 \le i < j \le 3} (u_i v_j - v_i u_j)\, e_i e_j$$
See also Cross product as an exterior product. Essentially, the geometric product of a bivector and the pseudoscalar of Euclidean 3-space provides a method of calculation of the Hodge dual.
= Cross and commutator products =
The pseudovectors/bivectors of the geometric algebra of Euclidean 3-dimensional space form a 3-dimensional vector space themselves. Let the standard unit pseudovectors/bivectors of the subalgebra be
$$\mathbf{i} = \mathbf{e_2}\mathbf{e_3}, \quad \mathbf{j} = \mathbf{e_1}\mathbf{e_3}, \quad \mathbf{k} = \mathbf{e_1}\mathbf{e_2},$$
and let the anti-commutative commutator product be defined as
$$A \times B = \tfrac{1}{2}(AB - BA),$$
where $AB$ is the geometric product. The commutator product is distributive over addition and linear, as the geometric product is distributive over addition and linear.
From the definition of the commutator product, $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ satisfy the following equalities:
$$\mathbf{i} \times \mathbf{j} = \tfrac{1}{2}(\mathbf{i}\mathbf{j} - \mathbf{j}\mathbf{i}) = \tfrac{1}{2}(\mathbf{e_2}\mathbf{e_3}\mathbf{e_1}\mathbf{e_3} - \mathbf{e_1}\mathbf{e_3}\mathbf{e_2}\mathbf{e_3}) = \tfrac{1}{2}(-\mathbf{e_2}\mathbf{e_3}\mathbf{e_3}\mathbf{e_1} + \mathbf{e_1}\mathbf{e_3}\mathbf{e_3}\mathbf{e_2}) = \tfrac{1}{2}(-\mathbf{e_2}\mathbf{e_1} + \mathbf{e_1}\mathbf{e_2}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_2} + \mathbf{e_1}\mathbf{e_2}) = \mathbf{e_1}\mathbf{e_2} = \mathbf{k}$$
$$\mathbf{j} \times \mathbf{k} = \tfrac{1}{2}(\mathbf{j}\mathbf{k} - \mathbf{k}\mathbf{j}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_3}\mathbf{e_1}\mathbf{e_2} - \mathbf{e_1}\mathbf{e_2}\mathbf{e_1}\mathbf{e_3}) = \tfrac{1}{2}(-\mathbf{e_3}\mathbf{e_1}\mathbf{e_1}\mathbf{e_2} + \mathbf{e_2}\mathbf{e_1}\mathbf{e_1}\mathbf{e_3}) = \tfrac{1}{2}(-\mathbf{e_3}\mathbf{e_2} + \mathbf{e_2}\mathbf{e_3}) = \tfrac{1}{2}(\mathbf{e_2}\mathbf{e_3} + \mathbf{e_2}\mathbf{e_3}) = \mathbf{e_2}\mathbf{e_3} = \mathbf{i}$$
$$\mathbf{k} \times \mathbf{i} = \tfrac{1}{2}(\mathbf{k}\mathbf{i} - \mathbf{i}\mathbf{k}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_2}\mathbf{e_2}\mathbf{e_3} - \mathbf{e_2}\mathbf{e_3}\mathbf{e_1}\mathbf{e_2}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_2}\mathbf{e_2}\mathbf{e_3} - \mathbf{e_3}\mathbf{e_2}\mathbf{e_2}\mathbf{e_1}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_3} - \mathbf{e_3}\mathbf{e_1}) = \tfrac{1}{2}(\mathbf{e_1}\mathbf{e_3} + \mathbf{e_1}\mathbf{e_3}) = \mathbf{e_1}\mathbf{e_3} = \mathbf{j}$$
which imply, by the anti-commutativity of the commutator product, that
$$\mathbf{j} \times \mathbf{i} = -\mathbf{k}, \quad \mathbf{k} \times \mathbf{j} = -\mathbf{i}, \quad \mathbf{i} \times \mathbf{k} = -\mathbf{j}$$
The anti-commutativity of the commutator product also implies that
$$\mathbf{i} \times \mathbf{i} = \mathbf{j} \times \mathbf{j} = \mathbf{k} \times \mathbf{k} = 0$$
These equalities and properties are sufficient to determine the commutator product of any two pseudovectors/bivectors $\mathbf{A}$ and $\mathbf{B}$. As the pseudovectors/bivectors form a vector space, each pseudovector/bivector can be defined as the sum of three orthogonal components parallel to the standard basis pseudovectors/bivectors:
$$\mathbf{A} = A_1\mathbf{i} + A_2\mathbf{j} + A_3\mathbf{k}$$
$$\mathbf{B} = B_1\mathbf{i} + B_2\mathbf{j} + B_3\mathbf{k}$$
Their commutator product $\mathbf{A} \times \mathbf{B}$ can be expanded using its distributive property:
$$\begin{aligned}\mathbf{A} \times \mathbf{B} &= (A_1\mathbf{i} + A_2\mathbf{j} + A_3\mathbf{k}) \times (B_1\mathbf{i} + B_2\mathbf{j} + B_3\mathbf{k})\\&= A_1B_1\,\mathbf{i}\times\mathbf{i} + A_1B_2\,\mathbf{i}\times\mathbf{j} + A_1B_3\,\mathbf{i}\times\mathbf{k} + A_2B_1\,\mathbf{j}\times\mathbf{i} + A_2B_2\,\mathbf{j}\times\mathbf{j} + A_2B_3\,\mathbf{j}\times\mathbf{k} + A_3B_1\,\mathbf{k}\times\mathbf{i} + A_3B_2\,\mathbf{k}\times\mathbf{j} + A_3B_3\,\mathbf{k}\times\mathbf{k}\\&= A_1B_2\,\mathbf{k} - A_1B_3\,\mathbf{j} - A_2B_1\,\mathbf{k} + A_2B_3\,\mathbf{i} + A_3B_1\,\mathbf{j} - A_3B_2\,\mathbf{i} = (A_2B_3 - A_3B_2)\mathbf{i} + (A_3B_1 - A_1B_3)\mathbf{j} + (A_1B_2 - A_2B_1)\mathbf{k}\end{aligned}$$
which is precisely the cross product in vector algebra for pseudovectors.
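As a numeric cross-check (our sketch, not part of the source), one can exploit the standard 2x2 complex matrix representation of $\mathcal{G}_3$, in which $e_k$ is represented by the k-th Pauli matrix and the geometric product becomes the matrix product; the commutator of two bivectors then reproduces the component formula just derived.

```python
import numpy as np

# Pauli-matrix representation of G_3: e_k -> sigma_k, and the
# geometric product becomes ordinary matrix multiplication.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

i_b, j_b, k_b = e2 @ e3, e1 @ e3, e1 @ e2   # unit bivectors i, j, k

def comm(A, B):
    """Commutator product A x B = (AB - BA)/2."""
    return (A @ B - B @ A) / 2

rng = np.random.default_rng(2)
A3, B3 = rng.standard_normal(3), rng.standard_normal(3)
A = A3[0]*i_b + A3[1]*j_b + A3[2]*k_b
B = B3[0]*i_b + B3[1]*j_b + B3[2]*k_b

C3 = np.cross(A3, B3)  # (A2B3 - A3B2, A3B1 - A1B3, A1B2 - A2B1)
assert np.allclose(comm(A, B), C3[0]*i_b + C3[1]*j_b + C3[2]*k_b)
```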
= Norm of a vector =
Ordinarily,
$$\Vert \mathbf{u} \Vert^2 = \mathbf{u} \cdot \mathbf{u}$$
Making use of the geometric product and the fact that the exterior product of a vector with itself is zero:
$$\mathbf{u}\,\mathbf{u} = \Vert \mathbf{u} \Vert^2 = \mathbf{u}^2 = \mathbf{u} \cdot \mathbf{u} + \mathbf{u} \wedge \mathbf{u} = \mathbf{u} \cdot \mathbf{u}$$
= Lagrange identity =
In three dimensions the product of two vector lengths can be expressed in terms of the dot and cross products
$$\Vert \mathbf{u} \Vert^2 \Vert \mathbf{v} \Vert^2 = (\mathbf{u} \cdot \mathbf{v})^2 + \Vert \mathbf{u} \times \mathbf{v} \Vert^2$$
The corresponding generalization expressed using the geometric product is
$$\Vert \mathbf{u} \Vert^2 \Vert \mathbf{v} \Vert^2 = (\mathbf{u} \cdot \mathbf{v})^2 - (\mathbf{u} \wedge \mathbf{v})^2$$
This follows from expanding the geometric product of a pair of vectors with its reverse
$$(\mathbf{u}\mathbf{v})(\mathbf{v}\mathbf{u}) = (\mathbf{u} \cdot \mathbf{v} + \mathbf{u} \wedge \mathbf{v})(\mathbf{u} \cdot \mathbf{v} - \mathbf{u} \wedge \mathbf{v})$$
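A small numerical sanity check of both forms (our illustration): since $(u \wedge v)^2$ is the negative of the sum of squared bivector coefficients, the wedge form agrees with the cross-product form in $\mathbb{R}^3$.

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal(3), rng.standard_normal(3)

lhs = np.dot(u, u) * np.dot(v, v)

# Classical Lagrange identity with the cross product.
cross = np.cross(u, v)
assert np.isclose(lhs, np.dot(u, v)**2 + np.dot(cross, cross))

# Geometric-product form: (u ^ v)^2 = -(sum of squared bivector
# coefficients), so (u.v)^2 - (u ^ v)^2 = (u.v)^2 + sum c_ij^2.
c = [u[i]*v[j] - u[j]*v[i] for i, j in [(0, 1), (0, 2), (1, 2)]]
wedge_sq = -sum(x*x for x in c)
assert np.isclose(lhs, np.dot(u, v)**2 - wedge_sq)
```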
= Determinant expansion of cross and wedge products =
$$\mathbf{u} \times \mathbf{v} = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}\, \mathbf{e}_i \times \mathbf{e}_j$$
$$\mathbf{u} \wedge \mathbf{v} = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}\, \mathbf{e}_i \wedge \mathbf{e}_j$$
Linear algebra texts will often use the determinant for the solution of linear systems by Cramer's rule or for matrix inversion.
An alternative treatment is to axiomatically introduce the wedge product, and then demonstrate that this can be used directly to solve linear systems. This is shown below, and does not require sophisticated math skills to understand.
It is then possible to define determinants as nothing more than the coefficients of the wedge product in terms of "unit k-vectors" ($\mathbf{e}_i \wedge \mathbf{e}_j$ terms) expansions as above.
A one-by-one determinant is the coefficient of $\mathbf{e}_1$ for an $\mathbb{R}^1$ 1-vector.
A two-by-two determinant is the coefficient of $\mathbf{e}_1 \wedge \mathbf{e}_2$ for an $\mathbb{R}^2$ bivector.
A three-by-three determinant is the coefficient of $\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3$ for an $\mathbb{R}^3$ trivector.
...
When linear system solution is introduced via the wedge product, Cramer's rule follows as a side-effect, and there is no need to lead up to the end results with definitions of minors, matrices, matrix invertibility, adjoints, cofactors, Laplace expansions, theorems on determinant multiplication and row and column exchanges, and so forth.
= Matrix related =
Matrix inversion (Cramer's rule) and determinants can be naturally expressed in terms of the wedge product.
The use of the wedge product in the solution of linear equations can be quite useful for various geometric product calculations.
Traditionally, instead of using the wedge product, Cramer's rule is presented as a generic algorithm that can be used to solve linear equations of the form $Ax = b$ (or equivalently to invert a matrix). Namely
$$x = \frac{1}{|A|} \operatorname{adj}(A)\, b.$$
This is a useful theoretic result. For numerical problems row reduction with pivots and other methods are more stable and efficient.
When the wedge product is coupled with the Clifford product and put into a natural geometric context, the fact that the determinants are used in the expression of $\mathbb{R}^N$ parallelogram area and parallelepiped volumes (and higher-dimensional generalizations thereof) also comes as a nice side-effect.
As is also shown below, results such as Cramer's rule also follow directly from the wedge product's selection of non-identical elements. The result is then simple enough that it could be derived easily if required instead of having to remember or look up a rule.
Two variables example
$$\begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = ax + by = c.$$
Pre- and post-multiplying by $a$ and $b$,
$$(ax + by) \wedge b = (a \wedge b)x = c \wedge b$$
$$a \wedge (ax + by) = (a \wedge b)y = a \wedge c$$
Provided $a \wedge b \neq 0$, the solution is
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{a \wedge b} \begin{bmatrix} c \wedge b \\ a \wedge c \end{bmatrix}.$$
For $a, b \in \mathbb{R}^2$, this is Cramer's rule since the $e_1 \wedge e_2$ factors of the wedge products
$$u \wedge v = \begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix}\, e_1 \wedge e_2$$
divide out.
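A small sketch of this wedge-product solve in $\mathbb{R}^2$ (helper names and the concrete numbers are ours):

```python
import numpy as np

def wedge2(p, q):
    """Coefficient of e1 ^ e2 for p ^ q in R^2 (a 2x2 determinant)."""
    return p[0]*q[1] - p[1]*q[0]

# Solve x*a + y*b = c for scalars x, y.
a, b, c = np.array([2.0, 1.0]), np.array([1.0, 3.0]), np.array([4.0, 7.0])

d = wedge2(a, b)                # must be nonzero for a unique solution
x = wedge2(c, b) / d
y = wedge2(a, c) / d
assert np.allclose(x*a + y*b, c)   # x = 1, y = 2 here
```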
Similarly, for three, or $N$, variables, the same ideas hold
$$\begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = d$$
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{1}{a \wedge b \wedge c} \begin{bmatrix} d \wedge b \wedge c \\ a \wedge d \wedge c \\ a \wedge b \wedge d \end{bmatrix}$$
Again, for the three-variable, three-equation case this is Cramer's rule, since the $e_1 \wedge e_2 \wedge e_3$ factors of all the wedge products divide out, leaving the familiar determinants.
A numeric example with three equations and two unknowns:
In case there are more equations than variables and the equations have a solution, each of the k-vector quotients will be a scalar. To illustrate, here is the solution of a simple example with three equations and two unknowns.
$$\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$$
The right wedge product with $(1,1,1)$ solves for $x$:
$$\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} x = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
and a left wedge product with $(1,1,0)$ solves for $y$:
$$\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}.$$
Observe that both of these equations have the same factor, so one need compute it only once (if this factor were zero, it would indicate that the system of equations has no solution).
Collecting the results for $x$ and $y$ yields a Cramer's rule-like form:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{(1,1,0) \wedge (1,1,1)} \begin{bmatrix} (1,1,2) \wedge (1,1,1) \\ (1,1,0) \wedge (1,1,2) \end{bmatrix}.$$
Writing $e_i \wedge e_j = e_{ij}$, we have the result:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{e_{13} + e_{23}} \begin{bmatrix} -e_{13} - e_{23} \\ 2e_{13} + 2e_{23} \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}.$$
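The same computation in code (our sketch), using the bivector coefficients $(c_{12}, c_{13}, c_{23})$ of the wedge product in $\mathbb{R}^3$; the quotient of two parallel bivectors appears as a componentwise ratio:

```python
import numpy as np

def wedge3(p, q):
    """Bivector coefficients (c12, c13, c23) of p ^ q in R^3."""
    return np.array([p[0]*q[1] - p[1]*q[0],
                     p[0]*q[2] - p[2]*q[0],
                     p[1]*q[2] - p[2]*q[1]])

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 1.0, 1.0])
c = np.array([1.0, 1.0, 2.0])

ab = wedge3(a, b)       # e13 + e23      -> (0, 1, 1)
x_biv = wedge3(c, b)    # -(e13 + e23)   -> (0, -1, -1)
y_biv = wedge3(a, c)    # 2(e13 + e23)   -> (0, 2, 2)

# The bivectors are parallel, so each quotient is a scalar: take the
# ratio over the nonzero components of a ^ b.
nz = ab != 0
x = (x_biv[nz] / ab[nz])[0]
y = (y_biv[nz] / ab[nz])[0]
assert x == -1.0 and y == 2.0 and np.allclose(x*a + y*b, c)
```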
= Equation of a plane =
For the plane of all points $\mathbf{r}$ through three independent points $\mathbf{r}_0$, $\mathbf{r}_1$, and $\mathbf{r}_2$, the normal form of the equation is
$$((\mathbf{r}_2 - \mathbf{r}_0) \times (\mathbf{r}_1 - \mathbf{r}_0)) \cdot (\mathbf{r} - \mathbf{r}_0) = 0.$$
The equivalent wedge product equation is
$$(\mathbf{r}_2 - \mathbf{r}_0) \wedge (\mathbf{r}_1 - \mathbf{r}_0) \wedge (\mathbf{r} - \mathbf{r}_0) = 0.$$
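In $\mathbb{R}^3$ the wedge of the three difference vectors has a single trivector coefficient, the 3x3 determinant, so the membership test reduces to a determinant check. A sketch with made-up points on the plane $z = 1$:

```python
import numpy as np

r0 = np.array([0.0, 0.0, 1.0])
r1 = np.array([1.0, 0.0, 1.0])
r2 = np.array([0.0, 1.0, 1.0])

def in_plane(r):
    """(r2 - r0) ^ (r1 - r0) ^ (r - r0) = 0, i.e. the trivector
    coefficient (a 3x3 determinant) vanishes."""
    m = np.stack([r2 - r0, r1 - r0, r - r0])
    return np.isclose(np.linalg.det(m), 0.0)

assert in_plane(np.array([3.0, -2.0, 1.0]))        # lies in z = 1
assert not in_plane(np.array([0.0, 0.0, 2.0]))     # does not
```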
= Projection and rejection =
Using the Gram–Schmidt process, a single vector can be decomposed into two components with respect to a reference vector, namely the projection onto a unit vector in a reference direction, and the difference between the vector and that projection.
With $\hat{u} = u/\Vert u \Vert$, the projection of $v$ onto $\hat{u}$ is
$$\mathrm{Proj}_{\hat{u}}\, v = \hat{u}(\hat{u} \cdot v)$$
Orthogonal to that vector is the difference, designated the rejection,
$$v - \hat{u}(\hat{u} \cdot v) = \frac{1}{\Vert u \Vert^2}\left(\Vert u \Vert^2 v - u(u \cdot v)\right)$$
The rejection can be expressed as a single geometric algebraic product in a few different ways:
$$\frac{u}{u^2}(uv - u \cdot v) = \frac{1}{u}(u \wedge v) = \hat{u}(\hat{u} \wedge v) = (v \wedge \hat{u})\hat{u}$$
The similarity in form between the projection and the rejection is notable. The sum of these recovers the original vector:
$$v = \hat{u}(\hat{u} \cdot v) + \hat{u}(\hat{u} \wedge v)$$
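Numerically (our sketch): compute the projection in the usual vector form, form the rejection as the difference, and confirm that the rejection is orthogonal to $u$ and that the two parts sum to $v$.

```python
import numpy as np

rng = np.random.default_rng(4)
u, v = rng.standard_normal(3), rng.standard_normal(3)
u_hat = u / np.linalg.norm(u)

proj = u_hat * np.dot(u_hat, v)   # u^(u^ . v)
rej = v - proj                    # u^(u^ ^ v), computed as a difference

assert np.isclose(np.dot(rej, u), 0.0)   # rejection is orthogonal to u
assert np.allclose(proj + rej, v)        # the parts recover v
# In 3D the rejection also equals u^ x (v x u^), as noted further below.
assert np.allclose(rej, np.cross(u_hat, np.cross(v, u_hat)))
```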
Here the projection is in its customary vector form. An alternate formulation is possible that puts the projection in a form that differs from the usual vector formulation:
$$v = \frac{1}{u}(u \cdot v) + \frac{1}{u}(u \wedge v) = (v \cdot u)\frac{1}{u} + (v \wedge u)\frac{1}{u}$$
Working backwards from the result, it can be observed that this orthogonal decomposition can in fact follow more directly from the definition of the geometric product itself:
$$v = \hat{u}\hat{u}v = \hat{u}(\hat{u} \cdot v + \hat{u} \wedge v)$$
With this approach, the original geometrical consideration is not necessarily obvious, but it is a much quicker way to get at the same algebraic result.
However, the hint that one can work backwards, coupled with the knowledge that the wedge product can be used to solve sets of linear equations (see the matrix-related section above), means the problem of orthogonal decomposition can be posed directly:
Let $v = au + x$, where $u \cdot x = 0$. To discard the portions of $v$ that are collinear with $u$, take the exterior product
$$u \wedge v = u \wedge (au + x) = u \wedge x$$
Here the geometric product can be employed:
$$u \wedge v = u \wedge x = ux - u \cdot x = ux$$
Because the geometric product is invertible, this can be solved for $x$:
$$x = \frac{1}{u}(u \wedge v).$$
The same techniques can be applied to similar problems, such as calculation of the component of a vector in a plane and perpendicular to the plane.
For three dimensions the projective and rejective components of a vector with respect to an arbitrary non-zero unit vector can be expressed in terms of the dot and cross product:
$$\mathbf{v} = (\mathbf{v} \cdot \hat{\mathbf{u}})\hat{\mathbf{u}} + \hat{\mathbf{u}} \times (\mathbf{v} \times \hat{\mathbf{u}}).$$
For the general case the same result can be written in terms of the dot and wedge product and the geometric product of that and the unit vector:
$$\mathbf{v} = (\mathbf{v} \cdot \hat{\mathbf{u}})\hat{\mathbf{u}} + (\mathbf{v} \wedge \hat{\mathbf{u}})\hat{\mathbf{u}}.$$
It is also worthwhile to point out that this result can be expressed using right or left vector division as defined by the geometric product:
$$\mathbf{v} = (\mathbf{v} \cdot \mathbf{u})\frac{1}{\mathbf{u}} + (\mathbf{v} \wedge \mathbf{u})\frac{1}{\mathbf{u}}$$
$$\mathbf{v} = \frac{1}{\mathbf{u}}(\mathbf{u} \cdot \mathbf{v}) + \frac{1}{\mathbf{u}}(\mathbf{u} \wedge \mathbf{v}).$$
Like vector projection and rejection, higher-dimensional analogs of that calculation are also possible using the geometric product.
As an example, one can calculate the component of a vector perpendicular to a plane and the projection of that vector onto the plane.
Let $w = au + bv + x$, where $u \cdot x = v \cdot x = 0$. As above, to discard the portions of $w$ that are collinear with $u$ or $v$, take the wedge product
$$w \wedge u \wedge v = (au + bv + x) \wedge u \wedge v = x \wedge u \wedge v.$$
Having done this calculation with a vector projection, one can guess that this quantity equals $x(u \wedge v)$. One can also guess there is a vector and bivector dot-product-like quantity that allows the calculation of the component of a vector that is in the "direction of a plane". Both of these guesses are correct, and validating these facts is worthwhile. However, skipping ahead slightly, this to-be-proven fact allows for a nice closed form solution of the vector component outside of the plane:
$$x = (w \wedge u \wedge v)\frac{1}{u \wedge v} = \frac{1}{u \wedge v}(u \wedge v \wedge w).$$
Notice the similarities between this planar rejection result and the vector rejection result. To calculate the component of a vector outside of a plane we take the volume spanned by three vectors (trivector) and "divide out" the plane.
Independent of any use of the geometric product, it can be shown that this rejection in terms of the standard basis is
$$x = \frac{1}{(A_{u,v})^2} \sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix} \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ \mathbf{e}_i & \mathbf{e}_j & \mathbf{e}_k \end{vmatrix}$$
where
$$(A_{u,v})^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2 = -(u \wedge v)^2$$
is the squared area of the parallelogram formed by $u$ and $v$.
The (squared) magnitude of $x$ is
$$\Vert x \Vert^2 = x \cdot w = \frac{1}{(A_{u,v})^2} \sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}^2$$
Thus, the (squared) volume of the parallelepiped (base area times perpendicular height) is
$$\sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}^2$$
Note the similarity in form to the $w$, $u$, $v$ trivector itself
$$\sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}\, e_i \wedge e_j \wedge e_k,$$
which, if you take the set of $e_i \wedge e_j \wedge e_k$ as a basis for the trivector space, suggests this is the natural way to define the measure of a trivector. Loosely speaking, the measure of a vector is a length, the measure of a bivector is an area, and the measure of a trivector is a volume.
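These rejection and volume relations are easy to check numerically in $\mathbb{R}^3$ (our sketch, with helper names ours): form the rejection $x$ of $w$ from the plane of $u$ and $v$ by subtracting the least-squares projection, and confirm that $\Vert x \Vert^2 (A_{u,v})^2$ equals the squared 3x3 determinant, i.e. the squared volume.

```python
import numpy as np

rng = np.random.default_rng(5)
u, v, w = (rng.standard_normal(3) for _ in range(3))

# Rejection of w from the plane span{u, v} via least squares.
P = np.stack([u, v], axis=1)                 # 3x2 basis matrix
coef, *_ = np.linalg.lstsq(P, w, rcond=None)
x = w - P @ coef

A_sq = np.dot(u, u)*np.dot(v, v) - np.dot(u, v)**2   # squared area (Gram det)
vol_sq = np.linalg.det(np.stack([w, u, v]))**2       # squared volume

assert np.isclose(np.dot(x, u), 0.0) and np.isclose(np.dot(x, v), 0.0)
assert np.isclose(np.dot(x, x) * A_sq, vol_sq)       # base^2 * height^2
```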
If a vector is factored directly into projective and rejective terms using the geometric product
$$v = \frac{1}{u}(u \cdot v + u \wedge v),$$
then it is not necessarily obvious that the rejection term, a product of vector and bivector, is even a vector. Expansion of the vector-bivector product in terms of the standard basis vectors has the following form:
Let
$$r = \frac{1}{u}(u \wedge v) = \frac{u}{u^2}(u \wedge v) = \frac{1}{\Vert u \Vert^2}\, u(u \wedge v)$$
It can be shown that
$$r = \frac{1}{\Vert u \Vert^2} \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} \begin{vmatrix} u_i & u_j \\ \mathbf{e}_i & \mathbf{e}_j \end{vmatrix}$$
(a result that can be shown more easily straight from $r = v - \hat{u}(\hat{u} \cdot v)$).
The rejective term is perpendicular to $u$, since
$$\begin{vmatrix} u_i & u_j \\ u_i & u_j \end{vmatrix} = 0$$
implies $r \cdot u = 0$.
The magnitude of $r$ is
$$\Vert r \Vert^2 = r \cdot v = \frac{1}{\Vert u \Vert^2} \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2.$$
So, the quantity
$$\Vert r \Vert^2 \Vert u \Vert^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2$$
is the squared area of the parallelogram formed by $u$ and $v$.
It is also noteworthy that the bivector can be expressed as
$$u \wedge v = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}\, e_i \wedge e_j.$$
Thus it is natural, if one considers each term $e_i \wedge e_j$ as a basis vector of the bivector space, to define the (squared) "length" of that bivector as the (squared) area.
Going back to the geometric product expression for the length of the rejection $\frac{1}{u}(u \wedge v)$, we see that the length of the quotient, a vector, is in this case the "length" of the bivector divided by the length of the divisor.
This may not be a general result for the length of the product of two k-vectors; however, it is a result that may help build some intuition about the significance of the algebraic operations. Namely,
When a vector is divided out of the plane (parallelogram span) formed from it and another vector, what remains is the perpendicular component of the remaining vector, and its length is the planar area divided by the length of the vector that was divided out.
= Area of the parallelogram defined by u and v =
If $A$ is the area of the parallelogram defined by $\mathbf{u}$ and $\mathbf{v}$, then
$$A^2 = \Vert \mathbf{u} \times \mathbf{v} \Vert^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2,$$
and
$$A^2 = -(\mathbf{u} \wedge \mathbf{v})^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2.$$
Note that this squared bivector is a geometric multiplication; this computation can alternatively be stated as the Gram determinant of the two vectors.
= Angle between two vectors =
$$(\sin \theta)^2 = \frac{\Vert \mathbf{u} \times \mathbf{v} \Vert^2}{\Vert \mathbf{u} \Vert^2 \Vert \mathbf{v} \Vert^2}$$
$$(\sin \theta)^2 = -\frac{(\mathbf{u} \wedge \mathbf{v})^2}{\mathbf{u}^2 \mathbf{v}^2}$$
= Volume of the parallelepiped formed by three vectors =
In vector algebra, the volume of a parallelepiped is given by the square root of the squared norm of the scalar triple product:
$$V^2 = \Vert (\mathbf{u} \times \mathbf{v}) \cdot \mathbf{w} \Vert^2 = \begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}^2$$
$$V^2 = -(\mathbf{u} \wedge \mathbf{v} \wedge \mathbf{w})^2 = -\left(\sum_{i<j<k} \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix}\, \hat{\mathbf{e}}_i \wedge \hat{\mathbf{e}}_j \wedge \hat{\mathbf{e}}_k \right)^2 = \sum_{i<j<k} \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix}^2$$
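A quick numeric confirmation (ours): the squared scalar triple product and the squared determinant agree, which is the $V^2$ of both formulas above (in $\mathbb{R}^3$ the sum over $i<j<k$ has the single term $i,j,k = 1,2,3$).

```python
import numpy as np

rng = np.random.default_rng(6)
u, v, w = (rng.standard_normal(3) for _ in range(3))

triple = np.dot(np.cross(u, v), w)           # (u x v) . w
det = np.linalg.det(np.stack([u, v, w]))     # rows u, v, w
assert np.isclose(triple**2, det**2)         # V^2 agrees either way
```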
= Product of a vector and a bivector =
In order to justify the normal-to-a-plane result above, a general examination of the product of a vector and bivector is required. Namely,
$$w(u \wedge v) = \sum_{i,\,j<k} w_i \mathbf{e}_i \begin{vmatrix} u_j & u_k \\ v_j & v_k \end{vmatrix}\, \mathbf{e}_j \wedge \mathbf{e}_k$$
This has two parts: the vector part, where $i = j$ or $i = k$, and the trivector part, where no indexes are equal. After some index summation trickery, grouping of terms, and so forth, this is
$$w(u \wedge v) = \sum_{i<j} (w_i \mathbf{e}_j - w_j \mathbf{e}_i) \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} + \sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}\, \mathbf{e}_i \wedge \mathbf{e}_j \wedge \mathbf{e}_k$$
The trivector term is $w \wedge u \wedge v$. Expansion of $(u \wedge v)w$ yields the same trivector term (it is the completely symmetric part), and the vector term is negated. Like the geometric product of two vectors, this geometric product can be grouped into symmetric and antisymmetric parts, one of which is a pure k-vector. In analogy, the antisymmetric part of this product can be called a generalized dot product, and is, roughly speaking, the dot product of a "plane" (bivector) and a vector.
The properties of this generalized dot product remain to be explored, but first here is a summary of the notation:
$$w(u \wedge v) = w \cdot (u \wedge v) + w \wedge u \wedge v$$
$$(u \wedge v)w = -w \cdot (u \wedge v) + w \wedge u \wedge v$$
$$w \wedge u \wedge v = \frac{1}{2}\left(w(u \wedge v) + (u \wedge v)w\right)$$
$$w \cdot (u \wedge v) = \frac{1}{2}\left(w(u \wedge v) - (u \wedge v)w\right)$$
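These grade splits can be checked numerically in the Pauli-matrix representation used earlier (our sketch, not part of the source): with vectors as real combinations of the $\sigma_k$, $u \wedge v$ is the antisymmetric part of the matrix product, $w \wedge u \wedge v$ should be a multiple of the pseudoscalar $\sigma_1\sigma_2\sigma_3 = i\,\mathbb{1}$, and $w \cdot (u \wedge v)$ should again be a vector.

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [e1, e2, e3]

def vec(v):
    """Embed an R^3 vector as a real combination of the sigma_k."""
    return sum(c * e for c, e in zip(v, basis))

rng = np.random.default_rng(7)
u, v, w = (rng.standard_normal(3) for _ in range(3))
U, V, W = vec(u), vec(v), vec(w)

B = (U @ V - V @ U) / 2            # u ^ v as the antisymmetric part

dot_part = (W @ B - B @ W) / 2     # w . (u ^ v): the vector part
wedge_part = (W @ B + B @ W) / 2   # w ^ u ^ v: the trivector part

# The trivector part is a real multiple of the pseudoscalar i*Identity.
assert np.allclose(wedge_part, wedge_part[0, 0] * np.eye(2))
assert np.isclose(wedge_part[0, 0].real, 0.0)

# The vector part is a real combination of the sigma_k (grade 1):
# recover components via (1/2) tr(dot_part @ sigma_k) and re-assemble.
comps = [np.trace(dot_part @ e).real / 2 for e in basis]
assert np.allclose(dot_part, vec(comps))
```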
Let $w = x + y$, where $x = au + bv$ and $y \cdot u = y \cdot v = 0$. Expressing the $w$ and $u \wedge v$ products in terms of these components gives
$$w(u \wedge v) = x(u \wedge v) + y(u \wedge v) = x \cdot (u \wedge v) + y \cdot (u \wedge v) + y \wedge u \wedge v$$
With the conditions and definitions above, and some manipulation, it can be shown that the term $y \cdot (u \wedge v) = 0$, which then justifies the previous solution of the normal-to-a-plane problem. Since the vector term of the vector-bivector product is zero when the vector is perpendicular to the plane (bivector), and this vector-bivector "dot product" selects only the components that are in the plane, the name is justified, in analogy to the vector-vector dot product, by more than the fact that it is the non-wedge-product term of the geometric vector-bivector product.
= Derivative of a unit vector =
It can be shown that a unit vector derivative can be expressed using the cross product
$$\frac{d}{dt}\left(\frac{\mathbf{r}}{\Vert \mathbf{r} \Vert}\right) = \frac{1}{\Vert \mathbf{r} \Vert^3}\left(\mathbf{r} \times \frac{d\mathbf{r}}{dt}\right) \times \mathbf{r} = \left(\hat{\mathbf{r}} \times \frac{1}{\Vert \mathbf{r} \Vert}\frac{d\mathbf{r}}{dt}\right) \times \hat{\mathbf{r}}$$
The equivalent geometric product generalization is
$$\frac{d}{dt}\left(\frac{\mathbf{r}}{\Vert \mathbf{r} \Vert}\right) = \frac{1}{\Vert \mathbf{r} \Vert^3}\, \mathbf{r}\left(\mathbf{r} \wedge \frac{d\mathbf{r}}{dt}\right) = \frac{1}{\mathbf{r}}\left(\hat{\mathbf{r}} \wedge \frac{d\mathbf{r}}{dt}\right)$$
Thus this derivative is the component of $\frac{1}{\Vert \mathbf{r} \Vert}\frac{d\mathbf{r}}{dt}$ in the direction perpendicular to $\mathbf{r}$. In other words, this is $\frac{1}{\Vert \mathbf{r} \Vert}\frac{d\mathbf{r}}{dt}$ minus the projection of that vector onto $\hat{\mathbf{r}}$.
This intuitively makes sense (but a picture would help) since a unit vector is constrained to circular motion, and any change to a unit vector due to a change in its generating vector has to be in the direction of the rejection of $\hat{\mathbf{r}}$ from $\frac{d\mathbf{r}}{dt}$. That rejection has to be scaled by $1/\Vert \mathbf{r} \Vert$ to get the final result.
When the objective is not comparison with the cross product, it is also notable that this unit vector derivative can be written
$$\mathbf{r}\,\frac{d\hat{\mathbf{r}}}{dt} = \hat{\mathbf{r}} \wedge \frac{d\mathbf{r}}{dt}$$
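A finite-difference check of the cross-product form (our sketch), for a concrete curve $\mathbf{r}(t)$ made up for the test:

```python
import numpy as np

def r(t):
    """An arbitrary smooth curve, made up for the test."""
    return np.array([np.cos(t), np.sin(2*t), t])

def dr(t):
    """Its derivative, computed by hand."""
    return np.array([-np.sin(t), 2*np.cos(2*t), 1.0])

t, h = 0.7, 1e-6
rt, vt = r(t), dr(t)

# Finite-difference derivative of the unit vector r/|r|.
fd = (r(t+h)/np.linalg.norm(r(t+h)) - r(t-h)/np.linalg.norm(r(t-h))) / (2*h)

# Closed form: (r x dr/dt) x r / |r|^3.
closed = np.cross(np.cross(rt, vt), rt) / np.linalg.norm(rt)**3
assert np.allclose(fd, closed, atol=1e-5)
```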
See also
Geometric algebra
Bivector
References and further reading
Vold, Terje G. (1993), "An introduction to Geometric Algebra with an Application in Rigid Body Mechanics" (PDF), American Journal of Physics, 61 (6): 491, Bibcode:1993AmJPh..61..491V, doi:10.1119/1.17201
Gull, S.F.; Lasenby, A.N.; Doran, C.J.L. (1993), Imaginary Numbers are not Real – the Geometric Algebra of Spacetime (PDF)