- Source: Discrete calculus
Discrete calculus, or the calculus of discrete functions, is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change.
Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus.
The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment $\Delta x$ of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally, the limit of discrete calculus as $\Delta x \to 0$ is infinitesimal calculus. Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications.
Two initial constructions
Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function defined at several points of the real line, the difference quotient at a point is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The difference quotient, however, can take the squaring function as an input. This means that the difference quotient takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function.
Suppose the functions are defined at points separated by an increment $\Delta x = h > 0$:
$$a,\ a+h,\ a+2h,\ \ldots,\ a+nh,\ \ldots$$
The "doubling function" may be denoted by
g
(
x
)
=
2
x
{\displaystyle g(x)=2x}
and the "squaring function" by
f
(
x
)
=
x
2
{\displaystyle f(x)=x^{2}}
. The "difference quotient" is the rate of change of the function over one of the intervals
[
x
,
x
+
h
]
{\displaystyle [x,x+h]}
defined by the formula:
f
(
x
+
h
)
−
f
(
x
)
h
.
{\displaystyle {\frac {f(x+h)-f(x)}{h}}.}
It takes the function $f$ as an input, that is, all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, $g(x) = 2x + h$, as will be shown below. As a matter of convenience, the new function may be defined at the middle points of the above intervals:
$$a + h/2,\ a + h + h/2,\ a + 2h + h/2,\ \ldots,\ a + nh + h/2,\ \ldots$$
As the rate of change is that for the whole interval $[x, x+h]$, any point within it can be used as a reference or, even better, the whole interval, which makes the difference quotient a $1$-cochain.
The most common notation for the difference quotient is:
$$\frac{\Delta f}{\Delta x}(x + h/2) = \frac{f(x+h) - f(x)}{h}.$$
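As a minimal sketch of this operator view (in Python; the name `difference_quotient` and the free-fall position function are illustrative assumptions, not part of the source), the difference quotient takes a function on the grid points and returns a new function on the midpoints:

```python
def difference_quotient(f, h):
    """Return the difference quotient of f as a new function.

    f is defined at the grid points a, a+h, a+2h, ...;
    the result is defined at the midpoints x + h/2.
    """
    def df_dx(m):                      # m is a midpoint x + h/2
        x = m - h / 2                  # left endpoint of the interval [x, x+h]
        return (f(x + h) - f(x)) / h
    return df_dx

# Example: position of a ball in free fall; velocity as difference quotient.
h = 0.1
position = lambda t: -4.9 * t**2 + 10.0 * t
velocity = difference_quotient(position, h)
print(velocity(0.5 + h / 2))   # average velocity over [0.5, 0.6]
```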
If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if $f$ is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of $f$ is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as $y = mx + b$, where $x$ is the independent variable, $y$ is the dependent variable, $b$ is the $y$-intercept, and
$$m = \frac{\text{rise}}{\text{run}} = \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.$$
This gives an exact value for the slope of a straight line.
If the function is not linear, however, then the change in $y$ divided by the change in $x$ varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let $f$ be a function, and fix a point $x$ in the domain of $f$.
$(x, f(x))$ is a point on the graph of the function. If $h$ is the increment of $x$, then $x + h$ is the next value of $x$. Therefore, $(x+h, f(x+h))$ is the increment of $(x, f(x))$. The slope of the line between these two points is
$$m = \frac{f(x+h) - f(x)}{(x+h) - x} = \frac{f(x+h) - f(x)}{h}.$$
So $m$ is the slope of the line between $(x, f(x))$ and $(x+h, f(x+h))$.
Here is a particular example, the difference quotient of the squaring function. Let $f(x) = x^2$ be the squaring function. Then:
$$\begin{aligned}\frac{\Delta f}{\Delta x}(x) &= \frac{(x+h)^2 - x^2}{h}\\&= \frac{x^2 + 2hx + h^2 - x^2}{h}\\&= \frac{2hx + h^2}{h}\\&= 2x + h.\end{aligned}$$
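A quick numerical check of this computation (a Python sketch; the grid values are illustrative) confirms that the difference quotient of the squaring function equals $2x + h$ at each grid point:

```python
h = 0.25
f = lambda x: x**2

# The difference quotient of the squaring function equals 2x + h.
for n in range(5):
    x = 1.0 + n * h                   # a grid point a + nh, with a = 1.0
    dq = (f(x + h) - f(x)) / h        # difference quotient over [x, x + h]
    assert abs(dq - (2 * x + h)) < 1e-12
    print(x, dq)
```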
The difference quotient of the difference quotient is called the second difference quotient, and it is defined at
$$a + h,\ a + 2h,\ a + 3h,\ \ldots,\ a + nh,\ \ldots$$
and so on.
Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator.
The Riemann sum inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis.
A motivating example is the distance traveled in a given time.
$$\text{distance} = \text{speed} \cdot \text{time}$$
If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by $a$ and $b$) is the area of the shaded region $s$.
So, the interval between $a$ and $b$ is divided into a number of equal segments, the length of each segment represented by the symbol $\Delta x$. For each small segment, we have one value of the function $f(x)$. Call that value $v$. Then the area of the rectangle with base $\Delta x$ and height $v$ gives the distance (time $\Delta x$ multiplied by speed $v$) traveled in that segment. Associated with each segment is the value of the function above it, $f(x) = v$. The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled.
Suppose a function is defined at the mid-points of the intervals of equal length $\Delta x = h > 0$:
$$a + h/2,\ a + h + h/2,\ a + 2h + h/2,\ \ldots,\ a + nh - h/2,\ \ldots$$
Then the Riemann sum from $a$ to $b = a + nh$ in sigma notation is:
$$\sum_{i=1}^{n} f(a + ih - h/2)\,\Delta x.$$
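A sketch of this Riemann sum in Python (the speed function and the grid parameters are illustrative assumptions): the function is sampled at the midpoints $a + ih - h/2$ and each sample is weighted by $\Delta x = h$.

```python
def riemann_sum(f, a, h, n):
    """Sum of f over the midpoints a + ih - h/2, i = 1..n, weighted by h."""
    return sum(f(a + i * h - h / 2) * h for i in range(1, n + 1))

# Distance traveled by a ball with speed v(t) = 10 - 9.8 t over [0, 1].
speed = lambda t: 10.0 - 9.8 * t
print(riemann_sum(speed, a=0.0, h=0.1, n=10))   # close to 10 - 4.9 = 5.1
```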
As this computation is carried out for each $n$, the new function is defined at the points:
$$a,\ a+h,\ a+2h,\ \ldots,\ a+nh,\ \ldots$$
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus: If a function $f$ is defined on a partition of the interval $[a, b]$, $b = a + nh$, and if $F$ is a function whose difference quotient is $f$, then we have:
$$\sum_{i=0}^{n-1} f(a + ih + h/2)\,\Delta x = F(b) - F(a).$$
Furthermore, for every $m = 0, 1, 2, \ldots, n-1$, we have:
$$\frac{\Delta}{\Delta x} \sum_{i=0}^{m} f(a + ih + h/2)\,\Delta x = f(a + mh + h/2).$$
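Both identities can be checked numerically. Below is a sketch (Python; the choice $F(x) = x^2$ and the grid are illustrative), where $f$ is taken to be the difference quotient of $F$ at the midpoints:

```python
a, h, n = 1.0, 0.125, 8
b = a + n * h

F = lambda x: x**2
# The difference quotient of F, a function on the midpoints x + h/2:
f = lambda m: (F(m + h / 2) - F(m - h / 2)) / h

# First identity: the Riemann sum of f recovers F(b) - F(a).
total = sum(f(a + i * h + h / 2) * h for i in range(n))
assert abs(total - (F(b) - F(a))) < 1e-12

# Second identity: the difference quotient of the partial sums recovers f.
def G(k):  # partial Riemann sum, a 0-cochain defined at the point a + kh
    return sum(f(a + i * h + h / 2) * h for i in range(k))

for m in range(n):
    assert abs((G(m + 1) - G(m)) / h - f(a + m * h + h / 2)) < 1e-12
print("both parts of the discrete fundamental theorem check out")
```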
This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences.
History
The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs. After the limit is taken, however, they are never to be seen again. However, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative.
During the 20th century, discrete calculus remained interlinked with infinitesimal calculus, especially differential forms, but also began to draw from algebraic topology as both developed. The main contributions came from the following individuals:
Henri Poincaré: triangulations (barycentric subdivision, dual triangulation), the Poincaré lemma, the first proof of the general Stokes theorem, and much more
L. E. J. Brouwer: simplicial approximation theorem
Élie Cartan, Georges de Rham: the notion of differential form, the exterior derivative as a coordinate-independent linear operator, exactness/closedness of forms
Emmy Noether, Heinz Hopf, Leopold Vietoris, Walther Mayer: modules of chains, the boundary operator, chain complexes
J. W. Alexander, Solomon Lefschetz, Lev Pontryagin, Andrey Kolmogorov, Norman Steenrod, Eduard Čech: the early cochain notions
Hermann Weyl: the Kirchhoff laws stated in terms of the boundary and the coboundary operators
W. V. D. Hodge: the Hodge star operator, the Hodge decomposition
Samuel Eilenberg, Saunders Mac Lane, Norman Steenrod, J.H.C. Whitehead: the rigorous development of homology and cohomology theory including chain and cochain complexes, the cup product
Hassler Whitney: cochains as integrands
The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling.
Applications
Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.
Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: as historically stated, it expressly uses the term "change of motion", which implies the difference quotient, saying: the change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental, because acceleration is the difference quotient of velocity with respect to time, or the second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use Riemann sums to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus.
Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay).
In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling).
In engineering, difference equations are used to plot a course of a spacecraft within zero gravity environments, to model heat transfer, diffusion, and wave propagation.
The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.
In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets.
In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures.
Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function.
Calculus of differences and sums
Suppose a function (a $0$-cochain) $f$ is defined at points separated by an increment $\Delta x = h > 0$:
$$a,\ a+h,\ a+2h,\ \ldots,\ a+nh,\ \ldots$$
The difference (or the exterior derivative, or the coboundary operator) of the function is given by:
$$(\Delta f)\big([x, x+h]\big) = f(x+h) - f(x).$$
It is defined at each of the above intervals; it is a $1$-cochain.
Suppose a $1$-cochain $g$ is defined at each of the above intervals. Then its sum is a function (a $0$-cochain) defined at each of the points by:
$$\left(\sum g\right)\!(a + nh) = \sum_{i=1}^{n} g\big([a + (i-1)h,\ a + ih]\big).$$
These are their properties:
Constant rule: if $c$ is a constant, then $\Delta c = 0$.
Linearity: if $a$ and $b$ are constants, $\Delta(af + bg) = a\,\Delta f + b\,\Delta g$ and $\sum(af + bg) = a\sum f + b\sum g$.
Product rule: $\Delta(fg) = f\,\Delta g + g\,\Delta f + \Delta f\,\Delta g$.
Fundamental theorem of calculus I: $\left(\sum \Delta f\right)\!(a + nh) = f(a + nh) - f(a)$.
Fundamental theorem of calculus II: $\Delta\!\left(\sum g\right) = g$.
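These rules can be verified on concrete sequences. The following sketch (Python; the helper names `diff` and `sigma` are illustrative) represents a $0$-cochain as a list of values at the points and a $1$-cochain as a list of values on the intervals, then checks the product rule and both fundamental theorems:

```python
h = 0.5
points = [1.0 + n * h for n in range(6)]          # a, a+h, ..., a+5h

def diff(f):
    """Difference of a 0-cochain: a 1-cochain on the intervals."""
    return [f[i + 1] - f[i] for i in range(len(f) - 1)]

def sigma(g):
    """Sum of a 1-cochain: a 0-cochain with value 0 at the first point."""
    out = [0.0]
    for v in g:
        out.append(out[-1] + v)
    return out

f = [x**2 for x in points]                         # a 0-cochain
g = [x**3 for x in points]                         # another 0-cochain

# Product rule: Delta(fg) = f Delta(g) + g Delta(f) + Delta(f) Delta(g)
fg = [u * v for u, v in zip(f, g)]
rhs = [f[i] * dg + g[i] * df + df * dg
       for i, (df, dg) in enumerate(zip(diff(f), diff(g)))]
assert all(abs(x - y) < 1e-12 for x, y in zip(diff(fg), rhs))

# Fundamental theorem I: (sum of Delta f)(a + nh) = f(a + nh) - f(a)
assert all(abs(s - (fi - f[0])) < 1e-12 for s, fi in zip(sigma(diff(f)), f))

# Fundamental theorem II: Delta(sum of g1) = g1 for a 1-cochain g1
g1 = diff(g)
assert all(abs(x - y) < 1e-12 for x, y in zip(diff(sigma(g1)), g1))
print("product rule and both fundamental theorems hold")
```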
The definitions are applied to graphs as follows. If a function (a $0$-cochain) $f$ is defined at the nodes of a graph:
$$a,\ b,\ c,\ \ldots$$
then its exterior derivative (or the differential) is the difference, i.e., the following function defined on the edges of the graph (a $1$-cochain):
$$(df)\big([a, b]\big) = f(b) - f(a).$$
If $g$ is a $1$-cochain, then its integral over a sequence of edges $\sigma$ of the graph is the sum of its values over all edges of $\sigma$ ("path integral"):
$$\int_\sigma g = \sum_\sigma g\big([a, b]\big).$$
These are the properties:
Constant rule: if $c$ is a constant, then $dc = 0$.
Linearity: if $a$ and $b$ are constants, $d(af + bg) = a\,df + b\,dg$ and $\int_\sigma (af + bg) = a\int_\sigma f + b\int_\sigma g$.
Product rule: $d(fg) = f\,dg + g\,df + df\,dg$.
Fundamental theorem of calculus I: if a $1$-chain $\sigma$ consists of the edges $[a_0, a_1], [a_1, a_2], \ldots, [a_{n-1}, a_n]$, then for any $0$-cochain $f$,
$$\int_\sigma df = f(a_n) - f(a_0).$$
Fundamental theorem of calculus II: if the graph is a tree, $g$ is a $1$-cochain, and a function ($0$-cochain) is defined on the nodes of the graph by
$$f(x) = \int_\sigma g,$$
where a $1$-chain $\sigma$ consists of $[a_0, a_1], [a_1, a_2], \ldots, [a_{n-1}, x]$ for some fixed $a_0$, then
$$df = g.$$
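A small sketch in Python (the graph, the cochain values, and the helper names are illustrative) of the exterior derivative on a graph and the "path integral" of the resulting $1$-cochain, confirming the telescoping in the first fundamental theorem:

```python
# Nodes carry a 0-cochain f; edges are ordered pairs of nodes.
f = {"a": 1.0, "b": 4.0, "c": 2.5, "d": 7.0}

def d(f, edge):
    """Exterior derivative of a 0-cochain, evaluated on an edge [u, v]."""
    u, v = edge
    return f[v] - f[u]

def path_integral(g, path):
    """Integral of a 1-cochain g over a sequence of edges."""
    return sum(g(edge) for edge in path)

# A 1-chain: a path of edges from a to d.
sigma = [("a", "b"), ("b", "c"), ("c", "d")]

# Fundamental theorem I on graphs: the integral of df telescopes.
total = path_integral(lambda e: d(f, e), sigma)
assert abs(total - (f["d"] - f["a"])) < 1e-12
print(total)   # 6.0
```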
See references.
Chains of simplices and cubes
A simplicial complex $S$ is a set of simplices that satisfies the following conditions:
1. Every face of a simplex from $S$ is also in $S$.
2. The non-empty intersection of any two simplices $\sigma_1, \sigma_2 \in S$ is a face of both $\sigma_1$ and $\sigma_2$.
By definition, an orientation of a $k$-simplex is given by an ordering of the vertices, written as $(v_0, \ldots, v_k)$, with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean.
Let $S$ be a simplicial complex. A simplicial $k$-chain is a finite formal sum
$$\sum_{i=1}^{N} c_i \sigma_i,$$
where each $c_i$ is an integer and $\sigma_i$ is an oriented $k$-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example,
$$(v_0, v_1) = -(v_1, v_0).$$
The vector space of $k$-chains on $S$ is written $C_k$. It has a basis in one-to-one correspondence with the set of $k$-simplices in $S$. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices.
Let $\sigma = (v_0, \ldots, v_k)$ be an oriented $k$-simplex, viewed as a basis element of $C_k$. The boundary operator
$$\partial_k : C_k \to C_{k-1}$$
is the linear operator defined by:
$$\partial_k(\sigma) = \sum_{i=0}^{k} (-1)^i (v_0, \ldots, \widehat{v_i}, \ldots, v_k),$$
where the oriented simplex $(v_0, \ldots, \widehat{v_i}, \ldots, v_k)$ is the $i$th face of $\sigma$, obtained by deleting its $i$th vertex.
In $C_k$, elements of the subgroup $Z_k = \ker \partial_k$ are referred to as cycles, and the subgroup $B_k = \operatorname{im} \partial_{k+1}$ is said to consist of boundaries.
A direct computation shows that $\partial^2 = 0$. In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces $(C_k, \partial_k)$ form a chain complex. Another equivalent statement is that $B_k$ is contained in $Z_k$.
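The boundary operator and the identity $\partial^2 = 0$ can be checked mechanically. In the sketch below (Python; representing a chain as a dictionary from oriented simplices to integer coefficients is an illustrative encoding), the boundary of the boundary of an oriented triangle comes out empty:

```python
from collections import defaultdict

def boundary(chain):
    """Boundary of a k-chain given as {oriented simplex (tuple): coefficient}."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # delete the i-th vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

# Boundary of an oriented 2-simplex (v0, v1, v2):
triangle = {(0, 1, 2): 1}
print(boundary(triangle))            # {(1, 2): 1, (0, 2): -1, (0, 1): 1}

# The boundary of a boundary is empty: partial^2 = 0.
assert boundary(boundary(triangle)) == {}
```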
A cubical complex is a set composed of points, line segments, squares, cubes, and their $n$-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset $I \subset \mathbf{R}$ of the form
$$I = [\ell, \ell + 1] \quad \text{or} \quad I = [\ell, \ell]$$
for some $\ell \in \mathbf{Z}$. An elementary cube $Q$ is the finite product of elementary intervals, i.e.
$$Q = I_1 \times I_2 \times \cdots \times I_d \subset \mathbf{R}^d,$$
where $I_1, I_2, \ldots, I_d$ are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube $[0, 1]^n$ embedded in Euclidean space $\mathbf{R}^d$ (for some $n, d \in \mathbf{N} \cup \{0\}$ with $n \leq d$). A set $X \subseteq \mathbf{R}^d$ is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes. The boundary operator and the chain complex are defined similarly to those for simplicial complexes.
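A sketch of the face-closure condition in this definition (Python; encoding an elementary cube as a tuple of intervals $(\ell, \ell')$ with $\ell' \in \{\ell, \ell+1\}$ is an illustrative choice):

```python
from itertools import product

def faces(cube):
    """All faces of an elementary cube, the cube itself included.

    A cube is a tuple of elementary intervals (lo, hi) with hi in {lo, lo+1};
    a face replaces any subset of the nondegenerate intervals by endpoints.
    """
    choices = []
    for lo, hi in cube:
        if lo == hi:
            choices.append([(lo, hi)])                      # degenerate: fixed
        else:
            choices.append([(lo, hi), (lo, lo), (hi, hi)])  # keep or collapse
    return set(product(*choices))

def is_cubical_complex(cubes):
    """True if the set contains every face of each of its cubes."""
    return all(face in cubes for cube in cubes for face in faces(cube))

# The unit square [0,1] x [0,1] together with all its edges and vertices:
square = ((0, 1), (0, 1))
closure = faces(square)               # 1 square, 4 edges, 4 vertices
print(is_cubical_complex(closure))    # True
print(is_cubical_complex({square}))   # False: edges and vertices are missing
```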
More general are cell complexes.
A chain complex $(C_*, \partial_*)$ is a sequence of vector spaces
$$\ldots, C_0, C_1, C_2, C_3, C_4, \ldots$$
connected by linear operators (called boundary operators) $\partial_n : C_n \to C_{n-1}$, such that the composition of any two consecutive maps is the zero map. Explicitly, the boundary operators satisfy $\partial_n \circ \partial_{n+1} = 0$, or with indices suppressed, $\partial^2 = 0$. The complex may be written out as follows.
$$\cdots \xleftarrow{\partial_0} C_0 \xleftarrow{\partial_1} C_1 \xleftarrow{\partial_2} C_2 \xleftarrow{\partial_3} C_3 \xleftarrow{\partial_4} C_4 \xleftarrow{\partial_5} \cdots$$
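Concretely, the boundary operators of a small chain complex can be written as matrices and the defining condition $\partial_n \circ \partial_{n+1} = 0$ checked directly. A sketch for the full triangle on vertices 0, 1, 2 (Python; the basis orderings are illustrative):

```python
# Chain complex of the full triangle:
# C_2 has basis {(0,1,2)}, C_1 has basis {(0,1), (0,2), (1,2)},
# C_0 has basis {(0), (1), (2)}.

# partial_2 : C_2 -> C_1,   (0,1,2) |-> (1,2) - (0,2) + (0,1)
d2 = [[ 1],    # coefficient of (0,1)
      [-1],    # coefficient of (0,2)
      [ 1]]    # coefficient of (1,2)

# partial_1 : C_1 -> C_0,   (u,v) |-> (v) - (u)
d1 = [[-1, -1,  0],   # coefficient of (0)
      [ 1,  0, -1],   # coefficient of (1)
      [ 0,  1,  1]]   # coefficient of (2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The composition of consecutive boundary operators is the zero map.
print(matmul(d1, d2))   # [[0], [0], [0]]
```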
A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images). A simplicial map $f$ from a simplicial complex $S$ to another $T$ is a function from the vertex set of $S$ to the vertex set of $T$ such that the image of each simplex in $S$ (viewed as a set of vertices) is a simplex in $T$. It generates a linear map, called a chain map, from the chain complex of $S$ to the chain complex of $T$. Explicitly, it is given on $k$-chains by
$$f((v_0, \ldots, v_k)) = (f(v_0), \ldots, f(v_k))$$
if $f(v_0), \ldots, f(v_k)$ are all distinct, and otherwise it is set equal to $0$.
A chain map $f$ between two chain complexes $(A_*, d_{A,*})$ and $(B_*, d_{B,*})$ is a sequence $f_*$ of homomorphisms $f_n : A_n \to B_n$ for each $n$ that commutes with the boundary operators on the two chain complexes, so
$$d_{B,n} \circ f_n = f_{n-1} \circ d_{A,n}.$$
This requirement is commonly expressed by a commutative diagram relating the two complexes.
A chain map sends cycles to cycles and boundaries to boundaries.
See references.
Discrete differential forms: cochains
For each vector space $C_i$ in the chain complex we consider its dual space $C_i^* := \mathrm{Hom}(C_i, \mathbf{R})$, and $d^{i-1} = \partial_i^*$ is its dual linear operator
$$d^{i-1} : C_{i-1}^* \to C_i^*.$$
This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex
$$\cdots \leftarrow C_{i+1}^* \xleftarrow{\partial_i^*} C_i^* \xleftarrow{\partial_{i-1}^*} C_{i-1}^* \leftarrow \cdots$$
The cochain complex $(C^*, d^*)$ is the dual notion to a chain complex. It consists of a sequence of vector spaces
$$\ldots, C^0, C^1, C^2, C^3, C^4, \ldots$$
connected by linear operators $d^n : C^n \to C^{n+1}$ satisfying $d^{n+1} \circ d^n = 0$. The cochain complex may be written out in a similar fashion to the chain complex.
$$\cdots \xrightarrow{d^{-1}} C^0 \xrightarrow{d^0} C^1 \xrightarrow{d^1} C^2 \xrightarrow{d^2} C^3 \xrightarrow{d^3} C^4 \xrightarrow{d^4} \cdots$$
The index $n$ in either $C_n$ or $C^n$ is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension.
The elements of the individual vector spaces of a (co)chain complex are called cochains. The elements in the kernel of $d$ are called cocycles (or closed elements), and the elements in the image of $d$ are called coboundaries (or exact elements). Right from the definition of the differential, all coboundaries are cocycles.
The Poincaré lemma states that if $B$ is an open ball in $\mathbf{R}^n$, any closed $p$-form $\omega$ defined on $B$ is exact, for any integer $p$ with $1 \leq p \leq n$.
When we refer to cochains as discrete (differential) forms, we refer to $d$ as the exterior derivative. We also use the calculus notation for the values of the forms:
$$\omega(s) = \int_s \omega.$$
Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval:
$$\sum_{i=0}^{n-1} \frac{\Delta F}{\Delta x}(a + ih + h/2)\,\Delta x = F(b) - F(a).$$
Stokes' theorem says that the sum of a form $\omega$ over the boundary of some orientable manifold $\Omega$ is equal to the sum of its exterior derivative $d\omega$ over the whole of $\Omega$, i.e.,
$$\int_\Omega d\omega = \int_{\partial\Omega} \omega.$$
It is worthwhile to examine the underlying principle by considering an example for $d = 2$ dimensions. The essential idea can be understood by the diagram on the left, which shows that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains.
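The cancellation argument can be replayed in code. A sketch (Python; the grid size, the random $1$-cochain, and the helper names are illustrative): on a grid of counterclockwise-oriented unit squares, summing $d\omega$ over all squares leaves exactly the counterclockwise sum of $\omega$ over the boundary edges.

```python
import random

m, n = 4, 3   # grid of m x n unit squares

# A 1-cochain: a value for each horizontal and vertical edge of the grid.
random.seed(0)
hor = {(i, j): random.random() for i in range(m) for j in range(n + 1)}
ver = {(i, j): random.random() for i in range(m + 1) for j in range(n)}

def d_omega(i, j):
    """Exterior derivative on the square [i, i+1] x [j, j+1], traversed
    counterclockwise: bottom + right - top - left."""
    return hor[(i, j)] + ver[(i + 1, j)] - hor[(i, j + 1)] - ver[(i, j)]

interior_sum = sum(d_omega(i, j) for i in range(m) for j in range(n))

# Counterclockwise integral over the boundary of the whole rectangle:
boundary_sum = (sum(hor[(i, 0)] for i in range(m))       # bottom, rightward
                + sum(ver[(m, j)] for j in range(n))      # right, upward
                - sum(hor[(i, n)] for i in range(m))      # top, leftward
                - sum(ver[(0, j)] for j in range(n)))     # left, downward

assert abs(interior_sum - boundary_sum) < 1e-12
print("interior contributions cancel pairwise; only the boundary remains")
```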
See references.
The wedge product of forms
In discrete calculus, the wedge product is a construction that creates higher-order forms from lower-order ones: adjoining two cochains of degree $p$ and $q$ produces a composite cochain of degree $p + q$.
For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension.
For simplicial complexes, the wedge product is implemented as the cup product: if $f^p$ is a $p$-cochain and $g^q$ is a $q$-cochain, then
$$(f^p \smile g^q)(\sigma) = f^p(\sigma_{0,1,\ldots,p}) \cdot g^q(\sigma_{p,p+1,\ldots,p+q}),$$
where $\sigma$ is a $(p+q)$-simplex and $\sigma_S$, for $S \subset \{0, 1, \ldots, p+q\}$, is the simplex spanned by $S$ in the $(p+q)$-simplex whose vertices are indexed by $\{0, \ldots, p+q\}$. So, $\sigma_{0,1,\ldots,p}$ is the $p$-th front face and $\sigma_{p,p+1,\ldots,p+q}$ is the $q$-th back face of $\sigma$, respectively.
The coboundary of the cup product of cochains $f^p$ and $g^q$ is given by
$$d(f^p \smile g^q) = df^p \smile g^q + (-1)^p (f^p \smile dg^q).$$
The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary.
The cup product operation satisfies the identity
$$\alpha^p \smile \beta^q = (-1)^{pq} (\beta^q \smile \alpha^p).$$
In other words, the corresponding multiplication is graded-commutative.
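The coboundary formula above can be verified on a single $2$-simplex. A sketch (Python; the cochain values are illustrative) with $f$ a $0$-cochain and $g$ a $1$-cochain, so $p = 0$ and the sign $(-1)^p$ is $+1$:

```python
# Cochains on the full 2-simplex with vertices 0, 1, 2.
f = {(0,): 2.0, (1,): -1.0, (2,): 3.0}                  # 0-cochain
g = {(0, 1): 1.0, (0, 2): 4.0, (1, 2): -2.0}            # 1-cochain

df = {(u, v): f[(v,)] - f[(u,)] for (u, v) in g}        # coboundary of f
def dg(s):  # coboundary of g on a 2-simplex (u, v, w)
    u, v, w = s
    return g[(v, w)] - g[(u, w)] + g[(u, v)]

# Cup products: (f cup g)(u,v) = f(u) g(u,v);  (h cup g)(u,v,w) = h(u,v) g(v,w)
fg = {(u, v): f[(u,)] * g[(u, v)] for (u, v) in g}      # a 1-cochain
def d_fg(s):  # coboundary of the 1-cochain f cup g
    u, v, w = s
    return fg[(v, w)] - fg[(u, w)] + fg[(u, v)]

s = (0, 1, 2)
lhs = d_fg(s)
rhs = df[(0, 1)] * g[(1, 2)] + f[(0,)] * dg(s)   # (df cup g) + (f cup dg)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)   # -4.0 -4.0
```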
See references.
Laplace operator
The Laplace operator $\Delta f$ of a function $f$ at a vertex $p$ is (up to a factor) the rate at which the average value of $f$ over a cellular neighborhood of $p$ deviates from $f(p)$. The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena.
The codifferential
$$\delta : C^k \to C^{k-1}$$
is an operator defined on $k$-forms by:
$$\delta = (-1)^{n(k-1)+1} \star d \star = (-1)^k \star^{-1} d \star,$$
where $d$ is the exterior derivative or differential and $\star$ is the Hodge star operator.
The codifferential is the adjoint of the exterior derivative according to Stokes' theorem:
$$(\eta, \delta\zeta) = (d\eta, \zeta).$$
Since the differential satisfies $d^2 = 0$, the codifferential has the corresponding property
$$\delta^2 = \star d \star \star d \star = (-1)^{k(n-k)} \star d^2 \star = 0.$$
The Laplace operator is defined by:
$$\Delta = (\delta + d)^2 = \delta d + d\delta.$$
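On a graph, with the standard inner products on nodes and edges, $\Delta = \delta d$ on $0$-cochains reduces to the familiar graph Laplacian $(\Delta f)(p) = \sum_{q \sim p} (f(p) - f(q))$. A sketch (Python; the graph and the values are illustrative):

```python
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

def laplacian(f, p):
    """(delta d f)(p): the sum of f(p) - f(q) over all neighbors q of p."""
    total = 0.0
    for u, v in edges:
        if u == p:
            total += f[p] - f[v]
        elif v == p:
            total += f[p] - f[u]
    return total

f = {"a": 1.0, "b": 4.0, "c": 2.5, "d": 7.0}
for p in sorted(f):
    print(p, laplacian(f, p))

# A constant function is harmonic: its Laplacian vanishes at every vertex.
const = dict.fromkeys(f, 5.0)
assert all(laplacian(const, p) == 0.0 for p in f)
```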
See references.
Related
Discrete element method
Divided differences
Finite difference coefficient
Finite difference method
Finite element method
Finite volume method
Numerical differentiation
Numerical integration
Numerical methods for ordinary differential equations
See also
Calculus of finite differences
Calculus on finite weighted graphs
Cellular automaton
Discrete differential geometry
Discrete Laplace operator
Discrete Morse theory
References