Summation
In mathematics, summation is the addition of a sequence of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one summand results in this summand itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.
Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where
is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers can be denoted as \sum_{i=1}^{n} i.
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}.
Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.
Notation
= Capital-sigma notation =
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ∑, an enlarged form of the upright capital Greek letter sigma. This is defined as

\sum_{i=m}^{n} a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n
where i is the index of summation; a_i is an indexed variable representing each term of the sum; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by one for each successive term, stopping when i = n.
This is read as "sum of a_i, from i = m to n".
Here is an example showing the summation of squares:
\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 86.
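The correspondence between Σ notation and an explicit loop can be sketched in Python; the helper name summation below is illustrative, not standard notation:

```python
def summation(f, m, n):
    """Sum f(i) for i = m, m+1, ..., n (inclusive upper bound)."""
    return sum(f(i) for i in range(m, n + 1))

# The example above: the sum of i**2 for i from 3 to 6
assert summation(lambda i: i**2, 3, 6) == 86
```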
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as
i, j, k, and n; the latter is also often used for the upper bound of a summation.
Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n. For example, one might write that:
\sum a_i^2 = \sum_{i=1}^{n} a_i^2.
Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
\sum_{0 \leq k < 100} f(k)

is an alternative notation for \sum_{k=0}^{99} f(k), the sum of f(k) over all (integers) k in the specified range. Similarly,
\sum_{x \in S} f(x)

is the sum of f(x) over all elements x in the set S, and

\sum_{d \mid n} \mu(d)

is the sum of μ(d) over all positive integers d dividing n.
There are also ways to generalize the use of many sigma signs. For example,
\sum_{i,j}

is the same as

\sum_{i} \sum_{j}.
A similar notation is used for the product of a sequence, where
∏, an enlarged form of the Greek capital letter pi, is used instead of ∑.
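As a loose analogy (an illustration only, not part of the mathematical notation), these generalized forms map naturally onto Python generator expressions:

```python
f = lambda k: k * k
S = {2, 3, 5, 8}

sum_range = sum(f(k) for k in range(0, 100))                   # sum over 0 <= k < 100
sum_set = sum(f(x) for x in S)                                 # sum over all x in the set S
sum_divisors = sum(f(d) for d in range(1, 13) if 12 % d == 0)  # sum over positive divisors d of 12
```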
= Special cases =
It is possible to sum fewer than 2 numbers:
If the summation has one summand x, then the evaluated sum is x.
If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case.
For example, if n = m in the definition above, then there is only one term in the sum; if n = m − 1, then there is none.
= Algebraic sum =
The phrase 'algebraic sum' refers to a sum of terms which may have positive or negative signs. Terms with positive signs are added, while terms with negative signs are subtracted.
Formal definition
Summation may be defined recursively as follows:
\sum_{i=a}^{b} g(i) = 0, for b < a;
\sum_{i=a}^{b} g(i) = g(b) + \sum_{i=a}^{b-1} g(i), for b ≥ a.
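A minimal Python sketch of this recursive definition (illustrative only; an iterative sum avoids Python's recursion limit for large ranges):

```python
def rec_sum(g, a, b):
    """Recursively sum g(i) for i = a..b; the empty sum (b < a) is 0."""
    if b < a:
        return 0
    return g(b) + rec_sum(g, a, b - 1)

assert rec_sum(lambda i: i, 1, 100) == 5050
```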
Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,
\sum_{k=a}^{b} f(k) = \int_{[a,b]} f \, d\mu

where [a, b] is the subset of the integers from a to b, and where μ is the counting measure over the integers.
Calculus of finite differences
Given a function f that is defined over the integers in the interval [m, n], the following equation holds:
f(n) - f(m) = \sum_{i=m}^{n-1} (f(i+1) - f(i)).
This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences, which states that:
f(n) - f(m) = \int_{m}^{n} f'(x) \, dx,

where

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

is the derivative of f.
An example of an application of the above equation is the following:
n^k = \sum_{i=0}^{n-1} \left((i+1)^k - i^k\right).
Using the binomial theorem, this may be rewritten as:
n^k = \sum_{i=0}^{n-1} \left(\sum_{j=0}^{k-1} \binom{k}{j} i^j\right).
The above formula is more commonly used for inverting the difference operator Δ, defined by:
\Delta(f)(n) = f(n+1) - f(n),
where f is a function defined on the nonnegative integers.
Thus, given such a function f, the problem is to compute the antidifference of f, a function
F = Δ⁻¹f such that ΔF = f. That is,
F(n+1) - F(n) = f(n).
This function is defined up to the addition of a constant, and may be chosen as
F(n) = \sum_{i=0}^{n-1} f(i).
There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where
f(n) = n^k and, by linearity, for every polynomial function of n.
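A small Python check of the antidifference relation, assuming the conventional choice F(n) = f(0) + ⋯ + f(n − 1) described above (the function names are illustrative):

```python
def antidifference(f, n):
    """F(n) = sum of f(i) for i = 0..n-1, so that F(n+1) - F(n) = f(n)."""
    return sum(f(i) for i in range(n))

f = lambda i: i**3
for n in range(10):
    assert antidifference(f, n + 1) - antidifference(f, n) == f(n)
```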
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:
\int_{s=a-1}^{b} f(s)\, ds \leq \sum_{i=a}^{b} f(i) \leq \int_{s=a}^{b+1} f(s)\, ds
and for any decreasing function f:
\int_{s=a}^{b+1} f(s)\, ds \leq \sum_{i=a}^{b} f(i) \leq \int_{s=a-1}^{b} f(s)\, ds.
For more general approximations, see the Euler–Maclaurin formula.
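As a quick numerical sanity check of the increasing-function bounds, here is a Python sketch using f(s) = s², whose integral has an elementary closed form (the helper names are illustrative):

```python
def f(s):
    return s * s  # an increasing function for s >= 0

def integral_f(lo, hi):
    # Exact integral of s**2 from lo to hi, via the antiderivative s**3 / 3.
    return (hi**3 - lo**3) / 3

a, b = 1, 50
total = sum(f(i) for i in range(a, b + 1))
assert integral_f(a - 1, b) <= total <= integral_f(a, b + 1)
```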
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
\frac{b-a}{n} \sum_{i=0}^{n-1} f\!\left(a + i\,\frac{b-a}{n}\right) \approx \int_{a}^{b} f(x)\, dx,
since the right-hand side is by definition the limit for n → ∞ of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
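For instance, a left Riemann sum in Python approximates the integral of sin over [0, π]; this is only a sketch, and the quality of the approximation depends on f and n:

```python
import math

def left_riemann_sum(f, a, b, n):
    """Compute (b - a)/n * sum of f(a + i*(b - a)/n) for i = 0..n-1."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

print(left_riemann_sum(math.sin, 0.0, math.pi, 1000))  # close to the exact value 2.0
```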
Identities
The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.
= General identities =
\sum_{n=s}^{t} C \cdot f(n) = C \cdot \sum_{n=s}^{t} f(n)   (distributivity)
\sum_{n=s}^{t} f(n) \pm \sum_{n=s}^{t} g(n) = \sum_{n=s}^{t} \left(f(n) \pm g(n)\right)   (commutativity and associativity)
\sum_{n=s}^{t} f(n) = \sum_{n=s+p}^{t+p} f(n-p)   (index shift)
\sum_{n \in B} f(n) = \sum_{m \in A} f(\sigma(m)),   for a bijection σ from a finite set A onto a set B (index change); this generalizes the preceding formula.
\sum_{n=s}^{t} f(n) = \sum_{n=s}^{j} f(n) + \sum_{n=j+1}^{t} f(n)   (splitting a sum, using associativity)
\sum_{n=a}^{b} f(n) = \sum_{n=0}^{b} f(n) - \sum_{n=0}^{a-1} f(n)   (a variant of the preceding formula)
\sum_{n=s}^{t} f(n) = \sum_{n=0}^{t-s} f(t-n)   (the sum from the first term up to the last is equal to the sum from the last down to the first)
\sum_{n=0}^{t} f(n) = \sum_{n=0}^{t} f(t-n)   (a particular case of the formula above)
\sum_{i=k_0}^{k_1} \sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1} \sum_{i=k_0}^{k_1} a_{i,j}   (commutativity and associativity, again)
\sum_{k \leq j \leq i \leq n} a_{i,j} = \sum_{i=k}^{n} \sum_{j=k}^{i} a_{i,j} = \sum_{j=k}^{n} \sum_{i=j}^{n} a_{i,j} = \sum_{j=0}^{n-k} \sum_{i=k}^{n-j} a_{i+j,i}   (another application of commutativity and associativity)
\sum_{n=2s}^{2t+1} f(n) = \sum_{n=s}^{t} f(2n) + \sum_{n=s}^{t} f(2n+1)   (splitting a sum into its odd and even parts, for even indexes)
\sum_{n=2s+1}^{2t} f(n) = \sum_{n=s+1}^{t} f(2n) + \sum_{n=s+1}^{t} f(2n-1)   (splitting a sum into its odd and even parts, for odd indexes)
\left(\sum_{i=0}^{n} a_i\right) \left(\sum_{j=0}^{n} b_j\right) = \sum_{i=0}^{n} \sum_{j=0}^{n} a_i b_j   (distributivity)
\sum_{i=s}^{m} \sum_{j=t}^{n} a_i c_j = \left(\sum_{i=s}^{m} a_i\right) \left(\sum_{j=t}^{n} c_j\right)   (distributivity allows factorization)
\sum_{n=s}^{t} \log_b f(n) = \log_b \prod_{n=s}^{t} f(n)   (the logarithm of a product is the sum of the logarithms of the factors)
C^{\sum_{n=s}^{t} f(n)} = \prod_{n=s}^{t} C^{f(n)}   (the exponential of a sum is the product of the exponential of the summands)
\sum_{m=0}^{k} \sum_{n=0}^{m} f(m,n) = \sum_{m=0}^{k} \sum_{n=m}^{k} f(n,m),   for any function f from \mathbb{Z} \times \mathbb{Z}.
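Several of these identities are easy to verify numerically; a brief Python spot-check of the index-shift and sum-splitting identities (the test function and bounds here are arbitrary choices):

```python
f = lambda n: n * n - 3 * n + 1
s, t, p, j = 2, 12, 5, 7

# Index shift: sum_{n=s}^{t} f(n) = sum_{n=s+p}^{t+p} f(n - p)
assert sum(f(n) for n in range(s, t + 1)) == sum(f(n - p) for n in range(s + p, t + p + 1))

# Splitting a sum at j (with s <= j < t)
assert sum(f(n) for n in range(s, t + 1)) == (
    sum(f(n) for n in range(s, j + 1)) + sum(f(n) for n in range(j + 1, t + 1))
)
```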
= Powers and logarithm of arithmetic progressions =
\sum_{i=1}^{n} c = nc   for every c that does not depend on i
\sum_{i=0}^{n} i = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}   (Sum of the simplest arithmetic progression, consisting of the first n natural numbers.)
\sum_{i=1}^{n} (2i-1) = n^2   (Sum of first odd natural numbers)
\sum_{i=0}^{n} 2i = n(n+1)   (Sum of first even natural numbers)
\sum_{i=1}^{n} \log i = \log n!   (A sum of logarithms is the logarithm of the product)
\sum_{i=0}^{n} i^2 = \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}   (Sum of the first squares, see square pyramidal number.)
\sum_{i=0}^{n} i^3 = \left(\sum_{i=0}^{n} i\right)^2 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4}   (Nicomachus's theorem)
More generally, one has Faulhaber's formula for p > 1:

\sum_{k=1}^{n} k^p = \frac{n^{p+1}}{p+1} + \frac{1}{2} n^p + \sum_{k=2}^{p} \binom{p}{k} \frac{B_k}{p-k+1}\, n^{p-k+1},

where B_k denotes a Bernoulli number, and \binom{p}{k} is a binomial coefficient.
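These closed forms are easy to spot-check in Python for a particular n (a sanity check, not a proof):

```python
n = 25
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(i**3 for i in range(1, n + 1)) == (n * (n + 1) // 2) ** 2  # Nicomachus's theorem
```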
= Summation index in exponents =
In the following summations, a is assumed to be different from 1.
\sum_{i=0}^{n-1} a^i = \frac{1-a^n}{1-a}   (sum of a geometric progression)
\sum_{i=0}^{n-1} \frac{1}{2^i} = 2 - \frac{1}{2^{n-1}}   (special case for a = 1/2)
\sum_{i=0}^{n-1} i a^i = \frac{a - na^n + (n-1)a^{n+1}}{(1-a)^2}   (a times the derivative with respect to a of the geometric progression)
\begin{aligned}\sum_{i=0}^{n-1} (b + id) a^i &= b \sum_{i=0}^{n-1} a^i + d \sum_{i=0}^{n-1} i a^i \\ &= b \left(\frac{1-a^n}{1-a}\right) + d \left(\frac{a - na^n + (n-1)a^{n+1}}{(1-a)^2}\right) \\ &= \frac{b(1-a^n) - (n-1)da^n}{1-a} + \frac{da(1-a^{n-1})}{(1-a)^2}\end{aligned}   (sum of an arithmetico–geometric sequence)
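A brief exact check of the geometric-progression formula in Python, using rational arithmetic to avoid floating-point error (the particular a and n are arbitrary):

```python
from fractions import Fraction

a, n = Fraction(3, 7), 20
assert sum(a**i for i in range(n)) == (1 - a**n) / (1 - a)
```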
= Binomial coefficients and factorials =
There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
Involving the binomial theorem
\sum_{i=0}^{n} \binom{n}{i} a^{n-i} b^i = (a+b)^n,   the binomial theorem
\sum_{i=0}^{n} \binom{n}{i} = 2^n,   the special case where a = b = 1
\sum_{i=0}^{n} \binom{n}{i} p^i (1-p)^{n-i} = 1,   the special case where p = a = 1 − b, which, for 0 \leq p \leq 1, expresses the sum of the binomial distribution
\sum_{i=0}^{n} i \binom{n}{i} = n(2^{n-1}),   the value at a = b = 1 of the derivative with respect to a of the binomial theorem
\sum_{i=0}^{n} \frac{\binom{n}{i}}{i+1} = \frac{2^{n+1}-1}{n+1},   the value at a = b = 1 of the antiderivative with respect to a of the binomial theorem
Involving permutation numbers
In the following summations, {}_{n}P_{k} is the number of k-permutations of n.
\sum_{i=0}^{n} {}_{i}P_{k} \binom{n}{i} = {}_{n}P_{k}(2^{n-k})
\sum_{i=1}^{n} {}_{i+k}P_{k+1} = \sum_{i=1}^{n} \prod_{j=0}^{k} (i+j) = \frac{(n+k+1)!}{(n-1)!\,(k+2)}
\sum_{i=0}^{n} i! \cdot \binom{n}{i} = \sum_{i=0}^{n} {}_{n}P_{i} = \lfloor n! \cdot e \rfloor, \quad n \in \mathbb{Z}^{+},   where \lfloor x \rfloor denotes the floor function.
Others
\sum_{k=0}^{m} \binom{n+k}{n} = \binom{n+m+1}{n+1}
\sum_{i=k}^{n} \binom{i}{k} = \binom{n+1}{k+1}
\sum_{i=0}^{n} i \cdot i! = (n+1)! - 1
\sum_{i=0}^{n} \binom{m+i-1}{i} = \binom{m+n}{n}
\sum_{i=0}^{n} \binom{n}{i}^{2} = \binom{2n}{n}
\sum_{i=0}^{n} \frac{1}{i!} = \frac{\lfloor n!\, e \rfloor}{n!}
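Two of these (the hockey-stick identity and the identity for the sum of squared binomial coefficients) can be spot-checked in Python with math.comb:

```python
from math import comb

n, k = 12, 4
assert sum(comb(i, k) for i in range(k, n + 1)) == comb(n + 1, k + 1)  # hockey-stick identity
assert sum(comb(n, i) ** 2 for i in range(n + 1)) == comb(2 * n, n)    # sum of squared binomial coefficients
```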
= Harmonic numbers =
\sum_{i=1}^{n} \frac{1}{i} = H_n   (the nth harmonic number)
\sum_{i=1}^{n} \frac{1}{i^k} = H_n^{k}   (a generalized harmonic number)
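Harmonic numbers have no elementary closed form, but they are easy to compute exactly with rational arithmetic in Python (the function name is illustrative):

```python
from fractions import Fraction

def harmonic(n, k=1):
    """Generalized harmonic number: sum of 1/i**k for i = 1..n."""
    return sum(Fraction(1, i**k) for i in range(1, n + 1))

print(harmonic(4))     # 25/12
print(harmonic(3, 2))  # 49/36
```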
Growth rates
The following are useful approximations (using theta notation):
\sum_{i=1}^{n} i^c \in \Theta(n^{c+1})   for real c greater than −1
\sum_{i=1}^{n} \frac{1}{i} \in \Theta(\log_e n)   (see Harmonic number)
\sum_{i=1}^{n} c^i \in \Theta(c^n)   for real c greater than 1
\sum_{i=1}^{n} \log(i)^c \in \Theta(n \cdot \log(n)^{c})   for non-negative real c
\sum_{i=1}^{n} \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^{c})   for non-negative real c, d
\sum_{i=1}^{n} \log(i)^c \cdot i^d \cdot b^i \in \Theta(n^{d} \cdot \log(n)^{c} \cdot b^{n})   for real b > 1 and non-negative real c, d
History
In 1675, Gottfried Wilhelm Leibniz, in a letter to Henry Oldenburg, suggests the symbol ∫ to mark the sum of differentials (Latin: calculus summatorius), hence the S-shape. The renaming of this symbol to integral arose later in exchanges with Johann Bernoulli.
In 1755, the summation symbol Σ is attested in Leonhard Euler's Institutiones calculi differentialis. Euler uses the symbol in expressions like:
\Sigma\,(2wx + w^{2}) = x^{2}
In 1772, usage of Σ and Σn is attested by Lagrange.
In 1823, the capital letter S is attested as a summation symbol for series. This usage was apparently widespread.
In 1829, the summation symbol Σ is attested by Fourier and C. G. J. Jacobi. Fourier's use includes lower and upper bounds, for example:
\sum_{i=1}^{\infty} e^{-i^{2}t} \ldots
See also
Capital-pi notation
Einstein notation
Iverson bracket
Iterated binary operation
Kahan summation algorithm
Product (mathematics)
Summation by parts
Sigma § Character encoding
Notes
References
Bibliography
Cajori, Florian (1929). A History Of Mathematical Notations Volume II. Open Court Publishing. ISBN 978-0-486-67766-8.
External links
Media related to Summation at Wikimedia Commons