The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734; the solution was read on 5 December 1735 at The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after the city of Basel, hometown of Euler as well as of the Bernoulli family, who unsuccessfully attacked the problem.
The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots .}
The sum of the series is approximately equal to 1.644934. The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be π2/6 and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741.
The solution to this problem can be used to estimate the probability that two large random numbers are coprime. Two random integers in the range from 1 to n, in the limit as n goes to infinity, are relatively prime with a probability that approaches 6/π2, the reciprocal of the solution to the Basel problem.
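This limiting probability is easy to check numerically; a minimal Python sketch (the range bound 1000 is an arbitrary illustrative choice):

    from math import gcd, pi

    N = 1000
    coprime_pairs = sum(1 for a in range(1, N + 1) for b in range(1, N + 1) if gcd(a, b) == 1)
    print(coprime_pairs / N**2)   # about 0.608 for this N
    print(6 / pi**2)              # 0.6079...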
Euler's approach
Euler's original derivation of the value π2/6 essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series.
Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
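Euler's numerical check is easy to reproduce today; a minimal Python sketch comparing partial sums of the series against π2/6:

    from math import pi

    for terms in (10, 100, 1000, 100000):
        partial = sum(1.0 / n**2 for n in range(1, terms + 1))
        print(terms, partial)     # 1.5498..., 1.6350..., 1.6439..., 1.6449...
    print(pi**2 / 6)              # 1.6449340668...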
To follow Euler's argument, recall the Taylor series expansion of the sine function
{\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots }
Dividing through by x gives
{\displaystyle {\frac {\sin x}{x}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots .}
The Weierstrass factorization theorem shows that the right-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a heuristic for expanding an infinite-degree polynomial in terms of its roots, but in fact it is not always true for a general P(x). This factorization expands the equation into:
{\displaystyle {\begin{aligned}{\frac {\sin x}{x}}&=\left(1-{\frac {x}{\pi }}\right)\left(1+{\frac {x}{\pi }}\right)\left(1-{\frac {x}{2\pi }}\right)\left(1+{\frac {x}{2\pi }}\right)\left(1-{\frac {x}{3\pi }}\right)\left(1+{\frac {x}{3\pi }}\right)\cdots \\&=\left(1-{\frac {x^{2}}{\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{4\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{9\pi ^{2}}}\right)\cdots \end{aligned}}}
If we formally multiply out this product and collect all the x2 terms (we are allowed to do so because of Newton's identities), we see by induction that the x2 coefficient of sin x/x is
{\displaystyle -\left({\frac {1}{\pi ^{2}}}+{\frac {1}{4\pi ^{2}}}+{\frac {1}{9\pi ^{2}}}+\cdots \right)=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}
But from the original infinite series expansion of sin x/x, the coefficient of x2 is −1/3! = −1/6. These two coefficients must be equal; thus,
{\displaystyle -{\frac {1}{6}}=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}
Multiplying both sides of this equation by −π2 gives the sum of the reciprocals of the positive square integers.
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}.}
This method of calculating ζ(2) is detailed in expository fashion most notably in Havil's Gamma book, which details many zeta-function- and logarithm-related series and integrals, as well as a historical perspective, related to the Euler gamma constant.
= Generalizations of Euler's method using elementary symmetric polynomials =
Using formulae obtained from elementary symmetric polynomials, this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known formula expanded by the Bernoulli numbers:
{\displaystyle \zeta (2n)={\frac {(-1)^{n-1}(2\pi )^{2n}}{2\cdot (2n)!}}B_{2n}.}
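This formula is easy to spot-check numerically; a minimal Python sketch that generates the Bernoulli numbers with exact rational arithmetic from the standard recurrence and evaluates the right-hand side for the first few n:

    from fractions import Fraction
    from math import comb, factorial, pi

    def bernoulli(m):
        # B_0..B_m from the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1, with B_1 = -1/2
        B = [Fraction(1)]
        for n in range(1, m + 1):
            B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1))
        return B[m]

    for n in (1, 2, 3):
        zeta_2n = (-1) ** (n - 1) * (2 * pi) ** (2 * n) / (2 * factorial(2 * n)) * float(bernoulli(2 * n))
        print(2 * n, zeta_2n)   # 1.6449... (= pi^2/6), 1.0823... (= pi^4/90), 1.0173... (= pi^6/945)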
For example, let the partial product for sin(x) expanded as above be defined by
{\displaystyle {\frac {S_{n}(x)}{x}}:=\prod \limits _{k=1}^{n}\left(1-{\frac {x^{2}}{k^{2}\cdot \pi ^{2}}}\right)}
Then using known formulas for elementary symmetric polynomials (a.k.a. Newton's formulas expanded in terms of power sum identities), we can see (for example) that
{\displaystyle {\begin{aligned}\left[x^{4}\right]{\frac {S_{n}(x)}{x}}&={\frac {1}{2\pi ^{4}}}\left(\left(H_{n}^{(2)}\right)^{2}-H_{n}^{(4)}\right)\qquad \xrightarrow {n\rightarrow \infty } \qquad {\frac {1}{2\pi ^{4}}}\left(\zeta (2)^{2}-\zeta (4)\right)\\[4pt]&\qquad \implies \zeta (4)={\frac {\pi ^{4}}{90}}=-2\pi ^{4}\cdot [x^{4}]{\frac {\sin(x)}{x}}+{\frac {\pi ^{4}}{36}}\\[8pt]\left[x^{6}\right]{\frac {S_{n}(x)}{x}}&=-{\frac {1}{6\pi ^{6}}}\left(\left(H_{n}^{(2)}\right)^{3}-2H_{n}^{(2)}H_{n}^{(4)}+2H_{n}^{(6)}\right)\qquad \xrightarrow {n\rightarrow \infty } \qquad {\frac {1}{6\pi ^{6}}}\left(\zeta (2)^{3}-3\zeta (2)\zeta (4)+2\zeta (6)\right)\\[4pt]&\qquad \implies \zeta (6)={\frac {\pi ^{6}}{945}}=-3\cdot \pi ^{6}[x^{6}]{\frac {\sin(x)}{x}}-{\frac {2}{3}}{\frac {\pi ^{2}}{6}}{\frac {\pi ^{4}}{90}}+{\frac {\pi ^{6}}{216}},\end{aligned}}}
and so on for subsequent coefficients of {\displaystyle [x^{2k}]{\frac {S_{n}(x)}{x}}}. There are other forms of Newton's identities expressing the (finite) power sums {\displaystyle H_{n}^{(2k)}} in terms of the elementary symmetric polynomials,
{\displaystyle e_{i}\equiv e_{i}\left(-{\frac {\pi ^{2}}{1^{2}}},-{\frac {\pi ^{2}}{2^{2}}},-{\frac {\pi ^{2}}{3^{2}}},-{\frac {\pi ^{2}}{4^{2}}},\ldots \right),}
but we can go a more direct route to expressing non-recursive formulas for ζ(2k) using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials, given by
{\displaystyle (-1)^{k}ke_{k}(x_{1},\ldots ,x_{n})=\sum _{j=1}^{k}(-1)^{k-j-1}p_{j}(x_{1},\ldots ,x_{n})e_{k-j}(x_{1},\ldots ,x_{n}),}
which in our situation equates to the limiting recurrence relation (or generating function convolution, or product) expanded as
{\displaystyle {\frac {\pi ^{2k}}{2}}\cdot {\frac {(2k)\cdot (-1)^{k}}{(2k+1)!}}=-[x^{2k}]{\frac {\sin(\pi x)}{\pi x}}\times \sum _{i\geq 1}\zeta (2i)x^{i}.}
Then by differentiation and rearrangement of the terms in the previous equation, we obtain that
{\displaystyle \zeta (2k)=[x^{2k}]{\frac {1}{2}}\left(1-\pi x\cot(\pi x)\right).}
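This coefficient extraction can be reproduced with a computer algebra system; a minimal sketch, assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    # series expansion of (1 - pi*x*cot(pi*x))/2 around x = 0; its x^(2k) coefficients are zeta(2k)
    gen = sp.series((1 - sp.pi * x * sp.cot(sp.pi * x)) / 2, x, 0, 8).removeO()
    for k in (1, 2, 3):
        print(2 * k, gen.coeff(x, 2 * k))   # pi**2/6, pi**4/90, pi**6/945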
= Consequences of Euler's proof =
By the above results, we can conclude that
ζ(2k) is always a rational multiple of π^{2k}. In particular, since π and integer powers of it are transcendental, we can conclude at this point that ζ(2k) is irrational, and more precisely, transcendental for all k ≥ 1. By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant ζ(3), are almost completely unknown.
The Riemann zeta function
The Riemann zeta function ζ(s) is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula:
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}.}
Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of all positive integers:
{\displaystyle \zeta (2)=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+{\frac {1}{4^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}\approx 1.644934.}
Convergence can be proven by the integral test, or by the following inequality:
{\displaystyle {\begin{aligned}\sum _{n=1}^{N}{\frac {1}{n^{2}}}&<1+\sum _{n=2}^{N}{\frac {1}{n(n-1)}}\\&=1+\sum _{n=2}^{N}\left({\frac {1}{n-1}}-{\frac {1}{n}}\right)\\&=1+1-{\frac {1}{N}}\;{\stackrel {N\to \infty }{\longrightarrow }}\;2.\end{aligned}}}
This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n:
{\displaystyle \zeta (2n)={\frac {(2\pi )^{2n}(-1)^{n+1}B_{2n}}{2\cdot (2n)!}}.}
A proof using Euler's formula and L'Hôpital's rule
The normalized sinc function sinc(x) = sin(πx)/(πx) has a Weierstrass factorization representation as an infinite product:
{\displaystyle {\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right).}
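The product converges quickly, so it can be sanity-checked at a sample point; a minimal Python sketch comparing a truncated product against sin(πx)/(πx) at x = 1/2:

    from math import pi, sin

    x = 0.5
    product = 1.0
    for n in range(1, 100001):
        product *= 1 - x**2 / n**2
    print(product, sin(pi * x) / (pi * x))   # both about 0.63662 (= 2/pi)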
The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields
{\displaystyle {\frac {\pi \cos(\pi x)}{\sin(\pi x)}}-{\frac {1}{x}}=-\sum _{n=1}^{\infty }{\frac {2x}{n^{2}-x^{2}}}}
(by uniform convergence, the interchange of the derivative and infinite series is permissible). After dividing the equation by 2x and regrouping, one gets
{\displaystyle {\frac {1}{2x^{2}}}-{\frac {\pi \cot(\pi x)}{2x}}=\sum _{n=1}^{\infty }{\frac {1}{n^{2}-x^{2}}}.}
We make a change of variables (x = −it):
{\displaystyle -{\frac {1}{2t^{2}}}+{\frac {\pi \cot(-\pi it)}{2it}}=\sum _{n=1}^{\infty }{\frac {1}{n^{2}+t^{2}}}.}
Euler's formula can be used to deduce that
{\displaystyle {\frac {\pi \cot(-\pi it)}{2it}}={\frac {\pi }{2it}}{\frac {i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}}={\frac {\pi }{2t}}+{\frac {\pi }{t\left(e^{2\pi t}-1\right)}}.}
or using the corresponding hyperbolic function:
{\displaystyle {\frac {\pi \cot(-\pi it)}{2it}}={\frac {\pi }{2t}}{i\cot(\pi it)}={\frac {\pi }{2t}}\coth(\pi t).}
Then
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}+t^{2}}}={\frac {\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^{2}e^{2\pi t}-t^{2}\right)}}=-{\frac {1}{2t^{2}}}+{\frac {\pi }{2t}}\coth(\pi t).}
Now we take the limit as t approaches zero and use L'Hôpital's rule thrice. By Tannery's theorem applied to
{\textstyle \lim _{t\to \infty }\sum _{n=1}^{\infty }1/(n^{2}+1/t^{2})}, we can interchange the limit and infinite series so that
{\textstyle \lim _{t\to 0}\sum _{n=1}^{\infty }1/(n^{2}+t^{2})=\sum _{n=1}^{\infty }1/n^{2}}
and by L'Hôpital's rule
{\displaystyle {\begin{aligned}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}&=\lim _{t\to 0}{\frac {\pi }{4}}{\frac {2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^{2}e^{2\pi t}+te^{2\pi t}-t}}\\[6pt]&=\lim _{t\to 0}{\frac {\pi ^{3}te^{2\pi t}}{2\pi \left(\pi t^{2}e^{2\pi t}+2te^{2\pi t}\right)+e^{2\pi t}-1}}\\[6pt]&=\lim _{t\to 0}{\frac {\pi ^{2}(2\pi t+1)}{4\pi ^{2}t^{2}+12\pi t+6}}\\[6pt]&={\frac {\pi ^{2}}{6}}.\end{aligned}}}
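The limit obtained above can be cross-checked symbolically from the coth form; a minimal sketch, assuming SymPy is available:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    # limit of -1/(2t^2) + (pi/2t) coth(pi t) as t -> 0, which should equal zeta(2)
    expr = -1 / (2 * t**2) + sp.pi / (2 * t) * sp.coth(sp.pi * t)
    print(sp.limit(expr, t, 0))   # pi**2/6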
A proof using Fourier series
Use Parseval's identity (applied to the function f(x) = x) to obtain
{\displaystyle \sum _{n=-\infty }^{\infty }|c_{n}|^{2}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx,}
where
{\displaystyle {\begin{aligned}c_{n}&={\frac {1}{2\pi }}\int _{-\pi }^{\pi }xe^{-inx}\,dx\\[4pt]&={\frac {n\pi \cos(n\pi )-\sin(n\pi )}{\pi n^{2}}}i\\[4pt]&={\frac {\cos(n\pi )}{n}}i\\[4pt]&={\frac {(-1)^{n}}{n}}i\end{aligned}}}
for n ≠ 0, and c0 = 0. Thus,
{\displaystyle |c_{n}|^{2}={\begin{cases}{\dfrac {1}{n^{2}}},&{\text{for }}n\neq 0,\\0,&{\text{for }}n=0,\end{cases}}}
and
{\displaystyle \sum _{n=-\infty }^{\infty }|c_{n}|^{2}=2\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx.}
Therefore,
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{4\pi }}\int _{-\pi }^{\pi }x^{2}\,dx={\frac {\pi ^{2}}{6}}}
as required.
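Since |cn|2 = 1/n2 for n ≠ 0, Parseval's identity here says that twice the tail sum equals the integral term; a minimal Python sketch of the numerical agreement:

    from math import pi

    N = 100000
    coefficient_side = 2 * sum(1.0 / n**2 for n in range(1, N + 1))   # covers n and -n
    integral_side = pi**2 / 3                                         # (1/2pi) * integral of x^2 over (-pi, pi)
    print(coefficient_side, integral_side)                            # both about 3.2899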
Another proof using Parseval's identity
Given a complete orthonormal basis in the space {\displaystyle L_{\operatorname {per} }^{2}(0,1)} of L2 periodic functions over (0, 1) (i.e., the subspace of square-integrable functions which are also periodic), denoted by {\displaystyle \{e_{i}\}_{i=-\infty }^{\infty }}, Parseval's identity tells us that
{\displaystyle \|x\|^{2}=\sum _{i=-\infty }^{\infty }|\langle e_{i},x\rangle |^{2},}
where {\displaystyle \|x\|:={\sqrt {\langle x,x\rangle }}} is defined in terms of the inner product on this Hilbert space given by
{\displaystyle \langle f,g\rangle =\int _{0}^{1}f(x){\overline {g(x)}}\,dx,\ f,g\in L_{\operatorname {per} }^{2}(0,1).}
We can consider the orthonormal basis on this space defined by {\displaystyle e_{k}\equiv e_{k}(\vartheta ):=\exp(2\pi \imath k\vartheta )} such that {\displaystyle \langle e_{k},e_{j}\rangle =\int _{0}^{1}e^{2\pi \imath (k-j)\vartheta }\,d\vartheta =\delta _{k,j}}. Then if we take {\displaystyle f(\vartheta ):=\vartheta }, we can compute both that
{\displaystyle {\begin{aligned}\|f\|^{2}&=\int _{0}^{1}\vartheta ^{2}\,d\vartheta ={\frac {1}{3}}\\\langle f,e_{k}\rangle &=\int _{0}^{1}\vartheta e^{-2\pi \imath k\vartheta }\,d\vartheta ={\Biggl \{}{\begin{array}{ll}{\frac {1}{2}},&k=0\\-{\frac {1}{2\pi \imath k}}&k\neq 0,\end{array}}\end{aligned}}}
by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that
{\displaystyle {\begin{aligned}\|f\|^{2}={\frac {1}{3}}&=\sum _{\stackrel {k=-\infty }{k\neq 0}}^{\infty }{\frac {1}{(2\pi k)^{2}}}+{\frac {1}{4}}=2\sum _{k=1}^{\infty }{\frac {1}{(2\pi k)^{2}}}+{\frac {1}{4}}\\&\implies {\frac {\pi ^{2}}{6}}={\frac {2\pi ^{2}}{3}}-{\frac {\pi ^{2}}{2}}=\zeta (2).\end{aligned}}}
= Generalizations and recurrence relations =
Note that by considering higher-order powers of
{\displaystyle f_{j}(\vartheta ):=\vartheta ^{j}\in L_{\operatorname {per} }^{2}(0,1)} we can use integration by parts to extend this method to enumerating formulas for ζ(2j) when j > 1. In particular, suppose we let
{\displaystyle I_{j,k}:=\int _{0}^{1}\vartheta ^{j}e^{-2\pi \imath k\vartheta }\,d\vartheta ,}
so that integration by parts yields the recurrence relation that
{\displaystyle {\begin{aligned}I_{j,k}&={\begin{cases}{\frac {1}{j+1}},&k=0;\\[4pt]-{\frac {1}{2\pi \imath \cdot k}}+{\frac {j}{2\pi \imath \cdot k}}I_{j-1,k},&k\neq 0\end{cases}}\\[6pt]&={\begin{cases}{\frac {1}{j+1}},&k=0;\\[4pt]-\sum \limits _{m=1}^{j}{\frac {j!}{(j+1-m)!}}\cdot {\frac {1}{(2\pi \imath \cdot k)^{m}}},&k\neq 0.\end{cases}}\end{aligned}}}
Then applying Parseval's identity as we did in the first case above, along with the linearity of the inner product, yields
{\displaystyle {\begin{aligned}\|f_{j}\|^{2}={\frac {1}{2j+1}}&=2\sum _{k\geq 1}I_{j,k}{\bar {I}}_{j,k}+{\frac {1}{(j+1)^{2}}}\\[6pt]&=2\sum _{m=1}^{j}\sum _{r=1}^{j}{\frac {j!^{2}}{(j+1-m)!(j+1-r)!}}{\frac {(-1)^{r}}{\imath ^{m+r}}}{\frac {\zeta (m+r)}{(2\pi )^{m+r}}}+{\frac {1}{(j+1)^{2}}}.\end{aligned}}}
Proof using differentiation under the integral sign
It's possible to prove the result using elementary calculus by applying the differentiation under the integral sign technique to an integral due to Freitas:
{\displaystyle I(\alpha )=\int _{0}^{\infty }\ln \left(1+\alpha e^{-x}+e^{-2x}\right)dx.}
While the primitive function of the integrand cannot be expressed in terms of elementary functions, by differentiating with respect to α we arrive at
{\displaystyle {\frac {dI}{d\alpha }}=\int _{0}^{\infty }{\frac {e^{-x}}{1+\alpha e^{-x}+e^{-2x}}}dx,}
which can be integrated by substituting u = e^{−x} and decomposing into partial fractions. In the range −2 ≤ α ≤ 2 the definite integral reduces to
{\displaystyle {\frac {dI}{d\alpha }}={\frac {2}{\sqrt {4-\alpha ^{2}}}}\left[\arctan \left({\frac {\alpha +2}{\sqrt {4-\alpha ^{2}}}}\right)-\arctan \left({\frac {\alpha }{\sqrt {4-\alpha ^{2}}}}\right)\right].}
The expression can be simplified using the arctangent addition formula and integrated with respect to α by means of trigonometric substitution, resulting in
{\displaystyle I(\alpha )=-{\frac {1}{2}}\arccos \left({\frac {\alpha }{2}}\right)^{2}+c.}
The integration constant c can be determined by noticing that two distinct values of I(α) are related by I(2) = 4I(0), because when calculating I(2) we can factor 1 + 2e^{−x} + e^{−2x} = (1 + e^{−x})^2 and express it in terms of I(0) using the logarithm of a power identity and the substitution u = x/2. This makes it possible to determine c = π2/6, and it follows that
{\displaystyle I(-2)=2\int _{0}^{\infty }\ln(1-e^{-x})dx=-{\frac {\pi ^{2}}{3}}.}
This final integral can be evaluated by expanding the natural logarithm into its Taylor series:
{\displaystyle \int _{0}^{\infty }\ln(1-e^{-x})dx=-\sum _{n=1}^{\infty }\int _{0}^{\infty }{\frac {e^{-nx}}{n}}dx=-\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}
The last two identities imply
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}.}
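The key integral can also be checked by direct numerical quadrature; a minimal sketch, assuming SciPy is available (the split at x = 1 keeps the logarithmic singularity at 0 in its own subinterval):

    import numpy as np
    from math import pi
    from scipy.integrate import quad

    f = lambda x: np.log(1.0 - np.exp(-x))
    value = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    print(value, -pi**2 / 6)   # both about -1.6449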
Cauchy's proof
While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end).
For a proof using the residue theorem, see here.
= History of this proof =
The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s".
= The proof =
The main idea behind the proof is to bound the partial (finite) sums
{\displaystyle \sum _{k=1}^{m}{\frac {1}{k^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}}
between two expressions, each of which will tend to π2/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.
Let x be a real number with 0 < x < π/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have
{\displaystyle {\begin{aligned}{\frac {\cos(nx)+i\sin(nx)}{\sin ^{n}x}}&={\frac {(\cos x+i\sin x)^{n}}{\sin ^{n}x}}\\[4pt]&=\left({\frac {\cos x+i\sin x}{\sin x}}\right)^{n}\\[4pt]&=(\cot x+i)^{n}.\end{aligned}}}
From the binomial theorem, we have
{\displaystyle {\begin{aligned}(\cot x+i)^{n}=&{n \choose 0}\cot ^{n}x+{n \choose 1}(\cot ^{n-1}x)i+\cdots +{n \choose {n-1}}(\cot x)i^{n-1}+{n \choose n}i^{n}\\[6pt]=&{\Bigg (}{n \choose 0}\cot ^{n}x-{n \choose 2}\cot ^{n-2}x\pm \cdots {\Bigg )}\;+\;i{\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.\end{aligned}}}
Combining the two equations and equating imaginary parts gives the identity
{\displaystyle {\frac {\sin(nx)}{\sin ^{n}x}}={\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.}
We take this identity, fix a positive integer m, set n = 2m + 1, and consider xr = rπ/(2m + 1) for r = 1, 2, ..., m. Then nxr is a multiple of π and therefore sin(nxr) = 0. So,
{\displaystyle 0={{2m+1} \choose 1}\cot ^{2m}x_{r}-{{2m+1} \choose 3}\cot ^{2m-2}x_{r}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}}
for every r = 1, 2, ..., m. The values x1, x2, ..., xm are distinct numbers in the interval 0 < xr < π/2. Since the function cot2 x is one-to-one on this interval, the numbers tr = cot2 xr are distinct for r = 1, 2, ..., m. By the above equation, these m numbers are the roots of the mth degree polynomial
{\displaystyle p(t)={{2m+1} \choose 1}t^{m}-{{2m+1} \choose 3}t^{m-1}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}.}
By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that
{\displaystyle \cot ^{2}x_{1}+\cot ^{2}x_{2}+\cdots +\cot ^{2}x_{m}={\frac {\binom {2m+1}{3}}{\binom {2m+1}{1}}}={\frac {2m(2m-1)}{6}}.}
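This identity is easy to verify numerically for a specific m; a minimal Python sketch (m = 50 is an arbitrary choice):

    from math import pi, tan

    m = 50
    cot_sum = sum(1.0 / tan(r * pi / (2 * m + 1)) ** 2 for r in range(1, m + 1))
    print(cot_sum, 2 * m * (2 * m - 1) / 6)   # both 1650.0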
Substituting the identity csc2 x = cot2 x + 1, we have
{\displaystyle \csc ^{2}x_{1}+\csc ^{2}x_{2}+\cdots +\csc ^{2}x_{m}={\frac {2m(2m-1)}{6}}+m={\frac {2m(2m+2)}{6}}.}
Now consider the inequality cot2 x < 1/x2 < csc2 x, which holds for all 0 < x < π/2. If we add up all these inequalities for each of the numbers xr = rπ/(2m + 1), and if we use the two identities above, we get
{\displaystyle {\frac {2m(2m-1)}{6}}<\left({\frac {2m+1}{\pi }}\right)^{2}+\left({\frac {2m+1}{2\pi }}\right)^{2}+\cdots +\left({\frac {2m+1}{m\pi }}\right)^{2}<{\frac {2m(2m+2)}{6}}.}
Multiplying through by (π/(2m + 1))2, this becomes
{\displaystyle {\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m-1}{2m+1}}\right)<{\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}<{\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m+2}{2m+1}}\right).}
As m approaches infinity, the left and right hand expressions each approach π2/6, so by the squeeze theorem,
{\displaystyle \zeta (2)=\sum _{k=1}^{\infty }{\frac {1}{k^{2}}}=\lim _{m\to \infty }\left({\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}\right)={\frac {\pi ^{2}}{6}}}
and this completes the proof.
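The squeeze can be seen concretely for a moderate m; a minimal Python sketch with m = 1000:

    from math import pi

    m = 1000
    partial = sum(1.0 / k**2 for k in range(1, m + 1))
    lower = (pi**2 / 6) * (2 * m / (2 * m + 1)) * ((2 * m - 1) / (2 * m + 1))
    upper = (pi**2 / 6) * (2 * m / (2 * m + 1)) * ((2 * m + 2) / (2 * m + 1))
    print(lower, partial, upper)   # lower < partial < upper, all within about 0.003 of pi^2/6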
Proof assuming Weil's conjecture on Tamagawa numbers
A proof is also possible assuming Weil's conjecture on Tamagawa numbers. The conjecture asserts for the case of the algebraic group SL2(R) that the Tamagawa number of the group is one. That is, the quotient of the special linear group over the rational adeles by the special linear group of the rationals (a compact set, because SL2(Q) is a lattice in the adeles) has Tamagawa measure 1:
{\displaystyle \tau (SL_{2}(\mathbb {Q} )\setminus SL_{2}(A_{\mathbb {Q} }))=1.}
To determine a Tamagawa measure, the group SL2 consists of matrices {\displaystyle {\begin{bmatrix}x&y\\z&t\end{bmatrix}}} with xt − yz = 1. An invariant volume form on the group is
{\displaystyle \omega ={\frac {1}{x}}dx\wedge dy\wedge dz.}
The measure of the quotient is the product of the measure of SL2(Z)∖SL2(R) corresponding to the infinite place, and the measures of SL2(Zp) at each finite place, where Zp denotes the p-adic integers.
For the local factors,
{\displaystyle \omega (SL_{2}(\mathbb {Z} _{p}))=|SL_{2}(F_{p})|\omega (SL_{2}(\mathbb {Z} _{p},p))}
where Fp is the field with p elements, and SL2(Zp, p) is the congruence subgroup modulo p. Since each of the coordinates
x, y, z maps the latter group onto pZp, and |1/x|p = 1, the measure of SL2(Zp, p) is μp(pZp)^3 = p^{−3}, where μp is the normalized Haar measure on Zp. Also, a standard computation shows that
|SL2(Fp)| = p(p2 − 1). Putting these together gives ω(SL2(Zp)) = 1 − 1/p2.
At the infinite place, an integral computation over the fundamental domain of SL2(Z) shows that ω(SL2(Z)∖SL2(R)) = π2/6, and therefore the Weil conjecture finally gives
{\displaystyle 1={\frac {\pi ^{2}}{6}}\prod _{p}\left(1-{\frac {1}{p^{2}}}\right).}
On the right-hand side, we recognize the Euler product for 1/ζ(2), and so this gives the solution to the Basel problem.
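Numerically, the interplay between the local factors and π2/6 is easy to see; a minimal Python sketch multiplying the Euler factors over primes up to a bound:

    from math import pi

    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    product = 1.0
    for p in primes_up_to(100000):
        product *= 1 - 1 / p**2
    print((pi**2 / 6) * product)   # tends to 1 as the prime bound grows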
This approach shows the connection between (hyperbolic) geometry and arithmetic, and can be inverted to give a proof of the Weil conjecture for the special case of SL2, contingent on an independent proof that ζ(2) = π2/6.
Geometric proof
The Basel problem can be proved with Euclidean geometry, using the insight that the real line can be seen as a circle of infinite radius. An intuitive, if not completely rigorous, sketch is given here.
Choose an integer N, and take N equally spaced points on a circle with circumference equal to 2N. The radius of the circle is N/π and the length of each arc between two points is 2. Call the points P1..N.
Take another generic point Q on the circle, which will lie at a fraction 0 < α < 1 of the arc between two consecutive points (say P1 and P2, without loss of generality).
Draw all the chords joining Q with each of the points P1..N. Now (this is the key to the proof), compute the sum of the inverse squares of the lengths of all these chords; call it sisc.
The proof relies on the notable fact that (for a fixed α) the sisc does not depend on N. Note that intuitively, as N increases, the number of chords increases, but their length increases too (as the circle gets bigger), so their inverse square decreases.
In particular, take the case where α = 1/2, meaning that Q is the midpoint of the arc between two consecutive P's. The sisc can then be found trivially from the case N = 1, where there is only one P, and one Q on the opposite side of the circle. Then the chord is the diameter of the circle, of length 2/π. The sisc is then π2/4.
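The independence from N (for α = 1/2) can be checked directly by summing inverse squared chord lengths; a minimal Python sketch:

    from math import pi, sin

    def sisc(N):
        # N equally spaced points on a circle of circumference 2N (radius N/pi);
        # Q sits at the midpoint of one arc, so its arc distances to the points are 1, 3, ..., 2N - 1.
        R = N / pi
        total = 0.0
        for k in range(N):
            chord = 2 * R * sin((2 * k + 1) / (2 * R))   # chord subtending an arc of length 2k + 1
            total += 1 / chord**2
        return total

    for N in (1, 2, 4, 8, 16):
        print(N, sisc(N))   # all equal pi^2/4 = 2.4674...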
When N goes to infinity, the circle approaches the real line. If you set the origin at Q, the points P1..N are positioned at the odd integer positions (positive and negative), since the arcs have length 1 from Q to P1, and 2 onward. You hence get this variation of the Basel problem:
{\displaystyle \sum _{z=-\infty }^{\infty }{\frac {1}{(2z-1)^{2}}}={\frac {\pi ^{2}}{4}}}
From here, you can recover the original formulation with a bit of algebra, as:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}=\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{2}}}+\sum _{n=1}^{\infty }{\frac {1}{(2n)^{2}}}={\frac {1}{2}}\sum _{z=-\infty }^{\infty }{\frac {1}{(2z-1)^{2}}}+{\frac {1}{4}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}}
that is,
{\displaystyle {\frac {3}{4}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{8}}}
or
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}}.
The independence of the sisc from N can be proved easily with Euclidean geometry for the more restrictive case where N is a power of 2, i.e. N = 2^n, which still allows the limiting argument to be applied. The proof proceeds by induction on n, and uses the Inverse Pythagorean Theorem, which states that:
{\displaystyle {\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}={\frac {1}{h^{2}}}}
where a and b are the cathetes and h is the height of a right triangle.
In the base case of n = 0, there is only 1 chord. In the case of α = 1/2, it corresponds to the diameter, and the sisc is π2/4 as stated above.
Now, assume that you have 2^n points on a circle with radius 2^n/π and center O, and 2^{n+1} points on a circle with radius 2^{n+1}/π and center R. The induction step consists in showing that these two circles have the same sisc for a given α.
Start by drawing the circles so that they share point Q. Note that R lies on the smaller circle. Then, note that 2^{n+1} is always even, and a simple geometric argument shows that you can pick pairs of opposite points P1 and P2 on the larger circle by joining each pair with a diameter. Furthermore, for each pair, one of the points will be in the "lower" half of the circle (closer to Q) and the other in the "upper" half.
The diameter P1P2 of the bigger circle cuts the smaller circle at R and at another point P. You can then make the following considerations:
∠P1QP2 is a right angle, since P1P2 is a diameter.
∠QPR is a right angle, since QR is a diameter.
∠QRP2 = ∠QRP is half of ∠QOP, by the Inscribed Angle Theorem.
Hence, the arc QP is equal to the arc QP2, again because the radius is half.
The chord QP is the height of the right triangle QP1P2, so by the Inverse Pythagorean Theorem:
{\displaystyle {\frac {1}{{\overline {QP}}^{2}}}={\frac {1}{{\overline {QP_{1}}}^{2}}}+{\frac {1}{{\overline {QP_{2}}}^{2}}}}
Hence for half of the points on the bigger circle (the ones in the lower half) there is a corresponding point on the smaller circle with the same arc distance from Q (since the circumference of the smaller circle is half that of the bigger circle, the last two points closest to R must have arc distance 2 as well). Vice versa, for each of the 2^n points on the smaller circle, we can build a pair of points on the bigger circle, and all of these points are equidistant and have the same arc distance from Q.
Furthermore, the total sisc for the bigger circle is the same as the sisc for the smaller circle, since each pair of points on the bigger circle has the same inverse square sum as the corresponding point on the smaller circle.
Other identities
See the special cases of the identities for the Riemann zeta function when s = 2. Other notably special identities and representations of this constant appear in the sections below.
= Series representations =
The following are series representations of the constant:
{\displaystyle {\begin{aligned}\zeta (2)&=3\sum _{k=1}^{\infty }{\frac {1}{k^{2}{\binom {2k}{k}}}}\\[6pt]&=\sum _{i=1}^{\infty }\sum _{j=1}^{\infty }{\frac {(i-1)!(j-1)!}{(i+j)!}}.\end{aligned}}}
There are also BBP-type series expansions for ζ(2).
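The first of these series converges very quickly; a minimal Python sketch:

    from math import comb, pi

    s = 3 * sum(1.0 / (k**2 * comb(2 * k, k)) for k in range(1, 40))
    print(s, pi**2 / 6)   # agree to machine precision after a few dozen terms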
= Integral representations =
The following are integral representations of ζ(2):
{\displaystyle {\begin{aligned}\zeta (2)&=-\int _{0}^{1}{\frac {\log x}{1-x}}\,dx\\[6pt]&=\int _{0}^{\infty }{\frac {x}{e^{x}-1}}\,dx\\[6pt]&=\int _{0}^{1}{\frac {(\log x)^{2}}{(1+x)^{2}}}\,dx\\[6pt]&=2+2\int _{1}^{\infty }{\frac {\lfloor x\rfloor -x}{x^{3}}}\,dx\\[6pt]&=\exp \left(2\int _{2}^{\infty }{\frac {\pi (x)}{x(x^{2}-1)}}\,dx\right)\\[6pt]&=\int _{0}^{1}\int _{0}^{1}{\frac {dx\,dy}{1-xy}}\\[6pt]&={\frac {4}{3}}\int _{0}^{1}\int _{0}^{1}{\frac {dx\,dy}{1-(xy)^{2}}}\\[6pt]&=\int _{0}^{1}\int _{0}^{1}{\frac {1-x}{1-xy}}\,dx\,dy+{\frac {2}{3}}.\end{aligned}}}
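The first of these integrals is easy to check numerically; a minimal sketch, assuming SciPy is available:

    import numpy as np
    from math import pi
    from scipy.integrate import quad

    value, _ = quad(lambda x: -np.log(x) / (1.0 - x), 0, 1)
    print(value, pi**2 / 6)   # both about 1.6449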
= Continued fractions =
In van der Poorten's classic article chronicling Apéry's proof of the irrationality of ζ(3), the author notes as "a red herring" the similarity of a simple continued fraction for Apéry's constant to the following one for the Basel constant:
{\displaystyle {\frac {\zeta (2)}{5}}={\cfrac {1}{{\widetilde {v}}_{1}+{\cfrac {1^{4}}{{\widetilde {v}}_{2}+{\cfrac {2^{4}}{{\widetilde {v}}_{3}+{\cfrac {3^{4}}{{\widetilde {v}}_{4}+\ddots }}}}}}}},}
where {\displaystyle {\widetilde {v}}_{n}=11n^{2}-11n+3\mapsto \{3,25,69,135,\ldots \}}. Another continued fraction of a similar form is:
{\displaystyle {\frac {\zeta (2)}{2}}={\cfrac {1}{v_{1}+{\cfrac {1^{4}}{v_{2}+{\cfrac {2^{4}}{v_{3}+{\cfrac {3^{4}}{v_{4}+\ddots }}}}}}}},}
where {\displaystyle v_{n}=2n-1\mapsto \{1,3,5,7,9,\ldots \}}.
See also
List of sums of reciprocals
References
Weil, André (1983), Number Theory: An Approach Through History, Springer-Verlag, ISBN 0-8176-3141-0.
Dunham, William (1999), Euler: The Master of Us All, Mathematical Association of America, ISBN 0-88385-328-0.
Derbyshire, John (2003), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, Joseph Henry Press, ISBN 0-309-08549-7.
Edwards, Harold M. (2001), Riemann's Zeta Function, Dover, ISBN 0-486-41740-9.
External links
An infinite series of surprises by C. J. Sangwin
From ζ(2) to Π. The Proof. step-by-step proof
Remarques sur un beau rapport entre les series des puissances tant directes que reciproques (PDF), English translation with notes of Euler's paper by Lucas Willis and Thomas J. Osler
Ed Sandifer, How Euler did it (PDF)
James A. Sellers (February 5, 2002), Beyond Mere Convergence (PDF), retrieved 2004-02-27
Robin Chapman, Evaluating ζ(2) (fourteen proofs)
Visualization of Euler's factorization of the sine function
Johan Wästlund (December 8, 2010), Summing inverse squares by Euclidean geometry (PDF)
Why is pi here? And why is it squared? A geometric answer to the Basel problem on YouTube (animated proof based on the above)