In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function f, defined on an interval [a, b] ⊂ R, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b]. Functions whose total variation is finite are called functions of bounded variation.
Historical note
The concept of total variation for functions of one real variable was first introduced by Camille Jordan in the paper (Jordan 1881). He used the new concept to prove a convergence theorem for Fourier series of discontinuous periodic functions whose variation is bounded. The extension of the concept to functions of more than one variable, however, is not simple, for various reasons.
Definitions
= Total variation for functions of one real variable =
Definition 1.1. The total variation of a real-valued (or more generally complex-valued) function f, defined on an interval [a, b] ⊂ R, is the quantity

{\displaystyle V_{a}^{b}(f)=\sup _{\mathcal {P}}\sum _{i=0}^{n_{P}-1}|f(x_{i+1})-f(x_{i})|,}

where the supremum runs over the set of all partitions

{\displaystyle {\mathcal {P}}=\left\{P=\{x_{0},\dots ,x_{n_{P}}\}\mid P{\text{ is a partition of }}[a,b]\right\}}

of the given interval, which means that

{\displaystyle a=x_{0}<x_{1}<\dots <x_{n_{P}}=b.}
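Definition 1.1 can be explored numerically: the sum over any fixed partition is a lower bound for the total variation, and refining the partition drives it toward the supremum. A minimal sketch (the choice of f = sin and the grid size are illustrative assumptions):

```python
import numpy as np

def tv_on_partition(f, xs):
    """Sum |f(x_{i+1}) - f(x_i)| over a fixed partition xs of [a, b].

    This is a lower bound for the total variation V_a^b(f); refining
    the partition increases the sum toward the supremum.
    """
    vals = np.array([f(x) for x in xs])
    return np.sum(np.abs(np.diff(vals)))

# f(x) = sin(x) on [0, 2*pi]: the total variation is 4
xs = np.linspace(0.0, 2.0 * np.pi, 100001)
print(tv_on_partition(np.sin, xs))  # approaches 4 as the grid is refined
```

For a monotone function the sum telescopes and any partition already gives the exact value; the fine grid only matters near the extrema of f.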
= Total variation for functions of n > 1 real variables =
Definition 1.2. Let Ω be an open subset of Rn. Given a function f belonging to L1(Ω), the total variation of f in Ω is defined as

{\displaystyle V(f,\Omega ):=\sup \left\{\int _{\Omega }f(x)\operatorname {div} \phi (x)\,\mathrm {d} x\colon \phi \in C_{c}^{1}(\Omega ,\mathbb {R} ^{n}),\ \Vert \phi \Vert _{L^{\infty }(\Omega )}\leq 1\right\},}

where {\displaystyle C_{c}^{1}(\Omega ,\mathbb {R} ^{n})} is the set of continuously differentiable vector functions of compact support contained in Ω, {\displaystyle \Vert \;\Vert _{L^{\infty }(\Omega )}} is the essential supremum norm, and {\displaystyle \operatorname {div} } is the divergence operator.
This definition does not require that the domain Ω ⊆ Rn of the given function be a bounded set.
= Total variation in measure theory =
Classical total variation definition
Following Saks (1937, p. 10), consider a signed measure μ on a measurable space (X, Σ): then it is possible to define two set functions {\displaystyle {\overline {\mathrm {W} }}(\mu ,\cdot )} and {\displaystyle {\underline {\mathrm {W} }}(\mu ,\cdot )}, respectively called upper variation and lower variation, as follows

{\displaystyle {\overline {\mathrm {W} }}(\mu ,E)=\sup \left\{\mu (A)\mid A\in \Sigma {\text{ and }}A\subset E\right\}\qquad \forall E\in \Sigma }

{\displaystyle {\underline {\mathrm {W} }}(\mu ,E)=\inf \left\{\mu (A)\mid A\in \Sigma {\text{ and }}A\subset E\right\}\qquad \forall E\in \Sigma }

Clearly,

{\displaystyle {\overline {\mathrm {W} }}(\mu ,E)\geq 0\geq {\underline {\mathrm {W} }}(\mu ,E)\qquad \forall E\in \Sigma }
Definition 1.3. The variation (also called absolute variation) of the signed measure μ is the set function

{\displaystyle |\mu |(E)={\overline {\mathrm {W} }}(\mu ,E)+\left|{\underline {\mathrm {W} }}(\mu ,E)\right|\qquad \forall E\in \Sigma }

and its total variation is defined as the value of this measure on the whole space of definition, i.e.

{\displaystyle \|\mu \|=|\mu |(X)}
Modern definition of total variation norm
Saks (1937, p. 11) uses upper and lower variations to prove the Hahn–Jordan decomposition: according to his version of this theorem, the upper and lower variation are respectively a non-negative and a non-positive measure. Using a more modern notation, define

{\displaystyle \mu ^{+}(\cdot )={\overline {\mathrm {W} }}(\mu ,\cdot )\,,}

{\displaystyle \mu ^{-}(\cdot )=-{\underline {\mathrm {W} }}(\mu ,\cdot )\,.}

Then μ+ and μ− are two non-negative measures such that

{\displaystyle \mu =\mu ^{+}-\mu ^{-}}

{\displaystyle |\mu |=\mu ^{+}+\mu ^{-}}
The last measure is sometimes called, by abuse of notation, total variation measure.
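For a signed measure with finitely many atoms, the Jordan decomposition and the total variation norm can be computed directly; a minimal sketch (the dictionary-of-atoms representation is an assumption for illustration):

```python
def jordan_decomposition(atoms):
    """Given a signed measure as {point: signed mass}, return
    (mu_plus, mu_minus, total_variation_norm).

    mu_plus collects the positive masses and mu_minus the negated
    negative ones, so mu = mu_plus - mu_minus and
    ||mu|| = mu_plus(X) + mu_minus(X).
    """
    mu_plus = {x: m for x, m in atoms.items() if m > 0}
    mu_minus = {x: -m for x, m in atoms.items() if m < 0}
    tv_norm = sum(mu_plus.values()) + sum(mu_minus.values())
    return mu_plus, mu_minus, tv_norm

mu = {"a": 2.0, "b": -3.0, "c": 0.5}
plus, minus, norm = jordan_decomposition(mu)
print(norm)  # 5.5 = (2.0 + 0.5) + 3.0
```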
Total variation norm of complex measures
If the measure μ is complex-valued, i.e. is a complex measure, its upper and lower variation cannot be defined and the Hahn–Jordan decomposition theorem can only be applied to its real and imaginary parts. However, it is possible to follow Rudin (1966, pp. 137–139) and define the total variation of the complex-valued measure μ as follows.
Definition 1.4. The variation of the complex-valued measure μ is the set function

{\displaystyle |\mu |(E)=\sup _{\pi }\sum _{A\in \pi }|\mu (A)|\qquad \forall E\in \Sigma }

where the supremum is taken over all partitions π of a measurable set E into a countable number of disjoint measurable subsets.
This definition coincides with the above definition {\displaystyle |\mu |=\mu ^{+}+\mu ^{-}} for the case of real-valued signed measures.
Total variation norm of vector-valued measures
The variation so defined is a positive measure (see Rudin (1966, p. 139)) and coincides with the one defined by 1.3 when μ is a signed measure: its total variation is defined as above. This definition also works if μ is a vector measure: the variation is then defined by the following formula

{\displaystyle |\mu |(E)=\sup _{\pi }\sum _{A\in \pi }\|\mu (A)\|\qquad \forall E\in \Sigma }

where the supremum is as above. This definition is slightly more general than the one given by Rudin (1966, p. 138) since it requires only finite partitions of the space X to be considered: this implies that it can be used also to define the total variation of finitely additive measures.
Total variation of probability measures
The total variation of any probability measure is exactly one, therefore it is not interesting as a means of investigating the properties of such measures. However, when μ and ν are probability measures, the total variation distance of probability measures can be defined as ‖μ − ν‖, where the norm is the total variation norm of signed measures. Using the property that (μ − ν)(X) = 0, we eventually arrive at the equivalent definition

{\displaystyle \|\mu -\nu \|=|\mu -\nu |(X)=2\sup \left\{\,\left|\mu (A)-\nu (A)\right|:A\in \Sigma \,\right\}}

and its values are non-trivial. The factor 2 above is usually dropped (as is the convention in the article total variation distance of probability measures). Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. For a categorical distribution it is possible to write the total variation distance as follows

{\displaystyle \delta (\mu ,\nu )=\sum _{x}\left|\mu (x)-\nu (x)\right|\;.}

It may also be normalized to take values in [0, 1] by halving the previous definition as follows

{\displaystyle \delta (\mu ,\nu )={\frac {1}{2}}\sum _{x}\left|\mu (x)-\nu (x)\right|}
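The normalized formula for categorical distributions is a one-liner; a small sketch (the two coin distributions are illustrative assumptions):

```python
def tv_distance(mu, nu):
    """Normalized total variation distance between two categorical
    distributions given as {outcome: probability}:
    delta(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)|, valued in [0, 1].
    """
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

fair = {"heads": 0.5, "tails": 0.5}
biased = {"heads": 0.8, "tails": 0.2}
print(tv_distance(fair, biased))  # 0.3, the largest gap |mu(A) - nu(A)| over events A
```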
Basic properties
= Total variation of differentiable functions =
The total variation of a {\displaystyle C^{1}({\overline {\Omega }})} function f can be expressed as an integral involving the given function instead of as the supremum of the functionals of definitions 1.1 and 1.2.
The form of the total variation of a differentiable function of one variable
Theorem 1. The total variation of a differentiable function f, defined on an interval [a, b] ⊂ R, has the following expression if f′ is Riemann integrable:

{\displaystyle V_{a}^{b}(f)=\int _{a}^{b}|f'(x)|\,\mathrm {d} x}
If f is differentiable and monotonic, then the above simplifies to

{\displaystyle V_{a}^{b}(f)=|f(a)-f(b)|}
For any differentiable function f, we can decompose the domain interval [a, b] into subintervals [a, a1], [a1, a2], …, [aN, b] (with a < a1 < a2 < ⋯ < aN < b) on each of which f is monotonic; the total variation of f over [a, b] can then be written as the sum of the local variations on those subintervals:

{\displaystyle {\begin{aligned}V_{a}^{b}(f)&=V_{a}^{a_{1}}(f)+V_{a_{1}}^{a_{2}}(f)+\,\cdots \,+V_{a_{N}}^{b}(f)\\[0.3em]&=|f(a)-f(a_{1})|+|f(a_{1})-f(a_{2})|+\,\cdots \,+|f(a_{N})-f(b)|\end{aligned}}}
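The integral form of Theorem 1 and the monotone-subinterval decomposition can be checked against each other numerically; a sketch assuming f(x) = sin(x) on [0, 2π], whose three monotone pieces give a total variation of exactly 4:

```python
import numpy as np

f, df = np.sin, np.cos

# Theorem 1: V_a^b(f) = integral of |f'(x)| dx, here via a trapezoid rule
xs = np.linspace(0.0, 2.0 * np.pi, 200001)
dx = xs[1] - xs[0]
absd = np.abs(df(xs))
v_integral = np.sum(0.5 * (absd[:-1] + absd[1:]) * dx)

# Monotone decomposition: sin is monotonic on [0, pi/2], [pi/2, 3pi/2], [3pi/2, 2pi],
# so the total variation is the sum of |f| differences at the breakpoints
breaks = [0.0, np.pi / 2, 3 * np.pi / 2, 2.0 * np.pi]
v_pieces = sum(abs(f(breaks[i]) - f(breaks[i + 1])) for i in range(3))

print(v_integral, v_pieces)  # both close to 4
```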
The form of the total variation of a differentiable function of several variables
Theorem 2. Given a {\displaystyle C^{1}({\overline {\Omega }})} function f defined on a bounded open set Ω ⊆ Rn, with ∂Ω of class C1, the total variation of f has the following expression:

{\displaystyle V(f,\Omega )=\int _{\Omega }\left|\nabla f(x)\right|\,\mathrm {d} x}
= Proof =
The first step in the proof is to establish an equality which follows from the Gauss–Ostrogradsky theorem.
= Lemma =
Under the conditions of the theorem, the following equality holds:

{\displaystyle \int _{\Omega }f\operatorname {div} \varphi =-\int _{\Omega }\nabla f\cdot \varphi }
Proof of the lemma
From the Gauss–Ostrogradsky theorem:

{\displaystyle \int _{\Omega }\operatorname {div} \mathbf {R} =\int _{\partial \Omega }\mathbf {R} \cdot \mathbf {n} }

By substituting {\displaystyle \mathbf {R} :=f\mathbf {\varphi } }, we have:

{\displaystyle \int _{\Omega }\operatorname {div} \left(f\mathbf {\varphi } \right)=\int _{\partial \Omega }\left(f\mathbf {\varphi } \right)\cdot \mathbf {n} }

where φ is zero on the boundary of Ω by definition (it has compact support in Ω), so:

{\displaystyle \int _{\Omega }\operatorname {div} \left(f\mathbf {\varphi } \right)=0}

{\displaystyle \int _{\Omega }\partial _{x_{i}}\left(f\mathbf {\varphi } _{i}\right)=0}

{\displaystyle \int _{\Omega }\mathbf {\varphi } _{i}\partial _{x_{i}}f+f\partial _{x_{i}}\mathbf {\varphi } _{i}=0}

{\displaystyle \int _{\Omega }f\partial _{x_{i}}\mathbf {\varphi } _{i}=-\int _{\Omega }\mathbf {\varphi } _{i}\partial _{x_{i}}f}

{\displaystyle \int _{\Omega }f\operatorname {div} \mathbf {\varphi } =-\int _{\Omega }\mathbf {\varphi } \cdot \nabla f}
= Proof of the equality =
Under the conditions of the theorem, from the lemma we have:

{\displaystyle \int _{\Omega }f\operatorname {div} \mathbf {\varphi } =-\int _{\Omega }\mathbf {\varphi } \cdot \nabla f\leq \left|\int _{\Omega }\mathbf {\varphi } \cdot \nabla f\right|\leq \int _{\Omega }\left|\mathbf {\varphi } \right|\cdot \left|\nabla f\right|\leq \int _{\Omega }\left|\nabla f\right|}

where in the last step φ could be omitted because, by definition, its essential supremum is at most one.
On the other hand, we consider

{\displaystyle \theta _{N}:=-\mathbb {I} _{\left[-N,N\right]}\mathbb {I} _{\{\nabla f\neq 0\}}{\frac {\nabla f}{\left|\nabla f\right|}}}

and {\displaystyle \theta _{N}^{*}}, which is an approximation of θN in {\displaystyle C_{c}^{1}} up to ε, with the same integral. We can do this since {\displaystyle C_{c}^{1}} is dense in {\displaystyle L^{1}}. Now again substituting into the lemma:
{\displaystyle {\begin{aligned}&\lim _{N\to \infty }\int _{\Omega }f\operatorname {div} \theta _{N}^{*}\\[4pt]&=\lim _{N\to \infty }\int _{\{\nabla f\neq 0\}}\mathbb {I} _{\left[-N,N\right]}\nabla f\cdot {\frac {\nabla f}{\left|\nabla f\right|}}\\[4pt]&=\lim _{N\to \infty }\int _{\left[-N,N\right]\cap {\{\nabla f\neq 0\}}}\nabla f\cdot {\frac {\nabla f}{\left|\nabla f\right|}}\\[4pt]&=\int _{\Omega }\left|\nabla f\right|\end{aligned}}}
This means that we have a sequence of values of {\textstyle \int _{\Omega }f\operatorname {div} \mathbf {\varphi } } that tends to {\textstyle \int _{\Omega }\left|\nabla f\right|}, while we also know that {\textstyle \int _{\Omega }f\operatorname {div} \mathbf {\varphi } \leq \int _{\Omega }\left|\nabla f\right|}. Q.E.D.
It can be seen from the proof that the supremum is attained when

{\displaystyle \varphi \to {\frac {-\nabla f}{\left|\nabla f\right|}}.}

The function f is said to be of bounded variation precisely if its total variation is finite.
= Total variation of a measure =
The total variation is a norm defined on the space of measures of bounded variation. The space of measures on a σ-algebra of sets is a Banach space, called the ca space, relative to this norm. It is contained in the larger Banach space, called the ba space, consisting of finitely additive (as opposed to countably additive) measures, also with the same norm. The distance function associated to the norm gives rise to the total variation distance between two measures μ and ν.
For finite measures on R, the link between the total variation of a measure μ and the total variation of a function, as described above, goes as follows. Given μ, define a function φ: R → R by

{\displaystyle \varphi (t)=\mu ((-\infty ,t])~.}

Then the total variation of the signed measure μ is equal to the total variation, in the above sense, of the function φ. In general, the total variation of a signed measure can be defined using Jordan's decomposition theorem by

{\displaystyle \|\mu \|_{TV}=\mu _{+}(X)+\mu _{-}(X)~,}

for any signed measure μ on a measurable space (X, Σ).
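For a purely atomic finite signed measure, the function φ(t) = μ((−∞, t]) is a step function, and summing |Δφ| over a fine partition recovers the total variation norm of μ; a small sketch with hypothetical atoms:

```python
import numpy as np

def phi(atoms, t):
    """phi(t) = mu((-inf, t]) for an atomic signed measure {point: mass}."""
    return sum(m for x, m in atoms.items() if x <= t)

mu = {-1.0: 2.0, 0.5: -1.5, 2.0: 0.25}

# Total variation norm of mu: sum of the absolute atom masses
tv_measure = sum(abs(m) for m in mu.values())

# Total variation of the step function phi over a fine partition of [-3, 3];
# each atom of mu shows up as one jump of phi, so the sums agree
ts = np.linspace(-3.0, 3.0, 10001)
vals = np.array([phi(mu, t) for t in ts])
tv_function = np.sum(np.abs(np.diff(vals)))

print(tv_measure, tv_function)  # both 3.75
```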
Applications
Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). As a functional, total variation finds applications in several branches of mathematics and engineering, such as optimal control, numerical analysis, and the calculus of variations, where the solution to a certain problem has to minimize its value. As an example, use of the total variation functional is common in the following two kinds of problems:
Numerical analysis of differential equations: this is the science of finding approximate solutions to differential equations. Applications of total variation to these problems are detailed in the article "total variation diminishing".
Image denoising: in image processing, denoising is a collection of methods used to reduce the noise in an image reconstructed from data obtained by electronic means, for example data transmission or sensing. "Total variation denoising" is the name for the application of total variation to image noise reduction; further details can be found in the papers of (Rudin, Osher & Fatemi 1992) and (Caselles, Chambolle & Novaga 2007). A sensible extension of this model to colour images, called Colour TV, can be found in (Blomgren & Chan 1998).
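As a hedged illustration of the denoising idea (not the algorithm of Rudin, Osher & Fatemi, which uses a PDE-based scheme), a 1-D discrete TV-regularized least-squares problem can be minimized by plain gradient descent on a smoothed total variation term; the test signal, the weight lam, and the step size are all illustrative assumptions:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.01, iters=3000, eps=1e-3):
    """Gradient descent on 0.5*||u - y||^2 + lam * sum_i s(u[i+1] - u[i]),
    where s(t) = sqrt(t^2 + eps) is a smooth surrogate for |t|."""
    u = y.copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)  # s'(d): smoothed sign of each jump
        # d/du_j of the TV term is sign(d_{j-1}) - sign(d_j), with boundary cases
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u = u - step * ((u - y) + lam * grad_tv)
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # a step signal
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
# the denoised signal has much smaller total variation than the noisy one
print(np.sum(np.abs(np.diff(noisy))), np.sum(np.abs(np.diff(denoised))))
```

The TV penalty favors piecewise-constant solutions, which is why it preserves the step edge instead of blurring it the way a quadratic smoothness penalty would.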
See also
Bounded variation
p-variation
Total variation diminishing
Total variation denoising
Quadratic variation
Total variation distance of probability measures
Kolmogorov–Smirnov test
Anisotropic diffusion
Notes
Historical references
Arzelà, Cesare (7 May 1905), "Sulle funzioni di due variabili a variazione limitata (On functions of two variables of bounded variation)", Rendiconto delle Sessioni della Reale Accademia delle Scienze dell'Istituto di Bologna, Nuova serie (in Italian), IX (4): 100–107, JFM 36.0491.02, archived from the original on 2007-08-07.
Golubov, Boris I. (2001) [1994], "Arzelà variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I. (2001) [1994], "Fréchet variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I. (2001) [1994], "Hardy variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I. (2001) [1994], "Pierpont variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I. (2001) [1994], "Vitali variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I. (2001) [1994], "Tonelli plane variation", Encyclopedia of Mathematics, EMS Press.
Golubov, Boris I.; Vitushkin, Anatoli G. (2001) [1994], "Variation of a function", Encyclopedia of Mathematics, EMS Press
Jordan, Camille (1881), "Sur la série de Fourier", Comptes rendus hebdomadaires des séances de l'Académie des sciences (in French), 92: 228–230, JFM 13.0184.01 (available at Gallica). This is, according to Boris Golubov, the first paper on functions of bounded variation.
Hahn, Hans (1921), Theorie der reellen Funktionen (in German), Berlin: Springer Verlag, pp. VII+600, JFM 48.0261.09.
Vitali, Giuseppe (1908) [17 dicembre 1907], "Sui gruppi di punti e sulle funzioni di variabili reali (On groups of points and functions of real variables)", Atti dell'Accademia delle Scienze di Torino (in Italian), 43: 75–92, JFM 39.0101.05, archived from the original on 2009-03-31. The paper containing the first proof of Vitali covering theorem.
References
Adams, C. Raymond; Clarkson, James A. (1933), "On definitions of bounded variation for functions of two variables", Transactions of the American Mathematical Society, 35 (4): 824–854, doi:10.1090/S0002-9947-1933-1501718-2, JFM 59.0285.01, MR 1501718, Zbl 0008.00602.
Cesari, Lamberto (1936), "Sulle funzioni a variazione limitata (On the functions of bounded variation)", Annali della Scuola Normale Superiore, II (in Italian), 5 (3–4): 299–313, JFM 62.0247.03, MR 1556778, Zbl 0014.29605. Available at Numdam.
Leoni, Giovanni (2017), A First Course in Sobolev Spaces: Second Edition, Graduate Studies in Mathematics, American Mathematical Society, pp. xxii+734, ISBN 978-1-4704-2921-8.
Saks, Stanisław (1937). Theory of the Integral. Monografie Matematyczne. Vol. 7 (2nd ed.). Warszawa–Lwów: G.E. Stechert & Co. pp. VI+347. JFM 63.0183.05. Zbl 0017.30004. (available at the Polish Virtual Library of Science). English translation from the original French by Laurence Chisholm Young, with two additional notes by Stefan Banach.
Rudin, Walter (1966), Real and Complex Analysis, McGraw-Hill Series in Higher Mathematics (1st ed.), New York: McGraw-Hill, pp. xi+412, MR 0210528, Zbl 0142.01701.
External links
One variable
"Total variation" on PlanetMath.
One and more variables
Function of bounded variation at Encyclopedia of Mathematics
Measure theory
Rowland, Todd. "Total Variation". MathWorld.
Jordan decomposition at PlanetMath.
Jordan decomposition at Encyclopedia of Mathematics
= Applications =
Caselles, Vicent; Chambolle, Antonin; Novaga, Matteo (2007), The discontinuity set of solutions of the TV denoising problem and some extensions, SIAM, Multiscale Modeling and Simulation, vol. 6 n. 3, archived from the original on 2011-09-27 (a work dealing with total variation application in denoising problems for image processing).
Rudin, Leonid I.; Osher, Stanley; Fatemi, Emad (1992), "Nonlinear total variation based noise removal algorithms", Physica D: Nonlinear Phenomena, 60 (1–4): 259–268, Bibcode:1992PhyD...60..259R, doi:10.1016/0167-2789(92)90242-F.
Blomgren, Peter; Chan, Tony F. (1998), "Color TV: total variation methods for restoration of vector-valued images", IEEE Transactions on Image Processing, 7 (3): 304–309, Bibcode:1998ITIP....7..304B, doi:10.1109/83.661180, PMID 18276250.
Tony F. Chan and Jackie (Jianhong) Shen (2005), Image Processing and Analysis - Variational, PDE, Wavelet, and Stochastic Methods, SIAM, ISBN 0-89871-589-X (with in-depth coverage and extensive applications of Total Variations in modern image processing, as started by Rudin, Osher, and Fatemi).