- Source: Pushforward measure
In measure theory, a pushforward measure (also known as push forward, push-forward or image measure) is obtained by transferring ("pushing forward") a measure from one measurable space to another using a measurable function.
Definition
Given measurable spaces $(X_{1}, \Sigma_{1})$ and $(X_{2}, \Sigma_{2})$, a measurable mapping $f \colon X_{1} \to X_{2}$ and a measure $\mu \colon \Sigma_{1} \to [0, +\infty]$, the pushforward of $\mu$ is defined to be the measure $f_{*}(\mu) \colon \Sigma_{2} \to [0, +\infty]$ given by

$$f_{*}(\mu)(B) = \mu\left(f^{-1}(B)\right) \qquad \text{for } B \in \Sigma_{2}.$$
This definition applies mutatis mutandis for a signed or complex measure.
The pushforward measure is also denoted as $\mu \circ f^{-1}$, $f_{\sharp}\mu$, $f\sharp\mu$, or $f\#\mu$.
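For a finitely supported (discrete) measure, the definition can be carried out directly by summing masses over preimages. The following Python sketch is illustrative only; the helper name `pushforward` and the example measure are not from any particular library.

```python
from collections import defaultdict

def pushforward(mu, f):
    """Push the discrete measure mu (dict: point -> mass) forward along f.

    For a finitely supported measure, f_*(mu)(B) = mu(f^{-1}(B)) amounts to
    moving each point mass at x to f(x) and summing masses with equal images.
    """
    nu = defaultdict(float)
    for x, mass in mu.items():
        nu[f(x)] += mass
    return dict(nu)

# Example: mu puts mass 1/4 on each of 0, 1, 2, 3; f(x) = x mod 2.
mu = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
print(pushforward(mu, lambda x: x % 2))   # {0: 0.5, 1: 0.5}
```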
Properties
= Change of variable formula =
Theorem: A measurable function $g$ on $X_{2}$ is integrable with respect to the pushforward measure $f_{*}(\mu)$ if and only if the composition $g \circ f$ is integrable with respect to the measure $\mu$. In that case, the integrals coincide, i.e.,

$$\int_{X_{2}} g \, d(f_{*}\mu) = \int_{X_{1}} g \circ f \, d\mu .$$

Note that in the previous formula $X_{1} = f^{-1}(X_{2})$.
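For finitely supported measures the change-of-variables formula reduces to regrouping a finite sum. The following Python check is a minimal sketch; the particular measure, map, and integrand are chosen only for illustration.

```python
mu = {0: 0.2, 1: 0.3, 2: 0.5}            # a measure on X1 = {0, 1, 2}
f = lambda x: x % 2                       # measurable map X1 -> X2 = {0, 1}
g = lambda y: 10.0 if y == 0 else 1.0     # integrand on X2

# Left-hand side: integrate g against the pushforward f_*(mu).
f_mu = {}
for x, m in mu.items():
    f_mu[f(x)] = f_mu.get(f(x), 0.0) + m
lhs = sum(g(y) * m for y, m in f_mu.items())

# Right-hand side: integrate g∘f against mu.
rhs = sum(g(f(x)) * m for x, m in mu.items())

print(lhs, rhs)                           # 7.3 and 7.3, up to rounding
assert abs(lhs - rhs) < 1e-12
```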
= Functoriality =
Pushforwards of measures make it possible to induce, from a function between measurable spaces $f \colon X \to Y$, a function between the spaces of measures $M(X) \to M(Y)$.
As with many induced mappings, this construction has the structure of a functor on the category of measurable spaces.
For the special case of probability measures, this property amounts to functoriality of the Giry monad.
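In particular, functoriality says that pushing forward along a composite map agrees with pushing forward in two stages, $(g \circ f)_{*}\mu = g_{*}(f_{*}\mu)$. A minimal Python sketch for discrete measures, with illustrative data (not a library API):

```python
def pushforward(mu, f):
    """Push the discrete measure mu (dict: point -> mass) forward along f."""
    nu = {}
    for x, m in mu.items():
        nu[f(x)] = nu.get(f(x), 0.0) + m
    return nu

mu = {0: 0.125, 1: 0.25, 2: 0.5, 3: 0.125}   # a measure on X = {0, 1, 2, 3}
f = lambda x: x // 2                          # X -> Y = {0, 1}
g = lambda y: -y                              # Y -> Z = {0, -1}

# (g∘f)_*(mu) computed in one step versus in two stages.
assert pushforward(mu, lambda x: g(f(x))) == pushforward(pushforward(mu, f), g)
```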
Examples and applications
A natural "Lebesgue measure" on the unit circle S1 (here thought of as a subset of the complex plane C) may be defined using a push-forward construction and Lebesgue measure λ on the real line R. Let λ also denote the restriction of Lebesgue measure to the interval [0, 2π) and let f : [0, 2π) → S1 be the natural bijection defined by f(t) = exp(i t). The natural "Lebesgue measure" on S1 is then the push-forward measure f∗(λ). The measure f∗(λ) might also be called "arc length measure" or "angle measure", since the f∗(λ)-measure of an arc in S1 is precisely its arc length (or, equivalently, the angle that it subtends at the centre of the circle.)
The previous example extends nicely to give a natural "Lebesgue measure" on the n-dimensional torus Tn; the circle case is recovered for n = 1, since S1 = T1. This Lebesgue measure on Tn is, up to normalization, the Haar measure for the compact, connected Lie group Tn.
Gaussian measures on infinite-dimensional vector spaces are defined using the push-forward and the standard Gaussian measure on the real line: a Borel measure γ on a separable Banach space X is called Gaussian if the push-forward of γ by any non-zero linear functional in the continuous dual space to X is a Gaussian measure on R.
Consider a measurable function f : X → X and the composition of f with itself n times:
$$f^{(n)} = \underbrace{f \circ f \circ \dots \circ f}_{n \text{ times}} \colon X \to X.$$
This iterated function forms a dynamical system. It is often of interest in the study of such systems to find a measure $\mu$ on $X$ that the map $f$ leaves unchanged, a so-called invariant measure, i.e., one for which $f_{*}(\mu) = \mu$.
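As a concrete illustration, assuming the standard fact that the doubling map f(x) = 2x mod 1 on [0, 1) preserves Lebesgue measure, the following Python sketch pushes uniform samples forward through f and checks that the mass of a test interval is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)     # samples from Lebesgue measure on [0, 1)
fx = (2.0 * x) % 1.0                           # samples from the pushforward f_*(lambda)

a, b = 0.2, 0.7                                # an arbitrary test interval
print(((x >= a) & (x < b)).mean())             # ~ 0.5 = lambda([a, b))
print(((fx >= a) & (fx < b)).mean())           # ~ 0.5 again, reflecting f_*(lambda) = lambda
```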
One can also consider quasi-invariant measures for such a dynamical system: a measure $\mu$ on $(X, \Sigma)$ is called quasi-invariant under $f$ if the push-forward of $\mu$ by $f$ is merely equivalent to the original measure $\mu$, not necessarily equal to it. A pair of measures $\mu, \nu$ on the same space are equivalent if and only if

$$\forall A \in \Sigma : \ \mu(A) = 0 \iff \nu(A) = 0,$$

so $\mu$ is quasi-invariant under $f$ if

$$\forall A \in \Sigma : \ \mu(A) = 0 \iff f_{*}\mu(A) = \mu\left(f^{-1}(A)\right) = 0.$$
Many natural probability distributions, such as the chi distribution, can be obtained via this construction.
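For instance, the chi distribution with k degrees of freedom is the pushforward of the standard Gaussian measure on R^k under the Euclidean norm. A Monte Carlo sketch in Python (the sample size and the choice k = 3 are arbitrary) compares the empirical mean with the closed-form mean $\sqrt{2}\,\Gamma((k+1)/2)/\Gamma(k/2)$:

```python
import math
import numpy as np

k = 3                                                      # degrees of freedom (arbitrary choice)
rng = np.random.default_rng(2)
# Norms of standard Gaussian vectors in R^k are samples from the pushforward.
samples = np.linalg.norm(rng.standard_normal((1_000_000, k)), axis=1)

# Closed-form mean of the chi distribution with k degrees of freedom.
theoretical_mean = math.sqrt(2.0) * math.gamma((k + 1) / 2) / math.gamma(k / 2)
print(samples.mean(), theoretical_mean)                    # both close to ~1.596 for k = 3
```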
Random variables induce pushforward measures: a random variable maps a probability space into a codomain space and endows that space with the probability measure defined by the pushforward. Furthermore, because a random variable is a (total) function, the inverse image of the whole codomain is the whole domain; since the measure of the whole domain is 1, the measure of the whole codomain is 1 as well. This means that random variables can be composed indefinitely, and the composites remain random variables that endow their codomain spaces with probability measures.
A generalization
In general, any measurable function can be pushed forward. The push-forward then becomes a linear operator, known as the transfer operator or Frobenius–Perron operator. In finite spaces this operator typically satisfies the requirements of the Frobenius–Perron theorem, and the maximal eigenvalue of the operator corresponds to the invariant measure.
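On a finite space the transfer operator can be written as a matrix acting on vectors of masses, and an invariant measure appears as an eigenvector with eigenvalue 1. The following Python sketch uses an illustrative toy map, not a library API:

```python
import numpy as np

f = [1, 2, 0, 0]                    # a toy map on the 4-point space {0, 1, 2, 3}
n = len(f)
P = np.zeros((n, n))
for x, y in enumerate(f):
    P[y, x] = 1.0                   # the mass sitting at x is sent to f(x)

# An eigenvector of P with eigenvalue 1 gives an invariant measure.
w, v = np.linalg.eig(P)
i = int(np.argmin(np.abs(w - 1.0)))
mu = np.real(v[:, i])
mu = mu / mu.sum()                  # normalise to a probability measure
print(mu)                           # uniform on the cycle {0, 1, 2}, zero at 3
assert np.allclose(P @ mu, mu)      # invariance: f_*(mu) = mu
```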
The adjoint to the push-forward is the pullback; as an operator on spaces of functions on measurable spaces, it is the composition operator or Koopman operator.
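In the finite setting this adjointness is the change-of-variables formula restated: pairing an observable with the pushed-forward measure equals pairing its pullback (its composition with f) with the original measure. A small Python check, with arbitrary illustrative data:

```python
import numpy as np

f = [1, 2, 0, 0]                          # a toy map on the 4-point space {0, 1, 2, 3}
mu = np.array([0.1, 0.2, 0.3, 0.4])       # an arbitrary measure, as a vector of masses
g = np.array([2.0, -1.0, 0.5, 7.0])       # an arbitrary observable on the same space

# Push-forward (transfer operator) applied to mu: move masses along f.
f_mu = np.zeros(4)
for x, y in enumerate(f):
    f_mu[y] += mu[x]

# Pullback (Koopman/composition operator) applied to g: compose with f.
g_of_f = g[f]

# Adjointness: <g, f_* mu> = <g∘f, mu>, i.e. the change-of-variables formula.
assert np.isclose(g @ f_mu, g_of_f @ mu)
print(g @ f_mu, g_of_f @ mu)              # both 1.4, up to rounding
```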
See also
Measure-preserving dynamical system
Normalizing flow
Optimal transport