Maurice Charles Kenneth Tweedie (30 September 1919 – 14 March 1996) was a British medical physicist and statistician from the University of Liverpool. He was known for his research into exponential family probability distributions.
Education and career
Tweedie read physics at the University of Reading and attained a BSc (general) and BSc (special) in physics in 1939, followed by an MSc in physics in 1941. He pursued a career in radiation physics, but his primary interest was mathematical statistics, where his accomplishments far surpassed his academic postings.
Contributions
Tweedie distributions
Tweedie's contributions included pioneering work on the inverse Gaussian distribution. Arguably his major achievement is the definition of a family of exponential dispersion models, characterized by closure under additive and reproductive convolution as well as under transformations of scale, that are now known as the Tweedie exponential dispersion models.
As a consequence of these properties, the Tweedie exponential dispersion models exhibit a power-law relationship between the variance and the mean, which makes them foci of convergence for a central-limit-like effect that acts on a wide variety of random data. The range of application of the Tweedie distributions is wide and includes the following (a simulation sketch of the variance-to-mean power law appears after this list):
Taylor's law,
fluctuation scaling,
1/f noise,
random matrix theory,
hematogenous cancer metastasis,
genomic structure and evolution,
regional blood flow heterogeneity,
multifractality,
self-organized criticality.
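A minimal simulation sketch (illustrative, not from the source), assuming the standard compound Poisson–gamma construction of a Tweedie variable with power parameter 1 < p < 2; it checks that the sample variance follows the power law $\mathrm{Var}[Y] \approx \phi\, \mu^{p}$, the variance-to-mean relationship described above:

import numpy as np

rng = np.random.default_rng(0)

def tweedie_cpg_sample(mu, p, phi, size):
    """Compound Poisson-gamma draw: Y is a sum of N gamma terms, N ~ Poisson.
    Parameterized so that E[Y] = mu and Var[Y] = phi * mu**p, for 1 < p < 2."""
    lam = mu ** (2 - p) / (phi * (2 - p))      # Poisson rate
    alpha = (2 - p) / (p - 1)                  # gamma shape per term
    theta = phi * (p - 1) * mu ** (p - 1)      # gamma scale per term
    n = rng.poisson(lam, size)                 # number of gamma summands
    y = np.zeros(size)
    pos = n > 0
    y[pos] = rng.gamma(n[pos] * alpha, theta)  # sum of n iid gammas is a gamma
    return y

p, phi = 1.5, 2.0
for mu in (1.0, 5.0, 25.0):
    y = tweedie_cpg_sample(mu, p, phi, 200_000)
    print(f"mu={mu:5.1f}  sample var={y.var():8.2f}  phi*mu**p={phi * mu ** p:8.2f}")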
Tweedie's formula
Tweedie is credited with a formula first published in Robbins (1956), which offers "a simple empirical Bayes approach to correcting selection bias". Let $\mu$ be a latent variable that is not observed but is known to have a prior distribution $p(\mu)$. Let $x = \mu + \epsilon$ be an observable, where $\epsilon \sim N(0, \Sigma)$ is a Gaussian noise variable (so that $p(x|\mu) = N(x|\mu, \Sigma)$). Let $\rho(x) = \int p(x|\mu)\, p(\mu)\, d\mu$ be the marginal probability density of $x$. Then the posterior mean and variance of $\mu$ given the observed $x$ are
$E[\mu|x] = x + \Sigma\, \frac{\nabla \rho(x)}{\rho(x)}; \qquad \mathrm{Var}[\mu|x] = E[\mu\mu^{T}|x] - E[\mu|x]\, E[\mu|x]^{T} = \Sigma \left( \frac{\nabla^{2} \rho(x)}{\rho(x)} - \frac{\nabla \rho(x)\, \nabla \rho(x)^{T}}{\rho(x)^{2}} \right) \Sigma + \Sigma.$
The higher-order posterior moments of $\mu$ are likewise obtainable as algebraic expressions in $\nabla \rho$, $\rho$ and $\Sigma$.
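A minimal numerical check of the formula (illustrative, not from the source), assuming a one-dimensional conjugate Gaussian prior so that the posterior mean and variance are available in closed form for comparison; the derivatives of the marginal density $\rho$ are taken by finite differences:

import numpy as np

m0, s0 = 2.0, 3.0   # hypothetical prior: mu ~ N(m0, s0^2)
sigma = 1.5         # noise: x | mu ~ N(mu, sigma^2)
x = 4.0             # an observed value

def rho(t):
    # Marginal density rho(t) = N(t | m0, s0^2 + sigma^2)
    v = s0 ** 2 + sigma ** 2
    return np.exp(-(t - m0) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

h = 1e-4
d1 = (rho(x + h) - rho(x - h)) / (2 * h)              # rho'(x)
d2 = (rho(x + h) - 2 * rho(x) + rho(x - h)) / h ** 2  # rho''(x)

post_mean = x + sigma ** 2 * d1 / rho(x)                                        # Tweedie mean
post_var = sigma ** 2 * (d2 / rho(x) - (d1 / rho(x)) ** 2) * sigma ** 2 + sigma ** 2  # Tweedie variance

v = s0 ** 2 + sigma ** 2
print(post_mean, (s0 ** 2 * x + sigma ** 2 * m0) / v)  # both ~ 3.6 (conjugate posterior mean)
print(post_var, sigma ** 2 * s0 ** 2 / v)              # both ~ 1.8 (conjugate posterior variance)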
Proof of the first part
Using
$\nabla N(x|\mu, \Sigma) = N(x|\mu, \Sigma)\, \nabla \log N(x|\mu, \Sigma) = -\Sigma^{-1} (x - \mu)\, N(x|\mu, \Sigma),$
we get
$\Sigma\, \frac{\nabla \rho(x)}{\rho(x)} = \frac{\Sigma \int \nabla N(x|\mu, \Sigma)\, p(\mu)\, d\mu}{\int N(x|\mu, \Sigma)\, p(\mu)\, d\mu} = -\frac{\int (x - \mu)\, N(x|\mu, \Sigma)\, p(\mu)\, d\mu}{\int N(x|\mu, \Sigma)\, p(\mu)\, d\mu} = -\int (x - \mu)\, p(\mu|x)\, d\mu = -x + E[\mu|x],$
where we have used Bayes' theorem to write
$p(\mu|x) = \frac{p(x|\mu)\, p(\mu)}{p(x)} = \frac{N(x|\mu, \Sigma)\, p(\mu)}{\int N(x|\mu, \Sigma)\, p(\mu)\, d\mu}.$
Rearranging the first chain of equalities gives $E[\mu|x] = x + \Sigma\, \nabla \rho(x) / \rho(x)$, the stated expression for the posterior mean.
Tweedie's formula is used in empirical Bayes methods and in diffusion models.
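A brief explanatory note (not from the source) on why the formula appears in those settings: since $\nabla \rho(x) / \rho(x) = \nabla \log \rho(x)$, the posterior mean can be written in terms of the score of the noisy marginal,
$E[\mu|x] = x + \Sigma\, \nabla \log \rho(x),$
so any estimate of $\nabla \log \rho$, such as a learned score function in a score-based diffusion model, immediately yields the minimum-mean-square-error denoiser for $x = \mu + \epsilon$.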