Whittle likelihood
In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his PhD thesis in 1951.
It is commonly used in time series analysis and signal processing for parameter estimation and signal detection.
Context
In a stationary Gaussian time series model, the likelihood function is (as usual in Gaussian models) a function of the associated mean and covariance parameters. With a large number ($N$) of observations, the ($N \times N$) covariance matrix may become very large, making computations very costly in practice. However, due to stationarity, the covariance matrix has a rather simple structure, and by using an approximation, computations may be simplified considerably (from $O(N^{2})$ to $O(N\log(N))$). The idea effectively boils down to assuming a heteroscedastic zero-mean Gaussian model in the Fourier domain; the model formulation is based on the time series' discrete Fourier transform and its power spectral density.
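To illustrate the computational point: the data summary needed by the Whittle likelihood is just the set of DFT (periodogram) ordinates, which an FFT delivers in $O(N\log(N))$ operations, whereas the exact Gaussian likelihood involves the full $N \times N$ covariance matrix. The following Python sketch contrasts the two ingredients; the simulated data, the AR(1)-like autocovariance sequence acf, and all variable names are made-up examples for illustration only.

import numpy as np

rng = np.random.default_rng(0)
N = 1024
dt = 1.0
x = rng.standard_normal(N)              # example data (white noise)

# The exact Gaussian likelihood needs the full N x N covariance matrix,
# built here from a made-up AR(1)-like autocovariance sequence:
acf = 0.9 ** np.arange(N)
C = acf[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]

# The Whittle approximation only needs the data's DFT, obtained in
# O(N log N) via the FFT:
x_tilde = np.fft.rfft(x)                # DFT coefficients, j = 0, ..., N/2
periodogram = np.abs(x_tilde) ** 2      # |x~_j|^2, used in the formulas below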
Definition
Let $X_1, \ldots, X_N$ be a stationary Gaussian time series with (one-sided) power spectral density $S_1(f)$, where $N$ is even and samples are taken at constant sampling intervals $\Delta_t$.
Let $\tilde{X}_1, \ldots, \tilde{X}_{N/2+1}$ be the (complex-valued) discrete Fourier transform (DFT) of the time series. Then for the Whittle likelihood one effectively assumes independent zero-mean Gaussian distributions for all $\tilde{X}_j$ with variances for the real and imaginary parts given by

$$\operatorname{Var}\left(\operatorname{Re}(\tilde{X}_j)\right) = \operatorname{Var}\left(\operatorname{Im}(\tilde{X}_j)\right) = \frac{N}{4\,\Delta_t}\,S_1(f_j)$$

where $f_j = \frac{j}{N\,\Delta_t}$ is the $j$th Fourier frequency. This approximate model immediately leads to the (logarithmic) likelihood function
$$\log\left(P(x_1,\ldots,x_N)\right) \;\propto\; -\sum_j \left( \log\left(S_1(f_j)\right) + \frac{|\tilde{x}_j|^2}{\frac{N}{2\,\Delta_t}\,S_1(f_j)} \right)$$
where $|\cdot|$ denotes the absolute value with $|\tilde{x}_j|^2 = \left(\operatorname{Re}(\tilde{x}_j)\right)^2 + \left(\operatorname{Im}(\tilde{x}_j)\right)^2$.
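Written out in code, this expression can be evaluated directly from the FFT of the data. The following is a minimal Python sketch under the stated assumptions ($N$ even, constant sampling interval); the function name whittle_log_likelihood and the white-noise example PSD are illustrative choices, not a standard API, and the zero-frequency (mean) term is simply skipped here.

import numpy as np

def whittle_log_likelihood(x, psd, dt):
    """Whittle log-likelihood (up to an additive constant) of the data x,
    given a one-sided power spectral density function psd(f) and the
    sampling interval dt.  The zero-frequency (mean) term is skipped."""
    N = len(x)
    x_tilde = np.fft.rfft(x)                  # DFT coefficients, j = 0, ..., N/2
    f = np.fft.rfftfreq(N, d=dt)              # Fourier frequencies f_j = j/(N*dt)
    S = psd(f[1:])
    power = np.abs(x_tilde[1:]) ** 2          # |x~_j|^2
    return -np.sum(np.log(S) + power / (N / (2.0 * dt) * S))

# Example: white noise of unit variance has one-sided PSD 2*dt.
rng = np.random.default_rng(1)
dt = 0.5
x = rng.standard_normal(4096)
print(whittle_log_likelihood(x, lambda f: np.full_like(f, 2.0 * dt), dt))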
Special case of a known noise spectrum
If the noise spectrum is assumed known a priori, and noise properties are not to be inferred from the data, the likelihood function may be simplified further by ignoring constant terms, leading to the sum-of-squares expression
$$\log\left(P(x_1,\ldots,x_N)\right) \;\propto\; -\sum_j \frac{|\tilde{x}_j|^2}{\frac{N}{2\,\Delta_t}\,S_1(f_j)}$$
This expression is also the basis for the commonly used matched filter.
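As a rough illustration, the known-spectrum case amounts to a noise-weighted sum of squares of the Fourier-domain residual between the data and a signal template. The sketch below evaluates this quantity; the helper name weighted_sum_of_squares and the flat example spectrum are hypothetical, not part of any library.

import numpy as np

def weighted_sum_of_squares(x, s, S_one_sided, dt):
    """Noise-weighted sum of squares of the residual (data minus signal
    template), evaluated in the Fourier domain with a known one-sided
    noise spectrum; an illustrative helper, not a library function."""
    N = len(x)
    f = np.fft.rfftfreq(N, d=dt)[1:]
    r_tilde = np.fft.rfft(x - s)[1:]              # DFT of the residual
    S = S_one_sided(f)
    return np.sum(np.abs(r_tilde) ** 2 / (N / (2.0 * dt) * S))

# The log-likelihood of the data given a signal template is then
# proportional to minus this weighted sum of squares.
rng = np.random.default_rng(2)
dt = 1.0
t = np.arange(2048) * dt
signal = 0.5 * np.sin(2 * np.pi * 0.05 * t)
data = signal + rng.standard_normal(len(t))
print(weighted_sum_of_squares(data, signal, lambda f: np.full_like(f, 2.0 * dt), dt))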
Accuracy of approximation
The Whittle likelihood is in general only an approximation; it is exact only if the spectrum is constant, i.e., in the trivial case of white noise.
The efficiency of the Whittle approximation depends on the particular circumstances.
Note that due to linearity of the Fourier transform, Gaussianity in the Fourier domain implies Gaussianity in the time domain and vice versa. What makes the Whittle likelihood only approximately accurate is related to the sampling theorem: the effect of Fourier-transforming only a finite number of data points, which also manifests itself as spectral leakage in related problems (and which may be ameliorated using the same methods, namely, windowing). In the present case, the implicit periodicity assumption implies correlation between the first and last samples ($x_1$ and $x_N$), which are effectively treated as "neighbouring" samples (like $x_1$ and $x_2$).
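As noted above, tapering the data with a window function before the DFT is one way to ameliorate such leakage effects. A minimal sketch using a Hann window follows; the normalisation shown is one common convention among several and is purely illustrative.

import numpy as np

rng = np.random.default_rng(3)
N = 1024
x = rng.standard_normal(N)

# Taper the data with a Hann window before the DFT to reduce the spectral
# leakage caused by the implicit periodicity assumption.
w = np.hanning(N)
x_tilde = np.fft.rfft(x * w) / np.sqrt(np.mean(w ** 2))  # rescale for window power loss
periodogram = np.abs(x_tilde) ** 2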
Applications
Parameter estimation
Whittle's likelihood is commonly used to estimate signal parameters for signals that are buried in non-white noise. The noise spectrum then may be assumed known, or it may be inferred along with the signal parameters.
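A minimal sketch of such a fit is shown below, assuming a known (here flat, white-noise) spectrum and a hypothetical Gaussian-pulse signal model with unknown amplitude and width; the Whittle likelihood is maximised numerically with a general-purpose optimiser.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
dt = 1.0
N = 2048
t = np.arange(N) * dt

# Simulated data: a Gaussian-shaped pulse (amplitude 0.5, width 100)
# centred at t0 = 1000, buried in white noise.
t0 = 1000.0
data = 0.5 * np.exp(-0.5 * ((t - t0) / 100.0) ** 2) + rng.standard_normal(N)

f = np.fft.rfftfreq(N, d=dt)[1:]
S = np.full_like(f, 2.0 * dt)      # assumed known one-sided noise PSD (flat)

def neg_whittle_loglik(theta):
    """Negative Whittle log-likelihood of the residual as a function of the
    signal parameters theta = (amplitude, width); since the noise spectrum
    is known, the log(S) terms are constant and omitted."""
    amp, width = theta
    residual = data - amp * np.exp(-0.5 * ((t - t0) / width) ** 2)
    power = np.abs(np.fft.rfft(residual)[1:]) ** 2
    return np.sum(power / (N / (2.0 * dt) * S))

result = minimize(neg_whittle_loglik, x0=[0.2, 60.0], method="Nelder-Mead")
print(result.x)   # estimated (amplitude, width); true values (0.5, 100)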
Signal detection
Signal detection is commonly performed with the matched filter, which is based on the Whittle likelihood for the case of a known noise power spectral density.
The matched filter effectively does a maximum-likelihood fit of the signal to the noisy data and uses the resulting likelihood ratio as the detection statistic.
The matched filter may be generalized to an analogous procedure based on a Student's t-distribution by also considering uncertainty (e.g. estimation uncertainty) in the noise spectrum. On the technical side, the EM algorithm may be utilized here, effectively leading to repeated or iterative matched filtering.
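The following sketch illustrates the basic matched-filter computation for a known template shape and known noise spectrum; the noise-weighted inner-product convention and all names are illustrative assumptions, and prefactors vary in the literature.

import numpy as np

def noise_weighted_inner(a, b, S, dt):
    """Fourier-domain noise-weighted inner product of two real time series,
    using a known one-sided noise PSD S sampled at the positive Fourier
    frequencies (illustrative convention; prefactors vary)."""
    N = len(a)
    a_t = np.fft.rfft(a)[1:]
    b_t = np.fft.rfft(b)[1:]
    return np.sum((a_t * np.conj(b_t)).real / (N / (2.0 * dt) * S))

rng = np.random.default_rng(5)
dt = 1.0
N = 4096
t = np.arange(N) * dt

template = np.exp(-0.5 * ((t - 2000.0) / 50.0) ** 2)   # known signal shape
data = 0.4 * template + rng.standard_normal(N)         # signal plus white noise

f = np.fft.rfftfreq(N, d=dt)[1:]
S = np.full_like(f, 2.0 * dt)                          # known one-sided noise PSD

# Maximum-likelihood amplitude of the template, and a detection statistic
# proportional to the resulting log-likelihood ratio (signal vs. no signal).
xh = noise_weighted_inner(data, template, S, dt)
hh = noise_weighted_inner(template, template, S, dt)
amp_hat = xh / hh
detection_statistic = xh ** 2 / hh
print(amp_hat, detection_statistic)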
Spectrum estimation
The Whittle likelihood is also applicable for estimation of the noise spectrum, either alone or in conjunction with signal parameters.
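As a minimal sketch, a parametric noise spectrum can be fitted by maximising the Whittle likelihood over its parameters; here a flat (white-noise) spectrum with unknown variance is assumed purely for illustration, and parametric coloured-noise spectra can be plugged in the same way.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
dt = 1.0
N = 4096
x = 1.5 * rng.standard_normal(N)            # noise-only data, variance 2.25

f = np.fft.rfftfreq(N, d=dt)[1:]
power = np.abs(np.fft.rfft(x)[1:]) ** 2     # periodogram ordinates |x~_j|^2

def neg_whittle_loglik(params):
    """Negative Whittle log-likelihood for a flat (white) noise spectrum
    S_1(f) = 2*dt*sigma^2, parameterised by log(sigma^2)."""
    sigma2 = np.exp(params[0])
    S = 2.0 * dt * sigma2 * np.ones_like(f)
    return np.sum(np.log(S) + power / (N / (2.0 * dt) * S))

result = minimize(neg_whittle_loglik, x0=[0.0], method="Nelder-Mead")
print(np.exp(result.x[0]))                  # estimated noise variance (~2.25)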
See also
Coloured noise
Discrete Fourier transform
Likelihood function
Matched filter
Power spectral density
Statistical signal processing
Weighted least squares