Reduced chi-squared statistic
In statistics, the reduced chi-square statistic is used extensively in goodness of fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating and variance of unit weight in the context of weighted least squares.
Its square root is called the regression standard error, standard error of the regression, or standard error of the equation (see Ordinary least squares § Reduced chi-squared).
Definition
It is defined as chi-square per degree of freedom:

\chi_{\nu}^{2} = \frac{\chi^{2}}{\nu},
where the chi-squared is a weighted sum of squared deviations:
\chi^{2} = \sum_{i} \frac{(O_{i} - C_{i})^{2}}{\sigma_{i}^{2}}
with inputs: the variances \sigma_{i}^{2}, the observations O_{i}, and the calculated (model) data C_{i}.
The degrees of freedom, \nu = n - m, equal the number of observations n minus the number of fitted parameters m.
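The definition above can be sketched directly in NumPy; the function name and the example data below are illustrative assumptions, not part of the source.

```python
import numpy as np

def reduced_chi_squared(observed, expected, sigma, n_params):
    """Chi-square per degree of freedom, nu = n - m."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    chi2 = np.sum((observed - expected) ** 2 / sigma ** 2)
    nu = observed.size - n_params  # degrees of freedom: observations minus fitted parameters
    return chi2 / nu

# Hypothetical example: data scattered around a constant model (m = 1 fitted parameter)
obs = np.array([1.1, 0.9, 1.2, 0.8])
model = np.full(4, 1.0)
sig = np.full(4, 0.2)
chi2_nu = reduced_chi_squared(obs, model, sig, n_params=1)
```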
In weighted least squares, the definition is often written in matrix notation as

\chi_{\nu}^{2} = \frac{r^{\mathrm{T}} W r}{\nu},
where r is the vector of residuals, and W is the weight matrix, the inverse of the input (diagonal) covariance matrix of observations. If W is non-diagonal, then generalized least squares applies.
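For the common diagonal-covariance case, the matrix form reduces to the weighted sum of squared residuals; a minimal sketch, with residuals and uncertainties invented for illustration:

```python
import numpy as np

# Sketch of chi^2_nu = r^T W r / nu for a diagonal weight matrix,
# where W is the inverse of the (diagonal) covariance matrix of observations.
r = np.array([0.1, -0.2, 0.15])    # residual vector (hypothetical)
sigma = np.array([0.1, 0.2, 0.1])  # per-observation standard errors (hypothetical)
W = np.diag(1.0 / sigma ** 2)      # weight matrix = inverse covariance
nu = r.size - 1                    # e.g. one fitted parameter
chi2_nu = (r @ W @ r) / nu         # identical to sum(r**2 / sigma**2) / nu here
```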
In ordinary least squares, the definition simplifies to

\chi_{\nu}^{2} = \frac{\mathrm{RSS}}{\nu}, \qquad \mathrm{RSS} = \sum_{i} r_{i}^{2},
where the numerator is the residual sum of squares (RSS).
When the fit is just an ordinary mean, \chi_{\nu}^{2} equals the sample variance, the squared sample standard deviation.
Discussion
As a general rule, when the variance of the measurement error is known a priori, \chi_{\nu}^{2} \gg 1 indicates a poor model fit. \chi_{\nu}^{2} > 1 indicates that the fit has not fully captured the data (or that the error variance has been underestimated). In principle, a value of \chi_{\nu}^{2} around 1 indicates that the extent of the match between observations and estimates is in accord with the error variance. \chi_{\nu}^{2} < 1 indicates that the model is "overfitting" the data: either the model is improperly fitting noise, or the error variance has been overestimated.
When the variance of the measurement error is only partially known, the reduced chi-squared may serve as a correction estimated a posteriori.
Applications
Geochronology
In geochronology, the MSWD is a measure of goodness of fit that takes into account the relative importance of both the internal and external reproducibility; it is most commonly used in isotopic dating.
In general:
MSWD = 1 if the age data fit a univariate normal distribution in t (for the arithmetic mean age) or log(t) (for the geometric mean age) space, or if the compositional data fit a bivariate normal distribution in [log(U/He),log(Th/He)]-space (for the central age).
MSWD < 1 if the observed scatter is less than that predicted by the analytical uncertainties. In this case, the data are said to be "underdispersed", indicating that the analytical uncertainties were overestimated.
MSWD > 1 if the observed scatter exceeds that predicted by the analytical uncertainties. In this case, the data are said to be "overdispersed". This situation is the rule rather than the exception in (U-Th)/He geochronology, indicating an incomplete understanding of the isotope system. Several causes have been proposed to explain the overdispersion of (U-Th)/He data, including an uneven distribution of U and Th and radiation damage.
Often the geochronologist will determine a series of age measurements on a single sample, with each age determination i having a measured value x_{i}, a weighting w_{i}, and an associated error \sigma_{x_{i}}. As regards weighting, one can either weight all of the measured ages equally, or weight them by the proportion of the sample that they represent. For example, if two thirds of the sample was used for the first measurement and one third for the second and final measurement, then one might weight the first measurement twice as heavily as the second.
The arithmetic mean of the age determinations is

\overline{x} = \frac{\sum_{i=1}^{N} x_{i}}{N},
but this value can be misleading, unless each determination of the age is of equal significance.
When each measured value can be assumed to have the same weighting, or significance, the biased and unbiased (or "sample" and "population" respectively) estimators of the variance are computed as follows:
\sigma^{2} = \frac{\sum_{i=1}^{N} (x_{i} - \overline{x})^{2}}{N} \quad \text{and} \quad s^{2} = \frac{N}{N-1} \cdot \sigma^{2} = \frac{1}{N-1} \cdot \sum_{i=1}^{N} (x_{i} - \overline{x})^{2}.
The standard deviation is the square root of the variance.
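The two estimators correspond to NumPy's `ddof` convention; a short sketch with hypothetical age data:

```python
import numpy as np

ages = np.array([10.0, 12.0, 11.0, 13.0])  # hypothetical age determinations

biased = np.var(ages)             # divides by N   (biased estimator above)
unbiased = np.var(ages, ddof=1)   # divides by N-1 (unbiased estimator above)
std_unbiased = np.std(ages, ddof=1)  # corresponding standard deviation
```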
When individual determinations of an age are not of equal significance, it is better to use a weighted mean to obtain an "average" age, as follows:
\overline{x}^{*} = \frac{\sum_{i=1}^{N} w_{i} x_{i}}{\sum_{i=1}^{N} w_{i}}.
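This weighted mean is what `numpy.average` computes; the ages and the 2:1 weighting below are hypothetical, echoing the two-thirds/one-third example in the text:

```python
import numpy as np

x = np.array([100.0, 106.0])  # two hypothetical age determinations
w = np.array([2.0, 1.0])      # first measurement weighted twice as heavily

weighted_mean = np.average(x, weights=w)  # sum(w * x) / sum(w)
```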
The biased weighted estimator of variance can be shown to be
\sigma^{2} = \frac{\sum_{i=1}^{N} w_{i} (x_{i} - \overline{x}^{*})^{2}}{\sum_{i=1}^{N} w_{i}},
which can be computed as
\sigma^{2} = \frac{\sum_{i=1}^{N} w_{i} x_{i}^{2} \cdot \sum_{i=1}^{N} w_{i} - \big(\sum_{i=1}^{N} w_{i} x_{i}\big)^{2}}{\big(\sum_{i=1}^{N} w_{i}\big)^{2}}.
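The direct and expanded forms of the biased weighted variance can be checked against each other numerically; the data and weights here are made up for illustration:

```python
import numpy as np

x = np.array([10.0, 12.0, 11.0, 13.0])  # hypothetical values
w = np.array([1.0, 2.0, 2.0, 1.0])      # hypothetical weights

xbar = np.sum(w * x) / np.sum(w)

# Direct form: weighted mean of squared deviations from the weighted mean
var_direct = np.sum(w * (x - xbar) ** 2) / np.sum(w)

# Expanded form from the text, computable without first forming the mean
var_expanded = (np.sum(w * x ** 2) * np.sum(w) - np.sum(w * x) ** 2) / np.sum(w) ** 2
```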
The unbiased weighted estimator of the sample variance can be computed as follows:
s^{2} = \frac{\sum_{i=1}^{N} w_{i}}{\big(\sum_{i=1}^{N} w_{i}\big)^{2} - \sum_{i=1}^{N} w_{i}^{2}} \cdot \sum_{i=1}^{N} w_{i} (x_{i} - \overline{x}^{*})^{2}.
Again, the corresponding standard deviation is the square root of the variance.
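A direct transcription of the unbiased weighted estimator, again with hypothetical data; with equal weights it reduces to the ordinary N-1 formula:

```python
import numpy as np

x = np.array([10.0, 12.0, 11.0, 13.0])  # hypothetical values
w = np.array([1.0, 2.0, 2.0, 1.0])      # hypothetical weights

V1 = np.sum(w)       # sum of weights
V2 = np.sum(w ** 2)  # sum of squared weights
xbar = np.sum(w * x) / V1

# Unbiased weighted estimator of the variance, per the formula above
s2 = V1 / (V1 ** 2 - V2) * np.sum(w * (x - xbar) ** 2)
s = np.sqrt(s2)      # corresponding standard deviation
```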
The unbiased weighted estimator of the sample variance can also be computed on the fly as follows:
s^{2} = \frac{\sum_{i=1}^{N} w_{i} x_{i}^{2} \cdot \sum_{i=1}^{N} w_{i} - \big(\sum_{i=1}^{N} w_{i} x_{i}\big)^{2}}{\big(\sum_{i=1}^{N} w_{i}\big)^{2} - \sum_{i=1}^{N} w_{i}^{2}}.
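Because this form needs only running sums, it can be evaluated in a single pass over a stream of (weight, value) pairs; the helper below is a hypothetical sketch of that idea:

```python
def streaming_weighted_variance(pairs):
    """One-pass unbiased weighted variance from (weight, value) pairs.

    Accumulates sum(w), sum(w*x), sum(w*x^2), and sum(w^2), so no second
    pass over the data is needed (hypothetical helper, per the on-the-fly
    formula in the text).
    """
    sw = swx = swx2 = sw2 = 0.0
    for w, x in pairs:
        sw += w
        swx += w * x
        swx2 += w * x * x
        sw2 += w * w
    return (swx2 * sw - swx ** 2) / (sw ** 2 - sw2)
```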
The unweighted mean square of the weighted deviations (unweighted MSWD) can then be computed, as follows:
\text{MSWD}_{u} = \frac{1}{N-1} \cdot \sum_{i=1}^{N} \frac{(x_{i} - \overline{x})^{2}}{\sigma_{x_{i}}^{2}}.
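The unweighted MSWD can be transcribed directly; the function name and test values are illustrative assumptions:

```python
import numpy as np

def mswd_unweighted(x, sigma):
    """Unweighted MSWD: squared deviations from the arithmetic mean,
    each scaled by its analytical uncertainty, averaged over N-1."""
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    xbar = x.mean()
    return np.sum((x - xbar) ** 2 / sigma ** 2) / (x.size - 1)
```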
By analogy, the weighted mean square of the weighted deviations (weighted MSWD) can be computed as follows:
\text{MSWD}_{w} = \frac{\sum_{i=1}^{N} w_{i}}{\big(\sum_{i=1}^{N} w_{i}\big)^{2} - \sum_{i=1}^{N} w_{i}^{2}} \cdot \sum_{i=1}^{N} \frac{w_{i} (x_{i} - \overline{x}^{*})^{2}}{\sigma_{x_{i}}^{2}}.
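And the weighted MSWD, a direct transcription of the formula above with hypothetical inputs; with all uncertainties equal to 1 it coincides with the unbiased weighted variance:

```python
import numpy as np

def mswd_weighted(x, w, sigma):
    """Weighted MSWD per the formula above (direct transcription)."""
    x, w, sigma = (np.asarray(a, dtype=float) for a in (x, w, sigma))
    V1, V2 = np.sum(w), np.sum(w ** 2)
    xbar = np.sum(w * x) / V1  # weighted mean
    return V1 / (V1 ** 2 - V2) * np.sum(w * (x - xbar) ** 2 / sigma ** 2)
```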
Rasch analysis
In data analysis based on the Rasch model, the reduced chi-squared statistic is called the outfit mean-square statistic, and the information-weighted reduced chi-squared statistic is called the infit mean-square statistic.