- Source: Generalized least squares
In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model. It is used when there is a non-zero amount of correlation between the residuals in the regression model. GLS is employed to improve statistical efficiency and reduce the risk of drawing erroneous inferences, as compared to conventional least squares and weighted least squares methods. It was first described by Alexander Aitken in 1935.
It requires knowledge of the covariance matrix for the residuals. If this is unknown, estimating the covariance matrix gives the method of feasible generalized least squares (FGLS). However, FGLS provides fewer guarantees of improvement.
Method
In standard linear regression models, one observes data $\{y_i, x_{ij}\}_{i=1,\dots,n,\ j=2,\dots,k}$ on $n$ statistical units with $k-1$ predictor values and one response value each.
The response values are placed in a vector,
$$\mathbf{y} \equiv \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix},$$
and the predictor values are placed in the design matrix,
$$\mathbf{X} \equiv \begin{pmatrix} 1 & x_{12} & x_{13} & \cdots & x_{1k} \\ 1 & x_{22} & x_{23} & \cdots & x_{2k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n2} & x_{n3} & \cdots & x_{nk} \end{pmatrix},$$
where each row is a vector of the $k$ predictor variables (including a constant) for the $i$th data point.
The model assumes that the conditional mean of $\mathbf{y}$ given $\mathbf{X}$ is a linear function of $\mathbf{X}$ and that the conditional variance of the error term given $\mathbf{X}$ is a known non-singular covariance matrix, $\boldsymbol{\Omega}$. That is,
$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \quad \operatorname{E}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = 0, \quad \operatorname{Cov}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = \boldsymbol{\Omega},$$
where $\boldsymbol{\beta} \in \mathbb{R}^{k}$ is a vector of unknown constants, called "regression coefficients", which are estimated from the data.
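As a concrete illustration of this setup, the minimal sketch below (not from the source; the sample size, the AR(1) correlation parameter rho, and the coefficient values are arbitrary choices made only for illustration) simulates data from such a model with correlated errors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3          # n observations, k coefficients (including the intercept)
rho = 0.7              # illustrative AR(1) correlation between neighbouring errors

# Design matrix X: a column of ones plus k-1 predictors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])   # "true" coefficients, chosen arbitrarily

# A known non-singular covariance matrix Omega, here with AR(1) structure.
Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Draw correlated errors eps ~ N(0, Omega) and form y = X beta + eps.
eps = rng.multivariate_normal(np.zeros(n), Omega)
y = X @ beta + eps
```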
If $\mathbf{b}$ is a candidate estimate for $\boldsymbol{\beta}$, then the residual vector for $\mathbf{b}$ is $\mathbf{y} - \mathbf{X}\mathbf{b}$. The generalized least squares method estimates $\boldsymbol{\beta}$ by minimizing the squared Mahalanobis length of this residual vector:
$$\begin{aligned} \hat{\boldsymbol{\beta}} &= \underset{\mathbf{b}}{\operatorname{argmin}}\,(\mathbf{y} - \mathbf{X}\mathbf{b})^{\mathrm{T}} \boldsymbol{\Omega}^{-1} (\mathbf{y} - \mathbf{X}\mathbf{b}) \\ &= \underset{\mathbf{b}}{\operatorname{argmin}}\,\mathbf{y}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y} + (\mathbf{X}\mathbf{b})^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X}\mathbf{b} - \mathbf{y}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X}\mathbf{b} - (\mathbf{X}\mathbf{b})^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y}, \end{aligned}$$
which is equivalent to
$$\hat{\boldsymbol{\beta}} = \underset{\mathbf{b}}{\operatorname{argmin}}\,\mathbf{y}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y} + \mathbf{b}^{\mathrm{T}} \mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X}\mathbf{b} - 2\mathbf{b}^{\mathrm{T}} \mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y},$$
which is a quadratic programming problem. The stationary point of the objective function occurs when
$$2\mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X}\mathbf{b} - 2\mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y} = 0,$$
so the estimator is
$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{y}.$$
The quantity $\boldsymbol{\Omega}^{-1}$ is known as the precision matrix (or dispersion matrix), a generalization of the diagonal weight matrix.
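The closed-form estimator translates directly into a few lines of linear algebra. The following is a minimal sketch, continuing the simulated data above; it solves linear systems rather than forming $\boldsymbol{\Omega}^{-1}$ explicitly, a standard numerical choice:

```python
import numpy as np

def gls_estimate(X, y, Omega):
    """Generalized least squares: (X' Omega^-1 X)^-1 X' Omega^-1 y.

    Solves linear systems instead of inverting Omega explicitly,
    which is cheaper and numerically more stable.
    """
    Oinv_X = np.linalg.solve(Omega, X)   # Omega^-1 X
    Oinv_y = np.linalg.solve(Omega, y)   # Omega^-1 y
    return np.linalg.solve(X.T @ Oinv_X, X.T @ Oinv_y)

beta_gls = gls_estimate(X, y, Omega)
```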
Properties
The GLS estimator is unbiased, consistent, efficient, and asymptotically normal with
$$\operatorname{E}[\hat{\boldsymbol{\beta}} \mid \mathbf{X}] = \boldsymbol{\beta}, \quad \text{and} \quad \operatorname{Cov}[\hat{\boldsymbol{\beta}} \mid \mathbf{X}] = (\mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X})^{-1}.$$
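For instance, unbiasedness follows in one line from the model assumptions (a standard argument, spelled out here for completeness): substituting $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$ into the estimator gives
$$\operatorname{E}[\hat{\boldsymbol{\beta}} \mid \mathbf{X}] = \boldsymbol{\beta} + (\mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \mathbf{X})^{-1} \mathbf{X}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \operatorname{E}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = \boldsymbol{\beta},$$
since $\operatorname{E}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = 0$; the covariance expression follows similarly from $\operatorname{Cov}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = \boldsymbol{\Omega}$.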
GLS is equivalent to applying ordinary least squares (OLS) to a linearly transformed version of the data. This can be seen by factoring $\boldsymbol{\Omega} = \mathbf{C}\mathbf{C}^{\mathrm{T}}$ using a method such as the Cholesky decomposition. Left-multiplying both sides of $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$ by $\mathbf{C}^{-1}$ yields an equivalent linear model:
$$\mathbf{y}^{*} = \mathbf{X}^{*}\boldsymbol{\beta} + \boldsymbol{\varepsilon}^{*}, \quad \text{where} \quad \mathbf{y}^{*} = \mathbf{C}^{-1}\mathbf{y}, \quad \mathbf{X}^{*} = \mathbf{C}^{-1}\mathbf{X}, \quad \boldsymbol{\varepsilon}^{*} = \mathbf{C}^{-1}\boldsymbol{\varepsilon}.$$
In this model, $\operatorname{Var}[\boldsymbol{\varepsilon}^{*} \mid \mathbf{X}] = \mathbf{C}^{-1} \boldsymbol{\Omega} \left(\mathbf{C}^{-1}\right)^{\mathrm{T}} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix. Then, $\boldsymbol{\beta}$ can be efficiently estimated by applying OLS to the transformed data, which requires minimizing the objective
$$\left(\mathbf{y}^{*} - \mathbf{X}^{*}\mathbf{b}\right)^{\mathrm{T}} (\mathbf{y}^{*} - \mathbf{X}^{*}\mathbf{b}) = (\mathbf{y} - \mathbf{X}\mathbf{b})^{\mathrm{T}} \boldsymbol{\Omega}^{-1} (\mathbf{y} - \mathbf{X}\mathbf{b}).$$
This transformation effectively standardizes the scale of the errors and de-correlates them. Since the transformed errors are homoscedastic and uncorrelated, the Gauss–Markov theorem applies to OLS on the transformed data, so the GLS estimate is the best linear unbiased estimator for $\boldsymbol{\beta}$.
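As a numerical check of this equivalence, the sketch below whitens the simulated data from earlier with a Cholesky factor and verifies that OLS on the transformed data matches the gls_estimate helper defined above:

```python
import numpy as np

# Factor Omega = C C^T with a Cholesky decomposition and whiten the data.
C = np.linalg.cholesky(Omega)          # lower-triangular C with C C^T = Omega
y_star = np.linalg.solve(C, y)         # y* = C^-1 y
X_star = np.linalg.solve(C, X)         # X* = C^-1 X

# OLS on the whitened data reproduces the GLS estimate.
beta_ols_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
assert np.allclose(beta_ols_star, gls_estimate(X, y, Omega))
```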
Weighted least squares
A special case of GLS, called weighted least squares (WLS), occurs when all the off-diagonal entries of Ω are 0. This situation arises when the variances of the observed values are unequal (that is, heteroscedasticity is present) but the errors are uncorrelated. The weight for unit i is proportional to the reciprocal of the variance of the response for unit i.
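A minimal sketch of this special case, assuming a vector sigma2 of known or estimated per-observation error variances:

```python
import numpy as np

def wls_estimate(X, y, sigma2):
    """Weighted least squares: GLS with diagonal Omega = diag(sigma2).

    Each observation is scaled by 1/sigma_i, i.e. its weight is
    proportional to the reciprocal of its error variance.
    """
    scale = 1.0 / np.sqrt(sigma2)
    coef, *_ = np.linalg.lstsq(X * scale[:, None], y * scale, rcond=None)
    return coef
```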
Derivation by maximum likelihood estimation
Ordinary least squares can be interpreted as maximum likelihood estimation with the prior that the errors are independent and normally distributed with zero mean and common variance. In GLS, the prior is generalized to the case where errors may not be independent and may have differing variances. For given fit parameters $\mathbf{b}$, the conditional probability density function of the errors is assumed to be:
$$p(\boldsymbol{\varepsilon} \mid \mathbf{b}) = \frac{1}{\sqrt{(2\pi)^{n} \det \boldsymbol{\Omega}}} \exp\left(-\frac{1}{2}\boldsymbol{\varepsilon}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \boldsymbol{\varepsilon}\right).$$
By Bayes' theorem,
$$p(\mathbf{b} \mid \boldsymbol{\varepsilon}) = \frac{p(\boldsymbol{\varepsilon} \mid \mathbf{b})\, p(\mathbf{b})}{p(\boldsymbol{\varepsilon})}.$$
In GLS, a uniform (improper) prior is taken for $p(\mathbf{b})$, and as $p(\boldsymbol{\varepsilon})$ is a marginal distribution, it does not depend on $\mathbf{b}$. Therefore, the log-probability is
$$\log p(\mathbf{b} \mid \boldsymbol{\varepsilon}) = \log p(\boldsymbol{\varepsilon} \mid \mathbf{b}) + \cdots = -\frac{1}{2}\boldsymbol{\varepsilon}^{\mathrm{T}} \boldsymbol{\Omega}^{-1} \boldsymbol{\varepsilon} + \cdots,$$
where the hidden terms are those that do not depend on $\mathbf{b}$, and $\log p(\boldsymbol{\varepsilon} \mid \mathbf{b})$ is the log-likelihood. The maximum a posteriori (MAP) estimate is then the maximum likelihood estimate (MLE), which is equivalent to the optimization problem from above,
$$\hat{\boldsymbol{\beta}} = \underset{\mathbf{b}}{\operatorname{argmax}}\; p(\mathbf{b} \mid \boldsymbol{\varepsilon}) = \underset{\mathbf{b}}{\operatorname{argmax}}\; \log p(\mathbf{b} \mid \boldsymbol{\varepsilon}) = \underset{\mathbf{b}}{\operatorname{argmax}}\; \log p(\boldsymbol{\varepsilon} \mid \mathbf{b}),$$
where the optimization problem has been re-written using the fact that the logarithm is a strictly increasing function and the property that the argument maximizing the objective is unaffected by terms in the objective that do not depend on that argument.
Substituting $\mathbf{y} - \mathbf{X}\mathbf{b}$ for $\boldsymbol{\varepsilon}$,
$$\hat{\boldsymbol{\beta}} = \underset{\mathbf{b}}{\operatorname{argmin}}\; \frac{1}{2}(\mathbf{y} - \mathbf{X}\mathbf{b})^{\mathrm{T}} \boldsymbol{\Omega}^{-1} (\mathbf{y} - \mathbf{X}\mathbf{b}).$$
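As an illustrative cross-check (reusing the simulated X, y, Omega and the gls_estimate helper from earlier; scipy.optimize.minimize is used here only for verification), minimizing this objective numerically recovers the closed-form GLS solution:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(b, X, y, Omega):
    # Up to terms not involving b: 0.5 * (y - Xb)' Omega^-1 (y - Xb)
    r = y - X @ b
    return 0.5 * r @ np.linalg.solve(Omega, r)

res = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1]), args=(X, y, Omega))
assert np.allclose(res.x, gls_estimate(X, y, Omega), atol=1e-3)
```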
Feasible generalized least squares
If the covariance of the errors $\Omega$ is unknown, one can get a consistent estimate of $\Omega$, say $\widehat{\Omega}$, using an implementable version of GLS known as the feasible generalized least squares (FGLS) estimator.
In FGLS, modeling proceeds in two stages:
The model is estimated by OLS or another consistent (but inefficient) estimator, and the residuals are used to build a consistent estimator of the errors' covariance matrix (to do so, one often needs to impose additional structure on the model; for example, if the errors follow a time series process, a statistician generally needs some theoretical assumptions on this process to ensure that a consistent estimator is available).
Then, using the consistent estimator of the covariance matrix of the errors, one can implement GLS ideas.
Whereas GLS is more efficient than OLS under heteroscedasticity (also spelled heteroskedasticity) or autocorrelation, this is not true for FGLS. The feasible estimator is asymptotically more efficient (provided the errors' covariance matrix is consistently estimated), but for a small to medium-sized sample it can actually be less efficient than OLS. This is why some authors prefer to use OLS and base their inferences on an alternative estimator of the variance of the estimator that is robust to heteroscedasticity or serial autocorrelation. However, for large samples, FGLS is preferred over OLS under heteroskedasticity or serial correlation. A cautionary note is that the FGLS estimator is not always consistent; one case in which it may be inconsistent is when there are individual-specific fixed effects.
In general, this estimator has different properties than GLS. For large samples (i.e., asymptotically), it shares (under appropriate conditions) the properties of GLS, but for finite samples the properties of FGLS estimators are unknown: they vary dramatically with each particular model, and as a general rule their exact distributions cannot be derived analytically. For finite samples, FGLS may be less efficient than OLS in some cases. Thus, while GLS can be made feasible, it is not always wise to apply this method when the sample is small. A method used to improve the accuracy of the estimators in finite samples is to iterate; that is, to take the residuals from FGLS to update the errors' covariance estimator and then update the FGLS estimate, applying the same idea iteratively until the estimates change by less than some tolerance. However, this method does not necessarily improve the efficiency of the estimator very much if the original sample was small.
A reasonable option when samples are not too large is to apply OLS but discard the classical variance estimator $\sigma^{2} (X^{\mathrm{T}} X)^{-1}$ (which is inconsistent in this framework) and instead use a HAC (Heteroskedasticity and Autocorrelation Consistent) estimator. In the context of autocorrelation, the Newey–West estimator can be used, and in heteroscedastic contexts, the Eicker–White estimator can be used instead. This approach is much safer, and it is the appropriate path to take unless the sample is large, where "large" is sometimes a slippery issue (e.g., if the error distribution is asymmetric the required sample will be much larger).
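As a usage sketch (assuming the statsmodels library and the X, y arrays from the earlier simulation; the lag length 4 is an arbitrary illustrative choice), this amounts to keeping the OLS point estimates while swapping in a robust covariance estimator:

```python
import statsmodels.api as sm

# Plain OLS point estimates are kept; only the covariance estimator changes.
hc_fit = sm.OLS(y, X).fit(cov_type="HC3")                            # Eicker-White type
hac_fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West

print(hc_fit.bse)    # heteroskedasticity-robust standard errors
print(hac_fit.bse)   # heteroskedasticity- and autocorrelation-robust standard errors
```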
The ordinary least squares (OLS) estimator is calculated by:
$$\widehat{\beta}_{\text{OLS}} = (X^{\mathrm{T}} X)^{-1} X^{\mathrm{T}} y$$
and estimates of the residuals $\widehat{u}_{j} = (y - X\widehat{\beta}_{\text{OLS}})_{j}$ are constructed.
For simplicity, consider the model for heteroscedastic and non-autocorrelated errors. Assume that the variance-covariance matrix $\Omega$ of the error vector is diagonal, or equivalently that errors from distinct observations are uncorrelated. Then each diagonal entry may be estimated from the fitted residuals $\widehat{u}_{j}$, so $\widehat{\Omega}_{\text{OLS}}$ may be constructed by:
$$\widehat{\Omega}_{\text{OLS}} = \operatorname{diag}(\widehat{\sigma}_{1}^{2}, \widehat{\sigma}_{2}^{2}, \dots, \widehat{\sigma}_{n}^{2}).$$
Note that the squared residuals themselves cannot be used in the previous expression; an estimator of the errors' variances is needed. To this end, a parametric heteroskedasticity model or a nonparametric estimator can be used; one illustrative parametric choice is sketched below.
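A minimal sketch of such a parametric model, continuing the simulated X, y from earlier (the specific log-variance regression and the helper name fit_variance_model are assumptions for illustration, not prescribed by the source):

```python
import numpy as np

def fit_variance_model(X, resid):
    """Illustrative parametric heteroskedasticity model:
    regress log(u_j^2) on the predictors, then sigma_j^2 = exp(x_j' gamma)."""
    gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
    return np.exp(X @ gamma)                      # fitted variances, one per observation

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # first-stage OLS
sigma2_hat = fit_variance_model(X, y - X @ beta_ols)
Omega_hat_ols = np.diag(sigma2_hat)               # diagonal estimate of Omega
```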
Estimate $\beta_{FGLS1}$ using $\widehat{\Omega}_{\text{OLS}}$ via weighted least squares:
$$\widehat{\beta}_{FGLS1} = (X^{\mathrm{T}} \widehat{\Omega}_{\text{OLS}}^{-1} X)^{-1} X^{\mathrm{T}} \widehat{\Omega}_{\text{OLS}}^{-1} y$$
The procedure can be iterated. The first iteration is given by:
$$\widehat{u}_{FGLS1} = y - X\widehat{\beta}_{FGLS1}$$
$$\widehat{\Omega}_{FGLS1} = \operatorname{diag}(\widehat{\sigma}_{FGLS1,1}^{2}, \widehat{\sigma}_{FGLS1,2}^{2}, \dots, \widehat{\sigma}_{FGLS1,n}^{2})$$
$$\widehat{\beta}_{FGLS2} = (X^{\mathrm{T}} \widehat{\Omega}_{FGLS1}^{-1} X)^{-1} X^{\mathrm{T}} \widehat{\Omega}_{FGLS1}^{-1} y$$
This estimation of $\widehat{\Omega}$ can be iterated to convergence.
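Putting the pieces together, a minimal sketch of the iterated procedure, reusing the illustrative wls_estimate and fit_variance_model helpers defined above (the tolerance and iteration cap are arbitrary):

```python
import numpy as np

def iterated_fgls(X, y, max_iter=20, tol=1e-8):
    """Iterate residuals -> variance model -> weighted LS until the estimate stabilizes."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from OLS
    for _ in range(max_iter):
        sigma2 = fit_variance_model(X, y - X @ beta)   # update the diagonal of Omega-hat
        beta_new = wls_estimate(X, y, sigma2)          # FGLS step
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

beta_fgls = iterated_fgls(X, y)
```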
Under regularity conditions, the FGLS estimator (or any of its iterates, if a finite number of iterations is performed) is asymptotically distributed as
$$\sqrt{n}\,(\hat{\beta}_{FGLS} - \beta)\ \xrightarrow{d}\ \mathcal{N}(0, V),$$
where $n$ is the sample size and
$$V = \left(\operatorname{p-lim}\left(X^{\mathrm{T}} \Omega^{-1} X / n\right)\right)^{-1},$$
with $\operatorname{p-lim}$ denoting the limit in probability.
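In practice this asymptotic result is used through its plug-in, finite-sample analogue (a standard approximation, not stated in the source): standard errors are read off the estimated covariance
$$\widehat{\operatorname{Cov}}(\hat{\beta}_{FGLS}) = (X^{\mathrm{T}} \widehat{\Omega}^{-1} X)^{-1}, \qquad \operatorname{se}(\hat{\beta}_{FGLS,j}) = \sqrt{\big[(X^{\mathrm{T}} \widehat{\Omega}^{-1} X)^{-1}\big]_{jj}}.$$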
See also
Confidence region
Effective degrees of freedom
Prais–Winsten estimation
Whitening transformation
Further reading
Amemiya, Takeshi (1985). "Generalized Least Squares Theory". Advanced Econometrics. Harvard University Press. ISBN 0-674-00560-0.
Johnston, John (1972). "Generalized Least-squares". Econometric Methods (Second ed.). New York: McGraw-Hill. pp. 208–242.
Kmenta, Jan (1986). "Generalized Linear Regression Model and Its Applications". Elements of Econometrics (Second ed.). New York: Macmillan. pp. 607–650. ISBN 0-472-10886-7.
Beck, Nathaniel; Katz, Jonathan N. (September 1995). "What To Do (and Not to Do) with Time-Series Cross-Section Data". American Political Science Review. 89 (3): 634–647. doi:10.2307/2082979. ISSN 1537-5943. JSTOR 2082979. S2CID 63222945.