In statistics, principal component regression (PCR) is a regression analysis technique that is based on principal component analysis (PCA). PCR is a form of reduced rank regression. More specifically, PCR is used for estimating the unknown regression coefficients in a standard linear regression model.
In PCR, instead of regressing the dependent variable on the explanatory variables directly, the principal components of the explanatory variables are used as regressors. One typically uses only a subset of all the principal components for regression, making PCR a kind of regularized procedure and also a type of shrinkage estimator.
Often the principal components with higher variances (the ones based on eigenvectors corresponding to the higher eigenvalues of the sample variance-covariance matrix of the explanatory variables) are selected as regressors. However, for the purpose of predicting the outcome, the principal components with low variances may also be important, in some cases even more important.
One major use of PCR lies in overcoming the multicollinearity problem which arises when two or more of the explanatory variables are close to being collinear. PCR can aptly deal with such situations by excluding some of the low-variance principal components in the regression step. In addition, since it usually regresses on only a subset of all the principal components, PCR can achieve dimension reduction by substantially lowering the effective number of parameters characterizing the underlying model. This can be particularly useful in settings with high-dimensional covariates. Also, through appropriate selection of the principal components to be used for regression, PCR can lead to efficient prediction of the outcome based on the assumed model.
The principle
The PCR method may be broadly divided into three major steps:
1. Perform PCA on the observed data matrix for the explanatory variables to obtain the principal components, and then (usually) select a subset, based on some appropriate criteria, of the principal components so obtained for further use.
2. Now regress the observed vector of outcomes on the selected principal components as covariates, using ordinary least squares regression (linear regression) to get a vector of estimated regression coefficients (with dimension equal to the number of selected principal components).
3. Now transform this vector back to the scale of the actual covariates, using the selected PCA loadings (the eigenvectors corresponding to the selected principal components) to get the final PCR estimator (with dimension equal to the total number of covariates) for estimating the regression coefficients characterizing the original model.
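As a concrete illustration, the three steps can be strung together with standard numerical libraries. The following is a minimal sketch (not part of the original article) assuming NumPy and scikit-learn are available; the data, the variable names X, y, beta_k, and the choice k = 3 are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                     # illustrative covariate matrix (n x p)
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=100)

k = 3                                              # number of principal components retained
pcr = make_pipeline(PCA(n_components=k), LinearRegression())
pcr.fit(X, y)                                      # steps 1 and 2: PCA, then OLS on the scores

# Step 3: express the fitted coefficients on the scale of the original covariates.
pca = pcr.named_steps["pca"]
ols = pcr.named_steps["linearregression"]
beta_k = pca.components_.T @ ols.coef_             # p-dimensional PCR coefficient vector
```

Here PCA handles the centering of the covariates and the linear regression fits an intercept, so beta_k corresponds to the centered model described in the next section.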
Details of the method
Data representation: Let $\mathbf{Y}_{n\times 1}=\left(y_1,\ldots,y_n\right)^T$ denote the vector of observed outcomes and $\mathbf{X}_{n\times p}=\left(\mathbf{x}_1,\ldots,\mathbf{x}_n\right)^T$ denote the corresponding data matrix of observed covariates, where $n$ and $p$ denote the size of the observed sample and the number of covariates respectively, with $n\geq p$. Each of the $n$ rows of $\mathbf{X}$ denotes one set of observations for the $p$-dimensional covariate and the respective entry of $\mathbf{Y}$ denotes the corresponding observed outcome.
Data pre-processing: Assume that $\mathbf{Y}$ and each of the $p$ columns of $\mathbf{X}$ have already been centered so that all of them have zero empirical means. This centering step is crucial (at least for the columns of $\mathbf{X}$) since PCR involves the use of PCA on $\mathbf{X}$ and PCA is sensitive to centering of the data.
Underlying model: Following centering, the standard Gauss–Markov linear regression model for $\mathbf{Y}$ on $\mathbf{X}$ can be represented as:
$$\mathbf{Y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon},$$
where $\boldsymbol{\beta}\in\mathbb{R}^p$ denotes the unknown parameter vector of regression coefficients and $\boldsymbol{\varepsilon}$ denotes the vector of random errors with $\operatorname{E}(\boldsymbol{\varepsilon})=\mathbf{0}$ and $\operatorname{Var}(\boldsymbol{\varepsilon})=\sigma^2 I_{n\times n}$ for some unknown variance parameter $\sigma^2>0$.
Objective: The primary goal is to obtain an efficient estimator $\widehat{\boldsymbol{\beta}}$ for the parameter $\boldsymbol{\beta}$, based on the data. One frequently used approach for this is ordinary least squares regression which, assuming $\mathbf{X}$ is full column rank, gives the unbiased estimator
$$\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$$
of $\boldsymbol{\beta}$. PCR is another technique that may be used for the same purpose of estimating $\boldsymbol{\beta}$.
PCA step: PCR starts by performing a PCA on the centered data matrix $\mathbf{X}$. For this, let $\mathbf{X}=U\Delta V^T$ denote the singular value decomposition of $\mathbf{X}$, where $\Delta_{p\times p}=\operatorname{diag}\left[\delta_1,\ldots,\delta_p\right]$ with $\delta_1\geq\cdots\geq\delta_p\geq 0$ denoting the non-negative singular values of $\mathbf{X}$, while the columns of $U_{n\times p}=[\mathbf{u}_1,\ldots,\mathbf{u}_p]$ and $V_{p\times p}=[\mathbf{v}_1,\ldots,\mathbf{v}_p]$ are both orthonormal sets of vectors denoting the left and right singular vectors of $\mathbf{X}$ respectively.
The principal components: $V\Lambda V^T$ gives a spectral decomposition of $\mathbf{X}^T\mathbf{X}$, where $\Lambda_{p\times p}=\operatorname{diag}\left[\lambda_1,\ldots,\lambda_p\right]=\operatorname{diag}\left[\delta_1^2,\ldots,\delta_p^2\right]=\Delta^2$ with $\lambda_1\geq\cdots\geq\lambda_p\geq 0$ denoting the non-negative eigenvalues (also known as the principal values) of $\mathbf{X}^T\mathbf{X}$, while the columns of $V$ denote the corresponding orthonormal set of eigenvectors. Then $\mathbf{X}\mathbf{v}_j$ and $\mathbf{v}_j$ respectively denote the $j^{\text{th}}$ principal component and the $j^{\text{th}}$ principal component direction (or PCA loading) corresponding to the $j^{\text{th}}$ largest principal value $\lambda_j$, for each $j\in\{1,\ldots,p\}$.
Derived covariates: For any $k\in\{1,\ldots,p\}$, let $V_k$ denote the $p\times k$ matrix with orthonormal columns consisting of the first $k$ columns of $V$. Let $W_k=\mathbf{X}V_k=[\mathbf{X}\mathbf{v}_1,\ldots,\mathbf{X}\mathbf{v}_k]$ denote the $n\times k$ matrix having the first $k$ principal components as its columns. $W_k$ may be viewed as the data matrix obtained by using the transformed covariates $\mathbf{x}_i^k=V_k^T\mathbf{x}_i\in\mathbb{R}^k$ instead of the original covariates $\mathbf{x}_i\in\mathbb{R}^p$, for all $1\leq i\leq n$.
The PCR estimator: Let $\widehat{\gamma}_k=(W_k^TW_k)^{-1}W_k^T\mathbf{Y}\in\mathbb{R}^k$ denote the vector of estimated regression coefficients obtained by ordinary least squares regression of the response vector $\mathbf{Y}$ on the data matrix $W_k$. Then, for any $k\in\{1,\ldots,p\}$, the final PCR estimator of $\boldsymbol{\beta}$ based on using the first $k$ principal components is given by:
$$\widehat{\boldsymbol{\beta}}_k=V_k\widehat{\gamma}_k\in\mathbb{R}^p.$$
Fundamental characteristics and applications of the PCR estimator
Two basic properties
The fitting process for obtaining the PCR estimator involves regressing the response vector on the derived data matrix $W_k$, which has orthogonal columns for any $k\in\{1,\ldots,p\}$ since the principal components are mutually orthogonal to each other. Thus, in the regression step, performing a multiple linear regression jointly on the $k$ selected principal components as covariates is equivalent to carrying out $k$ independent simple linear regressions (or univariate regressions) separately on each of the $k$ selected principal components as a covariate.
When all the principal components are selected for regression, so that $k=p$, the PCR estimator is equivalent to the ordinary least squares estimator. Thus, $\widehat{\boldsymbol{\beta}}_p=\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}$. This is easily seen from the fact that $W_p=\mathbf{X}V_p=\mathbf{X}V$ and by observing that $V$ is an orthogonal matrix.
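Both properties are easy to verify numerically. The following check is a sketch on simulated data, not part of the original article; all names and the choice k = 3 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5)); X -= X.mean(axis=0)
y = rng.normal(size=50); y -= y.mean()

U, s, Vt = np.linalg.svd(X, full_matrices=False)
W = X @ Vt.T                                  # all p principal components

# Joint OLS on the first k components equals k separate univariate regressions,
# because the columns of W are mutually orthogonal.
k = 3
gamma_joint, *_ = np.linalg.lstsq(W[:, :k], y, rcond=None)
gamma_sep = np.array([(W[:, j] @ y) / (W[:, j] @ W[:, j]) for j in range(k)])
assert np.allclose(gamma_joint, gamma_sep)

# With k = p, the PCR estimator coincides with ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_p = Vt.T @ np.linalg.lstsq(W, y, rcond=None)[0]
assert np.allclose(beta_ols, beta_p)
```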
Variance reduction
For any $k\in\{1,\ldots,p\}$, the variance of $\widehat{\boldsymbol{\beta}}_k$ is given by
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_k)=\sigma^2\,V_k(W_k^TW_k)^{-1}V_k^T=\sigma^2\,V_k\operatorname{diag}\left(\lambda_1^{-1},\ldots,\lambda_k^{-1}\right)V_k^T=\sigma^2\sum_{j=1}^{k}\frac{\mathbf{v}_j\mathbf{v}_j^T}{\lambda_j}.$$
In particular:
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_p)=\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})=\sigma^2\sum_{j=1}^{p}\frac{\mathbf{v}_j\mathbf{v}_j^T}{\lambda_j}.$$
Hence for all $k\in\{1,\ldots,p-1\}$ we have:
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{Var}(\widehat{\boldsymbol{\beta}}_k)=\sigma^2\sum_{j=k+1}^{p}\frac{\mathbf{v}_j\mathbf{v}_j^T}{\lambda_j}.$$
Thus, for all $k\in\{1,\ldots,p\}$ we have:
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{Var}(\widehat{\boldsymbol{\beta}}_k)\succeq 0,$$
where $A\succeq 0$ indicates that a square symmetric matrix $A$ is non-negative definite. Consequently, any given linear form of the PCR estimator has a lower variance compared to that of the same linear form of the ordinary least squares estimator.
Addressing multicollinearity
Under multicollinearity, two or more of the covariates are highly correlated, so that one can be linearly predicted from the others with a non-trivial degree of accuracy. Consequently, the columns of the data matrix $\mathbf{X}$ that correspond to the observations for these covariates tend to become linearly dependent and therefore, $\mathbf{X}$ tends to become rank deficient, losing its full column rank structure. More quantitatively, one or more of the smaller eigenvalues of $\mathbf{X}^T\mathbf{X}$ get very close to, or become exactly equal to, $0$ under such situations. The variance expressions above indicate that these small eigenvalues have the maximum inflation effect on the variance of the least squares estimator, thereby destabilizing the estimator significantly when they are close to $0$. This issue can be effectively addressed by using a PCR estimator obtained by excluding the principal components corresponding to these small eigenvalues.
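For instance, with two nearly collinear covariates the smallest eigenvalue of $\mathbf{X}^T\mathbf{X}$ collapses towards zero, and its reciprocal dominates the variance of the least squares estimator. A small illustrative sketch (simulated data, names not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
z = rng.normal(size=n)
# Two nearly collinear covariates plus one independent covariate.
X = np.column_stack([z, z + 1e-3 * rng.normal(size=n), rng.normal(size=n)])
X -= X.mean(axis=0)

lam = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
print(lam)   # the smallest eigenvalue is near zero, so 1/lambda_min inflates Var(beta_ols);
             # PCR with k = 2 simply drops that unstable direction.
```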
Dimension reduction
PCR may also be used for performing dimension reduction. To see this, let $L_k$ denote any $p\times k$ matrix having orthonormal columns, for any $k\in\{1,\ldots,p\}$. Suppose now that we want to approximate each of the covariate observations $\mathbf{x}_i$ through the rank-$k$ linear transformation $L_k\mathbf{z}_i$ for some $\mathbf{z}_i\in\mathbb{R}^k$ ($1\leq i\leq n$).
Then, it can be shown that
$$\sum_{i=1}^{n}\left\|\mathbf{x}_i-L_k\mathbf{z}_i\right\|^2$$
is minimized at $L_k=V_k$, the matrix with the first $k$ principal component directions as columns, and $\mathbf{z}_i=\mathbf{x}_i^k=V_k^T\mathbf{x}_i$, the corresponding $k$-dimensional derived covariates. Thus the $k$-dimensional principal components provide the best linear approximation of rank $k$ to the observed data matrix $\mathbf{X}$.
The corresponding reconstruction error is given by:
$$\sum_{i=1}^{n}\left\|\mathbf{x}_i-V_k\mathbf{x}_i^k\right\|^2=\begin{cases}\sum_{j=k+1}^{p}\lambda_j & 1\leq k<p\\ 0 & k=p\end{cases}$$
Thus any desired dimension reduction may be achieved by choosing $k$, the number of principal components to be used, through an appropriate threshold on the cumulative sum of the eigenvalues of $\mathbf{X}^T\mathbf{X}$. Since the smaller eigenvalues do not contribute significantly to the cumulative sum, the corresponding principal components may continue to be dropped as long as the desired threshold limit is not exceeded. The same criterion may also be used for addressing the multicollinearity issue, whereby the principal components corresponding to the smaller eigenvalues may be ignored as long as the threshold limit is maintained.
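A hedged sketch of such a thresholding rule; the 95% cutoff and the function name are arbitrary illustrative choices:

```python
import numpy as np

def choose_k(X, threshold=0.95):
    """Smallest k whose leading eigenvalues of X^T X account for `threshold`
    of their total sum, which bounds the relative PCA reconstruction error."""
    Xc = X - X.mean(axis=0)
    lam = np.linalg.svd(Xc, compute_uv=False) ** 2     # eigenvalues of Xc^T Xc, descending
    frac = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(frac, threshold) + 1)   # first k reaching the threshold
```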
Regularization effect
Since the PCR estimator typically uses only a subset of all the principal components for regression, it can be viewed as some sort of a regularized procedure. More specifically, for any $1\leq k<p$, the PCR estimator $\widehat{\boldsymbol{\beta}}_k$ denotes the regularized solution to the following constrained minimization problem:
$$\min_{\boldsymbol{\beta}_*\in\mathbb{R}^p}\left\|\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}_*\right\|^2\quad\text{subject to}\quad\boldsymbol{\beta}_*\perp\{\mathbf{v}_{k+1},\ldots,\mathbf{v}_p\}.$$
The constraint may be equivalently written as:
$$V_{(p-k)}^T\boldsymbol{\beta}_*=\mathbf{0},$$
where:
$$V_{(p-k)}=\left[\mathbf{v}_{k+1},\ldots,\mathbf{v}_p\right]_{p\times(p-k)}.$$
Thus, when only a proper subset of all the principal components is selected for regression, the PCR estimator so obtained is based on a hard form of regularization that constrains the resulting solution to the column space of the selected principal component directions, and consequently restricts it to be orthogonal to the excluded directions.
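The hard constraint is easy to check numerically: the PCR estimator lies in the span of the retained loadings and is therefore orthogonal to the excluded directions. A short illustrative sketch (simulated data, names not from the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 6)); X -= X.mean(axis=0)
y = rng.normal(size=40); y -= y.mean()

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
V_k = Vt[:k].T
beta_k = V_k @ np.linalg.lstsq(X @ V_k, y, rcond=None)[0]

# beta_k is a linear combination of v_1, ..., v_k, hence orthogonal to v_{k+1}, ..., v_p.
assert np.allclose(Vt[k:] @ beta_k, 0)
```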
Optimality of PCR among a class of regularized estimators
Given the constrained minimization problem as defined above, consider the following generalized version of it:
$$\min_{\boldsymbol{\beta}_*\in\mathbb{R}^p}\|\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}_*\|^2\quad\text{subject to}\quad L_{(p-k)}^T\boldsymbol{\beta}_*=\mathbf{0},$$
where $L_{(p-k)}$ denotes any full column rank matrix of order $p\times(p-k)$ with $1\leq k<p$.
Let $\widehat{\boldsymbol{\beta}}_L$ denote the corresponding solution. Thus
$$\widehat{\boldsymbol{\beta}}_L=\arg\min_{\boldsymbol{\beta}_*\in\mathbb{R}^p}\|\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}_*\|^2\quad\text{subject to}\quad L_{(p-k)}^T\boldsymbol{\beta}_*=\mathbf{0}.$$
Then the optimal choice of the restriction matrix $L_{(p-k)}$, for which the corresponding estimator $\widehat{\boldsymbol{\beta}}_L$ achieves the minimum prediction error, is given by:
$$L_{(p-k)}^*=V_{(p-k)}\Lambda_{(p-k)}^{1/2},$$
where
$$\Lambda_{(p-k)}^{1/2}=\operatorname{diag}\left(\lambda_{k+1}^{1/2},\ldots,\lambda_p^{1/2}\right).$$
Quite clearly, the resulting optimal estimator $\widehat{\boldsymbol{\beta}}_{L^*}$ is then simply given by the PCR estimator $\widehat{\boldsymbol{\beta}}_k$ based on the first $k$ principal components.
Efficiency
Since the ordinary least squares estimator is unbiased for $\boldsymbol{\beta}$, we have
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})=\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}),$$
where MSE denotes the mean squared error. Now, if for some $k\in\{1,\ldots,p\}$ we additionally have $V_{(p-k)}^T\boldsymbol{\beta}=\mathbf{0}$, then the corresponding $\widehat{\boldsymbol{\beta}}_k$ is also unbiased for $\boldsymbol{\beta}$ and therefore
$$\operatorname{Var}(\widehat{\boldsymbol{\beta}}_k)=\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_k).$$
We have already seen that
$$\forall j\in\{1,\ldots,p\}:\quad\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{Var}(\widehat{\boldsymbol{\beta}}_j)\succeq 0,$$
which then implies
$$\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_k)\succeq 0$$
for that particular $k$. Thus in that case, the corresponding $\widehat{\boldsymbol{\beta}}_k$ would be a more efficient estimator of $\boldsymbol{\beta}$ compared to $\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}$, based on the mean squared error as the performance criterion. In addition, any given linear form of the corresponding $\widehat{\boldsymbol{\beta}}_k$ would also have a lower mean squared error compared to that of the same linear form of $\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}$.
Now suppose that for a given $k\in\{1,\ldots,p\}$, $V_{(p-k)}^T\boldsymbol{\beta}\neq\mathbf{0}$. Then the corresponding $\widehat{\boldsymbol{\beta}}_k$ is biased for $\boldsymbol{\beta}$. However, since
$$\forall k\in\{1,\ldots,p\}:\quad\operatorname{Var}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{Var}(\widehat{\boldsymbol{\beta}}_k)\succeq 0,$$
it is still possible that $\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_{\mathrm{ols}})-\operatorname{MSE}(\widehat{\boldsymbol{\beta}}_k)\succeq 0$, especially if $k$ is such that the excluded principal components correspond to the smaller eigenvalues, thereby resulting in lower bias.
In order to ensure efficient estimation and prediction performance of PCR as an estimator of $\boldsymbol{\beta}$, Park (1981) proposes the following guideline for selecting the principal components to be used for regression: Drop the $j^{\text{th}}$ principal component if and only if
$$\lambda_j<(p\sigma^2)/\boldsymbol{\beta}^T\boldsymbol{\beta}.$$
Practical implementation of this guideline of course requires estimates for the unknown model parameters $\sigma^2$ and $\boldsymbol{\beta}$. In general, they may be estimated using the unrestricted least squares estimates obtained from the original full model. Park (1981), however, provides a slightly modified set of estimates that may be better suited for this purpose.
Unlike the criteria based on the cumulative sum of the eigenvalues of $\mathbf{X}^T\mathbf{X}$, which is probably more suited for addressing the multicollinearity problem and for performing dimension reduction, the above criterion actually attempts to improve the prediction and estimation efficiency of the PCR estimator by involving both the outcome as well as the covariates in the process of selecting the principal components to be used in the regression step. Alternative approaches with similar goals include selection of the principal components based on cross-validation or Mallows's Cp criterion. Often, the principal components are also selected based on their degree of association with the outcome.
Shrinkage effect of PCR
In general, PCR is essentially a shrinkage estimator that usually retains the high-variance principal components (corresponding to the larger eigenvalues of $\mathbf{X}^T\mathbf{X}$) as covariates in the model and discards the remaining low-variance components (corresponding to the smaller eigenvalues of $\mathbf{X}^T\mathbf{X}$). Thus it exerts a discrete shrinkage effect on the low-variance components, nullifying their contribution completely in the original model. In contrast, the ridge regression estimator exerts a smooth shrinkage effect through the regularization parameter (or the tuning parameter) inherently involved in its construction. While it does not completely discard any of the components, it exerts a shrinkage effect over all of them in a continuous manner, so that the extent of shrinkage is higher for the low-variance components and lower for the high-variance components. Frank and Friedman (1993) conclude that for the purpose of prediction itself, the ridge estimator, owing to its smooth shrinkage effect, is perhaps a better choice compared to the PCR estimator with its discrete shrinkage effect.
In addition, the principal components are obtained from the eigendecomposition of $\mathbf{X}^T\mathbf{X}$, which involves the observations for the explanatory variables only. Therefore, the resulting PCR estimator obtained from using these principal components as covariates need not necessarily have satisfactory predictive performance for the outcome. A somewhat similar estimator that tries to address this issue through its very construction is the partial least squares (PLS) estimator. Similar to PCR, PLS also uses derived covariates of lower dimensions. However, unlike PCR, the derived covariates for PLS are obtained based on using both the outcome as well as the covariates. While PCR seeks the high-variance directions in the space of the covariates, PLS seeks the directions in the covariate space that are most useful for the prediction of the outcome.
In 2006, a variant of the classical PCR known as supervised PCR was proposed. In a spirit similar to that of PLS, it attempts to obtain derived covariates of lower dimension based on a criterion that involves both the outcome as well as the covariates. The method starts by performing a set of $p$ simple linear regressions (or univariate regressions) wherein the outcome vector is regressed separately on each of the $p$ covariates taken one at a time. Then, for some $m\in\{1,\ldots,p\}$, the $m$ covariates that turn out to be the most correlated with the outcome (based on the degree of significance of the corresponding estimated regression coefficients) are selected for further use. A conventional PCR, as described earlier, is then performed, but now it is based on only the $n\times m$ data matrix corresponding to the observations for the selected covariates. The number of covariates used, $m\in\{1,\ldots,p\}$, and the subsequent number of principal components used, $k\in\{1,\ldots,m\}$, are usually selected by cross-validation.
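A hedged sketch of supervised PCR along these lines, screening by absolute correlation rather than by formal significance tests, and with m and k fixed instead of cross-validated, purely for illustration (the function name is not from the original text):

```python
import numpy as np

def supervised_pcr(X, y, m, k):
    """Screen the m covariates most associated with y, then run ordinary PCR
    with k components on the reduced n x m data matrix."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    # Univariate association of each covariate with the outcome,
    # proportional to the absolute sample correlation.
    scores = np.abs(Xc.T @ yc) / np.sqrt(np.sum(Xc**2, axis=0))
    keep = np.argsort(scores)[::-1][:m]

    Xs = Xc[:, keep]
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    V_k = Vt[:k].T
    gamma_k, *_ = np.linalg.lstsq(Xs @ V_k, yc, rcond=None)
    return keep, V_k @ gamma_k        # selected columns and their PCR coefficients
```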
Generalization to kernel settings
The classical PCR method as described above is based on classical PCA and considers a linear regression model for predicting the outcome based on the covariates. However, it can be easily generalized to a kernel machine setting whereby the regression function need not necessarily be linear in the covariates, but instead it can belong to the Reproducing Kernel Hilbert Space associated with any arbitrary (possibly non-linear), symmetric positive-definite kernel. The linear regression model turns out to be a special case of this setting when the kernel function is chosen to be the linear kernel.
In general, under the kernel machine setting, the vector of covariates is first mapped into a high-dimensional (potentially infinite-dimensional) feature space characterized by the kernel function chosen. The mapping so obtained is known as the feature map and each of its coordinates, also known as the feature elements, corresponds to one feature (may be linear or non-linear) of the covariates. The regression function is then assumed to be a linear combination of these feature elements. Thus, the underlying regression model in the kernel machine setting is essentially a linear regression model with the understanding that instead of the original set of covariates, the predictors are now given by the vector (potentially infinite-dimensional) of feature elements obtained by transforming the actual covariates using the feature map.
However, the kernel trick actually enables us to operate in the feature space without ever explicitly computing the feature map. It turns out that it is sufficient to compute the pairwise inner products among the feature maps for the observed covariate vectors, and these inner products are simply given by the values of the kernel function evaluated at the corresponding pairs of covariate vectors. The pairwise inner products so obtained may therefore be represented in the form of an $n\times n$ symmetric non-negative definite matrix, also known as the kernel matrix.
PCR in the kernel machine setting can now be implemented by first appropriately centering this kernel matrix (K, say) with respect to the feature space and then performing a kernel PCA on the centered kernel matrix (K', say) whereby an eigendecomposition of K' is obtained. Kernel PCR then proceeds by (usually) selecting a subset of all the eigenvectors so obtained and then performing a standard linear regression of the outcome vector on these selected eigenvectors. The eigenvectors to be used for regression are usually selected using cross-validation. The estimated regression coefficients (having the same dimension as the number of selected eigenvectors) along with the corresponding selected eigenvectors are then used for predicting the outcome for a future observation. In machine learning, this technique is also known as spectral regression.
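A minimal sketch of the fitting step of kernel PCR; the Gaussian kernel and its bandwidth gamma are illustrative assumptions, the function name is not from the original text, and prediction at new points (which requires applying the same centering consistently) is omitted.

```python
import numpy as np

def kernel_pcr_fit(X, y, k, gamma=1.0):
    """Center the kernel matrix in feature space, eigendecompose it, and
    regress the centered outcome on the leading k eigenvectors."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))   # Gaussian kernel matrix

    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # K', the kernel matrix centered in feature space

    w, E = np.linalg.eigh(Kc)                    # eigenvalues in ascending order
    E_k = E[:, ::-1][:, :k]                      # leading k eigenvectors of K'

    yc = y - y.mean()
    coef, *_ = np.linalg.lstsq(E_k, yc, rcond=None)   # linear regression on the eigenvectors
    fitted = E_k @ coef + y.mean()
    return fitted, coef, E_k
```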
Clearly, kernel PCR has a discrete shrinkage effect on the eigenvectors of K', quite similar to the discrete shrinkage effect of classical PCR on the principal components, as discussed earlier. However, the feature map associated with the chosen kernel could potentially be infinite-dimensional, and hence the corresponding principal components and principal component directions could be infinite-dimensional as well. Therefore, these quantities are often practically intractable under the kernel machine setting. Kernel PCR essentially works around this problem by considering an equivalent dual formulation based on using the spectral decomposition of the associated kernel matrix. Under the linear regression model (which corresponds to choosing the kernel function as the linear kernel), this amounts to considering a spectral decomposition of the corresponding $n\times n$ kernel matrix $\mathbf{X}\mathbf{X}^T$ and then regressing the outcome vector on a selected subset of the eigenvectors of $\mathbf{X}\mathbf{X}^T$ so obtained. It can be easily shown that this is the same as regressing the outcome vector on the corresponding principal components (which are finite-dimensional in this case), as defined in the context of the classical PCR. Thus, for the linear kernel, the kernel PCR based on this dual formulation is exactly equivalent to the classical PCR based on the primal formulation. However, for arbitrary (and possibly non-linear) kernels, this primal formulation may become intractable owing to the infinite dimensionality of the associated feature map. Thus classical PCR becomes practically infeasible in that case, but kernel PCR based on the dual formulation still remains valid and computationally scalable.
See also
Principal component analysis
Partial least squares regression
Ridge regression
Canonical correlation
Deming regression
Total sum of squares
References
Further reading
Amemiya, Takeshi (1985). Advanced Econometrics. Harvard University Press. pp. 57–60. ISBN 978-0-674-00560-0.
Theil, Henri (1971). Principles of Econometrics. Wiley. pp. 46–55. ISBN 978-0-471-85845-4.