Variance decomposition of forecast errors
In econometrics and other applications of multivariate time series analysis, a variance decomposition or forecast error variance decomposition (FEVD) is used to aid in the interpretation of a vector autoregression (VAR) model once it has been fitted. The variance decomposition indicates the amount of information each variable contributes to the other variables in the autoregression. It determines how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables.
Calculating the forecast error variance
For the VAR(p) of form

$$y_{t}=\nu +A_{1}y_{t-1}+\dots +A_{p}y_{t-p}+u_{t}.$$
This can be changed to a VAR(1) structure by writing it in companion form (see general matrix notation of a VAR(p)):

$$Y_{t}=V+AY_{t-1}+U_{t}$$
where

$$A={\begin{bmatrix}A_{1}&A_{2}&\dots &A_{p-1}&A_{p}\\\mathbf {I} _{k}&0&\dots &0&0\\0&\mathbf {I} _{k}&&0&0\\\vdots &&\ddots &\vdots &\vdots \\0&0&\dots &\mathbf {I} _{k}&0\end{bmatrix}},\quad Y={\begin{bmatrix}y_{1}\\\vdots \\y_{p}\end{bmatrix}},\quad V={\begin{bmatrix}\nu \\0\\\vdots \\0\end{bmatrix}}\quad \text{and}\quad U_{t}={\begin{bmatrix}u_{t}\\0\\\vdots \\0\end{bmatrix}},$$
where $y_{t}$, $\nu$ and $u_{t}$ are $k$-dimensional column vectors, $A$ is a $kp$ by $kp$ matrix, and $Y$, $V$ and $U$ are $kp$-dimensional column vectors.
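As an illustrative sketch (not from the source), the companion matrix $A$ can be assembled by stacking the lag matrices $A_{1},\dots,A_{p}$ in the top block row and placing identity blocks below; the coefficient values in the example are hypothetical:

```python
import numpy as np

def companion_matrix(lag_matrices):
    """Stack the k x k lag matrices A_1, ..., A_p of a VAR(p) into the
    kp x kp companion matrix A of the VAR(1) form Y_t = V + A Y_{t-1} + U_t."""
    p = len(lag_matrices)
    k = lag_matrices[0].shape[0]
    A = np.zeros((k * p, k * p))
    A[:k, :] = np.hstack(lag_matrices)   # top block row: [A_1 A_2 ... A_p]
    A[k:, :-k] = np.eye(k * (p - 1))     # identity blocks on the block subdiagonal
    return A

# Hypothetical example with k = 2 variables and p = 2 lags
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.3]])
A = companion_matrix([A1, A2])           # 4 x 4 companion matrix
```

The zero rows of $V$ and $U_{t}$ correspond to the identity blocks, which simply carry $y_{t-1},\dots,y_{t-p+1}$ forward one period.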
The mean squared error of the h-step forecast of variable $j$ is

$$\mathbf {MSE} [y_{j,t}(h)]=\sum _{i=0}^{h-1}\sum _{l=1}^{k}(e_{j}'\Theta _{i}e_{l})^{2}={\bigg (}\sum _{i=0}^{h-1}\Theta _{i}\Theta _{i}'{\bigg )}_{jj}={\bigg (}\sum _{i=0}^{h-1}\Phi _{i}\Sigma _{u}\Phi _{i}'{\bigg )}_{jj},$$
and where

- $e_{j}$ is the $j$th column of $I_{k}$ and the subscript $jj$ refers to that element of the matrix,
- $\Theta _{i}=\Phi _{i}P$, where $P$ is a lower triangular matrix obtained by a Cholesky decomposition of $\Sigma _{u}$ such that $\Sigma _{u}=PP'$, where $\Sigma _{u}$ is the covariance matrix of the errors $u_{t}$,
- $\Phi _{i}=JA^{i}J'$, where $J={\begin{bmatrix}\mathbf {I} _{k}&0&\dots &0\end{bmatrix}}$, so that $J$ is a $k$ by $kp$ dimensional matrix.
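The recursion above can be sketched numerically: compute the moving-average matrices $\Phi _{i}=JA^{i}J'$ from the companion form, then accumulate $\Phi _{i}\Sigma _{u}\Phi _{i}'$ and read off the diagonal. This is a minimal illustration, not from the source; the function names and example values are hypothetical:

```python
import numpy as np

def ma_coefficients(lag_matrices, h):
    """Return Phi_0, ..., Phi_{h-1} with Phi_i = J A^i J' (note Phi_0 = I_k)."""
    p = len(lag_matrices)
    k = lag_matrices[0].shape[0]
    A = np.zeros((k * p, k * p))                 # companion matrix
    A[:k, :] = np.hstack(lag_matrices)
    A[k:, :-k] = np.eye(k * (p - 1))
    J = np.hstack([np.eye(k)] + [np.zeros((k, k))] * (p - 1))  # k x kp selector
    phis, Ai = [], np.eye(k * p)
    for _ in range(h):
        phis.append(J @ Ai @ J.T)                # Phi_i = J A^i J'
        Ai = Ai @ A
    return phis

def mse_h_step(lag_matrices, sigma_u, h):
    """MSE[y_{j,t}(h)] for every j: diagonal of sum_i Phi_i Sigma_u Phi_i'."""
    acc = sum(Phi @ sigma_u @ Phi.T for Phi in ma_coefficients(lag_matrices, h))
    return np.diag(acc)

# Hypothetical VAR(1) example with k = 2
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])
mse = mse_h_step([A1], sigma_u, 4)
```

Since $\Phi _{0}=I_{k}$, the one-step MSE reduces to the diagonal of $\Sigma _{u}$.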
The amount of forecast error variance of variable $j$ accounted for by exogenous shocks to variable $l$ is given by $\omega _{jl,h}$:

$$\omega _{jl,h}=\sum _{i=0}^{h-1}(e_{j}'\Theta _{i}e_{l})^{2}/\mathbf {MSE} [y_{j,t}(h)].$$
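Putting the pieces together, a self-contained sketch of the full decomposition might look as follows (an assumption-laden illustration, not from the source; coefficient values are hypothetical):

```python
import numpy as np

def fevd(lag_matrices, sigma_u, h):
    """Forecast error variance decomposition of a VAR(p).

    Entry (j, l) of the returned matrix is omega_{jl,h}: the share of the
    h-step forecast error variance of variable j accounted for by
    orthogonalised shocks to variable l."""
    p = len(lag_matrices)
    k = sigma_u.shape[0]
    A = np.zeros((k * p, k * p))                 # companion matrix
    A[:k, :] = np.hstack(lag_matrices)
    A[k:, :-k] = np.eye(k * (p - 1))
    J = np.hstack([np.eye(k)] + [np.zeros((k, k))] * (p - 1))  # k x kp selector
    P = np.linalg.cholesky(sigma_u)              # Sigma_u = P P'
    contrib, Ai = np.zeros((k, k)), np.eye(k * p)
    for _ in range(h):                           # sum over i = 0, ..., h-1
        Theta = J @ Ai @ J.T @ P                 # Theta_i = Phi_i P
        contrib += Theta ** 2                    # entries are (e_j' Theta_i e_l)^2
        Ai = Ai @ A
    mse = contrib.sum(axis=1)                    # row sums give MSE[y_{j,t}(h)]
    return contrib / mse[:, None]

# Hypothetical VAR(1) example with k = 2
A1 = np.array([[0.5, 0.2], [0.1, 0.4]])
sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])
W = fevd([A1], sigma_u, 4)
```

By construction each row of the result sums to one, since summing $\omega _{jl,h}$ over $l$ recovers the MSE in the numerator.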
See also
Analysis of variance