Low-rank matrix approximations
Low-rank matrix approximations are essential tools in the application of kernel methods to large-scale learning problems.
Kernel methods (for instance, support vector machines or Gaussian processes) project data points into a high-dimensional or infinite-dimensional feature space and find the optimal separating hyperplane. In kernel methods the data is represented through a kernel matrix (or Gram matrix), and many machine learning algorithms can be solved using this matrix alone. The main problem of kernel methods is their high computational cost, which is tied to the kernel matrix: the cost is at least quadratic in the number of training data points, and since most kernel methods also involve matrix inversion or an eigenvalue decomposition, it becomes cubic in the number of training points. Large training sets therefore cause large storage and computational costs. While low-rank decomposition methods (such as the Cholesky decomposition) reduce this cost, they still require computing the full kernel matrix. One approach to this problem is low-rank matrix approximation. The most popular examples are the Nyström approximation and randomized feature maps; both have been applied successfully to efficient kernel learning.
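To make the cost concrete, here is a minimal NumPy sketch (an illustrative example, not part of the original article) that builds a full RBF Gram matrix; the kernel choice and the bandwidth parameter gamma are assumptions made only for the illustration. Storing the matrix takes $O(n^2)$ memory, and inverting it or computing its eigendecomposition costs $O(n^3)$, which is exactly what low-rank approximations try to avoid.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Full RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    Storage is O(n^2), and downstream solvers that invert or
    eigendecompose this matrix cost O(n^3).
    """
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

X = np.random.default_rng(0).standard_normal((1000, 10))
K = rbf_kernel_matrix(X)   # 1000 x 1000 matrix: n^2 entries
```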
Nyström approximation
Kernel methods become infeasible when the number of points $n$ is so large that the kernel matrix $\hat{K}$ cannot be stored in memory.
If $n$ is the number of training examples, the storage and computational cost required to find the solution of the problem using a general kernel method are $O(n^2)$ and $O(n^3)$ respectively. The Nyström approximation can allow a significant speed-up of the computations. This speed-up is achieved by using, instead of the kernel matrix, its approximation $\tilde{K}$ of rank $q$. An advantage of the method is that it is not necessary to compute or store the whole kernel matrix, but only a submatrix of size $q \times n$.
This reduces the storage and complexity requirements to $O(nq)$ and $O(nq^2)$ respectively.
The method is named "Nyström approximation" because it can be interpreted as a case of the Nyström method from integral equation theory.
= Theorem for kernel approximation =

Let $\hat{K}$ be a kernel matrix for some kernel method, and consider the first $q < n$ points in the training set. Then there exists a matrix $\tilde{K}$ of rank $q$:

$$\tilde{K} = \hat{K}_{n,q} \hat{K}_q^{-1} \hat{K}_{n,q}^{\text{T}},$$

where $(\hat{K}_q)_{i,j} = K(x_i, x_j)$ for $i, j = 1, \dots, q$, $\hat{K}_q$ is an invertible matrix, and $(\hat{K}_{n,q})_{i,j} = K(x_i, x_j)$ for $i = 1, \dots, n$ and $j = 1, \dots, q$.
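A minimal NumPy sketch of this construction (illustrative, not from the article): it uses the first $q$ points as in the theorem, an RBF kernel as an assumed example, and a pseudo-inverse in place of the exact inverse $\hat{K}_q^{-1}$ for numerical robustness. The full kernel matrix is formed here only to measure the approximation error; in practice only the $n \times q$ block is needed.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel block K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d, 0.0))

def nystroem(X, kernel, q):
    """Rank-q Nystroem approximation K_tilde = K_nq K_q^{-1} K_nq^T,
    built from the n x q block only (O(nq) storage)."""
    K_nq = kernel(X, X[:q])            # n x q block of the kernel matrix
    K_q = K_nq[:q, :]                  # its leading q x q block
    return K_nq @ np.linalg.pinv(K_q) @ K_nq.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
K_full = rbf(X, X)                     # formed only for the error check below
K_tilde = nystroem(X, rbf, q=100)
print(np.linalg.norm(K_full - K_tilde) / np.linalg.norm(K_full))
```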
= Proof =
= Singular-value decomposition application =
Applying singular-value decomposition (SVD) to matrix $A$ with dimensions $p \times m$ produces a singular system consisting of singular values $\{\sigma_j\}_{j=1}^{k}$ (with $\sigma_j > 0$ for all $j = 1, \dots, k$) and vectors $\{v_j\}_{j=1}^{m} \in \mathbb{C}^m$ and $\{u_j\}_{j=1}^{p} \in \mathbb{C}^p$ that form orthonormal bases of $\mathbb{C}^m$ and $\mathbb{C}^p$ respectively:

$$\begin{cases} A^{\text{T}} A v_j = \sigma_j v_j, & j = 1, \dots, k, \\ A^{\text{T}} A v_j = 0, & j = k+1, \dots, m, \\ A A^{\text{T}} u_j = \sigma_j u_j, & j = 1, \dots, k, \\ A A^{\text{T}} u_j = 0, & j = k+1, \dots, p. \end{cases}$$

If $U$ and $V$ are the matrices with the $u_j$'s and $v_j$'s in their columns and $\Sigma$ is the diagonal $p \times m$ matrix with the singular values $\sigma_i$ in its first $k$ diagonal entries (all other entries zero), so that

$$\begin{cases} A v_j = \sqrt{\sigma_j}\, u_j, & j = 1, \dots, k, \\ A v_j = 0, & j = k+1, \dots, m, \\ A^{\text{T}} u_j = \sqrt{\sigma_j}\, v_j, & j = 1, \dots, k, \\ A^{\text{T}} u_j = 0, & j = k+1, \dots, p, \end{cases}$$

then the matrix $A$ can be rewritten as

$$A = U \Sigma^{1/2} V^{\text{T}}.$$
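This factorization can be checked numerically; the sketch below is an illustration only. Note that NumPy's `svd` returns the conventional singular values, which here equal $\sqrt{\sigma_j}$, so they are squared to recover the matrix $\Sigma$ used in this article's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))              # a p x m matrix

# NumPy's SVD: A = U diag(s) V^T, where s holds the usual singular values sqrt(sigma_j).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s**2)                        # eigenvalues sigma_j of A^T A, as in the text

print(np.allclose(A, U @ np.sqrt(Sigma) @ Vt))        # A = U Sigma^{1/2} V^T
print(np.allclose(A.T @ A @ Vt.T, Vt.T @ Sigma))      # A^T A v_j = sigma_j v_j
```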
= Further proof =
$\hat{X}$ is the $n \times D$ data matrix, with

$$\hat{K} = \hat{X}\hat{X}^{\text{T}}, \qquad \hat{C} = \hat{X}^{\text{T}}\hat{X}.$$

Applying singular-value decomposition to these matrices:

$$\hat{X} = \hat{U}\hat{\Sigma}^{1/2}\hat{V}^{\text{T}}, \quad \hat{K} = \hat{U}\hat{\Sigma}\hat{U}^{\text{T}}, \quad \hat{C} = \hat{V}\hat{\Sigma}\hat{V}^{\text{T}}.$$

$\hat{X}_q$ is the $q \times D$ matrix consisting of the first $q$ rows of $\hat{X}$, with

$$\hat{K}_q = \hat{X}_q\hat{X}_q^{\text{T}}, \qquad \hat{C}_q = \hat{X}_q^{\text{T}}\hat{X}_q.$$

Applying singular-value decomposition to these matrices:

$$\hat{X}_q = \hat{U}_q\hat{\Sigma}_q^{1/2}\hat{V}_q^{\text{T}}, \quad \hat{K}_q = \hat{U}_q\hat{\Sigma}_q\hat{U}_q^{\text{T}}, \quad \hat{C}_q = \hat{V}_q\hat{\Sigma}_q\hat{V}_q^{\text{T}}.$$

Since $\hat{U}$, $\hat{V}$, $\hat{U}_q$ and $\hat{V}_q$ are orthogonal matrices,

$$\hat{U} = \hat{X}\hat{V}\hat{\Sigma}^{-1/2}, \qquad \hat{V}_q = \hat{X}_q^{\text{T}}\hat{U}_q\hat{\Sigma}_q^{-1/2}.$$

Replacing $\hat{V}$ and $\hat{\Sigma}$ by $\hat{V}_q$ and $\hat{\Sigma}_q$, an approximation for $\hat{U}$ is obtained:

$$\tilde{U} = \hat{X}\hat{X}_q^{\text{T}}\hat{U}_q\hat{\Sigma}_q^{-1}$$

($\tilde{U}$ is not necessarily an orthogonal matrix). However, defining $\tilde{K} = \tilde{U}\hat{\Sigma}_q\tilde{U}^{\text{T}}$, it can be computed as follows:

$$\begin{aligned}\tilde{K} = \tilde{U}\hat{\Sigma}_q\tilde{U}^{\text{T}} &= \hat{X}\hat{X}_q^{\text{T}}\hat{U}_q\hat{\Sigma}_q^{-1}\hat{\Sigma}_q\left(\hat{X}\hat{X}_q^{\text{T}}\hat{U}_q\hat{\Sigma}_q^{-1}\right)^{\text{T}} \\ &= \hat{X}\hat{X}_q^{\text{T}}\big\{\hat{U}_q(\hat{\Sigma}_q^{-1})^{\text{T}}\hat{U}_q^{\text{T}}\big\}\left(\hat{X}\hat{X}_q^{\text{T}}\right)^{\text{T}}.\end{aligned}$$

Since $\hat{U}_q$ is orthogonal, the equality $(\hat{U}_q)^{\text{T}} = (\hat{U}_q)^{-1}$ holds. Then, using the formula for the inverse of a matrix product, $(AB)^{-1} = B^{-1}A^{-1}$ for invertible matrices $A$ and $B$, the expression in braces can be rewritten as

$$\hat{U}_q(\hat{\Sigma}_q^{-1})^{\text{T}}\hat{U}_q^{\text{T}} = \left(\hat{U}_q\hat{\Sigma}_q^{\text{T}}\hat{U}_q^{\text{T}}\right)^{-1} = \left(\hat{K}_q\right)^{-1}.$$

The expression for $\tilde{K}$ then becomes

$$\tilde{K} = \left(\hat{X}\hat{X}_q^{\text{T}}\right)\hat{K}_q^{-1}\left(\hat{X}\hat{X}_q^{\text{T}}\right)^{\text{T}}.$$

Defining $\hat{K}_{n,q} = \hat{X}\hat{X}_q^{\text{T}}$, the proof is finished.
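For a linear kernel, $\hat{K} = \hat{X}\hat{X}^{\text{T}}$, the identities in this proof can be checked numerically. The sketch below is illustrative; the dimensions and the choice $q = D$ (which keeps $\hat{K}_q$ invertible for generic data and, in this special case, makes the approximation exact) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D, q = 200, 8, 8                         # q = D keeps K_q invertible (generic X)
X = rng.standard_normal((n, D))
X_q = X[:q]

K_q = X_q @ X_q.T                           # q x q block of the linear-kernel matrix
K_nq = X @ X_q.T                            # n x q block

# U_tilde = X X_q^T U_q Sigma_q^{-1}, using the eigendecomposition K_q = U_q Sigma_q U_q^T
sigma_q, U_q = np.linalg.eigh(K_q)
U_tilde = K_nq @ U_q @ np.diag(1.0 / sigma_q)

K_via_U = U_tilde @ np.diag(sigma_q) @ U_tilde.T     # K_tilde = U_tilde Sigma_q U_tilde^T
K_via_blocks = K_nq @ np.linalg.inv(K_q) @ K_nq.T    # K_tilde = K_nq K_q^{-1} K_nq^T

print(np.allclose(K_via_U, K_via_blocks))   # the two expressions in the proof coincide
print(np.allclose(K_via_blocks, X @ X.T))   # exact here because rank(K) = D = q
```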
= General theorem for kernel approximation for a feature map =

For a feature map $\Phi : \mathcal{X} \to \mathcal{F}$ with associated kernel $K(x, x') = \langle \Phi(x), \Phi(x') \rangle_{\mathcal{F}}$, the approximation $\tilde{K} = \hat{K}_{n,q} \hat{K}_q^{-1} \hat{K}_{n,q}^{\text{T}}$ also follows by replacing $\hat{X}$ by the operator $\hat{\Phi} : \mathcal{F} \to \mathbb{R}^n$ such that $(\hat{\Phi} w)_i = \langle \Phi(x_i), w \rangle_{\mathcal{F}}$, $i = 1, \dots, n$, $w \in \mathcal{F}$, and $\hat{X}_q$ by the operator $\hat{\Phi}_q : \mathcal{F} \to \mathbb{R}^q$ such that $(\hat{\Phi}_q w)_i = \langle \Phi(x_i), w \rangle_{\mathcal{F}}$, $i = 1, \dots, q$, $w \in \mathcal{F}$. Once again, a simple inspection shows that the feature map is needed only in the proof, while the end result depends only on computing the kernel function.
= Application for regularized least squares =

In vector and kernel notation, the problem of regularized least squares can be rewritten as

$$\min_{c \in \mathbb{R}^n} \frac{1}{n} \left\| \hat{Y} - \hat{K} c \right\|_{\mathbb{R}^n}^2 + \lambda \langle c, \hat{K} c \rangle_{\mathbb{R}^n}.$$

Computing the gradient and setting it to 0, the minimum can be obtained:

$$\begin{aligned} &-\frac{1}{n} \hat{K}(\hat{Y} - \hat{K} c) + \lambda \hat{K} c = 0 \\ \Rightarrow{} \ &\hat{K}(\hat{K} + \lambda n I) c = \hat{K} \hat{Y} \\ \Rightarrow{} \ &c = (\hat{K} + \lambda n I)^{-1} \hat{Y}, \text{ where } c \in \mathbb{R}^n. \end{aligned}$$

The inverse matrix $(\hat{K} + \lambda n I)^{-1}$ can be computed using the Woodbury matrix identity, after substituting the Nyström approximation $\tilde{K} = \hat{K}_{n,q}\hat{K}_q^{-1}\hat{K}_{n,q}^{\text{T}}$ for $\hat{K}$:

$$\begin{aligned}(\hat{K} + \lambda n I)^{-1} &= \frac{1}{\lambda n}\left(\frac{1}{\lambda n}\hat{K} + I\right)^{-1} \\ &= \frac{1}{\lambda n}\left(I + \hat{K}_{n,q}(\lambda n \hat{K}_q)^{-1}\hat{K}_{n,q}^{\text{T}}\right)^{-1} \\ &= \frac{1}{\lambda n}\left(I - \hat{K}_{n,q}\left(\lambda n \hat{K}_q + \hat{K}_{n,q}^{\text{T}}\hat{K}_{n,q}\right)^{-1}\hat{K}_{n,q}^{\text{T}}\right).\end{aligned}$$

This expression has the desired $O(nq)$ storage and $O(nq^2)$ complexity requirements.
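A hedged NumPy sketch of this solver (the function name and parameters are ours, not from the article): it computes the coefficient vector $c$ from only the $n \times q$ and $q \times q$ kernel blocks, using the last line of the Woodbury expansion above, so it needs $O(nq)$ storage and $O(nq^2)$ time.

```python
import numpy as np

def nystroem_rls_coefficients(K_nq, K_q, Y, lam):
    """Approximate c = (K + lam*n*I)^{-1} Y, with K replaced by the Nystroem
    approximation K_nq K_q^{-1} K_nq^T, via the Woodbury identity."""
    n = K_nq.shape[0]
    inner = lam * n * K_q + K_nq.T @ K_nq            # q x q system, O(nq^2) to form
    c = (Y - K_nq @ np.linalg.solve(inner, K_nq.T @ Y)) / (lam * n)
    return c

# Hypothetical usage with the kernel blocks from the earlier Nystroem sketch:
# K_nq = rbf(X, X[:q]); K_q = K_nq[:q]; c = nystroem_rls_coefficients(K_nq, K_q, Y, lam=1e-2)
```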
Randomized feature maps approximation
Let $\mathbf{x}, \mathbf{x'} \in \mathbb{R}^d$ be samples of data, and let $z : \mathbb{R}^d \to \mathbb{R}^D$ be a randomized feature map (mapping a single vector to a vector of higher dimensionality) such that the inner product between a pair of transformed points approximates their kernel evaluation:

$$K(\mathbf{x}, \mathbf{x'}) = \langle \Phi(\mathbf{x}), \Phi(\mathbf{x'}) \rangle \approx z(\mathbf{x})^{\text{T}} z(\mathbf{x'}),$$

where $\Phi$ is the mapping embedded in the RBF kernel.
Since $z$ is low-dimensional, the input can easily be transformed with $z$, and then different linear learning methods can be applied to approximate the answer of the corresponding nonlinear kernel. There are different randomized feature maps for computing approximations to RBF kernels, for instance random Fourier features and random binning features.
= Random Fourier features =

The random Fourier features map produces a Monte Carlo approximation to the feature map; the Monte Carlo method is what makes it randomized. These random features consist of sinusoids $\cos(w^{\text{T}}\mathbf{x} + b)$ randomly drawn from the Fourier transform of the kernel to be approximated, where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$ are random variables. A direction $w$ is chosen at random, the data points are projected onto it, and the resulting scalar is passed through a sinusoid. The inner product of the transformed points then approximates a shift-invariant kernel. Since the map is smooth, random Fourier features work well on interpolation tasks.
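A minimal NumPy sketch of random Fourier features, assuming the RBF kernel $\exp(-\gamma\|\mathbf{x} - \mathbf{x'}\|^2)$, for which the frequencies are drawn from a Gaussian with covariance $2\gamma I$; the specific function name and parameters are illustrative choices, not from the source.

```python
import numpy as np

def random_fourier_features(X, D, gamma=1.0, rng=None):
    """Feature map z(x) = sqrt(2/D) * cos(W^T x + b) whose inner products
    approximate the RBF kernel exp(-gamma * ||x - x'||^2).
    For that kernel the frequencies W are drawn from N(0, 2*gamma*I)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
Z = random_fourier_features(X, D=2000, gamma=0.5, rng=1)
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.max(np.abs(Z @ Z.T - K_exact)))   # small, and shrinks as D grows
```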
= Random binning features =

A random binning features map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point a binary bit string that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points $\mathbf{x}, \mathbf{x'} \in \mathbb{R}^d$ are assigned to the same bin is proportional to $K(\mathbf{x}, \mathbf{x'})$. The inner product between a pair of transformed points is proportional to the number of times the two points are binned together and is therefore an unbiased estimate of $K(\mathbf{x}, \mathbf{x'})$. Since this mapping is not smooth and uses the proximity between input points, random binning features work well for approximating kernels that depend only on the $L_1$ distance between data points.
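A minimal sketch of random binning, assuming the Laplacian kernel $\exp(-\gamma\|\mathbf{x}-\mathbf{x'}\|_1)$ as an example of an $L_1$-based kernel, with the grid pitch and shift distributions commonly used for it. Instead of materializing the sparse one-hot bin indicators, the sketch counts how often two points share a bin across grids, which equals the inner product of those indicators divided by the number of grids.

```python
import numpy as np

def random_binning_kernel_estimate(X, Y, gamma=1.0, n_grids=2000, rng=None):
    """Monte Carlo estimate of exp(-gamma * ||x - y||_1) via random binning:
    each grid draws, per coordinate, a pitch delta ~ Gamma(2, 1/gamma) and a
    shift u ~ U[0, delta]; two points contribute 1 when they land in the same
    bin of every coordinate. The collision fraction is an unbiased estimate."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    hits = np.zeros((X.shape[0], Y.shape[0]))
    for _ in range(n_grids):
        delta = rng.gamma(shape=2.0, scale=1.0 / gamma, size=d)
        u = rng.uniform(0.0, delta)
        bx = np.floor((X - u) / delta)          # integer bin indices per coordinate
        by = np.floor((Y - u) / delta)
        hits += np.all(bx[:, None, :] == by[None, :, :], axis=-1)
    return hits / n_grids

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
K_hat = random_binning_kernel_estimate(X, X, gamma=1.0, n_grids=2000, rng=1)
K_exact = np.exp(-np.sum(np.abs(X[:, None, :] - X[None, :, :]), axis=-1))
print(np.max(np.abs(K_hat - K_exact)))          # shrinks as n_grids grows
```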
Comparison of approximation methods
The two approaches for large-scale kernel learning, the Nyström method and random features, differ in that the Nyström method uses data-dependent basis functions, while in the random features approach the basis functions are sampled from a distribution independent of the training data. This difference leads to an improved analysis for kernel learning approaches based on the Nyström method. When there is a large gap in the eigenspectrum of the kernel matrix, approaches based on the Nyström method can achieve better results than the random-features-based approach.
See also
Nyström method
Support vector machine
Radial basis function kernel
Regularized least squares
External links
Andreas Müller (2012). Kernel Approximations for Efficient SVMs (and other feature extraction methods).