- Source: Representer theorem
In statistical learning theory (a branch of computer science), a representer theorem is any of several related results stating that a minimizer $f^{*}$ of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel functions evaluated at the input points in the training set.
Formal statement
The following Representer Theorem and its proof are due to Schölkopf, Herbrich, and Smola:
Theorem: Consider a positive-definite real-valued kernel $k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ on a non-empty set $\mathcal{X}$ with a corresponding reproducing kernel Hilbert space $H_k$. Let there be given
- a training sample $(x_1, y_1), \dotsc, (x_n, y_n) \in \mathcal{X} \times \mathbb{R}$,
- a strictly increasing real-valued function $g \colon [0, \infty) \to \mathbb{R}$, and
- an arbitrary error function $E \colon (\mathcal{X} \times \mathbb{R}^2)^n \to \mathbb{R} \cup \{\infty\}$,
which together define the following regularized empirical risk functional on $H_k$:
$$f \mapsto E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + g\left(\lVert f \rVert\right).$$
Then, any minimizer of the empirical risk
$$f^{*} = \underset{f \in H_k}{\operatorname{argmin}} \left\{ E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + g\left(\lVert f \rVert\right) \right\}, \quad (*)$$
admits a representation of the form
$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),$$
where $\alpha_i \in \mathbb{R}$ for all $1 \leq i \leq n$.
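To make the representation concrete, the following sketch (with an assumed Gaussian kernel and made-up coefficients, neither of which is prescribed by the theorem) evaluates a function of the form $f^{*}(\cdot) = \sum_{i} \alpha_i k(\cdot, x_i)$: although $f^{*}$ lives in a possibly infinite-dimensional space, it is completely determined by the $n$ training inputs and the coefficient vector $\alpha \in \mathbb{R}^n$.

```python
import numpy as np

def gaussian_kernel(x, z, gamma=1.0):
    """Assumed example kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Hypothetical training inputs and coefficients; in practice alpha comes
# from minimizing the regularized empirical risk (*).
X_train = np.array([[0.0], [1.0], [2.5]])
alpha = np.array([0.7, -0.2, 1.1])

def f_star(x):
    """Evaluate f*(x) = sum_i alpha_i k(x, x_i), a finite kernel expansion."""
    return sum(a * gaussian_kernel(x, xi) for a, xi in zip(alpha, X_train))

print(f_star(np.array([1.5])))
```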
Proof: Define a mapping
$$\varphi \colon \mathcal{X} \to H_k, \qquad \varphi(x) = k(\cdot, x)$$
(so that $\varphi(x) = k(\cdot, x)$ is itself a map $\mathcal{X} \to \mathbb{R}$). Since $k$ is a reproducing kernel,
$$\varphi(x)(x') = k(x', x) = \langle \varphi(x'), \varphi(x) \rangle,$$
where $\langle \cdot, \cdot \rangle$ is the inner product on $H_k$.
Given any $x_1, \ldots, x_n$, one can use orthogonal projection to decompose any $f \in H_k$ into a sum of two functions, one lying in $\operatorname{span}\left\{ \varphi(x_1), \ldots, \varphi(x_n) \right\}$ and the other lying in the orthogonal complement:
$$f = \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v,$$
where $\langle v, \varphi(x_i) \rangle = 0$ for all $i$.
The above orthogonal decomposition and the reproducing property together show that applying $f$ to any training point $x_j$ produces
$$f(x_j) = \left\langle \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v, \, \varphi(x_j) \right\rangle = \sum_{i=1}^{n} \alpha_i \langle \varphi(x_i), \varphi(x_j) \rangle,$$
which we observe is independent of $v$. Consequently, the value of the error function $E$ in $(*)$ is likewise independent of $v$.
For the second term (the regularization term), since $v$ is orthogonal to $\sum_{i=1}^{n} \alpha_i \varphi(x_i)$ and $g$ is strictly monotonic, we have
$$g\left(\lVert f \rVert\right) = g\left(\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v \Big\rVert\right) = g\left(\sqrt{\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) \Big\rVert^{2} + \lVert v \rVert^{2}}\right) \geq g\left(\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) \Big\rVert\right).$$
Therefore setting $v = 0$ does not affect the first term of $(*)$, while it strictly decreases the second term whenever $v \neq 0$. Consequently, any minimizer $f^{*}$ in $(*)$ must have $v = 0$, i.e., it must be of the form
$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \varphi(x_i) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),$$
which is the desired result.
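The projection step of the proof can be checked numerically. The sketch below (using an assumed Gaussian kernel and arbitrary made-up expansion coefficients, none of which are prescribed by the proof) takes an $f \in H_k$ that also involves kernel functions centered at points outside the training set, projects it onto $\operatorname{span}\{\varphi(x_1),\ldots,\varphi(x_n)\}$, and verifies that the projection leaves the training-point evaluations $f(x_j)$ unchanged while not increasing the RKHS norm.

```python
import numpy as np

def k(a, b, gamma=0.5):
    """Assumed Gaussian kernel; any positive-definite kernel would do."""
    return np.exp(-gamma * (a - b) ** 2)

def gram(P, Q):
    """Matrix of kernel evaluations k(p, q) for p in P, q in Q."""
    return np.array([[k(p, q) for q in Q] for p in P])

rng = np.random.default_rng(0)
X = np.array([0.0, 1.0, 2.0])          # training inputs x_1..x_n
Z = np.array([-1.0, 0.5, 3.0])         # extra centres outside the training set
c = rng.normal(size=len(X) + len(Z))   # arbitrary coefficients defining f

P = np.concatenate([X, Z])             # f = sum_j c_j k(., p_j)
K_PP = gram(P, P)
K_XP = gram(X, P)
K_XX = gram(X, X)

# Projection of f onto span{k(., x_i)}: choose alpha so that the projection
# reproduces f's values at the training points, i.e. K_XX alpha = f(X).
f_at_X = K_XP @ c
alpha = np.linalg.solve(K_XX, f_at_X)

# 1) Training-point evaluations are unchanged by the projection.
proj_at_X = K_XX @ alpha
print(np.allclose(f_at_X, proj_at_X))          # True

# 2) The RKHS norm can only decrease: ||f||^2 >= ||P f||^2.
norm_f_sq = c @ K_PP @ c
norm_proj_sq = alpha @ K_XX @ alpha
print(norm_f_sq >= norm_proj_sq - 1e-12)       # True
```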
Generalizations
The Theorem stated above is a particular example of a family of results that are collectively referred to as "representer theorems"; here we describe several such results.
The first statement of a representer theorem was due to Kimeldorf and Wahba for the special case in which
$$E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) = \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2, \qquad g(\lVert f \rVert) = \lambda \lVert f \rVert^2$$
for $\lambda > 0$. Schölkopf, Herbrich, and Smola generalized this result by relaxing the assumption of the squared-loss cost and allowing the regularizer to be any strictly monotonically increasing function $g(\cdot)$ of the Hilbert space norm.
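To illustrate the generalized setting, the sketch below (a toy example under assumptions of my own: a Gaussian kernel, synthetic data, and the non-quadratic but strictly increasing regularizer $g(t) = \lambda t$) uses the representer theorem to restrict the search to $f = \sum_i \alpha_i k(\cdot, x_i)$ and then minimizes the resulting finite-dimensional objective with an off-the-shelf optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def gram(X, gamma=1.0):
    """Gaussian Gram matrix K_ij = k(x_i, x_j); an assumed example kernel."""
    d = X[:, None] - X[None, :]
    return np.exp(-gamma * d ** 2)

rng = np.random.default_rng(1)
X = np.linspace(0, 3, 20)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)
K = gram(X)
lam = 0.1

def objective(alpha):
    # Squared loss plus g(||f||) with g(t) = lam * t (strictly increasing).
    residual = K @ alpha - y                       # f(x_i) = (K alpha)_i
    norm_f = np.sqrt(alpha @ K @ alpha + 1e-12)    # ||f||, small epsilon for smoothness
    return np.sum(residual ** 2) + lam * norm_f

res = minimize(objective, x0=0.01 * rng.normal(size=y.size), method="L-BFGS-B")
alpha_star = res.x
print(objective(alpha_star))
```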
It is possible to generalize further by augmenting the regularized empirical risk functional through the addition of unpenalized offset terms. For example, Schölkopf, Herbrich, and Smola also consider the minimization
$$\tilde{f}^{*} = \operatorname{argmin} \left\{ E\left((x_1, y_1, \tilde{f}(x_1)), \ldots, (x_n, y_n, \tilde{f}(x_n))\right) + g\left(\lVert f \rVert\right) \;\middle|\; \tilde{f} = f + h \in H_k \oplus \operatorname{span}\{\psi_p \mid 1 \leq p \leq M\} \right\}, \quad (\dagger)$$
i.e., we consider functions of the form $\tilde{f} = f + h$, where $f \in H_k$ and $h$ is an unpenalized function lying in the span of a finite set of real-valued functions $\{\psi_p \colon \mathcal{X} \to \mathbb{R} \mid 1 \leq p \leq M\}$. Under the assumption that the $n \times M$ matrix $\left(\psi_p(x_i)\right)_{ip}$ has rank $M$, they show that the minimizer $\tilde{f}^{*}$ in $(\dagger)$ admits a representation of the form
$$\tilde{f}^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i) + \sum_{p=1}^{M} \beta_p \psi_p(\cdot),$$
where $\alpha_i, \beta_p \in \mathbb{R}$ and the $\beta_p$ are all uniquely determined.
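A common instance of this semiparametric form is a kernel model with an unpenalized intercept ($M = 1$, $\psi_1 \equiv 1$), as in support vector machines with a bias term. The sketch below (my own toy setup: Gaussian kernel, squared loss, $g(\lVert f \rVert) = \lambda \lVert f \rVert^2$, synthetic data) plugs $\tilde{f} = \sum_i \alpha_i k(\cdot, x_i) + \beta$ into the objective and solves the resulting stationarity conditions as one linear system in $(\alpha, \beta)$.

```python
import numpy as np

def gram(X, gamma=1.0):
    """Assumed Gaussian kernel Gram matrix."""
    d = X[:, None] - X[None, :]
    return np.exp(-gamma * d ** 2)

rng = np.random.default_rng(2)
X = np.linspace(-2, 2, 30)
y = 2.0 + np.tanh(X) + 0.05 * rng.normal(size=X.size)  # data with an offset
K = gram(X)
lam = 0.5
n = X.size

# Parameters theta = (alpha_1..alpha_n, beta). Objective:
#   ||K alpha + beta*1 - y||^2 + lam * alpha^T K alpha
B = np.hstack([K, np.ones((n, 1))])          # design matrix [K, 1]
D = np.zeros((n + 1, n + 1))
D[:n, :n] = K                                # penalize only the H_k part, not beta
lhs = B.T @ B + lam * D                      # stationarity: (B^T B + lam D) theta = B^T y
rhs = B.T @ y
theta, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
alpha, beta = theta[:n], theta[n]

def predict(x_new):
    k_new = np.exp(-1.0 * (x_new - X) ** 2)  # k(x_new, x_i) with gamma = 1.0
    return k_new @ alpha + beta

print(beta, predict(0.0))
```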
The conditions under which a representer theorem exists were investigated by Argyriou, Micchelli, and Pontil, who proved the following:
Theorem: Let $\mathcal{X}$ be a nonempty set, $k$ a positive-definite real-valued kernel on $\mathcal{X} \times \mathcal{X}$ with corresponding reproducing kernel Hilbert space $H_k$, and let $R \colon H_k \to \mathbb{R}$ be a differentiable regularization function. Then, given a training sample $(x_1, y_1), \ldots, (x_n, y_n) \in \mathcal{X} \times \mathbb{R}$ and an arbitrary error function $E \colon (\mathcal{X} \times \mathbb{R}^2)^n \to \mathbb{R} \cup \{\infty\}$, a minimizer
$$f^{*} = \underset{f \in H_k}{\operatorname{argmin}} \left\{ E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + R(f) \right\} \quad (\ddagger)$$
of the regularized empirical risk admits a representation of the form
$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),$$
where $\alpha_i \in \mathbb{R}$ for all $1 \leq i \leq n$, if and only if there exists a nondecreasing function $h \colon [0, \infty) \to \mathbb{R}$ for which
$$R(f) = h(\lVert f \rVert).$$
Effectively, this result provides a necessary and sufficient condition on a differentiable regularizer $R(\cdot)$ under which the corresponding regularized empirical risk minimization $(\ddagger)$ will have a representer theorem. In particular, this shows that a broad class of regularized risk minimizations (much broader than those originally considered by Kimeldorf and Wahba) have representer theorems.
Applications
Representer theorems are useful from a practical standpoint because they dramatically simplify the regularized empirical risk minimization problem $(\ddagger)$. In most interesting applications, the search domain $H_k$ for the minimization will be an infinite-dimensional subspace of $L^2(\mathcal{X})$, and therefore the search (as written) does not admit implementation on finite-memory and finite-precision computers. In contrast, the representation of $f^{*}(\cdot)$ afforded by a representer theorem reduces the original (infinite-dimensional) minimization problem to a search for the optimal $n$-dimensional vector of coefficients $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{R}^n$; $\alpha$ can then be obtained by applying any standard function minimization algorithm. Consequently, representer theorems provide the theoretical basis for the reduction of the general machine learning problem to algorithms that can actually be implemented on computers in practice.
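The sketch below illustrates this reduction under assumptions of my own (Gaussian kernel, synthetic data, a smooth pseudo-Huber loss standing in for the error function $E$, and $g(t) = \lambda t^2$): once the problem is expressed in terms of $\alpha \in \mathbb{R}^n$, a standard off-the-shelf minimizer suffices, and the optimization variable has exactly $n$ entries.

```python
import numpy as np
from scipy.optimize import minimize

def gram(X, gamma=2.0):
    """Assumed Gaussian kernel Gram matrix K_ij = k(x_i, x_j)."""
    d = X[:, None] - X[None, :]
    return np.exp(-gamma * d ** 2)

rng = np.random.default_rng(3)
X = np.linspace(0, 1, 25)
y = np.cos(4 * X) + 0.1 * rng.normal(size=X.size)
K = gram(X)
lam = 0.05

def pseudo_huber(r, delta=0.5):
    """A smooth robust loss used here as the error function E."""
    return delta ** 2 * (np.sqrt(1 + (r / delta) ** 2) - 1)

def risk(alpha):
    # f(x_i) = (K alpha)_i and ||f||^2 = alpha^T K alpha for the kernel expansion.
    residual = K @ alpha - y
    return np.sum(pseudo_huber(residual)) + lam * (alpha @ K @ alpha)

# The search space is just R^n: one coefficient per training point.
res = minimize(risk, x0=np.zeros_like(y), method="L-BFGS-B")
print(res.x.shape)   # (25,): n coefficients, not an infinite-dimensional f
```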
The following provides an example of how to solve for the minimizer whose existence is guaranteed by the representer theorem. This method works for any positive-definite kernel $k$, and allows us to transform a complicated (possibly infinite-dimensional) optimization problem into a simple linear system that can be solved numerically.
Assume that we are using a least squares error function
$$E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) = \sum_{i=1}^{n} (f(x_i) - y_i)^2$$
and a regularization function $g(x) = \lambda x^2$ for some $\lambda > 0$. By the representer theorem, the minimizer has the form
$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i^{*} \, k(\cdot, x_i)$$
for some $\alpha^{*} = (\alpha_1^{*}, \dots, \alpha_n^{*}) \in \mathbb{R}^n$. Noting that
$$f^{*}(x_i) = \sum_{j=1}^{n} \alpha_j^{*} \, k(x_i, x_j) = (A\alpha^{*})_i \quad \text{and} \quad \lVert f^{*} \rVert^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i^{*} \alpha_j^{*} \, k(x_i, x_j) = \alpha^{*\intercal} A \alpha^{*},$$
we see that $\alpha^{*}$ has the form
$$\alpha^{*} = \underset{\alpha \in \mathbb{R}^n}{\operatorname{argmin}} \; \lVert A\alpha - y \rVert^2 + \lambda \, \alpha^{\intercal} A \alpha,$$
where $A_{ij} = k(x_i, x_j)$ and $y = (y_1, \dots, y_n)$. This can be expanded and simplified (dropping the constant term $y^{\intercal} y$) to
$$\alpha^{*} = \underset{\alpha \in \mathbb{R}^n}{\operatorname{argmin}} \; \alpha^{\intercal}(A^{\intercal}A + \lambda A)\alpha - 2\alpha^{\intercal}Ay.$$
Since $A^{\intercal}A + \lambda A$ is positive definite, there is indeed a single global minimum for this expression. Let $F(\alpha) = \alpha^{\intercal}(A^{\intercal}A + \lambda A)\alpha - 2\alpha^{\intercal}Ay$ and note that $F$ is convex. Then $\alpha^{*}$, the global minimum, can be found by setting $\nabla_{\alpha} F = 0$. Recalling that all positive definite matrices are invertible, we see that
$$\nabla_{\alpha} F = 2(A^{\intercal}A + \lambda A)\alpha^{*} - 2Ay = 0 \quad \Longrightarrow \quad \alpha^{*} = (A^{\intercal}A + \lambda A)^{-1} A y,$$
so the minimizer may be found via a linear solve.
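A minimal sketch of this recipe in code, under assumptions of my own (a Gaussian kernel and synthetic one-dimensional data): build the Gram matrix $A$, solve $(A^{\intercal}A + \lambda A)\alpha^{*} = Ay$, and use $\alpha^{*}$ to evaluate $f^{*}$ at new inputs.

```python
import numpy as np

def k(a, b, gamma=1.0):
    """Assumed Gaussian kernel; with distinct inputs its Gram matrix is positive definite."""
    return np.exp(-gamma * (a - b) ** 2)

rng = np.random.default_rng(4)
X = np.linspace(0.0, 5.0, 20)                      # training inputs x_1..x_n
y = np.sin(X) + 0.1 * rng.normal(size=X.size)      # training targets y_1..y_n
lam = 0.1

# Gram matrix A_ij = k(x_i, x_j)
A = k(X[:, None], X[None, :])

# Linear solve from the derivation above: (A^T A + lam * A) alpha = A y.
# (In practice a small "jitter" multiple of the identity is sometimes added
# for numerical stability when A is nearly singular.)
alpha_star = np.linalg.solve(A.T @ A + lam * A, A @ y)

def f_star(x_new):
    """Evaluate f*(x) = sum_i alpha_i k(x, x_i) at new inputs."""
    x_new = np.atleast_1d(np.asarray(x_new, dtype=float))
    return k(x_new[:, None], X[None, :]) @ alpha_star

print(f_star([0.5, 2.5, 4.0]))
```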
See also
Mercer's theorem
Kernel methods
References
Argyriou, Andreas; Micchelli, Charles A.; Pontil, Massimiliano (2009). "When Is There a Representer Theorem? Vector Versus Matrix Regularizers". Journal of Machine Learning Research. 10 (Dec): 2507–2529.
Cucker, Felipe; Smale, Steve (2002). "On the Mathematical Foundations of Learning". Bulletin of the American Mathematical Society. 39 (1): 1–49. doi:10.1090/S0273-0979-01-00923-5. MR 1864085.
Kimeldorf, George S.; Wahba, Grace (1970). "A correspondence between Bayesian estimation on stochastic processes and smoothing by splines". The Annals of Mathematical Statistics. 41 (2): 495–502. doi:10.1214/aoms/1177697089.
Schölkopf, Bernhard; Herbrich, Ralf; Smola, Alex J. (2001). "A Generalized Representer Theorem". Computational Learning Theory. Lecture Notes in Computer Science. Vol. 2111. pp. 416–426. CiteSeerX 10.1.1.42.8617. doi:10.1007/3-540-44581-1_27. ISBN 978-3-540-42343-0.