Kernelization
In computer science, a kernelization is a technique for designing efficient algorithms that achieve their efficiency by a preprocessing stage in which inputs to the algorithm are replaced by a smaller input, called a "kernel". The result of solving the problem on the kernel should either be the same as on the original input, or it should be easy to transform the output on the kernel to the desired output for the original problem.
Kernelization is often achieved by applying a set of reduction rules that cut away parts of the instance that are easy to handle. In parameterized complexity theory, it is often possible to prove that a kernel with guaranteed bounds on its size (as a function of some parameter associated to the problem) can be found in polynomial time. When this is possible, it results in a fixed-parameter tractable algorithm whose running time is the sum of the (polynomial-time) kernelization step and the (non-polynomial but bounded by the parameter) time to solve the kernel. Indeed, every problem that can be solved by a fixed-parameter tractable algorithm can be solved by a kernelization algorithm of this type. This is also true for approximate kernelization.
Example: vertex cover
A standard example of a kernelization algorithm is the kernelization of the vertex cover problem due to S. Buss.
In this problem, the input is an undirected graph $G$ together with a number $k$. The output is a set of at most $k$ vertices that includes an endpoint of every edge in the graph, if such a set exists, or a failure exception if no such set exists. This problem is NP-hard. However, the following reduction rules may be used to kernelize it:
If $k > 0$ and $v$ is a vertex of degree greater than $k$, remove $v$ from the graph and decrease $k$ by one. Every vertex cover of size $k$ must contain $v$, since otherwise too many of its neighbors would have to be picked to cover the incident edges. Thus, an optimal vertex cover for the original graph may be formed from a cover of the reduced problem by adding $v$ back to the cover.
If $v$ is an isolated vertex, remove it. An isolated vertex cannot cover any edges, so in this case $v$ cannot be part of any minimal cover.
If more than $k^2$ edges remain in the graph, and neither of the previous two rules can be applied, then the graph cannot contain a vertex cover of size $k$: after eliminating all vertices of degree greater than $k$, each remaining vertex covers at most $k$ edges, so a set of $k$ vertices can cover at most $k^2$ edges. In this case, the instance may be replaced by an instance with two vertices, one edge, and $k = 0$, which also has no solution.
An algorithm that applies these rules repeatedly until no more reductions can be made necessarily terminates with a kernel that has at most $k^2$ edges and (because each edge has at most two endpoints and there are no isolated vertices) at most $2k^2$ vertices. This kernelization may be implemented in linear time. Once the kernel has been constructed, the vertex cover problem may be solved by a brute force search algorithm that tests whether each subset of the kernel is a cover of the kernel.
Thus, the vertex cover problem can be solved in time $O(2^{2k^2} + n + m)$ for a graph with $n$ vertices and $m$ edges, allowing it to be solved efficiently when $k$ is small even if $n$ and $m$ are both large.
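As an illustration, the following Python sketch implements Buss's reduction rules and the brute-force search on the resulting kernel. It is written for this article and is not code from any of the cited sources; the function names and the edge-set representation are choices made here.

```python
from itertools import combinations

def buss_kernel(edges, k):
    """Apply the reduction rules above until none applies.
    Returns (kernel_edges, reduced_k, forced_vertices), or None if the
    instance has no vertex cover of size k.  Isolated vertices are handled
    implicitly, since only edges are stored."""
    edges = {frozenset((u, v)) for u, v in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            # Rule 1: a vertex of degree > k belongs to every cover of size k.
            if k > 0 and d > k:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    # Rule 3: once all degrees are at most k, more than k^2 edges cannot be covered.
    if len(edges) > k * k:
        return None
    return edges, k, forced

def cover_kernel_by_brute_force(kernel_edges, k):
    """Try every subset of at most k kernel vertices (the 2^{O(k^2)} step)."""
    vertices = sorted({v for e in kernel_edges for v in e})
    for size in range(min(k, len(vertices)) + 1):
        for subset in combinations(vertices, size):
            chosen = set(subset)
            if all(chosen & e for e in kernel_edges):
                return chosen
    return None
```

A cover of the original graph, if one exists, is then the union of the forced vertices and the cover found in the kernel, within the $O(2^{2k^2} + n + m)$ bound stated above.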
Although this bound is fixed-parameter tractable, its dependence on the parameter is higher than might be desired. More complex kernelization procedures can improve this bound by finding smaller kernels, at the expense of greater running time in the kernelization step. In the vertex cover example, kernelization algorithms are known that produce kernels with at most $2k$ vertices.
One algorithm that achieves this improved bound exploits the half-integrality of the linear program relaxation of vertex cover due to Nemhauser and Trotter. Another kernelization algorithm achieving that bound is based on what is known as the crown reduction rule and uses alternating path arguments. The currently best known kernelization algorithm in terms of the number of vertices is due to Lampis (2011) and achieves $2k - c \log k$ vertices for any fixed constant $c$.
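The idea behind the LP-based bound can be sketched as follows: solve the linear programming relaxation of vertex cover; by the Nemhauser–Trotter theorem, vertices with LP value greater than 1/2 can be forced into the cover, vertices with value less than 1/2 can be discarded, and the vertices with value exactly 1/2 (at most $2k$ of them when a cover of size $k$ exists) form the kernel. The Python sketch below is illustrative only; it assumes SciPy is available, assumes the solver returns a basic (and therefore half-integral) optimal solution, and uses a small tolerance when comparing values to 1/2.

```python
import numpy as np
from scipy.optimize import linprog

def lp_based_kernel(vertices, edges, k, tol=1e-6):
    """Sketch of an LP-based (Nemhauser-Trotter style) kernel for vertex cover:
    minimize sum_v x_v subject to x_u + x_v >= 1 on every edge, 0 <= x_v <= 1."""
    index = {v: i for i, v in enumerate(vertices)}
    c = np.ones(len(vertices))
    A_ub = np.zeros((len(edges), len(vertices)))   # encode -x_u - x_v <= -1
    for row, (u, v) in enumerate(edges):
        A_ub[row, index[u]] = -1.0
        A_ub[row, index[v]] = -1.0
    b_ub = -np.ones(len(edges))
    result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, 1)] * len(vertices), method="highs-ds")
    if result.fun > k + tol:
        return None   # the LP optimum already exceeds k, so no cover of size k exists
    x = result.x
    forced = {v for v in vertices if x[index[v]] > 0.5 + tol}     # go into the cover
    half = {v for v in vertices if abs(x[index[v]] - 0.5) <= tol}  # the kernel vertices
    kernel_edges = [(u, v) for (u, v) in edges if u in half and v in half]
    return half, kernel_edges, k - len(forced), forced
```

Since each half-valued vertex contributes 1/2 to an LP optimum of at most $k$, the kernel returned by such a procedure has at most $2k$ vertices.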
It is not possible, in this problem, to find a kernel of size $O(\log k)$ unless P = NP, for such a kernel would lead to a polynomial-time algorithm for the NP-hard vertex cover problem. However, much stronger bounds on the kernel size can be proven in this case: unless coNP $\subseteq$ NP/poly (believed unlikely by complexity theorists), for every $\epsilon > 0$ it is impossible in polynomial time to find kernels with $O(k^{2-\epsilon})$ edges.
It is unknown for vertex cover whether kernels with $(2-\epsilon)k$ vertices for some $\epsilon > 0$ would have any unlikely complexity-theoretic consequences.
Definition
In the literature, there is no clear consensus on how kernelization should be formally defined and there are subtle differences in the uses of that expression.
= Downey–Fellows notation =
In the notation of Downey & Fellows (1999), a parameterized problem is a subset $L \subseteq \Sigma^* \times \mathbb{N}$ describing a decision problem.
A kernelization for a parameterized problem $L$ is an algorithm that takes an instance $(x, k)$ and maps it in time polynomial in $|x|$ and $k$ to an instance $(x', k')$ such that
- $(x, k)$ is in $L$ if and only if $(x', k')$ is in $L$,
- the size of $x'$ is bounded by a computable function $f$ in $k$, and
- $k'$ is bounded by a function in $k$.
The output $(x', k')$ of kernelization is called a kernel. In this general context, the size of the string $x'$ just refers to its length. Some authors prefer to use the number of vertices or the number of edges as the size measure in the context of graph problems.
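To make the three conditions concrete, the following small Python sketch checks them for a single instance/kernel pair, given a decision procedure for the language and a size bound. All names here are illustrative, not part of any standard library, and for simplicity the same bound $f$ is used for both the instance size and the parameter.

```python
from typing import Callable, Tuple

Instance = Tuple[str, int]   # an input string x together with its parameter k

def satisfies_kernel_conditions(original: Instance, kernel: Instance,
                                in_language: Callable[[Instance], bool],
                                f: Callable[[int], int]) -> bool:
    """Check the Downey-Fellows conditions for one pair (x, k) -> (x', k')."""
    (x, k), (x2, k2) = original, kernel
    equivalent = in_language((x, k)) == in_language((x2, k2))   # same yes/no answer
    small_instance = len(x2) <= f(k)                            # |x'| bounded in k
    small_parameter = k2 <= f(k)                                # k' bounded in k
    return equivalent and small_instance and small_parameter
```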
= Flum–Grohe notation =
In the notation of Flum & Grohe (2006, p. 4), a parameterized problem consists of a decision problem $L \subseteq \Sigma^*$ and a function $\kappa : \Sigma^* \to \mathbb{N}$, the parameterization. The parameter of an instance $x$ is the number $\kappa(x)$.
A kernelization for a parameterized problem $L$ is an algorithm that takes an instance $x$ with parameter $k$ and maps it in polynomial time to an instance $y$ such that
- $x$ is in $L$ if and only if $y$ is in $L$, and
- the size of $y$ is bounded by a computable function $f$ in $k$.
Note that in this notation, the bound on the size of $y$ implies that the parameter of $y$ is also bounded by a function in $k$.
The function $f$ is often referred to as the size of the kernel. If $f = k^{O(1)}$, it is said that $L$ admits a polynomial kernel. Similarly, for $f = O(k)$, the problem admits a linear kernel.
Kernelizability and fixed-parameter tractability are equivalent
A problem is fixed-parameter tractable if and only if it is kernelizable and decidable.
That a kernelizable and decidable problem is fixed-parameter tractable can be seen from the definition above:
First, the kernelization algorithm, which runs in time $O(|x|^c)$ for some constant $c$, is invoked to generate a kernel of size $f(k)$.
The kernel is then solved by the algorithm that proves that the problem is decidable.
The total running time of this procedure is $g(f(k)) + O(|x|^c)$, where $g(n)$ is the running time of the algorithm used to solve the kernels.
Since $g(f(k))$ is computable (for example, by using the assumption that $f(k)$ is computable and testing all possible inputs of length $f(k)$), this implies that the problem is fixed-parameter tractable.
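The two steps of this argument compose as in the following generic Python sketch; `kernelize` and `decide_kernel` are placeholders for a problem-specific kernelization and decision procedure, not concrete algorithms.

```python
def solve_by_kernelization(instance, kernelize, decide_kernel):
    """Run the polynomial-time kernelization, then decide the kernel.
    The first step takes O(|x|^c) time; the second takes g(f(k)) time,
    which depends only on the parameter, giving g(f(k)) + O(|x|^c) overall."""
    kernel = kernelize(instance)
    return decide_kernel(kernel)
```

For the vertex cover example above, plugging in Buss's kernelization and the brute-force search yields the $O(2^{2k^2} + n + m)$ bound quoted earlier.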
The other direction, that a fixed-parameter tractable problem is kernelizable and decidable, is a bit more involved. Assume that the question is non-trivial, meaning that there is at least one instance that is in the language, called $I_{yes}$, and at least one instance that is not in the language, called $I_{no}$; otherwise, replacing any instance by the empty string is a valid kernelization. Assume also that the problem is fixed-parameter tractable, i.e., it has an algorithm that runs in at most $f(k) \cdot |x|^c$ steps on instances $(x, k)$, for some constant $c$ and some function $f(k)$.
To kernelize an input, run this algorithm on the given input for at most $|x|^{c+1}$ steps. If it terminates with an answer, use that answer to select either $I_{yes}$ or $I_{no}$ as the kernel. If, instead, it exceeds the $|x|^{c+1}$ bound on the number of steps without terminating, then return $(x, k)$ itself as the kernel. Because $(x, k)$ is only returned as a kernel for inputs with $f(k) \cdot |x|^c > |x|^{c+1}$, that is, only when $|x| < f(k)$, it follows that the size of the kernel produced in this way is at most $\max\{|I_{yes}|, |I_{no}|, f(k)\}$. This size bound is computable, by the assumption from fixed-parameter tractability that $f(k)$ is computable.
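The construction in this direction can be summarized by the following Python sketch. The helper `fpt_decide_with_budget` is hypothetical: it is assumed to simulate the $f(k) \cdot |x|^c$-time algorithm for at most a given number of elementary steps and to report whether it finished.

```python
def kernelize_from_fpt_algorithm(x, k, fpt_decide_with_budget, I_YES, I_NO, c):
    """Produce a kernel for (x, k) from an FPT algorithm, as argued above.
    `fpt_decide_with_budget(x, k, budget)` returns True/False if the simulated
    algorithm halts within `budget` steps, and None otherwise (hypothetical)."""
    budget = len(x) ** (c + 1)
    answer = fpt_decide_with_budget(x, k, budget)
    if answer is True:
        return I_YES      # any fixed yes-instance of the problem
    if answer is False:
        return I_NO       # any fixed no-instance of the problem
    # The budget was exceeded, so f(k) * |x|^c > |x|^(c+1), hence |x| < f(k):
    # the input itself is already small enough to serve as the kernel.
    return (x, k)
```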
More examples
Vertex cover parameterized by the size of the vertex cover: The vertex cover problem has kernels with at most $2k$ vertices and $O(k^2)$ edges. Furthermore, for any $\varepsilon > 0$, vertex cover does not have kernels with $O(k^{2-\varepsilon})$ edges unless $\text{coNP} \subseteq \text{NP/poly}$. The vertex cover problem in $d$-uniform hypergraphs has kernels with $O(k^d)$ edges using the sunflower lemma, and it does not have kernels of size $O(k^{d-\varepsilon})$ unless $\text{coNP} \subseteq \text{NP/poly}$.
Feedback vertex set parameterized by the size of the feedback vertex set: The feedback vertex set problem has kernels with $4k^2$ vertices and $O(k^2)$ edges. Furthermore, it does not have kernels with $O(k^{2-\varepsilon})$ edges unless $\text{coNP} \subseteq \text{NP/poly}$.
$k$-path: The $k$-path problem is to decide whether a given graph has a path of length at least $k$. This problem has kernels of size exponential in $k$, and it does not have kernels of size polynomial in $k$ unless $\text{coNP} \subseteq \text{NP/poly}$.
Bidimensional problems: Many parameterized versions of bidimensional problems have linear kernels on planar graphs, and more generally, on graphs excluding some fixed graph as a minor.
Kernelization for structural parameterizations
While the parameter $k$ in the examples above is chosen as the size of the desired solution, this is not necessary. It is also possible to choose a structural complexity measure of the input as the parameter value, leading to so-called structural parameterizations. This approach is fruitful for instances whose solution size is large, but for which some other complexity measure is bounded. For example, the feedback vertex number of an undirected graph $G$ is defined as the minimum cardinality of a set of vertices whose removal makes $G$ acyclic. The vertex cover problem parameterized by the feedback vertex number of the input graph has a polynomial kernelization: there is a polynomial-time algorithm that, given a graph $G$ whose feedback vertex number is $k$, outputs a graph $G'$ on $O(k^3)$ vertices such that a minimum vertex cover in $G'$ can be transformed into a minimum vertex cover for $G$ in polynomial time. The kernelization algorithm therefore guarantees that instances with a small feedback vertex number $k$ are reduced to small instances.
See also
Iterative compression, a different design technique for fixed-parameter tractable algorithms
Approximate kernelization, a variant for optimization problems in which the kernel may lose a given factor in the solution quality
References
Abu-Khzam, Faisal N.; Collins, Rebecca L.; Fellows, Michael R.; Langston, Michael A.; Suters, W. Henry; Symons, Chris T. (2004), Kernelization Algorithms for the Vertex Cover Problem: Theory and Experiments (PDF), University of Tennessee.
Bodlaender, Hans L.; Downey, Rod G.; Fellows, Michael R.; Hermelin, Danny (2009), "On problems without polynomial kernels", Journal of Computer and System Sciences, 75 (8): 423–434, doi:10.1016/j.jcss.2009.04.001.
Buss, Jonathan F.; Goldsmith, Judy (1993), "Nondeterminism within $P^*$", SIAM Journal on Computing, 22 (3): 560–572, doi:10.1137/0222038, S2CID 43081484.
Chen, Jianer; Kanj, Iyad A.; Jia, Weijia (2001), "Vertex cover: Further observations and further improvements", Journal of Algorithms, 41 (2): 280–301, doi:10.1006/jagm.2001.1186, S2CID 13557005.
Dell, Holger; van Melkebeek, Dieter (2010), "Satisfiability allows no nontrivial sparsification unless the polynomial-time hierarchy collapses" (PDF), Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC 2010) (PDF), pp. 251–260, doi:10.1145/1806689.1806725, ISBN 978-1-4503-0050-6, S2CID 1117711.
Downey, R. G.; Fellows, M. R. (1999), Parameterized Complexity, Monographs in Computer Science, Springer, doi:10.1007/978-1-4612-0515-9, ISBN 0-387-94883-X, MR 1656112, S2CID 15271852.
Flum, Jörg; Grohe, Martin (2006), Parameterized Complexity Theory, Springer, ISBN 978-3-540-29952-3, retrieved 2010-03-05.
Fomin, Fedor V.; Lokshtanov, Daniel; Saurabh, Saket; Thilikos, Dimitrios M. (2010), "Bidimensionality and kernels", Proceedings of the 21st ACM-SIAM Symposium on Discrete Algorithms (SODA 2010), pp. 503–510.
Jansen, Bart M. P.; Bodlaender, Hans L. (2013), "Vertex Cover Kernelization Revisited - Upper and Lower Bounds for a Refined Parameter", Theory of Computing Systems, 53 (2): 263–299, arXiv:1012.4701, doi:10.1007/s00224-012-9393-4.
Lampis, Michael (2011), "A kernel of order 2k − c log k for vertex cover", Information Processing Letters, 111 (23–24): 1089–1091, doi:10.1016/j.ipl.2011.09.003.
Thomassé, Stéphan (2010), "A 4k² kernel for feedback vertex set", ACM Transactions on Algorithms, 6 (2): 1–8, doi:10.1145/1721837.1721848, S2CID 7510317.
Niedermeier, Rolf (2006), Invitation to Fixed-Parameter Algorithms, Oxford University Press, CiteSeerX 10.1.1.2.9618, ISBN 0-19-856607-7, archived from the original on 2008-09-24, retrieved 2017-06-01.
Further reading
Fomin, Fedor V.; Lokshtanov, Daniel; Saurabh, Saket; Zehavi, Meirav (2019), Kernelization: Theory of Parameterized Preprocessing, Cambridge University Press, p. 528, doi:10.1017/9781107415157, ISBN 978-1107057760
Niedermeier, Rolf (2006), Invitation to Fixed-Parameter Algorithms, Oxford University Press, Chapter 7, ISBN 0-19-856607-7, archived from the original on 2007-09-29, retrieved 2017-06-01
Cygan, Marek; Fomin, Fedor V.; Kowalik, Lukasz; Lokshtanov, Daniel; Marx, Daniel; Pilipczuk, Marcin; Pilipczuk, Michal; Saurabh, Saket (2015), Parameterized Algorithms, Springer, Chapters 2 and 9, ISBN 978-3-319-21274-6