- Source: PLS (complexity)
In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time. Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time.
Furthermore, depending on the problem and the algorithm that is used for solving the problem, it might be faster to find a local optimum instead of a global optimum.
Description
When searching for a local optimum, there are two interesting issues to deal with: first, how to find a local optimum, and second, how long it takes to find one. For many local search algorithms it is not known whether they can find a local optimum in polynomial time or not. So, to answer the question of how long it takes to find a local optimum, Johnson, Papadimitriou and Yannakakis introduced the complexity class PLS in their paper "How easy is local search?". It contains local search problems for which local optimality can be verified in polynomial time.
A local search problem is in PLS, if the following properties are satisfied:
The size of every solution is polynomially bounded in the size of the instance I.
It is possible to find some solution of a problem instance in polynomial time.
It is possible to calculate the cost of each solution in polynomial time.
It is possible to find all neighbors of each solution in polynomial time.
With these properties, it is possible to find for each solution s the best neighboring solution or, if there is no such better neighboring solution, to state that s is a local optimum.
= Example =
Consider the following instance I of the Max-2Sat problem:
(x_1 ∨ x_2) ∧ (¬x_1 ∨ x_3) ∧ (¬x_2 ∨ x_3)
The aim is to find an assignment that maximizes the number of satisfied clauses.
A solution s for that instance is a bit string that assigns every x_i the value 0 or 1. In this case a solution consists of 3 bits, for example s = 000, which stands for the assignment of x_1 to x_3 with the value 0. The set of solutions F_L(I) is the set of all possible assignments of x_1, x_2 and x_3.
The cost of each solution is the number of satisfied clauses, so c_L(I, s = 000) = 2, because the second and third clauses are satisfied.
A Flip-neighbor of a solution s is reached by flipping one bit of the bit string s, so the neighbors of s are N(I, 000) = {100, 010, 001} with the following costs:
c_L(I, 100) = 2
c_L(I, 010) = 2
c_L(I, 001) = 2
There are no neighbors with better cost than s, if we are looking for a solution with maximum cost. Even though s is not a global optimum (which would be, for example, the solution s' = 111 that satisfies all clauses and has c_L(I, s') = 3), s is a local optimum, because none of its neighbors has better cost.
Intuitively it can be argued that this problem lies in PLS, because:
It is possible to find a solution to an instance in polynomial time, for example by setting all bits to 0.
It is possible to calculate the cost of a solution in polynomial time, by going once through the whole instance and counting the clauses that are satisfied.
It is possible to find all neighbors of a solution in polynomial time, by taking the set of solutions that differ from s in exactly one bit.
If we are simply counting the number of satisfied clauses, the problem can be solved in polynomial time since the number of possible costs is polynomial. However, if we assign each clause a positive integer weight (and seek to locally maximize the sum of weights of satisfied clauses), the problem becomes PLS-complete (below).
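The Max-2Sat example above can be checked mechanically. The following sketch (plain Python; the three clauses of the instance are hard-coded, and all names are illustrative) computes the cost of an assignment, enumerates its Flip-neighbors, and verifies that s = 000 is a local but not a global optimum.

```python
# Clauses of the example instance: positive i means x_i, negative means ¬x_i.
CLAUSES = [(1, 2), (-1, 3), (-2, 3)]

def cost(assignment):
    """Number of satisfied clauses; assignment maps variable index -> 0/1."""
    def literal(l):
        value = assignment[abs(l)]
        return value if l > 0 else 1 - value
    return sum(1 for clause in CLAUSES if any(literal(l) for l in clause))

def flip_neighbors(assignment):
    """All assignments at Hamming distance one (the Flip neighborhood)."""
    for i in assignment:
        neighbor = dict(assignment)
        neighbor[i] = 1 - neighbor[i]
        yield neighbor

def is_local_optimum(assignment):
    """True iff no Flip-neighbor has strictly better cost."""
    c = cost(assignment)
    return all(cost(n) <= c for n in flip_neighbors(assignment))

s = {1: 0, 2: 0, 3: 0}  # the bit string 000
# cost(s) == 2 and every Flip-neighbor also has cost 2, so s is a local
# optimum, while 111 satisfies all three clauses and is the global optimum.
```

Verifying local optimality here takes one pass over the neighbors, which illustrates why the membership conditions above only demand polynomial time per step.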
Formal Definition
A local search problem L has a set D_L of instances which are encoded using strings over a finite alphabet Σ. For each instance I there exists a finite solution set F_L(I). Let R be the relation that models L. The relation
R ⊆ D_L × F_L(I) := {(I, s) | I ∈ D_L, s ∈ F_L(I)}
is in PLS if:
The size of every solution s ∈ F_L(I) is polynomially bounded in the size of I
Problem instances I ∈ D_L and solutions s ∈ F_L(I) are verifiable in polynomial time
There is a polynomial-time computable function A : D_L → F_L(I) that returns for each instance I ∈ D_L some solution s ∈ F_L(I)
There is a polynomial-time computable function B : D_L × F_L(I) → ℝ⁺ that returns for each solution s ∈ F_L(I) of an instance I ∈ D_L the cost c_L(I, s)
There is a polynomial-time computable function N : D_L × F_L(I) → Powerset(F_L(I)) that returns the set of neighbors for an instance-solution pair
There is a polynomial-time computable function C : D_L × F_L(I) → N(I, s) ∪ {OPT} that returns a neighboring solution s' with better cost than solution s, or states that s is locally optimal
For every instance I ∈ D_L, R contains exactly the pairs (I, s) where s is a locally optimal solution of I
An instance I ∈ D_L has the structure of an implicit graph (also called its transition graph): the vertices are the solutions, and two solutions s, s' ∈ F_L(I) are connected by a directed arc iff s' ∈ N(I, s).
A local optimum is a solution s that has no neighbor with better cost. In the implicit graph, a local optimum is a sink. A neighborhood in which every local optimum is also a global optimum, i.e. a solution with the best possible cost, is called an exact neighborhood.
= Alternative Definition =
The class PLS is the class containing all problems that can be reduced in polynomial time to the problem Sink-of-DAG (also called Local-Opt):
Given two integers n and m and two Boolean circuits S : {0,1}^n → {0,1}^n such that S(0^n) ≠ 0^n and V : {0,1}^n → {0, 1, ..., 2^m − 1}, find a vertex x ∈ {0,1}^n such that S(x) ≠ x and either S(S(x)) = S(x) or V(S(x)) ≤ V(x).
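As a sketch of how a Sink-of-DAG instance is solved by following successors, the fragment below walks the successor circuit from the all-zeros vertex until the condition above is met. S and V stand in for the two Boolean circuits (here: ordinary Python functions on bit strings), and the 2-bit toy instance is hypothetical.

```python
def sink_of_dag(S, V, n):
    """Walk the successor circuit S from 0^n until a vertex x is found
    with S(x) != x and either S(S(x)) == S(x) or V(S(x)) <= V(x)."""
    x = "0" * n  # the definition guarantees S(0^n) != 0^n
    while True:
        y = S(x)
        if y != x and (S(y) == y or V(y) <= V(x)):
            return x
        x = y  # otherwise V strictly increased, so the walk terminates

# Hypothetical 2-bit instance: S steps through 00 -> 01 -> 10 -> 11,
# with 11 a fixed point, and V is the numeric value of the bit string.
succ = {"00": "01", "01": "10", "10": "11", "11": "11"}
S = succ.__getitem__
V = lambda x: int(x, 2)
# sink_of_dag(S, V, 2) returns "10": its successor "11" is a fixed point,
# so the condition S(S(x)) == S(x) holds.
```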
= Example neighborhood structures =
Example neighborhood structures for problems with Boolean variables (or bit strings) as solutions:
Flip - The neighbors of a solution s = x_1, ..., x_n are obtained by negating (flipping) one arbitrary bit x_i. So a solution s and all its neighbors r ∈ N(I, s) have Hamming distance one: H(s, r) = 1.
Kernighan-Lin - A solution r is a neighbor of solution s if r can be obtained from s by a sequence of greedy flips, where no bit is flipped twice. This means that, starting with s, the Flip-neighbor s_1 of s with the best cost, or the least loss of cost, is chosen as a neighbor of s in the Kernighan-Lin structure, then the best (or least worst) Flip-neighbor of s_1, and so on, until s_i is the solution in which every bit of s is negated. Note that a bit is not allowed to be flipped back once it has been flipped.
k-Flip - A solution r is a neighbor of solution s if the Hamming distance H between s and r is at most k, so H(s, r) ≤ k.
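The Flip and k-Flip structures on bit strings can be sketched as follows (plain Python; the function name is illustrative). For k = 1 it yields exactly the Flip neighborhood.

```python
from itertools import combinations

def k_flip_neighbors(s, k):
    """All bit strings r with Hamming distance 1 <= H(s, r) <= k from s."""
    neighbors = []
    for distance in range(1, k + 1):
        for positions in combinations(range(len(s)), distance):
            bits = list(s)
            for i in positions:  # flip every chosen position
                bits[i] = "1" if bits[i] == "0" else "0"
            neighbors.append("".join(bits))
    return neighbors

# k = 1 gives the Flip neighborhood of 000: ['100', '010', '001'].
```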
Example neighborhood structures for problems on graphs:
Swap - A partition (P_2, P_3) of the nodes of a graph is a neighbor of a partition (P_0, P_1) if (P_2, P_3) can be obtained from (P_0, P_1) by swapping one node p_0 ∈ P_0 with a node p_1 ∈ P_1.
Kernighan-Lin - A partition (P_2, P_3) is a neighbor of (P_0, P_1) if (P_2, P_3) can be obtained by a greedy sequence of swaps of nodes in P_0 with nodes in P_1. This means that those two nodes p_0 ∈ P_0 and p_1 ∈ P_1 are swapped for which the resulting partition ((P_0 ∖ p_0) ∪ p_1, (P_1 ∖ p_1) ∪ p_0) gains the highest possible weight, or loses the least possible weight. Note that no node is allowed to be swapped twice. This rule is based on the Kernighan–Lin heuristic for graph partitioning.
Fiduccia-Mattheyses - This neighborhood is similar to the Kernighan-Lin structure: it is a greedy sequence of swaps, except that each swap happens in two steps. First the node p_0 ∈ P_0 with the highest gain of cost, or the least loss of cost, is moved to P_1; then the node p_1 ∈ P_1 with the highest gain of cost, or the least loss of cost, is moved to P_0 to balance the partitions again. Experiments have shown that Fiduccia-Mattheyses has a smaller run time per iteration of the standard algorithm, though it sometimes finds an inferior local optimum.
FM-Swap - This neighborhood structure is based on the Fiduccia-Mattheyses structure. Each solution s = (P_0, P_1) has only one neighbor, the partition obtained after the first swap of the Fiduccia-Mattheyses heuristic.
The standard algorithm
Consider the following computational problem:
Given some instance I of a PLS problem L, find a locally optimal solution s ∈ F_L(I) such that c_L(I, s') ≥ c_L(I, s) for all s' ∈ N(I, s).
Every local search problem can be solved using the following iterative improvement algorithm:
Use A_L to find an initial solution s.
Use algorithm C_L to find a better solution s' ∈ N(I, s). If such a solution exists, replace s by s' and repeat step 2; else return s.
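The two steps above amount to the following generic sketch (plain Python; A, C and the toy cost function are illustrative stand-ins for the algorithms A_L and C_L of the formal definition):

```python
def standard_algorithm(A, C, I):
    """Iterative improvement: start from the initial solution A(I) and follow
    better neighbors returned by C until C signals a local optimum (None
    stands in for OPT here)."""
    s = A(I)
    while (better := C(I, s)) is not None:
        s = better
    return s

# Toy instance: maximize the number of 1-bits in a 3-bit string under Flip.
def A(I):
    return "000"

def cost(I, s):
    return s.count("1")

def C(I, s):
    for i in range(len(s)):
        neighbor = s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]
        if cost(I, neighbor) > cost(I, s):
            return neighbor
    return None  # no better neighbor: s is a local optimum

# standard_algorithm(A, C, None) climbs 000 -> 100 -> 110 -> 111.
```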
Unfortunately, it generally takes an exponential number of improvement steps to find a local optimum, even if the problem L can be solved exactly in polynomial time. It is not always necessary to use the standard algorithm; there may be a different, faster algorithm for a certain problem. For example, a local search algorithm used for linear programming is the simplex algorithm.
The run time of the standard algorithm is pseudo-polynomial in the number of different costs of a solution.
The standard algorithm needs only polynomial space: it only has to store the current solution s, whose size is polynomially bounded by definition.
Reductions
A reduction of one problem to another may be used to show that the second problem is at least as difficult as the first. In particular, a PLS-reduction is used to prove that a local search problem that lies in PLS is PLS-complete, by reducing a known PLS-complete problem to it.
= PLS-reduction =
A local search problem L_1 is PLS-reducible to a local search problem L_2 if there are two polynomial-time functions f : D_1 → D_2 and g : D_1 × F_2(f(I_1)) → F_1(I_1)
such that:
if I_1 is an instance of L_1, then f(I_1) is an instance of L_2
if s_2 is a solution for f(I_1) of L_2, then g(I_1, s_2) is a solution for I_1 of L_1
if s_2 is a local optimum for instance f(I_1) of L_2, then g(I_1, s_2) has to be a local optimum for instance I_1 of L_1
It is sufficient to map only the local optima of f(I_1) to the local optima of I_1, and to map all other solutions, for example, to the standard solution returned by A_1.
PLS-reductions are transitive.
= Tight PLS-reduction =
Definition (transition graph)
The transition graph T_I of an instance I of a problem L is a directed graph. The nodes represent all elements of the finite set of solutions F_L(I) and the edges point from one solution to the neighbor with strictly better cost. Therefore it is an acyclic graph. A sink, which is a node with no outgoing edges, is a local optimum.
The height of a vertex v is the length of the shortest path from v to the nearest sink. The height of the transition graph is the largest of the heights of all vertices, i.e. the length of the longest among the shortest paths from each node to its nearest sink.
Definition (tight PLS-reduction)
A PLS-reduction (f, g) from a local search problem L_1 to a local search problem L_2 is a tight PLS-reduction if for any instance I_1 of L_1 a subset R of solutions of the instance I_2 = f(I_1) of L_2 can be chosen so that the following properties are satisfied:
R contains, among other solutions, all local optima of I_2
For every solution p of I_1, a solution q ∈ R of I_2 = f(I_1) with g(I_1, q) = p can be constructed in polynomial time
If the transition graph T_{f(I_1)} of f(I_1) contains a directed path from q to q_0 with q, q_0 ∈ R, but all internal vertices of the path are outside R, then for the corresponding solutions p = g(I_1, q) and p_0 = g(I_1, q_0) either p = p_0 holds or T_{I_1} contains an edge from p to p_0
Relationship to other complexity classes
PLS lies between the functional versions of P and NP: FP ⊆ PLS ⊆ FNP.
PLS is also a subclass of TFNP, which describes computational problems in which a solution is guaranteed to exist and can be recognized in polynomial time. For a problem in PLS, a solution is guaranteed to exist because the minimum-cost vertex of the entire transition graph is a valid solution, and the validity of a solution can be checked by computing its neighbors and comparing its cost with theirs.
It is also proven that if a PLS problem is NP-hard, then NP = co-NP.
PLS-completeness
= Definition =
A local search problem L is PLS-complete, if
L is in PLS
every problem in PLS can be PLS-reduced to L
The optimization version of the circuit problem under the Flip neighborhood structure has been shown to be the first PLS-complete problem.
= List of PLS-complete Problems =
This is an incomplete list of some known problems that are PLS-complete. The problems here are the weighted versions; for example, Max-2SAT/Flip is weighted even though Max-2SAT ordinarily refers to the unweighted version.
Notation: Problem / Neighborhood structure
Min/Max-circuit/Flip has been proven to be the first PLS-complete problem.
Sink-of-DAG is complete by definition.
Positive-not-all-equal-max-3Sat/Flip has been proven to be PLS-complete via a tight PLS-reduction from Min/Max-circuit/Flip to Positive-not-all-equal-max-3Sat/Flip. Note that Positive-not-all-equal-max-3Sat/Flip can be reduced from Max-Cut/Flip too.
Positive-not-all-equal-max-3Sat/Kernighan-Lin has been proven to be PLS-complete via a tight PLS-reduction from Min/Max-circuit/Flip to Positive-not-all-equal-max-3Sat/Kernighan-Lin.
Max-2Sat/Flip has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-2Sat/Flip.
Min-4Sat-B/Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-circuit/Flip to Min-4Sat-B/Flip.
Max-4Sat-B/Flip(or CNF-SAT) has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-B/Flip.
Max-4Sat-(B=3)/Flip has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-(B=3)/Flip.
Max-Uniform-Graph-Partitioning/Swap has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-Uniform-Graph-partitioning/Swap.
Max-Uniform-Graph-Partitioning/Fiduccia-Mattheyses is stated to be PLS-complete without proof.
Max-Uniform-Graph-Partitioning/FM-Swap has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-Uniform-Graph-partitioning/FM-Swap.
Max-Uniform-Graph-Partitioning/Kernighan-Lin has been proven to be PLS-complete via a PLS-reduction from Min/Max-circuit/Flip to Max-Uniform-Graph-Partitioning/Kernighan-Lin. There is also a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Kernighan-Lin to Max-Uniform-Graph-Partitioning/Kernighan-Lin.
Max-Cut/Flip has been proven to be PLS-complete via a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Flip to Max-Cut/Flip.
Max-Cut/Kernighan-Lin is claimed to be PLS-complete without proof.
Min-Independent-Dominating-Set-B/k-Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-4Sat-B′/Flip to Min-Independent-Dominating-Set-B/k-Flip.
Weighted-Independent-Set/Change is claimed to be PLS-complete without proof.
Maximum-Weighted-Subgraph-with-property-P/Change is PLS-complete if property P = "has no edges", as it then equals Weighted-Independent-Set/Change. It has also been proven to be PLS-complete for a general hereditary, non-trivial property P via a tight PLS-reduction from Weighted-Independent-Set/Change to Maximum-Weighted-Subgraph-with-property-P/Change.
Set-Cover/k-change has been proven to be PLS-complete for each k ≥ 2 via a tight PLS-reduction from (3, 2, r)-Max-Constraint-Assignment/Change to Set-Cover/k-change.
Metric-TSP/k-Change has been proven to be PLS-complete via a PLS-reduction from Max-4Sat-B/Flip to Metric-TSP/k-Change.
Metric-TSP/Lin-Kernighan has been proven to be PLS-complete via a tight PLS-reduction from Max-2Sat/Flip to Metric-TSP/Lin-Kernighan.
Local-Multi-Processor-Scheduling/k-change has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to Local-Multi-Processor-scheduling/(2p+q)-change, where (2p + q) ≥ 8.
Selfish-Multi-Processor-Scheduling/k-change-with-property-t has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to (2p+q)-Selfish-Multi-Processor-Scheduling/k-change-with-property-t, where (2p + q) ≥ 8.
Finding a pure Nash Equilibrium in a General-Congestion-Game/Change has been proven PLS-complete via a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Flip to General-Congestion-Game/Change.
Finding a pure Nash Equilibrium in a Symmetric General-Congestion-Game/Change has been proven to be PLS-complete via a tight PLS-reduction from an asymmetric General-Congestion-Game/Change to symmetric General-Congestion-Game/Change.
Finding a pure Nash Equilibrium in an Asymmetric Directed-Network-Congestion-Games/Change has been proven to be PLS-complete via a tight reduction from Positive-not-all-equal-max-3Sat/Flip to Directed-Network-Congestion-Games/Change and also via a tight PLS-reduction from 2-Threshold-Games/Change to Directed-Network-Congestion-Games/Change.
Finding a pure Nash Equilibrium in an Asymmetric Undirected-Network-Congestion-Games/Change has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games/Change to Asymmetric Undirected-Network-Congestion-Games/Change.
Finding a pure Nash Equilibrium in a Symmetric Distance-Bounded-Network-Congestion-Games has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games to Symmetric Distance-Bounded-Network-Congestion-Games.
Finding a pure Nash Equilibrium in a 2-Threshold-Game/Change has been proven to be PLS-complete via a tight reduction from Max-Cut/Flip to 2-Threshold-Game/Change.
Finding a pure Nash Equilibrium in Market-Sharing-Game/Change with polynomial bounded costs has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games/Change to Market-Sharing-Game/Change.
Finding a pure Nash Equilibrium in an Overlay-Network-Design/Change has been proven to be PLS-complete via a reduction from 2-Threshold-Games/Change to Overlay-Network-Design/Change. Analogously to the proof of asymmetric Directed-Network-Congestion-Game/Change, the reduction is tight.
Min-0-1-Integer Programming/k-Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-4Sat-B′/Flip to Min-0-1-Integer Programming/k-Flip.
Max-0-1-Integer Programming/k-Flip is claimed to be PLS-complete via a PLS-reduction, but the proof is left out.
(p, q, r)-Max-Constraint-Assignment
(3, 2, 3)-Max-Constraint-Assignment-3-partite/Change has been proven to be PLS-complete via a tight PLS-reduction from Circuit/Flip to (3, 2, 3)-Max-Constraint-Assignment-3-partite/Change.
(2, 3, 6)-Max-Constraint-Assignment-2-partite/Change has been proven to be PLS-complete via a tight PLS-reduction from Circuit/Flip to (2, 3, 6)-Max-Constraint-Assignment-2-partite/Change.
(6, 2, 2)-Max-Constraint-Assignment/Change has been proven to be PLS-complete via a tight reduction from Circuit/Flip to (6, 2, 2)-Max-Constraint-Assignment/Change.
(4, 3, 3)-Max-Constraint-Assignment/Change equals Max-4Sat-(B=3)/Flip and has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip. It is claimed that the reduction can be extended so tightness is obtained.
Nearest-Colorful-Polytope/Change has been proven to be PLS-complete via a PLS-reduction from Max-2Sat/Flip to Nearest-Colorful-Polytope/Change.
Stable-Configuration/Flip in a Hopfield network has been proven to be PLS-complete if the thresholds are 0 and the weights are negative via a tight PLS-reduction from Max-Cut/Flip to Stable-Configuration/Flip.
Weighted-3Dimensional-Matching/(p, q)-Swap has been proven to be PLS-complete for p ≥9 and q ≥ 15 via a tight PLS-reduction from (2, 3, r)-Max-Constraint-Assignment-2-partite/Change to Weighted-3Dimensional-Matching/(p, q)-Swap.
The problem Real-Local-Opt (finding the ε-local optimum of a λ-Lipschitz continuous objective function V : [0,1]^3 → [0,1] and a neighborhood function S : [0,1]^3 → [0,1]^3) is PLS-complete.
Finding a local fitness peak in biological fitness landscapes specified by the NK-model/Point-mutation with K ≥ 2 was proven to be PLS-complete via a tight PLS-reduction from Max-2SAT/Flip.
Relations to other complexity classes
Fearnley, Goldberg, Hollender and Savani proved that a complexity class called CLS is equal to the intersection of PPAD and PLS.
Further reading
Equilibria, fixed points, and complexity classes: a survey.
References
Yannakakis, Mihalis (2009), "Equilibria, fixed points, and complexity classes", Computer Science Review, 3 (2): 71–85, CiteSeerX 10.1.1.371.5034, doi:10.1016/j.cosrev.2009.03.004.