Generic-case complexity
      Generic-case complexity is a subfield of computational complexity theory that studies the complexity of computational problems on "most inputs". It measures the complexity of a computational problem by neglecting a small set of unrepresentative inputs and considering worst-case complexity on the rest, where "small" is defined in terms of asymptotic density. The apparent efficacy of generic-case complexity stems from the fact that, for a wide variety of concrete computational problems, the most difficult instances seem to be rare; typical instances are relatively easy.
      This approach to complexity originated in combinatorial group theory, which has a computational tradition going back to the beginning of the last century.
      The notion of generic complexity was introduced in a 2003 paper, where the authors showed that for a large class of finitely generated groups, the generic-case time complexity of several classical decision problems from combinatorial group theory, namely the word problem, the conjugacy problem and the membership problem, is linear.
      A detailed introduction to generic-case complexity can be found in the surveys.


      Basic definitions




      = Asymptotic density =
      Let I be an infinite set of inputs for a computational problem.
      Definition 1. A size function on I is a map $\sigma : I \to \mathbb{N}$ with infinite range. The ball of radius n is $B_{n} = \{x \in I \mid \sigma(x) \leq n\}$.
      If the inputs are coded as strings over a finite alphabet, size might be the string length.
      Let $\{\mu_{n}\}$ be an ensemble of probability distributions, where $\mu_{n}$ is a probability distribution on $B_{n}$. If the balls $B_{n}$ are finite, then each $\mu_{n}$ can be taken to be the equiprobable distribution, which is the most common case. Notice that only finitely many $B_{n}$'s can be empty or have $\mu_{n}(B_{n}) = 0$; we ignore them.
      Definition 2. The asymptotic density of a subset $X \subset I$ is $\rho(X) = \lim_{n\to\infty} \mu_{n}(X \cap B_{n})$, when this limit exists.
      When the balls $B_{n}$ are finite and $\mu_{n}$ is the equiprobable measure,
      $$\rho(X) = \lim_{n\to\infty} \frac{|X \cap B_{n}|}{|B_{n}|}.$$
      In this case it is often convenient to use spheres $I_{n} = \{x \in I \mid \sigma(x) = n\}$ instead of balls and to define
      $$\rho'(X) = \lim_{n\to\infty} \frac{|X \cap I_{n}|}{|I_{n}|}.$$
      An argument using the Stolz–Cesàro theorem shows that $\rho(X)$ exists if $\rho'(X)$ does, and in that case they are equal.
      Definition 3. $X \subseteq I$ is generic if $\rho(X) = 1$ and negligible if $\rho(X) = 0$.
      X is exponentially (superpolynomially) generic if the convergence to the limit in Definition 2 is exponentially (superpolynomially) fast, etc.
      A generic subset X is asymptotically large. Whether X appears large in practice depends on how fast $\mu_{n}(X \cap B_{n})$ converges to 1. Superpolynomial convergence seems to be fast enough.
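      To make Definitions 2 and 3 concrete, here is a minimal Python sketch (an illustration, not from the source) that estimates the spherical density of a subset X of binary words by exhaustively enumerating small spheres. The particular X, the words containing '00', is an arbitrary choice:

      ```python
      from itertools import product

      def sphere(n):
          """All binary words of length n (the sphere I_n)."""
          return (''.join(bits) for bits in product('01', repeat=n))

      def spherical_density(in_X, n):
          """|X intersect I_n| / |I_n| under the equiprobable measure on I_n."""
          hits = sum(1 for w in sphere(n) if in_X(w))
          return hits / 2 ** n

      # X = words containing '00'.  Words avoiding '00' are counted by the
      # Fibonacci numbers, so the density of X tends to 1 exponentially fast:
      # X is exponentially generic and its complement is negligible.
      in_X = lambda w: '00' in w
      for n in range(2, 15, 3):
          print(n, round(spherical_density(in_X, n), 4))
      ```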


      = Generic complexity classes =
      Definition 4. An algorithm is in GenP (generically polynomial time) if it never gives incorrect answers and if it gives correct answers in polynomial time on a generic set of inputs. A problem is in GenP if it admits an algorithm in GenP. Likewise for GenL (generically linear time), GenE (generically exponential time with a linear exponent), GenExp (generically exponential time), etc.
      ExpGenP is the subclass of GenP for which the relevant generic set is exponentially generic.
      More generally, for any $f : \mathbb{N} \to \mathbb{N}$ we can define the class Gen(f) corresponding to time complexity O(f) on a generic set of inputs.
      Definition 5. An algorithm solves a problem generically if it never gives incorrect answers and if it gives correct answers on a generic set of inputs. A problem is generically solvable if it is solved generically by some algorithm.
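      The shape of such an algorithm is easy to show in code. Below is a minimal Python sketch (an illustration, not an algorithm from the literature): a fast partial procedure that answers correctly on a generic set and explicitly declines elsewhere, chained with an expensive complete procedure as a fallback:

      ```python
      def generically_solve(x, fast_partial, complete):
          """Skeleton in the sense of Definitions 4 and 5: fast_partial must
          return a correct answer or None, and must answer on a generic set
          of inputs.  Chaining a complete (possibly very slow) procedure
          after it yields a complete algorithm fast on most inputs."""
          ans = fast_partial(x)
          return ans if ans is not None else complete(x)

      def composite_by_small_factor(w):
          """Toy partial algorithm for 'is the number encoded by w composite?'.
          Trial division up to len(w)**2 takes time polynomial in len(w) and
          certifies an answer on a set of density tending to 1 (most integers
          have a small prime factor, though the convergence is slow); on the
          remaining inputs it declines with None and never guesses."""
          m = int(w, 2)
          for d in range(2, max(2, len(w) ** 2) + 1):
              if m % d == 0:
                  return m != d          # m == d means m itself is a small prime
          return None

      brute = lambda w: any(int(w, 2) % d == 0 for d in range(2, int(w, 2)))
      print(generically_solve('101101', composite_by_small_factor, brute))
      ```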


      Theory and applications




      = Combinatorial group theory problems =
      The word, conjugacy and membership decision problems, famously undecidable in general, are generically polynomial under suitable hypotheses.
      The word and conjugacy search problems are in GenP for all fixed finitely presented groups.
      The well known coset enumeration procedure admits a computable upper bound on a generic set of inputs.
      The Whitehead algorithm for testing whether or not one element of a free group is mapped to another by an automorphism has an exponential worst-case upper bound but runs well in practice; it has been shown to be in GenL.
      The conjugacy problem in HNN extensions can be unsolvable even for free groups; generically, however, it is solvable in cubic time.


      = The halting problem and the Post correspondence problem =
      The halting problem for Turing machines with one-sided tape is easily decidable most of the time; it is in GenP.
      The situation for machines with two-sided tape is unknown. However, there is a kind of lower bound for machines of both types: the halting problem is not in ExpGenP for either model of Turing machine.
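      The result for one-sided tape rests on the observation of Hamkins and Miasnikov that a random Turing machine started on a blank one-sided tape tends to fall off the left edge of the tape quickly, before doing anything complicated, and on that generic set halting is trivially decidable. A rough Python experiment in this spirit (with an arbitrary encoding of random machines; illustrative only) might look as follows:

      ```python
      import random

      def random_machine(q):
          """Random transition table over states 0..q-1 and tape alphabet {0,1}:
          (state, symbol) -> (next state, written symbol, move in {-1, +1})."""
          return {(s, a): (random.randrange(q), random.randrange(2),
                           random.choice((-1, 1)))
                  for s in range(q) for a in range(2)}

      def falls_off_quickly(delta, budget=1000):
          """Simulate on a blank one-sided tape; return True if the head falls
          off the left edge within the budget, the generic behaviour that
          makes the halting problem easy to decide for this model."""
          tape, pos, state = {}, 0, 0
          for _ in range(budget):
              state, w, move = delta[(state, tape.get(pos, 0))]
              tape[pos] = w
              pos += move
              if pos < 0:
                  return True
          return False

      trials, q = 2000, 5
      frac = sum(falls_off_quickly(random_machine(q)) for _ in range(trials)) / trials
      print(f"fraction falling off the tape quickly: {frac:.2f}")
      ```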

      The Post correspondence problem is in ExpGenP.


      = Presburger arithmetic =
      The decision problem for Presburger arithmetic admits a double-exponential worst-case lower bound and a triple-exponential worst-case upper bound. The generic complexity is not known, but it is known that the problem is not in ExpGenP.


      = NP complete problems =
      Since it is well known that NP-complete problems can be easy on average, it is no surprise that several of them are generically easy too.
      The three-satisfiability problem (3-SAT) is in ExpGenP.
      The subset sum problem is in GenP.
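      As an illustration of the general shape of such results (a toy in the GenP mould, not the algorithm behind the ExpGenP bound for 3-SAT), a partial procedure may simply try a bounded number of random assignments and decline when none works; it never returns an incorrect answer, and whether the set of inputs on which it answers is generic depends entirely on the input distribution:

      ```python
      import random

      def partial_3sat(clauses, n_vars, tries=200):
          """Toy partial algorithm for 3-SAT: answers 'satisfiable' with a
          certifying assignment, or declines with None; never answers
          incorrectly.  A literal v > 0 stands for x_v, v < 0 for not x_v."""
          for _ in range(tries):
              assign = [random.random() < 0.5 for _ in range(n_vars)]
              if all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                     for clause in clauses):
                  return assign          # certificate of satisfiability
          return None                    # decline: no wrong answers

      # (x1 or not x2 or x3) and (not x1 or x2 or x3)
      print(partial_3sat([(1, -2, 3), (-1, 2, 3)], 3))
      ```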


      = One-way functions =
      There is a generic complexity version of a one-way function which yields the same class of functions but allows one to consider different security assumptions than usual.


      = Public-key cryptography =
      A series of articles is devoted to cryptanalysis of the Anshel–Anshel–Goldfeld key exchange protocol, whose security is based on assumptions about the braid group. This series culminates in Miasnikov and Ushakov (2008) which applies techniques from generic case complexity to obtain a complete analysis of the length based attack and the conditions under which it works. The generic point of view also suggests a kind of new attack called the quotient attack, and a more secure version of the Anshel–Anshel–Goldfeld protocol.


      = List of general theoretical results =
      Rice's famous theorem states that if F is a subset of the set of partial computable functions from $\mathbb{N}$ to $\{0,1\}$, then unless F or its complement is empty, the problem of deciding whether or not a particular Turing machine computes a function in F is undecidable. The following theorem gives a generic version.
      Theorem 1. Let I be the set of all Turing machines. If F is a subset of the set of all partial computable functions from $\mathbb{N}$ to itself such that F and its complement are both non-empty, then the problem of deciding whether or not a given Turing machine computes a function from F is not decidable on any exponentially generic subset of I.

      The following theorems are from:
      Theorem 2. The set of formal languages which are generically computable has measure zero.
      Theorem 3. There is an infinite hierarchy of generic complexity classes. More precisely, for a proper complexity function f, $Gen(f) \subsetneq Gen(f^{3})$.
      The next theorem shows that just as there are average-case complete problems among distributional NP problems, there are also generic-case complete problems. The arguments in the generic case are similar to those in the average case, and the generic-case complete problem is also average-case complete. It is the distributional bounded halting problem.
      Theorem 4. There is a notion of generic-polynomial-time reduction with respect to which the distributional bounded halting problem is complete within the class of distributional NP problems.


      Comparisons with previous work




      = Almost polynomial time =
      Meyer and Paterson define an algorithm to be almost polynomial time, or APT, if it halts within p(n) steps on all but p(n) inputs of size n. Since the number of inputs of size n typically grows exponentially, the exceptional inputs form a negligible set, and APT algorithms are included in our class GenP. We have seen several NP-complete problems in GenP, but Meyer and Paterson show that this is not the case for APT: they prove that an NP-complete problem is reducible to a problem in APT if and only if P = NP. Thus APT seems much more restrictive than GenP.


      = Average-case complexity =
      Generic-case complexity is similar to average-case complexity, but there are some significant differences. Generic-case complexity directly measures the performance of an algorithm on most inputs, while average-case complexity measures the balance between easy and difficult instances. In addition, generic-case complexity naturally applies to undecidable problems.
      Suppose $\mathcal{A}$ is an algorithm whose time complexity $T : I \to \mathbb{N}$ is polynomial on $\mu$-average. What can we infer about the behavior of $\mathcal{A}$ on typical inputs?
      Example 1. Let I be the set of all words over $\{0,1\}$ and define the size $\sigma(w)$ to be the word length $|w|$. Define $I_{n}$ to be the set of words of length n, and assume that each $\mu_{n}$ is the equiprobable measure. Suppose that $T(w) = n$ for all but one word in each $I_{n}$, and $T(w) = 2^{2^{n}}$ on the exceptional words. In this example T is certainly polynomial on typical inputs, but T is not polynomial on average. T is in GenP.
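      A short computation, immediate from the definitions, makes both claims explicit. Under the equiprobable measure on $I_{n}$,
      $$\sum_{w \in I_{n}} T(w)\,\mu_{n}(w) \;=\; \frac{(2^{n}-1)\,n + 2^{2^{n}}}{2^{n}} \;\geq\; 2^{2^{n}-n},$$
      which grows faster than any polynomial, and it still does after replacing $T$ by $T^{\varepsilon}$ for any fixed $\varepsilon > 0$; so T is not polynomial on average. On the other hand the exceptional words form a set of spherical density $2^{-n} \to 0$, so $T(w) = n$ off an exponentially negligible set and T is in GenP (indeed GenL).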
      Example 2. Keep I and $\sigma(w) = |w|$ as before, but define $\mu(w) = 2^{-2|w|-1}$ and $T(w) = 2^{|w|}$. T is polynomial on average even though it is exponential on typical inputs. T is not in GenP.
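      Again a short computation makes this explicit. Under one standard definition (Levin's), T is polynomial on $\mu$-average if $\sum_{w} T(w)^{\varepsilon}\,\mu(w)/|w| < \infty$ for some $\varepsilon > 0$; taking $\varepsilon = 1/2$,
      $$\sum_{n \geq 1} \sum_{|w| = n} \frac{T(w)^{1/2}}{n}\,\mu(w) \;=\; \sum_{n \geq 1} 2^{n} \cdot \frac{2^{n/2}}{n} \cdot 2^{-2n-1} \;=\; \sum_{n \geq 1} \frac{2^{-n/2}}{2n} \;<\; \infty.$$
      Yet $T(w) = 2^{|w|}$ exceeds every polynomial on every input, hence on every generic set, so T is not in GenP.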
      In these two examples the generic complexity is more closely related to behavior on typical inputs than average case complexity. Average case complexity measures something else: the balance between the frequency of difficult instances and the degree of difficulty.
      Roughly speaking, an algorithm which is polynomial time on average can have only a subpolynomial fraction of inputs that require superpolynomial time to compute.
      Nevertheless, in some cases generic and average case complexity are quite close to each other.
      A function $f : I \to \mathbb{R}^{+}$ is polynomial on $\mu$-average on spheres if there exists $k \geq 1$ such that
      $$\sum_{w \in I_{n}} f^{1/k}(w)\,\mu_{n}(w) = O(n),$$
      where $\{\mu_{n}\}$ is the ensemble induced by $\mu$. If f is polynomial on $\mu$-average on spheres, then f is polynomial on $\mu$-average, and for many distributions the converse holds.

      Theorem 5. If a function $f : I \to \mathbb{R}^{+}$ is polynomial on $\mu$-average on spheres, then f is generically polynomial relative to the spherical asymptotic density $\rho'$.
      Theorem 6. Suppose a complete algorithm $\mathcal{A}$ has subexponential time bound T and a partial algorithm $\mathcal{B}$ for the same problem is in ExpGenP with respect to the ensemble $\{\mu_{n}\}$ corresponding to a probability measure $\mu$ on the inputs I for $\mathcal{A}$. Then there is a complete algorithm which is polynomial time on $\mu$-average.


      = Errorless heuristic algorithms =
      In a 2006 paper, Bogdanov and Trevisan came close to defining generic-case complexity. Instead of partial algorithms, they consider so-called errorless heuristic algorithms. These are complete algorithms which may fail by halting with output "?". The class AvgnegP is defined to consist of all errorless heuristic algorithms A which run in polynomial time and for which the probability of failure on $I_{n}$ is negligible, i.e., converges superpolynomially fast to 0. AvgnegP is a subset of GenP. Errorless heuristic algorithms are essentially the same as the algorithms with benign faults defined by Impagliazzo, where polynomial-on-average algorithms are characterized in terms of so-called benign algorithm schemes.
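      The difference between the two notions is easy to state in code. The Python sketch below (illustrative only) contrasts an errorless heuristic, which always halts but may output "?", with a partial algorithm, which may fail to halt outside its generic set:

      ```python
      def errorless_heuristic(x, fast):
          """AvgnegP-style: always halts in polynomial time; returns a
          correct answer or the explicit failure symbol '?'."""
          ans = fast(x)                  # fast is correct whenever it answers
          return ans if ans is not None else '?'

      def partial_algorithm(x, fast, complete):
          """GenP-style: correct on a generic set; elsewhere it may fall
          through to an unbounded search that need not terminate."""
          ans = fast(x)
          return ans if ans is not None else complete(x)
      ```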


      See also


      Smoothed analysis - a related concept that measures the worst case of the expected runtime under small random perturbations of the input.

