Halting problem
    • In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.
      A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability as it means no program f will solve the halting problem in every possible case.


      Background


      The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
      For example, in pseudocode, the program

      while (true) continue
      does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program

      print "Hello, world!"
      does halt.
      While deciding whether these programs halt is simple, more complex programs prove problematic. One approach to the problem might be to run the program for some number of steps and check if it halts. However, as long as the program is running, it is unknown whether it will eventually halt or run forever. Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct.
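      As a concrete illustration of why bounded simulation is not a decision procedure, the following Python sketch (the program representation and the step-by-step API are hypothetical, not part of the original text) can only ever answer "halts" or "unknown", never a definite "does not halt":

      # A minimal sketch, assuming a hypothetical step-by-step interpreter API.
      def check_halts_within(program, program_input, max_steps):
          """Run `program` on `program_input` for at most `max_steps` steps.

          Returns "halts" if the program stops within the budget, otherwise
          "unknown" -- it can never soundly return "does not halt".
          """
          state = program.initial_state(program_input)   # hypothetical API
          for _ in range(max_steps):
              if state.is_halted():                      # hypothetical API
                  return "halts"
              state = state.step()                       # hypothetical API
          return "unknown"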


      = Programming consequences =
      Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish. In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish, but are also guaranteed to finish before a given deadline.
      Sometimes these programmers use some general-purpose (Turing-complete) programming language,
      but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.
      Other times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete. Frequently, these are languages that guarantee all subroutines finish, such as Coq.


      = Common pitfalls =
      The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "does not halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.
      There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "does not halt" for programs that do not halt.
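      That one-sided behavior can be sketched as follows, assuming a hypothetical interpreter function interpret(program, program_input) that simulates the program and returns only when the simulated program halts:

      def semi_decide_halting(program, program_input):
          # Simulate the program; this call returns only if the simulation halts.
          interpret(program, program_input)   # hypothetical interpreter, may run forever
          return "halts"                      # reached only when the program halted
          # There is no point at which "does not halt" could soundly be returned.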
      The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration:

      ...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine...
      However, a computer with a million small parts, each with two states, would have at least 2^1,000,000 possible states:

      This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle:
      Although a machine may be finite, and finite automata "have a number of theoretical limitations":

      ...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance.
      It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
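      The repetition argument above, for a deterministic machine with finite memory, can be sketched in Python as follows (initial_configuration, is_halting, and step are hypothetical helpers; configurations are assumed hashable):

      def halts_finite_memory(machine, machine_input):
          """Decide halting for a deterministic machine with finitely many configurations."""
          seen = set()
          config = initial_configuration(machine, machine_input)   # hypothetical helper
          while True:
              if is_halting(config):                               # hypothetical helper
                  return True
              if config in seen:    # a repeated configuration forces an infinite loop
                  return False
              seen.add(config)
              config = step(machine, config)                       # hypothetical helper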


      History



      In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing's proof was published later, in January 1937. Since then, many other undecidable problems have been described, including the halting problem, which emerged under that name in the 1950s.


      = Timeline =
      1900: David Hilbert poses his "23 questions" (now known as Hilbert's problems) at the Second International Congress of Mathematicians in Paris. "Of these, the second was that of proving the consistency of the 'Peano axioms' on which, as he had shown, the rigour of mathematics depended".
      1920–1921: Emil Post explores the halting problem for tag systems, regarding it as a candidate for unsolvability. Its unsolvability was not established until much later, by Marvin Minsky.
      1928: Hilbert recasts his 'Second Problem' at the Bologna International Congress. He posed three questions: #1: Was mathematics complete? #2: Was mathematics consistent? #3: Was mathematics decidable? The third question is known as the Entscheidungsproblem (Decision Problem).
      1930: Kurt Gödel announces a proof as an answer to the first two of Hilbert's 1928 questions. "At first he [Hilbert] was only angry and frustrated, but then he began to try to deal constructively with the problem... Gödel himself felt—and expressed the thought in his paper—that his work did not contradict Hilbert's formalistic point of view".
      1931: Gödel publishes "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I".
      19 April 1935: Alonzo Church publishes "An Unsolvable Problem of Elementary Number Theory", which proposes that the intuitive notion of an effectively calculable function can be formalized by the general recursive functions or equivalently by the lambda-definable functions. He proves that the halting problem for lambda calculus (i.e., whether a given lambda-expression has a normal form) is not effectively calculable.
      1936: Church publishes the first proof that the Entscheidungsproblem is unsolvable, using a notion of calculation by recursive functions.
      7 October 1936: Emil Post's paper "Finite Combinatory Processes. Formulation I" is received. Post adds to his "process" an instruction "(C) Stop". He called such a process "type 1 ... if the process it determines terminates for each specific problem."
      May 1936 – January 1937: Alan Turing's paper On Computable Numbers With an Application to the Entscheidungsproblem goes to press in May 1936 and reaches print in January 1937. Turing proves three problems undecidable: the "satisfaction" problem, the "printing" problem, and the Entscheidungsproblem. Turing's proof differs from Church's by introducing the notion of computation by machine. This is one of the "first examples of decision problems proved unsolvable".
      1939: J. Barkley Rosser observes the essential equivalence of "effective method" defined by Gödel, Church, and Turing.
      1943: In a paper, Stephen Kleene states that "In setting up a complete algorithmic theory, what we do is describe a procedure ... which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, 'Yes' or 'No,' to the question, 'Is the predicate value true?'."
      1952: Kleene includes a discussion of the unsolvability of the halting problem for Turing machines and reformulates it in terms of machines that "eventually stop", i.e. halt: "...there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops."
      1952: Martin Davis uses the term 'halting problem' in a series of lectures at the Control Systems Laboratory at the University of Illinois. It is likely that this is the first such use of the term.


      = Origin of the halting problem =
      Many papers and textbooks attribute the definition and the proof of undecidability of the halting problem to Turing's 1936 paper. However, this is not correct. Turing did not use the terms "halt" or "halting" in any of his published works, including his 1936 paper. A search of the academic literature from 1936 to 1958 showed that the first published material using the term "halting problem" was Rogers (1957). However, Rogers says he had a draft of Davis (1958) available to him, and Martin Davis states in the introduction that "the expert will perhaps find some novelty in the arrangement and treatment of topics", so the terminology must be attributed to Davis. Davis stated in a letter that he had been referring to the halting problem since 1952. The usage in Davis's book is as follows:

      "[...] we wish to determine whether or not [a Turing machine] Z, if placed in a given initial state, will eventually halt. We call this problem the halting problem for Z. [...]
      Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable.

      A related problem is the printing problem for a simple Turing machine Z with respect to a symbol S_i".
      A possible precursor to Davis's formulation is Kleene's 1952 statement, which differs only in wording:

      there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops.
      The halting problem is Turing equivalent to both Davis's printing problem ("does a Turing machine starting from a given state ever print a given symbol?") and to the printing problem considered in Turing's 1936 paper ("does a Turing machine starting from a blank tape ever print a given symbol?"). However, Turing equivalence is rather loose and does not mean that the two problems are the same. There are machines which print but do not halt, and others which halt but never print. The printing and halting problems address different issues and exhibit important conceptual and technical differences. Thus, Davis was simply being modest when he said:

      It might also be mentioned that the unsolvability of essentially these problems was first obtained by Turing.


      Formalization


      In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag systems.
      What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
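      A small sketch of one such mapping in Python (the three-letter alphabet below is an arbitrary example, not from the source); each string is read as the digits of a number in base n:

      ALPHABET = ['a', 'b', 'c']                 # example alphabet with n = 3 characters

      def string_to_number(s):
          """Interpret s as a base-n numeral (a simple sketch; not necessarily one-to-one)."""
          n = len(ALPHABET)
          value = 0
          for ch in s:
              value = value * n + ALPHABET.index(ch)
          return value

      # Example: string_to_number("cab") is 2*9 + 0*3 + 1, i.e. 19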


      = Representation as a set =

      The conventional representation of decision problems is the set of objects possessing the property in question. The halting set

      K = {(i, x) | program i halts when run on input x}
      represents the halting problem.
      This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.
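      A sketch of why K is recursively enumerable: dovetail over all pairs and step budgets, emitting a pair as soon as a bounded run halts. The helper runs_within(i, x, steps) is a hypothetical bounded simulator, not something defined in the text; emitted pairs may repeat, which does not affect enumerability.

      from itertools import count

      def enumerate_halting_pairs():
          """Yield every pair (i, x) such that program i halts on input x, eventually."""
          for bound in count(1):                     # dovetail: ever-larger step budgets
              for i in range(bound):
                  for x in range(bound):
                      if runs_within(i, x, bound):   # hypothetical: True if program i
                          yield (i, x)               # halts on input x within `bound` steps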
      There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:

      {i | program i eventually halts when run with input 0}
      {i | there is an input x such that program i eventually halts when run with input x}.


      = Proof concept =
      Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable. The proof proceeds as follows: Suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider the following subroutine:
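      In Python-style pseudocode, using the names halts, g, and loop_forever from the surrounding discussion (halts is the hypothetical total decider assumed above), the subroutine can be sketched as:

      def loop_forever():
          while True:
              pass

      def g():
          if halts(g):          # `halts` is the assumed total decider
              loop_forever()    # if halts predicts that g halts, g instead runs forever
          # otherwise g returns immediately, i.e. it halts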

      halts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, g does the opposite of what halts says g should do, so halts(g) can not return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false.


      = Sketch of rigorous proof =
      The concept above shows the general method of the proof, but the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. Moreover, the definition of g is self-referential. A rigorous proof addresses these issues. The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h (for "halts") is not computable:




      h(i, x) = \begin{cases} 1 & \text{if program } i \text{ halts on input } x, \\ 0 & \text{otherwise.} \end{cases}


      Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of computation.

      The proof proceeds by directly establishing that no total computable function with two arguments can be the required function h. As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e:




      g(i) = \begin{cases} 0 & \text{if } f(i, i) = 0, \\ \text{undefined} & \text{otherwise.} \end{cases}


      The verification that g is computable relies on the following constructs (or their equivalents):

      computable subprograms (the program that computes f is a subprogram in program e),
      duplication of values (program e computes the inputs i,i for f from the input i for g),
      conditional branching (program e selects between two results depending on the value it computes for f(i,i)),
      not producing a defined result (for example, by looping forever),
      returning a value of 0.
      The following pseudocode for e illustrates a straightforward way to compute g:
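      In Python-style pseudocode (f names the assumed total computable binary function from above, and the names e and g follow the surrounding proof):

      def e(i):
          # Duplicate the input and call the assumed total computable function f.
          if f(i, i) == 0:
              return 0          # so g(i) = 0
          else:
              while True:       # loop forever, so g(i) is undefined
                  pass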

      Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
      It follows from the definition of g that exactly one of the following two cases must hold:

      f(e,e) = 0 and so g(e) = 0. In this case program e halts on input e, so h(e,e) = 1.
      f(e,e) ≠ 0 and so g(e) is undefined. In this case program e does not halt on input e, so h(e,e) = 0.
      In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.
      This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, where the value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array: if the array has a 0 at position (i,i), then g(i) is 0; otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume f were the halting function h. If g(e) is defined (g(e) = 0 in this case), then program e halts on input e, so f(e,e) = 1; but g(e) = 0 only when f(e,e) = 0, a contradiction. Similarly, if g(e) is not defined, then f(e,e) = 0, which by g's construction makes g(e) = 0, contradicting the assumption that g(e) is undefined. In both cases a contradiction arises, so no total computable function f can be the halting function h.


      Computability theory



      A typical method of proving a problem P to be undecidable is to reduce the halting problem to P.
      For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If an algorithm could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts.
      Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halt for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true." Also, this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, "halt on input 0 within 100 steps" is not a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.
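      As an illustration of why even the single property "halts for the input 0" is undecidable, the standard reduction can be sketched as follows; decides_halts_on_zero and run are hypothetical names, and the assumed decider is of course impossible. A decider for this one property would immediately yield a solver for the general halting problem:

      def halts(program, program_input):
          """Hypothetical halting decider built from an assumed decider for 'halts on input 0'."""
          def wrapper(_ignored):
              # Ignore the argument and run `program` on `program_input`;
              # `wrapper` halts on 0 exactly when `program` halts on `program_input`.
              run(program, program_input)        # hypothetical interpreter call
          return decides_halts_on_zero(wrapper)  # assumed (impossible) property decider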
      Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.
      Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.


      = Approximations =
      Turing's proof shows that there can be no mechanical, general method (i.e., a Turing machine or a program in some equivalent model of computation) to determine whether algorithms halt. However, each individual instance of the halting problem has a definitive answer, which may or may not be practically computable. Given a specific algorithm and input, one can often show that it halts or does not halt, and in fact computer scientists often do just that as part of a correctness proof. There are some heuristics that can be used in an automated fashion to attempt to construct a proof, which frequently succeed on typical programs. This field of research is known as automated termination analysis.
      Some results have been established on the theoretical performance of halting problem heuristics, in particular the fraction of programs of a given size that may be correctly classified by a recursive algorithm. These results do not give precise numbers because the fractions are uncomputable and also highly dependent on the choice of program encoding used to determine "size". For example, consider classifying programs by their number of states and using a specific "Turing semi-infinite tape" model of computation that errors (without halting) if the program runs off the left side of the tape. Then




      \lim_{n \to \infty} P(x \text{ halts is decidable} \mid x \text{ has } n \text{ states}) = 1,

      over programs x chosen uniformly by number of states. But this result is in some sense "trivial" because these decidable programs are simply the ones that fall off the tape, and the heuristic is simply to predict not halting due to error. Thus a seemingly irrelevant detail, namely the treatment of programs with errors, can turn out to be the deciding factor in determining the fraction of programs.
      To avoid these issues, several restricted notions of the "size" of a program have been developed. A dense Gödel numbering assigns numbers to programs such that each computable function occurs a positive fraction in each sequence of indices from 1 to n, i.e. a Gödelization \phi is dense iff for all i, there exists a c > 0 such that

      \liminf_{n \to \infty} \#\{j \in \mathbb{N} : 0 \leq j < n,\ \phi_i = \phi_j\}/n \geq c.

      For example, a numbering that assigns indices 2^n to nontrivial programs and all other indices the error state is not dense, but there exists a dense Gödel numbering of syntactically correct Brainfuck programs. A dense Gödel numbering is called optimal if, for any other Gödel numbering \alpha, there is a 1-1 total recursive function f and a constant c such that for all i, \alpha_i = \phi_{f(i)} and f(i) \leq c \cdot i. This condition ensures that all programs have indices not much larger than their indices in any other Gödel numbering. Optimal Gödel numberings are constructed by numbering the inputs of a universal Turing machine. A third notion of size uses universal machines operating on binary strings and measures the length of the string needed to describe the input program. A universal machine U is a machine for which, for every other machine V, there exists a total computable function h such that V(x) = U(h(x)). An optimal machine is a universal machine that achieves the Kolmogorov complexity invariance bound, i.e. for every machine V, there exists c such that for all outputs x, if a V-program of length n outputs x, then there exists a U-program of length at most n + c outputting x.
      We consider partial computable functions (algorithms) A. For each n we consider the fraction \epsilon_n(A) of errors among all programs of size metric at most n, counting each program x for which A fails to terminate, produces a "don't know" answer, or produces a wrong answer, i.e. x halts and A(x) outputs DOES_NOT_HALT, or x does not halt and A(x) outputs HALTS. The behavior may be described as follows, for dense Gödelizations and optimal machines:

      For every algorithm A, \liminf_{n \to \infty} \epsilon_n(A) > 0. In words, any algorithm has a positive minimum error rate, even as the size of the problem becomes extremely large.
      There exists \epsilon > 0 such that for every algorithm A, \limsup_{n \to \infty} \epsilon_n(A) \geq \epsilon. In words, there is a positive error rate for which any algorithm will do worse than that error rate arbitrarily often, even as the size of the problem grows indefinitely.





      \inf_A \liminf_{n \to \infty} \epsilon_n(A) = 0. In words, there is a sequence of algorithms such that the error rate gets arbitrarily close to zero for a specific sequence of increasing sizes. However, this result allows sequences of algorithms that produce wrong answers.
      If we consider only "honest" algorithms that may be undefined but never produce wrong answers, then depending on the metric, \inf_{A\ \text{honest}} \liminf_{n \to \infty} \epsilon_n(A) may or may not be 0. In particular it is 0 for left-total universal machines, but for effectively optimal machines it is greater than 0.
      The complex nature of these bounds is due to the oscillatory behavior of \epsilon_n(A). There are infrequently occurring new varieties of programs that come in arbitrarily large "blocks", and a constantly growing fraction of repeats. If the blocks of new varieties are fully included, the error rate is at least \epsilon, but between blocks the fraction of correctly categorized repeats can be arbitrarily high. In particular a "tally" heuristic that simply remembers the first N inputs and recognizes their equivalents allows reaching an arbitrarily low error rate infinitely often.


      = Gödel's incompleteness theorems =

      The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.
      The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide whether the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete, it follows that either there is an n such that N(n) = H(a, i) or there is an n′ such that N(n′) = ¬ H(a, i). So if we iterate over all n until we find either H(a, i) or its negation, the search will always terminate, and furthermore, the answer it gives will be true (by soundness). This yields an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
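      The search described in this paragraph can be sketched as follows, with hypothetical helpers N (the statement enumerator assumed above), halting_statement (building the sentence H(a, i)), and negation:

      from itertools import count

      def decide_halting_via_complete_axioms(a, i):
          """Would decide halting, given a sound and complete enumerator N of true statements."""
          target = halting_statement(a, i)          # hypothetical: the sentence H(a, i)
          for n in count(0):
              statement = N(n)                      # hypothetical: n-th enumerated true statement
              if statement == target:
                  return True                       # H(a, i) is true: program a halts on input i
              if statement == negation(target):     # hypothetical negation constructor
                  return False                      # ¬H(a, i) is true: it does not halt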


      Generalization


      Many variants of the halting problem can be found in computability textbooks. Typically, these problems are RE-complete and describe sets of complexity \Sigma_1^0 in the arithmetical hierarchy, the same as the standard halting problem. The variants are thus undecidable, and the standard halting problem reduces to each variant and vice versa. However, some variants have a higher degree of unsolvability and cannot be reduced to the standard halting problem. The next two examples are common.


      = Halting on all inputs =
      The universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total).
      This problem is not only undecidable, as the halting problem is, but highly undecidable. In terms of the arithmetical hierarchy, it is \Pi_2^0-complete.
      This means, in particular, that it cannot be decided even with an oracle for the halting problem.


      = Recognizing partial solutions =
      There are many programs that, for some inputs, return a correct answer to the halting problem, while for other inputs they do not return an answer at all.
      However, the problem "given program p, is it a partial halting solver?" (in the sense described) is at least as hard as the halting problem.
      To see this, assume that there is an algorithm PHSR ("partial halting solver recognizer") to do that. Then it can be used to solve the halting problem,
      as follows:
      To test whether input program x halts on y, construct a program p that on input (x,y) reports true and diverges on all other inputs.
      Then test p with PHSR: p is a correct partial halting solver exactly when x halts on y (its single answer is correct only in that case), so PHSR's verdict on p decides whether x halts on y.
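      A sketch of that construction, with hypothetical helpers make_program (turning a Python function into a program value) and PHSR (the assumed recognizer, which of course cannot exist):

      def halts(x, y):
          """Hypothetical halting decider built from an assumed partial-halting-solver recognizer."""
          def p(query):
              if query == (x, y):
                  return True       # claim "x halts on y" on this single input
              while True:           # diverge on every other input
                  pass
          # p is a correct partial halting solver exactly when x really halts on y.
          return PHSR(make_program(p))   # assumed (impossible) recognizer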
      The above argument is a reduction of the halting problem to PHS recognition, and in the same manner,
      harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically \Pi_2^0-complete.


      = Lossy computation =
      A lossy Turing machine is a Turing machine in which part of the tape may non-deterministically disappear. The halting problem is decidable for a lossy Turing machine but non-primitive recursive.


      = Oracle machines =

      A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt.


      See also


      Busy beaver
      Gödel's incompleteness theorem
      Brouwer–Hilbert controversy
      Kolmogorov complexity
      P versus NP problem
      Termination analysis
      Worst-case execution time


      Notes




      References


      Church, Alonzo (1936). "An Unsolvable Problem of Elementary Number Theory". American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045.
      Copeland, B. Jack, ed. (2004). The essential Turing : seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life, plus the secrets of Enigma. Oxford: Clarendon Press. ISBN 0-19-825079-7.
      Davis, Martin (1965). The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And Computable Functions. New York: Raven Press.. Turing's paper is #3 in this volume. Papers include those by Godel, Church, Rosser, Kleene, and Post.
      Davis, Martin (1958). Computability and Unsolvability. New York: McGraw-Hill..
      Rogers, Hartley (Jr.) (1957). Theory of Recursive Functions and Effective Computability. Massachusetts Institute of Technology.
      Kleene, Stephen Cole (1952). Introduction to metamathematics. North-Holland. ISBN 0923891579.. Chapter XIII ("Computable Functions") includes a discussion of the unsolvability of the halting problem for Turing machines. In a departure from Turing's terminology of circle-free nonhalting machines, Kleene refers instead to machines that "stop", i.e. halt.
      Lucas, Salvador (June 2021). "The origins of the halting problem". Journal of Logical and Algebraic Methods in Programming. 121: 100687. doi:10.1016/j.jlamp.2021.100687. hdl:10251/189460. S2CID 235396831.
      Minsky, Marvin (1967). Computation: finite and infinite machines. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0131655639.. See chapter 8, Section 8.2 "Unsolvability of the Halting Problem."
      Moore, Cristopher; Mertens, Stephan (2011). The Nature of Computation. Oxford University Press. doi:10.1093/acprof:oso/9780199233212.001.0001. ISBN 978-0-19-923321-2.
      Reid, Constance (1996). Hilbert. New York: Copernicus. ISBN 0387946748.. First published in 1970, a fascinating history of German mathematics and physics from 1880s through 1930s. Hundreds of names familiar to mathematicians, physicists and engineers appear in its pages. Perhaps marred by no overt references and few footnotes: Reid states her sources were numerous interviews with those who personally knew Hilbert, and Hilbert's letters and papers.
      Sipser, Michael (2006). "Section 4.2: The Halting Problem". Introduction to the Theory of Computation (Second ed.). PWS Publishing. pp. 173–182. ISBN 0-534-94728-X.
      Turing, A. M. (1937). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. s2-42 (1). Wiley: 230–265. doi:10.1112/plms/s2-42.1.230. ISSN 0024-6115. S2CID 73712., Turing, A. M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction". Proceedings of the London Mathematical Society. s2-43 (1). Wiley: 544–546. doi:10.1112/plms/s2-43.6.544. ISSN 0024-6115. This is the epochal paper where Turing defines Turing machines, formulates the halting problem, and shows that it (as well as the Entscheidungsproblem) is unsolvable.
      Penrose, Roger (1989). The emperor's new mind: concerning computers, minds, and the laws of physics (1990 corrected reprint ed.). Oxford: Oxford University Press. ISBN 0192861980.. Cf. Chapter 2, "Algorithms and Turing Machines". An over-complicated presentation (see Davis's paper for a better model), but a thorough presentation of Turing machines and the halting problem, and Church's Lambda Calculus.
      Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison-Wesley. ISBN 81-7808-347-7.. See Chapter 7 "Turing Machines." A book centered around the machine-interpretation of "languages", NP-Completeness, etc.
      Hodges, Andrew (1983). Alan Turing: the enigma. New York: Simon and Schuster. ISBN 0-671-49207-1.. Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
      Börger, Egon (1989). Computability, complexity, logic. Amsterdam: North-Holland. ISBN 008088704X.
      Abdulla, Parosh Aziz; Jonsson, Bengt (1996). "Verifying Programs with Unreliable Channels". Information and Computation. 127 (2): 91–101. doi:10.1006/inco.1996.0053.
      Collected works of A.M. Turing
      Good, Irving John, ed. (1992). Pure Mathematics. North-Holland. ISBN 978-0-444-88059-8.
      Gandy, R. O.; Yates, C. E. M., eds. (5 December 2001). Mathematical Logic. Elsevier. ISBN 978-0-08-053592-0.
      Ince, D.C., ed. (1992). Mechanical Intelligence. North-Holland. ISBN 978-0-444-88058-1.
      Saunders, P. T., ed. (26 November 1992). Morphogenesis. Elsevier. ISBN 978-0-08-093405-1.


      Further reading


      c2:HaltingProblem
      Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University Press, 1962. Re: the problem of paradoxes, the authors discuss the problem of a set not being an object in any of its "determining functions", in particular "Introduction, Chap. 1 p. 24 "...difficulties which arise in formal logic", and Chap. 2.I. "The Vicious-Circle Principle" p. 37ff, and Chap. 2.VIII. "The Contradictions" p. 60ff.
      Martin Davis, "What is a computation", in Mathematics Today, Lynn Arthur Steen, Vintage Books (Random House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines for the non-specialist. Davis reduces the Turing Machine to a far-simpler model based on Post's model of a computation. Discusses Chaitin proof. Includes little biographies of Emil Post, Julia Robinson.
      Edward Beltrami, What is Random? Chance and order in mathematics and life, Copernicus: Springer-Verlag, New York, 1999. Nice, gentle read for the mathematically inclined non-specialist, puts tougher stuff at the end. Has a Turing-machine model in it. Discusses the Chaitin contributions.
      Ernest Nagel and James R. Newman, Godel’s Proof, New York University Press, 1958. Wonderful writing about a very difficult subject. For the mathematically inclined non-specialist. Discusses Gentzen's proof on pages 96–97 and footnotes. Appendices discuss the Peano Axioms briefly, gently introduce readers to formal logic.
      Daras, Nicholas J.; Rassias, Themistocles M. (2018). Modern discrete mathematics and analysis: with applications in cryptography, information systems and modeling. Cham, Switzerland. ISBN 978-3319743240. Chapter 3 Section 1 contains a quality description of the halting problem, a proof by contradiction, and a helpful graphic representation of the Halting Problem.
      Taylor Booth, Sequential Machines and Automata Theory, Wiley, New York, 1967. Cf. Chapter 9, Turing Machines. Difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting problem. Has a Turing Machine model in it. References at end of Chapter 9 catch most of the older books (i.e. 1952 until 1967 including authors Martin Davis, F. C. Hennie, H. Hermes, S. C. Kleene, M. Minsky, T. Rado) and various technical papers. See note under Busy-Beaver Programs.
      Busy Beaver Programs are described in Scientific American, August 1984, also March 1985 p. 23. A reference in Booth attributes them to Rado, T.(1962), On non-computable functions, Bell Systems Tech. J. 41. Booth also defines Rado's Busy Beaver Problem in problems 3, 4, 5, 6 of Chapter 9, p. 396.
      David Bolter, Turing’s Man: Western Culture in the Computer Age, The University of North Carolina Press, Chapel Hill, 1984. For the general reader. May be dated. Has yet another (very simple) Turing Machine model in it.
      Sven Köhler, Christian Schindelhauer, Martin Ziegler, On approximating real-world halting problems, pp.454-466 (2005) ISBN 3540281932 Springer Lecture Notes in Computer Science volume 3623: Undecidability of the Halting Problem means that not all instances can be answered correctly; but maybe "some", "many" or "most" can? On the one hand the constant answer "yes" will be correct infinitely often, and wrong also infinitely often. To make the question reasonable, consider the density of the instances that can be solved. This turns out to depend significantly on the Programming System under consideration.
      Logical Limitations to Machine Ethics, with Consequences to Lethal Autonomous Weapons - paper discussed in: Does the Halting Problem Mean No Moral Robots?


      External links


      Scooping the loop snooper - a poetic proof of undecidability of the halting problem
      animated movie - an animation explaining the proof of the undecidability of the halting problem
      A 2-Minute Proof of the 2nd-Most Important Theorem of the 2nd Millennium - a proof in only 13 lines
      haltingproblem.org - popular videos and documents explaining the Halting Problem.
