- Source: Resolution (logic)
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness.
The clause produced by a resolution rule is sometimes called a resolvent.
Resolution in propositional logic
= Resolution rule =
The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, $\lnot c$ is taken to be the complement to $c$). The resulting clause contains all the literals that do not have complements.
Formally:
$${\frac {a_{1}\lor a_{2}\lor \cdots \lor c,\quad b_{1}\lor b_{2}\lor \cdots \lor \neg c}{a_{1}\lor a_{2}\lor \cdots \lor b_{1}\lor b_{2}\lor \cdots }}$$
where all $a_{i}$, $b_{i}$, and $c$ are literals, and the dividing line stands for "entails".
The above may also be written as:
$${\frac {(\neg a_{1}\land \neg a_{2}\land \cdots )\rightarrow c,\quad c\rightarrow (b_{1}\lor b_{2}\lor \cdots )}{(\neg a_{1}\land \neg a_{2}\land \cdots )\rightarrow (b_{1}\lor b_{2}\lor \cdots )}}$$
Or schematically as:
$${\frac {\Gamma _{1}\cup \left\{\ell \right\}\quad \Gamma _{2}\cup \left\{{\overline {\ell }}\right\}}{\Gamma _{1}\cup \Gamma _{2}}}|\ell |$$
We have the following terminology:
The clauses $\Gamma _{1}\cup \{\ell \}$ and $\Gamma _{2}\cup \{{\overline {\ell }}\}$ are the inference's premises.
$\Gamma _{1}\cup \Gamma _{2}$ (the resolvent of the premises) is its conclusion.
The literal $\ell$ is the left resolved literal.
The literal ${\overline {\ell }}$ is the right resolved literal.
$|\ell |$ is the resolved atom or pivot.
The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms.
When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology.
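The rule itself is mechanical. The following sketch (literals encoded as strings with `~` marking negation, a representation chosen only for this illustration) computes all resolvents of two clauses, one per complementary pair:

```python
def complement(lit):
    """Return ~a for a, and a for ~a."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(clause1, clause2):
    """All resolvents of two clauses, each clause a frozenset of literals.

    One resolvent is produced per complementary pair; the rule is never
    applied to two pairs at once, since (as noted above) that only
    yields tautologies.
    """
    out = set()
    for lit in clause1:
        if complement(lit) in clause2:
            out.add(frozenset((clause1 - {lit}) | (clause2 - {complement(lit)})))
    return out

# Resolving a1 ∨ c with b1 ∨ ¬c on the pivot c gives a1 ∨ b1.
print(resolvents(frozenset({'a1', 'c'}), frozenset({'b1', '~c'})))
```

Applying it to two clauses with two complementary pairs, such as $p \lor \lnot q$ and $\lnot p \lor q$, produces only the tautologies $q \lor \lnot q$ and $p \lor \lnot p$, one per pair.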
Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause).
$${\frac {p\rightarrow q,\quad p}{q}}$$
is equivalent to
$${\frac {\lnot p\lor q,\quad p}{q}}$$
= A resolution technique =
When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form. The steps are as follows.
All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected.
The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S, of clauses.
For example, $(A_{1}\lor A_{2})\land (B_{1}\lor B_{2}\lor B_{3})\land (C_{1})$ gives rise to the set $S=\{A_{1}\lor A_{2},\;B_{1}\lor B_{2}\lor B_{3},\;C_{1}\}$.
The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting sentence is simplified by removing repeated literals. If the clause contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to S, and is considered for further resolution inferences.
If after applying a resolution rule the empty clause is derived, the original formula is unsatisfiable (or contradictory), and hence it can be concluded that the initial conjecture follows from the axioms.
If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base.
One instance of this algorithm is the original Davis–Putnam algorithm that was later refined into the DPLL algorithm that removed the need for explicit representation of the resolvents.
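The loop described above (resolve all pairs with complementary literals, discard tautologies, stop on the empty clause or on saturation) can be sketched as follows; the string-based literal encoding is an assumption of this sketch, not a standard input format:

```python
def is_unsatisfiable(clauses):
    """Saturation-based refutation: True iff the clause set is
    unsatisfiable. Clauses are sets of string literals, '~' = negation."""
    def complement(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    s = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in s:
            for c2 in s:
                for lit in c1:
                    if complement(lit) in c2:
                        r = (c1 - {lit}) | (c2 - {complement(lit)})
                        if not r:
                            return True      # empty clause derived
                        if any(complement(x) in r for x in r):
                            continue         # tautology: discard
                        new.add(frozenset(r))
        if new <= s:
            return False                     # saturated without the empty clause
        s |= new

# a, ~a ∨ b, ~b together are contradictory:
print(is_unsatisfiable([{'a'}, {'~a', 'b'}, {'~b'}]))  # prints True
```

Termination is guaranteed because only finitely many clauses can be built over a finite set of variables, so saturation is always reached if the empty clause is not.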
This description of the resolution technique uses a set S as the underlying data structure to represent resolution derivations. Lists, trees, and directed acyclic graphs are other possible and common alternatives. Tree representations are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree representation also makes it easy to see how the resolution rule is related to a special case of the cut rule, restricted to atomic cut-formulas. However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause. Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent.
A simple example
$${\frac {a\vee b,\quad \neg a\vee c}{b\vee c}}$$
In plain language: Suppose $a$ is false. In order for the premise $a\vee b$ to be true, $b$ must be true. Alternatively, suppose $a$ is true. In order for the premise $\neg a\vee c$ to be true, $c$ must be true. Therefore, regardless of the falsehood or veracity of $a$, if both premises hold, then the conclusion $b\vee c$ is true.
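This case analysis is a semantic entailment, which can be confirmed by brute force over all eight truth assignments (a check for illustration only; resolution itself never enumerates assignments):

```python
from itertools import product

# Check that (a ∨ b) ∧ (¬a ∨ c) entails b ∨ c over all 2^3 assignments.
entailed = all(
    b or c
    for a, b, c in product([False, True], repeat=3)
    if (a or b) and (not a or c)
)
print(entailed)  # prints True
```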
Resolution in first-order logic
The resolution rule can be generalized to first-order logic:
$${\frac {\Gamma _{1}\cup \left\{L_{1}\right\}\quad \Gamma _{2}\cup \left\{L_{2}\right\}}{(\Gamma _{1}\cup \Gamma _{2})\phi }}\phi$$
where $\phi$ is a most general unifier of $L_{1}$ and ${\overline {L_{2}}}$, and $\Gamma _{1}$ and $\Gamma _{2}$ have no common variables.
= Example =
The rule can be applied to the clauses $P(x),Q(x)$ and $\neg P(b)$, with $[b/x]$ as unifier. Here $x$ is a variable and $b$ is a constant.
$${\frac {P(x),Q(x)\quad \neg P(b)}{Q(b)}}[b/x]$$
Here we see that:
The clauses $P(x),Q(x)$ and $\neg P(b)$ are the inference's premises.
$Q(b)$ (the resolvent of the premises) is its conclusion.
The literal $P(x)$ is the left resolved literal.
The literal $\neg P(b)$ is the right resolved literal.
$P$ is the resolved atom or pivot.
$[b/x]$ is the most general unifier of the resolved literals.
= Informal explanation =
In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term logic:
All Greeks are Europeans.
Homer is a Greek.
Therefore, Homer is a European.
Or, more generally:
$$\forall x.\,P(x)\Rightarrow Q(x)$$
$$P(a)$$
Therefore, $Q(a)$
To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF). In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, ...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions.
$$\neg P(x)\vee Q(x)$$
$$P(a)$$
Therefore, $Q(a)$
So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple:
Find two clauses containing the same predicate, where it is negated in one clause but not in the other.
Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go back to the previous step and try again.)
If any unbound variables which were bound in the unified predicates also occur in other predicates in the two clauses, replace them with their bound values (terms) there as well.
Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
¬P(X)
in the first clause, and in non-negated form
P(a)
in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the substitution
X ↦ a
Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion:
Q(a)
For another example, consider the syllogistic form
All Cretans are islanders.
All islanders are liars.
Therefore all Cretans are liars.
Or more generally,
∀X P(X) → Q(X)
∀X Q(X) → R(X)
Therefore, ∀X P(X) → R(X)
In CNF, the antecedents become:
¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)
(Note that the variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means that X and Y become the same variable anyway. Substituting this into the remaining clauses and combining them gives the conclusion:
¬P(X) ∨ R(X)
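The unify-and-combine procedure above can be sketched in code. In this illustration variables are strings with a leading `?` and atoms are tuples such as `('P', '?x')`; both conventions are representation choices of the sketch, and the occurs check is omitted for brevity:

```python
def is_var(t):
    """Variables carry a leading '?', a convention of this sketch."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Syntactic unification; compound terms are tuples like ('P', '?x').
    Returns a substitution dict, or None on failure."""
    subst = {} if subst is None else subst
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

def substitute(t, subst):
    """Apply a substitution throughout a term."""
    t = walk(t, subst)
    if isinstance(t, tuple):
        return tuple(substitute(a, subst) for a in t)
    return t

# Steps 1-2: P occurs negated in {¬P(?x), Q(?x)} and positive in {P(a)};
# unify the two atoms.
theta = unify(('P', '?x'), ('P', 'a'))
print(theta)  # prints {'?x': 'a'}
# Steps 3-4: discard the unified literals, apply theta to what remains.
rest = [('+', ('Q', '?x'))]
resolvent = [(sign, substitute(atom, theta)) for sign, atom in rest]
print(resolvent)  # prints [('+', ('Q', 'a'))]
```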
= Factoring =
The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above. The resulting inference rule is refutation-complete, in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using only resolution, enhanced by factoring.
An example for an unsatisfiable clause set for which factoring is needed to derive the empty clause is:
$${\begin{array}{rlcl}(1):&P(u)&\lor &P(f(u))\\(2):&\lnot P(v)&\lor &P(f(w))\\(3):&\lnot P(x)&\lor &\lnot P(f(x))\\\end{array}}$$
Since each clause consists of two literals, so does each possible resolvent. Therefore, by resolution without factoring, the empty clause can never be obtained.
Using factoring, it can be obtained e.g. as follows:
$${\begin{array}{rll}(4):&P(u)\lor P(f(w))&{\text{by resolving (1) and (2), with }}v=f(u)\\(5):&P(f(w))&{\text{by factoring (4), with }}u=f(w)\\(6):&\lnot P(f(f(w')))&{\text{by resolving (5) and (3), with }}w=w',x=f(w')\\(7):&{\text{false}}&{\text{by resolving (5) and (6), with }}w=f(w')\\\end{array}}$$
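Factoring itself is a small operation: unify two same-sign literals of one clause and apply the unifier, letting the duplicates collapse. A minimal sketch (variables marked with a leading `?`, clauses as lists of signed literals; both conventions invented for this illustration, and without an occurs check):

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')  # '?'-prefix convention

def unify(x, y, s=None):
    """Minimal syntactic unification (no occurs check)."""
    s = {} if s is None else s
    while is_var(x) and x in s: x = s[x]
    while is_var(y) and y in s: y = s[y]
    if x == y: return s
    if is_var(x): return {**s, x: y}
    if is_var(y): return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None: return None
        return s
    return None

def apply_subst(t, s):
    while is_var(t) and t in s: t = s[t]
    if isinstance(t, tuple): return tuple(apply_subst(a, s) for a in t)
    return t

def factors(clause):
    """All factors of a clause: for each pair of same-sign literals that
    unify, apply the unifier; the frozenset collapses merged literals."""
    out = []
    for i, (si, ai) in enumerate(clause):
        for sj, aj in clause[i + 1:]:
            if si == sj:
                theta = unify(ai, aj)
                if theta is not None:
                    out.append(frozenset((sg, apply_subst(at, theta))
                                         for sg, at in clause))
    return out

# Clause (4): P(?u) ∨ P(f(?w)); factoring with ?u = f(?w) gives the
# one-literal clause P(f(?w)), i.e. clause (5) of the derivation above.
print(factors([('+', ('P', '?u')), ('+', ('P', ('f', '?w')))]))
```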
Non-clausal resolution
Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form.
These techniques are useful mainly in interactive theorem proving, where it is important to preserve human readability of intermediate result formulas. Besides, they avoid combinatorial explosion during the transformation to clause form, and sometimes save resolution steps.
= Non-clausal resolution in propositional logic =
For propositional logic, Murray, as well as Manna and Waldinger, use the rule
$${\begin{array}{c}F[p]\;\;\;\;\;\;\;\;\;\;G[p]\\\hline F[{\textit {true}}]\lor G[{\textit {false}}]\\\end{array}}$$
where $p$ denotes an arbitrary formula, $F[p]$ denotes a formula containing $p$ as a subformula, and $F[{\textit {true}}]$ is built by replacing in $F[p]$ every occurrence of $p$ by ${\textit {true}}$; likewise for $G$.
The resolvent $F[{\textit {true}}]\lor G[{\textit {false}}]$ is intended to be simplified using rules like $q\land {\textit {true}}\implies q$, etc.
In order to prevent generating useless trivial resolvents, the rule shall be applied only when $p$ has at least one "negative" occurrence in $F$ and at least one "positive" occurrence in $G$. Murray has shown that this rule is complete if augmented by appropriate logical transformation rules.
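Murray's rule lends itself to a direct implementation on formula trees. The sketch below (formulas encoded as nested tuples such as `('imp', 'a', 'b')`, an encoding invented for this illustration) builds $F[{\textit{true}}]\lor G[{\textit{false}}]$ and applies constant-propagation simplifications:

```python
def replace(f, p, val):
    """Replace every occurrence of subformula p in f by the constant val."""
    if f == p:
        return val
    if isinstance(f, tuple):
        return (f[0],) + tuple(replace(a, p, val) for a in f[1:])
    return f

def simplify(f):
    """Constant propagation: q ∧ true ⇒ q, q ∨ false ⇒ q, ¬true ⇒ false, ..."""
    if not isinstance(f, tuple):
        return f
    op, args = f[0], [simplify(a) for a in f[1:]]
    if op == 'not':
        a = args[0]
        return {True: False, False: True}.get(a, ('not', a))
    a, b = args
    if op == 'and':
        if a is False or b is False: return False
        if a is True: return b
        if b is True: return a
        return ('and', a, b)
    if op == 'or':
        if a is True or b is True: return True
        if a is False: return b
        if b is False: return a
        return ('or', a, b)
    if op == 'imp':
        if a is False or b is True: return True
        if a is True: return b
        if b is False: return simplify(('not', a))
        return ('imp', a, b)

def murray_resolvent(F, G, p):
    """F[true] ∨ G[false], simplified."""
    return simplify(('or', replace(F, p, True), replace(G, p, False)))

# Step (5) of the example below: resolving (2) c → d and (1) a → b ∧ c on
# p = c gives (true → d) ∨ (a → b ∧ false), which simplifies to d ∨ ¬a.
F = ('imp', 'c', 'd')                 # assumption (2)
G = ('imp', 'a', ('and', 'b', 'c'))   # assumption (1)
print(murray_resolvent(F, G, 'c'))    # prints ('or', 'd', ('not', 'a'))
```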
Traugott uses the rule
$${\begin{array}{c}F[p^{+},p^{-}]\;\;\;\;\;\;\;\;G[p]\\\hline F[G[{\textit {true}}],\lnot G[{\textit {false}}]]\\\end{array}}$$
where the exponents of $p$ indicate the polarity of its occurrences. While $G[{\textit {true}}]$ and $G[{\textit {false}}]$ are built as before, the formula $F[G[{\textit {true}}],\lnot G[{\textit {false}}]]$ is obtained by replacing each positive and each negative occurrence of $p$ in $F$ with $G[{\textit {true}}]$ and $G[{\textit {false}}]$, respectively. Similar to Murray's approach, appropriate simplifying transformations are to be applied to the resolvent. Traugott proved his rule to be complete, provided $\land ,\lor ,\rightarrow ,\lnot$ are the only connectives used in formulas.
Traugott's resolvent is stronger than Murray's. Moreover, it does not introduce new binary junctors, thus avoiding a tendency towards clausal form in repeated resolution. However, formulas may grow longer when a small $p$ is replaced multiple times with a larger $G[{\textit {true}}]$ and/or $G[{\textit {false}}]$.
= Propositional non-clausal resolution example =
As an example, starting from the user-given assumptions
$${\begin{array}{rccc}(1):&a&\rightarrow &b\land c\\(2):&c&\rightarrow &d\\(3):&b\land d&\rightarrow &e\\(4):&\lnot (a&\rightarrow &e)\\\end{array}}$$
the Murray rule can be used as follows to infer a contradiction:
$${\begin{array}{rrclccl}(5):&({\textit {true}}\rightarrow d)&\lor &(a\rightarrow b\land {\textit {false}})&\implies &d\lor \lnot a&{\mbox{from (2) and (1), with }}p=c\\(6):&(b\land {\textit {true}}\rightarrow e)&\lor &({\textit {false}}\lor \lnot a)&\implies &(b\rightarrow e)\lor \lnot a&{\mbox{from (3) and (5), with }}p=d\\(7):&(({\textit {true}}\rightarrow e)\lor \lnot a)&\lor &(a\rightarrow {\textit {false}}\land c)&\implies &e\lor \lnot a\lor \lnot a&{\mbox{from (6) and (1), with }}p=b\\(8):&(e\lor \lnot {\textit {true}}\lor \lnot {\textit {true}})&\lor &\lnot ({\textit {false}}\rightarrow e)&\implies &e&{\mbox{from (7) and (4), with }}p=a\\(9):&\lnot (a\rightarrow {\textit {true}})&\lor &{\textit {false}}&\implies &{\textit {false}}&{\mbox{from (4) and (8), with }}p=e\\\end{array}}$$
For the same purpose, the Traugott rule can be used as follows:
$${\begin{array}{rcccl}(10):&a\rightarrow b\land ({\textit {true}}\rightarrow d)&\implies &a\rightarrow b\land d&{\mbox{from (1) and (2), with }}p=c\\(11):&a\rightarrow ({\textit {true}}\rightarrow e)&\implies &a\rightarrow e&{\mbox{from (10) and (3), with }}p=(b\land d)\\(12):&\lnot {\textit {true}}&\implies &{\textit {false}}&{\mbox{from (11) and (4), with }}p=(a\rightarrow e)\\\end{array}}$$
From a comparison of both deductions, the following issues can be seen:
Traugott's rule may yield a sharper resolvent: compare (5) and (10), which both resolve (1) and (2) on $p=c$.
Murray's rule introduced three new disjunction symbols, in (5), (6), and (7), while Traugott's rule didn't introduce any new symbol; in this sense, Traugott's intermediate formulas resemble the user's style more closely than Murray's.
Due to the latter issue, Traugott's rule can take advantage of the implication in assumption (4), using as $p$ the non-atomic formula $a\rightarrow e$ in step (12). Using Murray's rules, the semantically equivalent formula $e\lor \lnot a\lor \lnot a$ was obtained as (7); however, it could not be used as $p$ due to its syntactic form.
= Non-clausal resolution in first-order logic =
For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas $p_{1}$ and $p_{2}$ of $F$ and $G$, respectively. If $\phi$ is the most general unifier of $p_{1}$ and $p_{2}$, then the generalized resolvent is $F\phi [{\textit {true}}]\lor G\phi [{\textit {false}}]$. While the rule remains sound if a more special substitution $\phi$ is used, no such rule applications are needed to achieve completeness.
Traugott's rule is generalized to allow several pairwise distinct subformulas $p_{1},\ldots ,p_{m}$ of $F$ and $p_{m+1},\ldots ,p_{n}$ of $G$, as long as $p_{1},\ldots ,p_{n}$ have a common most general unifier, say $\phi$. The generalized resolvent is obtained after applying $\phi$ to the parent formulas, thus making the propositional version applicable. Traugott's completeness proof relies on the assumption that this fully general rule is used; it is not clear whether his rule would remain complete if restricted to $p_{1}=\cdots =p_{m}$ and $p_{m+1}=\cdots =p_{n}$.
Paramodulation
Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It generates all "equal" versions of clauses, except reflexive identities. The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches an "into" clause containing a subterm that unifies with one side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation is to reduce the system to atoms, reducing the size of the terms when substituting.
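A single paramodulation step on ground (variable-free) terms can be sketched as follows; the tuple encoding of terms is a representation choice of this sketch, and full paramodulation would unify one side of the equality with a subterm rather than require syntactic equality, carrying along the remaining literals of both clauses:

```python
def rewrite(term, lhs, rhs):
    """Replace every occurrence of subterm lhs in term by rhs (ground case)."""
    if term == lhs:
        return rhs
    if isinstance(term, tuple):
        return tuple(rewrite(a, lhs, rhs) for a in term)
    return term

def paramodulate(equality, into_literal):
    """One ground paramodulation step: use the equality s = t taken from
    the 'from' clause to rewrite s inside a literal of the 'into' clause."""
    s, t = equality
    return rewrite(into_literal, s, t)

# 'From' clause contains the equality f(a) = b; 'into' literal is P(g(f(a))).
print(paramodulate((('f', 'a'), 'b'), ('P', ('g', ('f', 'a')))))
# prints ('P', ('g', 'b'))
```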
Implementations
CARINE
GKC
Otter
Prover9
SNARK
SPASS
Vampire
Logictools online prover
See also
Condensed detachment — an earlier version of resolution
Inductive logic programming
Inverse resolution
Logic programming
Method of analytic tableaux
SLD resolution
Resolution inference
Notes
References
Robinson, J. Alan (1965). "A Machine-Oriented Logic Based on the Resolution Principle". Journal of the ACM. 12 (1): 23–41. doi:10.1145/321250.321253. S2CID 14389185.
Leitsch, Alexander (1997). The Resolution Calculus. Texts in Theoretical Computer Science. An EATCS Series. Springer. ISBN 978-3-642-60605-2.
Gallier, Jean H. (1986). Logic for Computer Science: Foundations of Automatic Theorem Proving. Harper & Row.
Chang, Chin-Liang; Lee, Richard Char-Tung (1987). Symbolic Logic and Mechanical Theorem Proving. Academic Press. ISBN 0-12-170350-9.
External links
Alex Sakharov. "Resolution Principle". MathWorld.
Alex Sakharov. "Resolution". MathWorld.