Markov logic network
A Markov logic network (MLN) is a probabilistic logic that applies the ideas of a Markov network to first-order logic, defining probability distributions over possible worlds on any given domain.
History
In 2002, Ben Taskar, Pieter Abbeel and Daphne Koller introduced relational Markov networks as templates to specify Markov networks abstractly and without reference to a specific domain. Work on Markov logic networks began in 2003 by Pedro Domingos and Matt Richardson. Markov logic networks are a popular formalism for statistical relational learning.
Syntax
A Markov logic network consists of a collection of formulas from first-order logic, to each of which is assigned a real number, the weight. The underlying idea is that an interpretation is more likely if it satisfies formulas with positive weights and less likely if it satisfies formulas with negative weights.
For instance, the following Markov logic network codifies how people influenced by smokers are more likely to smoke themselves, and how stress encourages smoking:
$$
\begin{array}{lcl}
2.0 & :: & \mathrm{smokes}(X) \leftarrow \mathrm{smokes}(Y) \land \mathrm{influences}(X,Y) \\
0.5 & :: & \mathrm{smokes}(X) \leftarrow \mathrm{stress}(X)
\end{array}
$$
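To make the formalism concrete, the two weighted formulas can also be written down directly as data. The following Python sketch is a hypothetical encoding (not any particular MLN tool's file format): each formula is paired with its weight, its free variables, and a function that evaluates one grounding in a world, where a world maps ground atoms to truth values. The sketches in the later sections build on it.

```python
# Hypothetical plain-Python encoding of the example MLN. A world is a dict
# mapping ground atoms, e.g. ("smokes", ("Anna",)), to True/False.

def implies(p, q):
    """Material implication, so that  q <- p  is  implies(p, q)."""
    return (not p) or q

FORMULAS = [
    # 2.0 :: smokes(X) <- smokes(Y) ∧ influences(X, Y)
    (2.0, ("X", "Y"),
     lambda w, a: implies(w["smokes", (a["Y"],)] and
                          w["influences", (a["X"], a["Y"])],
                          w["smokes", (a["X"],)])),
    # 0.5 :: smokes(X) <- stress(X)
    (0.5, ("X",),
     lambda w, a: implies(w["stress", (a["X"],)],
                          w["smokes", (a["X"],)])),
]
```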
Semantics
Together with a given domain, a Markov logic network defines a probability distribution on the set of all interpretations of its predicates on the given domain.
For any $n$-ary predicate symbol $R$ that occurs in the Markov logic network and every $n$-tuple $a_1, \dots, a_n$ of domain elements, $R(a_1, \dots, a_n)$ is a grounding of $R$. An interpretation is given by allocating a Boolean truth value (true or false) to each grounding of a predicate. A true grounding of a formula $\varphi$ in an interpretation with free variables $x_1, \dots, x_n$ is a variable assignment of $x_1, \dots, x_n$ that makes $\varphi$ true in that interpretation.
Then the probability of any given interpretation is directly proportional to $\exp\left(\sum_j w_j n_j\right)$, where $w_j$ is the weight of the $j$-th sentence of the Markov logic network and $n_j$ is the number of its true groundings.
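As a worked illustration of this definition, the following sketch continues the hypothetical Python encoding from the syntax section, assuming a two-element domain {Anna, Bob}. It enumerates all groundings to count $n_j$, computes $\exp(\sum_j w_j n_j)$, and normalizes by brute force over all interpretations:

```python
import itertools
import math

DOMAIN = ("Anna", "Bob")  # assumed toy domain

def ground_atoms():
    """Every grounding of every predicate over DOMAIN."""
    return ([("smokes", (x,)) for x in DOMAIN] +
            [("stress", (x,)) for x in DOMAIN] +
            [("influences", (x, y)) for x in DOMAIN for y in DOMAIN])

def unnormalized_weight(world):
    """exp(sum_j w_j * n_j), where n_j counts true groundings of formula j."""
    total = 0.0
    for weight, variables, holds in FORMULAS:
        for values in itertools.product(DOMAIN, repeat=len(variables)):
            if holds(world, dict(zip(variables, values))):
                total += weight
    return math.exp(total)

def probability(world):
    """Normalize by summing over all 2^|atoms| interpretations (8 atoms here)."""
    atoms = ground_atoms()
    z = sum(unnormalized_weight(dict(zip(atoms, bits)))
            for bits in itertools.product((False, True), repeat=len(atoms)))
    return unnormalized_weight(world) / z
```

Brute-force normalization is exponential in the number of ground atoms and only feasible for toy domains, which is exactly why the inference techniques below matter.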
This can also be seen as inducing a Markov network whose nodes are the groundings of the predicates occurring in the Markov logic network. The feature functions of this network are the groundings of the sentences occurring in the Markov logic network, with value $e^{w}$ if the grounding is true and 1 otherwise (where again $w$ is the weight of the formula).
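In the sketch's terms, each grounding contributes a factor of $e^{w}$ when true and 1 when false, and the product of these factors coincides with $\exp(\sum_j w_j n_j)$ above:

```python
import itertools
import math

def product_of_features(world):
    """Product over all groundings of e^w (if true) or 1 (if false).

    Agrees with unnormalized_weight(world) from the previous sketch:
    multiplying e^w once per true grounding of formula j gives
    e^(w_j * n_j), and the product over formulas gives exp(sum_j w_j n_j).
    """
    value = 1.0
    for weight, variables, holds in FORMULAS:
        for values in itertools.product(DOMAIN, repeat=len(variables)):
            if holds(world, dict(zip(variables, values))):
                value *= math.exp(weight)
    return value
```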
Inference
The probability distributions induced by Markov logic networks can be queried for the probability of a particular event, given by an atomic formula (marginal inference), possibly conditioned on another atomic formula.
Marginal inference can be performed using standard Markov network inference techniques over the minimal subset of the relevant Markov network required for answering the query. Exact inference is known to be #P-complete in the size of the domain.
In practice, the exact probability is often approximated. Techniques for approximate inference include Gibbs sampling, belief propagation, and approximation via pseudolikelihood.
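As a minimal concrete instance of one such technique, the following Gibbs sampler estimates a marginal on top of the toy sketch above. It resamples each ground atom from its conditional distribution given all the others; a production engine would re-evaluate only the formulas mentioning the flipped atom (its Markov blanket) rather than recomputing the full weight.

```python
import random

def gibbs_marginal(query_atom, sweeps=5000, burn_in=500):
    """Estimate P(query_atom = True), reusing FORMULAS, ground_atoms()
    and unnormalized_weight() from the sketches above."""
    atoms = ground_atoms()
    world = {atom: random.random() < 0.5 for atom in atoms}
    hits = kept = 0
    for sweep in range(sweeps):
        for atom in atoms:
            world[atom] = True
            w_true = unnormalized_weight(world)
            world[atom] = False
            w_false = unnormalized_weight(world)
            world[atom] = random.random() < w_true / (w_true + w_false)
        if sweep >= burn_in:
            kept += 1
            hits += world[query_atom]
    return hits / kept

# Example: estimated marginal probability that Anna smokes.
# print(gibbs_marginal(("smokes", ("Anna",))))
```

Conditioning on evidence amounts to clamping the evidence atoms to their observed values instead of resampling them.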
The class of Markov logic networks that use only two variables in any formula allows for polynomial-time exact inference by reduction to weighted model counting.
See also
Markov random field
Statistical relational learning
Probabilistic logic network
Probabilistic soft logic
ProbLog
External links
University of Washington Statistical Relational Learning group
Alchemy 2.0: Markov logic networks in C++
pracmln: Markov logic networks in Python
ProbCog: Markov logic networks in Python and Java that can use its own inference engine or Alchemy's
markov thebeast: Markov logic networks in Java
RockIt: Markov logic networks in Java (with web interface/REST API)
Tuffy: a learning and inference engine with strong RDBMS-based optimization for scalability
Felix: A successor to Tuffy, with prebuilt submodules to speed up common subtasks
Factorie: Scala-based probabilistic inference language, with prebuilt submodules for natural language processing, etc.
Figaro: Scala-based MLN language
LoMRF: Logical Markov Random Fields, an open-source implementation of Markov Logic Networks in Scala