- Source: Janson inequality
In the mathematical theory of probability, Janson's inequality is a collection of related inequalities giving an exponential bound on the probability of many related events happening simultaneously, in terms of their pairwise dependence. Informally, Janson's inequality involves taking a sample of many independent random binary variables and a family of subsets of those variables, and bounding the probability that the sample contains none of those subsets, in terms of the pairwise correlations among them.
Statement
Let $\Gamma$ be our set of variables. We intend to sample these variables according to probabilities $p = (p_i \in [0,1] : i \in \Gamma)$. Let $\Gamma_p \subseteq \Gamma$ be the random subset of $\Gamma$ that includes each $i \in \Gamma$ with probability $p_i$. That is, independently, for every $i \in \Gamma$: $\Pr[i \in \Gamma_p] = p_i$.
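As a concrete illustration (not part of the original statement), here is a minimal Python sketch of this sampling model; the names gamma, p, and sample_gamma_p are hypothetical choices for the example:

```python
import random

def sample_gamma_p(gamma, p):
    """Sample the random subset Gamma_p: each i in gamma is included
    independently with probability p[i]."""
    return {i for i in gamma if random.random() < p[i]}

# Example: five variables, each included with probability 0.3.
gamma = range(5)
p = {i: 0.3 for i in gamma}
print(sample_gamma_p(gamma, p))
```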
Let $S$ be a family of subsets of $\Gamma$. We want to bound the probability that no $A \in S$ is a subset of $\Gamma_p$. We will bound it using the expectation of the number of $A \in S$ such that $A \subseteq \Gamma_p$, which we call $\lambda$, and a term from the pairwise probabilities of inclusion in $\Gamma_p$, which we call $\Delta$.
For $A \in S$, let $I_A$ be the random variable that is one if $A \subseteq \Gamma_p$ and zero otherwise. Let $X$ be the random variable counting the number of sets in $S$ that are contained in $\Gamma_p$: $X = \sum_{A \in S} I_A$. Then we define the following quantities:
$$\lambda = \operatorname{E}\left[\sum_{A \in S} I_A\right] = \operatorname{E}[X]$$
$$\Delta = \frac{1}{2} \sum_{A \neq B,\, A \cap B \neq \emptyset} \operatorname{E}[I_A I_B]$$
$$\bar{\Delta} = \lambda + 2\Delta$$
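To make these quantities concrete, a standard example (assumed here, not given in the text above) takes $\Gamma$ to be the edge set of the complete graph $K_n$ with each edge kept with probability $q$, and $S$ the edge sets of all triangles. A Python sketch under those assumptions:

```python
from math import comb

def janson_parameters(n, q):
    """Compute lambda, Delta, and Delta-bar when Gamma is the edge set
    of K_n (each edge present with probability q) and S is the family
    of all triangle edge sets."""
    # lambda = sum_{A in S} Pr[A subseteq Gamma_p] = C(n,3) * q^3,
    # since each of the C(n,3) triangles has 3 edges.
    lam = comb(n, 3) * q**3
    # Two distinct triangles intersect (as edge sets) iff they share
    # exactly one edge, giving 5 edges in total, so E[I_A I_B] = q^5.
    # Unordered pairs: choose the shared edge (C(n,2) ways), then two
    # distinct apex vertices from the remaining n-2 (C(n-2,2) ways).
    delta = comb(n, 2) * comb(n - 2, 2) * q**5
    delta_bar = lam + 2 * delta
    return lam, delta, delta_bar

print(janson_parameters(10, 0.1))  # (0.12, 0.0126, 0.1452)
```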
Then the Janson inequality is:
$$\Pr[X = 0] = \Pr[\forall A \in S : A \not\subset \Gamma_p] \leq e^{-\lambda + \Delta}$$
and
$$\Pr[X = 0] = \Pr[\forall A \in S : A \not\subset \Gamma_p] \leq e^{-\frac{\lambda^2}{\bar{\Delta}}}$$
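Continuing the hypothetical triangle example, the sketch below evaluates both bounds and compares them against a Monte Carlo estimate of $\Pr[X = 0]$, the probability that $G(n,q)$ is triangle-free; all parameter values are illustrative:

```python
import random
from itertools import combinations
from math import comb, exp

# Parameters for the triangle example in G(n, q); values illustrative.
n, q = 10, 0.1
lam = comb(n, 3) * q**3
delta = comb(n, 2) * comb(n - 2, 2) * q**5
delta_bar = lam + 2 * delta

bound1 = exp(-lam + delta)           # e^{-lambda + Delta}
bound2 = exp(-(lam**2) / delta_bar)  # e^{-lambda^2 / Delta-bar}

def triangle_free_probability(n, q, trials=20_000):
    """Monte Carlo estimate of Pr[X = 0]: G(n, q) contains no triangle."""
    hits = 0
    for _ in range(trials):
        edges = {e for e in combinations(range(n), 2) if random.random() < q}
        if not any((a, b) in edges and (a, c) in edges and (b, c) in edges
                   for a, b, c in combinations(range(n), 3)):
            hits += 1
    return hits / trials

print(f"Monte Carlo Pr[X=0] ~ {triangle_free_probability(n, q):.4f}")
print(f"Janson bounds: {bound1:.4f} and {bound2:.4f}")
```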
Tail bound
Janson later extended this result to give a tail bound on the probability of only a few sets being subsets. Let $0 \leq t \leq \lambda$ give the distance from the expected number of subsets. Let $\varphi(x) = (1+x)\ln(1+x) - x$. Then we have
$$\Pr(X \leq \lambda - t) \leq e^{-\varphi(-t/\lambda)\lambda^2 / \bar{\Delta}} \leq e^{-t^2 / \left(2\bar{\Delta}\right)}$$
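A sketch evaluating both sides of this tail bound for the same hypothetical triangle example (the choice $t = \lambda/2$ is arbitrary):

```python
from math import comb, exp, log

def phi(x):
    """phi(x) = (1 + x) ln(1 + x) - x, defined for x > -1."""
    return (1 + x) * log(1 + x) - x

# Same illustrative triangle parameters as above.
n, q = 10, 0.1
lam = comb(n, 3) * q**3
delta_bar = lam + 2 * comb(n, 2) * comb(n - 2, 2) * q**5

t = lam / 2  # deviation below the mean; any 0 <= t < lambda works here
tail_sharp = exp(-phi(-t / lam) * lam**2 / delta_bar)
tail_loose = exp(-t**2 / (2 * delta_bar))
print(f"Pr[X <= lambda - t] <= {tail_sharp:.4f} <= {tail_loose:.4f}")
```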
Uses
Janson's inequality has been used in pseudorandomness for bounds on constant-depth circuits. The research leading to these inequalities was originally motivated by estimating chromatic numbers of random graphs.