Generalized relative entropy ($\epsilon$-relative entropy) is a measure of dissimilarity between two quantum states. It is a "one-shot" analogue of quantum relative entropy and shares many properties of the latter quantity.
In the study of quantum information theory, we typically assume that information processing tasks are repeated multiple times, independently. The corresponding information-theoretic notions are therefore defined in the asymptotic limit. The quintessential entropy measure, von Neumann entropy, is one such notion. In contrast, the study of one-shot quantum information theory is concerned with information processing when a task is conducted only once. New entropic measures emerge in this scenario, as traditional notions cease to give a precise characterization of resource requirements.
The $\epsilon$-relative entropy is one such particularly interesting measure.
In the asymptotic scenario, relative entropy acts as a parent quantity for other measures besides being an important measure itself. Similarly, the $\epsilon$-relative entropy functions as a parent quantity for other measures in the one-shot scenario.
Definition
To motivate the definition of the $\epsilon$-relative entropy $D^{\epsilon}(\rho\|\sigma)$, consider the information processing task of hypothesis testing. In hypothesis testing, we wish to devise a strategy to distinguish between two density operators $\rho$ and $\sigma$. A strategy is a POVM with elements $Q$ and $I-Q$. The probability that the strategy produces a correct guess on input $\rho$ is given by $\operatorname{Tr}(\rho Q)$, and the probability that it produces a wrong guess is given by $\operatorname{Tr}(\sigma Q)$. The $\epsilon$-relative entropy captures the minimum probability of error when the state is $\sigma$, given that the success probability for $\rho$ is at least $\epsilon$.
For $\epsilon \in (0,1)$, the $\epsilon$-relative entropy between two quantum states $\rho$ and $\sigma$ is defined as

$$D^{\epsilon}(\rho\|\sigma) = -\log\frac{1}{\epsilon}\min\{\langle Q,\sigma\rangle \mid 0\leq Q\leq I \text{ and } \langle Q,\rho\rangle\geq\epsilon\}\,.$$
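Because the minimization runs over operators $0\leq Q\leq I$ subject to a linear constraint, $D^{\epsilon}$ can be evaluated numerically as a semidefinite program. The following is a minimal sketch, assuming the cvxpy convex-optimization package is available; the helper name and the example states are illustrative, not from any standard library.

```python
import numpy as np
import cvxpy as cp

def eps_relative_entropy(rho: np.ndarray, sigma: np.ndarray, eps: float) -> float:
    """D^eps(rho||sigma) = -log2((1/eps) * min <Q, sigma>), minimized over
    0 <= Q <= I with <Q, rho> >= eps (a semidefinite program)."""
    d = rho.shape[0]
    Q = cp.Variable((d, d), hermitian=True)
    constraints = [
        Q >> 0,                             # Q >= 0
        np.eye(d) - Q >> 0,                 # Q <= I
        cp.real(cp.trace(Q @ rho)) >= eps,  # success probability on rho at least eps
    ]
    problem = cp.Problem(cp.Minimize(cp.real(cp.trace(Q @ sigma))), constraints)
    problem.solve()
    return -np.log2(problem.value / eps)

# Illustrative commuting qubit states: rho = diag(0.9, 0.1), sigma = I/2.
rho = np.diag([0.9, 0.1]).astype(complex)
sigma = np.eye(2, dtype=complex) / 2
print(eps_relative_entropy(rho, sigma, eps=0.9))  # log2(1.8) ~ 0.85
```

For these commuting states the optimal test is classical: the projector onto the larger eigenvalue of $\rho$ meets the success constraint with equality, giving $\langle Q,\sigma\rangle = 1/2$ and hence $D^{\epsilon} = \log_2(0.9/0.5) = \log_2 1.8$.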
From the definition, it is clear that $D^{\epsilon}(\rho\|\sigma)\geq 0$. This inequality is saturated if and only if $\rho=\sigma$, as shown below.
Relationship to the trace distance
Suppose the trace distance between two density operators $\rho$ and $\sigma$ is

$$\|\rho-\sigma\|_{1} = \delta\,.$$

For $0<\epsilon<1$, it holds that

a)

$$\log\frac{\epsilon}{\epsilon-(1-\epsilon)\delta} \;\leq\; D^{\epsilon}(\rho\|\sigma) \;\leq\; \log\frac{\epsilon}{\epsilon-\delta}\,.$$

In particular, this implies the following analogue of the Pinsker inequality:

b)

$$\frac{1-\epsilon}{\epsilon}\,\|\rho-\sigma\|_{1} \;\leq\; D^{\epsilon}(\rho\|\sigma)\,.$$

Furthermore, the proposition implies that for any $\epsilon\in(0,1)$, $D^{\epsilon}(\rho\|\sigma)=0$ if and only if $\rho=\sigma$, inheriting this property from the trace distance. This result and its proof can be found in Dupuis et al.
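As a quick sanity check, both sides of inequality a) can be evaluated for concrete states (a numpy-only sketch; the states are arbitrary choices, and the trace_distance helper implements the variational formula used in the proof below):

```python
import numpy as np

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    """delta = max_{0 <= Q <= I} Tr(Q(rho - sigma)): the sum of the
    positive eigenvalues of rho - sigma."""
    evals = np.linalg.eigvalsh(rho - sigma)
    return float(evals[evals > 0].sum())

rho = np.diag([0.9, 0.1])
sigma = np.eye(2) / 2
eps = 0.9
delta = trace_distance(rho, sigma)                # 0.4 here
lower = np.log2(eps / (eps - (1 - eps) * delta))  # ~ 0.066
upper = np.log2(eps / (eps - delta))              # ~ 0.85; meaningful when delta < eps
print(lower, upper)  # D^eps(rho||sigma) ~ 0.85 sits between these bounds
```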
Proof of inequality a)

Upper bound: The trace distance can be written as

$$\|\rho-\sigma\|_{1} = \max_{0\leq Q\leq I}\operatorname{Tr}(Q(\rho-\sigma))\,.$$

This maximum is achieved when $Q$ is the orthogonal projector onto the positive eigenspace of $\rho-\sigma$. For any POVM element $Q$ we have $\operatorname{Tr}(Q(\rho-\sigma))\leq\delta$, so that if $\operatorname{Tr}(Q\rho)\geq\epsilon$, we have

$$\operatorname{Tr}(Q\sigma) \;\geq\; \operatorname{Tr}(Q\rho)-\delta \;\geq\; \epsilon-\delta\,.$$

From the definition of the $\epsilon$-relative entropy, we get

$$2^{-D^{\epsilon}(\rho\|\sigma)} \geq \frac{\epsilon-\delta}{\epsilon}\,.$$
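Numerically, the projector onto the positive eigenspace indeed attains the maximum in the variational formula (numpy-only sketch; the example states are assumptions):

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.eye(2) / 2

evals, evecs = np.linalg.eigh(rho - sigma)
V = evecs[:, evals > 0]          # eigenvectors with positive eigenvalue
P = V @ V.conj().T               # orthogonal projector onto the positive eigenspace
print(np.trace(P @ (rho - sigma)).real)  # equals the sum of positive eigenvalues
print(evals[evals > 0].sum())            # = delta ~ 0.283 here
```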
Lower bound: Let $Q$ be the orthogonal projection onto the positive eigenspace of $\rho-\sigma$, and let $\bar{Q}$ be the following convex combination of $I$ and $Q$:

$$\bar{Q} = (\epsilon-\mu)I + (1-\epsilon+\mu)Q\,,$$

where

$$\mu = \frac{(1-\epsilon)\operatorname{Tr}(Q\rho)}{1-\operatorname{Tr}(Q\rho)}\,.$$

This means $\mu = (1-\epsilon+\mu)\operatorname{Tr}(Q\rho)$ and thus

$$\operatorname{Tr}(\bar{Q}\rho) \;=\; (\epsilon-\mu) + (1-\epsilon+\mu)\operatorname{Tr}(Q\rho) \;=\; \epsilon\,.$$

Moreover,

$$\operatorname{Tr}(\bar{Q}\sigma) \;=\; \epsilon-\mu + (1-\epsilon+\mu)\operatorname{Tr}(Q\sigma)\,.$$

Using $\mu = (1-\epsilon+\mu)\operatorname{Tr}(Q\rho)$, our choice of $Q$, and finally the definition of $\mu$, we can rewrite this as

$$\operatorname{Tr}(\bar{Q}\sigma) \;=\; \epsilon - (1-\epsilon+\mu)\operatorname{Tr}(Q\rho) + (1-\epsilon+\mu)\operatorname{Tr}(Q\sigma) \;=\; \epsilon - \frac{(1-\epsilon)\delta}{1-\operatorname{Tr}(Q\rho)} \;\leq\; \epsilon - (1-\epsilon)\delta\,.$$

Hence

$$D^{\epsilon}(\rho\|\sigma) \geq \log\frac{\epsilon}{\epsilon-(1-\epsilon)\delta}\,.$$
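The construction can be checked numerically: $\operatorname{Tr}(\bar{Q}\rho)$ comes out exactly $\epsilon$, and $\operatorname{Tr}(\bar{Q}\sigma)$ respects the bound (numpy-only sketch; the states and $\epsilon$ are illustrative, and $\operatorname{Tr}(Q\rho)<1$ is assumed so that $\mu$ is well defined):

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.eye(2) / 2
eps = 0.9

evals, evecs = np.linalg.eigh(rho - sigma)
V = evecs[:, evals > 0]
Q = V @ V.conj().T               # projector onto positive eigenspace of rho - sigma
delta = evals[evals > 0].sum()   # trace distance
t = np.trace(Q @ rho).real       # Tr(Q rho), assumed < 1
mu = (1 - eps) * t / (1 - t)
Qbar = (eps - mu) * np.eye(2) + (1 - eps + mu) * Q

print(np.trace(Qbar @ rho).real, eps)                        # equal by construction
print(np.trace(Qbar @ sigma).real, eps - (1 - eps) * delta)  # first <= second
```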
Proof of inequality b)

To derive this Pinsker-like inequality, observe that

$$\log\frac{\epsilon}{\epsilon-(1-\epsilon)\delta} \;=\; -\log\left(1-\frac{(1-\epsilon)\delta}{\epsilon}\right) \;\geq\; \delta\,\frac{1-\epsilon}{\epsilon}\,,$$

where the final step uses $-\log(1-x)\geq x$ for $x\in[0,1)$; combining this with the lower bound in a) gives b).
Alternative proof of the data processing inequality
A fundamental property of von Neumann entropy is strong subadditivity. Let $S(\sigma)$ denote the von Neumann entropy of the quantum state $\sigma$, and let $\rho_{ABC}$ be a quantum state on the tensor product Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_C$. Strong subadditivity states that

$$S(\rho_{ABC}) + S(\rho_B) \leq S(\rho_{AB}) + S(\rho_{BC})\,,$$

where $\rho_{AB},\rho_{BC},\rho_{B}$ refer to the reduced density matrices on the spaces indicated by the subscripts.
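Strong subadditivity can be spot-checked on random mixed states (a numpy-only sketch; the partial_trace and entropy helpers are written out here rather than taken from a library):

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -Tr(rho log2 rho), via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]             # 0 log 0 = 0
    return float(-(evals * np.log2(evals)).sum())

def partial_trace(rho: np.ndarray, dims: tuple, keep: list) -> np.ndarray:
    """Reduced density matrix on the subsystems listed in `keep` (0-indexed)."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    m = n
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=m + i)  # trace out subsystem i
        m -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho_abc = X @ X.conj().T
rho_abc /= np.trace(rho_abc).real            # random three-qubit mixed state

dims, S = (2, 2, 2), von_neumann_entropy
lhs = S(rho_abc) + S(partial_trace(rho_abc, dims, [1]))
rhs = S(partial_trace(rho_abc, dims, [0, 1])) + S(partial_trace(rho_abc, dims, [1, 2]))
print(lhs <= rhs + 1e-9)  # True: S(ABC) + S(B) <= S(AB) + S(BC)
```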
When re-written in terms of mutual information, this inequality has an intuitive interpretation: it states that the information content in a system cannot increase by the action of a local quantum operation on that system. In this form, it is better known as the data processing inequality, and is equivalent to the monotonicity of relative entropy under quantum operations:

$$S(\rho\|\sigma) - S(\mathcal{E}(\rho)\|\mathcal{E}(\sigma)) \geq 0$$

for every CPTP map $\mathcal{E}$, where $S(\omega\|\tau)$ denotes the relative entropy of the quantum states $\omega,\tau$.
It is readily seen that the $\epsilon$-relative entropy also obeys monotonicity under quantum operations:

$$D^{\epsilon}(\rho\|\sigma) \geq D^{\epsilon}(\mathcal{E}(\rho)\|\mathcal{E}(\sigma))$$

for any CPTP map $\mathcal{E}$.
To see this, suppose we have a POVM $(R, I-R)$ to distinguish between $\mathcal{E}(\rho)$ and $\mathcal{E}(\sigma)$ such that $\langle R,\mathcal{E}(\rho)\rangle = \langle\mathcal{E}^{\dagger}(R),\rho\rangle \geq \epsilon$. We construct a new POVM $(\mathcal{E}^{\dagger}(R),\, I-\mathcal{E}^{\dagger}(R))$ to distinguish between $\rho$ and $\sigma$. Since the adjoint of any CPTP map is positive and unital, this is a valid POVM. Note that $\langle R,\mathcal{E}(\sigma)\rangle = \langle\mathcal{E}^{\dagger}(R),\sigma\rangle \geq \langle Q,\sigma\rangle$, where $(Q, I-Q)$ is the POVM that achieves $D^{\epsilon}(\rho\|\sigma)$.
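Monotonicity can also be observed numerically, for example under a depolarizing channel (a sketch reusing the hypothetical eps_relative_entropy helper from the SDP example in the Definition section):

```python
import numpy as np
# eps_relative_entropy: the hypothetical SDP helper sketched in the
# Definition section above.

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """CPTP map E(rho) = (1 - p) rho + p I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

rho = np.diag([0.9, 0.1]).astype(complex)
sigma = np.eye(2, dtype=complex) / 2
eps = 0.9
# D^eps(E(rho)||E(sigma)) is non-increasing as the noise p grows:
for p in (0.0, 0.3, 0.6):
    print(p, eps_relative_entropy(depolarize(rho, p), depolarize(sigma, p), eps))
```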
Not only is this interesting in itself, but it also gives us the following alternative method to prove the data processing inequality.
By the quantum analogue of the Stein lemma,

$$\lim_{n\to\infty}\frac{1}{n}D^{\epsilon}(\rho^{\otimes n}\|\sigma^{\otimes n}) \;=\; \lim_{n\to\infty}\frac{-1}{n}\log\min\frac{1}{\epsilon}\operatorname{Tr}(\sigma^{\otimes n}Q) \;=\; D(\rho\|\sigma) - \lim_{n\to\infty}\frac{1}{n}\left(\log\frac{1}{\epsilon}\right) \;=\; D(\rho\|\sigma)\,,$$

where the minimum is taken over $0\leq Q\leq I$ such that $\operatorname{Tr}(Q\rho^{\otimes n})\geq\epsilon$.
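The limiting quantity $D(\rho\|\sigma) = \operatorname{Tr}[\rho(\log\rho-\log\sigma)]$ can be computed directly for comparison (numpy-only sketch; base-2 logarithms to match the above, and $\sigma$ is assumed full rank so the quantity is finite):

```python
import numpy as np

def quantum_relative_entropy(rho: np.ndarray, sigma: np.ndarray) -> float:
    """D(rho||sigma) = Tr[rho (log2 rho - log2 sigma)]; sigma must be full rank."""
    er, vr = np.linalg.eigh(rho)
    es, vs = np.linalg.eigh(sigma)
    er = np.clip(er, 1e-300, None)           # guard against log(0); 0 log 0 -> 0
    log_rho = vr @ np.diag(np.log2(er)) @ vr.conj().T
    log_sigma = vs @ np.diag(np.log2(es)) @ vs.conj().T
    return np.trace(rho @ (log_rho - log_sigma)).real

rho = np.diag([0.9, 0.1])
sigma = np.eye(2) / 2
print(quantum_relative_entropy(rho, sigma))  # ~0.531 = lim (1/n) D^eps over n copies
```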
Applying the monotonicity of the $\epsilon$-relative entropy to the states $\rho^{\otimes n}$ and $\sigma^{\otimes n}$ with the CPTP map $\mathcal{E}^{\otimes n}$, we get

$$D^{\epsilon}(\rho^{\otimes n}\|\sigma^{\otimes n}) \;\geq\; D^{\epsilon}(\mathcal{E}(\rho)^{\otimes n}\|\mathcal{E}(\sigma)^{\otimes n})\,.$$

Dividing by $n$ on both sides and taking the limit as $n\to\infty$, we get the desired result.
See also
Entropic value at risk
Quantum relative entropy
Strong subadditivity
Classical information theory
Min-entropy
References
F. Dupuis, L. Kraemer, P. Faist, J. M. Renes, R. Renner, "Generalized Entropies", Proceedings of the XVIIth International Congress on Mathematical Physics, 2013. arXiv:1211.3141.