- Source: Q-function
In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, Q(x) is the probability that a normal (Gaussian) random variable will obtain a value larger than x standard deviations. Equivalently, Q(x) is the probability that a standard normal random variable takes a value larger than x.
If Y is a Gaussian random variable with mean μ and variance σ², then X = (Y − μ)/σ is standard normal and

    P(Y > y) = P(X > x) = Q(x),

where x = (y − μ)/σ.
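As a sanity check on this standardization, the tail probability of a general Gaussian can be computed from Q alone; a minimal Python sketch using the standard library's `statistics.NormalDist` (the mean, variance, and threshold are arbitrary example values):

```python
from statistics import NormalDist

def Q(x: float) -> float:
    """Tail probability P(X > x) of the standard normal."""
    return 1.0 - NormalDist().cdf(x)

# Example: Y ~ N(mu = 3, sigma = 2); P(Y > 5) equals Q((5 - 3)/2) = Q(1).
mu, sigma, y = 3.0, 2.0, 5.0
tail_direct = 1.0 - NormalDist(mu, sigma).cdf(y)
tail_via_Q = Q((y - mu) / sigma)
assert abs(tail_direct - tail_via_Q) < 1e-12
```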
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
Definition and basic properties
Formally, the Q-function is defined as

    Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\left(-\frac{u^2}{2}\right) \, du.
Thus,

    Q(x) = 1 - Q(-x) = 1 - \Phi(x),

where Φ(x) is the cumulative distribution function of the standard normal distribution.
The Q-function can be expressed in terms of the error function, or the complementary error function, as

    Q(x) = \frac{1}{2}\left(\frac{2}{\sqrt{\pi}} \int_{x/\sqrt{2}}^{\infty} \exp\left(-t^2\right) \, dt\right)
         = \frac{1}{2} - \frac{1}{2} \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)
         = \frac{1}{2} \operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right).
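In practice, the erfc form is the most convenient one for numerical evaluation; for instance, in Python, where `math.erfc` is part of the standard library:

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Basic identities from above: Q(0) = 1/2 and Q(x) = 1 - Q(-x).
assert abs(Q(0.0) - 0.5) < 1e-15
assert abs(Q(1.3) - (1.0 - Q(-1.3))) < 1e-15
```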
An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as

    Q(x) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta}\right) \, d\theta.
This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
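Because the integration range is finite, even a naive quadrature reproduces Q accurately from Craig's formula; a small Python check (midpoint rule; the node count and tolerance are illustrative choices):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_craig(x: float, n: int = 10_000) -> float:
    """Evaluate Craig's formula with the midpoint rule on (0, pi/2), x > 0."""
    h = (math.pi / 2) / n
    s = sum(math.exp(-x * x / (2.0 * math.sin((k + 0.5) * h) ** 2))
            for k in range(n))
    return s * h / math.pi

for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(Q_craig(x) - Q(x)) < 1e-6
```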
Craig's formula was later extended by Behnad (2020) to the Q-function of the sum of two non-negative variables, as follows:

    Q(x+y) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta} - \frac{y^2}{2\cos^2\theta}\right) \, d\theta, \qquad x, y \geq 0.
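The same midpoint-rule quadrature confirms the extended formula numerically (a sketch; node count and tolerance are illustrative):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_sum(x: float, y: float, n: int = 20_000) -> float:
    """Behnad's extension of Craig's formula for Q(x + y), x, y >= 0."""
    h = (math.pi / 2) / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s += math.exp(-x * x / (2 * math.sin(t) ** 2)
                      - y * y / (2 * math.cos(t) ** 2))
    return s * h / math.pi

assert abs(Q_sum(1.0, 2.0) - Q(3.0)) < 1e-6
assert abs(Q_sum(0.5, 0.5) - Q(1.0)) < 1e-6
```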
Bounds and approximations
The Q-function is not an elementary function. However, it admits the upper and lower bounds

    \left(\frac{x}{1+x^2}\right) \phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x > 0,
where φ(x) is the density function of the standard normal distribution, and the bounds become increasingly tight for large x.
Using the substitution v = u²/2, the upper bound is derived as follows:

    Q(x) = \int_x^\infty \phi(u) \, du < \int_x^\infty \frac{u}{x} \phi(u) \, du = \int_{x^2/2}^\infty \frac{e^{-v}}{x\sqrt{2\pi}} \, dv = \left. -\frac{e^{-v}}{x\sqrt{2\pi}} \right|_{x^2/2}^{\infty} = \frac{\phi(x)}{x}.
Similarly, using φ′(u) = −u φ(u) and the quotient rule,

    \left(1 + \frac{1}{x^2}\right) Q(x) = \int_x^\infty \left(1 + \frac{1}{x^2}\right) \phi(u) \, du > \int_x^\infty \left(1 + \frac{1}{u^2}\right) \phi(u) \, du = \left. -\frac{\phi(u)}{u} \right|_x^\infty = \frac{\phi(x)}{x}.
Solving for Q(x) provides the lower bound.
The geometric mean of the upper and lower bounds gives a suitable approximation for Q(x):

    Q(x) \approx \frac{\phi(x)}{\sqrt{1+x^2}}, \qquad x \geq 0.
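These bounds and the geometric-mean approximation are easy to verify numerically; a minimal Python sketch (the sample points are arbitrary):

```python
import math

def phi(x: float) -> float:
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (0.5, 1.0, 2.0, 4.0):
    lower = x / (1.0 + x * x) * phi(x)
    upper = phi(x) / x
    approx = phi(x) / math.sqrt(1.0 + x * x)  # geometric mean of the two bounds
    assert lower < Q(x) < upper
    assert lower <= approx <= upper
```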
Tighter bounds and approximations of Q(x) can also be obtained by optimizing the following expression:

    \tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2+b}}.
For x ≥ 0, the best upper bound is given by a = 0.344 and b = 5.334, with a maximum absolute relative error of 0.44%. Likewise, the best approximation is given by a = 0.339 and b = 5.510, with a maximum absolute relative error of 0.27%. Finally, the best lower bound is given by a = 1/π and b = 2π, with a maximum absolute relative error of 1.17%.
The Chernoff bound of the Q-function is

    Q(x) \leq e^{-x^2/2}, \qquad x > 0.
Improved exponential bounds and a pure exponential approximation are

    Q(x) \leq \tfrac{1}{4} e^{-x^2} + \tfrac{1}{4} e^{-x^2/2} \leq \tfrac{1}{2} e^{-x^2/2}, \qquad x > 0,

    Q(x) \approx \tfrac{1}{12} e^{-x^2/2} + \tfrac{1}{4} e^{-\frac{2}{3}x^2}, \qquad x > 0.
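A quick numeric comparison of these exponential bounds and of the two-term approximation (sample points and the relative-error threshold are illustrative choices):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (0.1, 0.5, 1.0, 2.0, 3.0, 5.0):
    chernoff = math.exp(-x * x / 2.0)
    improved = 0.25 * math.exp(-x * x) + 0.25 * math.exp(-x * x / 2.0)
    # The bound chain: Q(x) <= improved <= (1/2) e^{-x^2/2} <= Chernoff.
    assert Q(x) <= improved <= 0.5 * math.exp(-x * x / 2.0) <= chernoff

for x in (0.5, 1.0, 3.0, 5.0):
    approx = math.exp(-x * x / 2.0) / 12.0 + math.exp(-2.0 * x * x / 3.0) / 4.0
    assert abs(approx - Q(x)) / Q(x) < 0.2  # an approximation, not a bound
```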
The above were generalized by Tanash & Riihonen (2020), who showed that Q(x) can be accurately approximated or bounded by

    \tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.
In particular, they presented a systematic methodology to solve for the numerical coefficients \{(a_n, b_n)\}_{n=1}^N that yield a minimax approximation or bound: Q(x) ≈ \tilde{Q}(x), Q(x) ≤ \tilde{Q}(x), or Q(x) ≥ \tilde{Q}(x) for x ≥ 0. With the example coefficients tabulated in the paper for N = 20, the relative and absolute approximation errors are less than 2.831 × 10⁻⁶ and 1.416 × 10⁻⁶, respectively. The coefficients \{(a_n, b_n)\}_{n=1}^N for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.
Another approximation of Q(x) for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed that, for an appropriate choice of parameters {A, B},

    f(x; A, B) = \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B \sqrt{\pi}\, x} \approx \operatorname{erfc}(x).
The absolute error between f(x; A, B) and erfc(x) over the range [0, R] is minimized by evaluating

    \{A, B\} = \underset{\{A, B\}}{\arg\min} \; \frac{1}{R} \int_0^R \left| f(x; A, B) - \operatorname{erfc}(x) \right| \, dx.
Using R = 20 and numerically integrating, they found that the minimum error occurred when {A, B} = {1.98, 1.135}, which gives a good approximation for all x ≥ 0.
Substituting these values and using the relationship between Q(x) and erfc(x) from above gives

    Q(x) \approx \frac{\left(1 - e^{-1.98x/\sqrt{2}}\right) e^{-x^2/2}}{1.135 \sqrt{2\pi}\, x}, \qquad x \geq 0.
Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.
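A numeric sanity check of the Karagiannidis–Lioumpas approximation with the tabulated {A, B} = {1.98, 1.135} (the sample points and error threshold are illustrative; note that the fit minimizes absolute rather than relative error, so its relative error grows in the far tail, where f(x; A, B) behaves like erfc(x)/B):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_kl(x: float) -> float:
    """Karagiannidis-Lioumpas approximation, {A, B} = {1.98, 1.135}, x > 0."""
    return ((1.0 - math.exp(-1.98 * x / math.sqrt(2.0)))
            * math.exp(-x * x / 2.0)
            / (1.135 * math.sqrt(2.0 * math.pi) * x))

for x in (0.25, 0.5, 1.0, 2.0, 4.0):
    assert abs(Q_kl(x) - Q(x)) < 5e-3
```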
A tighter and more tractable approximation of Q(x) for positive arguments x ∈ [0, ∞) is given by López-Benítez & Casadevall (2011), based on a second-order exponential function:

    Q(x) \approx e^{-ax^2 - bx - c}, \qquad x \geq 0.
The fitting coefficients (a, b, c) can be optimized over any desired range of arguments, either to minimize the sum of squared errors (a = 0.3842, b = 0.7640, c = 0.6964 for x ∈ [0, 20]) or to minimize the maximum absolute error (a = 0.4920, b = 0.2887, c = 1.1893 for x ∈ [0, 20]). This approximation offers a good trade-off between accuracy and analytical tractability; for example, the extension to any arbitrary power of Q(x) is trivial and does not alter the algebraic form of the approximation.
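A minimal check of this second-order exponential fit with the sum-of-squared-errors coefficients (points and tolerance are illustrative; since the fit minimizes squared error over [0, 20], its relative accuracy can degrade deep in the tail even though the absolute error stays small):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_lb(x: float, a: float = 0.3842, b: float = 0.7640,
         c: float = 0.6964) -> float:
    """Lopez-Benitez & Casadevall fit, SSE-optimized over x in [0, 20]."""
    return math.exp(-a * x * x - b * x - c)

for x in (0.0, 1.0, 2.0, 5.0):
    assert abs(Q_lb(x) - Q(x)) < 5e-3
```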
Inverse Q
The inverse Q-function can be related to the inverse error functions:

    Q^{-1}(y) = \sqrt{2}\, \operatorname{erf}^{-1}(1 - 2y) = \sqrt{2}\, \operatorname{erfc}^{-1}(2y).
The function Q^{-1}(y) finds application in digital communications. It is usually expressed in dB and generally called the Q-factor:

    \mathrm{Q\text{-}factor} = 20 \log_{10}\left(Q^{-1}(y)\right)~\mathrm{dB},
where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit-error rate equal to y.
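A short Python sketch of Q⁻¹ and the Q-factor using the standard library's inverse normal CDF (the BER value is an arbitrary example):

```python
import math
from statistics import NormalDist

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(y: float) -> float:
    """Q^{-1}(y) = Phi^{-1}(1 - y) for 0 < y < 1."""
    return NormalDist().inv_cdf(1.0 - y)

def q_factor_db(ber: float) -> float:
    """Q-factor in dB for a given bit-error rate."""
    return 20.0 * math.log10(Q_inv(ber))

# Round trip: Q(Q^{-1}(y)) recovers y.
ber = 1e-3
assert abs(Q(Q_inv(ber)) - ber) < 1e-9
```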
Values
The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.

    x     Q(x)
    0.0   0.500000
    0.5   0.308538
    1.0   0.158655
    1.5   0.066807
    2.0   0.022750
    2.5   0.006210
    3.0   0.001350
    3.5   0.000233
    4.0   0.0000317
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:

    Q(\mathbf{x}) = \mathbb{P}(\mathbf{X} \geq \mathbf{x}),

where \mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma) follows the multivariate normal distribution with covariance \Sigma and the threshold is of the form \mathbf{x} = \gamma \Sigma \mathbf{l}^* for some positive vector \mathbf{l}^* > \mathbf{0} and positive constant \gamma > 0. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, it can be approximated arbitrarily well as \gamma becomes large.