- Source: Distance correlation
In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors. This is in contrast to Pearson's correlation, which can only detect linear association between two random variables.
Distance correlation can be used to perform a statistical test of dependence with a permutation test. One first computes the distance correlation (involving the re-centering of Euclidean distance matrices) between two random vectors, and then compares this value to the distance correlations of many shuffles of the data.
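As a concrete illustration, the following is a minimal Python/NumPy sketch of this procedure; the helper names, the toy data, and the number of shuffles are illustrative choices rather than part of the article, and the sample quantities follow the definitions given in the Definitions section below.

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance from doubly centered distance matrices."""
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise |X_j - X_k|
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)   # pairwise |Y_j - Y_k|
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()  # double centering
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

def dcor(x, y):
    """Sample distance correlation."""
    denom = np.sqrt(dcov2(x, x) * dcov2(y, y))
    return np.sqrt(dcov2(x, y) / denom) if denom > 0 else 0.0

def dcor_permutation_test(x, y, n_perm=999, seed=0):
    """Permutation p-value: shuffle y, recompute dCor, compare with the observed value."""
    rng = np.random.default_rng(seed)
    observed = dcor(x, y)
    exceed = sum(dcor(x, y[rng.permutation(len(y))]) >= observed for _ in range(n_perm))
    return observed, (exceed + 1) / (n_perm + 1)

# Toy data with a nonlinear dependence that Pearson correlation would miss.
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))
y = x ** 2 + 0.1 * rng.normal(size=(200, 1))
print(dcor_permutation_test(x, y, n_perm=199))
```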
Background
The classical measure of dependence, the Pearson correlation coefficient, is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 by Gábor J. Székely in several lectures to address this deficiency of Pearson's correlation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence, while distance correlation = 0 does imply independence. The first results on distance correlation were published in 2007 and 2009. It was proved that distance covariance is the same as the Brownian covariance. These measures are examples of energy distances.
The distance correlation is derived from a number of other quantities that are used in its specification, specifically: distance variance, distance standard deviation, and distance covariance. These quantities take the same roles as the ordinary moments with corresponding names in the specification of the Pearson product-moment correlation coefficient.
Definitions
Distance covariance
Let us start with the definition of the sample distance covariance. Let (Xk, Yk), k = 1, 2, ..., n be a statistical sample from a pair of real-valued or vector-valued random variables (X, Y). First, compute the n × n distance matrices (a_{j,k}) and (b_{j,k}) containing all pairwise distances
\begin{aligned}
a_{j,k} &= \|X_j - X_k\|, \qquad j,k = 1,2,\ldots,n, \\
b_{j,k} &= \|Y_j - Y_k\|, \qquad j,k = 1,2,\ldots,n,
\end{aligned}
where ‖·‖ denotes the Euclidean norm. Then take all doubly centered distances
A_{j,k} := a_{j,k} - \overline{a}_{j\cdot} - \overline{a}_{\cdot k} + \overline{a}_{\cdot\cdot}, \qquad B_{j,k} := b_{j,k} - \overline{b}_{j\cdot} - \overline{b}_{\cdot k} + \overline{b}_{\cdot\cdot},
where ā_{j·} is the j-th row mean, ā_{·k} is the k-th column mean, and ā_{··} is the grand mean of the distance matrix of the X sample. The notation is similar for the b values. (In the matrices of centered distances (A_{j,k}) and (B_{j,k}) all rows and all columns sum to zero.) The squared sample distance covariance (a scalar) is simply the arithmetic average of the products A_{j,k} B_{j,k}:
\operatorname{dCov}_n^2(X,Y) := \frac{1}{n^2} \sum_{j=1}^n \sum_{k=1}^n A_{j,k}\, B_{j,k}.
The statistic T_n = n·dCov_n²(X, Y) determines a consistent multivariate test of independence of random vectors in arbitrary dimensions. For an implementation, see the dcov.test function in the energy package for R.
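A short NumPy sketch (an illustration, not from the article) of these definitions: it builds the distance matrices, applies the double centering, checks that every row and column of the centered matrices sums to zero, and evaluates dCov_n² together with the test statistic T_n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.normal(size=(n, 2))                                   # sample from a 2-d variable
Y = rng.normal(size=(n, 3))                                   # sample from a 3-d variable

a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)    # a_{j,k} = |X_j - X_k|
b = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)    # b_{j,k} = |Y_j - Y_k|
A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()   # doubly centered distances
B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()

assert np.allclose(A.sum(axis=0), 0) and np.allclose(A.sum(axis=1), 0)

dcov2_n = (A * B).mean()       # (1/n^2) * sum_{j,k} A_{j,k} B_{j,k}
T_n = n * dcov2_n              # test statistic used by dcov.test
print(dcov2_n, T_n)
```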
The population value of distance covariance can be defined along the same lines. Let X be a random variable that takes values in a p-dimensional Euclidean space with probability distribution μ and let Y be a random variable that takes values in a q-dimensional Euclidean space with probability distribution ν, and suppose that X and Y have finite expectations. Write
a_\mu(x) := \operatorname{E}[\|X - x\|], \quad D(\mu) := \operatorname{E}[a_\mu(X)], \quad d_\mu(x, x') := \|x - x'\| - a_\mu(x) - a_\mu(x') + D(\mu).
Finally, define the population value of squared distance covariance of X and Y as
\operatorname{dCov}^2(X,Y) := \operatorname{E}\big[d_\mu(X,X')\, d_\nu(Y,Y')\big].
One can show that this is equivalent to the following definition:
\begin{aligned}
\operatorname{dCov}^2(X,Y) :={} & \operatorname{E}[\|X-X'\|\,\|Y-Y'\|] + \operatorname{E}[\|X-X'\|]\,\operatorname{E}[\|Y-Y'\|] \\
& {} - \operatorname{E}[\|X-X'\|\,\|Y-Y''\|] - \operatorname{E}[\|X-X''\|\,\|Y-Y'\|] \\
={} & \operatorname{E}[\|X-X'\|\,\|Y-Y'\|] + \operatorname{E}[\|X-X'\|]\,\operatorname{E}[\|Y-Y'\|] \\
& {} - 2\operatorname{E}[\|X-X'\|\,\|Y-Y''\|],
\end{aligned}
where E denotes expected value, and (X', Y') and (X'', Y'') denote independent and identically distributed (iid) copies of the pair (X, Y). Distance covariance can be expressed in terms of the classical Pearson's covariance, cov, as follows:
\operatorname{dCov}^2(X,Y) = \operatorname{cov}(\|X-X'\|, \|Y-Y'\|) - 2\operatorname{cov}(\|X-X'\|, \|Y-Y''\|).
This identity shows that the distance covariance is not the same as the covariance of distances, cov(‖X − X'‖, ‖Y − Y'‖), which can be zero even if X and Y are not independent.
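A Monte Carlo sketch of the population formula above (the dependent toy pair Y = X² is an assumption chosen for illustration): it estimates dCov²(X, Y) from three iid copies of the pair and, for comparison, the plain covariance of distances, showing that the two quantities differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

def draw(size):
    x = rng.normal(size=size)
    return x, x ** 2                        # dependent, yet Pearson correlation is 0

(X, Y), (Xp, Yp), (_, Yq) = draw(n), draw(n), draw(n)   # (X,Y), (X',Y'), (X'',Y'') iid

dX, dY, dYq = np.abs(X - Xp), np.abs(Y - Yp), np.abs(Y - Yq)

dcov2 = np.mean(dX * dY) + dX.mean() * dY.mean() - 2 * np.mean(dX * dYq)
cov_of_distances = np.mean(dX * dY) - dX.mean() * dY.mean()

print(dcov2, cov_of_distances)              # dCov^2 > 0 because X and Y are dependent
```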
Alternatively, the distance covariance can be defined as the weighted L2 distance between the joint characteristic function of the random variables and the product of their marginal characteristic functions:
\operatorname{dCov}^2(X,Y) = \frac{1}{c_p c_q} \int_{\mathbb{R}^{p+q}} \frac{\left| \varphi_{X,Y}(s,t) - \varphi_X(s)\,\varphi_Y(t) \right|^2}{|s|_p^{1+p}\, |t|_q^{1+q}} \, dt \, ds
where φ_{X,Y}(s, t), φ_X(s), and φ_Y(t) are the characteristic functions of (X, Y), X, and Y, respectively, p, q denote the Euclidean dimension of X and Y, and thus of s and t, and c_p, c_q are constants. The weight function
(c_p c_q |s|_p^{1+p} |t|_q^{1+q})^{-1}
is chosen to produce a scale equivariant and rotation invariant measure that does not go to zero for dependent variables. One interpretation of the characteristic function definition is that the variables e^{isX} and e^{itY} are cyclic representations of X and Y with different periods given by s and t, and the expression φ_{X,Y}(s, t) − φ_X(s) φ_Y(t) in the numerator of the characteristic function definition of distance covariance is simply the classical covariance of e^{isX} and e^{itY}. The characteristic function definition clearly shows that
dCov²(X, Y) = 0 if and only if X and Y are independent.
Distance variance and distance standard deviation
The distance variance is a special case of distance covariance when the two variables are identical. The population value of distance variance is the square root of
\operatorname{dVar}^2(X) := \operatorname{E}[\|X-X'\|^2] + \operatorname{E}^2[\|X-X'\|] - 2\operatorname{E}[\|X-X'\|\,\|X-X''\|],
where X, X', and X'' are independent and identically distributed random variables, E denotes the expected value, and f²(·) = (f(·))² for a function f(·), e.g., E²[·] = (E[·])².
The sample distance variance is the square root of
\operatorname{dVar}_n^2(X) := \operatorname{dCov}_n^2(X,X) = \frac{1}{n^2} \sum_{k,\ell} A_{k,\ell}^2,
which is a relative of Corrado Gini's mean difference introduced in 1912 (but Gini did not work with centered distances).
The distance standard deviation is the square root of the distance variance.
Distance correlation
The distance correlation of two random variables is obtained by dividing their distance covariance by the product of their distance standard deviations. The distance correlation is the square root of
\operatorname{dCor}^2(X,Y) = \frac{\operatorname{dCov}^2(X,Y)}{\sqrt{\operatorname{dVar}^2(X)\,\operatorname{dVar}^2(Y)}},
and the sample distance correlation is defined by substituting the sample distance covariance and distance variances for the population coefficients above.
For easy computation of the sample distance correlation, see the dcor function in the energy package for R.
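For Python users, the following is a usage sketch under the assumption that the third-party dcor package is installed (pip install dcor); it plays a role analogous to the energy package in R.

```python
import numpy as np
import dcor   # third-party package, assumed installed

rng = np.random.default_rng(0)
x = rng.normal(size=(300, 2))
y = np.sum(x, axis=1, keepdims=True) ** 2     # nonlinear dependence on x

print(dcor.distance_correlation(x, y))        # sample distance correlation
print(dcor.distance_covariance(x, y))         # sample distance covariance
```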
Properties
Distance correlation
Distance covariance
This last property is the most important effect of working with centered distances.
The statistic dCov_n²(X, Y) is a biased estimator of dCov²(X, Y). Under independence of X and Y
\begin{aligned}
\operatorname{E}[\operatorname{dCov}_n^2(X,Y)] &= \frac{n-1}{n^2} \left\{ (n-2)\operatorname{dCov}^2(X,Y) + \operatorname{E}[\|X-X'\|]\,\operatorname{E}[\|Y-Y'\|] \right\} \\
&= \frac{n-1}{n^2}\, \operatorname{E}[\|X-X'\|]\,\operatorname{E}[\|Y-Y'\|].
\end{aligned}
An unbiased estimator of dCov²(X, Y) is given by Székely and Rizzo.
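A small simulation (an illustration, not from the article) checking the displayed bias formula under independence: for independent standard normal samples, E[dCov_n²] should be close to (n − 1)/n² · E‖X − X′‖ · E‖Y − Y′‖, and for standard normals E‖X − X′‖ = 2/√π.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 20_000
vals = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=(n, 1))
    y = rng.normal(size=(n, 1))                                  # independent of x
    a, b = np.abs(x - x.T), np.abs(y - y.T)                      # 1-d distance matrices
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    vals[r] = (A * B).mean()                                     # dCov_n^2 for this sample

theory = (n - 1) / n**2 * (2 / np.sqrt(np.pi)) ** 2
print(vals.mean(), theory)    # the two values should nearly agree
```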
Distance variance
Equality holds in (iv) if and only if one of the random variables X or Y is a constant.
Generalization
Distance covariance can be generalized to include powers of Euclidean distance. Define
\begin{aligned}
\operatorname{dCov}^2(X,Y;\alpha) :={} & \operatorname{E}[\|X-X'\|^{\alpha}\,\|Y-Y'\|^{\alpha}] + \operatorname{E}[\|X-X'\|^{\alpha}]\,\operatorname{E}[\|Y-Y'\|^{\alpha}] \\
& {} - 2\operatorname{E}[\|X-X'\|^{\alpha}\,\|Y-Y''\|^{\alpha}].
\end{aligned}
Then for every 0 < α < 2, X and Y are independent if and only if dCov²(X, Y; α) = 0. It is important to note that this characterization does not hold for exponent α = 2; in this case for bivariate (X, Y), dCor(X, Y; α = 2) is a deterministic function of the Pearson correlation. If a_{k,ℓ} and b_{k,ℓ} are the α-th powers of the corresponding distances, 0 < α ≤ 2, then the α sample distance covariance can be defined as the nonnegative number for which
\operatorname{dCov}_n^2(X,Y;\alpha) := \frac{1}{n^2} \sum_{k,\ell} A_{k,\ell}\, B_{k,\ell}.
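A minimal NumPy sketch of the α-generalized sample statistic (the function name is illustrative, not from the article); it simply raises the pairwise Euclidean distances to the power α before double centering.

```python
import numpy as np

def dcov2_alpha(x, y, alpha=1.0):
    """Squared alpha sample distance covariance, 0 < alpha <= 2."""
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** alpha
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1) ** alpha
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()
```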
One can extend dCov to metric-space-valued random variables X and Y: if X has law μ in a metric space with metric d, then define a_μ(x) := E[d(X, x)], D(μ) := E[a_μ(X)], and (provided a_μ is finite, i.e., X has finite first moment), d_μ(x, x') := d(x, x') − a_μ(x) − a_μ(x') + D(μ). Then if Y has law ν (in a possibly different metric space with finite first moment), define
\operatorname{dCov}^2(X,Y) := \operatorname{E}\big[d_\mu(X,X')\, d_\nu(Y,Y')\big].
This is non-negative for all such X, Y if and only if both metric spaces have negative type. Here, a metric space (M, d) has negative type if (M, d^{1/2}) is isometric to a subset of a Hilbert space. If both metric spaces have strong negative type, then dCov²(X, Y) = 0 if and only if X and Y are independent.
Alternative definition of distance covariance
The original distance covariance has been defined as the square root of dCov²(X, Y), rather than the squared coefficient itself. dCov(X, Y) has the property that it is the energy distance between the joint distribution of X, Y and the product of its marginals. Under this definition, however, the distance variance, rather than the distance standard deviation, is measured in the same units as the X distances.
Alternately, one could define distance covariance to be the square of the energy distance: dCov²(X, Y). In this case, the distance standard deviation of X is measured in the same units as the X distance, and there exists an unbiased estimator for the population distance covariance.
Under these alternate definitions, the distance correlation is also defined as the square dCor²(X, Y), rather than the square root.
Alternative formulation: Brownian covariance
Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:
\operatorname{cov}(X,Y)^2 = \operatorname{E}\left[ \big(X - \operatorname{E}(X)\big)\big(X' - \operatorname{E}(X')\big)\big(Y - \operatorname{E}(Y)\big)\big(Y' - \operatorname{E}(Y')\big) \right]
where E denotes the expected value and the prime denotes independent and identically distributed copies. We need the following generalization of this formula. If U(s), V(t) are arbitrary random processes defined for all real s and t then define the U-centered version of X by
X_U := U(X) - \operatorname{E}_X\left[ U(X) \mid \{ U(t) \} \right]
whenever the subtracted conditional expected value exists, and denote by Y_V the V-centered version of Y. The (U,V) covariance of (X,Y) is defined as the nonnegative number whose square is
\operatorname{cov}_{U,V}^2(X,Y) := \operatorname{E}\left[ X_U X'_U Y_V Y'_V \right]
whenever the right-hand side is nonnegative and finite. The most important example is when U and V are two-sided independent Brownian motions (Wiener processes) with expectation zero and covariance |s| + |t| − |s − t| = 2 min(s,t) (for nonnegative s, t only). (This is twice the covariance of the standard Wiener process; here the factor 2 simplifies the computations.) In this case the (U,V) covariance is called Brownian covariance and is denoted by
\operatorname{cov}_W(X,Y).
There is a surprising coincidence: The Brownian covariance is the same as the distance covariance:
\operatorname{cov}_W(X,Y) = \operatorname{dCov}(X,Y),
and thus Brownian correlation is the same as distance correlation.
On the other hand, if we replace the Brownian motion with the deterministic identity function id, then cov_id(X, Y) is simply the absolute value of the classical Pearson covariance,
\operatorname{cov}_{\mathrm{id}}(X,Y) = \left| \operatorname{cov}(X,Y) \right|.
Related metrics
Other correlational metrics, including kernel-based correlational metrics (such as the Hilbert-Schmidt Independence Criterion or HSIC) can also detect linear and nonlinear interactions. Both distance correlation and kernel-based metrics can be used in methods such as canonical correlation analysis and independent component analysis to yield stronger statistical power.
See also
RV coefficient
For a related third-order statistic, see Distance skewness.
Notes
References
Bickel, Peter J.; Xu, Ying (2009). "Discussion of: Brownian distance covariance". The Annals of Applied Statistics. 3 (4): 1266–1269. arXiv:0912.3295. doi:10.1214/09-AOAS312A.
Gini, C. (1912). Variabilità e Mutabilità. Bologna: Tipografia di Paolo Cuppini. Bibcode:1912vamu.book.....G.
Klebanov, L. B. (2005). N-distances and their applications. Prague: Karolinum Press, Charles University. ISBN 9788024611525.
Kosorok, Michael R. (2009). "Discussion of: Brownian distance covariance". The Annals of Applied Statistics. 3 (4): 1270–1278. arXiv:1010.0822. doi:10.1214/09-AOAS312B. S2CID 88518490.
Lyons, Russell (2014). "Distance covariance in metric spaces". The Annals of Probability. 41 (5): 3284–3305. arXiv:1106.5758. doi:10.1214/12-AOP803. S2CID 73677891.
Pearson, K. (1895a). "Note on regression and inheritance in the case of two parents". Proceedings of the Royal Society. 58: 240–242. Bibcode:1895RSPS...58..240P.
Pearson, K. (1895b). "Notes on the history of correlation". Biometrika. 13: 25–45. doi:10.1093/biomet/13.1.25.
Rizzo, Maria; Székely, Gábor (2021-02-22). "energy: E-Statistics: Multivariate Inference via the Energy of Data". Version: 1.7-8. Retrieved 2021-10-31.
Székely, Gábor J.; Rizzo, Maria L.; Bakirov, Nail K. (2007). "Measuring and testing independence by correlation of distances". The Annals of Statistics. 35 (6): 2769–2794. arXiv:0803.4101. doi:10.1214/009053607000000505. S2CID 5661488.
Székely, Gábor J.; Rizzo, Maria L. (2009a). "Brownian distance covariance". The Annals of Applied Statistics. 3 (4): 1236–1265. doi:10.1214/09-AOAS312. PMC 2889501. PMID 20574547.
Székely, Gábor J.; Rizzo, Maria L. (2009b). "Rejoinder: Brownian distance covariance". The Annals of Applied Statistics. 3 (4): 1303–1308. arXiv:1010.0844. doi:10.1214/09-AOAS312REJ.
Székely, Gábor J.; Rizzo, Maria L. (2012). "On the uniqueness of distance covariance". Statistics & Probability Letters. 82 (12): 2278–2282. doi:10.1016/j.spl.2012.08.007.
Székely, Gabor J.; Rizzo, Maria L. (2014). "Partial Distance Correlation with Methods for Dissimilarities". The Annals of Statistics. 42 (6): 2382–2412. arXiv:1310.2926. Bibcode:2014arXiv1310.2926S. doi:10.1214/14-AOS1255. S2CID 55801702.
External links
E-statistics (energy statistics) Archived 2019-09-13 at the Wayback Machine