In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents are statistically independent from each other. ICA was invented by Jeanny Hérault and Christian Jutten in 1985. ICA is a special case of blind source separation. A common example application of ICA is the "cocktail party problem" of listening in on one person's speech in a noisy room.
Introduction
Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results. It is also used for signals that are not supposed to be generated by mixing for analysis purposes.
A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. Note that a filtered and delayed signal is a copy of a dependent component, and thus the statistical independence assumption is not violated.
Mixing weights for constructing the $M$ observed signals from the $N$ components can be placed in an $M\times N$ matrix. An important point to consider is that if $N$ sources are present, at least $N$ observations (e.g. microphones if the observed signal is audio) are needed to recover the original signals. When there are an equal number of observations and source signals, the mixing matrix is square ($M=N$). Other cases of underdetermined ($M<N$) and overdetermined ($M>N$) mixing have been investigated.
The success of ICA separation of mixed signals relies on two assumptions and three effects of mixing source signals. Two assumptions:
The source signals are independent of each other.
The values in each source signal have non-Gaussian distributions.
Three effects of mixing source signals:
Independence: As per assumption 1, the source signals are independent; however, their signal mixtures are not. This is because the signal mixtures share the same source signals.
Normality: According to the Central Limit Theorem, the distribution of a sum of independent random variables with finite variance tends towards a Gaussian distribution. Loosely speaking, a sum of two independent random variables usually has a distribution that is closer to Gaussian than either of the two original variables. Here we consider the value of each signal as the random variable.
Complexity: The temporal complexity of any signal mixture is greater than that of its simplest constituent source signal.
These principles form the basis of ICA: if the signals extracted from a set of mixtures are independent and have non-Gaussian distributions (or have low complexity), then they must be source signals.
Defining component independence
ICA finds the independent components (also called factors, latent variables or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are
Minimization of mutual information
Maximization of non-Gaussianity
The Minimization-of-Mutual information (MMI) family of ICA algorithms uses measures like Kullback-Leibler Divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy.
Typical algorithms for ICA use centering (subtract the mean to create a zero mean signal), whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all dimensions are treated equally a priori before the algorithm is run. Well-known algorithms for ICA include infomax, FastICA, JADE, and kernel-independent component analysis, among others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals.
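As a concrete illustration of these preprocessing steps, the following is a minimal NumPy sketch (independent of any particular ICA package) that centers a data matrix and whitens it using the eigenvalue decomposition of the covariance matrix; the data, matrices, and function names are illustrative assumptions.

```python
import numpy as np

def center_and_whiten(X):
    """Center and whiten X, whose rows are observed mixtures and whose
    columns are samples, so that the result has (approximately) identity
    covariance. Returns the whitened data and the whitening matrix."""
    # Centering: subtract the mean of each observed signal.
    Xc = X - X.mean(axis=1, keepdims=True)
    # Whitening via the eigenvalue decomposition of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc))
    W_white = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return W_white @ Xc, W_white

# Illustrative data: two mixtures of two non-Gaussian sources.
rng = np.random.default_rng(0)
S = np.vstack([np.sign(rng.standard_normal(1000)),   # sub-Gaussian source
               rng.laplace(size=1000)])              # super-Gaussian source
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                           # mixing matrix
X = A @ S
Z, W_white = center_and_whiten(X)
print(np.round(np.cov(Z), 2))   # close to the identity matrix
```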
ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.
Mathematical definitions
Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.
= General =
The data are represented by the observed random vector ${\boldsymbol {x}}=(x_{1},\ldots ,x_{m})^{T}$ and the hidden components as the random vector ${\boldsymbol {s}}=(s_{1},\ldots ,s_{n})^{T}.$ The task is to transform the observed data ${\boldsymbol {x}}$, using a linear static transformation ${\boldsymbol {W}}$ as ${\boldsymbol {s}}={\boldsymbol {W}}{\boldsymbol {x}}$, into a vector of maximally independent components ${\boldsymbol {s}}$, measured by some function $F(s_{1},\ldots ,s_{n})$ of independence.
= Generative model =
Linear noiseless ICA
The components $x_{i}$ of the observed random vector ${\boldsymbol {x}}=(x_{1},\ldots ,x_{m})^{T}$ are generated as a sum of the independent components $s_{k}$, $k=1,\ldots ,n$:

$$x_{i}=a_{i,1}s_{1}+\cdots +a_{i,k}s_{k}+\cdots +a_{i,n}s_{n}$$

weighted by the mixing weights $a_{i,k}$.
The same generative model can be written in vector form as ${\boldsymbol {x}}=\sum _{k=1}^{n}s_{k}{\boldsymbol {a}}_{k}$, where the observed random vector ${\boldsymbol {x}}$ is represented by the basis vectors ${\boldsymbol {a}}_{k}=(a_{1,k},\ldots ,a_{m,k})^{T}$. The basis vectors ${\boldsymbol {a}}_{k}$ form the columns of the mixing matrix ${\boldsymbol {A}}=({\boldsymbol {a}}_{1},\ldots ,{\boldsymbol {a}}_{n})$, and the generative formula can be written as ${\boldsymbol {x}}={\boldsymbol {A}}{\boldsymbol {s}}$, where ${\boldsymbol {s}}=(s_{1},\ldots ,s_{n})^{T}$.
Given the model and realizations (samples) ${\boldsymbol {x}}_{1},\ldots ,{\boldsymbol {x}}_{N}$ of the random vector ${\boldsymbol {x}}$, the task is to estimate both the mixing matrix ${\boldsymbol {A}}$ and the sources ${\boldsymbol {s}}$. This is done by adaptively calculating the ${\boldsymbol {w}}$ vectors and setting up a cost function which either maximizes the non-Gaussianity of the calculated $s_{k}={\boldsymbol {w}}^{T}{\boldsymbol {x}}$ or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.
The original sources ${\boldsymbol {s}}$ can be recovered by multiplying the observed signals ${\boldsymbol {x}}$ with the inverse of the mixing matrix ${\boldsymbol {W}}={\boldsymbol {A}}^{-1}$, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square ($n=m$). If the number of basis vectors is greater than the dimensionality of the observed vectors, $n>m$, the task is overcomplete but is still solvable with the pseudo inverse.
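For intuition, here is a minimal NumPy sketch of the square noiseless model ${\boldsymbol {x}}={\boldsymbol {A}}{\boldsymbol {s}}$: if the mixing matrix were known, the sources would be recovered exactly by the unmixing matrix ${\boldsymbol {W}}={\boldsymbol {A}}^{-1}$; ICA's task is to estimate such a ${\boldsymbol {W}}$ from ${\boldsymbol {x}}$ alone. The specific matrix and source signals below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500

# Two independent non-Gaussian sources (rows of s).
s = np.vstack([rng.uniform(-1, 1, n_samples),   # sub-Gaussian source
               rng.laplace(size=n_samples)])    # super-Gaussian source

# Square mixing matrix A (n = m = 2) and observed mixtures x = A s.
A = np.array([[0.8, 0.3],
              [0.2, 0.9]])
x = A @ s

# With A known, the unmixing matrix is W = A^-1; ICA must estimate it blindly.
W = np.linalg.inv(A)
s_recovered = W @ x
print(np.allclose(s_recovered, s))   # True: exact recovery in the noiseless square case
```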
Linear noisy ICA
With the added assumption of zero-mean and uncorrelated Gaussian noise $n\sim N(0,\operatorname {diag} (\Sigma ))$, the ICA model takes the form

$${\boldsymbol {x}}={\boldsymbol {A}}{\boldsymbol {s}}+n.$$
Nonlinear ICA
The mixing of the sources does not need to be linear. Using a nonlinear mixing function $f(\cdot |\theta )$ with parameters $\theta$, the nonlinear ICA model is

$$x=f(s|\theta )+n.$$
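As a toy illustration of such a model, the sketch below draws independent sources and passes them through one arbitrary choice of nonlinear mixing function (a linear map followed by an elementwise tanh) with additive Gaussian noise; the particular $f$ and its parameters are purely illustrative, and this only generates data, it does not invert the model.

```python
import numpy as np

rng = np.random.default_rng(10)
n, m, T = 2, 2, 1000

# Independent non-Gaussian sources.
S = rng.laplace(size=(n, T))

# One arbitrary nonlinear mixing f(s | theta): a linear map followed by an
# elementwise tanh, plus additive Gaussian noise (all choices illustrative).
theta = rng.standard_normal((m, n))
noise = 0.05 * rng.standard_normal((m, T))
X = np.tanh(theta @ S) + noise      # x = f(s | theta) + n
print(X.shape)                      # (2, 1000) observed nonlinear mixtures
```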
= Identifiability =
The independent components are identifiable up to a permutation and scaling of the sources. This identifiability requires that:
At most one of the sources $s_{k}$ is Gaussian,
The number of observed mixtures, $m$, must be at least as large as the number of estimated components $n$: $m\geq n$. It is equivalent to say that the mixing matrix ${\boldsymbol {A}}$ must be of full rank for its inverse to exist.
Binary ICA
A special variant of ICA is binary ICA in which both signal sources and monitors are in binary form and observations from monitors are disjunctive mixtures of binary independent sources. The problem was shown to have applications in many domains including medical diagnosis, multi-cluster assignment, network tomography and internet resource management.
Let ${x_{1},x_{2},\ldots ,x_{m}}$ be the set of binary variables from $m$ monitors and ${y_{1},y_{2},\ldots ,y_{n}}$ be the set of binary variables from $n$ sources. Source-monitor connections are represented by the (unknown) mixing matrix ${\boldsymbol {G}}$, where $g_{ij}=1$ indicates that the signal from the $j$-th source can be observed by the $i$-th monitor. The system works as follows: at any time, if a source $j$ is active ($y_{j}=1$) and it is connected to the monitor $i$ ($g_{ij}=1$), then the monitor $i$ will observe some activity ($x_{i}=1$). Formally we have:

$$x_{i}=\bigvee _{j=1}^{n}(g_{ij}\wedge y_{j}),\quad i=1,2,\ldots ,m,$$

where $\wedge$ is Boolean AND and $\vee$ is Boolean OR. Noise is not explicitly modelled; rather, it can be treated as independent sources.
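To make the disjunctive (OR of ANDs) mixing concrete, the following NumPy sketch simulates observations from the binary model above; the connection matrix, activation probability, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 4, 3, 10   # monitors, sources, time steps (illustrative sizes)

# Unknown connection matrix G (m x n): g_ij = 1 if monitor i can see source j.
G = rng.integers(0, 2, size=(m, n))

# Independent binary sources y (n x T), each active with probability 0.3.
Y = (rng.random((n, T)) < 0.3).astype(int)

# Disjunctive mixing: x_i(t) = OR_j ( g_ij AND y_j(t) ).
X = (G @ Y > 0).astype(int)   # the integer product counts active connected sources; > 0 gives the OR
print(X)
```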
The above problem can be heuristically solved by assuming the variables are continuous and running FastICA on the binary observation data to get the mixing matrix ${\boldsymbol {G}}$ (real values), then applying rounding techniques to ${\boldsymbol {G}}$ to obtain the binary values. This approach has been shown to produce a highly inaccurate result.
Another method is to use dynamic programming: recursively breaking the observation matrix ${\boldsymbol {X}}$ into its sub-matrices and running the inference algorithm on these sub-matrices. The key observation which leads to this algorithm is that the sub-matrix ${\boldsymbol {X}}^{0}$ of ${\boldsymbol {X}}$ where $x_{ij}=0,\forall j$ corresponds to the unbiased observation matrix of hidden components that do not have a connection to the $i$-th monitor. Experimental results show that this approach is accurate under moderate noise levels.
The Generalized Binary ICA framework introduces a broader problem formulation which does not necessitate any knowledge on the generative model. In other words, this method attempts to decompose a source into its independent components (as much as possible, and without losing any information) with no prior assumption on the way it was generated. Although this problem appears quite complex, it can be accurately solved with a branch and bound search tree algorithm or tightly upper bounded with a single multiplication of a matrix with a vector.
Methods for blind source separation
= Projection pursuit =
Signal mixtures tend to have Gaussian probability density functions, and source signals tend to have non-Gaussian probability density functions. Each source signal can be extracted from a set of signal mixtures by taking the inner product of a weight vector with those signal mixtures, where this inner product provides an orthogonal projection of the signal mixtures. The remaining challenge is finding such a weight vector. One type of method for doing so is projection pursuit.
Projection pursuit seeks one projection at a time such that the extracted signal is as non-Gaussian as possible. This contrasts with ICA, which typically extracts M signals simultaneously from M signal mixtures, and therefore requires estimating an M × M unmixing matrix. One practical advantage of projection pursuit over ICA is that fewer than M signals can be extracted if required, where each source signal is extracted from the M signal mixtures using an M-element weight vector.
We can use kurtosis to recover multiple source signals by finding the correct weight vectors with projection pursuit.
The kurtosis of the probability density function of a signal, for a finite sample, is computed as

$$K={\frac {\operatorname {E} [(\mathbf {y} -\mathbf {\overline {y}} )^{4}]}{(\operatorname {E} [(\mathbf {y} -\mathbf {\overline {y}} )^{2}])^{2}}}-3$$

where $\mathbf {\overline {y}}$ is the sample mean of $\mathbf {y}$, the extracted signals. The constant 3 ensures that Gaussian signals have zero kurtosis, super-Gaussian signals have positive kurtosis, and sub-Gaussian signals have negative kurtosis. The denominator is the variance of $\mathbf {y}$, and ensures that the measured kurtosis takes account of signal variance. The goal of projection pursuit is to maximize the kurtosis and make the extracted signal as non-normal as possible.
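A small NumPy sketch of this finite-sample (excess) kurtosis follows; the test signals are illustrative.

```python
import numpy as np

def excess_kurtosis(y):
    """Finite-sample kurtosis K = E[(y - mean)^4] / (E[(y - mean)^2])^2 - 3."""
    d = y - y.mean()
    return np.mean(d**4) / np.mean(d**2)**2 - 3.0

rng = np.random.default_rng(3)
print(excess_kurtosis(rng.standard_normal(100_000)))   # ~0    (Gaussian)
print(excess_kurtosis(rng.laplace(size=100_000)))      # ~+3   (super-Gaussian)
print(excess_kurtosis(rng.uniform(-1, 1, 100_000)))    # ~-1.2 (sub-Gaussian)
```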
Using kurtosis as a measure of non-normality, we can now examine how the kurtosis of a signal $\mathbf {y} =\mathbf {w} ^{T}\mathbf {x}$ extracted from a set of M mixtures $\mathbf {x} =(x_{1},x_{2},\ldots ,x_{M})^{T}$ varies as the weight vector $\mathbf {w}$ is rotated around the origin. Given our assumption that each source signal $\mathbf {s}$ is super-Gaussian, we would expect:
the kurtosis of the extracted signal $\mathbf {y}$ to be maximal precisely when $\mathbf {y} =\mathbf {s}$,
the kurtosis of the extracted signal $\mathbf {y}$ to be maximal when $\mathbf {w}$ is orthogonal to the projected axes $S_{1}$ or $S_{2}$, because we know the optimal weight vector should be orthogonal to a transformed axis $S_{1}$ or $S_{2}$.
For multiple source mixture signals, we can use kurtosis and Gram-Schmidt orthogonalization (GSO) to recover the signals. Given M signal mixtures in an M-dimensional space, GSO projects these data points onto an (M-1)-dimensional space by using the weight vector. We can guarantee the independence of the extracted signals with the use of GSO.
In order to find the correct value of $\mathbf {w}$, we can use the gradient descent method. We first whiten the data, transforming $\mathbf {x}$ into a new mixture $\mathbf {z}$ which has unit variance, with $\mathbf {z} =(z_{1},z_{2},\ldots ,z_{M})^{T}$. This can be achieved by applying the singular value decomposition to $\mathbf {x}$,

$$\mathbf {x} =\mathbf {U} \mathbf {D} \mathbf {V} ^{T}$$

rescaling each vector $U_{i}=U_{i}/\operatorname {E} (U_{i}^{2})$, and letting $\mathbf {z} =\mathbf {U}$. The signal extracted by a weight vector $\mathbf {w}$ is $\mathbf {y} =\mathbf {w} ^{T}\mathbf {z}$. If the weight vector $\mathbf {w}$ has unit length, then the variance of $\mathbf {y}$ is also 1, that is $\operatorname {E} [(\mathbf {w} ^{T}\mathbf {z} )^{2}]=1$. The kurtosis can thus be written as:

$$K={\frac {\operatorname {E} [\mathbf {y} ^{4}]}{(\operatorname {E} [\mathbf {y} ^{2}])^{2}}}-3=\operatorname {E} [(\mathbf {w} ^{T}\mathbf {z} )^{4}]-3.$$
The updating process for $\mathbf {w}$ is:

$$\mathbf {w} _{new}=\mathbf {w} _{old}-\eta \operatorname {E} [\mathbf {z} (\mathbf {w} _{old}^{T}\mathbf {z} )^{3}].$$

where $\eta$ is a small constant to guarantee that $\mathbf {w}$ converges to the optimal solution. After each update, we normalize $\mathbf {w} _{new}={\frac {\mathbf {w} _{new}}{|\mathbf {w} _{new}|}}$, set $\mathbf {w} _{old}=\mathbf {w} _{new}$, and repeat the updating process until convergence. We can also use another algorithm to update the weight vector $\mathbf {w}$.
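A minimal NumPy sketch of this iterative scheme for a single weight vector follows. The sources here are assumed super-Gaussian, so the step is taken in the direction that increases kurtosis (the sign of the update depends on whether kurtosis is being maximized or minimized); the whitening, step size, iteration count, and test data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000

# Two super-Gaussian (Laplacian) sources and their mixtures (illustrative data).
S = rng.laplace(size=(2, T))
A = np.array([[0.7, 0.4],
              [0.3, 0.8]])
X = A @ S

# Whiten the mixtures so that z has (approximately) identity covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Gradient-based projection pursuit for a single weight vector w.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.1                              # small step size (illustrative)
for _ in range(200):
    y = w @ Z
    grad = (Z * y**3).mean(axis=1)     # estimate of E[z (w^T z)^3]
    w = w + eta * grad                 # step in the direction that increases kurtosis
    w /= np.linalg.norm(w)             # renormalize to unit length after each update

y = w @ Z
d0 = y - y.mean()
print((d0**4).mean() / (d0**2).mean()**2 - 3.0)   # kurtosis of the extracted signal (clearly positive)
```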
Another approach is to use negentropy instead of kurtosis. Negentropy is more robust than kurtosis, as kurtosis is very sensitive to outliers. The negentropy methods are based on an important property of the Gaussian distribution: a Gaussian variable has the largest entropy among all continuous random variables of equal variance. This is also the reason why we want to find the most non-Gaussian variables. A simple proof can be found in Differential entropy.
$$J(x)=S(y)-S(x)$$

where $y$ is a Gaussian random variable with the same covariance matrix as $x$, and $S$ is the differential entropy

$$S(x)=-\int p_{x}(u)\log p_{x}(u)\,du$$
An approximation for negentropy is

$$J(x)={\frac {1}{12}}(E(x^{3}))^{2}+{\frac {1}{48}}(\operatorname {kurt} (x))^{2}$$
A proof can be found in the original papers of Comon; it has been reproduced in the book Independent Component Analysis by Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. This approximation also suffers from the same problem as kurtosis (sensitivity to outliers). Other approaches have been developed.
$$J(y)=k_{1}(E(G_{1}(y)))^{2}+k_{2}(E(G_{2}(y))-E(G_{2}(v)))^{2}$$

where $v$ is a zero-mean, unit-variance Gaussian variable and $k_{1},k_{2}$ are positive constants. A common choice of $G_{1}$ and $G_{2}$ is

$$G_{1}={\frac {1}{a_{1}}}\log(\cosh(a_{1}u))\qquad {\text{and}}\qquad G_{2}=-\exp \left(-{\frac {u^{2}}{2}}\right)$$
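The exact constants $k_{1},k_{2}$ depend on the chosen functions; in practice (for example in FastICA) a single contrast function $G$ is often used in the simpler form $J(y)\propto (E[G(y)]-E[G(v)])^{2}$. The sketch below, with illustrative data and sample sizes, computes the moment-based approximation above and this single-function contrast for both choices of $G$.

```python
import numpy as np

rng = np.random.default_rng(6)

def negentropy_moment_approx(x):
    """Moment-based approximation J(x) ~ (1/12) E[x^3]^2 + (1/48) kurt(x)^2,
    computed on a standardized copy of x (outlier-sensitive, like kurtosis)."""
    x = (x - x.mean()) / x.std()
    kurt = np.mean(x**4) - 3.0
    return np.mean(x**3)**2 / 12.0 + kurt**2 / 48.0

def negentropy_contrast(y, G, n_gauss=200_000):
    """Single-function contrast (E[G(y)] - E[G(v)])^2 with v ~ N(0, 1),
    proportional to the robust negentropy approximations; y is standardized."""
    y = (y - y.mean()) / y.std()
    v = rng.standard_normal(n_gauss)
    return (np.mean(G(y)) - np.mean(G(v)))**2

a1 = 1.0
G1 = lambda u: np.log(np.cosh(a1 * u)) / a1        # G1 from the text
G2 = lambda u: -np.exp(-u**2 / 2.0)                # G2 from the text

gauss = rng.standard_normal(50_000)
laplace = rng.laplace(size=50_000)
print(negentropy_moment_approx(gauss), negentropy_moment_approx(laplace))   # ~0 vs. ~0.19
print(negentropy_contrast(gauss, G1), negentropy_contrast(laplace, G1))     # ~0 vs. positive
print(negentropy_contrast(gauss, G2), negentropy_contrast(laplace, G2))     # ~0 vs. positive
```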
= Based on infomax =
Infomax ICA is essentially a multivariate, parallel version of projection pursuit. Whereas projection pursuit extracts a series of signals one at a time from a set of M signal mixtures, ICA extracts M signals in parallel. This tends to make ICA more robust than projection pursuit.
The projection pursuit method uses Gram-Schmidt orthogonalization to ensure the independence of the extracted signal, while ICA uses infomax and maximum likelihood estimation to ensure the independence of the extracted signal. The non-normality of the extracted signal is achieved by assigning an appropriate model, or prior, for the signal.
The process of ICA based on infomax, in short, is: given a set of signal mixtures $\mathbf {x}$ and a set of identical independent model cumulative distribution functions (cdfs) $g$, we seek the unmixing matrix $\mathbf {W}$ which maximizes the joint entropy of the signals $\mathbf {Y} =g(\mathbf {y} )$, where $\mathbf {y} =\mathbf {Wx}$ are the signals extracted by $\mathbf {W}$. Given the optimal $\mathbf {W}$, the signals $\mathbf {Y}$ have maximum entropy and are therefore independent, which ensures that the extracted signals $\mathbf {y} =g^{-1}(\mathbf {Y} )$ are also independent.
$g$ is an invertible function, and is the signal model. Note that if the source signal model probability density function $p_{s}$ matches the probability density function of the extracted signal $p_{\mathbf {y} }$, then maximizing the joint entropy of $\mathbf {Y}$ also maximizes the amount of mutual information between $\mathbf {x}$ and $\mathbf {Y}$. For this reason, using entropy to extract independent signals is known as infomax.
Consider the entropy of the vector variable $\mathbf {Y} =g(\mathbf {y} )$, where $\mathbf {y} =\mathbf {Wx}$ is the set of signals extracted by the unmixing matrix $\mathbf {W}$. For a finite set of values sampled from a distribution with pdf $p_{\mathbf {y} }$, the entropy of $\mathbf {Y}$ can be estimated as:

$$H(\mathbf {Y} )=-{\frac {1}{N}}\sum _{t=1}^{N}\ln p_{\mathbf {Y} }(\mathbf {Y} ^{t})$$
The joint pdf $p_{\mathbf {Y} }$ can be shown to be related to the joint pdf $p_{\mathbf {y} }$ of the extracted signals by the multivariate form:

$$p_{\mathbf {Y} }(Y)={\frac {p_{\mathbf {y} }(\mathbf {y} )}{|{\frac {\partial \mathbf {Y} }{\partial \mathbf {y} }}|}}$$
where $\mathbf {J} ={\frac {\partial \mathbf {Y} }{\partial \mathbf {y} }}$ is the Jacobian matrix. We have $|\mathbf {J} |=g'(\mathbf {y} )$, and $g'$ is the pdf assumed for the source signals, $g'=p_{s}$; therefore,

$$p_{\mathbf {Y} }(Y)={\frac {p_{\mathbf {y} }(\mathbf {y} )}{|{\frac {\partial \mathbf {Y} }{\partial \mathbf {y} }}|}}={\frac {p_{\mathbf {y} }(\mathbf {y} )}{p_{\mathbf {s} }(\mathbf {y} )}}$$
therefore,

$$H(\mathbf {Y} )=-{\frac {1}{N}}\sum _{t=1}^{N}\ln {\frac {p_{\mathbf {y} }(\mathbf {y} )}{p_{\mathbf {s} }(\mathbf {y} )}}$$
We know that when $p_{\mathbf {y} }=p_{s}$, $p_{\mathbf {Y} }$ has a uniform distribution and $H({\mathbf {Y} })$ is maximized. Since

$$p_{\mathbf {y} }(\mathbf {y} )={\frac {p_{\mathbf {x} }(\mathbf {x} )}{|{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}|}}={\frac {p_{\mathbf {x} }(\mathbf {x} )}{|\mathbf {W} |}}$$
where $|\mathbf {W} |$ is the absolute value of the determinant of the unmixing matrix $\mathbf {W}$, we have

$$H(\mathbf {Y} )=-{\frac {1}{N}}\sum _{t=1}^{N}\ln {\frac {p_{\mathbf {x} }(\mathbf {x} ^{t})}{|\mathbf {W} |p_{\mathbf {s} }(\mathbf {y} ^{t})}}$$
so,

$$H(\mathbf {Y} )={\frac {1}{N}}\sum _{t=1}^{N}\ln p_{\mathbf {s} }(\mathbf {y} ^{t})+\ln |\mathbf {W} |+H(\mathbf {x} )$$
since $H(\mathbf {x} )=-{\frac {1}{N}}\sum _{t=1}^{N}\ln p_{\mathbf {x} }(\mathbf {x} ^{t})$, and maximizing over $\mathbf {W}$ does not affect $H(\mathbf {x} )$, we can maximize the function

$$h(\mathbf {Y} )={\frac {1}{N}}\sum _{t=1}^{N}\ln p_{\mathbf {s} }(\mathbf {y} ^{t})+\ln |\mathbf {W} |$$

to achieve the independence of the extracted signals.
If the M marginal pdfs of the model joint pdf $p_{\mathbf {s} }$ are independent and we use the commonly chosen super-Gaussian model pdf for the source signals $p_{\mathbf {s} }=(1-\tanh(\mathbf {s} )^{2})$, then we have

$$h(\mathbf {Y} )={\frac {1}{N}}\sum _{i=1}^{M}\sum _{t=1}^{N}\ln(1-\tanh(\mathbf {w} _{i}^{\mathsf {T}}\mathbf {x} ^{t})^{2})+\ln |\mathbf {W} |$$

In summary, given an observed signal mixture $\mathbf {x}$, the corresponding set of extracted signals $\mathbf {y}$ and a source signal model $p_{\mathbf {s} }=g'$, we can find the optimal unmixing matrix $\mathbf {W}$ and make the extracted signals independent and non-Gaussian. Like the projection pursuit situation, we can use the gradient descent method to find the optimal solution of the unmixing matrix.
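A minimal gradient-ascent sketch of this infomax objective for a two-source case follows; the data, step size, and iteration count are illustrative assumptions, and practical implementations usually prefer natural-gradient or fixed-point variants for speed and stability.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5000

# Two super-Gaussian sources and their mixtures (illustrative data).
S = rng.laplace(size=(2, N))
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])
X = A @ S
X = X - X.mean(axis=1, keepdims=True)

# Gradient ascent on h(Y) = (1/N) sum_{i,t} ln(1 - tanh(w_i^T x^t)^2) + ln|det W|.
# The gradient of this objective with respect to W is (W^-1)^T - (2/N) tanh(W X) X^T.
W = np.eye(2)
eta = 0.1
for _ in range(2000):
    Y = W @ X
    grad = np.linalg.inv(W).T - (2.0 / N) * np.tanh(Y) @ X.T
    W = W + eta * grad

# The extracted signals equal the sources up to permutation and scaling,
# so W A should be close to a scaled permutation matrix.
print(np.round(W @ A, 2))
```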
= Based on maximum likelihood estimation =
Maximum likelihood estimation (MLE) is a standard statistical tool for finding parameter values (e.g. the unmixing matrix $\mathbf {W}$) that provide the best fit of some data (e.g., the extracted signals $y$) to a given model (e.g., the assumed joint probability density function (pdf) $p_{s}$ of the source signals).
The ML "model" includes a specification of a pdf, which in this case is the pdf
p
s
{\displaystyle p_{s}}
of the unknown source signals
s
{\displaystyle s}
. Using ML ICA, the objective is to find an unmixing matrix that yields extracted signals
y
=
W
x
{\displaystyle y=\mathbf {W} x}
with a joint pdf as similar as possible to the joint pdf
p
s
{\displaystyle p_{s}}
of the unknown source signals
s
{\displaystyle s}
.
MLE is thus based on the assumption that if the model pdf $p_{s}$ and the model parameters $\mathbf {A}$ are correct then a high probability should be obtained for the data $x$ that were actually observed. Conversely, if $\mathbf {A}$ is far from the correct parameter values then a low probability of the observed data would be expected.
Using MLE, we call the probability of the observed data for a given set of model parameter values (e.g., a pdf $p_{s}$ and a matrix $\mathbf {A}$) the likelihood of the model parameter values given the observed data.
We define a likelihood function $\mathbf {L(W)}$ of $\mathbf {W}$:

$$\mathbf {L(W)} =p_{s}(\mathbf {W} x)|\det \mathbf {W} |.$$

This equals the probability density at $x$, since $s=\mathbf {W} x$.
Thus, if we wish to find a $\mathbf {W}$ that is most likely to have generated the observed mixtures $x$ from the unknown source signals $s$ with pdf $p_{s}$, then we need only find that $\mathbf {W}$ which maximizes the likelihood $\mathbf {L(W)}$. The unmixing matrix that maximizes this equation is known as the MLE of the optimal unmixing matrix.
It is common practice to use the log likelihood, because this is easier to evaluate. As the logarithm is a monotonic function, the $\mathbf {W}$ that maximizes the function $\mathbf {L(W)}$ also maximizes its logarithm $\ln \mathbf {L(W)}$. This allows us to take the logarithm of the equation above, which yields the log likelihood function

$$\ln \mathbf {L(W)} =\sum _{i}\sum _{t}\ln p_{s}(w_{i}^{T}x_{t})+N\ln |\det \mathbf {W} |$$
If we substitute a commonly used high-kurtosis model pdf for the source signals, $p_{s}=(1-\tanh(s)^{2})$, then we have

$$\ln \mathbf {L(W)} ={1 \over N}\sum _{i}^{M}\sum _{t}^{N}\ln(1-\tanh(w_{i}^{T}x_{t})^{2})+\ln |\det \mathbf {W} |$$

The matrix $\mathbf {W}$ that maximizes this function is the maximum likelihood estimate.
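As a small numerical check of this expression, the sketch below evaluates the log likelihood for two candidate unmixing matrices on illustrative mixed data; the true unmixing matrix $A^{-1}$ should score higher than an arbitrary candidate.

```python
import numpy as np

def log_likelihood(W, X):
    """ln L(W) = (1/N) sum_i sum_t ln(1 - tanh(w_i^T x_t)^2) + ln|det W|,
    using the high-kurtosis source model p_s = 1 - tanh(s)^2."""
    N = X.shape[1]
    Y = W @ X
    return np.sum(np.log(1.0 - np.tanh(Y)**2)) / N + np.log(abs(np.linalg.det(W)))

rng = np.random.default_rng(8)
S = rng.laplace(size=(2, 4000))          # super-Gaussian sources (illustrative)
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])
X = A @ S

W_true = np.linalg.inv(A)                # ideal unmixing matrix
W_arbitrary = np.eye(2)                  # an arbitrary candidate
print(log_likelihood(W_true, X), log_likelihood(W_arbitrary, X))
# The true unmixing matrix should attain the larger value.
```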
History and background
The early general framework for independent component analysis was introduced by Jeanny Hérault and Bernard Ans in 1984, further developed by Christian Jutten in 1985 and 1986, refined by Pierre Comon in 1991, and popularized in his paper of 1994. In 1995, Tony Bell and Terry Sejnowski introduced a fast and efficient ICA algorithm based on infomax, a principle introduced by Ralph Linsker in 1987. An interesting link between the ML and infomax approaches can be found in the literature. A quite comprehensive tutorial on the ML approach was published by J.-F. Cardoso in 1998.
There are many algorithms available in the literature which perform ICA. A widely used one, including in industrial applications, is the FastICA algorithm, developed by Hyvärinen and Oja, which uses negentropy as a cost function, already proposed seven years earlier by Pierre Comon in this context. Other examples are more generally related to blind source separation, where a more general approach is used. For example, one can drop the independence assumption and separate mutually correlated signals, thus, statistically "dependent" signals. Sepp Hochreiter and Jürgen Schmidhuber showed how to obtain non-linear ICA or source separation as a by-product of regularization (1999). Their method does not require a priori knowledge about the number of independent sources.
Applications
ICA can be extended to analyze non-physical signals. For instance, ICA has been applied to discover discussion topics on a bag of news list archives.
Some ICA applications are listed below:
optical imaging of neurons
neuronal spike sorting
face recognition
modelling receptive fields of primary visual neurons
predicting stock market prices
mobile phone communications
colour-based detection of the ripeness of tomatoes
removing artifacts, such as eye blinks, from EEG data
predicting decision-making using EEG
analysis of changes in gene expression over time in single-cell RNA-sequencing experiments
studies of the resting state network of the brain
astronomy and cosmology
finance
Availability
ICA can be applied through the following software:
SAS PROC ICA
R ICA package
scikit-learn Python implementation sklearn.decomposition.FastICA (see the usage sketch after this list)
mlpack C++ implementation of RADICAL (the Robust, Accurate, Direct ICA aLgorithm)
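For instance, a minimal usage sketch of the scikit-learn implementation listed above (the signals and mixing matrix are illustrative; scikit-learn expects samples in rows):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
t = np.linspace(0, 8, 2000)

# Two independent non-Gaussian sources and their linear mixtures (illustrative).
S = np.c_[np.sign(np.sin(3 * t)),          # square wave
          rng.laplace(size=t.size)]        # super-Gaussian noise signal
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # mixing matrix
X = S @ A.T                                # mixtures, shape (n_samples, n_mixtures)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)               # estimated sources (up to order, sign and scale)
A_est = ica.mixing_                        # estimated mixing matrix
print(S_est.shape, A_est.shape)            # (2000, 2) and (2, 2)
```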
See also
Notes
References
Comon, Pierre (1994): "Independent Component Analysis: a new concept?", Signal Processing, 36(3):287–314 (The original paper describing the concept of ICA)
Hyvärinen, A.; Karhunen, J.; Oja, E. (2001): Independent Component Analysis, New York: Wiley, ISBN 978-0-471-40540-5 ( Introductory chapter )
Hyvärinen, A.; Oja, E. (2000): "Independent Component Analysis: Algorithms and Applications", Neural Networks, 13(4-5):411-430. (Technical but pedagogical introduction).
Comon, P.; Jutten C., (2010): Handbook of Blind Source Separation, Independent Component Analysis and Applications. Academic Press, Oxford UK. ISBN 978-0-12-374726-6
Lee, T.-W. (1998): Independent component analysis: Theory and applications, Boston, Mass: Kluwer Academic Publishers, ISBN 0-7923-8261-7
Acharyya, Ranjan (2008): A New Approach for Blind Source Separation of Convolutive Sources - Wavelet Based Separation Using Shrinkage Function ISBN 3-639-07797-0 ISBN 978-3639077971 (this book focuses on unsupervised learning with Blind Source Separation)
External links
What is independent component analysis? by Aapo Hyvärinen
Independent Component Analysis: A Tutorial by Aapo Hyvärinen
A Tutorial on Independent Component Analysis
FastICA as a package for Matlab, in R language, C++
ICALAB Toolboxes for Matlab, developed at RIKEN
High Performance Signal Analysis Toolkit provides C++ implementations of FastICA and Infomax
ICA toolbox Matlab tools for ICA with Bell-Sejnowski, Molgedey-Schuster and mean field ICA. Developed at DTU.
Demonstration of the cocktail party problem Archived 2010-03-13 at the Wayback Machine
EEGLAB Toolbox ICA of EEG for Matlab, developed at UCSD.
FMRLAB Toolbox ICA of fMRI for Matlab, developed at UCSD
MELODIC, part of the FMRIB Software Library.
Discussion of ICA used in a biomedical shape-representation context
FastICA, CuBICA, JADE and TDSEP algorithm for Python and more...
Group ICA Toolbox and Fusion ICA Toolbox
Tutorial: Using ICA for cleaning EEG signals