Matrix completion
Matrix completion is the task of filling in the missing entries of a partially observed matrix, which is equivalent to performing data imputation in statistics. A wide range of datasets are naturally organized in matrix form. One example is the movie-ratings matrix, as appears in the Netflix problem: given a ratings matrix in which each entry $(i,j)$ represents the rating of movie $j$ by customer $i$ if customer $i$ has watched movie $j$ and is otherwise missing, we would like to predict the remaining entries in order to make good recommendations to customers on what to watch next. Another example is the document-term matrix: the frequencies of words used in a collection of documents can be represented as a matrix, where each entry corresponds to the number of times the associated term appears in the indicated document.
Without any restrictions on the number of degrees of freedom in the completed matrix this problem is underdetermined since the hidden entries could be assigned arbitrary values. Thus we require some assumption on the matrix to create a well-posed problem, such as assuming it has maximal determinant, is positive definite, or is low-rank.
For example, one may assume the matrix has low-rank structure, and then seek to find the lowest-rank matrix or, if the rank of the completed matrix is known, a matrix of rank $r$ that matches the known entries. For instance, a partially revealed rank-1 matrix in which every row with missing entries equals one of the fully observed rows can be completed with zero error. In the case of the Netflix problem the ratings matrix is expected to be low-rank, since user preferences can often be described by a few factors, such as the movie genre and time of release. Other applications include computer vision, where missing pixels in images need to be reconstructed; detecting the global positioning of sensors in a network from partial distance information; and multiclass learning. The matrix completion problem is in general NP-hard, but under additional assumptions there are efficient algorithms that achieve exact reconstruction with high probability.
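As a concrete (hypothetical) instance of the rank-1 case, the following Python sketch completes a partially observed rank-1 matrix exactly: every row is a multiple of a fully observed reference row, and a single observed entry per row determines that multiple. The matrix and observation pattern are illustrative assumptions.

```python
import numpy as np

# Hypothetical rank-1 matrix M = u v^T with some entries hidden.
u = np.array([1.0, 2.0, 3.0, 2.0])
v = np.array([4.0, 1.0, 5.0])
M = np.outer(u, v)                      # true 4 x 3 rank-1 matrix

mask = np.array([[1, 1, 1],             # row 0 fully observed
                 [1, 0, 0],             # remaining rows: one entry each
                 [0, 1, 0],
                 [0, 0, 1]], dtype=bool)
X = np.where(mask, M, np.nan)           # partially observed matrix

# Every row of a rank-1 matrix is a multiple of the reference row, so a
# single observed entry per row fixes the scale factor exactly.
completed = np.empty_like(M)
base = X[0]                             # fully observed reference row
completed[0] = base
for i in range(1, M.shape[0]):
    j = np.flatnonzero(mask[i])[0]      # column of one observed entry
    completed[i] = (X[i, j] / base[j]) * base

print(np.allclose(completed, M))        # True: zero-error completion
```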
From a statistical learning point of view, the matrix completion problem is an application of matrix regularization, which is a generalization of vector regularization. For example, in the low-rank matrix completion problem one may apply a regularization penalty taking the form of the nuclear norm

$$R(X) = \lambda \|X\|_{*}.$$
Low rank matrix completion
One of the variants of the matrix completion problem is to find the lowest-rank matrix $X$ which matches the matrix $M$, which we wish to recover, on the set $E$ of observed entries. The mathematical formulation of this problem is as follows:
$$\begin{aligned} & \underset{X}{\text{min}} & & \text{rank}(X) \\ & \text{subject to} & & X_{ij} = M_{ij} \quad \forall i,j \in E \end{aligned}$$
Candès and Recht proved that with assumptions on the sampling of the observed entries and sufficiently many sampled entries this problem has a unique solution with high probability.
An equivalent formulation, given that the matrix $M$ to be recovered is known to be of rank $r$, is to solve for $X$ where $X_{ij} = M_{ij} \;\; \forall i,j \in E$.
Assumptions
A number of assumptions on the sampling of the observed entries and the number of sampled entries are frequently made to simplify the analysis and to ensure the problem is not underdetermined.
Uniform sampling of observed entries
To make the analysis tractable, it is often assumed that the set $E$ of observed entries, of fixed cardinality, is sampled uniformly at random from the collection of all subsets of entries of cardinality $|E|$. To further simplify the analysis, it is instead assumed that $E$ is constructed by Bernoulli sampling, i.e. that each entry is observed with probability $p$. If $p$ is set to $\frac{N}{mn}$, where $N$ is the desired expected cardinality of $E$ and $m,\;n$ are the dimensions of the matrix (let $m < n$ without loss of generality), then $|E|$ is within $O(n \log n)$ of $N$ with high probability, so Bernoulli sampling is a good approximation for uniform sampling. Another simplification is to assume that entries are sampled independently and with replacement.
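As a rough illustration of the two sampling models (the dimensions and target cardinality below are arbitrary assumptions), they can be compared in a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 50, 80, 600                    # illustrative dimensions and target |E|

# Uniform sampling: a subset of exactly N entries chosen uniformly at random.
flat = rng.choice(m * n, size=N, replace=False)
E_uniform = {(k // n, k % n) for k in flat}

# Bernoulli sampling: each entry observed independently with p = N / (mn),
# so the cardinality of E is only N in expectation.
p = N / (m * n)
bernoulli_mask = rng.random((m, n)) < p
E_bernoulli = {(i, j) for i, j in zip(*np.nonzero(bernoulli_mask))}

print(len(E_uniform), len(E_bernoulli))  # exactly N vs. approximately N
```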
Lower bound on number of observed entries
Suppose the $m$ by $n$ matrix $M$ (with $m < n$) we are trying to recover has rank $r$. There is an information theoretic lower bound on how many entries must be observed before $M$ can be uniquely reconstructed. The set of $m$ by $n$ matrices with rank less than or equal to $r$ is an algebraic variety in $\mathbb{C}^{m \times n}$ with dimension $(n+m)r - r^{2}$. Using this result, one can show that at least $4nr - 4r^{2}$ entries must be observed for matrix completion in $\mathbb{C}^{n \times n}$ to have a unique solution when $r \leq n/2$.
Secondly, there must be at least one observed entry per row and column of $M$. The singular value decomposition of $M$ is given by $U\Sigma V^{\dagger}$. If row $i$ is unobserved, it is easy to see the $i$-th right singular vector of $M$, $v_{i}$, can be changed to some arbitrary value and still yield a matrix matching $M$ over the set of observed entries. Similarly, if column $j$ is unobserved, the $j$-th left singular vector of $M$, $u_{j}$, can be arbitrary. If we assume Bernoulli sampling of the set of observed entries, the coupon collector effect implies that on the order of $O(n\log n)$ entries must be observed to ensure that there is an observation from each row and column with high probability.
Combining the necessary conditions and assuming that $r \ll m,n$ (a valid assumption for many practical applications), the lower bound on the number of observed entries required to prevent the problem of matrix completion from being underdetermined is on the order of $nr\log n$.
Incoherence
The concept of incoherence arose in compressed sensing. It is introduced in the context of matrix completion to ensure the singular vectors of $M$ are not too "sparse", in the sense that all coordinates of each singular vector are of comparable magnitude instead of just a few coordinates having significantly larger magnitudes. The standard basis vectors are then undesirable as singular vectors, whereas the vector $\frac{1}{\sqrt{n}}\begin{bmatrix}1&1&\cdots&1\end{bmatrix}^{\top}$ in $\mathbb{R}^{n}$ is desirable. As an example of what could go wrong if the singular vectors are sufficiently "sparse", consider the $m$ by $n$ matrix $\begin{bmatrix}1&0&\cdots&0\\\vdots&&&\vdots\\0&0&\cdots&0\end{bmatrix}$ with singular value decomposition $I_{m}\begin{bmatrix}1&0&\cdots&0\\\vdots&&&\vdots\\0&0&\cdots&0\end{bmatrix}I_{n}$. Almost all the entries of $M$ must be sampled before it can be reconstructed.
Candès and Recht define the coherence of a matrix $U$ with column space an $r$-dimensional subspace of $\mathbb{R}^{n}$ as $\mu(U) = \frac{n}{r}\max_{1\leq i\leq n}\|P_{U}e_{i}\|^{2}$, where $P_{U}$ is the orthogonal projection onto $U$. Incoherence then asserts that, given the singular value decomposition $U\Sigma V^{\dagger}$ of the $m$ by $n$ matrix $M$, there exist $\mu_{0},\;\mu_{1}$ such that $\mu(U),\;\mu(V)\leq\mu_{0}$ and the entries of $\sum_{k}u_{k}v_{k}^{\dagger}$ have magnitudes upper bounded by $\mu_{1}\sqrt{\frac{r}{mn}}$.
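A short numpy sketch of this definition (the function name and test subspaces are illustrative): for an orthonormal basis of the subspace, $\|P_{U}e_{i}\|^{2}$ is simply the squared norm of the $i$-th row of the basis matrix.

```python
import numpy as np

def coherence(U_basis):
    """Coherence mu(U) = (n/r) * max_i ||P_U e_i||^2 of the column space of an
    n x r matrix with orthonormal columns."""
    n, r = U_basis.shape
    row_norms_sq = np.sum(U_basis ** 2, axis=1)   # ||P_U e_i||^2 for each i
    return (n / r) * np.max(row_norms_sq)

rng = np.random.default_rng(0)
n, r = 100, 5

# Generic random subspace: coherence close to its minimum value of 1.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
print(coherence(Q))

# "Spiky" subspace spanned by standard basis vectors: maximal coherence n/r.
print(coherence(np.eye(n)[:, :r]))      # 20.0
```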
Low rank matrix completion with noise
In real-world applications, one often observes only a few entries corrupted at least by a small amount of noise. For example, in the Netflix problem, the ratings are uncertain. Candès and Plan showed that it is possible to fill in the many missing entries of large low-rank matrices from just a few noisy samples by nuclear norm minimization. The noisy model assumes that we observe

$$Y_{ij} = M_{ij} + Z_{ij}, \quad (i,j)\in\Omega,$$

where $\{Z_{ij} : (i,j)\in\Omega\}$ is a noise term. Note that the noise can be either stochastic or deterministic. Alternatively the model can be expressed as

$$P_{\Omega}(Y) = P_{\Omega}(M) + P_{\Omega}(Z),$$

where $Z$ is an $n\times n$ matrix with entries $Z_{ij}$ for $(i,j)\in\Omega$, assuming that $\|P_{\Omega}(Z)\|_{F}\leq\delta$ for some $\delta>0$. To recover the incomplete matrix, we try to solve the following optimization problem:
$$\begin{aligned} & \underset{X}{\text{min}} & & \|X\|_{*} \\ & \text{subject to} & & \|P_{\Omega}(X-Y)\|_{F}\leq\delta \end{aligned}$$
Among all matrices consistent with the data, this finds the one with minimum nuclear norm. Candès and Plan have shown that this reconstruction is accurate: they proved that when perfect noiseless recovery occurs, matrix completion is stable with respect to perturbations, with error proportional to the noise level $\delta$. Therefore, when the noise level is small, the error is small. Here the matrix completion problem does not obey the restricted isometry property (RIP). For matrices, the RIP would assume that the sampling operator obeys

$$(1-\delta)\|X\|_{F}^{2}\leq\frac{1}{p}\|P_{\Omega}(X)\|_{F}^{2}\leq(1+\delta)\|X\|_{F}^{2}$$

for all matrices $X$ with sufficiently small rank and $\delta<1$ sufficiently small.
The methods are also applicable to sparse signal recovery problems in which the RIP does not hold.
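A minimal sketch of the noisy nuclear-norm program above using the cvxpy modeling library (the synthetic data, noise level, and reliance on a default SDP-capable solver such as SCS are assumptions, not part of Candès and Plan's work):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank ground truth
mask = (rng.random((n, n)) < 0.5).astype(float)                 # observed set Omega
Z = 0.01 * rng.standard_normal((n, n))                          # small noise
Y = M + Z
delta = np.linalg.norm(mask * Z, 'fro')                         # ||P_Omega(Z)||_F

X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [cp.norm(cp.multiply(mask, X - Y), 'fro') <= delta])
problem.solve()

rel_err = np.linalg.norm(X.value - M, 'fro') / np.linalg.norm(M, 'fro')
print(rel_err)   # small when enough entries are observed
```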
High rank matrix completion
High-rank matrix completion is in general NP-hard. However, under certain assumptions, some incomplete high-rank or even full-rank matrices can be completed.
Eriksson, Balzano and Nowak have considered the problem of completing a matrix with the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. Since the columns belong to a union of subspaces, the problem may be viewed as a missing-data version of the subspace clustering problem. Let $X$ be an $n\times N$ matrix whose (complete) columns lie in a union of at most $k$ subspaces, each of rank $\leq r < n$, and assume $N\gg kn$. Eriksson, Balzano and Nowak showed that under mild assumptions each column of $X$ can be perfectly recovered with high probability from an incomplete version so long as at least $CrN\log^{2}(n)$ entries of $X$ are observed uniformly at random, with $C>1$ a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces.
The algorithm involves several steps: (1) local neighborhoods; (2) local subspaces; (3) subspace refinement; (4) full matrix completion. This method can be applied to Internet distance matrix completion and topology identification.
Algorithms for Low-Rank Matrix Completion
Various matrix completion algorithms have been proposed. These include convex relaxation-based algorithms, gradient-based algorithms, and alternating minimization-based algorithms.
Convex relaxation
The rank minimization problem is NP-hard. One approach, proposed by Candès and Recht, is to form a convex relaxation of the problem and minimize the nuclear norm $\|M\|_{*}$ (which gives the sum of the singular values of $M$) instead of $\text{rank}(M)$ (which counts the number of nonzero singular values of $M$). This is analogous to minimizing the L1-norm rather than the L0-norm for vectors. The convex relaxation can be solved using semidefinite programming (SDP) by noticing that the optimization problem is equivalent to
$$\begin{aligned} & \underset{W_{1},W_{2}}{\text{min}} & & \text{trace}(W_{1})+\text{trace}(W_{2}) \\ & \text{subject to} & & X_{ij}=M_{ij}\quad\forall i,j\in E \\ & & & \begin{bmatrix}W_{1}&X\\X^{T}&W_{2}\end{bmatrix}\succeq 0 \end{aligned}$$
The complexity of using SDP to solve the convex relaxation is $O(\max(m,n)^{4})$. State-of-the-art solvers like SDPT3 can only handle matrices of size up to 100 by 100. An alternative first-order method that approximately solves the convex relaxation is the Singular Value Thresholding (SVT) algorithm introduced by Cai, Candès and Shen.
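The following numpy sketch is in the spirit of the SVT iteration: soft-threshold the singular values of the current iterate, then take a gradient-like step on the observed residual. The threshold, step size, and stopping rule here are illustrative heuristics rather than the tuned values from the paper.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, step=1.2, n_iters=500, tol=1e-4):
    """Approximate nuclear-norm minimization by singular value thresholding.
    M_obs holds observed entries (zeros elsewhere); mask is the 0/1 pattern."""
    m, n = M_obs.shape
    if tau is None:
        tau = 5 * np.sqrt(m * n)        # heuristic threshold
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iters):
        # Shrinkage operator: soft-threshold the singular values of Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        residual = mask * (M_obs - X)
        Y = Y + step * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M_obs):
            break
    return X
```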
Candès and Recht show, using the study of random variables on Banach spaces, that if the number of observed entries is on the order of $\max\{\mu_{1}^{2},\sqrt{\mu_{0}}\mu_{1},\mu_{0}n^{0.25}\}\,nr\log n$ (assume without loss of generality $m<n$), the rank minimization problem has a unique solution, which also happens to be the solution of its convex relaxation, with probability $1-\frac{c}{n^{3}}$ for some constant $c$. If the rank of $M$ is small ($r\leq\frac{n^{0.2}}{\mu_{0}}$), the size of the set of observations reduces to the order of $\mu_{0}n^{1.2}r\log n$. These results are near optimal, since the minimum number of entries that must be observed for the matrix completion problem to not be underdetermined is on the order of $nr\log n$.
This result has been improved by Candès and Tao. They achieve bounds that differ from the optimal bounds only by polylogarithmic factors by strengthening the assumptions. Instead of the incoherence property, they assume the strong incoherence property with parameter $\mu_{3}$. This property states that:

$$\left|\langle e_{a},P_{U}e_{a'}\rangle-\frac{r}{m}1_{a=a'}\right|\leq\mu_{3}\frac{\sqrt{r}}{m}\quad\text{for }a,a'\leq m$$

and

$$\left|\langle e_{b},P_{V}e_{b'}\rangle-\frac{r}{n}1_{b=b'}\right|\leq\mu_{3}\frac{\sqrt{r}}{n}\quad\text{for }b,b'\leq n,$$

and that the entries of $\sum_{i}u_{i}v_{i}^{\dagger}$ are bounded in magnitude by $\mu_{3}\sqrt{\frac{r}{mn}}$.

Intuitively, strong incoherence of a matrix $U$ asserts that the orthogonal projections of standard basis vectors onto $U$ have magnitudes close to what would be expected if the singular vectors were distributed randomly.

Candès and Tao find that when $r$ is $O(1)$ and the number of observed entries is on the order of $\mu_{3}^{4}n(\log n)^{2}$, the rank minimization problem has a unique solution, which also happens to be the solution of its convex relaxation, with probability $1-\frac{c}{n^{3}}$ for some constant $c$. For arbitrary $r$, the number of observed entries sufficient for this assertion to hold true is on the order of $\mu_{3}^{2}nr(\log n)^{6}$.
Another convex relaxation approach is to minimize the squared Frobenius norm under a rank constraint. This is equivalent to solving

$$\begin{aligned} & \underset{X}{\text{min}} & & \|X\|_{F}^{2} \\ & \text{subject to} & & X_{ij}=M_{ij}\quad\forall i,j\in E \\ & & & \text{Rank}(X)\leq k. \end{aligned}$$
By introducing an orthogonal projection matrix $Y$ (meaning $Y^{2}=Y,\;Y=Y'$) to model the rank of $X$ via $X=YX,\;\text{trace}(Y)\leq k$, and taking this problem's convex relaxation, we obtain the following semidefinite program:
$$\begin{aligned} & \underset{X,Y,\theta}{\text{min}} & & \text{trace}(\theta) \\ & \text{subject to} & & X_{ij}=M_{ij}\quad\forall i,j\in E \\ & & & \text{trace}(Y)\leq k,\quad 0\preceq Y\preceq I \\ & & & \begin{pmatrix}Y&X\\X^{\top}&\theta\end{pmatrix}\succeq 0. \end{aligned}$$
If Y is a projection matrix (i.e., has binary eigenvalues) in this relaxation, then the relaxation is tight. Otherwise, it gives a valid lower bound on the overall objective. Moreover, it can be converted into a feasible solution with a (slightly) larger objective by rounding the eigenvalues of Y greedily. Remarkably, this convex relaxation can be solved by alternating minimization on X and Y without solving any SDPs, and thus it scales beyond the typical numerical limits of state-of-the-art SDP solvers like SDPT3 or Mosek.
This approach is a special case of a more general reformulation technique, which can be applied to obtain a valid lower bound on any low-rank problem with a trace-matrix-convex objective.
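For small instances, the semidefinite program above can be transcribed almost literally with cvxpy. This is an illustrative sketch (the elementwise observation constraint and the default solver are assumptions); larger problems would instead use the alternating scheme on X and Y just described.

```python
import numpy as np
import cvxpy as cp

def trace_relaxation(M, mask, k):
    """Convex relaxation of min ||X||_F^2 s.t. X matches M on the mask, rank(X) <= k.
    mask is a 0/1 float array marking observed entries."""
    m, n = M.shape
    X = cp.Variable((m, n))
    Y = cp.Variable((m, m), symmetric=True)
    theta = cp.Variable((n, n), symmetric=True)
    constraints = [
        cp.multiply(mask, X) == mask * M,              # agree with observed entries
        cp.trace(Y) <= k,
        Y >> 0, np.eye(m) - Y >> 0,                    # 0 <= Y <= I
        cp.bmat([[Y, X], [X.T, theta]]) >> 0,
    ]
    problem = cp.Problem(cp.Minimize(cp.trace(theta)), constraints)
    problem.solve()
    # problem.value lower-bounds the rank-constrained objective; it is tight
    # when the optimal Y has binary eigenvalues.
    return X.value, problem.value
```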
Gradient descent
Keshavan, Montanari and Oh consider a variant of matrix completion where the rank of the $m$ by $n$ matrix $M$, which is to be recovered, is known to be $r$. They assume Bernoulli sampling of entries, constant aspect ratio $\frac{m}{n}$, bounded magnitude of entries of $M$ (let the upper bound be $M_{\max}$), and constant condition number $\frac{\sigma_{1}}{\sigma_{r}}$ (where $\sigma_{1}$ and $\sigma_{r}$ are the largest and smallest singular values of $M$ respectively). Further, they assume the two incoherence conditions are satisfied with $\mu_{0}$ and $\mu_{1}\frac{\sigma_{1}}{\sigma_{r}}$, where $\mu_{0}$ and $\mu_{1}$ are constants. Let $M^{E}$ be a matrix that matches $M$ on the set $E$ of observed entries and is 0 elsewhere. They then propose the following algorithm:
1. Trim $M^{E}$ by removing all observations from columns with degree larger than $\frac{2|E|}{n}$ by setting the entries in those columns to 0. Similarly remove all observations from rows with degree larger than $\frac{2|E|}{n}$.
2. Project $M^{E}$ onto its first $r$ principal components. Call the resulting matrix $\text{Tr}(M^{E})$ (a code sketch of Steps 1 and 2 follows this list).
3. Solve
$$\min_{X,Y}\min_{S\in\mathbb{R}^{r\times r}}\frac{1}{2}\sum_{(i,j)\in E}(M_{ij}-(XSY^{\dagger})_{ij})^{2}+\rho G(X,Y)$$
where $G(X,Y)$ is some regularization function, by gradient descent with line search. Initialize $X,\;Y$ at $X_{0},\;Y_{0}$ where $\text{Tr}(M^{E})=X_{0}S_{0}Y_{0}^{\dagger}$. Set $G(X,Y)$ as some function forcing $X,\;Y$ to remain incoherent throughout gradient descent if $X_{0}$ and $Y_{0}$ are incoherent.
4. Return the matrix $XSY^{\dagger}$.
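A numpy sketch of the trimming and spectral projection in Steps 1 and 2 (the function name and the use of the per-row average degree for the row threshold are illustrative readings of the description above):

```python
import numpy as np

def trim_and_project(M_E, mask, r):
    """Steps 1-2: zero out over-represented columns/rows, then keep the
    best rank-r approximation of the trimmed matrix."""
    n_rows, n_cols = M_E.shape
    num_obs = mask.sum()
    trimmed = M_E.copy()
    # Remove observations from columns/rows whose degree exceeds twice the average.
    trimmed[:, mask.sum(axis=0) > 2 * num_obs / n_cols] = 0.0
    trimmed[mask.sum(axis=1) > 2 * num_obs / n_rows, :] = 0.0
    # Project onto the first r principal components (truncated SVD).
    U, s, Vt = np.linalg.svd(trimmed, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```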
Steps 1 and 2 of the algorithm yield a matrix $\text{Tr}(M^{E})$ very close to the true matrix $M$ (as measured by the root mean square error (RMSE)) with high probability. In particular, with probability $1-\frac{1}{n^{3}}$,

$$\frac{1}{mnM_{\max}^{2}}\|M-\text{Tr}(M^{E})\|_{F}^{2}\leq C\frac{r}{m|E|}\sqrt{\frac{m}{n}}$$

for some constant $C$, where $\|\cdot\|_{F}$ denotes the Frobenius norm. Note that the full suite of assumptions is not needed for this result to hold. The incoherence condition, for example, only comes into play in exact reconstruction. Finally, although trimming may seem counterintuitive as it involves throwing out information, it ensures that projecting $M^{E}$ onto its first $r$ principal components gives more information about the underlying matrix $M$ than about the observed entries.
In Step 3, the space of candidate matrices $X,\;Y$ can be reduced by noticing that the inner minimization problem has the same solution for $(X,Y)$ as for $(XQ,YR)$, where $Q$ and $R$ are orthonormal $r$ by $r$ matrices. Then gradient descent can be performed over the cross product of two Grassmann manifolds. If $r\ll m,\;n$ and the observed entry set is on the order of $nr\log n$, the matrix returned by Step 3 is exactly $M$. Then the algorithm is order optimal, since we know that for the matrix completion problem to not be underdetermined the number of entries must be on the order of $nr\log n$.
Alternating least squares minimization
Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix problem. In the alternating minimization approach, the low-rank target matrix is written in a bilinear form $X=UV^{T}$; the algorithm then alternates between finding the best $U$ and the best $V$. While the overall problem is non-convex, each sub-problem is typically convex and can be solved efficiently. Jain, Netrapalli and Sanghavi have given one of the first guarantees for performance of alternating minimization for both matrix completion and matrix sensing.
The alternating minimization algorithm can be viewed as an approximate way to solve the following non-convex problem:

$$\underset{U,V\in\mathbb{R}^{n\times k}}{\text{min}}\|P_{\Omega}(UV^{T})-P_{\Omega}(M)\|_{F}^{2}$$
The AltMinComplete algorithm proposed by Jain, Netrapalli and Sanghavi is listed here (a simplified code sketch follows the listing):

Input: observed set $\Omega$, values $P_{\Omega}(M)$
1. Partition $\Omega$ into $2T+1$ subsets $\Omega_{0},\cdots,\Omega_{2T}$, with each element of $\Omega$ belonging to one of the $\Omega_{t}$ with equal probability (sampling with replacement).
2. $\hat{U}^{0}=SVD(\frac{1}{p}P_{\Omega_{0}}(M),k)$, i.e., the top-$k$ left singular vectors of $\frac{1}{p}P_{\Omega_{0}}(M)$.
3. Clipping: set all elements of $\hat{U}^{0}$ that have magnitude greater than $\frac{2\mu\sqrt{k}}{\sqrt{n}}$ to zero and orthonormalize the columns of $\hat{U}^{0}$.
4. For $t=0,\cdots,T-1$ do
$$\hat{V}^{t+1}\leftarrow\underset{V\in\mathbb{R}^{n\times k}}{\text{argmin}}\|P_{\Omega_{t+1}}(\hat{U}^{t}V^{T}-M)\|_{F}^{2}$$
$$\hat{U}^{t+1}\leftarrow\underset{U\in\mathbb{R}^{m\times k}}{\text{argmin}}\|P_{\Omega_{T+t+1}}(U(\hat{V}^{t+1})^{T}-M)\|_{F}^{2}$$
end for
5. Return $X=\hat{U}^{T}(\hat{V}^{T})^{T}$.
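The simplified sketch referenced above, in numpy: it drops the sample-splitting and clipping used in the analysis and simply alternates least-squares updates over the observed entries (the initialization, iteration count, and handling of empty rows or columns are illustrative choices):

```python
import numpy as np

def altmin_complete(M_obs, mask, k, n_iters=50):
    """Alternating least squares for completion: X = U V^T with U (m x k), V (n x k).
    M_obs holds observed entries (zeros elsewhere); mask is boolean."""
    m, n = M_obs.shape
    p = mask.mean()                                   # estimated sampling probability
    # Spectral initialization: top-k left singular vectors of (1/p) P_Omega(M).
    U, _, _ = np.linalg.svd(M_obs / p, full_matrices=False)
    U = U[:, :k].copy()
    V = np.zeros((n, k))
    for _ in range(n_iters):
        for j in range(n):                            # update V row-wise
            rows = np.flatnonzero(mask[:, j])
            if rows.size:
                V[j], *_ = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)
        for i in range(m):                            # update U row-wise
            cols = np.flatnonzero(mask[i, :])
            if cols.size:
                U[i], *_ = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)
    return U @ V.T
```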
They showed that by observing

$$|\Omega|=O\left(\left(\frac{\sigma_{1}^{*}}{\sigma_{k}^{*}}\right)^{6}k^{7}\log n\log(k\|M\|_{F}/\epsilon)\right)$$

random entries of an incoherent matrix $M$, the AltMinComplete algorithm can recover $M$ in $O(\log(1/\epsilon))$ steps. In terms of sample complexity ($|\Omega|$), alternating minimization may theoretically require a larger $\Omega$ than convex relaxation. Empirically, however, this does not seem to be the case, which implies that the sample complexity bounds can be further tightened. In terms of time complexity, they showed that AltMinComplete needs time $O(|\Omega|k^{2}\log(1/\epsilon))$.
It is worth noting that, although convex relaxation based methods have rigorous analysis, alternating minimization based algorithms are more successful in practice.
Applications
Several applications of matrix completion are summarized by Candès and Plan as follows:
Collaborative filtering
Collaborative filtering is the task of making automatic predictions about the interests of a user by collecting taste information from many users. Companies like Apple, Amazon, Barnes and Noble, and Netflix are trying to predict their users' preferences from partial knowledge. In these kinds of matrix completion problems, the unknown full matrix is often considered low-rank because only a few factors typically contribute to an individual's tastes or preferences.
System identification
In control, one would like to fit a discrete-time linear time-invariant state-space model

$$\begin{aligned}x(t+1)&=Ax(t)+Bu(t)\\y(t)&=Cx(t)+Du(t)\end{aligned}$$

to a sequence of inputs $u(t)\in\mathbb{R}^{m}$ and outputs $y(t)\in\mathbb{R}^{p}$, $t=0,\ldots,N$. The vector $x(t)\in\mathbb{R}^{n}$ is the state of the system at time $t$ and $n$ is the order of the system model. From the input/output pairs, one would like to recover the matrices $A,B,C,D$ and the initial state $x(0)$. This problem can also be viewed as a low-rank matrix completion problem.
Internet of things (IoT) localization
The localization (or global positioning) problem emerges naturally in IoT sensor networks. The problem is to recover the sensor map in Euclidean space from a local or partial set of pairwise distances. Thus it is a matrix completion problem with rank two if the sensors are located in a 2-D plane and three if they are in 3-D space.
Social networks recovery
Most real-world social networks have low-rank distance matrices. When we are not able to measure the complete network, which can be due to reasons such as private nodes or limited storage or compute resources, we only have a fraction of the distance entries known. Criminal networks are a good example of such networks. Low-rank matrix completion can be used to recover these unobserved distances.