Matrix Completion by Truncated Nuclear Norm Regularization

Debing Zhang
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
[email protected]

Yao Hu
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
[email protected]

Jieping Ye
Arizona State University, Tempe, AZ 85287
[email protected]

Xuelong Li
Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, China
xuelong [email protected]

Xiaofei He
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
[email protected]

Abstract


Estimating missing values in visual data is a challenging problem in computer vision, which can be considered as a low rank matrix approximation problem. Most of the recent studies use the nuclear norm as a convex relaxation of the rank operator. However, by minimizing the nuclear norm, all the singular values are minimized simultaneously, and thus the rank cannot be well approximated in practice. In this paper, we propose a novel matrix completion algorithm based on Truncated Nuclear Norm Regularization (TNNR), which minimizes only the smallest N − r singular values, where N is the number of singular values and r is the rank of the matrix. In this way, the rank of the matrix can be approximated much better than by the nuclear norm. We further develop an efficient iterative procedure to solve the optimization problem, using the alternating direction method of multipliers and the accelerated proximal gradient line search method. Experimental results in a wide range of applications demonstrate the effectiveness of our proposed approach.


Figure 1. (a) A 400 × 500 image example. (b) The singular values of the red channel. (c) The singular values of the green channel. (d) The singular values of the blue channel. As can be seen, the information is dominated by the top 20 singular values.

1. Introduction

In many real problems in computer vision and pattern recognition, such as image inpainting [13, 17], missing values are often present in the data. The problem of estimating missing values in a matrix, or matrix completion, has received considerable interest recently [5, 6, 14, 7, 16, 10]. Visual data, such as images, often has an approximately low rank structure, as shown in Fig. 1. Consequently, most of


the existing matrix completion approaches aim to find a low rank approximation. Specifically, given the incomplete data matrix M ∈ R^{m×n} of rank r, the matrix completion problem can be formulated as follows:

min_X  rank(X)   s.t.  X_ij = M_ij, (i, j) ∈ Ω,        (1)

where X ∈ R^{m×n} and Ω is the set of locations corresponding to the observed entries. Unfortunately, the above rank minimization problem is NP-hard in general due to the non-convexity and discontinuous nature of the rank function. A widely used approach is to apply the nuclear norm (i.e., the sum of the singular values) as a convex surrogate of the non-convex matrix rank function. Inspired by compressed sensing, Candès and Recht recently showed that, if a matrix has row and column spaces that are incoherent with the standard basis, then nuclear norm minimization can recover this matrix from sufficiently many observed entries [5, 4, 6, 9].

The existing approaches based on the nuclear norm heuristic, such as Singular Value Thresholding (SVT, [3]), can often obtain good performance on noiseless synthetic data. However, they may fail to obtain low rank solutions in real applications. This is because the nuclear norm cannot accurately approximate the rank function: while the rank function treats all non-zero singular values equally, the nuclear norm weights them by their magnitudes. Even worse, these approaches sometimes do not converge, because the theoretical requirements of the nuclear norm heuristic (e.g., the incoherence property) are usually very hard to satisfy in practice.

In this paper, we propose a novel matrix completion method called Truncated Nuclear Norm Regularization (TNNR) to recover low rank matrices with missing values. Different from the nuclear norm based approaches, which minimize the sum of all the singular values, our approach minimizes only the smallest min(m, n) − r singular values, since the rank of a matrix corresponds only to its r non-zero singular values. In this way, we obtain a more accurate and robust approximation to the rank function. Furthermore, we propose two simple but efficient optimization schemes to solve the objective function, using the alternating direction method of multipliers (ADMM) and the accelerated proximal gradient line search method (APGL).

The rest of the paper is organized as follows. In the next section, we provide a brief description of the related work. In Section 3, we present the proposed TNNR algorithm. We introduce two optimization schemes to solve the objective function in Section 4. A variety of experimental results are presented in Section 5. Finally, we provide some concluding remarks in Section 6.

2. Related Work

The central problem in matrix completion is how to approximate the rank function. Fazel [8] first solved the rank minimization problem in control system analysis and design by approximating the rank function with the nuclear norm:

min_X  ‖X‖_*   s.t.  X_ij = M_ij, (i, j) ∈ Ω,        (2)

where ‖X‖_* = Σ_{i=1}^{min(m,n)} σ_i(X) is the nuclear norm and σ_i(X) is the i-th largest singular value of X. The above optimization problem can be solved by semi-definite programming. However, even the best current SDP solvers can only handle matrices whose size is less than 100 × 100 [3]. Recently, in order to solve the rank minimization problem for large matrices, Cai et al. proposed the first-order Singular Value Thresholding (SVT, [3]) method by combining the nuclear norm and the Frobenius norm:

min_X  ‖X‖_* + α‖X‖_F^2   s.t.  X_ij = M_ij, (i, j) ∈ Ω,        (3)

where ‖X‖_F = (Σ_{(i,j)} X_ij^2)^{1/2} is the Frobenius norm. This optimization problem can be solved efficiently using the shrinkage operator [3]. Other work related to our proposed approach includes Singular Value Projection (SVP, [15]) and OptSpace [12].

3. Truncated Nuclear Norm Regularization

3.1. Notation

Let X = (x_1, · · · , x_n) be an m × n matrix, let Ω ⊂ {1, ..., m} × {1, ..., n} denote the indices of the observed entries of X, and let Ω^c denote the indices of the missing entries. It is convenient to summarize the available information via the symbol X_Ω, which is defined entrywise as

(X_Ω)_ij = X_ij if (i, j) ∈ Ω,  and  (X_Ω)_ij = 0 if (i, j) ∈ Ω^c.
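For concreteness, a minimal numpy sketch of the projection X_Ω; the function name and the boolean-mask representation of Ω are our own illustrative choices, not the paper's:

```python
import numpy as np

def project_observed(X, omega_mask):
    """Return X_Omega: keep the observed entries of X and zero out the rest.

    omega_mask is a boolean m x n array, True where (i, j) is observed.
    """
    return np.where(omega_mask, X, 0.0)
```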

3.2. Our Approach

As we have described previously, the nuclear norm may not approximate the rank function well in real applications. We first introduce the truncated nuclear norm, which approximates the rank function more closely.

Definition 3.1. Given a matrix X ∈ R^{m×n}, the truncated nuclear norm ‖X‖_r is defined as the sum of the min(m, n) − r smallest singular values, i.e., ‖X‖_r = Σ_{i=r+1}^{min(m,n)} σ_i(X).

Thus, the objective function of our approach can be formulated as follows:

min_X  ‖X‖_r   s.t.  X_Ω = M_Ω.        (4)
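For intuition, a minimal numpy sketch that evaluates the truncated nuclear norm of Definition 3.1 directly from an SVD (illustrative only; the algorithm developed below never needs to compute it explicitly, and the function name is ours):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the min(m, n) - r smallest singular values of X."""
    sigma = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    return sigma[r:].sum()
```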

Obviously, unlike the traditional nuclear norm heuristic, by solving problem (4) we can always achieve a low rank solution as long as one exists. Since ‖X‖_r is non-convex, it is not easy to solve (4) directly. We have the following theorem:

Theorem 3.1. For any given matrix X ∈ R^{m×n} and any matrices A ∈ R^{r×m}, B ∈ R^{r×n} such that AA^T = I and BB^T = I, where r is a non-negative integer with r ≤ min(m, n), we have

Tr(AXB^T) ≤ Σ_{i=1}^{r} σ_i(X).        (5)

Proof. By von Neumann's trace inequality, we get

Tr(AXB^T) = Tr(XB^T A) ≤ Σ_{i=1}^{min(m,n)} σ_i(X) σ_i(B^T A),        (6)

where σ_1(X) ≥ · · · ≥ σ_{min(m,n)}(X) ≥ 0. As rank(A) = r and rank(B) = r, we have rank(B^T A) = s ≤ r. For i ≤ s, σ_i(B^T A) > 0 and σ_i^2(B^T A) is the i-th eigenvalue of B^T A A^T B = B^T B, which is also an eigenvalue of BB^T = I. Therefore σ_i(B^T A) = 1 for i = 1, 2, · · · , s, and the rest are all 0's. Then we have

Σ_{i=1}^{min(m,n)} σ_i(X) σ_i(B^T A) = Σ_{i=1}^{s} σ_i(X) · 1 + Σ_{i=s+1}^{min(m,n)} σ_i(X) · 0 = Σ_{i=1}^{s} σ_i(X).        (7)

Since s ≤ r and σ_i(X) ≥ 0, Σ_{i=1}^{s} σ_i(X) ≤ Σ_{i=1}^{r} σ_i(X). Combining inequalities (6) and (7) proves the theorem.

Suppose UΣV^T is the singular value decomposition of X, where U = (u_1, · · · , u_m) ∈ R^{m×m}, Σ ∈ R^{m×n} and V = (v_1, · · · , v_n) ∈ R^{n×n}. The equality in (5) holds when

A = (u_1, · · · , u_r)^T,  B = (v_1, · · · , v_r)^T.        (8)

This is because

Tr((u_1, · · · , u_r)^T X (v_1, · · · , v_r)) = Tr((u_1, · · · , u_r)^T UΣV^T (v_1, · · · , v_r)) = Tr(((u_1, · · · , u_r)^T U) Σ (V^T (v_1, · · · , v_r))) = Tr(diag(σ_1(X), · · · , σ_r(X), 0, · · · , 0)) = Σ_{i=1}^{r} σ_i(X).        (9)

Combining (5) and (9), we get

max_{AA^T = I, BB^T = I} Tr(AXB^T) = Σ_{i=1}^{r} σ_i(X).        (10)

Then we have

‖X‖_r = Σ_{i=1}^{min(m,n)} σ_i(X) − Σ_{i=1}^{r} σ_i(X) = ‖X‖_* − max_{AA^T = I, BB^T = I} Tr(AXB^T).        (11)

Thus, the optimization problem (4) can be rewritten as follows:

min_X  ‖X‖_* − max_{AA^T = I, BB^T = I} Tr(AXB^T)   s.t.  X_Ω = M_Ω,        (12)

where A ∈ R^{r×m}, B ∈ R^{r×n}. Based on (12), we design a simple but efficient iterative scheme. We set X_1 = M_Ω as the initialization. In the l-th iteration, we first fix X_l and compute A_l and B_l from (8) based on the singular value decomposition of X_l. We then fix A_l and B_l and update X_{l+1} by solving the easier problem

min_X  ‖X‖_* − Tr(A_l X B_l^T)   s.t.  X_Ω = M_Ω,        (13)

given A_l ∈ R^{r×m}, B_l ∈ R^{r×n} and the observed matrix M_Ω. The scheme is summarized in Algorithm 1. By alternating between the two steps, the iterative scheme converges to a local minimum of (12).
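A minimal numpy sketch of this two-step outer iteration; the inner solver solve_inner (one of the routines developed in Section 4) is left abstract here, and all function names are our own illustrative choices rather than the paper's:

```python
import numpy as np

def tnnr_outer(M, omega_mask, r, solve_inner, tol=1e-4, max_iter=100):
    """Alternate between (A_l, B_l) from the SVD of X_l and the inner problem (13)."""
    X = np.where(omega_mask, M, 0.0)              # X_1 = M_Omega
    for _ in range(max_iter):
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        A, B = U[:, :r].T, Vt[:r, :]              # A_l = (u_1..u_r)^T, B_l = (v_1..v_r)^T
        X_new = solve_inner(A, B, M, omega_mask)  # ADMM or APGL inner solver
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new
        X = X_new
    return X
```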

4. The Optimization

How to solve problem (13) is the most critical part of our algorithm, and we need an efficient optimization method for it. Because both ‖X‖_* and −Tr(A_l X B_l^T) are convex, the objective function in (13) is also convex. In the following we introduce two optimization schemes for solving (13): the alternating direction method of multipliers (ADMM) and the accelerated proximal gradient line search method (APGL). We first introduce a very useful tool, the singular value shrinkage operator [3].

Algorithm 1 Iterative Scheme
Input: the incomplete data matrix M_Ω, where Ω gives the positions of the observed entries; tolerance ε.
Initialize: X_1 = M_Ω.
repeat
  STEP 1. Given X_l, compute [U_l, Σ_l, V_l] = svd(X_l), where U_l = (u_1, · · · , u_m) ∈ R^{m×m} and V_l = (v_1, · · · , v_n) ∈ R^{n×n}, and set A_l = (u_1, · · · , u_r)^T, B_l = (v_1, · · · , v_r)^T.
  STEP 2. Solve X_{l+1} = arg min_X ‖X‖_* − Tr(A_l X B_l^T)  s.t.  X_Ω = M_Ω.
until ‖X_{l+1} − X_l‖_F ≤ ε
Return: the recovered matrix.

Definition 4.1. Consider the singular value decomposition (SVD) of a matrix X ∈ R^{m×n} of rank r,

X = UΣV^T,  Σ = diag({σ_i}_{1≤i≤r}).        (14)

Define the singular value shrinkage operator D_τ as follows:

D_τ(X) = U D_τ(Σ) V^T,  D_τ(Σ) = diag({σ_i − τ}_+).        (15)

We have the following useful theorem:

Theorem 4.1 ([3]). For each τ ≥ 0 and Y ∈ R^{m×n}, we have

D_τ(Y) = arg min_X  (1/2)‖X − Y‖_F^2 + τ‖X‖_*.        (16)

4.1. The Optimization using ADMM

ADMM is an algorithm that is intended to blend the decomposability of dual ascent with the superior convergence properties of the method of multipliers. In the spirit of ADMM, we rewrite (13) in the following form:

min_{X,W}  ‖X‖_* − Tr(A_l W B_l^T)   s.t.  X = W,  W_Ω = M_Ω.        (17)

The augmented Lagrangian of (17) is

L(X, W, Y, ρ) = ‖X‖_* − Tr(A_l W B_l^T) + (ρ/2)‖X − W‖_F^2 + Tr(Y^T (X − W)),        (18)

where ρ is a positive scalar. Given the initial setting X_1 = M_Ω, W_1 = X_1 and Y_1 = X_1, the optimization problem (17) can be solved via the following three steps.

Computing X_{k+1}: Fix W_k and Y_k, and minimize L(X, W_k, Y_k, ρ) with respect to X:

X_{k+1} = arg min_X  ‖X‖_* − Tr(A_l W_k B_l^T) + (ρ/2)‖X − W_k‖_F^2 + Tr(Y_k^T (X − W_k)).        (19)

Ignoring constant terms, this can be rewritten as

X_{k+1} = arg min_X  ‖X‖_* + (ρ/2)‖X − (W_k − (1/ρ)Y_k)‖_F^2.        (20)

By Theorem 4.1, problem (20) is solved by

X_{k+1} = D_{1/ρ}(W_k − (1/ρ)Y_k).        (21)

Computing W_{k+1}: Fix X_{k+1} and Y_k and calculate

W_{k+1} = arg min_W  L(X_{k+1}, W, Y_k, ρ),        (22)

which is a quadratic function of W and can be solved easily by setting the derivative of L(X_{k+1}, W, Y_k, ρ) with respect to W to zero, which gives

W_{k+1} = X_{k+1} + (1/ρ)(A_l^T B_l + Y_k).        (23)

Then we fix the values at the observed entries:

W_{k+1} = (W_{k+1})_{Ω^c} + M_Ω.        (24)

Computing Y_{k+1}: Fix X_{k+1} and W_{k+1} and calculate

Y_{k+1} = Y_k + ρ(X_{k+1} − W_{k+1}).        (25)

The whole procedure is summarized in Algorithm 2. Furthermore, the convergence of Algorithm 2 for (17) is guaranteed by the theory of the alternating direction method [2].

4.2. The Optimization using APGL

ADMM solves (13) as a hard-constrained problem. Considering the noisy data encountered in real applications, it is beneficial to relax the constrained problem (13) into

min_X  ‖X‖_* − Tr(A_l X B_l^T) + (λ/2)‖X_Ω − M_Ω‖_F^2,        (26)

for some λ > 0.

Algorithm 2 The Optimization using ADMM
Input: A_l, B_l, M_Ω and tolerance ε.
Initialize: X_1 = M_Ω, W_1 = X_1, Y_1 = X_1 and ρ = 1.
repeat
  STEP 1. X_{k+1} = D_{1/ρ}(W_k − (1/ρ)Y_k)
  STEP 2. W_{k+1} = X_{k+1} + (1/ρ)(A_l^T B_l + Y_k); fix the values at the observed entries: W_{k+1} = (W_{k+1})_{Ω^c} + M_Ω
  STEP 3. Y_{k+1} = Y_k + ρ(X_{k+1} − W_{k+1})
until ‖X_{k+1} − X_k‖_F ≤ ε

Algorithm 3 The Optimization using APGL
Input: A_l, B_l, M_Ω and tolerance ε.
Initialize: t_1 = 1, X_1 = M_Ω, Y_1 = X_1.
repeat
  STEP 1. X_{k+1} = D_{t_k}(Y_k + t_k(A_l^T B_l − λ((Y_k)_Ω − M_Ω)))
  STEP 2. t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2
  STEP 3. Y_{k+1} = X_{k+1} + ((t_k − 1)/t_{k+1})(X_{k+1} − X_k)
until ‖X_{k+1} − X_k‖_F ≤ ε
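For illustration, a minimal numpy sketch of the shrinkage operator D_τ from Definition 4.1 together with the TNNR-ADMM inner loop of Algorithm 2; the function names are our own, and ρ is kept fixed as in the algorithm box:

```python
import numpy as np

def svd_shrink(Y, tau):
    """Singular value shrinkage D_tau(Y) = U diag(max(sigma - tau, 0)) V^T."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def tnnr_admm_inner(A, B, M, omega_mask, rho=1.0, tol=1e-4, max_iter=200):
    """Solve problem (13) by ADMM: X-update (21), W-update (23)-(24), Y-update (25)."""
    X = np.where(omega_mask, M, 0.0)
    W, Y = X.copy(), X.copy()
    AtB = A.T @ B                              # A_l^T B_l, an m x n matrix
    for _ in range(max_iter):
        X_new = svd_shrink(W - Y / rho, 1.0 / rho)        # (21)
        W = X_new + (AtB + Y) / rho                       # (23)
        W = np.where(omega_mask, M, W)                    # (24): reset observed entries
        Y = Y + rho * (X_new - W)                         # (25)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new
        X = X_new
    return X
```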

Figure 2. Noiseless synthetic data (TNNR-ADMM): with r set to the real rank of the matrix (r = r_0 = 15, m = 100, n = 200, p = 0.7), the error decreases very quickly with the number of outer iterations; log10(error) is plotted for clarity.

The APGL method solves problems of the form

min_X  g(X) + f(X),        (27)

where g is closed, convex and possibly non-differentiable, and f is convex and differentiable. First, for any t > 0, APGL constructs an approximation of F(X) = g(X) + f(X) at a given point Y as

Q(X, Y) = f(Y) + ⟨X − Y, ∇f(Y)⟩ + (1/(2t))‖X − Y‖_F^2 + g(X).        (28)

APGL then solves the optimization problem (27) by iteratively updating X, Y and t. In the k-th iteration, X_{k+1} is updated as the unique minimizer of Q(X, Y_k):

X_{k+1} = arg min_X  Q(X, Y_k) = arg min_X  g(X) + (1/(2t_k))‖X − (Y_k − t_k ∇f(Y_k))‖_F^2.        (29)

For problem (26), we choose g(X) = ‖X‖_* and f(X) = −Tr(A_l X B_l^T) + (λ/2)‖X_Ω − M_Ω‖_F^2. So, according to Theorem 4.1, we get

X_{k+1} = arg min_X  ‖X‖_* + (1/(2t_k))‖X − (Y_k − t_k ∇f(Y_k))‖_F^2 = D_{t_k}(Y_k + t_k(A_l^T B_l − λ((Y_k)_Ω − M_Ω))).        (30)

Finally, Y_{k+1} and t_{k+1} are updated in the same way as in [11]:

Y_{k+1} = X_{k+1} + ((t_k − 1)/t_{k+1})(X_{k+1} − X_k),  t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.        (31)

The procedure for solving (26) is summarized in Algorithm 3. As we relax the strict constraint X_Ω = M_Ω, Algorithm 3 is more suitable for dealing with noisy data. What is more, Algorithm 3 has a very fast convergence rate of O(1/k^2), which is guaranteed by the convergence properties of the accelerated proximal gradient method [1].
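Similarly, a minimal numpy sketch of the TNNR-APGL inner loop following updates (30) and (31); function names and default parameter values are our own illustrative choices:

```python
import numpy as np

def svd_shrink(Y, tau):
    """D_tau(Y): soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def tnnr_apgl_inner(A, B, M, omega_mask, lam=0.06, tol=1e-4, max_iter=200):
    """Solve the relaxed problem (26) with APGL, following updates (30) and (31)."""
    X = np.where(omega_mask, M, 0.0)
    Y, t = X.copy(), 1.0
    AtB = A.T @ B
    for _ in range(max_iter):
        # Y_k - t_k * grad f(Y_k); the data-fit part of the gradient acts only on Omega
        step = AtB - lam * np.where(omega_mask, Y - M, 0.0)
        X_new = svd_shrink(Y + t * step, t)                   # (30)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0      # (31)
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)         # (31)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new
        X, t = X_new, t_new
    return X
```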

5. Experiments

In this section, we conduct several experiments to show the effectiveness of our Truncated Nuclear Norm Regularization algorithms (TNNR-ADMM and TNNR-APGL) on both synthetic data and real visual data. TNNR-ADMM denotes our approach using ADMM to solve the optimization problem (13), and TNNR-APGL denotes our approach using APGL to solve the optimization problem (26). For simplicity, we refer to the iterations in Algorithm 1 as outer iterations and to the iterations in Algorithms 2 and 3 as inner iterations.

5.1. Synthetic data

We generate m × n matrices of rank r_0 by sampling two matrices M_L ∈ R^{m×r_0} and M_R ∈ R^{r_0×n}, each with i.i.d. Gaussian entries, and setting M = M_L M_R. The set of observed indices Ω is sampled uniformly at random; let p be the percentage of observed entries. We generate synthetic data from the model

B_ij = M_ij + σZ_ij,  (i, j) ∈ Ω,

where Z is Gaussian white noise with standard deviation 1.
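A minimal numpy sketch of this data-generation protocol (function and variable names are ours):

```python
import numpy as np

def make_synthetic(m=100, n=200, r0=15, p=0.7, sigma=1.0, seed=0):
    """Rank-r0 matrix M = ML @ MR with i.i.d. Gaussian factors, uniformly sampled mask."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((m, r0)) @ rng.standard_normal((r0, n))
    omega_mask = rng.random((m, n)) < p                 # observe each entry with prob. p
    B = M + sigma * rng.standard_normal((m, n))         # noisy observations
    return M, np.where(omega_mask, B, 0.0), omega_mask  # full M, B_Omega, mask
```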

Method       σ = 1     σ = 0.1   σ = 0.01
SVT          115.90    9.41      0.96
SVP          95.40     9.24      0.93
OptSpace     93.46     9.35      0.95
TNNR-ADMM    66.34     7.01      0.67
TNNR-APGL    52.38     5.30      0.52

Table 1. The average error over 20 tests (m = 100, n = 200, p = 0.7, r_0 = 15) at several noise levels σ. Our methods are more robust to noise. For TNNR-APGL, λ is chosen to be 0.04 here.

Figure 3. Noisy synthetic data (TNNR-APGL): with r set to the real rank of the matrix (r = r_0 = 15, m = 100, n = 200, p = 0.7, σ = 1), the error decreases very quickly with the number of outer iterations and is much smaller than that of the other methods, as shown in Table 1.

Figure 5. Text removal. (a) Original image; (b) masked image; (c) SVP, PSNR = 28.94; (d) OptSpace, PSNR = 22.29; (e) TNNR-ADMM, PSNR = 33.72; (f) TNNR-APGL, PSNR = 33.81. TNNR-ADMM and TNNR-APGL both handle the problem well and have similar performance; TNNR-APGL achieves a better result due to the use of the soft constraint.

Figure 4. The error versus the parameter r in the noiseless case ((a) TNNR-ADMM) and the noisy case with σ = 1 ((b) TNNR-APGL). As can be seen, our methods can accurately estimate the real rank of the given matrix.

Denote the full noiseless matrix by X_full and the solution given by an algorithm by X_sol. We define the error as ‖(X_sol − X_full)_{Ω^c}‖_F, which is a commonly used criterion in matrix completion. When σ = 0 (the noiseless case), we use TNNR-ADMM to solve the problem; the very fast convergence shown in Fig. 2 demonstrates the effectiveness of Algorithm 1 and TNNR-ADMM, as only very few outer iterations are needed for the error to decrease to the scale of 10^{-10}. When a relatively large noise is added (σ = 1), our algorithms also work well; the performance of TNNR-APGL is shown in Fig. 3.
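A minimal numpy sketch of this error criterion, again with an assumed boolean mask for Ω:

```python
import numpy as np

def completion_error(X_sol, X_full, omega_mask):
    """Frobenius norm of the residual restricted to the unobserved entries Omega^c."""
    return np.linalg.norm(np.where(omega_mask, 0.0, X_sol - X_full), 'fro')
```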

Figure 6. The original images are listed in the first row and the corresponding masked images are listed below. The last row shows the results of TNNR-APGL, with PSNR values of 28.41, 20.69, 24.28, 31.23, 24.32, 44.73 and 24.99 for images (a)-(g), respectively.

Method       pic(a)   pic(b)   pic(c)   pic(d)   pic(e)   pic(f)   pic(g)
SVP          25.88    19.51    22.15    27.53    23.04    37.27    22.82
OptSpace     23.50    19.20    15.68    19.48    22.52    34.63    21.13
TNNR-APGL    28.41    20.69    24.28    31.23    24.32    44.73    24.99

Table 2. The PSNR values of different methods.

We compare our algorithms with current matrix completion methods such as SVT, SVP and OptSpace under different noise levels. The results are shown in Table 1, which shows that our methods are more robust than the others. In many real applications the rank of the incomplete matrix is unknown, so we also test the performance of our algorithms under different choices of r, as shown in Fig. 4. The error first decreases while r is less than r_0 (the real rank), and the minimum error is achieved when r equals r_0.

5.2. Real visual data

In this section, we test our algorithms in real visual applications. Since color images have three channels (r, g, b), we process each channel separately and combine the results to obtain the final outcome. We use PSNR (Peak Signal-to-Noise Ratio) values to evaluate the performance. Let N be the total number of missing pixels; we define the squared error as SE = error_r^2 + error_g^2 + error_b^2, the mean squared error as MSE = SE / (3N), and PSNR = 10 × log10(255^2 / MSE). In this subsection, the parameter λ in TNNR-APGL is empirically set to 0.06.

We apply our algorithms to the application of text removal and compare our methods with other matrix completion algorithms such as SVT, SVP and OptSpace. For SVP and OptSpace, the rank needs to be pre-determined; we test all possible values and choose the best as their results. Fig. 5 shows the results of all the workable methods. Note that SVT does not converge in this application with real images. As can be seen, the PSNR value of our method is the highest among all the methods, which shows that Truncated Nuclear Norm Regularization is better than the traditional nuclear norm heuristic in this situation. The results of TNNR-ADMM and TNNR-APGL are very close. We further test more images; the detailed results are shown in Fig. 6, and Table 2 gives the PSNR values of the different approaches. In the worst cases of our method, it is sufficient to test r from 1 to 20, and even if the best r is not chosen, the performance is still better than that of the other compared methods.

We also conduct an interesting experiment using our method, shown in Fig. 7. In this example, some windows in the original image are broken, some pixels are damaged, and there is some annoying text and a logo at the bottom of the image. We mask all the pixels that we do not want, and the missing pixels are then recovered automatically by our method.
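A minimal numpy sketch of this PSNR computation over the missing pixels, assuming 8-bit images stored as H × W × 3 arrays and a boolean mask of missing pixels (helper name ours):

```python
import numpy as np

def masked_psnr(recovered, original, missing_mask):
    """PSNR over missing pixels: SE summed over r,g,b; MSE = SE/(3N); PSNR = 10 log10(255^2/MSE)."""
    diff = np.where(missing_mask[..., None], recovered - original, 0.0)  # H x W x 3
    n_missing = missing_mask.sum()
    mse = (diff ** 2).sum() / (3 * n_missing)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```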


Figure 7. (a) Original image; (b) mask; (c) SVP; (d) OptSpace; (e) TNNR-APGL; (f) the difference between (e) and (a). From this example we can see that our method can effectively paste in the correct patches and remove the unwanted text and logo at the same time.

6. Conclusions

We have presented a novel method called Truncated Nuclear Norm Regularization for estimating missing values in visual data by solving matrix completion problems. Unlike traditional nuclear norm based approaches, which take into account all the singular values, our approach minimizes only the smallest min(m, n) − r singular values. The introduced truncated nuclear norm better approximates the matrix rank function. We also introduced two optimization schemes to solve the objective function, using ADMM and APGL. Experiments on both synthetic data and real visual data show the advantages of TNNR compared with other methods based on the nuclear norm or matrix factorization.

7. Acknowledgements

This work was supported by the National Basic Research Program of China (973 Program) under Grant 2009CB320801 and the National Natural Science Foundation of China (Grant Nos. 61125203, 61125106, 91120302, 61072093).

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[2] S. Boyd, N. Parikh, E. Chu, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2010.
[3] J. F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20:1956–1982, 2010.
[4] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2009.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9, 2009.
[6] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2009.
[7] A. Eriksson and A. van den Hengel. Efficient computation of robust low-rank matrix approximations in the presence of missing data using the l1 norm. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[8] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[9] M. Jaggi and M. Sulovsky. A simple algorithm for nuclear norm regularized problems. In International Conference on Machine Learning, pages 471–478, 2010.
[10] H. Ji, C. Liu, Z. Shen, and Y. Xu. Robust video denoising using low rank matrix completion. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[11] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
[12] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56:2980–2998, 2010.
[13] N. Komodakis and G. Tziritas. Image completion using global optimization. In IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[14] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In International Conference on Computer Vision, Kyoto, Japan, 2009.
[15] R. Meka, P. Jain, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, Vancouver, Canada, 2010.
[16] T. Okatani, T. Yoshida, and K. Deguchi. Efficient algorithm for low-rank matrix factorization with missing components and performance comparison of latest algorithms. In International Conference on Computer Vision, 2011.
[17] C. Rasmussen and T. Korah. Spatiotemporal inpainting for recovering texture maps of partially occluded building facades. In IEEE International Conference on Image Processing, 2005.
