
A Sharp Condition for Exact Support Recovery With Orthogonal Matching Pursuit

Jinming Wen, Zhengchun Zhou, Jian Wang, Xiaohu Tang, and Qun Mo

Abstract—Support recovery of sparse signals from noisy measurements with orthogonal matching pursuit (OMP) has been extensively studied. In this paper, we show that for any K-sparse signal x, if a sensing matrix A satisfies the restricted isometry property (RIP) with restricted isometry constant (RIC) δ_{K+1} < 1/√(K+1), then under some constraints on the minimum magnitude of the nonzero elements of x, OMP exactly recovers the support of x from its measurements y = Ax + v in K iterations, where v is a noise vector that is ℓ_2 or ℓ_∞ bounded. This sufficient condition is sharp in terms of δ_{K+1} since for any given positive integer K and any 1/√(K+1) ≤ δ < 1, there always exists a matrix A satisfying the RIP with δ_{K+1} = δ for which OMP fails to recover a K-sparse signal x in K iterations. Also, our constraints on the minimum magnitude of the nonzero elements of x are weaker than existing ones. Moreover, we propose worst-case necessary conditions for the exact support recovery of x, characterized by the minimum magnitude of the nonzero elements of x.

Index Terms—Compressed sensing (CS), restricted isometry property (RIP), restricted isometry constant (RIC), orthogonal matching pursuit (OMP), support recovery.

I. INTRODUCTION

In compressed sensing (CS), we frequently encounter the following linear model [1]–[4]:

    y = Ax + v,                                                          (1)

where x ∈ ℝ^n is an unknown K-sparse signal (i.e., |supp(x)| ≤ K, where supp(x) = {i : x_i ≠ 0} is the support of x and |supp(x)| is the cardinality of supp(x)), A ∈ ℝ^{m×n} (m ≪ n) is a known sensing matrix, y ∈ ℝ^m contains the noisy observations (measurements), and v ∈ ℝ^m is a noise vector.

Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected]. This work was supported by NSFC (No. 61672028, 11271010 and 11531013), the Sichuan Provincial Youth Science and Technology Fund (No. 2015JQ0004 and 2016JQ0004), the Fundamental Research Funds for the Central Universities, the "Programme Avenir Lyon Saint-Etienne de l'Université de Lyon" in the framework of the programme "Investissements d'Avenir" (ANR-11-IDEX-0007), and ANR through the HPAC project under Grant ANR 11 BS02 013. This work was presented in part at the IEEE International Symposium on Information Theory (ISIT 2016), Barcelona, Spain.
J. Wen was with ENS de Lyon, Laboratoire LIP (U. Lyon, CNRS, ENSL, INRIA, UCBL), Lyon 69007, France. He is now with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6G 2V4, Canada (e-mail: [email protected]).
Z. Zhou (Corresponding Author) is with the School of Mathematics, Southwest Jiaotong University, Chengdu 610031, China (e-mail: [email protected]). He is also with the State Key Laboratory of Information Security (Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093).
J. Wang is with the Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA (e-mail: [email protected]).
X. Tang is with the Information Security and National Computing Grid Laboratory, Southwest Jiaotong University, Chengdu 610031, China (e-mail: [email protected]).
Q. Mo is with the Department of Mathematics, Zhejiang University, Hangzhou 310027, China (e-mail: [email protected]).

Algorithm 1 The OMP Algorithm [12]
Input: y, A, and stopping rule.
Initialize: k = 0, r^0 = y, S_0 = ∅.
until the stopping rule is met
  1: k = k + 1,
  2: s^k = arg max_{1≤i≤n} |⟨r^{k−1}, A_i⟩|,
  3: S_k = S_{k−1} ∪ {s^k},
  4: x̂_{S_k} = arg min_{x ∈ ℝ^{|S_k|}} ∥y − A_{S_k} x∥_2,
  5: r^k = y − A_{S_k} x̂_{S_k}.
Output: x̂ = arg min_{x: supp(x)=S_k} ∥y − Ax∥_2.
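For concreteness, the following is a minimal Python sketch of Algorithm 1 (not part of the original paper); the function name omp and the ℓ_2-based stopping rule used here are illustrative assumptions, and ties in Step 2 are broken by the first maximizer rather than randomly.

    import numpy as np

    def omp(y, A, eps, max_iter=None):
        """Minimal sketch of Algorithm 1 (OMP) with the stopping rule ||r^k||_2 <= eps."""
        m, n = A.shape
        max_iter = n if max_iter is None else max_iter
        r, S = y.copy(), []                        # r^0 = y, S_0 = empty set
        x_S = np.zeros(0)
        while np.linalg.norm(r) > eps and len(S) < max_iter:
            s = int(np.argmax(np.abs(A.T @ r)))    # step 2: column most correlated with residual
            S.append(s)                            # step 3
            x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)   # step 4: least squares on A_S
            r = y - A[:, S] @ x_S                  # step 5: update residual
        x_hat = np.zeros(n)
        x_hat[S] = x_S
        return x_hat, S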

There are several common types of noise, such as ℓ_2 bounded noise (i.e., ∥v∥_2 ≤ ϵ for some constant ϵ [5]–[7]), ℓ_∞ bounded noise (i.e., ∥A^T v∥_∞ ≤ ϵ for some constant ϵ [8]), and Gaussian noise (i.e., v_i ∼ N(0, σ²) [9]). In this paper, we consider only the first two types of noise, as the analysis for these two types can be easily extended to the last one by following some techniques in [8].
One of the central goals of CS is to recover the sparse signal x on the basis of the sensing matrix A and the observations y. It has been demonstrated that under appropriate conditions on A, the original signal x can be reliably recovered via properly designed algorithms [10], [11]. Orthogonal matching pursuit (OMP) [12], [13] is a widely used greedy algorithm for performing the recovery task. For any set S ⊂ {1, 2, ..., n}, let A_S denote the submatrix of A that contains only the columns indexed by S. Similarly, let x_S denote the subvector of x that contains only the entries indexed by S. Then, the OMP algorithm is formally described in Algorithm 1.¹
A widely used framework for analyzing the recovery performance of CS recovery algorithms is the restricted isometry property (RIP) [1]. For an m×n matrix A and any integer K, the order-K restricted isometry constant (RIC) δ_K is defined as the smallest constant such that

    (1 − δ_K)∥x∥_2² ≤ ∥Ax∥_2² ≤ (1 + δ_K)∥x∥_2²                          (2)

for all K-sparse vectors x.
¹If the maximum correlation in Step 2 occurs for multiple indices, break the tie randomly.
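Since the RIC is defined through (2) over all K-sparse vectors, for a small matrix it can be computed exactly by enumerating supports and taking the extreme eigenvalues of the Gram submatrices A_S^T A_S. The sketch below (ours, not from the paper; the helper name ric is illustrative) makes the definition concrete; it is feasible only for small n.

    import itertools
    import numpy as np

    def ric(A, K):
        """Order-K restricted isometry constant of A, by exhaustive search over supports."""
        n = A.shape[1]
        delta = 0.0
        for S in itertools.combinations(range(n), K):
            eigs = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
            delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
        return delta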


For the noise-free case (i.e., when v = 0), many RIC-based conditions have been proposed to guarantee the exact recovery of sparse signals via OMP. It has been shown in [14] and [15], respectively, that δ_{K+1} < 1/(3√K) and δ_{K+1} < 1/√(2K) are sufficient for OMP to recover any K-sparse signal x in K iterations. Later, the conditions were improved to δ_{K+1} < 1/(1 + √K) [16], [17] and further to δ_{K+1} < (√(4K+1) − 1)/(2K) [18]. Recently, it has been shown that if δ_{K+1} < 1/√(K+1), OMP is guaranteed to exactly recover K-sparse signals x in K iterations [19]. On the other hand, it has been conjectured in [20] that there exists a matrix A satisfying the RIP with δ_{K+1} ≤ 1/√K such that OMP fails to recover a K-sparse vector x in K iterations. This conjecture has been confirmed by examples provided in [16], [17]. Furthermore, it has been reported in [21], [19] that for any given positive integer K ≥ 2 and any given δ satisfying 1/√(K+1) ≤ δ < 1, there always exist a K-sparse vector x and a matrix A satisfying the RIP with δ_{K+1} = δ such that the OMP algorithm fails to recover x in K iterations. In other words, sufficient conditions for recovering x with K steps of OMP cannot be weaker than δ_{K+1} < 1/√(K+1), which therefore implies that δ_{K+1} < 1/√(K+1) is a sharp condition [19].
For the noisy case (i.e., when v ≠ 0), we are often interested in recovering the support of x, i.e., supp(x). Once supp(x) is exactly recovered, the underlying signal x can be easily estimated by ordinary least squares regression [8]. It has been shown in [22] that, under some constraint on the minimum magnitude of the nonzero elements of x (i.e., min_{i∈supp(x)} |x_i|), δ_{K+1} < 1/(√K + 3) is sufficient for OMP to exactly recover supp(x) under both ℓ_2 and ℓ_∞ bounded noise. This sufficient condition has been improved to δ_{K+1} < 1/(√K + 1) [23], and the best existing condition in terms of δ_{K+1} is δ_{K+1} < (√(4K+1) − 1)/(2K) [18].
In this paper, we investigate sufficient, and worst-case necessary, conditions, based on the RIC and min_{i∈supp(x)} |x_i|, for recovering supp(x) with OMP under both ℓ_2 and ℓ_∞ bounded noise. Here, worst-case necessity means that if the condition is violated, then there is (at least) one instance of {A, x, v} such that OMP fails to recover supp(x) from the noisy measurements y = Ax + v [24]. Specifically, our contributions can be summarized as follows.
i) We show that if A and v in (1) respectively satisfy the RIP with δ_{K+1} < 1/√(K+1) and ∥v∥_2 ≤ ϵ, then OMP with the stopping rule ∥r^k∥_2 ≤ ϵ exactly recovers the support of x from y = Ax + v in K iterations, provided that

    min_{i∈supp(x)} |x_i| > 2ϵ / (1 − √(K+1) δ_{K+1}).

We also show that our constraint on min_{i∈supp(x)} |x_i| is weaker than existing ones² (Theorem 1).
ii) We show that if A and v in (1) respectively satisfy the RIP with δ_{K+1} < 1/√(K+1) and ∥A^T v∥_∞ ≤ ϵ, then OMP with the stopping rule

    ∥A^T r^k∥_∞ ≤ ( 1 + √( (1 + δ_2)K / (1 − δ_{K+1}) ) ) ϵ

exactly recovers supp(x) in K iterations, provided that

    min_{i∈supp(x)} |x_i| > ( 2 / (1 − √(K+1) δ_{K+1}) ) ( 1 + √( (1 + δ_2)K / (1 − δ_{K+1}) ) ) ϵ.

We also compare our constraint on min_{i∈supp(x)} |x_i| with existing results (Theorem 2).
iii) We show that for any given positive integer K, 0 < δ < 1/√(K+1), and ϵ > 0, there always exist a sensing matrix A ∈ ℝ^{m×n} satisfying the RIP with δ_{K+1} = δ, a K-sparse vector x ∈ ℝ^n with

    min_{i∈supp(x)} |x_i| < √(1 − δ) ϵ / (1 − √(K+1) δ),

and a noise vector v ∈ ℝ^m with ∥v∥_2 ≤ ϵ, such that OMP fails to recover supp(x) from y = Ax + v in K iterations (Theorem 3).
iv) We show that for any given positive integer K, 0 < δ < 1/√(K+1), and ϵ > 0, there always exist a sensing matrix A ∈ ℝ^{m×n} satisfying the RIP with δ_{K+1} = δ, a K-sparse vector x ∈ ℝ^n with

    min_{i∈supp(x)} |x_i| < 2ϵ / (1 − √(K+1) δ),

and a noise vector v ∈ ℝ^m with ∥A^T v∥_∞ ≤ ϵ, such that OMP fails to recover supp(x) from y = Ax + v in K iterations (Theorem 4).
Since OMP may fail to recover a K-sparse signal x in K iterations when A satisfies the RIP with δ_{K+1} ≥ 1/√(K+1) and v = 0 [21], [19], sufficient conditions for recovering supp(x) with OMP in K iterations in the noisy case cannot be weaker than δ_{K+1} < 1/√(K+1) (note that v = 0 is the ideal case). Hence, our sufficient conditions summarized in i)–ii) are sharp in terms of the RIC. Moreover, iii) and iv) indicate that, over all K-sparse vectors x and sensing matrices A satisfying the RIP with δ_{K+1} < 1/√(K+1), the worst-case necessary constraints on min_{i∈supp(x)} |x_i| guaranteeing the exact recovery of supp(x) from (1) with K iterations of OMP are

    min_{i∈supp(x)} |x_i| ≥ √(1 − δ_{K+1}) ϵ / (1 − √(K+1) δ_{K+1})

and

    min_{i∈supp(x)} |x_i| ≥ 2ϵ / (1 − √(K+1) δ_{K+1}),

under the ℓ_2 and ℓ_∞ bounded noise, respectively.
The rest of the paper is organized as follows. In Section II, we introduce some notation that will be used throughout this paper; we also propose a lemma which plays a central role in proving our new sufficient conditions. In Section III, we present sufficient, and worst-case necessary, conditions for the exact support recovery of sparse signals with OMP under both ℓ_2 and ℓ_∞ bounded noise. Finally, we conclude the paper in Section IV.
²These results were presented in part at the 2016 IEEE International Symposium on Information Theory (ISIT) [25].

II. PRELIMINARIES

A. Notation


We first define some notation that will be used throughout this paper. Let ℝ be the real field. Boldface lowercase letters denote column vectors, and boldface uppercase letters denote matrices. For a vector x, x_{i:j} denotes the subvector of x formed by entries i, i+1, ..., j. Let e_k denote the k-th column of the identity matrix I and 0 denote a zero matrix or a zero column vector. Denote Ω = supp(x) and let |Ω| be the cardinality of Ω; then, for any K-sparse signal x, |Ω| ≤ K. For any set S, denote Ω \ S = {i | i ∈ Ω, i ∉ S}. Let Ω^c and S^c denote the complements of Ω and S, respectively, i.e., Ω^c = {1, 2, ..., n} \ Ω and S^c = {1, 2, ..., n} \ S. Let A_S denote the submatrix of A that contains only the columns indexed by S. Similarly, let x_S denote the subvector of x that contains only the entries indexed by S. For any matrix A_S of full column rank, let P_S = A_S (A_S^T A_S)^{−1} A_S^T and P_S^⊥ = I − P_S denote the projector and orthogonal complement projector onto the column space of A_S, respectively, where A_S^T stands for the transpose of A_S.

B. A useful lemma

We present the following lemma, which is one of the central results of this paper and will play a key role in proving our sufficient conditions for support recovery with OMP.
Lemma 1: Suppose that A in (1) satisfies the RIP of order K + 1 and S is a proper subset of Ω (i.e., S ⊂ Ω with |S| < |Ω|). Then,

    ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞ − ∥A_{Ω^c}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞
      ≥ (1 − √(|Ω| − |S| + 1) δ_{|Ω|+1}) ∥x_{Ω\S}∥_2 / √(|Ω| − |S|).              (3)

Proof: See Appendix A. ∎
Note that in the noise-free case (i.e., when v = 0), Lemma 1 can be directly connected to the selection rule of OMP. Specifically, if we assume that S = S_k ⊂ Ω for some 0 ≤ k < |Ω| (see Algorithm 1 for the definition of S_k), then

    P_S^⊥ A_{Ω\S} x_{Ω\S} = P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k} = P_{S_k}^⊥ Ax = P_{S_k}^⊥ y = r^k      (4)

and (3) can be rewritten as

    max_{j∈Ω\S_k} |⟨A_j, r^k⟩| − max_{j∈Ω^c} |⟨A_j, r^k⟩|
      ≥ (1 − √(|Ω| − k + 1) δ_{|Ω|+1}) ∥x_{Ω\S_k}∥_2 / √(|Ω| − k).                  (5)

Clearly, (5) characterizes a lower bound on the difference between the maximum value of the OMP decision metric for the columns belonging to Ω and that for the columns belonging to Ω^c. Since ∥x_{Ω\S_k}∥_2 > 0 for k < |Ω|, (5) implies that OMP chooses a correct index among Ω in the (k + 1)-th iteration as long as δ_{|Ω|+1} < 1/√(|Ω| − k + 1). Thus, by induction, one can show that OMP exactly recovers Ω in K iterations under δ_{K+1} < 1/√(K + 1), which matches the result in [19].
In the noisy case (i.e., when v ≠ 0), by assuming that S = S_k ⊂ Ω for some 0 ≤ k < |Ω|, we have

    P_S^⊥ A_{Ω\S} x_{Ω\S} = P_{S_k}^⊥ Ax = P_{S_k}^⊥ y − P_{S_k}^⊥ v = r^k − P_{S_k}^⊥ v.          (6)

Due to the extra term P_{S_k}^⊥ v in (6), however, we cannot directly obtain (5) from (3). Nevertheless, by applying (6) (i.e., the relationship between P_S^⊥ A_{Ω\S} x_{Ω\S} and r^k), one can implicitly obtain from (3) a lower bound for

    max_{j∈Ω\S_k} |⟨A_j, r^k⟩| − max_{j∈Ω^c} |⟨A_j, r^k⟩|

in terms of ∥x_{Ω\S_k}∥_2, δ_{|Ω|+1} and v. This lower bound, together with some constraints on min_{i∈Ω} |x_i|, allows us to build sufficient conditions for OMP to select an index belonging to Ω at the (k + 1)-th iteration. See more details in Section III-A.
Remark 1: Lemma 1 is motivated by [19, Lemma II.2], but these two lemmas have a key distinction. Specifically, while [19, Lemma II.2] showed that ∥A_Ω^T Ax∥_∞ > ∥A_{Ω^c}^T Ax∥_∞, Lemma 1 quantitatively characterizes a lower bound on

    ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞ − ∥A_{Ω^c}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞,

which is stronger in the following aspects:
i) Lemma 1 is more general and embraces [19, Lemma II.2] as a special case. To be specific, Lemma 1 holds for any S which is a proper subset of Ω, while [19, Lemma II.2] only works for the case S = ∅. In fact, the generality of Lemma 1 (i.e., it works for any proper S ⊂ Ω) is of vital importance for the noisy-case analysis of OMP. Indeed, due to the noise involved, the recovery condition for the first iteration of OMP does not apply to the succeeding iterations.³ Thus, we need to consider the recovery condition for every individual iteration of OMP, which, as will be seen later, essentially corresponds to the cases S = S_k, k = 0, 1, ..., K − 1, in Lemma 1.
ii) In contrast to [19, Lemma II.2], which is applicable to the noise-free case only, the lower bound in Lemma 1 works for both the noise-free and the noisy case (as indicated above). Specifically, as will be seen in Appendix B, by applying this quantitative lower bound with S = S_k, together with the relationship between the residual and P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k} (see (6)), we are able to get a precise characterization of the difference between the maximum values of the OMP decision metric for correct and incorrect columns in the noisy case, from which the sufficient condition guaranteeing a correct selection immediately follows.
iii) Compared to [19, Lemma II.2], Lemma 1 gives a sharper lower bound on the difference between the maximum value of the OMP decision metric for the columns belonging to Ω and that for the columns belonging to Ω^c. Specifically, [19, Lemma II.2] showed that ∥A_Ω^T Ax∥_∞ − ∥A_{Ω^c}^T Ax∥_∞ is lower bounded by zero when A satisfies the RIP with δ_{K+1} < 1/√(K + 1). Whereas, applying S = ∅ in (3) yields

    ∥A_Ω^T Ax∥_∞ − ∥A_{Ω^c}^T Ax∥_∞ ≥ (1 − √(|Ω| + 1) δ_{|Ω|+1}) min_{i∈Ω} |x_i|,

where the right-hand side can be much larger than zero under the same RIP assumption.
³This is in contrast to the noise-free case, where the condition for the first iteration of OMP immediately applies to the succeeding iterations, because the residuals of those iterations can be viewed as modified measurements of K-sparse signals with the same sensing matrix A; see, e.g., [17, Lemma 6].


III. MAIN ANALYSIS

In this section, we will show that if a sensing matrix A satisfies the RIP with δ_{K+1} < 1/√(K + 1), then under some constraints on min_{i∈Ω} |x_i|, OMP exactly recovers supp(x) in K iterations from the noisy measurements y = Ax + v. We will also present worst-case necessary conditions on min_{i∈Ω} |x_i| for the exact recovery of supp(x).

A. Sufficient condition

We consider both ℓ_2 and ℓ_∞ bounded noise. The following theorem gives a sufficient condition for exact support recovery with OMP under ℓ_2 bounded noise.
Theorem 1: Suppose that A and v in (1) respectively satisfy the RIP with

    δ_{K+1} < 1/√(K + 1)                                                   (7)

and ∥v∥_2 ≤ ϵ. Then OMP with the stopping rule ∥r^k∥_2 ≤ ϵ exactly recovers the support Ω of any K-sparse signal x from y = Ax + v in |Ω| iterations, provided that

    min_{i∈Ω} |x_i| > 2ϵ / (1 − √(K+1) δ_{K+1}).                            (8)

Proof: See Appendix B. ∎
If v = 0, then we can set ϵ = 0, which implies that (8) holds. Thus we have the following result, which coincides with [19, Theorem III.1].
Corollary 1: If A and v in (1) respectively satisfy the RIP with (7) and v = 0, then OMP exactly recovers all K-sparse signals x from y = Ax in K iterations.
Remark 2: In [21] and [19], it has been shown that for any given integer K ≥ 2 and for any 1/√(K+1) ≤ δ < 1, there always exist a K-sparse vector x and a sensing matrix A with δ_{K+1} = δ such that the OMP algorithm fails to recover x from (1) (note that this statement also holds for K = 1). Therefore, the sufficient condition in Theorem 1 for exact support recovery with OMP is sharp in terms of δ_{K+1}.
It might be interesting to compare our condition with existing results. In [18], [22], [23], similar recovery conditions have been proposed for the OMP algorithm under the assumption that the sensing matrix A has normalized columns (i.e., ∥A_i∥_2 = 1 for i = 1, 2, ..., n). Compared with these conditions, our condition given by Theorem 1 is more general, as it works for sensing matrices whose column ℓ_2-norms are not necessarily equal to 1. More importantly, our result is less restrictive than those in [18], [22], [23] with respect to both δ_{K+1} and min_{i∈Ω} |x_i|. To illustrate this, we compare our condition with that in [18], which is the best result to date. In [18], it was shown that if the sensing matrix A is column normalized and satisfies the RIP with

    δ_{K+1} < (√(4K+1) − 1) / (2K)

and the noise vector v satisfies ∥v∥_2 ≤ ϵ, then the OMP algorithm with the stopping rule ∥r^k∥_2 ≤ ϵ exactly recovers the support Ω of any K-sparse signal x from y = Ax + v in K iterations, provided that

    min_{i∈Ω} |x_i| > (√(1 + δ_{K+1}) + 1) ϵ / ( 1 − δ_{K+1} − √(1 − δ_{K+1}) √K δ_{K+1} ).

To show that our condition in Theorem 1 is less restrictive, it suffices to show that

    (√(4K+1) − 1) / (2K) < 1/√(K + 1)                                        (9)

and that

    (√(1 + δ_{K+1}) + 1) ϵ / ( 1 − δ_{K+1} − √(1 − δ_{K+1}) √K δ_{K+1} ) > 2ϵ / (1 − √(K+1) δ_{K+1}).   (10)

To show (9), we need to show

    √((4K+1)(K+1)) < 2K + √(K + 1),

which is equivalent to

    4K² + 5K + 1 < 4K² + K + 1 + 4K √(K + 1).

Since K ≥ 1, the latter inequality holds, so (9) holds. We next focus on the proof of (10). Since √(1 + δ_{K+1}) + 1 > 2, it is clear that (10) holds if

    1 − δ_{K+1} − √(1 − δ_{K+1}) √K δ_{K+1} < 1 − √(K+1) δ_{K+1},

or, equivalently,

    1 + √(1 − δ_{K+1}) √K > √(K + 1).                                         (11)

Obviously, (11) holds if

    √(1 − δ_{K+1}) > (√(K + 1) − 1) / √K,

which is equivalent to

    δ_{K+1} < 2(√(K + 1) − 1) / K.

By (7), it suffices to show

    1/√(K + 1) < 2(√(K + 1) − 1) / K.

By some simple calculations, one can show that this inequality holds (indeed, it is equivalent to 2√(K + 1) < K + 2, i.e., to 4(K + 1) < (K + 2)², which holds since K² > 0). Thus, (10) holds under (7).
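As a quick numerical illustration (ours, not part of the paper), the comparison above can be checked by evaluating the two RIC thresholds in (9) and the two lower bounds on min_{i∈Ω}|x_i| in (10) for a few values of K; the variable names below (including chang_wu for the bounds of [18]) are ours, and δ is taken as an arbitrary value admissible for both results.

    import math

    eps = 1.0
    for K in (1, 2, 4, 8, 16, 32):
        ours_ric = 1.0 / math.sqrt(K + 1)                     # RIC requirement (7) of Theorem 1
        chang_wu_ric = (math.sqrt(4 * K + 1) - 1) / (2 * K)   # RIC requirement of [18]
        d = 0.9 * chang_wu_ric                                # a delta satisfying both requirements
        ours_min = 2 * eps / (1 - math.sqrt(K + 1) * d)       # right-hand side of (8)
        chang_wu_min = (math.sqrt(1 + d) + 1) * eps / (
            1 - d - math.sqrt(1 - d) * math.sqrt(K) * d)      # min|x_i| bound of [18]
        print(f"K={K:2d}  RIC: ours={ours_ric:.4f} > [18]={chang_wu_ric:.4f}   "
              f"min|x_i|: ours={ours_min:.3f} < [18]={chang_wu_min:.3f}")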

Now we turn to the case where the noise vector v is ℓ_∞ bounded.
Theorem 2: Suppose that A and v in (1) respectively satisfy the RIP with (7) and ∥A^T v∥_∞ ≤ ϵ. Then OMP with the stopping rule

    ∥A^T r^k∥_∞ ≤ ( 1 + √( (1 + δ_2)K / (1 − δ_{K+1}) ) ) ϵ                   (12)

exactly recovers the support Ω of any K-sparse signal x from y = Ax + v in |Ω| iterations, provided that⁴

    min_{i∈Ω} |x_i| > ( 2 / (1 − √(K+1) δ_{K+1}) ) ( 1 + √( (1 + δ_2)K / (1 − δ_{K+1}) ) ) ϵ.     (13)

Proof: See Appendix C. ∎
⁴If the columns of A have unit ℓ_2 norm, then (12) and (13) can be respectively relaxed to ∥A^T r^k∥_∞ ≤ ( 1 + √( K / (1 − δ_{K+1}) ) ) ϵ and min_{i∈Ω} |x_i| > ( 2 / (1 − √(K+1) δ_{K+1}) ) ( 1 + √( K / (1 − δ_{K+1}) ) ) ϵ.


Remark 3: While in [18], [22], [23],

    ∥A^T r^k∥_∞ ≤ ϵ                                                           (14)

was used as the stopping rule of OMP, we would like to note that (12) in Theorem 2 cannot be replaced by (14). Otherwise, OMP may choose indices that do not belong to Ω, no matter how large min_{i∈Ω} |x_i| is. To illustrate this, we give an example, where for simplicity we consider a sensing matrix A with ℓ_2-normalized columns.
Example 1: For any 0 < δ < 1 and any a > (1 + δ)/(1 − δ), let

    A = ( √(1 − δ²)   0 )
        (     δ       1 ),    x = [ a, 0 ]^T,    v = [ −2δ/√(1 − δ²), 1 ]^T,    ϵ = 1.

Then, x is a 1-sparse vector supported on Ω = {1} and min_{i∈Ω} |x_i| = a. A has unit ℓ_2-norm columns. It is easy to verify that the eigenvalues of A^T A are 1 ± δ, which, by the definition of the RIC, implies that δ_2 = δ. Moreover, since

    A^T v = [ −δ, 1 ]^T,

we have ∥A^T v∥_∞ ≤ ϵ. In the following, we show that if (14) is used as the stopping rule, then OMP finally returns the index set {1, 2}, no matter how large a is. By (1), we have

    y = [ a√(1 − δ²) − 2δ/√(1 − δ²),  aδ + 1 ]^T.

Thus, A_1^T y = a − δ and A_2^T y = aδ + 1. Since a > (1 + δ)/(1 − δ), we have A_1^T y > A_2^T y. Thus, by the selection rule of OMP (see Algorithm 1), S_1 = {1} is identified and, consequently,

    x̂_{S_1} = (A_1^T A_1)^{−1} A_1^T y = a − δ,

and the residual is updated as

    r^1 = y − A_1 x̂_{S_1} = [ δ√(1 − δ²) − 2δ/√(1 − δ²),  1 + δ² ]^T.

By some calculations, we obtain

    A^T r^1 = [ 0,  1 + δ² ]^T,

which implies that ∥A^T r^1∥_∞ = 1 + δ² > ϵ, so the stopping condition (14) is not satisfied. Hence, the OMP algorithm will continue to the second iteration and will eventually return the index set {1, 2}.
Again, by [21] and [19], the sufficient condition given in Theorem 2 is sharp in terms of δ_{K+1}. We mention that similar constraints on min_{i∈Ω} |x_i| have been proposed in [18], [22], [23]. However, since those results were based on a different stopping rule (i.e., (14)), we do not give a comparison of our constraint with those results.
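Example 1 can be checked directly with a few lines of code (ours, not in the paper); the values δ = 0.5 and a = 10 below are arbitrary admissible choices.

    import numpy as np

    delta, a = 0.5, 10.0                      # any 0 < delta < 1 and a > (1+delta)/(1-delta)
    A = np.array([[np.sqrt(1 - delta**2), 0.0],
                  [delta,                 1.0]])
    x = np.array([a, 0.0])
    v = np.array([-2 * delta / np.sqrt(1 - delta**2), 1.0])
    eps = 1.0
    y = A @ x + v

    assert np.max(np.abs(A.T @ v)) <= eps + 1e-12            # noise is l_inf bounded by eps
    print("delta_2 =", np.max(np.abs(np.linalg.eigvalsh(A.T @ A) - 1)))   # equals delta

    s1 = int(np.argmax(np.abs(A.T @ y)))                     # first OMP selection
    x_hat = (A[:, [s1]].T @ y) / (A[:, s1] @ A[:, s1])
    r1 = y - A[:, [s1]] @ x_hat
    print("selected index:", s1 + 1)                         # 1, i.e., the correct index
    print("||A^T r^1||_inf =", np.max(np.abs(A.T @ r1)))     # 1 + delta^2 > eps, so rule (14) never stops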

B. Worst-case necessary condition

In the above subsection, we have presented sufficient conditions guaranteeing exact support recovery of sparse signals with OMP. In this subsection, we investigate worst-case necessary conditions for the exact support recovery. Like the sufficient conditions, our necessary conditions are given in terms of the RIC as well as the minimum magnitude of the nonzero elements of the input signal. Note that OMP may fail to recover a K-sparse signal x from y = Ax + v if δ_{K+1} ≥ 1/√(K + 1), even in the noise-free case [21], [19]. Hence, δ_{K+1} < 1/√(K + 1) naturally becomes a necessity for the noisy case. Therefore, in deriving the worst-case necessary condition on min_{i∈Ω} |x_i|, we consider only matrices A satisfying the RIP with δ_{K+1} < 1/√(K + 1).
We first restrict our attention to the case of ℓ_2 bounded noise.
Theorem 3: For any given ϵ > 0, positive integer K, and

    0 < δ < 1/√(K + 1),                                                        (15)

there always exist a matrix A ∈ ℝ^{m×n} satisfying the RIP with δ_{K+1} = δ, a K-sparse vector x ∈ ℝ^n with

    min_{i∈Ω} |x_i| < √(1 − δ) ϵ / (1 − √(K+1) δ),

and a noise vector v ∈ ℝ^m with ∥v∥_2 ≤ ϵ, such that OMP fails to recover Ω from y = Ax + v in K iterations.
Proof: See Appendix D. ∎
Remark 4: One can immediately obtain from Theorem 3 that, under ℓ_2 bounded noise, a worst-case necessary condition (recall that worst-case necessity means that if it is violated, then there is at least one instance of {A, x, v} such that OMP fails to recover supp(x) from the noisy measurements y = Ax + v [24]) on min_{i∈Ω} |x_i| for OMP is

    min_{i∈Ω} |x_i| ≥ √(1 − δ_{K+1}) ϵ / (1 − √(K+1) δ_{K+1}).                  (16)

Here, we would like to note that worst-case necessity does not mean that (16) has to be satisfied by every A, x and v to ensure exact support recovery. In fact, OMP may be able to recover supp(x) in K iterations even when (16) does not hold. One such example is given as follows.
Example 2: For any given 0 < δ < 1/√2 and any 0 < a < √(2(1 − δ)) / (1 − √2 δ), let

    A = √(1 − δ) I_2,    x = [ a, 0 ]^T,    v = [ 1, 1 ]^T,    ϵ = √2.

Then, A satisfies the RIP with δ_2 = δ < 1/√2. Moreover, x is 1-sparse and does not satisfy (16). However, one can check that OMP with the stopping condition ∥r^k∥_2 ≤ ϵ exactly recovers Ω in just one iteration.
Remark 5: We mention that the worst-case necessary condition for exact support recovery with OMP has also been studied in [26], in which the author characterized the worst-case necessity using the signal-to-noise ratio (SNR). However,


their result concerned only sensing matrices A whose singular values are all equal to one. In comparison, our condition is more general and works for all sensing matrices A satisfying the RIP with δ_{K+1} < 1/√(K + 1).
Next, we proceed to the worst-case necessity analysis for the case where the noise is ℓ_∞ bounded.
Theorem 4: For any given ϵ > 0, positive integer K, and δ satisfying (15), there always exist a matrix A ∈ ℝ^{m×n} satisfying the RIP with δ_{K+1} = δ, a K-sparse vector x ∈ ℝ^n with

    min_{i∈Ω} |x_i| < 2ϵ / (1 − √(K+1) δ),

and a noise vector v ∈ ℝ^m with ∥A^T v∥_∞ ≤ ϵ, such that OMP fails to recover Ω from y = Ax + v in K iterations.
Proof: See Appendix E. ∎
Remark 6: Similar to the case of ℓ_2 bounded noise, Theorem 4 implies that a worst-case necessary condition on min_{i∈Ω} |x_i| for exactly recovering supp(x) with OMP under ℓ_∞ bounded noise is

    min_{i∈Ω} |x_i| ≥ 2ϵ / (1 − √(K+1) δ_{K+1}).                                 (17)

Again, we note that (17) applies to the worst case. In general, however, OMP may be able to exactly recover Ω without this requirement. See the following toy example.
Example 3: For any given 0 < δ < 1/√2 and any 0 < a < 2√2 δ / (1 − √2 δ), let

    x = [ a, 0 ]^T,    v = [ 1, 1 ]^T,    ϵ = √2 δ.

Then, A satisfies the RIP with δ_2 = δ < 1/√2, and x is 1-sparse and does not satisfy (17). Furthermore, one can easily show that the OMP algorithm exactly recovers Ω in one iteration when the stopping rule in (12) is used.
Finally, we would like to mention that while our sufficient conditions are sharp in terms of the RIC, there is still some gap between the sufficient and the worst-case necessary constraints on min_{i∈supp(x)} |x_i|. In particular, for ℓ_2 bounded noise, the gap between conditions (8) and (16) is relatively small, demonstrating the tightness of the sufficient condition (8). For ℓ_∞ bounded noise, however, the gap between conditions (13) and (17) can be large, since the factor 1 + √((1 + δ_2)K / (1 − δ_{K+1})) on the right-hand side of (13) can be much larger than one when the support cardinality K is large. Whether it is possible to bridge this gap is an interesting open question.

IV. CONCLUSION

In this paper, we have studied sufficient conditions for the exact support recovery of sparse signals from noisy measurements by OMP. For both ℓ_2 and ℓ_∞ bounded noise, we have shown that if the sensing matrix A satisfies the RIP with δ_{K+1} < 1/√(K + 1), then under some conditions on the minimum magnitude of the nonzero elements of the K-sparse

signal x, the support of x can be exactly recovered in K iterations of OMP. The proposed conditions are sharp in terms of δ_{K+1} for both types of noise, and the conditions on the minimum magnitude of the nonzero elements of x are weaker than existing ones. We have also proposed worst-case necessary conditions for the exact support recovery of x, characterized by the minimum magnitude of the nonzero elements of x, under both ℓ_2 and ℓ_∞ bounded noise.

ACKNOWLEDGMENT

We are grateful to the editor and the anonymous referees for their valuable and thoughtful suggestions that greatly improved the quality of this paper.

APPENDIX A
PROOF OF LEMMA 1

Before proving Lemma 1, we introduce the following three useful lemmas, which were respectively proposed in [1], [27] and [28].
Lemma 2: If A satisfies the RIP of orders k_1 and k_2 with k_1 < k_2, then δ_{k_1} ≤ δ_{k_2}.
Lemma 3: Let A ∈ ℝ^{m×n} satisfy the RIP of order k and let S ⊂ {1, 2, ..., n} with |S| ≤ k. Then, for any x ∈ ℝ^m,

    ∥A_S^T x∥_2² ≤ (1 + δ_k) ∥x∥_2².

Lemma 4: Let the sets S_1, S_2 satisfy |S_2 \ S_1| ≥ 1 and let the matrix A satisfy the RIP of order |S_1 ∪ S_2|. Then, for any vector x ∈ ℝ^{|S_2\S_1|},

    (1 − δ_{|S_1∪S_2|}) ∥x∥_2² ≤ ∥P_{S_1}^⊥ A_{S_2\S_1} x∥_2² ≤ (1 + δ_{|S_1∪S_2|}) ∥x∥_2².

Proof of Lemma 1: Obviously, to show (3), it suffices to show that for each j ∈ Ω^c,

    ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞ − |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|
      ≥ (1 − √(|Ω| − |S| + 1) δ_{|Ω|+1}) ∥x_{Ω\S}∥_2 / √(|Ω| − |S|).              (18)

Since S is a proper subset of Ω, ∥x_{Ω\S}∥_1 ≠ 0. Hence,

    ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞
      = ( Σ_{ℓ∈Ω\S} |x_ℓ| / ∥x_{Ω\S}∥_1 ) ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞
      ≥(a) ( Σ_{ℓ∈Ω\S} |x_ℓ| ) ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞ / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 )
      ≥ ( Σ_{ℓ∈Ω\S} x_ℓ A_ℓ^T P_S^⊥ A_{Ω\S} x_{Ω\S} ) / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 )
      = ( Σ_{ℓ∈Ω\S} x_ℓ A_ℓ )^T P_S^⊥ A_{Ω\S} x_{Ω\S} / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 )
      = ( A_{Ω\S} x_{Ω\S} )^T P_S^⊥ A_{Ω\S} x_{Ω\S} / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 )
      =(b) ( P_S^⊥ A_{Ω\S} x_{Ω\S} )^T P_S^⊥ A_{Ω\S} x_{Ω\S} / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 )
      = ∥P_S^⊥ A_{Ω\S} x_{Ω\S}∥_2² / ( √(|Ω| − |S|) ∥x_{Ω\S}∥_2 ),


where (a) follows from the fact that |supp(x_{Ω\S})| = |Ω| − |S| and that ∥x∥_1 ≤ √|supp(x)| ∥x∥_2 for all x ∈ ℝ^n (for more details, see, e.g., [29, p. 517]; note that this inequality itself can be derived from the Cauchy–Schwarz inequality), and (b) is because P_S^⊥ is an orthogonal projector, which has the idempotent and symmetry properties, i.e.,

    (P_S^⊥)^T P_S^⊥ = P_S^⊥ P_S^⊥ = P_S^⊥.                                        (19)

Thus,

    ∥P_S^⊥ A_{Ω\S} x_{Ω\S}∥_2² ≤ √(|Ω| − |S|) ∥x_{Ω\S}∥_2 ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞.     (20)

Let

    α = − ( √(|Ω| − |S| + 1) − 1 ) / √(|Ω| − |S|).                                  (21)

Then, by some simple calculations, we obtain

    2α / (1 − α²) = − √(|Ω| − |S|),    (1 + α²) / (1 − α²) = √(|Ω| − |S| + 1).      (22)

To simplify the notation, for a given j ∈ Ω^c, we define

    B = P_S^⊥ [ A_{Ω\S}  A_j ],                                                     (23)
    u = [ x_{Ω\S}^T  0 ]^T ∈ ℝ^{|Ω\S|+1},
    w = [ 0^T  α t ∥x_{Ω\S}∥_2 ]^T ∈ ℝ^{|Ω\S|+1},                                   (24)
    t = { 1   if A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S} ≥ 0,
        { −1  if A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S} < 0.                                   (25)

Then,

    P_S^⊥ A_{Ω\S} x_{Ω\S} = Bu,                                                     (26)
    ∥u + w∥_2² = (1 + α²) ∥x_{Ω\S}∥_2²,                                              (27)
    ∥α² u − w∥_2² = α² (1 + α²) ∥x_{Ω\S}∥_2².                                        (28)

Thus,

    w^T B^T B u =(a) α t ∥x_{Ω\S}∥_2 A_j^T (P_S^⊥)^T P_S^⊥ A_{Ω\S} x_{Ω\S}
                =(b) α t ∥x_{Ω\S}∥_2 A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}
                =(c) α ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|,

where (a) follows from (23), (24) and (26), (b) follows from (19), and (c) is from (25). Therefore, for j ∈ Ω^c, we have

    ∥B(u + w)∥_2² = ∥Bu∥_2² + ∥Bw∥_2² + 2 w^T B^T B u
                  = ∥Bu∥_2² + ∥Bw∥_2² + 2α ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|

and

    ∥B(α² u − w)∥_2² = α⁴ ∥Bu∥_2² + ∥Bw∥_2² − 2α³ ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|.

By the aforementioned equations, we have

    ∥B(u + w)∥_2² − ∥B(α² u − w)∥_2²
      = (1 − α⁴) ∥Bu∥_2² + 2α(1 + α²) ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|
      = (1 − α⁴) ( ∥Bu∥_2² + (2α / (1 − α²)) ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}| )
      = (1 − α⁴) ( ∥Bu∥_2² − √(|Ω| − |S|) ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}| ),            (29)

where the last equality follows from the first equality in (22). It is not hard to check that

    ∥B(u + w)∥_2² − ∥B(α² u − w)∥_2²
      ≥(a) (1 − δ_{|Ω|+1}) ∥u + w∥_2² − (1 + δ_{|Ω|+1}) ∥α² u − w∥_2²
      =(b) (1 − δ_{|Ω|+1})(1 + α²) ∥x_{Ω\S}∥_2² − (1 + δ_{|Ω|+1}) α² (1 + α²) ∥x_{Ω\S}∥_2²
      = (1 + α²) ∥x_{Ω\S}∥_2² ( (1 − δ_{|Ω|+1}) − (1 + δ_{|Ω|+1}) α² )
      = (1 + α²) ∥x_{Ω\S}∥_2² ( (1 − α²) − δ_{|Ω|+1} (1 + α²) )
      = (1 − α⁴) ∥x_{Ω\S}∥_2² ( 1 − ((1 + α²)/(1 − α²)) δ_{|Ω|+1} )
      =(c) (1 − α⁴) ∥x_{Ω\S}∥_2² ( 1 − √(|Ω| − |S| + 1) δ_{|Ω|+1} ),                               (30)

where (a) follows from Lemma 4 and (23), (b) is from (27) and (28), and (c) follows from the second equality in (22). By (26), (29), (30) and the fact that 1 − α⁴ > 0, we have

    ∥P_S^⊥ A_{Ω\S} x_{Ω\S}∥_2² − √(|Ω| − |S|) ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|
      = ∥Bu∥_2² − √(|Ω| − |S|) ∥x_{Ω\S}∥_2 |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}|
      ≥ ∥x_{Ω\S}∥_2² ( 1 − √(|Ω| − |S| + 1) δ_{|Ω|+1} ).

Combining this with (20), we obtain

    √(|Ω| − |S|) ∥x_{Ω\S}∥_2 ( ∥A_{Ω\S}^T P_S^⊥ A_{Ω\S} x_{Ω\S}∥_∞ − |A_j^T P_S^⊥ A_{Ω\S} x_{Ω\S}| )
      ≥ ∥x_{Ω\S}∥_2² ( 1 − √(|Ω| − |S| + 1) δ_{|Ω|+1} ).

Therefore, (18) holds, which establishes the lemma. ∎
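As a sanity check (ours, not part of the paper), Lemma 1 can be verified numerically on a small random instance; the RIC δ_{|Ω|+1} is computed by exhaustive search as in the earlier sketch, and all names below are illustrative.

    import itertools
    import numpy as np

    def ric(A, K):
        n = A.shape[1]
        d = 0.0
        for S in itertools.combinations(range(n), K):
            e = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
            d = max(d, abs(e[0] - 1), abs(e[-1] - 1))
        return d

    rng = np.random.default_rng(0)
    m, n = 20, 12
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)                     # unit-norm columns (not required by Lemma 1)

    Omega, S = [0, 3, 7], [3]                          # S is a proper subset of Omega
    x = np.zeros(n)
    x[Omega] = rng.uniform(1, 2, size=len(Omega))
    OmS = [i for i in Omega if i not in S]             # Omega \ S
    P = A[:, S] @ np.linalg.pinv(A[:, S])              # projector onto span(A_S)
    z = (np.eye(m) - P) @ A[:, OmS] @ x[OmS]           # P_S^perp A_{Omega\S} x_{Omega\S}

    lhs = np.max(np.abs(A[:, OmS].T @ z)) - np.max(np.abs(np.delete(A, Omega, axis=1).T @ z))
    d = ric(A, len(Omega) + 1)                         # delta_{|Omega|+1}
    rhs = (1 - np.sqrt(len(OmS) + 1) * d) * np.linalg.norm(x[OmS]) / np.sqrt(len(OmS))
    print(lhs >= rhs, lhs, rhs)                        # inequality (3) of Lemma 1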

APPENDIX B
PROOF OF THEOREM 1

Proof: Our proof consists of two steps. First, we show that OMP makes a correct selection at each iteration. Then, we prove that it performs exactly |Ω| iterations.
We prove the first step by induction. Suppose that the OMP algorithm selects correct indices in the first k − 1 iterations, i.e., S_{k−1} ⊆ Ω. Then, we need to show that the OMP algorithm also selects a correct index at the k-th iteration, i.e., that s^k ∈ Ω (see Algorithm 1). Here, we assume 1 ≤ k ≤ |Ω|; thus, the proof for the first selection corresponds to the case k = 1. Clearly, the induction hypothesis S_{k−1} ⊆ Ω holds for this case since S_0 = ∅.


By line 2 of Algorithm 1, to show s^k ∈ Ω, it is equivalent to show that

    max_{i∈Ω} |⟨r^{k−1}, A_i⟩| > max_{j∈Ω^c} |⟨r^{k−1}, A_j⟩|.                       (31)

In the following, we simplify (31). Since the minimum eigenvalue of A_{S_{k−1}}^T A_{S_{k−1}} is lower bounded by 1 − δ_{|S_{k−1}|} ≥ 1 − δ_{|Ω|} > 0, A_{S_{k−1}}^T A_{S_{k−1}} is invertible. Thus, by line 4 of Algorithm 1, we have

    x̂_{S_{k−1}} = (A_{S_{k−1}}^T A_{S_{k−1}})^{−1} A_{S_{k−1}}^T y.                  (32)

Then, it follows from line 5 of Algorithm 1 and (32) that

    r^{k−1} = y − A_{S_{k−1}} x̂_{S_{k−1}}
            = ( I − A_{S_{k−1}} (A_{S_{k−1}}^T A_{S_{k−1}})^{−1} A_{S_{k−1}}^T ) y
            =(a) P_{S_{k−1}}^⊥ (Ax + v)
            =(b) P_{S_{k−1}}^⊥ (A_Ω x_Ω + v)
            =(c) P_{S_{k−1}}^⊥ (A_{S_{k−1}} x_{S_{k−1}} + A_{Ω\S_{k−1}} x_{Ω\S_{k−1}} + v)
            =(d) P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}} + P_{S_{k−1}}^⊥ v,          (33)

where (a) follows from the definition of P_{S_{k−1}}^⊥, (b) is due to the fact that Ω = supp(x), (c) is from the induction assumption that S_{k−1} ⊆ Ω, and (d) follows from

    P_{S_{k−1}}^⊥ A_{S_{k−1}} = 0.                                                    (34)

Thus, by (33) and (34), for i ∈ S_{k−1} we have ⟨r^{k−1}, A_i⟩ = A_i^T r^{k−1} = 0. Therefore, to show (31), it is equivalent to show

    max_{i∈Ω\S_{k−1}} |⟨r^{k−1}, A_i⟩| > max_{j∈Ω^c} |⟨r^{k−1}, A_j⟩|.                  (35)

In the following, we will use (33) to rewrite (35). By (33) and the (reverse) triangle inequality, we obtain

    max_{i∈Ω\S_{k−1}} |⟨r^{k−1}, A_i⟩|
      = ∥A_{Ω\S_{k−1}}^T ( P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}} + P_{S_{k−1}}^⊥ v )∥_∞
      ≥ ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞ − ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞       (36)

and

    max_{j∈Ω^c} |⟨r^{k−1}, A_j⟩|
      = ∥A_{Ω^c}^T ( P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}} + P_{S_{k−1}}^⊥ v )∥_∞
      ≤ ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞ + ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞.                    (37)

Therefore, by (36) and (37), to show (35), it suffices to show

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞ − ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞
      > ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞ + ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞.             (38)

We next give a lower bound on the left-hand side of (38). By the induction assumption S_{k−1} ⊆ Ω, we have

    |supp(x_{Ω\S_{k−1}})| = |Ω| + 1 − k.                                               (39)

Thus,

    ∥x_{Ω\S_{k−1}}∥_2 ≥ √(|Ω| + 1 − k) min_{i∈Ω\S_{k−1}} |x_i| ≥ √(|Ω| + 1 − k) min_{i∈Ω} |x_i|.     (40)

Since S_{k−1} ⊆ Ω and |S_{k−1}| = k − 1, by Lemma 1, we have

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞ − ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ A_{Ω\S_{k−1}} x_{Ω\S_{k−1}}∥_∞
      ≥ (1 − √(|Ω| − k + 2) δ_{|Ω|+1}) ∥x_{Ω\S_{k−1}}∥_2 / √(|Ω| + 1 − k)
      ≥(a) (1 − √(K + 1) δ_{|Ω|+1}) ∥x_{Ω\S_{k−1}}∥_2 / √(|Ω| + 1 − k)
      ≥(b) (1 − √(K + 1) δ_{K+1}) ∥x_{Ω\S_{k−1}}∥_2 / √(|Ω| + 1 − k)
      ≥(c) (1 − √(K + 1) δ_{K+1}) min_{i∈Ω} |x_i|,                                       (41)

where (a) is because k ≥ 1 and x is K-sparse (i.e., |Ω| ≤ K), (b) follows from Lemma 2, and (c) is from (7) and (40).
We next give an upper bound on the right-hand side of (38). Obviously, there exist i_0 ∈ Ω \ S_{k−1} and j_0 ∈ Ω^c such that

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞ = |A_{i_0}^T P_{S_{k−1}}^⊥ v|,                    (42)
    ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞ = |A_{j_0}^T P_{S_{k−1}}^⊥ v|.                           (43)

Hence,

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞ + ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞
      = |A_{i_0}^T P_{S_{k−1}}^⊥ v| + |A_{j_0}^T P_{S_{k−1}}^⊥ v|
      = ∥A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v∥_1
      ≤(a) √2 ∥A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v∥_2
      ≤(b) √(2(1 + δ_{K+1})) ∥P_{S_{k−1}}^⊥ v∥_2
      ≤(c) √(2(1 + δ_{K+1})) ϵ,                                                           (44)

where (a) is due to the fact that A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v is a 2 × 1 vector and that ∥x∥_1 ≤ √|supp(x)| ∥x∥_2 for all x ∈ ℝ^n (for more details, see, e.g., [29, p. 517]; this inequality can be derived from the Cauchy–Schwarz inequality), (b) follows from Lemma 3, and (c) is because

    ∥P_{S_{k−1}}^⊥ v∥_2 ≤ ∥v∥_2 ≤ ϵ.                                                      (45)

From (41) and (44), (38) (or, equivalently, (35)) can be guaranteed by

    (1 − √(K + 1) δ_{K+1}) min_{i∈Ω} |x_i| > √(2(1 + δ_{K+1})) ϵ,

i.e.,

    min_{i∈Ω} |x_i| > √(2(1 + δ_{K+1})) ϵ / (1 − √(K + 1) δ_{K+1}).


Furthermore, by (7), we have δ_{K+1} < 1 and hence √(1 + δ_{K+1}) < √2. Thus, if (8) holds, then the OMP algorithm selects a correct index in each iteration.
Now, what remains to be shown is that the OMP algorithm performs exactly |Ω| iterations, which is equivalent to showing that ∥r^k∥_2 > ϵ for 1 ≤ k < |Ω| and ∥r^{|Ω|}∥_2 ≤ ϵ. Since OMP selects a correct index at each iteration under (8), by the (reverse) triangle inequality and (33), for 1 ≤ k < |Ω| we have

    ∥r^k∥_2 = ∥P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k} + P_{S_k}^⊥ v∥_2
            ≥ ∥P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_2 − ∥P_{S_k}^⊥ v∥_2
            ≥(a) ∥P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_2 − ϵ
            ≥(b) √(1 − δ_{|Ω|}) ∥x_{Ω\S_k}∥_2 − ϵ
            ≥(c) √(1 − δ_{K+1}) √(|Ω| − k) min_{i∈Ω} |x_i| − ϵ
            ≥ √(1 − δ_{K+1}) min_{i∈Ω} |x_i| − ϵ,                                          (46)

where (a) is from (45), (b) is from Lemma 4, and (c) follows from Lemma 2 and (40). Therefore, if

    min_{i∈Ω} |x_i| > 2ϵ / √(1 − δ_{K+1}),                                                 (47)

then ∥r^k∥_2 > ϵ for each 1 ≤ k < |Ω|. By some simple calculations, we can show that

    2ϵ / (1 − √(K + 1) δ_{K+1}) ≥ 2ϵ / √(1 − δ_{K+1}).                                      (48)

Indeed, by the fact that 0 < 1 − δ_{K+1} < 1, we have

    1 − √(K + 1) δ_{K+1} ≤ 1 − δ_{K+1} ≤ √(1 − δ_{K+1}).

Thus, (48) holds. Therefore, by (47) and (48), if (8) holds, then ∥r^k∥_2 > ϵ for each 1 ≤ k < |Ω|, i.e., the OMP algorithm does not terminate before the |Ω|-th iteration. Similarly, by (33),

    ∥r^{|Ω|}∥_2 = ∥P_{S_{|Ω|}}^⊥ A_{Ω\S_{|Ω|}} x_{Ω\S_{|Ω|}} + P_{S_{|Ω|}}^⊥ v∥_2
               =(a) ∥P_{S_{|Ω|}}^⊥ v∥_2
               ≤(b) ϵ,                                                                      (49)

where (a) is because S_{|Ω|} = Ω and (b) follows from (45). So, by the stopping condition, the OMP algorithm terminates after the |Ω|-th iteration. Therefore, the OMP algorithm performs |Ω| iterations. This completes the proof. ∎

APPENDIX C
PROOF OF THEOREM 2

Proof: Similar to the proof of Theorem 1, we need to prove that the OMP algorithm selects correct indices in all iterations and that it performs exactly |Ω| iterations.
We first prove the first part. By the proof of Theorem 1, we only need to prove that (38) holds. As the noise vector satisfies a different constraint, we need to give a new upper bound on the right-hand side of (38),

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞ + ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞.

To do this, we first use the method of the proof of [8, Theorem 5] to give an upper bound on ∥P_{S_{k−1}} v∥_2. Let λ denote the largest singular value of (A_{S_{k−1}}^T A_{S_{k−1}})^{−1}. Then λ equals the reciprocal of the smallest singular value of A_{S_{k−1}}^T A_{S_{k−1}}. Since A satisfies the RIP of order K + 1 with δ_{K+1}, the smallest singular value of A_{S_{k−1}}^T A_{S_{k−1}} cannot be smaller than 1 − δ_{K+1}. Thus, λ ≤ 1/(1 − δ_{K+1}). Therefore,

    ∥P_{S_{k−1}} v∥_2² = v^T P_{S_{k−1}}^T P_{S_{k−1}} v = v^T P_{S_{k−1}} v
      =(a) v^T A_{S_{k−1}} (A_{S_{k−1}}^T A_{S_{k−1}})^{−1} A_{S_{k−1}}^T v
      ≤(b) λ ∥A_{S_{k−1}}^T v∥_2²
      ≤ ∥A_{S_{k−1}}^T v∥_2² / (1 − δ_{K+1})
      ≤(c) (K − 1) ∥A_{S_{k−1}}^T v∥_∞² / (1 − δ_{K+1})
      ≤(d) (K − 1)ϵ² / (1 − δ_{K+1}) < Kϵ² / (1 − δ_{K+1}),                                  (50)

where (a) follows from the definition of P_{S_{k−1}}, (b) is from the assumption that λ is the largest singular value of (A_{S_{k−1}}^T A_{S_{k−1}})^{−1}, (c) is because |S_{k−1}| = k − 1 ≤ K − 1, and (d) follows from ∥A^T v∥_∞ ≤ ϵ. By (42), (43) and the triangle inequality, we have

    ∥A_{Ω\S_{k−1}}^T P_{S_{k−1}}^⊥ v∥_∞ + ∥A_{Ω^c}^T P_{S_{k−1}}^⊥ v∥_∞
      = |A_{i_0}^T P_{S_{k−1}}^⊥ v| + |A_{j_0}^T P_{S_{k−1}}^⊥ v|
      = ∥A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v∥_1
      ≤(a) √2 ∥A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v∥_2
      = √2 ∥A_{i_0∪j_0}^T (I − P_{S_{k−1}}) v∥_2
      ≤ √2 ∥A_{i_0∪j_0}^T v∥_2 + √2 ∥A_{i_0∪j_0}^T P_{S_{k−1}} v∥_2
      ≤(b) 2 ∥A_{i_0∪j_0}^T v∥_∞ + √(2(1 + δ_2)) ∥P_{S_{k−1}} v∥_2
      ≤(c) 2ϵ + √( (1 + δ_2) / (1 − δ_{K+1}) ) √(2K) ϵ,                                       (51)

where (a) is due to the fact that A_{i_0∪j_0}^T P_{S_{k−1}}^⊥ v is a 2 × 1 vector and the Cauchy–Schwarz inequality, and (b) and (c) respectively follow from Lemma 3 and (50). Therefore, by (41) and (51), if (13) holds, then (38) holds,⁵ i.e., OMP selects correct indices in all iterations if (13) holds.
Our next task is to prove that the OMP algorithm does not terminate before the |Ω|-th iteration. By the (reverse) triangle inequality and (33), we have

    ∥A^T r^k∥_∞ = ∥A^T ( P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k} + P_{S_k}^⊥ v )∥_∞
      ≥ ∥A_{Ω\S_k}^T ( P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k} + P_{S_k}^⊥ v )∥_∞
      ≥ ∥A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_∞ − ∥A_{Ω\S_k}^T P_{S_k}^⊥ v∥_∞.           (52)

⁵Note that (38) still holds under the relaxed condition of (13).


In the following, we give a lower bound on ∥A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_∞. It is not hard to check that

    ∥A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_∞
      ≥ ∥A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_2 / √(|Ω| − k)
      = ∥x_{Ω\S_k}∥_2 ∥A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_2 / ( √(|Ω| − k) ∥x_{Ω\S_k}∥_2 )
      ≥(a) |x_{Ω\S_k}^T A_{Ω\S_k}^T P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}| / ( √(|Ω| − k) ∥x_{Ω\S_k}∥_2 )
      =(b) ∥P_{S_k}^⊥ A_{Ω\S_k} x_{Ω\S_k}∥_2² / ( √(|Ω| − k) ∥x_{Ω\S_k}∥_2 )
      ≥(c) (1 − δ_K) ∥x_{Ω\S_k}∥_2² / ( √(|Ω| − k) ∥x_{Ω\S_k}∥_2 )
      ≥ (1 − δ_{K+1}) ∥x_{Ω\S_k}∥_2 / √(|Ω| − k)
      ≥ (1 − δ_{K+1}) min_{i∈Ω} |x_i|,                                                         (53)

where (a) follows from the Cauchy–Schwarz inequality, (b) is due to (19), (c) is from Lemma 4, and the last inequality is from (40).
In the following, we give an upper bound on ∥A_{Ω\S_k}^T P_{S_k}^⊥ v∥_∞. Let j_0 ∈ Ω \ S_k be such that ∥A_{Ω\S_k}^T P_{S_k}^⊥ v∥_∞ = |A_{j_0}^T P_{S_k}^⊥ v|. Then, by the triangle inequality, we obtain

    ∥A_{Ω\S_k}^T P_{S_k}^⊥ v∥_∞ = |A_{j_0}^T (I − P_{S_k}) v|
      ≤ |A_{j_0}^T v| + |A_{j_0}^T P_{S_k} v|
      ≤ ϵ + |A_{j_0}^T P_{S_k} v|
      ≤ ( 1 + √( (1 + δ_1) K / (1 − δ_{K+1}) ) ) ϵ,                                             (54)

where the last inequality follows from the Cauchy–Schwarz inequality, (50) and Lemma 3. Therefore, for each 1 ≤ k < |Ω|, by (13) and (52)–(54), we have

    ∥A^T r^k∥_∞ ≥ (1 − δ_{K+1}) min_{i∈Ω} |x_i| − ( 1 + √( (1 + δ_1) K / (1 − δ_{K+1}) ) ) ϵ
      > ( 2(1 − δ_{K+1}) / (1 − √(K+1) δ_{K+1}) ) ( 1 + √( (1 + δ_2) K / (1 − δ_{K+1}) ) ) ϵ
          − ( 1 + √( (1 + δ_1) K / (1 − δ_{K+1}) ) ) ϵ
      ≥(a) 2 ( 1 + √( (1 + δ_2) K / (1 − δ_{K+1}) ) ) ϵ − ( 1 + √( (1 + δ_1) K / (1 − δ_{K+1}) ) ) ϵ
      ≥ ( 1 + √( (1 + δ_2) K / (1 − δ_{K+1}) ) ) ϵ,

where (a) is because

    1 − δ_{K+1} ≥ 1 − √(K + 1) δ_{K+1},

and the last inequality follows from Lemma 2. So, by the stopping condition (12), the OMP algorithm does not terminate before the |Ω|-th iteration.⁶
Finally, we prove that OMP terminates after performing the |Ω|-th iteration. By (49), we have

    r^{|Ω|} = P_{S_{|Ω|}}^⊥ v.

Thus, applying some techniques similar to those used for deriving (54), we obtain

    ∥A^T r^{|Ω|}∥_∞ = ∥A^T P_{S_{|Ω|}}^⊥ v∥_∞
      ≤ ( 1 + √( (1 + δ_1) K / (1 − δ_{K+1}) ) ) ϵ
      ≤ ( 1 + √( (1 + δ_2) K / (1 − δ_{K+1}) ) ) ϵ.

By the stopping condition (12), the OMP algorithm terminates after performing the |Ω|-th iteration.⁷ ∎

⁶If A is column normalized, then δ_1 = 0. Thus, under the relaxed condition of (13), we have ∥A^T r^k∥_∞ ≥ ( 1 + √( K / (1 − δ_{K+1}) ) ) ϵ, so the OMP algorithm does not terminate before the |Ω|-th iteration under the relaxed stopping rule.
⁷If A is column normalized, then δ_1 = 0. Thus, ∥A^T r^{|Ω|}∥_∞ ≤ ( 1 + √( K / (1 − δ_{K+1}) ) ) ϵ, so the OMP algorithm terminates after performing the |Ω|-th iteration under the relaxed stopping rule.

APPENDIX D
PROOF OF THEOREM 3

Proof: To prove the theorem, it suffices to show that there exists a linear model of the form (1), where v satisfies ∥v∥_2 ≤ ϵ, A satisfies the RIP with δ_{K+1} = δ for any given δ satisfying (15), and x is K-sparse and satisfies

    min_{i∈Ω} |x_i| = γ                                                                          (55)

for some γ satisfying

    0 < γ < √(1 − δ) ϵ / (1 − √(K+1) δ),                                                         (56)

such that the OMP algorithm fails to recover the support of x in K iterations. In the following, we construct such a linear model.
Let 1_K be the K-dimensional column vector with all entries equal to 1. Then there exist ξ_i ∈ ℝ^K, 1 ≤ i ≤ K − 1, such that [ ξ_1  ξ_2  ...  ξ_{K−1}  (1/√K) 1_K ] ∈ ℝ^{K×K} is an orthogonal matrix. Let

    U = [        ξ_1^T                     0           ]
        [          ⋮                       ⋮           ]
        [      ξ_{K−1}^T                   0           ]
        [  1_K^T / √(K(β² + 1))        β / √(β² + 1)    ]
        [  β 1_K^T / √(K(β² + 1))     −1 / √(β² + 1)    ] ∈ ℝ^{(K+1)×(K+1)},                      (57)


where

    β = ( √(K + 1) − 1 ) / √K.

Then, it is not hard to prove that U is also an orthogonal matrix. Applying some techniques similar to those used for deriving (22), we can show that

    (1 − β²) / (1 + β²) = 1/√(K + 1),    2β / (1 + β²) = √( K / (K + 1) ).                        (58)

Let D ∈ ℝ^{(K+1)×(K+1)} be the diagonal matrix with

    d_ii = { √(1 − δ)   if i = K,
           { √(1 + δ)   if i ≠ K,                                                                 (59)

and let

    A = DU.                                                                                       (60)

Then A^T A = U^T D² U. It is not hard to see that A satisfies the RIP with δ_{K+1} = δ. In fact, let V = U^T; then, by the fact that U is orthogonal, we have

    A^T A = U^T D² U = V D² V^T = V D² V^{−1}.

Thus, V D² V^{−1} is a valid eigenvalue decomposition of A^T A, and D² is the diagonal matrix containing the eigenvalues. Therefore, the largest and smallest eigenvalues of A^T A are 1 + δ and 1 − δ, respectively, which implies that δ_{K+1} = δ.
Let

    x = [ γ 1_K^T  0 ]^T ∈ ℝ^{K+1}                                                                (61)

for any γ satisfying (56) (recall that 1_K is the K-dimensional column vector with all entries equal to 1). Then x is K-sparse and satisfies (55). By (57) and the fact that ξ_i^T 1_K = 0 for 1 ≤ i ≤ K − 1, we have

    U x = γ √( K / (β² + 1) ) [ 0_{K−1}^T  1  β ]^T.

Thus,

    D² U x = γ √( K / (β² + 1) ) [ 0_{K−1}^T  1 − δ  (1 + δ)β ]^T.                                 (62)

Let

    v = D^{−1} U [ 0_K^T  −√(1 − δ) ϵ ]^T ∈ ℝ^{K+1};                                               (63)

then, by (60) and the fact that U is orthogonal, we have

    A^T v = [ 0_K^T  −√(1 − δ) ϵ ]^T.                                                              (64)

In the following, we show that ∥v∥_2 ≤ ϵ. By (57) and (63), we have

    v = D^{−1} ( √(1 − δ) ϵ / √(β² + 1) ) [ 0_{K−1}^T  −β  1 ]^T.

Thus, by (59), we obtain

    v = ( ϵ / √(β² + 1) ) [ 0_{K−1}^T  −β  √((1 − δ)/(1 + δ)) ]^T,

and hence ∥v∥_2 ≤ ϵ.
Let e_i, 1 ≤ i ≤ K + 1, denote the i-th column of the (K + 1) × (K + 1) identity matrix. Then, for 1 ≤ i ≤ K, we have

    ⟨y, A_i⟩ = ⟨Ax + v, A_i⟩ = e_i^T A^T A x + e_i^T A^T v
             =(a) e_i^T U^T D² U x
             =(b) γ ( (1 − δ) + (1 + δ)β² ) / (β² + 1)
             = ( 1 − ((1 − β²)/(1 + β²)) δ ) γ
             = ( 1 − δ/√(K + 1) ) γ,

where (a) follows from the fact that U is orthogonal, (60) and (64), (b) is due to (57) and (62), and the last equality is from (58). Thus, by (15),

    max_{i∈Ω} |⟨y, A_i⟩| = ( 1 − δ/√(K + 1) ) γ.                                                   (65)

Similarly, we have

    ⟨y, A_{K+1}⟩ = e_{K+1}^T U^T D² U x + e_{K+1}^T A^T v
                 = − ( 2β / (β² + 1) ) √K δ γ − √(1 − δ) ϵ
                 = − K δ γ / √(K + 1) − √(1 − δ) ϵ,

where the last equality is from (58). Thus,

    max_{j∈Ω^c} |⟨y, A_j⟩| = K δ γ / √(K + 1) + √(1 − δ) ϵ.                                         (66)

By (56), (65) and (66), we have

    max_{i∈Ω} |⟨y, A_i⟩| < max_{j∈Ω^c} |⟨y, A_j⟩|.

Thus, by line 2 of Algorithm 1, OMP chooses an index in Ω^c in the first iteration. Therefore, OMP fails to recover Ω in K iterations. This completes the proof. ∎
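The construction above can be checked numerically; the following sketch (ours, not part of the paper, with arbitrary admissible K, δ and ϵ) builds U, D and A as in (57)–(61), and verifies that δ_{K+1} = δ while the first OMP selection falls outside the support Ω = {1, ..., K}.

    import numpy as np

    K, delta, eps = 4, 0.3, 1.0                       # any K >= 1, 0 < delta < 1/sqrt(K+1), eps > 0
    beta = (np.sqrt(K + 1) - 1) / np.sqrt(K)
    gamma = 0.99 * np.sqrt(1 - delta) * eps / (1 - np.sqrt(K + 1) * delta)   # satisfies (56)

    # xi_1, ..., xi_{K-1}: an orthonormal basis of the subspace of R^K orthogonal to 1_K
    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(np.column_stack([np.ones(K) / np.sqrt(K), rng.standard_normal((K, K - 1))]))
    xi = Q[:, 1:]

    U = np.zeros((K + 1, K + 1))                      # rows as in (57)
    U[:K - 1, :K] = xi.T
    U[K - 1, :K] = 1.0 / np.sqrt(K * (beta**2 + 1))
    U[K - 1, K] = beta / np.sqrt(beta**2 + 1)
    U[K, :K] = beta / np.sqrt(K * (beta**2 + 1))
    U[K, K] = -1.0 / np.sqrt(beta**2 + 1)

    D = np.diag([np.sqrt(1 + delta)] * (K + 1))
    D[K - 1, K - 1] = np.sqrt(1 - delta)              # d_KK = sqrt(1 - delta), cf. (59)
    A = D @ U                                         # (60)

    x = np.concatenate([gamma * np.ones(K), [0.0]])   # (61); Omega = {1, ..., K}
    v = np.linalg.solve(D, U @ np.concatenate([np.zeros(K), [-np.sqrt(1 - delta) * eps]]))   # (63)
    y = A @ x + v

    print("delta_{K+1} =", np.max(np.abs(np.linalg.eigvalsh(A.T @ A) - 1)))   # equals delta
    print("||v||_2 =", np.linalg.norm(v), "<=", eps)
    print("first OMP pick:", int(np.argmax(np.abs(A.T @ y))) + 1, "(index K+1 =", K + 1, "is outside Omega)")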

APPENDIX E
PROOF OF THEOREM 4

Proof: Similar to the proof of Theorem 3, we need to show that there exists a linear model of the form (1), where v satisfies ∥A^T v∥_∞ ≤ ϵ, A satisfies the RIP with δ_{K+1} = δ for any given δ satisfying (15), and x is K-sparse and satisfies (55) with

    0 < γ < 2ϵ / (1 − √(K+1) δ),                                                                  (67)

such that the OMP algorithm fails to recover the support of x in K iterations.
We use the same A and x (see (60) and (61)) as in the proof of Theorem 3, but instead of (63), we define

    v = −ϵ D^{−1} U 1_{K+1} ∈ ℝ^{K+1}.                                                            (68)

Recall that 1_{K+1} is the (K + 1)-dimensional column vector with all entries equal to 1. So, by (60), we obtain

    A^T v = −ϵ 1_{K+1},

leading to ∥A^T v∥_∞ = ϵ.


Applying some techniques similar to those used for deriving (65) and (66), we have

    max_{i∈Ω} |⟨y, A_i⟩| = ( 1 − δ/√(K + 1) ) γ − ϵ

and

    max_{j∈Ω^c} |⟨y, A_j⟩| = K δ γ / √(K + 1) + ϵ.

Thus, by (67), we further have

    max_{i∈Ω} |⟨y, A_i⟩| < max_{j∈Ω^c} |⟨y, A_j⟩|.

By line 2 of Algorithm 1, OMP chooses an index in Ω^c in the first iteration. Therefore, OMP fails to recover Ω in K iterations. This completes the proof. ∎

REFERENCES

[1] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[3] A. Cohen, W. Dahmen, and R. DeVore, "Compressed sensing and best k-term approximation," J. Amer. Math. Soc., vol. 22, pp. 211–231, 2009.
[4] J. Wen, D. Li, and F. Zhu, "Stable recovery of sparse signals via lp minimization," Appl. Comput. Harmon. Anal., vol. 38, no. 1, pp. 161–176, 2015.
[5] J. J. Fuchs, "Recovery of exact sparse representations in the presence of bounded noise," IEEE Trans. Inf. Theory, vol. 51, no. 10, pp. 3601–3608, 2005.
[6] D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Trans. Inf. Theory, vol. 52, pp. 6–18, 2006.
[7] E. J. Candès, "The restricted isometry property and its implications for compressed sensing," C. R. Acad. Sci. Paris, Ser. I, vol. 346, no. 11, pp. 589–592, 2008.
[8] T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Trans. Inf. Theory, vol. 57, pp. 4680–4688, 2011.
[9] E. J. Candès and T. Tao, "The Dantzig selector: Statistical estimation when p is much larger than n," Ann. Statist., vol. 35, pp. 2313–2351, 2007.
[10] E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Comm. Pure Appl. Math., vol. 59, no. 8, pp. 1207–1223, 2006.
[11] Q. Mo and S. Li, "New bounds on the restricted isometry constant δ2K," Appl. Comput. Harmon. Anal., vol. 31, pp. 460–468, 2011.
[12] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Annu. Asilomar Conf. Signals, Systems, and Computers, vol. 1, Pacific Grove, CA, Nov. 1993, pp. 40–44.
[13] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[14] M. Davenport and M. Wakin, "Analysis of orthogonal matching pursuit using the restricted isometry property," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4395–4401, 2010.
[15] E. Liu and V. Temlyakov, "The orthogonal super greedy algorithm and applications in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2040–2047, 2012.
[16] Q. Mo and S. Yi, "A remark on the restricted isometry property in orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 58, no. 6, pp. 3654–3656, 2012.
[17] J. Wang and B. Shim, "On the recovery limit of sparse signals using orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4973–4976, 2012.
[18] L.-H. Chang and J.-Y. Wu, "An improved RIP-based performance guarantee for sparse signal recovery via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 60, no. 9, pp. 707–710, 2014.
[19] Q. Mo, "A sharp restricted isometry constant bound of orthogonal matching pursuit," arXiv:1501.01708, 2015.
[20] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[21] J. Wen, X. Zhu, and D. Li, "Improved bounds on the restricted isometry constant for orthogonal matching pursuit," Electron. Lett., vol. 49, pp. 1487–1489, 2013.
[22] Y. Shen and S. Li, "Sparse signals recovery from noisy measurements by orthogonal matching pursuit," Inverse Probl. Imaging, vol. 9, no. 1, pp. 231–238, 2015.
[23] R. Wu, W. Huang, and D. Chen, "The exact support recovery of sparse signals with noise via orthogonal matching pursuit," IEEE Signal Process. Lett., vol. 20, no. 4, pp. 403–406, 2013.
[24] C. Herzet, C. Soussen, J. Idier, and R. Gribonval, "Exact recovery conditions for sparse representations with partial support information," IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7509–7524, Nov. 2013.
[25] J. Wen, Z. Zhou, J. Wang, X. Tang, and Q. Mo, "A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit," in Proc. 2016 IEEE International Symposium on Information Theory (ISIT), 2016, pp. 2364–2368.
[26] J. Wang, "Support recovery with orthogonal matching pursuit in the presence of noise," IEEE Trans. Signal Process., vol. 63, no. 21, pp. 5868–5877, Nov. 2015.
[27] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, 2009.
[28] Y. Shen, B. Li, W. Pan, and J. Li, "Analysis of generalized orthogonal matching pursuit using restricted isometry constant," Electron. Lett., vol. 50, no. 14, pp. 1020–1022, 2014.
[29] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Springer, 2013.

Jinming Wen received his Bachelor degree in Information and Computing Science from Jilin Institute of Chemical Technology, Jilin, China, in 2008, his M.Sc. degree in Pure Mathematics from the Mathematics Institute of Jilin University, Jilin, China, in 2010, and his Ph.D degree in Applied Mathematics from McGill University, Montreal, Canada, in 2015. He was a postdoctoral research fellow at Laboratoire LIP, ENS de Lyon from March 2015 to August 2016. He is currently working as a postdoctoral research fellow at department of Electrical and Computer Engineering, University of Alberta. His research interests are in the areas of lattice reduction with applications in communications, signal processing and cryptography, and sparse recovery. He was a Guest Editor for 4 special issues including one in ACM/Springer Mobile Networks and Applications.

Zhengchun Zhou received the B.S. and M.S. degrees in mathematics and the Ph.D. degree in information security from Southwest Jiaotong University, Chengdu, China, in 2001, 2004, and 2010, respectively. From 2012 to 2013, he was a postdoctoral fellow in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology, and from 2013 to 2014 he was a research associate in the same department. Since 2001, he has been with the Department of Mathematics, Southwest Jiaotong University, where he is currently a professor. His research interests include sequence design and coding theory. Dr. Zhou was a recipient of the National Excellent Doctoral Dissertation Award of China in 2013.

Jian Wang (S’11) received the B.S. degree in material engineering and the M.S. degree in information and communication engineering from Harbin Institute of Technology, China, and the Ph.D. degree in electrical and computer engineering from Korea University, Korea, in 2006, 2009, and 2013, respectively. From 2013 to 2015, he held postdoctoral research associate positions with the Department of Statistics and Biostatistics and the Department of Computer Science, Rutgers University, Piscataway, NJ, USA, and with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA. He is currently a research assistant professor at Seoul National University, Seoul, Korea. His research interests include phase retrieval, sparse and low-rank recovery, lattices, wireless sensor networks, signal processing for wireless communications, and statistical learning.

Xiaohu Tang (M’04) received the B.S. degree in applied mathematics from Northwest Polytechnic University, Xi’an, China, the M.S. degree in applied mathematics from Sichuan University, Chengdu, China, and the Ph.D. degree in electronic engineering from Southwest Jiaotong University, Chengdu, China, in 1992, 1995, and 2001, respectively. From 2003 to 2004, he was a research associate in the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology. From 2007 to 2008, he was a visiting professor at the University of Ulm, Germany. Since 2001, he has been with the School of Information Science and Technology, Southwest Jiaotong University, where he is currently a professor. His research interests include coding theory, network security, distributed storage, and information processing for big data. Dr. Tang was a recipient of the National Excellent Doctoral Dissertation Award of China in 2003, the Humboldt Research Fellowship in 2007, and the Outstanding Young Scientist Award of the NSFC in 2013. He serves as an Associate Editor for several journals, including the IEEE Transactions on Information Theory and the IEICE Transactions on Fundamentals, and has served as a technical program committee member for a number of conferences.

Qun Mo received the B.Sc. degree from Tsinghua University in 1994, the M.Sc. degree from the Chinese Academy of Sciences in 1997, and the Ph.D. degree from the University of Alberta, Canada, in 2003. He is currently an associate professor of mathematics at Zhejiang University. His research interests include compressed sensing, wavelets, and their applications.
