THE PROBABILISTIC ESTIMATES ON THE LARGEST AND SMALLEST q-SINGULAR VALUES OF RANDOM MATRICES

MING-JUN LAI∗ AND YANG LIU†

Abstract. We study the q-singular values of random matrices with pre-Gaussian entries, defined in terms of the $\ell_q$-quasinorm with $0 < q \le 1$. In this paper, we mainly consider the decay of the lower and upper tail probabilities of the largest q-singular value $s_1^{(q)}$ when the number of rows of the matrices becomes very large. Based on the probabilistic estimates on the largest q-singular value, we also give probabilistic estimates on the smallest q-singular value for pre-Gaussian random matrices.

1. Introduction

The extremal spectrum, that is, the largest and smallest singular values of random matrices, has been of interest to many research communities, including numerical analysis and multivariate statistics. For example, the condition numbers of random matrices were of interest as early as [von Neumann and Goldstein'1947, [28]] and [Smale'1985, [19]], and the distribution of the largest and smallest eigenvalues of Wishart matrices was studied in [Wishart'1928, [30]]. Some estimates for the probability distribution of the norm of a random matrix transformation were obtained in [Bennett, Goodman and Newman'1975, [2]]. In 1988, Edelman presented a comprehensive study of the distribution of the condition numbers of Gaussian random matrices together with many numerical experiments (cf. [5]). In particular, Edelman explained several interesting applications of eigenvalues of random matrices in graph theory, the zeros of Riemann zeta functions, as well as in nuclear physics (cf. [6]). Indeed, the well-known semi-circle law (cf. [Wigner'1962, [29]]) states that the histogram of the eigenvalues of a large random matrix is roughly a semi-circle. To be more precise, let $A$ be a Gaussian random matrix and let $M(x)$ denote the proportion of eigenvalues of the Gaussian orthogonal ensemble $(A + A^T)/(2\sqrt{n})$ (the symmetric part of $A/\sqrt{n}$) that are less than $x$. Then the semi-circle law asserts that
$$\frac{d}{dx} M(x) \to \begin{cases} \frac{2}{\pi}\sqrt{1 - x^2}, & \text{if } x \in [-1, 1], \\ 0, & \text{otherwise.} \end{cases}$$
This interesting property has made a long-lasting impact and attracted many researchers to extend and generalize the semi-circle law. See the recent papers [Tao and Vu'2008, [24]] and [Rudelson and Vershynin'2010, [17]] for new results

Date: Nov 26, 2012 and, in revised form, Sep 23, 2013.
2000 Mathematics Subject Classification. Primary 60B20; Secondary 60F10, 60G50, 60G42.
Key words and phrases. Random matrices, probability, pre-Gaussian random variable, generalized singular values.
∗ This author is partly supported by the National Science Foundation under grant DMS-0713807. † This author is partially supported by the Air Force Office of Scientific Research under grant AFOSR 9550-12-1-0455.


and the surveys and references therein. It is known that the largest eigenvalue of $M_s = \frac{1}{s} V_{n\times s}(V_{n\times s})^T$ converges to $(1+\sqrt{y})^2$ almost surely (cf. [Geman'1980, [10]]) and the smallest eigenvalue converges to $(1-\sqrt{y})^2$ almost surely (cf. [Silverstein'1985, [18]]), where $V_{n\times s}$ is a Gaussian random matrix of size $n\times s$ with $n/s \to y \in (0,1]$, and $V_{n\times s}(V_{n\times s})^T$ is called a Wishart matrix.

The behavior of the largest singular value of random matrices $A$ with i.i.d. entries is well studied. If a random variable $\xi$ has a bounded fourth moment, then the largest singular value $s_1(A)$ of an $n\times n$ random matrix $A$ whose entries are i.i.d. copies of $\xi$ satisfies
$$\lim_{n\to\infty} \frac{s_1(A)}{\sqrt{n}} = 2\sqrt{E\xi^2}$$
almost surely. See, e.g., [Yin, Bai, Krishnaiah'1988, [31]] and [Bai, Silverstein and Yin'1988, [1]]. The bounded fourth moment is necessary and sufficient in this case. However, much less is known about the behavior of the smallest singular value of general random matrices. Although Edelman showed that the smallest singular value $s_n(A)$ of a Gaussian random matrix $A$ of size $n\times n$ satisfies
$$P\left(s_n(A) \le \frac{\varepsilon}{\sqrt{n}}\right) \le \varepsilon$$
for every $\varepsilon > 0$, probability estimates of $s_n(A)$ for a general random matrix $A$ were not known until the results in [Rudelson and Vershynin'2008, [14]]. In fact, Rudelson in [Rudelson'2008, [16]] presented a less accurate probability estimate for $s_n(A)$, and soon Rudelson and Vershynin found a simpler proof of a much more accurate estimate in [Rudelson and Vershynin'2008, [15]]. More precisely, Rudelson and Vershynin first showed (cf. [15]) the following result:

Theorem 1.1. Let $A$ be a matrix of size $n\times n$ whose entries are independent random variables with variance 1 and bounded fourth moment. Then
$$\lim_{\varepsilon\to 0^+} \limsup_{n\to\infty} P\left(s_n(A) \le \frac{\varepsilon}{\sqrt{n}}\right) = 0.$$

Furthermore, in [Rudelson and Vershynin'2008, [14]], they presented a proof of the following

Theorem 1.2. Let $A$ be an $n\times n$ matrix whose entries are i.i.d. centered random variables with unit variance and fourth moment bounded by $B$. Then
$$\lim_{K\to+\infty} \limsup_{n\to\infty} P\left(s_n(A) \ge \frac{K}{\sqrt{n}}\right) = 0.$$

These two results settled a conjecture by Smale in [18] (the results in the Gaussian case were established by Edelman and Szarek; see [6] and [22]). More precise estimates for the largest and smallest singular values have been given in the last ten years for sub-Gaussian random matrices, Bernoulli matrices, covariance matrices, and general random matrices of the form $M + A$ with a deterministic matrix $M$ and a random matrix $A$. See, e.g., [25], [20], [14], [26], [23] and the references in [17]. In this paper, we extend these studies on the probability estimates of the largest and smallest singular values of random matrices in the $\ell_2$ norm and give estimates for these extremal spectra in the setting of the $\ell_q$-quasinorm for $0 < q \le 1$. Not only is it interesting to know if the probability estimates for the largest and smallest singular values of random matrices in the $\ell_2$ norm can be extended to the setting of


the $\ell_q$-quasinorm, but there are also some definite advantages of using the general $\ell_q$-quasinorm when studying the restricted isometry property of random matrices, as suggested in [Chartrand and Staneva'2008, [4]], [Foucart and Lai'2009, [8]] and [Foucart and Lai'2010, [9]]. In addition to Gaussian and sub-Gaussian random matrices, we would like to study the probability estimates for pre-Gaussian random matrices. A random variable $\xi$ is pre-Gaussian if $\xi$ has mean zero and satisfies the moment growth condition $E(|\xi|^k) \le k!\lambda^k/2$, i.e., $(E(|\xi|^k))^{1/k} \le C\lambda k$ for $k \ge 1$ (cf. [Buldygin and Kozachenko'2000, [3]]). Note that the moment growth condition for a sub-Gaussian random variable $\eta$ is $(E|\eta|^k)^{1/k} \le BC\sqrt{k}$.

To be precise about what we are going to study in this paper, for any vector $x = (x_1, \cdots, x_n)^T$ in $\mathbb{R}^n$, let
$$\|x\|_q^q = \sum_{i=1}^n |x_i|^q$$
for $q \in (0,\infty)$. It is known that for $q \ge 1$, $\|\cdot\|_q$ is a norm on $\mathbb{R}^n$, while for $q \in (0,1)$ it is a quasinorm on $\mathbb{R}^n$, satisfying all the properties of a norm except the triangle inequality. Let $A = (a_{ij})_{1\le i\le m, 1\le j\le n}$ be a matrix. The standard largest q-singular value is defined by
$$(1.1)\qquad s_1^{(q)}(A) := \sup\left\{ \frac{\|Ax\|_q}{\|x\|_q} : x \in \mathbb{R}^n \text{ with } x \ne 0 \right\}.$$
It is known that for $q \ge 1$, the equation in (1.1) defines a norm on the space of $m\times n$ matrices. In addition, we know
$$(1.2)\qquad \max_j \|a_j\|_q \le s_1^{(q)}(A) \le n^{\frac{q-1}{q}} \max_j \|a_j\|_q,$$
where $a_j$, $j = 1, 2, \cdots, n$, are the column vectors of $A$. We refer to any book on matrix theory for the properties of the largest singular value $s_1^{(q)}(A)$ when $q \ge 1$, for example, [11]. However, for $q \in (0,1)$, the properties of $s_1^{(q)}(A)$ are not well known. For convenience, we shall explain some useful properties in the Preliminaries section. The purpose of this paper is to study the matrix spectrum, e.g., $s_1^{(q)}(A)$, for a random matrix $A$ with pre-Gaussian entries. One set of our main results is the following:

Theorem 1.3 (Upper tail probability of the largest q-singular value). Let $\xi$ be a pre-Gaussian variable normalized to have variance 1 and let $A$ be an $m\times m$ matrix with i.i.d. copies of $\xi$ in its entries. Then for any $0 < q < 1$,
$$(1.3)\qquad P\left(s_1^{(q)}(A) \ge Cm^{\frac1q}\right) \le \exp(-C'm)$$
for some $C, C' > 0$ dependent only on the pre-Gaussian variable $\xi$.

and

Theorem 1.4 (Lower tail probability of the largest q-singular value). Let $\xi$ be a pre-Gaussian variable normalized to have variance 1 and let $A$ be an $m\times m$ matrix with i.i.d. copies of $\xi$ in its entries. Then there exists a constant $K > 0$ such that
$$(1.4)\qquad P\left(s_1^{(q)}(A) \le Km^{\frac1q}\right) \le c^m$$
for some $0 < c < 1$, where $K$ depends only on the pre-Gaussian variable $\xi$.
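As a quick numerical illustration of the $\ell_q$-quasinorm just introduced (a sketch, not part of the paper's argument; the helper `lq` is hypothetical): for $0 < q < 1$ the triangle inequality fails, while the $q$-th power $\|\cdot\|_q^q$ remains subadditive.

```python
def lq(x, q):
    """l_q quasinorm ||x||_q = (sum |x_i|^q)^(1/q) for q > 0."""
    return sum(abs(t) ** q for t in x) ** (1.0 / q)

q = 0.5
x, y = [1.0, 0.0], [0.0, 1.0]
s = [a + b for a, b in zip(x, y)]

# For q < 1 the triangle inequality fails: ||x+y||_q > ||x||_q + ||y||_q here.
print(lq(s, q), lq(x, q) + lq(y, q))  # 4.0 vs 2.0

# But the q-th power is subadditive: ||x+y||_q^q <= ||x||_q^q + ||y||_q^q.
assert lq(s, q) ** q <= lq(x, q) ** q + lq(y, q) ** q + 1e-12
```

This is exactly the "all norm properties except the triangle inequality" behavior described above.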


These results have their counterparts in [Yin, Bai, Krishnaiah'1988, [31]], [Bai, Silverstein and Yin'1988, [1]] and [Soshnikov'2002, [20]] for the $\ell_2$-norm. It would be interesting to know whether the above results hold for general random matrices whose entries are i.i.d. copies of a random variable with bounded fourth moment.

Next we would like to study the smallest singular values. In general, we can define the k-th q-singular value as follows.

Definition 1.1. The k-th q-singular value of an $m\times n$ matrix $A$ is defined by
$$(1.5)\qquad s_k^{(q)}(A) := \inf\left\{ \sup\left\{ \frac{\|Ax\|_q}{\|x\|_q} : x \in V\setminus\{0\} \right\} : V \subseteq \mathbb{R}^n,\ \dim(V) \ge n-k+1 \right\}.$$
It is easy to see that
$$(1.6)\qquad s_1^{(q)}(A) \ge s_2^{(q)}(A) \ge \ldots \ge s_{\min(m,n)}^{(q)}(A) \ge 0.$$
The smallest q-singular value $s_{\min(m,n)}^{(q)}$ is also of special interest in various studies. In the lower tail probability estimate, we divide the study into two cases, $m > n$ (tall matrices) and $m = n$ (square matrices), under the assumption that $A$ is of full rank. The study depends heavily on known results on compressible and incompressible vectors. In the upper tail probability estimate, we use known estimates on the projection in the $\ell_2$ norm. Another set of main results is as follows. For tall random matrices, we have

Theorem 1.5 (Lower tail probability of the smallest q-singular value). Let us fix $0 < q \le 1$. Let $\xi$ be a pre-Gaussian random variable with mean 0 and variance 1. Suppose that $A$ is an $m\times n$ matrix with i.i.d. copies of $\xi$ in its entries with $m > n$. Then there exist some $\varepsilon > 0$, $c > 0$ and $\lambda \in (0,1)$, dependent on $q$ and $\varepsilon$, such that
$$(1.7)\qquad P\left(s_n^{(q)}(A) \le \varepsilon m^{1/q}\right) < e^{-cm}$$
when $n \le \lambda m$.

For square random matrices, we have

Theorem 1.6 (Lower tail probability of the smallest q-singular value). Let us fix $0 < q \le 1$. Let $\xi$ be a pre-Gaussian random variable with variance 1 and let $A$ be an $n\times n$ matrix with i.i.d. copies of $\xi$ in its entries. Then for any $\varepsilon > 0$, one has
$$(1.8)\qquad P\left(s_n^{(q)}(A) < \varepsilon n^{-1/q}\right) \le \gamma\left(\varepsilon + \alpha^n\right),$$
where $\gamma > 0$ and $\alpha \in (0,1)$ depend only on the pre-Gaussian variable $\xi$.

The above theorem is an extension of Theorem 1.1. Finally we have

Theorem 1.7 (Upper tail probability of the smallest q-singular value). Given any $0 < q \le 1$, let $\xi$ be a pre-Gaussian random variable with variance 1 and let $A$ be an $n\times n$ matrix with i.i.d. copies of $\xi$ in its entries. Then for any $K > e$, there exist some $C > 0$, $0 < c < 1$, and $\alpha > 0$, dependent only on the pre-Gaussian variable $\xi$, such that
$$(1.9)\qquad P\left(s_n^{(q)}(A) > Kn^{-1/2}\right) \le \frac{C(\ln K)^\alpha}{K^\alpha} + c^n.$$
In particular, for any $\varepsilon > 0$, there exist some $K > 0$ and $n_0$ such that
$$(1.10)\qquad P\left(s_n^{(q)}(A) > Kn^{-1/2}\right) < \varepsilon$$


for all n ≥ n0 . The above theorem is an extension of Theorem 1.2. Note that we are not able to prove   −1/q <ε (1.11) P s(q) n (A) > Kn under the assumptions in Theorem 1.7. However, we strongly believe that the above inequality holds. We leave it as a conjecture. The remaining of the paper is devoted to the proof of these five theorems which give a good understanding the spectrum of pre-Gaussian random matrices in `q quasi-norm with 0 < q ≤ 1. We shall present the analysis in four separate sections after Preliminary section. 2. Preliminaries First of all, one can easily derive the following Lemma 2.1. For 0 < q < 1, the equation in (1.1) defines a quasi-norm on the space of m × N matrices. In particular, we have  q  q  q (q) (q) (q) s1 (A + B) ≤ s1 (A) + s1 (B) for any m × N matrices A and B. Moreover (q)

s1 (A) = max kaj kq

(2.1)

j

for 0 < q ≤ 1, where aj , j = 1, . . . , n, are the columns of matrix A. (q)

Proof. It is straightforward and not hard to show that s1 (A), q ≤ 1, defines a quasi-norm on matrices, by using the quasi-norm properties of kxkq , the `q quasinorm on vectors. To prove equation (2.1), on one hand, we have (2.2)

q

kAxkq ≤

N X

q

q

q

|xj |q · kaj kq ≤ kxkq max kaj kq j

j=1

for 0 < q ≤ 1, which implies (q)

s1 (A) ≤ max ||aj ||q .

(2.3)

j

On the other hand, by (1.1), we have (2.4)

(q)

s1 (A) =

sup x∈RN ,kxkq =1

kAxkq ≥ kAej kq = ||aj ||q

for every j, where ej is the j-th standard basis vector of RN , and then it follows that (2.5)

(q)

s1 (A) ≥ max ||aj ||q . j

Thus, combining with (2.3), we obtain the equation (2.1) for 0 < q ≤ 1 as desired.  Next we need the following elementary estimate. Mainly we need a linear bound for partial binomial expansion.
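A small numerical sanity check of (2.1) (a sketch, not part of the proof; helper names are hypothetical): for $0 < q \le 1$, random trial vectors never push $\|Ax\|_q/\|x\|_q$ above the maximal column $q$-norm.

```python
import random

def lqq(x, q):
    """q-th power of the l_q quasinorm: sum |x_i|^q."""
    return sum(abs(t) ** q for t in x)

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

random.seed(0)
q, m, n = 0.5, 6, 5
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
cols = [[A[i][j] for i in range(m)] for j in range(n)]
s1 = max(lqq(c, q) for c in cols) ** (1.0 / q)  # closed form from (2.1)

# Random directions never exceed the closed-form value (up to rounding),
# consistent with s_1^{(q)}(A) = max_j ||a_j||_q for q <= 1.
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in range(n)]
    ratio = lqq(matvec(A, x), q) ** (1.0 / q) / lqq(x, q) ** (1.0 / q)
    assert ratio <= s1 + 1e-9
```

The standard basis vectors attain the maximum, matching the lower bound (2.5).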


Lemma 2.2 (Linear bound for a partial binomial expansion). For every positive integer $n$,
$$\sum_{k=\lfloor n/2\rfloor + 1}^{n} \binom{n}{k} x^k (1-x)^{n-k} \le 8x$$
for all $x \in [0,1]$.

Proof. Let us start with an even integer, say $2n$. For every $x \in \left[\frac18, 1\right]$, we have
$$(2.6)\qquad \sum_{k=n+1}^{2n} \binom{2n}{k} x^k (1-x)^{2n-k} \le \sum_{k=0}^{2n} \binom{2n}{k} x^k (1-x)^{2n-k} = 1 \le 8x.$$
But for $x \in \left[0, \frac18\right]$, we let
$$f(x) := \sum_{k=n+1}^{2n} \binom{2n}{k} x^k (1-x)^{2n-k}.$$
By De Moivre–Stirling's formula (see, e.g., [7]) and, furthermore, the estimate in [13],
$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{\lambda_n}, \quad\text{where } \frac{1}{12n+1} < \lambda_n < \frac{1}{12n}.$$
We have
$$(2.7)\qquad \binom{2n}{n} = \frac{\sqrt{2\pi\cdot 2n}\left(\frac{2n}{e}\right)^{2n} e^{\lambda_{2n}}}{\left(\sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{\lambda_n}\right)^2} = \frac{4^n}{\sqrt{\pi n}}\, e^{\lambda_{2n} - 2\lambda_n} \le \frac{4^n}{\sqrt{\pi n}}.$$
Since $\binom{2n}{k} \le \binom{2n}{n}$ for $n+1 \le k \le 2n$,
$$(2.8)\qquad f(x) \le \sum_{k=n+1}^{2n} \binom{2n}{n} x^k (1-x)^{2n-k} \le \sum_{k=n+1}^{2n} \binom{2n}{n} x^k \le n\binom{2n}{n} x^{n+1}$$
for all $x \in [0,1]$. Using (2.7), we have
$$(2.9)\qquad f(x) \le 4^n \sqrt{\frac{n}{\pi}}\, x^{n+1}.$$
Letting $g(x) = 4^n \sqrt{\frac{n}{\pi}}\, x^n$, we have
$$\ln(g(x)) = n\ln(4x) + \frac12\ln n - \frac12\ln\pi \le -n\ln 2 + \frac12\ln n - \frac12\ln\pi \le 0$$
for $x \in [0, 1/8]$. Thus we have $f(x) \le x \le 8x$. A similar estimate holds for odd integers. This completes the proof. □

Remark 2.1. The coefficient on the right-hand side can be improved by Markov's inequality, but the estimate obtained by the analytic technique above is good enough for the purposes of this paper.

Next we review the smallest q-singular values. Without loss of generality, we consider $m \ge n$. Then the n-th q-singular value is the smallest q-singular value, which can also be expressed in another way.
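The bound in Lemma 2.2 is easy to probe numerically (a quick check, not part of the argument): for a range of $n$ and a grid of $x$ values, the upper-half binomial tail stays below $8x$.

```python
from math import comb

def upper_half_tail(n, x):
    """sum_{k=floor(n/2)+1}^{n} C(n,k) x^k (1-x)^(n-k)."""
    return sum(comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Sweep small n and a grid on [0, 1]; the tail never exceeds 8x.
for n in range(1, 16):
    for i in range(101):
        x = i / 100.0
        assert upper_half_tail(n, x) <= 8 * x + 1e-12
```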


Lemma 2.3. Let $A$ be an $m\times n$ matrix with $m \ge n$. Then the smallest q-singular value satisfies
$$(2.10)\qquad s_n^{(q)}(A) = \inf\left\{ \frac{\|Ax\|_q}{\|x\|_q} : x \in \mathbb{R}^n \text{ with } x \ne 0 \right\}.$$

Proof. By the definition,
$$(2.11)\qquad \begin{aligned} s_n^{(q)}(A) &= \inf\left\{ \sup\left\{ \tfrac{\|Ax\|_q}{\|x\|_q} : x \in V\setminus\{0\} \right\} : V \subseteq \mathbb{R}^n,\ \dim(V) \ge 1 \right\} \\ &\le \inf\left\{ \sup\left\{ \tfrac{\|Av\|_q}{\|v\|_q} : v \in V\setminus\{0\} \right\} : V = \mathrm{span}(x),\ x \in \mathbb{R}^n\setminus\{0\} \right\} \\ &= \inf\left\{ \tfrac{\|Ax\|_q}{\|x\|_q} : x \in \mathbb{R}^n \text{ with } x \ne 0 \right\}. \end{aligned}$$
We also know that the infimum can be achieved by considering the unit $S_q$-sphere in the finite-dimensional space, and so the claim follows. □

In particular, if $A$ is an invertible $n\times n$ matrix, we know
$$(2.12)\qquad s_n^{(q)}(A) = \inf\left\{ \frac{\|Ax\|_q}{\|x\|_q} : x \ne 0 \right\} = \frac{1}{\sup\left\{ \frac{\|A^{-1}x\|_q}{\|x\|_q} : x \ne 0 \right\}} = \frac{1}{s_1^{(q)}(A^{-1})}.$$
Based on this relation, the estimate of the largest q-singular value can be used to estimate the smallest q-singular value.

As we see, the q-singular value is defined by the $\ell_q$-quasinorm, as opposed to the $\ell_2$-norm, but using a proof similar to that of the relationship between the rank of a matrix and its smallest singular value in $\ell_2$, one has the following relationship between the rank of a matrix and its smallest q-singular value.

Lemma 2.4. For any positive integers $m$ and $n$, an $m\times n$ matrix $A$ is of full rank if and only if $s_{\min(m,n)}^{(q)}(A) > 0$.

Remark 1. One could also derive this lemma from the properties of singular values defined by the $\ell_2$-norm and the inequalities relating the $\ell_2$-norm and the $\ell_q$-quasinorm.

We shall need the following result to estimate the smallest q-singular values.

Lemma 2.5. Let $A$ be a matrix of size $m\times N$. Suppose that $m \ge N$. Then
$$s_{\min(m,N)}^{(q)}(A) \le \min_j \|a_j\|_q.$$

Proof. Choose $e_{j_0}$ to be a standard basis vector of $\mathbb{R}^N$ such that $\|Ae_{j_0}\|_q = \min_j \|a_j\|_q$ and use the definition of $s_{\min(m,N)}^{(q)}(A)$ for $m \ge N$. □
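The identity (2.12) can be checked numerically for a small invertible matrix (a sketch under the closed form of Lemma 2.1, that $s_1^{(q)}$ is the maximal column $q$-norm for $q\le 1$): random trial ratios $\|Ax\|_q/\|x\|_q$ never drop below $1/s_1^{(q)}(A^{-1})$.

```python
import random

def lqnorm(x, q):
    return sum(abs(t) ** q for t in x) ** (1.0 / q)

# A small invertible 2x2 matrix and its exact inverse.
A = [[2.0, 1.0], [1.0, 3.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

q = 0.5
# s_1^{(q)}(A^{-1}) = max column q-norm (Lemma 2.1), so by (2.12):
s1_inv = max(lqnorm([Ainv[0][j], Ainv[1][j]], q) for j in range(2))
s_min = 1.0 / s1_inv

random.seed(1)
for _ in range(5000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    assert lqnorm(Ax, q) / lqnorm(x, q) >= s_min - 1e-9
```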

The following generalization of Lemma 4.10 in [Pisier’1999, [12]] will be used in a later section.


Lemma 2.6. For $0 < q \le 1$, let $S_q := \{x \in \mathbb{R}^n : \|x\|_q = 1\}$ denote the unit sphere of $\mathbb{R}^n$ in the $\ell_q$ quasinorm. For any $\delta > 0$, there exists a finite set $U_q \subseteq S_q$ with
$$\min_{u\in U_q} \|x - u\|_q^q \le \delta \ \text{ for all } x \in S_q \quad\text{and}\quad \mathrm{card}(U_q) \le \left(1 + \frac{2}{\delta}\right)^{n/q}.$$

Proof. Let $(u_1, \ldots, u_k)$ be a set of $k$ points on the sphere $S_q$ such that $\|u_i - u_j\|_q^q > \delta$ for all $i \ne j$, with $k$ chosen as large as possible. Thus, it is clear that
$$\min_{1\le i\le k} \|x - u_i\|_q^q \le \delta \quad\text{for all } x \in S_q.$$
Let $B_q := \{x \in \mathbb{R}^n : \|x\|_q \le 1\}$ be the unit ball of $\mathbb{R}^n$ relative to the quasinorm $\|\cdot\|_q$. It is easy to see that the $(\delta/2)$-balls centered at the $u_i$,
$$u_i + \left(\frac{\delta}{2}\right)^{1/q} B_q, \quad 1\le i\le k,$$
are disjoint. Indeed, if $x$ belonged to the $(\delta/2)$-ball centered at $u_i$ as well as to the $(\delta/2)$-ball centered at $u_j$, we would have
$$\|u_i - u_j\|_q^q \le \|u_i - x\|_q^q + \|u_j - x\|_q^q \le \frac{\delta}{2} + \frac{\delta}{2} = \delta,$$
which is a contradiction. Besides, it is easy to see that
$$u_i + \left(\frac{\delta}{2}\right)^{1/q} B_q \subseteq \left(1 + \frac{\delta}{2}\right)^{1/q} B_q, \quad 1\le i\le k.$$
By comparison of volumes, we get
$$k\,\mathrm{Vol}\left(\left(\frac{\delta}{2}\right)^{1/q} B_q\right) = \sum_{i=1}^k \mathrm{Vol}\left(u_i + \left(\frac{\delta}{2}\right)^{1/q} B_q\right) \le \mathrm{Vol}\left(\left(1+\frac{\delta}{2}\right)^{1/q} B_q\right).$$
Then, by homogeneity of the volumes, we have
$$k\left(\frac{\delta}{2}\right)^{n/q} \mathrm{Vol}(B_q) \le \left(1 + \frac{\delta}{2}\right)^{n/q} \mathrm{Vol}(B_q),$$
which implies that $k \le \left(1 + \frac{2}{\delta}\right)^{n/q}$. This completes the proof. □

3. The Upper Tail Probability of the Largest q-singular Value

We begin with the following

Theorem 3.1 (Upper tail probability of the largest 1-singular value). Let $\xi$ be a pre-Gaussian variable normalized to have variance 1 and let $A$ be an $m\times m$ matrix with i.i.d. copies of $\xi$ in its entries. Then
$$(3.1)\qquad P\left(s_1^{(1)}(A) \ge Cm\right) \le \exp(-C'm)$$
for some $C, C' > 0$ dependent only on the pre-Gaussian variable $\xi$.

Proof. Since the $a_{ij}$ are i.i.d. copies of the pre-Gaussian variable $\xi$, $Ea_{ij} = 0$ and there exists some $\lambda > 0$ such that $E|a_{ij}|^k \le k!\lambda^k$ for all $k$. Without loss of generality, we may assume that $\lambda \ge 1$. With the variance $Ea_{ij}^2 = 1$, we have
$$E|a_{ij}|^k \le \frac{Ea_{ij}^2}{2}\, H^{k-2}\, k!$$


for $H := 2\lambda^3$ and all $k \ge 2$. By the Bernstein inequality (cf. Theorem 5.2 in [3]), we know
$$P\left(\left|\sum_{i=1}^m a_{ij}\right| \ge t\right) \le 2\exp\left(-\frac{t^2}{2(m + tH)}\right) = 2\exp\left(-\frac{t^2}{2(m + 2t\lambda^3)}\right)$$
for all $t > 0$ and for each $j = 1, \cdots, m$. In particular, when $t = Cm$,
$$(3.2)\qquad P\left(\left|\sum_{i=1}^m a_{ij}\right| \ge Cm\right) \le 2\exp\left(-\frac{C^2 m}{4C\lambda^3 + 2}\right).$$
Here a condition on $C$ will be determined later. On the other hand, by Lemma 2.1,
$$s_1^{(1)}(A) = \max_j \|a_j\|_1 = \sum_{i=1}^m |a_{ij_0}|$$
for some $j_0$. Furthermore, for any $t > 0$, by the union bound,
$$(3.3)\qquad P\left(\sum_{i=1}^m |a_{ij}| \ge t\right) \le \sum_{(\epsilon_1,\ldots,\epsilon_m)\in\{-1,1\}^m} P\left(\sum_{i=1}^m \epsilon_i a_{ij} \ge t\right).$$
But $-a_{ij}$ has the same pre-Gaussian properties as $a_{ij}$; precisely, $E(-a_{ij}) = 0$ and $E|-a_{ij}|^k \le k!\lambda^k$. Thus we have
$$(3.4)\qquad \begin{aligned} P\left(s_1^{(1)}(A) \ge Cm\right) &\le m\, P\left(\sum_{i=1}^m |a_{ij}| \ge Cm\right) \le 2^m m\, P\left(\sum_{i=1}^m a_{ij} \ge Cm\right) \\ &\le 2^m m\cdot 2\exp\left(-\frac{C^2 m}{4C\lambda^3+2}\right) \le \exp\left(-\left(\frac{C^2}{4C\lambda^3+2} - \ln 2 - 1\right) m\right), \end{aligned}$$
where in the last step we used $2m \le e^m$. To obtain an exponential decay for the probability $P(s_1^{(1)}(A) \ge Cm)$, we require that
$$(3.5)\qquad \frac{C^2}{4C\lambda^3+2} - \ln 2 - 1 > 0,$$
for which
$$C > 2\lambda^3 + 2\lambda^3\ln 2 + \sqrt{2 + 2\ln 2 + 4\lambda^6 + 8\lambda^6\ln 2 + 4\lambda^6\ln^2 2}.$$
That is, choosing $C' = \frac{C^2}{4C\lambda^3+2} - \ln 2 - 1$, we get (3.1). □
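A Monte-Carlo impression of Theorem 3.1 (an illustrative sketch with Gaussian entries, which are pre-Gaussian; the constant $C = 3$ is a hypothetical choice): each column's $\ell_1$-norm concentrates near $m\,E|\xi| = m\sqrt{2/\pi} \approx 0.8m$, so $s_1^{(1)}(A) = \max_j \|a_j\|_1$ stays far below $Cm$.

```python
import random

random.seed(2)
m, trials, C = 100, 50, 3.0
exceed = 0
for _ in range(trials):
    # s_1^{(1)}(A) is the largest column l1-norm (Lemma 2.1 with q = 1).
    s1 = max(sum(abs(random.gauss(0, 1)) for _ in range(m)) for _ in range(m))
    if s1 >= C * m:
        exceed += 1
print(exceed)  # 0: the upper tail event never occurs in these trials
```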

The previous theorem allows us to estimate the largest q-singular value for $0 < q < 1$. The estimate follows easily from Theorem 3.1, but it is one of the tail probability estimates we wanted to obtain, so let us state it as a theorem; this is Theorem 1.3.

Proof of Theorem 1.3. By Hölder's inequality, we have $\|a_j\|_q \le m^{\frac1q - 1}\|a_j\|_1$ for $0 < q < 1$. It follows from Lemma 2.1 that
$$(3.6)\qquad s_1^{(q)}(A) = \max_j \|a_j\|_q \le m^{\frac1q - 1}\, s_1^{(1)}(A).$$
From (3.1), we have
$$(3.7)\qquad P\left(s_1^{(q)}(A) \ge Cm^{\frac1q}\right) \le P\left(m^{\frac1q - 1}\, s_1^{(1)}(A) \ge Cm^{\frac1q}\right) = P\left(s_1^{(1)}(A) \ge Cm\right) \le \exp(-C'm)$$
for some $C, C' > 0$. □
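The Hölder step in (3.6) is straightforward to verify numerically (a quick sketch): for $0 < q < 1$ and vectors in $\mathbb{R}^m$, $\|a\|_q \le m^{1/q-1}\|a\|_1$, with equality for constant vectors.

```python
import random

def norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(3)
q, m = 0.5, 20
scale = m ** (1.0 / q - 1.0)  # m^{1/q - 1}

for _ in range(1000):
    a = [random.uniform(-1, 1) for _ in range(m)]
    assert norm(a, q) <= scale * norm(a, 1) + 1e-9

# Equality is attained by a constant vector.
ones = [1.0] * m
assert abs(norm(ones, q) - scale * norm(ones, 1)) < 1e-9
```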

4. The Lower Tail Probability of the Largest q-singular Value

Let us use the result in Lemma 2.2 to give estimates on the lower tail probabilities of the largest q-singular value.

Lemma 4.1. Suppose $\xi_1, \xi_2, \cdots, \xi_n$ are i.i.d. copies of a random variable $\xi$. Then for any $\varepsilon > 0$,
$$(4.1)\qquad P\left(\sum_{i=1}^n |\xi_i| \le \frac{n\varepsilon}{2}\right) \le 8P(|\xi| \le \varepsilon).$$

Proof. First, we have the relation on probability events that
$$(4.2)\qquad \left\{(\xi_1,\ldots,\xi_n) : \sum_{i=1}^n |\xi_i| \le \frac{n\varepsilon}{2}\right\}$$
is contained in
$$(4.3)\qquad \bigcup_{k=\lfloor n/2\rfloor+1}^{n}\ \bigcup_{\{i_1,\ldots,i_k\}\subset\{1,\ldots,n\}} \left\{(\xi_1,\ldots,\xi_n) : |\xi_{i_1}| \le \varepsilon, \ldots, |\xi_{i_k}| \le \varepsilon,\ |\xi_{i_{k+1}}| > \varepsilon, \ldots, |\xi_{i_n}| > \varepsilon\right\},$$
where $\{i_1, i_2, \ldots, i_k\}$ is a subset of $\{1, 2, \ldots, n\}$ and $\{i_{k+1}, \ldots, i_n\}$ is its complement; let us denote the set in (4.3) by $E$. Let $x = P(|\xi_1| \le \varepsilon)$. Then, by the union of these disjoint events,
$$(4.4)\qquad P(E) = \sum_{k=\lfloor n/2\rfloor+1}^{n} \binom{n}{k} x^k (1-x)^{n-k},$$
and applying Lemma 2.2, we have
$$(4.5)\qquad P(E) \le 8x = 8P(|\xi_1| \le \varepsilon).$$
Since the event (4.2) is contained in the event (4.3), we have
$$(4.6)\qquad P\left(\sum_{i=1}^n |\xi_i| \le \frac{n\varepsilon}{2}\right) \le P(E) \le 8P(|\xi_1| \le \varepsilon). \qquad\Box$$

We start with a lower tail probability for the 1-singular values.

Theorem 4.1 (Lower tail probability of the largest 1-singular value). Let $\xi$ be a pre-Gaussian variable normalized to have variance 1 and let $A$ be an $m\times m$ matrix with i.i.d. copies of $\xi$ in its entries. Then there exists a constant $K > 0$ such that
$$(4.7)\qquad P\left(s_1^{(1)}(A) \le Km\right) \le c^m$$


for some $0 < c < 1$, where $K$ depends only on the pre-Gaussian variable $\xi$.

Proof. Since $a_{ij}$ has variance 1, there exist $\delta > 0$ and $0 \le \beta < 1$ such that
$$(4.8)\qquad P(|a_{ij}| \le \delta) = \beta.$$
Let $B_j$ be the number of variables in $\{a_{ij}\}_{i=1}^m$ that are less than or equal to $\delta$ in absolute value. If $\sum_{i=1}^m |a_{ij}| \le \delta\lambda m$ for $0 < \lambda < 1$, then $B_j \ge (1-\lambda)m$, because otherwise $\sum_{i=1}^m |a_{ij}| > \delta\lambda m$. It follows that
$$(4.9)\qquad P\left(\sum_{i=1}^m |a_{ij}| \le \delta\lambda m\right) \le P\left(B_j \ge (1-\lambda)m\right).$$
By Markov's inequality,
$$(4.10)\qquad P\left(B_j \ge (1-\lambda)m\right) \le \frac{EB_j}{(1-\lambda)m},$$
but $B_j$ follows a binomial distribution of $m$ independent experiments, each of which yields success with probability $\beta$; therefore
$$(4.11)\qquad P\left(B_j \ge (1-\lambda)m\right) \le \frac{\beta}{1-\lambda}.$$
By choosing a suitable $\lambda$, we can make $0 < \frac{\beta}{1-\lambda} < 1$. Thus
$$(4.12)\qquad P\left(\sum_{i=1}^m |a_{ij}| \le \delta\lambda m\right) \le c$$
for some $0 < c < 1$. It follows that
$$(4.13)\qquad P\left(s_1^{(1)}(A) \le \lambda\delta m\right) = P\left(\max_{1\le j\le m}\sum_{i=1}^m |a_{ij}| \le \lambda\delta m\right) = \prod_{j=1}^m P\left(\sum_{i=1}^m |a_{ij}| \le \lambda\delta m\right) \le c^m.$$
Thus, letting $K = \lambda\delta$, we obtain (4.7). □

For general $0 < q < 1$, we have Theorem 1.4.

Proof of Theorem 1.4. We can use the same method as in the proof of Theorem 4.1. Since $a_{ij}$ has nonzero variance, there exist $\delta > 0$ and $0 \le \beta < 1$ such that
$$(4.14)\qquad P(|a_{ij}|^q \le \delta) = \beta.$$
Then, by Lemma 4.1 and substituting $|a_{ij}|^q$ for $a_{ij}$ in the proof of Theorem 4.1,
$$(4.15)\qquad P\left(s_1^{(q)}(A) \le (\lambda\delta)^{\frac1q} m^{\frac1q}\right) = P\left(\max_{1\le j\le m}\sum_{i=1}^m |a_{ij}|^q \le \lambda\delta m\right) = \prod_{j=1}^m P\left(\sum_{i=1}^m |a_{ij}|^q \le \lambda\delta m\right) \le c^m$$
for some $0 < c < 1$. Thus, letting $K = (\lambda\delta)^{1/q}$, (1.4) follows. □

Remark 2. If one uses the quasinorm comparison inequality $s_1^{(q)}(A) \ge s_1^{(1)}(A)$ for $0 < q \le 1$, one can get
$$(4.16)\qquad P\left(s_1^{(q)}(A) \le Km\right) \le c^m$$


for $0 < q \le 1$, but with a loss of the estimate on $P\left(s_1^{(q)}(A) \le Km^{\frac1q}\right)$. Since the bounded moment growth condition for pre-Gaussian variables is not needed in the proof of Theorem 4.1, the above proofs also show that the theorem holds for any random variable with nonzero variance. Therefore, more generally, we have

Theorem 4.2. Let $\xi$ be a random variable with nonzero variance and let $A$ be an $m\times m$ matrix with i.i.d. copies of $\xi$ in its entries. Then there exists a constant $K > 0$ such that
$$(4.17)\qquad P\left(s_1^{(q)}(A) \le Km^{\frac1q}\right) \le c^m$$
for some $0 < c < 1$, where $K$ depends only on the random variable $\xi$.
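A quick Monte-Carlo impression of Theorem 4.2 (an illustrative sketch with uniform entries, which have nonzero variance; $q = 1/2$ and the threshold constant are hypothetical choices): each column satisfies $\sum_i |a_{ij}|^q \approx m\,E|\xi|^{1/2} \approx 0.67m$, so the lower tail event $s_1^{(q)}(A) \le Km^{1/q}$ with $K^q = 0.05$ is never observed.

```python
import random

random.seed(4)
q, m, trials = 0.5, 100, 50
threshold = 0.05 * m  # compare column sums of |a_ij|^q against (K^q) * m
below = 0
for _ in range(trials):
    # s_1^{(q)}(A)^q is the largest column sum of |a_ij|^q (Lemma 2.1).
    s1q = max(sum(abs(random.uniform(-1, 1)) ** q for _ in range(m))
              for _ in range(m))
    if s1q <= threshold:
        below += 1
print(below)  # 0: the lower tail event never occurs in these trials
```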

5. The Lower Tail Probability of the Smallest q-singular Value

In this section, we first study the probability estimates of the smallest q-singular value of rectangular random matrices with $m > n$. Then we give some estimates for square random matrices.

5.1. The Tall Random Matrix Case. In this subsection, we assume that $n \le \lambda m$ with $\lambda \in (0,1)$ and consider the smallest q-singular value of random matrices of size $m\times n$.

Theorem 5.1. Given any $0 < q \le 1$, let $\xi$ be a pre-Gaussian random variable with variance 1 and let $A$ be an $m\times n$ matrix with i.i.d. copies of $\xi$ in its entries. Then there exist some $\gamma > 0$, $b > 0$ and $\nu \in (0,1)$, dependent on the pre-Gaussian random variable $\xi$, such that
$$(5.1)\qquad P\left(s_n^{(q)}(A) < \gamma m^{1/q}\right) < e^{-bm}$$
when $n \le \nu m$.

To prove this result, we need to establish a few lemmas.

Lemma 5.1. Fix any $0 < q \le 1$. For any $\xi_1, \cdots, \xi_m$ that are i.i.d. copies of a pre-Gaussian variable with nonzero variance and any $c \in (0,1)$, there exists $\lambda \in (0,1)$, not depending on $m$, such that
$$(5.2)\qquad P\left(\sum_{k=1}^m |\xi_k|^q < \lambda m\right) \le c^m.$$

Proof. For any $\xi_1, \cdots, \xi_m$ that are i.i.d. copies of a pre-Gaussian variable with nonzero variance, we know that there exists some $\delta > 0$ such that
$$(5.3)\qquad \varepsilon_0 := P(|\xi_k| \le \delta) < 1$$


for $k = 1, 2, \cdots, m$, because otherwise the pre-Gaussian variable would have zero variance. Then, using the Riemann–Stieltjes integral for the expectation, we have
$$E\exp\left(-\frac{|\xi_k|^q}{\lambda}\right) = \int_0^\infty \exp\left(-\frac{t^q}{\lambda}\right) dP(|\xi_k| \le t) \le \int_0^\delta dP(|\xi_k| \le t) + \int_\delta^\infty \exp\left(-\frac{t^q}{\lambda}\right) dP(|\xi_k| \le t) = \varepsilon_0 + \int_\delta^\infty \exp\left(-\frac{t^q}{\lambda}\right) dP(|\xi_k| \le t).$$
Choose $\lambda > 0$ small enough that
$$\exp\left(-\frac{t^q}{\lambda}\right) \le \exp\left(-\frac{\delta^q}{\lambda}\right) < \frac{\varepsilon_0}{2(1-\varepsilon_0)}$$
for $t \ge \delta$. Therefore, it follows that
$$E\exp\left(-\frac{|\xi_k|^q}{\lambda}\right) \le \varepsilon_0 + \frac{\varepsilon_0}{2(1-\varepsilon_0)}\int_\delta^\infty dP(|\xi_k| \le t) \le \varepsilon_0 + \frac{\varepsilon_0}{2} = \frac{3\varepsilon_0}{2}.$$
Finally, applying Markov's inequality, we obtain
$$P\left(\sum_{k=1}^m |\xi_k|^q < \lambda m\right) = P\left(\exp\left(m - \frac1\lambda\sum_{k=1}^m |\xi_k|^q\right) > 1\right) \le E\exp\left(m - \frac1\lambda\sum_{k=1}^m |\xi_k|^q\right) = e^m \prod_{k=1}^m E\exp\left(-\frac{|\xi_k|^q}{\lambda}\right) \le \left(\frac{3e\varepsilon_0}{2}\right)^m.$$
For any $c \in (0,1)$, we choose $\varepsilon_0$ such that $3e\varepsilon_0/2 = c$. This completes the proof. □
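Lemma 5.1 can be probed empirically (an illustrative sketch with Gaussian entries and hypothetical parameters $q = 1/2$, $\lambda = 0.1$): the sums $\sum_k |\xi_k|^q$ concentrate near $m\,E|\xi|^{1/2} \approx 0.82m$, far above $\lambda m$.

```python
import random

random.seed(5)
q, lam, m, trials = 0.5, 0.1, 200, 200
below = 0
for _ in range(trials):
    s = sum(abs(random.gauss(0, 1)) ** q for _ in range(m))
    if s < lam * m:
        below += 1
print(below)  # 0: the event of Lemma 5.1 never occurs in these trials
```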



The following lemma is a property of linear combinations of pre-Gaussian variables, which allows us to obtain the probabilistic estimate on $\|Av\|_q$ for the pre-Gaussian ensemble $A$.

Lemma 5.2 (Linear combination of pre-Gaussian variables). Let $a_{ij}$, $i = 1, 2, \cdots, m$ and $j = 1, 2, \ldots, n$, be pre-Gaussian variables and let $\eta_i = \sum_{j=1}^n a_{ij} x_j$. Then the $\eta_i$ are pre-Gaussian variables for $i = 1, 2, \ldots, m$.

Proof. Since the $a_{ij}$ are pre-Gaussian variables, $Ea_{ij} = 0$ and there are constants $\lambda_{ij} > 0$ such that $E|a_{ij}|^k \le k!\lambda_{ij}^k$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$. It is easy to see that
$$E\eta_i = \sum_{j=1}^n x_j E a_{ij} = 0.$$
Letting $\|x\|_1 = \sum_{j=1}^n |x_j|$, we use convexity to get
$$E|\eta_i|^k \le E\left(\sum_{j=1}^n \frac{|x_j|}{\|x\|_1}\, \|x\|_1 |a_{ij}|\right)^k \le \|x\|_1^k \sum_{j=1}^n \frac{|x_j|}{\|x\|_1}\, E\left(|a_{ij}|^k\right) \le k!\, \|x\|_1^k \left(\max_j\{\lambda_{ij}\}\right)^k$$
for all integers $k \ge 1$. Thus, $\eta_i$ is a pre-Gaussian random variable. □
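The moment growth in Lemma 5.2 can be sanity-checked empirically (a sketch with Gaussian $a_{ij}$, for which $E|a|^k \le k!$, i.e. $\lambda_{ij} = 1$ works, and a hypothetical coefficient vector $x$): empirical moments of $\eta = \sum_j x_j a_j$ stay below $k!\,\|x\|_1^k$.

```python
import math
import random

random.seed(6)
x = [0.5, -0.3, 0.2]            # fixed coefficient vector
l1 = sum(abs(t) for t in x)     # ||x||_1
N = 20000

# Empirical k-th absolute moments of eta = sum_j x_j a_j, a_j ~ N(0,1).
samples = [sum(c * random.gauss(0, 1) for c in x) for _ in range(N)]
for k in range(1, 7):
    emp = sum(abs(s) ** k for s in samples) / N
    bound = math.factorial(k) * l1 ** k  # k! ||x||_1^k with lambda = 1
    assert emp <= bound
```

The margin is large here because $\eta$ is Gaussian with standard deviation $\|x\|_2 \le \|x\|_1$.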

Combining the two lemmas above, we obtain the following

Lemma 5.3. Given any $0 < q \le 1$, let $A$ be an $m\times n$ pre-Gaussian matrix. Then for any $c \in (0,1)$, there exists $\lambda \in (0,1)$ such that
$$(5.4)\qquad P\left(\|Av\|_q < \lambda^{1/q} m^{1/q}\right) \le c^m$$
for each $v \in S_q$, where $S_q$ is the $(n-1)$-dimensional unit sphere in the $\ell_q$-quasinorm.

We are now ready to prove Theorem 5.1.

Proof of Theorem 5.1. By Lemma 2.6, for any $\delta > 0$ there exists a $\delta$-net $U_q$ in the unit sphere $S_q$ such that
$$\min_{u\in U_q} \|x-u\|_q^q \le \delta \ \text{ for all } x \in S_q \quad\text{and}\quad \mathrm{card}(U_q) \le \left(1+\frac2\delta\right)^{n/q}.$$
By Lemma 5.3 and the union bound over $U_q$, we have
$$(5.5)\qquad P\left(\|Av\|_q^q < \lambda m \text{ for some } v \in U_q\right) \le \left(1+\frac2\delta\right)^{n/q} c^m.$$
Since the event $s_n^{(q)}(A) < \gamma m^{1/q}$ implies $\|Av'\|_q < 2\gamma m^{1/q}$ for some $v' \in S_q$,
$$P\left(s_n^{(q)}(A) < \gamma m^{1/q}\right) \le P\left(\|Av\|_q < 2\gamma m^{1/q} \text{ for some } v \in S_q\right).$$
If $v \in U_q$, we use (5.5) with $2\gamma < \lambda^{1/q}$ to get
$$(5.6)\qquad P\left(s_n^{(q)}(A) < \gamma m^{1/q}\right) \le \left(1+\frac2\delta\right)^{n/q} c^m.$$
If $v \notin U_q$, we use Theorem 1.3 to get
$$P\left(\|Av\|_q < 2\gamma m^{1/q} \text{ with } v \in S_q\setminus U_q\right) \le e^{-c_1 m} + P\left(s_1^{(q)}(A) \le Km^{1/q} \text{ and } \|Av\|_q < 2\gamma m^{1/q} \text{ with } v \in S_q\setminus U_q\right).$$
When $v \in S_q\setminus U_q$ in the event that $s_1^{(q)}(A) \le Km^{1/q}$ and $\|Av\|_q < 2\gamma m^{1/q}$, there exists a $u \in U_q$ within q-distance $\delta$ such that
$$\|Au\|_q^q \le \|A(v-u)\|_q^q + \|Av\|_q^q \le \left(s_1^{(q)}(A)\right)^q \|v-u\|_q^q + \|Av\|_q^q \le K^q m\delta + (2\gamma)^q m < \lambda m$$
if $\delta < \frac{\lambda - (2\gamma)^q}{K^q}$. We can use (5.5) again to conclude
$$(5.7)\qquad P\left(s_1^{(q)}(A) \le Km^{1/q} \text{ and } \|Av\|_q < 2\gamma m^{1/q} \text{ for some } v \in S_q\setminus U_q\right) \le \left(1+\frac2\delta\right)^{n/q} c^m.$$
If we choose $\nu$ and $c$ small enough in Lemma 5.1 with $n = \nu m$ such that
$$c_2 := \left(1+\frac2\delta\right)^{\nu/q} c < 1,$$
then the probabilities above are bounded by $c_2^m$, and we complete the proof by choosing $b > 0$ such that $e^{-c_1 m} + c_2^m \le e^{-bm}$. □

5.2. The Square Random Matrix Case. Now let us consider square random matrices with pre-Gaussian entries.

Theorem 5.2. Given any $0 < q \le 1$, let $\xi$ be a pre-Gaussian random variable with variance 1 and let $A$ be an $n\times n$ matrix with i.i.d. copies of $\xi$ in its entries. Then for any $\varepsilon > 0$, there exist some $K > 0$ and $c > 0$, dependent on $\varepsilon$ and the pre-Gaussian random variable $\xi$, such that
$$(5.8)\qquad P\left(s_n^{(q)}(A) < \varepsilon n^{-\frac1q}\right) < C\varepsilon + C\alpha^n + P\left(\|A\| > Kn^{1/2}\right),$$
where $\alpha \in (0,1)$ and $C > 0$ depend only on the pre-Gaussian variable and $K$.

To prove the above theorem, we generalize the ideas in [Rudelson and Vershynin'2008, [15]] to the setting of the $\ell_q$ quasinorm. We first decompose $S_q^{n-1}$ into the set of compressible vectors and the set of incompressible vectors. The concepts of compressible and incompressible vectors in $S_2^{n-1}$ were introduced in [15]; see also [Tao and Vu'2009, [27]]. We shall use a generalized version of these concepts. Recall that $\|x\|_0$ denotes the number of nonzero entries of a vector $x \in \mathbb{R}^n$.

Definition 5.1 (Compressible and incompressible vectors in $S_q^{n-1}$). Fix $\rho, \lambda \in (0,1)$. Let $\mathrm{Comp}_q(\lambda,\rho)$ be the set of vectors $v \in S_q^{n-1}$ such that there is a vector $v'$ with $\|v'\|_0 \le \lambda n$ satisfying $\|v - v'\|_q \le \rho$. The set of incompressible vectors is defined as
$$(5.9)\qquad \mathrm{Incomp}_q(\lambda,\rho) := S_q^{n-1} \setminus \mathrm{Comp}_q(\lambda,\rho).$$

Now, using the decomposition in Definition 5.1, we have
$$(5.10)\qquad P\left(s_n^{(q)}(A) < \varepsilon n^{-\frac1q}\right) \le P\left(\inf_{v\in\mathrm{Comp}_q(\lambda,\rho)} \|Av\|_q < \varepsilon n^{-\frac1q}\right) + P\left(\inf_{v\in\mathrm{Incomp}_q(\lambda,\rho)} \|Av\|_q < \varepsilon n^{-\frac1q}\right).$$
In the following, we consider each of the two terms on the right-hand side. For the first term, on compressible vectors, we can apply Lemma 5.3, since
$$(5.11)\qquad P\left(\inf_{v\in\mathrm{Comp}_q(\lambda,\rho)} \|Av\|_q < \varepsilon n^{-\frac1q}\right) \le P\left(\inf_{v\in\mathrm{Comp}_q(\lambda,\rho)} \|Av\|_q < \nu n^{\frac1q}\right),$$
to conclude that it actually decays exponentially for $n > 1$, where $\nu = \lambda^{1/q}$ as in Lemma 5.3. For incompressible vectors, however, we first consider $\mathrm{dist}_q(X_j, H_j)$, which denotes the distance in the $\ell_q$ quasinorm between the column $X_j$ of an $n\times n$ random matrix $A$ and the span of the other columns, $H_j := \mathrm{span}(X_1, \cdots, X_{j-1}, X_{j+1}, \ldots, X_n)$. We


generalize a result on the probability estimate of the distance in `2 norm in [15] to the `q -quasinorm setting. This allows us to transform the probabilistic estimate on kAvkq for v ∈ Incompq (λ, ρ) to the probabilistic estimate on the average of the distances distq (Xj , Hj ), j = 1, 2, . . . , n. Lemma 5.4. Let A be an n × n random matrix with columns X1 , . . ., Xn , and Hj := span (X1 , · · · , Xj−1 , Xj+1 , · · · , Xn ) . Then for any ρ, λ ∈ (0, 1) and ε > 0, one has   n 1 1 X (5.12) P inf kAvkq < ερn− q < P (distq (Xj , Hj ) < ε) , λn j=1 v∈Incompq (λ,ρ) in which distq is the distance defined by the `q -quasinorm. Proof. For every v ∈ Incompq (λ, ρ), by Definition 5.1, there are at least λn com1 ponents of v, vj , satisfying |vj | ≥ ρn− q , because otherwise, v would be within `q -distance ρ of the sparse vector, the restriction of v on the components vj satis1 fying |vj | ≥ ρn− q with nsparsity less than o λn, and thus v would be compressible. 1

Thus if we let I1 (v) := j : |vj | ≥ ρn− q , then |I1 (v)| ≥ λn.

Next, let I2 (A) := {j : distq (Xj , Hj ) < ε} and E be the event that the cardinality of I2 (A), |I2 (A)| ≥ λn. Applying Markov’s inequality, we have P (E)

= ≤ = =

P ({I2 (A) : |I2 (A, ε)| ≥ λn}) 1 E |I2 (A)| λn 1 E {j : distq (Xj , Hj ) < ε} λn n 1 X P (distq (Xj , Hj ) < ε) . λn j=1

Since E c is the event that |{j : distq (Xj , Hj ) ≥ ε}| > (1 − λ) n for random matrix A, if E c occurs, then for every v ∈ Incompq (λ, ρ), |I1 (v)| + |I2 (A)| > λn + (1 − λ) n = n. Hence there is some j0 ∈ I1 (v) ∩ I2 (A). So we have 1

kAvkq ≥ distq (Av, Hj0 ) = distq (vj0 Xj0 , Hj0 ) = |vj0 | distq (Xj0 , Hj0 ) ≥ ερn− q . 1

If the eventskAvkq < ερn− q occurs then E also occurs. Thus   n 1 X − q1 P inf kAvkq < ερn ≤ P (E) ≤ P (distq (Xj , Hj ) < ε) . λn j=1 v∈Incompq (λ,ρ) These complete the proof.



Note that dist_q(X_j, H_j) ≥ dist(X_j, H_j), because ‖·‖_q ≥ ‖·‖_2 for 0 < q ≤ 1. Thus we can take advantage of the estimate on P(dist(X_j, H_j) < ε) given in [15] to obtain an estimate on P(dist_q(X_j, H_j) < ε).
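The two elementary norm comparisons used in this argument — ‖x‖_q ≥ ‖x‖_2 for 0 < q ≤ 1 (which gives dist_q ≥ dist) and the Hölder-type upper bound ‖x‖_q ≤ n^{1/q − 1/2} ‖x‖_2 that appears in the proof of Lemma 6.1 — are easy to verify numerically. A minimal sketch (the dimension, q, and Gaussian test vectors are arbitrary illustrative choices):

```python
import math
import random

def lq_quasinorm(x, q):
    """l_q quasinorm (sum |x_i|^q)^(1/q), for 0 < q <= 1."""
    return sum(abs(t) ** q for t in x) ** (1.0 / q)

def l2_norm(x):
    return math.sqrt(sum(t * t for t in x))

rng = random.Random(1)
n, q = 40, 0.5
for _ in range(100):
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    lq, l2 = lq_quasinorm(x, q), l2_norm(x)
    # lower comparison: ||x||_q >= ||x||_2 for q <= 1
    assert lq >= l2 - 1e-9
    # upper comparison: ||x||_q <= n^(1/q - 1/2) * ||x||_2 (Holder)
    assert lq <= n ** (1.0 / q - 0.5) * l2 + 1e-9
```

Since every candidate difference vector x − h with h in the subspace satisfies ‖x − h‖_q ≥ ‖x − h‖_2, taking the infimum over h gives dist_q(X_j, H_j) ≥ dist(X_j, H_j), exactly as used above.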


Theorem 5.3 (Distance bound (cf. [15])). Let A be a random matrix whose entries are independent variables with variance at least 1 and fourth moment bounded by B, and let K ≥ 1. Then for every ε > 0,

(5.13)    P( dist(X_j, H_j) < ε and ‖A‖ ≤ Kn^{1/2} ) ≤ C(ε + α^n),

where α ∈ (0, 1) and C > 0 depend only on B and K.

The above theorem implies that

(5.14)    P( dist_q(X_j, H_j) < ε ) ≤ P( dist(X_j, H_j) < ε ) ≤ C(ε + α^n) + P( ‖A‖ > Kn^{1/2} ).

Combining (5.10) and applying Lemma 5.4, we now reach the desired inequality in Theorem 5.2. Furthermore, since A is pre-Gaussian, a standard concentration bound shows that for every ε > 0 there exists some K > 0 such that P( ‖A‖ > Kn^{1/2} ) < ε. Thus, we have proved Theorem 1.6.

6. The Upper Tail Probability of the Smallest q-singular Value

In this section, we continue the study of the upper tail probability of the smallest q-singular value of an n × n pre-Gaussian matrix; our main goal is to prove Theorem 1.7. To do so, we need some preparation. Let X_j be the j-th column vector of A and let π_j be the orthogonal projection onto the subspace H_j := span(X_1, …, X_{j−1}, X_{j+1}, …, X_n). We first have

Lemma 6.1. For every α > 0, one has

(6.1)    P( ‖X_j − π_j(X_j)‖_q ≥ αn^{1/q − 1/2} ) ≤ c_1 e^{−c_2 α²} + c_3 n^{−c_4}

for each j = 1, 2, …, n, where c_1, c_2, c_3, c_4 > 0 are constants independent of j, n, and q.

Proof. Without loss of generality, assume j = 1. Write (a_1, a_2, …, a_n) := X_1 − π_1(X_1). Applying the Berry–Esseen theorem (see for instance [21]), we know that

(6.2)    P( ‖X_j − π_j(X_j)‖_2 ≥ α ) = P( |Σ_{i=1}^n a_i ξ_i| / (Σ_{i=1}^n a_i²)^{1/2} ≥ α ) = P(|g| ≥ α) + O(n^{−c})

for some c > 0, where g is a standard normal random variable. By the Hölder inequality,

‖X_j − π_j(X_j)‖_q ≤ n^{(1−q)/q} ‖X_j − π_j(X_j)‖_1 ≤ n^{1/q − 1/2} ‖X_j − π_j(X_j)‖_2.

It follows that

P( ‖X_j − π_j(X_j)‖_q ≥ αn^{1/q − 1/2} ) ≤ P( n^{1/q − 1/2} ‖X_j − π_j(X_j)‖_2 ≥ αn^{1/q − 1/2} ) = P( ‖X_j − π_j(X_j)‖_2 ≥ α ).

Therefore, it follows from (6.2) that

P( ‖X_j − π_j(X_j)‖_q ≥ αn^{1/q − 1/2} ) ≤ P(|g| ≥ α) + O(n^{−c})
    = (2/√(2π)) ∫_α^∞ e^{−x²/2} dx + O(n^{−c})
    ≤ c_1 e^{−c_2 α²} + c_3 n^{−c_4}

for some positive constants c1 , c2 , c3 , c4 .




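The Berry–Esseen step (6.2) says that the normalized linear form Σ a_i ξ_i / (Σ a_i²)^{1/2} has nearly Gaussian tails. This can be illustrated by a small Monte Carlo experiment; Rademacher (±1) entries and a flat coefficient vector are illustrative choices here, not the paper's pre-Gaussian model:

```python
import math
import random

def tail_prob_sum(a, alpha, trials=20000, seed=2):
    """Monte Carlo estimate of P(|sum a_i xi_i| / ||a||_2 >= alpha)
    with xi_i independent Rademacher (+/-1) variables."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(t * t for t in a))
    hits = 0
    for _ in range(trials):
        s = sum(t * (1 if rng.random() < 0.5 else -1) for t in a)
        if abs(s) / norm >= alpha:
            hits += 1
    return hits / trials

def gauss_tail(alpha):
    """P(|g| >= alpha) for a standard normal g, via the complementary error function."""
    return math.erfc(alpha / math.sqrt(2.0))

a = [1.0] * 100          # flat coefficient vector, n = 100
alpha = 1.0
est = tail_prob_sum(a, alpha)
ref = gauss_tail(alpha)  # P(|g| >= 1) is about 0.317
# agreement up to Monte Carlo noise plus the Berry-Esseen error term
assert abs(est - ref) < 0.1
```

For heavier-tailed or lopsided coefficient vectors the Berry–Esseen error term O(n^{−c}) is what controls the discrepancy, which is exactly the role it plays in (6.2).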
We are now ready to prove Theorem 1.7.

Proof of Theorem 1.7. From Section 5.2 and by Lemma 2.4, we know that the n × n pre-Gaussian matrix A is invertible with very high probability. Thus, we have

(6.3)    P( s_n^{(q)}(A) ≤ (αt/ε) · n^{−1/q} ) ≥ P( ‖v‖_q ≤ α and ‖A^{−1}v‖_q ≥ (ε/t) · n^{1/q} for some v ∈ R^n ).

Thus it suffices to show that

(6.4)    P( ‖v‖_q ≤ α and ‖A^{−1}v‖_q ≥ (ε/t) · n^{1/q} ) ≥ 1 − ε

for some vector v ≠ 0. Using the result established in [Rudelson and Vershynin'2008, [14]], we can easily obtain the desired probability of the event that ‖A^{−1}v‖_q ≤ (ε/t) · n^{1/q} occurs. Indeed, since ‖A^{−1}v‖_q ≥ ‖A^{−1}v‖_2, we have

(6.5)    P( ‖A^{−1}v‖_q ≤ (ε/t) · n^{1/q} ) ≤ P( ‖A^{−1}v‖_2 ≤ (ε/t) · n^{1/q} ) = P( ‖A^{−1}v‖_2² ≤ (ε/t)² · n^{2/q} ) ≤ 2p(4ε, t, n^{2/q}),

where p(ε, t, n) := c_5 (ε + e^{−c_6 t²} + e^{−c_7 n}) for some positive constants c_5, c_6, c_7.

Next, let us choose v = X_1 − π_1(X_1). Lemma 6.1 together with the estimate in (6.5) yields (6.4). Indeed, letting u = t = √(ln M) with M > 1 and ε = 1/M, we have

(6.6)    P( s_n^{(q)}(A) > M ln M · n^{−1/2} ) ≤ C/M^α + c^n

for some C > 0, 0 < c < 1, and α > 0. Then, choosing K := M ln M, we have

(6.7)    P( s_n^{(q)}(A) > Kn^{−1/2} ) ≤ C (ln M)^α / K^α + c^n ≤ C (ln(M ln M))^α / K^α + c^n = C (ln K)^α / K^α + c^n,

provided M ≥ e, which requires K > e. This completes the proof.

Acknowledgment

The authors would like to thank the referees for useful comments.

References

[1] Z. D. Bai, J. W. Silverstein, and Y. Q. Yin. A note on the largest eigenvalue of a large dimensional sample covariance matrix. Journal of Multivariate Analysis, 26(2):166–168, 1988.
[2] G. Bennett, V. Goodman, and C. M. Newman. Norms of random matrices. Pacific Journal of Mathematics, 59(2):359–365, 1975.
[3] V. V. Buldygin and Yu. V. Kozachenko. Metric Characterization of Random Variables and Random Processes, volume 188. American Mathematical Society, 2000.
[4] R. Chartrand and V. Staneva. Restricted isometry properties and nonconvex compressive sensing. Inverse Problems, 24:035020, 2008.
[5] A. Edelman. Eigenvalues and condition numbers of random matrices. SIAM Journal on Matrix Analysis and Applications, 9(4):543–560, 1988.
[6] A. Edelman. Eigenvalues and condition numbers of random matrices. PhD thesis, Massachusetts Institute of Technology, 1989.


[7] A. Fisher. The Mathematical Theory of Probabilities and Its Application to Frequency Curves and Statistical Methods, volume 1. The Macmillan Company, 1922.
[8] S. Foucart and M.-J. Lai. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis, 26(3):395–407, 2009.
[9] S. Foucart and M.-J. Lai. Sparse recovery with pre-Gaussian random matrices. Studia Mathematica, 200(1):91–102, 2010.
[10] S. Geman. A limit theorem for the norm of random matrices. The Annals of Probability, 8(2):252–261, 1980.
[11] G. H. Golub and C. F. Van Loan. Matrix Computations, 3rd edition. Johns Hopkins University Press, 1996.
[12] G. Pisier. The Volume of Convex Bodies and Banach Space Geometry, volume 94. Cambridge University Press, 1999.
[13] H. Robbins. A remark on Stirling's formula. The American Mathematical Monthly, 62(1):26–29, 1955.
[14] M. Rudelson and R. Vershynin. The least singular value of a random square matrix is O(n^{−1/2}). Comptes Rendus Mathématique, 346(15):893–896, 2008.
[15] M. Rudelson and R. Vershynin. The Littlewood–Offord problem and invertibility of random matrices. Advances in Mathematics, 218(2):600–633, 2008.
[16] M. Rudelson. Invertibility of random matrices: norm of the inverse. Annals of Mathematics, 168:575–600, 2008.
[17] M. Rudelson and R. Vershynin. Non-asymptotic theory of random matrices: extreme singular values. In International Congress of Mathematicians, 2010.
[18] J. W. Silverstein. The smallest eigenvalue of a large dimensional Wishart matrix. The Annals of Probability, 13(4):1364–1368, 1985.
[19] S. Smale. On the efficiency of algorithms of analysis. Bull. Amer. Math. Soc. (N.S.), 13, 1985.
[20] A. Soshnikov. A note on universality of the distribution of the largest eigenvalues in certain sample covariance matrices. Journal of Statistical Physics, 108(5):1033–1056, 2002.
[21] D. W. Stroock. Probability Theory: An Analytic View. Cambridge University Press, 2010.
[22] S. J. Szarek. Condition numbers of random matrices. J. Complexity, 7(2):131–149, 1991.
[23] T. Tao and V. Vu. On the singularity probability of random Bernoulli matrices. Journal of the American Mathematical Society, 20(3):603–628, 2007.
[24] T. Tao and V. Vu. Random matrices: the circular law. Communications in Contemporary Mathematics, 10(2):261–307, 2008.
[25] T. Tao and V. Vu. On the permanent of random Bernoulli matrices. Advances in Mathematics, 220(3):657–669, 2009.
[26] T. Tao and V. Vu. Random matrices: the distribution of the smallest singular values. Geometric and Functional Analysis, 20(1):260–297, 2010.
[27] T. Tao and V. Vu. Smooth analysis of the condition number and the least singular value. Mathematics of Computation, 79:2333–2352, 2010.
[28] J. von Neumann and H. H. Goldstine. Numerical inverting of matrices of high order. Bull. Amer. Math. Soc., 53(11):1021–1099, 1947.
[29] E. P. Wigner. On the distribution of the roots of certain symmetric matrices. The Annals of Mathematics, 67(2):325–327, 1958.
[30] J. Wishart. The generalised product moment distribution in samples from a normal multivariate population. Biometrika, 20(1/2):32–52, 1928.
[31] Y. Q. Yin, Z. D. Bai, and P. R. Krishnaiah. On the limit of the largest eigenvalue of the large dimensional sample covariance matrix. Probability Theory and Related Fields, 78(4):509–521, 1988.

Department of Mathematics, The University of Georgia, Athens, GA 30602
E-mail address: [email protected]

Department of Mathematics, Michigan State University, East Lansing, MI 48824-1027
E-mail address: [email protected]
