
Journal of Multivariate Analysis
www.elsevier.com/locate/jmva

Sample covariance shrinkage for high dimensional dependent data

Alessio Sancetta∗

Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, Cambridge CB3 9DE, UK

Abstract

For high dimensional data sets the sample covariance matrix is usually unbiased but noisy if the sample is not large enough. Shrinking the sample covariance towards a constrained, low dimensional estimator can be used to mitigate the sample variability. By doing so, we introduce bias, but reduce variance. In this paper, we give details on feasible optimal shrinkage allowing for time series dependent observations.
© 2007 Published by Elsevier Inc.

AMS 1991 subject classification: ; ;

Received 3 April 2006

Keywords: Sample covariance matrix; Shrinkage; Weak dependence

1. Introduction

This paper considers the problem of estimating the variance–covariance matrix of high dimensional data sets when the sample size is relatively small and the data exhibit time series dependence. The importance of estimating the covariance matrix in these situations is obvious. The number of applied problems where such an estimate is required is large, e.g. mean–variance portfolio optimization for a large number of assets, generalized method of moments estimation when the number of moment equations is large, etc. However, the sample covariance estimator can be noisy and its inverse difficult to approximate, so it might perform poorly. To mitigate this problem, the sample covariance matrix can be shrunk towards a low dimensional constrained covariance matrix. Recently, this approach has been successfully studied by Ledoit

∗ Fax: +44 1223 335299.

E-mail address: [email protected].
0047-259X/$ - see front matter © 2007 Published by Elsevier Inc.
doi:10.1016/j.jmva.2007.06.004
Please cite this article as: A. Sancetta, Sample covariance shrinkage for high dimensional dependent data, J. Multivariate Anal. (2007), doi:10.1016/j.jmva.2007.06.004

and Wolf [6]. These authors assume iid observations and a certain cross-dependence structure for the vector of observations, and shrink towards a matrix proportional to the identity. Related references can be found in their work. The idea is to find an optimal convex combination of the sample covariance and the constrained covariance matrix. The parameter defining shrinkage depends on unknown quantities and needs to be estimated consistently. Intuitively, the problem is the usual one of balancing the bias and the variance of the estimator to obtain lower mean square error.
The goal of the present paper is to show that covariance matrix shrinkage can be used in quite general situations, when data are time dependent and are not restricted in their cross-dependence structure. To account for time dependence, the estimator based on iid observations has to be slightly changed. However, an interesting property of the estimator is that accounting for time series dependence is not always crucial. We will make this statement more precise in our simulation study. Extending the theory to more general situations is important when dealing with real data. The results derived here are weaker, as they only hold in probability, versus the L₂ consistency of Ledoit and Wolf [6]. However, we show that the constrained covariance matrix does not need to be proportional to the identity and can be chosen more generally. In Ledoit and Wolf [5] a constrained covariance matrix based on a one factor model was suggested, assuming that the cross-sectional dimension stays fixed. Our framework covers the case when time and cross-sectional dimensions may grow at the same rate. However, this requires that the constrained covariance matrix is chosen appropriately. In this respect, while the results of this paper cover many cases of interest not covered by Ledoit and Wolf [6], the two papers are still complementary. Details will be given in due course.
The plan of the paper is as follows. Section 2 states the problem and the suggested solution. Section 3 contains a Monte Carlo study of the small sample performance of shrinkage when data series are dependent. Section 4 proves that the procedure is consistent.
We introduce some notation. Given two sequences a := a_n and b := b_n, a ≲ b means that there is a finite absolute constant c such that a ≤ cb; a ≍ b means that a ≲ b and b ≲ a. We may also use the O and o notation as a complement and substitute for the above symbols to describe orders of magnitude, whichever is felt more appropriate. Given two numbers a and b, the symbols a ∨ b and a ∧ b denote, respectively, the maximum and the minimum of a and b. If A is a countable set, #A stands for its cardinality. Finally, for a matrix A, A_ij stands for its (i, j)th entry.

2. Estimation of the unconditional covariance matrix

Suppose (Y_t)_{t∈{1,...,T}} are random variables with values in ℝ^N. For simplicity assume the variables are mean zero. The covariance matrix is defined as Σ := T⁻¹ ∑_{t=1}^T EY_tY_t′, and under second-order stationarity this reduces to Σ := EY_tY_t′. Then Σ̂_T = ∑_{t=1}^T Y_tY_t′/T is a sample estimator for Σ. In some cases, we may have that N grows with T. If N/T → 0, the sample covariance matrix Σ̂_T is consistent (under an appropriate metric), but the rate of convergence can be arbitrarily slow; moreover, Σ̂_T might be singular in finite samples. If N ≍ T, Σ̂_T is also inconsistent. This paper considers the case where N/T → c ∈ [0, ∞], so that we might even have T = o(N). To be more precise, we define the Frobenius norm in order to measure the distance between matrices.

Definition 1. Suppose A is a square N-dimensional matrix. The Frobenius norm is defined as ‖A‖₂ := √Trace(AA′).
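In code, Definition 1 is a one-liner. The following NumPy sketch (the helper name `frobenius_norm` is mine, not the paper's) checks the definition against the sum-of-squared-entries identity:

```python
import numpy as np

def frobenius_norm(A):
    # ||A||_2 := sqrt(Trace(A A')), Definition 1 (no standardisation by N)
    return float(np.sqrt(np.trace(A @ A.T)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# Trace(A A') equals the sum of squared entries: 1 + 4 + 9 + 16 = 30
assert np.isclose(frobenius_norm(A), np.sqrt(30.0))
```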

Remark 1. Note that ‖A‖₂² = ∑_{i=1}^N ∑_{j=1}^N A_ij². Moreover, when A is symmetric, ‖A‖₂² = ∑_{i=1}^N A_i², where A₁, ..., A_N are the eigenvalues of A. Ledoit and Wolf [6] suggest standardization by N, so that the Frobenius norm of the identity matrix is always one, independently of the dimension. This will not be done here.

To mitigate the problem that ‖Σ̂_T − Σ‖₂ is large when N is relatively large, it is suggested that we use a shrunk estimator Σ̃_T(α) = αF + (1 − α)Σ̂_T, where α ∈ [0, 1] and F is a constrained version of Σ. In general, F is chosen to impose stringent restrictions on the unconditional covariance matrix, so that F ≠ Σ. Note that F is usually unknown and needs to be replaced by an estimator. However, in Theorem 1 we will show that, asymptotically, this does not affect the argument if the estimator is low dimensional. On the other hand, Σ̂_T is unbiased for Σ, but very noisy, especially in finite samples, where we may even have N > T. The shrunk estimator Σ̃_T is preferred to Σ̂_T if there exists an α ∈ (0, 1] such that E‖Σ̃_T(α) − Σ‖₂² < E‖Σ̂_T − Σ‖₂². As done in Ledoit and Wolf [6], we consider the expected squared Frobenius norm and minimize it with respect to α.

Proposition 1. Suppose Σ̃_T(α) = αF + (1 − α)Σ̂_T. The optimal choice of α under the expected squared Frobenius norm is

α₀ = [E‖Σ̂_T − Σ‖₂² / E‖F − Σ̂_T‖₂²] ∧ 1 = arg min_{α∈[0,1]} E‖Σ̃_T(α) − Σ‖₂²,   (1)

where

E‖Σ̂_T − Σ‖₂² = ∑_{1≤i,j≤N} Var(Σ̂_T,ij),

and all relevant moments are assumed to exist.

The solution shows that we might reduce the error under the Frobenius norm even if Σ̃_T is biased (recall that F ≠ Σ) because we reduce its variance. This is the usual bias–variance trade-off in the mean square error of the estimator. Unfortunately, Σ̃_T(α₀) is based on unknown quantities, but Σ can be replaced by its unbiased estimator Σ̂_T, and F by an unbiased estimator, say F̂_T. Clearly, F̂_T should have low variance in order for the procedure to work well in practice. The choice of shrinkage parameter changes if we replace in (1) the unfeasible estimator Σ̃_T(α) with αF̂_T + (1 − α)Σ̂_T. In particular, from the proof of Proposition 1, deduce that

α′₀ := [(E‖Σ̂_T − Σ‖₂² − ∑_{1≤i,j≤N} Cov(Σ̂_T,ij, F̂_ij)) / E‖F̂_T − Σ̂_T‖₂²] ∧ 1 = arg min_{α∈[0,1]} E‖αF̂_T + (1 − α)Σ̂_T − Σ‖₂²   (2)

under regularity conditions. We shall show that under suitable conditions on F̂_T, α₀ and α′₀ are asymptotically equivalent. For this reason, we will just consider α₀ as the quantity to estimate. This is reassuring because estimation of Cov(Σ̂_T,ij, F̂_ij) could be a nontrivial exercise. Moreover, note that

E‖Σ̂_T − Σ‖₂² = (1/T) ∑_{1≤i,j≤N} Var(T^{−1/2} ∑_{t=1}^T Y_ti Y_tj),   (3)

so that when the observations are dependent, we should estimate the covariance terms Cov(Y_ti Y_tj, Y_si Y_sj). Define

Σ̂_T,ij := (1/T) ∑_{t=1}^T Y_ti Y_tj,

Λ̂_T,ij(s) := (1/T) ∑_{t=1}^{T−s} (Y_t,i Y_t,j − Σ̂_T,ij)(Y_{t+s,i} Y_{t+s,j} − Σ̂_T,ij),

Λ̂ᵇ_T,ij := Λ̂_T,ij(0) + 2 ∑_{s=1}^{T−1} κ(s/b) Λ̂_T,ij(s),  b > 0,   (4)

where κ(s) is some function decreasing to zero and continuous in the neighborhood of zero, and b > 0 is a smoothing parameter. With the above notation, under stationarity conditions, an estimator of (3) is given by T⁻¹ ∑_{1≤i,j≤N} Λ̂ᵇ_T,ij. Then we define the sample estimator

α̂_T := [T⁻¹ ∑_{1≤i,j≤N} Λ̂ᵇ_T,ij / ‖Σ̂_T − F̂_T‖₂²] ∧ 1

and will show that it is asymptotically equivalent to use either α̂_T or α₀, where α₀ is as in (1). To make this statement formal, we require some conditions. Comments about the following technical conditions are deferred to the next subsection.

Condition 1. (1) E(F̂_T,ij − F_ij)² = O(T⁻¹) (∀i, j), where F = EF̂_T;
(2) #{1 ≤ i, j ≤ N : F_ij ≠ F̂_T,ij} ≲ N^δ, δ ∈ [0, 2);
(3) ‖F − Σ‖₂² ≍ N^γ, γ > 0;
(4) λ = λ_T := N/T is such that λ, δ and γ satisfy λ^{1/2}N^{δ−1/2} ∨ λ^{1/2}N^{(1+δ)/2} = o(N^γ).

Condition 2. Suppose u, v ∈ {1, 2, 3, 4}. Consider the u and v tuples (i₁, ..., i_u), (s₁, ..., s_u) ∈ ℕ^u and (j₁, ..., j_v), (t₁, ..., t_v) ∈ ℕ^v such that s₁ ≤ ··· ≤ s_u < s_u + r ≤ t₁ ≤ ··· ≤ t_v for some r ∈ ℕ. Then, there exists a sequence (θ_r)_{r∈ℕ}, where θ_r ≲ r^{−a} with a > 3, such that

|Cov(Y_{t₁j₁} ··· Y_{t_vj_v}, Y_{s₁i₁} ··· Y_{s_ui_u})| ≲ θ_r.
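The estimator in (4) can be sketched numerically. The following is a NumPy illustration, not the paper's code; the Bartlett kernel used later in Section 3 is assumed, and `lam_hat` is a hypothetical helper name:

```python
import numpy as np

def bartlett(x):
    # kappa(x) = (1 - |x|) 1{|x| <= 1}, the Bartlett kernel of Section 3
    return max(0.0, 1.0 - abs(x))

def lam_hat(Y, i, j, b):
    """Kernel-weighted long-run variance of Y_{t,i} Y_{t,j}, following (4)."""
    T = Y.shape[0]
    Z = Y[:, i] * Y[:, j]
    Zc = Z - Z.mean()                 # centre at Sigma_hat_{T,ij}
    out = np.mean(Zc * Zc)            # Lambda_hat_{T,ij}(0)
    for s in range(1, T):
        w = bartlett(s / b)
        if w == 0.0:
            break                     # remaining kernel weights are zero
        # Lambda_hat_{T,ij}(s) keeps T (not T - s) in the denominator
        out += 2.0 * w * np.dot(Zc[:T - s], Zc[s:]) / T
    return out
```

With b = 1 the loop exits immediately and no autocovariance term is used, which matches the remark in Section 3 that b = 1 corresponds to ignoring time series dependence.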

Condition 3. In (4) above, (1) κ : ℝ → ℝ is a decreasing positive function, continuous from the right with left-hand limits, such that lim_{s→0⁺} κ(s) = 1 and ∫₀^∞ [κ(s)]² ds < ∞.
(2) b → ∞ such that b = o(T^{1/2}).

Condition 4. (Y_t)_{t∈ℕ} is an N-dimensional vector of nondegenerate random variables with finite and stationary eighth moment.

Hence, we have consistency of the estimated shrinkage parameter and the feasible shrinkage estimator.

Theorem 1. Under Conditions 1–4,
(1) λ⁻¹N^{γ−1}(α̂_T − α₀) = o_p(1), where α₀ ≍ λN^{1−γ} = o(1) is as in (1);
(2) λ⁻¹N^{γ−1}(α′₀ − α₀) = o(1), where α′₀ ≍ λN^{1−γ} = o(1) is as in (2);
(3) ‖α̂_T F̂_T + (1 − α̂_T)Σ̂_T − Σ‖₂ = ‖α₀F + (1 − α₀)Σ̂_T − Σ‖₂ (1 + o_p(λ^{1/2}N^{(1−γ)/2})).

Below, we provide comments about Theorem 1, and in the subsequent subsection, we remark on the technical conditions of the paper.

2.1. Remarks on Theorem 1

Theorem 1 gives a rate of convergence in probability for ‖α₀F + (1 − α₀)Σ̂_T − Σ‖₂, where λ^{1/2}N^{(1−γ)/2} → 0 by Condition 1(4). Note that λ → 0 is not required, but it is allowed. We may also have λ → ∞ as long as Condition 1 is satisfied (remarks about this condition can be found in the next subsection). Note that Theorem 1 is not concerned with consistency of α̂_T, but only assures that, with high probability, the shrunk estimator α̂_T F̂_T + (1 − α̂_T)Σ̂_T will perform better than Σ̂_T under the Frobenius norm.
Moreover, if F is full rank, then αF + (1 − α)Σ̂_T is invertible when α > 0 even though Σ̂_T is rank deficient. This intuition can be made formal in the special case F = vI_N, where I_N is the identity matrix and v a positive constant. Then,

det(αvI_N + (1 − α)Σ̂_T − μI_N) = (1 − α)^N det(Σ̂_T − ((μ − αv)/(1 − α)) I_N)

and Σ̂_T has arbitrary eigenvalue η := (μ − αv)/(1 − α) ≥ 0 (because Σ̂_T is positive semidefinite), implying μ = (1 − α)η + αv > 0 (which is the corresponding eigenvalue of the shrunk estimator). Therefore, the minimum eigenvalue of this shrunk estimator is always larger than that of the sample covariance matrix. We now turn to specific comments regarding the conditions of the paper.

2.2. Remarks on the technical conditions

2.2.1. Condition 1
Theorem 1(3) holds even if in Condition 1(1) we replace L₂ convergence with O_p(T^{−1/2}) convergence and we allow for δ = 2 in Condition 1(2). However, part (2) in Theorem 1 requires the present slightly stronger conditions. In practice we might only be interested in knowing that the shrunk estimator performs asymptotically as well as the unfeasible optimal αF + (1 − α)Σ̂_T, in which case milder conditions can be used.
As just mentioned, part (1) in Condition 1 implies that F̂_T,ij is L₂ root-n consistent. Part (2) says that F̂_T and F are constrained so that there are at most O(N^δ) elements to be estimated in F, with the other elements fixed and known. The simplest way to achieve this is by setting at most O(N^δ) elements to be nonzero in F and estimating them using F̂_T. We give some examples.

Example 1. Suppose F := vI_N, where v = ∑_{i=1}^N Σ_ii/N. Then F̂_T = v̂I_N, where v̂ = ∑_{i=1}^N Σ̂_iiT/N. In this case, Σ̃_T shrinks all the off-diagonal elements of Σ̂_T towards zero and the diagonal towards the mean of its diagonal elements, in both cases by a factor (1 − α). In this case, we need λ^{1/2}N^{1/2} ∨ λ^{1/2}N = o(N^γ). This is the estimator used in Ledoit and Wolf [6], but with different restrictions on λ.

Example 2. Suppose the data can be divided into groups and we constrain the correlation between groups to be zero. Controlling for the number of groups and the number of elements in each group would allow us to satisfy Condition 1. Many examples, also based on factor models, can be generated once we restrict between-group correlations to be zero. Details can be left to the interested reader.

Part (3) implies that F ≠ Σ, and γ quantifies how different F and Σ are under the Frobenius norm. Part (4) is the crucial condition of the paper and relates the coefficients δ and γ together with the ratio λ_T := N/T. A simple example shows that these conditions do not define an empty set.

Example 3. Suppose F and F̂_T are diagonal; then δ = 1. Suppose γ > 1, which is surely satisfied if, for example, ‖Σ‖₂² ≳ N^γ. Then Condition 1(4) is satisfied for λ_T → c > 0, and we may even have λ_T → ∞ at, e.g., a logarithmic rate.

It is interesting to note that if λ_T → c > 0, the result of the paper does not cover the case where δ = 1 (i.e. F and F̂_T are diagonal) and Σ is diagonal as well. In this case, we do require λ_T → 0. In practice, we would often use shrinkage for a matrix Σ such that Σ and F are different (i.e. γ > 1) because the number of entries to be estimated in F is relatively small (e.g. δ < 2/3). In this case λ_T → c > 0 is allowed.
It is useful to compare with the results in Ledoit and Wolf [6], and in particular with their Assumption 2. We note that a necessary condition for Assumption 2 in Ledoit and Wolf is ‖Σ‖₂² =

O(N). We will show this below. Based on another restrictive assumption on the higher order cross-dependence structure, Ledoit and Wolf show that using F proportional to the identity allows for successful shrinkage. Theorem 1 does not cover this case, though it is quite restrictive, as it would imply δ = γ = 1 and λ_T → c > 0, as remarked before. In this case, we require λ_T → 0. This is the price one has to pay for lifting the iid condition and the restrictive conditions on the higher order cross-dependence structure of the data (Assumption 3 in Ledoit and Wolf). Clearly, the results in Ledoit and Wolf do not allow for, say, ‖Σ‖₂² ≍ N², which is covered by this paper. Hence, the present result and the one in Ledoit and Wolf are somehow complementary. We remark that what makes the approach of Ledoit and Wolf work is that, under their conditions, they can show that ‖(1 − E)Σ̂_T‖₂² = o_p(N) (they actually show it in L₂), while ‖Σ̂_T − Σ‖₂² = O_p(N). This cannot be done under the present, more general conditions, and a different route had to be used.
To see that Assumption 2 in Ledoit and Wolf implies ‖Σ‖₂² = O(N), write Σ = PDP′, where D is the matrix of eigenvalues and P is the matrix of orthonormal eigenvectors. Define X_t := P′Y_t. Assumption 2 in Ledoit and Wolf says that ∑_{i=1}^N EX_ti⁸ = O(N) (using our notation). By Jensen's inequality, this implies

O(N) = ∑_{i=1}^N EX_ti⁸ ≥ ∑_{i=1}^N (EX_ti²)⁴ = ∑_{i=1}^N D_ii⁴ = Trace(D⁴).   (5)

By the properties of L_p norms, E|Z| ≤ (E|Z|²)^{1/2} for any random variable Z. Hence, setting Z = D_ii² and taking expectation with respect to i using the measure with mass 1/N at each i = 1, ..., N, deduce

Trace(D²)/N = N⁻¹ ∑_{i=1}^N |D_ii|² ≤ (N⁻¹ ∑_{i=1}^N |D_ii|⁴)^{1/2} = (Trace(D⁴)/N)^{1/2}.   (6)

Squaring (6) and multiplying it by N, (5) together with Remark 1 give

O(N) = Trace(D⁴) ≥ N⁻¹(Trace(D²))² = N⁻¹‖Σ‖₂⁴,

so that

‖Σ‖₂² = O(N).

Note that Assumption 3 in Ledoit and Wolf also imposes a further restriction on the cross-sectional dependence of the data (not used here), which is satisfied by a restricted class of random variables, like Gaussian random variables.

2.2.2. Condition 2
Condition 2 can be verified by deriving the weak dependence coefficients of Doukhan and Louhichi [4]. Condition 2 is satisfied by a wide range of time series models. It is weaker and much easier to derive than strong mixing, and Doukhan and Louhichi [4] give important examples of processes satisfying conditions of this type. Ledoit and Wolf [6] assume independence.
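The chain (5)–(6) rests on the elementary inequality (Trace D²)² ≤ N Trace(D⁴), which can be spot-checked numerically. A sketch with an arbitrary positive semidefinite matrix (the dimension and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
A = rng.standard_normal((N, N))
Sigma = A @ A.T                             # an arbitrary positive semidefinite matrix
lam = np.linalg.eigvalsh(Sigma)             # its eigenvalues (the diagonal of D)

frob_sq = np.sum(lam ** 2)                  # ||Sigma||_2^2 = Trace(D^2), by Remark 1
trace4 = np.sum(lam ** 4)                   # Trace(D^4)
assert frob_sq ** 2 <= N * trace4 + 1e-8    # (6) squared and multiplied by N
```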

2.2.3. Condition 3
Condition 3 is standard for the estimation of the spectral density at frequency zero. Note that there are other alternatives for the estimation of the variance of the sample mean of dependent observations: block bootstrap, sieve bootstrap, subsampling, etc. (see [3,7] for reviews). Clearly, any of these other approaches could be used as an estimator of (3) in place of (4).

2.2.4. Condition 4
From the proofs it is evident that stationarity is mainly used to simplify the notation in the definition of Λ̂_T,ij(s). Under suitable conditions, we could allow (Y_t)_{t∈ℕ} to be heterogeneous and interpret Σ as the arithmetic average of (EY_tY_t′)_{t∈{1,...,T}}, and similarly for other quantities that will be defined in the next section. Details can be left to the interested reader.

3. Simulation study

Ledoit and Wolf [6] carry out a simulation study to verify the small sample properties of their estimator. Theorem 1 says that we need to account for time series dependence. However, it is interesting to see what the effect of dependence is in practice. In the simulation examples we carry out below, we can see that there is no substantial gain unless there is some moderate time series dependence across all the (i, j) terms. Here is an explanation for this. To keep it simple, suppose that in Condition 1 δ = 1 and γ = 1. Then, by Lemma 2,

α₀ = [T⁻¹ ∑_{1≤i,j≤N} Var(T^{−1/2} ∑_{t=1}^T Y_ti Y_tj)] / E‖F − Σ̂_T‖₂² ≍ (NT)⁻¹ ∑_{1≤i,j≤N} Var(T^{−1/2} ∑_{t=1}^T Y_ti Y_tj),

which is the average over the variances of the (i, j) sample covariances. Hence, if

∑_{t=2}^T Cov(Y_1i Y_1j, Y_ti Y_tj) ≤ 0

for many i and j's, then averaging over (i, j) would considerably decrease the impact of dependence on the estimator. Hence, optimal shrinkage can be thought to be somehow robust to departures from independence, especially in the positive dependent case. In fact, (2) implies that α̂_T is upward biased for (2) in finite samples if Σ̂_T and F̂_T are positively correlated, and not accounting for positive dependence might counterbalance this bias.
The Monte Carlo study is carried out as follows. Simulate several sequences of vectors and compute their covariance using the covariance shrinkage proposed here. In particular, we choose the constrained estimator to be as in Example 1. This way, results can be compared with the shrunk estimator used for iid observations and proposed by Ledoit and Wolf [6]. We want to verify whether in practice we should worry too much about weak dependence. For all the simulated data, we compute E‖Σ̂*_T(α̂_T) − Σ‖₂², where Σ̂*_T(α) := αF̂_T + (1 − α)Σ̂_T, and as usual Σ is the true covariance matrix. We also compute the percentage relative improvement in average loss (PRIAL), i.e.

PRIAL(Σ̂*_T) = 100 [E‖Σ̂_T − Σ‖₂² − E‖Σ̂*_T(α̂_T) − Σ‖₂²] / E‖Σ̂_T − Σ‖₂².
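Proposition 1's optimal α₀ and the loss being minimized can be spot-checked by Monte Carlo in a toy iid Gaussian setting. This is a sketch: the dimensions, the diagonal Σ and the known nonstochastic target F = vI_N are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, reps = 5, 20, 2000                       # illustrative sizes
Sigma = np.diag([1.0, 1.5, 2.0, 2.5, 3.0])     # assumed true covariance
F = np.mean(np.diag(Sigma)) * np.eye(N)        # nonstochastic target v I_N

def sq_frob(A):
    return float(np.sum(A * A))                # ||A||_2^2, cf. Remark 1

grid = np.linspace(0.0, 1.0, 101)
loss = np.zeros_like(grid)                     # Monte Carlo estimate of the expected loss
num = den = 0.0                                # E||S_hat - Sigma||^2 and E||F - S_hat||^2
for _ in range(reps):
    Y = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
    S_hat = Y.T @ Y / T
    num += sq_frob(S_hat - Sigma) / reps
    den += sq_frob(F - S_hat) / reps
    for k, alp in enumerate(grid):
        loss[k] += sq_frob(alp * F + (1.0 - alp) * S_hat - Sigma) / reps

alpha0 = min(num / den, 1.0)                   # formula (1)
alpha_star = grid[np.argmin(loss)]             # empirical minimiser of the loss curve
```

Because the loss is exactly quadratic in α, the empirical minimizer should sit very close to the plug-in α₀, illustrating the bias–variance trade-off behind (1).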

The expectations are computed (approximated) using 1000 Monte Carlo replications, and standard errors are also computed. The smoothing function in Condition 3 is chosen to be κ(s) = (1 − |s|)I{|s| ≤ 1}, which is the Bartlett kernel. For b = 1 it corresponds to the case when no time series dependence is accounted for, as the covariance terms all drop. We consider the cases b = 1, 5, so that in the second case, fourth-order autocovariance terms are retained in the estimation of Λ̂ᵇ_T,ij. When b = 1, we just recover the exact estimator considered in Ledoit and Wolf [6]. For comparison reasons, we also compute Σ̂*_T(α) for α = 0, 1, i.e. Σ̂_T and F̂_T.
Details on the simulated data are as follows. The sample is T = 40 from an N = 20 dimensional vector autoregressive process (VAR) of order one. The matrix of autoregressive coefficients is diagonal with diagonal entries in [0, .8], and in [.5, .8] in a second simulation example. These coefficients were obtained by simulating an N-dimensional vector of [0, .8] and [.5, .8] uniform random variables. The innovations are iid Gaussian vectors with diagonal covariance matrix, whose coefficients were simulated from a lognormal with mean one and scaling parameter σ = .25, .5, 1, 2 (σ is the standard deviation of the logs of the observations). Different values of σ allow us to assess changes in performance as the diagonal entries become less concentrated around their mean equal to one. As σ increases, F̂_T becomes noisier and more biased for Σ, so that shrinkage is less justified. Results show that when the scaling parameter is equal to 1, the PRIAL is quite small, becoming negative when σ = 2. We also report the same results when σ = 1 but N = 40, 80, 160 to see if there is a relative improvement, which indeed happens to be substantial, despite the increased variability and bias in F̂_T. Simulations carried out by the author, but not reported here, show that a similar improvement is obtained when σ = 2, leading to positive PRIAL as soon as N ≥ 40 (i.e. N/T ≥ 1). A larger N implies that Σ̂_T is noisier, and we can argue more strongly for shrinkage despite the bias in F̂_T.
Finally, in a third simulation, we briefly consider the case of a nondiagonal true covariance matrix. To this end we use the same VAR model with autoregressive coefficients in [0, .8] and [.5, .8], but we set the covariance matrix of the innovations to be one along the diagonal and .25 off the diagonal. In this case, we only consider N = 20, 40. The results seem to be representative of the behavior of the covariance shrinkage estimator in the presence of exponentially decaying time series dependence. Note that for a VAR(1), Condition 2 is satisfied for any a > 0.
All the results are reported in Table 1, Panels A, B, C and D. The first two columns in Table 1 refer to the shrunk estimator with estimated α, the third and fourth columns refer to the estimator with fixed α = 0, 1, i.e. Σ̂_T and F̂_T, while the last two columns give values of α̂_T.
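A miniature version of this design (smaller N, fewer replications, homogeneous unit innovation variances, Example 1 target, Bartlett weights) might look as follows; all numerical choices here are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, reps, b = 10, 40, 200, 5.0            # illustrative sizes (the paper uses N = 20, T = 40)
phi = rng.uniform(0.5, 0.8, size=N)         # diagonal VAR(1) coefficients, as in Panel B
Sigma = np.diag(1.0 / (1.0 - phi ** 2))     # stationary covariance of the diagonal VAR(1)

def simulate(T):
    # simulate a stationary diagonal VAR(1) with iid N(0, 1) innovations
    y = np.sqrt(np.diag(Sigma)) * rng.standard_normal(N)
    Y = np.empty((T, N))
    for t in range(T):
        y = phi * y + rng.standard_normal(N)
        Y[t] = y
    return Y

def alpha_hat(Y, F_hat, S_hat, b):
    # alpha_hat_T = [T^{-1} sum_{ij} Lambda_hat^b_{T,ij} / ||S_hat - F_hat||_2^2] ∧ 1,
    # with Bartlett weights kappa(s/b) = (1 - s/b)_+
    T = Y.shape[0]
    Zc = Y[:, :, None] * Y[:, None, :] - S_hat      # Y_ti Y_tj - S_hat_ij, shape (T, N, N)
    num = np.sum(Zc * Zc) / T                       # sum_{ij} Lambda_hat_{T,ij}(0)
    for s in range(1, int(np.ceil(b))):
        num += 2.0 * (1.0 - s / b) * np.sum(Zc[:T - s] * Zc[s:]) / T
    num /= T
    den = np.sum((S_hat - F_hat) ** 2)
    return min(num / den, 1.0)

loss_sample = loss_shrunk = 0.0
for _ in range(reps):
    Y = simulate(T)
    S_hat = Y.T @ Y / T
    F_hat = (np.trace(S_hat) / N) * np.eye(N)       # Example 1 target v_hat I_N
    a = alpha_hat(Y, F_hat, S_hat, b)
    shrunk = a * F_hat + (1.0 - a) * S_hat
    loss_sample += np.sum((S_hat - Sigma) ** 2) / reps
    loss_shrunk += np.sum((shrunk - Sigma) ** 2) / reps

prial = 100.0 * (loss_sample - loss_shrunk) / loss_sample   # PRIAL as defined above
```

With homogeneous innovation variances the target is nearly unbiased, so the sketch should reproduce the qualitative message of Panels A and B: a strongly positive PRIAL for the shrunk estimator.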

Table 1

                  Shrunk estimator                                Estimated value of alpha
                  Estimated alpha       Fixed alpha
                  b = 5     b = 1       Alpha = 0   Alpha = 1     b = 5     b = 1

Panel A. Diagonal covariance, VAR coefficients in [0, 0.8]
N = 20, Sigma = .25
  MEAN            9.9       11.6        35.3        9.2           0.72      0.61
  SE              0.12      0.16        0.33        0.02          0.005     0.004
  PRIAL (%)       71.9      67.3        0.0         74.1
N = 20, Sigma = .5
  MEAN            18.2      19.4        39.2        22.0          0.60      0.51
  SE              0.19      0.23        0.42        0.02          0.005     0.004
  PRIAL (%)       53.5      50.4        0.0         43.8
N = 20, Sigma = 1
  MEAN            43.9      43.5        50.1        94.2          0.36      0.29
  SE              0.56      0.59        0.80        0.04          0.004     0.003
  PRIAL (%)       12.5      13.2        0.0         -87.9
N = 20, Sigma = 2
  MEAN            71.0      68.7        63.1        301.2         0.17      0.13
  SE              1.54      1.57        1.70        0.08          0.003     0.002
  PRIAL (%)       -12.5     -8.9        0.0         -377.3
N = 40, Sigma = 1
  MEAN            76.7      79.4        128.4       119.6         0.49      0.41
  SE              0.53      0.63        1.20        0.04          0.003     0.003
  PRIAL (%)       40.2      38.2        0.0         6.9
N = 80, Sigma = 1
  MEAN            117.0     123.0       284.6       154.1         0.60      0.53
  STD             0.47      0.63        1.74        0.03          0.003     0.003
  PRIAL (%)       58.9      56.8        0.0         45.8
N = 160, Sigma = 1
  MEAN            574.4     665.7       1796.1      634.2         0.63      0.52
  STD             3.04      4.84        11.07       0.08          0.003     0.002
  PRIAL (%)       68.0      62.9        0.0         64.7

Panel B. Diagonal covariance, VAR coefficients in [0.5, 0.8]
N = 20, Sigma = .25
  MEAN            19.6      31.2        82.1        11.6          0.66      0.46
  SE              0.36      0.51        0.79        0.05          0.004     0.003
  PRIAL (%)       76.1      62.0        0.0         85.9
N = 20, Sigma = .5
  MEAN            34.2      44.3        90.2        31.4          0.59      0.41
  SE              0.47      0.65        0.99        0.06          0.004     0.003
  PRIAL (%)       62.1      50.9        0.0         65.3
N = 20, Sigma = 1
  MEAN            82.3      85.5        109.2       136.2         0.40      0.28
  SE              1.00      1.18        1.65        0.09          0.004     0.003
  PRIAL (%)       24.7      21.8        0.0         -24.6
N = 20, Sigma = 2
  MEAN            131.7     125.7       120.1       424.7         0.21      0.14
  SE              2.53      2.61        2.92        0.15          0.003     0.002
  PRIAL (%)       -9.6      -4.7        0.0         -253.7
N = 40, Sigma = 1
  MEAN            143.8     169.9       293.2       174.8         0.50      0.35
  SE              1.12      1.64        2.69        0.09          0.003     0.002
  PRIAL (%)       50.9      42.1        0.0         40.4
N = 80, Sigma = 1
  MEAN            240.4     318.9       713.6       234.8         0.57      0.41
  STD             1.59      2.58        4.58        0.08          0.003     0.002
  PRIAL (%)       66.3      55.3        0.0         67.1
N = 160, Sigma = 1
  MEAN            1082.7    1647.1      3991.0      832.5         0.59      0.41
  STD             8.51      13.81       23.06       0.18          0.002     0.002
  PRIAL (%)       72.9      58.7        0.0         79.1

Panel C. Nondiagonal covariance, VAR coefficients in [0, 0.8]
N = 20, Sigma = 1
  MEAN            24.3      24.5        33.4        44.9          0.43      0.37
  SE              0.23      0.24        0.39        0.02          0.004     0.003
  PRIAL (%)       27.1      26.6        0.0         -34.6
N = 40, Sigma = 1
  MEAN            92.7      93.8        131.4       177.8         0.42      0.36
  SE              0.72      0.73        1.20        0.04          0.003     0.003
  PRIAL (%)       29.4      28.6        0.0         -35.3

Panel D. Nondiagonal covariance, VAR coefficients in [0.5, 0.8]
N = 20, Sigma = 1
  MEAN            52.6      55.8        77.8        82.6          0.44      0.31
  SE              0.49      0.59        0.93        0.05          0.004     0.003
  PRIAL (%)       32.3      28.2        0.0         -6.2
N = 40, Sigma = 1
  MEAN            206.9     220.9       309.8       335.8         0.42      0.30
  SE              1.64      1.99        3.12        0.10          0.004     0.003
  PRIAL (%)       33.2      28.7        0.0         -8.4
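Before turning to the proofs, the invertibility remark of Section 2.1 (shrinking towards a full-rank F = vI_N lifts every eigenvalue η of a singular Σ̂_T to (1 − α)η + αv > 0) can be checked directly. A sketch with arbitrary toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, alpha = 30, 10, 0.3                  # N > T, so the sample covariance is singular
Y = rng.standard_normal((T, N))
S_hat = Y.T @ Y / T                        # rank at most T < N
v = np.trace(S_hat) / N
F = v * np.eye(N)                          # full-rank target, as in Example 1
shrunk = alpha * F + (1.0 - alpha) * S_hat

min_eig_sample = np.linalg.eigvalsh(S_hat).min()
min_eig_shrunk = np.linalg.eigvalsh(shrunk).min()
# every eigenvalue eta of S_hat maps to (1 - alpha) * eta + alpha * v > 0,
# so the shrunk estimator is invertible even though S_hat is not
```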

4. Asymptotics for covariance shrinkage estimators

Proof of Proposition 1. Differentiating with respect to α,

dE‖αF + (1 − α)Σ̂_T − Σ‖₂²/dα = 2 ∑_{1≤i,j≤N} E[(αF_ij + (1 − α)Σ̂_T,ij − Σ_ij)(F_ij − Σ̂_T,ij)]
= 2 ∑_{1≤i,j≤N} [α E(F_ij − Σ̂_T,ij)² + Cov(F_ij, Σ̂_T,ij) − Var(Σ̂_T,ij)],

which, imposing the constraint α ∈ [0, 1], implies the result because Cov(F_ij, Σ̂_T,ij) = 0, as F is nonstochastic. □
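Spelling out the step left implicit above: setting the derivative to zero, using Cov(F_ij, Σ̂_T,ij) = 0, and recalling that ∑_{ij} E(F_ij − Σ̂_T,ij)² = E‖F − Σ̂_T‖₂² gives

```latex
\frac{d}{d\alpha}\,E\bigl\|\alpha F+(1-\alpha)\hat\Sigma_T-\Sigma\bigr\|_2^2=0
\;\Longrightarrow\;
\alpha=\frac{\sum_{1\le i,j\le N}\operatorname{Var}\bigl(\hat\Sigma_{T,ij}\bigr)}
{\sum_{1\le i,j\le N}E\bigl(F_{ij}-\hat\Sigma_{T,ij}\bigr)^2}
=\frac{E\|\hat\Sigma_T-\Sigma\|_2^2}{E\|F-\hat\Sigma_T\|_2^2},
```

and clipping to the constraint set via "∧ 1" yields α₀ of (1).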

We introduce some notation.

Notation 1. Λ_ij(s) := Cov(Y_t,i Y_t,j, Y_{t+s,i} Y_{t+s,j}), Λ_T,ij := Λ_ij(0) + 2 ∑_{s=1}^{T−1} (1 − s/T) Λ_ij(s). Moreover, ‖···‖_{p,P} is the L_p norm (p = 1, 2).

The following lemmata are used to prove Theorem 1.

Lemma 1. Under Conditions 2–4,

E‖Σ̂_T − Σ‖₂² ≍ λN

and

E[T⁻¹ ∑_{1≤i,j≤N} Λ̂ᵇ_T,ij] − E‖Σ̂_T − Σ‖₂² = o(λN).

Proof. By Condition 2,

E‖Σ̂_T − Σ‖₂² = T⁻² ∑_{1≤i,j≤N} ∑_{1≤t₁,t₂≤T} Cov(Y_{t₁,i}Y_{t₁,j}, Y_{t₂,i}Y_{t₂,j}) ≲ N²/T = λN.   (7)

By Condition 4, Y_t,i Y_t,j is nondegenerate (∀i, j); hence we must also have

T⁻¹ ∑_{1≤i,j≤N} Var(T^{−1/2} ∑_{t=1}^T Y_t,i Y_t,j) ≳ (N²/T) min_{1≤i,j≤N} Var(Y_t,i Y_t,j) ≳ λN,

implying the first part of Lemma 1. For arbitrary, but fixed, i, j, define S_t := (1 − E)Y_t,i Y_t,j. Condition 2 implies (e.g. [4])

∑_{1≤r₁≤r₂≤r₃≤∞} |E S_t S_{t+r₁} S_{t+r₂} S_{t+r₃}| ≲ ∑_{r=1}^∞ (r + 1)² θ_r < ∞,   (8)

which implies that the fourth mixed cumulant of (S_t, S_{t+r₁}, S_{t+r₂}, S_{t+r₃}) is summable in (r₁, r₂, r₃). Noting that

Var(T⁻¹ ∑_{t=1}^T Y_t,i Y_t,j) = Λ_T,ij/T,

by Condition 3 and (8) we deduce

max_{1≤i,j≤N} ‖Λ̂ᵇ_T,ij − Λ_T,ij‖_{1,P} = o(1),

using Theorem 1 in Andrews [2] and the results in Anderson [1, Chapter 8]. □


We give the rate of growth of $E\|F - \hat\Sigma_T\|_2^2$.

Lemma 2. Under Conditions 1, 2 and 4,
$$E\big\|F - \hat\Sigma_T\big\|_2^2 = \|F-\Sigma\|_2^2 + E\big\|\hat\Sigma_T-\Sigma\big\|_2^2 \asymp N^{\kappa}.$$

Proof. Adding and subtracting $\Sigma_{ij}$,
$$E\big\|F-\hat\Sigma_T\big\|_2^2 = \sum_{1\le i,j\le N}\Big[\big(F_{ij}-\Sigma_{ij}\big)^2 + E\big(\hat\Sigma_{T,ij}-\Sigma_{ij}\big)^2\Big] = \|F-\Sigma\|_2^2 + E\big\|\hat\Sigma_T-\Sigma\big\|_2^2 \asymp N^{\kappa},$$
where the cross terms vanish by unbiasedness of $\hat\Sigma_{T,ij}$. By Lemma 1,
$$E\big\|\hat\Sigma_T-\Sigma\big\|_2^2 = O\big(N^2/T\big) = o\big(N^{\kappa}\big),$$
because Condition 1(4) gives $(N/T)^{1/2}N^{(1+\kappa)/2} = o(N^{\kappa})$, which implies $N^2/T = o(N^{\kappa})$, and $\|F-\Sigma\|_2^2 \asymp N^{\kappa}$ by Condition 1(3). $\square$

We show convergence of $\|\hat F_T - \hat\Sigma_T\|_2^2$.

Lemma 3. Under Conditions 1, 2, and 4,
$$\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 - E\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 = o_p\big(N^{\kappa}\big).$$

Proof. By direct calculation,
$$\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 = \big\|(\hat F_T - F) + (F-\Sigma) + (\Sigma-\hat\Sigma_T)\big\|_2^2$$
[adding and subtracting $F$ and $\Sigma$]
$$= \sum_{1\le i,j\le N}\Big[\big(\hat F_{T,ij}-F_{ij}\big)^2 + \big(F_{ij}-\Sigma_{ij}\big)^2 + \big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big)^2 + 2\big(\hat F_{T,ij}-F_{ij}\big)\big(F_{ij}-\Sigma_{ij}\big) + 2\big(\hat F_{T,ij}-F_{ij}\big)\big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big) + 2\big(F_{ij}-\Sigma_{ij}\big)\big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big)\Big]$$
[expanding the square]
$$= \|F-\Sigma\|_2^2 + o_p\big(N^{\kappa}\big),$$
using the following bounds, which are easily derived using Condition 1:
$$\sum_{1\le i,j\le N}\big(\hat F_{T,ij}-F_{ij}\big)^2 = O_p\big(N^{\kappa}T^{-1}\big) = o_p\big(N^{\kappa}\big), \quad (9)$$
because there are $N^{\kappa}$ nonzero elements in the sum and $\hat F_{T,ij}$ is $\sqrt T$-consistent;
$$\sum_{1\le i,j\le N}\big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big)^2 = O_p\big(N^2/T\big) = o_p\big(N^{\kappa}\big), \quad (10)$$
by Lemma 1;
$$\sum_{1\le i,j\le N}\big(\hat F_{T,ij}-F_{ij}\big)\big(F_{ij}-\Sigma_{ij}\big) = O_p\big(N^{\kappa}T^{-1/2}\big) = o_p\big(N^{\kappa}\big),$$
by similar reasoning as for (9);
$$\Big|\sum_{1\le i,j\le N}\big(\hat F_{T,ij}-F_{ij}\big)\big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big)\Big| \le \big\|\hat F_T-F\big\|_2\big\|\hat\Sigma_T-\Sigma\big\|_2 = o_p\big(N^{\kappa}\big),$$
by Lemma 1; and
$$\Big|\sum_{1\le i,j\le N}\big(F_{ij}-\Sigma_{ij}\big)\big(\Sigma_{ij}-\hat\Sigma_{T,ij}\big)\Big| \le \|F-\Sigma\|_2\big\|\hat\Sigma_T-\Sigma\big\|_2 = O_p\big((N/T)^{1/2}N^{(1+\kappa)/2}\big) = o_p\big(N^{\kappa}\big), \quad (11)$$
by Lemma 1 and Condition 1(3). By Lemmata 2 and 1,
$$E\big\|F-\hat\Sigma_T\big\|_2^2 = \|F-\Sigma\|_2^2 + O\big(N^2/T\big).$$
Hence,
$$\big\|\hat F_T-\hat\Sigma_T\big\|_2^2 - E\big\|\hat F_T-\hat\Sigma_T\big\|_2^2 = o_p\big(N^{\kappa}\big). \quad \square$$

The following two lemmata will be used to show adaptiveness with respect to $\hat F_T$.
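The decomposition in Lemma 2 rests on the cross term vanishing under unbiasedness, elementwise the identity $E(F-X)^2 = (F-\mu)^2 + \mathrm{Var}(X)$ when $EX = \mu$. A toy numerical check with an explicitly enumerated unbiased estimator (all values hypothetical) is:

```python
# Toy check of the bias-variance decomposition behind Lemma 2:
#     E(F - X)^2 = (F - mu)^2 + Var(X)   whenever E X = mu,
# which, summed over entries, gives E||F - Sigma_hat||^2 =
# ||F - Sigma||^2 + E||Sigma_hat - Sigma||^2. X below is a hypothetical
# discrete unbiased estimator with an explicit distribution.

vals  = [0.5, 1.0, 1.5, 2.0]          # support of the estimator X
probs = [0.1, 0.4, 0.4, 0.1]          # its probabilities (sum to 1)

mu  = sum(v * p for v, p in zip(vals, probs))               # E X
var = sum((v - mu) ** 2 * p for v, p in zip(vals, probs))   # Var X

F = 2.5                                # a fixed (nonstochastic) target entry
mse = sum((F - v) ** 2 * p for v, p in zip(vals, probs))    # E(F - X)^2

print(abs(mse - ((F - mu) ** 2 + var)) < 1e-12)             # decomposition holds
```

The squared bias and the variance add exactly, with no cross term, which is what makes the rate in Lemma 2 the sum of the two rates controlled separately by Condition 1(3) and Lemma 1.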

Lemma 4. Under Conditions 1, 2, and 4,
$$E\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 \asymp N^{\kappa}$$
and
$$E\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 - E\big\|F - \hat\Sigma_T\big\|_2^2 = o\big(N^{\kappa}\big).$$

Proof. We have the following chain of equalities:
$$E\big\|\hat F_T - \hat\Sigma_T\big\|_2^2 = \sum_{1\le i,j\le N} E\Big[\big(\hat F_{T,ij}-F_{ij}\big)^2 + 2\big(\hat F_{T,ij}-F_{ij}\big)\big(F_{ij}-\hat\Sigma_{T,ij}\big) + \big(F_{ij}-\hat\Sigma_{T,ij}\big)^2\Big]$$
[adding and subtracting $F$ and expanding the square]
$$= E\big\|\hat F_T - F\big\|_2^2 + E\big\|F-\hat\Sigma_T\big\|_2^2 + 2\sum_{1\le i,j\le N} E\big(\hat F_{T,ij}-F_{ij}\big)\big(F_{ij}-\hat\Sigma_{T,ij}\big). \quad (12)$$
By Lemma 2, $E\|F-\hat\Sigma_T\|_2^2 \asymp N^{\kappa}$, and, mutatis mutandis from (9) using Condition 1(1), $E\|\hat F_T - F\|_2^2 = o(N^{\kappa})$. By the Hölder inequality,
$$\sum_{1\le i,j\le N}\Big|E\big(\hat F_{T,ij}-F_{ij}\big)\big(F_{ij}-\hat\Sigma_{T,ij}\big)\Big| \le \sum_{1\le i,j\le N}\Big[E\big(\hat F_{T,ij}-F_{ij}\big)^2\, E\big(F_{ij}-\hat\Sigma_{T,ij}\big)^2\Big]^{1/2} = O\big(N^{\kappa}T^{-1/2}\big),$$
because, by Condition 1(1), $E(\hat F_{T,ij}-F_{ij})^2 = O(T^{-1})$, by Condition 4, $E(F_{ij}-\hat\Sigma_{T,ij})^2 = O(1)$, and, by Condition 1(2), there are at most $O(N^{\kappa})$ nonzero elements in the double sum. As in (10), $O(N^{\kappa}T^{-1/2}) = o(N^{\kappa})$, implying that $E\|F-\hat\Sigma_T\|_2^2 \asymp N^{\kappa}$ is the dominating term. Substituting these orders of magnitude in (12), we have the result. $\square$

This is the final lemma before the proof of Theorem 1.

Lemma 5. Under Conditions 1, 2, and 4,
$$\sum_{1\le i,j\le N}\mathrm{Cov}\big(\hat\Sigma_{T,ij},\, \hat F_{ij}\big) = o\big(N^2/T\big).$$

Proof. By the Hölder inequality,
$$\sum_{1\le i,j\le N}\Big|\mathrm{Cov}\big(\hat\Sigma_{T,ij}, \hat F_{ij}\big)\Big| \le \sum_{1\le i,j\le N}\Big[E\big(\hat\Sigma_{T,ij}-\Sigma_{ij}\big)^2\, E\big(\hat F_{ij}-F_{ij}\big)^2\Big]^{1/2} = O\big(N^{\kappa}/T\big),$$
because, by Condition 1(2), there are at most $O(N^{\kappa})$ nonzero elements in the double sum, by Condition 1(1), $E(\hat F_{ij}-F_{ij})^2 = O(T^{-1})$, and, by Condition 2, $E(\hat\Sigma_{T,ij}-\Sigma_{ij})^2 = O(T^{-1})$, as shown in (7). Since $\kappa < 2$, $N^{\kappa}/T = o(N^2/T)$. $\square$

We can now prove Theorem 1.

Proof of Theorem 1(1). By the triangle inequality,
$$\big|\hat\lambda^b_T - \lambda_0\big| = \left|\frac{\sum_{1\le i,j\le N}\hat\sigma^{2,b}_{T,ij}/T}{\big\|\hat F_T - \hat\Sigma_T\big\|_2^2} - \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2}{E\big\|F-\hat\Sigma_T\big\|_2^2}\right|$$
$$\le \frac{\Big|\sum_{1\le i,j\le N}\hat\sigma^{2,b}_{T,ij}/T - E\big\|\hat\Sigma_T-\Sigma\big\|_2^2\Big|}{\big\|\hat\Sigma_T-\hat F_T\big\|_2^2} + \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2}{\big\|\hat\Sigma_T-\hat F_T\big\|_2^2\, E\big\|\hat\Sigma_T-F\big\|_2^2}\Big|E\big\|\hat\Sigma_T-F\big\|_2^2 - \big\|\hat\Sigma_T-\hat F_T\big\|_2^2\Big| = \mathrm{I} + \mathrm{II}.$$

Control over I: By Lemma 3, the Continuous Mapping Theorem and Lemma 2,
$$\Big(N^{-\kappa}\big\|\hat F_T - \hat\Sigma_T\big\|_2^2\Big)^{-1} \xrightarrow{p} \Big(N^{-\kappa} E\big\|F-\hat\Sigma_T\big\|_2^2\Big)^{-1} = O(1). \quad (13)$$
By (13) and Lemma 1,
$$\mathrm{I} = \Big(N^{-\kappa}\big\|\hat F_T-\hat\Sigma_T\big\|_2^2\Big)^{-1} N^{-\kappa}\Big|\sum_{1\le i,j\le N}\hat\sigma^{2,b}_{T,ij}/T - E\big\|\hat\Sigma_T-\Sigma\big\|_2^2\Big| = O_p(1)\, o_p\big(N^{2-\kappa}/T\big) = o_p\big(N^{2-\kappa}/T\big).$$

Control over II: By direct calculation, Lemma 1 and (13),
$$\mathrm{II} = \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2}{N^{\kappa}}\Big(N^{-\kappa}\big\|\hat\Sigma_T-\hat F_T\big\|_2^2\Big)^{-1}\Big(N^{-\kappa}E\big\|\hat\Sigma_T-F\big\|_2^2\Big)^{-1} N^{-\kappa}\Big|E\big\|\hat\Sigma_T-F\big\|_2^2 - \big\|\hat\Sigma_T-\hat F_T\big\|_2^2\Big| = o_p\big(N^{2-\kappa}/T\big).$$

Hence, $\mathrm{I}+\mathrm{II} = o_p\big(N^{2-\kappa}/T\big)$, which gives $\big(N^{2-\kappa}/T\big)^{-1}\big(\hat\lambda^b_T - \lambda_0\big) = o_p(1)$. To see that $\lambda_0 \asymp N^{2-\kappa}/T$, we just use Lemmata 1 and 2. Then, Condition 1(4) shows that $N^{2-\kappa}/T = o(1)$. $\square$

Proof of Theorem 1(2). By the triangle inequality,
$$\left|\frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2 - \sum_{1\le i,j\le N}\mathrm{Cov}\big(\hat\Sigma_{T,ij},\hat F_{ij}\big)}{E\big\|\hat F-\hat\Sigma_T\big\|_2^2} - \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2}{E\big\|\hat\Sigma_T-F\big\|_2^2}\right|$$
$$\le \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2}{E\big\|\hat F-\hat\Sigma_T\big\|_2^2\, E\big\|\hat\Sigma_T-F\big\|_2^2}\Big|E\big\|\hat\Sigma_T-F\big\|_2^2 - E\big\|\hat F-\hat\Sigma_T\big\|_2^2\Big| + \frac{\Big|\sum_{1\le i,j\le N}\mathrm{Cov}\big(\hat\Sigma_{T,ij},\hat F_{ij}\big)\Big|}{E\big\|\hat F-\hat\Sigma_T\big\|_2^2} = \mathrm{I} + \mathrm{II}.$$

Control over I:
$$\mathrm{I} \le \frac{E\big\|\hat\Sigma_T-\Sigma\big\|_2^2\, N^{-\kappa}}{\Big(N^{-\kappa}E\big\|\hat\Sigma_T-F\big\|_2^2\Big)\Big(N^{-\kappa}E\big\|\hat F-\hat\Sigma_T\big\|_2^2\Big)}\cdot\frac{\Big|E\big\|\hat F-\hat\Sigma_T\big\|_2^2 - E\big\|\hat\Sigma_T-F\big\|_2^2\Big|}{N^{\kappa}} = o\big(N^{2-\kappa}/T\big),$$
by Lemmata 1, 2 and 4.

Control over II: By Lemmata 4 and 5,
$$\mathrm{II} = \frac{N^{-\kappa}\Big|\sum_{1\le i,j\le N}\mathrm{Cov}\big(\hat\Sigma_{T,ij},\hat F_{ij}\big)\Big|}{N^{-\kappa} E\big\|\hat F-\hat\Sigma_T\big\|_2^2} = o\big(N^{2-\kappa}/T\big). \quad \square$$

Proof of Theorem 1(3). We have the following chain of inequalities:
$$\big\|\hat\lambda_T \hat F_T + (1-\hat\lambda_T)\hat\Sigma_T - \Sigma\big\|_2 = \big\|\hat\lambda_T(\hat F_T - F) + \hat\lambda_T F + (1-\hat\lambda_T)\hat\Sigma_T - \Sigma\big\|_2$$
[adding and subtracting $\hat\lambda_T F$]
$$\le \big\|\hat\lambda_T F + (1-\hat\lambda_T)\hat\Sigma_T - \Sigma\big\|_2 + \hat\lambda_T\big\|\hat F_T - F\big\|_2$$
[by the Minkowski inequality]
$$= \big\|\lambda_0 F + (1-\lambda_0)\hat\Sigma_T - \Sigma + \big(\hat\lambda_T-\lambda_0\big)\big(F - \hat\Sigma_T\big)\big\|_2 + \hat\lambda_T\big\|\hat F_T - F\big\|_2$$
[adding and subtracting $\lambda_0\big(F-\hat\Sigma_T\big)$]
$$\le \big\|\lambda_0 F + (1-\lambda_0)\hat\Sigma_T - \Sigma\big\|_2 + \big|\hat\lambda_T - \lambda_0\big|\,\big\|F-\hat\Sigma_T\big\|_2 + \hat\lambda_T\big\|\hat F_T-F\big\|_2$$
[by the Minkowski inequality]
$$= \mathrm{I} + \mathrm{II} + \mathrm{III}.$$

We shall bound the three terms above. First, note that by Theorem 1(1),
$$\hat\lambda_T - \lambda_0 = o_p\big(N^{2-\kappa}/T\big), \quad (14)$$
and
$$\hat\lambda_T = O_p(\lambda_0) \asymp N^{2-\kappa}/T = o(1). \quad (15)$$

Control over II: By Lemmata 3 and 2,
$$\big\|F - \hat\Sigma_T\big\|_2^2 = E\big\|F-\hat\Sigma_T\big\|_2^2 + o_p\big(N^{\kappa}\big) = O_p\big(N^{\kappa}\big),$$
hence, using (14),
$$\mathrm{II} = o_p\big(N^{2-\kappa/2}/T\big).$$

Control over III: Using (9) and (15),
$$\mathrm{III} = o_p\big(N^{2-\kappa/2}/T\big).$$

Control over I: For the bound to be uniform, we only need to establish the exact order, in probability, of $\big\|\lambda_0 F + (1-\lambda_0)\hat\Sigma_T - \Sigma\big\|_2$. Using $\asymp_p$ to mean that $\asymp$ holds in probability, and similarly for $\lesssim_p$,
$$\big\|\lambda_0 F + (1-\lambda_0)\hat\Sigma_T - \Sigma\big\|_2^2 = \big\|\lambda_0(F-\Sigma) + (1-\lambda_0)\big(\hat\Sigma_T-\Sigma\big)\big\|_2^2$$
[adding and subtracting $\lambda_0\Sigma$]
$$= \lambda_0^2\,\|F-\Sigma\|_2^2 + (1-\lambda_0)^2\big\|\hat\Sigma_T-\Sigma\big\|_2^2 + 2\lambda_0(1-\lambda_0)\sum_{1\le i,j\le N}\big(F_{ij}-\Sigma_{ij}\big)\big(\hat\Sigma_{T,ij}-\Sigma_{ij}\big)$$
[expanding the square]
$$\asymp_p \frac{N^2}{T}, \quad (16)$$
because
$$(1-\lambda_0)^2\big\|\hat\Sigma_T-\Sigma\big\|_2^2 \asymp_p \frac{N^2}{T}$$
[by (15) and Lemma 1],
$$\lambda_0^2\,\|F-\Sigma\|_2^2 = O\big(N^{4-\kappa}/T^2\big) = o\big(N^2/T\big)$$
[by (15) and Condition 1(3)], and
$$\lambda_0\sum_{1\le i,j\le N}\big(F_{ij}-\Sigma_{ij}\big)\big(\hat\Sigma_{T,ij}-\Sigma_{ij}\big) = O_p\big(N^{3-\kappa/2}/T^{3/2}\big) = o\big(N^2/T\big),$$
by (15), (11) and Condition 1(4). By Condition 1(4), $N^{2-\kappa/2}/T = o\big(N/T^{1/2}\big)$, so that
$$\mathrm{I} \asymp_p N/T^{1/2}, \qquad \mathrm{II}+\mathrm{III} = o_p\big(N^{2-\kappa/2}/T\big).$$
To write II and III in terms of I times an $o(1)$ quantity, we solve $\mathrm{II}+\mathrm{III} = x\, N/T^{1/2} = o_p\big(N^{2-\kappa/2}/T\big)$ for $x$ to find $x = o_p\big(N^{(2-\kappa)/2}/T^{1/2}\big)$, which implies the result. $\square$
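The estimator analysed in Theorem 1 combines the target and the sample covariance with the estimated weight, $\hat\lambda_T \hat F_T + (1-\hat\lambda_T)\hat\Sigma_T$. A schematic implementation follows; the function names are hypothetical, the diagonal target is only one admissible choice, and the paper's conditions on $\hat F_T$ are not checked, so this is an illustration rather than the paper's procedure.

```python
# Schematic shrinkage estimator lam * F_hat + (1 - lam) * S, where S is the
# sample covariance of T observations on N series (rows of Y = time points).
# The target F_hat here is the diagonal of S, a common constrained choice;
# all names are hypothetical sketches, not the paper's code.

def sample_cov(Y):
    """Sample covariance matrix (divides by T, matching Sigma_hat_T)."""
    T, N = len(Y), len(Y[0])
    m = [sum(row[j] for row in Y) / T for j in range(N)]
    return [[sum((Y[t][i] - m[i]) * (Y[t][j] - m[j]) for t in range(T)) / T
             for j in range(N)] for i in range(N)]

def shrink(S, F, lam):
    """Convex combination lam * F + (1 - lam) * S, for lam in [0, 1]."""
    N = len(S)
    return [[lam * F[i][j] + (1.0 - lam) * S[i][j] for j in range(N)]
            for i in range(N)]

Y = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [0.0, 1.0]]
S = sample_cov(Y)
F = [[S[i][j] if i == j else 0.0 for j in range(2)] for i in range(2)]
Sigma_shrunk = shrink(S, F, 0.3)
# with a diagonal target, shrinking keeps the diagonal of S (approximately)
# and scales each off-diagonal entry by (1 - lam)
print(Sigma_shrunk)
```

With a diagonal target the off-diagonal entries are pulled toward zero by the factor $1-\hat\lambda_T$, which is exactly the variance reduction mechanism described in the Introduction.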

Acknowledgments

I thank an associate editor and a referee for useful comments that improved the content and presentation of the paper.

References

[1] T.W. Anderson, The Statistical Analysis of Time Series, Wiley, New York, 1971.
[2] D. Andrews, Heteroskedasticity and autocorrelation consistent covariance matrix estimation, Econometrica 59 (1991) 817–858.


[3] P. Bühlmann, Bootstraps for time series, Statist. Sci. 17 (2002) 52–72.
[4] P. Doukhan, S. Louhichi, A new weak dependence condition and applications to moment inequalities, Stochastic Process. Appl. 84 (1999) 313–342.
[5] O. Ledoit, M. Wolf, Improved estimation of the covariance matrix of stock returns with an application to portfolio selection, J. Empirical Finance 10 (2003) 603–621.
[6] O. Ledoit, M. Wolf, A well-conditioned estimator for large-dimensional covariance matrices, J. Multivariate Anal. 88 (2004) 365–411.
[7] D.N. Politis, The impact of bootstrap methods on time series analysis, Statist. Sci. 18 (2003) 219–230.
