Journal of Multivariate Analysis 98 (2007) 317–327
www.elsevier.com/locate/jmva

Estimation for parameters of interest in random effects growth curve models

Wai-Cheung Ip a,∗, Mi-Xia Wu b, Song-Gui Wang b, Heung Wong a

a Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong
b College of Applied Sciences, Beijing University of Technology, Beijing 100022, China

Received 1 March 2005
Available online 8 September 2006

Abstract

In this paper, we consider the general growth curve model with multivariate random effects covariance structure and provide a new simple estimator for the parameters of interest. This estimator is not only convenient for testing hypotheses on the corresponding parameters, but under certain conditions it is also more efficient than the least-squares estimator and the improved two-stage estimator obtained by Rao. Moreover, we obtain the necessary and sufficient condition for the new estimator to be identical to the best linear unbiased estimator. Examples of its application are given.
© 2006 Elsevier Inc. All rights reserved.

AMS 1991 subject classification: 62H12; 62H15; 62F10

Keywords: Growth curve models; Random effects; Exact test; Best linear unbiased estimator; Maximum likelihood estimator

1. Introduction

The linear mixed model is a popular choice for the treatment of longitudinal data with random effects; see Laird and Ware [6], Diggle et al. [3], Srivastava and VonRosen [14] and Verbeke and Molenberghs [15]. In this paper, we consider the multivariate model in which m distinct characteristics on each of N individuals taken from r different groups are measured on each of p different occasions. The jth characteristic on the ith individual is assumed to follow the mixed model

yij = Xjβj(k) + Zjuij + εij,   i = 1, . . . , N, j = 1, . . . , m, i ∈ kth group,   (1)

∗ Corresponding author. Fax: +852 23629045.
E-mail address: [email protected] (W.C. Ip).
0047-259X/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva.2006.04.001


where yij = (yij1, . . . , yijp)′, yijl is the measurement of the jth characteristic at occasion l on the ith individual, Xj and Zj are, respectively, known p × q and p × c design matrices of full column rank, βj(k) is the q × 1 vector of regression parameters on the jth characteristic in treatment group k (k = 1, . . . , r), uij is a c × 1 random effect vector, and εij is a p × 1 random error vector. If each of the m characteristics follows a response curve of the same general type over the p occasions, that is, Xj = X and Zj = Z, then for the ith individual,

yi = (X ⊗ Im)β(k) + (Z ⊗ Im)ui + εi,

where

yi = Vec((yi1, . . . , yim)′),   β(k) = Vec((β1(k), . . . , βm(k))′),
ui = Vec((ui1, . . . , uim)′),   εi = Vec((εi1, . . . , εim)′),

⊗ denotes the Kronecker product, and the Vec(·) operator stacks the columns of a matrix below one another to form a column vector. The individual random effects ui are assumed to be distributed independently as N(0, Σu), independently of the error εi, which has distribution N(0, Ip ⊗ Σe); here Σe is an m × m positive definite matrix and Σu is a cm × cm nonnegative definite matrix, that is, Σe > 0 and Σu ≥ 0. Thus the covariance matrix of yi is

Cov(yi) = Ip ⊗ Σe + (Z ⊗ Im)Σu(Z′ ⊗ Im) = Σ (say).   (2)
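The covariance structure (2) can be assembled directly from Kronecker products. The following NumPy sketch, in which the dimensions and the particular Z, Σe and Σu are arbitrary illustrative choices (not values from the paper), verifies that the construction yields a symmetric pm × pm matrix.

```python
import numpy as np

p, m, c = 4, 2, 2                      # occasions, characteristics, random effects
rng = np.random.default_rng(0)

Z = rng.standard_normal((p, c))        # p x c random-effects design (illustrative)
Sigma_e = np.array([[1.0, 0.3],
                    [0.3, 1.0]])       # m x m positive definite error covariance
B = rng.standard_normal((c * m, c * m))
Sigma_u = B @ B.T                      # cm x cm nonnegative definite

Sigma = np.kron(np.eye(p), Sigma_e) \
    + np.kron(Z, np.eye(m)) @ Sigma_u @ np.kron(Z.T, np.eye(m))
assert Sigma.shape == (p * m, p * m) and np.allclose(Sigma, Sigma.T)
```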

Let Y = (y1, . . . , yN), U = (u1, . . . , uN), and E = (ε1, . . . , εN); then the full multivariate model (1) can be expressed as

Y = (X ⊗ Im)ΘA + (Z ⊗ Im)U + E,   Cov(Vec(Y)) = IN ⊗ Σ,   (3)

which is a general growth curve model with multivariate random effects covariance structure, where Θ = (β(1), . . . , β(r)) is the qm × r matrix of the growth curve coefficients, and A = (a1, . . . , aN) is an r × N matrix of full row rank. In particular, if its elements are either 1 or 0, indicating the group from which an observation comes, A is called the 'group indicator' matrix in the literature. When m = 1, model (3) becomes Y = XΘA + ZU + E with Σ = σ²e Ip + ZΣuZ′, reducing to the single-variable case. Reinsel [11,12], Azzalini [1], Lange and Laird [7], and Nummi [8,9] considered two special cases, Z = X and Z = Xc, where X = (Xc : Xc̄), and showed that the maximum likelihood estimator (MLE) of Θ is identical to its least-squares estimator (LSE)

Θ̂ = ((X′X)−1X′ ⊗ Im)Y A′(AA′)−1.   (4)
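The LSE (4) is a one-line computation once the Kronecker structure is written out. The sketch below simulates data from model (3) with illustrative dimensions and a simple block group indicator (all names and values are assumptions for demonstration) and recovers Θ via (4).

```python
import numpy as np

p, q, m, r, N = 5, 2, 2, 3, 30
rng = np.random.default_rng(1)

X = rng.standard_normal((p, q))                 # within-individual design
A = np.kron(np.eye(r), np.ones(N // r))         # 0-1 group indicator, r x N
Theta = rng.standard_normal((q * m, r))         # true coefficients
Y = np.kron(X, np.eye(m)) @ Theta @ A \
    + 0.1 * rng.standard_normal((p * m, N))     # data matrix, pm x N

Theta_hat = np.kron(np.linalg.inv(X.T @ X) @ X.T, np.eye(m)) \
    @ Y @ A.T @ np.linalg.inv(A @ A.T)          # eq. (4)
print(np.abs(Theta_hat - Theta).max())          # small for small noise
```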

However, this is not the case for a general design matrix Z, for which an explicit MLE of Θ usually does not exist. Moreover, both the two-stage estimator provided by Khatri [5] and the LSE Θ̂ ignore the structural information in the covariance matrix Σ. One alternative is the improved two-stage estimator suggested by Rao [10], which uses the structured covariance matrix of the mixed effects model. In many practical situations, only a part of the parameters of model (3) is meaningful to the researcher. For example, in the rat data of Verbeke and Molenberghs [15], the primary interest lies in estimating changes over time and testing whether these changes are treatment dependent. For the mixed linear model with one random effect, Wu and Wang [17] gave a simple estimator


of the parameters of interest based on a reduced model, which is often used to remove nuisance parameters. In this paper, we mainly consider this problem under the general growth curve model (3) with random effects. In Section 2, we introduce a new simple estimator for the part of Θ of interest via a reduced model; this estimator can be superior to the LSE under certain conditions. In Section 3, we consider the optimality of the new estimator and obtain the necessary and sufficient condition for the new estimator to be identical to the best linear unbiased estimator (BLUE). Furthermore, we show that the new estimator is superior to the corresponding two-stage estimator under certain conditions. Examples are presented in Section 4.

2. New estimator

Without loss of generality, we partition X = (X1 : X2) in model (3) such that

M(X1) ⊆ M(Z),   M(X2) ∩ M(Z) = {0},   (5)

where X1 and X2 are p × l and p × (q − l) matrices, respectively (0 ≤ l ≤ q), and M(C) denotes the range space of a matrix C. Partition Θ conformably into Θ1 (lm × r) and Θ2 ((q − l)m × r). Thus model (3) can be rewritten as

Y = (X1 ⊗ Im)Θ1A + (X2 ⊗ Im)Θ2A + (Z ⊗ Im)U + E,   (6)

where Θ2 is the parameter matrix of primary interest. In what follows, we mainly consider the estimation of Θ2. Denote by Q1 the p × (p − c) matrix such that Q1′Q1 = Ip−c and Q1′Z = 0. Thus Q1Q1′ = Ip − PZ, where PZ = Z(Z′Z)−Z′ is the orthogonal projector onto the space M(Z). Let MZ = Ip − PZ = Q1Q1′. Premultiplying model (6) by Q1′ ⊗ Im, we obtain the reduced model

(Q1′ ⊗ Im)Y = (Q1′X2 ⊗ Im)Θ2A + ε,   Cov(Vec(ε)) = IN(p−c) ⊗ Σe,   (7)

which involves only the matrix parameters Θ2 and Σe. It is readily seen that, under model (7), the MLE of Θ2 equals its LSE

Θ̃2 = ((X2′MZX2)−1X2′MZ ⊗ Im)Y A′(AA′)−1.   (8)
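Computationally, (8) only requires the projector MZ. A minimal sketch, again with illustrative dimensions and simulated inputs (none of the matrices below are from the paper):

```python
import numpy as np

p, m, r, N, c = 6, 2, 2, 20, 2
rng = np.random.default_rng(2)

Z = rng.standard_normal((p, c))
X2 = rng.standard_normal((p, 1))                 # the columns of interest (q - l = 1)
A = np.kron(np.eye(r), np.ones(N // r))
Y = rng.standard_normal((p * m, N))              # stands in for data from model (6)

MZ = np.eye(p) - Z @ np.linalg.pinv(Z)           # I_p - Z(Z'Z)^- Z'
Gm = np.linalg.inv(X2.T @ MZ @ X2) @ X2.T @ MZ   # (X2' M_Z X2)^{-1} X2' M_Z
Theta2_tilde = np.kron(Gm, np.eye(m)) @ Y @ A.T @ np.linalg.inv(A @ A.T)  # eq. (8)
print(Theta2_tilde.shape)                        # ((q - l)m, r)
```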

Let Θ2 be partitioned as Θ2 = (Θ21′, . . . , Θ2(q−l)′)′, where the Θ2i (i = 1, . . . , q − l) are m × r matrices. Assume that the elements of X are functions of the time variable t, say X = (fi(tj)); then the Θ2i represent the regression coefficients attached to fl+1(t), . . . , fq(t) for the m characteristics and r groups. Denote Φ = (Θ21, . . . , Θ2(q−l)). Applying the definition of the restricted maximum likelihood (REML) estimator, we obtain the REML estimator of Σe under the reduced model (7), that is,

Σ̃e = ∑_{i=1}^{N} (YiQ1 − Φ̃(X2′Q1 ⊗ ai))(YiQ1 − Φ̃(X2′Q1 ⊗ ai))′ / k,   (9)

where k = N(p − c) − (q − l)r, Yi = (yi1, . . . , yim)′, and Φ̃ = (Θ̃21, . . . , Θ̃2(q−l)). Under the original model (6), the estimators Θ̃2 and Σ̃e are also unbiased for Θ2 and Σe, respectively.
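The REML estimator (9) is most easily computed through the vectorized form used in the proof of Theorem 2.1 below (equations (10) and (11)). A sketch under illustrative dimensions; Q1 is obtained as an orthonormal basis of the null space of Z′, and the simulated array Ys stands in for the Yi:

```python
import numpy as np
from scipy.linalg import null_space

p, m, r, N, c = 6, 2, 2, 20, 2
rng = np.random.default_rng(3)

Z = rng.standard_normal((p, c))
X2 = rng.standard_normal((p, 1))                     # q - l = 1
A = np.kron(np.eye(r), np.ones(N // r))              # the a_i are the columns of A
Ys = rng.standard_normal((N, m, p))                  # Y_i = (y_i1, ..., y_im)', m x p

Q1 = null_space(Z.T)                                 # p x (p - c): Q1'Q1 = I, Q1'Z = 0
Y0 = np.hstack([Ys[i] @ Q1 for i in range(N)])       # m x N(p - c)
X0 = np.hstack([np.kron(X2.T @ Q1, A[:, [i]]) for i in range(N)])
Phi_tilde = Y0 @ X0.T @ np.linalg.inv(X0 @ X0.T)     # LSE of Phi in (11)

k = N * (p - c) - 1 * r                              # k = N(p - c) - (q - l)r
R = Y0 - Phi_tilde @ X0                              # residuals of the reduced model
Sigma_e_tilde = R @ R.T / k                          # eq. (9); k times it is Wishart
print(Sigma_e_tilde)
```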


Theorem 2.1. (a) Θ̃2 ∼ N(Θ2, (AA′)−1 ⊗ (X2′MZX2)−1 ⊗ Σe); (b) k·Σ̃e ∼ Wm(k, Σe) and is independent of Θ̃2, where Wm(k, Σe) denotes the m-dimensional Wishart distribution with k degrees of freedom and parameter matrix Σe.

Proof. (a) is obvious, so we only need to prove (b). Noting that Vec(AXC) = (C′ ⊗ A)Vec(X), model (7) can be rewritten as

YiQ1 = Φ(X2′Q1 ⊗ ai) + Ei,   Cov(Vec(Ei)) = Ip−c ⊗ Σe,   i = 1, . . . , N,   (10)

where Yi is the same as in (9), Φ = (Θ21, . . . , Θ2(q−l)), and Ei = (εi1, . . . , εim)′Q1. Denote

Y0 = (Y1Q1, . . . , YNQ1),   X0 = (X2′Q1 ⊗ a1, . . . , X2′Q1 ⊗ aN),   E0 = (E1, . . . , EN);

then model (7) can also be rewritten as

Y0 = ΦX0 + E0,   Cov(Vec(E0)) = IN(p−c) ⊗ Σe.   (11)

Thus we get the equivalent forms of Φ̃ = (Θ̃21, . . . , Θ̃2(q−l)) and Σ̃e, respectively:

Φ̃ = Y0X0′(X0X0′)−1 = ∑_{i=1}^{N} YiQ1Q1′X2(X2′MZX2)−1 ⊗ ai′(AA′)−1,

Σ̃e = Y0(IN(p−c) − X0′(X0X0′)−1X0)Y0′/k = (∑_{i=1}^{N} YiMZYi′ − Φ̃(X2′MZX2 ⊗ AA′)Φ̃′)/k.

It follows readily from (11) that Σ̃e is independent of Φ̃ and that k·Σ̃e ∼ Wm(k, Σe). The proof of Theorem 2.1 is completed. □

Remark 2.1. Cov(Vec(Θ̃2)) does not depend on the matrix Σu.

According to Theorem 2.1, the new estimator Θ̃2 can be used to construct an exact test on Φ, i.e. on Θ2, for the general linear hypothesis H0: LΦG = 0. In fact, the Wilks's Λ is given by

Λ = |W| / |W + H|,   (12)

where W = k(LΣ̃eL′) and H = (LΦ̃G)(G′(X2′MZX2 ⊗ AA′)−1G)−1(LΦ̃G)′. On the other hand, from (4), it is easy to obtain that the LSE of Θ2 under model (6) is

Θ̂2 = ((X2′MX1X2)−1X2′MX1 ⊗ Im)Y A′(AA′)−1,

and

Cov(Vec(Θ̂2)) = (AA′)−1 ⊗ [(X2′MX1X2)−1 ⊗ Σe + ((X2′MX1X2)−1X2′MX1Z ⊗ Im)Σu(Z′MX1X2(X2′MX1X2)−1 ⊗ Im)].   (13)
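A small helper shows how (12) is assembled from the pieces of Theorem 2.1. It reuses the quantities Phi_tilde, Sigma_e_tilde, X2, MZ, A and k from the sketches above; L and G are arbitrary user-supplied contrast matrices. This is a sketch under those assumptions, not a validated implementation.

```python
import numpy as np

def wilks_lambda(L, G, Phi_tilde, Sigma_e_tilde, X2, MZ, A, k):
    """Lambda = |W| / |W + H| of eq. (12) for H0: L Phi G = 0."""
    W = k * (L @ Sigma_e_tilde @ L.T)
    C = np.kron(X2.T @ MZ @ X2, A @ A.T)           # (X2' M_Z X2) ⊗ (A A')
    LPG = L @ Phi_tilde @ G
    H = LPG @ np.linalg.inv(G.T @ np.linalg.inv(C) @ G) @ LPG.T
    return np.linalg.det(W) / np.linalg.det(W + H)
```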


Combining this with Remark 2.1, it is reasonable to expect that Θ̃2 may be superior to Θ̂2 under some conditions. Suppose that X1, X2 and Z in (5) satisfy the following conditions:

X1′X2 = 0,   Z = (X1, Z0),   M(Z0) ∩ M(Xi) = {0}, i = 1, 2;   (14)

then MZX1 = 0, MX1X2 = X2 and

MZ = MX1 − MX1Z0(Z0′MX1Z0)−1Z0′MX1.   (15)

For the proof of the last equality see [13]. By the use of (14) and the matrix identity (A + BCB′)−1 = A−1 − A−1B(B′A−1B + C−1)−1B′A−1, we can obtain

(X2′MZX2)−1 = (X2′X2)−1 + (X2′X2)−1X2′Z0(Z0′MXZ0)−1Z0′X2(X2′X2)−1.

Thus Cov(Vec(Θ̃2)) can be rewritten as

Cov(Vec(Θ̃2)) = (AA′)−1 ⊗ [(X2′X2)−1 ⊗ Σe + ((X2′X2)−1X2′Z0 ⊗ Im)((Z0′MXZ0)−1 ⊗ Σe)(Z0′X2(X2′X2)−1 ⊗ Im)].   (16)

Furthermore, under assumption (14), (13) can be simplified to

Cov(Vec(Θ̂2)) = (AA′)−1 ⊗ [(X2′X2)−1 ⊗ Σe + ((X2′X2)−1X2′Z0 ⊗ Im)(Σu)22(Z0′X2(X2′X2)−1 ⊗ Im)],   (17)
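The displayed inverse identity can be checked numerically. In the sketch below, the vectors are illustrative choices satisfying conditions (14) (in particular X1′X2 = 0 and Z = (X1, Z0)):

```python
import numpy as np

X1 = np.ones((5, 1))
X2 = np.array([[-2., -1., 0., 1., 2.]]).T        # X1'X2 = 0
Z0 = np.array([[1., 4., 9., 16., 25.]]).T
X, Z = np.hstack([X1, X2]), np.hstack([X1, Z0])

proj = lambda M: M @ np.linalg.pinv(M)           # orthogonal projector onto M(M)
MZ, MX = np.eye(5) - proj(Z), np.eye(5) - proj(X)

lhs = np.linalg.inv(X2.T @ MZ @ X2)
V = np.linalg.inv(X2.T @ X2)
rhs = V + V @ X2.T @ Z0 @ np.linalg.inv(Z0.T @ MX @ Z0) @ Z0.T @ X2 @ V
assert np.allclose(lhs, rhs)
```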

where (Σu)22 = (0, Im(c−l))Σu(0, Im(c−l))′ is an m(c − l) × m(c − l) nonnegative definite matrix. Comparing (16) with (17), we obtain, in the following theorem, the condition under which the new estimator Θ̃2 is superior to the LSE Θ̂2.

Theorem 2.2. If conditions (14) hold, then Cov(Vec(Θ̃2)) < Cov(Vec(Θ̂2)) if and only if

X2′Z0 ≠ 0   and   (Z0′MXZ0)−1 ⊗ Σe < (Σu)22.   (18)

For example, if X1 = (1, 1, 1)′, X2 = (−1, 0, 1)′, Z0 = (1, 4, 9)′, and (Σu)22 = a·Σe, then Θ̃2 is superior to Θ̂2 only if a > 3/2, since Z0′MXZ0 = 2/3 for this design.

Remark 2.2. Condition (18) involves no sub-matrices of Σu other than (Σu)22.

In the following section, we show that the new estimator Θ̃2 can also be superior to the improved two-stage estimator of Θ2 under some conditions.
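A quick numerical confirmation of this example, with the vectors as stated above:

```python
import numpy as np

X = np.column_stack([np.ones(3), np.array([-1., 0., 1.])])   # (X1 : X2)
Z0 = np.array([1., 4., 9.])
MX = np.eye(3) - X @ np.linalg.pinv(X)
print(Z0 @ MX @ Z0)   # 0.666..., so (Z0'M_X Z0)^{-1} = 3/2 in (18)
```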

3. The optimality of the new estimator

In this section, we consider the optimality of the new estimator Θ̃2 of Θ2, the parameter matrix of primary interest, and give the necessary and sufficient condition under which Θ̃2 is equal to the BLUE of Θ2 for the original model (6).

Theorem 3.1. Θ̃2 is the BLUE of Θ2 under model (6) if and only if

Z′MX1X2 = 0,   (19)

where MX1 = Ip − X1(X1′X1)−1X1′.

Proof. By Lemma 5.4.3 of [16], Θ̃2 is the BLUE of Θ2 under model (6) if and only if

Cov(Vec(Θ̃2), (In − PA′ ⊗ PX ⊗ Im)Vec(Y)) = 0,   (20)

where n = Npm; that is,

(AA′)−1A ⊗ ((X2′MZX2)−1X2′MZ ⊗ Im)(MX ⊗ Im) = 0,

which is equivalent to

X2′MZMX = 0.   (21)

From (5), we have X1′MZ = 0; thus (21) ⇔ X′PZMX = 0 ⇔ PXPZMX = 0 ⇔ PXPZ = PX1. Combining Theorem 1 (A10, A11) of [2] with the fact that PX = PX1 + MX1X2(X2′MX1X2)−1X2′MX1, Eq. (21) can be simplified to

Z′MX1X2 = 0.   (22)

The proof of Theorem 3.1 is completed. □

The BLUE of Θ2 under model (6) is

Θ2∗ = C22.1−1 C2′MC1Σ−1/2 Y A′(AA′)−1,   (23)

where Ci = Σ−1/2(Xi ⊗ Im), i = 1, 2, C22.1 = C2′MC1C2, and MC1 = I − C1(C1′C1)−1C1′. Obviously, Θ2∗ is usually an infeasible estimator, since Σ in (23) involves the two unknown covariance matrices Σu and Σe. However, if condition (19) holds, then by Theorem 3.1 we have Θ2∗ = Θ̃2, which means that an explicit ML estimator of the part parameter Θ2 exists.
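For comparison purposes, (23) can still be evaluated when Σ is treated as known, which is precisely what makes Θ2∗ infeasible in practice. A sketch under that assumption, with inputs following the notation of model (6):

```python
import numpy as np
from scipy.linalg import sqrtm

def blue_theta2(Y, X1, X2, A, Sigma, m):
    """Theta2* of eq. (23); Sigma (= Cov(y_i)) must be supplied, hence infeasible."""
    S_inv_half = np.linalg.inv(sqrtm(Sigma).real)        # Sigma^{-1/2}
    C1 = S_inv_half @ np.kron(X1, np.eye(m))
    C2 = S_inv_half @ np.kron(X2, np.eye(m))
    MC1 = np.eye(C1.shape[0]) - C1 @ np.linalg.pinv(C1)  # I - P_{C1}
    C221 = C2.T @ MC1 @ C2                               # C_{22.1}
    return np.linalg.inv(C221) @ C2.T @ MC1 @ S_inv_half \
        @ Y @ A.T @ np.linalg.inv(A @ A.T)
```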

Theorem 3.2. Under model (6), the following statements are equivalent:

(a) Z′MX1X2 = 0;
(b) Θ̃2 = Θ̂2;
(c) Θ̂2 = Θ2∗.

Proof. (i) Proof of (a) ⇔ (c). Similarly to the proof of Theorem 3.1, we have Θ̂2 = Θ2∗ ⇔ Cov(Vec(Θ̂2), (In − PA′ ⊗ PX ⊗ Im)Vec(Y)) = 0, which can be simplified to

(X2′MX1Z ⊗ Im)Σu(Z′MX ⊗ Im) = 0.   (24)

Since Σu ≥ 0 is arbitrary, (24) is equivalent to Z′MX1X2 = 0 or Z′MX = 0. From (5), we have Z′MX = 0 ⇔ M(Z) = M(X1) ⇒ Z′MX1X2 = 0; thus (24) is equivalent to Z′MX1X2 = 0, that is, (a) ⇔ (c).


(ii) Proof of (a) ⇔ (b). Obviously, (a) ⇒ (b). Now, we consider (b) ⇒ (a). If (b) holds, then

(X2′MX1X2)−1X2′MX1 = (X2′MZX2)−1X2′MZ.   (25)

Postmultiplying (25) by Z, we have (X2′MX1X2)−1X2′MX1Z = 0, which is equivalent to Z′MX1X2 = 0. The proof of Theorem 3.2 is completed. □

Theorem 3.2 shows that the new estimator Θ̃2 and the LSE Θ̂2 achieve optimality simultaneously, and the common necessary and sufficient condition is Z′MX1X2 = 0.

Remark 3.1. The condition Z′MX1X2 = 0 is weaker than the necessary and sufficient condition for Θ̂ = Θ∗ under model (6). In fact, by Lemma 5.4.3 of [16], we can verify that Θ̂ = Θ∗ if and only if Z′X = 0 or M(Z) = M(X1), which are two special cases of Z′MX1X2 = 0.

Remark 3.1 sharpens the intuition that while an explicit MLE of the whole parameter matrix may not exist, it may still exist for the part parameter matrix. For covariance matrices with random effects structure such as (2), Rao [10] presented an improved two-stage estimator of Θ,

Θ̂T1 = Θ̂ − ((X′X)−1X′ ⊗ Im)SQ(Q′SQ)−1Q′Y A′(AA′)−1,

where Q = MXZ ⊗ Im, T1 = Q′Y, and S = Y(IN − PA′)Y′. Θ̂T1 is derived by covariance adjustment of the LSE Θ̂ using the concomitant variable T1. Rao [10] and Grizzle and Allen [4] proved that

Cov(Vec(Θ̂T1)) = (N − r − 1)/(N − r − mk0 − 1) · (AA′)−1 ⊗ {(X′ ⊗ Im)Σ−1(X ⊗ Im)}−1
   = (N − r − 1)/(N − r − mk0 − 1) · Cov(Vec(Θ∗)) ≥ Cov(Vec(Θ∗)),   (26)

where k0 = rk(MXZ). Clearly, (26) holds with equality if and only if k0 = 0, which requires M(Z) ⊆ M(X). In this case the improved two-stage estimator Θ̂T1 is equal to the LSE Θ̂, and Θ̃2 = Θ̂2 = Θ̂2T1, where Θ̂2T1 is the corresponding improved two-stage estimator of Θ2, Θ̂T1 = (Θ̂1T1′, Θ̂2T1′)′. In the following corollary, we give a set of conditions for the new estimator Θ̃2 to be superior to the improved two-stage estimator Θ̂2T1 in the case k0 ≠ 0.
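For completeness, a sketch of the improved two-stage estimator as displayed above. Inputs follow the notation of model (3); the pseudo-inverse guarding against a rank-deficient Q′SQ is an implementation detail assumed here, not taken from Rao [10].

```python
import numpy as np

def rao_two_stage(Y, X, Z, A, m):
    """Theta_hat_T1 = Theta_hat - ((X'X)^{-1}X' ⊗ I_m) S Q (Q'SQ)^- Q' Y A'(AA')^{-1}."""
    N = Y.shape[1]
    Im = np.eye(m)
    MX = np.eye(X.shape[0]) - X @ np.linalg.pinv(X)
    Q = np.kron(MX @ Z, Im)                          # Q = M_X Z ⊗ I_m
    PA = A.T @ np.linalg.pinv(A.T)                   # projector onto M(A')
    S = Y @ (np.eye(N) - PA) @ Y.T                   # S = Y(I_N - P_{A'})Y'
    G = np.kron(np.linalg.inv(X.T @ X) @ X.T, Im)
    theta_hat = G @ Y @ A.T @ np.linalg.inv(A @ A.T)             # LSE (4)
    adj = G @ S @ Q @ np.linalg.pinv(Q.T @ S @ Q) @ Q.T \
        @ Y @ A.T @ np.linalg.inv(A @ A.T)                       # covariance adjustment
    return theta_hat - adj
```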

Corollary 3.1. If M(X) ∩ M(Z) = M(X1), M(Z) ≠ M(X1) and Z′MX1X2 = 0, then

Cov(Vec(Θ̂2T1)) > Cov(Vec(Θ2∗)) = Cov(Vec(Θ̃2)) = Cov(Vec(Θ̂2)).   (27)

The conditions M(X) ∩ M(Z) = M(X1) and M(Z) ≠ M(X1) ensure k0 ≠ 0. Using Theorem 3.1 and (26), we obtain (27). In order to understand the set of conditions in Corollary 3.1, assume without loss of generality that Z = (X1 : Z0). By (5), we have M(Z0) ∩ M(X) = {0}, and Z′MX1X2 = 0 ⇔ Z0′MX1X2 = 0.


According to Theorem 1 (A10, A11) of [2], Z0′MX1X2 = 0 is equivalent to PZ0PX = 0; thus

Z′MX1X2 = 0 ⇔ Z0′X = 0,   (28)

and hence the conditions in Corollary 3.1 are equivalent to Z0′X = 0 and Z0 ≠ 0. For example, take X1 = (1, 1, 1, 1)′, X2 = (1, 2, 3, 4)′ and Z = (X1, Z0), where Z0 = (1, −1, −1, 1)′; clearly, Z0 ≠ 0 and Z0′X = 0.
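This example design is easily verified numerically:

```python
import numpy as np

X = np.column_stack([np.ones(4), np.array([1., 2., 3., 4.])])  # (X1 : X2)
Z0 = np.array([1., -1., -1., 1.])
print(Z0 @ X)   # [0. 0.], i.e. Z0'X = 0 while Z0 itself is nonzero
```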

4. Examples

In this section, we give two simple examples to illustrate the foregoing results.

Example 1. Consider the model for the rat data (see Verbeke and Molenberghs [15]):

yij = β01 + u0j + β1ti + u1jzi + εij   if low dose,
yij = β02 + u0j + β2ti + u1jzi + εij   if high dose,   (29)
yij = β03 + u0j + β3ti + u1jzi + εij   if control,

where yij is the observation for the jth individual at time point ti, i = 1, . . . , p, uj = (u0j, u1j)′ is a random effect vector with normal distribution N(0, Σu), and the random error εij follows the distribution N(0, σ²e). We assume that u1, . . . , uN, ε11, . . . , ε1N, . . . , εp1, . . . , εpN are mutually independent. Of primary interest are the estimation of the slopes β1, β2 and β3, and testing whether these slopes are equal to each other. Without loss of generality, we assume that Y1 = (y1, . . . , yn1), Y2 = (yn1+1, . . . , yn2) and Y3 = (yn2+1, . . . , yN) are the matrices of observations for the low-dose, high-dose and control groups, respectively, where yj = (y1j, . . . , ypj)′. Denote

Y = (Y1 : Y2 : Y3),   U = (u1, . . . , uN),   T = (t1, . . . , tp)′,   Z0 = (z1, . . . , zp)′,
X = (1p : T),   Z = (1p : Z0),

Θ = (Θ1′ : Θ2′)′,   Θ1 = (β01, β02, β03),   Θ2 = (β1, β2, β3),

and

A = [1′n1 0 0; 0 1′(n2−n1) 0; 0 0 1′(N−n2)]   (the 3 × N group indicator matrix);

then model (29) can be rewritten as

Y = XΘA + ZU + E = 1pΘ1A + TΘ2A + ZU + E.   (30)

The covariance matrix of Y is Cov(Vec(Y)) = IN ⊗ (ZΣuZ′ + σ²eIp). Denote by Q1 the p × (p − 2) matrix such that Q1Q1′ = MZ = Ip − PZ; then the reduced model of (30) may be represented by

Q1′Y = Q1′TΘ2A + Q1′E,   Cov(Vec(Q1′Y)) = IN ⊗ σ²eIp−2.

By (8) and (9), we obtain the new estimators of Θ2 = (β1, β2, β3) and σ²e:

Θ̃2 = (T′MZT)−1 T′MZ(Ȳ1 : Ȳ2 : Ȳ3),   (31)

or

β̃r = T′MZȲr / (T′MZT),   r = 1, 2, 3,   (32)

and

σ̃²e = ∑_{r=1}^{3} SSr / k,   (33)

where k = N(p − 2) − 3, Ȳr = (Ȳ1r, . . . , Ȳpr)′,

Ȳi1 = ∑_{j=1}^{n1} yij/n1,   Ȳi2 = ∑_{j=n1+1}^{n2} yij/(n2 − n1),   Ȳi3 = ∑_{j=n2+1}^{N} yij/(N − n2),

SS1 = ∑_{j=1}^{n1} (yj − β̃1T)′MZ(yj − β̃1T),
SS2 = ∑_{j=n1+1}^{n2} (yj − β̃2T)′MZ(yj − β̃2T),   and
SS3 = ∑_{j=n2+1}^{N} (yj − β̃3T)′MZ(yj − β̃3T).
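To make Example 1 concrete, the following sketch simulates data from (29) and evaluates (32) and (33). The group sizes, time points, the choice zi = ti² and all variance components are illustrative assumptions (note that zi = ti would put T inside M(Z) and is therefore avoided):

```python
import numpy as np

p, n = 5, (10, 10, 10)                           # occasions, group sizes
N = sum(n)
T = np.arange(1., p + 1.)                        # time points t_1, ..., t_p
z = T ** 2                                       # z_i (illustrative choice)
Z = np.column_stack([np.ones(p), z])
rng = np.random.default_rng(4)

beta0 = np.array([1., 2., 3.])                   # intercepts per group
beta = np.array([0.5, 0.8, 0.8])                 # slopes per group
groups = np.repeat([0, 1, 2], n)
U = rng.multivariate_normal([0., 0.], [[0.5, 0.1], [0.1, 0.5]], size=N)
Y = (beta0[groups] + U[:, 0])[None, :] + np.outer(T, beta[groups]) \
    + np.outer(z, U[:, 1]) + 0.3 * rng.standard_normal((p, N))

MZ = np.eye(p) - Z @ np.linalg.pinv(Z)
beta_tilde = np.array([T @ MZ @ Y[:, groups == g].mean(axis=1) / (T @ MZ @ T)
                       for g in range(3)])                       # eq. (32)
k = N * (p - 2) - 3
resid = Y - np.outer(T, beta_tilde[groups])
sigma2_tilde = sum(resid[:, j] @ MZ @ resid[:, j] for j in range(N)) / k  # eq. (33)
print(beta_tilde, sigma2_tilde)
```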

By Theorem 2.1, Θ̃2 and σ̃²e are independent, and based on this we can construct a test statistic for testing H0: β1 = β2 = β3. Let

H = (1 0 −1; 0 −1 1)′;

then H0 is equivalent to testing Θ2H = 0. The exact test statistic is

F = (Θ̃2H(H′(AA′)−1H)−1H′Θ̃2′/2) · (T′MZT)/σ̃²e,

which is an F-statistic, so that F ∼ F2,k if Θ2H = 0 holds.
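Continuing the previous sketch (reusing beta_tilde, sigma2_tilde, MZ, T, n and k from it), the exact F test can be carried out as follows:

```python
import numpy as np
from scipy.stats import f as f_dist

H = np.array([[1., 0.], [0., -1.], [-1., 1.]])   # H0: Theta2 H = 0, equal slopes
AAt = np.diag(np.asarray(n))                     # AA' = diag(group sizes)
v = beta_tilde @ H
q_form = v @ np.linalg.inv(H.T @ np.linalg.inv(AAt) @ H) @ v
F = (q_form / 2.0) * (T @ MZ @ T) / sigma2_tilde
print(F, f_dist.sf(F, 2, k))                     # F and its p-value under F(2, k)
```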

By Theorem 2.2, in the case T′Z0 ≠ 0, if the variance of the error σ²e satisfies

σ²e < (Z0′MXZ0)(Σu)22 = (Z0′MXZ0) Var(u1j),   (34)

then Θ̃2 is superior to the LSE of Θ2,

Θ̂2 = ((T − t̄·1p)′(T − t̄·1p))−1 (T − t̄·1p)′(Ȳ1 : Ȳ2 : Ȳ3).   (35)

By Theorem 3.2, if T′Z0 = 0 and 1p′Z0 = 0, then Θ̃2 = Θ̂2, and Θ̃2 is the BLUE of Θ2 under model (29). Furthermore, if Z0 ≠ 0, then by Corollary 3.1, Θ̃2 is superior to the improved two-stage estimator of Θ2.


Example 2. Consider the model for systolic and diastolic blood pressure data:

yijl = β0j(1) + uij + βj(1)xl + vijtl + εijl   if treatment 1 group,
yijl = β0j(2) + uij + βj(2)xl + vijtl + εijl   if treatment 2 group,   (36)

i = 1, . . . , N,   j = 1, 2,   l = 1, . . . , p,

where yi1l, yi2l are the systolic and diastolic blood pressures of the ith patient (with moderate essential hypertension) at time point tl, respectively, β0j(1) and β0j(2) are fixed intercepts, and βj(1) and βj(2) are the fixed effects of treatments 1 and 2 on the endpoints, respectively. We assume that the random individual effect ui = (ui1, ui2, vi1, vi2)′ follows the distribution N(0, Σu) and is independent of the error εi = (εi11, εi21, . . . , εi1p, εi2p)′, which has distribution N(0, Ip ⊗ Σe). Clearly, (36) is the case of model (3) with m = 2,

Θ = (Θ1′ : Θ2′)′,   Θ1 = [β01(1) β01(2); β02(1) β02(2)],   Θ2 = [β1(1) β1(2); β2(1) β2(2)],
A = [1′n1 01×(N−n1); 01×n1 1′(N−n1)],   (37)

and

X = (1p, x),   Z = (1p, t),   U = (u1, . . . , uN).

Here x = (x1, . . . , xp)′ and t = (t1, . . . , tp)′ are design vectors for dose and time point. Of primary interest is testing on Θ2, the effects of treatments 1 and 2 on systolic and diastolic blood pressure. According to Theorem 2.1, we can obtain an exact test statistic on Θ2, namely the Wilks's Λ statistic (12), which is constructed from the new estimator Θ̃2 given by (8). Moreover, the new estimator of Θ2 is superior to the LSE and to the two-stage estimator under the conditions of Theorem 2.2 and Corollary 3.1, respectively. Consider the following typical designs for the dose x and time points t. Take x = (1, 1, 0, −1, −1)′ and t = (1, 2, 3, 4, 5)′; then, since x′t ≠ 0 and t′MXt = 1, conditions (18) are equivalent to

Σe = Cov((εi1l, εi2l)′) < Cov((vi1, vi2)′).   (38)

That is, if the covariance matrix of the error is smaller than the covariance matrix of the time effect in the Löwner partial ordering, then the new estimator of Θ2 is superior to the LSE. Take x = (0, 2, 2, 2, 0)′ and t = (−2, −1, 0, 1, 2)′; it is easy to verify that Z′MX1x = (0, t′x)′ = (0, 0)′, so the new estimator of Θ2 is superior to the two-stage estimator provided by Rao. Both designs are checked numerically in the sketch below. The above two examples show that Theorem 2.2 and Corollary 3.1 are also helpful for the study of optimal experimental design.
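```python
import numpy as np

# Design 1: x = (1,1,0,-1,-1)', t = (1,2,3,4,5)'
x = np.array([1., 1., 0., -1., -1.]); t = np.array([1., 2., 3., 4., 5.])
X = np.column_stack([np.ones(5), x])
MX = np.eye(5) - X @ np.linalg.pinv(X)
print(x @ t, t @ MX @ t)          # -6.0 (nonzero) and 1.0, so (18) reads Sigma_e < Cov(v)

# Design 2: x = (0,2,2,2,0)', t = (-2,-1,0,1,2)'
x = np.array([0., 2., 2., 2., 0.]); t = np.array([-2., -1., 0., 1., 2.])
Z = np.column_stack([np.ones(5), t])
M1 = np.eye(5) - np.full((5, 5), 1 / 5)          # M_X1 for X1 = 1_p
print(Z.T @ M1 @ x)               # [0. 0.], the condition Z'M_X1 x = 0 of (28)
```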

Acknowledgments

This research was supported by a grant from The Hong Kong Polytechnic University Research Committee. Wu's and Wang's research was also partially supported by the National Natural Science Foundation of China. We are grateful to the two referees for their detailed suggestions, which considerably improved the quality of the paper.

References

[1] A. Azzalini, Growth curves analysis for patterned covariance matrices, in: M.L. Puri, J.P. Vilaplana, W. Wertz (Eds.), New Perspectives in Theoretical and Applied Statistics, Wiley, New York, 1987, pp. 61–74.
[2] J.K. Baksalary, Algebraic characterizations and statistical implications of the commutativity of orthogonal projectors, in: T. Pukkila, S. Puntanen (Eds.), Proceedings of the Second International Tampere Conference in Statistics, 1987, pp. 113–142.
[3] P.J. Diggle, K.E. Liang, S.L. Zeger, Analysis of Longitudinal Data, Oxford Science, New York, 1994.
[4] J.E. Grizzle, D.M. Allen, Analysis of growth and dose response curves, Biometrics 25 (1969) 357–381.
[5] C.G. Khatri, A note on a MANOVA model applied to problems in growth curves, Ann. Inst. Statist. Math. 18 (1966) 75–86.
[6] N.M. Laird, J.H. Ware, Random effects models for longitudinal data, Biometrics 38 (1982) 963–974.
[7] N. Lange, N.M. Laird, The effect of covariance structure on variance estimation in balanced growth-curve models with random parameters, J. Amer. Statist. Assoc. 84 (1989) 241–247.
[8] T. Nummi, Estimation in a random effects growth curve model, J. Appl. Statist. 24 (1997) 157–168.
[9] T. Nummi, J. Möttönen, On the analysis of multivariate growth curves, Metrika 52 (2000) 77–89.
[10] C.R. Rao, Least squares theory using an estimated dispersion matrix and its application to measurement of signals, in: L. LeCam, J. Neyman (Eds.), Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 355–372.
[11] G. Reinsel, Multivariate repeated-measurement or growth curve models with multivariate random-effects covariance structure, J. Amer. Statist. Assoc. 77 (1982) 190–195.
[12] G. Reinsel, Estimation and prediction in a multivariate random effects generalized linear model, J. Amer. Statist. Assoc. 79 (1984) 406–414.
[13] G.A.F. Seber, Linear Regression Analysis, Wiley, New York, 1977, p. 66.
[14] M.S. Srivastava, D. VonRosen, Growth curve models, in: S. Ghosh (Ed.), Multivariate Analysis, Design of Experiments, and Survey Sampling, Marcel Dekker, New York, 1999.
[15] G. Verbeke, G. Molenberghs, Linear Mixed Models for Longitudinal Data, Springer, New York, 2000.
[16] S.G. Wang, S.C. Chow, Advanced Linear Models, Marcel Dekker, New York, 1994.
[17] M.X. Wu, S.G. Wang, Parameter estimation in a partitioned mixed-effects model, Chinese J. Eng. Math. 19 (2) (2002) 31–38.
