Journal of Econometrics 102 (2001) 23}66

A simpli"ed approach to computing e$ciency bounds in semiparametric models Thomas A. Severini , Gautam Tripathi * Department of Statistics, Northwestern University, Evanston, IL-60201, USA Department of Economics, University of Wisconsin-Madison, Madison, WI-53706, USA Received 8 January 1999; received in revised form 5 September 2000; accepted 2 October 2000

Abstract Using some standard Hilbert space theory a simpli"ed approach to computing e$ciency bounds in semiparametric models is presented. We use some interesting examples to illustrate this approach and also obtain some results which seem to be new to the literature.  2001 Elsevier Science S.A. All rights reserved. JEL classixcation: C14 Keywords: E$ciency bounds; Semiparametric models

1. Introduction In order to determine whether a certain parameter in a semiparametric model has been e$ciently estimated, one compares the asymptotic variance of the estimator under question with the e$ciency bound for estimating the parameter. There is a vast literature in econometrics and statistics dealing with the computation of e$ciency bounds in various models. However, much of this literature is highly technical, and perhaps beyond the reach of the average economist. Furthermore, almost each previous paper on the subject has used a di!erent, albeit highly original, approach in order to obtain e$ciency bounds for di!erent

* Corresponding author. Tel.: #1-608-262-3804; fax: #1-608-263-3876. E-mail address: [email protected] (G. Tripathi). 0304-4076/01/$ - see front matter  2001 Elsevier Science S.A. All rights reserved. PII: S 0 3 0 4 - 4 0 7 6 ( 0 0 ) 0 0 0 9 0 - 7

24

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

models. Some readers may "nd this procedure of obtaining e$ciency bounds on a case-by-case basis quite bewildering. Our intention, in this paper, is to rectify this situation. Utilizing the concepts of the Fisher information norm and the Fisher information inner product described in Wong and Severini (1991, p. 610), we present a fairly straightforward approach to computing e$ciency bounds in semiparametric models. We have used several interesting examples to illustrate this approach and to obtain some results which seem to be new to the literature. By design some of the technical conditions used in these examples may not be the most general, although we have tried not to be too simplistic. Instead, we have opted to present our results in a manner so that they are accessible to a larger audience. We con"ne ourselves to calculating the bounds for e$cient estimation. The general construction of e$cient estimators is not discussed. As far as the existing literature is concerned, a useful and highly readable account of e$ciency bounds may be found in Newey (1990). For technical details, such as a convolution theorem that ensures the validity of the bounds obtained in this paper, the reader is referred to van der Vaart (1989). The comprehensive monograph by Bickel et al. (1993), although not particularly easy to read, has a wealth of information about e$cient estimation in semiparametric models. Additional references have also been provided in the main body of the paper. The paper is organized as follows: In Section 2 we describe the procedure to obtain the e$ciency bound for estimating some real-valued feature in a general semiparametric model. This solution technique is used to obtain bounds for some interesting examples. We begin with the simplest ones and obtain bounds for e$ciently estimating the population mean (Section 3), population quantiles (Section 4), and the cumulative distribution function (Section 5). Sections 6 and 7 provide some results which seem to be new to the literature. In Section 6 we obtain the bound for estimating the cdf of random variables (y, z) when some additional information about the conditional mean $(yz) is available. Some useful illustrations are provided to demonstrate the nature of a priori information which can help us estimate the cdf more e$ciently. Section 7 looks at the bound for estimating a certain functional of a conditional expectation. In Sections 8}12 we reexamine the bounds for some well-known examples in econometrics. We demonstrate the ease with which e$ciency bounds for the partially linear model, models with unconditional and conditional moment restrictions, the binary choice model, and density weighted average derivatives can be obtained by using the procedure described in Section 2. These sections may help readers compare this approach with existing methods of computing e$ciency bounds. Section 13 concludes. Relevant mathematical de"nitions and auxiliary results have been provided in the appendices. Throughout the paper we have assumed that the operations of di!erentiation and integration can be interchanged. The following notation is used subsequently.

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

25

Notation. ( ( ) ) is the indicator function of a set A and AM denotes its closure in  a L norm. L(1B; ) is the set of all real-valued functions on 1B which are square integrable w.r.t the Lebesgue measure . L(S; z) denotes the set of all realvalued functions on S which are square integrable w.r.t the probability distribution of z. We sometimes use the symbol $ to signify that expectation is being X taken w.r.t the distribution of z. When there is no danger of confusion we simply use $ as the expectation operator. 2. A general semiparametric model We begin by de"ning the notion of e$ciency. Let K be an estimator (based on L n iid observations) of a real-valued parameter  and suppose that  B N(0, v ). Then according to Fisher, v *1/i n(K ! )P where i denotes L    $ $ the information contained in a single observation (the Fisher information). This inequality will be subsequently referred to as the &information inequality'. K is said to be asymptotically e$cient if the information inequality is sharp; i.e. L v "1/i . However, the existence of supere$cient estimators shows that  $ in the absence of suitable regularity conditions on K the information inL equality may not hold. Therefore, to exclude such pathological cases, henceforth we only consider regular estimators. As shown by Bahadur (1964), the information inequality does hold for all regular estimators of  .  We now describe the procedure to calculate the e$ciency bound for estimating some "nite dimensional parameter in a general semiparametric model. This solution technique will then be applied to some interesting examples. Let z ,2, z be d;1 iid random vectors with a common but unknown  L Lebesgue density p (z). p is assumed to have full support on 1B. Let us write   p (z) as  (z), where  3 and  is a subset of the unit ball in L(1B; ). For the    moment, think of  as the set of all 3L(1B; ) satisfying (z)'0 and 1B (z) dz"1. Additional restrictions may also be imposed on elements of , which will be explicitly de"ned in subsequent examples. Observe that the transformation p C  allows us to work with the square root of the pdf    "(p which, unlike the pdf itself, is guaranteed to lie in L(1B; ). More  over, as pointed out by Gijbels (1999, p. 22), the square root transformation is

 See Newey (1990, p. 102) for the de"nition of a ®ular' estimator. Loosely speaking, the notion of regularity is akin to stability; i.e. a regular estimator is stable in the sense that small perturbations of the &truth' do not alter its asymptotic distribution.  As pointed out by Ritov and Bickel (1990), without additional regularity conditions on the elements of  the e$ciency bounds may not be achievable. These additional regularity conditions usually do not a!ect the bounds (Newey 1990, p. 104).

26

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

variance stabilizing. There may also be some additional advantages of working with (p instead of p . For more on this see Good and Gaskins (1971).   Before going any further let us set in place some additional terminology. For some t '0 let t C be a curve from [0, t ] into  such that   " ; i.e.  R  R R   &passes through'  when t"0. We use Q to denote the slope of  at t"0. R  R Geometrically, Q is a &vector' (i.e. a point) in the vector space L(1B; ) which is tangent to the set  at  . The tangent cone ¹(,  ) is the collection of all Q 's   which are tangent to  at  . These concepts are formally de"ned in Appendix  A. The smallest closed (in the L(1B; ) norm) linear space containing ¹(,  )  is called the tangent space and is denoted by lin ¹(,  ).  Our objective is to obtain the e$ciency bound for estimating ( ), a real  valued feature of  , where  is a pathwise di!erentiable functional. Although  the de"nition of a pathwise di!erentiable functional is provided in De"nition A.4, intuitively we can think of pathwise di!erentiability of  to mean that the derivative (d/dt)( ) exists in the usual sense and has the form of a continuR R ous linear functional on the tangent space. For example, if we are interested in estimating the cdf of z at some point  then the object of interest is the functional ( )"1B ( (z) (z) dz. The pathwise derivative of , henceforth denoted  \ K

 by , for this case is easily evaluated and is given in Section 5. The requirement that  be real valued is solely for exposition. As subsequent examples will demonstrate, the case when the feature of interest is vector valued can be treated analogously by looking at arbitrary linear combinations of its components. Though we have assumed that ( ) is a scalar, it is still not clear how we can  obtain the e$ciency bound for estimating ( ). If  is known up to some   "nite-dimensional parameters, standard maximum-likelihood theory will give us the bound. However, in our case the standard theory will not work since  is  an unknown function; i.e. an in"nite-dimensional parameter. To handle this di$culty we use an approach that was perhaps "rst suggested by Stein (1956). Stein's approach is based on the intuition that a problem having nonparametric components is at least as hard as any one-dimensional subproblem contained in it. So think of calculating the asymptotic variance for estimating the hardest one-dimensional subproblem contained in the in"nite-dimensional problem. Let us denote this variance by the acronym l.b. which is short form for &lower bound'. Obviously, l.b. is the upper bound on the asymptotic variance for estimating any one-dimensional subproblem of the original problem. Following Stein, it then makes sense to regard l.b. as the e$ciency bound for estimating ( ) since the  asymptotic variance of any regular estimator of ( ) in the original problem  will be bounded from below by l.b. We now make this discussion precise.

 Because if we estimate p (z) nonparametrically, say by using a Rosenblatt}Parzen-type ap proach, the asymptotic variance of the density estimator is proportional to p (z). A simple delta  function argument then shows that the transformation p (z) C (p (z) is variance stabilizing.  

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

27

For some t '0 let t C  be a curve from [0, t ] into  which passes through  R   when t"0, and think of estimating the ¶meter' t. Keep in mind that the  &true parameter' in this one-dimensional subproblem is t"0. Since the loglikelihood of estimating t using a single observation z is given by l (t)"log (z), the X R score for estimating t"0 can be written as



2Q (z) " .  (z) R  Therefore, the Fisher information for estimating t"0 is given by dl (t) S (z)" X  dt



i " $

1B



S (z) (z) dz"4  

1B

Q (z) dz.

Recall that our objective is to calculate the e$ciency bound for estimating ( ).  Introducing an additional variable t to parameterize  , i.e. looking at a one dimensional subproblem, is simply a device intended to make this calculation easier. But we cannot use any arbitrary  passing through  to estimate t. We R  should only consider those subproblems which are informative about the feature of interest ( ). To formalize this intuition, let us restrict attention to only  those curves which satisfy the condition ( )"t for all t in some neighborhood R of 0 in [0, t ]. Loosely speaking, this condition describes the notion that  &estimating t is equivalent to estimating ( ).' More precisely, this condition R ensures that the &true parameter' t"0 in the chosen one-dimensional subproblem is at least locally identi"ed. To see this, observe that if we want the statement &estimating t"0 is equivalent to estimating ( )' to hold for a curve  , then  R the mapping t C( ) must be one to one near t"0. But in this case we can use R the local inverse to reparameterize  so that ( )"t holds for all t3[0, t ] R R  near 0. Following Wong and Severini (1991, p. 607) we now use i to establish an $ inner product and a norm on the tangent space as follows: For any Q , Q 3lin ¹(,  ) let    Q , Q "4   $



1B

Q (z)Q (z) dz and Q  "Q , Q .    $   $

We call  ) , ) the Fisher information inner product and  )  the Fisher $ $ information norm. Note that the Fisher information for estimating t"0 can be written as i "Q  for some Q 3lin ¹(,  ). The choice of which Q 's to use is  $ $ important. In particular, since we are restricting attention to  's satisfying R ( )"t, only those Q 3lin ¹(,  ) may be used to calculate i which also R  $ satisfy (Q )"1. As the reader will soon discover, we will extract a lot of mileage from the characterization of the pathwise derivative  as a linear functional on the tangent space.

28

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

So, as before, for some t '0 let t C be a curve from [0, t ] into  which  R  passes through  at t"0 and satis"es ( )"t for all t in a neighborhood of  R 0 in [0, t ]. Let Q be tangent to  at t"0 and assume that Q O0 so that the  R condition (Q )"1 is not violated. Also let tK be a n consistent regular L estimator of t"0 in this subproblem, and use asvar tK to denote the asymptotic L variance of tK . Since the information inequality holds for all regular estimators, L asvar ntK *1/i "Q \. But as  has to satisfy ( )"t, we have L $ $ R R

asvar n[( K L )!( )] "asvar ntK *Q \. R  L $ This inequality depends on  only through an element of the tangent space. R Therefore, as motivated earlier, the lower bound (l.b.) for the asymptotic variance of any n consistent regular estimator of ( ) can be obtained  by choosing a Q 3lin ¹(,  ) to maximize the right-hand side of the above  inequality; i.e. sup Q \. $ Q Q (ZJGL 2 (   ($ M( Although the above optimization problem may appear quite unfamiliar, it leads to a rather simple result. To see this, pick any nonzero Q in the tangent space and examine the value taken by the pathwise derivative (Q ). If this equals zero, the chosen Q is infeasible and is discarded. On the other hand, if (Q )O0, let I "Q /(Q ). Note that I is well de"ned, nonzero, and an element of the tangent space. Since linear homogeneity of  guarantees that (I )"1, I is also feasible. Therefore, we can restate the previous equation as l.b."

Q



sup Q /(Q )\. $ (ZJGL 2 (   (Q $ But observe that l.b."

(1)

Q

sup Q /(Q )\" sup (Q )/Q  $ $ Q

Q  Q (ZJGL 2 (   ($ (ZJGL 2 (   ($

Q



"

sup (Q ), Q  $ (ZJGL 2 (   ( 

Q



where last equality follows because

 



(Q ) Q Q " and "1. Q  Q  Q  $ $ $ $ Finally, since  is a continuous linear functional on the tangent space, from Luenberger (1969, p. 105) we know that its norm  is given by H  " sup (Q ). H Q (ZJGL 2 (   (Q $ 

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

29

This shows that sup Q /(Q )\" . $ H (ZJGL 2 (   (Q $ Therefore, from (1) it follows that l.b." . H Calculating  to obtain the lower bound may not always be easy. See, for H instance, Cosslett (1987) where e$ciency bounds for the binary choice and censored linear regression models are obtained by calculating the norms of certain &derivatives'. Instead, by using a well-known result from Hilbert space theory, we can reduce the di$culty in obtaining the e$ciency bound by looking at an equivalent quantity that is often easier to compute. In order to do so, "rst recall that  is by de"nition a continuous linear functional on the tangent space. Moreover, since the tangent space is a closed subspace of L(1B; ), (lin ¹(,  ),  ) , ) ) is a Hilbert space. Therefore, by the  $ Riesz}FreH chet theorem described in Appendix A, there exists a unique H in the tangent space such that

Q

(Q )"H, Q ∀Q 3lin ¹(,  ) and  "H . H $ $  H is usually called the representer of the linear functional . Hence instead of using  to compute the e$ciency bound, we can use H with the bound now H being given by l.b."H . In short, to obtain the lower bound for the $ asymptotic variance of n consistent regular estimators of ( ), it su$ces to  "nd the representer of  and calculate the square of its Fisher norm. Starting with Section 3 we illustrate this solution technique by looking at some useful examples. Remark 2.1. (i) As mentioned earlier, pathwise di!erentiability of  means that the derivative of t C ( ) exists in the usual sense and has the form of R a continuous linear functional on the tangent space. In order to verify that  is pathwise di!erentiable the basic idea is to proceed as if the derivative exists and obtain a formal expression for (Q ). Next, check that  is a linear functional, set (Q )"Q ,H for all Q in the tangent space, and solve for the representer $ H. After verifying that H lies in the tangent space, from the Riesz}FreH chet theorem we can conclude that  is also continuous. Although we have not gone through these motions in subsequent examples, the reader should keep in mind this procedure for ascertaining the di!erentiability of . (ii) The assumption that t C  possess a tangent vector at t"0 is weaker R than the requirement that the square root of the parameterized density be di!erentiable in quadratic mean; namely, that there exist a S(z) such that $S(z)(R and 1B  (z)! (z)!t(S(z) (z)/2)  dz"o(t) as t 0. In particuR   lar, as this de"nition shows, quadratic mean di!erentiability requires not only the existence of a tangent vector but also that   (z) dz"o(t) as t 0. X  ( X R

30

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

However, as we are working with pdf's having full support, we can dispense with the latter requirement. (iii) Finally, we discuss the connection between our approach and the one described in Newey (1990, Section 3, pp. 104}106). This may be helpful to economists who are already familiar with this well-known paper. To obtain the e$ciency bound for estimating ( ) (we maintain our notation), Newey follows  a two-step procedure: (a) Find a d3L(1B; z) such that (Q )"$ d(z)S (z) ; and  (b) calculate "proj(dS), the orthogonal projection (using the &usual' inner product on the Hilbert space of random variables) of d(z) onto S, where S is the linear space spanned by the scores S (z) from one-dimensional subproblems.  Newey calls (z) the &e$cient in#uence function' and shows that the e$ciency bound for estimating ( ) is given by $ (z). In our notation, Newey's &e$cient  in#uence function' is just the score function evaluated at the representer; i.e. (z)"2H(z)/ (z). The di!erence between our approach and that of Newey is  that the Riesz}FreH chet theorem allows us to do in one step what Newey does in two. To see this, note that the basic reason behind the projection in Newey's step (b) is that d(z) in the "rst step is not uniquely determined. This is not to say that a function which solves step (a) uniquely cannot be found. In fact, this is exactly where the Riesz}FreH chet theorem enters the picture. If d(z) is held "xed the map S C $ d(z)S (z) is a bounded linear functional on S. Therefore, by the   Riesz}FreH chet theorem, there exists a unique 3S such that $ d(z)S (z) "  $

(z)S (z) for all S 3S. But this is just saying that "proj(dS)" ! Hence,   we can see that if we use the Riesz}FreH chet theorem in step (a) itself, the second step projection in Newey's procedure is not required. This is precisely what our approach does. By setting (Q )"H, Q for all Q in the tangent space and $ solving for H, we directly obtain the representer in one step. As the preceding discussion shows, it is not very surprising (or negative) that the two approaches are similar } they are just two ways of performing the same calculation. The main di!erence between the two approaches is that Newey works in a Hilbert space of random variables and uses projections, while we work in a Hilbert space of tangent vectors and use the representation theorem. Both approaches are useful and for a given problem one may be simpler than the other. 3. E7ciency bounds for population means We begin with computing the e$ciency bound for estimating the population mean of a d;1 random vector z. This is perhaps the simplest possible example  Because any random variable which is orthogonal to the score function can be added to d(z) without a!ecting the outcome in step (a). If d(z) in the "rst step can be uniquely determined, then from Newey (1990, Theorem 2.2, p. 103) we know that d(z) must be the unique in#uence function for estimating ( ). By de"nition, the in#uence function has mean zero and so is an element of the space  spanned by the scores. Hence if d(z) in step (a) is unique, the projection in step (b) is not required.

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

31

one can think of. For pedagogical purposes we now provide a step-by-step description of obtaining the e$ciency bound for estimating $(z). This should help the readers in following subsequent examples. Let us assume that the distribution of z has an unknown Lebesgue density  (z), where  3 and  

 

" 3L(1B; ) : (z)'0, (z) is bounded and continuous,

1B

zz (z) dz(R,



1B



(z) dz"1 .

Here A"(tr(AA) is the usual Euclidean norm. We want to obtain the e$ciency bound for estimating the d;1 vector  "1B z (z) dz, the population   mean of z. To simplify this problem so that we are looking at a real-valued feature of interest, consider estimating the functional ( )"c "   1B cz  (z) dz, where c31B is arbitrary.  As described in Section 2 we begin by parameterizing  with a one dimensional subproblem. For some t '0 let t C  be a curve from [0, t ] into  R   such that  passes through  and has a tangent vector Q at t"0. Since the R  loglikelihood for a single observation is log (z), the score for estimating t"0 R is given by S "d/dt log (z) "2Q (z)/ (z). Therefore, the Fisher informa R R  tion for estimating t"0 is i "41B Q (z) dz. $ As shown in Lemma B.1, the tangent space



lin ¹(,  )" Q 3L(1B; ) : 



1B



Q (z) (z) dz"0 . 

On this tangent space we can use the expression for i to de"ne the Fisher $ information inner product Q , Q "41B Q (z)Q (z) dz and the Fisher in  $   formation norm Q  "Q , Q  for estimating t"0. However, as empha $   $ sized earlier, not every tangent vector can be used to calculate the Fisher information. In particular, as we are interested in estimating the functional ( )"1B cz(z) dz, only those tangent vectors may be used to calculate R R i which also satisfy $



(Q )"2

1B

czQ (z) (z) dz. 

Since (lin ¹(,  ), ) , ) ) is a Hilbert space and  is a linear functional on  $ the tangent space, we can use the Riesz}FreH chet theorem to see that for all Q 3lin ¹(,  ) there exists a unique H3lin ¹(,  ) such that (Q )"   H, Q . But as we already know that H, Q "41B H(z)Q (z) dz, this means $ $ that



1 H(z)Q (z) dz" 2 1B



1B

cz (z)Q (z) dz ∀Q 3lin ¹(,  ).  

(2)

32

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

We use this functional identity to solve for the representer H. So look at (2). We claim that H(z)"c(z! ) (z).    To verify our claim we have to ensure that: (i) H satis"es (2), and (ii) it lies in the tangent space. The "rst part is easy. Since 1B Q (z) (z) dz"0 for any  Q 3lin ¹(,  ), it is straightforward to check that H satis"es (2). So it only  remains to verify that H also lies in the tangent space. But this is also straightforward since it is easily con"rmed that H3L(1B; ) and that 1B H(z) (z) dz"0.  Therefore, as in Section 2, the e$ciency bound for estimating c is 



l.b."H "c $

1B



(z! )(z! ) (z) dz c"c var(z)c.   

However, as c was arbitrary, the e$ciency bound for estimating  is given by  var(z). Note that since var(z) is the asymptotic variance of the sample mean, it follows that the sample mean is semiparametrically e$cient.

4. E7ciency bounds for quantiles We use the same setup as in Section 3 except for the fact that in this example z is a one-dimensional random variable. The objective now is to obtain the e$ciency bound for estimating the kth quantile of  , denoted by  "( ),    which is implicitly characterized by the equation M(  (z) dz"k. \  As before, letting  be a curve in  through  , it follows that the Fisher R  information for estimating t" 0 is i " 41 Q (z) dz"Q  , and the Fisher $ $ information inner product Q , Q " 41 Q (z)Q (z) dz. Furthermore, since   $   M(R (z) dz"k, di!erentiate both sides w.r.t t and evaluate at t"0 to obtain \ R  ( )(Q )#2F  (z)Q (z) dz"0; i.e. \    !2F  (z)Q (z) dz 2k  (z)Q (z) dz!2F  (z)Q (z) dz \  \  (Q )" " \  ,  ( )  ( )     where the second equality follows from the fact that any Q 3lin ¹(,  ) satis"es    (z)Q (z) dz"0. Using an indicator function the expression for (Q ) can \  be further simpli"ed to get



 k!(

(z)  (z) 1 \ F

 Q (z) dz" I , Q , $  ( ) 2 \   where I (z)" k!( (z)  (z)/ ( ). Since the fact that  I (z) (z) dz" \ F

   \  0 is easily veri"ed, it is clear that I 3lin ¹(, ). Therefore, we can use the  (Q )"2

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

33

Riesz}FreH chet theorem to conclude that the representer I (z) k!( (z)  (z) \ F

 . H(z)" " 2 2 ( )   Hence the e$ciency bound for regular estimators of  , the kth quantile of  ,   is given by



 k!( (z)  (z) k(1!k) \ F

 dz" .  ( )  ( ) \     Since we already know that the asymptotic variance of the kth sample quantile is k(1!k)/ ( ), we immediately obtain that the kth sample quantile is   semiparametrically e$cient. l.b."H " $

5. E7ciency bounds for distribution functions Once again we use the setup of Section 3. So let z be a d;1 random vector with unknown Lebesgue density  , where  3. We want to calculate the   e$ciency bound for estimating F ()"Pr z) , the cdf of z evaluated at some  "xed 31B. As before, in terms of a one dimensional parameterization we can think of estimating the functional ( )"1B ( (z)(z) dz where  is curve \ K

R R R in  through  .  As in the two previous examples the Fisher information for estimating t"0 is given by i "41B Q (z) dz"Q  , and the Fisher information inner product $ $ Q , Q "41B Q (z)Q (z) dz. Di!erentiating ( ) w.r.t t and evaluating the   $   R derivative at t"0 we get that



(Q )"2

1B

(

\ K

(z) (z)Q (z) dz. 

To simplify this even further let us write  as



(Q )"2

1B

(

\ K

(z) (z)Q (z) dz!2F ()  



1B

 (z)Q (z) dz. 

Here we have used the fact that if Q 3lin ¹(,  ) then 1B  (z)Q (z) dz"0. Now   let I (z)" ( (z)!F ()  (z). Since  is a linear functional on the tan\ K

  gent space, we can use the Riesz}FreH chet theorem to write (Q )"I , Q ∀Q 3lin ¹(,  ).  $  Looking at this equation we claim that the representer H(z)"I (z)/2. Since 1B H(z) (z) dz"0 it is clear that H3lin ¹(,  ) and our claim is readily   veri"ed.

34

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

Hence the e$ciency bound for n consistent regular estimators of F () is  given by



l.b."H " $

1B

( (z)!F ()  (z) dz"F ()[1!F ()]. \ K

   

As the empirical distribution function 1/nL ( (z ) has asymptotic variG \ K G ance F ()[1!F ()], this implies that the empirical distribution function is   semiparametrically e$cient. In the next section we generalize this example and obtain the e$ciency bound for estimating the cdf when some additional information is available.

6. E7ciency bounds for estimating distribution functions under auxiliary information We now obtain some results that seem to be new to the literature. We begin by generalizing the example in Section 5. So let (y, z) be continuously distributed random variables in 1;S , where S -1B. Fix (a , a ) in 1;1B. The objective X X   is to obtain e$ciency bound for estimating the joint cdf F (a , a )"    Pr y)a , z)a when we know f (z)"$(yz) to be an element of some set    F-L(S ; z). The idea here is to think of F as imposing some restrictions on X the functional form of $(yz). Hopefully, these restrictions will be motivated by economic theory although in many cases such restrictions are imposed merely for statistical convenience. Whatever the reason, an interesting question is to examine how these restrictions a!ect the joint distribution of (y, z) as far as estimating F (a , a ) is concerned. As an illustration, assume that F is the    collection of linear functions in L(S ; z). This happens, for instance, whenever X we do a linear regression of y on z. Our results provide a precise answer to the question: &Does knowing that z C $(yz) is linear help us in estimating F (a , a )?' Alternatively, one may want to know the e$ciency bound for    estimating F (a , a ) when $(yz)"0; i.e. when z has &no e!ect' on y. Indeed, by    varying F one can obtain results for many interesting cases. Some of these cases will be examined at the end of this section. It should be possible to extend the results obtained in this section to handle some other useful models. For example, although we do not pursue it here, one could obtain the e$ciency bound for estimating an expectation functional $ g(y, z) when we have a priori information about the conditional mean function $(yz) and g is known. Note that in this section we only investigate the special case g(y, z)"( y)a , z)a . By introducing some unknown "nite  dimensional parameters in g, this would lead to a generalization of the result in Brown and Newey (1998b). Results obtained in this section may also be useful to readers interested in the bootstrap methodology. To be more speci"c, when

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

35

prior information about nonparametric or semiparametric models is available, merely resampling the observed data to obtain an estimate of the cdf F usually  leads to poor inference from the bootstrap. See, for example, Brown and Newey (1998a). In order to obtain an e$cient bootstrapping procedure, one must resample using an e$cient estimate of the cdf; i.e. an estimate which incorporates all prior knowledge a researcher has about F . If this prior information is in  the nature of a restriction on the conditional mean function, then our results can help determine if a proposed estimator of F is e$cient or not. Indeed, the  results in this section may even prove helpful in constructing an e$cient estimator of F although we do not investigate this possibility in the current  paper. Some recent works which exploit the presence of auxiliary information in implementing the bootstrap include Hall and Horowitz (1996) and Zhang (1999). Before going any further let us set up some notation. We let b denote the  unknown marginal pdf of z, where



b 3B" b3L(S ; ) : b(z)'0, b(z) is bounded  X





b(z) dz"1 . 1X The conditional pdf of yz is denoted by v (yz), where v 3B and   and continuous,



B" v : 1;S P1 : X



v(yz) dy"1, v(yz)'0,

1

v(yz) is bounded and continuous and





yv(yz) dy is bounded .

1

For some t '0 let t C (v , b ) be a curve from [0, t ] into B;B which passes  R R  through (v , b ) at t"0. The score for estimating t"0 is   2v (yz) 2bQ (z) # . S "  v (yz) b (z)   From now on let (v , bQ )"3TQ "lin ¹(B, v );lin ¹(,  ), where T Q denotes   the product tangent space. The closure in lin ¹(B, v ) is w.r.t the L(1;S ; ;z)  X norm while the closure in lin ¹(B, b ) is w.r.t the L(S ; ) norm. As in Lemmas  X B.1 and B.2 we can show that

 

lin ¹(B, b )" bQ 3L(S ; ) :  X



1X



bQ (z)b (z) dz"0 , 

lin ¹(B, v )" v 3L(1;S ; ;z) :  X



1



v (yz)v (yz) dy"0 w.p 1 . 

36

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

Using the fact that 1 v (yz)v (yz) dy"0 for almost all z3S and for all  X v 3lin ¹(B, v ), it is straightforward to see that the Fisher information for  estimating t"0 is



 

bQ (z) dz. 1X For any  ,  3TQ the Fisher information inner product is then   i "4$ $ X

1

v (yz) dy #4

 ,  "4$   $ X



 

v (yz)v (yz) dy #4  

bQ (z)bQ (z) dz   1X and the Fisher information norm  ",  . Note that (TQ ,  ) , ) ) is $ $ $ a Hilbert space. Now let A "(!R, a ], A "(!R, a ], and consider estimating the     joint cdf 1



(v , b )" R R

(  (y)(  (z)v(yz)b(z) dy dz.   R R 1X 1 Di!erentiating this w.r.t t and evaluating at t"0 we get

   

()"2$ (  (z) X  #2

1

 

(  (y)v (yz)v (yz) dy  

(  (z) 

1

(  (y)v (yz) dy bQ (z)b (z) dz.   

1 But since  is a linear functional on the tangent space, we can use the Riesz}FreH chet theorem to see that for all 3TQ there exists a unique H"(vH, H)3T Q such that ()"H,  ; i.e. $ X

4$ X



1

 

vH(yz)v (yz) dy #4

   

" 2$ (  (z) X 

1

1X

bH(z)bQ (z) dz



(  (y)v (yz)v (yz) dy  



(  (z) (  (y)v (yz) dy bQ (z)b (z) dz.     1 1X Moreover, as the above equality holds for all 3T Q , we obtain #2



1

$ X

1 bH(z)bQ (z) dz" 2 X





  

1X

(  (z) 

1 vH(yz)v (yz) dy " $ 2 X 1

1



(  (y)v (yz) dy bQ (z)b (z) dz,   

(3)



(4)

1

(  (z)(  (y)v (yz)v (yz) dy .   

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

37

However, note that we cannot use every v 3lin ¹(B, v ) in (4) to calculate the  representer vH. In particular, since we have to impose the additional restriction that the conditional mean is an element of F, i.e.



g (z)" R

1

y(yz) dy3F, R

(5)

we can only use those Q 3lin ¹(B,  ) for which 



1

yQ (yz) (yz) dy3lin ¹(F, f ).  

(6)

Keep in mind that as F is treated as a subset of L(S ; z), to obtain lin ¹(F, f ) X  the closure is taken w.r.t the L(S ; z) norm. X All that remains now is to use (3), (4), and (6) to "nd the representer H"(bH, H). Calculating bH is straightforward. Look at (3) and let F (a )"?  (yz) dy. We claim that WX  \  bH(z)" (  (z)F (a )!$ [(  (z)F (a )] b (z).   WX  X  WX   To verify this claim observe that



1

1 bH(z)bQ (z) dz" 2 X



1X

(  (z)F (a )bQ (z)b (z) dz,  WX  

i.e. bH satis"es (3). Furthermore, since



1X

1 bH(z)b (z) dz" $ [(  (z)F (a )]!$ [(  (z)F (a )] "0  WX  X  WX  2 X 

and bH3L(S ; ), bH lies in the tangent space lin ¹(B,  ). Therefore, bH is X  indeed the right choice. We now show how to obtain H. Finding H is slightly hard due to the presence of the additional restriction given in (6). It is an awkward restriction to

 To see this, pick any Q 3lin ¹(B, ) and let g (z)"1 yQ (yz) (yz) dy. To show that   g 3lin ¹(F, f ) it su$ces to show that there exists an element in lin ¹(F, f ) which is arbitrarily   close to g in the L(S ; z) norm. Begin by observing that Q 3lin ¹(B, ) means that for any f'0 X  there exists a I 3lin ¹(B, ) such that $ 1 [Q (yz)!I (yz)] dy (. But since I 3lin ¹(B, ),  X  we can write I (yz)"I c I (yz) for some c ,2, c 31 and I ,2,I 3¹(B, ). Let G G G  I  I  g (z)"1 yI (yz) (yz) dy. From (5) it is easily seen that g is an element of ¹(F, f ). Therefore, G G  G  using the fact that 1 y (yz) dy is bounded, an application of the Cauchy}Schwarz inequality  shows that $ g (z)!I c g (z) (c where c denotes a generic positive constant. But since G G G I c g 3lin ¹(F, f ), we are done. G G G 

38

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

handle as it does not appear in the form of an equality restriction. Instead, it requires that a linear functional of H lie in the tangent space lin ¹(F, f ). How  does one incorporate this restriction into the &guess and verify' scheme we have been using so far? The key insight here is to realize that if we could come up with a H such that 1 yH(yz) (yz) dy was a projection (using some appropriate  inner product) onto lin ¹(F, f ), (6) would be automatically satis"ed since  a projection onto a linear space has to be an element of that space. So let "y!$(yz), (z)"var(z) and use proj g(z)lin ¹(F, f ) to denote N  the orthogonal projection of g, where g is some function of z, onto lin ¹(F, f )  using the &weighted' inner product g , g "$ g (z)g (z)/(z) . Also de"ne   N  



(z)"

1

y(  (y) (yz) dy!F (a )$(yz),   WX 

 (z)"(  (z)(z)!proj (  (z)(z)lin ¹(F, f ) . N  N   Observe that  (z) is the residual obtained when (  (z)(z) is projected onto N  lin ¹(F, f ) using the &weighted' inner product  ) , ) . We claim that  N 1  H(yz)" (  (z)(  (y)!(  (z)F (a )! (z)  (yz). (7)    WX  N 2 (z) 





To verify that this is indeed the right solution, all we have to do is to check that the proposed H lies in the tangent space lin ¹(B,  ), and that it satis"es (4)  and (6). Let us "rst verify that H3lin ¹(B,  ). Assuming that  $ (z)/(z) (R, it is easy to see that H3L(1;S ; ;z). Furthermore, by N X direct calculation we get that





1 $(z) H(yz) (yz) dy" (  (z)F (a )!(  (z)F (a )! (z)   WX   WX  N 2 (z) 1



"0, since $(z)"0. Therefore H3lin ¹(B,  ).  Next, we con"rm that H satis"es (4). So pick any Q 3lin ¹(B,  ) and let  g (z)"1 yQ (yz) (yz) dy. From (6) we know that g 3lin ¹(F, f ). Therefore,   using the fact that 1 Q (yz) (yz) dy"0, after a little algebra we can show that  1 g (z) H(yz)Q (yz) dy" (  (z)(  (y)Q (yz) (yz) dy! (z) .   N (z) 2 1  1







But since  (z) is the residual form projecting onto lin ¹(F, f ) using  ) , ) , we N  N know that it has to be orthogonal to lin ¹(F, f ); i.e. $ ( (z)/(z))g (z) "0 for  N any g 3lin ¹(F, f ). Hence, taking expectation on both sides of the previous 

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

39

equation, we have $ X





1 H(yz)Q (yz) dy " $ 2 X 1



1



(  (z)(  (y)Q (yz) (yz) dy ,   

i.e. H satis"es (4). Finally, since a little algebra shows that



1 yH(yz) (yz) dy" (  (z)(z)! (z)  N 2  1 1 " proj (  (z)(z)lin ¹(F, f ) 3lin ¹(F, f ), N    2

H also satis"es (6) and we can conclude that H is indeed the desired representer. Therefore, the e$ciency bound for estimating F (a , a ) is given by    l.b."H "4$ $ X



1

 

H(yz) dy #4

bH(z) dz. 1X

Now it is easy to show that



bH(z) dz"var($ (  (z)(  (y)z ).   1X Furthermore, as some straightforward calculations reveal, 4

4$ X



1



H(yz) dy "$ (  (z)F (a ) !$ (  (z)F (a ) X  WX  X  WX 

  

#$



 (z)  (z) N !2$ (  (z)(z) N .  (z) (z)

But it is also easy to see that $ (  (z)F (a ) !$ (  (z)F (a ) "$ var[(  (z)(  (y)z] , X  WX  X  WX  X    (z)  (z) $ (  (z)(z) N "$ N .  (z) (z)



  

Therefore, we obtain that

 

(z) l.b."F (a , a )[1!F (a , a )]!$ N .       (z)

(8)

Observe the form of the bound. From Section 5 we know that F (a , a )[1!F (a , a )] is the e$ciency bound for estimating F (a , a ) when          there are no additional restrictions on the joint distribution of (y, z). The term

40

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

$ (z)/(z) may therefore be thought of as the potential &gain' in estimating N F (a , a ) due to the availability of the auxiliary information $(yz)3F. It is of    some interest to know when this gain is positive; i.e. when does this extra information about $(yz) help us estimate the joint cdf better? Clearly, $ (z)/(z) '0 when the residual  (z), obtained by projecting (  (z)(z) onto N N  lin ¹(F, f ) using the &weighted' inner product  ) , ) , is non-zero almost surely. N  Therefore, the restriction f 3F will help us &do better' in estimating F (a , a )     provided the set F is such that the tangent space lin ¹(F, f ) is a proper  subspace of L(S ; z). The following examples present some interesting cases in X point. Example 6.1. Suppose F"L(S ; z) so that lin ¹(F, f )"L(S ; z). Since X  X (  (z)(z)3L(S ; z), it follows that  (z)"0. Therefore, the lower bound for  X N estimating F (a , a ) is just F (a , a )[1!F (a , a )]. Intuitively, this makes          sense. If f is &essentially unrestricted' we know that the best estimator of the  joint cdf is the empirical cdf. Example 6.2. Suppose that F" 0 so that $(yz)"0. This means that regressing y on z has &no e!ect'. In this case lin ¹(F, f )" 0 , implying that   (z)"(  (z)(z)!0"(  (z)1 y(  (y) (yz) dy. Therefore, the lower bound for N     estimating the joint cdf is F (a , a )[1!F (a , a )]!$ (z)/(z) .       N Example 6.3. Suppose that F is the set of all linear functions in L(S ; z). As X F is a closed linear subspace of L(S ; z) we get that lin ¹(F, f )"F. ThereX  fore, since F is a proper subspace of L(S ; z),  (z)O0 almost surely and the X N lower bound for estimating F (a , a ) is given by (8). Observe that the projection    for obtaining  (z) can be explicitly calculated as N z(  (z)(z) zz \  proj (  (z)(z)lin ¹(F, f ) "$ $ z. N   (z) (z)



 

Example 6.4. Now suppose that F is the set of all additive functions in L(S ; z); X i.e. F consists of functions of the form B h (z ), where z denotes the jth H H H H component of z and each h is square integrable w.r.t the distribution of z . Once H H again, since F is a closed linear subspace of L(S ; z), lin ¹(F, f )"F, X   (z)O0 almost surely, and the lower bound for estimating F (a , a ) is given by N    (8). However, unlike previous examples, in general no closed-form solution for proj (  (z)(z)lin ¹(F, f ) is known. N   Example 6.5. Next, let S be a convex compact subset of 1 and C(S ) the set of X X all real-valued twice continuously di!erentiable functions on S . Assume that X

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

41

F is set of all concave functions in C(S ). As shown in Tripathi (1999), X lin ¹(F, f ) in this case is equal to L(S ; z) and so  (z)"0; i.e. concavity of f is  X N  not helpful, at least asymptotically and in the class of n regular estimators of F (a , a ), in estimating F (a , a ) more e$ciently. A similar result holds if we       let F denote the set of all increasing functions in C(S ). X Example 6.6. Finally, we let S be a convex compact subset of 1B such that z , X B the last component of z3S , is positive and bounded away from zero. F is the X set of all functions in C(S ) which are also homogeneous of degree r31. The X degree of homogeneity is assumed known. As demonstrated in Tripathi (1999), in this case we can show that lin ¹(F, f ) is the set of all functions in L(S ; z)  X which are also homogeneous of degree r. Since this is a proper subspace of L(S ; z),  (z)O0 almost surely and therefore homogeneity of f helps in X N  estimating F (a , a ) more e$ciently. Letting w"(z /z ,2, (z )/z ), it is easy     B B\ B to show that in this example



  

( (z)(z)zP Bw proj (  (z)(z)lin ¹(F, f ) "zP $   B N  (z)

$

zP B w . (z)

7. Bound for a conditional expectation functional Consider the conditional expectation f (z)"$(yz), where y is a random  variable with support S -1 and z is a S -valued random variable with W X S -1B. It is well known that since evaluation functionals are not bounded, X f cannot be estimated by a n consistent estimator. However, as some recent  examples in econometrics have demonstrated, n estimation of certain functionals of f is indeed possible. In this section we obtain the lower bound for  e$cient estimation of one such functional. So let  "D f (z)(z) dz be the   object of interest where (z) is some known &weight' function, DLS is a comX pact region of integration, and D (z) dz(R. To simplify computations write



 " 

D



f (z)(z) dz" 



$(yz)(z) dz" D

D

1

W

y (yz)(z) dy dz, 

where  (yz) is the unknown conditional pdf of yz and 



 3B"  : S ;S P 1 :  W X



1

(yz) dy"1, (yz)'0, (yz) is

W

bounded and continuous, and



1W



y(yz) is bounded .

42

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

 (z) denotes the unknown pdf of z, and   3" 3L(S ; ) : (z)'0, (z) is bounded and continuous,  X





(z) dz"1 . 1X An illustration of this setup may be found in Newey and McFadden (1994), where  denotes the approximate change in consumer surplus for a given price  change. In their example, Newey and McFadden assume that y denotes the quantity demanded, z the vector of prices, (z)"1, and that the demand function, as a function of the prices, is given by f (z)"$(yz). Let us obtain the  e$ciency bound for estimating  . We will use this bound to show that the  estimator of  in Newey and McFadden (1994) is semiparametrically e$cient.  This seems to be a new result in the literature. Following our usual approach we can write the joint density of (y, z) as  (yz) (z). For some t '0 let t C ( ,  ) be a curve from [0, t ] into B;    R R  which passes through ( ,  ) at t"0. The score for estimating t"0 is given   by S "2Q (yz)/ (yz)#2Q (z)/ (z). Let "(Q , Q ) and T Q "lin ¹(B,  );     lin ¹(,  ). The elements of  are the tangent vectors to ( ,  ) at t"0, and  R R TQ denotes the product tangent space. As in Lemmas B.1 and B.2 we can show that

 

lin ¹(,  )" Q 3L(S ; ) : X 



1X



Q (z) (z) dz"0 , 

lin ¹(B,  )" Q 3L(S ;S ; ;z) :  W X





Q (yz) (yz) dy"0 w.p 1 .  1W Hence using the fact that  W Q (yz) (yz) dy"0 for almost all z, it is straight1  forward to see that the Fisher information for estimating t"0 is given by

     



 



Q (yz)  i "4$ #4 Q (z) dz $  (yz)  1X

Q (yz) dy  (z) dz#4 Q (z) dz.  1X 1W 1X Looking at the expression for i , for any  ,  3TQ we can write the Fisher $   information inner product as "4



Q (yz)Q (yz) dy  (z) dz#4 Q (z)Q (z) dz.      1X 1W 1X Obviously, the Fisher information norm  ",  . Note that (T Q ,),) ) $ $ $ is a Hilbert space.  , "4   $

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

43

We now consider estimating the real valued functional



( ,  )" R R

y(yz)(z) dy dz. R

1 But this means that only those 3TQ may be used to calculate i which also $ satisfy D

W



y (yz)Q (yz)(z) dy dz.  1W Now, from the Riesz}FreH chet theorem, we know that for all 3TQ there exists a unique H"(H, H)3T Q such that ()"H,  ; i.e. $ ()"2

D



2

D

1W

y (yz)Q (yz)(z) dy dz 

 

"4





H(yz)Q (yz) dy  (z) dz#4 

H(z)Q (z) dz.

(9)

1 1 1 Since this holds for all (Q , Q )3T Q we immediately get that H"0. Hence for all Q 3lin ¹(B,  ), (9) reduces to  X

  1X

1W 1 " 2

W



X

H(yz)Q (yz) dy  (z) dz 

   



[y!f (z)] (yz)Q (yz)(z) dy dz   1W [y!f (z)] (yz)(z)(D (z)   " Q (yz) dy  (z) dz,  2 (z) 1X 1W  where the "rst equality follows from the fact that D





(10)



f (z) (yz)Q (yz)(z) dy"f (z)(z)  (yz)Q (yz) dy"0.     1W 1W Comparing both sides of (10) we claim that [y!f (z)] (yz)(z)(D (z)   H(yz)" . 2 (z)  Since it is straightforward to verify that H3L(S ;S ; ;z) and W X  W H(yz) (yz) dy"0, this con"rms that we have the right solution; i.e. 1  H"(H,0). Therefore, the e$ciency bound for n consistent regular estimators of  is  given by



l.b."H "$ $



[y!f (z)](z)(D (z)  .  (z) 

44

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

As is easily veri"ed, upon letting d"1, D"[a, b], and (z)"1, this is also the asymptotic variance of the estimator given in Newey and McFadden (1994, p. 2212); i.e. their estimator is semiparametrically e$cient.

8. Bounds for the linear and the partially linear model Consider iid data (x, y, z) from the partially linear model y"x #f (z)#,   where  31N and  is independent of (x, z). x is a p;1 random vector and the  only information we have about f is that f 3L(1B; z). We assume that any   intercept term is absorbed in f and that the objects of interest are the slope  coe$cients  . It is straightforward to see that  is identi"ed if and only if the   matrix $[x!$(xz)][x!$(xz)] exists and is nonsingular, and this condition is assumed to hold throughout this section. The density of , denoted by h , is unknown. h 3H where  



H" h3L(1; ) : h(u)'0, h(u) is bounded continuous and di!erentiable, the derivative h(u) is such that



0(

[h(u)] du(R, and

1





h(u) du"1 .

1

The conditions imposed upon H imply that h ($R)"0; i.e. the density of   vanishes at in"nity. To see this, pick any a, b in 1. Since h is di!erentiable at  all points in 1 and its derivative h is integrable, Theorem 8.21 of Rudin (1974, p.  179) implies that h (b)!h (a)"@ h (u) du. Therefore, integrability of h im  ?   plies that lim h (b)!h (a))@ h (u) du"0. But this means that h (b) ?   ? @   is a Cauchy sequence in 1 as bPR; i.e. lim h (b) exists. And since h is @   itself integrable this limit must be zero. As we can similarly show that h (!R)"0, we get h ($R)"0. This fact will be used subsequently.   w (x, z), the marginal density of (x, z) is also unknown, and w 3W where  



W" w3L(1N;1B; ;) : w(x, z)'0,





w(x, z) dx dz"1 . "1B Since  is independent of (x, z), the joint density of (x, y, z) may be written as p (y  x, z)w (x, z)"h ()w (x, z), where p (y  x, z) is the conditional pdf of y  x, z      and



p 3P" p : 1;1N;1BP1 : 



1

1N



p (y  x, z) dy"1, p (y  x, z)'0 .

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

45

Let us now calculate the e$ciency bound for estimating  . Later in this section  we will use this result to obtain e$ciency bound for estimating  in the linear  model y" #x #.   For some t '0 consider a curve t C ( , h , w , ) from [0, t ] into  R R R R  1N;H;W;L(1B; z) which passes through ( , h , w , f ) at t"0. Since the     joint density for a single observation, as a function of t, is given by p(y  x, z)w(x, z)"h(y!x ! (z))w(x, z), R R R R R R we can write the score for estimating t"0 as d 2p (y  x, z) 2w (x, z) S " log p(y  x, z)#log w(x, z)  " # .  dt R R R p (y  x, z) w (x, z)   Note that p (y  x, z)"hQ ()!h ()[xQ #(z)]  is the tangent to p at t"0, and h (u)"dh (u)/du. Similarly, Q , hQ , w ,  are the R   tangent vectors to  , h , w ,  , respectively, at t"0. Let "(p ,w ) and denote R R R R the product tangent space by T Q "lin ¹(P, p );lin ¹(W, w ). As in Lemmas   B.1 and B.2 we can show that

 

lin ¹(H, h )" hQ 3L(1; ) : 



1



hQ (u)h (u) du"0 , 

lin ¹(P, p )" p 3L(1;1N;1B; ;x;z) : 





1

p (y  x, z)p (y  x, z) dy"0 

w.p 1 . We do not need to obtain lin ¹(W, w ) explicitly since w is ancillary to  . In    particular, it will be quite easy to show that the representer wH"0. The zero function is always an element of the tangent space, whatever it may be. Since h () vanishes at $R, the Fisher information for estimating t"0 is  given by i "$S "4$ 1 p (y  x, z) dy #41N 1B w (x, z) dx dz; i.e.  V X $ " hQ ()!h ()[xQ #(z)]   #4 i "4$ w (x, z) dx dz. $ h () 1N 1B  " Thus for  ,  3TQ the Fisher information inner product is  ,  "4$     $ V X

1 p (y  x, z)p (y  x, z) dy #41N 1B w (x, z)w (x, z) dx dz; i.e. "     hQ ()!h ()[xQ # (z)]     ,  "4$    $ h () 



 





46

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

;



hQ ()!h ()[xQ # (z)]     h () 



#4

1N

"1B



w (x, z)w (x, z) dx dz.  

Since  is a vector, consider estimating the functional (p , w )"c where  R R R c31N is arbitrary. Hence we should only be looking at those 3T Q which satisfy ()"cQ . As  is a linear functional on the product tangent space and (T Q ,  ) , ) ) is a Hilbert space, we can use the Riesz}FreH chet theorem to see that $ for all 3TQ there exists a H"(pH, wH)3T Q such that ()"H,  . So let us $ set



hH()!h ()[xH#H(z)]  h () 

4$



#4

1N

"

1B



hQ ()!h ()[xQ #(z)]  h () 



wH(x, z)w (x, z) dx dz"cQ

for all (Q , hQ , w , )31N;lin ¹(H, h );lin ¹(W, w );L(1B; z) and solve for   (H, hH, wH, H). If we can also show that (H, hH, wH, H) is an element of 1N;lin ¹(H, h );lin ¹(W, w );L(1B; z), then it follows that   pH(y  x, z)"hH()!h ()[xH#H(z)]3lin ¹(P, p ).   Hence solving for (H, hH, wH, H) leads directly to H. We now carry out this solution strategy. Observe that the previous display leads immediately to

  

1N

"1B

wH(x, z)w (x, z) dx dz"0,

(11)



$

h ()[xH#H(z)]!hH()  hQ () "0, h () 

$

(h ())[xH#H(z)]!hH()h ()   (z) "0, h () 

(13)



(14)

(12)



Q $



(h ())[xxH#xH(z)]!xhH()h () Q c   " h () 4 

for all (Q , hQ , w , )31N;lin ¹(H, h );lin ¹(W, w );L(1B; z). Clearly, (11)   implies that wH"0. It remains to "nd (H, hH, H). So look at (12). Using the independence of  and (x, z) it is almost immediate that hH()"

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

47

h ()$[xH#H(z)] satis"es (12). hH is also feasible since 



1



hH(u)h (u) du"$[xH#H(z)] 

1

h (u)h (u) du  

$[xH#H(z)] "

h (R)!h (!R) "0.   2

Substitute the value for hH just obtained in (13) to get $ [x!$x]H#[H(z)!$H(z)] (z)"0 ∀ 3L(1B; z). Once again, it is straightforward to verify that H(z)"!$(x  z)H satis"es this identity. Since the second moment of x exists, it follows that H3L(1B; z). If we now substitute H(z)"!$(x  z)H in the expression for hH, we get that hH()"0. And plugging hH"0 in (14) we obtain Q $ (h ())[xxH#xH(z)]/h () "Q c/4. Finally, upon substituting in the   value of H, this expression reduces to Q $(SS)H"Q c, where S" ()[x!$(x  z)] and (u)"d/du log h (u)"2h (u)/h (u). As this equality holds    for all Q 31N, we get that $(SS)H"c. Since h 3L(1) and is nonzero almost  surely, we have 0($()(R. This fact along with the identi"cation condition for  ensures that $SS is nonsingular. Therefore, H" $SS \c and we have  shown that (wH,H, hH, H)"(0, $SS \c, 0,!$(x  z) $SS \c). Substituting these values in the expression for pH we get that pH(y  x, z)"hH()!h ()[xH#H(z)]  "!h ()[x!$(x  z)] $SS \c.  Hence the e$ciency bound for regular estimators of c is given by  l.b."H "4$ $ V X



1

 

pH(y  x, z) dy #4

wH(x, z) dx dz

1N

"1B

"c $SS \c. But since c was arbitrary, the e$ciency bound for regular n consistent estimators of  in the partially linear model is given by $SS \. We can also  use this result to obtain e$ciency bounds for regular estimators of  in the  linear model y" #x #. To see this, simply note that if we set z"0 in   the partially linear model, it reduces to a linear model with  "f (0). Hence the   e$ciency bound for regular estimators of  in the linear model is given by 

$SI SI  \, where SI "()[x!$x] and  is de"ned as before. Remark 8.1. E$ciency bounds for the linear model have been obtained by Bickel (1982), who showed that these bounds were attainable. Bounds for the partially linear model were obtained by Cuzick (1992), who also constructed an estimator that achieved the bounds.

48

T.A. Severini, G. Tripathi / Journal of Econometrics 102 (2001) 23}66

9. Efficiency bounds under unconditional moment restrictions

Let $z$ be a $d\times 1$ random vector with unknown Lebesgue density $\eta_0^2(z)$ which satisfies $m$ moment restrictions $E g(z,\beta_0) = 0$. The object of interest is $\beta_0 \in \mathbb{R}^p$ while $g: \mathbb{R}^d\times\mathbb{R}^p \to \mathbb{R}^m$, with $m \ge p$, is a vector of known functions. $\eta_0$ is an element of $\Gamma$ where, as before,
$$\Gamma = \Big\{\eta \in L^2(\mathbb{R}^d;\lambda):\ \eta(z) > 0,\ \eta(z)\ \text{is bounded and continuous, and}\ \int_{\mathbb{R}^d}\eta^2(z)\,dz = 1\Big\}.$$
We also assume that the components of $g$ are linearly independent w.p.1, ensuring that none of the moment restrictions are redundant. This requires that the matrix $\Omega_0 = E\{g(z,\beta_0)g'(z,\beta_0)\}$ exist and be nonsingular, a fact we will use later on in this example. For an arbitrary $c \in \mathbb{R}^p$ let us now determine the efficiency bound for estimating the functional $\rho(\beta_0,\eta_0) = c'\beta_0$.

Using an approach that should now be quite familiar to the reader, for some $t_0 > 0$ let $t \mapsto (\beta_t, \eta_t)$ be a curve from $[0,t_0]$ into $\mathbb{R}^p\times\Gamma$ which passes through $(\beta_0,\eta_0)$ at $t = 0$. The score for estimating $t = 0$ is given by $S = (d/dt)\log\eta_t^2(z)|_{t=0} = 2\dot\eta(z)/\eta_0(z)$. Thus the Fisher information for estimating $t = 0$ can be written as $i_F = 4\int_{\mathbb{R}^d}\dot\eta^2(z)\,dz = \|\dot\eta\|_F^2$, and the Fisher information inner product is $\langle\dot\eta_1,\dot\eta_2\rangle_F = 4\int_{\mathbb{R}^d}\dot\eta_1(z)\dot\eta_2(z)\,dz$. However, not every $\dot\eta \in \operatorname{lin} T(\Gamma,\eta_0)$, where the tangent space for this example is the same as given in Lemma B.1, may be used to calculate $i_F$. In particular, we can differentiate the moment conditions $E_t g(z,\beta_t) = \int g(z,\beta_t)\eta_t^2(z)\,dz = 0$ w.r.t. $t$ to see that only those $\dot\eta$ may be used to calculate $i_F$ which also satisfy $[\nabla_\beta E g(z,\beta_0)]\dot\beta + 2\int g(z,\beta_0)\eta_0(z)\dot\eta(z)\,dz = 0$; i.e.
$$[\nabla_\beta E g(z,\beta_0)]\dot\beta = -2\int_{\mathbb{R}^d} g(z,\beta_0)\eta_0(z)\dot\eta(z)\,dz. \qquad (15)$$

For notational ease let $D_0 = E\{\nabla_\beta g(z,\beta_0)\}$. Since $\beta_0$ may be overidentified, i.e. $m > p$, (15) will typically have a nonunique solution in $\dot\beta$. To solve for $\dot\beta$ we need an identification result. As in Rothenberg (1971) we can show that a sufficient condition for $\beta_0$ to be locally identified is that we find a nonstochastic full rank $m\times m$ matrix $W$ such that the matrix $D_0'WD_0$ is invertible. Assuming that we can do so, (15) can be solved to yield
$$\dot\beta = -2(D_0'WD_0)^{-1}D_0'W\int_{\mathbb{R}^d} g(z,\beta_0)\eta_0(z)\dot\eta(z)\,dz.$$

However, as the objective is to estimate the functional $\rho(\beta_t,\eta_t) = c'\beta_t$, the tangent vectors $\dot\eta$ used to calculate $\dot\beta$ also have to satisfy $\nabla\rho(\dot\eta) = c'\dot\beta$. Since $(\operatorname{lin} T(\Gamma,\eta_0), \langle\cdot,\cdot\rangle_F)$ is a Hilbert space, by the Riesz–Fréchet theorem we know that for every $\dot\eta \in \operatorname{lin} T(\Gamma,\eta_0)$ there exists a unique $\eta^* \in \operatorname{lin} T(\Gamma,\eta_0)$ such that $\nabla\rho(\dot\eta) = \langle\eta^*,\dot\eta\rangle_F$. Therefore, equating $4\int_{\mathbb{R}^d}\eta^*(z)\dot\eta(z)\,dz$ with $c'\dot\beta$ we get
$$4\int_{\mathbb{R}^d}\eta^*(z)\dot\eta(z)\,dz = -2\int_{\mathbb{R}^d} c'(D_0'WD_0)^{-1}D_0'W g(z,\beta_0)\eta_0(z)\dot\eta(z)\,dz.$$
But since this holds for all $\dot\eta \in \operatorname{lin} T(\Gamma,\eta_0)$, we claim that
$$\eta^*(z) = -\tfrac{1}{2}\,c'(D_0'WD_0)^{-1}D_0'W g(z,\beta_0)\,\eta_0(z).$$
To verify our claim it is easy to see that
$$\int_{\mathbb{R}^d}\eta^*(z)\eta_0(z)\,dz = -\tfrac{1}{2}\,c'(D_0'WD_0)^{-1}D_0'W\int_{\mathbb{R}^d} g(z,\beta_0)\eta_0^2(z)\,dz = -\tfrac{1}{2}\,c'(D_0'WD_0)^{-1}D_0'W\,E g(z,\beta_0) = 0, \qquad (16)$$
demonstrating that $\eta^*$ is indeed an element of $\operatorname{lin} T(\Gamma,\eta_0)$.

Following the discussion in Section 2, the efficiency bound for regular estimators of $c'\beta_0$ is given by $\|\eta^*\|_F^2$. Notice that this bound depends upon $W$, the auxiliary matrix used to solve for $\dot\beta$ in (15); i.e.
$$\text{l.b.}(W) = \|\eta^*\|_F^2 = c'(D_0'WD_0)^{-1}D_0'W\Omega_0 WD_0(D_0'WD_0)^{-1}c.$$
Thus we get that $\operatorname{asvar}(c'\hat\beta_n) \ge \text{l.b.}(W) = c'(D_0'WD_0)^{-1}D_0'W\Omega_0 WD_0(D_0'WD_0)^{-1}c$, where $\hat\beta_n$ is any $\sqrt{n}$-consistent regular estimator of $\beta_0$. This result is not very satisfactory as the lower bound depends upon $W$. To make the bound independent of $W$ we use Hansen (1982)'s trick. Since we know that $\Omega_0$ is nonsingular, write $\Omega_0 = P'P$ for some nonsingular $m\times m$ matrix $P$ and let $U = (D_0'WD_0)^{-1}D_0'WP' - (D_0'\Omega_0^{-1}D_0)^{-1}D_0'P^{-1}$. Then using the fact that $U(P')^{-1}D_0 = O_{p\times p}$, where $O_{p\times p}$ is the $p\times p$ null matrix, it is easy to see that
$$UU' = (D_0'WD_0)^{-1}D_0'W\Omega_0 WD_0(D_0'WD_0)^{-1} - (D_0'\Omega_0^{-1}D_0)^{-1}.$$
Hence $\text{l.b.}(W) = c'UU'c + c'(D_0'\Omega_0^{-1}D_0)^{-1}c$, which implies that
$$\operatorname{asvar}(c'\hat\beta_n) \ge \text{l.b.}(W) \ge c'(D_0'\Omega_0^{-1}D_0)^{-1}c,$$
where the last term is independent of $W$. Therefore, the efficiency bound for regular estimators of $c'\beta_0$ is given by $c'(D_0'\Omega_0^{-1}D_0)^{-1}c$. But as $c \in \mathbb{R}^p$ was arbitrary, the efficiency bound for regular estimators of $\beta_0$ is $(D_0'\Omega_0^{-1}D_0)^{-1}$.


Remark 9.1. This result is originally due to Chamberlain (1987), who obtained it by showing that any distribution could be approximated arbitrarily well by a multinomial distribution. In an influential paper Hansen (1982) had shown that this bound was sharp; i.e. the optimally weighted GMM estimator achieved the bound.
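The final expression $(D_0'\Omega_0^{-1}D_0)^{-1}$ is easy to evaluate by plug-in. The sketch below (ours) does so for an assumed overidentified design with one parameter and two moments; the moment function, its derivative, and the data-generating process are made up purely for illustration.

```python
# Illustrative sketch (ours): plug-in estimate of the unconditional-moment
# bound (D0' Omega0^{-1} D0)^{-1}.  Assumed design: z ~ N(beta0, 1) with the
# two moment functions g(z, b) = (z - b, z^2 - b^2 - 1), so m = 2 > p = 1.
import numpy as np

def g(z, b):
    # m = 2 moment functions evaluated observation by observation
    return np.column_stack([z - b, z**2 - b**2 - 1.0])

def dg_dbeta(z, b):
    # Jacobian of g w.r.t. beta, one m-vector per observation
    return np.column_stack([-np.ones_like(z), -2.0 * b * np.ones_like(z)])

rng = np.random.default_rng(1)
beta0 = 0.5
z = beta0 + rng.standard_normal(200_000)

G = g(z, beta0)
D_hat = dg_dbeta(z, beta0).mean(axis=0).reshape(-1, 1)   # estimate of D0 (m x p)
Omega_hat = G.T @ G / len(z)                             # estimate of Omega0 (m x m)

bound = np.linalg.inv(D_hat.T @ np.linalg.solve(Omega_hat, D_hat))
print("estimated efficiency bound for beta0:", bound.ravel())
```

In practice $D_0$ and $\Omega_0$ would be evaluated at a first-stage estimate of $\beta_0$ rather than at the true value used here.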

10. Efficiency bounds under conditional moment restrictions

Now suppose that the random vectors $(x, z)$ satisfy the conditional moment restriction $E\{g(z,\beta_0)\mid x\} = 0$ w.p.1. Here $x \in \mathbb{R}^s$, $z \in \mathbb{R}^d$, $\beta_0 \in \mathbb{R}^p$, and $g: \mathbb{R}^d\times\mathbb{R}^p \to \mathbb{R}^m$ is a vector of known functions. The objective is to obtain the efficiency bound for estimating $\beta_0$. To ensure that each of the $m$ components of $g$ provides nonredundant information about $\beta_0$, we once again require that the matrix $\Omega_0(x) = E\{g(z,\beta_0)g'(z,\beta_0)\mid x\}$ exists and is nonsingular w.p.1.

Let us write the joint density of $(x,z)$ as $w_0(x,z) = q_0^2(z\mid x)b_0^2(x)$. The conditional density of $z\mid x$, denoted by $q_0^2(z\mid x)$, is unknown and
$$q_0 \in Q = \Big\{q: \mathbb{R}^d\times\mathbb{R}^s \to \mathbb{R}:\ \int_{\mathbb{R}^d} q^2(z\mid x)\,dz = 1,\ q(z\mid x) > 0,\ q(z\mid x)\ \text{is bounded and continuous}\Big\}.$$
The marginal density of $x$, denoted by $b_0^2(x)$, is also unknown and
$$b_0 \in B = \Big\{b \in L^2(\mathbb{R}^s;\lambda):\ b(x) > 0,\ \int_{\mathbb{R}^s} b^2(x)\,dx = 1\Big\}.$$

So for some $t_0 > 0$ let $t \mapsto (\beta_t, q_t, b_t)$ be a curve from $[0,t_0]$ into $\mathbb{R}^p\times Q\times B$ which passes through $(\beta_0, q_0, b_0)$ at $t = 0$. For an arbitrary $c \in \mathbb{R}^p$ consider estimating the functional $\rho(\beta_t, q_t, b_t) = c'\beta_t$. We can write the score for estimating $t = 0$ as
$$S = \frac{d}{dt}\big[\log q_t^2(z\mid x) + \log b_t^2(x)\big]\Big|_{t=0} = \frac{2\dot q(z\mid x)}{q_0(z\mid x)} + \frac{2\dot b(x)}{b_0(x)}.$$
The elements of $v = (\dot q, \dot b)$ denote the tangent vectors to $(q_0, b_0)$ at $t = 0$. As in Lemma B.2 we can show that
$$\operatorname{lin} T(Q, q_0) = \Big\{\dot q \in L^2(\mathbb{R}^d\times\mathbb{R}^s;\lambda\times\nu_x):\ \int_{\mathbb{R}^d}\dot q(z\mid x)q_0(z\mid x)\,dz = 0\ \text{w.p.1}\Big\}.$$
We do not have to obtain an explicit expression for $\operatorname{lin} T(B, b_0)$ since $b_0$ is ancillary to $\beta_0$. In particular, as we will soon demonstrate, the representer $b^*$ will be shown to be zero, which is always an element of the tangent space, whatever it may be. Therefore, since $\int_{\mathbb{R}^d}\dot q(z\mid x)q_0(z\mid x)\,dz = 0$ for any $\dot q \in \operatorname{lin} T(Q, q_0)$, it is readily seen that the Fisher information for estimating $t = 0$ is given by
$$i_F = E S^2 = 4E_x\Big\{\int_{\mathbb{R}^d}\dot q^2(z\mid x)\,dz\Big\} + 4\int_{\mathbb{R}^s}\dot b^2(x)\,dx.$$
This implies that the Fisher information inner product is
$$\langle v_1, v_2\rangle_F = 4E_x\Big\{\int_{\mathbb{R}^d}\dot q_1(z\mid x)\dot q_2(z\mid x)\,dz\Big\} + 4\int_{\mathbb{R}^s}\dot b_1(x)\dot b_2(x)\,dx.$$

Now we also know that $\beta_t$ and $q_t$ have to satisfy the moment condition $E_t\{g(z,\beta_t)\mid x\} = \int_{\mathbb{R}^d} g(z,\beta_t)q_t^2(z\mid x)\,dz = 0$. Hence, differentiating both sides of this equality w.r.t. $t$ and evaluating at $t = 0$ we get that
$$D_0(x)\dot\beta + 2\int_{\mathbb{R}^d} g(z,\beta_0)q_0(z\mid x)\dot q(z\mid x)\,dz = 0, \qquad (17)$$
where $D_0(x) = E\{\nabla_\beta g(z,\beta_0)\mid x\}$. The presence of overidentifying moment restrictions implies that (17) will not have a unique solution for $\dot\beta$. But as in Rothenberg (1971) we can show that a sufficient condition which locally identifies $\beta_0$ in the conditional moment case is the nonsingularity of $E\{D_0'(x)W(x)D_0(x)\}$, where $W(x)$ is some nonsingular (w.p.1) $m\times m$ matrix. Assuming that this condition holds, we can use it to solve for $\dot\beta$ as follows: First premultiply (17) by $D_0'(x)W(x)$ to get
$$D_0'(x)W(x)D_0(x)\dot\beta + 2\int_{\mathbb{R}^d} D_0'(x)W(x)g(z,\beta_0)q_0(z\mid x)\dot q(z\mid x)\,dz = 0.$$
Next, take expectations on both sides w.r.t. $x$ and solve for $\dot\beta$; i.e.
$$\dot\beta = -2[E\{D_0'(x)W(x)D_0(x)\}]^{-1}E_x\Big\{\int_{\mathbb{R}^d} D_0'(x)W(x)g(z,\beta_0)q_0(z\mid x)\dot q(z\mid x)\,dz\Big\}.$$

But since we are estimating the functional $\rho(\beta_t, q_t, b_t) = c'\beta_t$, only those $v$ may be used to solve for $\dot\beta$ which also satisfy $\nabla\rho(v) = c'\dot\beta$. As $\nabla\rho$ is a linear functional on the product tangent space $\bar T = \operatorname{lin} T(Q, q_0)\times\operatorname{lin} T(B, b_0)$ and $(\bar T, \langle\cdot,\cdot\rangle_F)$ is a Hilbert space, the Riesz–Fréchet theorem implies that for all $v \in \bar T$ there exists a unique $v^* = (q^*, b^*) \in \bar T$ such that $\nabla\rho(v) = \langle v^*, v\rangle_F$; i.e.
$$4E_x\Big\{\int_{\mathbb{R}^d} q^*(z\mid x)\dot q(z\mid x)\,dz\Big\} + 4\int_{\mathbb{R}^s} b^*(x)\dot b(x)\,dx = c'\dot\beta \quad \forall\, v \in \bar T,$$
where $\dot\beta$ is given in the previous display. Since this holds for all $v \in \bar T$ it is easy to see that $b^* = 0$. We also claim that
$$q^*(z\mid x) = -\tfrac{1}{2}\,c'[E\{D_0'(x)W(x)D_0(x)\}]^{-1}D_0'(x)W(x)g(z,\beta_0)\,q_0(z\mid x).$$
To check our claim we can use the fact that $E\{g(z,\beta_0)\mid x\} = 0$ to verify $\int_{\mathbb{R}^d} q^*(z\mid x)q_0(z\mid x)\,dz = 0$; i.e. $q^* \in \operatorname{lin} T(Q, q_0)$ as required.

Therefore, $v^* = (q^*, 0)$ and the lower bound for the asymptotic variance of any $\sqrt{n}$-consistent regular estimator of $c'\beta_0$ (say $c'\hat\beta_n$) is given by $\|v^*\|_F^2$; i.e.
$$\operatorname{asvar}(c'\hat\beta_n) \ge \text{l.b.}(W) = \|v^*\|_F^2 = c'[E\{D_0'(x)W(x)D_0(x)\}]^{-1}E_x\{D_0'(x)W(x)\Omega_0(x)W(x)D_0(x)\}[E\{D_0'(x)W(x)D_0(x)\}]^{-1}c,$$
where $\Omega_0(x) = E\{g(z,\beta_0)g'(z,\beta_0)\mid x\}$. However, as is quite evident this bound depends upon $W(x)$, the auxiliary matrix used to solve for $\dot\beta$. To make the bound independent of $W$ we use the same trick as in the previous section; i.e. since $\Omega_0(x)$ is nonsingular w.p.1, write $\Omega_0(x) = P'(x)P(x)$ for some nonsingular $m\times m$ matrix $P(x)$ and let
$$U(x) = [E\{D_0'(x)W(x)D_0(x)\}]^{-1}D_0'(x)W(x)P'(x) - [E\{D_0'(x)\Omega_0^{-1}(x)D_0(x)\}]^{-1}D_0'(x)P^{-1}(x).$$
Then using the fact that $E\{U(x)(P'(x))^{-1}D_0(x)\} = O_{p\times p}$, it is easy to see that
$$\text{l.b.}(W) = c'E\{U(x)U'(x)\}c + c'[E\{D_0'(x)\Omega_0^{-1}(x)D_0(x)\}]^{-1}c.$$
But this implies that
$$\operatorname{asvar}(c'\hat\beta_n) \ge \text{l.b.}(W) \ge c'[E\{D_0'(x)\Omega_0^{-1}(x)D_0(x)\}]^{-1}c,$$
where the last term is independent of $W$. Therefore, the efficiency bound for regular estimators of $c'\beta_0$ is given by $c'[E\{D_0'(x)\Omega_0^{-1}(x)D_0(x)\}]^{-1}c$. But since $c \in \mathbb{R}^p$ was arbitrary, the efficiency bound for regular estimators of $\beta_0$ is $[E\{D_0'(x)\Omega_0^{-1}(x)D_0(x)\}]^{-1}$.

Remark 10.1. This result, using the multinomial argument, was obtained originally by Chamberlain (1987), who also demonstrated that the bound was sharp.
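As a numerical illustration (ours, not the paper's), consider the single conditional restriction $E\{y - x'\beta_0\mid x\} = 0$ with an assumed form for $\operatorname{var}(y\mid x)$. Then $D_0(x) = -x$, $\Omega_0(x) = \operatorname{var}(y\mid x)$, and the bound reduces to the familiar GLS variance $[E\{xx'/\operatorname{var}(y\mid x)\}]^{-1}$, which the sketch below evaluates by Monte Carlo.

```python
# Illustrative sketch (ours): the conditional-moment bound
# [E{ D0'(x) Omega0(x)^{-1} D0(x) }]^{-1} for the assumed single restriction
# E[y - x'beta0 | x] = 0 with var(y|x) = (1 + w^2)/2, where w is the
# non-constant regressor.  Here D0(x) = -x and Omega0(x) = var(y|x), so the
# bound is the usual GLS variance [E(x x' / var(y|x))]^{-1}.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = np.column_stack([np.ones(n), rng.standard_normal(n)])   # intercept plus one regressor
sigma2_x = 0.5 * (1.0 + x[:, 1] ** 2)                       # assumed conditional variance

D = -x                                                       # D0(x), m = 1, p = 2
info = np.einsum('ni,nj->ij', D / sigma2_x[:, None], D) / n  # E{ D' Omega^{-1} D }
bound = np.linalg.inv(info)
print("efficiency bound (GLS variance):\n", bound)
```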

11. Efficiency bounds for the binary choice model

In this section we obtain efficiency bounds for the slope coefficients in a binary choice model. Let $(\varepsilon, x)$ be random variables in $\mathbb{R}\times\mathbb{R}^p$ and let $y = 1(\varepsilon \le x'\beta_0)$. For convenience we define $x_{\beta_0} = x'\beta_0$. The slope coefficient $\beta_0 \in \mathbb{R}^p$ and the unobserved 'error' term $\varepsilon$ has an unknown conditional pdf $\gamma_0^2(\cdot\mid x)$. Following Klein and Spady (1993), we impose the index restriction that $\varepsilon$ depends upon $x$ only through $x'\beta_0$. To identify $\beta_0$ we assume that $e_k$, the $k$th (for $0 < k < 1$) quantile of $\gamma_0^2(\cdot\mid x)$, is known. Assume $\gamma_0 \in \Lambda$, where
$$\Lambda = \Big\{\lambda: \mathbb{R}\times\mathbb{R}^p \to \mathbb{R}:\ \lambda(u\mid x) = \lambda(u\mid x'\beta_0),\ \int_{\mathbb{R}}\lambda^2(u\mid x)\,du = 1,\ \lambda(u\mid x) > 0,\ \lambda(u\mid x)\ \text{is bounded and continuous, and}\ \int_{-\infty}^{e_k}\lambda^2(u\mid x)\,du = k\Big\}.$$
We let $F^{\varepsilon\mid x}(e) = \int_{-\infty}^{e}\gamma_0^2(u\mid x)\,du$ denote the conditional distribution function of $\varepsilon\mid x$. The unknown marginal density of $x$ is given by $b_0^2$ and
$$b_0 \in B = \Big\{b \in L^2(\mathbb{R}^p;\lambda):\ b(x) > 0,\ \int_{\mathbb{R}^p} b^2(x)\,dx = 1\Big\}.$$
We can write the joint density for a single observation as $a_0^2(y\mid x)b_0^2(x)$, where $a_0^2(y\mid x)$ is the conditional pmf of $y\mid x$ and
$$a_0 \in A = \{a: \{0,1\}\times\mathbb{R}^p \to \mathbb{R}:\ a(y\mid x) > 0,\ a^2(0\mid x) + a^2(1\mid x) = 1\}.$$
Since $a_0^2(1\mid x) = F^{\varepsilon\mid x}(x'\beta_0)$ and $a_0^2(0\mid x) = 1 - F^{\varepsilon\mid x}(x'\beta_0)$,
$$a_0(y\mid x) = [F^{\varepsilon\mid x}(x'\beta_0)]^{y/2}[1 - F^{\varepsilon\mid x}(x'\beta_0)]^{(1-y)/2}.$$
Let us now calculate the efficiency bound for estimating $\beta_0$.

For some $t_0 > 0$ let $t \mapsto (\beta_t, \gamma_t, b_t)$ be a curve from $[0,t_0]$ into $\mathbb{R}^p\times\Lambda\times B$ which passes through $(\beta_0, \gamma_0, b_0)$ at $t = 0$. Let $F_t^{\varepsilon\mid x}(x'\beta_t) = \int_{-\infty}^{x'\beta_t}\gamma_t^2(u\mid x)\,du$. As a function of $t$ the joint density for a single observation is
$$a_t^2(y\mid x)b_t^2(x) = [F_t^{\varepsilon\mid x}(x'\beta_t)]^{y}[1 - F_t^{\varepsilon\mid x}(x'\beta_t)]^{1-y}b_t^2(x),$$
where $a_t(y\mid x) = [F_t^{\varepsilon\mid x}(x'\beta_t)]^{y/2}[1 - F_t^{\varepsilon\mid x}(x'\beta_t)]^{(1-y)/2}$ is the curve passing through $a_0$ at $t = 0$. Thus the score for estimating $t = 0$ is given by
$$S = \frac{d}{dt}\big[\log a_t^2(y\mid x) + \log b_t^2(x)\big]\Big|_{t=0} = \frac{2\dot a(y\mid x)}{a_0(y\mid x)} + \frac{2\dot b(x)}{b_0(x)},$$
where, writing $F_0 \equiv F^{\varepsilon\mid x}(x'\beta_0)$,
$$\dot a(y\mid x) = \frac{[y - F_0]\big[\gamma_0^2(x'\beta_0\mid x'\beta_0)\,x'\dot\beta + 2\int_{-\infty}^{x'\beta_0}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du\big]}{2a_0(y\mid x)[F_0]^{1-y}[1 - F_0]^{y}}$$
is the tangent to $a_t$ at $t = 0$. Similarly, $(\dot\gamma, \dot b)$ are the tangents to $(\gamma_t, b_t)$ at $t = 0$. Note that the expression for the score function simplifies to
$$S = \frac{[y - F_0]\big[\gamma_0^2(x'\beta_0\mid x'\beta_0)\,x'\dot\beta + 2\int_{-\infty}^{x'\beta_0}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du\big]}{F_0[1 - F_0]} + \frac{2\dot b(x)}{b_0(x)}.$$

Let $v = (\dot a, \dot b)$, let $\bar T = \operatorname{lin} T(A, a_0)\times\operatorname{lin} T(B, b_0)$ denote the product tangent space, and let $c_\#$ be the counting measure on $\{0,1\}$. As in Lemma B.3 we can show that
$$\operatorname{lin} T(A, a_0) = \{\dot a \in L^2(\{0,1\}\times\mathbb{R}^p;\ c_\#\times\nu_x):\ \dot a(0\mid x)a_0(0\mid x) + \dot a(1\mid x)a_0(1\mid x) = 0\ \text{w.p.1}\}.$$
We do not need to calculate $\operatorname{lin} T(B, b_0)$ explicitly since it will be quite easy to show that the representer $b^* = 0$, and zero is always an element of the tangent space whatever it may be. Also, from Lemma B.3 we have
$$\operatorname{lin} T(\Lambda, \gamma_0) = \Big\{\dot\gamma \in L^2(\mathbb{R}\times\mathbb{R};\lambda\times\nu_{x'\beta_0}):\ \int_{-\infty}^{e_k}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0\ \text{and}\ \int_{\mathbb{R}}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0\ \text{w.p.1}\Big\}.$$
Using the fact that $E\{[y - F_0]/(F_0[1 - F_0])\mid x\} = 0$, it is straightforward to see that the Fisher information for estimating $t = 0$ is $i_F = E S^2 = 4E_x\{\dot a^2(0\mid x) + \dot a^2(1\mid x)\} + 4\int_{\mathbb{R}^p}\dot b^2(x)\,dx$; i.e.
$$i_F = E_x\Bigg\{\frac{\big[\gamma_0^2(x'\beta_0\mid x'\beta_0)\,x'\dot\beta + 2\int_{-\infty}^{x'\beta_0}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du\big]^2}{F_0[1 - F_0]}\Bigg\} + 4\int_{\mathbb{R}^p}\dot b^2(x)\,dx.$$
So for any $v_1, v_2 \in \bar T$ we can define the Fisher information inner product as $\langle v_1, v_2\rangle_F = 4E_x\{\dot a_1(0\mid x)\dot a_2(0\mid x) + \dot a_1(1\mid x)\dot a_2(1\mid x)\} + 4\int_{\mathbb{R}^p}\dot b_1(x)\dot b_2(x)\,dx$; i.e., writing $\gamma_0^2$ for $\gamma_0^2(x'\beta_0\mid x'\beta_0)$ and suppressing the arguments of the tangents,
$$\langle v_1, v_2\rangle_F = E_x\Bigg\{\frac{\big[\gamma_0^2\,x'\dot\beta_1 + 2\int_{-\infty}^{x'\beta_0}\dot\gamma_1\gamma_0\,du\big]\big[\gamma_0^2\,x'\dot\beta_2 + 2\int_{-\infty}^{x'\beta_0}\dot\gamma_2\gamma_0\,du\big]}{F_0[1 - F_0]}\Bigg\} + 4\int_{\mathbb{R}^p}\dot b_1(x)\dot b_2(x)\,dx.$$

As the parameter of interest is a vector, for an arbitrary $c \in \mathbb{R}^p$ consider estimating the real-valued functional $\rho(a_t, b_t) = c'\beta_t$. This means that we can only use those $v \in \bar T$ to calculate $i_F$ for which $\nabla\rho(v) = c'\dot\beta$. But because $\nabla\rho$ is a linear functional on the product tangent space and $(\bar T, \langle\cdot,\cdot\rangle_F)$ is a Hilbert space, we can use the Riesz–Fréchet theorem to see that for all $v \in \bar T$ there exists a unique $v^* = (a^*, b^*) \in \bar T$ such that $\nabla\rho(v) = \langle v^*, v\rangle_F$. So let us set
$$E_x\Bigg\{\frac{\big[\gamma_0^2\,x'\beta^* + 2\int_{-\infty}^{x'\beta_0}\gamma^*\gamma_0\,du\big]\big[\gamma_0^2\,x'\dot\beta + 2\int_{-\infty}^{x'\beta_0}\dot\gamma\gamma_0\,du\big]}{F_0[1 - F_0]}\Bigg\} + 4\int_{\mathbb{R}^p} b^*(x)\dot b(x)\,dx = c'\dot\beta$$
for all $(\dot\beta, \dot\gamma, \dot b) \in \mathbb{R}^p\times\operatorname{lin} T(\Lambda, \gamma_0)\times\operatorname{lin} T(B, b_0)$, and solve for $(\beta^*, \gamma^*, b^*)$. If $(\beta^*, \gamma^*, b^*)$ lies in $\mathbb{R}^p\times\operatorname{lin} T(\Lambda, \gamma_0)\times\operatorname{lin} T(B, b_0)$ then it follows that
$$a^*(y\mid x) = \frac{[y - F_0]\big[\gamma_0^2\,x'\beta^* + 2\int_{-\infty}^{x'\beta_0}\gamma^*(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du\big]}{2a_0(y\mid x)[F_0]^{1-y}[1 - F_0]^{y}} \in \operatorname{lin} T(A, a_0).$$
Hence to obtain $v^*$ it suffices to solve for $(\beta^*, \gamma^*, b^*)$. We now carry out this solution technique. The previous display leads immediately to
$$\int_{\mathbb{R}^p} b^*(x)\dot b(x)\,dx = 0, \qquad (18)$$
$$E_x\Bigg\{\frac{\big[\gamma_0^2\,x'\beta^* + 2\int_{-\infty}^{x'\beta_0}\gamma^*\gamma_0\,du\big]\int_{-\infty}^{x'\beta_0}\dot\gamma\gamma_0\,du}{F_0[1 - F_0]}\Bigg\} = 0, \qquad (19)$$
$$\dot\beta'\,E_x\Bigg\{\frac{\gamma_0^2\,x\big[\gamma_0^2\,x'\beta^* + 2\int_{-\infty}^{x'\beta_0}\gamma^*\gamma_0\,du\big]}{F_0[1 - F_0]}\Bigg\} = \dot\beta' c \qquad (20)$$
for all $(\dot\beta, \dot\gamma, \dot b) \in \mathbb{R}^p\times\operatorname{lin} T(\Lambda, \gamma_0)\times\operatorname{lin} T(B, b_0)$. Clearly, (18) implies that $b^* = 0$. To find $\gamma^*$ let us write $\gamma^*(u\mid x'\beta_0) = (\mathrm{I}) + (\mathrm{II})$, where $(\mathrm{I}) = \gamma^*(u\mid x'\beta_0)1(e_k \le x'\beta_0)$ and $(\mathrm{II}) = \gamma^*(u\mid x'\beta_0)1(e_k > x'\beta_0)$. A close look at (19) prompts us to claim that each piece can be taken proportional to $\gamma_0(u\mid x'\beta_0)$ on the intervals determined by $e_k$ and $x'\beta_0$: when $e_k \le x'\beta_0$,
$$(\mathrm{I}) = \begin{cases} 0 & \text{if } -\infty < u \le e_k,\\ -\dfrac{\gamma_0^2(x'\beta_0\mid x'\beta_0)\,E(x'\beta^*\mid x'\beta_0)}{2[F_0 - k]}\,\gamma_0(u\mid x'\beta_0) & \text{if } e_k < u \le x'\beta_0,\\ \dfrac{\gamma_0^2(x'\beta_0\mid x'\beta_0)\,E(x'\beta^*\mid x'\beta_0)}{2[1 - F_0]}\,\gamma_0(u\mid x'\beta_0) & \text{if } x'\beta_0 < u < \infty, \end{cases} \qquad (21)$$
and $(\mathrm{II})$ is the mirror-image construction for the case $e_k > x'\beta_0$, vanishing for $u > e_k$ (22). To check the validity of our claim we have to verify that the $\gamma^*$ defined above satisfies (19) and that $\gamma^* \in \operatorname{lin} T(\Lambda, \gamma_0)$. It is straightforward to show that when $e_k \le x'\beta_0$,
$$\int_{-\infty}^{x'\beta_0}\gamma^*(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = -\frac{\gamma_0^2(x'\beta_0\mid x'\beta_0)\,E(x'\beta^*\mid x'\beta_0)}{2}.$$
After verifying that this is also the case when $e_k > x'\beta_0$, we can use iterated expectations to show that $\gamma^*$ solves (19). Assuming $E_x\{\gamma_0^4(x'\beta_0\mid x'\beta_0)E(xx'\mid x'\beta_0)/(F_0[1 - F_0])\}$ exists, which is also a sufficient condition for the Fisher information $i_F$ to be finite, a little algebra shows that $\gamma^* \in L^2(\mathbb{R}\times\mathbb{R};\lambda\times\nu_{x'\beta_0})$. Since a direct calculation yields that
$$\int_{-\infty}^{\infty}\gamma^*(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0 \quad\text{and}\quad \int_{-\infty}^{e_k}\gamma^*(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0,$$
we can conclude that $\gamma^* \in \operatorname{lin} T(\Lambda, \gamma_0)$. Therefore, substituting the value of $\int_{-\infty}^{x'\beta_0}\gamma^*(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du$ in (20) we get that $\dot\beta' E\{\Omega_0(x'\beta_0)\}\beta^* = \dot\beta' c$, where the $p\times p$ matrix
$$\Omega_0(x'\beta_0) = \frac{\gamma_0^4(x'\beta_0\mid x'\beta_0)\,E\{[x - E(x\mid x'\beta_0)][x - E(x\mid x'\beta_0)]'\mid x'\beta_0\}}{F_0[1 - F_0]} = \frac{\gamma_0^4(x'\beta_0\mid x'\beta_0)\operatorname{var}(x\mid x'\beta_0)}{F_0[1 - F_0]}.$$
Since this holds for all $\dot\beta$ and since w.l.o.g. we may assume that $E\{\Omega_0(x'\beta_0)\}$ is nonsingular, we get $\beta^* = [E\{\Omega_0(x'\beta_0)\}]^{-1}c$. Finally, plugging $(\beta^*, \gamma^*)$ in the expression for $a^*$ and simplifying, we have
$$a^*(y\mid x) = \frac{[y - F_0]\,\gamma_0^2(x'\beta_0\mid x'\beta_0)[x - E(x\mid x'\beta_0)]'\beta^*}{2a_0(y\mid x)[F_0]^{1-y}[1 - F_0]^{y}}.$$
Therefore, the efficiency bound for regular estimators of $c'\beta_0$ is
$$\text{l.b.} = \|v^*\|_F^2 = 4E_x\{a^{*2}(0\mid x) + a^{*2}(1\mid x)\} + 4\int_{\mathbb{R}^p} b^{*2}(x)\,dx = \beta^{*\prime}E\{\Omega_0(x'\beta_0)\}\beta^* = c'[E\{\Omega_0(x'\beta_0)\}]^{-1}c.$$
But as $c \in \mathbb{R}^p$ was arbitrary, the lower bound for the asymptotic variance of $\sqrt{n}$-consistent regular estimators of $\beta_0$ is given by $[E\{\Omega_0(x'\beta_0)\}]^{-1}$.

The nonsingularity of $E\{\Omega_0(x'\beta_0)\}$ is without loss of generality because otherwise the slope coefficient $\beta_0$ would not be identified. To see this, suppose that $E\{\Omega_0(x'\beta_0)\}$ is singular, i.e. not of full column rank. This implies that there exists a $d \ne 0$ in $\mathbb{R}^p$ such that $E\{\Omega_0(x'\beta_0)\}d = 0$; i.e. $d'E\{\Omega_0(x'\beta_0)\}d = 0$. Since we already know that $\gamma_0^2(x'\beta_0\mid x'\beta_0) > 0$, this means that $\Pr\{x: \tilde x' d = 0\} = 1$, where $\tilde x = x - E(x\mid x'\beta_0)$. So letting $d = \beta_1 - \beta_2$ for some $\beta_1, \beta_2 \in \mathbb{R}^p$, we get that $\Pr\{x: \tilde x'\beta_1 = \tilde x'\beta_2\} = 1$; i.e. $F^{\varepsilon\mid x}(x'\beta_1 - \alpha_1) = F^{\varepsilon\mid x}(x'\beta_2 - \alpha_2)$, where $\alpha_i = E(x'\beta_i\mid x'\beta_0)$. Hence we can find two different location and slope parameters which yield the same distribution of the observables; i.e., in particular, the slope coefficient $\beta_0$ is not identified.

Remark 11.1. Utilizing different arguments, efficiency bounds for slope coefficients in the binary choice model have been obtained by Chamberlain (1986) and Cosslett (1987). Under the assumption that $\varepsilon$ was independent of $x$, Klein and Spady (1993) showed that the bounds were sharp.
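To see what a bound of this form looks like numerically, the sketch below (ours) evaluates the analogous matrix in the common normalization used by Klein and Spady (1993), where the coefficient on the first regressor is fixed at one and only the remaining coefficient is free, so that the object being inverted is a nonsingular scalar. The logistic error distribution and the Gaussian regressor design are assumptions made purely for illustration.

```python
# Illustrative sketch (ours): Monte Carlo evaluation of a Klein-Spady-type bound
# [E{ f^2(v) var(x2 | v) / (F(v)[1 - F(v)]) }]^{-1} for the free coefficient b2
# when the index is v = x1 + b2 * x2 and the coefficient on x1 is set to one.
# Assumed design: x1, x2 independent standard normal, eps ~ logistic, so
# var(x2 | v) = 1 / (1 + b2^2) in closed form and F, f are the logistic cdf/pdf.
import numpy as np

rng = np.random.default_rng(3)
n, b2 = 1_000_000, -0.5

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
v = x1 + b2 * x2                        # the index x'beta with beta = (1, b2)

F = 1.0 / (1.0 + np.exp(-v))            # logistic cdf evaluated at the index
f = F * (1.0 - F)                       # logistic pdf evaluated at the index
var_x2_given_v = 1.0 / (1.0 + b2**2)    # conditional variance for this Gaussian design

info = np.mean(f**2 / (F * (1.0 - F)) * var_x2_given_v)   # scalar information for b2
print("efficiency bound for the free slope coefficient:", 1.0 / info)
```

The same Monte Carlo recipe works for any assumed error distribution and regressor design; only the closed-form ingredients $f$, $F$, and $\operatorname{var}(x_2\mid v)$ change.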

12. Efficiency bounds for density weighted average derivatives

Once again let $f_0(z) = E(y\mid z)$, where $y$ is a random variable with support $S_y \subseteq \mathbb{R}$ and $z$ is a $d\times 1$ random vector with convex support $S_z \subseteq \mathbb{R}^d$. The unknown Lebesgue pdf of $z$ is $\eta_0^2(z)$, where $\eta_0 \in \Gamma$ and
$$\Gamma = \Big\{\eta: S_z \to \mathbb{R}:\ \eta(z) > 0,\ \eta(\operatorname{bdry}(S_z)) = 0,\ \int_{S_z}\eta^2(z)\,dz = 1,\ \eta(z)\ \text{is bounded and continuously differentiable with bounded derivatives}\Big\}.$$
In this section we obtain efficiency bounds for estimating the density weighted average derivatives $\delta_0 = E\{\eta_0^2(z)\,df_0(z)/dz\}$. Estimation of density weighted average derivatives and their usefulness in microeconometrics is discussed in Powell et al. (1989). We will use the result obtained here to conclude that the estimator of $\delta_0$ proposed in Powell et al. (1989) is semiparametrically efficient.

So let $\gamma_0^2(y\mid z)$ be the unknown conditional density of $y\mid z$, where
$$\gamma_0 \in B = \Big\{\gamma: S_y\times S_z \to \mathbb{R}:\ \int_{S_y}\gamma^2(y\mid z)\,dy = 1,\ \gamma(y\mid z) > 0,\ \gamma(y\mid z)\ \text{is bounded and continuous, and}\ \partial\gamma(y\mid z)/\partial z\ \text{and}\ \int_{\mathbb{R}} y\,\gamma^2(y\mid z)\,dy\ \text{are bounded}\Big\}.$$
Using this notation we can write the object of interest as
$$\delta_0 = E\Big\{\eta_0^2(z)\frac{df_0(z)}{dz}\Big\} = \int_{S_z}\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0^2(y\mid z)\,dy\Big]\eta_0^4(z)\,dz.$$
Since $\delta_0$ is a vector, we obtain the bound for $c'\delta_0$ where $c \in \mathbb{R}^d$ is arbitrary; i.e. we let
$$\rho(\gamma_t, \eta_t) = c'\int_{S_z}\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_t^2(y\mid z)\,dy\Big]\eta_t^4(z)\,dz$$
denote the real-valued functional to be estimated. As before, for some $t_0 > 0$ let $t \mapsto (\gamma_t, \eta_t)$ be a curve from $[0,t_0]$ into $B\times\Gamma$ which passes through $(\gamma_0, \eta_0)$ at $t = 0$. The score for estimating $t = 0$ is given by $S = 2\dot\gamma(y\mid z)/\gamma_0(y\mid z) + 2\dot\eta(z)/\eta_0(z)$. Let $v = (\dot\gamma, \dot\eta)$ and $\bar T = \operatorname{lin} T(B, \gamma_0)\times\operatorname{lin} T(\Gamma, \eta_0)$. The elements of $v$ are the tangent vectors to $(\gamma_0, \eta_0)$ at $t = 0$ and $\bar T$ denotes the product tangent space. Since it can be shown that
$$\operatorname{lin} T(\Gamma, \eta_0) = \Big\{\dot\eta \in L^2(S_z;\lambda):\ \int_{S_z}\dot\eta(z)\eta_0(z)\,dz = 0\Big\}, \qquad \operatorname{lin} T(B, \gamma_0) = \Big\{\dot\gamma \in L^2(S_y\times S_z;\lambda\times\nu_z):\ \int_{S_y}\dot\gamma(y\mid z)\gamma_0(y\mid z)\,dy = 0\ \text{w.p.1}\Big\},$$
the Fisher information for estimating $t = 0$ is
$$i_F = 4E\Big\{\frac{\dot\gamma^2(y\mid z)}{\gamma_0^2(y\mid z)}\Big\} + 4\int_{S_z}\dot\eta^2(z)\,dz = 4\int_{S_z}\Big[\int_{S_y}\dot\gamma^2(y\mid z)\,dy\Big]\eta_0^2(z)\,dz + 4\int_{S_z}\dot\eta^2(z)\,dz.$$
Thus for any $v_1, v_2 \in \bar T$, the Fisher information inner product is
$$\langle v_1, v_2\rangle_F = 4\int_{S_z}\Big[\int_{S_y}\dot\gamma_1(y\mid z)\dot\gamma_2(y\mid z)\,dy\Big]\eta_0^2(z)\,dz + 4\int_{S_z}\dot\eta_1(z)\dot\eta_2(z)\,dz.$$
However, as in the previous examples, not every tangent vector in $\bar T$ may be used to calculate $i_F$. In particular, since we are estimating the functional $\rho(\gamma_t, \eta_t) = \int_{S_z} c'[\frac{d}{dz}\int_{S_y} y\,\gamma_t^2(y\mid z)\,dy]\eta_t^4(z)\,dz$, only those $v \in \bar T$ can be used which also satisfy
$$\nabla\rho(v) = 2\int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0(y\mid z)\dot\gamma(y\mid z)\,dy\Big]\eta_0^4(z)\,dz + 4\int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0^2(y\mid z)\,dy\Big]\eta_0^3(z)\dot\eta(z)\,dz.$$
Since $(\bar T, \langle\cdot,\cdot\rangle_F)$ is a Hilbert space, by the Riesz–Fréchet theorem we know that for all $v \in \bar T$ there exists a unique $v^* = (\gamma^*, \eta^*) \in \bar T$ such that $\nabla\rho(v) = \langle v^*, v\rangle_F$; i.e.
$$2\int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0\dot\gamma\,dy\Big]\eta_0^4\,dz + 4\int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0^2\,dy\Big]\eta_0^3\dot\eta\,dz = 4\int_{S_z}\Big[\int_{S_y}\gamma^*\dot\gamma\,dy\Big]\eta_0^2\,dz + 4\int_{S_z}\eta^*\dot\eta\,dz.$$
But as this holds for all $(\dot\gamma, \dot\eta) \in \bar T$ we have
$$\int_{S_z}\eta^*(z)\dot\eta(z)\,dz = \int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0^2(y\mid z)\,dy\Big]\eta_0^3(z)\dot\eta(z)\,dz \qquad (23)$$
and
$$\int_{S_z}\Big[\int_{S_y}\gamma^*(y\mid z)\dot\gamma(y\mid z)\,dy\Big]\eta_0^2(z)\,dz = \frac{1}{2}\int_{S_z} c'\Big[\frac{d}{dz}\int_{S_y} y\,\gamma_0(y\mid z)\dot\gamma(y\mid z)\,dy\Big]\eta_0^4(z)\,dz. \qquad (24)$$
A long look at (23) should convince the reader that
$$\eta^*(z) = c'\Big[\eta_0^2(z)\frac{df_0(z)}{dz} - E\Big\{\eta_0^2(z)\frac{df_0(z)}{dz}\Big\}\Big]\eta_0(z) \qquad (25)$$
is the solution to (23). Any lingering doubts may be removed by verifying that the $\eta^*$ in (25) solves (23) and is an element of the tangent space $\operatorname{lin} T(\Gamma, \eta_0)$.

Obtaining $\gamma^*$ requires a little more effort. First, write (24) as
$$\int_{S_z}\Big[\int_{S_y}\gamma^*\dot\gamma\,dy\Big]\eta_0^2\,dz = \frac{1}{2}\int_{S_z}\int_{S_y} c'\,y\,\frac{\partial\gamma_0(y\mid z)}{\partial z}\dot\gamma(y\mid z)\,dy\,\eta_0^4(z)\,dz + \frac{1}{2}\int_{S_z}\int_{S_y} c'\,y\,\gamma_0(y\mid z)\frac{\partial\dot\gamma(y\mid z)}{\partial z}\,dy\,\eta_0^4(z)\,dz. \qquad (26)$$
Next, as in the proof of Lemma 2.1 of Powell et al. (1989, p. 1425), use the fact that $\eta_0(z)$ vanishes at the boundary of $S_z$ to see that
$$\frac{1}{2}\int_{S_z}\int_{S_y} c'\,y\,\gamma_0(y\mid z)\frac{\partial\dot\gamma(y\mid z)}{\partial z}\,dy\,\eta_0^4(z)\,dz = -\frac{1}{2}\int_{S_z}\int_{S_y} c'\,y\,\frac{\partial[\gamma_0(y\mid z)\eta_0^4(z)]}{\partial z}\dot\gamma(y\mid z)\,dy\,dz.$$
After substituting this in (26) and simplifying, (26) reduces to
$$\int_{S_z}\Big[\int_{S_y}\gamma^*(y\mid z)\dot\gamma(y\mid z)\,dy\Big]\eta_0^2(z)\,dz = -\frac{1}{2}\int_{S_z}\int_{S_y} c'\,y\,\frac{d\eta_0^4(z)}{dz}\gamma_0(y\mid z)\dot\gamma(y\mid z)\,dy\,dz. \qquad (27)$$
We now claim that
$$\gamma^*(y\mid z) = -\frac{c'}{2}\,\frac{d\eta_0^4(z)/dz}{\eta_0^2(z)}\,[y - f_0(z)]\gamma_0(y\mid z) = -c'\,\frac{d\eta_0^2(z)}{dz}\,[y - f_0(z)]\gamma_0(y\mid z). \qquad (28)$$
This claim may be verified by checking that $\gamma^*$ satisfies (27), $\gamma^* \in L^2(S_y\times S_z;\lambda\times\nu_z)$, and that $\int_{S_y}\gamma^*(y\mid z)\gamma_0(y\mid z)\,dy = 0$; i.e. $\gamma^*$ lies in the tangent space $\operatorname{lin} T(B, \gamma_0)$.

Therefore, using $\eta^*$ and $\gamma^*$ from (25) and (28), respectively, the efficiency bound for $\sqrt{n}$-consistent regular estimators of $c'\delta_0$ is given by
$$\text{l.b.} = \|v^*\|_F^2 = 4\int_{S_z}\Big[\int_{S_y}\gamma^{*2}(y\mid z)\,dy\Big]\eta_0^2(z)\,dz + 4\int_{S_z}\eta^{*2}(z)\,dz = 4c'E\Big\{\frac{d\eta_0^2(z)}{dz}\frac{d\eta_0^2(z)}{dz}'\operatorname{var}(y\mid z)\Big\}c + 4c'\operatorname{var}\Big\{\eta_0^2(z)\frac{df_0(z)}{dz}\Big\}c.$$
Since $c \in \mathbb{R}^d$ was arbitrary, the efficiency bound for $\sqrt{n}$-consistent regular estimators of $\delta_0$ is thus
$$4E\Big\{\frac{d\eta_0^2(z)}{dz}\frac{d\eta_0^2(z)}{dz}'\operatorname{var}(y\mid z)\Big\} + 4\operatorname{var}\Big\{\eta_0^2(z)\frac{df_0(z)}{dz}\Big\}.$$
It is now straightforward to verify that the asymptotic variance of the estimator of $\delta_0$ given in Powell et al. (1989, Theorem 3.1, p. 1412) is equal to the lower bound obtained above; i.e. the density weighted average derivative estimator obtained by Powell et al. (1989) is semiparametrically efficient.

Remark 12.1. Using a denseness argument, Samarov (1990) showed that the average derivative estimator in Härdle and Stoker (1989) was semiparametrically efficient. Samarov (1990, Remark 2, p. 171) mentions without proof that a similar result holds for the density weighted average derivative estimator. Newey and Stoker (1993) have also obtained efficiency results for the density weighted average derivative estimator.

13. Conclusion

We have used the notions of the Fisher information norm and the Fisher information inner product to present a straightforward approach to computing efficiency bounds in semiparametric models. In Sections 6 and 7 we have used this approach to obtain some results which seem to be new to the literature. We have also shown that it works quite well for some well-known models in econometrics. We hope that readers will find this approach useful enough, and simple enough, to apply to models of their own choice.


Acknowledgements

We thank the associate editor and two anonymous referees for comments that greatly improved this paper. We also thank Bruce Hansen, Yuichi Kitamura, Whitney Newey, and seminar participants at the Rice and Texas A & M Economics departments for helpful suggestions and encouragement. The second author gratefully acknowledges continued research support from the University of Wisconsin-Madison Graduate School.

Appendix A. Some useful definitions

Definition A.1 (Cone). Let $X$ be a vector space over $\mathbb{R}$. A subset $C$ of $X$ is called a cone if for any $c \in C$ and $\alpha \ge 0$ we have $\alpha c \in C$.

Definition A.2 (Tangent vector). Let $M$ be a subset of a normed vector space $(X, \|\cdot\|_X)$. A vector $\dot x \in X$ is said to be tangent to $M$ at the point $x_0 \in M$ if there exists a $t_0 > 0$ and a mapping $t \mapsto r_t$ of the interval $[0, t_0]$ into $X$ such that $x_0 + t\dot x + r_t \in M$ for all $t \in [0, t_0]$ and $\|r_t\|_X = o(t)$ as $t \downarrow 0$. By letting $x_t = x_0 + t\dot x + r_t$ we obtain a curve $t \mapsto x_t$ from $[0, t_0]$ into $X$ which passes through $x_0$ and is differentiable from the right at $t = 0$. $\dot x$ can be visualized as the slope of this curve at $t = 0$.

Definition A.3 (Tangent cone). The collection of vectors which are tangent to $M$ at the point $x_0 \in M$ is denoted by $T(M, x_0)$. As shown in Krabs (1979, p. 154), $T(M, x_0)$ is a closed (w.r.t. $\|\cdot\|_X$) nonempty cone in $X$ and is called the tangent cone to $M$ at $x_0$.

Definition A.4 (Pathwise differentiable functional). Let $M$ be a subset of a normed vector space $(X, \|\cdot\|_X)$ and $x_0$ an element of $M$. For some $t_0 > 0$ let $t \mapsto x_t$ be a curve from $[0, t_0]$ into $M$ which passes through $x_0$ at $t = 0$. $\dot x$ denotes the tangent vector to this curve at $t = 0$. A functional $\rho: M \to \mathbb{R}$ is said to be pathwise differentiable at $x_0$ if for any $x_t$ there exists a continuous linear functional $\nabla\rho: X \to \mathbb{R}$ such that $[\rho(x_t) - \rho(x_0)]/t - \nabla\rho(\dot x) \to 0$ as $t \downarrow 0$.

Theorem A.1 (Riesz–Fréchet). Let $L$ be a continuous linear functional on a Hilbert space $(V, \langle\cdot,\cdot\rangle)$ which is equipped with the norm $\|\cdot\| = \sqrt{\langle\cdot,\cdot\rangle}$. Then there exists a unique vector $v^* \in V$ such that for all $v \in V$, $L(v) = \langle v, v^*\rangle$. Furthermore, letting $\|L\|_*$ denote the norm of the linear functional $L$, we have $\|L\|_* = \|v^*\|$ and every $v^*$ determines a unique continuous linear functional in this way.

Proof. See Luenberger (1969, Theorem 2, p. 109). □
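The Riesz–Fréchet theorem is easiest to visualize in a finite-dimensional Hilbert space. The toy computation below (ours, not from the paper) takes $V = \mathbb{R}^3$ with the weighted inner product $\langle u, v\rangle = u'Av$ for an assumed symmetric positive definite $A$, finds the representer of the linear functional $L(v) = c'v$, and checks that $L(v) = \langle v, v^*\rangle$ and that $\|L\|_* = \|v^*\|$, exactly as Theorem A.1 asserts.

```python
# Toy illustration (ours) of Theorem A.1 in R^3 with inner product <u, v> = u' A v,
# A symmetric positive definite.  For L(v) = c'v the representer solves A v* = c,
# and its norm equals the operator norm of L.
import numpy as np

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])         # assumed inner-product matrix (SPD)
c = np.array([1.0, -2.0, 0.5])          # the linear functional L(v) = c'v

v_star = np.linalg.solve(A, c)          # representer: <v, v*> = v' A v* = c'v for all v

v = np.array([0.3, 1.1, -0.7])          # any test vector
assert np.isclose(c @ v, v @ A @ v_star)        # L(v) = <v, v*>

norm_vstar = np.sqrt(v_star @ A @ v_star)       # ||v*||
op_norm = np.sqrt(c @ np.linalg.solve(A, c))    # sup_{v != 0} |c'v| / ||v||
print(norm_vstar, op_norm)              # the two coincide, as the theorem asserts
```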


Appendix B. Tangent spaces to various sets

Lemma B.1. Let $\eta_0 \in \Gamma$, where $\Gamma$ is defined in Section 3. Then the tangent space
$$\operatorname{lin} T(\Gamma, \eta_0) = \Big\{\dot\eta \in L^2(\mathbb{R}^d;\lambda):\ \int_{\mathbb{R}^d}\dot\eta(z)\eta_0(z)\,dz = 0\Big\}.$$

Proof. For convenience let
$$S = \Big\{\dot\eta \in L^2(\mathbb{R}^d;\lambda):\ \int_{\mathbb{R}^d}\dot\eta(z)\eta_0(z)\,dz = 0\Big\}.$$
We begin by showing that $T(\Gamma, \eta_0) \subseteq S$. So let $\dot\eta$ be any element of $T(\Gamma, \eta_0)$. Since $\Gamma$ is a subset of $L^2(\mathbb{R}^d;\lambda)$, clearly $\dot\eta \in L^2(\mathbb{R}^d;\lambda)$. Now use the definition of a tangent cone to see that there exists a $t_0 > 0$ and a mapping $t \mapsto r_t$ of the interval $[0, t_0]$ into $L^2(\mathbb{R}^d;\lambda)$ such that
$$\eta_t(z) = \eta_0(z) + t\dot\eta(z) + r_t(z) \in \Gamma \quad \forall\, t \in [0, t_0]$$
and $\|r_t\|_{L^2(\mathbb{R}^d;\lambda)}/t \to 0$ as $t \downarrow 0$. Next, square both sides of the above equality and integrate w.r.t. $z$. Then using the fact that $\|r_t\|_{L^2(\mathbb{R}^d;\lambda)}/t \to 0$ as $t \downarrow 0$, it is straightforward to show that $\int_{\mathbb{R}^d}\dot\eta(z)\eta_0(z)\,dz = 0$; i.e. $\dot\eta \in S$.

The reverse direction $S \subseteq T(\Gamma, \eta_0)$ is shown in two steps. First, let
$$T^* = \Big\{\dot\eta \in L^2(\mathbb{R}^d;\lambda):\ \dot\eta(z)/\eta_0(z)\ \text{is bounded and}\ \int_{\mathbb{R}^d}\dot\eta(z)\eta_0(z)\,dz = 0\Big\}.$$
Now pick any $\eta^* \in T^*$ and for $t \ge 0$ define $\eta_t(z) = \eta_0(z)[1 + t\eta^*(z)/\eta_0(z)]^{1/2}$. Since $\eta^*(z)/\eta_0(z)$ is bounded by assumption, $1 + t\eta^*(z)/\eta_0(z) > 0$ for small enough $t$. Therefore, for small enough $t$, $\eta_t$ is well defined and strictly positive. Moreover, the fact that $\eta_0(z)$ is bounded and continuous in $z$ implies that $z \mapsto \eta_t(z)$ is bounded and continuous for small enough $t$. We conclude that, for small enough $t$, $\eta_t \in \Gamma$ since it is easily seen that $\int_{\mathbb{R}^d}\eta_t^2(z)\,dz = 1$ and $\int_{\mathbb{R}^d} zz'\eta_t^2(z)\,dz < \infty$. Thus $(d/dt)\eta_t(z)|_{t=0} = \eta^*/2 \in T(\Gamma, \eta_0)$ and since $T(\Gamma, \eta_0)$ is a cone we get that $\eta^* \in T(\Gamma, \eta_0)$. Hence we have shown that $T^* \subseteq T(\Gamma, \eta_0)$. If we can now show that $T^*$ is dense in $S$, we are done. (Since $T^* \subseteq T(\Gamma, \eta_0)$, taking closures w.r.t. the $L^2(\mathbb{R}^d;\lambda)$ norm on both sides we get $\bar T^* \subseteq \overline{T(\Gamma, \eta_0)} = T(\Gamma, \eta_0)$, where the equality follows from the fact that $T(\Gamma, \eta_0)$ is closed in the $L^2(\mathbb{R}^d;\lambda)$ norm. If $T^*$ is dense in $S$ then $\bar T^* = S$, and we obtain the desired result.)

To show that $T^*$ is dense in $S$, pick any $s \in S$. Let $C_c^\infty(\mathbb{R}^d)$ be the set of all real-valued infinitely differentiable functions on $\mathbb{R}^d$ with compact support. Keep in mind that $C_c^\infty(\mathbb{R}^d)$ is dense in $L^2(\mathbb{R}^d;\lambda)$. Then since $s \in L^2(\mathbb{R}^d;\lambda)$, for any $\epsilon > 0$ there exists a function $h_\epsilon \in C_c^\infty(\mathbb{R}^d)$ such that $\int_{\mathbb{R}^d}[s(z) - h_\epsilon(z)]^2\,dz < \epsilon$. Now define
$$\tilde h_\epsilon(z) = h_\epsilon(z) - \eta_0(z)\int_{\mathbb{R}^d} h_\epsilon(z)\eta_0(z)\,dz.$$
Then using the facts that (i) $h_\epsilon$ vanishes outside a compact set, (ii) $\eta_0$ is continuous and positive, and (iii) $\int_{\mathbb{R}^d}\eta_0^2(z)\,dz = 1$, it is straightforward to see that $\tilde h_\epsilon \in L^2(\mathbb{R}^d;\lambda)$, $\tilde h_\epsilon(z)/\eta_0(z)$ is bounded, and $\int_{\mathbb{R}^d}\tilde h_\epsilon(z)\eta_0(z)\,dz = 0$; i.e. $\tilde h_\epsilon \in T^*$. Furthermore, by the Cauchy–Schwarz inequality we have
$$\int_{\mathbb{R}^d}[s(z) - \tilde h_\epsilon(z)]^2\,dz \le 2\epsilon + 2\Big[\int_{\mathbb{R}^d}[s(z) - h_\epsilon(z)]\eta_0(z)\,dz\Big]^2 < 4\epsilon,$$
i.e. $\tilde h_\epsilon \in T^*$ is arbitrarily close to $s \in S$ in the $L^2(\mathbb{R}^d;\lambda)$ norm. Hence we have shown that $T^*$ is dense in $S$.

The above arguments show that $T(\Gamma, \eta_0) = S$. But $S$ is a linear subspace of $L^2(\mathbb{R}^d;\lambda)$ and is closed in the $L^2(\mathbb{R}^d;\lambda)$ norm. The result follows. □

Lemma B.2. Let $\gamma_0 \in B$ where $B$ is defined in Section 6. Then
$$\operatorname{lin} T(B, \gamma_0) = \Big\{\dot\gamma \in L^2(\mathbb{R}\times S_z;\lambda\times\nu_z):\ \int_{\mathbb{R}}\dot\gamma(y\mid z)\gamma_0(y\mid z)\,dy = 0\ \text{w.p.1}\Big\}.$$

Proof. The proof is very similar to Lemma B.1 and so we merely sketch out the essential details. Let $S = \{\dot\gamma \in L^2(\mathbb{R}\times S_z;\lambda\times\nu_z): \int_{\mathbb{R}}\dot\gamma(y\mid z)\gamma_0(y\mid z)\,dy = 0\ \text{w.p.1}\}$. From its definition, $B$ can be treated as a subset of the normed linear space $\{\gamma: \mathbb{R}\times S_z \to \mathbb{R}: \sup_{z\in S_z}\int_{\mathbb{R}}\gamma^2(y\mid z)\,dy < \infty\}$ equipped with the norm $\|\gamma\| = \sup_{z\in S_z}[\int_{\mathbb{R}}\gamma^2(y\mid z)\,dy]^{1/2}$. We can use this to show that $T(B, \gamma_0) \subseteq S$. The details are almost exactly like the first part of the proof of Lemma B.1 and are omitted. Since $S$ is a linear space and is closed in the $L^2(\mathbb{R}\times S_z;\lambda\times\nu_z)$ norm, we immediately have that $\operatorname{lin} T(B, \gamma_0) \subseteq S$. For the reverse direction $S \subseteq \operatorname{lin} T(B, \gamma_0)$ define
$$T^* = \Big\{\gamma: \mathbb{R}\times S_z \to \mathbb{R}:\ \int_{\mathbb{R}}\gamma^2(y\mid z)\,dy\ \text{and}\ \gamma(y\mid z)/\gamma_0(y\mid z)\ \text{are bounded, and}\ \int_{\mathbb{R}}\gamma(y\mid z)\gamma_0(y\mid z)\,dy = 0\Big\}.$$
As in the proof of Lemma B.1 we can show that $T^* \subseteq T(B, \gamma_0)$. Since $T^*$ is a linear space the final step is to show that $T^*$ is dense (in the $L^2(\mathbb{R}\times S_z;\lambda\times\nu_z)$ norm) in $S$. So pick any $s \in S$. For any $\epsilon > 0$ there exists an $h_\epsilon \in C_c^\infty(\mathbb{R}\times S_z)$ such that $E\{\int_{\mathbb{R}}[s(y\mid z) - h_\epsilon(y\mid z)]^2\,dy\} < \epsilon$. Let
$$s_\epsilon(y\mid z) = h_\epsilon(y\mid z) - \gamma_0(y\mid z)\int_{\mathbb{R}} h_\epsilon(y\mid z)\gamma_0(y\mid z)\,dy.$$
It is easily verified that $s_\epsilon \in T^*$ and an application of the Cauchy–Schwarz inequality yields that $E\{\int_{\mathbb{R}}[s(y\mid z) - s_\epsilon(y\mid z)]^2\,dy\} < 4\epsilon$. Hence we have shown that $s_\epsilon \in T^*$ is arbitrarily close to $s \in S$ in the $L^2(\mathbb{R}\times S_z;\lambda\times\nu_z)$ norm; i.e. $T^*$ is dense in $S$. □

Lemma B.3. Let $\gamma_0 \in \Lambda$ where $\Lambda$ is defined in Section 11. Then we have that $\operatorname{lin} T(\Lambda, \gamma_0) = S$, where
$$S = \Big\{\dot\gamma \in L^2(\mathbb{R}\times\mathbb{R};\lambda\times\nu_{x'\beta_0}):\ \int_{-\infty}^{e_k}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0\ \text{and}\ \int_{\mathbb{R}}\dot\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0\ \text{w.p.1}\Big\}.$$

Proof. As in Lemma B.2 we can show that $\operatorname{lin} T(\Lambda, \gamma_0) \subseteq S$. Proving $S \subseteq \operatorname{lin} T(\Lambda, \gamma_0)$ requires a slightly different construction than the one used in Lemma B.2, and so we sketch out the basic idea. As in the proof of Lemma B.1, first define
$$T^* = \Big\{\gamma: \mathbb{R}\times\mathbb{R} \to \mathbb{R}:\ \int_{\mathbb{R}}\gamma^2(u\mid x'\beta_0)\,du\ \text{and}\ \gamma(u\mid x'\beta_0)/\gamma_0(u\mid x'\beta_0)\ \text{are bounded,}\ \int_{\mathbb{R}}\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0,\ \int_{-\infty}^{e_k}\gamma(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du = 0\Big\}$$
and show that $T^* \subseteq T(\Lambda, \gamma_0)$. The details are omitted. Since $T^*$ is a linear space the final step is to show that $T^*$ is dense (in the $L^2(\mathbb{R}\times\mathbb{R};\lambda\times\nu_{x'\beta_0})$ norm) in $S$. So pick any $s \in S$. For any $\epsilon > 0$ there exists an $h_\epsilon \in C_c^\infty(\mathbb{R}\times\mathbb{R})$ such that $E_x\{\int_{\mathbb{R}}[s(u\mid x'\beta_0) - h_\epsilon(u\mid x'\beta_0)]^2\,du\} < \epsilon$. Recall that $\int_{-\infty}^{e_k}\gamma_0^2(u\mid x'\beta_0)\,du = k$ and define
$$s_\epsilon(u\mid x'\beta_0) = \begin{cases} h_\epsilon(u\mid x'\beta_0) - \dfrac{\gamma_0(u\mid x'\beta_0)}{k}\displaystyle\int_{-\infty}^{e_k} h_\epsilon(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du & \text{if } -\infty < u \le e_k,\\[6pt] h_\epsilon(u\mid x'\beta_0) - \dfrac{\gamma_0(u\mid x'\beta_0)}{1-k}\displaystyle\int_{e_k}^{\infty} h_\epsilon(u\mid x'\beta_0)\gamma_0(u\mid x'\beta_0)\,du & \text{if } e_k < u < \infty. \end{cases}$$
It is now easily verified that $s_\epsilon \in T^*$. Furthermore, an application of the Cauchy–Schwarz inequality yields that $E_x\{\int_{\mathbb{R}}[s(u\mid x'\beta_0) - s_\epsilon(u\mid x'\beta_0)]^2\,du\} < \tilde c\,\epsilon$, where $\tilde c$ is a generic constant. Hence $s_\epsilon \in T^*$ is arbitrarily close to $s \in S$ in the $L^2(\mathbb{R}\times\mathbb{R};\lambda\times\nu_{x'\beta_0})$ norm; i.e. $T^*$ is dense in $S$. □


References

Bahadur, R., 1964. On Fisher's bound for asymptotic variances. Annals of Mathematical Statistics 35, 1545–1552.
Bickel, P., 1982. On adaptive estimation. The Annals of Statistics 10, 647–671.
Bickel, P., Klaassen, C., Ritov, Y., Wellner, J., 1993. Efficient and Adaptive Estimation for Semiparametric Models. Johns Hopkins Press, Baltimore, MD.
Brown, B.W., Newey, W.K., 1998a. Efficient bootstrapping for semiparametric models. Manuscript.
Brown, B.W., Newey, W.K., 1998b. Efficient semiparametric estimation of expectations. Econometrica 66 (2), 453–464.
Chamberlain, G., 1986. Asymptotic efficiency in semiparametric models with censoring. Journal of Econometrics 32, 189–218.
Chamberlain, G., 1987. Asymptotic efficiency in estimation with conditional moment restrictions. Journal of Econometrics 34, 305–334.
Cosslett, S., 1987. Efficiency bounds for distribution free estimators of binary choice and censored regression models. Econometrica 55, 559–585.
Cuzick, J., 1992. Efficient estimates in semiparametric additive regression models with unknown error distribution. The Annals of Statistics 20 (2), 1129–1136.
Gijbels, I., 1999. Density estimation and applications. In: Lopes, N.M., Gonçalves, E. (Eds.), On Nonparametric and Semiparametric Statistics, Centro Internacional de Matematica, Portugal, Vol. 10.
Good, I., Gaskins, R., 1971. Nonparametric roughness penalties for probability densities. Biometrika 58, 255–277.
Hall, P., Horowitz, J.L., 1996. Bootstrap critical values for tests based on generalized method of moments estimators. Econometrica 64, 891–916.
Hansen, L.P., 1982. Large sample properties of generalized method of moments estimators. Econometrica 50, 1029–1054.
Härdle, W., Stoker, T., 1989. Investigating smooth multiple regression by the method of average derivatives. Journal of the American Statistical Association 84, 986–995.
Klein, R.W., Spady, R.H., 1993. An efficient semiparametric estimator for binary response models. Econometrica 61 (2), 387–421.
Krabs, W., 1979. Optimization and Approximation. Wiley, New York.
Luenberger, D.G., 1969. Optimization by Vector Space Methods. Wiley, New York.
Newey, W.K., 1990. Semiparametric efficiency bounds. Journal of Applied Econometrics 5, 99–135.
Newey, W.K., McFadden, D., 1994. Large sample estimation and hypothesis testing. In: Engle, R., McFadden, D. (Eds.), Handbook of Econometrics, Vol. IV. Elsevier Science B.V., Amsterdam, pp. 2111–2245.
Newey, W.K., Stoker, T.M., 1993. Efficiency of weighted average derivative estimators and index models. Econometrica 61 (5), 1199–1223.
Powell, J.L., Stock, J.H., Stoker, T.M., 1989. Semiparametric estimation of index coefficients. Econometrica 57 (6), 1403–1430.
Ritov, Y., Bickel, P.J., 1990. Achieving information bounds in non and semiparametric models. Annals of Statistics 18, 925–938.
Rothenberg, T.J., 1971. Identification in parametric models. Econometrica 39 (3), 577–591.
Rudin, W., 1974. Real and Complex Analysis. McGraw-Hill, New York.
Samarov, A., 1990. On asymptotic efficiency of average derivative estimates. In: Roussas, G. (Ed.), Nonparametric Functional Estimation and Related Topics, NATO ASI Series, Series C, Vol. 335. Kluwer Academic Publishers, New York, pp. 167–172.
Stein, C., 1956. Efficient nonparametric testing and estimation. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley, CA, pp. 187–195.


Tripathi, G., 1999. Local semiparametric efficiency bounds under shape restrictions. Econometric Theory, forthcoming.
van der Vaart, A., 1989. On the asymptotic information bound. The Annals of Statistics 17 (4), 1487–1500.
Wong, W.H., Severini, T.A., 1991. On maximum likelihood estimation in infinite dimensional parameter spaces. The Annals of Statistics 19 (2), 603–632.
Zhang, B., 1999. Bootstrapping with auxiliary information. The Canadian Journal of Statistics 27, 237–249.

A simpli"ed approach to computing e$ciency bounds in ...

(Section 4), and the cumulative distribution function (Section 5). Sections 6 ...... (y"z)dy is bounded, an application of the Cauchy}Schwarz inequality shows that ...

328KB Sizes 3 Downloads 65 Views

Recommend Documents

A Uniform Approach for Computing Supremal ...
†Department of Electrical Engineering and Computer Science, The University ... there is no supremal element for the class of observable sublanguages of K. For ...

a collocation approach for computing solar sail lunar ...
He notes that, as a general rule, the order of the integration scheme is ...... presented in a previous study by the authors.6 The data summary in Table 3 indicates.

A Hybrid Approach to Error Detection in a Treebank - language
of Ambati et al. (2011). The figure shows the pseudo code of the algo- rithm. Using this algorithm Ambati et al. (2011) could not only detect. 3False positives occur when a node that is not an error is detected as an error. 4http://sourceforge.net/pr

A Hybrid Approach to Error Detection in a Treebank - language
recall error identification tool for Hindi treebank validation. In The 7th. International Conference on Language Resources and Evaluation (LREC). Valleta, Malta.

Nonparametric Bounds on Returns to Education in ...
Mar 6, 2009 - (OHS) of 1995 and 1997 and LFS of September 2000) to show that returns to higher levels of education in South Africa are convex. The estimation procedure they use is OLS allowing for non-linear returns to education in the form of polyno

Introduction to Scientific Computing in Python - GitHub
Apr 16, 2016 - 1 Introduction to scientific computing with Python ...... Support for multiple parallel back-end processes, that can run on computing clusters or cloud services .... system, file I/O, string management, network communication, and ...

A Conceptual Network Approach to Structuring of Meanings in Design ...
A Conceptual Network Approach to Structuring of Meanings in Design.pdf. A Conceptual Network Approach to Structuring of Meanings in Design.pdf. Open.

a cautious approach to generalization in reinforcement ...
Department of Electrical Engineering and Computer Science, University of Li`ege, BELGIUM ..... even with small degree polynomial algorithms. As suggested by ...

A Data-Driven Approach to Question Subjectivity Identification in ...
1Department of Computer Science and Engineering,. The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. 2Google Research, Beijing 100084, ..... Fluon Elastomer material's detail? What does BCS stand for in college football?

Bounds to memory loss
analyst having more knowledge about the agent's forgetting than the agent has himself. 12. .... Conference on Artificial Intelligence, pp. 954—959. ... nomics and Business Administration, Department of Economics, Hellcvcicn 30,. N-5035 ...

A Stand-Development Approach to Oak Afforestation in ...
mixtures are being planted with little knowledge of subsequent stand development, leading to an inability to predict future stand composition for management purposes. .... More recently, managers have established species mixtures with ...... HAYNES,

a model-driven approach to variability management in ...
ther single or multi window), and it has a default value defined in the signature of the template .... syntax of a FSML to the framework API. Similarly to the binding ...

A Novel Approach to Automated Source Separation in ...
of the proposed method with other popular source separation methods is drawn. ... The alternative method for speech separation that is pre- sented in this paper is not .... around the pitch harmonics versus the overall energy of signal xm.

Buy-in: a radical approach to change ... - Sharon Drew Morgen
I'm going to have to fig- ure that out because I've certainly got a responsibility to the employees. SDM:What would you need to know or believe differently to be ...

A Dynamic Bayesian Network Approach to Location Prediction in ...
A Dynamic Bayesian Network Approach to Location. Prediction in Ubiquitous ... SKK Business School and Department of Interaction Science. Sungkyunkwan ...

A Pragmatist Approach to Integrity in Business Ethics - SAGE Journals
MANAGEMENT INQUIRY. / September 2004. Jacobs / PRAGMATISM. AND INTEGRITY IN. BUSINESS ETHICS. A Pragmatist Approach to Integrity in Business ...

A Unified Approach to Equilibrium Existence in ...
Jul 12, 2012 - E-mail: [email protected]. ‡CNRS ... E-mail: [email protected]. 1 ...... [16] M. Hirsch, Magill M. and Mas-Colell M. (1987).

A microparametric approach to syncretisms in nominal ...
scope over obviation, we predict to find number syncretisms that are sensitive to obviation in these languages and indeed this is the case. There is a pervasive tendency for single-syncretism languages, including Cree-Innu, Ojibwe, and Delaware, to a

A Flexible Approach to Efficient Resource Sharing in ...
multiple concurrent bursty workloads on a shared storage server. Such a situation ... minimum throughput guarantees [8, 16] (IOPS) or response time bounds [11 ...