Quantile-Based Nonparametric Inference for First-Price Auctions Vadim Marmer University of British Columbia Artyom Shneyerov CIREQ, CIRANO, and Concordia University August 26, 2008

Abstract We propose a quantile-based nonparametric approach to inference on the probability density function (PDF) of the private values in …rst-price sealedbid auctions with independent private values. Our method of inference is based on a fully nonparametric kernel-based estimator of the quantiles and PDF of observable bids. Our estimator attains the optimal rate of Guerre, Perrigne, and Vuong (2000), and is also asymptotically normal with the appropriate choice of the bandwidth. As an application, we consider the problem of inference on the optimal reserve price. JEL Classi…cation: C14, D44 Keywords: First-price auctions, independent private values, nonparametric estimation, kernel estimation, quantiles, optimal reserve price We thank Donald Andrews, Herman Bierens, Chuan Goh, Christian Gourieroux, Bruce Hansen, Sung Jae Jun, Roger Koenker, Guido Kuersteiner, Isabelle Perrigne, Joris Pinkse, James Powell, Je¤rey Racine, Yixiao Sun, and Quang Vuong for helpful comments. Pai Xu provided excellent research assistance. The …rst author gratefully acknowledges the research support of the Social Sciences and Humanities Research Council of Canada under grant number 410-2007-1998. This version of the paper was completed when the second author was visiting the Center for Mathematical Studies in Economics and Management Science at Kellogg School of Management, Northwestern University. Its warm hospitality is gratefully acknowledged.

1

1

Introduction

Following the seminal article of Guerre, Perrigne, and Vuong (2000), GPV hereafter, there has been an enormous interest in nonparametric approaches to auctions.1 By removing the need to impose tight functional form assumptions, the nonparametric approach provides a more ‡exible framework for estimation and inference. Moreover, the sample sizes available for auction data can be su¢ ciently large to make the nonparametric approach empirically feasible.2 This paper contributes to this literature by providing a fully nonparametric framework for making inferences on the density of bidders’ valuations f (v). The need to estimate the density of valuations arises in a number of economic applications, as for example the problem of estimating a revenue-maximizing reserve price.3 As a starting point, we brie‡y discuss the estimator proposed in GPV. For the purpose of introduction, we adopt a simpli…ed framework. Consider a random, i.i.d. sample bil of bids in …rst-price auctions each of which has n bidders; l indexes auctions and i = 1; : : : ; n indexes bids in a given auction. GPV assume independent private values (IPV). In equilibrium, the bids are related to the valuations via the equilibrium bidding strategy B: bil = B (vil ). GPV show that the inverse bidding strategy is identi…ed directly from the observed distribution of bids: v = (b)

b+

1 n

G (b) ; 1 g (b)

(1)

where G (b) is the cumulative distribution function (CDF) of bids in an auction with n bidders, and g (b) is the corresponding density. GPV propose to use nonparametric ^ and g^. When b = bil , the left-hand side of (1) will then give what GPV estimators G call the pseudo-values v^il = ^ (bil ). The CDF F (v) is estimated as the empirical 1

See a recent survey by Athey and Haile (2007). For example, List, Daniel, and Michael (2004) study bidder collusion in timber auctions using thousands of auctions conducted in the Province of British Columbia, Canada. Samples of similar size are also available for highway procurement auctions in the United States (e.g., Krasnokutskaya (2003)). 3 This is an important real-world problem that arises in the administration of timber auctions, for example. The actual objectives of the agencies that auction timber may vary from country to country. In the United States, obtaining a fair price is the main objective of the Forest Service. As observed in Haile and Tamer (2003), this is a vague objective, and determining the revenue maximizing reserve price should be part of the cost-bene…ts analysis of the Forest Service’s policy. In other countries, maximizing the expected revenue from each and every auction is a stated objective, as is for example the case for BC Timber Sales (Roise, 2005). 2

2

CDF, and the PDF f (v) is estimated by the method of kernels, both using v^il as observations. GPV show that, with the appropriate choice of the bandwidth, their estimator converges to the true value at the optimal rate (in the minimax sense; Khasminskii (1978)). However, the asymptotic distribution of this estimator is as yet unknown, possibly because both steps of the GPV method are nonparametric with estimated values v^il entering the second stage, and because the GPV estimator applies trimming to the observations that lie too close to the boundaries of the support of the bids distribution. The estimator f^ (v) proposed in this paper avoids the use of pseudo-values and does not involve trimming; it builds instead on the insight of Haile, Hong, and Shum (2003).4 They show that the quantiles of the distribution of valuations can be expressed in terms of the quantiles, PDF, and CDF of bids. We show below that this relation can be used for estimation of f (v). Consider the -th quantile of valuations Q ( ) and the -th quantile of bids q ( ). The latter can be easily estimated from the sample by a variety of methods available in the literature. As for the quantile of valuations, since the inverse bidding strategy (b) is monotone, equation (1) implies that Q ( ) is related to q ( ) as follows: Q( ) = q( ) +

(n

1) g (q ( ))

(2)

;

providing a way to estimate Q ( ) by a plug-in method. The CDF F (v) can then be recovered simply by inverting the quantile function, F (v) = Q 1 (v). Our estimator f^ (v) is based on a simple idea that by di¤erentiating the quantile function we can recover the density: Q0 ( ) = 1=f (Q ( )), and therefore f (v) = 1=Q0 (F (v)). Taking the derivative in (2) and using the fact that q 0 ( ) = 1=g (q ( )), we obtain, after some algebra, our basic formula: f (v) =

n n

1 1 g (q (F (v)))

1 n

F (v) g 0 (q (F (v))) 1 g 3 (q (F (v)))

1

:

(3)

Note that all the quantities on the right-hand side, i.e. g (b), g 0 (b), q ( ), F (v) = Q 1 (v) can be estimated nonparametrically, for example, using kernel-based methods. Once this is done, we can plug them in (3) to obtain our nonparametric estimator. 4

The focus of Haile, Hong, and Shum (2003) is a test of common values. Their model is therefore di¤erent from the IPV model, and requires an estimator that is di¤erent from the one in GPV. See also Li, Perrigne, and Vuong (2002).

3

The expression in (3) can be also derived using the following relationship between the CDF of values and the CDF of bids: F (v) = G (B (v)) : Applying the change of variable argument to the above identity, one obtains f (v) = g (B (v)) B 0 (v) = g (B (v)) = 0 (B (v)) =

n n

1 1 g (B (v))

1 n

F (v) g 0 (B (v)) 1 g 3 (B (v))

1

:

Note however, that from the estimation perspective, the quantile-based formula appears to be more convenient, since the bidding strategy function B involves integration of F (see equation (4) below). Also, as we show below, the quantile-based approach eliminates trimming. Furthermore, replacing B (v) with appropriate quantiles has no e¤ect on the asymptotic distribution of the estimator. Our framework results in the estimator of f (v) that is both consistent and asymptotically normal, with an asymptotic variance that can be easily estimated. Moreover, we show that, with an appropriate choice of the bandwidth sequence, the proposed estimator attains the minimax rate of GPV. In a Monte Carlo experiment, we compare the performances of the two estimators, our quantile-based estimator and GPV’s by comparing their …nite sample biases, mean squared errors, and median absolute deviations. Our conclusions is that neither estimator strictly dominates the other. The GPV estimator is more e¢ cient when the PDF of valuations has a positive derivative at the point of estimation and the number of bidders tends to be large. On the other hand, the quantile-based estimator is more e¢ cient when the PDF of valuations has a negative derivative and the number of bidders is small. As an application, we consider the problem of inference on the optimal reserve price. Several previous articles have considered the problem of estimating the optimal reserve price. Paarsch (1997) develops a parametric approach and applies his estimator to timber auctions in British Columbia. Haile and Tamer (2003) consider the problem of inference in an incomplete model of English auction, derive nonparametric bounds on the reserve price and apply them to the reserve price policy in the US 4

Forest Service auctions. Closer to the subject of our paper, Li, Perrigne, and Vuong (2003) develop a semiparametric method to estimate the optimal reserve price. At a simpli…ed level, their method essentially amounts to re-formulating the problem as a maximum estimator of the seller’s expected pro…t. Strong consistency of the estimator is shown, but its asymptotic distribution is as yet unknown. In this paper, we propose asymptotic con…dence sets (CSs) for the optimal reserve price. Our CSs are formed by inverting a collection of asymptotic tests of Riley and Samuelson’s (1981) equation determining the optimal reserve price. This equation involves the density f (v), and a test statistic with an asymptotically normal distribution under the null can be constructed using our estimator. The paper is organized as follows. Section 2 introduces the basic setup. Similarly to GPV, we allow the number of bidders to vary from auctions to auction, and also allow auction-speci…c covariates. Section 3 presents our main results. Section 4 discusses inference on the optimal reserve price. We report Monte Carlo results in Section 5. All proofs are contained in the Appendix.

2

De…nitions

Suppose that the econometrician observes the random sample f(bil ; xl ; nl ) : l = 1; : : : ; L; i = 1; : : : nl g, where bil is the equilibrium bid of bidder i submitted in auction l with nl bidders, and xl is the vector of auction-speci…c covariates for auction l. The corresponding unobservable valuations of the object are given by fvil : l = 1; : : : ; L; i = 1; : : : nl g. We make the following assumption similar to Assumptions A1 and A2 of GPV (see also footnote 14 in their paper). Assumption 1 (a) f(nl ; xl ) : l = 1; : : : ; Lg are i.i.d. (b) The marginal PDF of xl , ', is strictly positive and continuous on its compact support X Rd , and admits at least R 2 continuous derivatives on its interior. (c) The distribution of nl conditional on xl is denoted by N = fn; : : : ; ng for all x 2 X , n 2.

(njx) and has support

(d) fvil : l = 1; : : : ; L; i = 1; : : : ; nl g are i.i.d. and independent of the number of bidders conditional on xl with the PDF f (vjx) and CDF F (vjx). 5

(e) f ( j ), is strictly positive and bounded away from zero on its support, a compact interval [v (x) ; v (x)] R+ , and admits at least R continuous partial derivatives on f(v; x) : v 2 (v (x) ; v (x)) ; x 2 Interior (X )g. (f) For all n 2 N , X.

(nj ) admits at least R continuous derivatives on the interior of

In the equilibrium and under Assumption 1(c), the equilibrium bids are determined by Z vil 1 (F (ujxl ))n 1 du; (4) bil = vil n 1 (F (vil jxl )) v (see, for example, GPV). Let g (bjn; x) and G (bjn; x) be the PDF and CDF of bil , conditional on both xl = x and the number of bidders nl = n. Since bil is a function of vil , xl , and F ( jxl ), the bids fbil g are also i.i.d. conditional on (nl ; xl ). Furthermore, by Proposition 1(i) and (iv) of GPV, for all n = n; : : : ; n and x 2 X , g (bjn; x) has the compact support b (n; x) ; b (n; x) for some b (n; x) < b (n; x) and admits at least R + 1 continuous bounded partial derivatives. The -th quantile of F (vjx) is de…ned as Q ( jx) = F

1

( jx)

inf fv : F (vjx) v

g:

The -th quantile of G, q ( jn; x) = G

1

( jn; x) ;

is de…ned similarly. The quantiles of the distributions F (vjx) and G (bjn; x) are related through the following conditional version of equation (2): Q ( jx) = q ( jn; x) +

(n

1) g (q ( jn; x) jn; x)

:

(5)

Note that the expression on the left-hand side does not depend on n, since, by Assumption 1(d) and as it is usually assumed in the literature, the distribution of valuations is the same regardless of the number of bidders. The true distribution of the valuations is unknown to the econometrician. Our objective is to construct a valid asymptotic inference procedure for the unknown f using the data on observable bids. Di¤erentiating (5) with respect to , we obtain the 6

following equation relating the PDF of valuations with functionals of the distribution of the bids: 1 @Q ( jx) = @ f (Q ( jx) jx) n 1 = n 1 g (q ( jn; x) jn; x)

g (1) (q ( jn; x) jn; x) ; (n 1) g 3 (q ( jn; x) jn; x)

(6)

where g (k) (bjn; x) = @ k g (bjn; x) =@bk . Substituting = F (vjx) in equation (6) and using the identity Q (F (vjx) jx) = v, we obtain the following equation that represents the PDF of valuations in terms of the quantiles, PDF and derivative of PDF of bids: 1 n 1 = f (vjx) n 1 g (q (F (vjx) jn; x) jn; x) 1 F (vjx) g (1) (q (F (vjx) jn; x) jn; x) : n 1 g 3 (q (F (vjx) jn; x) jn; x)

(7)

Note that the overidentifying restriction of the model is that f (vjx) is the same for all n. In this paper, we suggest a nonparametric estimator for the PDF of valuations based on equations (5) and (7). Such an estimator requires nonparametric estimation of the conditional CDF and quantile functions, PDF and its derivative. Let K be a kernel function. We assume that the kernel is compactly supported and of order R. Assumption 2 K is compactly supported on [ 1; 1], has at least R derivatives on R R R, the derivatives are Lipschitz, and K (u) du = 1, uk K (u) du = 0 for k = 1; : : : ; R 1. To save on notation, denote Kh (z) =

1 z K , h h

and for x = (x1 ; : : : ; xd )0 , de…ne K h (x) =

1 x 1 Q xk Kd = d dk=1 K : d h h h h 7

Consider the following estimators: 1X ' ^ (x) = K h (xl L l=1 L

(8)

x) ;

X 1 1 (nl = n) K h (xl ^ (njx) = ' ^ (x) L l=1 L

x) ;

n

l XX 1 1 (nl = n) 1 (bil ^ (njx) ' ^ (x) nL l=1 i=1 n o 1 ^ ^ q^ ( jn; x) = G ( jn; x) inf b : G (bjn; x) ;

L

^ (bjn; x) = G

b) K h (xl

x) ;

b

g^ (bjn; x) =

1 ^ (njx) ' ^ (x) nL n L l XX 1 (nl = n) Kh (bil

b) K h (xl

(9)

x) ;

l=1 i=1

where 1 (S) is an indicator function of a set S R.5;6 The derivatives of the density g (bjn; x) are estimated simply by the derivatives of g^ (bjn; x): (k)

g^

( 1)k (bjn; x) = ^ (njx) ' ^ (x) nL n L l XX (k) 1 (nl = n) Kh (bil

b) K h (xl

x) ;

(10)

l=1 i=1

(k)

1 where Kh (u) = h1+k K (k) (u=h), k = 0; : : : ; R, and K (0) (u) = K (u). Our approach also requires nonparametric estimation of Q, the conditional quantile function of valuations. An estimator for Q can be constructed using the relationship between Q, q and g given in (5). A similar estimator was proposed by Haile, Hong, and Shum (2003) in a related context. In our case, the estimator of Q will be used to construct F^ , an estimator of the conditional CDF of valuations. Since F is 5

We estimate the CDF of bids by a conditional version of the empirical CDF. In a recent paper, Li and Racine (2008) discuss a smooth estimator of the CDF (and a corresponding quantile estimator) obtained by integrating the kernel PDF estimator. We, however, adopt the non-smooth empirical CDF approach in order for our estimator to be comparable with that of GPV; both estimator can be modi…ed by using the smooth conditional CDF estimator. 6 The quantile estimator q^ is constructed by inverting the estimator of the conditional CDF of bids. This approach is similar to that of Matzkin (2003).

8

related to Q through F (vjx) = Q

1

(vjx) = sup f : Q ( jx)

(11)

vg ;

2[0;1]

F^ can be obtained by inverting the estimator of the conditional quantile function. However, since an estimator of Q based on (5) involves kernel estimation of the PDF g, it will be inconsistent for the values of that are close to zero and one. In particular, such an estimator can exhibit large oscillations for near one taking on very small values, which, due to supremum in (11), might proliferate and bring an upward bias into the estimator of F . A possible solution to this problem that we pursue in this paper is to use a monotone version of the estimator of Q. First, we ^ p: de…ne a preliminary estimator, Q ^ p ( jn; x) = q^ ( jn; x) + Q

(n

1) g^ (^ q ( jn; x) jn; x)

Next, pick 0 su¢ ciently far from 0 and 1, for example, monotone version of the estimator of Q as follows. ^ ( jn; x) = Q

(

^ p (tjn; x) ; 0 supt2[ 0 ; ] Q ^ p (tjn; x) ; 0 inf t2[ ; 0 ] Q

0

:

(12)

= 1=2. We de…ne a

< 1; < 0:

(13)

^ ( jn; x) is given The estimator of the conditional CDF of the valuations based on Q by n o ^ ^ F (vjn; x) = sup : Q ( jn; x) v : (14) 2[0;1]

^ ( jn; x) is monotone, F^ is not a¤ected by Q ^ p ( jn; x) taking on small values Since Q ^ ( jn; x) near the near = 1. Furthermore, in our framework, inconsistency of Q boundaries does not pose a problem, since we are interested in estimating F only on a compact inner subset of its support. Using (7), for a given an we propose to estimate f (vjx) by the plug-in method, i.e. by replacing g (bjn; x), q ( jn; x), and F (vjx) in (7) with g^ (bjn; x), q^ ( jn; x), and F^ (vjn; x). That is our estimator f^ (vjn; x) is given by the reciprocal of n n

1 1 g^ q^ F^ (vjn; x) jn; x jn; x 9

F^ (vjn; x) g^(1) q^ F^ (vjn; x) jn; x jn; x

1 n

1

g^3 q^ F^ (vjn; x) jn; x jn; x

:

(15)

While the PDF of valuations does not depend on the number of bidders n, the estimator de…ned by (15) does, and therefore we have a series of estimators for f (vjx): f^ (vjn; x), n = n; : : : ; n. The estimators f^ (vjn; x) ; : : : ; f^ (vjn; x) can be averaged to obtain: n X f^ (vjx) = w^ (n; x) f^ (vjn; x) ; (16) n=n

where the weights w^ (n; x) satisfy

n X

w^ (n; x) !p w (n; x) > 0; w (n; x) = 1:

n=n

In the next section, we discuss how to construct optimal weights that minimize the asymptotic variance of f^ (vjx). We also suggest estimating the conditional CDF of v using the average of F^ (vjn; x), n = n; : : : ; n: n X ^ w^ (n; x) F^ (vjn; x) : (17) F (vjx) = n=n

3

Asymptotic properties

In this section, we discuss uniform consistency and asymptotic normality of the estimator of f proposed in the previous section. The consistency of the estimator of f follows from the uniform consistency of its components. It is well known that kernel estimators can be inconsistent near the boundaries of the support, and therefore we estimate the PDF of valuations at the points that lie away from the boundaries of [v (x) ; v (x)]. The econometrician can choose quantile values 1 and 2 such that 0 < 1 < 2 < 1; in order to cut o¤ the boundaries of the support where estimation is problematic. While v (x) and v (x) are unknown, consider instead the following interval of v’s for 10

selected

1

and

2:

^ (x) =

n h \

n=n

i ^ ( 1 jn; x) ; Q ^ ( 2 jn; x) : Q

(18)

The interval ^ (x) estimates (x) = [Q ( 1 jx) ; Q ( 2 jx)], the interval between the 1 and 2 quantiles of the distribution of bidders’ valuations. As we show below, our estimator of f is uniformly consistent and asymptotically normal when f is estimated at the points from ^ (x). In practice, 1 and 2 can be selected as follows. Since by Assumption 2 the length of the support of K is two, and following the discussion on page 531 of GPV, in the case with no covariates one can choose 1 and 2 such that [^ q ( 1 jn) ; q^ ( 2 jn)]

[bmin (n) + h; bmax (n)

h]

for all n 2 N , where bmin (n) and bmax (n) denote the minimum and maximum bids in the auctions with n bidders respectively. When there are covariates available and f is estimated conditional on xl = x, one can replace bmin (n) and bmax (n) with the corresponding minimum and maximum bids in the neighborhood of x as de…ned on page 541 of GPV. First, we present the following lemma which provides uniform convergence rates for the components of the estimator f^ on appropriate intervals. Since the bidding function is monotone, by Proposition 2.1 of GPV, there is an inner compact interval of the support of the bids distribution, say [b1 (n; x) ; b2 (n; x)], such that [q ( 1 jn; x) ; q ( 2 jn; x)]

[b1 (n; x) ; b2 (n; x)]

b (n; x) ; b (n; x) :

(19)

Note that the knowledge of [b1 (n; x) ; b2 (n; x)] is not required for construction of our estimator. Lemma 1 Under Assumptions 1 and 2, for all x 2 Interior (X ) and n 2 N , (a) ^ (njx) (b) ' ^ (x)

Lhd log L

(njx) = Op ' (x) = Op

Lhd log L

^ (bjn; x) (c) supb2[b(n;x);b(n;x)] jG

1=2

1=2

+ hR .

+ hR .

G (bjn; x) j = Op 11

Lhd log L

1=2

+ hR .

(d) sup (e) sup

2[";1 "]

j^ q ( jn; x)

2[0;1] (limt#

Lhd log L

q ( jn; x) j = Op

q^ (tjn; x)

q^ ( jn; x)) = Op

(f) supb2[b1 (n;x);b2 (n;x)] j^ g (k) (bjn; x)

1=2

+ hR , for all 0 < " < 1=2. 1

Lhd log(Lhd )

. Lhd+1+2k log L

g (k) (bjn; x) j = Op

1=2

+ hR , k =

0; : : : ; R, where [b1 (n; x) ; b2 (n; x)] is de…ned in (19). (g) sup

2[

1

";

such that

2 +"]

1

^ ( jn; x) jQ " > 0 and

(h) supv2 ^ (x) jF^ (vjn; x)

Q ( jx) j = Op 2

Lhd+1 log L

1=2

+ hR , for some " > 0

+ " < 1. 1=2

Lhd+1 log L

F (vjx) j = Op

+ hR , where ^ (x) is de…ned

in (18). As it follows from Lemma 1, the estimator of the derivative of g ( jn; x) has the slowest rate of convergence among all components of f^. Consequently, it determines the uniform convergence rate of f^. Theorem 1 Let ^ (x) be de…ned by (18). Under Assumptions 1 and 2, and for all x 2 Interior (X ), supv2 ^ (x) f^ (vjx)

f (vjx) = Op

Lhd+3 log L

1=2

+ hR .

Remarks. 1. The theorem also holds when ^ (x) is replaced by an inner closed subset of [v (x) ; v (x)], as in Theorem 3 of GPV. 2. One of the implications of theorem is that our estimator achieves the optimal rate of GPV. Consider the following choice of the bandwidth parameter: h = 1=2 and hR are of the same orc (L= log L) . By choosing so that Lhd+3 = log L der, one obtains = 1= (d + 3 + 2R) and the rate (L= log L) R=(d+3+2R) , which is the same as the optimal rate established in Theorem 3 of GPV. Next, we discuss asymptotic normality of the proposed estimator. We make following assumption. Assumption 3 Lhd+1 ! 1, and Lhd+1+2k

1=2

hR ! 0.

The rate of convergence and asymptotic variance of the estimator of f are determined by g^(1) (bjn; x), the component with the slowest rate of convergence. Hence, 12

Assumption 3 will be imposed with k = 1 which limits the possible choices of the bandwidth for kernel estimation. For example, if one follows the rule h = cL , then has to be in the interval (1= (d + 3 + 2R) ; 1= (d + 1)). As usual for asymptotic normality, there is some under smoothing relative to the optimal rate. Lemma 2 Let [b1 (n; x) ; b2 (n; x)] be as in (19). Then, under Assumptions 1-3, for all b 2 [b1 (n; x) ; b2 (n; x)], x 2 Interior (X ), and n 2 N , (a) Lhd+1+2k

1=2

g^(k) (bjn; x)

g (k) (bjn; x) !d N (0; Vg;k (b; n; x)), where

Vg;k (b; n; x) = Kk g (bjn; x) = (n (njx) ' (x)) ; and Kk =

R

K 2 (u) du

d

R

K (k) (u)

2

du.

(b) g^(k) (bjn1 ; x) and g^(k) (bjn2 ; x) are asymptotically independent for all n1 6= n2 , n1; n2 2 N . Now, we present the main result of the paper. By (55) in the Appendix, we the following decomposition: f^ (vjn; x)

f (vjx) =

F (vjx) f 2 (vjx) 1) g 3 (q (F (vjx) jn; x) jn; x)

(n

g^(1) (q (F (vjx) jn; x) jn; x)

g (1) (q (F (vjx) jn; x) jn; x) + op

Lhd+3

1=2

: (20)

Lemma 2, de…nition of f^ (vjn; x), and the decomposition in (20) lead to the following theorem. Theorem 2 Let ^ (x) be de…ned by (18). Under Assumptions 1, 2, and 3 with k = 1, for v 2 ^ (x), x 2 Interior (X ), and n 2 N , Lhd+3

1=2

f^ (vjn; x)

f (vjx) !d N (0; Vf (v; n; x)) ;

where Vf (v; n; x) is given by

n (n

1)2

K1 F 2 (vjx) f 4 (vjx) ; (njx) ' (x) g 5 (q (F (vjx) jn; x) jn; x) 13

and K1 is de…ned in Lemma 2. Furthermore, f^ (vjn; x) ; : : : ; f^ (vjn; x) are independent. Remarks. 1. The theorem also holds for …xed v’s in an inner closed subset of [v (x) ; v (x)]. 2. Our approach can be used for estimation of the conditional PDF of values at quantile , f (Q ( jx)). The estimator, say f^ (Q ( jx) jn; x), is then given by f^ (Q ( jx) jn; x) = and Lhd+3

1=2

n n

1 1 g^ (^ q ( jn; x) jn; x)

f^ (Q ( jx) jn; x)

1 n

g^(1) (^ q ( jn; x) jn; x) 1 g^3 (^ q ( jn; x) jn; x)

1

;

f (Q ( jx) jx) !d N (0; Vf (Q ( jx) ; n; x)).

By Lemma 1, the asymptotic variance Vf (v; n; x) can be consistently estimated by the plug-in estimator which replaces the unknown F; f; '; ; g, and q in the expression for Vf (v; n; x) with their consistent estimators. Using asymptotic independence of f^ (vjn; x) ; : : : ; f^ (vjn; x), the optimal weights for the averaged PDF estimator of f (vjx) in (16) can be obtained by solving the GLS problem. As usual, the optimal weights are inversely related to the variance Vf (v; n; x): w^ (n; x) =

1=V^f (v; n; x) =

n X j=n

!

1=V^f (v; j; x)

n (n 1)2 ^ (njx) g^5 q^ F^ (vjn; x) jn; x jn; x = P ; n 2 5 q ^ (vjn; x) jj; x jj; x j (j 1) ^ (jjx) g ^ ^ F j=n

and the asymptotic variance of the optimal weighted estimator is therefore given by Vf (v; x) = Pn

n=n

n (n

K1 F 2 (vjx) f 4 (vjx) : 1)2 (njx) g 5 (q (F (vjx) jn; x) jn; x)

In small samples, accuracy of the normal approximation can be improved by taking into the account the variance of the second-order term multiplied by h2 . To make the notation simple, consider the case of a single n. We can expand the decomposition

14

in (20) to obtain that Lhd+3 Ff2 Lhd+3 3 (n 1) g

1=2

g^(1)

1=2

f^ (vjx; n)

g (1) + h

3f g

f (vjx) is given by 2nf 2 (n 1) g 2

Lhd

1=2

(^ g

g) + op (h) ;

where, F is the conditional CDF evaluated at v, and g, g (1) , g^, g^(1) are the conditional density (given x and n), its derivative, and their estimators evaluated at q (F (vjx) jn; x). With this decomposition, in practice, one can improve accuracy of asymptotic approximation by using the following expression for the estimated variance instead of V^f alone7 : V~f = V^f + h2

3f^ g^

2nf^2 (n 1) g^2

!2

V^g;0 :

Note that the second summand in the expression for V~f is Op (h2 ) and negligible in large samples.

4

Inference on the optimal reserve price

In this section, we discuss inference on the optimal reserve price given x, r (x). Riley and Samuelson (1981) show that under certain assumptions, r (x) is given by the unique solution to the equation: r (x)

1

F (r (x) jx) f (r (x) jx)

c = 0;

(21)

where c is the seller’s own valuation. One approach to the inference on r (x) is to estimate it as a solution r^ (x) to (21) using consistent estimators for f and F in place of the true unknown functions. However, a di¢ culty arises because, even though our estimator f^ (vjx) is asymptotically normal, it is not guaranteed to be a continuous function of v. We instead take a direct approach and construct CSs that do not require a point estimate of r (x). As discussed in Chapter 3.5 of Lehmann and Romano (2005), a natural CS for a parameter can be obtained by inverting a test of a series of simple hypotheses 7

There is no covariance term because

R

K (u) K (1) (u) du = 0.

15

concerning the value of that parameter.8 We construct CSs for the optimal reserve price by inverting the test of the null hypotheses H0 (v) : r (x) = v. Such hypotheses can be tested by testing the optimal reserve price restriction (21) at r (x) = v. Thus, the CSs are formed by collecting all values v for which the test fails to rejects the null that (21) holds at r (x) = v. Consider H0 (v) : r (x) = v, and the following test statistic:

T (vjx) = Lhd+3

1=2

v

1

F^ (vjx) f^ (vjx)

v u ! u u 1 c =t

F^ (vjx) f^4 (vjx)

2

V^f (v; x);

where F^ is de…ned in (17), and V^f (v; x) is a consistent plug-in estimator of the asymptotic variance of f^ (vjx). By Theorem 2 and Lemma 1(h), T (r (x) jx) !d N (0; 1). Furthermore, due to uniqueness of the solution to (21), for any t > 0, P (jT (vjx)j > tjr (x) 6= v) ! 1. A CS for r with the asymptotic coverage probability 1 is formed by collecting all v’s such that a test based on T (vjx) fails to reject the null at the signi…cance level : CS1

n (x) = v 2 ^ (x) : jT (vjx)j

z1

=2

o

;

where z is the quantile of the standard normal distribution. Asymptotically CS1 (x) has a correct coverage probability since by construction we have that P (r (x) 2 CS1 (x)) = P jT (r (x) jx)j z1 =2 ! 1 , provided that r (x) 2 (x) = [Q ( 1 jx) ; Q ( 2 jx)]. When the seller’s own evaluation c is unknown, using the above approach one can construct conditional CSs for given values of c.

5

Monte Carlo experiments

In this section, we evaluate the accuracy of the asymptotic normal approximation for our estimator f^ established in Theorem 2. In particular, it is interesting to see 8

CSs obtained by test inversion have been used in the econometrics literature, for example, in the context of instrumental variable regression with weak instruments (Andrews and Stock, 2005), for constructing CSs for the date of a structural break (Elliott and Müller, 2007), and in the case of set identi…ed models (Chernozhukov, Hong, and Tamer, 2007); see also the references on page 1268 of Chernozhukov, Hong, and Tamer (2007).

16

whether the boundary e¤ect can create substantial size distortions. We also compare the performance of our estimator with that of GPV in terms of bias, mean squared error (MSE), and median absolute deviation. In our experiment, we consider the case with no covariates (d = 0). The true CDF of valuations is given by 8 > v < 0; < 0; F (v) = v ; 0 v 1; > : 1; v > 1;

(22)

where > 0. Such a choice of F is convenient because, in this case, the bidding strategy is given by 1 v (23) B (v) = 1 (n 1) + 1 and easy to compute. In our simulations, we consider = 1=2; 1, and 2; when = 1, the distribution of valuations is uniform over the interval [0; 1], = 1=2 corresponds to the case of a downward-sloping PDF, and = 2 corresponds to the upward-sloping PDF. The number of bidders n = 2; 3; 4; 5; 6; 7; the number of auctions L, is determined in such a way so that the total number of observations available nL is constant, and therefore any observed di¤erences in the estimation accuracy across n’s are not due to the varying sample size. We consider nL = 4200. We estimate f at the following points: v = 0:2; 0:3; 0:4; 0:5; 0:6; 0:7; 0:8. Each Monte Carlo experiment has 103 replications. Similarly to GPV, we use the tri-weight kernel for all kernel estimators. We choose the bandwidth as follows. The MSE optimal bandwidth for kernel density estimation is of order L 1=5 , and it is L 1=7 for estimation of the density derivative (Pagan and Ullah, 1999, Page 56). Therefore, as in GPV, we use the normal rule-of-thumb bandwidth to estimate g: h1 = 1:06^ b (nL) 1=5 ; where ^ b is the estimated standard deviation of bids, to estimate the PDF g; and the density derivative g (1) is estimated with h2 = 1:06^ b (nL)

17

1=7

:

For each replication, we generate randomly nL valuations, fvi : i = 1; : : : ; nLg, from the distribution described by (22), and then compute the corresponding bids according to (23). Computation of the quantile-based estimator f^ (v) involves several steps. First, we estimate q ( ), the quantile function of bids. Let b(1) ; : : : ; b(nL) i = b(i) . Second, we estimate denote the ordered sample of bids. We set q^ nL g (b), the PDF of bids using (9). To construct our estimator, g needs to be estii mated at all points q^ nL n o : i = 1; : : : ; nL . Given the estimates of g^, we compute ^ p i : i = 1; : : : ; nL using (12), its monotone version according to (13), and Q nL

F^ (v) according to (14). Let dxe denote the nearest integer greater than or equal dnLF^ (v)e . Next, we compute g^ q^ F^ (v) to x; we compute q^ F^ (v) as q^ and nL g^(1) q^ F^ (v) using (9) and (10) respectively, and f^ (v) as the reciprocal of (15). Lastly, we compute the second-order corrected estimator of the asymptotic variance of f^ (v), 0

3f^ (v) V~f (v) = V^f (v) + h22 @ g^ q^ F^ (v) V^f (v) =

K1 F^ 2 (v) f^4 (v) n (n 1)2 g^5 q^ F^ (v)

(n

2nf^2 (v) 1) g^2 q^ F^ (v)

12

A V^g;0 q^ F^ (v)

:

A con…dence interval (CI) with the asymptotic con…dence level 1 f^ (v)

;

z1

=2

q

is formed as

V~f (v) = (Lh32 );

where z is the quantile of the standard normal distribution. Table 1 reports simulated coverage probabilities for 99%, 95%, and 90% asymptotic CIs in the case of Uniform [0; 1] distribution ( = 1). We observe deviation of the simulated coverage probabilities from the nominal values when the PDF is estimated near the upper boundary and the number of bidders is small (n = 2; 3). There is also some deviation of the simulated coverage probabilities from the nominal values for large n and v near the lower boundary of the support. Thus, as one can expect the normal approximation may breakdown near the boundaries of the support. However, away from the boundaries, as the results in Table 1 indicate, the normal approximation works well and the simulated coverage probabilities are close to their 18

nominal values. Similar result have been obtained in the case of = 2 (Table 2) and = 1=2 (Table 3). When = 2, the boundary e¤ect distorting coverage probabilities is somewhat more pronounced near the lower boundary of the support, and less so near the upper boundary. An opposite situation is observed for = 1=2: we see more distortion of coverage probabilities near the upper boundary and less near the lower boundary of the support. This can be explained by the fact that the PDF is increasing in the case of = 2, so there is relatively more mass near v = 1, and it is decreasing when = 1=2, so there is relatively less mass near v = 0. We observe good coverage probabilities away from the boundaries. Next, we compare the performance of our estimator with that of GPV. In their simulations, GPV use the bandwidths of order (nL) 1=5 in the …rst and second steps of estimation. We …nd, however, that using a bandwidth of order (nL) 1=7 in the second step signi…cantly improves the performance of their estimator in terms of bias and MSE. To compute the GPV estimator, we therefore use h1 as the …rst step bandwidth, and h2 at the second step. Similarly to the quantile-based estimator, the GPV estimator is implemented with the tri-weight kernel. To compute the GPV estimator of f (v), in the …rst step we compute nonparametric estimators of G and g, and obtain the pseudo-valuations v^il according to equation (1), with G and g replaced by their estimators. In the second step, we estimate f (v) by the kernel method from the sample f^ vil g obtained in the …rst-step. To avoid the boundary bias e¤ect, GPV suggest trimming the observations that are too close to the estimated boundary of the support. Note that no explicit trimming is necessary for our estimator, since implicit trimming occurs from our use of quantiles instead of pseudo-valuations.9 Table 4 reports bias, MSE, and median absolute deviation of the two estimators for = 1. In the majority of cases, the GPV estimator shows less bias; however neither estimator dominates the other in terms of MSE or median absolute deviation: our quantile-based (QB) estimator appears to be more e¢ cient for small numbers of bidders (n = 2; 3; 4), and GPV’s is more e¢ cient when n = 5; 6, and 7. The GPV is 9 In our simulations, we found that in practice trimming had no e¤ect on the GPV estimator: essentially the same estimates were obtained with or without the trimming. On the other hand, for our quantile-based estimator, monotonization of the estimated quantile function of valuations is very important in order to obtain reasonable estimates of the PDF of valuations near the upper boundary of the support.

19

relatively more e¢ cient when the PDF is upward sloping ( = 2) as the results Table 5 indicate. However, according to the results in Table 6, the QB estimator dominates GPV’s in the majority of cases when the PDF is downward-sloping ( = 1=2). To summarize our …ndings, the GPV estimator shows less bias which can be due to the fact that it is obtained by kernel smoothing of the data, while the QB estimator is a nonlinear function of the estimated CDF, PDF and its derivative. There is no strictly dominating estimator, and the relative e¢ ciency depends on the underlying distribution of the valuations and the number of bidders in the auction. The GPV estimator is more e¢ cient when the number of bidders is relatively large and PDF has a positive slope; on the other hand, the QB estimator is more attractive when the number of bidders is small and the PDF has a negative slope. Tables 4, 5, and 6 also report the average (across replications) standard error for our QB estimator. The variance of the estimator increases with v, since it depends on F (v). This fact is also re‡ected in the MSE values that increase with v. Interestingly, one can see the same pattern for the MSE of the GPV estimator, which suggests that the GPV variance depends on v as well.

Appendix of proofs Proof of Lemma 1. Parts (a) and (b) of the lemma follow from Lemma B.3 of Newey (1994). For part (c), de…ne a function G0 (b; n; x) = n (njx) G (bjn; x) ' (x) ; and its estimator as n

l 1 XX 0 ^ 1 (nl = n) 1 (bil G (b; n; x) = L l=1 i=1

L

b) K h (xl

x) :

Next, ^ 0 (b; n; x) = E EG

1 (nl = n) K h (xl

x)

nl X

1 (bil

i=1

= nE (1 (nl = n) 1 (bil 20

b) K h (xl

x))

!

b)

(24)

= nE ( (njxl ) G (bjn; xl ) K h (xl x)) Z = n (nju) G (bjn; u) K h (u x) ' (u) du Z = G0 (b; n; x + hu) Kd (u) du: By Assumption 1(e) and Proposition 1(iii) of GPV, G (bjn; ) admits at least R + 1 continuous bounded derivatives. Then, as in the proof of Lemma B.2 of Newey (1994), there exists a constant c > 0 such that ^ 0 (b; n; x) G0 (b; n; x) E G Z R ch jKd (u)j kukR du vec DxR G0 (b; n; x)

;

where k k denotes the Euclidean norm, and DxR G0 denotes the R-th partial derivative of G0 with respect to x. It follows then that ^ 0 (b; n; x) = O hR : EG

G0 (b; n; x)

sup

(25)

b2[b(n;x);b(n;x)]

Now, we show that ^0

sup b2[b(n;x);b(n;x)]

jG (b; n; x)

^0

E G (b; n; x) j = Op

Lhd log L

1=2

!

:

(26)

We follow the approach of Pollard (1984). Fix n 2 N and x 2 Interior (X ), and consider a class of functions Z indexed by h and b, with a representative function zl (b; n; x) =

nl X

1 (nl = n) 1 (bil

b) hd K h (xl

x) :

i=1

By the result in Pollard (1984) (Problem 28), the class Z has polynomial discrimination. Theorem 37 in Pollard (1984) (see also Example 38) implies that for any 2 sequences L , L such that L 2L 2L = log L ! 1, Ezl2 (b; n; x) L, 1X zl (b; n; x) L l=1 b2[b(n;x);b(n;x)] L

2

1 L

L

sup

j

21

Ezl (b; n; x) j ! 0

(27)

almost surely. We claim that this implies that sup b2[b(n;x);b(n;x)]

^ 0 (b; n; x) jG

Lhd log L

^ 0 (b; n; x) j = Op EG

1=2

!

:

The proof is by contradiction. Suppose not. Then there exist a sequence and a subsequence of L such that along this subsequence, sup b2[b(n;x);b(n;x)]

^ 0 (b; n; x) jG

^ 0 (b; n; x) j EG

L

Lhd log L

L

!1

1=2

(28)

:

on a set of events 0 with a positive probability measure. Now if we let 2L = hd 1=2 Lhd ) 1=2 L , then the de…nition of z implies that, along the subsequence, and L = ( log L on a set of events 0 , 1X zl (b; n; x) j sup L l=1 b2[b(n;x);b(n;x)] L

2

1 L

L

1=2

=

Lhd log L Lhd log L

1=2

=

Lhd log L

1=2

=

1=2 L

1=2 L

h

d

1=2 L

L

1X sup j zl (b; n; x) L l=1 b2[b(n;x);b(n;x)] sup

b2[b(n;x);b(n;x)] 1=2 L

Ezl (b; n; x) j

Lhd log L

L

^ 0 (b; n; x) jG

Ezl (b; n; x) j

^ 0 (b; n; x) j EG

1=2

! 1;

where the inequality follows by (28), a contradiction to (27). This establishes (26), so that (25), (26) and the triangle inequality together imply that sup b2[b(n;x);b(n;x)]

^ 0 (b; n; x) jG

G0 (b; n; x) j = Op

Lhd log L

1=2

+ hR

!

:

(29)

^ 0 (b; n; x), To complete the proof, recall that, from the de…nitions of G0 (b; n; x) and G G (bjn; x) =

^0 G0 (b; n; x) ^ (bjn; x) = G (b; n; x) ; ; and G (njx) ' (x) ^ (njx) ' ^ (x) 22

^ (bjn; x) so that by the mean-value theorem, G

G (bjn; x) is bounded by

~ 0 (b; n; x) ~ 0 (b; n; x) 1 G G ; ; ~ (n; x) ' ~ (x) ~ 2 (n; x) ' ~ (x) ~ (n; x) ' ~ 2 (x) ^ 0 (b; n; x) G

G0 (b; n; x) ; ^ (njx)

!

(njx) ; ' ^ (x)

' (x)

;

(30)

~ 0 G0 ; ~ ^ 0 G0 ; ^ where G ;' ~ ' G ;' ^ ' for all (b; n; x). Further, by Assumption 1(b) and (c) and the results in parts (a) and (b) of the lemma, with the probability approaching one ~ and ' ~ are bounded away from zero. The desired result follows from (29), (30) and parts (a) and (b) of the lemma. ^ ( jn; x) is monotone by construction, For part (d) of the lemma, since G n ^ (bjn; x) P (^ q ("jn; x) < b (n; x)) = P inf b : G b

^ (b (n; x) jn; x) > " = P G

o " < b (n; x)

= o (1) ; where the last equality is by the result in part (c). Similarly, "jn; x) > b (n; x)

P q^ (1

^ b (n; x) jn; x < 1 = P G

"

= o (1) : Hence, for all x 2 Interior (X ) and n 2 N , with the probability approaching one, b (n; x) q^ ("jn; x) < q^ (1 "jn; x) b (n; x). Since the distribution G (bjn; x) is continuous in b, G (q ( jn; x) jn; x) = , and, for 2 [ 1 ; 2 ], we can write the identity G (^ q ( jn; x) jn; x)

G (q ( jn; x) jn; x) = G (^ q ( jn; x) jn; x)

:

(31)

Using Lemma 21.1(ii) of van der Vaart (1998), 0

1 ; ^ (njx) ' ^ (x) nLhd

^ (^ G q ( jn; x) jn; x)

and by the results in (a) and (b), ^ (^ G q ( jn; x) jn; x) = 23

+ Op

Lhd

1

(32)

uniformly over . Combining (31) and (32), and applying the mean-value theorem to the left-hand side of (31), we obtain q^ ( jn; x) =

q ( jn; x)

^ (^ G (^ q ( jn; x) jn; x) G q ( jn; x) jn; x) + Op g (e q ( jn; x) jn; x)

1

Lhd

;

(33)

where qe lies between q^ and q for all ( ; n; x). Now, according to Proposition 1(ii) of GPV, there exists cg > 0 such that g (bjn; x) > cg for all b 2 b (n; x) ; b (n; x) , and the result in part (d) follows from (33) and part (c) of the lemma. Next, we prove part (e) of the lemma. Fix x 2 Interior (X ) and n 2 N . Let N=

nl L X X

1 (nl = n) Kd (xl ) :

l=1 i=1

Consider the ordered sample of bids b (n; x) = b(0) corresponds to nl = n and Kd (xl ) 6= 0. Then, 0

lim q^ (tjn; x) t#

q^ ( jn; x)

max

j=1;:::;N +1

:::

b(N +1) = b (n; x) that

b(j)

b(j

1)

:

By the results of Deheuvels (1984), max

j=1;:::;N +1

b(j)

b(j

1)

= Op

N log N

1

!

;

and part (e) follows, since N = Op Lhd . To prove part (f), note that by Assumption 1(f) and Proposition 1(iv) of GPV, g ( jn; ) admits at least R + 1 continuous bounded partial derivatives. Let (k)

g0 (b; n; x) =

(njx) g (k) (bjn; x) ' (x) ;

(34)

and de…ne n

l 1 XX (k) 1 (nl = n) Kh (bil nL l=1 i=1

L

(k)

g^0 (b; n; x) =

24

b) K h (xl

x) :

(35)

We can write the estimator g^ (bjn; x) as g^ (bjn; x) =

g^0 (b; n; x) ; ^ (njx) ' ^ (x)

so that

(k)

g^(k) (bjn; x) =

g^0 (b; n; x) ; ^ (njx) ' ^ (x) (k)

By Lemma B.3 of Newey (1994), the estimator g^0 (b; n; x) is uniformly consistent over b 2 [b1 (n; x) ; b2 (n; x)]: (k)

sup b2[b1 (n;x);b2 (n;x)]

j^ g0 (b; n; x)

d+1+2k

(k)

g0 (b; n; x) j = Op

Lh log L

1=2

+ hR

!

:

(36)

By the results in parts (a) and (b), the estimators ^ (njx) and ' ^ (x) converge at the rate faster than that in (36). The desired result follows by the same argument as in the proof of part (c), equation (30). For part (g), let cg be as in the proof of part (d) of the lemma. First, we con^ p ( jn; x) Q ( jx) is ^ p ( jn; x). We have that Q sider the preliminary estimator, Q bounded by j^ g (^ q ( jn; x) jn; x) g (q ( jn; x) jn; x)j g^ (^ q ( jn; x) jn; x) cg jg (^ q ( jn; x) jn; x) g (q ( jn; x) jn; x)j j^ q ( jn; x) q ( jn; x)j + g^ (^ q ( jn; x) jn; x) cg j^ g (^ q ( jn; x) jn; x) g (^ q ( jn; x) jn; x)j + g^ (^ q ( jn; x) jn; x) cg ! supb2[b1 (n;x);b2 (n;x)] g (1) (bjn; x) 1+ j^ q ( jn; x) q ( jn; x)j g^ (^ q ( jn; x) jn; x) cg j^ q ( jn; x)

+

q ( jn; x)j +

j^ g (^ q ( jn; x) jn; x) g (^ q ( jn; x) jn; x)j : g^ (^ q ( jn; x) jn; x) cg

By continuity of the distributions, we can pick " > 0 small enough so that q(

1

"jn; x) > b1 (n; x) and q (

25

2

+ "jn; x) < b2 (n; x) :

(37)

De…ne EL (n; x) = f^ q(

"jn; x)

1

b1 (n; x) ; q^ (

+ "jn; x)

2

b2 (n; x)g :

By the result in part (d), P (ELc (n; x)) = o (1). Hence, it follows from part (f) of the lemma that the estimator g^ (^ q ( jn; x) jn; x) is bounded away from zero with probability approaching one. Consequently, by Assumption 1(e) and part (d) of the lemma that the …rst summand on the right-hand side of (37) is Op L 1 uniformly over [

";

1

2

P

+ "], where

sup 2[

1

2[

1

P

";

L 2 +"]

sup ";

L 2 +"]

L

Lhd+1+2k log L

=

1=2

+h

R

. Next,

j^ g (^ q ( jn; x) jn; x)

g (^ q ( jn; x) jn; x)j > M

j^ g (^ q ( jn; x) jn; x)

g (^ q ( jn; x) jn; x)j > M; EL (n; x)

+P (ELc (n; x)) P

!

sup

L

b2[b1 (n;x);b2 (n;x)]

j^ g (bjn; x)

g (bjn; x)j > M

!

!

(38)

+ o (1) :

It follows from part (f) of the lemma and (38) that sup 2[

1

";

2 +"]

^ p ( jn; x) jQ

Q ( jx) j = Op

Lhd+1 log L

1=2

+ hR

!

:

(39)

^ ( jn; x) Q ^ p ( jn; x) Further, by construction, Q 0 for 0 . We can choose p 0 ^ 2 [ 0 ; ] such that 0 2 [ 1 ; 2 ]. Since Q ( jn; x) is left-continuous, there exists ^ p ( 0 jn; x) = supt2[ ; ] Q ^ p (tjn; x). Since Q ( jx) is nondecreasing, Q 0 ^ ( jn; x) Q ^ p ( jn; x) Q ^ p ( 0 jn; x) Q ^ p ( jn; x) = Q ^ p ( 0 jn; x) Q ( 0 jx) + Q ( jx) Q ^ p ( jn; x) Q ^ p (tjn; x) Q (tjx) + Q ( jx) Q ^ p ( jn; x) sup Q t2[

0;

2

]

sup 2[

1

";

2 +"]

^ p ( jn; x) Q

26

Q ( jx)

= Op

Lhd+1 log L

1=2

+ hR

!

;

where the last result follows from (39). Using a similar argument for < conclude that ! 1=2 d+1 Lh ^ ( jn; x) Q ^ p ( jx) = Op sup Q + hR : log L 2[ 1 "; 2 +"]

0,

we

(40)

The result of part (g) follows from (39) and (40). Lastly, we prove part (h). Let " be as in part(g). By Lemma 21.1(ii) of van der ^ ( jn; x) jn; x , where the inequality becomes strict only at Vaart (1998), F^ Q the points of discontinuity, and therefore ^ ( 1 jn; x) jn; x F^ Q

1

>

1

"

^ ( jn; x) is non-decreasing, for all n. Further, since Q ^ ( 2 jn; x) jn; x < P F^ Q = P

n ^ (tjn; x) sup t : Q

t2[0;1]

^ ( 2 jn; x) < Q ^( P Q

2

2

+"

o ^ Q ( 2 jn; x) <

2

+"

!

+ "jn; x)

! 1; where the last result is by part (g) of the lemma and because Q( 2 jx) < Q ( Thus, for all v 2 ^ (x), F^ (vjn; x) 2 [ 1 "; 2 + "]

2

+ "jx). (41)

with probability approaching one. Therefore, using the same argument as in part (g), equation (38), it is su¢ cient to consider only v 2 ^ (x) such that F^ (vjn; x) 2 [ 1 "; 2 + "]. Since by Assumption 1(f), Q ( jx) is continuously di¤erentiable on [ 1 "; 2 + "], for such v’s by the mean-value theorem we have that, Q F^ (vjn; x) jx

v = Q F^ (vjn; x) jx =

Q (F (vjx))

1 F^ (vjn; x) f (Q (~ (v; n; x) jn; x) jx) 27

F (vjx) ; (42)

where ~ (v; n; x) is between F^ (vjn; x) and F (vjx). ^ F^ (vjn; x) jn; x By Lemma 21.1(iv) of van der Vaart (1998), Q ^ Hence, can fail only at the points of discontinuity of Q. sup

v

v2 ^ (x)

^ F^ (vjn; x) jn; x Q

^ (tjn; x) lim Q

sup 2[

1

";

v, and equality

2 +"]

t#

Lhd+1 log L

+Op

1=2

+ hR

!

^ ( jn; x) Q (43)

;

however, ^ (tjn; x) lim Q

sup 2[

1

";

2 +"]

^ ( jn; x) Q

t#

supb2[b1 (n;x);b2 (n;x)] g^(1) (bjn; x) 1+ g^2 (^ q ( jn; x) jn; x) ! 1 d Lh ; = Op log(Lhd )

!

sup (lim q^ (tjn; x) 2[0;1] t#

q^ ( jn; x)) (44)

^ and by continuity of K, where the second inequality follows from the de…nition of Q and the equality (44) follows from part (e) of the lemma. Note that, as shown in the proof of part (g), g^ (^ q ( jn; x) jn; x) is bounded away from zero with probability approaching one. Combining (42)-(44), and by Assumption 1(e) we obtain that there exists a constant c > 0 such that supv2 ^ (x) F^ (vjn; x) F (vjx) is bounded by c sup Q F^ (vjn; x) jx

^ F^ (vjn; x) jn; x Q

v2 ^ (x)

Lhd+1 log L

+Op c

sup 2[

= Op

1

";

2 +"]

Lhd+1 log L

1=2

+ hR

Q ( jx)

!

^ ( jn; x) + Op Q

1=2

+ hR

!

;

where the equality follows from part (g) of the lemma.

28

Lhd+1 log L

1=2

+ hR

!

Proof of Theorem 1. By Lemma 1(d),(f) and (h), P

n o v 2 ^ (x) : q^ F^ (vjn; x) jn; x 2 [b1 (n; x) ; b2 (n; x)] ! 1;

and therefore, using the same argument as in the proof of Lemma 1(g), equation (38) it is su¢ cient to consider only such v’s. Next, g^(1) q^ F^ (vjn; x) jn; x jn; x g^(1) (bjn; x)

sup

g (1) (q (F (vjx) jn; x) jn; x)

g (1) (bjn; x)

b2[b1 (n;x);b2 (n;x)]

+g (2) (e q (v; n; x)) q^ F^ (vjn; x) jn; x

q (F (vjx) jn; x) :

(45)

where qe is the mean value between q^ and q. Further, g (2) is bounded by Assumption 1(e) and Proposition 1(iv) of GPV, and q^ F^ (vjn; x) jn; x sup 2[

1

";

2 +"]

q (F (vjx) jn; x)

j^ q ( jn; x)

q ( jn; x) j +

1 sup jF^ (vjn; x) cg v2 ^ (x)

F (vjx) j; (46)

where cg as in the proof of Lemma 1(d). By (45), (46) and Lemma 1(d),(f),(h), sup g^(1) q^ F^ (vjn; x) jn; x jn; x

v2 ^ (x)

= Op

Lhd+3 log L

1=2 R

+h

!

g (1) (q (F (vjx) jn; x) jn; x) (47)

:

By a similar argument, f^ (vjn; x)

f (vjn; x) F (vjx) fe2 (vjn; x) = (n 1) g 3 (q (F (vjx) jn; x) jn; x) g^(1) q^ F^ (vjn; x) jn; x jn; x ! 1=2 Lhd+1 +Op + hR ; log L

g (1) (q (F (vjx) jn; x) jn; x) (48)

uniformly in v 2 ^ (x), where fe(vjx) as in (15) but with some mean value ge(1) between 29

g (1) and its estimator g^(1) . The desired result follows from (16), (47), and (48). (k)

(k)

Proof of Lemma 2. Consider g0 (b; n; x) and g^0 (b; n; x) de…ned in (34) and (35) respectively. It follows from parts (a) and (b) of Lemma 1, 1=2

=

g^(k) (bjn; x) g (k) (bjn; x) Lhd+1+2k 1 1=2 (k) Lhd+1+2k g^0 (b; n; x) (njx) ' (x)

(k)

g0 (b; n; x) + op (1):

(49)

By the same argument as in the proof of part (f) of Lemma 1 and Lemma B2 of Newey (k) (k) (1994), E^ g0 (b; n; x) g0 (b; n; x) = O hR uniformly in b 2 [b1 (n; x) ; b2 (n; x)] for all x 2 Interior (X ) and n 2 N . Then, by Assumption 3, it remains to establish asymptotic normality of nLhd+1+2k

1=2

(k)

g^0 (b; n; x)

(k)

E^ g0 (b; n; x) :

De…ne (k)

wil;n = h(d+1+2k)=2 1 (nl = n) Kh (bil nl L X X 1 wL;n = (nL) wil;n ;

b) K h (xl

x) ;

l=1 i=l

so that nLhd+1+2k

1=2

= (nL)1=2 (wL;n

(k)

g^0 (b; n; x)

(k)

E^ g0 (b; n; x)

EwL;n ) :

(50)

By the Liapunov CLT (see, for example, Corollary 11.2.1 on page 427 of Lehmann and Romano (2005)), (nL)1=2 (wL;n

EwL;n ) = (nLV ar (wL;n ))1=2 !d N (0; 1) ;

2 provided that Ewil;n < 1, and for some

(51)

> 0,

1 E jwil;n j2+ = 0: =2 L!1 L lim

(52)

The condition in (52) follows from the Liapunov’s condition (equation (11.12) on page 30

427 of Lehmann and Romano (2005)) and because wil;n are i.i.d. Next, Ewil;n is given by (d+1+2k)=2

h

E Z

(njxl )

Z

(k)

Kh (u

b) g (ujn; xl ) duK h (xl x) Z (k) x) ' (y) Kh (u b) g (ujn; y) dudy

= h(d+1+2k)=2 (njy) K h (y Z (d+1)=2 = h (njhy + x) Kd (y) ' (hy + x) Z K (k) (u) g (hu + bjn; hy + x) dudy

! 0:

2 Further, Ewil;n is given by

d+1+2k

=

h Z

Z

(njy) K 2h

(y

x) ' (y)

Z

2

(k)

Kh (u

b)

g (ujn; y) dudy

(njhy + x) Kd2 (y) ' (hy + x) Z

K (k) (u)

2

g (hu + bjn; hy + x) dudy:

Hence, nLV ar (wL;n ) converges to (njx) g (bjn; x) ' (x)

Z

d 2

K (u) du

Z

K (k) (u)

2

(53)

du:

Lastly, E jwil;n j2+ is given by h(d+1+2k)(1+ Z

=2)

Z

2+

(k)

2+

(njy) jK h (y x)j ' (y) Kh (u b) Z (d+1) =2 (njhy + x) jKd (y)j2+ ' (hy + x) = h Z 2+ K (k) (u) g (hu + bjn; hy + x) dudy h

(d+1) =2

cg sup jK (u)jd(2+ ) sup ' (x) sup u2[ 1;1]

x2X

g (ujn; y) dudy

K (k) (u)

2+

;

(54)

u2[ 1;1]

where cg as in the proof of Lemma 1(d). The condition (52) is satis…ed by Assumptions 31

1(b) and 3, and (54). It follows now from (49)-(54), nLhd+3 !d N

1=2

0;

g^(k) (bjn; x)

g (k) (bjn; x) Z

g (bjn; x) (njx) ' (x)

d

K 2 (u) du

Z

K (k) (u)

2

!

du :

To prove part (b), note that the asymptotic covariance of wL;n1 and wL;n2 involves the product of two indicator functions, 1 (nl = n1 ) 1 (nl = n2 ), which is zero for n1 6= n2 . The joint asymptotic normality and asymptotic independence of g^(k) (bjn1 ; x) and g^(k) (bjn2 ; x) follows then by the Cramér-Wold device. Proof of Theorem 2. First, de…ne n o EL (n; x) = v 2 ^ (x) : q^ F^ (vjn; x) jn; x 2 [b1 (n; x) ; b2 (n; x)] : Next, for all z 2 R, P

Lhd+3

1=2

f^ (vjn; x) =P

f (vjx) Lhd+3

1=2

z = f^ (vjn; x)

f (vjx)

z; EL (n; x) + Rn ;

where 0 Rn P (ELc (n; x)) = o (1), by Lemma 1(d) and (41) in the proof of Lemma 1(h). Therefore, it su¢ ces to consider only v’s from EL (n; x). For such v’s, g^(1) q^ F^ (vjn; x) jn; x jn; x = g^(1) (q (F (vjx) jn; x) jn; x)

g (1) (q (F (vjx) jn; x) jn; x)

g (1) (q (F (vjx) jn; x) jn; x)

+^ g (2) (e q (v; n; x) jn; x) q^ F^ (vjn; x) jn; x

q (F (vjx) jn; x) ;

(55)

where qe is the mean value. It follows from Lemma 1(d) and (f) that the second 1=2 summand on the right-hand side of the above equation is op Lhd+3 . One arrives at (20), and the desired result follows immediately from (20), Theorem 1, and Lemma 2.

32

References Andrews, D. W. K., and J. H. Stock (2005): “Inference with Weak Instruments,” Cowles Foundation Discussion Paper 1530, Yale University. Athey, S., and P. A. Haile (2007): “Nonparametric Approaches to Auctions,”in Handbook of Econometrics, ed. by J. J. Heckman, and E. E. Leamer, vol. 6, Part 1, chap. 60, pp. 3847–3965. Elsevier, Amsterdam. Chernozhukov, V., H. Hong, and E. Tamer (2007): “Estimation and Con…dence Regions for Parameter Sets in Econometric Models,” Econometrica, 75(5), 1243–1284. Deheuvels, P. (1984): “Strong Limit Theorems For Maximal Spacings from a General Univariate Distribution,”Annals of Probability, 12, 1181–1193. Elliott, G., and U. K. Müller (2007): “Con…dence Sets For the Date of a Single Break in Linear Time Series Regressions,”Journal of Econometrics, 141(2), 1196– 1218. Guerre, E., I. Perrigne, and Q. Vuong (2000): “Optimal Nonparametric Estimation of First-Price Auctions,”Econometrica, 68(3), 525–74. Haile, P. A., H. Hong, and M. Shum (2003): “Nonparametric Tests For Common Values in First-Price Sealed Bid Auctions,”NBER Working Paper 10105. Haile, P. A., and E. Tamer (2003): “Inference with an Incomplete Model of English Auctions,”Journal of Political Economy, 111(1), 1–51. Khasminskii, R. Z. (1978): “A Lower Bound on the Risks of Nonparametric Estimates of Densities in the Uniform Metric,” Theory of Probability and its Applications, 23, 794–798. Krasnokutskaya, E. (2003): “Identi…cation and Estimation in Highway Procurement Auctions under Unobserved Auction Heterogeneity,”Working Paper, University of Pennsylvania. Lehmann, E. L., and J. P. Romano (2005): Testing Statistical Hypotheses. Springer, New York, third edn. 33

Li, Q., and J. Racine (2008): “Nonparametric Estimation of Conditional CDF and Quantile Functions with Mixed Categorical and Continuous Data,”Journal of Business and Economic Statistics, forthcoming. Li, T., I. Perrigne, and Q. Vuong (2002): “Structural Estimation of the A¢ liated Private Value Auction Model,”The RAND Journal of Economics, 33(2), 171–193. (2003): “Semiparametric Estimation of the Optimal Reserve Price in FirstPrice Auctions,”Journal of Business & Economic Statistics, 21(1), 53–65. List, J., M. Daniel, and P. Michael (2004): “Inferring Treatment Status when Treatment Assignment is Unknown: with an Application to Collusive Bidding Behavior in Canadian Softwood Timber Auctions,” Working Paper, University of Chicago. Matzkin, R. L. (2003): “Nonparametric Estimation of Nonadditive Random Functions,”Econometrica, 71(5), 1339–1375. Newey, W. K. (1994): “Kernel Estimation of Partial Means and a General Variance Estimator,”Econometric Theory, 10, 233–253. Paarsch, H. J. (1997): “Deriving an estimate of the optimal reserve price: An application to British Columbian timber sales,” Journal of Econometrics, 78(2), 333–357. Pagan, A., and A. Ullah (1999): Nonparametric Econometrics. Cambridge University Press, New York. Pollard, D. (1984): Convergence of Stochastic Processes. Springer-Verlag, New York. Riley, J., and W. Samuelson (1981): “Optimal auctions,” The American Economic Review, 71, 58–73. Roise, J. P. (2005): “Beating Competition and Maximizing Expected Value in BC’s Stumpage Market,”Working Paper, Simon Fraser University. van der Vaart, A. W. (1998): Asymptotic Statistics. Cambridge University Press, Cambridge. 34

Table 1: Simulated coverage probabilities of CIs for the PDF of valuations for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 1 (Uniform [0,1] distribution)

                                    v
confidence level    0.2    0.3    0.4    0.5    0.6    0.7    0.8

n = 2
      0.99         0.982  0.975  0.965  0.951  0.909  0.914  0.883
      0.95         0.947  0.937  0.926  0.898  0.835  0.838  0.791
      0.90         0.882  0.891  0.881  0.860  0.805  0.782  0.754

n = 3
      0.99         0.983  0.984  0.983  0.970  0.949  0.948  0.936
      0.95         0.936  0.944  0.948  0.932  0.894  0.896  0.876
      0.90         0.869  0.895  0.902  0.893  0.847  0.851  0.820

n = 4
      0.99         0.975  0.982  0.990  0.978  0.966  0.960  0.956
      0.95         0.922  0.945  0.956  0.940  0.912  0.919  0.910
      0.90         0.851  0.885  0.894  0.893  0.874  0.881  0.867

n = 5
      0.99         0.972  0.977  0.987  0.982  0.974  0.967  0.966
      0.95         0.911  0.937  0.949  0.941  0.921  0.932  0.919
      0.90         0.842  0.878  0.888  0.888  0.882  0.883  0.885

n = 6
      0.99         0.969  0.976  0.987  0.981  0.976  0.973  0.978
      0.95         0.898  0.932  0.940  0.937  0.927  0.933  0.925
      0.90         0.829  0.877  0.881  0.885  0.881  0.881  0.884

n = 7
      0.99         0.967  0.973  0.989  0.980  0.974  0.975  0.983
      0.95         0.893  0.926  0.932  0.929  0.926  0.933  0.931
      0.90         0.823  0.875  0.874  0.883  0.878  0.868  0.883
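The designs in Tables 1-3 vary a single distribution parameter, with α = 1 corresponding to the Uniform [0,1] case; a natural reading is the power-law family F(v) = v^α on [0,1], which we assume in the sketch below. For that family, the symmetric IPV first-price equilibrium bid has the closed form B(v) = av/(a + 1) with a = α(n - 1), so simulated bid samples are cheap to generate. A minimal Python sketch; the function name and the design cell chosen are ours, for illustration only:

    import numpy as np

    def simulate_bids(alpha, n, L, rng):
        # Draw valuations with CDF F(v) = v**alpha on [0, 1] by inverting
        # the CDF: if U ~ Uniform(0, 1), then U**(1/alpha) has that law.
        v = rng.uniform(size=(L, n)) ** (1.0 / alpha)
        # Symmetric IPV first-price equilibrium bid for this family:
        # B(v) = a * v / (a + 1), where a = alpha * (n - 1).
        a = alpha * (n - 1)
        return a * v / (a + 1.0)

    # One design cell of Table 1: alpha = 1 (Uniform), n = 2, L = 2100,
    # so that nL = 4200 as in all tables.
    rng = np.random.default_rng(0)
    bids = simulate_bids(alpha=1.0, n=2, L=2100, rng=rng)
    print(bids.shape)  # (2100, 2); with alpha = 1 and n = 2, bids lie in [0, 1/2]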

Table 2: Simulated coverage probabilities of CIs for the PDF of valuations for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 2

                                    v
confidence level    0.2    0.3    0.4    0.5    0.6    0.7    0.8

n = 2
      0.99         0.964  0.949  0.965  0.942  0.933  0.943  0.931
      0.95         0.911  0.901  0.910  0.877  0.879  0.878  0.857
      0.90         0.855  0.860  0.868  0.831  0.843  0.845  0.788

n = 3
      0.99         0.958  0.968  0.980  0.978  0.964  0.969  0.969
      0.95         0.897  0.900  0.927  0.916  0.925  0.928  0.931
      0.90         0.817  0.850  0.876  0.865  0.883  0.879  0.874

n = 4
      0.99         0.954  0.970  0.973  0.981  0.979  0.977  0.979
      0.95         0.881  0.890  0.926  0.927  0.929  0.938  0.939
      0.90         0.797  0.830  0.874  0.867  0.880  0.890  0.896

n = 5
      0.99         0.956  0.961  0.971  0.981  0.982  0.981  0.979
      0.95         0.868  0.883  0.917  0.930  0.927  0.935  0.935
      0.90         0.791  0.820  0.850  0.870  0.865  0.889  0.887

n = 6
      0.99         0.952  0.957  0.970  0.983  0.984  0.983  0.980
      0.95         0.861  0.887  0.903  0.918  0.919  0.932  0.936
      0.90         0.789  0.813  0.835  0.862  0.853  0.870  0.880

n = 7
      0.99         0.953  0.960  0.975  0.977  0.981  0.979  0.978
      0.95         0.859  0.882  0.889  0.915  0.910  0.925  0.932
      0.90         0.792  0.810  0.824  0.855  0.845  0.858  0.860

Table 3: Simulated coverage probabilities of CIs for the PDF of valuations for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 1/2

                                    v
confidence level    0.2    0.3    0.4    0.5    0.6    0.7    0.8

n = 2
      0.99         0.976  0.966  0.937  0.899  0.877  0.817  0.780
      0.95         0.935  0.915  0.875  0.827  0.794  0.716  0.698
      0.90         0.876  0.870  0.818  0.772  0.738  0.656  0.625

n = 3
      0.99         0.983  0.984  0.954  0.926  0.908  0.875  0.849
      0.95         0.948  0.933  0.901  0.871  0.853  0.796  0.772
      0.90         0.890  0.886  0.861  0.829  0.807  0.735  0.716

n = 4
      0.99         0.984  0.987  0.967  0.951  0.933  0.907  0.880
      0.95         0.954  0.946  0.921  0.895  0.883  0.834  0.819
      0.90         0.890  0.892  0.878  0.855  0.835  0.792  0.764

n = 5
      0.99         0.985  0.988  0.977  0.963  0.952  0.930  0.908
      0.95         0.950  0.949  0.935  0.913  0.900  0.860  0.845
      0.90         0.891  0.898  0.884  0.876  0.863  0.823  0.797

n = 6
      0.99         0.984  0.991  0.982  0.966  0.959  0.941  0.932
      0.95         0.944  0.950  0.936  0.920  0.913  0.889  0.869
      0.90         0.889  0.903  0.886  0.884  0.881  0.839  0.821

n = 7
      0.99         0.982  0.990  0.983  0.973  0.962  0.949  0.943
      0.95         0.940  0.951  0.936  0.925  0.925  0.899  0.893
      0.90         0.886  0.903  0.884  0.887  0.890  0.861  0.842
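Each entry in Tables 1-3 is a simulated coverage probability: the fraction of Monte Carlo replications in which a two-sided normal-approximation confidence interval for f(v) covers the true density value. The point estimates and standard errors themselves come from the paper's quantile-based estimator, which we do not reproduce here; the bookkeeping is generic. A minimal sketch, with all names ours:

    import numpy as np

    # Standard normal two-sided critical values for the three confidence
    # levels reported in Tables 1-3.
    Z = {0.99: 2.576, 0.95: 1.960, 0.90: 1.645}

    def coverage_probability(f_true, estimates, std_errors, level):
        # Fraction of replications in which the interval
        # estimate +/- z * se covers the true density value f_true.
        z = Z[level]
        est = np.asarray(estimates, dtype=float)
        se = np.asarray(std_errors, dtype=float)
        covered = (est - z * se <= f_true) & (f_true <= est + z * se)
        return covered.mean()

    # Hypothetical usage for the Uniform [0,1] design of Table 1, where the
    # true density is f(v) = 1 at every interior point v:
    # coverage_probability(1.0, est_by_replication, se_by_replication, 0.95)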

Table 4: Bias, MSE and median absolute deviation of the quantile-based (QB) and GPV estimators, and the average standard error (second-order corrected) of the QB estimator, for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 1 (Uniform [0,1] distribution)

                Bias               MSE        Med abs deviation   Std err
   v        QB       GPV       QB      GPV       QB      GPV        QB

n = 2
  0.2    -0.0025   0.0030   0.0126  0.0218   0.0909   0.1186     0.1073
  0.3    -0.0191  -0.0022   0.0216  0.0439   0.1178   0.1683     0.1519
  0.4    -0.0173   0.0099   0.0405  0.0768   0.1556   0.2189     0.2004
  0.5    -0.0270   0.0227   0.0560  0.1177   0.1801   0.2696     0.2471
  0.6    -0.0743  -0.0068   0.0764  0.1571   0.2123   0.3141     0.2752
  0.7    -0.0722   0.0195   0.1027  0.2061   0.2405   0.3681     0.3312
  0.8    -0.0917   0.0061   0.2016  0.2366   0.2744   0.3959     0.4143

n = 3
  0.2     0.0004   0.0025   0.0077  0.0082   0.0710   0.0731     0.0793
  0.3    -0.0111  -0.0035   0.0114  0.0145   0.0851   0.0970     0.1073
  0.4    -0.0063   0.0045   0.0194  0.0245   0.1094   0.1245     0.1382
  0.5    -0.0056   0.0147   0.0284  0.0371   0.1299   0.1522     0.1701
  0.6    -0.0342  -0.0059   0.0402  0.0519   0.1556   0.1813     0.1947
  0.7    -0.0264   0.0114   0.0503  0.0720   0.1781   0.2161     0.2287
  0.8    -0.0433   0.0017   0.0613  0.0857   0.1953   0.2372     0.2578

n = 4
  0.2     0.0013   0.0021   0.0059  0.0050   0.0619   0.0567     0.0667
  0.3    -0.0084  -0.0039   0.0077  0.0077   0.0697   0.0696     0.0860
  0.4    -0.0031   0.0023   0.0121  0.0124   0.0871   0.0886     0.1079
  0.5     0.0004   0.0110   0.0175  0.0183   0.1033   0.1071     0.1311
  0.6    -0.0204  -0.0044   0.0248  0.0256   0.1226   0.1275     0.1505
  0.7    -0.0115   0.0082   0.0315  0.0360   0.1415   0.1514     0.1764
  0.8    -0.0233   0.0002   0.0380  0.0429   0.1545   0.1660     0.1982

n = 5
  0.2     0.0016   0.0019   0.0050  0.0037   0.0570   0.0490     0.0600
  0.3    -0.0072  -0.0040   0.0060  0.0052   0.0611   0.0565     0.0741
  0.4    -0.0017   0.0013   0.0087  0.0078   0.0744   0.0703     0.0905
  0.5     0.0026   0.0088   0.0124  0.0113   0.0877   0.0843     0.1083
  0.6    -0.0138  -0.0035   0.0171  0.0156   0.1026   0.0997     0.1241
  0.7    -0.0051   0.0064   0.0220  0.0217   0.1182   0.1170     0.1444
  0.8    -0.0147  -0.0003   0.0262  0.0259   0.1278   0.1284     0.1615

n = 6
  0.2     0.0018   0.0018   0.0046  0.0032   0.0540   0.0448     0.0560
  0.3    -0.0065  -0.0040   0.0051  0.0039   0.0559   0.0493     0.0667
  0.4    -0.0010   0.0007   0.0069  0.0057   0.0665   0.0598     0.0795
  0.5     0.0037   0.0074   0.0096  0.0079   0.0774   0.0708     0.0937
  0.6    -0.0101  -0.0029   0.0129  0.0108   0.0895   0.0831     0.1068
  0.7    -0.0020   0.0053   0.0167  0.0148   0.1026   0.0961     0.1231
  0.8    -0.0100  -0.0005   0.0195  0.0175   0.1105   0.1055     0.1374

n = 7
  0.2     0.0019   0.0017   0.0043  0.0028   0.0522   0.0423     0.0535
  0.3    -0.0061  -0.0040   0.0045  0.0033   0.0526   0.0449     0.0618
  0.4    -0.0006   0.0004   0.0059  0.0045   0.0613   0.0533     0.0721
  0.5     0.0042   0.0064   0.0079  0.0061   0.0704   0.0620     0.0836
  0.6    -0.0077  -0.0024   0.0103  0.0082   0.0805   0.0723     0.0947
  0.7    -0.0004   0.0045   0.0133  0.0109   0.0917   0.0824     0.1082
  0.8    -0.0075  -0.0005   0.0152  0.0128   0.0977   0.0903     0.1202

Table 5: Bias, MSE and median absolute deviation of the quantile-based (QB) and GPV estimators, and the average standard error (second-order corrected) of the QB estimator, for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 2

                Bias               MSE        Med abs deviation   Std err
   v        QB       GPV       QB      GPV       QB      GPV        QB

n = 2
  0.2    -0.0024   0.0008   0.0043  0.0048   0.0508   0.0555     0.0588
  0.3    -0.0153  -0.0056   0.0126  0.0159   0.0867   0.1010     0.1028
  0.4    -0.0144   0.0053   0.0268  0.0337   0.1257   0.1465     0.1596
  0.5    -0.0380  -0.0097   0.0477  0.0620   0.1702   0.1983     0.2173
  0.6    -0.0443   0.0027   0.0727  0.1015   0.2129   0.2588     0.2855
  0.7    -0.0562   0.0197   0.1197  0.1621   0.2602   0.3228     0.3617
  0.8    -0.0912  -0.0110   0.2400  0.2360   0.3379   0.3920     0.4430

n = 3
  0.2    -0.0013   0.0003   0.0022  0.0019   0.0377   0.0346     0.0391
  0.3    -0.0072  -0.0034   0.0057  0.0051   0.0595   0.0569     0.0660
  0.4    -0.0037   0.0028   0.0113  0.0106   0.0837   0.0817     0.0995
  0.5    -0.0166  -0.0084   0.0194  0.0188   0.1116   0.1091     0.1345
  0.6    -0.0137   0.0029   0.0310  0.0299   0.1401   0.1404     0.1779
  0.7    -0.0103   0.0133   0.0499  0.0478   0.1716   0.1735     0.2242
  0.8    -0.0384  -0.0052   0.0730  0.0733   0.2136   0.2172     0.2656

n = 4
  0.2    -0.0012   0.0001   0.0018  0.0013   0.0337   0.0288     0.0332
  0.3    -0.0049  -0.0024   0.0039  0.0029   0.0494   0.0431     0.0523
  0.4    -0.0015   0.0018   0.0071  0.0057   0.0669   0.0602     0.0755
  0.5    -0.0103  -0.0066   0.0113  0.0095   0.0858   0.0779     0.1007
  0.6    -0.0065   0.0019   0.0182  0.0150   0.1077   0.0990     0.1311
  0.7    -0.0015   0.0099   0.0281  0.0232   0.1309   0.1207     0.1637
  0.8    -0.0186  -0.0037   0.0423  0.0356   0.1623   0.1507     0.1957

n = 5
  0.2    -0.0012  -0.0001   0.0016  0.0011   0.0322   0.0265     0.0311
  0.3    -0.0039  -0.0019   0.0032  0.0022   0.0447   0.0376     0.0459
  0.4    -0.0008   0.0014   0.0054  0.0040   0.0585   0.0503     0.0635
  0.5    -0.0075  -0.0054   0.0080  0.0062   0.0721   0.0629     0.0831
  0.6    -0.0041   0.0011   0.0127  0.0097   0.0905   0.0794     0.1062
  0.7     0.0012   0.0079   0.0190  0.0144   0.1085   0.0949     0.1312
  0.8    -0.0120  -0.0030   0.0277  0.0217   0.1320   0.1172     0.1566

n = 6
  0.2    -0.0014  -0.0002   0.0016  0.0011   0.0315   0.0255     0.0302
  0.3    -0.0033  -0.0016   0.0028  0.0019   0.0424   0.0347     0.0426
  0.4    -0.0006   0.0011   0.0046  0.0032   0.0538   0.0451     0.0569
  0.5    -0.0058  -0.0046   0.0064  0.0047   0.0641   0.0547     0.0729
  0.6    -0.0030   0.0006   0.0100  0.0072   0.0800   0.0683     0.0914
  0.7     0.0023   0.0066   0.0144  0.0103   0.0947   0.0804     0.1115
  0.8    -0.0087  -0.0026   0.0203  0.0151   0.1134   0.0975     0.1324

n = 7
  0.2    -0.0014  -0.0002   0.0016  0.0010   0.0312   0.0249     0.0299
  0.3    -0.0029  -0.0014   0.0026  0.0017   0.0411   0.0331     0.0407
  0.4    -0.0004   0.0009   0.0041  0.0028   0.0509   0.0421     0.0529
  0.5    -0.0048  -0.0040   0.0055  0.0039   0.0591   0.0497     0.0664
  0.6    -0.0024   0.0001   0.0084  0.0058   0.0732   0.0613     0.0818
  0.7     0.0028   0.0057   0.0117  0.0080   0.0858   0.0713     0.0986
  0.8    -0.0068  -0.0023   0.0161  0.0115   0.1011   0.0848     0.1163

Table 6: Bias, MSE and median absolute deviation of the quantile-based (QB) and GPV estimators, and the average standard error (second-order corrected) of the QB estimator, for different points of density estimation (v), numbers of bidders (n) and auctions (L), sample size nL = 4200, and the distribution parameter α = 1/2

                Bias               MSE        Med abs deviation   Std err
   v        QB       GPV       QB      GPV       QB      GPV        QB

n = 2
  0.2    -0.0186  -0.0102   0.0220  0.0576   0.1195   0.1891     0.1497
  0.3    -0.0201   0.0018   0.0343  0.1059   0.1479   0.2512     0.1886
  0.4    -0.0458  -0.0190   0.0706  0.1409   0.1737   0.2902     0.2269
  0.5    -0.0625   0.0010   0.0548  0.1800   0.1790   0.3330     0.2486
  0.6    -0.0706  -0.0137   0.5800  0.1700   0.2100   0.3238     0.7302
  0.7    -0.1047   0.0020   0.0756  0.1771   0.2107   0.3397     0.2954
  0.8    -0.1042   0.0107   0.2375  0.1719   0.2342   0.3332     0.5659

n = 3
  0.2    -0.0124  -0.0040   0.0144  0.0241   0.0976   0.1247     0.1194
  0.3    -0.0110  -0.0009   0.0213  0.0412   0.1163   0.1631     0.1463
  0.4    -0.0302  -0.0110   0.0299  0.0572   0.1353   0.1892     0.1694
  0.5    -0.0323   0.0030   0.0352  0.0770   0.1482   0.2242     0.1963
  0.6    -0.0596  -0.0094   0.0393  0.0781   0.1518   0.2214     0.2091
  0.7    -0.0763   0.0053   0.1213  0.0948   0.1771   0.2495     0.2785
  0.8    -0.0742   0.0149   0.0984  0.0997   0.1841   0.2539     0.2962

n = 4
  0.2    -0.0089  -0.0006   0.0109  0.0136   0.0848   0.0946     0.1017
  0.3    -0.0070  -0.0004   0.0146  0.0219   0.0969   0.1193     0.1212
  0.4    -0.0199  -0.0072   0.0206  0.0308   0.1140   0.1393     0.1399
  0.5    -0.0146   0.0032   0.0278  0.0418   0.1287   0.1653     0.1646
  0.6    -0.0393  -0.0061   0.0284  0.0432   0.1301   0.1662     0.1750
  0.7    -0.0438   0.0048   0.0469  0.0565   0.1466   0.1927     0.2027
  0.8    -0.0530   0.0128   0.0455  0.0627   0.1534   0.2018     0.2164

n = 5
  0.2    -0.0067   0.0015   0.0089  0.0092   0.0768   0.0780     0.0903
  0.3    -0.0046   0.0004   0.0110  0.0137   0.0842   0.0946     0.1048
  0.4    -0.0142  -0.0053   0.0156  0.0195   0.0992   0.1106     0.1201
  0.5    -0.0077   0.0035   0.0208  0.0261   0.1130   0.1304     0.1400
  0.6    -0.0278  -0.0039   0.0211  0.0273   0.1136   0.1320     0.1500
  0.7    -0.0299   0.0037   0.0292  0.0366   0.1277   0.1549     0.1699
  0.8    -0.0363   0.0102   0.0329  0.0419   0.1353   0.1649     0.1838

n = 6
  0.2    -0.0052   0.0028   0.0076  0.0069   0.0712   0.0678     0.0824
  0.3    -0.0030   0.0012   0.0087  0.0096   0.0753   0.0792     0.0934
  0.4    -0.0107  -0.0042   0.0124  0.0136   0.0886   0.0925     0.1059
  0.5    -0.0046   0.0037   0.0162  0.0180   0.1005   0.1079     0.1221
  0.6    -0.0206  -0.0026   0.0164  0.0189   0.1009   0.1097     0.1316
  0.7    -0.0213   0.0029   0.0216  0.0255   0.1142   0.1291     0.1478
  0.8    -0.0257   0.0084   0.0249  0.0295   0.1206   0.1383     0.1601

n = 7
  0.2    -0.0041   0.0038   0.0068  0.0056   0.0672   0.0611     0.0767
  0.3    -0.0019   0.0018   0.0073  0.0072   0.0689   0.0688     0.0851
  0.4    -0.0086  -0.0034   0.0103  0.0101   0.0806   0.0800     0.0954
  0.5    -0.0029   0.0037   0.0131  0.0132   0.0907   0.0925     0.1088
  0.6    -0.0159  -0.0019   0.0132  0.0139   0.0908   0.0940     0.1176
  0.7    -0.0156   0.0025   0.0171  0.0188   0.1027   0.1106     0.1313
  0.8    -0.0185   0.0072   0.0202  0.0218   0.1094   0.1186     0.1427
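For each design cell, Tables 4-6 summarize estimation error across Monte Carlo replications by three statistics; we read "Med abs deviation" as the median of |f-hat(v) - f(v)| across replications. A minimal sketch of the computation, with function and variable names ours:

    import numpy as np

    def accuracy_metrics(estimates, f_true):
        # Monte Carlo summaries for one (v, n, alpha) cell: bias and MSE are
        # the mean error and mean squared error across replications; the
        # median absolute deviation is the median of |estimate - truth|.
        e = np.asarray(estimates, dtype=float) - f_true
        return {"bias": e.mean(),
                "mse": (e ** 2).mean(),
                "med_abs_dev": np.median(np.abs(e))}

    # Hypothetical usage, comparing the two estimators at a fixed point v:
    # accuracy_metrics(qb_estimates, f_true)   # quantile-based (QB)
    # accuracy_metrics(gpv_estimates, f_true)  # GPV pseudo-values estimator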
