Dividing and Discarding: A Procedure for Taking Decisions with Non-transferable Utility

Vinicius Carrasco†    William Fuchs‡

October 1, 2009

Abstract. We consider a setting in which two players must take a single action. The analysis is done within a private values model in which (i) the players' preferences over actions are private information, (ii) utility is non-transferable, (iii) implementation is Bayesian, and (iv) the welfare criterion is utilitarian. We characterize an optimal allocation rule. Instead of asking the agents to report their types directly, this allocation can be implemented dynamically: the agents are asked whether they are to the left or to the right of a given cutoff; if both reports agree, the section of the interval which neither preferred is discarded, and the process continues until one agent chooses left and the other right. In that case, the last cutoff is implemented. When types are uniformly distributed, this implementation can be carried out by a Principal who lacks commitment, implying that this process is an optimal communication protocol.

1 Introduction

* We have benefited from conversations with Ricardo Alonso, Simon Board, Wouter Dessein, Sergio Firpo, Niko Matouschek, Juan P. Torres Martinez, Roger Myerson, Alessandro Pavan, Phil Reny, Leo Rezende, Yuliy Sannikov, Andy Skrzypacz and Balazs Szentes, and from seminar participants at MIT, Chicago, SED, GTS meetings, Chicago GSB, Stanford, Duke, Northwestern, Fucape, PUC-Rio, USP and EPGE-FGV. We are particularly thankful to Humberto Moreira for comments and endless conversations about Lagrangian methods, and to Ferdinando Monte for careful comments on an early draft. Antonio Sodré provided excellent research assistance.
† Department of Economics, PUC-Rio.
‡ Department of Economics, University of Chicago.

Many situations require two agents to take a joint action. Some examples are: managers of two different divisions within a firm, tariff negotiations in a trade bloc, Monetary Union members deciding on monetary policy, parties in a political coalition, and two members of a household. Before reaching a decision, it is common for the agents to be involved in long conversations or negotiations that can take several rounds. Typically, a broad set of alternatives is considered at first, and the set of alternatives "on the table" is slowly refined until a decision is reached. A conflict naturally arises between the agents' incentive to share information so that a better decision is taken and their fear that, if they reveal too much information, the other party might take advantage of it. A way around this problem is to reveal information coarsely at first, and slowly refine it as the agents learn that their interests are more aligned. In this case, by sharing more information, a better decision for both can


be attained. In contrast, once the agents learn their positions conflict, there is no more scope for further communication.

In this paper, we study the problem of finding an optimal mechanism for a setting in which two agents have to take a joint action. We consider the case in which the players' preferences over actions are private information. Utility is non-transferable, the common action to be chosen belongs to an interval, and implementation is Bayesian. The lack of transfers and the focus on Bayesian implementation make this a very hard problem to solve, since standard mechanism design techniques cannot be readily applied.¹ Aligning incentives is hard because the scope for the players to misreport their preferences is very large and the instruments available to induce truthfulness are limited. We further restrict attention to the case where the agents' preferences depend only on their own private information. Hence, unlike other papers that analyze decisions in committees, there is no advantage in sharing information to uncover some underlying truth.² This assumption lowers the incentives for agents to be truthful.

For collective decision problems like ours, if, on the one hand, utility is non-transferable and implementation is in dominant strategies, it has been widely known since the seminal work of Moulin (1980) that decisions will be taken in accordance with a "min-max" rule (see Moulin (1980) and Sprumont (1995)). On the other hand, when transfers are available, and the players' utility is quasi-linear and satisfies a single crossing condition, one can solve for the optimal (Bayesian) mechanism using the virtual utility representation of the players' preferences and applying standard maximization techniques. Once an optimum is found (and upon verifying some monotonicity requirements), one can back out the transfers necessary to satisfy incentive compatibility.
Little is known about how to solve for optimal mechanisms with non-transferable utility and Bayesian implementation, even in very particular settings. Indeed, since the seminal works of Holmstrom (1984) and Melumad and Shibano (1991), most of the mechanism design papers on decision taking in settings in which side payments are not allowed have focused on the case of a single informed agent, where, obviously, there is no meaningful distinction between Bayesian and dominant strategy implementation.³ To the best of our knowledge, the only exception is Martimort and Semenov (2008), who have considered an optimal design problem for the case in which there are two informed agents but, nevertheless, have focused on implementation in dominant strategies.⁴

We show that, when the agents' preferences are quadratic, an optimal Bayesian mechanism is fully described by a sequence of "cutoff" points $\{c_n\}_{n=1}^{\infty}$, $c_n \in (0,1)$, and can be thought of as being implemented by a mediator through a sequence of binary questions. In stage $n$, the mediator simultaneously asks the agents whether their favorite actions lie above or below the cutoff $c_n$. If their reports agree, the side of the cutoff which neither preferred is discarded, and the remaining side is further divided using the sequence of cutoffs (one moves to the cutoff $c_{2n}$ if both answer "below", and to the cutoff $c_{2n+1}$ otherwise) until one agent chooses above and the other below. In that case, the cutoff of the last remaining interval is implemented. We therefore name this class of mechanisms the Divide and Discard mechanisms (DD for short). Fleckinger (2008) has

¹ If the decision, choices and valuations were instead binary, a simple voting mechanism would be able to attain the efficient outcome. See, for example, the analysis of enforceable voting by Maggi and Morelli (2006). Alternatively, if transfers were possible and players had quasi-linear utility, the problem could be easily solved using the expected externality mechanisms proposed by Arrow (1979) and d'Aspremont and Gerard-Varet (1979).
² See, for example, Persico (2004).
³ See, for instance, Armstrong (1995), Athey et al (2005), Amador et al (2006), Alonso and Matouschek (2008), Goltsman et al (2009) and Kovac and Mylanov (2009).
⁴ In spite of their restriction to quadratic preferences (which could, in principle, allow for different outcomes), they prove that the optimal mechanism is a min-max rule as in Moulin (1980).


previously studied a particular DD allocation rule. However, his analysis is limited to the quadratic-uniform case and, more importantly, in contrast to our paper, rather than proving optimality, he only shows that this rule generates an improvement over the optimal ex-post incentive compatible allocation described in Moulin (1980).

We first prove the optimality of DD allocation rules for the case in which preferences are quadratic and types are distributed according to a general log-concave distribution. We achieve this by allowing for general stochastic mechanisms (as in Goltsman et al (2009) and Kovac and Mylanov (2009)) and applying Lagrangian optimization techniques (as in Amador et al (2006)). We then extend our analysis to the more general single-peaked preferences case. The complexity of the problem increases substantially because marginal utilities are neither linear in the allocation nor separable in the decision to be taken and the agents' types.⁵ These two features, along with the fact that we deal with a multiple-agent design problem with Bayesian IC constraints, make the problem of guessing the appropriate multipliers too hard.⁶ Nonetheless, by relying on a more constructive method, we are still able to establish the optimality of the DD allocations for the case in which (i) types are uniformly distributed and (ii) attention is restricted to mechanisms that satisfy a stronger monotonicity condition than the one implied by incentive compatibility.⁷ Although we conjecture that the optimality of a DD mechanism would hold irrespective of (i) and (ii), we have not been able to formally prove so.

The DD mechanisms are appealing for a number of reasons. In spite of the complexity brought about by the lack of side payments, they are extremely simple mechanisms. The dynamic implementation resembles many real world situations (such as bilateral trade agreements).
There are many rounds of negotiations, and alternatives are successively discarded until an agreement is reached. The mediator provides a way to mitigate the conflict that arises between the agents' incentives to share information in order to achieve a better allocation, and their fear that, if they reveal too much information about their preferred actions, the other player may manipulate the allocation to his advantage. This is resolved by having agents report information coarsely at first, and gradually refine their reports as they learn that their interests are partially aligned. This hints at why contracting parties that negotiate sequentially may commit not to reconsider choices that were eliminated in previous rounds, i.e., "agree to rule out" (Hart and Moore (2007)): as players move along further rounds of negotiation, they can be confident about the alignment of their interests. Therefore, whenever an agreement is reached, it will necessarily deliver an outcome that cannot be Pareto dominated by those that were ruled out; i.e., the outcome implied by a DD mechanism is renegotiation-proof.

If one literally interprets the mediator as a player who is in charge of taking the decision on behalf of the informed agents, a DD mechanism can be implemented by a mediator who lacks commitment for the case in which types are uniformly distributed. This is important because, in many circumstances, it might be difficult for a mediator or principal to commit to a mechanism. Within a firm, for instance, it is not clear that a CEO with authority will commit not to overrule the divisions' managers. The fact that an optimal mechanism can be implemented even without commitment is also surprising. In general, we would expect that

⁵ Many authors in the cheap talk literature and in the applied mechanism design literature restrict their analysis to the uniform-quadratic case.
⁶ Amador et al (2006) deal with a single-agent design problem. Also, they consider the case in which allocations and types interact multiplicatively in the agent's preferences, so that the derivative of the agent's utility with respect to his type only depends on the allocation. In terms of solving the problem, this plays a similar role to the separability we obtain for the quadratic case.
⁷ We also require a monotone hazard condition as in Athey et al (2005).


allowing the principal to commit would deliver strictly better outcomes.

The implementation of DD allocations may call for several rounds of communication. Remarkably, for the case in which preferences are quadratic and types are uniformly distributed, the same expected value can be attained with just one round of cheap talk. The allocation that attains this value was derived by Alonso, Dessein and Matouschek (2008a) and (2008b) (ADM from now on).⁸ In our setting, an advantage of long cheap talk is that the resulting allocation is renegotiation proof. With just one round of cheap talk, both players could report their preferred allocation to be in the same partition element, but would not be allowed to divide this element further into smaller subdivisions in search of a better allocation.

In addition to Moulin (1980), Barberà and Jackson (1994) and Barberà (2001) have also studied the implementation of social choice functions in dominant strategies in more general settings than ours. In contrast to their work, we only require the allocations to be Bayesian incentive compatible. On the one hand, this makes the task of finding an optimal rule difficult, as it is somewhat hard to pin down the set of all allocations that are interim incentive compatible. On the other hand, it allows for a better outcome for the players.

Our work also relates to the cheap talk literature. ADM extend Crawford and Sobel's (1982) result that, if communication takes place just once, players communicate their private information coarsely by reporting intervals rather than precise types.⁹ In the ADM model, as in ours, the existence of a set of states for which there is perfect alignment of incentives for all players makes it possible to have a (countably) infinite number of messages being sent even with just one round of communication. In contrast to ADM, and in the spirit of Krishna and Morgan (2004) and Aumann and Hart (2003), who analyze multistage communication, we allow for long cheap talk.

By showing that a DD mechanism is optimal, we prove that the communication protocol it induces is optimal in our setting. As the ADM allocation generates the same expected value as the one obtained with an optimal mechanism, communicating through partitions with infinitely many intervals is also an optimal communication protocol. Goltsman et al (2008) study three different processes within the uniform-quadratic case of the Crawford and Sobel (1982) model: (i) (possibly long) cheap talk (negotiations), (ii) non-binding recommendations by a third party (mediation), and (iii) binding recommendations by a third party (arbitration). They show that, if the misalignment of incentives is low, negotiation and mediation lead to the same outcome. Moreover, only two rounds of cheap talk are needed to obtain the mediation outcome when the conflict of interest (an ex-ante known parameter) is low. However, arbitration always dominates the other protocols. We, in turn, show that when types are uniformly distributed, arbitration and negotiations lead to the same outcomes in our setting. Goltsman et al (2008) also show that the optimal arbitration rule is non-stochastic; this also holds true in our setting.

The paper is organized as follows. In the next section, we introduce the model and derive some general properties of the problem faced by the two agents. In Section 3, we introduce the general DD mechanisms. In Section 4, we establish the optimality of a DD allocation for the case in which preferences are quadratic. Section 5 shows that DD allocations are optimal for a more general class of single-peaked preferences when

⁸ Their very interesting papers focus mainly on the issue of centralized vs. decentralized decision making. In both cases, they consider quadratic preferences and only decision making with no commitment and one round of communication. Unlike our paper or ADM (2008b), in ADM (2008a) the authors allow for two actions and include a coordination failure cost from taking different actions. Our model relates to the limiting case in which this cost is prohibitively expensive.
⁹ Although partition equilibria are quite common in cheap talk environments, Kartik, Ottaviani and Squintani (2007) have shown that there always exists a fully separating equilibrium when costly communication is introduced.


types are uniformly distributed and attention is restricted to mechanisms that satisfy a stronger monotonicity condition than the one implied by incentive compatibility. In Section 6, we look at the dynamic implementation of DD allocations and compare it to an implementation with just one round of cheap talk. Section 7 concludes. All proofs are relegated to the Appendix.

2 The Model

We consider a setting in which two ex-ante symmetric players, $i = 1, 2$, have to take a joint action $a$ belonging to a (potentially large) compact set $A \subset \mathbb{R}$, with $[0,1] \subseteq A$. Player $i$'s type is determined by his favorite action, $\theta_i \in [0,1]$. While we deal with more general preferences in Section 5, for most of the paper we take the players' (Bernoulli) utility function to be:

$$u_i(a, \theta_i) = -(a - \theta_i)^2.$$

The favorite action $\theta_i$ belongs to $[0,1]$ and is distributed according to an absolutely continuous, log-concave distribution $F(\theta_i)$, with density $f(\theta_i) > 0$.¹⁰ Types are i.i.d. and privately known by the players. To simplify some of the arguments in our proofs, we will additionally assume $f(\theta_i)$ is symmetric around $\tfrac12$; none of the results depends on this assumption.

2.1 The Problem

Before knowing their private types, the agents specify the rules of the mechanism by which the joint action will be chosen. The allocation rule (an enforceable contract) is a functional that maps the players' reported types $\hat\theta_1, \hat\theta_2$ into a (cumulative) distribution over possible actions:¹¹

$$P(\cdot \mid \hat\theta) : A \to [0,1].$$

Since we do not have quasi-linear utilities and side payments, the agents' bargaining power will play an important role in the determination of the optimal mechanism. Given that the agents are ex-ante symmetric, we focus our analysis on the case in which, ex ante, they choose an incentive compatible mechanism to maximize the equally weighted sum of their utilities. A generalization of our results to arbitrary welfare weights/bargaining power is not immediate, but we leave it for future work since we wish to focus on incentive and information transmission issues. The agents' problem is to find a stochastic mechanism $\{P(a \mid \theta)\}_{a \in A,\ \theta \in [0,1]^2}$ to solve:

$$\max_{\{P(a \mid \theta)\}} \; \sum_i \mathbb{E}\left[\int_A u_i(a, \theta_i)\, dP(a \mid \theta)\right]$$

¹⁰ The log-concavity assumption is standard in the mechanism design literature. Also, a large set of commonly used distributions can be shown to be log-concave (see Bagnoli and Bergstrom (1989)).
¹¹ We can, without loss, restrict attention to Direct Mechanisms. This follows from the Revelation Principle (Myerson, 1981).


subject to

$$\mathbb{E}_{\theta_{-i}}\left[\int_A u_i(a, \theta_i)\, dP(a \mid \theta_i, \theta_{-i})\right] \;\geq\; \mathbb{E}_{\theta_{-i}}\left[\int_A u_i(a, \theta_i)\, dP(a \mid \hat\theta_i, \theta_{-i})\right], \quad i = 1, 2, \;\; \theta_i, \hat\theta_i \in [0,1].$$
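To build some intuition for the objective, here is a numerical sketch of our own (not part of the paper; `welfare`, `constant_half`, and `average` are hypothetical names) that Monte-Carlos the equally weighted expected utilities of two simple candidate rules in the uniform-quadratic case:

```python
import numpy as np

def welfare(mechanism, n_draws=200_000, seed=0):
    """Monte Carlo estimate of E[u_1 + u_2] for a deterministic decision
    rule a(theta_1, theta_2), with quadratic utility and i.i.d. U[0,1] types."""
    rng = np.random.default_rng(seed)
    t1, t2 = rng.uniform(size=(2, n_draws))
    a = mechanism(t1, t2)
    return np.mean(-(a - t1) ** 2 - (a - t2) ** 2)

# Two benchmark rules (neither is the optimal mechanism):
constant_half = lambda t1, t2: 0.5        # ignores reports; trivially IC
average = lambda t1, t2: (t1 + t2) / 2    # first-best, but not incentive compatible

print(welfare(constant_half))  # ≈ -1/6
print(welfare(average))        # ≈ -1/12
```

Since the constant rule is incentive compatible and the first-best is an upper bound, the value of the agents' problem must lie between these two benchmarks.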

2.2 General Properties

Before studying optimality, we derive some general properties of the problem which will be useful for our analysis.¹² We first establish existence of a solution for the general problem.

Theorem 1 (Existence) A solution to the agents' problem exists.

This follows from the fact that the objective is a bounded (and, therefore, continuous) linear functional and the constraint set is compact in the weak-* topology. Noticing that the constraints and the objective are linear in $P(a \mid \theta_i, \theta_{-i})$, and that the constraints are weak inequalities, we also have:

Lemma 1 (Convexity) The agents' problem is convex.

Convexity is necessary for us to use the general Lagrangian theorems by which we prove our results in Section 4. Last, we wish to characterize the set of incentive compatible mechanisms. Given a pair of announcements $\hat\theta$ and a (stochastic) mechanism $P(a \mid \hat\theta)$, one can define the conditional (on the announcements) expected action and variance as, respectively:

$$a(\hat\theta) = \int_A a \, dP(a \mid \hat\theta), \qquad \sigma^2(\hat\theta) = \int_A \big(a - a(\hat\theta)\big)^2 dP(a \mid \hat\theta).$$

Note that, when the agents have quadratic preferences, their payoffs only depend on the mean and variance of a mechanism:

$$U_i(\theta_i) = \mathbb{E}_{\theta_{-i}}\left[-\int_A (a - \theta_i)^2\, dP(a \mid \hat\theta)\right] = -\,\mathbb{E}_{\theta_{-i}}\Big[\big(a(\hat\theta) - \theta_i\big)^2\Big] - \mathbb{E}_{\theta_{-i}}\Big[\sigma^2(\hat\theta)\Big].$$
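The second equality follows from adding and subtracting the conditional mean inside the square; the cross term vanishes because $a - a(\hat\theta)$ has zero conditional expectation:

```latex
-(a-\theta_i)^2
= -\Big[\big(a - a(\hat\theta)\big) + \big(a(\hat\theta)-\theta_i\big)\Big]^2
\quad\Longrightarrow\quad
\int_A -(a-\theta_i)^2\, dP(a \mid \hat\theta)
= -\big(a(\hat\theta)-\theta_i\big)^2 - \sigma^2(\hat\theta).
```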

As is standard in the mechanism design literature, it is useful to use the fact that the agents' preferences satisfy a single crossing condition to replace the incentive compatibility constraints by (i) the integral representation of the players' utility implied by the Envelope Theorem and (ii) a monotonicity condition that the mechanism has to satisfy. The next result states this formally.

Lemma 2 (IC Representation) Letting

$$U_i(\theta_i) = \max_{\hat\theta_i} \Big\{ -\,\mathbb{E}_{\theta_{-i}}\Big[\big(a(\hat\theta_i, \theta_{-i}) - \theta_i\big)^2\Big] - \mathbb{E}_{\theta_{-i}}\Big[\sigma^2(\hat\theta_i, \theta_{-i})\Big] \Big\} = -\,\mathbb{E}_{\theta_{-i}}\Big[\big(a(\theta) - \theta_i\big)^2\Big] - \mathbb{E}_{\theta_{-i}}\Big[\sigma^2(\theta)\Big], \tag{1}$$

¹² The results in this section can be extended beyond the quadratic preferences case. We present the results just for the quadratic case for expositional reasons.


Incentive Compatibility is equivalent to:

$$U_i(\theta_i) = \begin{cases} U_i\big(\tfrac12\big) + 2\displaystyle\int_{1/2}^{\theta_i} \mathbb{E}_{\theta_{-i}}\big[a(s, \theta_{-i}) - s\big]\, ds & \text{if } \theta_i > \tfrac12, \\[1ex] U_i\big(\tfrac12\big) - 2\displaystyle\int_{\theta_i}^{1/2} \mathbb{E}_{\theta_{-i}}\big[a(s, \theta_{-i}) - s\big]\, ds & \text{if } \theta_i \leq \tfrac12, \end{cases} \tag{2}$$

with $\mathbb{E}_{\theta_{-i}}[a(\theta_i, \theta_{-i})]$ non-decreasing in $\theta_i$.¹³

The proof follows from Milgrom and Segal (2002), who, in their Theorem 2, provide the most general representation of the Envelope Theorem. Such a general representation is necessary if one wishes to consider incentive mechanisms with arbitrary outcome functions. This is particularly relevant in our setting since, as we show below, the optimal mechanism will be discontinuous at a countably infinite number of points.¹⁴ Nonetheless, following Milgrom and Segal (2002), we can show that, even though the agents' indirect utility will not be differentiable at a countably infinite number of points, it can be represented as an integral of the partial derivative of the agent's payoff function with respect to his type (i.e., it is absolutely continuous).
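Concretely, for the quadratic case the integrand in (2) is just the partial derivative of the agent's payoff with respect to his type, evaluated at truthful play:

```latex
\frac{\partial}{\partial \theta_i}\Big(-\mathbb{E}_{\theta_{-i}}\big[(a(\hat\theta_i,\theta_{-i})-\theta_i)^2\big]
- \mathbb{E}_{\theta_{-i}}\big[\sigma^2(\hat\theta_i,\theta_{-i})\big]\Big)
= 2\,\mathbb{E}_{\theta_{-i}}\big[a(\hat\theta_i,\theta_{-i})-\theta_i\big],
```

so that $U_i'(\theta_i) = 2\,\mathbb{E}_{\theta_{-i}}[a(\theta_i,\theta_{-i})-\theta_i]$ almost everywhere, which integrates from the reference type $\tfrac12$ to give (2).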

3 Divide and Discard Mechanisms

The Divide and Discard (DD) allocations are defined by a sequence of cutoffs $\{c_n\}_{n=1}^{\infty}$, $c_n \in (0,1)$. Given the cutoffs, the mechanism can be thought of as a sequence of "up (1) or down (0)" questions such that if, given a cutoff $c_n$, the agents report to be on different sides of the cutoff, i.e., $(0,1)$ or $(1,0)$, then the action $c_n$ is implemented. If, instead, the agents report to be on the same side of the cutoff, $(0,0)$ or $(1,1)$, then a new up-or-down question is asked, where the new cutoff following $c_n$ is $c_{2n}$ after $(0,0)$ and $c_{2n+1}$ after $(1,1)$, with $c_{2n} < c_n < c_{2n+1}$ and $c_1 = 1/2$.

The next result establishes the existence of DD mechanisms such that, at each stage, the agents have

incentives to be truthful:

Proposition 1 For each $F(\theta_i)$, there exists a unique DD allocation characterized by a sequence of cutoffs $\{c_n\}_{n=1}^{\infty}$ such that at each stage it is weakly dominant for the agents to report truthfully on which side of the cutoff their favorite action lies.

The cutoffs must be chosen in such a way that an agent cannot benefit from misrepresenting his type when he is pivotal. The symmetry of the problem around $\tfrac12$ implies that $c_1$, the first cutoff, can be set to $\tfrac12$, as long as the remaining cutoffs are placed symmetrically around $\tfrac12$.

¹³ The symmetry of the problem makes it natural to pick $\theta = \tfrac12$ as the reference type. The reader might be more used to seeing the highest or lowest type picked as the reference type. Note that this choice is in general arbitrary and made for convenience.
¹⁴ Previous versions of the Envelope Theorem typically rely on continuity or differentiability assumptions on the choice set. See Section 3 of Milgrom and Segal (2002) for a detailed discussion and references.


When agents learn their interests are more aligned (i.e., their favorite actions lie on the same side of a given cutoff), by (credibly) sharing more information regarding their preferences, a better decision for both is attained. In contrast, once the agents learn their positions conflict (i.e., their favorite actions lie on different sides of a given cutoff), there is no more scope for further communication. As a consequence, DD allocations are renegotiation proof. We will refer to the incentive compatible DD allocation simply by DD^IC.

While, in general, it is not easy to compute the cutoff points in closed form, for the case in which $\theta_i \sim U[0,1]$ the DD^IC's cutoffs correspond to the dyadic rational numbers over $[0,1]$: $\{c_n\}_{n=1}^{\infty} = \big\{\tfrac12, \tfrac14, \tfrac34, \tfrac18, \tfrac38, \tfrac58, \tfrac78, \ldots\big\}$.¹⁵ Graphically, the resulting allocation is:

[Figure: the unit square of type pairs $(\theta_1, \theta_2)$, partitioned by the dyadic cutoffs; each rectangular region is labeled with the action it implements ($\tfrac18$, $\tfrac14$, $\tfrac38$, $\tfrac12$, $\tfrac58$, $\tfrac34$, $\tfrac78$, ...).]

The DD Allocation with Dyadic Cutoffs

Note that the DD mechanisms as described above are not direct revelation mechanisms. Nonetheless, they have a clear direct revelation counterpart in which the agents report their types and the mechanism internally goes through the sequential process for them.¹⁶ Although the probability of having an agent's type coincide with a cutoff is 0, for completeness, we assume that in those cases the mechanism flips a fair coin to answer the relevant up-or-down question. Other than for the cutoff types (a zero measure set), the allocation involves no randomization. Finally, note that if the sequential process of up-and-down questions is incentive compatible, then the direct revelation counterpart will also be incentive compatible.
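For the uniform case the sequential process is easy to simulate (a minimal sketch of our own, not the paper's code; `dd_outcome` is a hypothetical name): the live interval is bisected at each stage, so the cutoffs visited are exactly the dyadic rationals.

```python
def dd_outcome(theta1, theta2, max_rounds=50):
    """Divide and Discard with dyadic cutoffs: ask "up or down" relative to
    the midpoint of the remaining interval until the agents disagree, then
    implement the last cutoff. (Ties theta == c are ignored: measure zero.)"""
    lo, hi = 0.0, 1.0
    for _ in range(max_rounds):
        c = (lo + hi) / 2                  # current cutoff c_n
        up1, up2 = theta1 > c, theta2 > c  # truthful binary reports
        if up1 != up2:                     # (0,1) or (1,0): implement c_n
            return c
        if up1:                            # (1,1): discard [lo, c]
            lo = c
        else:                              # (0,0): discard [c, hi]
            hi = c
    return (lo + hi) / 2                   # truncation for near-identical types

print(dd_outcome(0.10, 0.90))  # disagree at c_1 = 1/2 -> 0.5
print(dd_outcome(0.60, 0.90))  # both up, then disagree at 3/4 -> 0.75
print(dd_outcome(0.30, 0.45))  # both down, both up, then disagree at 3/8 -> 0.375
```

The implemented action always lies weakly between the two favorite actions, consistent with the renegotiation-proofness discussed above.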

4 Optimality of DD Mechanisms

4.1 The Lagrangian

In settings with quasi-linear preferences and side payments, to find an optimal mechanism one proceeds by simply substituting the incentive compatible representation of the players' utility into the objective and maximizing it pointwise. If the resulting allocation turns out to satisfy the monotonicity constraints, it will

¹⁵ Truncated DD allocations, which randomly assign one agent to choose the action if there is no agreement after N rounds, can be easily computed. Also, since the expected number of rounds tends to be quite low (2 for the uniform case), the loss in terms of expected payoffs for high values of N is minuscule and converges to zero at an exponential rate.
¹⁶ See Footnote 25 in Section 5 for a characterization of the direct revelation mechanism for the uniform case.


be the solution. Indeed, with side payments, any allocation satisfying the monotonicity constraint can be made incentive compatible by an appropriate choice of side payments. In our setting, the fact that players cannot make side payments forces us to consider the incentive compatibility constraints explicitly, which, in turn, makes the problem fairly hard. To tackle this difficulty, we apply Lagrangian methods to show that the DD^IC mechanism is optimal when preferences are quadratic. The idea is simple: having set up a Lagrangian that incorporates all relevant constraints, we construct Lagrange multipliers (which must lie in suitable normed vector spaces) such that the DD^IC mechanism, our candidate for an optimum, maximizes the Lagrangian over the set of all IC mechanisms.

Define the Lagrangian functional as

$$\begin{aligned}
\mathcal{L}\Big(U_i\big(\tfrac12\big),\ \mathbb{E}_{\theta_{-i}}[a(\cdot)],\ \mathbb{E}_{\theta_{-i}}[\sigma^2(\cdot)]\ \Big|\ (\lambda_i(\cdot),\gamma_i(\cdot))_{i=1,2}\Big)
= \sum_{i=1}^{2}\Bigg[
&\int_0^1 U_i(\theta_i)\, f(\theta_i)\, d\theta_i \\
&+ \int_{1/2}^{1}\!\Big( U_i\big(\tfrac12\big) + 2\!\int_{1/2}^{\theta_i}\!\mathbb{E}_{\theta_{-i}}\big[a(s,\theta_{-i})-s\big]\,ds - U_i(\theta_i) \Big)\, d\lambda_i(\theta_i) \\
&+ \int_{0}^{1/2}\!\Big( U_i\big(\tfrac12\big) - 2\!\int_{\theta_i}^{1/2}\!\mathbb{E}_{\theta_{-i}}\big[a(s,\theta_{-i})-s\big]\,ds - U_i(\theta_i) \Big)\, d\lambda_i(\theta_i) \\
&+ \int_0^1 \gamma_i(\theta_i)\, d\,\mathbb{E}_{\theta_{-i}}\big[a(\theta_i,\theta_{-i})\big]
\Bigg],
\end{aligned}$$

with $U_i(\theta_i)$ as in (1), and where the functions $\lambda_i(\theta_i)$ and $\gamma_i(\theta_i)$ are, respectively, the Lagrange multipliers on the integral representation of the agents' utility implied by incentive compatibility and on the monotonicity constraint.¹⁷

The following Lemma, which casts Theorem 1 (Chapter 8) of Luenberger (1969) in terms of the variables of our setting, implies that to establish optimality of a given allocation it suffices to show that there are multipliers for which the proposed allocation maximizes the Lagrangian within the set of feasible allocations.¹⁸

Lemma 3 (Optimality) An incentive compatible allocation $\big(U_i(\tfrac12), \mathbb{E}_{\theta_{-i}}[a(\cdot)], \mathbb{E}_{\theta_{-i}}[\sigma^2(\cdot)]\big)$ solves the agents' problem if, and only if, there exist non-decreasing functions $\{\lambda_i(\theta_i)\}_{\theta_i \in [0,1],\, i=1,2}$ and (non-negative) continuous functions $\{\gamma_i(\theta_i)\}_{\theta_i \in [0,1],\, i=1,2}$ for which:

$$\mathcal{L}\Big(U_i\big(\tfrac12\big),\ \mathbb{E}_{\theta_{-i}}[a(\cdot)],\ \mathbb{E}_{\theta_{-i}}[\sigma^2(\cdot)]\ \Big|\ (\lambda_i(\cdot),\gamma_i(\cdot))_{i=1,2}\Big)
\;\geq\;
\mathcal{L}\Big(\widetilde U_i\big(\tfrac12\big),\ \mathbb{E}_{\theta_{-i}}[\widetilde a(\cdot)],\ \mathbb{E}_{\theta_{-i}}[\widetilde\sigma^2(\cdot)]\ \Big|\ (\lambda_i(\cdot),\gamma_i(\cdot))_{i=1,2}\Big)$$

for all incentive compatible allocations $\big(\widetilde U_i(\tfrac12), \mathbb{E}_{\theta_{-i}}[\widetilde a(\theta_i)], \mathbb{E}_{\theta_{-i}}[\widetilde\sigma^2(\theta_i)]\big)$.

¹⁷ Note that we express the monotonicity constraints in the Lagrangian in very general terms. If we knew the expected action were differentiable, we could express the monotonicity constraint as the more familiar integral of the product of the multiplier $\gamma_i(\theta_i)$ and the derivative of the expected allocation, $d\,\mathbb{E}_{\theta_{-i}}[a(\theta)]/d\theta_i$. However, although increasing, the expected allocation may be discontinuous. In fact, this is the case for the DD allocations, which will be shown to be optimal, and which are discontinuous at a countable number of points.
¹⁸ A similar approach is used by Amador et al. (2006) to establish the optimality of minimum savings schemes in self-control problems.

In the Appendix, we construct multipliers for which the DD^IC maximizes the Lagrangian functional. Hence,

Theorem 2 (Optimality) A DD allocation is optimal in the class of all incentive compatible allocations when preferences are quadratic.

Since we construct multipliers for which the DD^IC mechanism maximizes a Lagrangian functional, the sufficiency part of Lemma 3 is what is really important for our proof of Theorem 2. While powerful, the Lagrangian methods we use to establish the optimality of DD allocations do not provide much insight into the result. To provide some intuition, in what follows we develop an argument to suggest why an optimal allocation must be such that $\mathbb{E}_{\theta_{-i}}[a(\theta)]$ is flat over regions.¹⁹

If one ignores the terms associated with the monotonicity constraints and sets $\lambda_i(\theta_i) = F(\theta_i)$, the Lagrangian can be written as

$$\underbrace{\sum_i \mathbb{E}_{\theta_{-i}}\!\Big[-\big(a(\tfrac12,\theta_{-i})-\tfrac12\big)^2-\sigma^2(\tfrac12,\theta_{-i})\Big]}_{A}
\;+\;\underbrace{\sum_i \mathbb{E}\!\left[\mathbb{E}_{\theta_{-i}}\big[a(\theta)-\theta_i\big]\,\frac{1-F(\theta_i)}{f(\theta_i)}\ \middle|\ \theta_i>\tfrac12\right]}_{B}
\;-\;\underbrace{\sum_i \mathbb{E}\!\left[\mathbb{E}_{\theta_{-i}}\big[a(\theta)-\theta_i\big]\,\frac{F(\theta_i)}{f(\theta_i)}\ \middle|\ \theta_i\le\tfrac12\right]}_{C}$$

Those familiar with the mechanism design literature will recognize this expression as the sum of the agents' (expected) virtual utilities. In fact, this is the reasoning behind setting $\lambda_i(\theta_i) = F(\theta_i)$: these are

the multipliers on the local IC constraints that would make the problem of maximizing the Lagrangian equivalent to the problem of maximizing the sum of the agents' (expected) virtual utilities, were the monotonicity constraints slack.

Consider the terms B and C in the expression above. For any non-decreasing $\mathbb{E}_{\theta_{-i}}[a(\theta)]$ (recall that incentive compatibility requires this) and any sets $\mathcal{A}$ and $\mathcal{B}$, one has that

$$\mathbb{E}\left[\mathbb{E}_{\theta_{-i}}[a(\theta)]\, \frac{1 - F(\theta_i)}{f(\theta_i)} \,\middle|\, \mathcal{A}\right] \;\leq\; \mathbb{E}\big[\mathbb{E}_{\theta_{-i}}[a(\theta)] \,\big|\, \mathcal{A}\big]\; \mathbb{E}\left[\frac{1 - F(\theta_i)}{f(\theta_i)} \,\middle|\, \mathcal{A}\right],$$

and

$$\mathbb{E}\left[\mathbb{E}_{\theta_{-i}}[a(\theta)]\, \frac{F(\theta_i)}{f(\theta_i)} \,\middle|\, \mathcal{B}\right] \;\geq\; \mathbb{E}\big[\mathbb{E}_{\theta_{-i}}[a(\theta)] \,\big|\, \mathcal{B}\big]\; \mathbb{E}\left[\frac{F(\theta_i)}{f(\theta_i)} \,\middle|\, \mathcal{B}\right],$$

for the expected value of the product of a non-decreasing function and a decreasing function is no larger than the product of the expected values (log-concavity ensures that $(1-F)/f$ is non-increasing and $F/f$ is non-decreasing).

¹⁹ Further intuition is provided in Section 5.
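This is the classical association (Chebyshev) inequality, and it can be checked numerically; a sketch of our own for the uniform case, where $(1-F(\theta))/f(\theta) = 1-\theta$, with the step function `g` an arbitrary non-decreasing stand-in for $\mathbb{E}_{\theta_{-i}}[a(\theta)]$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.5, 1.0, size=100_000)  # condition on the event theta_i > 1/2

g = np.floor(4 * theta) / 4   # non-decreasing step schedule (flat over regions)
h = 1.0 - theta               # (1 - F)/f for U[0,1]: strictly decreasing

lhs = (g * h).mean()          # E[g h | theta > 1/2]
rhs = g.mean() * h.mean()     # E[g | .] * E[h | .]
print(lhs <= rhs)             # True: g and h are negatively associated
```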


Hence, the terms B and C would be maximized by a schedule that is flat over regions. Since type 1/2's utility is larger the less variable a(1/2, θ_{-i}) is, term A would also benefit from a schedule which is constant by parts. The economic interpretation of why a flat allocation is optimal is simple: it curbs the agents' incentives to exaggerate their preferences. The DDIC allocation is one particular allocation for which E_{θ_{-i}}[a(θ)] is constant by parts. In the Appendix, we explicitly construct multipliers λ_i(θ_i) for which the DDIC is the allocation constant by parts that maximizes the whole Lagrangian.20 The construction is based on the following intuition. One can think of dλ_i(θ_i) as expressing the marginal/incremental cost of making E_{θ_{-i}}[a(θ_i, θ_{-i})] larger: in fact, since one is restricted by the Incentive Compatibility constraints to choose mechanisms such that dE_{θ_{-i}}[a(θ_i, θ_{-i})] ≥ 0, if one raises E_{θ_{-i}}[a(θ_i, θ_{-i})], one must also increase E_{θ_{-i}}[a(θ, θ_{-i})] for all θ > θ_i, which is potentially costly for the objective. The term dλ_i(θ_i) captures this marginal (shadow) cost.

Now, E_{θ_{-i}}[a(θ_i, θ_{-i})] evaluated at the DDIC jumps at the cutoff points {c_n}_n. If the DDIC is to be optimal, one must have marginal shadow costs that change precisely at those points. Hence, we construct λ_i(·) so that dλ_i(θ_i) jumps at {c_n}_n. Formally, since the set {c_n}_n is a dense subset of [0, 1], this amounts to constructing multipliers that are singular functions whose derivatives – which can be formally defined as distributions – jump at {c_n}_n.21,22 Those jumps are constructed to guarantee that, when evaluated at the DDIC, the marginal benefit (in terms of the objective) of raising E_{θ_{-i}}[a(θ_i, θ_{-i})] is exactly compensated by the marginal shadow cost such a raise implies in terms of the monotonicity constraint.
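A concrete example of a singular function in the sense used here is the Cantor function: it is non-decreasing and non-constant yet has zero derivative almost everywhere, so all of its "increase" occurs through its distributional derivative on a measure-zero set. The approximation below is our own illustrative code, unrelated to the specific multipliers constructed in the Appendix.

```python
def cantor(x, depth=40):
    """Approximate the Cantor ("devil's staircase") function on [0, 1]:
    non-decreasing and non-constant, yet with derivative zero almost
    everywhere -- the textbook example of a singular function."""
    value, scale = 0.0, 1.0
    for _ in range(depth):
        if x < 1.0 / 3.0:          # left third: recurse, output halves
            x, scale = 3.0 * x, scale / 2.0
        elif x > 2.0 / 3.0:        # right third: add the upper half step
            value += scale / 2.0
            x, scale = 3.0 * x - 2.0, scale / 2.0
        else:                      # middle third: the function is flat here
            return value + scale / 2.0
    return value
```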

5 General Preferences and Uniform Distribution

So far we have considered the case in which the agents' preferences are quadratic. In this section, we allow for more general preferences. Agent i's preferences are represented by a twice continuously differentiable (Bernoulli) utility function u_i(a, θ_i), with u_i(θ_i, θ_i) ≥ u_i(a, θ_i) for all a,

∂²u_i(a, θ_i)/∂a² < 0 < ∂²u_i(a, θ_i)/∂θ∂a,

and such that any two actions equidistant from θ_i lead to the same utility, i.e.:

||a₁ − θ_i|| = ||a₂ − θ_i||  ⇒  u_i(a₁, θ_i) = u_i(a₂, θ_i).

The above conditions imply that the agents' preferences are single peaked, and symmetric around the peak θ_i.

With general preferences u_i(a, θ_i), we are not able to come up with the appropriate guess for the multipliers on the monotonicity constraints which would allow us to use Lagrangian Theorems to verify the optimality of a DD mechanism. The main difficulty lies in that, when preferences are not quadratic, u_i(a, θ_i) is neither linear in a nor additively separable in a and θ_i. These two features, along with the fact that we deal with a multiple-agent mechanism design problem with Bayesian IC constraints, make the problem of guessing the multipliers non-trivial.23 Hence, instead of using Lagrangian methods, we rely on a more constructive method to establish optimality.

In terms of notation, it is more convenient to work with the realizations of the random variable implied by P(a|θ_i, θ_{-i}) rather than P(a|θ_i, θ_{-i}) itself. So we will now denote the allocation as a(θ_i, θ_{-i}, x), where x is a uniform (0, 1) random variable.24 More importantly, in order to make progress, we impose three restrictions. The first of them concerns the distribution of the agents' types, which we assume to be uniform so that F(θ_i) = θ_i.

20 This does not mean that there could not exist a different choice of multipliers for which an alternative allocation is also a maximizer of the Lagrangean.
21 A non-decreasing function f: [0, 1] → ℝ is singular if it is non-constant and such that f′(x) = 0 for almost all x in [0, 1] (see Royden, 1988).
22 See Lang (1993, Chapter 11) for a formal definition of Distributions (also known as Generalized Functions).

We also impose that, for all non-decreasing, IC allocations a(θ_i, θ_{-i}, x),

(∂²u_i(a(θ_i, θ_{-i}, x), θ_i)/∂θ_i∂a) (1 − θ_i)  and  (∂²u_i(a(θ_i, θ_{-i}, x), θ_i)/∂θ_i∂a) θ_i  are non-increasing.   (Monotone Hazard)

This is the same condition as the one used by Athey et al (2005), except we apply it to the uniform distribution case.25 As an illustration, note that, if u_i(a, θ_i) = −(a − θ_i)², the condition clearly holds. We follow Athey et al (2005) and, in a slight abuse of terminology, refer to the condition as the monotone hazard condition.

In addition, we restrict the set of the feasible allocations we consider. We limit our attention to allocations that are non-decreasing, that is, allocations that satisfy:

a(θ_i, θ_{-i}, x) non-decreasing in θ_i for all θ_{-i} and x.   (Monotonicity)

As we showed in Section 4, for the quadratic case, the optimal mechanism turned out to satisfy this more stringent monotonicity condition.26 Therefore, for that case, there would not have been any loss from restricting attention to non-decreasing allocations. Although we conjecture this is also true for the general preferences case, we have not been able to formally prove so. Note as well that, if the decision a is to be taken by a principal who lacks commitment, he will always choose decisions that are monotonic in his beliefs about the agents' types.

We show below that the DD mechanism with cutoffs corresponding to the dyadic rationals over [0, 1], {1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, ...}, is optimal in the class of monotonic allocations for the general preferences case when types are uniformly distributed. This allocation can be described as a direct revelation mechanism requiring the agents to report their types, doing a binary decomposition of their reported types, and then choosing the point that corresponds to the first digit at which they differ.27

23 More specifically, for general preferences, the monotonicity condition implies that E_{θ_{-i},x}[∂u_i(a(θ, θ_{-i}, x), θ_i)/∂θ_i], i.e. the agents' expected marginal utility with respect to their types, must be non-decreasing in their announcements θ. If u_i(a, θ_i) is not separable and the implementation is Bayesian, the restriction on marginal utilities is not directly mapped into restrictions on the allocation, which is the object we must ultimately choose.
24 The assumption that x is uniform is without loss.
25 For general distributions the condition is: (∂²u_i(a(θ_i, θ_{-i}), θ_i)/∂θ_i∂a)(1 − F(θ_i))/f(θ_i) and (∂²u_i(a(θ_i, θ_{-i}), θ_i)/∂θ_i∂a) F(θ_i)/f(θ_i) are non-increasing.
26 For the quadratic case, the monotonicity condition amounts to the expected decision, E_{θ_{-i}}[a(θ_i, θ_{-i})], being non-decreasing in θ_i. The DD, which is an optimal mechanism, is such that a(θ_i, θ_{-i}) is non-decreasing in θ_i for all θ_{-i}. See the Appendix for the monotonicity condition required by incentive compatibility for the more general preferences case.
27 More precisely, for almost all θ_i we can redefine the type to be k_i as follows: θ_i = k_i b, where k_i is an infinite row vector of zeros and ones, k_i = (k_i¹, k_i², ..., k_iⁿ, ...), and b = [1/2, 1/4, ..., (1/2)ⁿ, ...]′.
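The direct revelation description in footnote 27 can be sketched in a few lines of code (our own illustration; function names are ours): decompose both reported types in binary and return the point given by the shared binary prefix followed by a one at the first digit where the reports differ.

```python
def binary_digits(theta, n=40):
    """First n binary digits of theta in [0, 1)."""
    digits = []
    for _ in range(n):
        theta *= 2
        d = int(theta)
        digits.append(d)
        theta -= d
    return digits

def dd_direct(theta_i, theta_j, n=40):
    """DD allocation with dyadic cutoffs, computed as a direct mechanism:
    the chosen point is the shared binary prefix of the two reports plus
    a one at the first digit at which they disagree."""
    bi, bj = binary_digits(theta_i, n), binary_digits(theta_j, n)
    prefix = 0.0
    for k in range(n):
        if bi[k] != bj[k]:
            return prefix + 0.5 ** (k + 1)   # the separating dyadic cutoff
        prefix += bi[k] * 0.5 ** (k + 1)
    return prefix                            # (near-)equal reports: limit point
```

For reports 0.3 and 0.8 the first digits already differ, so the mechanism picks 1/2; reports 0.3 and 0.4 share the prefix 01 and split at the third digit, giving 3/8.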


5.1 The DD Mechanism is an Optimal Monotonic Allocation

We show in the appendix that, using a result similar to Lemma 2 and after doing some integration by parts, we can recast the objective functional as the sum of four terms,

V(a) = A + B + C + D,

where the term A collects, with weight 1/4, expressions involving the utility of the central type, u(a(θ_i, θ_{-i}, x), 1/2), conditional on the relevant regions of the type space; the term B collects information-rent expressions of the form E_{θ,x}[ (∂u_i(a(θ_i, θ_{-i}, x), θ_i)/∂θ_i)(1 − θ_i) | · ] and E_{θ,x}[ (∂u_i(a(θ_i, θ_{-i}, x), θ_i)/∂θ_i) θ_i | · ] over the on-diagonal squares [0, 1/2]² and [1/2, 1]²; and the terms C and D collect the corresponding information-rent and central-type-utility expressions over the off-diagonal regions in which θ_i < 1/2 < θ_{-i}.

Now, for any non-decreasing allocation a(θ_i, θ_{-i}, x), consider replacing a(θ_i, θ_{-i}, x) over the region in which θ_i < 1/2 < θ_{-i} by

ã(θ_i, θ_{-i}, x; ε) = (1 − ε) a(θ_i, θ_{-i}, x) + ε E_{θ,x}[ a(θ_i, θ_{-i}, x) | θ ∈ [0, 1/2] × [1/2, 1] ],   ε ∈ [0, 1].

We argue that this replacement has a positive effect on term C for ε small. Indeed, note that

∂/∂ε E_{θ,x}[ (∂u_i(ã(θ_i, θ_{-i}, x; ε), θ_i)/∂θ_i)(1 − θ_i) | θ ∈ [0, 1/2] × [1/2, 1] ] |_{ε=0}

= E_{θ,x}[ (∂²u_i(a(θ_i, θ_{-i}, x), θ_i)/∂θ_i∂a)(1 − θ_i) ( E_{θ,x}[ a(θ_i, θ_{-i}, x) | θ ∈ [0, 1/2] × [1/2, 1] ] − a(θ_i, θ_{-i}, x) ) | θ ∈ [0, 1/2] × [1/2, 1] ]  ≥  0,

where the inequality follows from the Monotone Hazard condition along with E_{θ,x}[ a(θ_i, θ_{-i}, x) | θ ∈ [0, 1/2] × [1/2, 1] ] − a(θ_i, θ_{-i}, x) being decreasing in θ_i: the two factors inside the expectation are co-monotone in θ_i and the second has conditional mean zero, so their conditional covariance – and hence the whole expression – is non-negative. A similar argument can be made for the other component of term C. This discussion suggests that the term C – that is associated with the region in which θ_i < 1/2 < θ_{-i} – would be maximized by an incentive compatible mechanism that selects a constant action.

Now a^{DD}(θ̃_i, θ̃_{-i}) = kb, where k is a row vector for which

k^n = k_i^n   if k_i^m = k_{-i}^m for all m ≤ n,
k^n = 1      if k_i^m = k_{-i}^m for all m < n and k_i^n ≠ k_{-i}^n,
k^n = 0      otherwise.

A similar characterization can be found in Fleckinger (2008).

Note that a constant action of 1/2

in this region also maximizes the term D in the objective. From the work in social choice theory by Moulin (1980), we know that, if we required instead ex-post incentive compatibility, the optimal allocation would also have 1/2 off the diagonals.28 Nonetheless, once we consider Bayesian implementation, it is not obvious that it is optimal to set 1/2 as the allocation in these regions. We could expect that, by perturbing the allocation slightly in these off-diagonal regions, one could improve the attainable values on the on-diagonal squares [0, 1/2]² and [1/2, 1]² once incentive constraints are taken explicitly into account. Suppose we were to carry out such a perturbation in the region where θ_i < 1/2 < θ_{-i}. Note that we start from a(θ) = 1/2 in this region; hence, if we were to make the allocation strictly increasing in θ_{-i}, the player with θ_{-i} > 1/2 would have more incentives to claim his type is higher than it actually is. This would not help us bring the allocation in [1/2, 1]² any closer to first best, since the problem with the first best allocation is exactly that types in this region would want to pretend they are higher than they actually are. Therefore, within the class of weakly increasing allocation rules, it is best to set a constant a(θ) = 1/2 on the off-diagonals. The following lemma establishes this formally.

Lemma 4 (1/2 off-diagonals) Given any symmetric incentive compatible allocation a(θ, x) that satisfies Monotonicity, we can find an alternative incentive compatible allocation ã(θ, x) which is weakly better and satisfies ã(θ, x) = 1/2 when θ_{-i} > 1/2 > θ_i.

This is a very powerful result towards the full characterization of the optimal allocation. In fact, once one knows that setting 1/2 off the diagonals is optimal – so that this region plays no role in terms of providing incentives over the main diagonal – the problem is separable. Furthermore, the problem over [0, 1/2]² (and, respectively, over [1/2, 1]²) is, subject to rescaling, exactly the same as the original problem (the problem over [0, 1]²). Here is where focusing on the uniform distribution is really helpful for obtaining the full characterization of the optimal allocation. The self-similarity property of the uniform distribution allows us to sequentially apply appropriately rescaled versions of Lemma 4. The resulting allocation is the DD mechanism in Figure 1. This discussion leads to the following result.

Theorem 3 The DD allocation with cutoffs corresponding to the dyadic rationals is optimal in the class of non-decreasing Incentive Compatible allocations when types are uniformly distributed.

6 Dynamic Implementation of DD Allocations

Instead of fully revealing their types in one round of communication, DD allocations can alternatively be attained by having agents simultaneously report whether they prefer an allocation below or above the cutoff of the interval of possible types. If their reports fall on different sides of the cutoff, then the cutoff is implemented; if they both report to be on the same side of the cutoff, then the process is restarted considering only the interval they both reported. This process is iterated until the agents eventually report to be on different sides of the relevant cutoff.29 The dynamic implementation of DD mechanisms captures in a simple way the property that negotiations often take place in rounds, and choices are sequentially eliminated until an agreement is reached.

28 See also Barberà and Jackson (1994), and Barberà (2001) for detailed discussions of strategy-proof social choice functions.
29 With zero probability the types are the same. In that case, the procedure described above would never stop; then simply take the limiting point as the allocation. Also, if at any point a player is indifferent between reporting either "below" or "above", we assume he flips a fair coin to decide.
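The round-by-round procedure just described can be sketched in a few lines (our own illustrative code, with truthful reporting assumed; ties at a cutoff, a zero-probability event, are broken deterministically rather than by the fair coin of footnote 29). A small Monte Carlo also tallies the number of rounds: with i.i.d. uniform types, each round ends in disagreement with probability 1/2, so the round count is geometric with mean two, a property used later in the text.

```python
import random

def dd_protocol(theta_i, theta_j, max_rounds=60):
    """Dynamic DD implementation for two truthful agents on [0, 1].
    Returns (implemented action, number of rounds used)."""
    lo, hi = 0.0, 1.0
    for rounds in range(1, max_rounds + 1):
        mid = (lo + hi) / 2.0
        if (theta_i < mid) != (theta_j < mid):
            return mid, rounds          # opposite sides: implement the cutoff
        if theta_i < mid:
            hi = mid                    # both below: discard the upper part
        else:
            lo = mid                    # both above: discard the lower part
    return (lo + hi) / 2.0, max_rounds  # limiting point for (near-)equal types

def average_rounds(n=20000, seed=0):
    """Monte Carlo estimate of the expected number of rounds."""
    rng = random.Random(seed)
    return sum(dd_protocol(rng.random(), rng.random())[1] for _ in range(n)) / n
```

For instance, types 0.3 and 0.4 agree at 1/2, agree again at 1/4, and split at 3/8, so the mechanism implements 3/8 in the third round.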

As argued in Proposition 1, the agents have incentives to report truthfully in every round. Intuitively, at each stage, reporting the truth in expectation brings the allocation closer to the agent's preferred point. The only types who are indifferent are the cutoff types. In many settings, there could be additional considerations beyond incentive compatibility that one could be concerned with. Players might want to minimize the amount of information they reveal to the principal or to the other agents. This could be the case if an agent's type on one decision is correlated with his type in some other dimension and either the principal or the other agents could benefit from learning something about this other dimension. Companies in a joint venture can be partners in the project at hand but fierce competitors in other markets or products. By revealing information coarsely and gradually, the agents are able to obtain an optimal allocation without having to reveal more about their types than what is strictly necessary. Additionally, this is advantageous if one is concerned with the costs of communication required by the mechanism.30 Finally, restricting the amount of information that is revealed to the principal can be an effective way to address the difficulties that arise when the principal lacks commitment. Much of the cheap talk literature, for example, motivates its modelling choice by saying that the principal lacks the ability to commit.31 As we detail below, for the uniform case, the dynamic implementation of DD mechanisms can be carried out even when the principal lacks commitment.

Lack of Commitment and Dynamic Revelation of Information. Although the DD allocation can be obtained by a direct revelation mechanism, by using a dynamic implementation the same allocation can be obtained with significantly less information revealed. In the uniform case, in expectation, the dynamic implementation ends in just two rounds.
This means that, in expectation, agents just have to reveal 2 bits of information each. The flipside of this is that typically very little of the agents' types is actually revealed. This helps protect the agents from having information about their types used against them by the principal or some other player in some other context. Additionally, for the case in which types are uniformly distributed, the gradual revelation of information allows the DD mechanism to be implemented even without commitment. Indeed, suppose, as is the case in ADM, that it is two managers of some firm that must report their preferred action to the CEO, who will in turn decide what action to carry out. If the CEO cares equally about both divisions, he does not need to commit in order to follow the dynamic implementation of the DD allocation. The key for this is that once it is common knowledge that the managers are on different sides of a midpoint, the CEO cannot extract any more beneficial information from them, so he will choose the last midpoint as the allocation. We formalize this in the next result:32

30 See Segal (2006) and references therein.
31 For example, in the model by ADM the CEO cannot commit not to do something that is in his best interest given the information conveyed to him by the two managers.
32 If u_i(a, θ_i) = −(a − θ_i)², the restriction to pointwise monotonic allocations can be dropped.


Proposition 2 Suppose θ_i and θ_{-i} are i.i.d. uniform and that, [α, β] being the current interval of possible types, θ_i < (α + β)/2 < θ_{-i}. Then

a(θ) = (α + β)/2 ∈ arg max_{a(θ̃) non-decreasing} E[ Σ_i u_i(a(θ), θ_i) | θ_i < (α + β)/2 < θ_{-i} ]

subject to

θ_i ∈ arg max_{θ̃_i < (α + β)/2} E_{θ_{-i}}[ u_i(a(θ̃_i, θ_{-i}), θ_i) | θ_{-i} > (α + β)/2 ],

θ_{-i} ∈ arg max_{θ̃_{-i} > (α + β)/2} E_{θ_i}[ u_{-i}(a(θ_i, θ̃_{-i}), θ_{-i}) | θ_i < (α + β)/2 ].
The Proposition above is stronger than required since it establishes that, even with the ability to commit, once the managers know that they are on opposite sides of the midpoint, it is not efficient for the principal to extract any more information from them. It then follows that if all the CEO knows is that the players are on different sides of a given midpoint, his optimal choice for an allocation is the midpoint itself. As we know from the seminal work of Crawford and Sobel (1982) and the rest of the cheap talk literature, coarse revelation of information is necessary if the principal lacks commitment. Fully revealing their types directly in one round of communication would fail if the CEO cannot commit: once she learns the agents' types she would want to deviate from whatever was promised and implement the first best allocation a = (θ_i + θ_{-i})/2. Restricting the amount of information conveyed to the mediator is a way to prevent her from perturbing the allocation rule ex post. Indeed, similar forces are behind the partition equilibria in Crawford and Sobel (1982).

Short and Long Cheap Talk. As is frequently the case in the cheap talk literature, ADM restricted their analysis to only one round of cheap talk by the Agents. As they acknowledge: "It is well known in the literature on cheap talk games that repeated rounds of communication may expand the set of equilibrium outcomes even if only one player is informed. However, even for a simple cheap talk game such as the leading example in Crawford and Sobel (1982), it is still an open question as to what is the optimal communication protocol."33 Indeed, as shown in some examples in Aumann and Hart (2003) and Krishna and Morgan (2004), there are potential benefits of long cheap talk. Goltsman et al (2009) study three different processes: (i) (possibly long) cheap talk (negotiations), (ii) non-binding recommendations by a third party (mediation), and (iii) binding recommendations by a third party (arbitration), within the uniform-quadratic case of the Crawford and Sobel (1982) model. They show that, if the misalignment of incentives is low, negotiation and mediation lead to the same outcome. Moreover, only two rounds of cheap talk are needed to obtain the mediation outcome when the conflict of interest (an ex-ante known parameter) is low. However, arbitration always dominates the other protocols. We, in turn, show that arbitration and negotiations lead to the same outcomes in our setting. Surprisingly, for the uniform-quadratic case studied by ADM, the value attained with the allocation they characterize (for the extreme case of prohibitively high miscoordination costs, that corresponds to our model) is exactly the same as the one we attain with the DD allocation.34,35 This implies that, in this environment, one round of communication is actually sufficient. Hence, some of the conclusions derived by ADM are actually much stronger than they were aware of. The limitation of their analysis to one round of communication is actually of no consequence in terms of ex-ante payoffs since, without knowing so, theirs was an optimal communication protocol.

Nonetheless, we believe that the dynamic implementation of the DD allocation with several rounds of communication has some additional appealing features. Before making our case, it is useful to recall what the allocation characterized by ADM looks like. Essentially, the type space is partitioned, each agent reports the element of the partition to which his favorite action belongs, and then the Principal implements as an allocation the average type given the reported rectangle. Partitions are very fine close to the middle of the interval, since incentive constraints are not very binding for those types, and become progressively coarser towards the extremes. Below, we replicate Figure 2 from their paper.

33 Note that the reference to "the leading example in Crawford and Sobel (1982)" corresponds to the uniform-quadratic case. A large part of the cheap talk literature focuses solely on this case.

θ + θ  a=E 1 2 |   2 

ADM Allocation

It is important to note that, although ADM claim that they look at a cheap talk game because they are in a setting with no commitment, they are granting the CEO an unnatural commitment ability to restrict communication to just one round. Suppose, for example, that both players report to be in the shaded area in the figure above. The players and the CEO would have a strong incentive to communicate further. Essentially, they are facing the same situation they were facing originally, albeit within a smaller range. So, although the CEO is not allowed to commit to an allocation, it is important that he can commit to not keep on talking even when all the parties involved would ex-post prefer this.36 Instead, with the DD allocation, as long as agents report to be in the same quadrant, they keep on refining their reports until it is clear their interests are in conflict. This happens when there is a value (the midpoint) that objectively separates both types. When this point is reached there is no further value in communicating, since it will not be possible to extract any more information from the agents. Therefore, with the dynamic DD allocation there is no scope for renegotiation, and there is no implicit commitment power granted to the principal to limit communication to just one round.

34 The leading example in Crawford and Sobel (1982) corresponds to the uniform-quadratic case. A large part of the cheap talk literature focuses solely on this case.
35 Simple computations show that both allocations deliver an ex-ante expected utility of −2/21. For comparison, the first best value is −2/24.
36 Their allocation would not be incentive compatible if a second round of communication were expected in cases of agreement.
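The value comparison in footnote 35 can be checked by simulation for the uniform-quadratic case: under truthful play, the DD allocation delivers an ex-ante expected sum of utilities of −2/21 ≈ −0.0952, against the first-best value of −2/24 ≈ −0.0833. The Monte Carlo below is our own illustrative code.

```python
import random

def dd_action(ti, tj):
    """DD allocation computed via the dynamic bisection procedure."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if (ti < mid) != (tj < mid):
            return mid
        lo, hi = (lo, mid) if ti < mid else (mid, hi)
    return (lo + hi) / 2.0

def ex_ante_values(n=200000, seed=7):
    """Estimate E[sum_i -(a - theta_i)^2] under DD and under first best."""
    rng = random.Random(seed)
    dd_total = fb_total = 0.0
    for _ in range(n):
        ti, tj = rng.random(), rng.random()
        a = dd_action(ti, tj)
        dd_total += -(a - ti) ** 2 - (a - tj) ** 2
        fb = (ti + tj) / 2.0                 # first-best action
        fb_total += -(fb - ti) ** 2 - (fb - tj) ** 2
    return dd_total / n, fb_total / n
```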

7 Concluding Remarks

Several decision making processes seem to involve slowly eliminating a subset of the available options under discussion until there is clear disagreement. When, finally, the situation is one of clear disagreement, there is no more room to work on a better solution and some compromise must be reached between the parties involved. We have provided a model and an optimal allocation rule that formally captures this process. For this, it was necessary to step away from the transferable utility setting. Although there are no general methods readily available to do mechanism design with non-transferable utility, we have been able to use Lagrangean methods to verify that Divide and Discard mechanisms are optimal. Although they can be characterized by a direct revelation mechanism, we find the dynamic implementation of these mechanisms particularly appealing. Not only do they correspond to the real world counterparts we set out to explain but, furthermore, we believe that the gradual revelation of information they entail is very desirable in environments where the agents' information can be used against them. This can take place in the underlying setting if there is a lack of commitment, or in some related matter if the agents (or principal) interact in more than one dimension. Even though, for the model we presented, it is only for the uniform case that the allocation we propose can be implemented without commitment, we believe that the idea of having agents only gradually release their private information when the principal cannot commit should be useful in other settings as well.37

There are several interesting dimensions in which the model can be extended. A natural and important one to explore is the case of N > 2 players. Jackson and Sonnenschein (2007) and Carrasco and Fuchs (2009) have looked at the N player model. Since both papers look at the case in which agents must jointly decide on an infinite number of decisions, their findings, although interesting, do not shed much light on the case in which agents interact only once.

37 Some recent papers are working along these lines. See for example Damiano, Hao and Suen (2009) and Bognar, Meyer-ter-Vehn and Smith (2009).

References

[1] Alonso, R., Dessein, W. and Matouschek, N., 2008. "When Does Coordination Require Centralization?", American Economic Review, 98(1), pp. 145-179.
[2] Alonso, R., Dessein, W. and Matouschek, N., 2008. "Centralization versus Decentralization: An Application to Price Setting by a Multi-Market Firm", Journal of the European Economic Association, Papers and Proceedings, April-May.
[3] Amador, M., Angeletos, G. and Werning, I., 2006. "Commitment versus Flexibility", Econometrica, 74(2), pp. 365-396.
[4] Arrow, K., 1979. "The Property Rights Doctrine and Demand Revelation under Incomplete Information", in Economics and Human Welfare. Academic Press.
[5] Athey, S., Atkeson, A. and Kehoe, P., 2005. "The Optimal Degree of Monetary Policy Discretion", Econometrica, 73(5), pp. 1431-1476.
[6] Armstrong, M., 1995. "Delegating Decision-Making to an Agent with Unknown Preferences", mimeo, University of Southampton.
[7] Aumann, R.J. and Hart, S., 2003. "Long Cheap Talk", Econometrica, 71(6), pp. 1619-1660.
[8] Ausubel, L. and Deneckere, R., 1993. "A Generalized Theorem of the Maximum", Economic Theory, 3(1), pp. 99-107.
[9] Bagnoli, M. and Bergstrom, T., 1989. "Log Concave Probability and Its Applications", mimeo, University of Michigan.
[10] Barberà, S. and Jackson, M., 1994. "A Characterization of Strategy-proof Social Choice Functions for Economies with Pure Public Goods", Social Choice and Welfare, 11, pp. 241-252.
[11] Barberà, S., 2001. "An Introduction to Strategy-proof Social Choice Functions", Social Choice and Welfare, pp. 619-653.
[12] Bognar, K., Meyer-ter-Vehn, M. and Smith, L., 2009. "We Can't Argue Forever", Working Paper.
[13] Carrasco, V. and Fuchs, W., 2009. "From Equals to Despots: The Dynamics of Repeated Group Decision Taking with Private Information", Working Paper.
[14] Crawford, V. and Sobel, J., 1982. "Strategic Information Transmission", Econometrica, 50(6), pp. 1431-1451.
[15] Damiano, E., Hao, L. and Suen, W., 2009. "Delay in Strategic Information Aggregation", Working Paper.
[16] d'Aspremont, C. and Gérard-Varet, L., 1979. "Incentives and Incomplete Information", Journal of Public Economics, 11, pp. 25-45.
[17] Fleckinger, P., 2008. "Bayesian Improvement of the Phantom Voters Rule: An Example of Dichotomic Communication", Mathematical Social Sciences, 55.
[18] Goltsman, M., Hörner, J., Pavlov, G. and Squintani, F., 2009. "Arbitration, Mediation and Cheap Talk", Journal of Economic Theory, 144(4), pp. 1397-1420.
[19] Gordon, H., 1966. "The Maximal Ideal Space of a Ring of Measurable Functions", American Journal of Mathematics, 88(4), pp. 827-843.
[20] Harsanyi, J., 1955. "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility", Journal of Political Economy, 63(4), pp. 309-321.
[21] Hart, O. and Moore, J., 2004. "Agreeing Now to Agree Later: Contracts that Rule Out but do not Rule In", mimeo, Department of Economics, Harvard University.
[22] Holmstrom, B., 1984. "On the Theory of Delegation", in Bayesian Models in Economic Theory, edited by M. Boyer and R.E. Kihlstrom, vol. 5 of Studies in Bayesian Economics, pp. 115-141.
[23] Jackson, M. and Sonnenschein, H., 2007. "Overcoming Incentive Constraints by Linking Decisions", Econometrica, 75(1), pp. 241-258.
[24] Kakutani, S., 1941. "Concrete Representation of Abstract (M)-Spaces (A Characterization of the Space of Continuous Functions)", Annals of Mathematics, 42(4), pp. 994-1024.
[25] Kartik, N., Ottaviani, M. and Squintani, F., 2007. "Credulity, Lies, and Costly Talk", Journal of Economic Theory, 134(1), pp. 93-116.
[26] Kováč, E. and Mylovanov, T., 2009. "Stochastic Mechanisms in Settings without Monetary Transfers: The Regular Case", Journal of Economic Theory, 144(4), pp. 1373-1395.
[27] Krishna, V. and Morgan, J., 2004. "The Art of Conversation: Eliciting Information from Experts through Multi-stage Communication", Journal of Economic Theory, 117(2), pp. 147-179.
[28] Lang, S., 1993. Real and Functional Analysis, third edition. Springer-Verlag, New York.
[29] Maggi, G. and Morelli, M., 2006. "Self-Enforcing Voting in International Organizations", American Economic Review, 96(4), pp. 1137-1158.
[30] Martimort, D. and Semenov, A., 2008. "The Informational Effects of Competition and Collusion in Legislative Politics", Journal of Public Economics, 92(7), pp. 1541-1563.
[31] Melumad, N. and Shibano, T., 1991. "Communication in Settings with No Transfers", RAND Journal of Economics, 22(2), pp. 173-198.
[32] Milgrom, P. and Segal, I., 2002. "Envelope Theorems for Arbitrary Choice Sets", Econometrica, 70(2), pp. 583-601.
[33] Moulin, H., 1980. "On Strategy Proofness and Single Peakedness", Public Choice, 35, pp. 437-455.
[34] Myerson, R., 1981. "Optimal Auction Design", Mathematics of Operations Research, 6(1), pp. 58-73.
[35] Persico, N., 2004. "Committee Design with Endogenous Information", Review of Economic Studies, 71, pp. 165-191.
[36] Rosenlicht, M., 1999. Introduction to Analysis, Prentice-Hall.
[37] Royden, H.L., 1988. Real Analysis, third edition. Englewood Cliffs: Prentice Hall.
[38] Segal, I., 2006. "Communication in Economic Mechanisms", in Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress (Econometric Society Monographs), ed. by Richard Blundell, Whitney K. Newey and Torsten Persson, Cambridge University Press.
[39] Sprumont, Y., 1995. "Strategyproof Collective Choices in Economic and Political Environments", Canadian Journal of Economics, 28, pp. 68-107.


8 Appendix

Appendix A: Preliminary Results

In this appendix, we prove some of the preliminary results we will need to prove the optimality of the DD mechanisms, and some of the other results in the text.

Proof of Theorem 1. The objective is a bounded linear functional. In fact, note that

sup_{a∈A, θ} | E_{θ_{-i}} ∫ (a − θ_i)² dP(a|θ̂) | < ∞.

Since the objective is a bounded linear functional, it is continuous. The set of all distributions is compact in the weak-* topology. Since the inequalities in the IC constraints are weak, the set of distributions {P(a|θ)} that satisfy the IC constraints is a (weak-*) closed subset of the set of all distributions (which is compact). Hence, the constraint set is compact. Since the objective is continuous and the constraint set is compact, a solution to the agents' problem must exist.

Proof of Lemma 1. The objective is a linear functional. Also,

E_{θ_{-i}} ∫ −(a − θ_i)² dP(a|θ̂)

is linear in P(a|θ̂) and the constraints are (weak) inequalities. Hence, the constraint set is convex. It

follows that the program is convex.

Proof of Lemma (IC Representation). For necessity, notice that

U_i(θ_i) = max_{θ̂_i} E_{θ_{-i}}[ −( a(θ̂_i, θ_{-i}) − θ_i )² ],

so that U_i(θ_i) is a value function. The integral formula is then implied by Milgrom and Segal's (2002) Envelope Theorem (Theorem 2), since E_{θ_{-i}}[ −( a(θ̂_i, θ_{-i}) − θ_i )² ] is differentiable (and therefore absolutely continuous) in θ_i, with derivative 2 E_{θ_{-i}}[ a(θ̂_i, θ_{-i}) − θ_i ], and, due to the fact that A is compact, there exists a finite number c so that

sup_{θ̂_i} | 2 E_{θ_{-i}}[ a(θ̂_i, θ_{-i}) − θ_i ] | ≤ c for all θ_i.

Moreover, if a mechanism is Incentive Compatible, for all θ″ > θ′ one must have

E_{θ_{-i}}[ −( a(θ′, θ_{-i}) − θ′ )² ] ≥ E_{θ_{-i}}[ −( a(θ″, θ_{-i}) − θ′ )² ]   (IC θ′θ″)

and

E_{θ_{-i}}[ −( a(θ″, θ_{-i}) − θ″ )² ] ≥ E_{θ_{-i}}[ −( a(θ′, θ_{-i}) − θ″ )² ].   (IC θ″θ′)

Summing both expressions up, one has, after a few algebraic manipulations,

2 ( θ″ − θ′ ) ( E_{θ_{-i}}[ a(θ″, θ_{-i}) ] − E_{θ_{-i}}[ a(θ′, θ_{-i}) ] ) ≥ 0.

Hence, the expected monotonicity condition must hold. For sufficiency, let θ_i > 0.5 and consider 0.5 < θ̂ < θ_i. Using the integral formula and then the expected monotonicity of the allocation,

U_i(θ_i) = U_i(θ̂) + ∫_{θ̂}^{θ_i} 2 E_{θ_{-i}}[ a(s, θ_{-i}) − s ] ds
 ≥ U_i(θ̂) + ∫_{θ̂}^{θ_i} 2 E_{θ_{-i}}[ a(θ̂, θ_{-i}) − s ] ds
 = E_{θ_{-i}}[ −( a(θ̂, θ_{-i}) − θ̂ )² ] + ∫_{θ̂}^{θ_i} (d/ds) E_{θ_{-i}}[ −( a(θ̂, θ_{-i}) − s )² ] ds
 = E_{θ_{-i}}[ −( a(θ̂, θ_{-i}) − θ_i )² ],

where the inequality follows from the expected monotonicity of the allocation. Therefore,

U_i(θ_i) ≥ E_{θ_{-i}}[ −( a(θ̂, θ_{-i}) − θ_i )² ],

so that type θ_i cannot gain by reporting θ̂. The analysis for all other cases is analogous.

Proof of Proposition 1.

Since the DD allocations are pointwise monotonic, in order to show that, at stage $n$, it is weakly dominant to report truthfully, it is sufficient to show that, at stage $n$, the agent with cutoff type $c_n$ is indifferent between reporting up or down. For $c_1$ to be indifferent, his expected payoff from responding "up" must equal his expected payoff from responding "down"; call this condition (IC$c_1$). Each side of (IC$c_1$) is a series over the continuation path of the mechanism: responding "up" yields the loss $-\left(c_3-c_1\right)^2$ with weight $1-F(c_3)$, when the opponent pushes past the next cutoff $c_3$ and the procedure stops there, plus the analogous losses at the subsequent cutoffs $\left\{c_{3\cdot 2^{n}}\right\}_{n\geq 1}$ reached along the path on which both agents keep agreeing, each weighted by the probability, built from $F$, that the procedure stops at that cutoff. Responding "down" yields the mirror-image series over the cutoffs below $c_1$, starting with the loss $-\left(c_2-c_1\right)^2$ weighted by $F(c_2)$.

Now, for an arbitrary cutoff point $c_k\in\left[c_{\underline{k}},c_{\overline{k}}\right]$, the interval of types that survives when the cutoff $c_k$ is reached, indifference of the cutoff type requires the analogous condition (IC$c_k$): the expected payoff from responding "up" equals the expected payoff from responding "down", where now all stopping probabilities are computed from the conditional distribution $F\left(\cdot\mid\theta_{-i}\in\left[c_{\underline{k}},c_{\overline{k}}\right]\right)$, the "up" series runs over the continuation cutoffs $\left\{c_{(2k+1)2^{n}}\right\}_{n\geq 0}$, and the "down" series runs over the corresponding cutoffs below $c_k$.

Equations (IC$c_1$) and (IC$c_k$) pin down all IC constraints of a DD mechanism for which it is weakly dominant for the agents to report truthfully on which side of the cutoff their favorite action lies. We show that a solution $\{c_k\}_{k=1}^{\infty}$ to the equations (IC$c_1$) and (IC$c_k$) exists.

Toward that, for $k=1,2,\dots$, define the mapping
$$C_k\left(c_{2k},c_{2k+1},\dots,c_{(2k+1)2^{n}-1},c_{(2k+1)2^{n}},\dots\right)=\left\{c_k:\text{(IC}c_k\text{) is satisfied}\right\};$$
that is, $C_k(\cdot)$ is the (set of) cutoff(s) $c_k$ so that the indifference condition for cutoff $c_k$ is satisfied given the rest of the cutoffs. Letting the order over $[0,1]$ be the usual one, we let the order over vectors $\left(c_{2k},c_{2k+1},\dots\right)$ be the product order. Note that $C_k(\cdot)$ is monotone since, if $C_k(\cdot)$ were not to increase, reporting "down" would become more attractive. Now, define $\Psi:[0,1]^{\infty}\rightrightarrows[0,1]^{\infty}$ by
$$\Psi(c)=\begin{pmatrix}C_1\left(c_2,c_3,\dots,c_{3\cdot 2^{n}-1},c_{3\cdot 2^{n}},\dots\right)\\\vdots\\C_k\left(c_{2k},c_{2k+1},\dots,c_{(2k+1)2^{n}-1},c_{(2k+1)2^{n}},\dots\right)\\\vdots\end{pmatrix}.$$
Since each $C_k(\cdot)$ is monotone, $\Psi(\cdot)$ is also monotone. Also, $[0,1]^{\infty}$ is a (non-empty) complete lattice. Hence, by Tarski's Fixed Point Theorem, the set of fixed points of $\Psi(\cdot)$ is non-empty.[38] Moreover, if $c^{*}\in\Psi\left(c^{*}\right)$, $c^{*}$ is a solution to the infinite system of equations induced by (IC$c_k$), $k=1,\dots$, and the result follows. Furthermore, there can only be one solution that satisfies $c_1=1/2$, since the mapping is monotonic and (by Tarski's Fixed Point Theorem) the set of fixed points of $\Psi$ is a complete lattice.
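For uniformly distributed types, the fixed point above is the familiar one: the cutoffs are the dyadic rationals, so each stage simply asks both agents on which side of the midpoint of the surviving interval their favorite action lies. The dynamic protocol can then be sketched directly; the following is our own minimal illustration (the uniform, dyadic-cutoff case only), including a brute-force check that truthful reporting is weakly optimal against a truthful opponent:

```python
def dd_outcome(theta1, theta2, max_rounds=60):
    """Dividing-and-discarding with truthful side reports.

    Cutoffs are midpoints of the surviving interval (the dyadic
    rationals), i.e. the uniform-distribution case.
    """
    lo, hi = 0.0, 1.0
    for _ in range(max_rounds):
        c = (lo + hi) / 2
        up1, up2 = theta1 > c, theta2 > c
        if up1 != up2:           # one "left", one "right": stop
            return c             # implement the current cutoff
        if up1:                  # both "right": discard [lo, c]
            lo = c
        else:                    # both "left": discard [c, hi]
            hi = c
    return (lo + hi) / 2         # types (numerically) coincide

assert dd_outcome(0.3, 0.8) == 0.5    # the first cutoff separates them
assert dd_outcome(0.6, 0.9) == 0.75   # agree once, then disagree

# Truthfulness check: behaving as a fake type never helps in expectation
# against a truthful opponent drawn from a grid.
opponents = [k / 100 for k in range(101)]

def exp_payoff(report, t):
    return sum(-(dd_outcome(report, s) - t) ** 2 for s in opponents) / len(opponents)

for t in (0.3, 0.62, 0.9):
    truthful = exp_payoff(t, t)
    assert all(truthful >= exp_payoff(r / 20, t) - 1e-9 for r in range(21))
```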

We now proceed to prove Lemma 5. We do so by recasting Theorem 1 (Chapter 8) in Luenberger (1969) in terms of the variables of our setting. For the sake of completeness, we state the result one more time:

Lemma 5 (Optimality) An incentive compatible allocation $\left\{U_i\left(\frac{1}{2}\right),E_{\theta_{-i}}\left[a(\theta_i,\theta_{-i})\right],E_{\theta_{-i}}\left[a^{2}(\theta_i,\theta_{-i})\right]\right\}_{\theta_i\in[0,1],\,i=1,2}$ solves the planner's problem if, and only if, there exist non-decreasing $\left\{\gamma_i(\theta_i)\right\}_{\theta_i\in[0,1]}$ and (non-negative) continuous $\left\{\Lambda_i(\theta_i)\right\}_{\theta_i\in[0,1]}$ for which
$$L\left(U_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[a(\cdot)],E_{\theta_{-i}}\left[a^{2}(\cdot)\right]\,\middle|\,(\gamma_i(\cdot),\Lambda_i(\cdot))_{i=1,2}\right)\geq L\left(\tilde{U}_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[\tilde{a}(\cdot)],E_{\theta_{-i}}\left[\tilde{a}^{2}(\cdot)\right]\,\middle|\,(\gamma_i(\cdot),\Lambda_i(\cdot))_{i=1,2}\right)$$
for all Incentive Compatible allocations $\left\{\tilde{U}_i\left(\frac{1}{2}\right),E_{\theta_{-i}}[\tilde{a}(\theta_i)],E_{\theta_{-i}}\left[\tilde{a}^{2}(\theta_i)\right]\right\}_{\theta_i\in[0,1],\,i=1,2}$.

Proof of Lemma Optimality. We proceed by showing that our problem fits into the general maximization problem considered by Luenberger (1969, sections 8.3 and 8.4).[39] Having established that, we can then invoke Luenberger's (1969) Theorem 1 in section 8.3 (which establishes the necessity part of the General Lagrangian Theorem) and Theorem 1 in section 8.4 (which establishes the sufficiency part of the General Lagrangian Theorem).

The general problem considered by Luenberger (1969) is:
$$\max Q(x)\quad\text{subject to }x\in\Omega\text{ and }G(x)\in P,$$
where $\Omega$ is a subset of the vector space $X$,
$$Q:\Omega\to\mathbb{R}\quad\text{and}\quad G:\Omega\to Z,$$
with $Z$ being a normed vector space and $P$ a non-empty positive cone in $Z$.

[38] Recall that Tarski's fixed point theorem states that, if $X$ is a non-empty complete lattice, and $f:X\rightrightarrows X$ is an increasing correspondence, then the set of fixed points of $f$ is non-empty.
[39] A similar strategy has been employed by Amador et al (2006).

In order to map our problem to Luenberger's general problem, take
$$X=\left\{\left(U_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[a(\cdot)],E_{\theta_{-i}}\left[a^{2}(\cdot)\right]\right):U_i\left(\tfrac{1}{2}\right)\in\mathbb{R},\;E_{\theta_{-i}}[a]:[0,1]\to\mathbb{R},\;E_{\theta_{-i}}\left[a^{2}\right]:[0,1]\to\mathbb{R}\right\},$$
$$Z_1=\left\{z\,\middle|\,z:[0,1]\to\mathbb{R},\;z(\cdot)\text{ is continuous}\right\},\text{ coupled with the sup norm,}$$
$$Z_2=\left\{z\,\middle|\,z:[0,1]\to\mathbb{R},\;z(\cdot)\text{ is non-decreasing}\right\},$$
$$Q\left(U_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[a(\cdot)],E_{\theta_{-i}}\left[a^{2}(\cdot)\right]\right)=\sum_{i=1}^{2}E_{\theta}\left[-\left(a(\theta)-\theta_i\right)^{2}\right],$$
$$G_1\left(U_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[a(\cdot)],E_{\theta_{-i}}\left[a^{2}(\cdot)\right]\right)(\theta_i)=\bar{U}_i(\theta_i)-\left(2\theta_i E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]-E_{\theta_{-i}}\left[a^{2}(\theta_i,\theta_{-i})\right]-\theta_i^{2}\right),$$
where
$$\bar{U}_i(\theta_i)=\begin{cases}U_i\left(\tfrac{1}{2}\right)+2\displaystyle\int_{1/2}^{\theta_i}E_{\theta_{-i}}\left[a(\tau,\theta_{-i})-\tau\right]d\tau&\text{if }\theta_i>\tfrac{1}{2},\\[2mm]U_i\left(\tfrac{1}{2}\right)-2\displaystyle\int_{\theta_i}^{1/2}E_{\theta_{-i}}\left[a(\tau,\theta_{-i})-\tau\right]d\tau&\text{if }\theta_i\leq\tfrac{1}{2},\end{cases}$$
and
$$G_2\left(U_i\left(\tfrac{1}{2}\right),E_{\theta_{-i}}[a(\cdot)],E_{\theta_{-i}}\left[a^{2}(\cdot)\right]\right)=dE_{\theta_{-i}}[a(\theta)].$$

Our problem then reads
$$\max_{x\in X}Q(x)\quad\text{subject to }G_1(x)=0,\;G_2(x)\geq 0.$$

This completes the mapping of our problem to Luenberger's (1969).[40] For necessity (the existence of multipliers), note that the problem is convex (by Proposition 1). Moreover, for a feasible allocation in the interior of $Z_2$, we can take the allocation in Moulin (1980):
$$a^{M}(\theta)=\operatorname{median}\left(\theta_1,\theta_2,\tfrac{1}{2}\right).$$
Now, the Riesz Representation theorem implies that the dual of $Z_1$ is isomorphic to the space of non-decreasing functions (see Theorems 2.3 and 2.7 and Corollary 2.8 in Lang (1993)). Finally, the positive dual of $Z_2$ is the set of (non-negative) continuous functions (see Kakutani (1941) and Gordon (1966)). The result then follows from Theorem 1 in section 8.3 of Luenberger (1969).

[40] In our problem, on top of an inequality constraint, we must also deal with an equality constraint. That is the reason why there are no restrictions on the sign of $\gamma_i(\theta_i)$.

For sufficiency, if there are multipliers in the dual of $Z_1$ and in the dual of $Z_2$ for which the Lagrangian functional is maximized by an allocation, then such an allocation solves the problem by Theorem 1 in section 8.4 of Luenberger (1969). In Appendix B, we construct multipliers for which a DD mechanism maximizes a Lagrangian functional. Therefore, the sufficiency result in Lemma 5 is what is really important for our proof of Theorem 2.
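The interior allocation invoked above, Moulin's (1980) median rule with a phantom voter at $1/2$, is simple enough to state in code. It is also strategy-proof for single-peaked (here quadratic) preferences, which a brute-force grid check confirms; this is our own illustration, not part of the proof:

```python
def moulin(t1, t2):
    """Median of the two reports and the phantom 1/2 (Moulin, 1980)."""
    return sorted((t1, t2, 0.5))[1]

assert moulin(0.2, 0.9) == 0.5   # the phantom is pivotal
assert moulin(0.6, 0.9) == 0.6   # both above 1/2: the lower report wins
assert moulin(0.1, 0.3) == 0.3   # both below 1/2: the higher report wins

# Strategy-proofness under quadratic losses u = -(a - t)^2: no misreport r
# ever beats the truth t, for any opponent report s.
grid = [k / 50 for k in range(51)]
assert all(-(moulin(t, s) - t) ** 2 >= -(moulin(r, s) - t) ** 2
           for t in grid for r in grid for s in grid)
```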

Appendix B: Optimality of the DD Mechanisms when Preferences are Quadratic

In this appendix, we show that the DD mechanism that is Incentive Compatible, $DD^{IC}$, is also optimal for the case in which preferences are quadratic. The appendix is organized as follows. We start by showing, in Lemma 6, an auxiliary result that will be used to prove our optimality result. We then move toward the construction of the multipliers for which the $DD^{IC}$ maximizes the Lagrangian functional. We define multipliers for the monotonicity constraints that are singular functions; throughout the construction of those multipliers, we therefore make use of some results from the theory of distributions (see Lang (1993, Chapter 11)). Having defined the multipliers for the first order counterparts of the Incentive Compatibility constraints implied by Lemma 8 and for the monotonicity constraints, we derive the Lagrangian functional evaluated at those multipliers. Finally, in Proposition 3, we show that the $DD^{IC}$ maximizes the Lagrangian functional for the multipliers we construct. We then invoke Lemma 5 to establish that the $DD^{IC}$ mechanism is optimal when preferences are quadratic.

Auxiliary Result and the Lagrangian: We will use the following result to prove the optimality of the $DD^{IC}$ mechanism when preferences are quadratic.

Lemma 6 (Covariance Lemma) Let $\{P(\cdot|\theta)\}_{\theta\in[0,1]^{2}}$ be an arbitrary (stochastic) mechanism with non-decreasing $E_{\theta_{-i}}[a(\theta)]$. Then, for all sets $A,B\subseteq[0,1]$, one has
$$E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A\right]\leq E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A\right]E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A\right]$$
and
$$E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\frac{F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in B\right]\geq E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in B\right]E_{\theta_i}\left[\frac{F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in B\right].$$

Proof. Since $F(\cdot)$ is log-concave, $\frac{1-F(\theta_i)}{f(\theta_i)}$ is non-increasing (see Bagnoli and Bergstrom (1989)). Since $E_{\theta_{-i}}[a(\theta)]$ is non-decreasing, for any set $A$, the conditional covariance of $\frac{1-F(\theta_i)}{f(\theta_i)}$ and $E_{\theta_{-i}}[a(\theta)]$ is non-positive:
$$\operatorname{Cov}\left(\frac{1-F(\theta_i)}{f(\theta_i)},E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A\right)\leq 0.$$
Using the definition of the covariance, one can rewrite this inequality as
$$E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A\right]\leq E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A\right]E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A\right],$$
as claimed. The other inequality can be proved using similar arguments.

We now move toward proving Theorem 2. In the text, we defined the Lagrangian functional as
$$\begin{aligned}
L=\sum_{i=1}^{2}\Bigg\{&E_{\theta_i}\left[U_i(\theta_i)\right]\\
&-\int_{1/2}^{1}\left[U_i(\theta_i)-U_i\left(\tfrac{1}{2}\right)-2\int_{1/2}^{\theta_i}E_{\theta_{-i}}\left[a(\tau,\theta_{-i})-\tau\right]d\tau\right]d\gamma_i(\theta_i)\\
&-\int_{0}^{1/2}\left[U_i(\theta_i)-U_i\left(\tfrac{1}{2}\right)+2\int_{\theta_i}^{1/2}E_{\theta_{-i}}\left[a(\tau,\theta_{-i})-\tau\right]d\tau\right]d\gamma_i(\theta_i)\\
&+\int_{0}^{1}\Lambda_i(\theta_i)\,dE_{\theta_{-i}}\left[a(\theta_i,\theta_{-i})\right]\Bigg\}.
\end{aligned}$$

Using Lemma 5, we show that the $DD^{IC}$ mechanism is optimal by constructing non-decreasing multipliers $\{\gamma_i(\theta_i)\}_{i,\theta_i}$ and (non-negative) continuous multipliers $\{\Lambda_i(\theta_i)\}_{\theta_i}$ for which the $DD^{IC}$ mechanism maximizes the Lagrangian among all Incentive Compatible mechanisms.
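Before constructing the multipliers, note that Lemma 6 can be spot-checked numerically: for a log-concave $F$ the inverse hazard rate $(1-F(\theta))/f(\theta)$ is non-increasing, so on any conditioning set its conditional covariance with a non-decreasing expected decision is non-positive. A discretized sketch with $F$ uniform and an illustrative step decision (both our own choices, not objects from the proof):

```python
# Discretized check of the Covariance Lemma: E[XY | A] <= E[X | A] * E[Y | A]
# when X(t) = (1 - F(t)) / f(t) is non-increasing and Y(t) is non-decreasing.

def inv_hazard(t):        # (1 - F) / f for F uniform on [0, 1]
    return 1.0 - t

def exp_decision(t):      # an illustrative non-decreasing step decision
    return 0.25 if t < 0.5 else 0.75

def cond_mean(fn, cell):
    return sum(fn(t) for t in cell) / len(cell)

grid = [(k + 0.5) / 1000 for k in range(1000)]
for a, b in [(0.0, 1.0), (0.2, 0.9), (0.4, 0.6)]:   # a few sets A = [a, b]
    cell = [t for t in grid if a <= t <= b]
    lhs = cond_mean(lambda t: inv_hazard(t) * exp_decision(t), cell)
    rhs = cond_mean(inv_hazard, cell) * cond_mean(exp_decision, cell)
    assert lhs <= rhs + 1e-12
```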

The Multipliers: We now construct the candidate multipliers.

Multipliers on the First Order Counterpart of the IC constraints: We will take, for $i=1,2$,
$$\gamma_i(\theta_i)=F(\theta_i).$$
As we mentioned in the text, these are the multipliers on the local IC constraints that would make the problem of maximizing the Lagrangian equivalent to the problem of maximizing the sum of the agents' (expected) virtual utilities (which is what is done in most of the applied Mechanism Design literature) if the monotonicity constraints were to slack.

Multipliers on Monotonicity Constraints: In any incentive compatible mechanism $\{P(\cdot|\theta)\}$, one must have $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ non-decreasing in $\theta_i$. From Lebesgue's Decomposition Theorem (see Royden, 1988), any non-decreasing function can be decomposed as the sum of an absolutely continuous, non-decreasing, function and a singular function.[41] Hence, for any feasible mechanism,
$$E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]=E_{\theta_{-i}}\left[a^{ac}(\theta_i,\theta_{-i})\right]+E_{\theta_{-i}}\left[a^{sg}(\theta_i,\theta_{-i})\right],$$
where $E_{\theta_{-i}}\left[a^{ac}(\theta_i,\theta_{-i})\right]$ is the absolutely continuous part of $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ and $E_{\theta_{-i}}\left[a^{sg}(\theta_i,\theta_{-i})\right]$ is the singular part. One particular feature of the DD mechanisms (our candidates for an optimum) is that their decompositions have no absolutely continuous parts. Indeed, $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ is constant over $[0,1]\setminus\{c_n\}_n$ and jumps at all the cutoff points $\{c_n\}_n$, which are a countable, dense subset of $[0,1]$. Hence, to show that the $DD^{IC}$ mechanism is optimal, we will need to construct continuous multipliers $\{\Lambda_i(\theta_i)\}_{\theta_i}$ that only have singular parts.

[41] A non-constant monotone function $f:[a,b]\to\mathbb{R}$ is said to be singular if $f'(x)=0$ for almost all $x$ in $[a,b]$ (see Royden, 1988).

The construction of the multipliers: Using the symmetry of the problem around $\frac{1}{2}$, we first consider the region $\left[\frac{1}{2},1\right]$. Let $\{d_j\}_{j=0}^{\infty}$, with $d_0=\frac{1}{2}$, be the cutoffs of the $DD^{IC}$ over $\left[\frac{1}{2},1\right]$. In what follows, we assume (without loss) that $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ is continuous from the left. Let $z_j$ be the size of the (upward) jump of the expected $DD^{IC}$ decision $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ at $d_j$. Formally,
$$z_j=\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]-\lim_{\theta_i\to d_j^{-}}E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]=\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]-E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right].$$
Since $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ is non-decreasing over $\left[\frac{1}{2},1\right]$, one can define it as a measure. Hence, in spite of the fact that $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ has no absolutely continuous component, one can formally define $dE_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ as a distribution (see Lang (1993, Chapter 11)).[42] Indeed, one has that
$$dE_{\theta_{-i}}\left[a^{DD}(\cdot)\right]=\sum_{j=0}^{\infty}z_j\,\delta_{d_j}(\cdot),$$
where $\delta_{d_j}$ is the Dirac measure concentrated on $d_j$.[43]

We construct the multipliers $\Lambda_i(\cdot)$ so that the multipliers are symmetric around $\frac{1}{2}$, and the same across players:
$$\Lambda_i(\theta_i)=\Lambda_i(1-\theta_i)\quad\text{for all }\theta_i\in\left[\tfrac{1}{2},1\right],\qquad\text{and}\qquad\Lambda_i(\theta_i)=\Lambda_{-i}(\theta_i).$$
Using the last property, we drop the subscripts in what follows.

Over $\left[\frac{1}{2},1\right]$, $\Lambda(\cdot)$ will be a non-decreasing, continuous, function with no absolutely continuous component.[44] Since it is non-decreasing, it can be seen as a measure. Hence, in spite of $\Lambda(\cdot)$ having no absolutely continuous component, $d\Lambda(\cdot)$ can be formally defined as a distribution (see Lang (1993, Chapter 11)). We define $d\Lambda(\cdot)$ over $\left[\frac{1}{2},1\right]$ as follows: $d\Lambda\left(\frac{1}{2}\right)=0$ and, for $\theta_i>\frac{1}{2}$,
$$d\Lambda(\theta_i)=\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]},$$
with $\Lambda(1)\geq 0$ being picked so that
$$\int_{1/2}^{1}d\Lambda(\theta_i)=\Lambda(1)-\Lambda\left(\tfrac{1}{2}\right).$$

Since $\Lambda(\cdot)$ is symmetric around $\frac{1}{2}$, one can derive its values for $\left[0,\frac{1}{2}\right]$ from the construction above. Indeed, letting $\{d_j\}_{j=-\infty}^{0}$, with $d_0=\frac{1}{2}$, be the $DD^{IC}$'s cutoffs over $\left[0,\frac{1}{2}\right]$, we can define $d\Lambda(\cdot)$ over $\left[0,\frac{1}{2}\right]$ as follows: $d\Lambda\left(\frac{1}{2}\right)=0$ and, for $\theta_i<\frac{1}{2}$,[45]
$$d\Lambda(\theta_i)=\left[\Lambda(0)+2\left(\int_{0}^{1/2}F(\tau)d\tau-\int_{\theta_i}^{1/2}F(\tau)d\tau\right)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}.$$

While it will become clear below, the interpretation of why we pick such multipliers for the monotonicity constraints is simple. Consider the case in which $\theta_i\in\left[\frac{1}{2},1\right]$. For such a case, $d\Lambda_i(\theta_i)$ expresses the marginal/incremental cost of making $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ larger. Indeed, since one is restricted to choose mechanisms such that
$$dE_{\theta_{-i}}[a(\theta_i,\theta_{-i})]\geq 0,$$
if one makes $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ larger, one must also increase $E_{\theta_{-i}}[a(\tau,\theta_{-i})]$ for all $\tau>\theta_i$, and this may be costly for the objective. The expression $d\Lambda_i(\theta_i)$ captures this marginal (shadow) cost. Now, $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ jumps at all the cutoff points $\{d_j\}_j$. If the $DD^{IC}$ is to be optimal, one has to construct marginal shadow costs that change precisely at the points at which the expected decision $E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]$ jumps. The $d\Lambda_i(\theta_i)$ we constructed has this feature. Moreover, the sizes of the jumps of $d\Lambda_i(\theta_i)$ are defined so that, when evaluated at the $DD^{IC}$, any marginal benefit (in terms of the objective) of raising $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ is exactly compensated by the marginal shadow cost such a raise implies in terms of the monotonicity constraint.

[42] By the Riesz Representation Theorem, for any $\sigma$-regular Borel measure $\mu$, $d\mu$ defines a (bounded) linear functional over the set of continuous functions (see Lang, 1993).
[43] Let $X$ be a set, and $\Sigma_X$ be a sigma-algebra over $X$. A Dirac measure $\delta_x$ is such that, for any $A\in\Sigma_X$, $\delta_x(A)=1$ if $x\in A$ and $\delta_x(A)=0$ otherwise.
[44] A widely known example of a non-decreasing continuous function which is singular is the Cantor function (see Royden, 1988).
[45] For $\theta_i\in\left[0,\frac{1}{2}\right]$, we can define $dE_{\theta_{-i}}\left[a^{DD}(\cdot,\theta_{-i})\right]$ as follows: $dE_{\theta_{-i}}\left[a^{DD}(\cdot,\theta_{-i})\right]=\sum_{j=-\infty}^{0}\delta_{d_j}(\cdot)\,z_j$, where $z_j$ is the upward jump of $E_{\theta_{-i}}\left[a^{DD}(\cdot,\theta_{-i})\right]$ at $d_j$.
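The multipliers $\Lambda(\cdot)$ just constructed are of the type mentioned in footnote 44: continuous, non-decreasing, and with zero derivative almost everywhere. The Cantor function is the textbook example of such a singular function; below is a short sketch of its standard ternary-digit evaluation (an illustration only, not the $\Lambda$ of the proof):

```python
def cantor(x, depth=32):
    """Cantor ("devil's staircase") function on [0, 1]: continuous,
    non-decreasing, and singular (derivative 0 almost everywhere)."""
    if x >= 1.0:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:           # inside a removed middle third
            return value + scale
        value += scale * (digit // 2)
        scale /= 2
    return value

assert cantor(0.0) == 0.0 and cantor(1.0) == 1.0
assert abs(cantor(1 / 3) - 0.5) < 1e-9
xs = [k / 243 for k in range(244)]
assert all(cantor(a) <= cantor(b) + 1e-6 for a, b in zip(xs, xs[1:]))
```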

Before proceeding, it is useful to prove the following result:

Claim 1: Let $f:\left[\frac{1}{2},1\right]\to\mathbb{R}$ and $g:\left[0,\frac{1}{2}\right]\to\mathbb{R}$ be bounded. One has that

(i) $\displaystyle\int_{1/2}^{1}f(\theta_i)\,d\Lambda(\theta_i)=\sum_{j=0}^{\infty}f(d_j)\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)\right]\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}$;

(ii) $\displaystyle\int_{0}^{1/2}g(\theta_i)\,d\Lambda(\theta_i)=\sum_{j=-\infty}^{0}g(d_j)\left[\Lambda(0)+2\left(\int_{0}^{1/2}F(\tau)d\tau-\int_{d_j}^{1/2}F(\tau)d\tau\right)\right]\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}$.

Proof. We only prove (i); (ii) follows very similar arguments. Note that, by the definition of $d\Lambda(\theta_i)$,
$$\int_{1/2}^{1}f(\theta_i)\,d\Lambda(\theta_i)=\int_{1/2}^{1}\sum_{j=0}^{\infty}z_j\,\frac{f(\theta_i)}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]\delta_{d_j}(\theta_i)\,d\theta_i.$$
Moreover, by Lebesgue's Dominated Convergence Theorem (the first equality) and the linearity of the integral, the above equals
$$\lim_{N\to\infty}\sum_{j=0}^{N}z_j\int_{1/2}^{1}\frac{f(\theta_i)}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]\delta_{d_j}(\theta_i)\,d\theta_i.$$
Now, by the definition of a Dirac measure,
$$z_j\int_{1/2}^{1}\frac{f(\theta_i)}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]\delta_{d_j}(\theta_i)\,d\theta_i$$
$$=f(d_j)\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)\right]\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}.$$
Hence,
$$\int_{1/2}^{1}f(\theta_i)\,d\Lambda(\theta_i)=\sum_{j=0}^{\infty}f(d_j)\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)\right]\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]},$$
as claimed.

The Lagrangian Evaluated at the Multipliers and the Optimality Result: Using the multipliers above, and performing some integration by parts, one can rewrite the Lagrangian as
$$\begin{aligned}
L\left(\cdot\,\middle|\,(\gamma_i(\cdot),\Lambda_i(\cdot))_{i=1,2}\right)=\sum_{i=1}^{2}\Bigg\{&E_{\theta_{-i}}\left[-\left(a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right)^{2}\right]\\
&+E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\geq\tfrac{1}{2}\right]-E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\frac{F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\leq\tfrac{1}{2}\right]\\
&+\Lambda(1)E_{\theta_{-i}}[a(1,\theta_{-i})]-\Lambda(0)E_{\theta_{-i}}[a(0,\theta_{-i})]-\int_{0}^{1}E_{\theta_{-i}}[a(\theta)]\,d\Lambda(\theta_i)\Bigg\},
\end{aligned}$$
where we have used the fact that
$$U_i\left(\tfrac{1}{2}\right)=\int_{A}-\left(a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right)^{2}dP\left(a\,\middle|\,\tfrac{1}{2},\theta_{-i}\right)=E_{\theta_{-i}}\left[-\left(a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right)^{2}\right].$$

Before proceeding, we pause for two remarks:

Remark 1: It is apparent that, except for the case in which $\theta_i=\frac{1}{2}$, all that matters for the value of the Lagrangian is the expected decision $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$. Even for the case in which $\theta_i=\frac{1}{2}$, the expected decision will play a key role, as the derivative of $E_{\theta_{-i}}\left[-\left(a\left(\frac{1}{2},\theta_{-i}\right)-\frac{1}{2}\right)^{2}\right]$ is linear in $E_{\theta_{-i}}\left[a\left(\frac{1}{2},\theta_{-i}\right)\right]$, so that the First Order Condition with respect to $a\left(\frac{1}{2},\theta_{-i}\right)$ will depend on $E_{\theta_{-i}}\left[a\left(\frac{1}{2},\theta_{-i}\right)\right]$. Hence, we will show that the expected decision implied by the $DD^{IC}$ satisfies the First Order Condition. Since the objective is concave, this will establish that the $DD^{IC}$ maximizes the Lagrangian for the chosen multipliers.

Remark 2: For most of the arguments we use below, it will be convenient to express some of the components of the Lagrangian in terms of $dE_{\theta_{-i}}[a(\theta)]$. Toward that, the following identities will be useful:
$$E_{\theta_{-i}}[a(1,\theta_{-i})]=E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]+\int_{1/2}^{1}dE_{\theta_{-i}}[a(\theta)]$$
and
$$E_{\theta_{-i}}[a(0,\theta_{-i})]=E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]-\int_{0}^{1/2}dE_{\theta_{-i}}[a(\theta)].$$

After some rounds of integration by parts, we can write the Lagrangian as the sum of two groups of terms, $L=A+B$. Term $A$ collects the components that involve only the decision at $\theta_i=\frac{1}{2}$:
$$A=\sum_{i=1}^{2}\left[E_{\theta_{-i}}\left[-\left(a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right)^{2}\right]+\left(\Lambda(1)+2\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau\right)E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]-\left(\Lambda(0)+2\int_{0}^{1/2}F(\tau)d\tau\right)E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]\right].$$
Term $B$ collects the components that involve the increments $dE_{\theta_{-i}}[a(\theta)]$; over the region $\left[\frac{1}{2},1\right]$ it reads
$$\sum_{i=1}^{2}\left[\int_{1/2}^{1}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]dE_{\theta_{-i}}[a(\theta)]-\int_{1/2}^{1}E_{\theta_{-i}}[a(\theta)]\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\right],$$
with the mirror-image components, involving $F(\tau)$ and $\Lambda(0)$, over the region $\left[0,\frac{1}{2}\right]$.

We next prove the following result.

Proposition 3 ($DD^{IC}$ maximizes the Lagrangian) For multipliers $\gamma_i(\theta_i)=F(\theta_i)$ and $\Lambda(\theta_i)$ with $d\Lambda\left(\frac{1}{2}\right)=0$ and
$$d\Lambda(\theta_i)=\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\quad\text{if }\theta_i\in\left[\tfrac{1}{2},1\right],$$
$$d\Lambda(\theta_i)=\left[\Lambda(0)+2\left(\int_{0}^{1/2}F(\tau)d\tau-\int_{\theta_i}^{1/2}F(\tau)d\tau\right)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}\quad\text{if }\theta_i\in\left[0,\tfrac{1}{2}\right],$$
the $DD^{IC}$ maximizes the Lagrangian.

Proof: We prove the result in two steps. In the first step, we show that, given the multipliers, the $DD^{IC}$ maximizes the Lagrangian over the set of mechanisms such that $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ is constant over $\left[0,\frac{1}{2}\right]\setminus\{d_j\}_{j=-\infty}^{0}$ and $\left[\frac{1}{2},1\right]\setminus\{d_j\}_{j=0}^{\infty}$ and (potentially) jumps at the cutoff points $\{d_j\}_{j=-\infty}^{0}$ and $\{d_j\}_{j=0}^{\infty}$. In the second step, using Lemma 6, we show that, in the search for a maximizer of the Lagrangian, it is without loss to restrict attention to mechanisms with exactly this property.

STEP 1: The $DD^{IC}$ maximizes the Lagrangian over the set of mechanisms such that $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ is constant over $\left[0,\frac{1}{2}\right]\setminus\{d_j\}_{j=-\infty}^{0}$ and $\left[\frac{1}{2},1\right]\setminus\{d_j\}_{j=0}^{\infty}$ and (potentially) jumps at the cutoff points.

Proof of Step 1: Given the symmetry around $\frac{1}{2}$, it suffices to show the result for the case in which $\theta_i\in\left[\frac{1}{2},1\right]$. Taking First Order Conditions of the Lagrangian with respect to $a\left(\frac{1}{2},\theta_{-i}\right)$, one has, using
$$\Lambda(1)+2\int_{1/2}^{1}\left(1-F(\theta_i)\right)d\theta_i=\Lambda(0)+2\int_{0}^{1/2}F(\theta_i)\,d\theta_i,$$
the expression
$$E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right]=0,$$
which, evaluated at the $DD^{IC}$, holds: at the $DD^{IC}$, $a\left(\frac{1}{2},\theta_{-i}\right)-\frac{1}{2}=0$ for all $\theta_{-i}$.

Consider now Term $B$. Note that it equals zero when evaluated at the $DD^{IC}$ over $\left[\frac{1}{2},1\right]$. Indeed, using the fact that, at the $DD^{IC}$, $E_{\theta_{-i}}[a(\theta)]=E_{\theta_{-i}}\left[a^{DD}(\theta)\right]$, the two integrals in Term $B$ coincide:
$$\int_{1/2}^{1}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]$$
$$-\int_{1/2}^{1}E_{\theta_{-i}}\left[a^{DD}(\theta)\right]\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]}=0.$$

We now show that, starting at the $DD^{IC}$, a loss is imposed in Term $B$ of the objective if one moves in the direction of an arbitrary allocation $\left\{\hat{P}(\cdot|\theta_i,\theta_{-i})\right\}_{(\theta_i,\theta_{-i})\in[0,1]^{2}}$ with an expected decision $\hat{a}(\theta_i,\theta_{-i})$ such that $E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]$ is constant over $\left[\frac{1}{2},1\right]\setminus\{d_j\}_{j=0}^{\infty}$ and (potentially) jumps at the cutoff points $\{d_j\}_{j=0}^{\infty}$. Any such $E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]$ can be described as a distribution of the form
$$dE_{\theta_{-i}}[\hat{a}(\cdot,\theta_{-i})]=\sum_{j=0}^{\infty}k_j\,\delta_{d_j}(\cdot),$$
where
$$k_j=\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]-\lim_{\theta_i\to d_j^{-}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]=\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]-E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})]$$
is the jump of $E_{\theta_{-i}}[\hat{a}(\cdot,\theta_{-i})]$ at $d_j$.[46]

Now, for $\varepsilon\in(0,1)$, consider the allocation which is a convex combination of the $DD^{IC}$ and $\left\{\hat{P}(\cdot|\theta_i,\theta_{-i})\right\}$, with
$$E_{\theta_{-i}}[\tilde{a}(\cdot,\theta_{-i})]=(1-\varepsilon)E_{\theta_{-i}}\left[a^{DD}(\theta)\right]+\varepsilon E_{\theta_{-i}}[\hat{a}(\cdot,\theta_{-i})].$$
The derivative of the component of Term $B$ (corresponding to the region $\left[\frac{1}{2},1\right]$) with respect to $\varepsilon$, evaluated at zero, is
$$\int_{1/2}^{1}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]dE_{\theta_{-i}}[\hat{a}(\theta)]$$
$$-\int_{1/2}^{1}E_{\theta_{-i}}[\hat{a}(\theta)]\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\frac{dE_{\theta_{-i}}\left[a^{DD}(\theta)\right]}{E_{\theta_{-i}}\left[a^{DD}(\theta)\right]},$$
which, using Claim 1 and the definition of $dE_{\theta_{-i}}[\hat{a}(\theta)]$, can be written as
$$\sum_{j=0}^{\infty}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\left(k_j-E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})]\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}\right).$$
Define
$$h_j=\frac{\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}\left[a^{DD}(\theta_i,\theta_{-i})\right]}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}.$$
One has that $h_j\gg 1$, for the jump of a $DD^{IC}$ at $d_j$ is discrete. Now, note that
$$\frac{z_j}{E_{\theta_{-i}}\left[a^{DD}(d_j,\theta_{-i})\right]}=h_j-1,$$
so we can write the above expression as
$$\sum_{j=0}^{\infty}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\left(k_j-E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})]\left(h_j-1\right)\right).\tag{Directional Derivative}$$
Using the fact that
$$k_j=\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]-E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})],$$
one can write (Directional Derivative) as
$$\sum_{j=0}^{\infty}\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]\left(\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]-h_j\,E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})]\right).\tag{Directional Derivative'}$$
We now argue that the contribution of the term $\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]$ to the above expression is non-positive. Toward that, note that, since $h_t\gg 1$ for all $d_t$ (i.e., for all cutoffs $d_t$, $h_t$ is bounded away from 1), there exists a $\delta>0$ so that
$$\inf_{t}\{h_t\}\geq 1+\delta.$$
Moreover, since the cutoffs are dense in $\left[\frac{1}{2},1\right]$, for all $\eta>0$, there exists a cutoff $d_{t(\eta)}>d_j$ such that $d_{t(\eta)}-d_j<\eta$. Hence, the contribution of the term $\lim_{\theta_i\to d_j^{+}}E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]$ to (Directional Derivative') is no larger than
$$\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right]-(1+\delta)\left[2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{d_j+\eta}\left(1-F(\tau)\right)d\tau\right)+\Lambda(1)\right],$$
which is smaller than zero for sufficiently small $\eta$.[47]

[46] Without loss, we assume that $E_{\theta_{-i}}[\hat{a}(\theta_i,\theta_{-i})]$ is continuous from the left.

STEP 2: In the search for an optimum of the Lagrangian, it is without loss to restrict attention to mechanisms such that, for $i=1,2$, $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ is constant over $\left[0,\frac{1}{2}\right]\setminus\{d_j\}_{j=-\infty}^{0}$ and $\left[\frac{1}{2},1\right]\setminus\{d_j\}_{j=0}^{\infty}$ and (potentially) jumps at the cutoff points $\{d_j\}_{j=-\infty}^{0}$ and $\{d_j\}_{j=0}^{\infty}$.

[47] The argument would be much simpler if one could order the set $\{d_j\}_{j=0}^{\infty}$ of DD cutoff points over $\left[\frac{1}{2},1\right]$, with $d_0=\frac{1}{2}$ and $d_j<d_{j+1}$. Indeed, one would then have $k_j=E_{\theta_{-i}}[\hat{a}(d_{j+1},\theta_{-i})]-E_{\theta_{-i}}[\hat{a}(d_j,\theta_{-i})]$, and the contribution of a term $E_{\theta_{-i}}[\hat{a}(d_{j+1},\theta_{-i})]$ would be the multiplier coefficient at $d_j$, minus $h_{j+1}$ times the coefficient at $d_{j+1}$, plus the coefficient at $d_{j+1}$, which would be no larger than zero. However, since the $\{d_j\}_{j=0}^{\infty}$ are dense, one cannot enumerate them in an ordered fashion (since, starting at any $d_j$, there is no cutoff which is larger than $d_j$ and smaller than all other cutoffs that are larger than $d_j$). Hence, we needed to make this more involved argument.

Proof of Step 2: Given the multipliers, the Lagrangian can be decomposed in two terms:
$$E_{\theta_{-i}}\left[-\left(a\left(\tfrac{1}{2},\theta_{-i}\right)-\tfrac{1}{2}\right)^{2}\right]+\left(\Lambda(1)+2\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau\right)E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]-\left(\Lambda(0)+2\int_{0}^{1/2}F(\tau)d\tau\right)E_{\theta_{-i}}\left[a\left(\tfrac{1}{2},\theta_{-i}\right)\right]\tag{Term 1}$$
and the remaining components, which depend on the allocation only through the increments $dE_{\theta_{-i}}[a(\theta)]$ and the values of $E_{\theta_{-i}}[a(\theta)]$ at the cutoffs; over $\left[\frac{1}{2},1\right]$,
$$\int_{1/2}^{1}\left[\Lambda(1)+2\left(\int_{1/2}^{1}\left(1-F(\tau)\right)d\tau-\int_{1/2}^{\theta_i}\left(1-F(\tau)\right)d\tau\right)\right]dE_{\theta_{-i}}[a(\theta)]-\sum_{j=1}^{\infty}k_{j+1}E_{\theta_{-i}}[a(d_{j+1},\theta_{-i})],\tag{Term 2}$$
where the weights $k_{j+1}$ collect, as in Claim 1, the mass that $d\Lambda$ places on the cutoff $d_{j+1}$.

The $DD^{IC}$ satisfies the FOC for Term 1. We now argue that, in the search for an optimum of (Term 2), one can restrict attention to mechanisms such that, for $i=1,2$, $E_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ is constant over $\left[0,\frac{1}{2}\right]\setminus\{d_j\}_{j=-\infty}^{0}$ and $\left[\frac{1}{2},1\right]\setminus\{d_j\}_{j=0}^{\infty}$ and (potentially) jumps at the cutoff points.

After some integration by parts, one can write (Term 2) as
$$\Lambda(1)\int_{1/2}^{1}dE_{\theta_{-i}}[a(\theta_i,\theta_{-i})]+E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right]-\sum_{j=1}^{\infty}k_{j+1}E_{\theta_{-i}}[a(d_{j+1},\theta_{-i})].$$
For the first and third terms, $\int_{1/2}^{1}dE_{\theta_{-i}}[a(\theta_i,\theta_{-i})]$ and $\sum_{j}k_{j+1}E_{\theta_{-i}}[a(d_{j+1},\theta_{-i})]$, it is clearly without loss of optimality to restrict attention to allocations with the property in the statement, for they only depend on the values of $E_{\theta_{-i}}[a(\theta)]$ over $\{d_j\}_{j=0}^{\infty}$. We now consider the term
$$E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right].$$
Let $\{d_j\}_{j=1}^{\infty}$ be the cutoffs implied by the $DD^{IC}$ over $\left[\frac{1}{2},1\right]$. Consider sets $\{A_j\}_{j=1}^{\infty}$ such that

(i) $A_j\cap A_k=\varnothing$ if $j\neq k$;

(ii) $\left(\bigcup_{j}A_j\right)\cap\{d_j\}_{j=1}^{\infty}=\varnothing$;

(iii) $\left(\bigcup_{j}A_j\right)\cup\{d_j\}_{j=1}^{\infty}=\left[\tfrac{1}{2},1\right]$.

Define $E_{\theta_{-i}}[\tilde{a}(\theta)]$ by, for $\theta_i\in A_j\cup\{d_j\}$,
$$E_{\theta_{-i}}[\tilde{a}(\theta_i,\theta_{-i})]=E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,A_j\right].$$
Note that $E_{\theta_{-i}}[\tilde{a}(\theta)]$ satisfies the condition in the statement of STEP 2.

Now, using the definition of $\{A_j\}_{j=1}^{\infty}\cup\{d_j\}_{j=1}^{\infty}$ and the Law of Iterated Expectations, one has
$$E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right]=\Pr\left(\bigcup_{j}A_j\right)E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\bigcup_{j}A_j\right]+\Pr\left(\{d_j\}_{j=1}^{\infty}\right)E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\{d_j\}_{j=1}^{\infty}\right].$$
Since $\{d_j\}_{j=1}^{\infty}$ is a zero-measure set with respect to the distribution $F(\cdot)$ (for $\{d_j\}_{j=1}^{\infty}$ is a countable set, and therefore has zero Lebesgue measure, and the measure $F(\cdot)$ is absolutely continuous with respect to the Lebesgue measure), the above can be simply written as
$$E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right]=\sum_{j}\Pr(A_j)\,E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A_j\right],$$
where the last equality uses the Law of Iterated Expectations. By Lemma 6, for all $A_j$,
$$E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A_j\right]\leq E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A_j\right]E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A_j\right].$$
Hence,
$$\sum_{j}\Pr(A_j)\,E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A_j\right]\leq\sum_{j}\Pr(A_j)\,E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}\,\middle|\,\theta_i\in A_j\right]E_{\theta_i}\left[E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\in A_j\right]=E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[\tilde{a}(\theta_i,\theta_{-i})]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right],$$
where the last equality follows from the Law of Iterated Expectations and the definition of $E_{\theta_{-i}}[\tilde{a}(\theta_i,\theta_{-i})]$. We have then shown that
$$E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right]\leq E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[\tilde{a}(\theta_i,\theta_{-i})]\,\middle|\,\theta_i\geq\tfrac{1}{2}\right].$$
It follows that, for the term $E_{\theta_i}\left[\frac{1-F(\theta_i)}{f(\theta_i)}E_{\theta_{-i}}[a(\theta)]\,\middle|\,\theta_i\geq\frac{1}{2}\right]$, a (weak) improvement can be obtained if one replaces an arbitrary $E_{\theta_{-i}}[a(\theta)]$ by a function that satisfies the condition in the statement.
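The averaging step just used can be illustrated numerically: replacing $E_{\theta_{-i}}[a(\theta)]$ by its conditional average over each cell $A_j$ weakly raises the term involving the inverse hazard rate whenever that rate is non-increasing, exactly as Lemma 6 is applied cell by cell above. A discretized sketch with $F$ uniform and our own illustrative decision rule:

```python
# Replacing A(t) by its conditional average over each cell weakly raises
# E[h(t) * A(t)] when h is non-increasing and A non-decreasing (Lemma 6,
# cell by cell). F uniform on [0, 1], so h(t) = 1 - t on [1/2, 1].

def h(t):
    return 1.0 - t

def A(t):                    # an illustrative non-decreasing decision
    return t * t

cells = [(0.5, 0.7), (0.7, 0.9), (0.9, 1.0)]      # a partition of [1/2, 1]
grid = [0.5 + 0.5 * (k + 0.5) / 3000 for k in range(3000)]

cell_avg = {}
for lo, hi in cells:
    members = [s for s in grid if lo <= s < hi]
    cell_avg[(lo, hi)] = sum(A(s) for s in members) / len(members)

def averaged_A(t):
    for lo, hi in cells:
        if lo <= t < hi:
            return cell_avg[(lo, hi)]

original = sum(h(t) * A(t) for t in grid) / len(grid)
averaged = sum(h(t) * averaged_A(t) for t in grid) / len(grid)
assert averaged > original   # the cell-averaged decision fares weakly better
```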

The combination of Proposition 3 with Lemma 5 proves Theorem 2.




Appendix C: Optimality of the DD Mechanism with General Preferences and Uniform Distribution, and the Proof of Proposition 2

Optimality: We start by showing that, for the case of the general preferences described in Section 5, a DD is optimal among the class of non-decreasing Incentive Compatible allocations. Our strategy of proof involves two main steps. The first step shows that, for any given class of allocations, if it is optimal to set 1/2 off-diagonals, then a DD must be an optimal allocation within that class. The idea of the proof is straightforward. Given a uniform distribution, the problem of finding an optimal allocation, conditional on both agents having their types in the same region (say, above 1/2), and given that a constant action is chosen when they are in different regions, is exactly the same as the original problem. In other words, if a constant action is chosen off-diagonals, so that the schedule off-diagonals plays no role in the provision of incentives, the problem self-replicates. Hence, the DD with cutoffs corresponding to the dyadic rationals must be optimal.

In the second step, we show that if a non-decreasing schedule does not have 1/2 off-diagonals, it can be weakly improved upon. We proceed in the following way. Starting with an arbitrary Incentive Compatible, non-decreasing, and continuous schedule $a(\cdot)$, we show, using the same arguments as in the text, and ignoring Incentive Compatibility issues, that an improvement can be attained if one sets 1/2 off-diagonals. We then show that further modifications of $a(\cdot)$ along the main diagonals can be made so as to guarantee both Incentive Compatibility and that the new Incentive Compatible allocation still fares better than $a(\cdot)$.[48] We then show, using limiting arguments, that an improvement can also be achieved if the initial non-decreasing allocation is not continuous. For this case, the improvement is not necessarily strict. Steps 1 and 2 together prove Theorem 3.

Throughout this appendix, we consider allocation rules (enforceable contracts) that map the players' reported types $\tilde\theta_1, \tilde\theta_2$ and an independent randomization device $x \sim U[0,1]$ into an action:[49]
$$a\left(\tilde\theta_i, \tilde\theta_{-i}, x\right) : [0,1]^2 \times [0,1] \to \mathbb{R}.$$
To simplify the arguments of the proof, we allow for a randomization device $x \sim U[0,1]$. However, it will only be used to preserve the symmetry (around $\frac12$) of the allocation, and only for a zero-measure set of types. As we show below, there will be no need for randomization in the allocation from an efficiency standpoint when one considers monotonic allocations.

[48] One concern that might arise is that maximizing $V(a)$ (see equation (??)) is equivalent to maximizing the sum of the agents' utilities only when $a$ is IC. Nonetheless, the improvement we attain by setting 1/2 off-diagonals does carry meaning, since we continue modifying the allocation to restore IC and we always use the same objective function.

[49] Even though we allow for general stochastic mechanisms, the randomization device is only used to preserve the symmetry in the allocation. There is no need for randomization in the allocation from an efficiency standpoint.

Before proceeding, we derive an Incentive Compatible representation of the agents' utility for the general preferences case.

Lemma 7 (IC Representation for General Preferences) Letting
$$U(\theta_i) = \max_{\hat\theta_i} E_{\theta_{-i},x}\left[u\left(a\left(\hat\theta_i,\theta_{-i},x\right),\theta_i\right)\right] = E_{\theta_{-i},x}\left[u\left(a\left(\theta_i,\theta_{-i},x\right),\theta_i\right)\right],$$
Incentive Compatibility is equivalent to:[50]
$$U(\theta_i) = \begin{cases} U\left(\tfrac12\right) + \displaystyle\int_{1/2}^{\theta_i} E_{\theta_{-i},x}\left[u_\theta\left(a\left(\tau,\theta_{-i},x\right),\tau\right)\right]d\tau, & \text{if } \theta_i > \tfrac12, \\[6pt] U\left(\tfrac12\right) - \displaystyle\int_{\theta_i}^{1/2} E_{\theta_{-i},x}\left[u_\theta\left(a\left(\tau,\theta_{-i},x\right),\tau\right)\right]d\tau, & \text{if } \theta_i < \tfrac12, \end{cases} \tag{ICRepresentation}$$
with $E_{\theta_{-i},x}\left[u_\theta\left(a\left(\hat\theta,\theta_{-i},x\right),\theta_i\right)\right]$ non-decreasing in $\hat\theta$.
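Before turning to the proof, the representation can be sanity-checked numerically on the coarsest DD: a single round with cutoff 1/2 and binary left/right reports, so the action is 1/4 when both report left, 3/4 when both report right, and 1/2 when the reports split. The quadratic-loss utility $u(a,\theta) = -(a-\theta)^2$ below is purely an illustrative assumption, not part of the lemma; the sketch verifies that truthful reporting is optimal and that $U(\theta_i)$ matches $U(\tfrac12)$ plus the integral of $E_{\theta_{-i}}[u_\theta]$.

```python
# Numeric sanity check of (ICRepresentation) on the coarsest DD: one round,
# cutoff 1/2, binary reports, opponent's type uniform on [0, 1].
# Quadratic-loss utility is an illustrative assumption, not part of the lemma.

def lotteries(report_left):
    # interim lottery over actions: (action, probability) pairs
    return [(0.25, 0.5), (0.5, 0.5)] if report_left else [(0.5, 0.5), (0.75, 0.5)]

def u(a, theta):
    return -(a - theta) ** 2

def U(theta, report_left=None):
    if report_left is None:
        report_left = theta <= 0.5          # truthful binary report
    return sum(p * u(a, theta) for a, p in lotteries(report_left))

# (i) incentive compatibility: truth beats the only available deviation
for theta in [0.1, 0.3, 0.49, 0.51, 0.7, 0.9]:
    assert U(theta) >= U(theta, report_left=not (theta <= 0.5))

# (ii) envelope formula: U(theta) = U(1/2) + integral of E[u_theta(a(tau, .), tau)]
def E_du(tau):
    return sum(p * 2.0 * (a - tau) for a, p in lotteries(tau <= 0.5))

def envelope_U(theta, n=20000):
    lo, hi = sorted((0.5, theta))
    h = (hi - lo) / n
    integral = sum(E_du(lo + (k + 0.5) * h) for k in range(n)) * h
    return U(0.5) + (integral if theta > 0.5 else -integral)

for theta in [0.2, 0.3, 0.7, 0.85]:
    assert abs(U(theta) - envelope_U(theta)) < 1e-9
```

Note that expected monotonicity here reduces to $E_{\theta_{-i}}[a(\hat\theta,\theta_{-i},x)]$ being non-decreasing in the report $\hat\theta$, which clearly holds for this allocation.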

Proof. For necessity, just notice that the integral formula is implied by Milgrom and Segal's (2002) Envelope Theorem. Moreover, if $a(\theta,x)$ is Incentive Compatible, for all $\theta' > \theta''$ one must have
$$E_{\theta_{-i},x}\left[u\left(a(\theta',\theta_{-i},x),\theta'\right)\right] \ge E_{\theta_{-i},x}\left[u\left(a(\theta'',\theta_{-i},x),\theta'\right)\right] \tag{IC$_{\theta'\theta''}$}$$
and
$$E_{\theta_{-i},x}\left[u\left(a(\theta'',\theta_{-i},x),\theta''\right)\right] \ge E_{\theta_{-i},x}\left[u\left(a(\theta',\theta_{-i},x),\theta''\right)\right]. \tag{IC$_{\theta''\theta'}$}$$
Summing both expressions up,
$$E_{\theta_{-i},x}\left[u\left(a(\theta',\theta_{-i},x),\theta'\right)\right] - E_{\theta_{-i},x}\left[u\left(a(\theta',\theta_{-i},x),\theta''\right)\right] \ge E_{\theta_{-i},x}\left[u\left(a(\theta'',\theta_{-i},x),\theta'\right)\right] - E_{\theta_{-i},x}\left[u\left(a(\theta'',\theta_{-i},x),\theta''\right)\right].$$
Hence,
$$\int_{\theta''}^{\theta'} \left( E_{\theta_{-i},x}\left[u_\theta\left(a(\theta',\theta_{-i},x),\tau\right)\right] - E_{\theta_{-i},x}\left[u_\theta\left(a(\theta'',\theta_{-i},x),\tau\right)\right] \right) d\tau \ge 0$$
for all $\theta' > \theta''$. Since $u_{\theta a}(a,\theta_i) \ge 0$, the expected monotonicity condition must then hold.

For sufficiency, let $\theta_i > 0.5$, and consider $0.5 < \hat\theta < \theta_i$:
$$U_i(\theta_i) - U_i(\hat\theta) = \int_{\hat\theta}^{\theta_i} E_{\theta_{-i},x}\left[u_\theta\left(a(\tau,\theta_{-i},x),\tau\right)\right]d\tau \ge \int_{\hat\theta}^{\theta_i} E_{\theta_{-i},x}\left[u_\theta\left(a(\hat\theta,\theta_{-i},x),\tau\right)\right]d\tau$$
$$= E_{\theta_{-i},x}\left[u\left(a(\hat\theta,\theta_{-i},x),\theta_i\right)\right] - E_{\theta_{-i},x}\left[u\left(a(\hat\theta,\theta_{-i},x),\hat\theta\right)\right] = E_{\theta_{-i},x}\left[u\left(a(\hat\theta,\theta_{-i},x),\theta_i\right)\right] - U_i(\hat\theta),$$
where the first inequality follows from the expected monotonicity of the allocation. Therefore, $U_i(\theta_i) \ge E_{\theta_{-i},x}\left[u\left(a(\hat\theta,\theta_{-i},x),\theta_i\right)\right]$. The proof of all other cases is analogous.

[50] The symmetry of the problem makes it natural to pick $\theta = \frac12$ as the reference type. The reader might be more used to seeing the highest or lowest type picked as the reference type. Note that this choice is in general arbitrary and made for convenience.
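The object whose optimality these steps establish, the DD with cutoffs at the dyadic rationals, has the dynamic implementation described in the Introduction: agents repeatedly report on which side of the current cutoff their type lies; agreement discards the half that neither preferred, and disagreement implements the cutoff. A minimal simulation sketch (the function name and the cap on rounds are ours):

```python
def divide_and_discard(theta1, theta2, max_rounds=50):
    """Dynamic DD on [0, 1]: every cutoff visited is a dyadic rational."""
    lo, hi = 0.0, 1.0
    for _ in range(max_rounds):
        cut = (lo + hi) / 2.0
        left1, left2 = theta1 < cut, theta2 < cut
        if left1 != left2:        # one says left, the other right:
            return cut            #   the current cutoff is implemented
        if left1:                 # both to the left: discard the right part
            hi = cut
        else:                     # both to the right: discard the left part
            lo = cut
    return (lo + hi) / 2.0        # types numerically indistinguishable

assert divide_and_discard(0.30, 0.80) == 0.5     # disagree at the first cutoff
assert divide_and_discard(0.10, 0.20) == 0.125   # agree twice, then split
assert divide_and_discard(0.70, 0.90) == 0.75
```

Each surviving interval replicates the original problem in rescaled form, which is exactly the self-replication property the first step of the proof exploits.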

Separability and the Optimality of 1/2 Off-Diagonals

Lemma 8 (Separability) Let $a(\theta,x)$ be an allocation that solves the program of interest. If $a(\theta,x) = \frac12$ for all $\theta \in \left(0,\frac12\right)\times\left(\frac12,1\right) \cup \left(\frac12,1\right)\times\left(0,\frac12\right)$, and $a\left(\frac12,\theta_{-i},x\right) = \frac12$ for all $\theta_{-i}$ whenever $x > \frac12$, then:

for $\theta \in [0,1/2]^2$,
$$a(\theta,x) \in \arg\max_{a(\cdot)} \sum_i E\left[u(a,\theta_i) \mid \theta \in (0,1/2)^2\right] \quad \text{s.t. IC for } i=1,2 \text{ given } \theta_{-i}\in(0,1/2), \text{ and monotonicity};$$
and for $\theta \in [1/2,1]^2$,
$$a(\theta,x) \in \arg\max_{a(\cdot)} \sum_i E\left[u(a,\theta_i) \mid \theta \in (1/2,1)^2\right] \quad \text{s.t. IC for } i=1,2 \text{ given } \theta_{-i}\in(1/2,1), \text{ and monotonicity}.$$

Proof of Lemma Separability. Using the agents' virtual utility (equation 8), after some integration by parts, one can write the agents' problem as:
$$\max_{a(\cdot,\cdot)\ \mathrm{IC\ over\ }[0,1]^2}\ \sum_i\Bigg\{\ \tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}<\tfrac12,\ x<\tfrac12\right] - E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)\theta_i\ \Big|\ \theta_i,\theta_{-i}<\tfrac12\right]$$
$$\qquad +\ \tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}>\tfrac12,\ x<\tfrac12\right] + E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)(1-\theta_i)\ \Big|\ \theta_i,\theta_{-i}>\tfrac12\right]$$
$$\qquad +\ \tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}<\tfrac12,\ x>\tfrac12\right] + \tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}>\tfrac12,\ x>\tfrac12\right]$$
$$\qquad -\ E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)\theta_i\ \Big|\ \theta_i<\tfrac12<\theta_{-i}\right] + E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)(1-\theta_i)\ \Big|\ \theta_i>\tfrac12>\theta_{-i}\right]\ \Bigg\}.$$
Given that $a(\theta,x)$ sets $1/2$ off-diagonals, what remains to be proven is that $a(\theta,x)$ maximizes the same objective subject to IC for $i=1,2$ given $\theta_{-i}\in(0,1)$, monotonicity, and $a(\theta,x)=\frac12$ off-diagonals. Since any schedule which is incentive compatible over $[0,1]^2$ and has a constant off-diagonals must, in fact, be incentive compatible over $\left[0,\frac12\right]^2$ and $\left[\frac12,1\right]^2$, the program above implies:

for $\theta \in (0,1/2)^2$,
$$a(\theta,x) \in \arg\max_{a(\cdot)} \sum_i E\left[u_i(a,\theta_i) \mid \theta \in (0,1/2)^2\right] \quad \text{s.t. IC for } i=1,2 \text{ given } \theta_{-i}\in(0,1/2) \text{ and monotonicity};$$
and for $\theta \in (1/2,1)^2$,
$$a(\theta,x) \in \arg\max_{a(\cdot)} \sum_i E\left[u_i(a,\theta_i) \mid \theta \in (1/2,1)^2\right] \quad \text{s.t. IC for } i=1,2 \text{ given } \theta_{-i}\in(1/2,1) \text{ and monotonicity},$$
as desired.

Lemma (1/2 off-diagonals): Given any incentive compatible allocation $a(\theta,x)$ that satisfies Monotonicity, we can find an alternative incentive compatible allocation $\tilde a(\theta,x)$ which is weakly better and satisfies $\tilde a(\theta,x) = \frac12$ for all $x$ when $\theta_i > \frac12 > \theta_{-i}$, and $\tilde a\left(\frac12,\theta_{-i},x\right) = \frac12$ when $x > \frac12$.

Proof of Lemma (1/2 off-diagonals): Without loss of generality, we focus on the case in which the starting $a(\theta,x)$ is symmetric across players and around $\frac12$, i.e.

$$a(\theta_i,\theta_{-i},x) = a(\theta_{-i},\theta_i,x), \qquad a(\theta_i,\theta_{-i},x) = 1 - a\left(1-\theta_i,\,1-\theta_{-i},\,x\right).$$
We first consider the case where the starting $a(\theta,x)$ is continuous, and show later (in Step 3) that the result extends to non-continuous allocations. Furthermore, let us first point out that it is without loss to start with a schedule that is not constant off-diagonals. Indeed, if $a(\theta,x) = c \in \mathbb{R}$ for all $\theta \in \left(0,\frac12\right)\times\left(\frac12,1\right) \cup \left(\frac12,1\right)\times\left(0,\frac12\right)$, we could set $\frac12$ off-diagonals (and $\frac12$ fifty percent of the time whenever a player announces $\frac12$)

without affecting incentives, and such a change would generate a gain. We now proceed to prove the result through a sequence of steps.

Step 1 (1/2 off-diagonals generates an improvement): In this first step, we show that a strict improvement can be attained if we replace the off-diagonal values in the original allocation by $1/2$. Formally,
$$a^{1/2}(\theta,x) = \begin{cases} \frac12 & \text{if } \theta_i > \frac12 > \theta_{-i}, \text{ or if } \theta_i = \frac12 \text{ and } x > \frac12, \\ a(\theta,x) & \text{otherwise.} \end{cases}$$
We should note, however, that $a^{1/2}(\theta,x)$ may not be IC (we will address this in the next step). We can write the objective functional $V(a)$ as the sum of the four groups of terms displayed in the proof of Lemma Separability: group A collects the terms conditional on $\theta_i,\theta_{-i}<\frac12$ (together with the boundary term with $x<\frac12$), group B the analogous terms for $\theta_i,\theta_{-i}>\frac12$, group C the two boundary terms
$$\tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}<\tfrac12,\ x>\tfrac12\right] + \tfrac14 E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}>\tfrac12,\ x>\tfrac12\right],$$
and group D the two off-diagonal terms
$$-E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)\theta_i\ \Big|\ \theta_i<\tfrac12<\theta_{-i}\right] + E_{\theta,x}\!\left[u_{\theta_i}\!\left(a(\theta,x),\theta_i\right)(1-\theta_i)\ \Big|\ \theta_i>\tfrac12>\theta_{-i}\right].$$
For any non-decreasing allocation $a(\theta_i,\theta_{-i},x)$, consider replacing $a(\theta_i,\theta_{-i},x)$ over the region $\left(0,\frac12\right)\times\left(\frac12,1\right)\cup\left(\frac12,1\right)\times\left(0,\frac12\right)$ by
$$\tilde a(\theta_i,\theta_{-i},x;\alpha) = (1-\alpha)\,a(\theta_i,\theta_{-i},x) + \alpha\,E_{\theta_{-i},x}\!\left[a(\theta_i,\theta_{-i},x)\mid\theta_i\right], \qquad \alpha\in[0,1].$$
We argue that this replacement has a positive effect on term D for $\alpha$ small. Indeed, regarding its first term, note that

$$\left.\frac{\partial}{\partial\alpha}\,E_{\theta,x}\!\left[u_{\theta_i}\!\left(\tilde a(\theta_i,\theta_{-i},x;\alpha),\theta_i\right)(1-\theta_i)\,\Big|\ \theta_i>\tfrac12>\theta_{-i}\right]\right|_{\alpha=0}$$
$$= E_{\theta,x}\!\left[u_{\theta_i a}\!\left(a(\theta_i,\theta_{-i},x),\theta_i\right)(1-\theta_i)\left(E_{\theta_{-i},x}\!\left[a(\theta_i,\theta_{-i},x)\mid\theta_i\right]-a(\theta_i,\theta_{-i},x)\right)\,\Big|\ \theta_i>\tfrac12>\theta_{-i}\right] > 0,$$
where the strict inequality follows from the Monotone Hazard condition along with $E_{\theta_{-i},x}\!\left[a(\theta_i,\theta_{-i},x)\mid\theta_i\right] - a(\theta_i,\theta_{-i},x)$ being decreasing (and non-constant) in $\theta_{-i}$ over $\left(0,\frac12\right)\times\left(\frac12,1\right)\cup\left(\frac12,1\right)\times\left(0,\frac12\right)$. A similar argument can be made for the other component of term D. Since the initial $a(\cdot)$ was arbitrary, one has that, over $\left(\frac12,1\right)\times\left(0,\frac12\right)\cup\left(0,\frac12\right)\times\left(\frac12,1\right)$, the optimal schedule is constant. The best among the constant actions for this region is $\frac12$. Moreover, by setting the constant to $\frac12$ when one announces $\frac12$ and $x>\frac12$, the term C, which is defined by
$$E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}\le\tfrac12,\ x>\tfrac12\right] + E_{\theta_{-i},x}\!\left[u\!\left(a\!\left(\tfrac12,\theta_{-i},x\right),\tfrac12\right)\,\Big|\ \theta_{-i}>\tfrac12,\ x>\tfrac12\right],$$
is also maximized. Therefore the modified allocation (a) has a constant action off-diagonals and (b) prescribes, at least half of the time, type $\frac12$'s most preferred action. Hence, $V\!\left(a^{1/2}\right) > V(a)$.

Step 2 (Restoring IC): In this step we show that we can modify $a^{1/2}(\theta,x)$ in a way that restores IC, preserves $1/2$ off-diagonals, and does strictly better than the original allocation $a(\theta,x)$. Throughout, we specify the changes only for the top quadrant ($\theta\in\left(\frac12,1\right)^2$), as the required changes over the bottom quadrant are similar.

We start by constructing an alternative schedule as follows. For an integer $N$, consider the partition of $\left(\frac12,1\right]$ given by $\{A_i\}_i$, where $i\in\{1,\dots,N\}$ and, for $i\le N-1$, $A_i = \left(\frac12+\frac{i-1}{2N},\,\frac12+\frac{i}{2N}\right]$, and $A_N = \left(\frac12+\frac{N-1}{2N},\,1\right]$. Let $a_{ij} = E_{\theta,x}\left[a(\theta,x)\mid\theta\in A_i\times A_j\right]$ denote the expected action in each square $A_i\times A_j$ under the original allocation. Now consider the schedule
$$a_N(\theta,x) = \begin{cases} \frac12 & \text{if } \theta_i>\frac12>\theta_{-i}, \text{ or if } \theta_i=\frac12 \text{ and } x>\frac12, \\ a_{ij} & \text{if } \theta\in A_i\times A_j\subset\left(\frac12,1\right)^2, \\ a_{1j} & \text{if } \theta_i=\frac12,\ \theta_{-i}\in A_j, \text{ and } x<\frac12. \end{cases} \tag{A1}$$
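The discretisation in (A1) can be made concrete: average the schedule over each square of the partition and check that the resulting step schedule inherits monotonicity. The schedule $a(\theta_1,\theta_2)=(\theta_1+\theta_2)/2$ used below is only an illustrative stand-in for a continuous non-decreasing allocation.

```python
# Step 2's discretisation: partition (1/2, 1] into N cells of width 1/(2N) and
# replace a(., .) on each square A_i x A_j by its cell average a_ij.
# The example schedule a(t1, t2) = (t1 + t2)/2 is an illustrative assumption.

def cell_average(a, i, j, N, m=50):
    """a_ij = E[a(theta) | theta in A_i x A_j], by midpoint sampling (i, j >= 1)."""
    w = 1.0 / (2 * N)
    pts_i = [0.5 + (i - 1) * w + (k + 0.5) * w / m for k in range(m)]
    pts_j = [0.5 + (j - 1) * w + (k + 0.5) * w / m for k in range(m)]
    return sum(a(t1, t2) for t1 in pts_i for t2 in pts_j) / (m * m)

a = lambda t1, t2: (t1 + t2) / 2
N = 4
aij = [[cell_average(a, i, j, N) for j in range(1, N + 1)]
       for i in range(1, N + 1)]

# the step schedule inherits the monotonicity of a: rows and columns are
# non-decreasing, which is what the argument of Step 2C works with
for i in range(N):
    for j in range(N - 1):
        assert aij[i][j + 1] >= aij[i][j]
        assert aij[j + 1][i] >= aij[j][i]
```

For this linear example the cell average is simply the value of $a$ at the cell midpoints, e.g. $a_{11} = 0.5625$ for $N = 4$.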

There exist an $\bar N$ and an $\bar\varepsilon>0$ so that, for all $N>\bar N$, $V(a_N) > V(a) + \bar\varepsilon$. This follows from noting that $a_N(\theta,x)$ converges to $a(\theta,x)$ over $\left(\frac12,1\right)^2$ as $N$ goes to infinity, so that
$$\lim_{N\to\infty} a_N(\theta,x) = a^{1/2}(\theta,x).$$
Since $V\!\left(a^{1/2}\right) > V(a)$, making use of the Dominated Convergence Theorem, it then follows that there exist an $\bar N$ and an $\bar\varepsilon>0$ so that, if $N>\bar N$,
$$V(a_N) > V(a) + \bar\varepsilon,$$
as stated. Although better than $a(\theta,x)$ for $N$ large, $a_N(\theta,x)$ might not be IC. To re-establish incentive compatibility, we need all the "cutoff types" $\frac12+\frac{i}{2N}$, $i\in\{1,\dots,N-1\}$, to be indifferent between reporting "left" and "right", together with expected monotonicity of the allocation. As a first step, we show that it is sufficient to modify the allocation by adding constants $\delta_i$ along the diagonal squares $A_i\times A_i$, $i\ge1$, to satisfy the indifference conditions. As a second step, we show that such $\delta_i$ can be chosen to be positive and such that the resulting allocation fares strictly better than $a(\theta,x)$. Finally, the last step shows that expected monotonicity is indeed satisfied.

Step 2A: In order for the IC constraints to be satisfied, the $\{\delta_i\}_i$ must be chosen to guarantee
$$\frac{1}{2N}\sum_{j=1,\,j\ne i}^{N} u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) + \frac{1}{2N}\,u\!\left(a_{ii}+\delta_i,\tfrac12+\tfrac{i}{2N}\right) = \frac{1}{2N}\sum_{j=1,\,j\ne i+1}^{N} u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right) + \frac{1}{2N}\,u\!\left(a_{i+1,i+1}+\delta_{i+1},\tfrac12+\tfrac{i}{2N}\right), \tag{A2}$$
so that the cutoff types are indifferent between reporting left and right. The above condition implicitly defines $\delta_{i+1}$ as a function of $\delta_i$. We next show that, for a properly chosen $\delta_1$, one can find a sequence $\{\delta_i\}_i$ with $\delta_i\ge0$ for all $i$.
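Condition (A2) does pin down $\delta_{i+1}$ from $\delta_i$: for a utility that is single-peaked in the action, the last term on the right is monotone in $\delta_{i+1}$ past the peak, so the implicit equation can be solved, for instance by bisection. The toy instance below ($N=2$, quadratic loss, the symmetric schedule $a(\theta_1,\theta_2)=(\theta_1+\theta_2)/2$; all three are our assumptions) is so symmetric that the solution is $\delta_2=-\delta_1$; the non-negativity obtained in the proof relies on the slack $V(a_N)>V(a)+\bar\varepsilon$ and on the careful choice of $\delta_1$, which this small example does not reproduce.

```python
# Solving the indifference condition (A2) for delta_{i+1} given delta_i in a
# toy instance: N = 2 cells on (1/2, 1], cell averages of a(t1, t2) = (t1+t2)/2,
# quadratic-loss utility.  All of these are illustrative assumptions.
N = 2
centers = [0.5 + (2 * i - 1) / (4 * N) for i in range(1, N + 1)]  # cell midpoints
a = [[(ci + cj) / 2 for cj in centers] for ci in centers]         # averages a_ij
t = 0.5 + 1 / (2 * N)                                             # cutoff type

def u(action, theta):
    return -(action - theta) ** 2

def residual(d1, d2, i=0):
    """(A2): left-hand side minus right-hand side (the common 1/(2N) cancels)."""
    lhs = sum(u(a[i][j], t) for j in range(N) if j != i) + u(a[i][i] + d1, t)
    rhs = sum(u(a[i + 1][j], t) for j in range(N) if j != i + 1) \
        + u(a[i + 1][i + 1] + d2, t)
    return lhs - rhs

def solve_d2(d1):
    # for d2 >= t - a[1][1] the action a_22 + d2 sits above the cutoff type,
    # so u falls and the residual rises in d2: bisection on a monotone bracket
    lo, hi = t - a[1][1], 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if residual(d1, mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

d2 = solve_d2(0.01)
assert abs(residual(0.01, d2)) < 1e-12     # the cutoff type is now indifferent
assert abs(d2 + 0.01) < 1e-9               # symmetric example: d2 = -d1
```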

Step 2B: Let $p>0$ be such that, by letting $\delta_1$ be $O(N^{-p})$, $\frac{1}{2N}u\!\left(a_{11}+\delta_1,\frac12+\frac{1}{2N}\right)$ is $O(1)$. There exists a sequence of non-negative numbers $\{\delta_i\}$ that (i) solve Equation (A2), and (ii) lead to a schedule that improves strictly upon the initial $a(\theta,x)$. Moreover, the $\delta_i$ can be chosen to be $O(N^{-p})$ for all $i$. We establish this result in three Claims.

Claim 1: If $\delta_1$ is strictly positive and $O(N^{-p})$, one can find, for all $i\ge2$, a sequence of strictly positive $\{\delta_i\}_{i\ge2}$, where each $\delta_i$ is also $O(N^{-p})$.

Proof: Note that (i) $a(\theta,x)$ is uniformly continuous (since $a(\theta,x)$ is a continuous function over a compact set), and (ii) $u(a(\theta,x),\theta_i)$ is uniformly continuous (for the same reasons). Hence, for any $\varepsilon>0$, there exists an $N'$ so that, if $N>N'$,
$$\left|u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) - u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right)\right| < 2\varepsilon$$
for all $i,j$. Therefore,
$$\left|\frac{1}{2N}\sum_j\left[u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) - u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right)\right]\right| < \varepsilon.$$
This implies that, for all $i,j$,
$$\frac{1}{2N}\sum_j\left[u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) - u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right)\right] = O\!\left(\tfrac1N\right).$$
Now, consider (A2) when $i=1$. It can be rewritten as
$$\frac{1}{2N}\left[u\!\left(a_{22}+\delta_2,\tfrac12+\tfrac{1}{2N}\right) - u\!\left(a_{22},\tfrac12+\tfrac{1}{2N}\right)\right] = \frac{1}{2N}\left[u\!\left(a_{11}+\delta_1,\tfrac12+\tfrac{1}{2N}\right) - u\!\left(a_{11},\tfrac12+\tfrac{1}{2N}\right)\right] + \frac{1}{2N}\sum_j\left[u\!\left(a_{1j},\tfrac12+\tfrac{1}{2N}\right) - u\!\left(a_{2j},\tfrac12+\tfrac{1}{2N}\right)\right] + O\!\left(\tfrac1N\right) + O\!\left(\tfrac1{N^2}\right).$$
If $\delta_1$ is $O(N^{-p})$, the right-hand side is $O(1)$. Therefore, the left-hand side must also be $O(1)$. This, in turn, calls for $\delta_2$ being $O(N^{-p})$. As $\delta_1>0$, $\delta_2$ can also be made positive. Proceeding inductively, the result follows.

Denote by $\hat N$ the value such that, for $N>\hat N$, the sequence $\{\delta_i\}$ is such that all elements are $O(N^{-p})$ and positive.

Claim 2: There exists a strictly positive $\delta_1$, which is $O(N^{-p})$ and such that the schedule defined by
$$\tilde a_1(\theta,x) = \begin{cases} a_N(\theta,x)+\delta_1 & \text{if } \theta\in A_1\times A_1, \\ a_N(\theta,x) & \text{otherwise} \end{cases}$$
satisfies $V(\tilde a_1) \ge V(a)$.

Proof: This follows immediately from Step 1. In fact, for all $N>\max\{\bar N,\hat N\}$,
$$V(a_N) > V(a) + \bar\varepsilon$$
for a strictly positive $\bar\varepsilon$. By adding a strictly positive number over $A_1\times A_1$, one will decrease type $\frac12$'s payoff. Since, from type $\frac12$'s perspective, the harm caused by such a change will occur with probability $\frac{1}{2N}$, and $V(a_N) > V(a)+\bar\varepsilon$, the positive $\delta_1$ necessary to satisfy $V(\tilde a_1)\ge V(a)$ can be made $O(N^{-p})$.

Claim 3: One can find a schedule that satisfies the indifference condition (Equation (A2)) and fares strictly better than $a(\theta,x)$.

Proof: For some $N>N^*=\max\{\bar N,\hat N\}$, define a new schedule that is equal to $a_N(\theta,x)$ except at the squares $A_i\times A_i$, where it is equal to $a_{ii}+\delta_i$ for $i\ge1$, where the $\delta_i$ are given by the sequence defined in Claim 1 for the $\delta_1$ in Claim 2. Denoting this schedule by $\tilde a(\cdot,x)$, one has that
$$V(\tilde a) > V(\tilde a_1) \ge V(a).$$
This follows because (i) $V(\cdot)$ is linear in $a(\cdot)$, and (ii) the adding of the $\delta_i\ge0$ for $i\ge2$ does not affect the utility of type $\frac12$.

We have just shown that, starting from an arbitrary continuous and non-decreasing schedule $a$, we can construct an alternative schedule $\tilde a$ that has $\frac12$ off-diagonals, satisfies Local Incentive Compatibility, and fares better than the initial $a$. What is left to show is that $\tilde a$ satisfies expected monotonicity. We now argue that this is in fact the case.

Step 2C (Monotonicity): For all $i$, there is an $\tilde N$ so that, for $N>\tilde N$,
$$\sum_{j=1}^{N}\tilde a_{i+1,j} \ge \sum_{j=1}^{N}\tilde a_{ij}.$$
Proof: First note that the indifference condition Eq. (A2) can be read as
$$0 = \frac{1}{2N}\sum_{j=1,\,j\ne i}^{N} u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) - \frac{1}{2N}\sum_{j=1,\,j\ne i+1}^{N} u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right) + \frac{1}{2N}\,u\!\left(a_{ii}+\delta_i,\tfrac12+\tfrac{i}{2N}\right) - \frac{1}{2N}\,u\!\left(a_{i+1,i+1}+\delta_{i+1},\tfrac12+\tfrac{i}{2N}\right).$$
Doing a Taylor series expansion of $u\!\left(a_{i+1,i+1}+\delta_{i+1},\frac12+\frac{i}{2N}\right)$ around $a_{ii}+\delta_i$, we have
$$u\!\left(a_{i+1,i+1}+\delta_{i+1},\tfrac12+\tfrac{i}{2N}\right) = u\!\left(a_{ii}+\delta_i,\tfrac12+\tfrac{i}{2N}\right) + u_a\!\left(a_{ii}+\delta_i,\tfrac12+\tfrac{i}{2N}\right)\left[a_{i+1,i+1}+\delta_{i+1}-a_{ii}-\delta_i\right] + O\!\left(\left[a_{i+1,i+1}+\delta_{i+1}-a_{ii}-\delta_i\right]^2\right).$$
Also, doing a Taylor series expansion of $u\!\left(a_{i+1,j},\frac12+\frac{i}{2N}\right)$ around $a_{ij}$, we have
$$u\!\left(a_{i+1,j},\tfrac12+\tfrac{i}{2N}\right) = u\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right) + u_a\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right)\left[a_{i+1,j}-a_{ij}\right] + O\!\left(\left[a_{i+1,j}-a_{ij}\right]^2\right).$$
Therefore, one can write the indifference condition as
$$0 = \frac{1}{2N}\,u_a\!\left(a_{ii}+\delta_i,\tfrac12+\tfrac{i}{2N}\right)\left[(\delta_i-\delta_{i+1})+(a_{ii}-a_{i+1,i+1})\right] + \frac{1}{2N}\,O\!\left(\left[a_{i+1,i+1}+\delta_{i+1}-a_{ii}-\delta_i\right]^2\right)$$
$$\qquad - \frac{1}{2N}\left[\sum_{j=1,\,j\ne i}^{N} u_a\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right)\left(a_{i+1,j}-a_{ij}\right) + O\!\left(\left[a_{i+1,j}-a_{ij}\right]^2\right)\right]$$
$$\qquad + \frac{1}{2N}\,u_a\!\left(a_{ii},\tfrac12+\tfrac{i}{2N}\right)\left(a_{i+1,i+1}-a_{ii}\right) + O\!\left((a_{i+1}-a_i)^2\right), \tag{Indiff}$$
where, in the last line, we have done a Taylor series expansion of $u\!\left(a_{i+1,i+1},\frac12+\frac{i}{2N}\right)$ around $a_{ii}$.

Now, assume, towards a contradiction, that expected monotonicity is violated, i.e., there is an $i$ such that, for all $N$,
$$\sum_{j=1}^{N}\tilde a_{ij} > \sum_{j=1}^{N}\tilde a_{i+1,j}.$$
This, in turn, implies that
$$\delta_i - \delta_{i+1} > \sum_{j=1}^{N}\left(a_{i+1,j}-a_{ij}\right) \ge 0.$$
Note that
$$\frac1N\left[\sum_{j=1,\,j\ne i}^{N} u_a\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right)\left(a_{i+1,j}-a_{ij}\right)\right] < \frac1N\,\min_j u_a\!\left(a_{ij},\tfrac12+\tfrac{i}{2N}\right)\sum_{j=1,\,j\ne i}^{N}\left(a_{i+1,j}-a_{ij}\right). \tag{Ineq}$$
Hence, applying (Ineq) to Equation (Indiff), we get that $0$, which equals the right-hand side of (Indiff), is strictly smaller than the expression obtained by substituting the minimum in the sum. We now show that, for large $N$, this bounding expression is itself smaller than zero, which leads to the desired contradiction. Towards this, note that for large $N$ the following are true:

1. $\frac{1}{2N}\,u_a\!\left(a_{ii}+\delta_i,\frac12+\frac{i}{2N}\right)(\delta_i-\delta_{i+1}) < 0$, since, for $N$ large, $u_a\!\left(a_{ii}+\delta_i,\frac12+\frac{i}{2N}\right) < 0$; moreover, this term is of order $O\!\left(\frac1N\right)$ when $\delta_i-\delta_{i+1}$, which is of the same order as $\sum_{j\ne i}(a_{i+1,j}-a_{ij})$, is $O(1)$.

2. $\frac{1}{2N}\,u_a\!\left(a_{ii}+\delta_i,\frac12+\frac{i}{2N}\right)(a_{ii}-a_{i+1,i+1}) = \frac{1}{2N}\,u_a\!\left(a_{ii}+\delta_i,\frac12+\frac{i}{2N}\right)\left[(a_{ii}-a_{i+1,i})+(a_{i+1,i}-a_{i+1,i+1})\right]$ is $O\!\left(\frac1N\right)$.

3. $\frac{1}{2N}\left[\min_j u_a\!\left(a_{ij},\frac12+\frac{i}{2N}\right) - u_a\!\left(a_{ii},\frac12+\frac{i}{2N}\right)\right]\sum_{j\ne i}\left(a_{i+1,j}-a_{ij}\right)$ is $O\!\left(\frac1N\right)$.

4. $\frac{1}{2N}\,u_a\!\left(a_{ii},\frac12+\frac{i}{2N}\right)\left(a_{i+1,i+1}-a_{ii}\right)$ is $O\!\left(\frac1N\right)$.

5. $\frac{1}{2N}\,O\!\left(\left[a_{i+1,i+1}+\delta_{i+1}-a_{ii}-\delta_i\right]^2\right)$ is $O\!\left(\frac1{N^2}\right)$; $\frac{1}{2N}\,O\!\left((a_{i+1}-a_i)^2\right)$ is $O\!\left(\frac1{N^2}\right)$; and $\frac{1}{2N}\sum_{j\ne i} O\!\left(\left[a_{i+1,j}-a_{ij}\right]^2\right)$ is $O\!\left(\frac1N\right)$.

Hence, there exists an $N$ large enough that the right-hand side of the inequality is negative, the desired contradiction. This implies that, for all $i$, there must be an $\tilde N$ so that, for $N>\tilde N$,
$$\sum_{j=1}^{N}\tilde a_{i+1,j} \ge \sum_{j=1}^{N}\tilde a_{ij}.$$

Using Steps 1 and 2, we have shown that, for (finite) $N > \max\{\bar N,\hat N,\tilde N\}$, there exists an IC allocation $\tilde a(\cdot,x)$ that is bounded (since, although potentially large, $N$ is finite) and has $\frac12$ off-diagonals, and which fares strictly better than the initial continuous $a(\cdot)$. In Step 3 below, we deal with the case in which $a(\cdot)$ is non-decreasing but potentially discontinuous.

Step 3: For any $k\in\mathbb{R}$, define the set
$$A(k) = \left\{ a(\theta) : \begin{array}{l} a(\theta)\ \text{non-decreasing, and such that} \\ \sup_\theta |a(\theta)| < z\ \text{for some large (finite) } z, \text{ and} \\ E_{\theta_{-i}}\!\left[u\!\left(a(\theta_i,\theta_{-i}),\theta_i\right)\right] \ge E_{\theta_{-i}}\!\left[u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] - k\ \ \forall\,\hat\theta \end{array} \right\}.$$
Similarly, define the set
$$C(k) = \left\{ a(\theta)\in A(k) : a(\cdot)\ \text{is continuous} \right\}.$$
Using Helly's selection theorem (note that any sequence $\{a_n(\theta)\}\subset A(k)$ will be such that $\sup_{n,\theta}|a_n(\theta)|\le z$, so that the sequence is uniformly bounded), one can show that, for a given $k$: (i) $A(k)$ is compact valued (in the weak topology); (ii) $A(k)$ is upper hemi-continuous.

Now define $g(k)$ as:
$$g(k) = \max_{a(\cdot)} E\left[\sum_{i=1}^{2} u\!\left(a(\theta_i,\theta_{-i}),\theta_i\right)\right] \quad \text{s.t. } a(\cdot)\in A(k).$$
Given the properties of $A(k)$ stated above, Theorem 2 of Ausubel and Deneckere (1993) can be used to show that $g(\cdot)$ is continuous. Next, define $h(k)$ as:
$$h(k) = \sup_{a(\cdot)} E\left[\sum_{i=1}^{2} u\!\left(a(\theta_i,\theta_{-i}),\theta_i\right)\right] \quad \text{s.t. } a(\cdot)\in C(k).$$
Since $C(k)\subseteq A(k)$, $h(k)\le g(k)$ for all $k$. Also, for $k'>k$, since $A(k)\subseteq A(k')$ and $C(k)\subseteq C(k')$, it follows that $g(k)\le g(k')$ and $h(k)\le h(k')$.

Lemma 9 For any $k_2>k_1$, if $a(\cdot)\in A(k_1)$, there exists a sequence of continuous functions $\{f_n(\theta)\}$ that converge pointwise to $a(\theta)$ and such that, for finite but large $n$, $f_n(\cdot)$ is in $C(k_2)$.

Proof. As $a(\theta)\in A(k_1)$,
$$E_{\theta_{-i}}\!\left[u(a(\theta),\theta_i)\right] + k_1 \ge E_{\theta_{-i}}\!\left[u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] \quad \text{for all } \hat\theta,$$
so that
$$E_{\theta_{-i}}\!\left[u(a(\theta),\theta_i)\right] + k_2 > E_{\theta_{-i}}\!\left[u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] \quad \text{for all } \hat\theta.$$
Now, take a sequence of continuous non-decreasing functions $\{f_n\}_n$ such that $f_n(\theta)\to a(\theta)$ for all $\theta$ (see Lemma 12 below). Using the continuity of $u(\cdot,\theta_i)$, the above implies that $u(f_n(\theta),\theta_i)\to u(a(\theta),\theta_i)$ for all $(\theta_i,\theta_{-i})$, and
$$u\!\left(f_n\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right) \to u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right) \quad \text{for all } \hat\theta,\theta_{-i}.$$
Since $f_n$ is continuous and defined over a compact set, it is bounded. Since $u(\cdot,\theta_i)$ is continuous, there is a $t<\infty$ such that $\left|u\!\left(f_n\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right| < t$ for all $n$. Therefore, the Dominated Convergence Theorem implies:
$$E_{\theta_{-i}}\!\left[u\!\left(f_n\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] \to E_{\theta_{-i}}\!\left[u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] \quad \text{for all } \hat\theta.$$
It then follows from Proposition 23, on page 72 of Royden (1988), that for any $\varepsilon>0$ there exists an $N(\varepsilon)$ such that, for all $n>N(\varepsilon)$,
$$E_{\theta_{-i}}\!\left[\,\left|u\!\left(f_n\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right) - u\!\left(a\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right|\,\right] < \varepsilon \quad \text{for almost all } \hat\theta,$$
and
$$E_{\theta_{-i}}\!\left[\,\left|u\!\left(f_n(\theta_i,\theta_{-i}),\theta_i\right) - u\!\left(a(\theta_i,\theta_{-i}),\theta_i\right)\right|\,\right] < \varepsilon \quad \text{for almost all } \theta_i.$$
Hence, for $k_1<k_2$, there exist an $\varepsilon$ and an $N(\varepsilon)$ such that
$$E_{\theta_{-i}}\!\left[u\!\left(f_n(\theta),\theta_i\right)\right] \ge E_{\theta_{-i}}\!\left[u\!\left(f_n\!\left(\hat\theta,\theta_{-i}\right),\theta_i\right)\right] - k_2 \quad \text{for almost all } \hat\theta.$$
Furthermore, if the IC constraints hold for almost all $\hat\theta$, then they must hold for all. Suppose, in search of a contradiction, that there were some $\theta_i$ which would benefit by $\varepsilon$ more than $k_2$ from claiming to be some $\hat\theta\ne\theta_i$. Now, for any such $\varepsilon$, since each $f_n$ is continuous and the agent's utility function is continuous, there must exist a ball of mass $q>0$ of types around $\hat\theta$ for which the IC constraint is strictly violated by more than $\frac\varepsilon2$. Of course, this contradicts the statement that the allocation was IC for almost all $\hat\theta$. Hence, for $n$ large (but finite), $f_n\in C(k_2)$, as desired.

Lemma 10 For any $k_2>k_1$, $h(k_2)\ge g(k_1)$.

Proof. Fix a $k_2>k_1$, and pick an arbitrary $\varepsilon>0$. Let $a(\theta)$ be such that $g(k_1) = E\left[\sum_i u(a(\theta),\theta_i)\right]$. Consider a sequence of continuous functions $\{f_n\}$ that converge pointwise to $a(\cdot)$. By the Dominated Convergence Theorem, we can find $N_1$ large enough such that, if $n>N_1$,
$$E\left[\sum_{i=1}^{2} u(f_n(\theta),\theta_i)\right] > E\left[\sum_{i=1}^{2} u(a(\theta),\theta_i)\right] - \varepsilon = g(k_1) - \varepsilon.$$
Moreover, using Lemma 9 above, we can find $N_2$ large enough for which $f_n\in C(k_2)$ for all finite $n$ that are larger than $N_2$. Hence, taking $N^* = \max\{N_1,N_2\}$, for (finite) $n>N^*$,
$$h(k_2) \ge E\left[\sum_{i=1}^{2} u(f_n(\theta),\theta_i)\right] > E\left[\sum_{i=1}^{2} u(a(\theta),\theta_i)\right] - \varepsilon = g(k_1) - \varepsilon,$$
so that $h(k_2) > g(k_1) - \varepsilon$. As $\varepsilon$ was arbitrary, the claim follows.

Next we show that $h(\cdot)$ is continuous. Since $h(\cdot)$ is increasing, if it were discontinuous there would be a $\bar k$ such that
$$\lim_{n\to\infty} h\!\left(\bar k - \tfrac1n\right) < \lim_{n\to\infty} h\!\left(\bar k + \tfrac1n\right).$$
Now, from Lemma 10 above, for all $n$,
$$h\!\left(\bar k - \tfrac1n\right) \ge g\!\left(\bar k - \tfrac2n\right).$$
Moreover,
$$g\!\left(\bar k + \tfrac2n\right) \ge h\!\left(\bar k + \tfrac1n\right).$$
Hence, one would have
$$\lim_n g\!\left(\bar k + \tfrac2n\right) \ge \lim_n h\!\left(\bar k + \tfrac1n\right) > \lim_n h\!\left(\bar k - \tfrac1n\right) \ge \lim_n g\!\left(\bar k - \tfrac2n\right),$$
which would contradict the continuity of $g(\cdot)$. Hence, $h(\cdot)$ must be continuous.

Now, we can finally establish that $h(0) = g(0)$. So far, we have shown that, for any $k>0$, we have:
$$g(k) \ge h(k) \ge g(0),$$
where the first inequality follows from the definitions of $g$ and $h$, and the second inequality follows from Lemma 10. Finally, taking the limits as $k\to0$, and using the continuity of $g(\cdot)$ and $h(\cdot)$,
$$g(0) \ge h(0) \ge g(0).$$
Hence, showing that we can achieve a strict improvement over any continuous allocation is sufficient, since allowing the original set of allocations to be discontinuous does not allow us to do strictly better, which is what $h(0)=g(0)$ implies. Note, though, that to be able to actually attain the optimal value we might need to use a discontinuous allocation.

We now show that, for any non-decreasing function $g:[0,1]^2\to\mathbb{R}$, we can find a sequence of continuous non-decreasing functions $\{g_m\}_m$ which converge pointwise to $g(\cdot)$. In order to do so, we first use the following result, whose proof can be found in Rosenlicht (1968, pages 237 and 238).

Lemma 11 For a given $N$, consider the partition of $[0,1]$ given by $\{A_i\}_i$, where $A_i = \left(\frac12+\frac{i}{2N},\,\frac12+\frac{i+1}{2N}\right]$ whenever $i\in\{-N,\dots,N-2\}$, and $A_{N-1} = \left(\frac12+\frac{N-1}{2N},\,1\right]$. Consider a function $g:[0,1]^2\to\mathbb{R}$ of the following form:
$$g(\theta) = c_{ij}\in\mathbb{R} \quad \text{whenever } \theta\in A_i\times A_j,$$
with $c_{i+1,j+1}\ge c_{i+1,j}\ge c_{ij}$. That is, $g(\cdot)$ is a non-decreasing simple function. One can then find a sequence of non-decreasing continuous functions that converge to $g(\cdot)$ pointwise.

We can now prove

Lemma 12 Let $g:[0,1]^2\to\mathbb{R}$ be a non-decreasing function. There exists a sequence of continuous non-decreasing functions that converge pointwise to $g(\cdot)$.

Proof. For a given $N$, consider the partition $\{A_i\}_i$ of $[0,1]$ defined in Lemma 11. Define $g_N$ as follows:
$$g_N(\theta) = E\left[g(\theta)\mid\theta\in A_i\times A_j\right].$$
Clearly, for all $\theta$,
$$\left|g_N(\theta) - g(\theta)\right| \to 0 \quad \text{as } N\to\infty.$$
Now, fixing an $N$, one has that $g_N(\cdot)$ is a non-decreasing simple function. Hence, by Lemma 11, one can find, for each $N$, a sequence of continuous non-decreasing functions $\{g_N^m(\cdot)\}_m$ so that, for all $\theta$,
$$\left|g_N^m(\theta) - g_N(\theta)\right| \to 0 \quad \text{as } m\to\infty.$$
Since
$$\left|g_N^m(\theta) - g(\theta)\right| \le \left|g_N^m(\theta) - g_N(\theta)\right| + \left|g_N(\theta) - g(\theta)\right|,$$
we have that, for all $\theta$,
$$\left|g_N^m(\theta) - g(\theta)\right| \to 0 \quad \text{as } m,N\to\infty.$$
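In one dimension, the second stage of this approximation (Lemma 11) is easy to visualize: a non-decreasing step function is matched pointwise by continuous non-decreasing functions. The ramp construction below is our own illustrative stand-in for Rosenlicht's argument: each jump is replaced by a linear ramp whose width shrinks to zero.

```python
def step(t):                       # a non-decreasing simple function on [0, 1]
    return 0.0 if t < 0.5 else 1.0

def smooth(t, m):                  # continuous and non-decreasing in t:
    # ramps linearly from 0 to 1 on [0.5 - 1/m, 0.5], so it agrees with the
    # right-continuous step at the jump point itself
    return min(1.0, max(0.0, m * (t - 0.5) + 1.0))

for t in (0.2, 0.45, 0.5, 0.7):    # pointwise convergence as m grows
    errs = [abs(smooth(t, m) - step(t)) for m in (2, 8, 32)]
    assert errs[0] >= errs[1] >= errs[2]
    assert errs[2] < 0.2
```

Convergence is pointwise but not uniform: just to the left of the jump the error stays close to 1 for every finite $m$, which is exactly why the limiting arguments in Step 3 work with expectations rather than sup norms.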

Proof of Proposition 2. From Milgrom and Segal (2002), and the fact that the players' payoffs satisfy a single-crossing condition, it follows, using standard arguments (exactly the same as those in Lemma 7), that
$$\theta_1 \in \arg\max_{\tilde\theta_1 < \frac{\underline\theta+\bar\theta}{2}} E_{\theta_2,x}\!\left[u\!\left(a\!\left(\tilde\theta_1,\theta_2,x\right),\theta_1\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right]$$
is equivalent to
$$E_{\theta_2,x}\!\left[u\!\left(a(\theta_1,\theta_2,x),\theta_1\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right] = E_{\theta_2,x}\!\left[u\!\left(a\!\left(\tfrac{\underline\theta+\bar\theta}{2},\theta_2,x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right] - \int_{\theta_1}^{\frac{\underline\theta+\bar\theta}{2}} E_{\theta_2,x}\!\left[u_{\theta_1}\!\left(a(\tau,\theta_2,x),\tau\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right]d\tau \tag{ICLocal1}$$
with $E_{\theta_2,x}\!\left[u_{\theta_1}\!\left(a(\theta_1,\theta_2,x),\theta_1\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right]$ non-decreasing in $\theta_1$; whereas
$$\theta_2 \in \arg\max_{\tilde\theta_2 > \frac{\underline\theta+\bar\theta}{2}} E_{\theta_1,x}\!\left[u\!\left(a\!\left(\theta_1,\tilde\theta_2,x\right),\theta_2\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right]$$
is equivalent to
$$E_{\theta_1,x}\!\left[u\!\left(a(\theta_1,\theta_2,x),\theta_2\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right] = E_{\theta_1,x}\!\left[u\!\left(a\!\left(\theta_1,\tfrac{\underline\theta+\bar\theta}{2},x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right] + \int_{\frac{\underline\theta+\bar\theta}{2}}^{\theta_2} E_{\theta_1,x}\!\left[u_{\theta_2}\!\left(a(\theta_1,\tau,x),\tau\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right]d\tau \tag{ICLocal2}$$
with $E_{\theta_1,x}\!\left[u_{\theta_2}\!\left(a(\theta_1,\theta_2,x),\theta_2\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right]$ non-decreasing in $\theta_2$.

Integration of ICLocal1 and ICLocal2 by parts allows us to write the objective as
$$E_{\theta_2,x}\!\left[u\!\left(a\!\left(\tfrac{\underline\theta+\bar\theta}{2},\theta_2,x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right] - E_{\theta,x}\!\left[u_{\theta_1}\!\left(a(\theta_1,\theta_2,x),\theta_1\right)\left[\theta_1-\underline\theta\right]\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right]$$
$$+\ E_{\theta_1,x}\!\left[u\!\left(a\!\left(\theta_1,\tfrac{\underline\theta+\bar\theta}{2},x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right] + E_{\theta,x}\!\left[u_{\theta_2}\!\left(a(\theta_1,\theta_2,x),\theta_2\right)\left[\bar\theta-\theta_2\right]\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right].$$
As we show in the First Step of Lemma (1/2 off-diagonals), for all non-decreasing $a(\theta_1,\theta_2,x)$, each of the two terms in $u_{\theta_1}$ and $u_{\theta_2}$ is weakly improved by replacing $a$ over the region $\left\{\theta_1<\frac{\underline\theta+\bar\theta}{2}<\theta_2\right\}$ with its conditional expectation over that region. Hence, it follows that both terms are maximized if one picks a constant. The best constant over this region is $\frac{\underline\theta+\bar\theta}{2}$. Such a constant also maximizes the terms
$$E_{\theta_2,x}\!\left[u\!\left(a\!\left(\tfrac{\underline\theta+\bar\theta}{2},\theta_2,x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \tfrac{\underline\theta+\bar\theta}{2}<\theta_2\right] \quad \text{and} \quad E_{\theta_1,x}\!\left[u\!\left(a\!\left(\theta_1,\tfrac{\underline\theta+\bar\theta}{2},x\right),\tfrac{\underline\theta+\bar\theta}{2}\right)\,\Big|\ \theta_1<\tfrac{\underline\theta+\bar\theta}{2}\right]$$
of the objective. Hence, setting $a(\theta) = \frac{\underline\theta+\bar\theta}{2}$ for $\theta_1<\frac{\underline\theta+\bar\theta}{2}<\theta_2$ is optimal, as claimed.
