Mediation in reputational bargaining Jack Fanning∗ January 26, 2018

Abstract This paper investigates the potential for mediation in a dynamic reputational bargaining model, where rational agents imitate behavioral types. Agents voluntarily communicate with a mediator and are free to ignore any communication received. I first show that an intuitive form of information filtration highlighted by practitioners, suggesting a compromise whenever both parties privately accept its terms, fails to improve on unmediated outcomes. Adding noise to this protocol, however, can improve payoffs if behavioral types are unlikely. My main result characterizes an optimal symmetric mediation protocol. While this can always improve payoffs when agents are risk averse, this is still only possible for risk neutral agents when rational agents are unlikely. A key feature is that the mediator is needed not only to bring rational agents together in a compromise, but also to inform them when to concede to a committed opponent’s demand. Finally, I compare mediation to a mechanism design benchmark, which can impose outcomes. If rational agents are unlikely, that benchmark is efficient. Keywords: Bargaining, reputation, behavioral types, mediation, delay

1

Introduction

In its broadest definition, mediation refers to any instance in which a third party helps others reach a voluntary agreement. It is distinct from arbitration, which imposes an agreement. Mediation is widely used to help parties to resolve disputes ranging from international conflicts and industrial relations to divorce proceedings. For instance, Dixon (1996) finds that mediation occurred in 13% of dispute phases of international conflicts between 1947-1982.1 In legal disputes, mediation is an increasingly popular form of Alternative Dispute Resolution (ADR). In a survey of general counsel for Fortune 1000 companies, Stipanowich and Lamare (2013) find that less than 15% of companies “rarely” or “never” use mediation in every category of ∗

Brown University. Email: jack [email protected]. Address: Department of Economics, Robinson Hall, 64 Waterman Street, Brown University, Providence, RI 02912. Website: https : \\sites.google.com\a\brown.edu\ j f anning. 1 These dispute phases are distinguished by the level of conflict between parties (e.g. threats of hostilities, open hostilities).

1

dispute.2 Use of mediation increased in every category compared to a 1997 survey. Attesting to mediations benefit’s, Dixon found that mediated disputes were 47% less likely to escalate and 24% more likely to peacefully resolve compared to disputes with no conflict management. More convincing evidence comes from Emery et al. (1991), who find that a treatment group, randomly selected to receive mediation services, settled 89% of contested custody cases out of court, compared to 28% of a control group. Mediation also halved the time spent reaching agreement and increased parties’ satisfaction with the outcome. Why might mediation help? Veteran mediator and Former Secretary of State for Labor, John Dunlop, describes the difficulties of “end-play” negotiations and the benefit of mediation as follows: “The critical problem is that each side would prefer the other to move to avoid a further concession itself, and that any move may create the impression of being willing to move all the way to the position of the other side... In these circumstances a third party may greatly facilitate agreement. The separate conditional acceptance to the mediator by one side of the proposal does not prejudice the position of that side if there is no agreement. It is not unusual for a mediator to secure the separate acceptance of each side of a “package” of the mediator’s design and then to bring the parties together to announce that, even if they do not know it, they have an agreement.”3 The claim is that mediators help by filtering information. Agents may resist proposing a compromise themselves for fear of being identified as a weak type, who is willing to concede entirely to her opponent’s demand. By filtering the information that an agent is willing compromise (releasing it only when an opponent is also willing to compromise), the mediator can eliminate this fear, and so encourage agreement. This paper seeks to use economic theory to help understand why and when such mediation techniques can be effective. I do this in the context of the reputational bargaining model of Abreu and Gul (2000) (henceforth AG). In AG’s model, two agents must divide a dollar. They can make frequent offers over the course of an infinite horizon. With positive probability an agent is a behavioral type, who is committed to demanding a fixed share αi ∈ (0, 1) of the dollar and accepting nothing less, otherwise the agent is rational. In equilibrium, an agent identified as rational must concede immediately to a possibly behavioral opponent’s demand. Given this, rational agents must imitate behavioral types and bargaining resembles a war of attrition, with inefficient delay. This reputational model captures many of the difficulties of unmediated negotiations. In particular, agreement is often delayed and negotiators’ fear that small concessions will necessitate larger ones is well justified. An important advantage of considering reputation as opposed to other forms of incomplete information in bargaining is the uniqueness of the equilibrium, absent mediation. This provides a clean benchmark against which to asses mediation’s bene2 3

By contrast arbitration was “rarely” or “never” used in at least 49% for each category. Dunlop (1984), p16-24).

2

fits. Typically, two sided incomplete information dynamic bargaining models exhibit multiple equilibria, with hard to characterize equilibrium payoff sets. Of course, however, reputational bargaining may be an imperfect description of many disputes in which mediation is used.4 I model mediation as follows: each agent can privately “confess” to a mediator that she is willing to accept less than her behavioral demand (i.e. that she is rational). The mediator can suggest an agreement based on this information, which the parties are free to implement (or not). The mediator has no preferences, but rather follows an exogenous protocol. I initially investigate a simple mediation protocol that is very close Dunlop’s description. A mediator promotes a specific compromise division and immediately suggests an agreement on those terms if and only if both parties indicate a willingness to accept them. This leads to a negative result: mediation always fails. Neither party is willing to confess and play follows the unique equilibrium of AG. The reason why this simple mediation protocol fails is that information is still released even when the mediator doesn’t announce an agreement: an agent who has confessed learns that her opponent is more likely to be behavioral. This increases her incentive to concede, and that concession destroys her opponent’s incentive to compromise in the first place. Nonetheless, the finding that mediation has no effect is still surprising. It suggests that mediators must not be too eager to bring about (a compromise) agreement. The fact that mediators are typically paid hourly rate, rather than being explicitly incentivized to reach early agreement, is seemingly consistent with this finding. (ADD CITATION) A feature of previous work on mediation in non-bargaining contexts, is the importance not just of information filtration, but also of noise (e.g. see Horner et al). The paper’s first positive result is to show that even adding a small amount of noise Dunlop’s simple protocol, can be effective. In particular, I allow the mediator to fail to announce a deal with positive probability (possibly close to one) even when both parties confess. This noise can also be interpreted as agents’ messages sometimes going astray, or being misinterpreted by the mediator. Such mediation can bring about Pareto improvements over AG’s equilibrium whenever behavioral types are sufficiently unlikely. Intuitively, the noise makes agents less pessimistic about an opponent’s type if no deal is announced, reducing the incentive to subsequently concede. The above results clearly beg the question: what is the optimal mediation protocol? I address this within the context of a somewhat restricted, symmetric bargaining problem. The mediator can follow any distribution of suggested agreements times between two rational agents, and a different distribution when there is only one rational agent. Her objective is to maximize rational agents’ payoffs, subject to a type incentive constraint (rational agents reveal themselves 4

Pre-trial negotiations (highlighted above as an important setting for mediation) typically don’t have an infinite horizon, rather, agents must agree before a deadline (the trial). However, Fanning (2016) shows that the infinite horizon and deadline models do not differ substantively if there is even slight uncertainty about the deadline’s timing (the last time at which agents can strike a deal).

3

to the mediator) and an dynamic incentive constraint (rational agents don’t concede unless the mediator tells them too). A first finding in this more general setting is that mediation always offers Pareto improvements whenever agents are risk averse. In AG’s (unmediated) war of attrition rational agents sometimes concede to rational opponents and sometimes are conceded to. Simply replacing these extreme agreements with compromises, even while keeping the same agreement times’ distribution, strictly increases risk averse payoffs. This result generalizes to a non-symmetric setting. More surprisingly, if agents are risk neutral, then mediation still cannot improve on AG’s equilibrium if behavioral types are sufficiently likely (I identify an exact cutoff). This is because the mediator cannot significantly delay the concession of a rational agent to a behavioral opponent (while satisfying the dynamic incentive constraint), because the potential benefit of not conceding (a compromise with a rational opponent) is very unlikely. But in which case a rational agent has a strong incentive not to confess (she can then receive a rational opponent’s concession with minimal delay, and then concede herself). Beyond this, I show that the optimal agreement time distributions possess intuitive and interesting features. First, an intuitive feature is that the distribution of agreements between two rational agents is front-loaded as much as possible. There is an atom at time zero, after which agreements are continuously distributed so that rational agents are indifferent to conceding. This is an immediate implication of the dynamic incentive constraint binding whenever the mediator has not suggested an agreement. More interestingly, agreements between a rational agent and a behavioral type are not backloaded as much as possible (in the sense of having a probability one atom at some distant time). The optimal distribution features an atom at some time greater than zero (strictly greater when behavioral types are unlikely) which is followed by a continuous distribution, which leaves a rational agent (who didn’t confess) indifferent to conceding. An important implication is that the mediator must help rational agents back down (concede) against behavioral opponents at the right time. She cannot solely be involved in brokering compromises. Finally, I compare this optimal symmetric mediation protocol to a mechanism design benchmark, which can impose outcomes and so lacks dynamic incentive constraints. If rational agents are unlikely this benchmark is efficient, with the designer losing nothing as a result of incomplete information. This highlights how important voluntary agreement is for constraining the mediator, compared to the informational problem alone. In fact, it is the mediator’s ability to enforce (perpetual) disagreement between two reported behavioral types which is key to this efficiency. I argue, however, that it may be hard to enforce such disagreements. This benchmark does not appear to be a close fit with arbitration, which always imposes an agreement (seemingly some division of the surplus) on parties rather than disagreement. Im4

posing a division of the surplus on two behavioral agents, however, is highly problematic, as such an agreement is worse than perpetual disagreement for one of those types by assumption. It may, therefore, be very hard to convince such types to volunteer for arbitration. Mediation, by contrast is inherently voluntary. This may help explain the increased use of mediation compared to arbitration documented in Stipanowich and Lamare (2013). In addition to the role for mediators I consider here, mediation has many other reputed benefits which I do not address. These include the acknowledgement of each side’s grievances by a neutral party, the creation of a less confrontational atmosphere for negotiation, and a mediator’s ability to establish commonly accepted facts. The paper is arranged as follows. Section 2 outlines the model; Section 3 highlights the unique equilibrium without mediation from AG; Section 4.1 considers simple mediation protocols along the lines suggested by Dunlop; Section 5 identifies an optimal mediation protocol and a mechanism design benchmark; Section 6 discusses the results in relation to the existing literature.

2

The model

The model presented below encompass all the mediation protocols I consider in a consistent way. The setup adapts the the discrete continuous time bargaining protocol advanced by Abreu and Pearce (2007), although for much of the analysis time can be treated as completely continuous. I discuss in Section 6 how results can be generalized to different bargaining protocols. Two bargainers, i = 1, 2, must agree on how to divide a dollar and face an infinite horizon. Bargainers are either rational or behavioral types. If rational bargainer i obtains a share αi ∈ [0, 1] of the dollar at time t then her utility is e−ri t ui (αi ) where her discount factor is ri and the twice continuously differentiable utility function ui satisfies ui (0) = 0, u0i (αi ) > 0 and u00i (αi ) ≤ 0.5 Behavioral types have no preferences, but mechanically implement an exogenously defined strategy. A third player is a mediator, i = 3, who is always a behavioral type, with a fixed strategy. Time is discrete-continuous to allow multiple events to occur at the same time in a sequential order. Each time t ∈ [0, ∞) is divided into five different discrete times t1 , t2 , t3 , t4 , t5 . Time follows a lexicographic ordering so that tk < tk+1 , and tk < sl whenever t < s. The set of discrete continuous times is DC = [0, ∞) × {1, 2, ..., 5} ∪ ∞. There is no discounting of payoffs within each time t. The bargaining protocol is as follows: at time 01 each bargainer i simultaneously announces a demand αi (01 ) ∈ [0, 1]; at t1 > 01 each bargainer can concede to her opponent’s existing demand (accept the share (1 − α j (t1 ))), ending the game; at any t2 each bargainer 5

Results generalize to any bargaining situation with a concave positive utility possibility frontier.

5

can privately send a message to the mediator indicating that she is rational (and so willing to compromise); at t3 the mediator can publicly send a message (m1 (t3 ), m2 (t3 )) where m1 (t3 ) = 1−m2 (t3 ) ∈ [0, 1] (this could communicate anything, but will be taken as the suggested division of the dollar); at t4 each bargainer can simultaneously change her demand to αi (t4 ); at t5 each bargainer can concede to her opponent’s (possibly new) existing demand. If both bargainers concede at the same time then each proposal is selected with probability 12 . I say that an agent who sends a message to the mediator confesses, and is a confessing agent, otherwise she is a non-confessing agent. At every tk > 01 each bargainer is associated with an existing demand. If bargainer i changes her demand at t4 then she cannot change her demand again until time (t + ∆)4 for some ∆ > 0. That is, if αi (t3 ) , αi (t4 ) then i’s existing demand is αi (sk ) = αi (t4 ) for all sk < (t + ∆)4 . A bargainer can only send a message to the mediator once. The mediator can only propose a compromise once. These restrictions mean that agents’ bargaining environments are relatively stable, allowing strategies to be more easily defined. Agent i is a behavioral type with probability zi ∈ (0, 1), and is otherwise or rational. A behavioral type for agent i initially demands a share αi (01 ) = αi ∈ (0, 1) and never changes this.6 She concedes to her opponent’s demand at tk if and only if (1 − α j (tk )) ≥ αi . She never sends a message to the mediator indicating rationality. The behavioral demands of the two agents are incompatible, α1 + α2 > 1. The intuitive description of the game above does not define an explicit extensive form. I do that below using stopping times. This illustrates that an (effectively) continuous time game form need not create insurmountable technical challenges, without going into exhaustive detail. In later sections, I simplify the game considerably, both by specifying a particular mediator strategy, and by using well known reputational bargaining results. At each non-terminal private history for agent i (information set), she chooses an action plan ai (hi ). An action plan ai (hi ) = (τi (hi ), xi (hi ), i) for agent i consists of three parts: a future time to take action, τi (hi ); an action to take at that time, xi (hi ); and a marker for agent i. She can plan to never take a future action by setting τi (hi ) = ∞. Let any actions planned for t1 , t2 , t5 , ∞ be denoted ai (hi ) = X: these are unambiguous (agents plan either to concede, message the mediator, or do nothing), but actions at 01 , t3 , t4 must additionally specify a dollar division. A private history for agent i is then composed of a finite sequence of action plans which she has observed (of herself and others). Which private histories are ultimately realized is determined as follows. The initial realized private history is the null set, h1i = ∅. Subsequent realized private histories, hk+1 (where k ∈ N ∪ i k k k k ∞), are determined by the joint realized history h = (h1 , h2 , h3 ) and agents’ action plans at those 6

In AG, agents can imitate multiple behavioral types and announce demands sequentially. Many of the results can be extended to that setting.

6

realized private histories, a j (hkj ). Let the time of the first action planned given hk be τ˜ (hk ) = min{τ1 (hki ), τ2 (hki ), τ3 (hki )}. Given hk , let the set of players whose actions i observes at τ˜ (hk ) be Ji (hk ). If Ji (hk ) = ∅ then hk+1 = hki . If Ji (hk ) = {1, 2} then hk+1 = hki × (ai (hki ), a1 (hk1 ), a2 (hk2 )). If i i Ji (hk ) = { j} then hk+1 = hki × (ai (hki ), a j (hkj )). The game ends if ever τ˜ (hk ) ∈ {t1 , t5 } (so that one i player concedes to a well defined existing demand) or else there is no agreement (players get a zero payoff). An example will help clarify this structure. At time 01 , bargainers declare their initial (behavioral) demands, so bargainer i’s action plan at h1i = ∅ is ai (h1i ) = (τi (h1i ), xi (h1i ), i) where τi (∅) = 01 , xi (∅) = αi . The mediator intends to suggest no agreement unless he receives messages from both agents, so that a3 (∅) = (∞, X, 3). The minimum time τ˜ (h1 ) in the joint realized history h1 = (∅, ∅, ∅) is therefore 01 . As all players observe these demand announcements, the next realized private history for player i is h2i = h1i × (a1 (h11 ), a2 (h12 ), ai (h1i )). Given h1 , agent 1 plans to message the mediator at 02 , a1 (h21 ) = (02 , X, 1), agent 2 plans to concede at time t5 , a2 (h21 ) = (t5 , X, 2), and the mediator plans to say nothing, a3 (h21 ) = (∞, X, 3). This means τ˜ (h2 ) = 02 . Agent 1’s action at 02 is observed by the mediator and herself but not player 2 so that h31 = h21 × a1 (h21 ), h33 = h23 × (a1 (h21 ), a3 (h23 )), and h32 = h22 . Given h31 , agent 1 plans to change her demand to the entire dollar at s4 > t5 , a1 (h31 ) = (s5 , 1, 1), the mediator plans to say nothing a3 (h31 ) = (∞, X, 3). And so, τ˜ (h2 ) = t5 , at which point the game ends with player 2 conceding to player 1’s existing (behavioral) demand α1 . A pure strategy for agent i must specify an action plan for each possible private history. A behavior strategy randomizes between action plans. Let the set of possible joint realized histories be H and the set of possible private histories be Hi . A belief system for bargainer i, µi , specifies two things for each private history, a belief about her opponent’s type µTi (hi ) ∈ [0, 1], and a belief about the joint history, µiJ (hi ) ∈ ∆(H). Clearly, it is necessary to put restrictions on these beliefs, in particular agents should not believe in joint histories that are inconsistent with hi or that must be in the future.7 Let the minimum time of an observed action plan in private history hi , be τˇ (hi ). A (weak) perfect Bayesian equilibrium requires that at each of her private history’s, hi , bargainer i’s strategy maximizes her time τˇ (hi ) continuation payoff, given the strategies of others and her beliefs, where beliefs are determined by Bayes rule where possible. Although it has been useful for the game’s description to describe the mediator as an agent, henceforth, I use the term agent to refer only to one of the two bargainers. 7

For example, at h22 agent 2 last observed action was at 01 , so she should put zero probability on h3 which includes action plans dating from 01 . Effectively, if h0 extends h which is consistent with hi , then µiJ (hi ) should put zero weight on h0 .

7

3

A Baseline Without Mediation

In this section we consider a Baseline version of the model without mediation (i.e. the mediator makes no announcements, so we can ignore time t2 and t3 ). In this case, the results of AG imply that if agent i is revealed to be rational at time tk in equilibrium (i.e. just after i makes a nonbehavioral demand), while agent j may be behavioral, then i must immediately concede. This mirrors the logic of the Coase conjecture (Coase (1972)), one-sided incomplete information implies an immediate agreement favourable to the informed party. Given AG’s result, it is without loss of generality to assume that rational agents always imitate behavioral types and then simply choose when to concede. We can, therefore, move to fully continuous time and simply describe agent j’s strategy with a distribution function F j , with domain [0, ∞) ∪ ∞. Let F j (t) be the total probability that agent j has conceded before time t (not the probability that rational agent j has conceded before time t). This implies that agent z j’s reputation (for being behavioral) at t is z¯ j (t) := 1−Fjj (t) . Given agent j’s strategy, agent i’s expected payoff to conceding at t is:8 Ui (t) =

Z

e−ri s ui (αi )dF j (s) + (1 − F j (t))e−ri t ui (1 − α j ) s
Analysis The unique equilibrium of this model is characterized by three properties: (i) at most one agent concedes with with positive probability at time zero; (ii) both agents reach a probability one reputation at the same time, T ∗ < ∞; and (iii) agents are indifferent to conceding at any time on (0, T ∗ ]. This third indifference condition implies that agent j must concede on the interval (0, T ∗ ] at rate: f j (t) ri ui (1 − α j ) = λ j := (1) 1 − F j (t) ui (αi ) − ui (1 − α j ) This implies that 1 − F j (t) = (1 − F j (0))e−λ j t . Next define rational agent j’s exhaustion time, T j , as the time by which she must have conceded even if she did not concede at time zero. This must satisfy e−λ j T j = z j . Condition (i) and (ii) then imply T ∗ = min{T 1 , T 2 }, and finally: λ jT ∗

1 − F j (0) = z j e

(

−λ j λi

)

= min 1, z j zi

(2)

Proposition 1 (AG, Proposition 1). The Baseline model has a unique equilibrium, characterized by equations (1) and (2). 8

Here and elsewhere, I suppress the explicit dependence of payoffs on strategies to minimize notation.

8

The fact that T ∗ > 0 implies that rational agents sometimes reach agreement only after an inefficient delay. This offers scope for mediation to improve outcomes. Payoffs in this Baseline are: UiB = ui (αi )F j (0) + ui (1 − α j )(1 − F j (0)) (3)

4

Simple mediation protocols

In this section, I consider three simple protocols (strategies) for the mediator, motivated by Dunlop. In the first two protocols, the mediator seeks a specific compromise, and immediately suggests that agreement if and only if both agents provisionally accept it. In the first protocol agents can confess only at time zero, whereas in the second they can confess at any time (these are not nested). The third protocol adds noise to the first protocol: the mediator sometimes fails to announce an agreement even when both agents confess.

4.1

Immediate One-shot (I1) mediation

In the Immediate One-shot (I1) mediation protocol agents can only confess at 02 . If both do so then at time 03 the mediator suggests a division (m1 , m2 ). Following such a joint revelation of rationality, any dollar division or even perpetual delay is consistent with sequential rationality (e.g. agent i changes her demand to αi (04 ) ∈ [0, 1] and subsequently doesn’t concedes unless j offers her more than that). Nonetheless, it is without loss of generality to assume that agents follow the mediator’s suggestion given the eventual negative result.9 Given this, we can again simplify to a continuous time framework. Let ci ∈ [0, 1] be the probability that agent i confesses at 02 (not the probability that a rational agent i confesses). If both agents confess then agent i obtains the payoff ui (mi ), otherwise she must choose when to concede. Agent i’s concession choice is described by two distribution functions Fic and Fin . Let Fic (t) be the probability that agent i has conceded to her opponent before time t conditional on her confessing and no mediator suggestion. Similarly, let Fin (t) be the probability that agent i has conceded before time t in the war of attrition, conditional on her not confessing. Finally, let Fi (t) = ci Fic (t) + (1 − ci )Fin (t) be the probability that agent i has conceded by time t conditional on no mediator suggestion. I have not included subscripts or superscripts on Fi , however, it may be distinct from the function used to describe the Baseline model’s equilibrium. Note that while the Fi was also used to describe the Baseline equilibrium, the functions are in principal slightly differently. While, I do not add extra notation to distinguish these settings, there should be little scope for confusion. Agent i’s strategy can be 9

Expected continuation payoffs must be weakly below those associated with some mediator proposal. I show that even the prospect of those higher payoffs cannot incentivize joint confession.

9

summarized by σi = (ci , Fic , Fin ).10 If agent j adopts a strategy σ j then rational agent i’s utility from confessing and then conceding at time t if the mediated makes no suggestion, is: Uic (t)

Z

e−ri s ui (αi )dF nj (s) + (1 − F nj (t))e−ri t ui (1 − α j ) s
=c j ui (mi ) + (1 − c j )

Alternatively, rational i’s utility if she does not confess and concedes at time t is: Uin (t)

=

Z

e−ri s ui (αi )dF j (s) + (1 − F j (t))e−ri t ui (1 − α j ) s
Analysis It is clear that the Baseline model’s equilibrium can still be an equilibrium here, indeed this is the case in all mediation protocols considered. If agent j does not confess with positive probability then agent i has no incentive to do so either. It is also clear that there can be no equilibrium in which rational agent i always confesses and j does so with positive probability. If there was, then agent j would learn for sure that i was behavioral if the mediator made no announcement, and so would subsequently immediately. Knowing this, a rational agent i would optimally choose not to confess unless mi ≥ αi . But if m j ≤ 1 − αi and ci > 0 then U cj (t) < U nj (t) for all t. Any equilibrium with mediation, therefore, must involve rational agents mixing between compromising and not compromising. The next proposition says that there is no such equilibrium. Proposition 2. The distribution of outcomes in any equilibrium of the I1 protocol is identical to that in the unique Baseline equilibrium without mediation. The explanation for this result is similar to why it is impossible for rational agents to always confess. I prove that if the mediator does not suggest an agreement (at 03 ), then at least one confessing agent, say j, must immediately concede with probability one (F cj (0) = 1). I loosely sketch the proof of that claim below. Such concession destroys the incentive for her opponent to confess in the first place. 10

There are potentially relevant, non-degenerate higher order beliefs in this game. If agent i confessed (but j zj didn’t) then at time t, i believes j is behavioral with probability zcj (t) := (1−c j )(1−F n (t)) . If i did not confess, then at time t she believes j is behavioral with probability znj (t) :=

zj 1−F j (t) .

j

If j did not confess, then she believes that i

beliefs about her likelihood of being behavioral are zcj (t) with probability

10

ci (1−Fic (t)) 1−Fi (t)

and znj (t) otherwise.

Suppose neither confessing agent immediately concedes with probability one (F cj (0) < 1). Standard arguments show that concession behavior after time zero must be continuous. I then show that if a confessing agent i concedes continuously on some interval (s, t), then so must a non-confessing agent i. For this to be the case a non-confessing agent j must concede at f n (t) rate 1−Fj n (t) = λ j (to make confessing agent i indifferent regarding her concession) and the j

total concession for agent j must be at rate

f j (t) 1−F j (t)

= λ j (to make non-confessing but rational i f c (t)

indifferent). In turn that implies a confessing agent j must concede at rate 1−Fj c (t) = λ j . But such j a bounded concession rate would imply that a (rational) confessing agent j never concedes with probability one in finite time. Such behavior cannot be optimal for a rational agent, given the possibility of a behavioral opponent.

4.2

Immediate Infinite (I∞) mediation

One concern about the negative result for the I1 protocol is that mediation is discontinuous, happening once and for all at time zero. It might be thought that allowing agents to confess continuously over time might allow for greater success. After all, in practice, mediators often hold multiple conferences with disputing parties before helping them reach a settlement. I allow for such a mediator strategy in what I call it the Immediate Infinite (I∞) mediation protocol. Under this protocol, agents can confess at any t2 with t ∈ [0, ∞). If agent i confesses at time t2 and agent j confesses at s2 ≥ t2 , then at s3 the mediator suggests the agreement (m1 , m2 ). For tractability reasons, I focus on what I call I∞ equilibria in which rational agents follow the mediator’s suggestion by changing their demands to (m1 , m2 ) at t4 . Focussing on such equilibria entails some loss of generality because this imposes constant continuation payoffs following an announcement by the mediator (we could imagine that such payoffs change over time and depend on which agent compromised first). Given this focus, it is without loss of generality to assume mi ∈ (1 − α j , αi ) because if mi ≥ αi then rational i will confess with probability one at 02 (as this can only increase her payoff) causing the game to quickly unravel into a standard war of attrition. In an I∞ equilibrium, agent i’s strategy reduces to choosing a time to confess and a time to concede (to her opponent’s behavioral demand). It is without loss of generality to assume that an agent never concedes at t1 but only at t5 , and confesses before she concedes (because doing so strictly increases her payoff whenever it affects the game’s outcome). We can again, therefore, analyze the game in continuous time. Agent i’s strategy is described by two distribution functions, σi = (Fic , Fid ). Let Fic (t) be the probability that agent i has confessed before time t, and Fid (t) be the total probability that agent i has conceded before time t (c=confessed, d=defeated) where Fic (t) ≥ Fid (t). If agent j adopts strategy σ j then rational agent i’s utility from planning

11

to confess at time s and concede at time t ≥ s is: Z Z −ri v d Ui (s, t) = e ui (αi )dF j (v) + v
e−ri v ui (mi )dF cj (v)

v∈(s,t]

+ (1 − F cj (t))e−ri t ui (1 − α j ) + (F cj (s) − sup F dj (v))e−ri s ui (mi ) v
Analysis The Baseline equilibrium is still an I∞ equilibrium, where Fic (t) = Fid (t) for all t. Despite agents always confessing before conceding, however, the next proposition establishes that this is the only such equilibrium in this setting. Proposition 3. The distribution of outcomes in any I∞ equilibrium is identical to that in the unique Baseline equilibrium without mediation. The proof of this result is structurally similar to the proof of Proposition 2, if somewhat more involved. It shows that if ever Fic (t0 ) > Fid (t0 ), then this must be true on an non-degenerate interval [t0 , t00 ) where Fic (t00 ) = Fid (t00 ). Concession and confession behavior must be increasing and continuous on (t0 , t00 ]. The conditions for a confessing agent i (who has already confessed) and non-confessing agent (who hasn’t yet) to be indifferent regarding their concession and confession times, imply linear ODE that govern F cj and F dj . Those imply that F cj (t) − F dj (t) > 0 on (t0 , t00 ]. In turn, that implies that rational agents won’t concede in finite time, a contradiction.

4.3

Partial One-Shot (P1) mediation

In this subsection I introduce noise into the mediator’s strategy from the I1 protocol, I call it the Partial One-Shot (P1) protocol. In both the I1 and I∞ protocols, an agent who confesses immediately receives an unambiguous signal that her opponent did not confess, if the mediator makes no announcement. Adding noise, by having the mediator sometimes fail to suggest an agreement even when both parties confess, obfuscates that signal. An equivalent interpretation is that the mediator always suggests agreement when she knows that both parties confessed, but sometimes agents’ messages go astray or are misinterpreted. As in the I1 protocol agents can only confess at 02 . If both agents confess then at 03 the mediator suggests the agreement (m1 , m2 ) with probability b ∈ (0, 1), and otherwise remains silent. I focus attention on what I call P1 equilibria. A P1 equilibrium is an equilibrium of the P1 game in which rational agents always confess and subsequently implement the suggestion of the mediator. If the mediator makes no suggestion, then agents must decide when to concede. Let ci ∈ [0, 1] describe the probability that agent i confesses, where ci = 1 − zi in a P1 equilibrium. If the mediator suggests an agreement, then agent i obtains a payoff ui (mi ). Let Hic 12

be a distribution function such that Hic (t) is the probability that a rational agent i has conceded before time t conditional on confessing and the mediator making no suggestion. Similarly, let Hin (t) be the probability that a rational agent i has conceded before time t, conditional on not confessing (of course, rational agents do confess in a P1 equilibrium). Agent i’s strategy is summarized by σi = (ci , Hic , Hin ). In a P1 equilibrium, agent i’s utility if she confesses and concedes at time t is: !

Z

Uic (t)

=(1 − z j ) bui (mi ) + (1 − b) e s
ui (αi )dH cj (s)

Agent i’s utility if she does not confess and then concedes at time t is: Uin (t)

Z

  e−ri s ui (αi )dH cj (s) + (1 − z j )(1 − H cj (t)) + z j e−ri t ui (1 − α j ) s
=(1 − z j )

Analysis In a P1 equilibrium, if the mediator does not make a suggestion at 03 , then behavior in the continuation game must resemble the Baseline equilibrium but with initial reputations of z¯i = zi instead of zi . As noted previously, this equilibrium is characterized by three conditions: 1−(1−zi )b (i) at most one agent concedes with with positive probability at time zero; (ii) both agents reach a probability one reputation at the same time, T ∗ < ∞; and (iii) agents are indifferent to conceding at any time on (0, T ∗ ]. Let F j (t) = (1 − z¯ j )H cj (t) be the probability that a confessing agent i believes that j will concede before t conditional on no mediator announcement. Condition (iii) implies that agent i must f (t) expect j to concede at rate 1−Fj j (t) = λ j on (0, T ∗ ]. Agent j’s exhaustion time is now T j = − λ1j ln(¯z j ). To ensure conditions (i) and (ii) are satisfied, we must have T ∗ = min{T 1 , T 2 } and n z¯ − λλ j o z¯ (1−e−λ j t ) 1 − H cj (0) = max 1, 1−¯jz j z¯i i − 1 . For later t, we have 1 − H cj (t) = (1 − H cj (0))e−λ j t − j 1−¯z j . Given such behavior, a rational agent i who did not confess (not her equilibrium strategy) will subsequently find it in her interest to wait until T ∗ and then concede. This is because the probability that she expects j to concede before t is larger if she confessed than if she did not U n (t) (1 − z j )H cj (t) ≥ (1 − z¯ j )H cj (t), implying dti > 0 on [0, T ∗ ). Hence we must have: Ui∗n

:=

max Uin (t) t

= (1 − z j )

Z



e−ri s ui (αi )dH cj (s) + z j e−ri T ui (1 − α j ) s
13

When agent i confesses, she is subsequently indifferent to conceding at any t ∈ (0, T ∗ ] so that: Ui∗c

:=

max Uic (t) t

= (1 − z j ) bui (mi ) + (1 − b)

!

Z −ri s

e s
ui (αi )dH cj (s)



+ z j e−ri T ui (1 − α j )

A necessary and sufficient condition for a P1 equilibrium to exist therefore is: U ∗c − Ui∗n Qi := i = ui (mi ) − (1 − z j )b

Z s
e−ri s ui (αi )dH cj (s) ≥ 0

(4)

That is, the share proposed by the mediator must be better than the stream of payoffs from a known rational agent’s concession on [0, T ∗ ). The paper’s first positive result shows that when agents’ reputations are sufficiently small, a P1 equilibrium always exists which Pareto dominates the equilibrium of the Baseline model. Proposition 4. For any given ri , ui , αi for i = 1, 2, b ∈ (0, 1) and fixed K ≥ 1, there exists z > 0 such that if zi ≤ z and K ≥ zz12 ≥ K1 , then there is a P1 equilibrium, which rational agents strictly prefer to the Baseline equilibrium. Some intuition for the result comes from examining equation (4). If agent i does not confess agent she sacrifices an immediate payoff of ui (mi ) in return for the stream of payoffs R e−ri s ui (αi )dH cj (s). That stream of payoffs comes relatively slowly when initial reputations s
in such an equilibrium. By confessing, an agent effectively gains an immediate payoff of ui (mi ) but loses a delayed payoff of ui (αi ), when she faces a rational opponent. When behavioral types are likely, however, there can very little delay, and so the inequality cannot hold. Proposition 5. For any given ri , ui , αi for i = 1, 2 there exists z < 1 such that if z1 ≥ z, then there is no P1 equilibrium.

5

An Optimal Symmetric Mediation Protocol

So far, I have only considered mediator strategies close to the protocol outlined by Dunlop. In particular, the mediator only ever suggests agreements when both agents confess rationality, and never delays her announcements. It is clearly of interest to try to understand what mediation protocol is optimal in a richer strategy space. This section adopts a mechanism design approach to attack the optimal mediation problem in a somewhat restrictive, symmetric setting. This limited problem allows me to make considerable progress, at the expense of some generality. Symmetry implies identical agents with ui = u, ri = r, zi = z, αi = α. Agents can confess rationality only at 02 , and all rational agents do so in equilibrium. Treating agents symmetrically means that mediator can only suggest mi = 0.5 between two reportedly rational agents. However, when agent i confesses but j does not, the mediator suggests m j = 1 − mi = α (i.e. that rational agent i concedes to behavioral agent j). Rational agents (optimally) follow any such mediator suggestion. The distribution of suggested agreement times conditional on both agents confessing is GR , so that GR (t) is the conditional probability that the mediator suggests a compromise between two rational agents before time t3 (R=rational). The distribution of suggested agreements conditional on either i or j alone confessing is GZ , so that GZ (t) is the conditional probability the mediator suggests that the rational agent concedes to the behavioral agent before t3 (Z=behavioral). If neither agent confesses, then the mediator stays silent (as no deal is feasible). An equilibrium satisfying the above assumptions can be summarized by the same objects as a P1 equilibrium, σi = (ci , Hic , Hin ). Agent i’s expected utility if she confesses and then concedes at t5 (when there has been no mediator announcement) is: U (t) =(1 − z) c

Z

Z

u(0.5)dG (s) + z e−rs u(1 − α)dGZ (s) s≤t   + e−rt u(1 − α) (1 − z)(1 − GR (t)) + z(1 − GZ (t)) −rs

e

R

s≤t

15

Agent i’s expected utility if she does not confess and concedes at t5 where t ∈ [0, ∞) is: U (t) = (1 − z)

Z

  e−rs u(α)dGZ (s) + e−rt u(1 − α) (1 − z)(1 − GZ (t)) + z

n

s≤t

I then define the Optimal Symmetric Mediation Protocol (OSMP) problem as follows: max U c (max{T R , T Z })

GR ,GZ

s.t. U c (max{T R , T Z }) = max U c (t)

(Dynamic IC)

(5)

and U c (max{T R , T Z }) ≥ sup U n (t)

(Type IC)

(6)

t

t

where, T R = sup{t : GR (t) < 1}, T Z = sup{t : GZ (t) < 1} That is, the mediator must choose the distributions GR and GZ to maximize the payoff of rational agents subject to a dynamic incentive constraint that agents do not concede until directed to do so by the mediator (equation (5)), and a type incentive constraint that rational agents optimally choose to confess rationality rather than imitate behavioral types (equation 6)). The choice of rational agents’ payoffs as the objective function is natural for two reasons. First, behavioral types don’t have a utility function, and secondly because the mediator only seeks (and gets) cooperation from rational agents. The assumption of symmetry implies a natural if particular form of interpersonal utility comparison. Before formally beginning the analysis, notice that T Z ≤ T R in an OSMP because, otherwise a rational agent would realize that she faced a behavioral type at T R , and so would optimally concede (equation (5) couldn’t hold).

Analysis The main results of paper, characterizing an OSMP, are summarized in the following theorem. Theorem 1. An OSMP exists. It delivers higher payoffs than the Baseline equilibrium if and only if either: i) agents are risk averse, u(α)+u(1−α) < u(0.5); or, ii) behavioral types are unlikely, 2 R in particular z < α. The optimal distribution G features an atom of agreement at time zero and increases continuously on [0, T R ] to keep confessing agents indifferent to conceding, where T R = T Z . The optimal distribution GZ features an an atom of agreement at some t∗ ≥ 0 (with t∗ > 0 if behavioral types are unlikely and risk neural) and increases continuously on (t∗ , T Z ] to keep non-confessing agents indifferent to conceding, where t∗ < T Z . Both incentive constraints must bind. There is a lot to unpack here, and I shall do so gradually over the course of this section. Let 16

us start with the theorem’s claim that an OSMP must deliver higher payoffs that the Baseline equilibrium whenever agents are risk averse. This is, in fact, an almost immediate observation. The mediator can improve on Baseline payoffs simply by using the distribution of agreements −λt present in the Baseline equilibrium. Those Baseline distributions are 1 − GZ (t) = e 1−z−z and 1 − GR (t) = (1 − GZ (t))2 for t ≤ T R = T Z = − λ1 ln(z). In that Baseline equilibrium, an agreement between two rational agents at t ≤ T R is equally likely to give an agent u(α) as u(1−α), because agents 1 and 2 are equally likely to concede. By simply replacing such agreements with u(0.5), therefore, increases a risk-averse rational agent’s payoff. It is immediately verified that these n c distributions imply dUdt(t) = 0 and dUdt(t) > 0 for t < T R , so that U c (T R ) = max U c (t) > u(1−α) = maxt U n (t), which satisfies both incentive constraints. The possibility of mediation delivering Pareto improvements when agents are risk averse generalizes immediately to an asymmetric setup. In that case the Baseline distribution of agreements i Z zi (eλ (T −t) − 1) for between rational agent i and behavioral agent j is described by 1 − Gi (t) = 1−z i t ≤ T Z = T R = mini {− λ1i ln(zi )} while 1 − GR (t) = (1 − G1 (t))(1 − G2 (t)). Replacing the average time t rational agreement with its average then delivers a similar payoff gain.11 The possibility of inefficiency from dispersed outcomes for risk averse agents in a war of attrition has typically received much less attention than inefficacy caused by delay. However, eliminating this second form of inefficiency can also allow the mediator to reduce delay (notice that the type IC constraint doesn’t bind with the Baseline equilibrium distributions). Next consider the theorem’s characterization of an optimal distribution of agreements between two rational agents. The claim is that the distribution is front-loaded as much as possible, with an atom at time zero, and continuous agreement thereafter, to keep a confessing agent indifferent to concession. This is an implication of the claim that the dynamic incentive constraint binds at all t ≤ T R , where T R = T Z . If the dynamic incentive constraint is slack for some t < T R , it is possible to move some agreements forward in time while still satisfying the constraint at all t. This strictly increases rational agents’ payoffs, U c (T R ), and so relaxes the type incentive constraint because supt U n (t) has not changed. This is formally established in Lemma 1. I denote the optimal distribution of rational-rational agreement given any GZ , as GGR∗Z . Lemma 1. Given any distribution GZ , whenever there exists some distribution GR which satisfies both incentive constraints, there is a unique optimal distribution, GGR∗Z characterized by U c (t) = U c (T R ) for t ≤ T R = T Z . The optimal distribution GGR∗Z can be identified in closed form by noting that the requirement U c (t) = 0 on (0, T R ] implies the linear ODE: dt gGR∗Z (t) 11

For t > 0 let mi (t) =

αi

 = λ (1 − GR∗ (t)) +

i g j (t) +(1−αi ) g (t)j 1−G j (t) 1−G (t) j gi (t) + g (t) 1−Gi (t) 1−G j (t)

m

.

17

 z Z (1 − G (t)) 1−z

where λm =

ru(1 − α) . u(0.5) − u(1 − α)

Solving this ODE using the boundary condition GGR Z (T Z ) = 1 gives:

1

− GGR∗Z (t)

−λm t

=e

Z

TZ

λm eλ

ms

t

z (1 − GZ (s))ds 1−z

(7)

Having identified GGR∗Z , the OSMP problem is reduced to finding the optimal GZ . Because GGR∗Z leaves agents indifferent to conceding on [0, T Z ], the payoff from confessing is U c (0) = u(1 − α) + (u(0.5) − u(1 − α)) (1 − z)GGR∗Z (0). Maximizing this payoff is then equivalent to maximizing GGR∗Z (0). Given GZ we can, therefore, form a reduced time t type incentive constraint of ICGZ (t) = U c (0) − U n (t) ≥ 0. Plugging in for GGR∗Z (0), before integrating by parts gives:  Z   ICGZ (t) =u(1 − α) + (u(0.5) − u(1 − α)) (1 − z) 1 −

TZ

  z Z λ e (1 − G (s))ds 1−z 0 Z   − (1 − z) e−rs u(α)dGZ (s) − e−rt u(1 − α) (1 − z)(1 − GZ (t)) + z) s≤t   Z TZ   z m −rt m λ s =u(1 − α)(1 − e ) + (u(0.5) − u(1 − α)) (1 − z) 1 − λ e ds 1−z 0 Z TZ   m − e−rt GZ (t)(1 − z) (u(α) − u(1 − α)) + GZ (s)r u(1 − α)eλ s z − 1[s≤t] u(α)e−rs (1 − z) ds m λm s

(8)

0

Manipulating this incentive constraint is the key characterizing an optimal distribution, GZ . The fact that it is linear in GZ (s) (for s , t) makes this manipulation more straightforward that might otherwise be expected. The next step toward the characterization in Theorem 1 is Lemma 2, which states that the type incentive constraint must bind at T Z . This is fairly intuitive. If it did not hold, then it would be possible to slightly increase GZ (t) for t ∈ [T Z − ε, T Z ] while maintaining ICGZ (t) ≥ 0. Notice R∗ that if G˜ Z (t) ≥ GZ (t) for all t and G˜ Z (s) > GZ (s) for some s, then GGR∗ ˜ Z (0) > GGZ (0). Lemma 2. If GZ implies inf t ICGZ (t) ≥ 0 and ICGZ (T Z ) > 0 then there is another distribution G˜ Z such that UGc∗˜ Z (0) > UGc∗Z (0) while ICG˜ Z (T Z ) = mint ICG˜ Z (t) = 0 for t ≤ T Z . (DO WE REALLY NEED THIS? CAN IT BE WORKED INTO NEXT RESULT?)   . The next step in the argument distinguishes between times before and after tˆ := λm1+r ln (1−z)u(α) zu(1−α) In the time t incentive constraint (equation (8)) the final integrand is always zero at tˆ, whereas it is positive for s ∈ (tˆ, T Z ]. For such s, having GZ (s) as large as possible (subject to not changing GZ (t)) relaxes the time t type constraint. The implication is that the constraint must bind for t ≥ tˆ. This is established in Lemma 3. Lemma 3. Consider any distribution GZ such that mint ICGZ (t) ≥ ICGZ (T Z ) = 0. If ICGZ (t) > 0 h i for some t ∈ tˆ, T Z then there is an alternative distribution G˜ Z such that min s ICG˜ Z (s) ≥ 0 and h i ICG˜ Z (t) = 0 for t ∈ tˆ, T Z , such that UGc∗˜ Z (0) > UGc∗Z (0). 18

u(α) Now suppose that behavioral types are fairly likely, in the sense that z ≥ u(α)+u(1−α) (z ≥ α if agents are risk neutral). This implies tˆ ≤ 0 and so by Lemma 3 we may restrict attention to n distributions with ICGZ (t) = 0 for t ≤ T Z and so dUdt(t) = 0. Such distributions must therefore satisfy the linear ODE  z  gZ (t) = λ 1 − GZ (t) + 1−z Z

λ(T −t) which combined with the boundary condition G˜ Z (T Z ) = 1 gives G˜ Z (t) = 1−ze1−z . Among such distributions, GZ (t) is decreasing in T Z and so the OSMP is reduced to the problem finding the minimum T Z such that:

 Z  ˆ IC(T ) := ICG˜ Z (0) =(1 − z) (u(α) − u(1 − α)) 1 − Z

TZ

λT Z +(λm −λ)s)

λ (e

0

m

λm s

−e

 z 2   ds ) 1−z

− (u(α) − u(1 − α)) (1 − eλT z) ≥ 0 Z

ˆ Z ) is continuous in T Z . Moreover, if T Z = − 1 ln(z), then G˜ Z corresponds to its Notice that IC(T λ Z ˆ ˆ Z) > 0 Baseline equilibrium distribution, where IC(T ) = 0 if agents are risk-neutral and IC(T otherwise.12 An OSMP, therefore, must not only exist but be unique in this case. I denote such an optimal distribution as GZ∗ . Theorem 1 makes an additional claim about OSPM in this case. When agents are risk neutral the optimal distribution GZ∗ and hence GGR∗Z and payoffs are identical to the Baseline equilibrium. This is established in Lemma 4. The proof is just a few lines of algebra, which show that ˆ Z) d IC(T < 0 for T Z < − λ1 ln(z). dT Z Lemma 4. If agents are risk neutral, u(x) = x, and z ≥ α, then expected outcomes (and payoffs) in the unique OSMP are identical to those in the Baseline equilibrium. This is a fairly strong negative result regarding the potential for mediation. Lemma 5 establishes Theorem 1’s converse claim for risk neutral agents: if behavioral types are less likely, z > α, then mediation can always improve outcomes, which is in particular the case when at least 50% of agents are rational. The proof is by construction, using a distribution GZ that shares several features of an optimal distribution (but isn’t in fact optimal itself, because it leaves slack incentive constraints). Lemma 5. If z > α, then an OSMP delivers higher payoffs than the Baseline equilibrium. To complete the characterizing of an optimal distribution when behavioral types are unlikely, λ(T Z −t) u(α) z < u(α)+u(1−α) , define the distribution GZtˇ,T Z by GZtˇ,T Z (t) = 1−ze1−z for t ∈ [tˇ, T Z ] and GZtˇ,T Z (t) = 0   − λm   2  m −1 z z Z ˆ Evaluating at T Z = − λ1 ln(z)(0) we get IC(T ) = (1 − z) 1 − λz λλ m−λ + 1 1−z . It is easily verified that −λ m Z ˆ this expression is decreasing in the ratio λλ . When u(x) = x, then λm = 2λ and so IC(T ) = 0, and otherwise λm > 2. λ 12

19

otherwise, where tˇ ∈ [0, min{T Z , tˆ}]. This distribution has the form of an optimal distribution identified in Theorem 1 (i.e. an atom of agreement at tˇ followed by continuous concession to make a confessing agent indifferent to conceding on [tˆ, T Z ]). By examining equation (8) it is clear that ICGZ Z (T Z ) is continuous in tˇ and T Z and strictly tˇ,T increasing in the former for tˇ < tˆ (the final integrand is negative for s < tˆ). For a fixed T Z , the distribution GZ which implies the slackest type incentive constraint is, therefore, GZmin{T Z ,tˆ} . If ICGZ Z Z (T Z ) ≥ 0 we can then define t∗ (T Z ) = min{t : ICGZ Z (T Z )} ≥ 0. min{T ,tˆ},T

t,T

among all distributions GZ with the same T Z . Lemma 6 establishes the optimality of The proof considers any distribution (that has not been ruled out by Lemma 3), where GZ (t) > 0 for some t < T Z (t∗ ). I show that reducing GZ (s) for some s < T Z (t∗ ), and increasing GZ (v) for some v > t∗ (T Z ) while leaving the time T Z incentive constraint unchanged, can increase GGR∗Z (0). GZt∗ (T Z ),T Z

Lemma 6. For any distribution GZ where inf s ICGZ (s) ≥ 0, if ICGZ (t) > 0 and GZ (t) > 0 for some t ≤ T Z , then there is an alternative distribution G˜ Z such that min s ICG˜ Z (s) ≥ 0, ICG˜ Z (t)G˜ Z (t) = 0 for t ≤ T Z and UGc∗˜ Z (0) > UGc∗Z (0). This Lemma allows us to rapidly establish existence of an OSMP. Because ICGZ Z (T Z ) is contˇ,T tinuous in tˇ and T Z , and strictly increasing in the former, the set of T Z for which t∗ (T Z ) is defined is closed while ICGZ Z (T Z ) ≥ 0 for T Z = − λ1 ln(z) implies that this set is also non-empty. 0,T Moreover, t∗ (T Z ) is continuous on that closed set. Hence, the OSMP problem can be reduced to maximizing a continuous function of T Z , GGR∗Z (0) on a compact set {T Z : ICGZ Z Z (T Z ) ≥ min{T ,tˆ},T ∗ Z Z t (T ),T h i 1 13 0} ∩ 0, − λ ln(z) . The final claim of Theorem 1 is that the optimal distribution GZ∗ is not degenerate, i.e. t∗ (T Z ) < T Z . This is established in Lemma 7. This is interesting because one might have expected that (at least sometimes) concession to a behavioral type would be fully backloaded, in the sense of having a probability one atom at some distant date. After all such concession gives a rational agent who doesn’t confess (pretends to be behavioral) a payoff u(α) while only giving the confessing agent u(1 − α). Some intuition comes from the fact that a non-confessing agent expects to face a behavioral opponent with probability z at t∗ (T Z ) while the confessing agent does so z with a higher probability z¯(t) = z+(1−z)(1−G R (t)) , and so that payoff may be received with higher ∗ Z probability. This is why t (T ) = 0 when zu(1 − α) ≥ (1 − z)u(α). More generally, when z¯(t) is close to one, the confessing agent has a strong incentive to concede. Earlier behavioral-rational agreements reduces this belief and so significantly relaxes the dynamic incentive constraint. An implication of the result is that the mediator must help rational agents back down against behavioral opponents at the right time, and cannot solely broker compromises (if t∗ (T Z ) = T Z then a confessing agent could simply concede at T Z ). The proof is simple calculus and algebra, showing that when t∗ (T Z ) = T Z , a small increase 13

If T Z > − λ1 ln(z) then there is no GZ such that U n (T Z ) = supt U n (t) ≥ u(1 − α) by similar logic to Lemma 1.

20

in T Z and corresponding decrease in t∗ (T Z ), such that ICGZ∗ Z Z (T Z ) does not change, would t (T ),T strictly increase payoffs. Lemma 7. Any OSMP distribution of behavioral rational agreements, GZt∗ (T Z ),T Z satisfies t∗ (T Z ) < T Z.

5.1

A mechanism design benchmark

In this section, I compare my results on optimal mediation to a mechanism design benchmark when the mechanism designer can impose agreement and also disagreement between agents. There is no need to ensure that agents follow through on the designers’ suggestions. I call this an Optimal Symmetric Delegation Mechanism (OSDM), because agents delegate their subsequent decision making power. As with an OSMP, an OSDM is defined only for symmetric reputational bargaining problems. Agents report their types to the designer chooses an agreement (or disagreement) at any time based on the reported types. Behavioral agents always report their true type to the designer, but rational agents may lie. I require that the designer imposes perpetual disagreement between two reported behavioral types and only ever imposes agreements of ( 12 , 12 ) between reported rational pairs and (α, 1−α) between rational vs behavioral pairs. As in an OSMP the designer maximizes rational agents payoffs by choosing the distributions of agreement times GZ (between rational agent pairs) and and GZ (between rational vs behavioral pairs). Individual rationality constraints can also be added, however, these will typically not bind (see discussion below). Formally, the OSDM problem is as follows:

max_{G^R, G^Z}  U^c = (1 − z) ∫_{s<∞} e^{−rs} u(0.5) dG^R(s) + z ∫_{s<∞} e^{−rs} u(1 − α) dG^Z(s)

s.t.  U^c ≥ U^n = (1 − z) ∫_{s<∞} e^{−rs} u(α) dG^Z(s)        (Type IC*)

The key difference from the OSMP problem is the absence of a dynamic incentive constraint. The presence of only a type incentive constraint makes this a much more standard mechanism design problem, which is much simpler to solve. I characterize the solution in the following proposition.

Proposition 6. In any OSDM, pairs of rational agents agree immediately, G^R(0) = 1. Furthermore, if (1 − z)u(0.5) + zu(1 − α) ≥ (1 − z)u(α) then G^Z(0) = 1, and otherwise it is without loss of generality to have G^Z(0) = G^Z(∞) = (1 − z)u(0.5) / ((1 − z)u(α) − zu(1 − α)).

The proof of this characterization is trivial. Increasing G^R(0) strictly improves the objective function and relaxes the type incentive constraint, so that we must have G^R(0) = 1. On the other hand,

increasing ∫_{s<∞} e^{−rs} dG^Z(s) strictly improves the objective function but tightens the incentive constraint. If (1 − z)u(0.5) + zu(1 − α) ≥ (1 − z)u(α), then the constraint is satisfied even with G^Z(0) = 1, and otherwise that constraint must bind in an OSDM. There are many distributions which will make it bind. The distribution specified in the proposition imposes disagreement with probability 1 − G^Z(0) > 0; however, many others have behavioral-rational pairs reaching agreement eventually.

This characterization shows how much the freedom of agents to ignore the mediator's suggestions (effectively the dynamic incentive constraint) constrained an OSMP. A mediator could never have all rational agents agree at time zero, G^R(0) = 1, because then, absent such agreement, they would learn that they face a behavioral type and choose to concede immediately (implying G^Z(0) = 1). A rational agent would not, therefore, truthfully reveal her type, because imitating a behavioral type and then subsequently conceding would give a strictly larger payoff, (1 − z)u(α) + zu(1 − α) > (1 − z)u(0.5) + zu(1 − α). The option to pretend to be behavioral and subsequently concede is not present in an OSDM, so that when behavioral types are fairly likely, z ≥ (u(α) − u(0.5)) / (u(1 − α) + u(α) − u(0.5)), we can indeed have G^R(0) = G^Z(0) = 1, an efficient outcome with no delay or disagreement. By contrast, for risk neutral agents and such parameters, an OSMP is unable to improve at all on the Baseline equilibrium.

As mentioned, the OSDM problem above lacks individual rationality constraints, which might be thought to constrain its efficiency. Why should agents delegate their decision making power to the mediator? A natural constraint is that agents do better than in the Baseline equilibrium. This is certainly true for rational agents, as U^c ≥ u(1 − α). For behavioral types, a plausible assumption is that they have the same discount rate and the same utility as rational agents for dollar shares greater than α, but obtain −D for any smaller share, for D large. This translates into a constraint U^n ≥ u(1 − α)(1 − e^{−rT*} z), where T* = −ln(z)/λ. This is because rational agents who concede at T* in the Baseline equilibrium expect a payoff of u(1 − α), whereas behavioral types get e^{−rT*} u(1 − α) z less (by not conceding at that time). Clearly this constraint is also satisfied, as U^n = U^c ≥ u(1 − α). We can now imagine an extended bargaining game in which agents can voluntarily sign up to the mechanism and reveal their type to the designer at time 0, who either implements an OSDM, or tells the agents that at least one agent has refused to participate, in which case reputational bargaining continues. Seemingly, all agents would choose to sign up to the OSDM, so that their beliefs could remain unchanged if for some reason an opponent did not do so.

While an OSDM achieves a strictly higher objective than an OSMP, the credibility of this mechanism seems slightly dubious. It requires agents to fully delegate future decision making to the designer, who can then maintain perpetual disagreement between two (reportedly) behavioral types. While contracts constraining future agreements may sometimes be drawn up, courts do not enforce contracts in the absence of a harmed party (i.e. there is no party with standing to enforce the contract). Seemingly, therefore, a rational agent should always have the option to

pretend to be behavioral and then change her mind and accept her opponent's demand.

While the designer could impose agreements in an OSDM, I have not referred to it as arbitration. I did this because an OSDM also sometimes imposes perpetual disagreement, which seems to conflict with the typical practice of arbitration as a form of Alternative Dispute Resolution. An arbitrator who always imposed some dollar division, even between behavioral types, would seem likely to face fierce opposition. Using the assumptions about utility outlined above, this would necessarily give at least one behavioral agent a payoff of less than u(1) − Dz/2, which is worse than perpetual disagreement for large D. In this case, therefore, the designer would seem unable to satisfy any reasonable individual rationality constraint for behavioral types. This illustrates an important point: the knowledge that a mediator will never impose an agreement which an agent strongly dislikes may be an important selling point. It can help explain the increased popularity of mediation compared to arbitration highlighted by Stipanowich and Lamare (2013).
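To make Proposition 6 concrete, here is a minimal numerical sketch, assuming risk-neutral utility u(x) = x and purely illustrative values of z and α (these numbers are not taken from the paper):

def osdm_solution(z, alpha, u=lambda x: x):
    # If (1-z)u(0.5) + z u(1-alpha) >= (1-z)u(alpha), the type IC constraint
    # is slack and the designer can set GZ(0) = 1 (immediate agreement for all).
    slack = (1 - z) * u(0.5) + z * u(1 - alpha) >= (1 - z) * u(alpha)
    if slack:
        gz0 = 1.0
    else:
        # Otherwise the constraint binds; one solution puts an atom at time zero,
        # GZ(0) = (1-z)u(0.5) / ((1-z)u(alpha) - z u(1-alpha)),
        # and imposes perpetual disagreement with the remaining probability.
        gz0 = (1 - z) * u(0.5) / ((1 - z) * u(alpha) - z * u(1 - alpha))
    return {"GR(0)": 1.0, "GZ(0)": gz0}

print(osdm_solution(z=0.7, alpha=0.8))  # behavioral types likely: GZ(0) = 1, fully efficient
print(osdm_solution(z=0.1, alpha=0.8))  # constraint binds: GZ(0) ~ 0.64, some disagreement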

6 Discussion and literature review

Although the potential for mediators to filter information and expand payoff sets is well known in economics (for instance, the set of correlated equilibria is typically larger than the set of Nash equilibria), the role for mediators in dynamic bargaining settings (where real world mediators practice) has received relatively little attention. There is an inherent difficulty for such mediators to persuade bargainers to compromise, even if in private. If an agent refuses that compromise, then an opponent will learn she is more likely to be committed to her bargaining position, increasing that opponent's incentive to concede to the agent's preferred outcome. Propositions 2 and 3 showed that in simple mediation protocols, this problem is so severe that mediation may be completely ineffective. These negative results have an intuitive rationale, but are nonetheless quite surprising.

Jarque et al. (2003) illustrate an equilibrium with mediation when there is incomplete information about reservation values and a dynamic bargaining protocol similar to my I∞ protocol. In their continuous time model, agent i gets utility e^{−rt}(x_i − s_i) when she gets a share x_i of the dollar at time t and has reservation value s_i ∈ [s_i^L, s_i^H]. A war of attrition equilibrium always exists, in which there are only two possible agreements (agent i demands x_i > s_i^H). A mediator immediately announces an agreement m_i ∈ (1 − x_j, x_i) whenever both parties accept this in private.^14

^14 The model allows for any finite number of compromise alternatives, with agents successively indicating partial acceptance to the mediator. Čopič and Ponsatí (2008) extend the model to allow for a continuum of possible agreements, and illustrate the existence of an ex-post efficient equilibrium.

Their main result is that when fundamentals are symmetric, there is an equilibrium with mediation if and only if the fraction of types willing to concede in the war of attrition is sufficiently small (s_i^L ≈ 1 − x_j). This can provide an (ex-ante) Pareto improvement over the war


of attrition. One subtle difference with my positive mediation results is the requirement that the fraction of flexible agents with low reservation values is small. By contrast, Proposition 4 and Theorem 1 required a large fraction of flexible rational agents. The more important difference is that Dunlop's simple protocol (without noise) could succeed. The reason is that with reservation values, the mediator facilitates agreement between types who would never agree in the war of attrition. If x_1 = x_2 = 0.7 then types s_1 = s_2 = 0.4 will never concede in a war of attrition, but by introducing the alternative m_i = 0.5, these types can reach agreement. The extra payoffs created by such agreements add enough grease to the system to overcome the inherent difficulties of mediation. In the reputational model, by contrast, introducing a compromise does not expand the set of types who are ultimately willing to agree.

This of course raises the question of whether such types could reach agreement even without a mediator. Ponsati (1997) shows that if the game rules allow only three alternative agreements and strategies are Markov, then at least one alternative will not be used (i.e. there is a war of attrition absent a mediator). However, she also shows that it is possible to construct non-Markov equilibria, in which all three alternatives are used (i.e. agents compromise), that provide ex-ante Pareto improvements on the war of attrition. The uniqueness of the Baseline equilibrium in the reputational model highlights the role of a mediator more clearly.

Proposition 4 shows that the addition of (a small amount of) noise helps resolve the problems of simple mediation protocols. The claim that a mediator may need to add noise in addition to filtering information is not new. For instance, Goltsman et al. (2009) investigate the potential for mediation, arbitration and negotiation (finitely many rounds of communication with no discounting) to improve receiver payoffs in a cheap talk game. They show that both mediators and arbitrators should filter information, but mediators should also add noise. They further show that arbitration is (generically) more effective than mediation, while mediation is only sometimes more effective than communication. Hörner et al. (2015), by contrast, show that arbitration and mediation are equally effective at deterring conflict in a simple game in which parties choose whether to go to war.

Theorem 1 identifies an optimal mediation protocol using a mechanism design approach, albeit for a somewhat restrictive problem.^15 The optimal protocol (OSMP) has several interesting features. In particular, a mediator can always benefit risk averse agents because she can eliminate the inefficiency caused by dispersed agreements. Agreements between rational agents are always front-loaded as much as possible. However, agreements between rational agents and behavioral types are only partially backloaded, pointing to the need for a mediator to help agents realize when to back down, and not just broker compromise.

^15 Note that the reputational game is considerably more complex than the games considered by Goltsman et al. (2009) and Hörner et al. (2015), which additionally do not face the difficulty of how behavioral types should interact with any mechanism.


Comparing mediation to a mechanism design benchmark (OSDM) where the designer can impose outcomes reveals that mediation is never as effective. However, this alternative mechanism may lack credibility, because it assumes that a mediator can enforce perpetual disagreement.

One significant difference between the baseline setting of my model and AG is that I consider only a single behavioral type. Adding more behavioral demands for rational agents to imitate would certainly complicate the analysis. To the extent that mediation is (necessarily) different for different disputes (demands), it should certainly affect demand choice. It is not clear what additional insights might emerge from that richer model. Another difference from AG is the direct use of continuous time, instead of starting with discrete time. It may be thought that this involves an important loss of generality, because when the mediator reveals that both agents are rational any surplus division is consistent with equilibrium continuation play, but that typically isn't the case in discrete time (e.g. in an alternating offer model). However, by revealing information more slowly, an informed mediator may still be able to implement almost any compromise between rational agents.^16

Beyond the papers already mentioned, the literature on mediation in bargaining is small. One avenue of research considers the possibility that a third party can provide additional resources to the bargainers. For instance, Manzini and Ponsati (2006) show that bargainers may delay agreement in a complete information alternating offer model until a third party, who has a stake in the outcome, arrives. They do this in order to extract additional resources from the stakeholder. Basak (2016) explores third-party intervention in a model that is very similar to reputational bargaining. His third party can reveal her own exogenous information about the likelihood that an agent is committed to her demand. He finds that if the third party only has access to moderately informative signals, then the intervention may increase expected delay. The paper is also related to a literature on optimal timing in information design, such as Ely and Szydlowski (2017) and Marinovic et al. (2017). A key difference with such problems is that my mediator has to provide incentives for agents to reveal their private information.

^16 In an alternating offer model with period length ∆ ≈ 0, suppose the mediator wants to implement m_1 ∈ (1 − α_2, α_1^R) when both agents are rational, and wants agent i to concede if she alone is rational, where α_1^R is agent 1's Rubinstein demand. If agent 1 is behavioral, the mediator announces this fact before period 1. If agent 2 is behavioral, then the mediator announces this before period 1 with probability (1 − ε), for ε > 0 small. Furthermore, if agent 1 makes demand m_1 in period 1, then the mediator will reveal whether 2 is rational before period 2, and will otherwise remain silent. Anticipating this, rational agent 2 will accept 1 − m_1, as u_2(1 − m_1) > e^{−r_2∆} u_2(α_2^R). If the mediator makes no announcement, then 1 believes 2 is behavioral with probability ẑ_2 = z_2ε/(z_2ε + (1 − z_2)) ≈ 0. Hence, 1 will demand m_1, because this gives her a payoff of (1 − ẑ_2)u_1(m_1) + ẑ_2 e^{−r_1∆} u_1(1 − α_2) ≈ u_1(m_1), whereas making any other demand will leave a game of one-sided incomplete information, in which her payoff can be only marginally more than u_1(1 − α_2).
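A quick numerical illustration of the construction in footnote 16, for risk-neutral utility and made-up parameter values (in particular, the symmetric Rubinstein demands α_1^R = α_2^R = 0.5 are an assumption used only for this illustration):

import math

r1 = r2 = 1.0          # discount rates
delta = 0.01           # period length
eps = 1e-4             # probability the mediator stays silent about a behavioral agent 2
z2 = 0.3               # prior probability that agent 2 is behavioral
alpha2 = 0.8           # agent 2's behavioral demand
alphaR = 0.5           # assumed symmetric Rubinstein demand
m1 = 0.45              # compromise in (1 - alpha2, alpha1_R) = (0.2, 0.5)
u = lambda x: x        # risk-neutral utility

# Rational agent 2 prefers accepting 1 - m1 now to her Rubinstein share next period:
print(u(1 - m1) > math.exp(-r2 * delta) * u(alphaR))          # True

# If the mediator stays silent, agent 1's posterior that 2 is behavioral is negligible,
# so demanding m1 yields approximately u(m1):
z2_hat = z2 * eps / (z2 * eps + (1 - z2))
payoff = (1 - z2_hat) * u(m1) + z2_hat * math.exp(-r1 * delta) * u(1 - alpha2)
print(round(z2_hat, 7), round(payoff, 4))                     # ~4.3e-05, ~0.45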


7 Appendix

Proof of Proposition 2. Suppose there is an equilibrium σ = (σ_1, σ_2) with c_i c_j > 0. Let A_i^c = {t : U_i^c(t) = max_s U_i^c(s)} and A_i^n = {t : U_i^n(t) = max_s U_i^n(s)}. Since σ is an equilibrium, A_i^n ≠ ∅ ≠ A_i^c. Define T_i^c as the final time by which a confessing agent i concedes to her opponent: T_i^c = inf{t : F_i^c(t) = 1}. Similarly, define T_i^n as the final time a rational, non-confessing agent i concedes to her opponent: T_i^n = inf{t : (1 − c_i)(1 − F_i^n(t)) = z_i}. Finally, define T* = max{T_j^c, T_j^n, T_i^c, T_i^n} and T^c = min{T_i^c, T_j^c}. I next make (and prove) a series of claims, which will help establish the result.

(a) We must have T_j^c ≤ T_i^n < ∞. To establish T_i^n ≥ T_j^c, suppose instead that T_i^n < T_j^c; then after time T_i^n a confessing agent j knows that she faces a behavioral opponent, and so would prefer to concede immediately rather than wait until T_j^c. To establish T_i^n < ∞, let π_j^t be the conditional probability that agent j continues to act consistent with a behavioral type on the interval [s, s + t) for arbitrary s. For agent i not to concede at s it must be that:

u_i(1 − α_j) ≤ (1 − π_j^t) u_i(1) + π_j^t e^{−r_i t} u_i(1),   i.e.,   π_j^t ≤ (u_i(1) − u_i(1 − α_j)) / ((1 − e^{−r_i t}) u_i(1)),

where the second inequality simply rearranges the first. Fix δ ∈ ((u_i(1) − u_i(1 − α_j))/u_i(1), 1), and consider K such that δ^K < z_i and t_0 such that δ = (u_i(1) − u_i(1 − α_j)) / ((1 − e^{−r_i t_0}) u_i(1)). Suppose agent i did not concede on the interval [0, t_0 K); then it must be that the probability j acts consistent with a behavioral type on that interval is less than (π_j^{t_0})^K ≤ δ^K < z_i, but this contradicts the fact that a behavioral type acts like itself. And so rational agent i will always concede by T_i^n ≤ t_0 K.
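Purely as an illustration of the bound in claim (a), with risk-neutral utility and made-up numbers (α_j = 0.7, r_i = 1, z_i = 0.1, δ = 0.8):

import math

u = lambda x: x
alpha_j, r_i, z_i = 0.7, 1.0, 0.1
lower = (u(1) - u(1 - alpha_j)) / u(1)          # delta must lie in (0.7, 1)
delta = 0.8
K = math.ceil(math.log(z_i) / math.log(delta))  # smallest K with delta**K < z_i
t0 = -math.log(1 - lower / delta) / r_i         # solves delta = lower / (1 - exp(-r_i t0))
print(K, round(t0, 3), round(K * t0, 1))        # 11, 2.079, 22.9 -> T_i^n <= t0*K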

(b) We must have T_i^n ≤ max{T_j^c, T_j^n}. Suppose that T_i^n > max{T_j^c, T_j^n}; then after time max{T_j^c, T_j^n} a non-confessing rational agent i knows that she faces a behavioral opponent, and so would prefer to concede immediately rather than wait until T_i^n.

(c) There is no jump in F_i^c at t ∈ (0, T*]. Suppose F_i^c jumped at t ∈ (0, T*]; then F_j^n is constant on [t − ε, t] for some ε > 0, as a non-confessing agent j would prefer instead to concede an instant after t rather than on the interval [t − ε, t]. But in which case, a confessing agent i would prefer to concede at t − ε rather than wait until t.

(d) There is no jump in F_i^n at t ∈ (0, T*]. Suppose that F_i^n did jump at t ∈ (0, T*]; then F_j is constant on [t − ε, t] for some ε > 0, as a rational agent j would prefer instead to concede an instant after t rather than on the interval [t − ε, t]. But in which case, a non-confessing agent i would prefer to concede at t − ε rather than wait until t.

(e) If F_i^c and F_i^n are continuous at t then U_j^c(s) and U_j^n(s) are continuous at t. This follows by their definition.

(f) If T* ≥ t″ > t′ then F_i(t″) > F_i(t′). Suppose not; then let t_i^* = sup{t : F_i(t) = F_i(t′)} ≥ t″. It is clear that no rational agent j will concede on s ∈ (t′, t_i^*) because this is strictly worse than conceding at (s + t′)/2. The continuity of F_j on (0, T*], established in (c) and (d), then means a rational agent i would strictly prefer to concede at (t_i^* + t′)/2 rather than wait to concede at or just after t_i^*, contradicting the definition of t_i^*.

(g) If T_j^c ≥ t″ > t′ then F_i^n(t″) > F_i^n(t′). Suppose not; then let t_i^{*n} = sup{t : F_i^n(t) = F_i^n(t′)} ∈ [t″, ∞). A confessing agent j will not concede on s ∈ (t′, t_i^{*n}) because this is strictly worse than conceding at (t′ + t_i^{*n})/2. This in particular ensures that T_j^c ≥ t_i^{*n}. In conjunction with (f) it also means that F_j^n(t″) > F_j^n(t′) and F_i^c(t″) > F_i^c(t′). The latter implies that A_i^c is dense in (t′, t_i^{*n}]. From (d) and (e), U_i^c(t) is continuous and hence constant on (t′, t_i^{*n}). In turn that ensures that U_i^c(t) is differentiable on (t′, t_i^{*n}) with dU_i^c(t)/dt = 0, so that a non-confessing agent j must concede at rate f_j^n(t)/(1 − F_j^n(t)) = λ_j. Notice, however, that because c_j(1 − F_j^c(t)) > 0 for t < T_j^c, where T_j^c ≥ t_i^{*n}, for t ∈ (t′, t_i^{*n}) we must have:

f_j(t)/(1 − F_j(t)) = (1 − c_j) f_j^n(t) / [(1 − c_j)(1 − F_j^n(t)) + c_j(1 − F_j^c(t))] < f_j^n(t)/(1 − F_j^n(t)) = λ_j.

A concession rate of exactly f_j(t)/(1 − F_j(t)) = λ_j would make a non-confessing agent i indifferent to concession at any t ∈ (t′, t_i^{*n}). The continuity of F_j, established in (c) and (d), then means that a non-confessing agent i gets a strictly lower payoff when conceding at or just after t_i^{*n} than if she conceded at (t′ + t_i^{*n})/2. This means that t_i^{*n} cannot be the supremum, a contradiction.

(h) If T_j^c > 0, then agent j must concede at rate f_j(t)/(1 − F_j(t)) = λ_j on (0, T_j^c]. If T_j^c > 0, then (g) implies that A_i^n is dense in [0, T_j^c]. From (c), (d), and (e) it follows that U_i^n(t) is continuous on (0, T_j^c] and hence constant on this interval, and so differentiable with dU_i^n(t)/dt = 0, which implies that agent j concedes at rate λ_j.

(i) If T_j^c < T*, then f_j(t)/(1 − F_j(t)) = λ_j on (T_j^c, T*]. Notice that a confessing and a non-confessing agent i must have identical beliefs about j's likelihood of being behavioral on [T_j^c, T*], and so A_i^n ∩ [T_j^c, T*] = A_i^c ∩ [T_j^c, T*]. From (f) it follows that A_i^n is dense in [T_j^c, T*]. From (c), (d), and (e) it follows that U_i^n(t) is continuous on (T_j^c, T*] and hence is also constant. In turn that implies that U_i^n(t) is differentiable on (T_j^c, T*] with dU_i^n(t)/dt = 0, and so agent j must concede at rate λ_j.

(j) If T^c ≥ t″ > t′ and F_j^c(t″) = F_j^c(t′), then F_j^c(t″) = F_i^c(t″) = 0. I first claim that F_i^c(t″) = F_i^c(t′). To see this, notice that if F_j^c(t″) = F_j^c(t′), then to ensure that agent j on average concedes at rate λ_j on (t′, t″), as required by (h), a non-confessing agent j must concede at rate:

f_j^n(t)/(1 − F_j^n(t)) = λ_j [1 + c_j(1 − F_j^c(t)) / ((1 − c_j)(1 − F_j^n(t)))].    (9)

For t < T^c, however, c_j(1 − F_j^c(t)) > 0 and so this rate is strictly greater than λ_j, which implies a confessing agent i would strictly prefer to concede at t″ rather than on the interval (t′, t″). Next, define t_i^{**} = inf{s : F_i^c(s) = F_i^c(t′)} ≤ t′. The previous argument implies t_i^{**} = t_j^{**} ∈ [0, t′]. Anticipating that a non-confessing agent j will concede at the rate specified in equation (9) on (t_i^{**}, t″), a confessing agent i won't concede on [t_i^{**} − ε, t″) for some ε > 0, preferring to concede at t″ instead, which implies t_i^{**} = 0 and F_i^c(t′) = 0.

(k) Suppose T^c > 0, let t_*^c := inf{t : F_i^c(t) > 0 for i = 1, 2}, and suppose t_*^c ≤ t′ < t″ ≤ T^c; then F_i^c(t″) > F_i^c(t′). First notice that T^c > t_*^c follows from (c), the continuity of F_i^c on (0, ∞). The claim then follows immediately from (j), (c) again, and the fact that either F_i^c(T^c) = 1 or F_j^c(T^c) = 1.

(l) If T^c > 0 then f_j^n(t)/(1 − F_j^n(t)) = f_j^c(t)/(1 − F_j^c(t)) = λ_j on (t_*^c, T^c], where t_*^c is defined in (k). First notice that by (k), A_i^c must be dense in [t_*^c, T^c]. From (d) and (e), U_i^c(t) is continuous on (0, T^c]. Hence U_i^c(t) is constant on (t_*^c, T^c], and so differentiable with dU_i^c(t)/dt = 0, which implies that a non-confessing agent j's concession rate is f_j^n(t)/(1 − F_j^n(t)) = λ_j on this interval. By (h) we must also have a total concession rate f_j(t)/(1 − F_j(t)) = λ_j on the interval. If both these concession rates hold, then:

λ_j = f_j(t)/(1 − F_j(t)) = [c_j f_j^c(t) + (1 − c_j) f_j^n(t)] / [c_j(1 − F_j^c(t)) + (1 − c_j)(1 − F_j^n(t))] = [c_j f_j^c(t) + (1 − c_j)(1 − F_j^n(t)) λ_j] / [c_j(1 − F_j^c(t)) + (1 − c_j)(1 − F_j^n(t))],

which rearranges to give f_j^c(t)/(1 − F_j^c(t)) = λ_j.

(m) We must have T^c = 0. Suppose not, and so T^c = T_j^c > 0 for some agent j, while F_j^c(0) < 1. If t ∈ [t_*^c, T^c] (where t_*^c is as in (k)), then F_j^c(t) = 1 − (1 − F_j^c(0)) e^{−λ_j t} < 1 for all t, but this contradicts T_j^c < ∞.
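A quick numeric sanity check of the rearrangement in claim (l), using arbitrary illustrative numbers: if the total hazard and the non-confessing hazard both equal λ_j, the implied confessing hazard is λ_j as well.

lam, c, Fc, Fn = 0.5, 0.3, 0.4, 0.25             # illustrative lambda_j, c_j, F^c_j(t), F^n_j(t)
fn = lam * (1 - Fn)                              # non-confessing hazard equals lambda_j
# total hazard = lambda_j pins down the confessing density f^c_j(t):
fc = (lam * (c * (1 - Fc) + (1 - c) * (1 - Fn)) - (1 - c) * fn) / c
print(fc / (1 - Fc))                             # -> 0.5 (= lambda_j), up to floating point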


We are almost done. Notice that if T_j^c = 0, then (0, T*] ⊆ A_i^c = A_i^n by (i), hence a confessing agent i who concedes at t ∈ A_i^c must get the payoff:

U_i^c(t) = c_j u_i(m_i) + (1 − c_j)[F_j^n(0) u_i(α_i) + (1 − F_j^n(0)) u_i(1 − α_j)],

whereas a non-confessing agent i who concedes at t ∈ A_i^n must get the payoff:

U_i^n(t) = c_j u_i(α_i) + (1 − c_j)[F_j^n(0) u_i(α_i) + (1 − F_j^n(0)) u_i(1 − α_j)].

Therefore, if c_j > 0 we must have m_i ≥ α_i, or agent i would not find it optimal to confess. Clearly we cannot have m_j < 1 − α_i, or confessing would deliver agent j a payoff of strictly less than u_j(1 − α_i), which she can guarantee by not confessing and then conceding (recall that F_j^c(0) = 1 and c_i > 0). Suppose finally that m_j = 1 − α_i. In this case, we must have U_j^c(t) ≤ u_j(1 − α_i) for all t (or we could not have F_j^c(0) = 1) and so similarly U_j^n(t) ≤ u_j(1 − α_i) for all t, in particular implying F_i(0) = 0. Analogous to the requirement that both agents reach a probability one reputation at the same time in the Baseline model, we must have T_j^n = T* = max{T_i^c, T_i^n}. If T_j^n < max{T_i^c, T_i^n} then, because T_j^c = 0, any rational agent i would know she faced a behavioral type at T_j^n and would concede at most an instant after. Similarly, if T_j^n > max{T_i^c, T_i^n}, then a non-confessing agent j would know she faced a behavioral type at max{T_i^c, T_i^n} and would concede at most an instant after.

By (h) and (i) we know that agents must concede at rates f_j(t)/(1 − F_j(t)) = λ_j and f_i(t)/(1 − F_i(t)) = λ_i on (0, T*]. Given that F_i(0) = 0 we must have 1 − F_i(t) = e^{−λ_i t} for t ≤ T*. Combined with (1 − c_i)(1 − F_i^n(T*)) = z_i and (1 − F_i^c(T*)) = 0, we must therefore have e^{−λ_i T*} = z_i. Similarly, for agent j we must have (1 − F_j(t)) = (1 − F_j(0)) e^{−λ_j t}. Notice that F_j^c(0) = 1 implies that (1 − c_j)(1 − F_j^n(t)) = 1 − F_j(t). Combined with (1 − c_j)(1 − F_j^n(T*)) = z_j we therefore get (1 − F_j(0)) e^{−λ_j T*} = z_j. Clearly, if T_j^* = −ln(z_j)/λ_j < −ln(z_i)/λ_i = T_i^*, we have an immediate contradiction. Otherwise (1 − F_j(0)) = z_j e^{λ_j T*} = z_j z_i^{−λ_j/λ_i}. But in which case, any such equilibrium has exactly the same distribution of outcomes as the Baseline equilibrium! An aside: such an equilibrium "involving" the mediator does exist (e.g. let c_i = 1 − z_i and c_j ≤ 1 − z_j z_i^{−λ_j/λ_i} when T_i^* ≤ T_j^*). □

Proof of Proposition 3. Suppose there is an equilibrium σ = (σ1 , σ2 ). In this setup, I refer to agent j who has confessed but not yet conceded, as a confessing agent. Let Ai = {(s, t) : Ui (s, t) = maxv,w Ui (v, w)}. Since σ is an equilibrium, Ai , ∅. Finally, define T id = inf{t : Fid (t) = 1 − zi } and T ∗ = max{T 1d , T 2d }. (a) We must have T id = T ∗ < ∞. This follows for the reasons as outlined in the proof of Proposition 2, point (a). We have T id = T dj , because if an agent knows she faces a behavioral opponent she will concede immediately. We have T dj < ∞ because if the agent does not concede at some t, she must expect her opponent to be concede soon and thus must eventually become convinced that her opponent is behavioral. (b) If Fid jumps at t ∈ (0, T ∗ ] then F cj is constant on [t − ε, t] for some ε > 0. This follows because if agent j has not confessed before t − ε, she would strictly increase her payoff by confessing an instant after t compared to slightly before (giving her u j (α j ) rather than u j (m j ) with positive probability). (c) If Fic jumps at t = (0, T ∗ ] then F dj is constant on [t − ε, t) for some ε > 0. This follows because agent j would prefer to concede an instant after t rather than slightly before (to get u j (m j ) rather than u j (1 − αi ) with probability Fic (t) − sup s t0 . If Fic (t00 ) = Fic (t0 ) then either F dj (t0 ) = F dj (t00 ) or F cj (t) = F dj (t) for t ∈ [t0 , t00 ). Suppose not, then F dj (t0 ) < F dj (t00 ) and F dj (t000 ) < F cj (t000 ) for some t000 ∈ [t0 , t00 ). I first claim that this implies F dj (t000 ) = F dj (t00 ). Otherwise, there is some s ≤ t000 and some t ∈ (t000 , t00 ] such that (s, t) ∈ A j . Given that Fic (t00 ) = Fic (t000 )


the alternative strategy of conceding at 12 (t000 +t) and still confessing at s is more profitable (it moves the payoff u j (1 − αi ) forward in time). Define tˇ = sup{t : Fic (t) = Fic (t0 )}. I claim that Fic is continuous at tˇ. Using the above argument again, we have F dj (t000 ) = sup s F dj (t000 ), it must be that if i would get a higher payoff by confessing at 21 (tˇ + t000 ) instead of tˆ without changing her concession time (this brings the payoff ui (mi ) forward in time). Hence, we must have F dj (t000 ) = F dj (tˇ). But in which case, confessing an instant after tˇ cannot be optimal for i either, contradicting the definition of supremum tˇ. (e) Let T ∗ ≥ t00 > t0 . If Fid (t00 ) = Fid (t0 ) and Fic (t0 ) > Fid (t0 ) then F cj (t0 ) = F cj (t00 ) Suppose not so that F cj (t0 ) < F cj (t00 ). Then there exists (s, t) ∈ A j such that s ∈ (t0 , t00 ]. However, the alternative plan of confessing at sˆ = 21 (t0 +s) while still conceding at t would give j a higher payoff as then, with probability (Fic (t0 )−Fid (t0 )) > 0, she gets the payoff ui (mi ) at an earlier date, without affecting the distribution of payoffs from concession. (f) There is no jump in Fid at t ∈ (0, T ∗ ]. Suppose not, then by (b) F cj is constant on [t − ε, t] for some ε > 0. Hence, by (d) either Fid (t) = Fid (t − ε) (a direct contradiction) or Fic (s) = Fid (s) for s ∈ [t − ε, t). It must then be that Fic also jumps at t, because we must have sup s 0 (the same ε without loss of generality). Given that Fic and Fid jump at t, we must have (t, t) ∈ Ai . However, the alternative strategy for i of both confessing and then conceding at tˆ = t − 2ε delivers strictly higher expected profits as she gets the payoffs (F cj (t − ε) − F dj (t − ε))ui (mi ) and (1 − F cj (t − ε))ui (1 − α j ) > 0 at an earlier date, without affecting the distribution of other payoffs. (g) If Fid is continuous at s ≤ t then Ui (s, t) is continuous at s, and if Fic is continuous at t then Ui (s, t) is continuous at t. This follows from how Ui (s, t) is defined. For claims (h)-(l) suppose that F1c (t0 ) > F1d (t0 ) for some t0 ∈ [0, ∞) (symmetric arguments apply if F2c (t0 ) > F2d (t0 )). Define t1 = inf{t ≥ t0 : F1c (t) = F1d (t)} and t1 = inf{t : F1c (s) < F1d (s) ∀s ∈ [t, t0 ]}. By (b), the continuity of F1d , must have t1 > t0 ≥ t1 and F1c (t1 ) = F1d (t1 ). We then have F1c (t) > F1d (t) for t ∈ (t1 , t1 ). Let t1 ≥ t000 > t00 > t1 . (h) We must have F2c (t000 ) > F2c (t00 ). Suppose not. Let tˇ2 := sup{t : F2c (t) = F2c (t00 )} ≥ t000 . I first claim that either F1d (t00 ) = F1d (tˇ2 ) or F1c (t) = F1d (t) for t ∈ [t00 , tˇ2 ). Suppose not then F1d (t00 ) < F1d (tˇ2 ) and there is some t ∈ [t00 , tˇ2 ) such that F1d (t) < F1c (t). By (f), the continuity of F1d , we must then have F1d (t00 ) < F1d (tˇ2 − ε) for all ε > 0 sufficiently small. We must also have F1d (t) < F1c (t) for some t ∈ [t00 , tˇ2 − ε) for ε sufficiently small (because F1d (t) < F1c (t) for some t ∈ [t00 , tˇ2 )). Hence, we have F2c (t00 ) = F2c (tˇ2 − ε), F1d (t) < F1c (t) for some t ∈ [t00 , tˇ2 − ε) and F1d (t) < F1c (t) for all t ∈ [t00 , tˇ2 − ε), which contradicts claim (d). By assumption we have F1c (t) > F1d (t) for t ∈ (t1 , t00 ] so we must have F1d (t00 ) = F1d (tˇ2 ). This implies t1 > tˇ2 because F1d (tˇ2 ) = F1d (t00 ) < F1c (t00 ) ≤ F1c (tˇ2 ) and F1c (t1 ) = F1d (t1 ). I next claim that (tˇ2 , t) < A2 for any t ≥ tˇ2 . 
To see this, notice that 2’s alternative plan of confessing at tˆ = 21 (tˇ2 + t00 ) and conceding at t must deliver strictly larger payoffs by bringing forward the payoff u2 (m2 ), with probability F1c (tˆ) − F1d (tˆ) > 0, without affecting the distribution of payoffs from concession. Given (f), the continuity of F1d , this argument similarly implies that confessing an instant after tˇ2 cannot be optimal plan for agent 2 either, contradicting the definition of the supremum tˇ2 . (i) We must have F1d (t000 ) > F1d (t00 ). Suppose not, then let tˇ1 = sup{t : F1d (t) = F1d (t00 )} ≥ t000 . Given (f) we have F1d (tˇ1 ) = F1d (t00 ). Given F1d (t00 ) < F1c (t00 ) we must have tˇ1 < t1 . Hence, by (e) we must have F2c (tˇ1 ) = F2c (t00 ) which contradicts (h), that F2c is increasing on (t1 , t1 ]. (j) We must have F2d (t000 ) > F2d (t00 ). Suppose not then F2d (t000 ) = F2d (t00 ). Given that F2c is increasing on the interval [t00 , t000 ] by (h), we must have F2c (t) > F2d (t) for t ∈ (t00 , t000 ]. But then switching the notation for 1 and 2, (h) implies F1c (t000 ) > F1c (t00 ) and (i) then implies F2d (t000 ) > F2d (t00 ), a contradiction. (k) We must have F1c (t000 ) > F1c (t00 ). Suppose not, and so F1c (t000 ) = F1c (t00 ). Let tˇ1 = inf{t : F1c (t) = F1c (t00 )}. Clearly, we have tˇ1 ≥ t1 . By (d), we must have either F2d (t000 ) = F2d (tˇ1 ), which contradicts (j), or F2c (t) = F2d (t)


for t ∈ [tˇ1 , t000 ). I claim that (tˇ1 , t) < A1 for any t ≥ t000 . The alternative strategy of confessing at t000 while still conceding at t gives agent 1 a strictly higher payoff of u1 (α1 ) instead of m1 from the positive concession of agent 2 on the interval [tˇ1 , t000 ). That is:

U1 (t000 , t) − U1 (tˇ1 , t) ≥

Z tˇ1 ≤s≤t000

(u1 (α1 ) − u1 (m1 ))e−r1 s dF2c (s)

000

≥ e−r1 t (u1 (α1 ) − u1 (m1 ))(sup F2c (s) − F2c (tˇ1 )) > 0 s
where the first inequality follows from F2c (t) = F2d (t) on [tˇ1 , t000 ), the second from t000 ≥ s ∈ [tˇ1 , t000 ] and the third from (h). For the same reason, confessing an instant before tˇ1 cannot be optimal either. This either contradicts the definition of tˇ1 or contradicts F1c (t0 ) > F1d (t0 ). (l) Fic is continuous on (t1 , t1 ]. If Fic did jump at t ∈ (t1 , t1 ] then by (c), F dj is constant on (t − ε, t) for some ε > 0, contradicting either (i) or (j). We are almost done. Because F1c , F1d are increasing on (t1 , t1 ) and F1d (t) < F1c (t) in this interval, it follows that there is some s0 ∈ (t1 , t1 ) such that A1 is dense in the set {(s0 , t) : t, ∈ [s0 , t1 ]}. Notice that regardless of whether the agent concedes at s0 or s > s0 , the agent faces the same continuation payoffs conditional on not having conceded before s. From the continuity of F2c on (t1 , t1 ] it follows that U1 (s0 , t) is constant on [s0 , t1 ], and hence differentiable with (s0 ,t) respect to t with zero partial derivative, U1 ∂t = 0. Rearranging this zero derivative condition gives: f2c (t) r1 u1 (1 − α2 ) = λc2 := 1 − F2c (t) m1 − u1 (1 − α2 ) c

For t ∈ [t1 , t1 ], this implies (1 − F2c (s)) = φc2 e−λ2 (s−t1 ) where φcj = (1 − F cj (t1 )). By the same reasoning there must be some s00 ∈ (t1 , t1 ) such that A1 is dense in the set {(s, s00 ) : s, ∈ [t1 , s00 ]}. The continuity of F2d on (t1 , t1 ] then implies that U1 (s, s00 ) is constant on (t1 , s00 ], and hence differentiable with respect (s,s00 ) to s with zero partial derivative, ∂U1∂s = 0. Rearranging this zero derivative condition gives:     c f1d (s) = λd2 (1 − F2d (s)) − (1 − F2c (s)) = λd2 (1 − F2d (s)) − φc2 e−λ2 (s−t1 ) where λd2 :=

r1 u1 (m1 ) u1 (α1 )−u1 (m1 ) .

Solving this linear ODE gives:

(1 −

F2d (s))

  d −λd (s−t ) d c −λc (s−t ) −λd (s−t ) c   m2 e 2 1 + φ2 θ2 (e 2 1 − e 2 1 ) if λ2 , λ2 =  d  (md + λd φc (s − t ))e−λ2 (s−t1 ) if λd = λc 2

where: θ2 :=

λd2 d λ2 −λc2

2 2

1

2

2

and φd2 = (1 − F2d (t1 )) ≥ (1 − F2c (t1 )) = φc2 . Define the gap between F2c and F2d as d2 (s) =

F2c (s) − F2d (s), and take transformations of this gap to give: eλ2 (s−t1 ) φd2 − θ2 φc2 d c = + e(λ2 −λ2 )(s−t1 ) d2 (s) θ2 − 1 θ2 − 1 c eλ2 (s−t1 ) θ2 − 1 d c d2 (s) d =e(λ2 −λ2 )(s−t1 ) + d c φ2 − θ2 φ2 φ2 − θ2 φc2 d

d2 (s)eλ2 (s−t1 ) =φd2 − φc2 + λd2 φc2 (s − t1 ) d

if

λd2 > λc2

if

λd2 < λc2

if

λd2 = λc2 λc2 λd2 −λc2 from φd2

I claim that each of these transformations is positive. Notice that θ2 − 1 = φd2 − θ2 φc2 ≥ −φc2

λc2 d λ2 −λc2

> 0 when λd2 < λc2 , where the first inequality follows


> 0 when λd2 > λc2 . Similarly ≥ φc2 . Each to the transformed

gaps is strictly increasing in s, implying that d2 (s) > 0 for s ∈ (t1 , t1 ]. Define t2 = inf{t > t1 : F2c (t) = F2d (t)}. We can now repeat the above arguments with the roles of agent 1 and 2 reverse to find that d1 (s) > 0 for s ∈ (t1 , t2 ]. Let t = min{t1 , t2 }. Suppose t < ∞, then we have an immediate contradiction to di (t) > 0. On the other hand t = ∞ contradicts (a) that T ∗ < ∞. We then must have Fic (t) = Fid (t) for t ∈ [0, ∞). Given this, the unique equilibrium must match that of the Baseline model by standard arguments (see AG).  Proof of Proposition 4. First observe that:     Ui∗c = (1 − z j ) bui (mi ) + (1 − b)H cj (0)ui (αi ) + z j + (1 − z j )(1 − b)(1 − H cj (0)) ui (1 − α j ) because agent i is indifferent to conceding an instant after time 0. This in turn implies: ∗

Z s
e−ri s ui (αi )dH cj (s) = ui (1 − α j ) + H cj (0)(ui (αi ) − ui (1 − α j )) +

z j (1 − e−ri T )ui (1 − α j ) (1 − z j )(1 − b)

(10)

And so, Qi reduces to: ! ∗ z j (1 − e−ri T )ui (1 − α j ) c Qi = ui (mi ) − ui (1 − α j ) + + H j (0)(ui (αi ) − ui (1 − α j )) (1 − z j )(1 − b) Suppose that T ∗ = T j ≤ T i , and hence H cj (0) = 0. Substituting in for T ∗ and Hic (0) gives:  zj 1 − Qi =ui (mi ) − ui (1 − α j ) −    zi 1 −

Q j =u j (m j ) − u j (α j ) −

zj 1−(1−z j )b

 λri ! j

ui (1 − α j )

(1 − z j )(1 − b)  λr j  j  zj  u (1 − α ) i  j 1−(1−z j )b (1 − zi )(1 − b)

+ (u j (α j ) − u j (1 − αi ))

  − λλi   j zj  zi  1−(1−z j )b − 1 (1 − zi )(1 − b)

Define mi as the mediation share that causes Qi = 0.     λri !   j zj ui (1 − α j )  z j 1 − 1−(1−z j )b     mi := u−1 ui (1 − α j ) + i   (1 − z j )(1 − b)   Notice that mi → 1 − α j as z j → 0. Setting m j = 1 − mi ) we then have:

Q j u j (1 − mi ) − u j (α j ) = − zi zi Notice that

u j (1−mi )−u j (α j ) zi

≥K

   1 −

zj 1−(1−z j )b

 λr j  j   u (1 − α ) i  j

(1 − z j )(1 − b)

u j (1−mi )−u j (α j ) zj

+ (ui (αi ) − ui (1 − α j ))

  

zj 1−(1−z j )b

− λλi

j

  − 1

(1 − zi )(1 − b)

(11)

by assumption. Taking the limit of the right hand side as z j → 0 using

l’Hopital’s rule and the inverse function theorem gives limK Q

u j (1−mi )−u j (α j ) zj

u0 (α j )

= −K u0 (1−αj i )(1−b) > −∞. This ensures i

that lim zij = ∞ as the final expression in equation 11 explodes as z j → 0. Hence, there exists z0 > 0 such that if z j ≤ z0 we must have Q j ≥ 0 and Qi ≥ 0 and so there is an P1 equilibrium. It remains to show that such equilibrium can strictly improve the payoff of both agents. If λ j ≥ λi and z j ≥ zi then clearly T i ≥ T j in any P1 equilibrium and in the Baseline equilibrium. Alternatively, suppose that λ j > λi (and

31

possibly z j < zi ). Let z be such that for z j ≥ z we have 00

2

zj 1 − (1 − z j )b

!λi



zj 1−(1−z j )b

 λλi −1 j

≥ K. This implies:

! ! ! Kz j Kz j zi λj ≥ ≥K ≥ 1 − (1 − z j )b 1 − (1 − Kz j )b 1 − (1 − zi )b ! ! zj 1 1 zi ≤ − ln = Ti T j = − ln λj 1 − (1 − z j )b λi 1 − (1 − zi )b

The first inequality on the first line is directly implied, the second follows because (1 − b)(1 − z j ) + z j = (1 − Kz j )(1 − b) + Kz j − (K − 1)z j b ≤ (1 − Kz j )(1 − b) + Kz j , the third because Kz j ≥ zi . The second line stating T j ≤ T i is then simply a rearrangement of the inequality of the first and final term on line one. The bound immediately λi λ

also ensures that z j j

−1

≥ K and so in the Baseline equilibrium we must also have T j ≤ T i .

Let z j ≤ min{z0 , z00 , 12 }, then the payoff to player i in the Baseline equilibrium is ui (1 − α j ). To ensure Qi > 0 we must certainly have ui (mi ) > ui (1 − α j ) and hence Ui∗c > ui (1 − α j ). In the Baseline equilibrium j’s payoff is λ

− λi

U Bj := u j (αi ) − (u j (α j ) − u j (1 − αi ))zi z j j . We need to compare this to her payoff in a P1 equilibrium: U ∗c j

zj = (1 − zi )bu j (m j ) + (1 − b(1 − zi ))ui (α j ) − (ui (αi ) − ui (1 − α j ))zi 1 − (1 − z j )b

!− λλi

j

Notice that B U ∗c j − Uj

zi

 !− λλi   − λi j  zj  λj  =(1 − zi )b + (ui (αi ) − ui (1 − α j )) z j −  zi 1 − (1 − z j )b  !− λλi  λ u j (1 − mi ) − ui (α j )  − λi  j  2  j  ≥K + (ui (αi ) − ui (1 − α j ))z j 1 −  zj 2−b u j (1 − mi ) − ui (α j )

where the second line follows from (1 − zi )b is equivalent to 2 − b ≥ 2(1 − (1 − z j )b)).

u j (1−mi )−ui (α j ) zi

≥K

u j (1−mi )−ui (α j ) zj

and

zj 1−(1−z j )b

2 ≥ z j 2−b when z j ≤

1 2

(the

λ

The final expression on the second line explodes as z j → 0 because



 i 2 − λj 2−b

< 1. We previously established that

u j (1−mi )−ui (α j ) K zj

the limit of as z j → 0 is finite. This implies that there exists z > 0 such for z j ≤ z we have a P1 B ∗c B  equilibrium with mi = mi and U ∗c j > U j and U i > U i . QED. Proof of Proposition 5. Suppose this were not true, then there must exist some sequence of games (ri , ui , αi , zni , mn , bn ) with zn1 → 1 and a sequence of P1 equilibria in each (I suppress the subscript n in what follows for simpler notation). I first claim that any (sub)sequence of these P1 equilibria must satisfy lim T ∗ = 0. This follows immediately   z1 from the fact that T ∗ ≤ T 1 = − λ11 ln (1−z1 )(1−b)+z → 0. 1 R ∗ Notice that s 0, for all sufficiently large n we need mi > αi −ε for i = 1, 2, in order to have Qi ≥ 0. Choosing ε = α1 +α2 2 −1 we have m1 + m2 > 1, a contradiction.  Proof of Lemma 1. Hold GZ fixed. Suppose that some distribution GR , implies T R > T Z . In which case, the alternative distribution G˜ R with G˜ R (t) = GR (t) for t < T Z and G˜ R (T Z ) = 1 so that T R = T Z strictly increases U c (T Z ), while relaxing both incentive constraints. It is therefore, without loss of generality to focus on GR that imply T R = T Z . Suppose such a GR satisfies both incentive constraints, while U c (T Z ) − U c (t1 ) = δ > 0 at some t1 < T Z . We shall identify an alternative distribution Gˇ R which satisfies both incentive constraints and gives rational agents a strictly higher payoff. Define t2 = T Z if T Z < ∞ and t2 = min{t : e−rt u(1) ≤ 2δ } otherwise. Notice that we must have

32

2

U c (T Z ) − U c (t2 ) ≤ e−rt u(1) ≤

δ 2

and so U c (t2 ) − U c (t1 ) ≥

δ 2

> 0.

Next define t3 = min{t ∈ [t1 , t2 ] : U c (t2 ) − U c (t) ≤ (t2 − t) 4(t2δ−t1 ) }. This is well defined because the the right continuity of Gz and GR ensure that U c (t) is right continuous also. By construction t3 > t1 , and U(t3 ) − U(t) > δ(t3 −t) for all t ∈ [t1 , t3 ). For such t we have: 4(t2 −t1 ) U c (t3 ) − U c (t) =(1 − z)

Z s∈(t,t3 ]

e−rs u(0.5)dGR (s) + z

Z

e−rs u(1 − α)dGZ (s) s∈(t,t3 ]

    3 + e−rt u(1 − α) (1 − z)(1 − GR (t3 )) + z(1 − GZ (t3 )) − e−rt u(1 − α) (1 − z)(1 − GR (t)) + z(1 − GZ (t))     3 ≤ e−rt (GR (t3 ) − GR (t)) u(0.5) − u(1 − α j ) + (e−rt − e−rt ) (1 − z)(1 − GR (t3 )) + z(1 − GZ (t3 )) u(1 − α) Where the inequality follows from the fact that the integrals in the first line are respectively smaller than (1 − z)e−rt u(0.5)(GR (t3 ) − GR (t)) and ze−rt u(1 − α)(GZ (t3 ) − GZ (t)), and some rearrangement. Recall also that U(t3 ) − δ(t3 −t) 3 3 U(t) > 4(t 2 −t1 ) , and so dividing the right hand side of the above equation by (t − t) and taking its limit as t → t gives:     GR (t3 ) − GR (t) δ 3 3 e−rt u(0.5) − u(1 − α j ) (1 − z)limt→t3 :t 0 and t4 < t3 such that for all t ∈ [t4 , t3 ],  GR (t3 ) − GR (t) ≥ (1 − GR (t3 )) + where λm =

 z (1 − GZ (t3 )) λm (t3 − t) + ε(t3 − t). 1−z

ru(1 − α) u(0.5) − u(1 − α)

. Consider then an alternative distribution, Gˆ R . This is defined by Gˆ R (t) = GR (t) for t ≥ t3 , and the indifference condition U c (t) = U c (t3 ) for t ≤ t3 . This indifference condition implies that Gˆ Z (t) is differentiable on [0, t3 ] with   z (1 − GZ (t)) λm . It is clear that this implies the existence of some t5 < t3 such that for all g(t) = (1 − Gˆ R (t)) + 1−z t ∈ [t5 , t3 ],   ε Gˆ R (t3 ) − Gˆ R (t) ≤ (1 − z)(1 − Gˆ R (t3 )) + z(1 − GZ (t3 )) λm (t3 − t) + (t3 − t). 2 Letting t6 = max{t4 , t5 } we must then have Gˆ R (t) > GR (t) for all t ∈ [t6 , t3 ). We can now define Gˇ R (t) = Gˆ R (t) for t ≥ t6 and Gˇ R (t) = GR (t) elsewhere. This distribution implies Gˇ R (t) ≥ GR (t) for all t, and Gˇ R (t) > GR (t) for t ∈ [t6 , t3 ). This ensures that UGcˇ R (t) > UGc R (t) for all t ∈ [t6 , T Z ]. I claim that Gˇ R satisfies the Dynamic IC constraint. For t ≥ t3 we have UGcˇ R (T Z )−UGcˇ R (t) = U c (T Z )GR −UGc R (t) ≥ 0. For t ∈ [t6 , t3 ] we have UGcˇ R (T Z ) − UGcˇ R (t) = UGcˇ R (T Z ) − UGcˇ R (t3 ) ≥ 0 (recall that UGcˇ R (t) = UGcˇ R (t3 )). Finally for t < t6 we have UGcˇ R (t) = UGc R (t) and so UGcˇ R (T Z ) > UGcˇ R (t). The new distribution Gˇ R must certainly also satisfy the type IC constraints (because GZ is unchanged), but with a higher expected payoff for a rational agent. For two distributions GR and G˜ R which satisfy both IC constraints for the same GZ (and T R = T Z ), I define the partial order G˜ R % GR if G˜ R (t) ≥ GR (t) for all t ≤ T Z . I next claim that for any such arbitrary GR , we have GR∗ % GR (where GR∗ is defined in equation (7). To establish this, define a sequence of IC distributions (GRk ) as follows: let GR1 be arbitrary. Let u(GR ) = U c R (T Z )+u(GRk ) sup{UGc˜ R (T Z ) : G˜ R % GR }. Choose GRk+1 % GRk such that UGc Rk+1 (T Z ) ≥ G k+1 2 . Notice that if UGc Rk (T Z ) − c Z Rk+1 Rk UGRk (t) > 0 for some t ≤ T then by the arguments above G  G . Define GR (t) = limk GRk (t) and

33

R

R

−w G . G (t) = inf{GRk (s) : s > t}. Given that GRk (t) is increasing in k, this is well defined, and moreover GRk→ R

R

−w G I claim that G satisfies the dynamic IC constraint. Given that GRk (T Z ) = 1 for T Z < ∞, it is clear that GRk→ R c Z c Z c c implies limk UGRk (T ) = U R (T ). If t is a point of continuity of G , then it is clear that limk UGRk (t) = U R (t), G G and so U c R (T Z ) − U c R (t) = limk [UGc Rk (T Z ) − UGc Rk (t)] =≥ 0. Moreover, if t is not a point of continuity then G

G

R

U c R (t) = lim s→t+ U c R (s), and so, U c R (T Z ) − U c R (t) = U c R (T Z ) − lim s→t+ U c R (s) ≥ 0. Given G (t) ≥ GRk (t) it is G

G

G

G

G

G

R

also clear that U c R (T Z ) ≥ UGc Rk (T Z ) and so it must also satisfy the type IC constraint. This implies that G % GRk . G

R R Finally, suppose that G , GGR∗Z , then there exists some G˜ R  G so that UGc˜ R (T Z ) = U c R (T Z ) + ε for some ε > 0. G But in which case UGc Rk+1 (T Z ) ≥ UGc Rk (T Z ) + 2ε . However, given that UGc Rk (T Z ) ≤ u(α) this presents a contradiction for sufficiently large k.

 Proof of Lemma 2. If ICGZ (T Z ) > 0 and T Z < ∞, then given the right continuity of GZ , we must have ICGZ (t) ≥ ε for t ∈ [T Z − ε, T Z ] and some ε > 0. Consider the alternative distribution Gˇ Z , such that Gˇ Z (t) = GZ (t) for t < T Z − ε and Gˇ Z (t) = min{1, GZ (t) + ε0 } for t ≥ T Z − ε and some ε0 ≥ 0. Notice that for t < T Z − ε we must   Z have ICGˇ Z (t) ≥ ICGZ (t). For all t ≥ T Z − ε we have ICGˇ Z (t) ≥ ICGZ (t) − ε0 e−r(T −ε) (1 − z) (u(α) − u(1 − α)) + m Z Z εε0 r min{0, u(1 − α)eλ (T −ε) z − u(α)e−r(T −ε) (1 − z)}. Given ICGZ (t) ≥ ε, by selecting ε0 > 0 sufficiently small, we must have ICGˇ Z (t) ≥ 0. Clearly, this alternative distribution implies UGc∗ˇ Z (0) > UGc∗Z (0). Similarly, suppose that ICGZ (T Z ) ≥ ε > 0 and T Z = ∞. Notice first that UGn Z (t) − UGn Z (T Z ) ≤ e−rt u(1 − α). ˇ = G(t) for t < t0 and G(t ˇ 0 ) = 1 for some t0 < ∞. Clearly, Consider some alternative distribution Gˇ such that G(t) 0 U c∗Gˇ Z (0) > U c∗GZ (0), while UGnˇ Z (t) = UGn Z (t) for t < t0 and UGnˇ Z (t) − UGn Z (t) ≤ e−rt u(α) otherwise. Hence, for t < t0 we have U c∗Gˇ Z (0) − UGnˇ Z (t) > 0 while for t ≥ t0 we have 0

U c∗Gˇ Z (0) − UGnˇ Z (t) > U c∗GZ (0) − UGn Z (t) − (UGn Z (t) − UGnˇ Z (t) ≥ ε − e−rt (u(1 − α) + u(α). Clearly, choosing t0 < ∞ sufficiently large we must get ICGˇ Z (t) ≥ 0. Finally, we can replicate arguments in the proof of Lemma 1. Given any GZ and Gˆ Z satisfying min{inf t ICGZ (t), inf t ICGˇ Z (t)} ≥ 0, let Gˆ Z % GZ if Gˆ Z (t) ≥ GZ (t) for all t. Let u(GZ ) = sup{UGc∗ˆ Z (0) : Gˆ Z % GZ }. We define a sequence (GZk ) where GZ1 such that inf t ICGZ1 (t) ≥ 0 and ICGZ1 (T Z ) > 0 is arbitrary. Choose GZk+1 % GZk such that UGc∗Zk+1 (0) ≥

U c∗Z (0)+u(GZk ) G k

2

and implies T Z < ∞. Notice that if ICGZk (T Z ) > 0 then by the arguments above Z

GZk+1  GZk . Define GZ (t) = limk GZk (t) and G (t) = inf{GZk (s) : s > t}. Given that GZk (t) is increasing in k, this is Z −w G . well defined, and moreover GZk→ R

I claim that inf t ICGZ (t) ≥ 0. If t is a point of continuity of G , then it is clear that limk ICGZk (t) = ICGZ (t) ≥ 0. If t Z

is not a point of continuity then ICGZ (t) = lim s→t+ ICGZ (t) ≥ 0. This implies that G % GZk . R Finally, suppose that ICGZ (T Z ) > 0, then there exists some Gˆ R  G such that UGc∗ˆ Z (T Z ) ≥ U c∗Z (T Z ) + ε for G some ε > 0. But in which case UGc∗Zk+1 (0) ≥ UGc∗Zk (0) + 2ε . However, given that UGc∗Zk (T Z ) ≤ u(α) this presents a contradiction for sufficiently large k.

 h i Proof of Lemma 3. If ICGZ (t0 ) > 0 for some t0 ∈ tˆ, T Z , then we first find an alternative distribution Gˆ Z which is still incentive compatible but has Gˆ Z (t) ≥ GZ (t). Let Gˆ Z (t) = GZ (t) if t < t0 and Gˆ Z (t) = max{GZ (t0 ) + ε, GZ (t)} otherwise, for some ε > 0. Notice that Gˆ Z (t) ≥ GZ (t) for all t and Gˆ Z (t) > GZ (t) for t ∈ [t0 , t0 + ) for some  > 0 and so UGc∗ˆ Z (0) > UGc∗Z (0). Clearly, ICGˆ Z (t0 ) is continuous in ε, so that for sufficiently small ε > 0 we still have ICGˆ Z (t0 ) > 0. If Gˆ Z (t) = GZ (t) then because the final the integrand in equation (8) is always positive for

34

s ≥ t0 where Gˆ Z (s) ≥ GZ (s), we must have ICGˆ Z (t) ≥ ICGZ (t) ≥ 0. If Gˆ Z (t) = Gˆ Z (t0 ) > GZ (t), however, then ∂IC (t) ICGˆ Z (t) ≥ ICGˆ Z (t0 ) > 0 (a larger t simply delays payoffs, Gt Z = re−rt zu(1 − α) for t ∈ [t0 , t0 + ).. Z

Replicating the arguments in the final three paragraphs of Lemma 2 then establishes that a distribution G exists h i such that min s ICGZ (s) ≥ 0 and ICGZ (t) = 0 for t ∈ tˆ, T Z , such that U c∗Z (0) > UGc∗Z (0).  G

Proof of Lemma 4. As argued in the text, we can restrict attention to the reduced problem of finding the minimum Z Z ˆ ˆ T Z such that IC(T ) ≥ 0. Recall that for risk neutral agents we have IC(T ) = 0 for T Z = − λ1 ln(z). Taking the Z ˆ derivative of IC(T ) we get Z TZ Z ˆ z2 d IC(T ) Z m Z = − ru(1 − α) λeλT +(λ −λ)s ds + zru(1 − α)eλT dT Z 1−z 0 z2 λ m Z Z Z =− ru(1 − α) m (eλ T − eλT ) + zru(1 − α)eλT 1−z λ −λ Z ˆ dIC(0) d IC(T ) −λT Z λ 1 is λm −λ = 1. And so dT Z |T Z =− λ ln(z) = 0. Finally, notice that dT Z e Z ˆ d IC(T ) −1 Z Z ˆ decreasing in T Z and so dT Z > 0 for T Z < −1 λ ln(z). Hence, we must have IC(T ) < 0 whenever T < λ ln(z). 1 Z∗ R∗ Z In the OSMP, therefore, we must have T = − λ ln(z), so that G and G correspond exactly to the Baseline

When u(x) = x we have λm = 2λ and so



equilibrium.

Proof of Lemma 5. Given Observation ??, we only need to contend with the risk neutral case. I do not identify Z an OSMP here, but merely consider a distribution which works. Consider a distribution GZt0 such   that Gt0 (t) = 0 for t < t0 , Gz (t0 ) =

0

λ(ert −1) r(1−z)

and GZ (t) =

1−zeλ(T 1−z

Z −t)

(r+λ)−λert rz

for t ∈ [t0 , T Z ] where T Z = t0 − λ1 ln

0

. This distribution

ensures that an agent who confesses is exactly indifferent to subsequently conceding at 0 or at any t ∈ [t0 , T Z ], for an expected payoff of u(1 − α). Notice that t0 = 0 implies T Z = − λ1 ln(z), and so this corresponds to the distribution Z d2 T Z 0 0 in the Baseline equilibrium. Furthermore, notice that and so dT dt0 |t =0 = 0, while (dt0 )2 |t =0 = −(r + λ). We want to show that GGR∗Z (0) > 0 for t0 > 0 sufficiently small. In this case we have t0

GGR∗Z (0) t0 and so (1 − z)2 z

z =1− 1−z

dGGR∗Z (0) t0

dt0

t0

Z

m λm s

λ e

0



m



ze

 z 2 Z T Z Z m m ds − eλT +(λ −λ)s − eλ s ds λm 0 1−z t

λT Z +(λm −λ)t0

−e

λm t0



dT Z − 0 zλλm dt

Risk neutrality implies λm = 2λ and so it is readily established that dT Z dt0

=1−

(r+λ)t0 −λT Z

e

z

(1−z)2 z

TZ

Z

dGR∗Z

G0 t

dt0

eλT

Z

+(λm −λ)s

ds

(12)

t0

|t0 = 0 = 0. We can substitute in for

into equation 12 and then take the second derivative:

! Z TZ d2GGR∗Z d2 T Z (1 − z)2 dT Z Z m t0 m 2 λm t0 m λT Z +(λm −λ)t0 m m =λ ze λ − λ + λ 0 − (λ ) e − 0 2 zλλ eλT +(λ −λ)s ds 0 z dt d(t0 )2 (dt ) t    Z TZ Z Z    dT dT Z m 0 m Z Z m    m λT +(λ −λ)t λ T λT +(λ −λ)s − 0 zλλ −e + 0 e +λ e ds 0 dt dt t

Evaluating at t0 = 0 we get: (1 − z)2

d2GR (0) (λ + r)λλm (1 − z) rλm (α − z) = λm z(λm − λ) − z(λm )2 + = >0 0 2 λm − λ 2α − 1 d(t ) t0 =0

Where the second equality λm = 2λ and λ + r = we have

GR∗Z G0 t d(t0 )

rα 2α−1

when u(x) = x. And so for some t0 > 0 sufficiently small,

> 0 and so GGR∗Z (0) > 0.



t0

35

h i Proof of Lemma 6. By lemmas 2 and 3 we restrict attention to GZ such that ICG˜ Z (t) = 0 for t ∈ min{tˆ, T Z }, T Z where tˆ > 0 and inf t ICG˜ Z (t) = 0. Call such GZ a smooth-ending distribution. Given T Z it must be that ICGZ Z Z (T Z ) ≥ 0 so that t∗ (T Z ) is defined. Let t = inf{t : GZ (t) > 0} and t = inf{t : ICGZ (s) = 0 for s ∈ [t, T Z ]}. min{T ,tˆ],T

I claim that t < t∗ (T Z ) and t > t∗ (T Z ). Suppose that t ≥ t∗ (T Z ) then GZt∗ (T Z ),T Z (t) ≥ GZ (t) for all t and so ICGZ (T Z ) < 0, a contradiction. Similarly, if t ≤ t∗ (T Z ) the GZt∗ (T Z ),T Z (t) ≤ GZ (t) for all t and so ICGZ (T Z ) > 0, again a contradiction. I first define an alternative distribution Gˇ Z , which generates higher payoffs. For t < t let Gˇ Z (t) = 0. Define Gˇ Z (t) = 0 for Gˇ Z (t) = min{GZ (t) − ε1 , 0} for t ≤ t +  where ε1 ≥ 0 and  > 0. Suppose there exists t0 ∈ (t∗ (T Z ), t] such that sup s 0 and GZ (t0 ) > GZ (t) for t < t0 , and define Gˇ Z (t) = max{G(t) + ε2 , G(t0 )} for t ∈ [t0 − ε, t0 ). Let Gˇ Z (t) = GZ (t) elsewhere. For t ≥ t0 we have: ICGˆ Z (t) − ICGZ (t) =

t+ε

Z

  m (Gˆ Z − GZ (s))r u(1 − α)eλ s z − u(α)e−rs (1 − z) ds

t

Z

t0



  m (Gˆ Z − GZ (s))r u(1 − α)eλ s z − u(α)e−rs (1 − z) ds

t0 −ε

This is continuous and strictly increasing in ε1 and −ε2 , is positive for ε2 = 0 and negative for ε1 = 0. For all sufficiently small ε1 therefore, there is a uniquely defined ε2 such that ICGˆ Z (T Z ) − ICGZ (T Z ) = 0. This leaves Gˆ Z as a function of ε1 and . For ICGˆ Z (T Z ) − ICGZ (T Z ) = 0 as ε1 → 0 and then  → 0 we must have: lim lim

→0 ε1 →0

   ICGˆ Z (T Z ) − ICGZ (T Z ) ε2  m m 0 0 = −r u(1 − α)eλ t z − u(α)e−rt (1 − z) +lim lim 1 r u(1 − α)eλ t z − u(α)e−rt (1 − z) = 0 1 →0 ε1 →0 ε ε

Notice that for sufficiently small , we have ICGZ (t) ≥ δ for t ∈ [t0 − , t0 ) and some δ > 0 and so for such t, ICGˆ Z (t) > 0 for all sufficiently small ε2 . For t < t0 − ε we have UGn Z (t) ≤ UGnˇ Z (t), hence so long as we can show R∗ GGR∗ ˇ Z (0) > GGˇ Z (0) then all time t type incentive constraints will be satisfied. To that end, notice that: R∗ GGR∗ ˆ Z (0) − GGZ (0) =

lim lim lim

→0 ε1 →0 ε1 →0

R∗ GGR∗ ˆ Z (0) − GGZ (0) 1 − z

ε1

zλm

TZ

Z 0

λm eλ

m

s

z (Gˆ Z (s) − GZ (s))ds 1−z

ε2 m lim lim 1 − eλ t ) →0 ε1 →0 ε     m 0 m m m 0 0 eλ t u(1 − α)eλ t z − u(α)e−rt (1 − z) − eλ t u(1 − α)eλ t z − u(α)e−rt (1 − z)

=eλ

m 0

=

t

u(1 − α)eλm t0 z − u(α)e−rt0 (1 − z) m 0 u(α)(1 − z)e(λ −r)t (e−r(t −t) − eλ (t −t) ) >0 = u(1 − α)eλm t0 z − u(α)e−rt0 (1 − z) 0

m

Where the final two lines hold for t0 < tˆ, and the inequality in the final line follows because the denominator is 2 negative for t0 < tˆ. If t0 = tˆ, on the other hand then we must have lim→0 limε1 →0 εε1 = ∞ and so the second line R∗ must certainly be strictly positive. The implication of this is that for sufficiently small ε1 and , GGR∗ ˆ Z (0) > GGZ (0). Finally, we can modify the arguments of Lemma 2. For any smooth-ending distributions, GZ and Gˆ Z with the same T Z , let Gˆ Z % GZ if Gˆ Z (t) ≥ GZ (t) for all t ≥ t∗ (T Z ) and Gˆ Z (t) ≤ GZ (t) for all t < t∗ (T Z ). Let u(GZ ) = sup{UGc∗ˆ Z (0) : Gˆ Z % GZ }. We define a sequence (GZk ) where GZ1 is an arbitrary smooth-ending distribution. Choose GZk+1 % GZk such that UGc∗Zk+1 (0) ≥

U c∗Z (0)+u(GZk ) G k

2 Z

and implies T Z < ∞. Notice that if GZk , GZt∗ (T Z ),T Z then by the arguments Z

above GZk+1  GZk . Define G (t) = limk GZk (t) and G (t) = inf{GZk (s) : s > t}. Given that GZk (t) is increasing in k

36

Z

−w G . for t ≥ t∗ (T Z ) and decreasing in k for t < t∗ (T Z ), this is well defined, and moreover GZk→ Z

Following exactly the arguments in Lemma 2 it is clear that G satisfies all time t type incentive constraints and Z Z so is a smooth ending distribution itself with G % GZk . We have argued that if G , GZt∗ (T Z ),T Z then there exists R

some Gˆ R  G such that UGc∗ˆ Z (T Z ) ≥ U c∗Z (T Z ) + ε for some ε > 0. But in which case UGc∗Zk+1 (0) ≥ UGc∗Zk (0) + 2ε . G However, given that UGc∗Zk (T Z ) ≤ u(α) this presents a contradiction for sufficiently large k.  Proof of Lemma 7. We can restrict attention to the case when tˇ > 0. First notice that for t < T Z , we have: ∂ICGZ Z (T Z )

=−

t,T

∂t

1 − zeλ(T 1−z

Z

−t)

r(zu(1 − α)eλ t − (1 − z)u(α)e−rt ) m

which is strictly positive for t < tˆ. Next, notice that for t ≤ T Z we have: ∂ICGZ Z (T Z ) t,T

∂T Z ∂2 ICGZ Z (T Z ) t,T

∂(T Z )2

Z

=rzu(1 − α)e−rT − λr

TZ

Z

eλ(T

Z

−s)

zu(1 − α)eλ s − (1 − z)u(α)e−rs ds m

t

  Z m Z Z = − r2 zu(1 − α)e−rT − λr zu(1 − α)eλ T − (1 − z)u(α)e−rT − λ2 r

Z

TZ

eλ(T

Z

−s)

zu(1 − α)eλ s − (1 − z)u(α)e−rs ds m

t

For t = tˆ < T Z , it is clear that this second derivative is strictly negative. Also notice that: dICGZ Z Z (T Z ) T ,T

dT Z

  Z m Z Z = rzu(1 − α)e−rT − r zu(1 − α)eλ T − (1 − z)u(α)e−rT

which is strictly positive for T Z ≤ tˆ. Combining these facts with ICGZ

0,T Z

exists T Z such that ICGZ

min{tˆ,T Z },T Z

(T Z )

≥ 0 for T Z = − λ1 ln(z) implies that there

(T Z ) ≥ 0 for all T Z ∈ [T Z , − λ1 ln(z)] but ICGZ

min{tˆ,T Z },T Z

< 0 for all smaller T Z . The

∗ ∗ ∗ continuity of t∗ implies that its image of [T Z , − λ1 ln(z)] is also a compact interval [t∗ , t ] where t ≤ tˆ. On [t∗ , t ], ∗ ∗ define the continuous variable T Z (t∗ ) = min{T Z ≥ t∗ : ICGZ∗ Z ≥ 0}. Letting t˜∗ = min{t∗ ∈ [t∗ , t ] : t∗ = T Z (t∗ )}∪t , t ,T we can be further restrict attention in the OSMP problem to distributions Gt∗ ,T Z (t∗ ) where t∗ ∈ [t∗ , t˜∗ ], in particular if ∂ICGZ

(T Z )

∗ t,T Z t∗ ∈ (t˜∗ , t ] then Gt∗ ,T Z (t∗ ) (t) ≤ Gt˜∗ ,t˜∗ (t). Notice that for t ≤ T Z ≤ tˇ we have > 0, which implies t∗ (T Z ) < t˜∗ ∂T Z for T Z slightly larger than t˜∗ , and so t∗ < t˜∗ . Clearly T Z (t∗ ) is strictly increasing on [t∗ , t˜∗ ] and is implicitly defined by ICGZ∗ Z ∗ (T Z (t∗ )) = 0 where: t ,T (t )

$$\frac{dT^Z(t^*)}{dt^*}=-\frac{\partial IC_{G^Z_{t^*,T^Z(t^*)}}(T^Z)/\partial t^*}{\partial IC_{G^Z_{t^*,T^Z(t^*)}}(T^Z)/\partial T^Z}=\frac{\frac{1-ze^{\lambda(T^Z(t^*)-t^*)}}{1-z}\,r\bigl(zu(1-\alpha)e^{\lambda^m t^*}-(1-z)u(\alpha)e^{-rt^*}\bigr)}{rzu(1-\alpha)e^{-rT^Z(t^*)}-\lambda r\int_{t^*}^{T^Z(t^*)}e^{\lambda(T^Z(t^*)-s)}\bigl(zu(1-\alpha)e^{\lambda^m s}-(1-z)u(\alpha)e^{-rs}\bigr)ds}.$$
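The first equality is the implicit function theorem applied to the identity $IC_{G^Z_{t^*,T^Z(t^*)}}(T^Z(t^*))=0$ (restated here for clarity): total differentiation along the curve gives
$$\frac{\partial IC_{G^Z_{t^*,T^Z}}(T^Z)}{\partial t^*}+\frac{\partial IC_{G^Z_{t^*,T^Z}}(T^Z)}{\partial T^Z}\,\frac{dT^Z(t^*)}{dt^*}=0,$$
and the leading minus sign cancels against the minus sign in $\partial IC_{G^Z_{t,T^Z}}(T^Z)/\partial t$ computed at the start of the proof, which is why the numerator above appears without one.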

Furthermore, on this interval we have:
$$\frac{(1-z)^2}{z\lambda^m}\,\frac{dG^{R*}_{G^Z_{t^*,T^Z(t^*)}}(0)}{dt^*}=-e^{\lambda^m t^*}\bigl(1-ze^{\lambda(T^Z(t^*)-t^*)}\bigr)-\frac{dT^Z(t^*)}{dt^*}\,\lambda z\int_{t^*}^{T^Z(t^*)}e^{\lambda(T^Z(t^*)-s)+\lambda^m s}\,ds.$$
For $\tilde t^*=T^Z(\tilde t^*)$, it is clear that $\frac{dT^Z(t^*)}{dt^*}=0$ and so $\frac{dG^{R*}_{G^Z_{t^*,T^Z(t^*)}}(0)}{dt^*}<0$. This means that $G^Z_{\tilde t^*,\tilde t^*}$ cannot be optimal. □
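To make the final evaluation explicit (spelled out here for completeness, using only the expression just displayed): when $T^Z(\tilde t^*)=\tilde t^*$ the integral runs over a degenerate interval and vanishes, while $1-ze^{\lambda(T^Z(\tilde t^*)-\tilde t^*)}=1-z$, so the right-hand side reduces to
$$-e^{\lambda^m\tilde t^*}(1-z)<0,$$
matching the sign claim in the final sentence of the proof.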



Waiver of Mediation Communication Privileges.pdf. Waiver of Mediation Communication Privileges.pdf. Open. Extract. Open with. Sign In. Main menu.