Noisy Contagion Without Mutation

In Ho Lee and Ákos Valentinyi∗

Department of Economics, University of Southampton

June 1999
Abstract In a local interaction game agents play an identical stage game against their neighbors over time. For nearest neighbor interaction, it is established that, starting from a random initial configuration in which each agent has a positive probability of playing the risk dominant strategy, a sufficiently large population coordinates in the long run on the risk dominant equilibrium almost surely. Our result improves on Blume (1995), Ellison (2000), and Morris (2000) by showing that the risk dominant equilibrium spreads to the entire population in a two dimensional lattice and without the help of mutation, as long as there is some randomness in the initial configuration. Keywords: Local interaction, contagion, risk dominance. JEL classification: C72; D83
∗ The paper was previously titled “Interactive Contagion.” We thank Berthold Herrendorf, Robin Mason, Tom Mountford, seminar participants at Berkeley, Birkbeck, Humboldt University, University of Hamburg, the editor, and especially Stephen Morris for helpful comments. The first author acknowledges financial support from an ESRC Research Fellowship. Remaining errors are ours.
1 Introduction
It is a fascinating task to explain how a convention develops in a society. The uniformity of a certain choice of behavior is surprising given that the number of agents involved is large and that there appears to be no obvious coordination mechanism. Recent game theoretic research addresses the issue relying on the tools developed in the theory of stochastic processes.1 Kandori et al. (1993), henceforth KMR, and Young (1993) establish that the risk dominant equilibrium due to Harsanyi and Selten (1988) is selected in 2 × 2 coordination games. These papers argue that a small probability of mutation (representing mistakes and experimentation) suffices for the unique selection of the risk dominant equilibrium.

In a large society, this approach may be unsatisfactory since a switch to the risk dominant equilibrium requires many simultaneous mutations. For this reason it seems safe to conclude that the initial configuration of strategies chosen by the majority of the population determines the convention for any meaningful length of time. Ellison (1993; 2000) offers a solution by considering local instead of global interaction. Since it is easier to have a few mutations occurring at different locations over time than many mutations simultaneously across the whole population, a convention will emerge in a shorter period of time in an environment with local interaction. Moreover, local interaction is also a more realistic mechanism since it is rare for agents to interact with the whole population, or even some significant fraction of it.

This paper shows that a large population with only local interaction exhibits a strong tendency to coordinate on the risk dominant equilibrium even in the absence of mutation. For the determination of the long-run equilibrium, we rely on a contagion mechanism that exploits the randomness in the initial strategy choice of agents.
The major result is that the contagion threshold is exactly 1/2 for a large population even when the probability of initially playing the risk dominant action vanishes to zero. The present approach improves previous results by Blume (1995), Ellison (2000), and Morris (2000) in a non-trivial way. These authors conclude that the contagion threshold is strictly less than 1/2 in 2-dimensional lattices. However, they consider only the noise-free mechanism where the risk dominant equilibrium propagates
1 Freidlin and Wentzell (1984) provide a good account of this theory.
from one small group of agents. Therefore, our result suggests that the contagion mechanism is much more powerful than previously thought.

Consider a local interaction game whose stage game is the 2 × 2 coordination game played by a population of agents. The agents are located on a 2-dimensional lattice and interact with their nearest neighbors. Agents are boundedly rational in that they play the myopic best response to the strategies played by their immediate neighbors. Instead of relying on mutation along the play, we assume that the initial strategy choice of the agents is random.2 Once the initial choice is made, there is no further muddling through. Since there is no noise in the choice of action, the response dynamic is deterministic. Clearly, the two strategies can initially be assigned to the agents so that the population eventually coordinates on the risk dominant one.3 The question of importance, however, is whether such a coordinated configuration occurs when individuals choose their initial strategy randomly. We show that it does almost surely in a large population.

The intuition can be explained as follows. Suppose that there is a region on the lattice where agents presently play the risk dominant strategy. Agents will continue to play the risk dominant strategy within that region. Moreover, the region will grow if there are some agents who play the risk dominant strategy immediately next to its boundary. The number of agents next to that region who are required to play the risk dominant strategy for further propagation to occur turns out to be small, and independent of the size of the region. This ensures that some coordinated configurations will occur with positive probability independently of the size of the population. Finally, the spatial homogeneity of the lattice implies that coordinated configurations are simply translations of each other. A large population then ensures that one of these occurs almost surely.
Earlier papers show that, first, the risk dominant action spreads in one dimension, but
2 The random initial strategy choice is a natural initial condition. We do not want to speculate about the source of this noise because our results are not sensitive to it. This is in contrast to the sensitivity of the results in other approaches, such as KMR and Young (1993), to the relative sizes of the mutation probabilities, pointed out by Bergin and Lipman (1996) and Binmore and Samuelson (1997). The sensitivity of such approaches calls for a theory of the mutations.
3 A trivial assignment is one in which all agents play the risk dominant strategy.
not in two (see Ellison (1993)); secondly, non-trivial mixed long-run equilibria exist in two dimensions, but not in one (see Blume (1995) and Anderlini and Ianni (1996)). One might be tempted to conclude that contagion relies on the absence of such non-trivial long-run equilibria, as Morris (2000) shows more generally. We show that this link between contagion and the existence of mixed equilibria is broken when we allow randomness in the initial conditions. It also appears that the intuition explained above can be applied to even more general environments, including a higher dimensional lattice and a larger interaction range. For instance, the construction of a stable team (Blume (1995)) that plays the risk dominant strategy in a larger interaction range enables us to extend our argument in a straightforward fashion to translation-invariant neighborhoods on a 2-dimensional lattice.

The paper is organized as follows. Section 2 formulates a local interaction game on the 2-dimensional lattice for nearest neighbor interaction. Section 3 contains the main result on the long-run distribution of the local interaction game for the best-response strategy dynamic. For comparison, it also demonstrates how the noise-free contagion mechanism works. Section 4 concludes.
2 The Model
2.1 A Framework of Local Interaction
There is a population of N² agents located on the two dimensional torus Λ(N) ≡ {−⌊N/2⌋, . . . , 0, . . . , ⌊(N − 1)/2⌋}², with N ≥ 1 being an integer, and where ⌊ · ⌋ denotes the integer part of the expression within the brackets. An agent with address x ∈ Λ(N) interacts with her nearest neighbors. The set of neighbors of the origin is defined by N ≡ {y : ‖y‖ = 1}, where ‖y‖ ≡ |y1| + |y2|, and the set of neighbors of agent x is given by x + N ≡ {y : ‖x − y‖ = 1}; thus the translation of N by x is denoted by x + N. There are two strategies {A, B} for each agent x ∈ Λ(N). Let s : Λ(N) −→ {A, B} be a map; then s(x) gives the state of agent x while s describes the state of the whole population. Finally
let Zt be the set of agents playing strategy A in period t,

Zt = {x : st(x) = A}.  (1)

2.2 A Coordination Game
Consider the 2 × 2 coordination game given in Table 1. We require that a > c, d > b and (a − c) > (d − b), so that both (A, A) and (B, B) are Nash equilibria and (A, A) is the risk dominant one.

        A       B
A     a, a    b, c
B     c, b    d, d

Table 1: Coordination Game
All agents play the game simultaneously in discrete time. Local interaction is captured by restricting the dependence of the payoff of each agent to the strategies played by herself and the agents in her neighborhood. Let

θt(x) ≡ |Zt ∩ (x + N)| / |x + N|,  (2)
where | · | denotes the cardinality of a set.4 In words, θt : Λ(N) −→ {0, 1/4, 1/2, 3/4, 1} gives the fraction of x’s neighbors playing strategy A. Then the payoff of agent x from playing strategy A in period t is given by

ut(x, A) = θt(x)a + (1 − θt(x))b.  (3)
Similarly, the payoff of agent x from playing strategy B in period t is given by

ut(x, B) = θt(x)c + (1 − θt(x))d.  (4)

4 For instance, |x + N| = 4.
Agents are assumed to play a myopic best response. Agent x in period t + 1 chooses

st+1(x) = arg max_{A,B} {ut(x, A), ut(x, B)},  (5)
where in the case of indifference the risk dominant strategy A is chosen.5 This implies that agent x plays strategy A in period t + 1 if

θt(x) ≥ (d − b) / ((a − c) + (d − b)) ≡ θ̂.  (6)
Since (A, A) is the risk dominant equilibrium, θ̂ < 1/2. The evolution of the population st can be described in terms of the state of agent x as

st+1(x) = A if θt(x) ≥ θ̂,  and  st+1(x) = B otherwise.  (7)
Thus, x chooses A if and only if at least a share θ̂ of her neighbors plays A.
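A minimal sketch of one synchronous update of the dynamic (7) may be helpful (Python; the dictionary representation and function names are our own, and the payoffs a = 4, c = 1, b = 0, d = 2 are an illustrative choice satisfying the assumptions, giving θ̂ = 2/5):

```python
# One synchronous step of the myopic best-response dynamic (7) on the torus:
# an agent plays A next period iff at least a share theta_hat of her four
# nearest neighbors currently plays A (ties broken in favor of A).
from fractions import Fraction

a, c, b, d = 4, 1, 0, 2                          # illustrative payoffs: a-c > d-b > 0
theta_hat = Fraction(d - b, (a - c) + (d - b))   # = 2/5 < 1/2, cf. eq. (6)

def step(state, N):
    """state maps each address on the N x N torus to 'A' or 'B'."""
    def nbrs(x):
        return [((x[0] + d1) % N, (x[1] + d2) % N)
                for d1, d2 in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return {x: 'A' if Fraction(sum(state[y] == 'A' for y in nbrs(x)), 4) >= theta_hat
            else 'B'
            for x in state}

# A lone 2x2 block of A-players: each member sees theta = 1/2 >= 2/5, so the
# block persists, but every agent bordering it sees only theta = 1/4 < 2/5,
# so the block cannot grow on its own -- the noise-free mechanism stalls here.
N = 6
s0 = {(i, j): 'A' if i < 2 and j < 2 else 'B' for i in range(N) for j in range(N)}
s1 = step(s0, N)
assert all(s1[(i, j)] == 'A' for i in range(2) for j in range(2))
assert s1 == s0     # a fixed point short of full coordination
```

The fixed-point outcome illustrates why a single small seed does not suffice under nearest neighbor interaction, which is the difficulty the rest of the paper addresses.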
3 Best Response Dynamics
3.1 Noise-Free Contagion
Previous studies of local interaction games in the absence of mutation have focused on the problem of finding conditions for the risk dominant strategy to spread out from one group of agents initially playing it. We call this propagation mechanism ‘noise-free contagion’ since randomness of any form is absent.6 The clearest example of how this mechanism works is due to Ellison (1993). He considers a local interaction system of dimension one. A finite number of agents are placed around a circle; each agent interacts with its two nearest neighbors, one on either side. If at least one of
5 This tie-breaking rule plays little role except for the knife-edge case of 1/2-dominance.
6 For the details of the mechanism, refer to Morris (2000).
the neighbors played the risk dominant strategy in a period, the agent plays the risk dominant strategy in the next. It is easy to see that two consecutive agents playing the risk dominant strategy in a period suffice for coordination on the risk dominant equilibrium by the whole population: these two agents never switch to the risk dominated strategy, and their neighbors switch to the risk dominant strategy and continue to play it forever. This result holds true for any risk dominant strategy which requires for adoption at most half of the neighbors to play it; such a strategy is termed a 1/2-dominant strategy.

As Morris (2000) shows, the previous result extends to more general environments only for thresholds strictly less than 1/2. As an example, consider a 2-dimensional torus. Agents interact with all neighbors that are located within the Euclidean distance of 1. In this case, if the risk dominant strategy is to spread from any one agent playing it initially, the threshold needs to be less than or equal to 1/4. To see the reason, suppose that two agents, who are neighbors, play the risk dominant strategy initially. They will continue to play it, while the six agents that have either of them as one of their four neighbors will switch to the risk dominant strategy. If it required more than 1/4 of the neighbors to switch to the risk dominant strategy, the propagation mechanism would stop working.7 Morris computes that the maximum threshold is n(2n+1)^(m−1) / ((2n+1)^m − 1) for a local interaction system of dimension m and a box neighborhood of interaction range n.
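Morris’s expression for the maximal noise-free threshold can be evaluated directly (a quick sketch; the function name is our own choice):

```python
# Morris's (2000) maximal contagion threshold n(2n+1)^(m-1) / ((2n+1)^m - 1)
# for noise-free contagion on an m-dimensional lattice with a box
# neighborhood of interaction range n.
from fractions import Fraction

def max_threshold(m, n):
    return Fraction(n * (2 * n + 1) ** (m - 1), (2 * n + 1) ** m - 1)

assert max_threshold(1, 1) == Fraction(1, 2)   # one dimension, two neighbors: 1/2
assert max_threshold(2, 1) == Fraction(3, 8)   # two dimensions, box of eight neighbors
assert all(max_threshold(m, 1) < Fraction(1, 2) for m in range(2, 10))
```

For every m ≥ 2 the noise-free threshold is strictly below 1/2; this is the gap the noisy mechanism of the next subsection closes.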
3.2 Noisy Contagion without Mutation
The noise-free contagion mechanism requires the possibility that a group of neighboring agents induces other agents to switch to the risk dominant strategy solely due to the contacts among them. As we show next, propagation can be achieved without relying on this “self-growing” mechanism. If we can find more than two neighboring groups of agents playing initially the risk dominant strategy so that their interactions induce further agents to switch, the propagation may continue even if each group of agents alone cannot induce the switching of neighbors. We demonstrate that such a favorable condition can be ensured if the population starts from a random initial state.8 Since the adoption process is noisy due to the initial randomness, we call this propagation mechanism ‘noisy contagion’. It is already known that propagation is most difficult for nearest neighbor interaction due to the discreteness of the lattice structure. Therefore, focusing on nearest neighbor interaction provides a particularly strong result. We shall show that the noisy contagion mechanism improves the contagion threshold from 1/4 in the noise-free case to 1/2.9

7 The importance of 1/4 in two dimensional lattices is due to Blume (1995).
Let each individual play initially strategy A with probability p, and strategy B with probability (1 − p), independently of the strategies played by the other agents in period 0. The state of the population at some time t > 0 is described by a distribution on the product space {A, B}^Λ(N), or by Zt. The main goal of our analysis is to determine which strategy is selected in the long run. In particular, we are interested in the probability, R(N, p), that the population of size N coordinates on the risk dominant strategy conditional on the initial distribution p:

R(N, p) ≡ Pr( lim_{t→∞} st(x) = A, ∀x ∈ Λ(N) | p ).  (8)
Our first proposition characterizes this probability as a function of p.

Proposition 1 For a given population size N, the probability that the population eventually coordinates on the risk dominant strategy is monotonically increasing in p.

Proof: First, we construct an initial state. Assign a random number drawn independently from the uniform distribution on the unit interval to each agent. For agent x denote this number by λ(x). Now pick a p ∈ [0, 1], and let

s0(x|p) = A if λ(x) < p,  and  s0(x|p) = B if λ(x) ≥ p,
8 We note that Blume (1995) already studied the case of random initial conditions. However, he showed that if the population is large enough, then a small initial density ensures the existence of an initial seed of agents playing A which, for a low threshold, can grow without relying on further noise in the initial configuration.

9 We use methods first suggested by van Enter (1987) and developed further by Aizenman and Lebowitz (1988) and Schonmann (1990; 1992).
for all x ∈ Λ(N). This construction ensures that agents play the risk dominant strategy with probability p. Moreover, the construction implies that if s0(x|p) = A then s0(x|p′) = A for any p′ ≥ p. That is, all agents playing A in the initial configuration generated by p play A in the initial configuration generated by any p′ ≥ p. Thus it suffices to show that if Z0(p) ⊂ Z0(p′), then R(N, p) ≤ R(N, p′), where Z0(p) is the initial configuration generated under p according to the construction. Observe that the best-response strategy dynamic is monotone in the sense that the risk dominant strategy is chosen if and only if at least a share θ̂ of the neighbors play it. This means that if we switch some agents from B to A in a given configuration, then at least as many agents adopt the risk dominant strategy in the next period under the modified configuration as under the original one. Therefore Z0(p) ⊂ Z0(p′) implies Z1(p) ⊂ Z1(p′). By an induction argument, we obtain that if Z0(p) ⊂ Z0(p′), then Zt(p) ⊂ Zt(p′) for all t ≥ 0. Consequently, R(N, p) ≤ R(N, p′), which proves our claim.
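The coupling used in the proof can be sketched directly: a single uniform draw λ(x) per agent generates the initial configuration for every p at once (Python; the seed and the lattice size are arbitrary choices of ours):

```python
# Monotone coupling from the proof of Proposition 1: each agent receives one
# uniform draw lambda(x); for any p, the agents with lambda(x) < p form Z_0(p).
# By construction Z_0(p) is nested in Z_0(p') whenever p <= p'.
import random

random.seed(0)
sites = [(i, j) for i in range(6) for j in range(6)]
lam = {x: random.random() for x in sites}

def Z0(p):
    """Initial A-players under density p, generated from the shared draws."""
    return {x for x in sites if lam[x] < p}

for p_lo, p_hi in [(0.1, 0.3), (0.3, 0.7), (0.0, 1.0)]:
    assert Z0(p_lo) <= Z0(p_hi)   # nestedness; monotonicity of the dynamic does the rest
```

Monotonicity of the best-response dynamic then carries this nestedness forward period by period, which is exactly the induction in the proof.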
Since R(N, p) is monotone in p, the key question is to find the critical initial density so that the population coordinates on the risk dominant strategy; thus, pc = inf{p : limN→∞ R(N, p) = 1}. The main result of our paper is to show that pc = 0. Although the proof is lengthy, the basic idea is simple. There is a set of configurations that eventually lead to the adoption of the risk dominant strategy. Call an element of this set a coordinated configuration. We do not intend to characterize the entire set. Instead a subset of the coordinated configurations will be identified which is relatively easy to analyze. Finally, it is shown that a coordinated configuration almost surely occurs in a large population.

First some preparatory work is needed for our proof. To simplify the problem we renormalize the original population in order to eliminate switches from A to B. Suppose that N is
even.10 Divide the torus Λ(N) into squares of side length 2:

T(y) = {x ∈ Λ(N) : xi ∈ {2yi, 2yi + 1}, i = 1, 2},  y ∈ Λ(M),  (9)
where M ≡ N/2. In the renormalized population y refers to a group of four agents called a team. Define

ηt(y) = A if st(x) = A for all x ∈ T(y),  and  ηt(y) = B otherwise.  (10)
Thus, a team is said to play A (ηt(y) = A) if and only if all team members play A. Observe that if a team plays A, none of its members ever adopts B. Moreover, a team playing B adopts A if two neighboring teams play A in two different coordinate directions. Such a propagation takes at most three periods. It follows that the inequality

4 |{y : ηt(y) = A, y ∈ Λ(N/2)}| ≤ |{x : s3t(x) = A, x ∈ Λ(N)}|  (11)
holds for all t. Let RT(M, q) be the probability that the renormalized population of size M² = (N/2)² coordinates on A given the initial distribution q, where q = p⁴. The inequality (11) implies that

R(N, p) ≥ RT(M, q) ≡ RT(N/2, p⁴).  (12)
Therefore it is sufficient to show that the renormalized population coordinates on the risk dominant strategy. We now prove our main result in the following theorem.

10 The argument applies to odd N as well. This is because if the population on Λ(N) is coordinated on A, so is the population on Λ(N + 1). To see this, suppose that all agents play A on Λ(N). Construct Λ(N + 1) by adding a row and a column of agents of length N + 1 to Λ(N) such that all new agents play B. The geometry of the torus implies that agents playing A keep playing it forever. Moreover, all agents playing B will have two neighbors playing A. Therefore, it will be the best response for all agents playing B to switch to A. That is, limN→∞ R(N, p) = 1 implies limN→∞ R(N + 1, p) = 1.
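The renormalization (9)–(10) can be sketched as follows (Python; we index agents from 0 rather than centering at the origin, purely for brevity, and the function names are ours):

```python
# Teams: partition the torus into 2x2 blocks T(y); a team plays A iff all
# four of its members play A, cf. equations (9)-(10).

def team_members(y):
    """T(y): the four agents {2y1, 2y1+1} x {2y2, 2y2+1}."""
    return [(2 * y[0] + i, 2 * y[1] + j) for i in (0, 1) for j in (0, 1)]

def eta(s, y):
    """eta(y) = 'A' iff every member of T(y) plays A."""
    return 'A' if all(s[x] == 'A' for x in team_members(y)) else 'B'

# A 4x4 population: one all-A block and one isolated A-player. Only the
# all-A block qualifies as an A-team; a lone A inside a block does not.
s = {(i, j): 'A' if (i < 2 and j < 2) or (i, j) == (3, 3) else 'B'
     for i in range(4) for j in range(4)}
assert [eta(s, y) for y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == ['A', 'B', 'B', 'B']
```

Working at the team level is what removes switches from A to B: an A-team never reverts, so the renormalized dynamic is monotone.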
Theorem 1 If θ̂ < 1/2, then

pc = inf{p : limN→∞ R(N, p) = 1} = 0.  (13)
Thus, a sufficiently large population almost surely coordinates on the risk dominant strategy starting from any initial distribution with positive density p.

Proof: By the inequality (12), it is sufficient to show that the population of teams coordinates on the risk dominant strategy. We carry out the proof in three steps. First, we construct a particular class of coordinated configurations centered at the origin. Secondly, we show that the probability that our coordinated configuration occurs is strictly positive for any q > 0, and is independent of the size of the population. Thirdly, we prove that in a sufficiently large population a coordinated configuration centered at some team will almost surely occur for any q > 0.

Step 1: Denote the set of coordinated configurations for Λ(M) by CM:

CM = {η0 : lim_{t→∞} ηt(y) = A, ∀y ∈ Λ(M)}.
We now construct a particular coordinated configuration η̂. Suppose that M is odd. Let Γ(k) be the square of side length 2k + 1 centered at the origin of Λ(M), that is,

Γ(k) ≡ {y ∈ Λ(M) : |yi| ≤ k, i ∈ {1, 2}},  1 ≤ k ≤ (M − 1)/2.

In addition define Γ(0) = {(0, 0)}, that is, the origin. Let F(k, l) be the lth face of the kth square, that is, for each k = 1, 2, . . . , (M − 1)/2 and l = 1, 2, 3, 4,

F(k, l) ≡ {y ∈ Γ(k) ∩ Γ(k − 1)^c : y_{i_l} = a_l k},  (14)

where ( · )^c denotes the complementary set, a1 = a3 = 1, a2 = a4 = −1, i1 = i2 = 1 and i3 = i4 = 2. Finally, let

η̂ ∈ ĈM ≡ {η : ∃y ∈ F(k, l), η(y) = A, 1 ≤ k ≤ (M − 1)/2, 1 ≤ l ≤ 4}.  (15)
Thus, η̂ is a configuration with the property that there is at least one team playing the risk dominant strategy on each face of each of the (M − 1)/2 squares, and ĈM is the set of such configurations. Now we show by induction that η̂ as constructed above is a coordinated configuration, so that ĈM is a set of coordinated configurations centered at the origin. Suppose that the team at the origin Γ(0) plays A. Furthermore suppose that there is at least one team playing A on each face of the square Γ(1), and there are at most four teams in F(1, l) not playing A. It is clear that each of these teams eventually will see a team playing A in each of two different coordinate directions. Therefore all y ∈ F(1, l) adopt A. Suppose now that η(y) = A for all y ∈ Γ(k − 1) and that a team at (yi, k) for some yi ∈ {−k, . . . , k} plays A. Since all teams (yj, k − 1) for yj ∈ {−(k − 1), . . . , k − 1} play A, a team (yj, k) adjacent to one already playing A encounters a team playing A in each of two coordinate directions and so adopts A. The argument can be repeated for (−yi, k) and so on. Thus, all y ∈ F(k, l) for each 1 ≤ l ≤ 4 adopt A.11 In particular we conclude that ĈM ⊂ CM.

Step 2: Let Pr(ĈM|q) denote the probability of the event that a coordinated configuration occurs centered at the origin as described in Step 1. We show that Pr(ĈM|q) ≥ α(q) > 0 for q > 0, independent of the size of the population. We have
Pr(ĈM | q) = ∏_{i=0}^{(M−1)/2} [1 − (1 − q)^{2i+1}]⁴
= exp( 4 ∑_{i=0}^{(M−1)/2} ln[1 − (1 − q)^{2i+1}] )
= exp( −4 ∑_{i=0}^{(M−1)/2} [ (1 − q)^{2i+1} + ∑_{j=2}^{∞} (1 − q)^{(2i+1)j} / j ] ),

where the second line follows from the fact that − ln(1 − x) = ∑_{j=1}^{∞} x^j / j. Since there is a constant

11 Ellison (2000) anticipated our argument to some extent by showing that the risk dominant action would spread from a cross, which is an example of the configuration we have just described.
0 < C < ∞, independent of i, such that12

∑_{j=2}^{∞} (1 − q)^{(2i+1)j} / j ≤ (1/C) ∑_{j=2}^{∞} (1 − q)^{(2i+1)j},

the above equation can be rewritten as

Pr(ĈM | q) ≥ exp( −4 ∑_{i=0}^{(M−1)/2} [ (1 − q)^{2i+1} + (1/C)(1 − q)^{2(2i+1)} ∑_{j=2}^{∞} (1 − q)^{(2i+1)(j−2)} ] )
= exp( −4 ∑_{i=0}^{(M−1)/2} (1 − q)^{2i+1} [ 1 + (1/C) (1 − q)^{2i+1} / (1 − (1 − q)^{2i+1}) ] )
≥ exp( −4 [1 + (1/C)(1 − q)/q] ∑_{i=0}^{(M−1)/2} (1 − q)^{2i+1} )
= exp( −4 · (qC + 1 − q)/(qC) · (1 − q)(1 − (1 − q)^{M+1}) / (1 − (1 − q)²) )
≥ exp( −4 · (1 + C)/(Cq) · 1/(q(2 − q)) ) ≡ α(q).  (16)
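The bound (16) can be checked numerically against the exact product in the first line of the display (a quick Python sanity check; we take C = 2, a valid choice as noted in the text, and the values of q and M are arbitrary):

```python
# Step 2 check: the exact probability prod_i (1 - (1-q)^(2i+1))^4 that the
# coordinated configuration occurs, against the uniform-in-M lower bound
# alpha(q) = exp(-4 (1+C)/(C q) * 1/(q(2-q))), here with C = 2.
import math

def exact_prob(q, M):
    return math.prod((1 - (1 - q) ** (2 * i + 1)) ** 4
                     for i in range((M - 1) // 2 + 1))

def alpha(q, C=2.0):
    return math.exp(-4 * (1 + C) / (C * q) / (q * (2 - q)))

for q in (0.2, 0.5, 0.8):
    for M in (11, 101, 1001):
        assert 0 < alpha(q) <= exact_prob(q, M) <= 1
```

The key point the check illustrates is that the lower bound is strictly positive and does not depend on M, however crude it may be for small q.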
Thus, we have shown that the probability of the event that a coordinated configuration centered at the origin occurs is bounded away from zero by α(q) > 0 for any q > 0, where α(q) is independent of M.

Step 3: We have shown so far that there is a coordinated configuration centered at the origin which occurs with strictly positive probability. The main idea of the next step is based on the fact that if we divide the population into disjoint squares, each of these squares can be coordinated independently of the others. Since a large population can be divided into a large number of such regions, one of these regions will almost surely be coordinated. Moreover, if the side length of a region is large, then this region will almost surely grow further because there will be a team playing A on each of its sides. Let M be any integer such that M = m² for some odd integer m.

12 The inequality clearly holds for C = 2. However, there may exist another value of C which gives a tighter bound for the left-hand side. Therefore we will use the symbol C for such a constant.

This allows us to partition
the torus into m² disjoint squares of side length m. We say that a given square of side length m is coordinated if eventually all agents within that square play the risk dominant strategy. Let Cm be the event that there is a square of side length m somewhere on the torus that is coordinated. Finally, let Gm be the event that every row or column of length m in Λ(M) contains at least one team playing A. First, recall that CM is the event that the torus Λ(M) is coordinated; thus RT(M, q) = Pr(CM|q). Next observe that Pr(CM|Cm ∩ Gm) = 1. Since Cm occurred, we know that there is a square of side length m somewhere on the torus that is coordinated. Since Gm implies that there will be a team playing A on each face of the coordinated square, there will be at least one square of side length m + 1 which is coordinated. We obtain by induction that if both Cm and Gm occur, then the whole torus is coordinated. It follows that
RT(M, q) ≥ Pr(Cm ∩ Gm),

implying that

1 − RT(M, q) ≤ Pr((Cm ∩ Gm)^c) = Pr(Cm^c ∪ Gm^c) ≤ Pr(Cm^c) + Pr(Gm^c).  (17)
Now we estimate the probabilities on the right hand side. First, consider Cm^c, the event that no square of side length m in Λ(M) is coordinated. The probability that a given square is not coordinated is bounded above by 1 − α(q), implying that the probability that none of them is coordinated under a given partition is bounded above by (1 − α(q))^{(M/m)²}. Since we can partition the torus in at most m² different ways, we obtain

Pr(Cm^c) ≤ m² (1 − α(q))^{(M/m)²}.  (18a)
Second, consider the event Gm^c, that there is no row or column of length m on the torus containing a team playing A. The probability that there is no team playing A in a given row or column of length m is given by (1 − q)^m. Moreover, the number of such columns or rows in Λ(M) is at most 2M², implying that

Pr(Gm^c) ≤ 2M² (1 − q)^m.  (18b)
Consequently, equation (17) can be restated as

1 − RT(M, q) ≤ m² (1 − α(q))^{(M/m)²} + 2M² (1 − q)^m.  (19)
Notice that for a fixed and finite n and a < 1, lim_{x→∞} x^n a^x = 0.13 Rewriting equation (19) in terms of m (since M = m²), we get

1 − RT(M, q) ≤ m² (1 − α(q))^{m²} + 2m⁴ (1 − q)^m.  (20)
As m ↑ ∞, both terms on the right hand side converge to zero, since they are of the form x^n a^x, which converges to zero as x tends to infinity. Thus, we have shown that

lim_{M→∞} RT(M, q) = 1

for any q > 0. Consider now the case where M ≠ m² for any odd m. Then let m be the largest odd integer less than or equal to √M, that is,

m = 2⌊(√M − 1)/2⌋ + 1.

Clearly, m² squares of side length m can be placed on the torus so that no squares overlap. Since the squares are disjoint, and Pr(CM|Cm ∩ Gm) = 1 still holds, the same argument applies.
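The construction of Step 1 can be exercised end to end on a small team torus: seed the origin and one A-team on each face of every centered square, then iterate to a fixed point (Python sketch; the team-level adoption rule below, A-neighbors in two different coordinate directions, is our reading of the text, and the seed positions on each face are drawn at random):

```python
# Step 1 in action: with one A-team on each face of every square Gamma(k)
# (plus the origin), the team dynamic "a B-team adopts A once it has A-team
# neighbors in two different coordinate directions" coordinates the torus.
import random

random.seed(3)
M = 9                                    # odd side length of the team torus
half = (M - 1) // 2
sites = [(i, j) for i in range(-half, half + 1) for j in range(-half, half + 1)]

A = {(0, 0)}                             # Gamma(0): the origin plays A
for k in range(1, half + 1):             # one A-team on each face of Gamma(k)
    for face in range(4):
        c = random.randint(-k, k)
        A.add([(k, c), (-k, c), (c, k), (c, -k)][face])

def step(A):
    wrap = lambda v: (v + half) % M - half
    new = set(A)                         # an A-team never reverts
    for (y1, y2) in sites:
        if (y1, y2) in A:
            continue
        horiz = any((wrap(y1 + d), y2) in A for d in (1, -1))
        vert = any((y1, wrap(y2 + d)) in A for d in (1, -1))
        if horiz and vert:               # A in two different coordinate directions
            new.add((y1, y2))
    return new

while True:                              # iterate to the fixed point
    nxt = step(A)
    if nxt == A:
        break
    A = nxt
assert len(A) == M * M                   # every team ends up playing A
```

However the face seeds are placed, the induction of Step 1 forces each square Γ(k) to fill in once Γ(k − 1) is complete, so the fixed point is full coordination.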
13 For a proof, apply L'Hôpital's rule to x^n / (1/a)^x.
4 Conclusion
This paper shows that up to 1/2-dominance, the risk dominant equilibrium is selected with probability 1 by a large population starting from a random initial configuration. The result holds for any size of initial noise, indicating that the risk dominant equilibrium is far more robust than the payoff dominant equilibrium, in spite of the collective rationality argument of Harsanyi and Selten (1988) in favor of the latter. Indeed, the experimental results of van Huyck et al. (1990; 1991) provide evidence supporting our conclusion, since the risk dominant equilibrium was always selected in their experiments.
References

Aizenman, Michael and Joel L. Lebowitz, “Metastability Effects in Bootstrap Percolation,” Journal of Physics A, 1988, 21, 3801–3813.

Anderlini, Luca and Antonella Ianni, “Path Dependence and Learning from Neighbors,” Games and Economic Behavior, 1996, 13, 141–177.

Bergin, James and Barton L. Lipman, “Evolution with State Dependent Mutation,” Econometrica, 1996, 64, 943–956.

Binmore, Ken and Larry Samuelson, “Muddling Through: Noisy Equilibrium Selection,” Journal of Economic Theory, 1997, 74, 235–265.

Blume, Lawrence E., “The Statistical Mechanics of Best-Response Strategy Revision,” Games and Economic Behavior, 1995, 11, 111–145.

Ellison, Glenn, “Learning, Local Interaction and Coordination,” Econometrica, 1993, 61, 1047–1071.

Ellison, Glenn, “Basins of Attraction, Long Run Stochastic Stability, and the Speed of Step-by-Step Evolution,” Review of Economic Studies, 2000, 67, 17–45.

Freidlin, Mark I. and Alexander D. Wentzell, Random Perturbations of Dynamical Systems, New York: Springer-Verlag, 1984.

Harsanyi, John and Reinhard Selten, A General Theory of Equilibrium Selection in Games, Cambridge, MA: MIT Press, 1988.

Kandori, Michihiro, George J. Mailath, and Rafael Rob, “Learning, Mutation, and Long-run Equilibria in Games,” Econometrica, 1993, 61, 29–56.

Morris, Stephen, “Contagion,” Review of Economic Studies, 2000, 67, 57–78.

Schonmann, Roberto H., “Finite Size Scaling Behaviour of a Biased Majority Rule Cellular Automaton,” Physica A, 1990, 167, 619–627.

Schonmann, Roberto H., “On the Behavior of Some Cellular Automata Related to Bootstrap Percolation,” Annals of Probability, 1992, 20, 174–193.

van Enter, Aernout C. D., “Proof of Straley’s Argument for Bootstrap Percolation,” Journal of Statistical Physics, 1987, 48, 943–945.

van Huyck, John B., Raymond C. Battalio, and Richard O. Beil, “Tacit Coordination Games, Strategic Uncertainty, and Coordination Failure,” American Economic Review, 1990, 80, 234–248.

van Huyck, John B., Raymond C. Battalio, and Richard O. Beil, “Strategic Uncertainty, Equilibrium Selection, and Coordination Failure in Average Opinion Games,” Quarterly Journal of Economics, 1991, 106, 885–910.

Young, Peyton, “The Evolution of Conventions,” Econometrica, 1993, 61, 57–84.