Anonymity, signaling and ritual

David Hugh-Jones∗

July 7, 2008

First draft

Abstract

Actors who need to convince others of their preferences or abilities may do so by sending a costly, hard-to-fake signal. Across the social sciences, costly signaling has been proposed as an explanation for many kinds of behaviour. These explanations face a problem: unless the signal’s cost and the future benefits of commitment are about equal, freeriders will have an incentive to send the signal and behave selfishly later. Signaling may then either fail in its function of weeding out selfish behaviour, or be prohibitively costly for participants. The problem is partly solved if the average level of signaling in a group is observable, but individual effort is not. Then, as freeriders can and will behave selfishly without being detected, group members learn about the average level of commitment among the group. I develop a formal model, and give examples of institutions to preserve anonymity, focusing particularly on the anthropological study of rituals including music and dance. Other applications include voting, anonymous comments and charitable donations. The importance of anonymity in signaling to outsiders is also discussed.

∗ The author is a PhD candidate in the Department of Government at the University of Essex.


1 Introduction

In many settings, agents spend time, money and effort doing things that seem not to contribute to their interests. For example, in the United States, incumbent politicians often raise large sums to fund their reelection campaigns, even when they face no plausible challenger for their seat; in my father’s time, privileged young Englishmen paid money to be taught Latin and Ancient Greek in great detail, although they would never use these languages again; people in many societies perform elaborate religious rituals which appear to have no practical function. A potential way to explain all these forms of behaviour is that they convey something about the agent to an audience. A politician who raises a big “war chest” is likely to be a formidable campaigner, and this fact itself will deter opponents from entering the race. Although classical languages are useless in the modern world, learning them may convince potential employers that you are smart and hardworking, as stupid or lazy students would not find it worth the effort. The last example, of rituals, will be a central focus of this paper. Rituals have been explained as ways for members of a group to demonstrate their commitment to its shared values. The argument is that only those who intend to stay with a group for the long term will be motivated to perform costly rituals; short-termists will not find it worth their while. Therefore, ritual observance acts as a reliable, costly signal. There is some empirical evidence to support this claim. For example, Sosis and Ruffle (2003) found that religious communes with strict codes of dress and conduct survived for longer than ones with more lax standards. Explanations of this kind are grouped under the heading of “costly signaling”. As the term emphasizes, for signals to work, they must be costly. It is easy to see why. Suppose that a particular church has short and enjoyable services, and new members are trusted implicitly after only a brief period of attendance. Then a dishonest person will find it easy and profitable to gain the trust of the congregation, before absconding with the collection money. In fact, cases like this are often in the newspapers. (Another

example comes from the film Six Degrees of Separation, in which a young trickster inveigles his way into the heart of a family of well-off New York liberals. He explains wryly: “When rich people do something nice for you, you give ’em a pot of jam.” Of course, a pot of jam is a crazily cheap way to pass oneself off as upper class.) In general, any signal that is easy to fake will fail to weed out those with undesirable characteristics. As a rule of thumb, the cost of a signal must be of roughly the same magnitude as the benefit individuals gain by signalling their type.1

1 This is not necessarily true when signals convey information about ability. Signals may be cheap “on the equilibrium path”, i.e. for the people who actually give them, so long as they would be sufficiently costly to deter others who wanted to fake the signaled characteristics. However, when the characteristics being signalled do not affect the actor’s abilities - for example, with signals of motivation and commitment - so that sending the signal costs the same for everyone, then that cost must be high. See XXX Lachman.

This poses a particular problem for the explanation of ritual. Unlike other examples such as education, many rituals are of no direct practical value. (Of course, some rituals have a practical component.) Not only is participation costly for individuals, but it also brings no benefits to the group. A rain dance does not cause rain, and at least from a secular point of view, time spent in religious observance is wasted. If the costs of rituals are the same as the benefits gained, then a group will lose as much by performing the ritual as it gains by discovering its members’ level of commitment. For example, suppose that participation in a rain dance at the onset of a drought signals altruism towards the group, so that observers can share food with participants and expect to have the favour returned when they themselves are in need. Then, if the costs of the rain dance are equivalent to the benefits of food sharing, group members will have gained nothing overall, and they would be better off signaling altruism by a more practical activity, such as hunting and sharing any remaining game. On the other hand, if the costs of the dance are much less than the benefits of mutual food sharing, as seems highly likely, then why would all individuals, including selfish ones who plan to accept food without giving in return, not take part? So there seem to be two problems with the costly signaling explanation of rituals. First,


many rituals seem to have no direct practical benefit. (Indeed this is why the costly signaling explanation is needed in the first place.) Why could the same signaling function not be fulfilled by activities which provide a direct collective benefit to participants? Secondly, by contrast, many rituals are cheap relative to the behaviour they are supposed to signal. (Another example: a lead role in a war dance is less risky than a lead role in actual combat.) Rituals are often described as “symbolic”, a word which denotes representation of a higher reality but which can also connote trivial magnitude, as in a symbolic payment. I suggest that in some cases, these facts result from a particular solution to the problem of dishonest participation by individuals. Essentially, rather than increasing the cost of signaling, some rituals are designed or evolved to ensure that individuals’ contribution levels are invisible, and that only the average contribution level among the group is observed. This keeps the signal cheap; but the need to preserve anonymity may favour particular forms of signaling behaviour which are of little practical use. Some examples will illustrate this, and also show how this logic can explain a wide range of institutions.

1. I would like to hire an honest employee. I am unsure about the average level of honesty among the candidates, which may be high or low. Unless I am sure enough that the average level of honesty is high, I will prefer not to make a hire as my expected payoff will be too low. I could randomly select a candidate and lend him £5. Then, if I get my £5 back after a day, I hire the employee. Unfortunately, if so, even a dishonest candidate will return my £5 in the expectation of getting hired. A subtler approach is to randomly select a candidate and lend him £5. If I get my £5 back after a day, then I make a second random selection for the job itself, assuming that honesty by one person gives me enough confidence that the average honesty level is high. This course of action is fine if I can commit to it. The problem is that if I learn with certainty that the first candidate is honest, I will prefer to hire him rather than a different person who may not be, even when the

average honesty is high. But then, as before, the candidate has an incentive to return my cash even if he is dishonest. One solution is to guarantee anonymity, perhaps by leaving £5 in an envelope with my address so that I have no way of telling who behaved well to me. Then only the genuinely honest will return my £5. If I get the cash back, it is worth my while to hire a randomly chosen candidate. 2. A group of co-workers wonder if they can strike for higher wages. Will they support each other, or will some blackleg their colleagues? To work out the level of mutual goodwill, they might think back to the Christmas presents they received from their workmates. But Christmas presents may be given in order to strike up mutually beneficial long-term relationships. These ties of long-term self-interest are fine for the ordinary interactions of office politics, but will not guarantee the mutual loyalty needed to carry the group through a strike. On the other hand, if the workplace has the “secret Santa” institution, in which present-givers are anonymous to the receivers, then the generosity of gifts conveys real information about co-workers’ character. 3. Members of a church in 17th century Amsterdam wish to transact business with each other. Trust is vital in the world of early capitalism (Weber protestant sects XXX). The church enjoins strict norms of good behaviour on its members. But how can they be sure that their fellows genuinely believe its teachings? Generous donations to the church are strong evidence of firm belief among the congregation, but only if they are made anonymously. Otherwise a fraudster might make a large donation now, in order to reap a still larger profit from breaking the trust of the godly. 4. A legislature faces a vote on a relatively trivial issue, such as raising its own salary. Some legislators are honest; others are greedy and want a large raise. Legislators’ salaries are relatively unimportant to the economy. But greedy leg-

islators are, in general, likely to take worse decisions for the country. So voters are concerned with the outcome of the vote. Learning whether a large or small raise was voted will inform voters whether the median legislator is greedy. Voters may wish to learn more than this, and find out how their own individual representative voted. But if they are able to, legislators who vote for a large raise will probably lose at the next election. Instead, they will restrain themselves, make more money over the long term and cause greater damage to the country. It would be better if individual legislators’ votes were secret; then voters would at least learn something from the overall outcome. 5. A primitive society faces regular collective action problems, such as food shortages or attacks from neighbouring groups. In any one instance, the payoffs to defection may be too large to sustain long-term cooperation through a punishment mechanism. Instead, members of the group are socialized into norms of collective action, transforming their payoffs so that the collective action problem takes the form of an assurance game. However, the socialization process is not fully reliable and some group members may be antisocial types who prefer to freeride. At times of danger, the tribe performs a collective dance. The rules of the ritual are complex and for it to be performed correctly, everyone must play their part. Also, if one individual shirks the effort to perform the ritual successfully, the whole rite breaks down and it will not be obvious who caused the failure. If shirkers were individually detected, they might face hostility and sanctions from the group, and this would deter even antisocial types from shirking. Because of the anonymity, antisocial types will shirk and may cause the ritual to fail. If the ritual fails, this signals the presence of antisocial types; all players will then defect when a serious collective action problem turns up. A successful rite has the opposite effect. Shirking in the ritual therefore changes the outcome of collective action, so antisocial types have an incentive not to shirk; but if there


are many antisocial types whose identities are unknown to each other, they may fail to coordinate on not shirking and instead each shirk because they expect others to do so too. 6. An employer wishes to hire motivated employees. The interview process may include a set of tasks that test this motivation. If the tasks are performed by individuals, then even lazy candidates will try hard on the day, as the prospect of a job is enough to motivate them. Instead, the employer splits candidates into groups who must work together on a task. These groups then have a collective action problem: working hard will help get your teammates hired as well as yourself. Because of this, only individuals with intrinsic motivations will work hard, and the team’s performance will be informative about the average level of motivation within it.

All of these institutions allow participants to perform anonymously a trivial or symbolic altruistic action. Other participants and observers then learn about the level of altruism in the society. This learning may be useful in future collective action problems in which the stakes are higher, as it allows “conditional cooperators” to contribute only when they are certain enough that their actions will be reciprocated. These examples also show two other points. First, the symbolic action may help inform group members (examples 2, 3 and 5) and/or observers outside the group (1, 4 and 6). I discuss the case of signaling to outsiders below. Second, a crucial condition for this theory is that there are different types of actor in society. In particular, there are selfish actors and conditional cooperators. Selfish actors prefer to “defect” or “shirk” in the main collective action problem. Conditional cooperators prefer to “cooperate” or “contribute”, but only if contributions from other players are sufficiently high. The idea of different types of actors sits well with the idea, put forward by Simon (1990) among others, that humans are socialized into norms of


behaviour as a tax by the community on interpersonal learning; this socialization, however, is unlikely to be completely successful, as individuals face strong evolutionary pressures to advance their own interests before those of the group. The same pressures also encourage individuals to exaggerate their level of socialization in order to elicit cooperation from others. In experiments, humans do indeed seem to act from a mix of altruism and self-interest, and to fall into different “types”, with different types having different patterns of behaviour. However, even if social actors are always self-interested, they may still have good reasons to differ in their level of commitment to a particular group. This is discussed further in the literature review. Ritual is a very broad concept. The explanation of ritual advanced here is meant to complement, not supplant, other explanations. The next section discusses the existing literature on signaling in a variety of fields and shows the contribution of this paper. Section 3 gives more in-depth examples of institutions to facilitate anonymous signaling. I then develop a formal model. The conclusion discusses implications. (XXX experiment?)

2 Literature

Rituals have long puzzled anthropologists. Early approaches were functionalist: rituals served a concealed purpose of social integration rather than their stated purpose. Later, structuralists and cultural anthropologists proposed that rituals embodied messages or narratives. More recently, signaling explanations for ritual have become popular. As described above, these explanations posit that individuals contribute to collective rituals in order to signal their commitment to the group. They essentially flesh out the functionalist account, firstly by describing a mechanism through which rituals could increase social integration, and secondly by showing that individuals, as well as the group as a whole, can benefit from taking part in rituals. Bird and Smith (2005) review


the anthropological signaling literature, covering ritual as well as institutions such as competitive giving and inefficient hunting. They draw comparisons with evolutionary biology where again signaling explanations have become important. Hagen and Bryant (2003) specifically claim that music and dance evolved to signal coalition quality to outsiders. As practising together takes time, music demonstrates to outsiders that a group has been together for a long time and its members are therefore likely to be committed to one another. They are more dubious that music can enhance social cohesion internally, since “music and dance communicate little” about individuals’ abilities or common interests. But if practice takes effort, it can act as a signal of commitment to the group and thus of common interests, even to group members who already know how long a group has existed. Therefore, these practices can signal group cohesion to both insiders and outsiders. (Signaling to outsiders is discussed below.) Anshel and Kipper (1988) have an experiment on group singing showing that it increases trust among participants. Chwe (2001) puts forward a related theory, in which rituals such as songs and dance serve to coordinate action, not by signaling commitment but by forcing participants to pay attention to each other and thus providing common knowledge that all have coordinated on a particular course of action. Repetition and tradition are devices to ensure that different participants and different generations are reliably “on the same page”. As Hagen and Bryant point out, it is not clear why music and dance would serve this goal better than straightforward communication. Chwe’s theory may be more appropriate for other cases. This paper suggests an explanation for the specific form of music and dance - a form in which individuals’ levels of effort are masked although the average level is obvious. Whether this is true is an empirical question. Certainly, when large groups sing, individual voices blend into one another and it is hard to tell who is singing loudest (and most in tune). Similarly, a misstep by a single dancer may cause the whole dance to become miscoordinated: whether the guilty party can be identified depends on many


factors. A related area, explored by anthropologists and others, is the explanation of religion. Weber (N.d.) was an early exponent of the idea that Protestant sects facilitated trade between members by increasing mutual trust. More recently Ruffle and Sosis (2003) have shown that 19th-century US communes which enforced tougher requirements on dress and behaviour survived for longer. Levy and Razin (2007) develop a single model combining ritual observance and religious belief. These theories tend to see participation as a binary choice, whereas explanations of “ritual” focus more on the levels of coordination and effort displayed. These are again likely to be complementary explanations. The more general setting of this paper is the literature on cooperation in groups. In game theory, the various Folk Theorems show that when games are indefinitely repeated, rational self-interested individuals can enforce cooperation by punishing noncooperators, if they are sufficiently patient (see e.g. Fudenberg and Tirole (1991)). Kreps et al. (2001) was an early use of the idea that there are different types of players: if a few players are “nice” and would like to cooperate so long as others do so too, then this can allow for all players to cooperate even in a finitely repeated game. In fact, experimental economists have found considerable evidence for “conditional cooperation” - see Fischbacher, Gächter and Fehr (2001) - and also for different “types” of players, i.e. for individual differences in behaviour that are stable over time (XXX who?). Differences in “type”, i.e. in character, provide one rationale for the importance of signaling. However, it is not the only possible one: even if all agents have the same fundamental motivations, other differences, such as different chances of surviving alone, or more or less valuable prior reputations to maintain, may lead them to vary in their commitment to a group. Similarly, conditional cooperation - the desire to help the group if and only if others do so too - may result either from human psychology, or just from there being increasing returns to cooperation (e.g. you cannot hunt big game on your own).


Existing literature dealing with anonymity in cooperation treats it mostly as a problem to be solved. Game theorists (Ellison, 1994; Kandori, 1992) have shown the possibility of cooperation in the Prisoner’s Dilemma when play is anonymous. Cooperation is enforced by a “punishment equilibrium” in which everyone defects, rather than by internalized norms. By contrast, in the present paper, anonymity may serve a positive function, by solving problems of adverse selection when payoffs are small. In economic work on transparency in principal-agent relationships, anonymity has been treated more favourably. When only incomplete contracts are possible, and agents have “career concerns” (that is, they are on short-term contracts and are trying to get themselves rehired at the end of the contract) principals may benefit from committing not to learn too much about the choices made by agents. (? is a key reference for the “career concerns” literature in economics.) This is because being observed may lead agents to choose an action which makes them look smart, rather than the one which is actually best for the principal. In this context, Prat (2005) remarks on common exceptions to freedom of information laws for political advice. Levy (2007b; 2007a) compares public and secret voting by expert committees with career concerns and shows that under certain circumstances secret voting, where only the overall result, not individual votes, is made public, leads to better decisions. Gersbach and Hahn (2008) apply this idea to individual voting records for central bank interest rate decisions. The model in this paper can be seen in this perspective if we take all actors as both principals and agents, and view the possibility of future punishment or exclusion from collective goods as providing the equivalent of “career concerns”. Agents here are motivated to signal their good intentions, rather than their expertise. (XXX moral hazard in teams.) Acemoglu (2007) uses a similar theory to provide an explanation for the functions of government. In their paper, individualized incentives encourage participants to exert effort on manipulating the signal of their achievement rather than on the achievement

itself (for example, a teacher can “teach to the test”). It may be better to organize players into teams (“firms”) to avoid this wasted effort. As in this paper, however, firms find it hard to commit not to examine individual signals; this provides governments, which can make this commitment more easily, with a role. Here I focus on different, smaller-scale mechanisms which solve the commitment problem by technologies that ensure individual effort is not observable. This paper explains how cooperation games of relatively trivial - “symbolic” - cost can be informative about behaviour in more important situations. Another explanation of small concessions is that they are stepping stones towards bigger ones. There is a literature in international relations on this idea (Osgood, 1962; Ward, 1989; Kydd, 2000) and an economic experiment demonstrating the concept (Kurzban et al., 2001). The “stepping stones” explanation works even in two player situations such as bilateral negotiations where anonymity is impossible. However, the increase in the size of cooperative gestures will still be limited by participants’ discount rates. These equilibria, and repeated-game equilibria in general, work better when the rewards for defection are quite stable over time. When they are more unstable - for example, in conditions of great uncertainty such as war or famine - they are less likely to work well as they will be vulnerable to “long fraud”, in which self-interested participants cooperate only until there is a really large benefit from defecting. In such situations, good behaviour cannot be enforced by fear of punishment, so genuine good character becomes correspondingly more important. (It is not surprising that armies take great efforts to bond their troops by the rituals of “basic training”.) This paper makes two contributions to the existing literature. First, it shows that conditional cooperators engaged in collective action, under conditions of uncertainty, face a problem of “political correctness” or “pandering” which makes it hard for them to trust each other. Second, it suggests that certain institutions - in particular, some collective rituals and “symbolic” collective actions - may have developed in order to solve this


problem by allowing participants to contribute anonymously, which makes it less costly to test the average level of cooperativeness in the group.

3 Examples of Anonymous Signaling

XXX from old version: Group singing can be judged by its overall qualities of volume, harmony and enthusiasm, while it is harder to discover exactly who is out of tune or keeping quiet. In communal dances, if one person makes a mistake, the entire group may lose the rhythm. The same applies when infantry march in step. In a modern context, I have already mentioned Secret Santa. Similarly, a good way to judge the culture of an organization may be the size of donations to the “honesty box” for office coffee and biscuits. Many team sports have elements of group performance: it would be interesting to examine what effect modern sports statistics, which break down a team’s performance into the effect of individual players, have on group morale. Specific dances? Communal singing. Anonymous donations - cases where “total donations” are announced, e.g. church.

3.1 Further examples

Applause at the end of a performance gives an honest sign of collective appreciation, since contribution levels cannot be distinguished. The same goes for the cheers and jeers during Prime Minister’s Questions. An anecdote from Solzhenitsyn (1997) shows this mechanism grotesquely breaking down, and reveals much about the inherent tensions in anonymous mechanisms: XXX quote


Here, although observers might not know who started the standing ovation, the first person to stop would be marked out - as the frightened participants knew. In fact, Stalin demanded absolute, enthusiastic support from all those around him XXX ref. This is one extremity of a tradeoff, not explicitly modelled here, between gaining useful information from your subordinates’ honesty, and allowing their dissent to become common knowledge. If a leader is too insecure, in reality or in his own imagination, not only the identity, but even the existence of dissenters, must be concealed, and so even anonymous mechanisms will not be tolerated. In more normal politics, “groupthink” or political correctness (cf. Morris 2001) can be mitigated by allowing opinions to be expressed anonymously. The medieval court had the jester, an unimportant and unthreatening person who might safely relay court gossip to those in power. Facing the other direction, modern parliaments have the principle of collective cabinet responsibility, allowing decisions to be taken which are unpopular with the parliamentary party. Similarly, some firms allow employees to make anonymous suggestions or comments. On an everyday level, gossip, which is generally disapproved of, may serve a useful function: people can be told unwelcome truths on the (perhaps mutually convenient) assumption that the speaker is only repeating what others are saying. A famous anthropological example can be interpreted in a similar way. Malinowski (2007) gives an account of the Kula Ring institution practiced among Pacific islanders. This is a system of ceremonial gift-giving in which armshells are exchanged for necklaces amongst a ring of island groups. The objects are given, not traded, with an expectation that a present of equal value will be returned at a later date. The consensus among anthropologists (XXX) is that they serve to integrate the island societies: expeditions to other islands for the purpose of Kula giving also accomplish a great deal of ordinary trade. Thus, the ongoing exchange of Kula gifts serves as a reliable signal of the stability of the relationship between different groups - a matter of importance when


the visiting party is far from home and at the mercy of its hosts. From our point of view, an interesting feature of the Kula is that necklaces are always given clockwise, armshells anticlockwise. This allows some excuse for a delay in reciprocating a Kula gift: for example, if one man has been given a fine necklace and owes an armshell of equal distinction, he may need to wait for it to arrive from the other direction. Malinowski gives an example of this kind of supposed delay causing tension between two Kula partners:

“Then Tovasana asked the visitors about one of the chiefs from the island of Kayleula (to the West of Kiriwina), and when he was going to give him a big pair of mwali. The man answered they do not know; to their knowledge that chief has no big mwali at present. Tovasana became very angry, and in a long harangue, lapsing here and there into the Gumasila language, he declared that he would never kula again with that chief, who is a topiki (mean man), who has owed him for a long time a pair of mwali as yotile (return gift), and who always is slow in making Kula.” (ibid. p. 271)

Without the possible excuse that an appropriate gift is unavailable, the pressure towards prompt repayment might be so great that nothing could be learned from the exchange. The rules of clockwise and anticlockwise circulation thus allow partners to conceal their unwillingness to repay, and create the ambiguity that makes Kula exchange so interesting (i.e. informative) to participants. Supporting this interpretation is Malinowski’s observation that “a very fine article must be replaced by one of equivalent value, and not by several minor ones, though intermediate gifts may be given to mark time” (ibid. p. 96). Team sports; show tension between individual and collective display.


Marching. Similarly, anonymous votes.

3.2 Signaling to outsiders

The disadvantage of keeping individuals’ effort levels anonymous is that some information is lost. Observers learn only about the average level of contribution to a public good, not about who contributes. When a symbolic collective display is meant to inform people outside the group, this disadvantage no longer matters. For example, if a war chant is meant to impress an opposing group with the readiness of participants to fight, the opponents will want to know how many fanatical warriors they are facing, and be less interested in exactly who they are, and in any case, the singing group has no incentive to signal this information. (There is as before an individual incentive to appear more determined than others, in order to deter attacks on oneself; as before, this provides a reason for the group to use an institution that provides anonymity.) Thus, anonymity-preserving institutions are especially well-suited to signaling to outsiders. Hagen and Bryant (2003) discuss song and music as an example of “coalitional signaling”. However, they lack a really powerful explanation of why this particular form should have evolved. Why demonstrate your vocal cords when you could be showing off your muscles? I suggest that the answer lies in the anonymity of sound. Humans find it hard to determine the exact source of a sound; for visual displays this would be much easier.

4 A Model of Anonymous Signaling

Suppose there are n > 2 players who must choose whether to contribute (c) or defect (d) in a public goods game. Players are “socialized” with probability η and “antisocial”


otherwise; types are private information and are drawn independently. Payoffs for each type, when k of the other players contribute, are as follows:

Socialized:
  c: 2(k + 1)/n − 1 if k < n − 1; 1 + 2(k + 1)/n = 3 if k = n − 1
  d: 2k/n

Antisocial:
  c: 2(k + 1)/n − 1
  d: 2k/n

In other words, contributions are doubled and shared equally; socialized players get a utility bonus of 2 if everybody contributes. This could reflect a psychological reward for correctly following a social norm of cooperation. There are two Bayesian Nash equilibria of this game, taken on its own. In the first, everyone plays d. For antisocial types, d is a dominant strategy as 2/n < 1. For socialized types, unless all other players contribute, defection gives strictly higher utility, so if all other players are playing d, d is a best response. The more interesting equilibrium has socialized types playing c and antisocial types playing d. This is a separating equilibrium: different types reveal themselves by choosing different actions. The condition for this to be an equilibrium is that socialized types gain higher expected utility from playing c, or

\[
\eta^{n-1}\,3 + \sum_{a=1}^{n-1} B(a)\left[\frac{2(n-a)}{n} - 1\right] \;>\; \sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n} \tag{1}
\]
\[
\Leftrightarrow\; \eta^{n-1}\left[3 - \frac{2(n-1)}{n}\right] > \sum_{a=1}^{n-1} B(a)\left(1 - \frac{2}{n}\right)
\;\Leftrightarrow\; \eta^{n-1}\left[3 - \frac{2(n-1)}{n}\right] > (1 - \eta^{n-1})\left(1 - \frac{2}{n}\right)
\;\Leftrightarrow\; \eta^{n-1} > \frac{1}{2} - \frac{1}{n},
\]

where B(a) ≡ $\binom{n-1}{a}\,\eta^{n-1-a}(1-\eta)^{a}$ is the probability of having a antisocial types among the other n − 1 players, from the binomial distribution. This holds for all values of η above some cutpoint η∗ < 1. As n increases η∗ → 1: when there are more participants it is more likely that at least one will be a defector, so that there is no separating equilibrium unless players are almost certainly socialized.

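As an illustrative numerical check of how demanding this condition becomes, the cutpoint implied by the last line of (1), η∗ = (1/2 − 1/n)^(1/(n−1)), can be computed directly; the following minimal Python sketch does so (the function name and the values of n are arbitrary examples, not part of the formal analysis).

    # Sketch (illustrative): the separating-equilibrium cutpoint eta* implied by (1),
    # i.e. the smallest eta for which eta^(n-1) > 1/2 - 1/n can hold.

    def cutpoint(n: int) -> float:
        """Cutpoint eta* above which 'socialized contribute, antisocial defect'
        is an equilibrium of the one-shot public goods game."""
        assert n > 2
        return (0.5 - 1.0 / n) ** (1.0 / (n - 1))

    for n in (3, 5, 10, 50, 200):
        print(n, round(cutpoint(n), 4))
    # The cutpoint rises towards 1 as n grows: with many players, at least one
    # antisocial type becomes almost certain unless eta is very close to 1.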
4.1 A first round with visible play

If η < η∗, socialized types will find it too risky to contribute and the public good will not be provided. It would be helpful if there were some way to find out whether there are in fact any defectors: if there are none, then socialized types will be willing to contribute to the public good so long as others do so too. One option would be to play a first round of the game, with the same structure as before, but with payoffs multiplied by some small ε > 0. Playing the main public goods game may actually be costly but unavoidable (for example, deduct two from all players’ payoffs – this does not affect the structure of preferences within the game) in which case it makes sense to keep payoffs as small as possible in the initial round. Suppose that it is impossible to exclude defectors from the second round of play. Then, it makes no difference whether defection is anonymous or not. Antisocial types may, however, prefer to contribute simply in order to get the benefit from freeriding in the second round. So the first round with visible play may not prevent adverse selection. But if there is more than one antisocial type, and one antisocial type is expected to defect, the other one will do the same, as universal defection will be guaranteed in the second round whether one or two antisocial types defect. (In other words, the antisocial types suffer from a “moral hazard in teams” problem: cf. Holmstrom (1982).) So antisocial types will need to coordinate on first round contribution. This coordination will be more difficult because the number and identities of the antisocial are not public

knowledge. Formally, suppose that all antisocial players defect in both rounds, and that socialized players always contribute in the first round, and contribute in the second round only if nobody has defected. Second round play is clearly in equilibrium. An antisocial player, choosing whether to defect or contribute in round 1, faces two possibilities: he is the only antisocial player, and will cause universal defection in round 2 by defecting now; or he is not and will not. The condition for defection to be a best response is

\[
\eta^{n-1}\Big[\varepsilon\Big(2 - \frac{2}{n}\Big) - \Big(\varepsilon + 2 - \frac{2}{n}\Big)\Big] + \sum_{a=1}^{n-1} B(a)\,\varepsilon\Big(1 - \frac{2}{n}\Big) > 0
\]
\[
\Leftrightarrow\; \eta^{n-1}\Big[\varepsilon\Big(1 - \frac{2}{n}\Big) - \Big(2 - \frac{2}{n}\Big)\Big] + (1 - \eta^{n-1})\,\varepsilon\Big(1 - \frac{2}{n}\Big) > 0
\;\Leftrightarrow\; -\eta^{n-1}\Big(2 - \frac{2}{n}\Big) + \varepsilon\Big(1 - \frac{2}{n}\Big) > 0
\;\Leftrightarrow\; \eta^{n-1} < \varepsilon\,\frac{1 - 2/n}{2 - 2/n},
\]

which is false for small enough ε, true for ε = 1 and n large, and false for any ε < 1 when n is small and η is close to 1. A socialized player faces the same two possibilities in round 1 but has slightly different incentives. For her to contribute we require

\[
\eta^{n-1}\Big[3\varepsilon + 3 - \Big(\Big(2 - \frac{2}{n}\Big)\varepsilon + 0\Big)\Big] + (1 - \eta^{n-1})\,\varepsilon\Big(\frac{1}{n} - 1\Big) > 0
\]
\[
\Leftrightarrow\; \eta^{n-1}\Big[3 + \varepsilon\Big(1 + \frac{2}{n}\Big)\Big] + (1 - \eta^{n-1})\,\varepsilon\Big(\frac{1}{n} - 1\Big) > 0
\;\Leftrightarrow\; \eta^{n-1}\Big[3 + \varepsilon\Big(2 + \frac{1}{n}\Big)\Big] > \varepsilon\Big(1 - \frac{1}{n}\Big)
\;\Leftrightarrow\; \eta^{n-1} > \varepsilon\,\frac{1 - 1/n}{3 + \varepsilon(2 + 1/n)}.
\]

Thus to sustain this equilibrium we require

\[
\eta^{n-1}\,\frac{3 + \varepsilon(2 + 1/n)}{1 - 1/n} \;>\; \varepsilon \;>\; \eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}. \tag{2}
\]
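As an illustration of the window in (2), the following minimal Python sketch simply evaluates both bounds at a trial ε, taking the expressions as reconstructed above; the function names and parameter values are arbitrary examples chosen only to show the check.

    # Sketch (illustrative): check whether a trial epsilon lies inside the window (2).

    def lower_bound(eta: float, n: int) -> float:
        # right-hand expression of (2): eta^(n-1) * (2 - 2/n) / (1 - 2/n)
        return eta ** (n - 1) * (2 - 2 / n) / (1 - 2 / n)

    def upper_bound(eta: float, n: int, eps: float) -> float:
        # left-hand expression of (2); note that epsilon appears inside it
        return eta ** (n - 1) * (3 + eps * (2 + 1 / n)) / (1 - 1 / n)

    def in_window(eta: float, n: int, eps: float) -> bool:
        return lower_bound(eta, n) < eps < upper_bound(eta, n, eps)

    # Example: eta below the cutpoint eta* (roughly 0.9 for n = 10), epsilon well below 1.
    print(in_window(eta=0.8, n=10, eps=0.5))   # True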

It is easy to check that these equations can always be satisfied by some ε for any


n, as the left hand side is strictly greater than the right hand side – intuitively, the socialized type gains more from contributing than the antisocial type. More interesting is whether they can be satisfied for ε < 1. With n large this is normally not a problem. The right hand side is decreasing in n and increasing in η; it is less than 1 for e.g. η = 0.99, n = 75 or η = 0.95, n = 15. XXX insert section here discussing when including the ε first round is worthwhile. For η < η∗, ε will usually be quite small; even for η > η∗ it might be worthwhile to have a round 1? We can rule out some other equilibria as follows. First, suppose that all types contribute in round one. Then the equilibrium belief after observing all players contributing will be just that η of the players are socialized. If so, for η < η∗ nobody will contribute in round two; as contributing in round one guarantees the worst outcome in round two, for η < η∗ again no player will contribute in round one, contradicting our assumption. A similar argument shows there are no equilibria in which only antisocial types contribute in round one. If no types contribute in round one, then again no types contribute in round two as the equilibrium prior remains at η. This is an equilibrium. We examine its robustness as follows (XXX which refinement is this?): suppose that off equilibrium, after a single player deviates and contributes in round 1, s/he is believed to be socialized with certainty. (If contribution increases the probability of socialized types contributing in round two, then socialized types gain more in expectation from the deviation as there is a possibility that all players are socialized; we can then apply the XXX Criterion and conclude that this belief survives the refinement. If contributing does not increase the probability of round two contributions, then this belief and all others survive the refinement. We seek an ε such that the belief survives the refinement and guarantees equilibrium. XXX you need to revise, or “vise”, these refinements...) For an equilibrium with no contributions, we require for socialized types:

4.2 The commitment problem

Thus, unless antisocial types are vanishingly rare, it will be possible to detect them by playing a less expensive “toy” or “symbolic” round of the public goods game. Armchair empiricism confirms that, for example, we expect our friends to help us out in small ways, and that violating this norm can have consequences out of proportion to the direct benefit of the help. The problem with this setup is that we have assumed that the public goods game is non-excludable. Thus, when antisocial types are revealed in round 1, the consequence is universal defection in round two. In many cases, it would be more reasonable to assume that antisocial types are excluded from the benefits of other players’ contributions, or punished directly. In this way, some contributions can still be sustained even in the presence of antisocial players. But of course, this gives the antisocial types back their incentive to mimic the socialized by contributing in round 1. I focus on exclusion mechanisms to avoid having to model punishment explicitly. I assume that after round 1, every player writes down a list of players, which must include himself. All those players who share the same list G ⊂ {1, . . . , n} then play the round 2 game, with payoffs as shown above except that n is everywhere replaced by n′ = |G|, and payoffs when n′ < 3 are always zero. As the game is defined, there is no inherent advantage to having a particular sized group. (XXX if antisocial types are present with positive probability, threesomes will be Pareto optimal – but not a dominant strategy because it depends what everyone else is doing – maybe the unique coalition-proof equilibrium though.) I focus on the following potential equilibrium: after round 1, all players who contributed write down G∗ = {all players who contributed}. Defectors write down anything – as their partners in any group will be defectors, they get 0. In round 2, players contribute if and only if they are (1) socialized and (2) in a group with only round 1 contributors. In round 1, the socialized contribute and the antisocial defect. Solving the game backwards, clearly round 2 strategy is optimal as contributions take

place always and only in groups where every player contributes. When players write down their list, round 1 defectors cannot join the contributors’ group and a round 1 contributor who chooses anything but G∗ will be in a group with only defectors. So the given strategy is optimal. But consider round 1. Now, the payoff from contributing for an antisocial type is

\[
\sum_{a=0}^{n-1} B(a)\left[\Big(\frac{2(n-a)}{n} - 1\Big)\varepsilon + \frac{2(n-a-1)}{n-a}\right]
\]

where the second term in brackets gives the payoff from defecting in round 2, in a group with n − a − 1 contributors. The payoff from defecting is

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon
\]

and the relative gain from contributing is

\[
\sum_{a=0}^{n-1} B(a)\left[\Big(\frac{2}{n} - 1\Big)\varepsilon + \frac{2(n-a-1)}{n-a}\right] = \Big(\frac{2}{n} - 1\Big)\varepsilon + \sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n-a}.
\]

In equilibrium this must be negative, which will require ε to be large. Indeed, for n = 3 and η > 3/2 − √7/2 ≈ 0.177, we require ε > 1, and as n → ∞ we require ε > 2. (The relative gain goes to −ε + 2 as B(a) becomes concentrated at a = n/2. XXX check!) In general, then, the round 1 game will be more expensive than it is worth if players can be excluded from round 2. Could there be a different equilibrium which avoids this problem? Suppose after round one, every player writes down G = {1, . . . , n}. Then we are back in the earlier game. If ε is calibrated to allow antisocial types to deviate in round 1, then once more round 2 cooperation can sometimes be sustained even if η < η∗. Also, no individual has an incentive to deviate from the action of writing down G = {1, . . . , n}, as anyone who does will receive a payoff of 0 in round 2. However, this equilibrium is not coalition-proof.

If we assume that after round 1, players can make agreements so long as these are self-enforcing, then whenever some players have defected in round 1, the G∗-coalition of all contributing players will wish to deviate to writing down G∗ and excluding defectors, then playing c in round 2. Clearly the same problem will arise for any G which includes round 1 defectors; as long as there are at least 2 round 1 contributors these will have an incentive to deviate as a coalition.
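As a rough numerical recap of this subsection’s conclusion, the following minimal Python sketch computes the relative gain to an antisocial type from mimicking contributors, following the expression above, and the minimum ε that keeps it negative; the helper names and the (n, η) pairs are arbitrary illustrative choices.

    # Sketch (illustrative): minimum epsilon needed to keep antisocial types defecting
    # in round 1 when round-1 defectors are excluded from round 2 (Section 4.2).
    # Relative gain from contributing: (2/n - 1)*eps + sum_a B(a) * 2(n-a-1)/(n-a);
    # it is negative only if eps exceeds the value computed below.
    from math import comb

    def B(a: int, n: int, eta: float) -> float:
        """Probability of a antisocial types among the other n - 1 players."""
        return comb(n - 1, a) * eta ** (n - 1 - a) * (1 - eta) ** a

    def min_epsilon(n: int, eta: float) -> float:
        round2_gain = sum(B(a, n, eta) * 2 * (n - a - 1) / (n - a) for a in range(n))
        return round2_gain / (1 - 2 / n)

    for n, eta in [(3, 0.2), (3, 0.9), (10, 0.9), (100, 0.5)]:
        print(n, eta, round(min_epsilon(n, eta), 3))
    # For n = 3 the threshold already exceeds 1 once eta is above roughly 0.18,
    # and it approaches 2 as n grows, matching the text.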

4.3 Anonymous play in the first round

To avoid these problems, an institution might develop in which the toy round 1 game is played anonymously, so that the total number of defectors, but not their identities, is known. The anonymizing technology could take a number of forms, as discussed in the examples above. The whole game is now as follows: a first round of the contribution game is played with payoffs multiplied by ε; after the round players observe the total number of contributors and defectors, but not their identities; they then each write down their list of players, including themselves, and those who share the same list play the final round of the contribution game. We examine the following equilibrium. In round 1, socialized types contribute and antisocial types defect. After round 1, if everyone has contributed, everyone writes down G = {1, . . . , n} and contributes in the final game. If there are d > 0 round 1 defections, players split into triples {1, 2, 3}, {4, 5, 6}, . . .. We assume that n is divisible by three. (The triples maximize the probability of having all socialized types in a group and thus getting the maximum payoff of 3. Any larger group would not be coalition proof; when everyone has contributed, group size is not important. A more realistic model would have returns to scale from the public goods technology; if so, players would balance the risk of having defectors against the benefit of more players.)


If everyone has contributed in round 1, then socialized types contribute in round 2. Otherwise, socialized types contribute iff

\[
\frac{d^2}{n^2}\Big(-\frac{1}{3}\Big) + \frac{2d(n-d)}{n^2}\Big(\frac{1}{3}\Big) + \frac{(n-d)^2}{n^2}\,3 \;>\; \frac{d^2}{n^2}\,0 + \frac{2d(n-d)}{n^2}\Big(\frac{2}{3}\Big) + \frac{(n-d)^2}{n^2}\Big(\frac{4}{3}\Big) \tag{3}
\]
\[
\Leftrightarrow\; d^2(-1) + 2d(n-d)(-1) + (n-d)^2(5) > 0
\;\Leftrightarrow\; 6d^2 - 12dn + 5n^2 > 0
\;\Leftrightarrow\; d < \Big(1 - \frac{\sqrt{6}}{6}\Big)n,
\]

where d/n is the posterior probability that a randomly selected player is an antisocial type. This assumes that only antisocial types defect in round 1. This requires, for antisocial types:

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon + \sum_{a=0}^{D-1} B(a)R(a) \;>\; \sum_{a=0}^{n-1} B(a)\Big[\frac{2(n-a)}{n} - 1\Big]\varepsilon + \sum_{a=0}^{D} B(a)R(a)
\]
\[
\Leftrightarrow\; \sum_{a=0}^{n-1} B(a)\Big(1 - \frac{2}{n}\Big)\varepsilon > B(D)R(D)
\;\Leftrightarrow\; \varepsilon > \frac{B(D)R(D)}{1 - 2/n} \tag{4}
\]

where D is the largest integer less than (1 − √6/6)n, and R(a) is the expected second round benefit from defecting when there are a ≤ D other antisocial types:

\[
R(a) = \frac{a^2}{n^2}\,0 + \frac{2a(n-a)}{n^2}\Big(\frac{2}{3}\Big) + \frac{(n-a)^2}{n^2}\Big(\frac{4}{3}\Big).
\]

Socialized types must prefer to contribute in round 1, which requires that

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon + \sum_{a=0}^{D-1} B(a)S(a) \;\le\; \eta^{n-1}(3\varepsilon) + \sum_{a=1}^{n-1} B(a)\Big[\frac{2(n-a)}{n} - 1\Big]\varepsilon + \sum_{a=0}^{D} B(a)S(a) \tag{5}
\]
\[
\Leftrightarrow\; \sum_{a=1}^{n-1} B(a)\Big(1 - \frac{2}{n}\Big)\varepsilon \;\le\; \eta^{n-1}\Big(3 - \frac{2(n-1)}{n}\Big)\varepsilon + B(D)S(D)
\;\Leftrightarrow\; \varepsilon \;\le\; \frac{B(D)S(D)}{(1 - \eta^{n-1})\big(1 - \frac{2}{n}\big) + \eta^{n-1}\big(\frac{2(n-1)}{n} - 3\big)}
\]

where S(a) is the expected second round benefit (to a socialized type) from cooperating when there are a ≤ D other antisocial types:

\[
S(a) = \frac{a^2}{n^2}\Big(-\frac{1}{3}\Big) + \frac{2a(n-a)}{n^2}\Big(\frac{1}{3}\Big) + \frac{(n-a)^2}{n^2}\,3.
\]

A sufficient condition for (4) and (5) to hold simultaneously for some ε is that S(D) > R(D), or

\[
\frac{D^2}{n^2}\Big(-\frac{1}{3}\Big) + \frac{2D(n-D)}{n^2}\Big(\frac{1}{3}\Big) + \frac{(n-D)^2}{n^2}\,3 \;>\; \frac{D^2}{n^2}\,0 + \frac{2D(n-D)}{n^2}\Big(\frac{2}{3}\Big) + \frac{(n-D)^2}{n^2}\Big(\frac{4}{3}\Big).
\]

This condition is simply (3) with d = D, and is satisfied by the definition of D. For the right values of ε this play is therefore an equilibrium. The benefit of the anonymous play in round 1 is twofold. First of all, it allows cooperation when no players have defected, even though the public good is excludable. Second, it unlocks some of the benefit of this excludability, by allowing some cooperation even in the face of high rates of initial defection. The ability to split into small groups maximizes the chance that socialized types will face only other contributors and get their internal payoff from everyone contributing.
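To illustrate that conditions (4) and (5) can be met by a genuinely cheap first round, here is a minimal Python sketch that computes the ε window for a given n and η, using B(a), R(a), S(a) and D as defined above; the parameter values are arbitrary illustrative choices.

    # Sketch (illustrative): the epsilon window for the anonymous round-1 equilibrium,
    # combining conditions (4) and (5) as reconstructed above (Section 4.3).
    from math import comb, sqrt, floor, inf

    def B(a: int, n: int, eta: float) -> float:
        return comb(n - 1, a) * eta ** (n - 1 - a) * (1 - eta) ** a

    def R(a: int, n: int) -> float:
        # expected round-2 benefit from defecting, a antisocial types among the others
        return (2 * a * (n - a) / n**2) * (2 / 3) + ((n - a) ** 2 / n**2) * (4 / 3)

    def S(a: int, n: int) -> float:
        # expected round-2 benefit to a socialized type from cooperating
        return ((a**2 / n**2) * (-1 / 3) + (2 * a * (n - a) / n**2) * (1 / 3)
                + ((n - a) ** 2 / n**2) * 3)

    def epsilon_window(n: int, eta: float):
        cut = (1 - sqrt(6) / 6) * n
        D = floor(cut) if cut > floor(cut) else floor(cut) - 1  # largest integer below cut
        lower = B(D, n, eta) * R(D, n) / (1 - 2 / n)            # condition (4)
        denom = (1 - eta ** (n - 1)) * (1 - 2 / n) + eta ** (n - 1) * (2 * (n - 1) / n - 3)
        upper = B(D, n, eta) * S(D, n) / denom if denom > 0 else inf  # condition (5)
        return lower, upper

    print(epsilon_window(n=6, eta=0.7))

For n = 6 and η = 0.7 (below the cutpoint η∗ of roughly 0.80 for this n), the window is roughly (0.13, 0.33): a first round costing a fraction of the main game sustains the equilibrium, in contrast to the ε > 1 that exclusion demanded in Section 4.2.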

5 Conclusion

The market mechanism enables our needs to be catered for by the “butcher’s self-interest”. In primitive societies without large-scale markets, and also in parts of modern societies that the market does not reach, for example inside firms or other cooperative structures, others’ character and intentions towards us may be vital for our success and even survival. Social processes of character formation become correspondingly


important. It is equally important to find out whether these processes have succeeded. This in turn gives individuals incentives to conceal their true character. Some kinds of ritual can be viewed as institutions which provide a forgiving environment for people to act in accordance with their true character, by obscuring their identity behind the screen of the collective. Whether this is actually the function of any particular ritual or institution, and whether this function explains the institution’s persistence, are empirical questions to be answered case by case. However, some ancient cultural forms, in particular song, seem especially well suited for preserving the anonymity of participants and shirkers. So there are grounds for optimism that empirical evidence will be forthcoming.

References Acemoglu, D. 2007. “Incentives in Markets, Firms and Governments.” Journal of Law, Economics and Organization pp. 1–34. URL: http://0-jleo.oxfordjournals.org.serlib0.essex.ac.uk/cgi/reprint/ewm055v1.pdf Anshel, A. and D. A. Kipper. 1988. “The influence of group singing on trust and cooperation.” Journal of Music Therapy 25:145–155. Bird, Rebecca Bliege and Eric Alden Smith. 2005. “Signaling Theory, Strategic Interaction, and Symbolic Capital.” Current Anthropology 46:221–248. URL: http://www.journals.uchicago.edu/doi/abs/10.1086/427115 Chwe, M. S. Y. 2001. Rational Ritual: Culture, Coordination, and Common Knowledge. Princeton University Press. Ellison, G. 1994. “Cooperation in the Prisoner’s Dilemma with Anonymous Random Matching.” Review of Economic Studies 61:567–88.


Fischbacher, U., S. Gächter and E. Fehr. 2001. “Are people conditionally cooperative? Evidence from a public goods experiment.” Economics Letters 71:397–404. Fudenberg, D. and J. Tirole. 1991. Game Theory. MIT Press. Gersbach, Hans and Volker Hahn. 2008. “Should the individual voting records of central bankers be published?” Social Choice and Welfare 30:655–683. URL: http://dx.doi.org/10.1007/s00355-007-0259-7 Hagen, E. H. and G. A. Bryant. 2003. “Music and Dance as a Coalition Signaling System.” Human Nature 14:21–51. Holmstrom, Bengt. 1982. “Moral Hazard in Teams.” The Bell Journal of Economics 13:324–340. URL: http://www.jstor.org/stable/3003457 Kandori, M. 1992. “Social Norms and Community Enforcement.” Review of Economic Studies 59:63–80. Kreps, D. M., P. Milgrom, J. Roberts and R. Wilson. 2001. “Rational Cooperation in the Finitely Repeated Prisoners’ Dilemma.” Readings in Games and Information.

Kurzban, Robert, Kevin McCabe, Vernon L. Smith and Bart J. Wilson. 2001. “Incremental Commitment and Reciprocity in a Real-Time Public Goods Game.” Pers Soc Psychol Bull 27:1662–1673. URL: http://psp.sagepub.com/cgi/content/abstract/27/12/1662 Kydd, Andrew. 2000. “Overcoming Mistrust.” Rationality and Society 12:397–424. URL: http://rss.sagepub.com/cgi/content/abstract/12/4/397 Levy, G. 2007a. “Decision Making in Committees: Transparency, Reputation, and Voting Rules.” The American Economic Review 97:150–168. Levy, G. 2007b. “Decision-Making Procedures for Committees of Careerist Experts.” American Economic Review 97:306–310. Levy, Gilat and Ronny Razin. 2007. A Theory of Religious Organizations. URL: http://personal.lse.ac.uk/RAZIN/religion.pdf Malinowski, Bronislaw. 2007. Argonauts of the Western Pacific. Read Books. Morris, S. 2001. “Political Correctness.” Journal of Political Economy 109:231–265. Osgood, C. E. 1962. An Alternative to War Or Surrender. University of Illinois Press. Prat, A. 2005. “The Wrong Kind of Transparency.” The American Economic Review 95:862–877. Ruffle, Bradley J and Richard H Sosis. 2003. Does it Pay to Pray? Evaluating the Economic Return to Religious Ritual. SSRN eLibrary. URL: http://ssrn.com/paper=441285 Simon, H. A. 1990. “A mechanism for social selection and successful altruism.” Science 250:1665–1668.


Solzhenitsyn, Aleksandr I. 1997. The Gulag Archipelago, 1918-1956: An Experiment in Literary Investigation. Basic Books. Sosis, R. and B. J. Ruffle. 2003. “Religious Ritual and Cooperation: Testing for a Relationship on Israeli Religious and Secular Kibbutzim.” Current Anthropology 44:713–722. Ward, H. 1989. “Testing the Waters: Taking Risks to Gain Reassurance in Public Goods Games.” Journal of Conflict Resolution 33:274–308. Weber, M. N.d. “The Protestant Sects and the Spirit of Capitalism.” HH Gerth and C. Wright Mills (translators and editors), From Max Weber: Essays in Sociology (New York: Oxford, 1946).
