Anonymity, signaling, contributions and ritual

David Hugh-Jones and David Reinstein∗

November 19, 2008

Abstract

Actors who need to convince others of their preferences or abilities may do so by sending a costly, hard-to-fake signal. Across the social sciences, costly signalling has been proposed as an explanation for many kinds of behaviour. These explanations face a problem: unless the signal’s cost and the future benefits of commitment are about equal, freeriders will have an incentive to send the signal and behave selfishly later. Signalling may then either fail in its function of weeding out selfish behaviour, or be prohibitively costly for participants. The problem is partly solved if the average level of signalling in a group is observable, but individual effort is not. Then, as freeriders can and will behave selfishly without being detected, group members learn about the average level of commitment among the group. We develop a formal model, and give examples of institutions to preserve anonymity, focusing particularly on the anthropological study of rituals including music and dance. Other applications include voting, charitable donations, military uniforms and the Kula Ring. Anonymity in signalling to outsiders is also discussed. We test our theory with a laboratory experiment.

∗ David Hugh-Jones is a post-doctoral researcher at the Max Planck Institute for Economics, Jena. David Reinstein is a lecturer in the Department of Economics at Essex University.


1 Introduction

In many settings, agents spend time, money and effort doing things that seem not to contribute to their interests. For example, in the United States, incumbent politicians often raise large sums to fund their reelection campaigns, even when they face no plausible challenger for their seat; in earlier generations, privileged young Englishmen paid money to be taught Latin and Ancient Greek in great detail, although they would never use these languages again; people in many societies perform elaborate religious rituals which appear to have no practical function.

A potential way to explain all these forms of behaviour is that they convey something about the agent to an audience. A politician who raises a big “war chest” is likely to be a formidable campaigner, and this fact itself will deter opponents from entering the race. Although classical languages are useless in the modern world, learning them may convince potential employers that you are smart and hardworking, as stupid or lazy students would not find it worth the effort.

The last example, of rituals, will be a central focus of this paper. Rituals have been explained as ways for members of a group to demonstrate their commitment to its shared values. The argument is that only those who intend to stay with a group for the long term will be motivated to perform costly rituals; short-termists will not find it worth their while. Therefore, ritual observance acts as a reliable, costly signal. There is some empirical evidence to support this claim. For example, Sosis and Ruffle (2003) found that religious communes with strict codes of dress and conduct survived for longer than ones with more lax standards.

Explanations of this kind are grouped under the heading of “costly signaling”. As the term emphasizes, for signals to work, they must be costly. It is easy to see why. Suppose that a particular church has short and enjoyable services, and new members are trusted implicitly after only a brief period of attendance.
Then a dishonest person will find it easy and profitable to gain the trust of the congregation before absconding with the collection money. In fact, cases like this are often in the newspapers. (Another example comes from the film Six Degrees of Separation, in which a young trickster inveigles his way into the heart of a family of well-off New York liberals. He explains wryly: “When rich people do something nice for you, you give ’em a pot of jam.” Of course, a pot of jam is a crazily cheap way to pass oneself off as upper class.) In general, any signal that is easy to fake will fail to weed out those with undesirable characteristics. As a rule of thumb, the cost of a signal must be of roughly the same magnitude as the benefit individuals gain by signalling their type.1

This poses a particular problem for the explanation of ritual. Unlike other examples such as education, many rituals are of no direct practical value. (Of course, some rituals have a practical component.) Not only is participation costly for individuals, but it also brings no benefits to the group. A rain dance does not cause rain, and at least from a secular point of view, time spent in religious observance is wasted. If the costs of rituals are the same as the benefits gained, then a group will lose as much by performing the ritual as it gains by discovering its members’ level of commitment. For example, suppose that participation in a rain dance at the onset of a drought signals altruism towards the group, so that observers can share food with participants and expect to have the favour returned when they themselves are in need. Then, if the costs of the rain dance are equivalent to the benefits of food sharing, group members will have gained nothing overall, and they would be better off signaling altruism by a more practical activity, such as hunting and sharing any remaining game. On the other hand, if the costs of the dance are much less than the benefits of mutual food sharing, as seems highly likely, then why would all individuals, including selfish ones who plan to accept food without giving in return, not take part?
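The rule of thumb above can be made concrete with a small numerical sketch (all payoff values are hypothetical, chosen purely for illustration). A committed type pays the signal cost to gain long-run trust; a selfish type pays it only if the one-shot gain from abusing that trust still covers the cost. Separation is then only possible when the cost nearly exhausts the benefit being protected:

```python
# Hypothetical payoffs, chosen only to illustrate the argument.
B_long = 10.0   # long-run value of group trust to a genuinely committed type
G_short = 8.0   # one-off gain a selfish type gets by winning trust, then absconding

def committed_signals(c):
    # A committed type pays cost c iff long-run trust is worth it.
    return B_long > c

def selfish_fakes(c):
    # A selfish type pays cost c iff the one-shot fraud still pays.
    return G_short > c

for c in (2.0, 9.0, 11.0):
    print(f"cost={c}: committed sends={committed_signals(c)}, "
          f"selfish fakes={selfish_fakes(c)}")

# Only costs between G_short (8) and B_long (10) separate the two types,
# so a reliable signal burns almost all of the surplus it protects:
# a committed member's net gain is B_long - c < B_long - G_short = 2.
```

A cheap signal (cost 2) is sent by both types and so reveals nothing; a signal costly enough to deter faking (cost 9) leaves the committed type with only a sliver of surplus.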
So there seem to be two problems with the costly signaling explanation of rituals. First, many rituals seem to have no direct practical benefit. (Indeed this is why the costly signaling explanation is needed in the first place.) Why could the same signaling function not be fulfilled by activities which provide a direct collective benefit to participants? Secondly, by contrast, many rituals are cheap relative to the behaviour they are supposed to signal. (Another example: a lead role in a war dance is less risky than a lead role in actual combat.) Rituals are often described as “symbolic”, a word which denotes representation of a higher reality but which can also connote trivial magnitude, as in a symbolic payment.

1 This is not necessarily true when signals convey information about ability. Signals may be cheap “on the equilibrium path”, i.e. for the people who actually give them, so long as they would be sufficiently costly to deter others who wanted to fake the signaled characteristics. However, when the characteristics being signalled do not affect the actor’s abilities - for example, with signals of motivation and commitment - so that sending the signal costs the same for everyone, then that cost must be high. See XXX Lachman.

We suggest that in some cases, these facts result from a particular solution to the problem of dishonest participation by individuals. Essentially, rather than increasing the cost of signaling, some rituals are designed or evolved to ensure that individuals’ contribution levels are invisible, and that only the average contribution level among the group is observed. This keeps the signal cheap; but the need to preserve anonymity may favour particular forms of signaling behaviour which are of little practical use. Some examples will illustrate this, and also show how this logic can explain a wide range of institutions.

1. I would like to hire an honest employee. I am unsure about the average level of honesty among the candidates, which may be high or low. Unless I am sure enough that the average level of honesty is high, I will prefer not to make a hire, as my expected payoff will be too low. I could randomly select a candidate and lend him £5; then, if I get my £5 back after a day, I hire him. Unfortunately, in that case even a dishonest candidate will return my £5 in the expectation of getting hired. A subtler approach is to randomly select a candidate and lend him £5.
If I get my £5 back after a day, then I make a second random selection for the job itself, assuming that honesty by one person gives me enough confidence that the average honesty level is high. This course of action is fine if I can commit to it. The problem is that if I learn with certainty that the first candidate is honest, I will prefer to hire him rather than a different person who may not be, even when the average honesty is high. But then, as before, the candidate has an incentive to return my cash even if he is dishonest. One solution is to guarantee anonymity, perhaps by leaving £5 in an envelope with my address, so that I have no way of telling who behaved well to me. Then only the genuinely honest will return my £5. If I get the cash back, it is worth my while to hire a randomly chosen candidate.

2. A group of co-workers wonder if they can strike for higher wages. Will they support each other, or will some blackleg their colleagues? To work out the level of mutual goodwill, they might think back to the Christmas presents they received from their workmates. But Christmas presents may be given in order to strike up mutually beneficial long-term relationships. These ties of long-term self-interest are fine for the ordinary interactions of office politics, but will not guarantee the mutual loyalty needed to carry the group through a strike. On the other hand, if the workplace has the “secret Santa” institution, in which present-givers are anonymous to the receivers, then the generosity of gifts conveys real information about co-workers’ character.

3. Members of a church in 17th-century Amsterdam wish to transact business with each other. Trust is vital in the world of early capitalism (Weber, 1946). The church enjoins strict norms of good behaviour on its members. But how can they be sure that their fellows genuinely believe its teachings? Generous donations to the church are strong evidence of firm belief among the congregation, but only if they are made anonymously. Otherwise a fraudster might make a large donation now, in order to reap a still larger profit from breaking the trust of the godly.

4. A legislature faces a vote on a relatively trivial issue, such as raising its own salary. Some legislators are honest; others are greedy and want a large raise. Legislators’ salaries are relatively unimportant to the economy.
But greedy legislators are, in general, likely to take worse decisions for the country. So voters are concerned with the outcome of the vote. Learning whether a large or small raise was voted for will tell voters whether the median legislator is greedy. Voters may wish to learn more than this, and find out how their own individual representative voted. But if they are able to, legislators who vote for a large raise will probably lose at the next election. Instead, they will restrain themselves, make more money over the long term and cause greater damage to the country. It would be better if individual legislators’ votes were secret; then voters would at least learn something from the overall outcome.

5. A primitive society faces regular collective action problems, such as food shortages or attacks from neighbouring groups. In any one instance, the payoffs to defection may be too large to sustain long-term cooperation through a punishment mechanism. Instead, members of the group are socialized into norms of collective action, transforming their payoffs so that the collective action problem takes the form of an assurance game. However, the socialization process is not fully reliable, and some group members may be antisocial types who prefer to freeride. At times of danger, the tribe performs a collective dance. The rules of the ritual are complex, and for it to be performed correctly everyone must play their part. Moreover, if one individual shirks the effort needed to perform the ritual successfully, the whole rite breaks down and it will not be obvious who caused the failure. If shirkers were individually detected, they might face hostility and sanctions from the group, and this would deter even antisocial types from shirking. Because of the anonymity, antisocial types will shirk and may cause the ritual to fail. If the ritual fails, this signals the presence of antisocial types; all players will then defect when a serious collective action problem turns up. A successful rite has the opposite effect.
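The inference in this example can be sketched as a Bayesian update (all numbers are hypothetical, chosen for illustration): if committed members never shirk and each antisocial member shirks with some probability, a weakest-link rite succeeds only when nobody shirks, so its outcome shifts observers’ beliefs about the group’s composition.

```python
def p_success(n_antisocial, shirk_prob):
    """A weakest-link rite succeeds only if no antisocial member shirks
    (committed members never shirk in this sketch)."""
    return (1 - shirk_prob) ** n_antisocial

def posterior_committed(prior_committed, shirk_prob, n_bad, succeeded):
    """Bayes' rule: probability that the group is fully committed
    (zero antisocial members) given the rite's outcome. The alternative
    hypothesis is a group containing n_bad antisocial members."""
    like_good = 1.0 if succeeded else 0.0      # committed group never fails
    p_bad = p_success(n_bad, shirk_prob)
    like_bad = p_bad if succeeded else 1 - p_bad
    num = prior_committed * like_good
    den = num + (1 - prior_committed) * like_bad
    return num / den

# Hypothetical numbers: a 50/50 prior, and an alternative of 3 antisocial
# members who each shirk with probability 0.5 when anonymous.
print(round(posterior_committed(0.5, 0.5, 3, succeeded=True), 3))   # rises above 0.5
print(round(posterior_committed(0.5, 0.5, 3, succeeded=False), 3))  # 0.0: failure proves shirkers exist
```

Under these assumptions a failed rite conclusively reveals the presence of antisocial types, while a successful one raises confidence in the group without proving anything.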
Shirking in the ritual therefore changes the outcome of collective action, so antisocial types have an incentive not to shirk; but if there are many antisocial types whose identities are unknown to each other, they may fail to coordinate on not shirking, and instead each shirk because they expect others to do so too.

6. An employer wishes to hire motivated employees. The interview process may include a set of tasks that test this motivation. If the tasks are performed by individuals, then even lazy candidates will try hard on the day, as the prospect of a job is enough to motivate them. Instead, the employer splits candidates into groups who must work together on a task. These groups then have a collective action problem: working hard will help get your teammates hired as well as yourself. Because of this, only individuals with intrinsic motivation will work hard, and the team’s performance will be informative about the average level of motivation within it.

All of these institutions allow participants to perform a trivial or symbolic altruistic action anonymously. Other participants and observers then learn about the level of altruism in the society. This learning may be useful in future collective action problems in which the stakes are higher, as it allows “conditional cooperators” to contribute only when they are certain enough that their actions will be reciprocated. These examples also illustrate two further points. First, the symbolic action may help inform group members (examples 2, 3 and 5) and/or observers outside the group (examples 1, 4 and 6). We discuss the case of signaling to outsiders below. Second, a crucial condition for this theory is that there are different types of actor in society. In particular, there are selfish actors and conditional cooperators. Selfish actors prefer to “defect” or “shirk” in the main collective action problem. Conditional cooperators prefer to “cooperate” or “contribute”, but only if contributions from other players are sufficiently high.
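A minimal simulation of the common thread in these examples (parameters hypothetical): when individual contributions to a cheap symbolic task are hidden, only cooperators contribute, so the observed average reveals the group’s composition; when contributions are individually observable and sanctioned, everyone contributes and the average reveals nothing.

```python
import random

random.seed(1)

def average_contribution(group, anonymous):
    """group: list of types, 'coop' or 'selfish'.
    Under anonymity, selfish members shirk the symbolic task because they
    cannot be individually detected; under observability, fear of sanctions
    makes every type contribute, pooling the signal."""
    if anonymous:
        contribs = [1 if t == "coop" else 0 for t in group]
    else:
        contribs = [1 for _ in group]
    return sum(contribs) / len(group)

def make_group(n, p_coop):
    return ["coop" if random.random() < p_coop else "selfish" for _ in range(n)]

committed_group = make_group(50, 0.9)   # mostly conditional cooperators
mixed_group     = make_group(50, 0.4)   # many selfish types

for group, name in ((committed_group, "committed"), (mixed_group, "mixed")):
    print(name,
          "anonymous:", round(average_contribution(group, True), 2),
          "observable:", round(average_contribution(group, False), 2))
```

The observable averages are identical (everyone contributes), while the anonymous averages track the true share of cooperators, which is exactly the information conditional cooperators need before the high-stakes game.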
The idea of different types of actors sits well with the idea, put forward by Simon (1990) among others, that humans are socialized into norms of behaviour as a tax by the community on interpersonal learning; this socialization, however, is unlikely to be completely successful, as individuals face strong evolutionary pressures to advance their own interests before those of the group. The same pressures also encourage individuals to exaggerate their level of socialization in order to elicit cooperation from others. In experiments, humans do indeed seem to act from a mix of altruism and self-interest, and to fall into different “types”, with different types showing different patterns of behaviour. However, even if social actors are always self-interested, they may still have good reasons to differ in their level of commitment to a particular group. This is discussed further in the literature review.

The next section discusses the existing literature on signaling in a variety of fields and shows the contribution of this paper. Section 3 gives more in-depth examples of institutions that facilitate anonymous signaling. We then develop a formal model. Section 5 details a laboratory experiment to test the model’s implications. The final section concludes.

2 Literature

The more general setting of this paper is the literature on cooperation in groups. In game theory, the various Folk Theorems show that when games are indefinitely repeated, rational self-interested individuals can enforce cooperation by punishing noncooperators, provided they are sufficiently patient (see e.g. Fudenberg and Tirole (1991)). Kreps et al. (1982) made an early use of the idea that there are different types of players: if a few players are “nice” and would like to cooperate so long as others do so too, then this can allow all players to cooperate even in a finitely repeated game. In fact, experimental economists have found considerable evidence for “conditional cooperation” - see Fischbacher, Gaechter and Fehr (2001) - and also for different “types” of players, i.e. for individual differences in behaviour that are stable over time (XXX who?). Differences in “type”, i.e. in character, provide one rationale for the importance of signaling. However, this is not the only possible one: even if all agents have the same fundamental motivations, other differences, such as different chances of surviving alone, or more or less valuable prior reputations to maintain, may lead them to vary in their commitment to a group. Similarly, conditional cooperation - the desire to help the group if and only if others do so too - may result either from human psychology, or simply from increasing returns to cooperation (e.g. you cannot hunt big game on your own).

Existing literature dealing with anonymity in cooperation treats it mostly as a problem to be solved. Game theorists (Ellison, 1994; Kandori, 1992) have shown the possibility of cooperation in the Prisoner’s Dilemma when play is anonymous. Cooperation is enforced by a “punishment equilibrium” in which everyone defects, rather than by internalized norms. The experimental literature on cooperation sees anonymity as a bad: players cooperate more when their identities are made clearer to others. By contrast, in the present paper, anonymity may serve a positive function, by solving problems of adverse selection when payoffs are small. Thus, although cooperation is reduced on average in the anonymous game, there is also more variance in levels of cooperation, and these levels are based on people’s “real” preferences rather than on fear of punishment. Then, in more important situations, previous cooperation is a more effective signal. The overall effect may be to raise cooperation levels in the important situations.
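The folk-theorem punishment logic invoked in this literature can be sketched with the standard grim-trigger participation constraint (payoff numbers hypothetical). It also previews why punishment-based cooperation is fragile when the temptation to defect occasionally spikes:

```python
def sustainable(coop_payoff, defect_payoff, delta):
    """Grim-trigger check for an indefinitely repeated game: cooperating
    pays coop_payoff every period (present value coop_payoff / (1 - delta));
    defecting grabs defect_payoff once, after which mutual punishment
    yields 0. Cooperation is an equilibrium iff patience makes the
    stream worth more than the one-shot grab."""
    return coop_payoff / (1 - delta) >= defect_payoff

# Stable temptations: the threat of punishment sustains cooperation.
print(sustainable(1.0, 3.0, delta=0.8))    # True: 1/(1-0.8) = 5 >= 3

# A rare crisis makes defection temporarily worth far more: unless players
# are extremely patient, punishment cannot deter someone waiting for it.
print(sustainable(1.0, 20.0, delta=0.8))   # False: 5 < 20
```

The second case is the situation in which, on this paper’s account, information about players’ genuine types becomes valuable: fear of punishment alone no longer guarantees good behaviour.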
The presence of the anonymous option may have increased the signalling effect when non-anonymous contributions were made. The effect we describe could not have been strongly present in this experiment, since there was no way to reward, punish or exclude individual contributors.

In economic work on transparency in principal-agent relationships, anonymity has been treated more favourably. When only incomplete contracts are possible, and agents have “career concerns” (that is, they are on short-term contracts and are trying to get themselves rehired at the end of the contract), principals may benefit from committing not to learn too much about the choices made by agents. (Holmström (1999) is a key reference for the “career concerns” literature in economics.) This is because being observed may lead agents to choose an action which makes them look smart, rather than the one which is actually best for the principal. In this context, Prat (2005) remarks on common exceptions to freedom of information laws for political advice. Levy (2007b; 2007a) compares public and secret voting by expert committees with career concerns and shows that under certain circumstances secret voting, where only the overall result, not individual votes, is made public, leads to better decisions. Gersbach and Hahn (2008) apply this idea to individual voting records for central bank interest rate decisions. The model in this paper can be seen in this perspective if we take all actors as both principals and agents, and view the possibility of future punishment or exclusion from collective goods as providing the equivalent of “career concerns”. Agents here are motivated to signal their good intentions, rather than their expertise. Acemoglu (2007) uses a similar theory to provide an explanation for the functions of government. In that paper, individualized incentives encourage participants to exert effort on manipulating the signal of their achievement rather than on the achievement itself (for example, a teacher can “teach to the test”). It may be better to organize players into teams (“firms”) to avoid this wasted effort.
As in this paper, however, firms find it hard to commit not to examine individual signals; this provides a role for governments, which can make this commitment more easily. Here we focus on different, smaller-scale mechanisms which solve the commitment problem by technologies that ensure individual effort is not observable.


As well as examining the signalling implications of anonymity, this paper proposes that a range of existing institutions, including ritual, song and dance, evolved in part as technologies to allow anonymous costly signalling. Rituals have long puzzled anthropologists. Early approaches were functionalist: rituals served a concealed purpose of social integration rather than their stated purpose. Later, structuralists and cultural anthropologists proposed that rituals embodied messages or narratives. More recently, signaling explanations for ritual have become popular. As described above, these explanations posit that individuals contribute to collective rituals in order to signal their commitment to the group. They essentially flesh out the functionalist account, firstly by describing a mechanism through which rituals could increase social integration, and secondly by showing that individuals, as well as the group as a whole, can benefit from taking part in rituals. Bird and Smith (2005) review the anthropological signaling literature, covering ritual as well as institutions such as competitive giving and inefficient hunting. They draw comparisons with evolutionary biology, where again signaling explanations have become important.

Ritual is a very broad concept, and the explanation of ritual advanced here is meant to complement, not supplant, other explanations. Hagen and Bryant (2003) specifically claim that music and dance evolved to signal coalition quality to outsiders. As practising together takes time, music demonstrates to outsiders that a group has been together for a long time, and its members are therefore likely to be committed to one another. They are more sceptical that music can enhance social cohesion internally, since “music and dance communicate little” about individuals’ abilities or common interests. But if practice takes effort, it can act as a signal of commitment to the group and thus of common interests.
Therefore, these practices can signal group cohesion to both insiders and outsiders. (Signaling to outsiders is discussed below.) Hagen and Bryant lack a really powerful explanation of why music in particular should have evolved for this purpose. Why demonstrate your vocal cords when you could be showing off your muscles? We suggest that the answer lies in the anonymity of sound. Humans find it hard to determine the exact source of a sound; for visual displays this would be much easier. Anshel and Kipper (1988) report an experiment showing that group singing increases trust among participants.

Chwe (2001) puts forward a related theory, in which rituals such as songs and dance serve to coordinate action, not by signaling commitment but by forcing participants to pay attention to each other and thus providing common knowledge that all have coordinated on a particular course of action. Repetition and tradition are devices to ensure that different participants and different generations are reliably “on the same page”. As Hagen and Bryant point out, it is not clear why music and dance would serve this goal better than straightforward communication. Chwe’s theory may be more appropriate for other cases.

A related area, explored by anthropologists and others, is the explanation of religion. Weber (1946) was an early exponent of the idea that Protestant sects facilitated trade between members by increasing mutual trust. More recently, Ruffle and Sosis (2003) have shown that 19th-century US communes which enforced tougher requirements on dress and behaviour survived for longer. Levy and Razin (2007) develop a single model combining ritual observance and religious belief. These theories tend to see participation as a binary choice, whereas explanations of “ritual” focus more on the levels of coordination and effort displayed. These are again likely to be complementary explanations.

This paper explains how cooperation games of relatively trivial - “symbolic” - cost can be informative about behaviour in more important situations. Another explanation of small concessions is that they are stepping stones towards bigger ones. There is a literature in international relations on this idea (Osgood, 1962; Ward, 1989; Kydd, 2000) and an economic experiment demonstrating the concept (Kurzban et al., 2001).
The “stepping stones” explanation works even in two-player situations, such as bilateral negotiations, where anonymity is impossible. However, the increase in the size of cooperative gestures will still be limited by participants’ discount rates. These equilibria, and repeated-game equilibria in general, work better when the rewards for defection are quite stable over time. When they are more unstable - for example, in conditions of great uncertainty such as war or famine - they are less likely to work well, as they will be vulnerable to “long fraud”, in which self-interested participants cooperate only until there is a really large benefit from defecting. In such situations, good behaviour cannot be enforced by fear of punishment, so genuine good character becomes correspondingly more important.

This paper makes three contributions to the existing literature. First, it shows that conditional cooperators engaged in collective action, under conditions of uncertainty, face a problem of “political correctness” or “pandering” which makes it hard for them to trust each other. Second, it proposes that anonymity may mitigate the problem of pandering, and may therefore have a previously-ignored positive role in developing trust. Third, it suggests that certain institutions - in particular, some collective rituals and “symbolic” collective actions - may have developed in order to solve this problem by enforcing an anonymous contribution technology, which makes it less costly to test the average level of real cooperativeness in the group.

3 Examples of Anonymous Signaling

Dance has long been recognized as a way of enhancing group solidarity, and it has even been suggested that the development of this function was an essential step in human evolution (McNeill, 1997). This paper suggests an explanation for the specific form of music and dance - a form in which individuals’ levels of effort are masked although the average level is obvious. Whether this is true is an empirical question. Certainly, group singing can be judged by its overall qualities of volume, harmony and enthusiasm, while it is harder to discover exactly who is out of tune or keeping quiet. In communal dances, if one person makes a mistake, the entire group may lose the rhythm: whether the guilty party can be identified depends on many factors.

Armies are particularly likely to need institutions for anonymous signalling, since they need to signal both among themselves, and to their opponents, that their members are truly committed to victory. Military music has a long tradition. When infantry march in step, a single person who loses the rhythm will cause the whole unit to break step. Like singing and dancing, this is an anonymous, weakest-link signalling technology. Vegetius ([c. 400] 2004) emphasizes the importance of disciplined marching in his handbook for the training of recruits to the Roman legions: “troops who march in an irregular and disorderly manner are always in great danger of being defeated.” Modern armies also use uniforms. A large literature in sociology and social psychology is dedicated to the effects of uniform on behaviour. A key idea is that uniforms “deindividuate” their wearers, making them more likely to conform to group norms (Joseph and Alex 1972; Rafaeli and Pratt 1993). Watson (1973) found that cultures which used warpaint or other anonymizing techniques were more likely to fight to the death and take no prisoners. The deindividuation may originate from the sense that one has become anonymous. Anonymous individuals cannot use the battlefield as a stage to display their courage; this means that an opponent who seems aggressive probably really is so, and will not back down if challenged.

In a modern context, we have already mentioned Secret Santa. Similarly, a good way to judge the culture of an organization may be the size of donations to the “honesty box” for office coffee and biscuits. Many team sports have elements of group performance: it would be interesting to examine what effect modern sports statistics, which break down a team’s performance into the contributions of individual players, have on group morale. Many charities identify and thank donors who give a specified amount.
These doubtless serve to elicit donations by people in search of social respect and recognition (Andreoni and Petrie 2004). But this technique is far from universal. Particularly in small-scale settings, it is common to publicize the total donations made so far, without identify14

ing individual givers. This could be simply due to a lack of resources, or because no large-scale donations have been made. However, early anonymous donations may also possess greater signalling value in encouraging others to come forward. This is especially true when (1) the giving community is bounded, so that levels of donations will be informative about the level of altruism in the community; (2) potential givers are known to one another, so that non-anonymous donations had a strong effect. Noticeably, church collections are usually taken in a pocket that allows individuals to conceal the amount of their donations. (This could be for religious reasons, but the same churches often feature memorials to distinguished church benefactors, who are named.) (Soetevent, 2005) examines the effect of making church donations public, and finds that donations only increase during the second collection for external good causes, not during the first collection which is earmarked for the community itself. Recalling that donations are sequential as the pocket is handed round, this is consistent with our theory. Applause at the end of a performance gives an honest sign of collective appreciation, since contribution levels cannot be distinguished. The same goes for the cheers and jeers during Prime Minister’s Questions. An anecdote from Solzhenitsyn (1997) shows this mechanism grotesquely breaking down, and reveals much about the inherent tensions in anonymous mechanisms: At the conclusion of the conference, a tribute to Comrade Stalin was called for.... The small hall echoed with "stormy applause, rising to an ovation." For three minutes, four minutes, five minutes, the "stormy applause rising to an ovation" continued. But palms were getting sore and raised arms were already aching.... At the rear of the hall, which was crowded, they could of course cheat a bit, clap less frequently, less vigorously, not so eagerly—but up there with the presidium, where everyone could see 15

them?... Then, after eleven minutes, the director of the paper factory assumed a business-like expression and sat down in his seat. And, oh, a miracle took place! Where had the universal, uninhibited, indescribable enthusiasm gone? To a man, everyone else stopped dead and sat down. They had been saved!... That, however, was how they discovered who the independent people were.... That same night the factory director was arrested.

Here, although observers might not know who started the standing ovation, the first person to stop would be marked out, as the frightened participants knew. In fact, Stalin demanded absolute, enthusiastic support from all those around him. This is one extremity of a tradeoff, not explicitly modelled here, between gaining useful information from your subordinates' honesty, and allowing their dissent to become common knowledge. If a leader is too insecure, in reality or in his own imagination, then not only the identity, but even the existence, of dissenters must be concealed, and so even anonymous mechanisms will not be tolerated. In more normal politics, "groupthink" or political correctness (cf. Morris 2001) can be mitigated by allowing opinions to be expressed anonymously. The medieval court had the jester, an unimportant and unthreatening person who might safely relay court gossip to those in power. Modern parliaments have the principle of collective cabinet responsibility, allowing decisions to be taken which are unpopular with the parliamentary party or the country in general. Similarly, some firms allow employees to make anonymous suggestions or comments. On an everyday level, gossip, which is generally disapproved of, may serve a useful function: people can be told unwelcome truths on the (perhaps mutually convenient) assumption that the speaker is only repeating what others are saying. A famous anthropological example can be interpreted in a similar way.
Malinowski (2007) gives an account of the Kula Ring institution practiced among Pacific islanders. This is a system of ceremonial gift-giving in which armshells are exchanged for necklaces amongst a ring of island groups. The objects are given, not traded, with an expectation that a present of equal value will be returned at a later date. The consensus among modern anthropologists (Leach and Leach 1983) is that they serve to integrate the island societies: expeditions to other islands for the purpose of Kula giving also accomplish a great deal of ordinary trade. Thus, the ongoing exchange of Kula gifts serves as a reliable signal of the stability of the relationship between different groups, a matter of importance when the visiting party is far from home and at the mercy of its hosts. From our point of view, an interesting feature of the Kula is that necklaces are always given clockwise, armshells anticlockwise. This allows some excuse for a delay in reciprocating a Kula gift: for example, if one man has been given a fine necklace and owes an armshell of equal distinction, he may need to wait for it to arrive from the other direction. Malinowski gives an example of this kind of supposed delay causing tension between two Kula partners:

“Then Tovasana asked the visitors about one of the chiefs from the island of Kayleula (to the West of Kiriwina), and when he was going to give him a big pair of mwali. The man answered they do not know; to their knowledge that chief has no big mwali at present. Tovasana became very angry, and in a long harangue, lapsing here and there into the Gumasila language, he declared that he would never kula again with that chief, who is a topiki (mean man), who has owed him for a long time a pair of mwali as yotile (return gift), and who always is slow in making Kula.” (ibid. p. 271)

Without the possible excuse that an appropriate gift is unavailable, the pressure towards prompt repayment might be so great that nothing could be learned from the exchange. The rules of clockwise and anticlockwise circulation thus allow partners to conceal their unwillingness to repay, and create the ambiguity that makes Kula exchange so interesting (i.e. informative) to participants. Supporting this interpretation is Malinowski's observation that "a very fine article must be replaced by one of equivalent value, and not by several minor ones, though intermediate gifts may be given to mark time" (ibid. p. 96). (In Landa (1994), the circulation of gifts around the ring is given a signalling interpretation, but the reason for the ring structure is that "V2 [a Kula trader] cannot reciprocate V1's gift of a necklace with the same necklace.... because V1 may interpret this as a rejection of V1's friendship." This does not explain why V2 could not directly reciprocate V1's necklace with a different necklace of his own, i.e. why there is a need for necklaces to circulate clockwise and armshells anticlockwise.) Finally, since the nineteenth century the ballot in Britain and the US has been secret. For political parties, this had the disadvantage that voters could no longer be bribed into supporting them. However, there was also an advantage: a party's vote share became a reliable signal of public support for its policies. As one function of voting is to signal one's support for particular policies (Smirnov and Fowler 2007; Stigler 1972), both to the party and to other social actors, there may be an advantage to political parties in making that signal clearer. In this spirit, Londregan and Vindigni (2006) model voting as a reliable signal of willingness to fight in a war.

3.1 Signaling to outsiders

The disadvantage of keeping individuals' effort levels anonymous is that some information is lost. Observers learn only about the average level of contribution to a public good, not about who contributes. When a symbolic collective display is meant to inform people outside the group, this disadvantage no longer matters. For example, if a war chant is meant to impress an opposing group with the readiness of participants to fight, the opponents will want to know how many fanatical warriors they are facing, and be less interested in exactly who they are; in any case, the singing group has no incentive to signal this information. (There is, as before, an individual incentive to appear more determined than others, in order to deter attacks on oneself; as before, this provides a reason for the group to use an institution that provides anonymity.) Thus, anonymity-preserving institutions are especially well-suited to signaling to outsiders. Hagen and Bryant (2003) discuss song and music as an example of "coalitional signaling".

4 A Model of Anonymous Signaling

Suppose there are n > 2 players who must choose whether to contribute (c) or defect (d) in a public goods game. Players are "socialized" with probability η and "antisocial" otherwise; types are private information and are drawn independently. Payoffs are given below for each type, when k of the other players contribute:

Socialized:
  c:  2(k + 1)/n − 4 if k < n − 1;   2(k + 1)/n − 4 + 2 = 0 if k = n − 1
  d:  2k/n − 3

Antisocial:
  c:  2(k + 1)/n − 4
  d:  2k/n − 3

In words, everybody loses 3, and can choose to contribute one extra unit which is doubled and shared equally among the group. If nobody cooperates, all players get −3. If all players cooperate, antisocial players get −2 = −4 + 2. Socialized players get an extra utility bonus of 2, and thus break even, if and only if everybody contributes. This could reflect a psychological reward for correctly following a social norm of cooperation. Limiting the reward to cases where everyone cooperates is purely a simplification.

There are two Bayesian Nash equilibria of this game, taken on its own. In the first, everyone plays d. For antisocial types, d is a dominant strategy as 2/n < 1. For socialized types, unless all other players contribute, defection gives strictly higher utility, so if all other players are playing d, d is a best response.

The more interesting equilibrium has socialized types playing c and antisocial types playing d. This is a separating equilibrium: different types reveal themselves by choosing different actions. Define

\[
B(a) \equiv \binom{n-1}{a} \eta^{\,n-1-a} (1 - \eta)^a
\]

as the probability of having a antisocial types among n − 1 players, from the binomial distribution. The condition for the existence of a separating equilibrium is that socialized types gain higher expected utility from playing c when other socialized types are playing c, thus:

\[
\eta^{n-1} \times 0 + \sum_{a=1}^{n-1} B(a)\left[\frac{2(n-a)}{n} - 4\right] > \sum_{a=0}^{n-1} B(a)\left[\frac{2(n-a-1)}{n} - 3\right]
\]
\[
\Leftrightarrow \sum_{a=1}^{n-1} B(a)\left(\frac{2}{n} - 1\right) > B(0)\left(\frac{2(n-1)}{n} - 3\right)
\]
\[
\Leftrightarrow (1 - \eta^{n-1})\left(\frac{2}{n} - 1\right) > \eta^{n-1}\left(\frac{2(n-1)}{n} - 3\right)
\]
\[
\Leftrightarrow \eta^{n-1} > \frac{1}{2} - \frac{1}{n}. \qquad (1)
\]

This holds for all values of η above some cutpoint η* < 1. As n increases, η* → 1: when there are more participants it is more likely that at least one will be a defector, so that there is no separating equilibrium unless players are almost certainly socialized.
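The stage-game payoffs and the cutpoint from condition (1) are easy to check numerically. The following sketch is ours, not part of the paper; the function names are our own, and the threshold is simply η* = (1/2 − 1/n)^(1/(n−1)):

```python
# Illustrative check of the stage game and of condition (1); all names are ours.

def payoff(n, ptype, action, k):
    """Payoff when k of the other n-1 players contribute."""
    if action == 'c':
        u = 2 * (k + 1) / n - 4
        if ptype == 'socialized' and k == n - 1:
            u += 2  # norm-following bonus when everybody contributes
        return u
    return 2 * k / n - 3  # defect

def eta_star(n):
    """Cutpoint from (1): a separating equilibrium needs eta^(n-1) > 1/2 - 1/n."""
    return (0.5 - 1.0 / n) ** (1.0 / (n - 1))

n = 10
print(payoff(n, 'antisocial', 'd', 0))      # universal defection: -3.0
print(payoff(n, 'antisocial', 'c', n - 1))  # all cooperate, antisocial: -2.0
print(payoff(n, 'socialized', 'c', n - 1))  # all cooperate, socialized: 0.0
print([round(eta_star(m), 3) for m in (3, 10, 50)])  # rises towards 1 with n
```

As the text notes, the cutpoint climbs towards 1 as the group grows: with many players, even a small chance of one antisocial type destroys the separating equilibrium.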

4.1 A first round with visible play

If η < η*, socialized types will find it too risky to contribute and the public good will not be provided. It would be helpful if there were some way to find out whether there are in fact any defectors: if there are none, then socialized types will be willing to contribute to the public good so long as others do so too. One option would be to play a first round of the game, with the same structure as before, but with payoffs multiplied by some small ε > 0. (Taking the payoffs as they are, players would like ε to be small, reflecting the fact that ritual cooperation has costs.)

Figure 1: Minimum values of η for different group sizes (no first round)

Suppose that it is impossible to exclude defectors from the second round of play. Then, it makes no difference whether defection is anonymous or not. Antisocial types may, however, prefer to contribute simply in order to get the benefit from freeriding in the second round. So the first round with visible play may not be informative. But if there is more than one antisocial type, and one antisocial type is expected to defect, the other one will do the same, as universal defection will be guaranteed in the second round whether one or two antisocial types defect. (In other words, the antisocial types suffer from a "moral hazard in teams" problem: cf. Holmstrom (1982).) So antisocial types will need to coordinate on first round contribution. This coordination will be more difficult because the number and identities of the antisocial are not public knowledge.

Formally, suppose that all antisocial players defect in both rounds, and that socialized players always contribute in the first round, and contribute in the second round if and only if nobody has defected. Second round play is clearly in equilibrium. An antisocial player, choosing whether to defect or contribute in round 1, faces two possibilities: he is the only antisocial player, and will cause universal defection in round 2 by defecting now; or he is not, and his action will make no difference in round 2. The condition for defection in round 1 to be a best response is

\[
\eta^{n-1}\left[\varepsilon\left(\frac{2(n-1)}{n} - 3\right) - 3\right] + \sum_{a=1}^{n-1} B(a)\left[\varepsilon\left(\frac{2(n-a-1)}{n} - 3\right) - 3\right] >
\]
\[
\eta^{n-1}\left[\varepsilon(-2) + \frac{2(n-1)}{n} - 3\right] + \sum_{a=1}^{n-1} B(a)\left[\varepsilon\left(\frac{2(n-a)}{n} - 4\right) - 3\right]. \qquad (2)
\]

The first term on the left gives the round 1 payoff from defection when all others cooperate, plus the round 2 payoff from all defecting. The second term gives the round 1 payoff when n − a − 1 cooperative types cooperate, plus the round 2 payoff from all defecting. On the right, the first term gives the round 1 payoff from cooperating when all others do, plus the round 2 payoff from defecting when all others cooperate (as in this equilibrium all cooperating in the first round results in all socialized types cooperating in the second round). The second term gives the round 1 payoff from cooperating when at least one other player defects, plus the payoff from universal defection in round 2. The above simplifies to:

\[
\eta^{n-1}\left[\varepsilon\left(\frac{2(n-1)}{n} - 1\right) - \frac{2(n-1)}{n}\right] > \sum_{a=1}^{n-1} B(a)\,\varepsilon\left(\frac{2(n-a)}{n} - \frac{2(n-a-1)}{n} - 1\right)
\]
\[
\Leftrightarrow \eta^{n-1}\left[\varepsilon\left(\frac{2(n-1)}{n} - 1\right) - \frac{2(n-1)}{n}\right] > (1 - \eta^{n-1})\,\varepsilon\left(\frac{2}{n} - 1\right)
\]
\[
\Leftrightarrow \eta^{n-1} \le \varepsilon\,\frac{1 - 2/n}{2 - 2/n},
\]

which is false for small enough ε, true for ε = 1 and n large, and false for any ε < 1 when n is small and η is close to 1. A socialized player faces the same two possibilities in round 1 but has slightly different incentives. For her to contribute we require

\[
\eta^{n-1}\left[\varepsilon\left(\frac{2(n-1)}{n} - 3\right) - 3\right] + \sum_{a=1}^{n-1} B(a)\left[\varepsilon\left(\frac{2(n-a-1)}{n} - 3\right) - 3\right] \le
\]
\[
\eta^{n-1}\left[\varepsilon(0) + \frac{2(n-1)}{n} - 3\right] + \sum_{a=1}^{n-1} B(a)\left[\varepsilon\left(\frac{2(n-a)}{n} - 4\right) - 3\right],
\]

which is similar to (2) but with a payoff of 0 rather than −2 from cooperating when all others cooperate. This simplifies to

\[
\eta^{n-1} > \varepsilon\,\frac{1 - 1/n}{3 + \varepsilon(2 + 1/n)}.
\]

Thus to sustain the equilibrium described, in which socialized types contribute and antisocial types do not, we require

\[
\eta^{n-1}\,\frac{3 + \varepsilon(2 + 1/n)}{1 - 1/n} > \varepsilon > \eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}. \qquad (3)
\]

It is easy to check that these inequalities can always be satisfied by some ε for any n, as the left hand side is strictly greater than the right hand side: intuitively, the socialized type gains more from contributing than the antisocial type.
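As a quick numerical check (our own sketch; the function name and the parameter values are not from the paper), we can verify that an ε at the lower bound of (3) also satisfies the socialized types' condition, so the window is non-empty:

```python
# Does some epsilon sustain the section 4.1 equilibrium for given n, eta?
# Lower bound: antisocial types must prefer to defect in round 1;
# the other condition: socialized types must prefer to contribute.

def sustains(n, eta, eps):
    lower = eta ** (n - 1) * (2 - 2 / n) / (1 - 2 / n)  # right side of (3)
    social_ok = eta ** (n - 1) > eps * (1 - 1 / n) / (3 + eps * (2 + 1 / n))
    return eps >= lower and social_ok

n, eta = 10, 0.8
eps_lo = eta ** (n - 1) * (2 - 2 / n) / (1 - 2 / n)
print(round(eps_lo, 3))              # smallest epsilon deterring antisocial mimicry
print(sustains(n, eta, eps_lo))      # True: the window in (3) is non-empty here
print(sustains(n, eta, eps_lo / 2))  # False: too cheap, antisocial types pool
```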


4.1.1 Benefits from the first round

The first round is itself costly, so its effect in detecting cheaters will only be beneficial if ε is small. Taking

\[
\varepsilon = \eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n},
\]

the smallest possible value of ε that allows round 1 antisocial types to be detected, we calculate the benefit of having the round. First, assume η < η*. Then without round 1, all players defect and get a payoff of −3. The utility to a cooperator when round 1 exists is:

\[
\eta^{n-1} \times 0 + \sum_{a=1}^{n-1} B(a)\left[\varepsilon\left(\frac{2(n-a)}{n} - 4\right) - 3\right] = \sum_{a=1}^{n-1} B(a)\left[\eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}\left(\frac{2(n-a)}{n} - 4\right) - 3\right]
\]

and the first round is worthwhile if

\[
\sum_{a=1}^{n-1} B(a)\left[\eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}\left(\frac{2(n-a)}{n} - 4\right) - 3\right] > \sum_{a=0}^{n-1} B(a)(-3)
\]
\[
\Leftrightarrow \sum_{a=1}^{n-1} B(a)\,\eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}\left(\frac{2(n-a)}{n} - 4\right) - 3\sum_{a=1}^{n-1} B(a) > -3
\]
\[
\Leftrightarrow \sum_{a=1}^{n-1} B(a)\,\eta^{n-1}\,\frac{2 - 2/n}{1 - 2/n}\left(\frac{2(n-a)}{n} - 4\right) > -3\eta^{n-1}.
\]

This will always hold for η high enough, since then the probabilities B(a) with a > 0 become very low. TODO: show that this holds for η* if we increase payoffs in the first round game (e.g. add two so that the default "non-psychological" payoff is 0, satisfying the budget constraint).

We can rule out some other equilibria as follows. First, suppose that all types contribute in round one. Then the equilibrium belief after observing all players contributing will be just that η of the players are socialized. If so, for η < η* nobody will contribute in round two; as contributing in round one guarantees the worst outcome in round two, for η < η* again no player will contribute in round one, contradicting our assumption. A similar argument shows there are no equilibria in which only antisocial types contribute in round one. If no types contribute in round one, then again no types contribute in round two, as the equilibrium prior remains at η. This is an equilibrium. We examine its robustness as follows: suppose that off equilibrium, after a single player deviates and contributes in round 1, s/he is believed to be socialized with certainty. (If contribution increases the probability of socialized types contributing in round two, then socialized types gain more in expectation from the deviation, as there is a possibility that all players are socialized; we can then apply the Intuitive Criterion and conclude that this belief survives the refinement. If contributing does not increase the probability of round two contributions, then this belief and all others survive the refinement. We seek an ε such that the belief survives the refinement and guarantees equilibrium.) TODO: refinement.
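The "worthwhile" condition above is easy to check numerically for particular parameters. In this sketch (ours; names and parameter choices are illustrative), we set ε at its minimum and compare the cooperator's expected utility with the no-round-1 payoff of −3:

```python
from math import comb

# Is a visible round 1 worth its cost? (Section 4.1.1; our own numerical check.)

def B(n, eta, a):
    """Probability of a antisocial types among n-1 players."""
    return comb(n - 1, a) * eta ** (n - 1 - a) * (1 - eta) ** a

def worthwhile(n, eta):
    eps = eta ** (n - 1) * (2 - 2 / n) / (1 - 2 / n)  # minimal detecting epsilon
    u_coop = sum(B(n, eta, a) * (eps * (2 * (n - a) / n - 4) - 3)
                 for a in range(1, n))  # the a = 0 term contributes payoff 0
    return u_coop > -3

print(worthwhile(10, 0.95))  # True: few defectors expected, round 1 pays off
print(worthwhile(10, 0.5))   # False: round 1 costs more than it reveals
```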

4.2 The commitment problem

Thus, unless antisocial types are vanishingly rare, it will be possible to detect them by playing a less expensive "toy" or "symbolic" round of the public goods game. Armchair empiricism confirms that, for example, we expect our friends to help us out in small ways, and that violating this norm can have consequences out of proportion to the direct benefit of the help. The problem with this setup is that we have assumed that the public goods game is non-excludable. Thus, when antisocial types are revealed in round 1, the consequence is universal defection in round two. In many cases, it would be more reasonable to assume that antisocial types are excluded from the benefits of other players' contributions, or punished directly. In this way, some contributions can still be sustained even in the presence of antisocial players. But of course, this gives the antisocial types back their incentive to mimic the socialized by contributing in round 1. We focus on exclusion mechanisms to avoid having to model punishment explicitly. We assume that after round 1, every player writes down a list of players, which must include himself. All those players who share the same list G ⊂ {1, . . . , n} then play the round 2 game, with payoffs as shown above except that n is everywhere replaced by n′ = |G|, and payoffs when n′ < 3 are always zero. As the game is defined, there is no inherent advantage to having a particular sized group. We focus on the following potential equilibrium: after round 1, all players who contributed write down G* = {all players who contributed}. Defectors write down anything: as their partners in any group will be defectors, they get 0. In round 2, players contribute if and only if they are (1) socialized and (2) in a group with only round 1 contributors. In round 1, the socialized contribute and the antisocial defect. Solving the game backwards, clearly round 2 strategy is optimal, as contributions take place always and only in groups where every player contributes. When players write down their list, round 1 defectors cannot join the contributors' group, and a round 1 contributor who chooses anything but G* will be in a group with only defectors. So the given strategy is optimal. But consider round 1. (In what follows, payoffs appear shifted by +3 relative to the table above, so that universal defection yields 0; this renormalization does not affect incentives.) Now, the payoff from contributing for an antisocial type is

\[
\sum_{a=0}^{n-1} B(a)\left[\left(\frac{2(n-a)}{n} - 1\right)\varepsilon + \frac{2(n-a-1)}{n-a}\right]
\]

where the second term in brackets gives the payoff from defecting in round 2, in a group with n − a − 1 contributors. The payoff from defecting is

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon
\]

and the relative gain from contributing is

\[
\sum_{a=0}^{n-1} B(a)\left[\left(\frac{2}{n} - 1\right)\varepsilon + \frac{2(n-a-1)}{n-a}\right] = \left(\frac{2}{n} - 1\right)\varepsilon + \sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n-a}.
\]

In equilibrium this must be negative, which will require ε to be large. Indeed, for n = 3 and η > 3/2 − √7/2 ≈ 0.177, we require ε > 1, and as n → ∞ we require ε > 2. (The relative gain tends to −ε + 2, since 2(n − a − 1)/(n − a) → 2 for the values of a where B(a) is concentrated.) In general, then, the round 1 game will be more expensive than it is worth if players can be excluded from round 2.

Could there be a different equilibrium which avoids this problem? Suppose after round one, every player writes down G = {1, . . . , n}. Then we are back in the earlier game. If ε is calibrated to allow antisocial types to deviate in round 1, then once more round 2 cooperation can sometimes be sustained even if η < η*. Also, no individual has an incentive to deviate from the action of writing down G = {1, . . . , n}, as anyone who does will receive a payoff of 0 in round 2. However, this equilibrium is not coalition-proof. If we assume that after round 1, players can make agreements so long as these are self-enforcing, then whenever some players have defected in round 1, the G*-coalition of all contributing players will wish to deviate to writing down G* and excluding defectors, then playing c in round 2. Clearly the same problem will arise for any G which includes round 1 defectors; as long as there are at least 2 round 1 contributors, these will have an incentive to deviate as a coalition.
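Under our reading of the payoffs above, the required ε can be computed directly; this sketch (ours, with illustrative names) reproduces the n = 3 threshold η = 3/2 − √7/2 and the large-n limit of 2:

```python
from math import comb

# Required epsilon to deter antisocial mimicry when exclusion is possible
# (section 4.2; our own numerical check of the n = 3 and large-n claims).

def B(n, eta, a):
    return comb(n - 1, a) * eta ** (n - 1 - a) * (1 - eta) ** a

def eps_required(n, eta):
    """Smallest epsilon making the relative gain from contributing negative."""
    round2_gain = sum(B(n, eta, a) * 2 * (n - a - 1) / (n - a) for a in range(n))
    return round2_gain / (1 - 2 / n)

eta_c = 1.5 - 7 ** 0.5 / 2               # ~0.177, the n = 3 threshold in the text
print(round(eps_required(3, eta_c), 6))  # 1.0: exactly where epsilon > 1 begins
print(eps_required(1000, 0.5))           # close to 2 for large n
```

For n = 3 the formula reduces to ε > 6η − 2η², which crosses 1 precisely at η = 3/2 − √7/2.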

4.3 Anonymous play in the first round

To avoid these problems, an institution might develop in which the toy round 1 game is played anonymously, so that the total number of defectors, but not their identities, is known. The anonymizing technology could take a number of forms, as discussed in the examples above.

The whole game is now as follows: a first round of the contribution game is played with payoffs multiplied by ε; after the round players observe the total number of contributors and defectors, but not their identities; they then each write down their list of players, including themselves, and those who share the same list play the final round of the contribution game. We examine the following equilibrium. In round 1, socialized types contribute and antisocial types defect. After round 1, if everyone has contributed, everyone writes down G = {1, . . . , n} and contributes in the final game. If there are d > 0 round 1 defections, players split into triples {1, 2, 3}, {4, 5, 6}, . . .. We assume that n is divisible by three. (The triples maximize the probability of having all socialized types in a group and thus getting the maximum payoff of 3. Any larger group would not be coalition proof; when everyone has contributed, group size is not important. A more realistic model would have returns to scale from the public goods technology; if so, players would balance the risk of having defectors against the benefit of more players.) If everyone has contributed in round 1, then socialized types contribute in round 2. Otherwise, socialized types contribute iff

\[
\frac{d^2}{n^2}\left(-\frac{1}{3}\right) + \frac{2d(n-d)}{n^2}\cdot\frac{1}{3} + \frac{(n-d)^2}{n^2}\cdot 3 > \frac{d^2}{n^2}\cdot 0 + \frac{2d(n-d)}{n^2}\cdot\frac{2}{3} + \frac{(n-d)^2}{n^2}\cdot\frac{4}{3} \qquad (4)
\]
\[
\Leftrightarrow d^2(-1) + 2d(n-d)(-1) + (n-d)^2 \cdot 5 > 0
\]
\[
\Leftrightarrow 6d^2 - 12dn + 5n^2 > 0
\]
\[
\Leftrightarrow d < \left(1 - \frac{\sqrt{6}}{6}\right) n,
\]

where d/n is the posterior probability that a randomly selected player is an antisocial type. This assumes that only antisocial types defect in round 1. This requires, for antisocial

types:

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon + \sum_{a=0}^{D-1} B(a)R(a) > \sum_{a=0}^{n-1} B(a)\left(\frac{2(n-a)}{n} - 1\right)\varepsilon + \sum_{a=0}^{D} B(a)R(a)
\]
\[
\Leftrightarrow \sum_{a=0}^{n-1} B(a)\left(1 - \frac{2}{n}\right)\varepsilon > B(D)R(D)
\]
\[
\Leftrightarrow \varepsilon > \frac{B(D)R(D)}{1 - 2/n}, \qquad (5)
\]

where D is the largest integer less than (1 − √6/6)n, and R(a) is the expected second round benefit from defecting when there are a ≤ D other antisocial types:

\[
R(a) = \frac{a^2}{n^2}\cdot 0 + \frac{2a(n-a)}{n^2}\cdot\frac{2}{3} + \frac{(n-a)^2}{n^2}\cdot\frac{4}{3}.
\]

Socialized types must prefer to contribute in round 1, which requires that

\[
\sum_{a=0}^{n-1} B(a)\,\frac{2(n-a-1)}{n}\,\varepsilon + \sum_{a=0}^{D-1} B(a)S(a) \le \eta^{n-1}(3\varepsilon) + \sum_{a=1}^{n-1} B(a)\left(\frac{2(n-a)}{n} - 1\right)\varepsilon + \sum_{a=0}^{D} B(a)S(a)
\]
\[
\Leftrightarrow \sum_{a=1}^{n-1} B(a)\left(1 - \frac{2}{n}\right)\varepsilon \le \eta^{n-1}\left(3 - \frac{2(n-1)}{n}\right)\varepsilon + B(D)S(D)
\]
\[
\Leftrightarrow \varepsilon \le \frac{B(D)S(D)}{(1 - \eta^{n-1})(1 - 2/n) + \eta^{n-1}\left(\frac{2(n-1)}{n} - 3\right)}, \qquad (6)
\]

where S(a) is the expected second round benefit (to a socialized type) from cooperating when there are a ≤ D other antisocial types:

\[
S(a) = \frac{a^2}{n^2}\left(-\frac{1}{3}\right) + \frac{2a(n-a)}{n^2}\cdot\frac{1}{3} + \frac{(n-a)^2}{n^2}\cdot 3.
\]

A sufficient condition for (5) and (6) to hold simultaneously for some ε is that S(D) > R(D), or

\[
\frac{D^2}{n^2}\left(-\frac{1}{3}\right) + \frac{2D(n-D)}{n^2}\cdot\frac{1}{3} + \frac{(n-D)^2}{n^2}\cdot 3 > \frac{D^2}{n^2}\cdot 0 + \frac{2D(n-D)}{n^2}\cdot\frac{2}{3} + \frac{(n-D)^2}{n^2}\cdot\frac{4}{3}.
\]

This condition is simply (4) with d = D, and is satisfied by the definition of D. For the right values of ε this play is therefore an equilibrium.

The benefit of the anonymous play in round 1 is twofold. First of all, it allows cooperation when no players have defected, even though the public good is excludable. Second, it unlocks some of the benefit of this excludability, by allowing some cooperation even in the face of high rates of initial defection. The ability to split into small groups maximizes the chance that socialized types will face only other contributors and get their internal payoff from everyone contributing.
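The cutoff D and the ε-window defined by (5) and (6) can be checked numerically. The sketch below is our own code; the parameter choices n = 9, η = 0.6 are illustrative. It verifies the triple-cooperation threshold d < (1 − √6/6)n and confirms that the window is non-empty:

```python
from math import comb

# Numerical check of section 4.3: the defection cutoff D and the
# epsilon-window given by conditions (5) and (6). All names are ours.

def B(n, eta, a):
    return comb(n - 1, a) * eta ** (n - 1 - a) * (1 - eta) ** a

def R(n, a):  # antisocial type's expected round 2 payoff in a random triple
    return (2 * a * (n - a) / n**2) * (2 / 3) + ((n - a)**2 / n**2) * (4 / 3)

def S(n, a):  # socialized type's expected round 2 payoff from cooperating
    return (a**2 / n**2) * (-1 / 3) + (2 * a * (n - a) / n**2) * (1 / 3) \
           + ((n - a)**2 / n**2) * 3

def eps_window(n, eta):
    D = int((1 - 6 ** 0.5 / 6) * n)  # largest integer below the threshold
    lo = B(n, eta, D) * R(n, D) / (1 - 2 / n)                          # from (5)
    hi = B(n, eta, D) * S(n, D) / ((1 - eta ** (n - 1)) * (1 - 2 / n)
                                   + eta ** (n - 1) * (2 * (n - 1) / n - 3))  # (6)
    return D, lo, hi

D, lo, hi = eps_window(9, 0.6)
print(D)        # 5: cooperation in triples survives up to 5 of 9 defectors
print(lo < hi)  # True: some epsilon satisfies both (5) and (6)
# The quadratic threshold: 6d^2 - 12dn + 5n^2 > 0 iff d/n < 1 - sqrt(6)/6 ~ 0.592.
```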

5 Experiment

The key features of the model are:

1. Some players are conditional cooperators ("socialized types"), others ("antisocial types") are not, and players' types are not known to each other.

2. Therefore players would like to punish antisocial types, or at least exclude them from interactions when payoffs are large.

3. Play when payoffs are low may be taken as informative about players' types.

4. Due to signalling, play in such games will be less informative when players' identities are known.

5. Therefore play in low-payoff rounds will have a greater effect on play in high-payoff rounds when the low-payoff rounds are anonymized.

6. Low-payoff rounds may be deliberately created by societies as signaling devices ("rituals") and these may then be anonymous.


The last point cannot easily be tested in the laboratory, but we can test the other points in various combinations. (For example, we could induce conditional cooperator preferences by different payments in the lab; or we could theorize that players have these preferences even if monetary payoffs do not induce them.) We are particularly interested in testing whether this theory holds in public goods games, which are thought to model many kinds of voluntary action that benefit the community at the individual’s expense. We therefore test points 1-5 in a public goods game framework.

5.1 Setup

Groups of 6 players play a public goods game in which they are endowed with 10 points. They may either keep these points, or contribute them to a community pot, in which case the points are doubled and shared equally between players. The social optimum is thus for all players to contribute all their points; the Nash equilibrium with material payoffs is for no points to be contributed. Each point is worth 20 Euro cents. Before the main game, 3 randomly chosen players of the 6 play an initial public goods game, again being endowed with 10 points and being allowed to keep or contribute them, with contributed points doubled and shared equally. By backwards induction, the Nash equilibrium with material payoffs is still for no points to be contributed in either round. Points in the initial round are worth only 10 Euro cents. These three players are randomly numbered 1, 2 and 3, and are collectively known as the "round 1 players", while the others are the "round 2 only players". There are two treatments: Anonymous and Identified. In the Anonymous treatment, the 3 donations in the first round are revealed to all 6 players before the second round begins. However, players have no way of knowing which player (of numbers 1, 2 and 3) made which donation. In the Identified treatment, the 3 donations in the first round, and the number of each player making the donation, are revealed to all 6 players before the second round begins. After round 1 donations have been revealed, a randomly selected round 2 only player is offered the chance to spend money to "punish" any or all of the round 1 players (identified as 1, 2 and 3). For each 10c s/he spends on punishment, the target round 1 player loses 30c. The result of this punishment is revealed to the relevant players after round 2 has been played. To sum up, the sequence of play is as follows: round 1 contribution decisions made ⇒ round 1 contribution decisions revealed, with or without player numbers ⇒ punishment decisions made ⇒ round 2 contribution decisions made ⇒ total round 2 payoffs revealed, minus any punishments.
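The material incentives in this design can be made concrete with a small payoff calculator (our own illustration; the function name is not from the paper). With 6 players and a doubled pot, each contributed point returns only 2/6 ≈ 0.33 points to the contributor, so keeping every point is the material best response:

```python
# Payoffs in the lab public goods game (our illustrative sketch).
# Each player keeps (10 - g_i) points and receives 2 * sum(g) / 6 from the pot.

def points(contributions, i):
    return 10 - contributions[i] + 2 * sum(contributions) / 6

print(points([10] * 6, 0))                 # all contribute fully: 20 points each
print(points([0] * 6, 0))                  # nobody contributes: 10 points each
print(points([0, 10, 10, 10, 10, 10], 0))  # free rider among 5 full contributors
# The free rider earns more than 20, illustrating the incentive to defect.
```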

5.2 Results

6 Conclusion

The market mechanism enables our needs to be catered for by the "butcher's self-interest". In primitive societies without large-scale markets, and also in parts of modern societies that the market does not reach, for example inside firms or other cooperative structures, others' character and intentions towards us may be vital for our success and even survival. Social processes of character formation become correspondingly important. It is equally important to find out whether these processes have succeeded. This in turn gives individuals incentives to conceal their true character. Some kinds of ritual can be viewed as institutions which provide a forgiving environment for people to act in accordance with their true character, by obscuring their identity behind the screen of the collective. Whether this is actually the function of any particular ritual or institution, and whether this function explains the institution's persistence, are empirical questions to be answered case by case. However, some ancient cultural forms, in particular song, seem especially well suited to preserving the anonymity of participants and shirkers. So there are grounds for optimism that empirical evidence will be forthcoming.

References

Acemoglu, D. 2007. "Incentives in Markets, Firms and Governments." Journal of Law, Economics and Organization pp. 1–34.

Andreoni, J. and R. Petrie. 2004. "Public goods experiments without confidentiality: a glimpse into fund-raising." Journal of Public Economics 88:1605–1623.

Anshel, A. and D. A. Kipper. 1988. "The influence of group singing on trust and cooperation." Journal of Music Therapy 25:145–155.

Bird, Rebecca Bliege and Eric Alden Smith. 2005. "Signaling Theory, Strategic Interaction, and Symbolic Capital." Current Anthropology 46:221–248. URL: http://www.journals.uchicago.edu/doi/abs/10.1086/427115

Chwe, M. S. Y. 2001. Rational Ritual: Culture, Coordination, and Common Knowledge. Princeton University Press.

Ellison, G. 1994. "Cooperation in the Prisoner's Dilemma with Anonymous Random Matching." Review of Economic Studies 61:567–88.

Fischbacher, U., S. Gaechter and E. Fehr. 2001. "Are people conditionally cooperative? Evidence from a public goods experiment." Economics Letters 71:397–404.

Fudenberg, D. and J. Tirole. 1991. Game Theory. MIT Press.

Gersbach, Hans and Volker Hahn. 2008. "Should the individual voting records of central bankers be published?" Social Choice and Welfare 30:655–683. URL: http://dx.doi.org/10.1007/s00355-007-0259-7

Hagen, E. H. and G. A. Bryant. 2003. "Music and Dance as a Coalition Signaling System." Human Nature 14:21–51.

Holmstrom, Bengt. 1982. "Moral Hazard in Teams." The Bell Journal of Economics 13:324–340. URL: http://www.jstor.org/stable/3003457

Holmström, Bengt. 1999. "Managerial Incentive Problems: A Dynamic Perspective." The Review of Economic Studies 66:169–182. URL: http://www.jstor.org/stable/2566954

Joseph, N. and N. Alex. 1972. "The Uniform: A Sociological Perspective." American Journal of Sociology 77:719.

Kandori, M. 1992. "Social Norms and Community Enforcement." Review of Economic Studies 59:63–80.

Kreps, D. M., P. Milgrom, J. Roberts and R. Wilson. 2001. "Rational Cooperation in the Finitely Repeated Prisoners' Dilemma." Readings in Games and Information.

Kurzban, Robert, Kevin McCabe, Vernon L. Smith and Bart J. Wilson. 2001. "Incremental Commitment and Reciprocity in a Real-Time Public Goods Game." Personality and Social Psychology Bulletin 27:1662–1673. URL: http://psp.sagepub.com/cgi/content/abstract/27/12/1662

Kydd, Andrew. 2000. "Overcoming Mistrust." Rationality and Society 12:397–424. URL: http://rss.sagepub.com/cgi/content/abstract/12/4/397

Landa, J. T. 1994. Trust, Ethnicity, and Identity: Beyond the New Institutional Economics of Ethnic Trading Networks, Contract Law, and Gift-Exchange. University of Michigan Press.

Leach, J. W. and E. R. Leach. 1983. The Kula: New Perspectives on Massim Exchange. Cambridge University Press.

Levy, G. 2007a. "Decision Making in Committees: Transparency, Reputation, and Voting Rules." The American Economic Review 97:150–168.

Levy, G. 2007b. "Decision-Making Procedures for Committees of Careerist Experts." American Economic Review 97:306–310.

Levy, Gilat and Ronny Razin. 2007. A Theory of Religious Organizations. URL: http://personal.lse.ac.uk/RAZIN/religion.pdf

Londregan, J. and A. Vindigni. 2006. "Voting as a Credible Threat." URL: http://www.princeton.edu/~pegrad/papers/londvind.pdf

Malinowski, Bronislaw. 2007. Argonauts of the Western Pacific. Read Books.

McNeill, William H. 1997. Keeping Together in Time.

Morris, S. 2001. "Political Correctness." Journal of Political Economy 109:231–265.

Osgood, C. E. 1962. An Alternative to War Or Surrender. University of Illinois Press.

Prat, A. 2005. "The Wrong Kind of Transparency." The American Economic Review 95:862–877.

Rafaeli, A. and M. G. Pratt. 1993. "Tailored Meanings: On the Meaning and Impact of Organizational Dress." The Academy of Management Review 18:32–55.

Ruffle, Bradley J. and Richard H. Sosis. 2003. Does it Pay to Pray? Evaluating the Economic Return to Religious Ritual. SSRN eLibrary. URL: http://ssrn.com/paper=441285

Simon, H. A. 1990. "A mechanism for social selection and successful altruism." Science 250:1665–1668.

Smirnov, O. and J. H. Fowler. 2007. "Policy-Motivated Parties in Dynamic Political Competition." Journal of Theoretical Politics 19:9.

Soetevent, A. R. 2005. "Anonymity in giving in a natural context: a field experiment in 30 churches." Journal of Public Economics 89:2301–2323.

Solzhenitsyn, Aleksandr I. 1997. The Gulag Archipelago, 1918–1956: An Experiment in Literary Investigation. Basic Books.

Sosis, R. and B. J. Ruffle. 2003. "Religious Ritual and Cooperation: Testing for a Relationship on Israeli Religious and Secular Kibbutzim." Current Anthropology 44:713–722.

Stigler, George J. 1972. "Economic competition and political competition." Public Choice 13:91–106. URL: http://dx.doi.org/10.1007/BF01718854

Vegetius, M. D. 2004. Epitoma rei militaris. Oxford: Oxford University Press.

Ward, H. 1989. "Testing the Waters: Taking Risks to Gain Reassurance in Public Goods Games." Journal of Conflict Resolution 33:274–308.

Watson, R. I. 1973. "Investigation into deindividuation using a cross-cultural survey technique." Journal of Personality and Social Psychology 25:342–5. PMID: 4705668.

Weber, M. 1946. The Protestant Sects and the Spirit of Capitalism. New York: Oxford.