Social Accountability to Contain Corruption

A. Lambert-Mogiliansky∗†
Paris School of Economics

March 11, 2015

Abstract

In this paper we investigate the welfare properties of simple reappointment rules aimed at holding public officials accountable and monitoring their activity. Public officials allocate budget resources to various activities delivering public services to citizens. Officials have discretion over the use of resources, and can divert some of them for private ends. Due to a liability constraint, zero diversion can never be obtained in all states. The optimal reappointment mechanism under complete information is shown to exhibit some leniency. In the absence of information, a rule with random verification in a pre-announced subset is shown to be optimal. Surprisingly, most common rules make little use of hard information about service delivery when it is available. By way of contrast, requesting that the public official defend his record publicly can be very useful if service users can refute false claims with cheap-talk complaints: the first-best complete-information outcome can be approached.

JEL: D73, D81, D86, H11.

1 Introduction

We typically do not observe high-powered incentive contracts for public officials. Most often the official receives a fixed salary, and incentive transfers are rare. Instead, the decision as to whether to retain the official in their job is used to discipline public officials. Politicians can be ousted from power by general elections, and high-level bureaucrats by politicians or bureaucratic procedures. The recent development of so-called transparency and accountability initiatives has come about because of considerable frustration with elections and bureaucratic procedures as the dominant means of holding politicians and high-level bureaucrats accountable for their decisions.1 There is a broad consensus that those instruments are grossly inefficient in terms of monitoring public officials and fighting corruption, and that they need to be complemented with new mechanisms.

∗ I am grateful to J. Hagenbach, F. Koessler and L. Wren-Lewis for very useful discussions and suggestions.
† [email protected], 48 Boulevard Jourdan, 75014 Paris
1 Criminal courts are not perceived as an alternative either. This is partly due to the process being slow and very demanding in terms of evidence.


Transparency and accountability initiatives have a long tradition in the US (cf. Open Government), with some very interesting recent developments (see, e.g., Noveck 2009). There has also been a fascinating recent upsurge of activity in developing countries, including India. This is partly due to the enactment of the Right to Information Act, and partly to the development of new technologies that allow for innovative approaches based on Web 2.0 technology. For a review of those initiatives see, e.g., Posani and Aiyar (2009). As emphasized in multiple evaluation reports (see McGee and Gaventa, 2010, for example), "we are facing a serious deficit of understanding of the mechanisms at work in those initiatives which makes their evaluation hazardous". The present paper aims to help fill this gap.

Accountability is a composite concept. It has been described (see Malena et al., 2004) as comprising three elements: "answerability" - the obligation to justify one's actions; "enforcement" - the sanction if the action and/or the justification is not satisfactory; and "responsiveness" - the willingness of those held accountable to respond to demands made. The first element is informational, and we can formulate it as the obligation to persuade of the suitability of one's action upon request. The second is about incentives (or effective sanctions). The third element is monitoring. Accountability can thus be reformulated as a monitoring mechanism that includes an obligation to participate in an ex-post persuasion procedure. As noted above, the use of monetary incentives is typically very constrained: the wage is fixed and sanctions are often reduced to reputation costs affecting the chance of reappointment. The emphasis in this paper is therefore on (ex-post) mechanisms managed by citizens that determine how the public official (PO) can persuade them that he or she deserves reappointment.

The kind of situation we consider is the provision of public services such as education, health, sanitation or some other service valued by citizens.2 In our model, the provision of public services depends on the resources the PO allocates to the service, as well as on a stochastic (service-specific) state of productivity that is only observed by the PO. Our main focus is on corruption, defined here as the diversion of public funds from the provision of public services to private ends. The public official has effective discretion to divert resources due to a liability constraint (the harshest punishment is dismissal) and informational asymmetry. In the absence of any signal of the PO's behavior (e.g., a performance measure, a verification outcome, announcements, users' complaints, etc.), citizens have no way of preventing a corrupt PO from diverting money: the PO is in effect not at all accountable for the use of resources. The question we ask here is whether and how much welfare can be increased (service delivery improved) by associating service users with an accountability mechanism that has limited verification resources. On the one hand we have a PO who implicitly or explicitly claims that he spends the money properly and always wants to be reappointed in office. On the other hand we have citizens who also want to reappoint the PO, but only if he spends the money properly.

2 In Andhra Pradesh the National Rural Employment Guarantee Act has been the playground for well-documented social-accountability initiatives.


They know a corrupt PO diverts money unless he is punished for doing so.3 They have to devise a mechanism to monitor his behavior. The most natural thing that comes to mind is verification, i.e., checking the (explicit or implicit) claims of the PO and dismissing the PO if diversion is detected. Clearly, if citizens can check all claims, they have complete information and the first-best can be achieved. Systematic verification is not a realistic option, however. Citizens typically lack the necessary time (not to mention willingness) and information-processing capacity. But they could appeal to a professional auditor and pay for his services. In this paper we do not consider costly verification. One reason - consistent with our concern for corruption - is that in most LDCs there is no reason to trust independent auditors more than a bureaucratic audit.4 The failure of bureaucratic verification is precisely what triggers the development of citizen-based initiatives: bureaucrats collude with the PO, and the PO can be expected to collude with an outside auditor as well. Therefore, instead of costly verification, we consider limited verification, where only a few services - most of the time only one - can be checked. The verification is performed by the citizens themselves: they process the evidence provided by the PO upon their request.5 The question boils down to the design of a selection rule that determines which services will be checked.

Our focus is therefore on accountability mechanisms with the following form: to persuade the citizens that he did not divert funds and deserves reappointment, the PO must provide some evidence specified by the mechanism. Otherwise (i.e., if he fails to provide the evidence), the citizens believe that he has diverted money and he will be dismissed. We first characterize a (first-best) complete-information optimal mechanism, which departs from the zero-tolerance principle because of the liability constraint. It is characterized by a satisfaction level (a sufficient target) above which the PO is implicitly allowed to divert funds. In the absence of any information about the PO's behavior, the optimal P-rule (persuasion rule) calls for random verification within a pre-announced subset of services. This ensures that there is no diversion, but in that subset of services only. Surprisingly, the availability of information about the quality of service delivery (a signal of the PO's behavior) is of little value. In particular, we find that the most intuitive mechanism, a rule calling for the verification of one of the services where diversion might have occurred, i.e. low-quality services, is a very bad idea, as it leads to maximal diversion. The intuition is that such a rule increases the PO's cost of refraining from diversion in the first place: maximal dilution of the detection probability, achieved by diverting whenever possible, becomes optimal instead. Combining random verification with a necessary performance target weakly improves upon the optimal random-verification outcome. We next turn to social accountability and investigate the value of communication.

3 We here use the term corruption in the sense of embezzlement.
4 The recent scandals involving auditing firms, as in the Enron case, show that it can be difficult to prevent collusion even in developed economies.
5 In contrast to Glazer and Rubinstein (2006), but in line with their 2004 paper, we do not let the PO choose the service on which evidence is provided. Instead the receiver (here the citizens) chooses according to an explicit pre-announced rule.


We show that a debate in which the public official publicly defends his record and service users may refute his claims with cheap-talk complaints can be exploited in a mechanism that comes close to the complete-information first-best outcome. This result provides some support for the intuition behind social-accountability initiatives, and shows that a well-designed persuasion game involving the public can play a significant role in improving welfare. In particular, existing internet-based complaint platforms can be adapted for this purpose.

Related literature  The issue of accountability has been addressed in the political science and political economy literature (e.g., Persson et al., 1997; Maskin and Tirole, 2004). The emphasis there is on election rules and organizational structures. Our approach shares common features with the literature on optimal monitoring with ex-post verification (cf. Townsend, 1979; Gale and Hellwig, 1985; Ben-Porath et al., 2012). In contrast with, e.g., Townsend, we do not consider an explicit cost of verification: we instead assume limited verification resources. Moreover, we are interested in the value of communication. This brings us closer to the persuasion literature (cf. Glazer and Rubinstein, 2004, 2006). A first contribution of this paper is to formulate accountability in terms of persuasion. This allows us to integrate the communication features typical of social-accountability initiatives into an optimal-regulation analysis. A second contribution is to characterize the optimal use of limited verification resources, with and without communication, to deter corruption in a common situation of delegated resource allocation.

The paper is organized as follows. In Section 2 we discuss the state of the art in the development community, and in Section 3 the general model is introduced. Section 4 characterizes two benchmarks: the complete-information first-best and the optimal mechanism in the absence of any observation. Section 5 investigates the optimal accountability rule (within a restricted class) when information about service delivery is available. Section 6 develops the full mechanism with both communication and verification, and our concluding remarks appear in the final section.

2 Social Accountability: toward a "theory of change"

According to the World Bank, "while the concept of social accountability remains contested, it can broadly be understood as a range of actions and strategies, beyond voting, that societal actors - namely the citizens - employ to hold the state to account" (World Bank, 2013). Accordingly, we shall not deal with the design of a public monitoring agency. Instead, we consider a situation where citizens, via NGOs or ad-hoc associations, manage the mechanism by themselves, as in most social-accountability initiatives. Social accountability is developing because of the frustrating inefficiency of traditional forms of control over bureaucracies in certain environments and, as mentioned in the Introduction, because of the development of new technologies. Keeping bureaucrats accountable has long been recognized as an issue in democracies. In his celebrated essay "Bureaucracy" (1958), Max Weber argues that delegating decision power to bureaucrats is basically abdication, as their expertise (private information)


makes effective control by legislators impossible. Others have responded that institutional design and procedures are efficient means to overcome this problem. As argued in Lupia and McCubbins (1994), the latter underestimate the unwillingness of bureaucrats to facilitate control, as they can be obstructive in order to keep information hidden. Further, they write: "Whether a legislator can overcome the problem associated with bureaucratic accountability depends on their ability to obtain information about the consequences of bureaucratic activity" (p. 92). When bureaucrats' obstructive activity is of particular concern and outside experts cannot be relied on (two features that characterize an environment where corruption is a problem), appealing to citizens appears as one possible way of improving information and monitoring bureaucrats.

Transparency and Accountability (T&A) emerged as a field in development about 20 years ago. T&A initiatives to encourage social accountability have flourished, affecting very different activities ranging from service delivery (e.g., complaint mechanisms, citizen report cards, community monitoring and social audits) to the management of natural resources (e.g., the Extractive Industry Transparency Initiative (EITI)), the budget process (e.g., participatory budget approaches, public-expenditure monitoring, participatory auditing, open budgets) and the enforcement of the right to work (Saghal, 2008). There exist, as of today, a number of e-complaint platforms which share some features with the mechanism investigated here. Well-known examples are IPaidaBribe and Bribespot in India and Rospil in Russia. Like most citizen-led initiatives, those are information-gathering platforms used for advocacy purposes. Our mechanism goes further by explicitly modelling how complaints are processed into a prescription for targeted verification, and how the outcome of verification affects the public official's incentives.

The aims and the claims of what T&A initiatives can deliver tend to be broad. They include contributing to the fight against corruption, improving the quality of governance, increasing development effectiveness, greater citizen empowerment and redressing unequal power relations to achieve essential human rights. These broad and ill-defined aims have helped to create a fair amount of confusion. According to Joshi and Houtzager (2012), some of this confusion originates from the two ideological roots of the concept. On the one hand there is a distrust of public officials and a focus on monitoring their performance. On the other hand there is a more collaborative approach, in which poor performance is attributed to a lack of understanding/information about the actual value of the services delivered to the population. This brings about a focus on deepening democracy through the increased participation of various civil-society groups. In this paper we endorse the view that accountability is about monitoring poorly-performing civil servants. We do not dismiss the democratic approach to accountability, but it surely calls for a different type of model.

As noted above, there is now a broad consensus within the development community that we lack the proper analytical tools to evaluate accountability initiatives.6

6 "Among the principal methodological challenges and issues are ... untested assumptions and poorly articulated theories of change" (TAI Synthesis report, 2010, p. 36).


A major problem is that no clear framework exists to guide practitioners and evaluators and to help disentangle the various components of accountability. Many evaluations end up with the vague conclusion that "context matters" (see, e.g., World Bank, 2013). The weakness of enforcement is often presumed to be responsible for failure. Since we are not dealing with legal sanctions, enforcement must be the equilibrium outcome of a complex game between different forces in any particular society: political parties, civil-society organizations, the bureaucracy, etc. This is a topic that deserves research in its own right. The reports seldom even address the issue of whether the answerability mechanism being tried is efficient or even suitable with respect to the regulatory issue at stake. Put differently, they do not question its potential impact (i.e., for a given level of punishment) on the nexus of interactions whose outcome it aims to affect. The analysis of answerability mechanisms and their monitoring properties is part of the field of regulation. The novelty lies in integrating an ex-post mechanism of information revelation and communication into the incentive scheme. The analysis can be carried out for an exogenously-given level of enforceable punishment, i.e. without going into the details of how that punishment comes about. Similarly, the issue of civil society's incentives to participate is worth analyzing in its own right. In the model that we investigate, the problem arising from service users having little incentive to participate is partly picked up by the informativeness of the signals generated by complaints.

The evaluation reports often call for the development of a "Theory of Change". In our understanding, this term refers to an analytical framework that maps the instruments used in accountability initiatives to results in terms of monitoring efficiency. The aim of this paper is to investigate the properties of some commonly-used instruments so as to contribute to the development of such a comprehensive analytical framework. This will allow a more rigorous evaluation of initiatives and guide the design of new initiatives in different regulatory and socio-political contexts. We also believe that clarifying the properties of citizen-led accountability mechanisms can play an important role in grounding their legitimacy and in strengthening enforcement incentives.

3 The model

There is a finite number n of different services in N, which have to be provided to citizens. A public official (PO) is hired with the task of allocating a budget B to provide these services, but can choose to divert money to private ends instead. For each service i, the quality delivered to the citizens is denoted s_i, with s_i = f(θ_i, b_i): a function of the share b_i of the budget spent on i and an exogenous and uncertain service-productivity parameter θ_i ∈ {θ̲, θ̄}. Productivity is high (θ_i = θ̄) with probability p and low (θ_i = θ̲) with probability 1 − p. The productivity parameters are independently distributed. The quality of each service i can be either zero, low or high, s_i ∈ {0, s̲, s̄}, where s_i = 0 is service quality below the minimum-acceptable level. Service provision below the minimum level is liable to lead to prosecution.7 The technology for service-quality delivery is as follows:

- f(θ_i, b_i) = s̄ for b_i ≥ B/n and θ_i = θ̄;
- f(θ_i, b_i) = s̲ for b_i ≥ B/n and θ_i = θ̲, or b_i < B/n and θ_i = θ̄;
- f(θ_i, b_i) = 0 for b_i < B/n and θ_i = θ̲.

With this technology, and since the only purpose of allocating money is to provide quality, the choice of b_i can be simplified to b_i ∈ {0, B/n}, and the mandate of the PO simplifies to spending B/n (also referred to as a budget unit) on each service.8 There are two crucial features of this production technology:

• The delivery of a low-quality service s̲ can result either from spending the appropriate budget on a low-productivity service, f(θ̲, B/n) = s̲, or from spending 0 (diverting money) on a productive service, f(θ̄, 0) = s̲.

• The amount that the PO can divert (the discretionary budget) depends on the state of the world: e.g., in state θ = (θ_1, ..., θ_n) with θ_i = θ̲ for all i = 1, ..., n, there is no discretionary budget to be diverted, because the PO cannot afford to deliver below the minimal level.9

The PO's decision (x_1, ..., x_n) consists, for each service i, in either spending the share B/n on service i, denoted x_i = 0, or keeping the money for himself, x_i = 1; let x = Σ_{i∈N} x_i, so that x is the number of diverted budget units. The PO's objective is to maximize his expected payoff, including the money he diverts and the revenue from remaining in office:

$$EU = \sum_{i \in N} x_i\,\frac{B}{n} + P(K)\,w \qquad (1)$$

where P(K) is the probability that the citizens, denoted CI, reappoint the PO, and w is the (discounted) continuation wage.10 The probability P(K) depends on the procedure for reappointment, which is the choice of the CI (see below). The CI's incentives are assumed to be perfectly aligned with those of service users, who want to maximize the quality of service delivery. In line with the motivation of the paper, we should think of the CI not as a government agency but rather as an NGO bringing together service users with "militant" motivation.

7 Failure to provide the minimal service is an overt violation of the PO's obligations. Such failure may endanger the lives of citizens or deny their basic rights, for example by closing the hospital or school. By way of contrast, the diversion of funds does not in general lead to legal prosecution: this is because it is often difficult to legally establish diversion unless the diverted funds can be traced. Our concern in this paper is how to monitor covert diversion without relying on lengthy and demanding legal procedures.
8 This corresponds to the allocation that secures the best value for money.
9 The assumption is that the punishment for overt diversion that denies basic rights is very severe. In some countries corruption is punished with the death penalty; more generally it can lead to prison and a ruined reputation.
10 The present paper depicts a situation where the budget-allocation task is unique, but takes place in a long-running relationship, so that w is the discounted wage. In a repeated-allocation game the discounted income would include future rents from stealing as well.
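To fix ideas, here is a minimal Python sketch of the delivery technology and the payoff in equation (1). The function names (`quality`, `po_payoff`) are ours, not the paper's, and the reappointment probability P(K) is passed in as a free parameter, since it depends on the P-rule chosen below.

```python
from typing import Sequence

def quality(theta_high: bool, funded: bool) -> str:
    """The delivery technology f(theta_i, b_i), with b_i restricted to {0, B/n}."""
    if theta_high and funded:
        return "high"   # s-bar: a full budget unit spent on a productive service
    if theta_high or funded:
        return "low"    # s-underbar: exactly one of the two inputs is present
    return "zero"       # below the minimum-acceptable level

def po_payoff(x: Sequence[int], B: float, n: int, w: float, p_keep: float) -> float:
    """Equation (1): diverted budget units plus the expected continuation wage."""
    return sum(x) * B / n + p_keep * w
```

The point of the sketch is that the PO's only lever is the vector x, while p_keep is what the CI's mechanism design is about.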


The CI's objective is to maximize a social utility function which is linear in the number of HQ services. This is expressed as the maximization of the expected discretionary budget allocated to productive services, or equivalently of the expected discretionary budget saved from diversion:

$$EV = \sum_{i=0}^{n} q(i)\, j(i)\, \frac{B}{n} \qquad (2)$$

where q(i) is the probability that the state is characterized by i high-productivity services (see below for the exact definition) and j(i) ≤ i is the number of budget shares saved in the (productivity) states belonging to Θ(i).

To discourage the PO from stealing, we assume that the CI can commit to a reappointment procedure. This involves a rule for checking the state of productivity and (when unobservable) the quality of some service, together with a reappointment rule. The CI is assumed to only have the resources to check one single service.11 More formally, at the beginning of the period the CI commits to a persuasion or P-rule. A P-rule is composed of two mappings, f and d:

• The first mapping depicts the rule for selecting one service, if any, for verification: f : I⁰ → P, where I⁰ is the information set at time 0. I⁰ may be confined to basic information about the state of productivity, or may include the quality of services delivered, some announcement made by the PO and complaints by service users. P is the set of probability distributions over the elements of N, so that (p_1, ..., p_n) is a probability vector. With probability p_i element (θ_i, s_i) is checked.

• The second mapping sets out the reappointment rule. We confine our attention to deterministic decision rules of the form12 d : I¹ → {K, D}, with I¹ being the information at time 1. I¹ includes the initial information and the verification outcome. K means reappoint (Keep) and D stands for dismiss.

The general timing of the interaction between the PO, the CI and service users is as follows:
(1) The CI publicly commits to a P-rule (an f and a d mapping);
(2) Nature picks the profile of productivity parameters θ, which is privately observed by the PO;
(3) The PO makes his allocation decisions (x_1, ..., x_n);
(4) Services (s_1, ..., s_n) are delivered to the service users;
(5) A message may be requested from the PO and complaints received from service users;
(6) A service i is selected and checked according to the P-rule (f mapping);
(7) The CI decides whether to keep or dismiss the PO according to the d mapping.

11 Our interpretation is that this reflects limited capacities to process the evidence provided by the PO.
12 To the best of our knowledge no public administration uses a stochastic dismissal rule, presumably because having a random device decide individuals' fate is not politically feasible. We focus on deterministic dismissal rules. The probabilistic character of the mechanism is introduced through verification, signaling or communication.


Our basic framework is characterized by linear utilities and a liability constraint: the largest cost that can be imposed on the PO is dismissal (the loss of the fixed wage w). We make the two following assumptions:

- B > w, meaning that the PO always prefers to divert the whole budget and lose his job rather than totally refrain from diverting; but
- B/n < w, so the PO does not always divert whatever he can.

The two assumptions are summarized in a number l ∈ [2, n] such that l(B/n) < w whereas (l + 1)(B/n) ≥ w. The number l corresponds to the largest number of budget units the PO is willing to forgo in order to keep his job.

Definition 1 Let l = max{y ∈ ℕ : y < wn/B}.
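A quick worked instance may help fix ideas; the numbers are ours, chosen only to satisfy B > w > B/n:

$$n = 10,\quad B = 10,\quad w = 3.5:\qquad \frac{wn}{B} = 3.5,\qquad l = \max\{y \in \mathbb{N} : y < 3.5\} = 3.$$

This PO forgoes up to three budget units to keep his job (3 · B/n = 3 < w), but not four (4 · B/n = 4 ≥ w). We reuse these illustrative numbers in the examples below.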

The magnitude l(B, w) plays a central role in the analysis.13 We have dl/dw ≥ 0 and dl/dB ≤ 0. For w large enough or B small enough, l = n, and perfect monitoring is, as we shall see, easily achievable when relying on verification. The constraint on "punishments" (losing the fixed w) relative to "effective power" (B) is what creates the challenge of accountability. We assume that when indifferent between stealing or not, the PO chooses not to.

For the remainder of the paper it is useful to introduce the following notation. The set Θ of states θ = (θ_1, ..., θ_n) is partitioned into classes characterized by the number of high-productivity services. We call class-k the element of the partition of Θ where there are exactly k high-productivity services, and denote this subset Θ(k). The cardinality of Θ(k) is n!/(k!(n−k)!). The probability for the state to belong to Θ(k) is

$$q(k) = \text{prob}\{\theta \in \Theta(k)\} = \frac{n!}{k!(n-k)!}\, p^k (1-p)^{n-k}.$$

We are also interested in subsets of services M ⊆ N. We analogously define classes on Θ(M): Θ(k; M) is the class that contains k high-productivity services among the m ≤ n first services, so clearly Θ(k; M) ⊆ Θ(k). The probability for a state to belong to Θ(k; M) is

$$q'(k; M) = \text{prob}\{\theta \in \Theta(k; M)\} = \frac{m!}{k!(m-k)!}\, p^k (1-p)^{m-k}.$$

With some abuse of language we shall refer to a PO of type k in the sense of a PO who inherited a state in class Θ(k).

13 Note that the liability constraint could just as well be formulated in terms of a maximal fine that can be imposed on the PO.

4 Two Benchmarks

4.1 Accountability with full observability

As a benchmark we consider the case when I⁰ = Θ × S, i.e., the CI observes both the productivity state θ and the service quality s. It follows that the CI perfectly detects any diversion of funds by the PO. We shall consider a rule where the PO may be granted some leniency: once the PO has delivered some specified amount of HQ services, the CI is satisfied and "closes his eyes", so the PO may


steal unsanctioned above the target (y). For the sake of comparison, we express all P-rules in the same terms. Let s = #{i : s_i = s̄} denote the number of HQ services. The first mapping f : Θ × S → P is trivial, since under complete information there is no use for verification:

- f(θ, s) = (0, ..., 0) ∀ θ ∈ Θ and s ∈ S.

The second mapping d : Θ × S → {K, D} is defined as follows:

- d(k, s) = K if either s = k and k < y, or s ≥ y and k ≥ y; and
- d(k, s) = D if either s < k and k < y, or s < y and k ≥ y.

Let $1 - Q(z) = 1 - \sum_{i=0}^{z} q(i)$ denote the probability that the PO faces more than z high-productivity services. We have the following proposition:

Proposition 1 With full observability, (i) the optimal rule is the "lenient P-rule" with a target y* = l. This yields (ii) high-quality delivery of all productive services in every state in Θ(k) for k < l and (iii) high-quality delivery of l productive services in every state in Θ(k) for k ≥ l. (iv) In the optimal scheme the expected saved budget is

$$EV^{FI} = \sum_{i=0}^{l} q(i)\,\frac{i}{n}B + (1 - Q(l))\,\frac{l}{n}B.$$

All proofs appear in the Appendix. Proposition 1 shows that the widely-advocated zero-tolerance principle (y = k) is not optimal even under full observability.14 Indeed, due to the liability constraint, the zero-tolerance rule induces the PO to steal the whole budget whenever θ ∈ Θ(k), k ≥ l.15 The optimal P-rule instead involves a sufficient performance target, implying leniency beyond the target. This is the best outcome that can be achieved within the institutional setting captured by l(w, B). For w large enough relative to B we have l(w, B) = n, so the first-best in that case corresponds to perfect monitoring. In the following we refer to the outcome in Proposition 1 as the first-best. The results in Proposition 1 are consistent with Persson et al. (1997), who find that the politician must be granted "power rents" in order to refrain from stealing the whole budget.
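To illustrate Proposition 1's formula with our hypothetical running numbers (n = 10, B = 10, l = 3) and an assumed p = 1/2, so that q(i) = C(10, i)/1024:

$$EV^{FI} = \frac{0 + 10 + 90 + 360}{1024}\cdot\frac{B}{n} + \frac{848}{1024}\cdot\frac{3}{10}B \approx 0.45 + 2.48 \approx 2.93,$$

i.e. about 2.93 of the E[k] · B/n = 5 expected discretionary budget units are saved; the remainder is the "power rent" conceded because of the liability constraint.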

4.2 Accountability with no information

Consider the polar case in which the CI observes neither the productivity state nor the quality of delivered services: I = ∅. Since there is no information on which to condition verification probabilities, the f-mapping boils down to the set of probability distributions P, p = (p_1, ..., p_n) with Σ_{i=1}^{n} p_i ≤ 1. Since the services are symmetric for the CI, we can focus on verification rules that treat services symmetrically. The second mapping d : X_i → {K, D} is trivially determined: d(0) = K and d(1) = D.16 In this context the only decision for the CI with respect to the P-rule is therefore the selection of the probabilities for the verification of each service.

14 See PSE WP 2013-42 for a more detailed proof.
15 As pointed out by an anonymous referee, this is reminiscent of results on efficiency wages as a tool to prevent corruption. The wage needed to fully deter corruption may be very high. When there is only a small probability of an opportunity to steal a lot, some implicit leniency is then optimal. Note, however, that the logic is different, because here the CI does not have the option to actually fully deter diversion.
16 The only feasible alternatives make no use of the only available information, i.e. that from verification, and hence have no monitoring power whatsoever.


4.2.1 Random verification procedures

For any given p = (p_1, ..., p_n), Σᵢ p_i ≤ 1, the PO's objective function is

$$EU = \sum_{i \in N} x_i\,\frac{B}{n} + \Big(1 - \sum_{i:\,x_i=1} p_i\Big)\,w.$$

Here the marginal payoff from stealing from service i is determined by the constant gain B/n and the expected loss −p_i w. Note that the marginal payoff does not depend on whether the PO diverts from other services; this follows from the linearity of utility in money. The IC constraint for any service i is

$$\frac{B}{n} - w\,p_i \leq 0 \qquad (3)$$

yielding the following Lemma.

Lemma 1 Any P-rule that aims to prevent stealing from service i must have p_i ≥ B/(wn) ≈ 1/l.

The proof follows immediately from the IC constraint (3). In particular, since by assumption B/w > 1, Lemma 1 implies that in any state θ ∈ Θ(k), k = 1, ..., n, uniform random verification over the set N of all services has no monitoring power (1/n < B/(wn)). A PO of type k, for any k ∈ {0, ..., n}, diverts the whole discretionary budget and is dismissed with probability k/n.
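With our running numbers the failure of uniform verification is immediate:

$$\frac{B}{wn} = \frac{10}{35} \approx 0.29 > \frac{1}{10},$$

so a uniform check probability of 1/n deters nothing; and since the p_i must sum to at most one, no more than three services can simultaneously receive the deterrence threshold (3 × 0.29 ≈ 0.86 ≤ 1 but 4 × 0.29 > 1), which here is exactly the cardinality l = 3 appearing in the optimal rule below.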

In general, departures from the uniform distribution can take many forms. Simple arguments allow us to narrow down significantly the set of interesting P-rules, however. We know from Lemma 1 that any candidate optimal rule will have the property that some services are checked with a probability sufficiently high to deter stealing. In addition, since verification is a scarce resource, there can be no rationale for using it up for no purpose, i.e. for checking a service with a probability that is positive but insufficient to deter stealing, or more than sufficient to deter stealing. We can therefore focus on verification schemes that partition the set of services into two subsets, M and N\M = U. The set U (for "unchecked") contains services that are never checked, and the set M contains services that are checked with equal probability. Solving for the reappointment rule d is straightforward: it uses the only available information, i.e. the result of the verification procedure, in the most natural way: dismiss if diversion is uncovered and reappoint otherwise. The relevant set of P-rules is:

- f: p = (p_1, ..., p_n) with p_i = 1/m for i ≤ m and p_i = 0 for i > m;
- d: X → {K, D}; d(x_i = 0) = K and d(x_i = 1) = D for the checked service i ≤ m.

Recall that q′(k; M) = prob{θ ∈ Θ(k; M)} is the probability that the PO faces k high-productivity services in subset M. The next Proposition characterizes the optimal P-rule under no information:

Proposition 2 Under no information, I = ∅,


i. the optimal (partial-verification) P-rule yields a pre-announced partition of the set of services into two subsets. The subset subject to verification has cardinality l. The expected saving of public funds is

$$EV^{PV} = \sum_{i=0}^{l} q'(i; M)\,\frac{i}{n}B.$$

ii. In equilibrium the PO is always reappointed; diversion occurs in subset U but is never uncovered by verification.

The results in Proposition 2 are not surprising. Partial Verification (PV) follows the same logic as the optimal complete-information scheme: it leaves rents in order to avoid full diversion. In contrast with the complete-information setting, the P-rule cannot be made conditional on the true state of productivity or on performance. The efficiency loss due to asymmetric information is only partially mitigated by ex-post verification. As in the complete-information context, the impact of the wage and the budget is unambiguous.

Since dl/dw ≥ 0 and dl/dB ≤ 0, we have ∂EV^{PV}/∂w ≥ 0 and ∂EV^{PV}/∂B ≤ 0. As l → n, EV^{PV} → EV^{FI}, so provided the wage is high enough relative to the budget, full monitoring is achievable under PV even in the absence of information.

Remark 1 Pre-announcement is key to the efficiency of PV. Interestingly, practitioners are often reluctant to announce M because they understand that the PO is thereby given "carte blanche" over the complement subset N\M. They often fail to realize that this is the cost they have to pay to achieve any monitoring effect. Part of the confusion comes from the fact that the efficiency of verification and audits is often measured in terms of unveiled diversion, i.e. from an ex-post perspective. However, the optimal PV never unveils any diversion, even though diversion occurs whenever there is an opportunity in subset N\M. In this context the CI must credibly resist the temptation to check outside of the pre-announced M. The CI's commitment to a PV scheme can thus be a demanding feature of the solution.
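For concreteness, a small script with illustrative numbers of our own choosing (n = 10, B = 10, w = 3.5, p = 0.5, hence l = 3) computes the saved budget under the full-information benchmark of Proposition 1 and under partial verification; the helper names are ours.

```python
from math import comb

def q(k: int, n: int, p: float) -> float:
    """Probability that exactly k of n services are high-productivity."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def ev_fi(n: int, B: float, l: int, p: float) -> float:
    """Proposition 1: expected saved budget under the lenient full-information rule."""
    head = sum(q(i, n, p) * (i / n) * B for i in range(l + 1))
    return head + (1 - sum(q(i, n, p) for i in range(l + 1))) * (l / n) * B

def ev_pv(n: int, B: float, l: int, p: float) -> float:
    """Proposition 2: expected saved budget when only the pre-announced
    subset M of size l is protected (q' is binomial over m = l services)."""
    return sum(q(i, l, p) * (i / n) * B for i in range(l + 1))

n, B, w, p = 10, 10.0, 3.5, 0.5
l = 3   # Definition 1: largest y with y < w*n/B = 3.5
print(f"EV_FI ~ {ev_fi(n, B, l, p):.2f}, EV_PV ~ {ev_pv(n, B, l, p):.2f}")
# prints roughly EV_FI ~ 2.93, EV_PV ~ 1.50
```

The gap between the two numbers is the efficiency loss from asymmetric information that the remaining sections try to close.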

5 Accountability with observable service delivery

In this section we are interested in whether and how the CI can improve upon the result in Proposition 2 when service delivery is observed. There are potentially infinitely many ways to use this information. We are interested in characterizing the best among P-rules that make use of the available information in a simple way.17 Our main focus is on verification schemes conditional on performance relative to a target, in a spirit reminiscent of the optimal complete-information scheme. More precisely, we focus on P-rules of the form p(s_i, s, y), with s = #{i : s_i = s̄}, i.e. we allow the verification probability to depend on own delivery status (HQ or LQ), while the delivery status of the other services affects this probability only through an aggregate measure, s, the number of HQ services.18 Similarly, we limit our attention to decision rules d : S × X_i → {K, D} of the form d(s_i, s, x_i).

17 This is in line with our concern for practical applications.
18 We cannot preclude that a scheme making use of more sophisticated information, e.g. the precise composition of the vector of services delivered, could produce a better outcome. However, to make sense of this we would need more structure, e.g. a partitioning of services into classes of similar types of services, or of services located within the same region.


Before investigating the optimal mechanism (within this restricted class), we shall establish a somewhat surprising result: focusing verification on LQ services is generally counter-productive, in that it induces diversion. This result is surprising because it appears most natural to concentrate scarce verification resources on services where stealing might have occurred. Formally:

- f : S → P; p(s_i, s) = 1/(n−s) for s_i = s̲, and p(s_i, s) = 0 for s_i = s̄;
- d : S × X_i → {K, D}; d(s, 0) = K and d(s, 1) = D ∀s.

A new feature compared with the previous P-rules is that, when considering whether or not to steal from service i, the PO faces a probability of detection and dismissal p(s_i, s) that is a function of the total amount stolen, x = Σ_{j∈N−i} x_j, since s = k − x. In state θ ∈ Θ(k) this is given by

$$p(s; k) = \frac{x}{n - k + x}.$$

This detection risk rises with the total amount stolen, ∂p(s, k)/∂x = (n−k)/(n−k+x)² ≥ 0, but at a decreasing rate: ∂²p(x, k)/∂x² = −2(n−k)/(n−k+x)³ < 0.19 This implies that, in any given class, it may not be IC to steal a single unit, but if it is IC to steal some number of units, it is optimal to steal the whole discretionary budget. The PO's optimal response is therefore again a binary choice. In state θ ∈ Θ(k) the IC is

$$[p(k; k) - p(0; k)]\,w > k\,\frac{B}{n} \iff \frac{k}{n}\,w > k\,\frac{B}{n} \iff B \leq w,$$

which we know is violated. This yields the following Lemma:

Lemma 2 A P-rule that focuses random verification on observed LQ services has no monitoring power whatsoever. In all states the PO steals the whole discretionary budget.

The result in Lemma 2 reveals an essential limitation in the use of information on service delivery to monitor the PO.20 Focused verification reallocates the marginal probability of detection so that the marginal detection risk falls with the number of stolen units, which creates perverse incentives.

Remark 2 The failure of common-sense intuition again reflects the conflict between the ex-post perspective on verification as an instrument to detect diversion, and the ex-ante perspective on verification as an instrument to monitor action, i.e. to minimize diversion. This conflict arises because service-delivery quality is an imperfect signal of diversion: relying on this signal for verification makes the monitoring power of verification manipulable by the PO. This creates new incentives to divert, i.e. to dilute the probability of being caught.

Given the result in Lemma 2, the remainder of this section focuses on rules that combine partial verification with a target.

19 For simplicity we act as if the PO's choice is continuous. This has no impact on the qualitative results.
20 It can be shown (see PSE WP 2013-42) that Lemma 2 applies to focused verification operated within any subset M ⊆ N of cardinality strictly larger than l, and has no effect when applied within an M of lower cardinality.
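A numerical instance of this dilution logic, using our running numbers with k = 5 productive services:

$$x\,\frac{B}{n} - \frac{x}{n-k+x}\,w = x - \frac{3.5\,x}{5+x} \qquad (n = 10,\ k = 5,\ B = 10,\ w = 3.5),$$

which takes the values 0.42, 1.00, 1.69, 2.44 and 3.25 for x = 1, ..., 5: the payoff from stealing is strictly increasing in x, so the PO steals the whole discretionary budget.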


5.1 The optimal P-rule with observed service delivery

Recall from Proposition 1 that the PO is willing to refrain from diversion in up to l services, and from Proposition 2 that l services cannot always be secured, even when feasible, in the absence of observability. The hope is that when service delivery is observable, monitoring can be improved by creating incentives to deliver at least a targeted amount of HQ services. We next derive the optimal P-rule for a necessary target.21 Let f : S → P, with p(s_i, s):

- p(s_i; s ≥ y) = 1/m for i ≤ m, and p(s_i; s ≥ y) = 0 for i > m; p(s_i; s < y) = 0 ∀i.

- d(s, x_i) = K if s ≥ y and x_i = 0, and d(s, x_i) = D otherwise, i.e. if s < y, or if s ≥ y but x_i = 1.

The CI's objective function is

$$\max_{y,\,M}\; EV = (1 - Q(y))\left[\frac{y}{n}B + \sum_{i=y}^{m} q'(i; M)\,\frac{i-y}{n}B\right], \qquad \text{s.t. } y,\, m \leq l \qquad (4)$$

The first term, (1 − Q(y)), is the probability that the target y is feasible. The term in square brackets captures the expected payoff when the target is feasible: its first term is the value of fulfilling the target, while its second term is the expected value of the additional HQ services that can be secured if they happen to be in excess of y and in M.22 We have the following proposition:

Proposition 3 When service delivery is observable, the optimal P-rule among those with p_i = p(s_i, s, y) and d_i = d(s, y, x_i):
i. entails a subset subject to random verification that is pre-announced and has cardinality l, together with a performance target y* ≥ 0;
ii. secures the delivery of y* high-quality services iff θ ∈ Θ(k) with k ≥ y*, and deters diversion within subset M; however, the whole discretionary budget is lost whenever θ ∈ Θ(k) with k < y*;
iii. reappoints the PO whenever θ ∈ Θ(k) with k ≥ y*, and dismisses him whenever θ ∈ Θ(k) with k < y*.

As expected from Lemma 2, we find that the optimal M does not depend on performance: it is pre-announced. Compared with PV in Proposition 2, the impact of the target is twofold: first, it secures at least y* HQ services in all states where this is feasible, and thus even in some states where PV fails to produce this outcome; second, this comes at the cost of losing the whole discretionary budget in states where k < y*. In these latter states the PO is always dismissed for failing to reach the target. It is easy to see that the optimal P-rule with a performance target only weakly improves upon partial verification in Proposition 2. Consider the case with high wages, as l tends to n. In this case the optimal subset M subject to verification tends to N, and imposing a target y > 0 only produces the loss of the discretionary budget in poor states (k < y), so y* = 0.

21 In the working-paper version we show that a P-rule with a sufficient (rather than a necessary) target, such that fulfilling the target secures reappointment, does strictly worse.
22 This requires the state θ ∈ Θ(k; m), k > y.


In contrast, for small l, e.g. l = 1, implying M = {s_1}, the gain from the target y = 1 occurs with high probability, (1 − Q(1)) = 1 − (1−p)ⁿ, outweighing the cost of not meeting the target, which comes with a vanishingly small probability, (1−p)ⁿ. Hence a performance target is expected to improve accountability in contexts where incentives to stay in office are very weak. We may then wish to exploit those incentives maximally, even if this entails treating the PO "unfairly": the P-rule actually "punishes" (dismisses) the PO when he fails to deliver the performance target, which in equilibrium only happens because of bad luck, i.e. k < y*. In contrast, when the value of staying in office is large enough, there may not be any value in imposing a performance target.

This section does not offer an exhaustive investigation of the value of using the observation of service delivery for accountability. Nevertheless, procedures involving verification and a performance target are indisputably among the most relevant classes of accountability mechanisms. The results in Lemma 2 and Proposition 3 show significant limitations in the use of information about service delivery for accountability. The intuition is that conditioning verification on performance introduces gaming opportunities: any attempt to condition verification on information about performance creates incentives to take action to manipulate the detection risk. In the working-paper version we show that the same logic precludes the use of messages from the PO. In particular, no claim about his own record or about the state of the world (θ) can be used to improve the monitoring performance of the P-rules investigated so far.
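As a sketch, the grid search below maximizes our reconstruction of objective (4) over necessary targets y ≤ l; we read 1 − Q(y) as the probability that the target is feasible (k ≥ y), the helper names are ours, and the numbers are the same illustrative ones used above.

```python
from math import comb

def q(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

def ev_necessary_target(y: int, m: int, n: int, B: float, p: float) -> float:
    """Objective (4): expected saved budget under a necessary target y with
    random verification inside a pre-announced subset of cardinality m."""
    feasible = sum(q(k, n, p) for k in range(y, n + 1))   # target reachable
    extra = sum(q(i, m, p) * (i - y) * (B / n) for i in range(y, m + 1))
    return feasible * (y * (B / n) + extra)

n, B, p, l = 10, 10.0, 0.5, 3
ev_star, y_star = max((ev_necessary_target(y, l, n, B, p), y) for y in range(l + 1))
print(f"y* = {y_star}, EV = {ev_star:.2f}")   # here y* = 3 with EV ~ 2.84
```

With these particular numbers the target rule improves on partial verification (about 2.84 vs. 1.50), consistent with the weak-improvement result in Proposition 3; with l close to n the same search returns y* = 0.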

6 Social Accountability

In the previous sections we established that there may be only limited gains from the use of information about service delivery in common accountability mechanisms. In this section we introduce a new instrument. We consider the case when the CI has access to new technologies which make it possible to collect and process complaints at minimal cost for both service users and the CI. The assumption is that service users are (with some probability) able to identify the reasons for low service quality. They can see when, for example, a school is dysfunctional because the teachers have not received their wages, i.e. basic resources fail to accrue to the school - a situation that corresponds to a high-productivity state which is denied resources. This is distinguished from situations where only some of the teaching positions have been filled, or the teachers have only low qualifications - a situation that corresponds to a low-productivity state. We do not model service users explicitly at this stage. We do, however, conjecture that the results will hold for strategic service users provided the cost of complaining is sufficiently low and the use of complaints is transparently reported.23 The objective of this section is to show that a significant improvement in welfare can be achieved with a simple accountability mechanism that uses service users' complaints in response to a public claim of records made by the PO.

23 Experience from IPaidaBribe in India shows that the fear that complaints would be misused against public officials was not warranted. But service users may be reluctant to make the effort to complain if they do not believe that complaints will be used efficiently.



6.1 Complaint-Based Accountability

We shall consider the case when the vector of service delivery is not observable by the CI. The mechanism requests that the PO "defend his record", i.e. make an announcement about both the productivity and the quality of each service. Thereafter, the service users are invited to complain, in the sense of refuting the announcement with respect to well-identified services if they believe it not to be true. This refutation is cheap talk: service users do not provide hard evidence. We assume that they are sincere, but may be lazy and/or mistaken. The P-rule below is designed to be undemanding for service users, a feature which is presumably of significant practical value.24 Complaints are used to generate a signal, although we shall not go into the details of how this signal is obtained: the idea is that a suitable algorithm aggregates the complaints received from service users, for example via an electronic platform. We will show that there exists a P-rule using both announcements and complaints which approaches the first-best outcome.

The communication phase of the game takes place after the PO has made his allocation decision (see the timing in Section 3). It consists of a public message or claim from the PO and a response from service users, who may refute the claim. Let A denote the set of claims a = (a_1, ..., a_n) with a_i = (θ_i, s_i): the PO is requested to declare a productivity state and a quality level for each service. We restrict the set of acceptable claims to those that are consistent (technically feasible), and denote by α any claim that confesses diversion (irrespective of the combination and extent of diversion). After having seen the claim, service users privately post complaints on the platform. The set of acceptable complaints is denoted C. The platform accepts two types of complaints:

• Type 1: the PO claims that productivity and service quality are high, a_i = (θ̄, s̄), but user j posts a complaint c_i^(1)j(a_i) refuting the high-quality claim, implicitly contending that (θ_i, s_i) ≠ (θ̄, s̄). Type-1 complaints are not related to diversion and are assumed to be perfect signals.25

• Type 2: the PO claims a_i = (θ̲, s̲), but service user j posts a complaint c_i^(2)j(a_i) refuting the low-productivity claim, i.e. implicitly contending that (θ_i, s_i) = (θ̄, s̲). Type-2 complaints suggest that diversion has occurred, but may with some probability be mistaken.

24 We return to this issue at the end of this section, where we discuss alternative ways of implementing the outcome.
25 A false claim that "everything is fine" when it is not is very easy to identify. The assumption is that there will always be service users outraged by the lie who complain.


All other messages by service users are discarded by the platform, which processes the announcement and the acceptable complaints to produce a signal σ : A × C → Σ, σ = (a, i*), where Σ is the set of signals σ. The first term of the signal, a, is the number of announced high-quality services that received no complaint of any kind, a = #{i : a_i = (θ̄, s̄) and c_i^j(a_i) = 0 ∀j}. The second term, i*, is a signal about which service is most likely to have been the object of diversion: i* ∈ arg max_i {c_1^(2), ..., c_n^(2)}, where c_i^(2) is the complaint score (e.g., the number of complaints) regarding service i.26 This is (one of) the service(s) that received the most type-2 complaints. For the case where c_i^(2) = 0 for all i, we set i* = v ∉ N, i.e. it does not correspond to any service. Finally, let δ = prob({s_i = s̲} ∩ {θ_i = θ̄} | i = i*), δ ≤ 1, be the probability that the PO diverted on service i conditional on service i having received the highest complaint score. This captures the informativeness of i*: δ = 1 implies that i* is a perfect signal. For simplicity we let δ be independent of the total amount of budget diverted.27

The idea of the P-rule is as follows. When the PO admits having diverted funds, a = α, he is dismissed without checking. If a ≠ α, the following applies: if the announcement receives no complaints, or if the uncomplained-about part of the announced performance is good, i.e. a ≥ y for some y to be determined, the PO is automatically reappointed; otherwise, if the announced performance is poor, a < y, service i* is checked. In detail, f : A × Σ → P entails

- p_i(a, σ) = 0 ∀i if a = α or if a ≥ y;
- p_{i*}(a, σ) = 1 if a ≠ α and a < y (and p_i(a, σ) = 0 for i ≠ i*).

The mapping d : A × Σ × X_{i*} → {K, D} is defined as follows:

- d(a, σ, x_{i*}) = K if a ≠ α and either a ≥ y, or a < y and x_{i*} = 0;
- d(a, σ, x_{i*}) = D otherwise, i.e. if a = α, or if a < y and x_{i*} = 1.

In the Appendix we investigate the PO's incentives to take actions and make announcements in response to this P-rule for some pre-announced target y. We distinguish between two regions of the state space, depending on whether the target y is feasible or not. We show that for a PO of type k < y (Region 1), refraining from diversion and announcing the truth is optimal provided y ≤ l′, where l′ is the integer part of δl, l′ = int(δl). We note that the equilibrium is not unique with respect to the PO's claim: in fact, any a ≠ α that induces a < y is consistent with a no-stealing equilibrium in Region 1. In Region 2 (k ≥ y) we show that for y ≤ l′ it is optimal for the PO to divert x = k − y budget units and to announce some a such that a ≥ y. He is never truthful regarding the k − y services where he diverts. The CI's objective function has the same form as under full information:

$$EV^{SAP} = \sum_{i=0}^{y} q(i)\,\frac{i}{n}B + (1 - Q(y))\,\frac{y}{n}B, \qquad \text{s.t. } y \leq \delta l.$$

This is maximized at y* = l′. We have the following results:

26 All services are assumed to have an equal number of users; those users have an equal propensity to complain and are identical with respect to their accuracy.
27 It is a research task in itself to design the information-processing algorithm so as to make the best use of the information in the complaints. We here consider only a very simple technology.
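To make the platform's processing step concrete, here is a minimal sketch of one aggregation rule consistent with the definitions above. The function name `platform_signal` and the dictionary-based complaint scores are our own illustrative choices, not the paper's algorithm, which is deliberately left unspecified.

```python
from typing import Mapping, Optional, Sequence, Tuple

def platform_signal(
    claim_hq: Sequence[bool],      # PO's claim: is service i announced as HQ?
    type1: Mapping[int, int],      # type-1 complaint counts per service index
    type2: Mapping[int, int],      # type-2 complaint counts per service index
) -> Tuple[int, Optional[int]]:
    """Aggregate a claim and complaints into the signal sigma = (a, i*)."""
    # a: announced-HQ services that attracted no complaint of any kind
    a = sum(
        1 for i, hq in enumerate(claim_hq)
        if hq and type1.get(i, 0) == 0 and type2.get(i, 0) == 0
    )
    # i*: a service with the highest type-2 complaint score; the paper's
    # v (outside N) is rendered as None when no type-2 complaint exists
    if not any(type2.values()):
        return a, None
    return a, max(type2, key=lambda i: type2[i])
```

The informativeness parameter δ of the text would then be a property of how complaint scores correlate with actual diversion, which this simple tally does not model.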


Proposition 4 A P-rule using service users' complaints and the PO's announcements can achieve (i) high-quality delivery of all productive services in every state in Θ(k) for k ≤ l′, and (ii) high-quality delivery of l′ services in every state in Θ(k) for k ≥ l′. (iii) In equilibrium the expected budget saved is

$$EV^{SAP} = \sum_{i=0}^{l'} q(i)\,\frac{i}{n}B + (1 - Q(l'))\,\frac{l'}{n}B.$$

Corollary 1 In the absence of a PO announcement, a P-rule built on complaints alone allows the detection of diversion with probability δ independently of the state. Such a P-rule induces full diversion in any state belonging to Θ(k), k ≥ l′.

Proposition 4 and its Corollary establish that a P-rule that uses information from service users together with the PO's announcements can approach the complete-information outcome. The role of the PO's announcement, in a context where refutation can be obtained from service users, is to make possible a state-contingent leniency rule similar to the optimal full-information scheme. This contrasts with PV (partial verification), where leniency is state-independent, in that it always applies within the pre-announced subset U. With the PO's announcement and refutations, the leniency rule applies only in states where the target is feasible and fulfilled. The SAP (Social Accountability P-rule) secures full compliance (no diversion) in states θ ∈ ∪_{k=0}^{l′} Θ(k) and secures at least l′ services in θ ∈ ∪_{k=l′}^{n} Θ(k). As the SAP is state-dependent, for δ close to 1 it dominates PV even in the absence of announcements (Corollary 1). The role of complaints is further to guide verification so as to optimize the use of verification resources. The imperfect informativeness of the signal i* bounds the outcome away from the first-best.

Interestingly, the equilibrium characterized in Proposition 4 is not unique with respect to claims. In Region 1 the PO can overstate or understate performance (and the productivity state). If there is some positive value for the PO in avoiding service-user complaints, however, truth-telling dominates. In Region 2, the PO's claims are untruthful except when the state allows the target to be met exactly, a = k = y. The presence of lies in the communication game reflects the liability constraint together with the provision that calls for dismissal when the PO implicitly recognizes that he diverted money.

It should be emphasized that the reliance on the PO's claims is motivated by a concern for simplifying the service users' task. The same outcome could be implemented without the PO's claims but with more sophisticated participation by service users. The CI would in such a case have to explain to service users the "principles for proper PO behavior in every situation" and would ask them to report deviations from those principles. This kind of scheme is obviously much more demanding for service users than the SAP, which only asks for the refutation of explicit claims. Given that service-user participation, regarding both sophistication and intensity, is an issue in practice, the SAP as proposed is arguably of greater practical value.

The result in Proposition 4 is consistent with Lipman and Seppi (1995). The communicating parties, the PO and the service users, have conflicting interests28 with respect to the CI's decision about

reappointment in some states of the world, i.e. when the PO has diverted money from service provision. Were service users able to prove their counter claims, the failure of service users to refute the PO’s service claim could be sufficient evidence that the claim is true. No verification by the CI would then be needed. In our context, users’ complaints are not hard evidence: they are instead used to guide verification to provide evidence of diversion. Another issue is that in our context the CI does not care about truth per se, but rather about diversion. In particular, false claims that do not hide diversion can be used to manipulate verification, as was shown in the working-paper version. By distinguishing between (type-1) complaints against false "good news" hiding a low-productivity state with low-quality service from (type-2) complaints that may hide diversion, the SAP-rule restrains the PO’s incentives to falsely claim good news. The result in Proposition 4 also underlines the significance of the informativeness of the signal produced by the platform. One immediate recommendation is for the CI to devise a platform which processes information so as to generate the most informative signal about diversion. We use the CI’s objective i.e., the budget saved

The welfare value of social accountability

We use the CI's objective, i.e. the budget saved from diversion, as a proxy for welfare.^29 Indeed, in our context "saved budget" translates immediately into improved service delivery. We consider the case where the CI observes neither the productivity state nor the quality of service delivery, and compare the budget saved under the optimal partial-verification scheme with that under the SAP.^30 Comparing

EV^{PV} = Σ_{i=0}^{l} q′(i; M)(i/n)B

to the equilibrium expected budget saved under the SAP,

EV^{SAP} = Σ_{i=0}^{l′} q(i)(i/n)B + (1 − Q(l′))(l′/n)B,

we see that the first terms of the two expressions are quite similar, but q(i) > q′(i; M), since q′(i; M) is the probability that θ belongs to Θ(i; M) ⊂ Θ(i). On the other hand we have l ≥ l′, so the comparison of the first terms is ambiguous. But for δ close to 1 we have Σ_{i=0}^{l} q′(i; M)(i/n)B < Σ_{i=0}^{l′} q(i)(i/n)B. Moreover, PV cannot prevent full stealing in the complementary set U, while with the SAP l′ high-quality services are guaranteed in good states, worth (1 − Q(l′))(l′/n)B. So when the signal generated by service users' complaints is sufficiently informative, the SAP ensures that the PO allocates all of the discretionary budget (kB/n) to services in poor states (θ ∈ Θ(k), k ≤ l′) and a share l′B/n of the discretionary budget in good states (θ ∈ Θ(k), k > l′). In contrast, PV only secures kB/n in states θ ∈ Θ(k; M) with card(M) = l.

^29 This means that we do not take into account the PO's value of stolen money. Similar qualitative results obtain when accounting for the PO's utility of stolen money, provided the constant marginal utility of money is less than one. Such a situation corresponds to, e.g., the case where there are transaction costs associated with money laundering.

^30 We perform the comparison in the no-observation case for several reasons. First, even if the CI can observe the delivery of some services, it is unlikely that in reality he observes everything. Second, our result in Proposition 3 shows that even perfect observability does not dramatically change the welfare performance of the mechanisms investigated.
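To make the comparison concrete, the following minimal sketch in Python evaluates both expressions under the binomial state distribution. The parameter values, and the convention that l and l′ are the largest integers with l(B/n) ≤ w and l′(B/n) ≤ δw (as the Appendix proofs suggest), are our illustrative assumptions, not the paper's calibration.

from math import comb

n, B, p = 20, 1.0, 0.5        # number of services, budget, prob. of high productivity
w, delta = 0.2, 0.9           # fixed wage and detection probability of the platform

def q(i, size):
    # binomial probability that exactly i out of `size` services are productive
    return comb(size, i) * p**i * (1 - p)**(size - i)

l  = max(k for k in range(n + 1) if k * B / n <= w)           # PV threshold l
lp = max(k for k in range(n + 1) if k * B / n <= delta * w)   # SAP threshold l'

# PV with m* = l pre-announced services: budget is saved only inside M
EV_PV = sum(q(i, l) * (i / n) * B for i in range(l + 1))

# SAP: full compliance for k <= l', and l' services secured in good states
Q_lp = sum(q(i, n) for i in range(lp + 1))
EV_SAP = sum(q(i, n) * (i / n) * B for i in range(lp + 1)) + (1 - Q_lp) * (lp / n) * B

print(f"l = {l}, l' = {lp}")
print(f"EV_PV  = {EV_PV:.4f}")    # equals l*p*B/n here
print(f"EV_SAP = {EV_SAP:.4f}")   # dominated by the guaranteed l'/n share

With these values the sketch prints l = 4, l′ = 3, EV_PV = 0.1000 and EV_SAP ≈ 0.1500, in line with the dominance of the SAP for δ close to 1.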

Concluding Remarks

This paper considered the situation where a public official is delegated the power to allocate resources to the production and delivery of public services. However, he may divert some money into his own pocket. The institutional context is characterized by a fixed wage and limited liability: the public official can at worst be dismissed. We are interested in the monitoring value of simple accountability mechanisms that rely on limited verification resources and may use communication.

The presence of the liability constraint implies that there is diversion of funds in some states even in the first-best complete-information outcome. The optimal accountability mechanism is characterized by a performance target above which the PO is implicitly allowed to divert funds. In the absence of any information about the PO's behavior, the optimal P-rule calls for random verification within a pre-announced subset of services. Surprisingly, information about the quality of service delivery (a signal of the PO's behavior) is of little value. In particular, we find that the most intuitive mechanism, which focuses verification resources on services where diversion might have occurred, leads to maximal diversion. We show that combining random verification with a necessary performance target weakly improves upon the optimal partial-verification outcome. In contrast, introducing communication changes the picture dramatically. We show that the requirement that the public official publicly defend his record, when service users can refute his claims, can be exploited in a mechanism that approaches the complete-information first-best. This result is in line with the intuition behind some social-accountability initiatives. It reveals that a well-designed persuasion game involving the public can play a significant role in improving welfare by securing the accountability of public officials and reducing the extent of corruption.

We hope that clarifying the properties of citizen-led accountability mechanisms can play an important role in establishing their legitimacy and strengthening incentives for enforcement. A direct integration of well-understood accountability mechanisms into more formal contractual relationships can also be considered. For instance, public officials' employment contracts could include a provision committing them to accept the outcome of a social accountability mechanism, just as professional evaluation plays a role in the careers of employees.


References

[1] Austen-Smith D. (1992) "Strategic Models of Talk in Political Decision-Making", International Political Science Review, 13/1, 45-58.

[2] Ben-Porath E., E. Dekel and B. Lipman (2012) "Optimal Allocation with Costly Verification", mimeo.

[3] Gale D. and M. Hellwig (1985) "Incentive-Compatible Debt Contracts: The One-Period Problem", Review of Economic Studies, 52/4, 647-663.

[4] McGee R. and J. Gaventa (2011) "Shifting Power? Assessing the Impact of Transparency and Accountability Initiatives", IDS Working Paper 2011/383.

[5] McGee R. and J. Gaventa (2011) "Synthesis Report: Review of Impact and Effectiveness of Transparency and Accountability Initiatives", http://www.transparency-initiative.org/reports/synthesis-report-impact-and-effectiveness-of-transparency-and-accountability-initiatives.

[6] Glazer J. and A. Rubinstein (2004) "On Optimal Rules of Persuasion", Econometrica, 72/6, 1715-1736.

[7] Glazer J. and A. Rubinstein (2006) "A Study of Pragmatics of Persuasion: A Game Theoretical Approach", Theoretical Economics, 1, 395-410.

[8] Joshi A. and P. Houtzager (2012) "Widgets or Watchdogs? Conceptual Explorations in Social Accountability", Public Management Review, 14/2, 145-162.

[9] Lambert-Mogiliansky A. (2013) "Social Accountability: Persuasion and Debate to Contain Corruption", PSE Working Paper 2013-42.

[10] Lipman B. and D. Seppi (1995) "Robust Inference with Partial Provability", Journal of Economic Theory, 66/2, 370-405.

[11] Lupia A. and M. McCubbins (1994) "Designing Bureaucratic Accountability", Law and Contemporary Problems, 91-126.

[12] Malena C. et al. (2004) "Social Accountability: An Introduction to Concepts and Emerging Practice", Social Development Paper No. 76, World Bank.

[13] Maskin E. and J. Tirole (2004) "The Politician and the Judge: Accountability in Government", American Economic Review, 94/4, 1034-1054.

[14] Noveck S. (2009) Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, Brookings Institution Press.

[15] Persson T., G. Roland and G. Tabellini (1997) "Separation of Powers and Political Accountability", Quarterly Journal of Economics, 112/4, 1163-1202.

[16] Posani B. and Y. Aiyar (2009) "State of Accountability: Evolution, Practice and Emerging Questions", AI Working Paper 2009/2, http://indiagovernance.gov.in/files/Public-Accountability.pdf.

[17] Townsend R. (1979) "Optimal Contracts and Competitive Markets with Costly State Verification", Journal of Economic Theory, 21, 265-293.

[18] Sahgal G. (2008) "Unpacking Transparency and Accountability Measures: A Case Study of Vijaipura Panchayat, Rajasthan", www.accountabilityindia.in.

[19] UNDP (2013) "Reflections on Social Accountability", July 2013.

[20] Weber M. (1958) "Bureaucracy", in H. H. Gerth and C. Wright Mills (eds. & trans.), Essays in Sociology, Oxford University Press, 232-235.

[21] World Bank (2013) "Mapping Context to Social Accountability", Resource Paper, Social Development Department, World Bank, Washington DC.


Appendix

Proof of Proposition 1
For any given y, the incentive constraint to refrain from diverting for a PO of type k ≤ y is

x(B/n) ≤ w,  where x = Σ_{i∈N} x_i.

The IC is most demanding at x = k. Since l(B/n) ≤ w, for y ≤ l we have x* = 0, s = k, and the PO is reappointed. For y > l, whenever k ≤ l, x* = 0 and the PO is reappointed; but for y ≥ k > l, x* = k, so the PO steals the whole budget and is dismissed.

Consider now a PO of type k > y. Since the P-rule allows him to steal (k − y)(B/n) unpunished, i.e. anything above y, the only issue is whether he steals more than k − y:

x(B/n) ≤ w + (k − y)(B/n)  ⟺  y(B/n) ≤ w  (at the most demanding point x = k).

Hence for y ≤ l, x* = k − y, while for y > l, x* = k. Thus for y = l we have proved (i) and (ii). We now integrate the PO's best response to the P-rule into the CI's objective function:

max_y EV^{fi} = Σ_{i=0}^{y} q(i)(i/n)B + (1 − Q(y))(y/n)B    (5)
s.t. y ≤ l,    (6)

where q(z) = prob{θ ∈ Θ(z)} = [n!/(z!(n − z)!)] p^z (1 − p)^{n−z} and Q(z) = Σ_{i=0}^{z} q(i), so that 1 − Q(z) is the probability that the PO is of type i > z. We have

ΔEV^{fi}/Δy = q(y+1)((y+1)/n)B + (1 − Q(y+1))((y+1)/n)B − (1 − Q(y))(y/n)B = (1 − Q(y))(B/n) > 0,

implying y* = l. Since by the liability constraint l(B/n) is the maximal budget saving that can ever be obtained, the proposed P-rule is optimal: no budget is diverted when k < l, and diversion is limited to k − l otherwise. The total expected budget saved is

EV^{fi} = Σ_{i=0}^{l} q(i)(i/n)B + (1 − Q(l))(l/n)B. QED
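As a sanity check on the monotonicity argument, the following sketch in Python (with illustrative parameters of our choosing) evaluates EV^{fi}(y) on [0, l] and confirms that it is increasing, so that y* = l:

from math import comb

n, B, p, w = 20, 1.0, 0.5, 0.2
q = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)   # prob. of exactly i productive services
Q = lambda z: sum(q(i) for i in range(z + 1))
l = max(k for k in range(n + 1) if k * B / n <= w)   # liability threshold

def EV_fi(y):
    # expected budget saved: full compliance for k <= y, diversion limited to k - y above
    return sum(q(i) * (i / n) * B for i in range(y + 1)) + (1 - Q(y)) * (y / n) * B

vals = [EV_fi(y) for y in range(l + 1)]
assert all(b > a for a, b in zip(vals, vals[1:]))    # Delta EV = (1 - Q(y)) B/n > 0
print(f"y* = l = {l}, EV_fi(l) = {vals[-1]:.4f}")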

Proof of Proposition 2
First note that there is no point in confining verification to services in M ⊂ N if the subset M (with #M = m) is not pre-announced. Since all services are symmetric, in the absence of an announcement the probability of verification faced by the PO is prob{s_i ∈ M}·(1/m) = (m/n)(1/m) = 1/n, which we know by Lemma 1 has no monitoring power.

Consider the optimal response to PV(m) of a PO of type k. Assume there are k′ ≤ k productive services in U when the state is θ ∈ Θ(k). The IC for not diverting in M is

(k′/n)B + w ≥ ((x_m + k′)/n)B + ((m − x_m)/m)w  ⟺  (1/n)B − (1/m)w ≤ 0,

where x_m = Σ_{i∈M} x_i. The marginal payoff from stealing in M is constant, by linearity and the equiprobability of verification. Consequently, if m > l we have (m/n)B > w, and the PO chooses to steal from every productive service in M, so he steals from k′ + (k − k′) = k services. If m ≤ l, utility is a decreasing function of x_m and the PO chooses to steal only from the productive services in U.

We next consider the CI's incentives with respect to m. Let q′(z; M) = prob{θ ∈ Θ(z; M)} = [m!/(z!(m − z)!)] p^z (1 − p)^{m−z}. This is the probability that θ = (θ_1, ..., θ_m) has z high-productivity parameters. We can now express the expected budget saved for an incentive-compatible PV:

EV^{PV} = Σ_{i=0}^{m} q′(i; M)(i/n)B,  s.t. m ≤ l,    (7)

which is unambiguously increasing in m, hence m* = arg max_m EV^{PV} = l. (ii) Given m* = l, x*_i = 0 for all i with s_i ∈ M. Since U contains services that are never checked, for any θ_i = θ̄ with i ∈ [l + 1, n] we have x_i = 1: the PO diverts the whole discretionary budget in U but is always reappointed, d(x*_i) = K, ∀i; s_i ∈ M. QED

Proof of Proposition 3
The PO's incentives depend on the state θ as follows. If θ ∈ Θ(k) with k < y, the target is not feasible, prob(K; s) = 0 for all s since s < y, and the PO's payoff is EU = (x/n)B, so x* = k. In states θ ∈ Θ(k) with k ≥ y, since prob(K; s) = 0 for x > k − y, we know from Proposition 1 that x* ≤ k − y for y ≤ l. We also know from Proposition 2 that for m ≤ l, x*_i = 0 for all i with s_i ∈ M. Substituting the PO's optimal response, the CI's objective is

max_{y,m} EV = (1 − Q(y))(y/n)B + E_{θ∈Θ(k;m), k>y} Σ_{i=y}^{m} q′(i; M)((i − y)/n)B    (8)
s.t. y, m ≤ l.

As in the case with no information (Proposition 2), the objective function is increasing in m, so that m* = l. y* is the largest integer y ≤ l for which the marginal value of raising the target is non-negative. Considering the derivative of the first term with respect to y,

(d/dy)[(1 − Q(y))(y/n)B] = (1 − Q(y))(B/n) − q(y)(y/n)B ≥ 0,

and using the definition of Q(y), we obtain 1 − Σ_{i=0}^{y−1} q(i) − q(y) − y q(y) ≥ 0 ⟺ 1 − Σ_{i=0}^{y−1} q(i) ≥ (y + 1) q(y), which defines y* as the largest y ≤ l such that 1 − Σ_{i=0}^{y−1} q(i) ≥ (y + 1) q(y). We here have a trade-off between the gain of an additional unit in states where the target is feasible and the loss of the whole budget in states where the target is not feasible. The second term E_{θ∈Θ(k;m), k>y} Σ_{i=y}^{m} q′(i; M)((i − y)/n)B is strictly decreasing in y. This implies that y* ≤ l. QED

Proof of Proposition 4
We proceed by backward induction. We distinguish between two regions of the state space, depending on whether the target y is feasible: Region 1: θ ∈ ∪_{k=0}^{y−1} Θ(k), and Region 2: θ ∈ ∪_{k=y}^{n} Θ(k).

Region 1: a PO of type k < y. We first consider the PO's choice of announcement. Note that announcing a = α can never be optimal, since the PO is then dismissed with probability 1. Next, whatever x and whatever a ∈ A, we have a ≤ y, because any a with #s_i > y will be refuted: there cannot be y services that are not complained about when k < y, since by assumption type-1 complaints are perfect signals. Whatever a ≠ α, the platform will always produce a signal σ with a < y, and the P-rule calls for the verification of i* ∈ N. Hence the PO may, but need not, announce a such that {#s_i; s_i = s̄} = k < y. With probability δ the PO is detected and loses his job if he diverted. Since the risk of detection is independent of the number of diverted shares, the PO faces a binary choice, x = k or x = 0; the IC is

(k/n)B + (1 − δ)w < w  ⟺  (k/n)B < δw,

where the most demanding constraint is at k = y. Using our earlier results, for a PO of type k < y, refraining from diversion and announcing the truth is optimal provided y ≤ l′.

Region 2: a PO of type k ≥ y. If the PO announces a such that a ≥ y, we must have x ≤ k − y; the PO is then reappointed without verification. If instead a < y, suggesting x > k − y, there will be verification and detection with probability δ. Following the same reasoning as above, the choice is binary between diverting (k − y)(B/n) unsanctioned and diverting (k/n)B at the risk of detection; the IC constraint is

((k − y)/n)B + w > (k/n)B + (1 − δ)w  ⟺  (y/n)B − δw < 0,

which is satisfied for y ≤ l′. For y ≤ l′ it is thus optimal for the PO to divert x* = k − y and to announce some a with a ≥ y; since a = α triggers dismissal, he will not be truthful about the k − y services from which he diverts. So in equilibrium we have untruthful announcements and complaints that do not lead to verification. Given the PO's best reply, the CI solves

max_y EV^{SAP} = Σ_{i=0}^{y} q(i)(i/n)B + (1 − Q(y))(y/n)B,  s.t. y ≤ l′,

which yields y* = l′, since EV^{SAP} is increasing in y (see the proof of Proposition 1). QED
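Finally, a sketch in Python of the PO's best reply under the SAP rule, mirroring the binary incentive constraints in the proof of Proposition 4; the parameter values and the threshold convention for l′ are again our illustrative assumptions:

n, B, w, delta = 20, 1.0, 0.2, 0.9
lp = max(k for k in range(n + 1) if k * B / n <= delta * w)  # threshold l'
y = lp                                                       # CI sets the target y* = l'

def diversion(k):
    """Optimal diversion x* for a PO of type k facing target y."""
    if k < y:
        # Region 1: the target is infeasible; binary choice x = 0 or x = k,
        # and refraining is optimal iff (k/n)B < delta*w
        return 0 if k * B / n < delta * w else k
    # Region 2: divert k - y unsanctioned iff (y/n)B < delta*w, else divert all
    return k - y if y * B / n < delta * w else k

for k in range(n + 1):
    print(f"type k = {k:2d} -> x* = {diversion(k)}")

Under these parameters the sketch reproduces the equilibrium pattern of Proposition 4: zero diversion in all states with k < y = l′ and diversion of exactly k − l′ shares above the target.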
