As good as it gets? Projects that make a difference: the role of monitoring and evaluation.

Karen Munce
University of Western Sydney

Abstract

Project monitoring and evaluation (M&E) strategies have frequently shed insufficient light on development outcomes and impact. The question that always needs to be asked is: “What difference does the significant investment in development assistance projects really make in the lives of individuals at the local level, in contexts where project interventions have been justified on the basis of poverty alleviation?” Criticisms made of conventional project monitoring and evaluation practices include: insufficient stakeholder participation in M&E planning and implementation; a narrow focus on project inputs, implementation processes and outputs rather than on development outcomes, impact and sustainability; limited local ownership of M&E processes and limited use of M&E outcomes; and little building of local capacity for ongoing M&E. This paper suggests that there is considerable scope for the enhancement of monitoring and evaluation as a tool not merely to passively monitor project achievement but to actively contribute to project effectiveness. A framework is presented for the enhancement of project monitoring and evaluation, encompassing a set of key principles relating to successful project outcomes. While the examples provided relate to basic education project activity, the planning model and key principles proposed have wide application.

Keywords: monitoring and evaluation (M&E), basic education, development projects, development impact, participatory monitoring and evaluation, M&E capacity building, results-based management, continuous quality improvement, stakeholder responsiveness, project planning

Development Context

Billions of dollars are spent every year on overseas development assistance projects in the name of poverty alleviation and sustainable development. In the education sector, over the past 15 years, the imperative to achieve ‘quality basic education for all’ has been the catalyst for many projects. Overseas aid is big business.

The problem is that project monitoring and evaluation (M&E) strategies infrequently shed sufficient light on development outcomes and impact. If project ‘outputs’ are achieved ‘on time’ and ‘within budget’, a project is often judged successful. Too seldom is it asked: “So what? What difference does the significant investment in development assistance projects actually make in the lives of those individuals in whose name these projects were justified?” Given the lack of evidence to the contrary, it is not difficult to form the impression that many internationally supported education reforms have produced disappointing results in impact and sustainability terms (Buchert, 1998, in Crossley, 2001). Consider the following scenario:

A major donor-supported education initiative was undertaken in a particular country. Over an eight-year period and at an approximate cost of US$200 million, the project aimed to increase basic education completion rates and to enhance learning achievement. Considerable technical assistance was provided for this purpose. The project’s approaches were considered innovative and a possible model for replication. On this basis, the project was selected for review by an independent team.

Project personnel pointed to an impressive range of ‘quality improvements’. These included the establishment of systems for teacher training and outreach, the supply of instructional materials, the conduct of public information campaigns, improvements in the teachers’ payroll system and decentralised classroom construction, amongst others. Many people benefited from the project: curriculum developers, teacher trainers, education administrators, materials publishers and distributors, consultants, construction companies, project management companies and donor agency personnel. Was this a successful project? What actually happened as a result of these quality improvements?

The project did not in fact formally investigate the manner in which the ‘project outputs’ interacted in the context of an individual school, community or classroom. The project did not assess their collective impact in terms of enhanced teaching and learning processes or in relation to student participation and learning outcomes. Project monitoring did not assess how the supply of inputs may have impacted differently on different sub-groups. The national Education Management Information System data was said to be unreliable, but the project did not build its capacity, nor strengthen education sector monitoring and evaluation more broadly.

Anecdotal reports, on the other hand, suggested less than favourable project outcomes. Minimal teacher involvement in the piloting of the new materials was said to have resulted in a curriculum that is controversial and whose implementation is uncertain. The impact on learning of textbook provision was said to be doubtful because of constraints at the school and classroom level. These included malnutrition, insufficient quantities of materials supplied, teachers feeling ill-equipped to teach the new curriculum, absenteeism of pupils and teachers, lack of staff housing, lack of classroom furniture and lack of latrines. Despite the training provided, community members were unwilling (or unable) to devote their time to project activity without compensation.

A local independent study found unacceptably high student absenteeism, dropout and poor performance on academic skills tests, especially in rural schools and in poorer districts. Specific measures for disadvantaged pupils and districts had not been included in the project. Parents and pupils cited the poor quality of teaching and school facilities as two of the contributing factors, despite the project’s self-reported quality improvements. How can this be? Is it naïve to have expected more? Is this ‘as good as it gets’? Is ‘as good as it gets’, on this basis, good enough?

The independent assessment commented that enhanced impact on student participation and learning outcomes may have resulted had a broader conceptualisation of ‘education quality’ been utilised, encompassing the whole range of school and community-level factors that influence the effectiveness of the inputs supplied. It was recommended that the project design should have been based on a participatory and comprehensive analysis of the existing strengths and weaknesses at the school and community level, which would, at the same time, have provided a baseline for monitoring. Pressures from outside the sector are now pushing for better evidence of ‘value for money’, through ongoing monitoring of the relationship between project interventions and development impact, including attendance, dropout, primary completion rates and trends in learning outcomes. The involvement of relevant stakeholders is considered critical to this process.

Education development practitioners may find this scenario familiar. While based on a real and recent project situation, in terms of project approach, conceptualisation of quality, intervention strategies and approaches to monitoring and evaluation, the story presented might be describing any number of comparable projects in the recent history of education sector development assistance programs. The scenario raises a number of issues that are relevant to the present paper.

This paper has two objectives. Firstly, it aims to present several key themes relating M&E to project effectiveness which have emerged from a review of the literature over the past ten-year period. The literature is drawn from a number of sources, including journal articles, development manuals and guides, project documentation, donor agency web sites, annotated bibliographies and conference papers, amongst others. This review has been augmented by reflection on the present author’s own professional involvement in a wide range of education sector project planning and review activities over the same period.

Secondly, the paper attempts to establish a set of general principles to guide the enhancement of M&E such that M&E functions not merely to passively monitor project achievement in output terms but to actively contribute to increased project effectiveness in terms of development impact.

While the focus of the paper is monitoring and evaluation, the implications of the proposed approach for improved project planning are emphasised. The examples provided relate to basic education project activity; however, the approach has wider application. Reflective application of the principles is recommended, with ongoing monitoring and evaluation of the process and its outcomes and with dissemination of lessons learnt.

Critique of Monitoring and Evaluation

Monitoring and evaluation have traditionally been considered separate activities, although they are inter-related. Monitoring is the process of collecting evidence, whether through measurement, systematic observation, regular record keeping or planned qualitative study. This process is essentially descriptive. When the results of monitoring are used to make judgments about project progress and effects, evaluation is involved, and implications can then be drawn for subsequent action. In the present paper, ‘monitoring and evaluation’ are referred to collectively by the combined acronym ‘M&E’, with the understanding that together they constitute a two-pronged tool which can be applied to assess and promote project performance. ‘Performance’ is defined as progress towards, and the achievement of, targeted development results (UNDP, 2002). The assessment of performance thus defined does not preclude consideration of unanticipated results or outcomes, nor reconsideration of the continuing relevance of the results initially targeted.

Project ‘results’ are commonly expressed in terms of a hierarchy of outputs, outcomes and impact, summarised in a Logical Framework Matrix (or ‘Logframe’). Project outputs (or ‘deliverables’) typically comprise tangible products, processes or services, resulting from project activity, that are intended to contribute to the realisation of project objectives (for example, people trained, curriculum developed, materials printed, classrooms constructed). Project outcomes are changes in the development conditions resulting from the achievement of project outputs (for example, enhanced teaching and learning processes resulting from the training completed, the textbooks supplied or the curriculum revised). Development impact is expected to flow as a consequence of the outcomes and usually relates to improvements in relation to development goals for targeted groups (for example, increased participation in and completion of a relevant basic education program). Project results can also be defined by the level of intervention, including the macro (or policy) level, the meso (or institutional) level, and the micro (or community, household or individual) level. As Jackson and Kassam (1998a, p.51) have noted, “successful project interventions are often characterised by mutually reinforcing activities and results up and down these different levels”.
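By way of illustration, a simplified results hierarchy for a hypothetical basic education project might read as follows (the entries and indicators are illustrative only, not drawn from any particular project):

Impact: increased participation in, and completion of, basic education in targeted districts (e.g. enrolment and primary completion rates, disaggregated by gender and district).
Outcome: enhanced teaching and learning processes in targeted classrooms (e.g. proportion of observed lessons in which the revised curriculum and supplied materials are used effectively).
Outputs: teachers trained, textbooks supplied, classrooms constructed (e.g. number of teachers completing training, textbook-to-pupil ratio, number of classrooms in use).

Read upwards, each level is expected to contribute to the one above it; read downwards, each level suggests what must be monitored if claims at the higher level are to be substantiated.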

Within the international development environment, criticisms of conventional project monitoring and evaluation practices have been made on the grounds that they contribute little towards development results. The recurring themes of these criticisms include the following:

Lack of stakeholder participation and responsiveness

M&E activities are frequently: conceptualised in a manner that privileges independence and objectivity, limiting local participation; conducted by outside ‘experts’ who ‘extract’ information using externally determined indicators; carried out over a time-span too short to understand the complexity of local contexts; insufficiently sensitive to cross-cultural issues; and apt to place onerous demands on less than robust local systems (ALNAP, 2001; Capeling-Alakija, Lopes, Benbouali, & Diallo, 1997; Ebbutt, 1998; Estrella & Gaventa, 1998; McDonald, 1999; Riddell, 1999; Snyder & Doan, 1995).

Lack of M&E focus on project processes in relation to development results

M&E activities frequently focus narrowly on project inputs, activities and outputs rather than on development outcomes, impact, sustainability and the relations among them. M&E activities accord insufficient regard to the complexity of development contexts, fail to scrutinise the theories of change underpinning particular strategies, utilise dubious criteria for ascribing project success and fail to disaggregate development benefits by population type (ALNAP, 2001; Bamberger, 2000; Bernard, 2002; McDonald, 1999; Nichols, 2002; Riddell, 1999).

Limited conceptualisation of M&E purpose and use

M&E activities frequently privilege the information needs of certain stakeholder groups whilst failing to acknowledge or respond to the different information needs of others. M&E activities often fail to specify the manner in which M&E knowledge will feed back into improved practice, and fail to facilitate and document the outcomes of such feedback processes. Access to M&E knowledge is limited (ALNAP, 2001; Bamberger, 2000; Capeling-Alakija et al., 1997; Ebbutt, 1998; Estrella & Gaventa, 1998; McDonald, 1999; Riddell, 1999; Snyder & Doan, 1995).

Lack of M&E capacity building

M&E is typically conceptualised as ‘project-specific’, ending with the project, rather than as a potentially valuable tool that could be embedded in local practice to inform ongoing quality improvement processes. M&E capacity building is infrequently included as an explicitly resourced and carefully planned project intention. Opportunities are thus lost to maximise the potential gain, in capacity building terms, from the significant investment in M&E activities (Middleton, Terry, & Bloch, 1992; Riddell, 1999; Snyder & Doan, 1995).

Lack of adequate planning for M&E and lack of M&E quality control

M&E is often inadequately planned and poorly timed, with limited regard for quality control or monitoring of standards (ALNAP, 2001; McDonald, 1999; Nichols, 2002; Riddell, 1999).

Dimensions of an Enhanced Approach to M&E

While the above-mentioned characteristics of M&E activities are still prevalent in many development contexts today, parallel efforts are being made by a growing number of concerned practitioners to develop strategies that strengthen the relationship between M&E and project effectiveness. The following section discusses these strategies, explains their importance for successful project outcomes, raises a number of issues and constraints related to their operationalisation and presents a set of principles thus derived to inform improved practice. Although the strategies are not mutually exclusive, they are presented separately in the following discussion because each is significant in both conceptual and practical ways.

Stakeholder Participation, Responsiveness and Ownership

A major theme in the literature over the past ten years has been the issue of stakeholder participation. Two frequent criticisms have been made of M&E approaches in relation to stakeholders. Firstly, the most important stakeholders, those who are intended to facilitate and/or benefit from project activities, have often had very little voice in the design, implementation, monitoring and evaluation of project activity. Such an approach is increasingly viewed as counter-productive to local ownership and capacity building, and detrimental to sustainable development outcomes.

Secondly, both project design and project M&E activities treat stakeholders in a generic way, as an undifferentiated mass, with insufficient regard for the different interests and motivations of different stakeholder groups at different levels, their varying capacities to participate and their varying perceptions of, and access to, project benefits.

Interest in ‘participatory’ M&E has increased over the past ten years as a natural offshoot of the growing support for participatory approaches to development more broadly (see, for example, Blackburn & Holland, 1998; Chambers, 1997; Cornwall, 2000; Eyben & Ladbury, 1995; Gaventa, 1998; Guijit & Kaul Shah, 1998; Holland & Blackburn, 1998; Kane, 1997; Nelson & Wright, 1995). A rapidly expanding literature on participatory M&E documents a wide range of approaches, purposes, principles, tools, methods, resources and issues (Booth, Ebrahim, & Morin, 2001; Capeling-Alakija et al., 1997; Chambers, 1997; Cornwall & Jewkes, 1995; Davis Case, 1990; Estrella, 2000; Estrella & Gaventa, 1998; Guijit, 2000; Huberman, 1995; Jackson & Kassam, 1998a; McAllister, 1999; Shotton, 1998).

But what exactly is meant by ‘participation’? As Cornwall (2000, p.8) has commented, “‘participation’ appears to offer everybody what they would like to understand it to mean, evoking a warm sense of togetherness, common purpose and mutual understanding”. Yet a common understanding of participation is often lacking. Stakeholder participation in project activity can vary widely in terms of purpose, breadth, depth and outcome, depending on the conceptual and methodological approach informing a project’s strategy. Numerous typologies have been developed which generally classify participation in terms of varying degrees of control over decisions and resources (see, for example, Guijit & Shah, 1998; IIED, 1998; Pretty & Chambers, 1994; White, 1996). For participatory M&E advocates, stakeholder participation means the collective, active and democratic examination of project progress towards, and achievement of, locally determined results, and the subsequent use of the knowledge thus generated, by relevant stakeholders. Relevant stakeholders are those individuals who have a role and interest in the objectives, implementation and outcomes of a project (or part thereof). This is the meaning intended in the subsequent discussion.

Participatory approaches to M&E represent an epistemological shift from positivist research orientations emphasising objectivity and impartiality towards interpretive approaches that provide space for multi-voice discourses (McKay, 1998). As Kane and O’Reilly-de Brun (2001) have suggested, participatory approaches represent an interesting mix of phenomenology (or constructivism), post-positivism and critical theory. The approach has been particularly influenced by Guba and Lincoln’s (1989) constructivist approach to evaluation, which posited that evaluation findings are not ‘facts’ but are created through interactive processes, that everything is value-laden, that there is a need to take different value-positions into account, that people’s constructions are formed in the local context, and that a process of negotiation is required between stakeholders that respects plural value systems and multiple perspectives on a situation or problem.

When participatory approaches are coupled with a results-orientation on the one hand, and with anti-poverty biases, reversals of perspective and emphases on learning, empowerment and action on the other, the influences of post-positivist methodology and critical theory respectively can be traced. For Guijit et al. (1998), participatory M&E is a social process (of bringing people together in new ways), a cultural process (of coming to understand different views) and a political process (of sharing decisions in a more democratic manner). Jackson (1999) refers to two streams: the pragmatic (participation as a ‘means’ to increase stakeholder commitment to use) and the transformative (participation as an ‘end’, promoting democratisation of knowledge and social change in favour of the poor and marginalised).

Benefits of participatory M&E are widely reported (Capeling-Alakija et al., 1997; Doherty & Rispel, 1995; Estrella, 2000; Estrella & Gaventa, 1998; Jackson & Kassam, 1998a; Ryan, Greene, Lincoln, Mathison, & Mertens, 1998; Shulha & Cousins, 1997; Snyder & Doan, 1995). These variously include enhanced ownership, learning, capacity, utilisation, relevance, empowerment, consensus, institutionalisation, usefulness and complexity in relation to M&E processes and knowledge.

Despite the growing recognition of its benefits, challenges for participatory M&E are acknowledged (Doherty & Rispel, 1995; House, 2003; Jackson, 1999; Leurs, 1998; Murphy & Rea-Dickens, 1998; Nichols, 2002). These include: the potential loss of evaluation rigour; the potential for higher-status interest groups to monopolise the process; considerable front-end costs; the longer timelines required; varying stakeholder capacity; difficulties in balancing diverse stakeholder interests; and facilitator fatigue. Effective participation requires, perhaps above all, a climate in which stakeholders, including donors, see each other as partners with the common ultimate purpose of achieving development results (Nichols, 2002). One might naively have assumed this to be a given. However, the realignment of power relations among stakeholders, and the related adjustment of institutional structures and processes to provide meaningful opportunities for participation, is perhaps the greatest obstacle (UNICEF, 2001).

Participation is considered by advocates to be a process that typifies improved development practice. The long-term gains, in terms of potentially more sustainable development outcomes, warrant perseverance in the short term. Consequently, the first two principles proposed in this paper encompass stakeholder participation and stakeholder responsiveness. These are:

that the participation of relevant stakeholders is actively promoted, encompassing stakeholder engagement in the situation analysis, specification of results targeted, establishment of criteria for their assessment, examination of progress towards their achievement and determination of subsequent action;

that M&E processes are stakeholder responsive, acknowledging the diversity of stakeholder types, interests, perspectives, information requirements and capacities, and their implications for participation, resource allocations, timelines, M&E questions and methodology.

Clarification of M&E Focus, Purpose and Use

Several points need to be made about M&E focus, purpose and use. Firstly, projects are not always clear about (or do not always have realistic expectations of) what exactly is expected to result over a given time, with given resources and given strategies. It is difficult to assess results when it is unclear what was intended.

Secondly, agreement on the meaning of project success is often assumed but not made explicit. That project success can mean different things to different stakeholders is not always acknowledged. In an education project, for example, for the donor, success might be a general notion of ‘money well spent’ (however that is defined); for a national policy maker, it might include higher enrolments or higher standards; for teachers, it might mean better housing, regular receipt of salary payments or more teaching resources; for a parent, it might mean a child progressing to secondary school (Riddell, 1999); for a student, it might be a school lunch or the removal of an abusive teacher. Depending on the project’s concept, these may well all be valid concerns and merely parts of a whole, each representing a different perspective according to the location of the viewer. This complexity and these inter-relationships require articulation. Different methodological approaches are likely to be required to address information needs at different levels. The efficiency/accountability concerns of high-level policy makers and funding agencies, for example, may be best addressed through more quantitative approaches, including surveys. The utility/managerial/organisational effectiveness concerns of middle-level program managers may require a more eclectic or pragmatic approach. At the local level, understanding, pluralism and contextualisation may be more relevant, calling for qualitative approaches (Greene, 2003). Care needs to be taken to ensure that the methodologies chosen are capable of analysing the differential effects of significant project components on different sub-groups.
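The relationship between stakeholder level, M&E concern and methodological approach might be summarised as follows (an indicative summary of the preceding discussion, not an exhaustive typology):

Macro level (policy makers, funding agencies): efficiency and accountability concerns; predominantly quantitative approaches, such as surveys and indicator tracking.
Meso level (program managers, institutions): utility, managerial and organisational effectiveness concerns; a more eclectic or pragmatic mix of methods.
Micro level (teachers, parents, pupils, communities): understanding, pluralism and contextualisation; predominantly qualitative and participatory approaches.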

Thirdly, while our ultimate concern is the development impact of a particular intervention, project M&E activities frequently focus on the achievement of project outputs alone. Systematic collection of evidence to support claims of significant impact in development terms is necessary if new knowledge about aid effectiveness is to be fed back into subsequent development efforts. As a corollary, lack of evidence thwarts opportunities to learn lessons about aid effectiveness, resulting in the repetition of faulty approaches. Project effectiveness should be considered not just in terms of outputs, outcomes or development impact in isolation, but in terms of the interaction between them. A broader understanding is required of the local interactions and interests driving the success or failure of an initiative, together with greater recognition of the multi-dimensionality of projects and the relationship of the parts.

Fourthly, projects are not always explicit as to the overall purpose to be served by M&E activities or the specific uses that will be made of M&E results. A sole focus on one particular use of M&E information (for example, meeting donor accountability requirements) can obscure a) the overarching purpose of M&E, namely, to enhance the likelihood that targeted development results will be achieved, and b) other potential uses of M&E information that can contribute to this overarching purpose.

These might include a mix of professional, institutional or policy development, program enhancement, information sharing, awareness and communication, theory development and impact assessment, amongst others (Guijit, Arevalo, & Saladores, 1998). Documentation of the use of participatory M&E for a range of education-specific purposes is starting to emerge. These uses include: participatory school assessment and school planning in the context of school improvement processes (UNICEF, 2001); participatory school-focused baselines as a catalyst for learning and change (Moloney, 1998); teacher self-evaluation of the effectiveness of professional development (Peacock & Rawson, 2001); school-level impact assessment of education reform projects (Crossley, 2001); and education policy development (Kane, Bruce, & O'Reilly de Brun, 1998). While the accountability and program enhancement uses of M&E knowledge have traditionally been considered opposed, this tension has more to do with the criteria used than with accountability as such. An expanded notion of accountability, beyond scrutiny of resource use towards a broader range of outcomes, means that accountability and program enhancement might be complementary.

Finally, it cannot be assumed that subsequent positive action will be automatic once M&E information becomes available. Frequently an M&E exercise is considered complete once a report is delivered with little transparency as to what actually happens as a consequence. Many reports are inaccessible or unknown, residing on donor shelves, in archives beyond the public domain or in piles in the corner of Ministry of Education offices. If knowledge generated by the M&E process is unused, available information is wasted and potential lessons to be learnt are lost. Effective utilisation of M&E knowledge requires enabling conditions, including a conducive policy environment, appropriate feedback mechanisms, control over the direction of change and access to the resources required.

Founded on the realisation that producing ‘good deliverables’ (or outputs) is not enough, there has been a shift in recent years within a number of donor agencies from activity-based management to results-based management, whereby an organisation endeavours to ensure its activities contribute to the achievement of clearly stated goals and objectives (Binnendijk, 2001; UNDP, 2001; Wholey, 2001; World Bank, 1996). Continuous monitoring and evaluation is critical to this process. The design of useful, participatory and results-oriented M&E involves more than just matching stakeholder information needs with particular methods, approaches and models, although this in itself is important. A re-conceptualisation of the M&E process is required: as one involving continuous and inclusive assessment, reflection, dialogue, learning, feedback and action, on multiple levels. As Symes and Jasser (1998) have argued, projects implemented exactly as planned have more to do with a lack of effective M&E than with being exceptionally well planned. A participatory, results-based approach recognises that M&E ‘use’ does not start (or stop) once reports are generated, but that the M&E process itself constitutes ‘use’.

However, while the theory of participation assumes that addressing issues locally identified as meaningful will prompt stakeholders to act on what they come to know (Freedman, 1998), unless stakeholders are empowered (and resourced) to act on their knowledge, the process will be thwarted.

Advocates of results-based management are not necessarily also advocates of participation (Jackson, 1998). However, a participatory approach with a results orientation is a potentially effective combination for enhancing project effectiveness. Consequently, several further principles are proposed, encompassing M&E purpose, use and focus. These are:

that the overarching purpose of M&E is to enhance the likelihood that targeted development results will be achieved, and that a range of specific uses of M&E should be identified at different levels in order to maximise the achievement of this purpose;

that M&E is conceptualised as a process of continuous and ongoing dialogue, learning, feedback and action, relative to the information needs of different stakeholders;

that a results-orientation provides a focus for project direction, encompassing reflection on the relationship between project processes, the achievement of lower-order results and their collective contribution towards development impact, whilst remaining mindful of unintended and/or undesirable outcomes.

Capacity Building, Institutionalisation and Sustainability

It is widely recognised that development results are unlikely to be achieved within the lifetime of a short-duration project. Thus, a successful project might be considered one that establishes processes (rather than products) that are shown to be effective in working towards the achievement of targeted results over time and which are sustained after project completion. M&E capacity building is therefore critical to the sustainability both of M&E activities and of the benefits that derive from projects more broadly.

M&E capacity building is more than the participation of local stakeholders in a particular monitoring or evaluation exercise, or the development of a set of indicators or tools. It involves the institutionalisation of an ongoing M&E process, including the capacity to continually evolve the M&E system according to changing needs (Guijit et al., 1998). A teachers’ college upgrade project, for example, may yield immediate outputs, including an enhanced curriculum and trained lecturers, as part of a longer-term goal of achieving effective teachers in a nation’s classrooms. The benefit may, however, be short-lived. In a relatively short time, the curriculum will again become outdated and the trained lecturers will transfer, be promoted, or fail to maintain their professional development. In contrast, if the college, with project support, develops and institutionalises an ongoing process of monitoring and evaluation for purposes of continuous quality review and enhancement (including a process whereby new staff are inducted into college quality assurance practices), then project effectiveness and sustainability will be enhanced.

Despite the term’s frequent usage and cited importance in the literature, and despite the vast number of project M&E activities conducted annually, capacity building for M&E is frequently overlooked. On the basis of an extensive review of USAID project evaluations, Snyder and Doan (1995) demonstrated that even where the same funding agency had multiple evaluations in the same country, in the same sector and in the same year, there was no discernible strategy to develop the capacity of any local entity to pursue evaluation as an important means of ongoing performance assessment. While pressure on international development agencies to respond to the information needs of their own parliaments is said to have contributed to this neglect (Bamberger, 2000), it is not readily apparent why these two intentions should be mutually exclusive. New indicators are needed for assessing performance and measuring success (Thompson, 1998). Short-term achievement reporting could include the establishment of systems and capacity to progressively measure impact over time, beyond project expiry. In this manner, donor accountability requirements can be met while, at the same time, the benefits of effective M&E outlive the limited-duration ‘comings and goings’ of individual projects and project managers (Binnendijk, 2001).

The problem has been exacerbated by the ‘business of development’, whereby outputs-based contracts influence external consultants and managing companies to focus on payment milestones in the delivery of development services. Achieving payment milestones in the most expeditious and efficient manner is not conducive to local participation or capacity building, because these are time-consuming processes. Though the model of intensive short-term evaluation studies conducted by an external team may suit the project cycle of donor agencies, it does not serve the long-term goals of improving program implementation and encouraging sustainable development initiatives with significant host-country ownership. Furthermore, projects are not always clear whether the primary intention is the efficient delivery of goods and services, or the building of local capacity for the ongoing delivery of goods and services. While the two intentions could be complementary, project timeframes, roles and responsibilities, contracts and terms of reference need to be adjusted to accommodate both expectations.

In education projects, where M&E capacity building efforts have been made, the scope has often been limited to the development of centralised education management information systems to support the information needs of ministries and donors, albeit with varying degrees of success. There has been relatively limited investment in M&E capacity building which supports the stakeholders closest to the classroom: students, their parents and teachers (Riddell, 1999). Efforts are rarely made to develop a comprehensive approach to M&E capacity building that links the information needs of individuals and organisations at local, district and national levels.

Institutionalising a participatory, results-oriented approach to M&E presents its own challenges, with implications for change to organisational cultures, procedures, incentives, rewards, and recruitment and staffing policies, amongst others (Jackson, 1999). As Blackburn and Holland (1998, p.3) have demonstrated, ‘it is one thing to facilitate participatory planning, (monitoring and evaluation) at the village level, but another thing to change often rigidly hierarchical, risk-averse management structures within organizations that make participation difficult over the longer term.’ M&E capacity building requires commitment to the notion of individual and organisational learning as a route to improvement, motivated individuals, and dedicated time and resources. These requirements may appear onerous in environments that are resource-poor, under-staffed and struggling to deliver basic core services (Estrella, 2000; Thompson, 1998). Care needs to be taken to avoid compounding the staffing burden when a limited number of over-worked and underpaid functionaries are saddled with more and more responsibilities.

Given that sustainable development ultimately rests on local ownership and capacity, a capacity building principle is proposed, namely:

that specific support be provided for the development and institutionalisation of a monitoring and evaluation regime that will prevail beyond the life of the project to enable a) ongoing review and update of project outputs over time and in response to changing local needs and b) long-term lessons to be learnt as to the effectiveness and sustainability of strategies implemented.

Implications for Project Planning, Project Effectiveness and Further Investigation

Incorporation of the principles proposed above has conceptual implications for the entire project planning and implementation process, beyond the mere inclusion of an M&E component, and is anticipated to reap benefits in terms of the enhanced quality of project design and the sustainability of outcomes. Implementation of a participatory, results-oriented and capacity building approach requires support from organisations which are open to change and are prepared to make the necessary adjustments to established procedures. These include shifts in the role of external consultants from evaluator to facilitator/negotiator/capacity builder; in the role of local partners from informant to participant/analyst; from judging to learning; from extracting to empowering; from ‘one-off’ to ‘ongoing’; and from rewarding rapid disbursement, top-down management and the achievement of project outputs to rewarding flexibility, problem-solving, stakeholder engagement and a focus on outcomes (Cracknell, 2000; Estrella, 2000; Thompson, 1998).

Given commitment to ‘participation’, a common understanding is required as to what it actually means in a given project context, its purpose and practical implications. Conscious effort is required to identify and actively build trust between stakeholders so that concerns can be voiced and heard and that all can contribute to the planning process. Many of the so-called challenges to participation can be addressed at the project inception stage with thoughtful planning, adequate resource allocation, appropriate timelines and effective approaches to decision-making, coordination and management (Dugan, 1996; Holland & Blackburn, 1998; Huberman, 1995; Jackson, 1999; Kane et al., 1998; Schoes, Murphy-Berman, & Chambers, 2000).

The development of a results framework at the project planning stage is a strategic process that helps clarify the project logic. Reflection on anticipated results, and on how their achievement will be determined, forces consideration of the adequacy and realism of proposed strategies. The development of an M&E results matrix is a useful planning tool. Vertical columns would comprise the hierarchy of results (and criteria for their assessment), the stakeholders for whom these results are of interest, the information needs of stakeholders relative to these results, the approaches to M&E to be pursued, the specific uses of M&E outcomes and feedback mechanisms, the capacity building requirements at each level, and the planning implications of each (including resources and timeframes). Horizontal rows represent the detail of a particular M&E task.
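A skeleton of such a matrix, completed here with a single hypothetical row for illustration, might look as follows:

Result (and assessment criteria): Outcome: enhanced teaching and learning processes (criterion: effective classroom use of the revised curriculum).
Interested stakeholders: teachers, head teachers, parents, district education officers.
Information needs: evidence of classroom practice, pupil engagement and learning progress.
M&E approach: classroom observation, teacher self-assessment, participatory school review.
Specific uses and feedback mechanisms: refinement of teacher support and training; reporting to school committees and district offices.
Capacity building requirements: training of head teachers and district officers in observation and feedback methods.
Planning implications: facilitator time, travel budget, scheduling within the school calendar.

In a complete matrix, a row of this kind would be developed for each result in the hierarchy, at each level of intervention.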

M&E planning at project outset needs to take into account the receptiveness of the existing policy and institutional environment to the introduction of a more active participatory M&E process. Assessment is required of the adequacy of existing feedback mechanisms and capacity building requirements. M&E capacity building definitions and targets are required. Planning for M&E capacity building requires recognition of the ‘time’ and ‘ongoing support’ needed to institutionalise a continuous improvement process as a basic function of management. Consideration is also required as to how the quality of M&E processes and the effective use of M&E outcomes will be assured. The principles proposed in this paper provide a starting point for discussion on the development of M&E quality standards and indicators for M&E effectiveness.

The final set of principles, pertaining to M&E planning, quality assurance and further development, include:

that planning M&E at project outset, encompassing a participatory, results-focused, utilisation-oriented, capacity-building and self-reflective approach, will enhance both the quality of the project design itself and the relevance and sustainability of project outcomes;

that the development of a Results Framework is a useful planning tool to assist in articulating the hierarchy of results, their relevant stakeholders, their M&E concerns, methodological approaches, specific M&E uses, feedback loops, capacity building requirements and resource implications;

that M&E itself be monitored and evaluated in terms of process and results, and that lessons learnt be actively disseminated to inform an ongoing process of reflection and learning on ways and means to enhance project effectiveness vis-a-vis development results.

The principles in this paper are presented as ‘working’ principles, open to scrutiny and revision. They may appear logical, sensible and obvious, or optimistic, ambitious and daunting. Either way, further investigation is required. Meta-evaluation of past M&E activities, using the principles as criteria for assessment, would further confirm (or refute) the claims made. Application and case study assessment of the practicality and outcomes of the various dimensions (participation, results-orientation, multi-dimensional usage, capacity building, planning and resource implications, and institutional challenges) would be useful in furthering understanding as to whether the principles suggested make the difference proposed.

Bibliography

ALNAP. (2001). Humanitarian Action: Learning from Evaluation. ALNAP Annual Review 2001. Retrieved from www.alnap.org/index
Bamberger, M. (2000). The Evaluation of International Development Programs: A View from the Front. American Journal of Evaluation, 21(1), 95-102.
Bernard, A. (2002). Lessons and Implications from Girls' Education Activities: A Synthesis from Evaluations. New York: UNICEF Evaluation Office.

Binnendijk, A. (2001). Results-based Management: A Literature Review. Paris: Development Assistance Committee (DAC) Working Party on Aid Evaluation, Organisation for Economic Co-operation and Development.
Blackburn, J., & Holland, J. (Eds.). (1998). Who Changes? Institutionalising Participation in Development. London: Intermediate Technology Publications.
Booth, W., Ebrahim, R., & Morin, R. (2001). Participatory Monitoring, Evaluation and Reporting: An organisational development perspective for South African NGOs. Braamfontein, South Africa: PACT.
Buchert, L. (Ed.). (1998). Education Reform in the South in the 1990s. Paris: UNESCO.
Capeling-Alakija, S., Lopes, C., Benbouali, A., & Diallo, D. (Eds.). (1997). Who are the Question-Makers? A participatory evaluation handbook. New York: United Nations Development Program (UNDP), Office of Evaluation and Strategic Planning.
Chambers, R. (1997). Whose Reality Counts? Putting the First Last. London: Intermediate Technology.
Cornwall, A. (2000). Making a Difference? Gender and Participatory Development (IDS Discussion Paper 378). Sussex: Institute of Development Studies.
Cornwall, A., & Jewkes, R. (1995). What is Participatory Research? Social Science and Medicine, 41(12), 1667-1676.
Cracknell, B. E. (2000). Evaluating Development Aid: Issues, Problems, Solutions. New Delhi and London: Sage Publications.

Crossley, M. (2001). Cross-cultural issues, small states and research: capacity building in Belize. International Journal of Educational Development, 21, 217-229.
Davis Case, D. A. (1990). The Community's Toolbox: The idea, methods and tools for participatory assessment, monitoring and evaluation in community forestry. Rome: FAO.
Doherty, J., & Rispel, L. (1995). From conflict to cohesion: involving stakeholders in policy research. Evaluation and Program Planning, 18(4), 409-415.
Dugan, M. (1996). Participatory and Empowerment Evaluation: Lessons Learnt in Training and Technical Assistance. In D. Fetterman, S. Kaftarian & A. Wandersman (Eds.), Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. London and New Delhi: SAGE Publications.
Ebbutt, D. (1998). Evaluation of projects in the developing world: some cultural and methodological issues. International Journal of Educational Development, 18(5), 415-424.
Estrella, M. (Ed.). (2000). Learning from Change: Issues and experiences in participatory monitoring and evaluation. London: Intermediate Technology Publications Ltd.
Estrella, M., & Gaventa, J. (1998). Who Counts Reality? Participatory Monitoring and Evaluation: A Literature Review (IDS Working Paper 70). Sussex: Institute of Development Studies.
Eyben, R., & Ladbury, S. (1995). Popular participation in aid-assisted projects: why more in theory than in practice? In N. Nelson & S. Wright (Eds.), Power and Participatory Development (pp. 192-200). London: Intermediate Technology Publications.

Freedman, J. (1998). Simplicities and Complexities of Participatory Evaluation. In E. Jackson & Y. Kassam (Eds.), Knowledge Shared: Participatory Evaluation in Development Cooperation. Ottawa: International Development Research Centre.
Gaventa, J. (1998). The scaling-up and institutionalisation of PRA: lessons and challenges. In J. Blackburn & J. Holland (Eds.), Who Changes? Institutionalising participation in development. London: Intermediate Technology Publications.
Greene, J. (2003). Understanding Social Programs through Evaluation. In N. Denzin & Y. S. Lincoln (Eds.), Collecting and Interpreting Qualitative Materials (pp. 590-618). Thousand Oaks, California: SAGE Publications.
Guba, E., & Lincoln, Y. S. (1989). Fourth Generation Evaluation. Newbury Park, CA: SAGE.
Guijit, I. (2000). Methodological Issues in Participatory Monitoring and Evaluation. In M. Estrella (Ed.), Learning from Change: Issues and experiences in participatory monitoring and evaluation (pp. 201-216). London: Intermediate Technology Publications Ltd.
Guijit, I., Arevalo, M., & Saladores, K. (1998). Tracking Change Together. London: International Institute for Environment and Development (IIED).
Guijit, I., & Kaul Shah, M. (Eds.). (1998). The Myth of Community: Gender issues in participatory development. London: Intermediate Technology Publications.
Guijit, I., & Shah, K. (1998). Waking up to power, conflict and process. In I. Guijit & K. Shah (Eds.), The Myth of Community. London: Intermediate Technology Press.

Holland, J., & Blackburn, J. (Eds.). (1998). Whose Voice? Participatory Research and Policy Change. London: Intermediate Technology Publications.
House, E. R. (2003). Stakeholder Bias. New Directions for Evaluation, 97(Spring), 53-56.
Huberman, M. (1995). The many modes of participatory evaluation. In J. B. Cousins & L. M. Earl (Eds.), Participatory Evaluation in Education (pp. 103-111). London: Falmer Press.
IIED. (1998). Participatory Monitoring and Evaluation. London: International Institute for Environment and Development, Sustainable Agriculture and Rural Livelihoods Programme.
Jackson, E. (1999). The Strategic Choices of Stakeholders: Examining Front-End Costs and Downstream Benefits of Participatory Evaluation. Paper presented at the World Bank Conference on Evaluation and Poverty Reduction, Washington, D.C.
Jackson, E., & Kassam, Y. (1998a). Knowledge Shared: Participatory evaluation in development co-operation. Ottawa: Kumarian Press.
Kane, E. (1997). Participatory Rural Appraisal for Educational Research: Helping to See the "Invisible". Irish Journal of Anthropology, 2, 69-85.
Kane, E., Bruce, L., & O'Reilly de Brun, M. (1998). Designing the Future Together: PRA and education policy in The Gambia. In J. Holland & J. Blackburn (Eds.), Whose Voice? Participatory research and policy change (pp. 31-43). London: Intermediate Technology Publications.
Kane, E., & O'Reilly-de Brun, M. (2001). Doing Your Own Research. London: Marion Boyars Publishers.

Leurs, R. (1998). Current challenges facing participatory appraisal. In J. Blackburn & J. Holland (Eds.), Who Changes? Institutionalising participation in development. London: Intermediate Technology Publications.
McAllister, K. (1999). Understanding Participation: Monitoring and evaluating process, outputs and outcomes. Ottawa: International Development Research Centre.
McDonald, D. (1999). Developing guidelines to enhance the evaluation of overseas development projects. Evaluation and Program Planning, 22, 163-174.
McKay, V. (1998). Participatory action research as an approach to impact assessment. In V. McKay & C. Treffgarne (Eds.), Evaluating Impact (pp. 25-38). London: Department for International Development.
Middleton, J., Terry, J., & Bloch, D. (1992). Building education evaluation capacity. In D. W. Chapman & H. J. Walberg (Eds.), International Perspectives on Educational Productivity (Vol. 2, pp. 151-191). London: JAI Press.
Moloney, C. (1998). School focused baseline as a catalyst for change. In V. McKay & C. Treffgarne (Eds.), Evaluating Impact (pp. 57-68). London: Department for International Development.
Murphy, D., & Rea-Dickens, P. (1998). Identifying stakeholders. In V. McKay & C. Treffgarne (Eds.), Evaluating Impact (pp. 89-98). London: Department for International Development.
Nelson, N., & Wright, S. (Eds.). (1995). Power and Participatory Development. London: Intermediate Technology Publications.
Nichols, L. (2002). Participatory program planning: including program participants and evaluators. Evaluation and Program Planning, 25, 1-14.

Pretty, J., & Chambers, R. (1994). Towards a Learning Paradigm: new professionalism and institutions for agriculture. In I. Scoones & J. Thompson (Eds.), Beyond Farmer First: Rural people's knowledge, agricultural research and extension practice (pp. 182-203). London: Intermediate Technology Publications.
Riddell, A. (1999). Evaluations of educational reform programmes in developing countries: whose life is it anyway? International Journal of Educational Development, 19(6), 383-394.
Ryan, K., Greene, J., Lincoln, Y., Mathison, S., & Mertens, D. (1998). Advantages and Challenges of Using Inclusive Evaluation Approaches in Evaluation Practice. American Journal of Evaluation, 19(1), 101-122.
Schoes, C., Murphy-Berman, V., & Chambers, J. (2000). Empowerment Evaluation Applied: Experiences, Analysis and Recommendations from a Case Study. American Journal of Evaluation, 21(1), 53-64.
Shotton, J. (1998). Participatory impact assessment. In V. McKay & C. Treffgarne (Eds.), Evaluating Impact (pp. 17-24). London: Department for International Development.
Shulha, L., & Cousins, B. (1997). Evaluation Use: Theory, Research and Practice Since 1986. Evaluation Practice, 18(3), 195-208.
Snyder, M., & Doan, P. (1995). Who participates in the evaluation of international development aid? Evaluation Practice, 16(2), 141-152.
Symes, J., & Jasser, S. (1998). Growing from the Grassroots: Building participatory planning, monitoring and evaluation methods in PARC. London: International Institute for Environment and Development (IIED).

Thompson, J. (1998). Participatory approaches in government bureaucracies: facilitating institutional change. In J. Blackburn & J. Holland (Eds.), Who Changes? Institutionalising participation in development. London: Intermediate Technology Publications.
UNDP. (2001). Managing for Results: Monitoring and Evaluation in UNDP. A Results-Oriented Framework. New York: United Nations Development Program, Evaluation Office.
UNDP. (2002). Handbook on Monitoring and Evaluating for Results. New York: United Nations Development Program, Office of Evaluation and Strategic Planning.
UNICEF. (2001). Making Schools More Child-Friendly: Lessons from Thailand. Bangkok: UNICEF EAPRO.
White, S. (1996). Depoliticising development: the uses and abuses of participation. Development in Practice, 6(1), 6-15.
Wholey, J. S. (2001). Managing for Results: Roles for Evaluators in a New Management Era. American Journal of Evaluation, 22(3), 343-347.
World Bank. (1996). Designing Project Monitoring and Evaluation. Washington, D.C.: The World Bank, Operations Evaluation Department.
