M&E Sourcebook: Introduction

Introduction to Monitoring & Evaluation

The purpose of evaluation

Evaluations have three main purposes:

  • To improve future aid policy, programmes and projects through feedback of lessons learned.
  • To provide a basis for accountability, including the provision of information to the public.
  • To promote empowerment of beneficiaries.

Other benefits of evaluations include:

  • Providing a method by which agencies seek to learn lessons from their work and incorporate them into policy and practice; this organisational learning is necessary for the transfer of knowledge between agencies.
  • Serving as the only consolidated source showing how a project or programme progressed.
  • Providing a means of retaining and building institutional memory.
  • Questioning and testing basic assumptions.
  • Creating space for lesson learning.
  • Generating written reports that contribute to transparency and accountability, and allow lessons to be shared more easily.
  • Providing a way of assessing the critical link between those on the ground and decision makers.
  • Supporting learning from experience, which is of particular value at times of policy uncertainty.

The range of M&E approaches and methods in development and relief has grown considerably since the early 1990s, but far less thought has been given to M&E methods applicable specifically to DRR.

DRR M&E presents specific challenges. These include the wide range of types of initiative that fall under the heading of DRR (see DRR typology), the previous neglect of M&E within DRR, and the reverse logic of DRR (see below).

The neglect of M&E in disaster risk reduction

Technical manuals overlook methods for assessing the performance of DRR programmes and projects. The need for regular monitoring or performance review is occasionally noted but methods are rarely discussed. The few exceptions give little detail or (especially in older manuals) concentrate on economic cost-effectiveness. There is a similar neglect in training courses, which concentrate on raising awareness, understanding concepts, hazard/risk/vulnerability/capacity assessment, and identification and implementation of risk reduction options. M&E training is more likely to focus on emergency response applications or learning from the impact of past emergencies for disaster planning. It is therefore unsurprising that DRR organisations have given low priority to M&E.


Examples: The neglect of M&E in DRR

A recent study of 44 US state and territory post-disaster mitigation plans found their M&E provisions to be weak: most plans did not address most elements of monitoring and evaluation. About half specified monitoring of implementation progress or development of an ongoing M&E information system, organisation and process. One third provided for assessment of obstacles and problems in implementation; 32% for updating baseline data; 34% for monitoring hazards; and 23% for evaluating success or failure (see Godschalk et al. 1999).


Tools for evaluating DRR initiatives

For further information on evaluating DRR initiatives, see Guidance Note 13 of the Tools for Mainstreaming Disaster Risk Reduction series, Evaluating Disaster Risk Reduction Initiatives.

Studies of NGOs' mitigation and preparedness activity, NGO post-cyclone reconstruction projects and European Union (EU) food aid/security programmes have reached similar conclusions.

The reverse logic of DRR

M&E is designed to measure change (positive or negative). DRR, however, can present problems because of what has been called its "reverse logic": the measure of an initiative's success is that something (the disaster) does not happen.

Further reading and website resources

The literature on M&E in general is extensive. The following publications and websites provide useful practical guidance, overviews of the subject and reviews of key issues.

On M&E in development projects:

  • Gosling, L. and M. Edwards (2003) Toolkits: A practical guide to planning, monitoring, evaluation and impact assessment. The Save the Children Fund, London.
  • Roche, C. (1999) Impact Assessment for Development Agencies: Learning to value change. Oxfam, Oxford.
  • OECD-DAC (1991) Principles for Evaluation of Development Assistance. Organisation for Economic Co-operation and Development, Development Assistance Committee, Paris. Available at: www.oecd.org/dataoecd/secure/9/11/31779367.pdf

On M&E in humanitarian assistance:

  • Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) website: www.alnap.org
  • Hallam, A. (1998) Evaluating Humanitarian Assistance Programmes in Complex Emergencies. Good Practice Review no. 7. Overseas Development Institute, Humanitarian Practice Network, London. Available at www.odihpn.org/documents/gpr7.pdf
  • Wood, A., R. Apthorpe and J. Borton (eds.) (2001) Evaluating International Humanitarian Action: Reflections from Practitioners. Zed Books/ALNAP, London.

Other sources of information:

  • Godschalk, D.R., T. Beatley, P. Berke, D.J. Brower, and E.J. Kaiser (1999) Natural Hazard Mitigation: Recasting disaster policy and planning. Island Press, Washington D.C.
  • MandE News, newsletter edited by Rick Davies. Available at www.mande.co.uk
  • Outcome Mapping Learning Community. Available at www.outcomemapping.ca/