M&E Sourcebook: Methods and Process

M&E design

Selecting the measurement approach

Evaluators have to make choices about the most appropriate overall approach and about individual methods. The following section discusses the key methodological issues involved in making such choices.

Scientific, "objective" approaches to measuring impact focus on the collection and analysis of quantitative data. In social sciences it is impossible to carry out pure "scientific" experiments to determine impact so "quasi-scientific" methods are used, such as formal surveys with probabilistic sampling to compare a group affected by the intervention against a control group (although this is not always considered morally acceptable in DRR). ( See section below on control groups.)

"Before and after" comparisons of the same group is another, less rigorous, method, requiring the collection of baseline data so that conditions before the intervention can be compared with those afterwards. This method does not, however, control for other factors affecting outcomes. The double difference method combines the previous two by comparing the change in the outcome in a treatment group with the change in the control group.

A deductive/inductive approach is more anthropological and socio-economic in character, relying on interviews with key informants, visual observations and inductions from other similar cases. It allows for greater subjectivity: the evaluator searches out the most plausible interpretation of the link between interventions and impact, necessarily imposing her/his own view of how the world works.

Participatory monitoring and evaluation is based on the view that the impact of a project or programme on beneficiaries is subjective and should be measured by the beneficiaries themselves: it must take account of their values, priorities and judgements. The role of the evaluator is therefore to facilitate dialogue between different stakeholders and to achieve consensus on the project's impact. This approach rejects scientific method and does not resolve the problem of attribution unless it is combined with other data collection techniques. However, the evaluation exercise itself is empowering, and it is the only method that can give emphasis to the voices of the most vulnerable and marginalised groups.

Data take many forms: subjective/objective, quantitative/qualitative, cross-sectional/longitudinal, primary/secondary. A wide variety of tools is available for collecting data, including formal surveys, structured or semi-structured interviews, group discussions, direct observation and case studies. Each method brings its own advantages and drawbacks, as discussed in M&E manuals and handbooks. The choice of method depends on the nature and scale of the project, the type of information required, and the frequency, ease and cost of collection. Data that can be collected or measured easily by field workers (e.g. levels of beneficiary participation in meetings, the number of rainwater harvesting structures completed) can be put into monthly or quarterly reports. Data requiring more systematic or time-consuming collection are gathered less frequently - perhaps annually.
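As a purely hypothetical sketch of this frequency point, an M&E plan might tag each indicator with how often field workers can realistically collect it; the indicator names and schedule below are illustrative, not a prescribed format:

    # Hypothetical indicator register pairing each indicator with a
    # realistic collection frequency for field workers.

    indicators = [
        {"name": "beneficiary participation in meetings", "frequency": "monthly"},
        {"name": "rainwater harvesting structures completed", "frequency": "quarterly"},
        {"name": "household food security survey", "frequency": "annual"},
    ]

    def due_for_report(report_cycle):
        """Return the indicators collected on a given reporting cycle."""
        return [i["name"] for i in indicators if i["frequency"] == report_cycle]

    print("Quarterly report items:", due_for_report("quarterly"))
    print("Annual report items:", due_for_report("annual"))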

Participatory methods

Data collection methodologies can be divided broadly into two kinds: participatory and non-participatory. Modern development thinking emphasises the value of the former, and many participatory methods have been developed. Where project implementation is participatory and geared towards community action, it follows that the community must also be involved in the evaluation.

Participatory approaches are relatively new in disaster management (other than in food security/drought mitigation): they have only become widespread since the mid-1990s. With emphasis now shifting towards community-based approaches, traditional implementers have had to learn new skills and attitudes. This process is not quick or straightforward. The quality of participation may fall well short of the rhetoric. Staff may find it hard to move from service delivery to a facilitating role. There has been some increase in the use of participatory methods in DRR evaluations and a broadening of the range of tools used, but this is limited compared to development practice. M&E systems remain predominantly top-down, designed to provide information to headquarters staff and donors.

Some of the most influential methodological approaches developed since the 1970s and increasingly used in DRR evaluations include rapid rural appraisal, participatory rural appraisal and participatory learning and action:

  • RRA: flexible progressive learning, multi-disciplinary research teams, community participation; outsiders gain information from rural people in a timely and cost-effective manner
  • PRA: shift from extractive mode to empowering and facilitating active local participation in planning activities
  • PLA: more emphasis on mutual learning, attitudes and behaviour of researchers, and taking action on the outcome

Evaluators need to obtain the views of a wide range of stakeholders. Stakeholder analysis methods are well established (see Gosling and Edwards 2003: 302-307).

In community-based initiatives, where the project management structure may involve several layers down to community level, actors at all levels must be included. This is sometimes known as a "layer" or "onion peeling" approach.


Examples

Stakeholder assessment methods: the "layer" or "onion peeling" approach: The evaluation of the IFRC’s Golfo de Fonseca project in El Salvador and Nicaragua involved talking to: households, community health brigades trained by the project, community leaders, Red Cross branch members and councils, and Red Cross National Society heads of departments and presidents. Questions were rephrased or adapted when they were not well understood: for example, at household level, questions about knowledge of the project were replaced by questions about knowledge of Red Cross activities in the community, as the project itself was not known to many. (See IFRC n.d.)


In participatory projects it is crucial that the community is involved in evaluation, not just data collection, and is empowered to make appropriate decisions about future activities as a result. Although external agencies and their funders need M&E reports, collection of data solely for external use can undermine the participatory process. Communities must develop their own targets, indicators and priorities, as their views of these may differ considerably from those of staff in supporting agencies (see example). In such systems, monitoring impact primarily means monitoring change and may not rely on pre-determined indicators.

Benefits of combined approaches

Many DRR evaluations adopt a range of data collection methods. A more participatory evaluation might include: literature review, preparatory site visit, use of semi-structured and structured questionnaires, structured and informal discussions, field observation of activities, outputs and processes, group discussions including workshops, informal conversations with interested groups, and feedback sessions with local actors and project management committees. A similar methodological mix can be used for much larger projects.


Examples: A methodological mix

The review of CARE International’s Central America Disaster Mitigation Initiative (CAMI), which covered four countries, comprised: a review of internal documentation (monthly and quarterly reports, training and methodology manuals, financial reports, etc.), semi-structured interviews with CARE personnel and consultants in Atlanta and Central America, seven participatory workshops with representatives of target communities and municipalities, a participatory workshop with the project team to analyse the preliminary findings, semi-structured interviews with project counterparts, and wider reading and consultation (CARE International 2003). See Case study: CARE International case and methodological approaches.

SOS Sahel’s research/evaluation of its food security and cash for work programme in Ethiopia used the following methods:

  • Interview-based survey of 245 randomly selected heads of household representing 5% of households in three project areas. This generated basic data on population, food availability, production, consumption, marketing, income, land and livestock ownership. (An illustrative sampling sketch follows this list.)
  • Interview-based survey of 225 women in the above households. This covered the same areas as the other survey but included questions on the management of food and other resources at the household level.
  • Interviews (questionnaire format) with 159 people working on a local road improvement as part of the cash for work scheme.
  • Structured interviews with 60 workers employed on the same scheme.
  • Marketing survey: structured interviews with 120 farmers/traders in local markets.
  • Local workshop feeding back preliminary findings to farmers, project staff and local government representatives.
  • Study of household coping strategies, control of resources at household level and local support mechanisms.
  • Price survey: data on food and livestock prices in two local markets joined by the road improved through the cash for work scheme.
  • Traffic survey: recording of traffic flows on the improved road (before, during and after the improvements).
  • Reports by each of the six research assistants on their fieldwork, highlighting areas of relevance.
(See Jenden 1994: 7-8)
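As a rough illustration of what a random sample of this size implies, the sketch below computes an approximate 95% margin of error for a proportion estimated from the 245-household survey, with a finite population correction. The 50% proportion and 95% confidence level are assumptions made for illustration; they are not stated in the source:

    import math

    # From the source: 245 randomly selected heads of household,
    # described as 5% of households in the three project areas.
    n = 245
    N = round(n / 0.05)  # implied total number of households: ~4,900

    # Assumptions for illustration (not from the source):
    p = 0.5   # worst-case proportion (maximises the margin of error)
    z = 1.96  # critical value for a 95% confidence level

    # Standard error of a proportion, with a finite population correction
    # for sampling without replacement from a finite population.
    fpc = math.sqrt((N - n) / (N - 1))
    se = math.sqrt(p * (1 - p) / n) * fpc
    margin = z * se

    print(f"Implied total households: {N}")
    print(f"Approximate 95% margin of error: +/-{margin:.1%}")

On these assumptions, proportions estimated from the full sample carry a margin of error of roughly six percentage points, which is why surveys of this kind are generally read for broad patterns rather than fine distinctions.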


Why is DRR M&E different from "normal" M&E?

Lack of control group

The use of control groups is not necessarily straightforward, especially in risk reduction. It is arguably unethical to study at-risk groups that one has not attempted to protect - the argument is even stronger in humanitarian response. However, it is important to identify reasons for non-participation in projects where people were offered support, as well as why groups drop out. The IFRC's review of its Camalotte regional programme in the Rio de la Plata river basin of Argentina, Paraguay and Uruguay made a point of visiting one Red Cross branch that was no longer involved in order to understand its reasons for withdrawing.

Lack of baseline data

Evaluation relies on good baseline data. Project design should be based on baseline studies, linked to objectives and indicators of achievement. Baseline data collection can be targeted towards areas defined by the indicators. However, it is impossible to predict all the information that might be needed, and the collection, analysis, storage and retrieval of information may be inadequate. The problem of absent or deficient baselines is common to projects of all kinds but is particularly notable in DRR initiatives. DRR M&E therefore often requires that extra thought and time be given to baseline reconstruction.

Approaches and methods specific to DRR

Ex-post hazard method

Observed or documented response to disaster events is a strong indicator of impact. Repeat hazard events are the ideal opportunity to test measures - allowing for each event's uniqueness in its location, scale, timing and impact. There are many documented examples of such effectiveness, notably from the Red Cross/Red Crescent's experiences, and particularly with regard to disaster preparedness.

Perhaps the best known example is the Bangladesh Red Crescent’s cyclone preparedness programme, which over more than 30 years has built up a network of volunteers and shelters covering 3,500 villages, supplemented by other awareness-raising and community mobilisation activities. Here, success is measured by the capacity to move people to safety ahead of impending cyclones: for instance, in May 1994 three quarters of a million people were evacuated safely and only 127 died.


Examples: Ex-post hazard method

Disaster response evaluations provide insights into the effectiveness of DRR measures. Typically, a disaster response evaluation addresses issues such as:

  • Assessment – timing, extent and quality of coverage/data, involvement of local vis-à-vis external actors, quality of presentation of findings (i.e. can they be read and understood quickly), communication of findings (speed, presentation).
  • Communications – staff knowledge of information needs and systems, volume, frequency and direction of information flows, coverage and reliability of communications technology/infrastructure.
  • Operations – adequacy of stockpiles, transport and distribution of resources, interaction between agencies/coherence, human, technical and material capacity, involvement of local organisations and communities in needs assessment and distribution of relief, adherence to common codes and standards, connectedness (linkages between emergency and other aid, between relief and development).
  • Targeting, impact and empowerment – ability to reach those most in need and to address needs of the poorest and most vulnerable, extent to which assistance empowers beneficiaries (e.g. through participation in processes), appropriateness: extent to which goods and services provided meet priorities of beneficiaries (e.g. livelihoods as well as immediate needs), timeliness of aid delivery, efficiency and cost-effectiveness of aid delivery, impact (lives saved, alleviation of suffering, positive and negative effects of assistance on livelihoods).
  • Monitoring and evaluation – systems, capacity to carry out M&E, level of beneficiary participation, transparency and accountability (to beneficiaries and donors).


Other approaches to measuring DRR

Cross-cutting approaches

  • Integrating gender
    It is rare for evaluations to discuss the subject of gender in any depth. Projects and evaluators are generally content with limited indicators - such as the number of women taking part in project activities like training - as evidence of greater gender equity in DRR. Evaluations of drought/food security are more likely to explore gender issues in depth. The literature on gender and disasters has become quite extensive in recent years, including a growing body of case study evidence of women's vulnerability, involvement and empowerment through DRR. Tools for evaluating gender-specific outcomes of DRR are not widely available, although the framework and indicators outlined by Gander et al. form a useful starting point that should be tested in the field.
  • Vulnerability and capacity analysis (VCA)
    In recent times, thinking about poverty and sustainable development has begun to converge around the linked themes of vulnerability, social protection and livelihoods. This has been accompanied by the development of a variety of approaches to analyse situations and assess the likely impact of project interventions. These include vulnerability and capacity analysis. See Guidance Note 9: Vulnerability and capacity analysis.
  • Sustainable livelihoods approaches
    A Sustainable Livelihoods approach is essentially a way of organising data and analysis, or a "lens" through which to view development interventions. Taking a holistic view of a project (need, focus and objectives), it provides a coherent framework and structure for analysis, identifies gaps and ensures that links are made between different issues and activities. The aim is to help stakeholders engage in debate about the many factors that affect livelihoods, their relative importance, the ways in which they interact and the most effective means of promoting more sustainable livelihoods. See Guidance Note 10: Sustainable livelihoods approaches.

Integrating DRR in recovery and reconstruction

While post-disaster recovery has frequently been treated as a separate phase distinct from both emergency relief and long-term development, there is increasing recognition that these activities are often integrally related, especially from the perspective of reducing risk and vulnerability. Ultimately the implementation of effective risk reduction measures is necessarily a critical aspect of sustainable development, just as relief and recovery activities must start with protecting vulnerable people from further risks. See ProVention website.

Further reading and website resources

  • Chambers, R. (1997) Whose Reality Counts? Putting the First Last. ITDG, London.
  • Dawson, J. (1996) 'Impact assessment: A review of best practice'. ITDG, Rugby.
  • Gosling, L. and M. Edwards (2003) Toolkits: A practical guide to planning, monitoring, evaluation and impact assessment. The Save the Children Fund, London.
  • IFRC (n.d.) Untitled evaluation of the Golfo de Fonseca project. Unpublished report. International Federation of Red Cross and Red Crescent Societies, Geneva.
  • Jenden, P. (1994) Cash for Work and Food Insecurity, Koisha Woreda - Wellaita: A report on SOS Sahel's Food Security Project, 1992-1994. SOS Sahel, London.
  • Roche, C. (1999) Impact Assessment for Development Agencies: Learning to Value Change. Oxfam, Oxford.