M&E Sourcebook: Methods and Process

Collecting data

Selecting appropriate methods and tools for different M&E approaches

Data take many forms: subjective/objective, quantitative/qualitative, cross-sectional/longitudinal, primary/secondary. A wide variety of tools is available for collecting data, including formal surveys, structured or semi-structured interviews, group discussions, direct observation and case studies. Each method brings its own advantages and drawbacks. The choice of method depends on the nature and scale of the project, the type of information required, and the frequency, ease and cost of collection. Data that can be collected or measured easily by field workers (e.g. levels of beneficiary participation in meetings, the number of rainwater harvesting structures completed) can be put into monthly or quarterly reports. Data requiring more systematic or time-consuming collection are gathered less frequently – perhaps annually.

Examples: A mix of methods in evaluation

SOS Sahel's research/evaluation of its food security and cash for work programme in Ethiopia used a combination of survey methods, individual and group interviews and workshops. See case studies.

Pointers in primary data collection

Interviews (individual and group)

Individual and group interviews can capture stakeholders’ knowledge and perspectives, identify differences and bias, and are an important tool in triangulation of evidence, although interviewing requires certain skills and attitudes to be effective.

  • Individual interviews -- In DRR evaluations, individual interviews with stakeholders often play an important role. They are used to identify agencies' commitment, the strength of working relationships and co-ordination mechanisms, the extent to which concepts and methods are understood, and the quality of information flows. Community interviews indicate levels of commitment, understanding of the project, the nature of community participation, and differences of opinion.
  • Expert informants -- Interviews with professionals (in the implementing agency and its partners or other interested organisations) feature prominently as evidence in many DRR evaluations. This is due to the requirements of evaluators’ terms of reference, the desire for multi-stakeholder perspectives and perhaps particularly the limited time available to evaluators. Whether interviewing agency representatives in offices is more useful than site visits is debatable – evaluation guidelines usually warn against over-reliance on a narrow sample or range of "expert opinion".
  • Semi-structured interviews -- Semi-structured interviews are a main source of evidence in most DRR evaluations. This method can be valuable in allowing stakeholders to talk freely and explain points in their own words. Interviewees may be intimidated by formal questionnaires and interviewing techniques. Nevertheless, semi-structured interviewing requires a high degree of planning and preparation. Evaluators must be clear about how they will conduct the interviews, and about the semi-structured topic or question frameworks used. Evaluation findings can be significantly slanted as a result of omissions or bias in the interview approach.


Formal questionnaire-type surveys are not commonly used in DRR evaluations, probably due to limited time and resources. Larger-scale or research-oriented evaluations are more likely to use formal surveying techniques and more scientific interview methods. These are also central to epidemiological studies of risk factors. More effort could be made to adapt sociological methods of gathering and analysing interview data, such as those used by the Disaster Research Center and others in disaster research in the USA. But all DRR evaluations should develop interview frameworks and, if possible, pre-test questions (the latter appears to be very rare). See The Disaster Research Center's Evaluation of FEMA's Project Impact.

  • Survey sampling techniques -- Sampling is the process of selecting units (e.g. people, organisations) from a population of interest. To generalise about a larger population from a small-scale survey with known statistical confidence, evaluators should refer to agreed statistical procedures for estimating how confidently this can be done for different sample sizes.
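As an illustration of such procedures, the widely used normal-approximation formula for estimating a population proportion can be sketched as follows. This is a textbook approximation, not a method prescribed by this sourcebook; the function name, defaults (95 per cent confidence, a conservative proportion of 0.5) and the finite-population correction are illustrative assumptions.

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, proportion=0.5, population=None):
    """Minimum sample size to estimate a population proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, with an optional
    finite-population correction for small communities. proportion=0.5
    gives the most conservative (largest) sample size.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    if population is not None:
        # Finite-population correction: smaller populations need fewer units
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# About 385 respondents for +/-5% at 95% confidence in a large
# population; considerably fewer in a village of 500 households.
print(sample_size(0.05))                  # prints 385
print(sample_size(0.05, population=500))  # prints 218
```

The point for evaluators is practical: halving the margin of error roughly quadruples the required sample, which is why formal surveys are costly and why small-scale evaluations often fall back on purposive sampling instead.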

Discussions, focus groups and workshops

Group discussions are an important component of M&E. Most DRR evaluations involve discussions and workshops of some kind with beneficiaries and other stakeholders, but evaluation reports say little about how these are planned and facilitated, who are the participants or how they are selected. It is important to plan and prepare carefully for these. Time pressure may lead to ad hoc and unsystematic discussions.


  • Focus groups -- A focus group is a carefully planned and moderated discussion to obtain perspectives on a defined area of interest in a non-threatening environment. Participants express their views openly; it is not an attempt at problem-solving and does not seek consensus. Focus groups are particularly useful in evaluation, where the aim is to obtain as wide a range of stakeholder/beneficiary views as possible. Groups should comprise participants who share similar concerns and responsibilities but are strangers or have minimal contact with one another in their daily lives. Because groups differ in their composition and dynamics, multiple groups are organised to discuss a given topic. Groups typically contain 6-10 people: large enough to provide for a range of views but small enough for everyone to contribute.

    Focus groups feature in a few DRR evaluations. The Disaster Research Center's evaluation of the Project Impact initiative to build disaster-resilient communities in the USA is a good example. Its approach to focus group evaluation indicates the potential of this method, but could not be replicated in many circumstances: the Project Impact evaluation was extensive (in terms of time and coverage), well resourced and carried out by an expert team of disaster sociologists. Focus groups also seem to be a particularly Northern method. Participatory discussions undertaken as part of PRA exercises in developing countries are typically less labour-intensive and less structured.
  • Workshops -- Formal workshops are usually convened to feed findings back to project stakeholders and validate them but can be used in other ways.


  • PAHO convened a 4-day meeting in 1999 to evaluate preparedness and response to hurricanes Georges and Mitch. More than 400 professionals from 48 countries took part, drawing up a comprehensive set of findings and recommendations in 20 working group sessions.
  • GeoHazards International convened expert workshops to assess the effectiveness of its new tools for measuring earthquake risk.

Direct observation

Direct observation plays a part in most development evaluations. It is useful for cross-checking information (e.g. comparing statements to observed practice), assessing the quality of relationships between individuals and groups, and identifying factors not previously recognised. But it requires training and preparation, and is open to bias.

Direct observation is part of most DRR evaluations, although its influence and the techniques used are usually unclear from evaluation reports. Visual surveying of structural mitigation measures is used to determine the quality of design and workmanship, and the extent to which technologies or techniques are adopted. It forms an important part of some evaluations, such as those of hazard-resistant housing projects, and may be quite detailed and extensive. The resilience of structural measures is usually inferred from an assessment of technical quality, particularly where the hazard concerned has a long return period (e.g. earthquakes). It can sometimes be demonstrated by performance during repeat events, especially hydro-meteorological events, which are often seasonal. Community-built structures have been shown to protect fields, houses and drinking water supplies from floods; strengthened tracks and footpaths have proved they can withstand heavy rains.


  • Housing: Alto Mayo evaluation (Richmond 1996)
  • Community-built structures in the Philippines (PNRC 2002: 22-23)
  • Tracks and footpaths in Tanzania (Carling 1999)

This approach can be applied at project level and on a larger scale as part of more wide-ranging research.

Examples: Direct observation

World Neighbors' assessment of Hurricane Mitch's impact on farming systems in Nicaragua, Honduras and Guatemala, which surveyed 1,804 plots in 360 communities, generated substantial data on retention of topsoil and soil moisture, and levels of surface erosion - and led to important conclusions about the resilience of different farming methods. (See World Neighbors 2000 report)

See BRC case study.

Such evidence should be backed up with other data, such as the process by which structures were designed, built and monitored, the cost of designs, and ease of operation and maintenance. Assessment of the choice of designs and sites is valuable in showing how well these respond to the needs of vulnerable people. This is important, given the ability of local elites to "capture" facilities for their own use by influencing their location. An evaluation of small-scale infrastructure rehabilitation against floods in Cambodia found that projects had been identified and ranked by beneficiaries and village development committees through participatory risk assessments before proposals were developed.

Warehousing and pre-positioning of relief items can also be assessed visually, from the physical condition of warehouses and their contents, and triangulated against an assessment of stock-keeping and management practices.

Wherever possible, evaluators should seek community members’ own perspectives on the merits of structural measures to back up their technical observations. This is particularly important in assessing the sustainability of such measures, either in terms of replication of the approach or maintenance of the structures concerned.

Examples: Backing up technical observations

The evaluation of ITDG’s Alto Mayo Reconstruction Project included six group discussions with beneficiaries, allowing them to explain advantages and drawbacks of the seismic-resistant housing, and variations in the application of the alternative construction technology promoted by the project. This was followed up by house visits to view problems identified in the meetings. Where the strength of structural features could not be assessed visually, discussions with builders were held to ascertain their understanding and application of the technique. (See Richmond 1996)

Surveys were used to find out if occupants of improved housing in southern India felt they were more secure against theft, cyclones, monsoon rains and fire. (See Platt 1997: 40)

Evaluations should also consider structures' long-term durability, key issues being the capacity for effective operation and maintenance of structures (who will clear drainage culverts, for example, or repair water-retaining bunds) and the wear and tear caused by everyday usage (e.g. carts using a track or crossing a dam).

Evaluators can look further for evidence of impact. The effectiveness of flood-control measures such as culverts and bridges may be seen not only in water levels and flows but also in improved road/track access during the wet season - indicated by reduced journey times, lower transport costs or more frequent use of transport routes. Beneficiary communities are very sensitive to such benefits and their views should be obtained.

RRA, PRA/PLA tools

Small-scale or rapid assessments can provide valuable insights, especially when focused. Participatory tools commonly used include:

  • Time lines and historical profiles, to capture broad changes in an individual's or community's life.
  • Venn diagrams, often used to capture respondents' perceptions of the importance of various institutions and their relationship with them.
  • Impact flow charts, which depict the flow or direction of a particular activity or process.
  • Trend analysis, which in its simplest form is a 'before and after' exercise.
  • Mapping.

Examples: Participatory tools

The Orissa State Branch of the Indian Red Cross assessed the effectiveness of its disaster preparedness work when a weak cyclone struck in November 2002. The initial assessment was based on telephone calls from local voluntary coordinators and emergency team members in eight locations. These conversations focused on: when the warning was received, and from which source(s); actions taken by local disaster preparedness teams and villagers; and details of the event (wind speed, condition of the sea, rainfall) and its impact. The phone calls provided plenty of local detail, from which it was possible to build up a picture of the situation on the ground and actions taken (almost as they happened), the effectiveness of warning and response mechanisms and factors affecting them, and variations between the locations. This was not a substitute for field surveys, but it would not have been possible to carry out such surveys immediately after the event, and it helped to identify priority issues for further assessment. (See Orissa State Branch, Indian Red Cross Society 2002)

Case studies

Case studies may be created using several methods to examine individuals, communities, organisations, events, programmes or time periods. They are particularly valuable in evaluations of complex situations, in highlighting the need for non-standard approaches and outcomes, and in exploring qualitative impact.

DRR evaluations sometimes use personal stories to illustrate findings in the main analysis. Personal or anecdotal accounts have value in supplementing more extensive data: for instance, a woman who saved a child from drowning by applying first aid learnt in a community disaster management training course. They are also a reminder that projects are about people, and they can help make reports more readable - an important factor in influencing agency staff. DFID's evaluation of PAHO's Emergency Preparedness Programme (PED) documented results through project case studies.

A growing number of DRR case studies are appearing as printed or "grey" literature, but they tend to be too short to be informative. Many are produced by the implementing agencies concerned, which raises questions about impartiality, especially since only ‘success stories’ are reported. However, some aim to be objective and explain results to internal and external readers. NGOs are the main producers of such studies, especially of drought mitigation/food security initiatives.


Simulation exercises are commonly used to test emergency preparedness. They are not common in evaluation but are potentially useful. In every case, it is important that participants have an opportunity to discuss what was revealed by the simulation.

Examples: Simulation exercises

  • In the IFRC's Camalotte programme review, a 2-day workshop for national and local project co-ordinators took the form of a simulation exercise to test their understanding of the programme and capacity to implement it. The workshop was based on a fictitious community. Participants completed a series of exercises simulating the entire project cycle, including: relevant concepts (community, community development, participation), community selection, approaching community members, analysis, needs identification, development of objectives and indicators, measuring indicators, planning, community processes, conflict, evaluation, and narrative and financial reporting. Although levels of skill varied greatly, the workshop allowed staff to demonstrate their knowledge and skills and identify areas needing improvement. (See Gelfand 2003: 3, 9-10)
  • In the Philippines, villagers re-enacted their response to a typhoon and discussed what they had done and seen. This gave insights into how people had responded to warnings and why. (See Bellers 1996)

Pointers in secondary data collection

Secondary data are existing data that have been, or will be, collected for another purpose. Using secondary data saves cost and time, and every effort should be made to establish what secondary data exist, and their relevance, before collecting primary data. However, existing data may not be sufficient: they may not provide the appropriate indicators, or the disaggregation of indicators, needed to monitor and evaluate specific interventions. Even where secondary data do provide appropriate indicators, they may be out of date or may not cover the geographic area and units of study required.

Once secondary data have been deemed available, appropriate and relevant, they must also be assessed for reliability. Caution should be exercised when using data for which the sampling and methodologies are not described. See WFP Monitoring and Evaluation Guidelines.

Secondary data can be collected from a number of different sources, including:

  • Official records and surveys, undertaken by government agencies, multilateral institutions, research institutes, or other aid agencies.
  • Project documents and records, which should be carefully examined as they often contain valuable information collected at the start of the project and during the monitoring process.
  • Literature reviews by academic researchers and aid agencies often provide a useful starting point for understanding the relationship between different variables to be analysed in a monitoring or evaluation process.

Further reading and website resources

  • Bellers, R. (1996) 'Simulation exercise notes: Igbalangao's experience of a typhoon'. Unpublished field report. Oxford Centre for Disaster Studies, Oxford.
  • Carling, A. (1999) Healing the Rift: Footpath Repair Work on the Dareda Section of the Rift Valley Escarpment for FARM Africa - Babati Agricultural Development Project, March 1996 - December 1997. Mountain Path Repair International, Ambleside. Available at: http://users.pandora.be/quarsan/trails/tanzfull.html#geographic_features
  • Gelfand, J. (2003) 'Camalotte Programme Review'. Unpublished report. International Federation of Red Cross and Red Crescent Societies, Geneva.
  • Orissa State Branch, Indian Red Cross Society (2002) 'Actions by 8 Red Cross Cyclone Shelter Communities in Orissa During Cyclone Warning (November 11 to 12, 2002)'. Unpublished report. Orissa State Branch, Indian Red Cross Society, Bhubaneshwar.
  • Platt, R. (1997) Ensuring Effective Provision of Low Cost Housing Finance in India: an in-depth case analysis. Working Paper No. 9725. University of Bradford Management Centre, Bradford.
  • PNRC (2002) Preparing for a Disaster: A Community-based Approach. Philippine National Red Cross, Manila.
  • Richmond, P.R. (1996) 'The Alto Mayo Reconstruction and Development Project: External evaluation'. Unpublished report, ITDG, Rugby.
  • World Neighbors (2000) Reasons for Resiliency: Toward a Sustainable Recovery after Hurricane Mitch. World Neighbors, Oklahoma City. Available at: www.wn.org/store/library/3843_LFF Reasons for Resiliency - Toward a Sustainable Recovery After Hurricane Mitch.pdf