M&E Sourcebook: Case Studies

Data collection and analysis

  1. CARE International, Risk Management for Local Sustainable Development, 2001-2003

    A mix of data collection techniques was used to evaluate CARE International's Risk Management for Local Sustainable Development programme:

    • Review of internal documentation
    • Semi-structured interviews with CARE personnel and consultants in Central America and Atlanta
    • Seven participatory workshops with project beneficiaries representing target communities and municipalities in four countries
    • One participatory workshop with the CARE-CAMI team to analyse preliminary results of the evaluation
    • Semi-structured interviews with project counterparts
    • Reading and selective consultations

    A quantitative analysis of project documentation and interviews with personnel was carried out to measure project outputs against predetermined benchmarks ("objective indicators").
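    To illustrate how such a benchmark comparison might be organised, the sketch below tallies reported outputs against their predetermined targets; the indicator names and figures are hypothetical, not taken from the CARE evaluation.

      # Minimal sketch: comparing reported project outputs against
      # predetermined benchmarks ("objective indicators").
      # Indicator names and figures are hypothetical.

      benchmarks = {
          "local risk-management committees formed": 12,
          "community emergency plans completed": 8,
          "volunteers trained in first response": 150,
      }

      reported_outputs = {
          "local risk-management committees formed": 10,
          "community emergency plans completed": 8,
          "volunteers trained in first response": 175,
      }

      for indicator, target in benchmarks.items():
          achieved = reported_outputs.get(indicator, 0)
          status = "met" if achieved >= target else "below target"
          print(f"{indicator}: {achieved}/{target} "
                f"({100 * achieved / target:.0f}%) - {status}")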

    Participatory workshops amongst beneficiaries produced qualitative data on the project's impact. For example: "They uniformly stated that they faced a reduced level of risk to life and property as a result of participation in the project." Assuming that such workshops are carried out in a truly participatory manner, these methods can help resolve doubts about attribution, since beneficiaries themselves identify how the project has affected them. However, results should always be triangulated against other sources to ensure reliability, and it is not clear from the project evaluation report whether this was done. Finally, a qualitative analysis of data produced through interviews with personnel, a workshop and a review of project documentation was carried out to assess progress in terms of institutional development.

  2. BRCS/IFRC/DfID Disaster Reduction Programme, 2001-2003

    A number of data collection techniques were used to assess the impact of the BRCS/IFRC/DfID Disaster Reduction Programme:

    • A critical desk review of IFRC/National Society documented materials, including biannual reports, monitoring visits, DfID/IFRC assessment reports and key correspondence
    • Interviews and/or workshops with National Society HQ, regional and global Federation staff
    • Interviews and/or other approaches to a sample group of programme beneficiaries selected on the basis of agreed criteria
    • Visits to at least one branch involved in the programme
    • In-depth community survey and discussion with Red Cross branches in one area where community-based activities were carried out
    • Interviews with other key stakeholders – government and key disaster management actors at international, regional, national, branch and community levels
    • Where possible, visits to any areas of the country where major disasters have occurred in the past twelve months, to allow interviewing of Red Cross staff/volunteers and affected populations

    In view of resource and time limitations, a non-probabilistic sampling technique was used to select one of the two regions for field visits and one community for an in-depth survey. This choice was justified in the following way: "East Africa was selected because it had generally been at a lower level of capacities at the start of the programme (compared to Asia) and thus represented the more challenging case".

    In the absence of clear indicators to measure against logframe objectives, the evaluation team grouped programme activities and outputs within the Well-Prepared National Society (WPNS) framework, an IFRC framework for assessing disaster preparedness and reduction within the Red Cross/Red Crescent. This framework has six key features: relevance/assessments; DP policy and planning; structures and organisations; human resources; material and financial resources; and advocacy and effectiveness.
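    As a rough illustration of this grouping exercise, the sketch below tags programme outputs with the WPNS feature they evidence and reviews them feature by feature; the feature names follow the evaluation, while the outputs listed are hypothetical.

      # Sketch of grouping programme outputs under the six WPNS features.
      # The feature names follow the evaluation; the outputs are hypothetical.

      WPNS_FEATURES = [
          "relevance/assessments",
          "DP policy and planning",
          "structures and organisations",
          "human resources",
          "material and financial resources",
          "advocacy and effectiveness",
      ]

      outputs = [
          ("VCA carried out in pilot branches", "relevance/assessments"),
          ("national disaster preparedness policy drafted", "DP policy and planning"),
          ("branch disaster committees established", "structures and organisations"),
          ("volunteers trained in assessment methods", "human resources"),
      ]

      grouped = {feature: [] for feature in WPNS_FEATURES}
      for output, feature in outputs:
          grouped[feature].append(output)

      for feature in WPNS_FEATURES:
          print(f"{feature}:")
          for output in grouped[feature] or ["(no outputs recorded)"]:
              print(f"  - {output}")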

    Programme outputs were reviewed against the original plans and objectives. However, the evaluators conceded that these objectives were over-ambitious and the timeframe unrealistic. As a result, the overall success of the programme appears to be only moderate.

    Vulnerability and Capacity Assessments (VCAs) were applied in Rwanda, Ethiopia and Sudan, all as one-off exercises, but they were nevertheless considered successful overall by stakeholders who participated in a post-programme participatory workshop. The VCAs identified vulnerabilities and, particularly, capacities, giving National Societies better-quality information and hence allowing them to move beyond earlier needs-focused approaches. They also promoted awareness, helping communities to understand their situations and the possibilities of low-cost interventions, which led them to express an interest in community-based disaster preparedness. Although on this occasion the VCA was not repeated after the end of the programme to assess the longer-term impact of its disaster preparedness activities, it is potentially a very useful evaluation tool. However, the IFRC is aware that the tool needs improving and that more training is required in how to carry out a VCA; at present, many National Societies lack sufficient experience and capacity.

    It was widely felt that VCAs tend to generate more information than needed and to identify more issues than National Societies and their branches can address. Processing this information can put pressure on a National Society. To avoid generating excessive and unnecessary data, it was suggested that clear and realistic targets be set and pilot tests carried out. Vulnerability analysis can be simplified and developed gradually through a series of smaller assessment exercises rather than a single intensive or complex VCA. It was recommended that secondary data sets and baseline data collected by other agencies be used to build up a picture of vulnerability before carrying out participatory work with communities.

    Ex-post hazard measurements

    An attempt was made to measure the programme's effectiveness in terms of improving disaster response capacity. Two examples were found, both occurring after the end of the programme and both externally facilitated: the BRCS-supported learning review of the Kassala floods, conducted within weeks of the onset of the floods, and the BRCS-supported evaluation of the India DPR programmes. In August 2003, some eight months after the end of the programme, the National Society in Sudan responded to medium-scale floods in Kassala. The evaluation report concluded that, "In terms of longer-term impact/continuity of elements of the programme it is possible to assert that response capacity has continued to be built following the programme. This was demonstrated through the National Society's quick response to the Kassala floods."

    In India, a brief review of performance in a mini-tornado response six months after the programme ended allowed comparison with previous responses. It appears that the stockpiling of relief materials, the renovation of a warehouse, the systematic training of field-based volunteers and the upgrading of a mobile emergency medical unit had allowed a significant improvement in local disaster management capacity. This was illustrated by an increase in the speed of assessment and response compared to emergencies occurring prior to programme completion. In Assam, India, evaluators were not able to test areas of the programme other than response capacity, such as mitigation. These structural elements of the programme were focused on flood prevention (the construction of one raised platform and the provision of water and sanitation facilities at several other local flood evacuation points), but no significant flooding occurred to test them between the end of the programme and its evaluation. It was merely noted that they were well constructed and welcomed by the local population.

    No two disasters are the same, and it is clearly difficult to compare a large disaster, such as the autumn 2000 floods in Assam (prior to the programme), with a smaller one in the same locality, such as the mini-tornado which struck in May 2003. However, as an annex to the programme evaluation document, "Observations on the impact of the Programme on Disaster Response", noted: "careful collection, analysis and documentation of historical evidence combined with sober judgement of the differences as well as similarities between disaster situations, should allow at least some overall conclusions to be drawn about the track record of Red Cross disaster performance and the impact of Red Cross support on the direct response of the local population."

  3. DIPECHO Action Plan for South East Asia, 1998

    Data collected through interviews, document reviews and direct observation was analysed in terms of a list of areas of examination for project evaluations, as set out in the Evaluation Policy 2006–2010 of the Ministry of Foreign Affairs, Norway. It should be stressed, however, that none of the project evaluations addressed all of the areas of examination, suggesting that such a complete list was too ambitious given the time constraints for collecting data and the lack of methodological guidance on how to analyse it.

  4. CAMI/ARC Mitigation Grant for Risk Management and Community Preparedness, 2001-2003

    Three principal approaches to data collection were adopted:

    1. To evaluate disaster preparedness indicators, face-to-face interviews were conducted with key informants (e.g., heads of households, principals). These interviews, which recorded self-reported data, were supplemented by systematic, non-participant observation using structured instruments and guidelines about who and what to observe.
    2. To evaluate disaster response indicators, "disaster drill scenarios" were conducted, with systematic, non-participant observation (again using structured instruments and guidelines about who and what to observe) as the primary method of review. Key indicators (pre-determined behaviours) were established for CAMI, and trained observers used observation checklists during the drills to record the presence or absence of these behaviours, whether a particular event did or did not occur, and/or the frequency of occurrences (see the checklist sketch after this list).
    3. Household surveys used Probability Proportionate to Size (PPS) cluster sampling. This method entails first selecting a sample of Caserios (communities) and then a sample of households to interview within each of those Caserios. PPS means that larger clusters, or Caserios, were given a greater chance of selection in the sample than smaller ones. Household selection utilized "segmentation".
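    A minimal sketch of such an observation checklist is given below; the behaviours and tallies are hypothetical, intended only to show how presence/absence and frequency might be recorded.

      # Sketch of a structured observation checklist for a disaster drill,
      # recording presence/absence and frequency of pre-determined behaviours.
      # Behaviour names and tallies are hypothetical.

      from collections import Counter

      expected_behaviours = [
          "alarm raised within agreed time",
          "evacuation route used",
          "headcount taken at assembly point",
          "first-aid post activated",
      ]

      # Events ticked off by a trained observer during the drill;
      # repeats capture the frequency of recurring behaviours.
      observed_events = Counter([
          "alarm raised within agreed time",
          "evacuation route used",
          "evacuation route used",
          "headcount taken at assembly point",
      ])

      for behaviour in expected_behaviours:
          count = observed_events[behaviour]
          presence = "observed" if count else "not observed"
          print(f"{behaviour}: {presence} (x{count})")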

    When combined with the sampling method described above, this design minimised bias and produced a "self-weighted" sample, making analysis of the data much simpler (see the sketch below). Interviewers contacted 889 households in total, with 824 completed interviews, for a survey response rate of 92.7%. Completed interviews were located within 42 clusters across the four countries, in "segments" of approximately 20 households each.
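    To make the cluster selection step concrete, the sketch below implements systematic PPS selection over cumulative community sizes; the Caserio names and household counts are hypothetical, and only the final response-rate figures come from the survey itself.

      # Sketch of Probability Proportionate to Size (PPS) cluster selection
      # using systematic sampling over cumulative community sizes.
      # Caserio names and household counts are hypothetical.

      import random

      caserios = {  # community: number of households
          "Caserio A": 420, "Caserio B": 130, "Caserio C": 610,
          "Caserio D": 95, "Caserio E": 260, "Caserio F": 340,
      }
      num_clusters = 3

      total = sum(caserios.values())
      interval = total / num_clusters          # sampling interval
      start = random.uniform(0, interval)      # random start
      targets = iter(start + i * interval for i in range(num_clusters))

      # Walk the cumulative household counts; a Caserio is selected each
      # time a target falls within its range, so larger Caserios have a
      # proportionately greater chance of selection.
      selected, cumulative = [], 0
      target = next(targets)
      for name, size in caserios.items():
          cumulative += size
          while target is not None and target <= cumulative:
              selected.append(name)
              target = next(targets, None)

      print("Selected Caserios:", selected)

      # With a fixed "take" per selected cluster (segments of roughly 20
      # households), every household has about the same overall selection
      # probability, which is what makes the design self-weighting.
      completed, contacted = 824, 889
      print(f"Survey response rate: {completed / contacted:.1%}")  # 92.7%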