Processing and analysing data
Quantitative data
Consolidating and processing
The first step following data collection and prior to data analysis is to process and consolidate raw quantitative data from questionnaires (or other data collection instruments). This requires some form of data cleaning, organising and coding to prepare the data for entry into a database or spreadsheet. Ideally, consolidation and processing are carried out by the team of interviewers who completed the data collection; in the case of large data sets, however, additional staff may be required.
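As a simple illustration, the sketch below shows what this cleaning and coding step might look like in Python with pandas. The respondent fields, coding scheme and file name are hypothetical, not drawn from any particular survey.

```python
import pandas as pd

# Hypothetical raw questionnaire data; the column names and codes are illustrative.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "sex": ["F", "male", "male", "Female", "M"],
    "income_usd": ["120", "85", "85", "n/a", "200"],
})

# 1. Remove duplicate records (e.g. a questionnaire entered twice).
clean = raw.drop_duplicates(subset="respondent_id").copy()

# 2. Standardise free-text answers into a consistent coding scheme.
sex_codes = {"f": "F", "female": "F", "m": "M", "male": "M"}
clean["sex"] = clean["sex"].str.lower().map(sex_codes)

# 3. Convert numeric fields, turning unusable entries into missing values.
clean["income_usd"] = pd.to_numeric(clean["income_usd"], errors="coerce")

# Ready for entry into the analysis database or spreadsheet.
clean.to_csv("survey_clean.csv", index=False)
```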
Analysing survey data
Three types of analysis are commonly used. The simplest, descriptive analysis, involves calculating means and medians for continuous variables, and percentages for categorical variables. The second type often used in evaluations is stratified descriptive analysis. This is used to compare a variable between two or more sub-groups - for example, to analyse income disaggregated by gender. Finally, inferential analysis is used to look for associations between different variables. It goes beyond stratified descriptive analysis because it not only examines differences between variables but also attempts to infer why these differences exist. This is done either by referring to known causal relationships (e.g. water source and diarrhoea) or by testing for relationships with other variables (known as regression analysis).
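A minimal sketch of the three types of analysis, assuming a hypothetical cleaned survey file with illustrative column names (income_usd, sex, water_source, diarrhoea); the regression step uses statsmodels as one possible tool.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_clean.csv")  # hypothetical cleaned survey data

# Descriptive analysis: means/medians for continuous variables,
# percentages for categorical ones.
print(df["income_usd"].agg(["mean", "median"]))
print(df["water_source"].value_counts(normalize=True) * 100)

# Stratified descriptive analysis: the same statistics within sub-groups,
# e.g. income disaggregated by gender.
print(df.groupby("sex")["income_usd"].agg(["mean", "median"]))

# Inferential analysis: test whether diarrhoea incidence (coded 0/1) is
# associated with water source, controlling for income (logistic regression).
model = smf.logit("diarrhoea ~ C(water_source) + income_usd", data=df).fit()
print(model.summary())
```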
Qualitative data
Consolidating and processing
The first step following data collection and prior to data analysis is to process and consolidate qualitative data such as interview notes, tables, charts, drawings and maps. This requires data cleaning, organising and coding to prepare the data for analysis. Ideally, consolidation and processing are carried out by the team of interviewers who completed the data collection.
Summarising qualitative data
Techniques for summarising qualitative data must ensure that patterns are revealed rather than distorted. Failure to use a systematic approach may lead to "cherry picking". Apart from the now widely supported participatory approaches to qualitative data, there are other well-established methods available for the systematic analysis of qualitative data, including the use of computer programmes such as QSR NUD*IST.
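By way of illustration, the sketch below shows one simple, systematic alternative to cherry-picking: counting theme frequencies across all interview notes using a keyword-based coding frame. The coding frame, themes and notes are invented for the example; in practice coding is usually done by hand or in a dedicated package.

```python
import re
from collections import Counter

# Hypothetical coding frame: each theme is matched by a set of keywords.
coding_frame = {
    "early_warning": {"warning", "siren", "radio"},
    "evacuation": {"shelter", "evacuate", "flee"},
    "livelihoods": {"crops", "livestock", "income"},
}

def code_note(note):
    """Return the set of themes whose keywords appear in an interview note."""
    words = set(re.findall(r"[a-z]+", note.lower()))
    return {theme for theme, keywords in coding_frame.items() if words & keywords}

notes = [
    "We heard the warning on the radio and moved to the shelter.",
    "The floods destroyed our crops and livestock.",
]

# Counting theme frequency across the whole data set lets patterns emerge
# from all the notes rather than from hand-picked quotes.
theme_counts = Counter(theme for note in notes for theme in code_note(note))
print(theme_counts)
```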
Disaggregating results: identifying beneficiaries
The importance of identifying who benefits from an initiative cannot be overemphasised. There is a tendency in many, if not most, DRR evaluations to assume that benefits are spread evenly across a community, especially where evaluators focus on lives saved rather than the impact of an event on livelihoods. Drought/food security evaluations usually do address the question, reflecting their origins in development work, where identification of the most vulnerable is acknowledged to be a crucial factor. Few evaluations of DRR in other contexts consider targeting and differential vulnerability in any depth.
Failure to appreciate differential vulnerability is particularly noticeable in the case of gender issues.
Vulnerability caused by other socio-economic factors, including ethnicity, age and disability, is almost completely ignored by evaluations. Many evaluations fail to define the socio-economic characteristics of those involved in interviews and discussions, and even the number of participants is often omitted from reports.
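A short sketch of what disaggregation can look like in practice, assuming a hypothetical beneficiary file with illustrative sex, age_group and received_aid fields:

```python
import pandas as pd

# Hypothetical beneficiary records; the file and field names are illustrative.
df = pd.read_csv("beneficiaries.csv")  # columns: sex, age_group, received_aid

# Cross-tabulate who actually received assistance, disaggregated by gender
# and age group, rather than assuming benefits were spread evenly.
table = pd.crosstab(
    [df["sex"], df["age_group"]],
    df["received_aid"],
    normalize="index",  # row percentages within each sub-group
) * 100
print(table.round(1))
```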
Triangulation/cross-checking
Cross-checking (or triangulation) of different data sets and sources is helpful in isolating particular factors affecting success or failure. Triangulation is particularly important in the case of qualitative evidence collected through stakeholder interviews, where much of the evidence may be anecdotal or inferred. Good evaluations ensure that a wide range of stakeholders is consulted and the results are cross-checked.
Triangulation serves several purposes. First, it is a means of identifying inconsistencies in data. There may well be discrepancies and variations in the information provided to evaluators by different stakeholders – e.g. between local NGO staff and community representatives, or between community leaders and householders. In projects involving partnerships, triangulation of interview data or documents can quickly reveal differences between partners in their aims or expectations.
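One way to make such cross-checking systematic is sketched below: comparing responses to the same question across stakeholder groups and flagging the questions with the widest spread of answers. The file and field names are hypothetical.

```python
import pandas as pd

# Hypothetical interview data: the same questions put to different
# stakeholder groups (e.g. NGO staff, community leaders, householders).
df = pd.read_csv("interviews.csv")  # columns: question, group, answer_score

# Compare mean answers per group; a large spread on a question flags a
# discrepancy worth following up in the field.
by_group = df.pivot_table(index="question", columns="group",
                          values="answer_score", aggfunc="mean")
by_group["spread"] = by_group.max(axis=1) - by_group.min(axis=1)
print(by_group.sort_values("spread", ascending=False).head())
```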
Direct observation is a useful way of checking if there are discrepancies between what people say and what they do. Evaluators do not always have time to do this and such discrepancies are more usually picked up by project workers or researchers who spend longer periods in the field.
Feedback workshops with stakeholders appear to be becoming more common in DRR evaluations. These provide a linked triangulation-validation mechanism. They usually take place towards the end of an evaluation, when it is too late for further data collection or cross-checking with informants in the field, but some evaluations hold workshops at different stages or levels.
Examples: Cross-checking results
People who live on the banks and islands of the Jamuna River in Bangladesh are very vulnerable to floods and erosion. Researchers who asked them about their views of these risks found that a significant proportion explained them as the will of God and saw prayer as the best response. The researchers concluded that the people were largely fatalistic and that their strategies for managing risk were limited.
An anthropologist on the mid-river islands obtained a similar response when using a standard questionnaire. However, when she lived on the islands during the 1998 floods she observed that people were following a variety of strategies that had been used on the islands for generations. They built platforms out of reeds and banana stalks for their animals, fixed beds below the roof, cooked on portable ovens, lived off stocks of food saved from the winter harvest, switched temporarily to other sources of income and referred to their wide networks of relatives.
At the same time, the people expressed their faith in God, interpreting the high floods as his way of showing his power and testing their belief. God was thought to have sent the floods, but he also gave believers the strength to survive them. (See Schmuck 2001)
Problem solving
How to deal with intended vs unintended effects
Tracking unforeseen impacts is a major methodological problem. Indicators chosen to verify impact can only identify expected change, and will only reflect those changes that have been made explicit or agreed by the stakeholders. But what happens where change is unexpected or was not agreed by stakeholders, or where a particular stakeholder group did not reveal an area of change that was important to them? M&E systems need to be sensitive to this problem, sometimes referred to as the "indicator dilemma". Beneficiary participation is clearly essential here.
For smaller projects, it may be enough for staff to identify potentially unanticipated impacts at the outset and monitor them. But in larger, more complex projects or those where social process is of central importance, formal systems for identification of unexpected impacts may be needed.
The "group-based assessment of change" method is one example of a method for addressing this problem. The potential of this relatively simple method to capture changes in vulnerability deserves further testing.
Examples: A group-based assessment of change
This method, piloted by ActionAid in Vietnam, works without predetermined indicators. By keeping questions as open as possible, it produces unexpected but important information that might have been missed in a more defined evaluation format. Representative samples from groups of poor people supported by a project are asked how well the rest of the group members have fared during the past year, in particular:
- Which members' households have experienced improvement in their situation, which have experienced deterioration and which have remained in the same condition?
- For households whose situation has improved or deteriorated, how has their situation changed?
- For households whose situation has improved or deteriorated, why has their situation changed?
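A minimal sketch of how such group responses might be tallied, using invented example data; the open-ended reasons are kept alongside the counts because that is where unexpected changes surface.

```python
from collections import Counter

# Hypothetical responses from one group: for each member household the
# group reports a direction of change and, where relevant, a reason.
responses = [
    ("improved", "new irrigation"),
    ("deteriorated", "flood losses"),
    ("same", None),
    ("improved", "livestock sales"),
    ("deteriorated", "illness in family"),
]

# Tally the directions of change across the group...
directions = Counter(direction for direction, _ in responses)
print(directions)  # e.g. Counter({'improved': 2, 'deteriorated': 2, 'same': 1})

# ...and collect the open-ended reasons, which is where unexpected impacts
# (the "indicator dilemma") can come to light.
reasons = [reason for _, reason in responses if reason]
print(reasons)
```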
Ensuring attribution
Process indicators often act as proxy indicators of impact. They are particularly important when hazards are infrequent (e.g. earthquakes). Actions carried out during a project give some indication of its potential effectiveness. In a community disaster preparedness initiative, for example, these process indicators might include: recruiting, training and establishing a community disaster management team; organising public meetings to identify threats and the most vulnerable households; building relevant structures (e.g. evacuation shelters, embankments); and conducting regular evacuation drills. Potential impact may be inferred from various kinds of data.
Examples: Ensuring attribution
An evaluation of a food security project in Cambodia concluded that distribution of 86.8 MT of rice seed to 3,750 families in 98 villages might - in combination with the rehabilitation of small-scale irrigation systems - have a significant impact on food security in the following year. The conclusion was not based on the distribution figures alone but drew on more qualitative evidence of the project's approach: the most vulnerable beneficiary families (the elderly, the handicapped, the blind, the injured, those with little or no land, and those with insufficient rice seed as a result of floods) had been selected by the target villagers through participatory village meetings, and the government's Department of Agriculture, Forestry and Fisheries had provided technical assistance (a market survey of available seed and quality-control testing of potential seed varieties). Nevertheless, the evaluation was still making explicit assumptions about potential impact the following year. (See Tracey 2002: 13-15, 24-5)
Implementation of measures recommended by a vulnerability analysis or to address issues highlighted by it is also an indicator of potential impact. Implementation of measures can be substantiated through discussions with stakeholders. Process indicators have value in suggesting likely outputs and impact, as well as helping to ensure the project is on track, but it is essential to assess the quality of the process and ask what it is leading to. One of the main purposes of evaluation is to analyse a project's "intervention logic". Where projects' M&E places undue emphasis on process, this may be because of unclear objectives or insufficient consideration of impact.
Measuring sustainability in DRR
Post-project impact evaluations provide the best opportunity to assess sustainability. Ideally, an impact evaluation should take place at least a year after a project or programme has finished, in order to assess long-term impact and sustainability. In community-based projects, the strength of community organisation is central to sustainability. However, the existence of local committees is not enough; evidence of group activity is needed. Evaluators should therefore assess the frequency, nature and quality of activities such as the preparation of risk/vulnerability maps and emergency plans, awareness-raising activities, evacuation drills and mitigation works.
Examples: Measuring sustainability in DRR
A participatory evaluation of rainwater harvesting work in Kenya, covering impact over ten years, formed judgements about the level of dependence on external inputs (food, technical support, tools, seeds, storage and draught animals), acquisition and application of technical skills and knowledge, extent of technical innovation, expansion of rainwater harvesting structures and abandonment of existing ones, costs and benefits, and the project's institutional base and support. (See Watson and Ndung'u 1997: 36-54)
If the evaluation takes place soon after the project ends, different indicators can be used to assess the likelihood of sustainability being achieved, such as the level of stakeholder contributions (financial or other resources) to a project.
Reconstruction of missing baselines
The problem of absent or deficient baselines is common to projects of all kinds. In many DRR initiatives, adequate baseline data are not collected, leaving evaluators struggling to find adequate measures of success. In such cases it may be necessary to reconstruct a baseline by reviewing project documents and records, obtaining data collected by other organisations, and interviewing key informants.
Examples: Creating a retrospective baseline
The DRC's evaluation of Project Impact (PI) created a retrospective baseline: an 11-point checklist of possible mitigation actions that could have been undertaken by the seven PI pilot communities before the initiative began. In-depth interviews with key stakeholders and project documentation were then used to form judgments about how much progress was being made during the project. A simple quantitative score was used to assess the areas in which mitigation activity was taking place; an increase in the range and type of mitigation activities then became an indicator of progress. This overview was supplemented by more detailed follow-up on the progress of individual activities in each community, and the reasons for it. (See Nigg et al 2001: 2-4)
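As an illustration of this kind of scoring, the sketch below computes a simple checklist score for a community. The checklist items here are invented placeholders, not the DRC's actual 11 points.

```python
# Hypothetical 11-point checklist of mitigation actions (illustrative only).
CHECKLIST = [
    "hazard_mapping", "public_education", "evacuation_plan",
    "building_codes", "retrofitting", "warning_system",
    "drills", "shelters", "land_use_planning",
    "insurance_promotion", "partnerships",
]

def mitigation_score(actions_reported):
    """Score a community against the checklist and list the remaining gaps."""
    done = [a for a in CHECKLIST if a in actions_reported]
    gaps = [a for a in CHECKLIST if a not in actions_reported]
    return len(done), gaps

# Actions reported in stakeholder interviews for one community (invented).
score, gaps = mitigation_score({"drills", "shelters", "warning_system"})
print(f"{score}/{len(CHECKLIST)} activity areas covered; gaps: {gaps}")
```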
A vulnerability/capacity analysis (VCA) should provide good baseline data and guide interventions. Applications of this method before and after the project should make it possible to draw conclusions about impact. However, vulnerability analysis techniques need to be well understood by agency staff and considerable resources are needed to carry out a comprehensive VCA. See Guidance Note 9: Vulnerability and capacity analysis.
Appropriate aggregation methods
Quantitative, probabilistic sampling approaches are often used to generalise about a larger population. This saves time and resources, but it is important to remember that the results are only estimates.
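To make the "only estimates" point concrete, the sketch below puts a 95% confidence interval around a sampled proportion using the normal approximation; the survey figures are invented.

```python
import math

# Hypothetical survey result: 132 of 400 sampled households report
# having an evacuation plan.
n, successes = 400, 132
p_hat = successes / n

# 95% confidence interval for the population proportion (normal
# approximation), making explicit that the result is an estimate.
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Estimate: {p_hat:.1%} (95% CI {low:.1%} to {high:.1%})")
```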
Further reading and website resources
- Coyet, C.M. (2000) 'Bangladesh Red Crescent Society: Institutional Development/Capacity Building Programme: a review'. Unpublished report. Bangladesh Red Crescent Society, Dhaka.
- Gander, C. et al. (2002) Evaluation of IHA’s (International Humanitarian Assistance Division) Disaster Preparedness Strategy: Towards a New Disaster Risk Management Approach for CIDA. CIDA, Canada.
- Nigg, J.M., J.K. Riad, T. Wachtendorf and K.J. Tierney (2001) Disaster Resilient Communities Initiative: Evaluation of the Pilot Phase Year 2. Disaster Research Center, University of Delaware. Available at: www.udel.edu/DRC/projectreport41.pdf
- Schmuck, H. (2001) 'Empowering women in Bangladesh'. FOCUS Asia/Pacific 27 (4).
- Smith, W. (1998) Group Based Assessment of Change: Methods and Results 1998. RDA 2 Can Loc district, Ha Tinh province. ActionAid Viet Nam, Hanoi.
- Tracey, R. (2002) 'Food Assistance through Small-Scale Infrastructure Rehabilitation'. Unpublished report. International Federation of Red Cross and Red Crescent Societies/Cambodian Red Cross/ECHO.
- Watson, C. and B. Ndung’u (1997) Rainwater Harvesting in Turkana: An Evaluation of Impact and Sustainability. ITDG, mimeo.
- WFP (n.d.) Monitoring and Evaluation Guidelines. World Food Programme, Rome. Available at: www.wfp.org/operations/evaluation/guidelines.asp?section=5&sub_section=8