M&E Sourcebook: Methods and Process

Selecting appropriate indicators

What is an indicator?

The United Nations World Food Programme's Office of Evaluation describes an indicator as a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement or to reflect the changes connected with an intervention. Indicators are compared over time in order to assess change. In the logical framework approach, an operation is broken down into design elements (inputs, activities, outputs, outcomes and impacts) and separate indicators are used to measure performance.

General characteristics

The desired properties of indicators will depend on the approach adopted and the nature of the programme or project being evaluated. Like any variable, every indicator takes values of one of the following types:
  • Numeric = the values are numbers
  • Nominal = the values have names (e.g. male and female)
  • Continuous = the values are infinite or very large
  • Ordinal/categorical = the values have a known order (e.g. low to high)
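As an illustration of the four value types listed above, the sketch below represents one hypothetical indicator of each kind (all names and figures are invented for illustration, not drawn from any real dataset):

```python
# Hypothetical indicator records illustrating the four value types.
indicators = {
    "households_reached": 412,       # numeric: the value is a number
    "respondent_sex": "female",      # nominal: the value is a named category
    "annual_rainfall_mm": 1523.7,    # continuous: any value within a range
    "flood_risk_level": "medium",    # ordinal: values have a known order
}

# An ordinal scale carries an ordering that plain nominal categories lack,
# so positions on the scale can be compared.
RISK_SCALE = ["low", "medium", "high"]
print(RISK_SCALE.index(indicators["flood_risk_level"]))  # position 1 of 0-2
```

The distinction matters in practice: ordinal values can be ranked and tracked for improvement or deterioration, whereas nominal values can only be counted by category.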

Good indicator characteristics

Indicators will vary from one project to another, according to the work and its context, but in general they are often expected to be:

  • SMART (specific, measurable, attainable, relevant and time-bound)
  • SPICED (subjective, participatory, interpreted, cross-checked, empowering and diverse)
(See Roche, 1999: 48-49)

In practice there can be tension between the participatory, subjective character of the SPICED approach and the emphasis on objective measurement in the SMART approach, and evaluators may have to work to reconcile the two. Other questions to be asked regarding the practicality of indicators include:

  • Measurability. Is the indicator measurable? Is it sufficiently sensitive to an improvement or deterioration in conditions?
  • Ease and cost of collection. How easy is it to obtain the information required? How costly will this be? Can the community participate? Are some relevant data already collected?
  • Credibility and validity. Are the indicators easy to understand, or will people end up arguing over what they mean? Do they measure something that is important to communities as well as implementing organisations?
  • Balance. Do the selected indicators provide a comprehensive view of the key issues?
  • Potential for influencing change. Will the evidence collected be useful for communities, implementers and decision-makers?

Even with this guidance in mind, it is rare to find all the evidence one wants. Indicators are just that: indicators, not final proof. Nor do they need to record absolute change; it is often enough to identify relative change.
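The point about relative change can be made concrete: it is computed from two observations of the same indicator, with no need for an absolute benchmark. The figures below are hypothetical:

```python
def relative_change(baseline, current):
    """Proportional change between two measurements of the same indicator."""
    return (current - baseline) / baseline

# Hypothetical example: trained response-team members at baseline vs. follow-up.
print(f"{relative_change(40, 58):+.0%}")  # +45%
```

Even where the absolute counts are uncertain or incomplete, a consistently measured relative change of this kind can still show direction and pace of progress.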

Part of the process of collecting baseline information should be to identify valid indicators for M&E. Where baseline data are lacking, or indicators prove difficult to assess or simply irrelevant, new indicators must be developed. Some development agencies have experimented with approaches to assessing change that do not use predetermined indicators: instead, poor and vulnerable people themselves review the changes that have taken place over a given period and the factors behind them.

Quantitative and qualitative indicators

The tension between the needs for subjectivity/participation and objectivity/measurement in evaluation is often played out in decisions about whether to use quantitative or qualitative indicators. Organisations working in DRR are usually comfortable with indicators of output (especially quantitative indicators), but unsure about how to select and apply indicators of impact.

Quantitative indicators are widely used to assess progress towards stated targets (e.g. numbers of community disaster response teams trained and equipped).

Process indicators (activity and output) measure the implementation of project activities, and are usually quantitative. Outcome or impact indicators can be quantitative and qualitative, and measure changes that occur as the result of project activities. Analysis of the relationship between the two indicator types is essential in understanding the chain of cause and effect.

Most DRR evaluations focus on outputs rather than outcomes or impact, partly due to their timing. Agency reports to donors are also predominantly activity-focused, with relatively little analysis of outcomes (and often some rather tenuous linking of output to outcome).


Examples: Measuring progress

Between 1994 and 2001 the Philippine Red Cross's Integrated Community-based Disaster Preparedness Programme (ICDPP) formed disaster action teams in 64 communities, all of which developed action plans; 100 mitigation measures of different kinds were carried out, and a breakdown of these measures by type indicated their nature (see PNRC 2002: 6).

There are also several efforts underway to develop sets of indicators for measuring progress toward the goals and priorities outlined in the Hyogo Framework for Action (HFA). UN-ISDR and OCHA are both working on sets of indicators to be applied at national and global levels for the five priority areas outlined in the HFA. In addition, John Twigg of the Benfield UCL Hazard Research Centre has developed a guidance note on Characteristics of a Disaster-resilient Community on behalf of the DFID Disaster Risk Reduction Interagency Coordination Group. The NGOs in the Interagency Coordination Group commissioned the guidance note to better inform their own efforts in measuring progress on disaster risk reduction and the impact of the HFA at community level.


Developing a ladder of indicators

Projects with clear objectives and targets develop a hierarchy of indicators that link process to impact and thereby make M&E more coherent.

An example is the Results Framework developed by AUDMP. The principal indicators are mostly numerical and the overall emphasis is quantitative. However, the framework also descends to a more detailed level, characterising the subsidiary evidence required to arrive at the main conclusions and outlining sources of information and the evidence-gathering activities to be undertaken. These subsidiary indicators are more diverse.

As with logical frameworks, creation of hierarchies of indicators allows evaluators to form judgements at all levels (activity-output-outcome-impact), to assess cause-effect linkages, and to form a view about overall coherence.
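A ladder of indicators of the kind described above can be sketched as a simple hierarchy linking each logframe level to its indicators. Every entry below is hypothetical, chosen only to show the activity-output-outcome-impact chain:

```python
# Hypothetical ladder of indicators for a community disaster-preparedness project.
ladder = {
    "activity": ["training sessions held"],
    "output":   ["community members trained and equipped"],
    "outcome":  ["response teams conducting drills unaided"],
    "impact":   ["reduction in flood casualties"],
}

LEVELS = ["activity", "output", "outcome", "impact"]
for level in LEVELS:
    for indicator in ladder[level]:
        print(f"{level:>8}: {indicator}")
```

Holding the levels in one structure like this makes the cause-effect chain explicit: an evaluator can check that every impact indicator is supported by evidence at the levels beneath it.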

Indices

There are a number of indices of national and sub-national-level disaster risk that can be used to measure the effectiveness of DRR policies. These include:

  • UNDP's Disaster Risk Index - a global assessment of national disaster risk developed to demonstrate how development can contribute to risk. The index calculates the average risk of deaths per country in large and medium scale disasters associated with earthquakes, tropical cyclones and floods.
  • World Bank/ProVention's Hotspots project - a global, sub-national assessment of risk calculated for grid cells rather than for countries as a whole, intended to provide a rational basis for prioritising risk reduction efforts and highlighting areas where risk management is most needed. Risks of both mortality and economic losses are calculated as a function of the expected hazard frequency and expected losses per hazard event.
  • IDB/IDEA Americas Program - a series of national and sub-national indices of disaster risk for Latin America and the Caribbean for use in country programming. Four indicators have been developed, measuring a country's performance in disaster risk management, its financial capacity to meet recovery costs, localised levels of risk and prevailing conditions of national level human vulnerability.
  • ECHO's Disaster Risk Index - a measure of national risk developed for use in determining the priority country focus for ECHO's disaster reduction activities. ECHO's index combines information on natural hazards, vulnerability and, where available, national coping capacity.
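The Hotspots calculation above, for instance, treats risk in a grid cell as a function of expected hazard frequency and expected losses per event. A minimal sketch of that arithmetic, using entirely hypothetical numbers, is:

```python
def expected_annual_loss(hazard_frequency_per_year, loss_per_event):
    """Risk as expected hazard frequency multiplied by expected losses
    per event, in the spirit of the Hotspots approach described above."""
    return hazard_frequency_per_year * loss_per_event

# Hypothetical grid cell: a damaging flood every 5 years (frequency 0.2/yr),
# with USD 2 million in losses per event.
print(expected_annual_loss(0.2, 2_000_000))  # 400000.0
```

The same expected-loss logic can be applied to mortality rather than economic losses by substituting expected deaths per event, which is how the Hotspots project reports both risk measures.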

National-level DRR methods and indicators

Indicators for measuring progress in the implementation of national level disaster risk reduction programmes should cover a broader range of categories and components than NGO projects. These could include an assessment of changes in political commitment and institutional aspects (governance), risk identification, knowledge management, risk management applications - such as environmental management, social protection and safety nets, financial instruments, land use planning, urban and regional planning and physical/structural measures - and preparedness and emergency management.

Examples of indicators to measure progress in implementation of disaster risk reduction activities could be: specific disaster management legislation enacted; small-scale disaster risk reduction investments piloted; disaster social safety nets fully integrated into the PRS; and public awareness of disaster risks strengthened.


Examples: Key areas of change and relevant indicators

  • Economic well-being
    Dimensions: productive assets, occupational status, food security, income and savings, access to markets, environmental awareness and practice, etc.
    Starting points for developing indicators: land holding, farm animals; housing status; household expenditures and consumption; indebtedness; market mobility; quality of diet; ability to cope with crisis.

  • Social well-being or human capital
    Dimensions: health status, education, water and sanitation, etc.
    Starting points for developing indicators: literacy rates; educational level; school attendance rates; health education and awareness; infant mortality; adequacy and reliability of water supply.

  • Political empowerment
    Dimensions: ownership and control over assets, perceptions of well-being and quality of life, participation in decision-making and public institutions, access to public resources, dependency and mobility, etc.
    Starting points for developing indicators: conflict resolution mechanisms; awareness and exercise of civil-political rights; degree of influence in decision making.

  • Women's empowerment
    Dimensions: access to public resources, gender awareness, self-confidence and identity, valuation of reproductive roles.
    Starting points for developing indicators: women's involvement in income-generation; ownership and control of assets; degree of economic dependence; perceptions of own well-being; literacy rates; maternal mortality/morbidity; women's workload; time and space for recreation.

See ADPC, CBDRM 2004.


Mainstreaming DRR methods and indicators

Country strategies and PRSPs

Country strategies and PRSPs can be designed with targets and measurement of risk reduction in mind. Relevant short- and long-term targets and indicators and related systems for monitoring and evaluating implementation and achievements, particularly impacts on the poor, should be considered. See Guidance Note 3: Poverty Reduction Strategy Papers.

Selecting unit(s) of analysis

Analysis can take place at the level of the individual, household, community, organisation or a combination of these. Different aspects of disaster risk are evident at different levels of social organisation. The advantage of using the individual as the unit of assessment is that it allows differential vulnerability (between men and women, old and young) to be explored. However, most interventions have impact beyond the individual level, and attributing the impact of a particular action to any one individual poses serious problems.

Further reading and website resources