M&E Sourcebook: Introduction

The evaluation team and timing

Participation and accountability are significant factors when forming evaluation teams, and the balance between internal and external evaluators is an important consideration. In development projects there are no fixed rules for this: the appropriate size and mix are chosen for the specific project, and there is increasing emphasis on gender balance and local participation. In contrast, external specialists - often men - continue to dominate teams evaluating risk reduction and humanitarian aid initiatives.

External evaluators' independence and impartiality make findings more credible, reduce bias and help to overcome conflicts of interest.


Examples: Selecting the evaluation team

To ensure that the lessons and knowledge generated by the evaluation would be used, the Joint Evaluation of the BRCS/IFRC/DFID Disaster Reduction Programme 2001-2003 used a joint mission team of three people from "within" the programme, representing the donor, BRCS and one of the implementing National Societies. An external team leader was hired to provide an independent perspective. See the BRCS/IFRC/DFID evaluation report.


This independence is the main reason why agencies use external consultants so extensively, and donors will often insist on it. External consultants can also be useful in other ways: facilitating internal reviews and "lesson-learning" exercises, providing an outside perspective at project review meetings, and collating findings. The presence of external, especially foreign, evaluators can also reinforce the credibility of a project among beneficiaries and those involved in running it.

However, there is a significant risk that an outsider will not understand all the project's complexities, especially if time is limited. The high cost of foreign consultants and international travel is a constraint where funds are limited, often leading to rapid evaluations with little time in the field; in such circumstances, access to evidence and informants may be restricted and uneven. In DRR, small evaluation teams appear to be common, sometimes comprising an external consultant and a regional or local one. Analysis by a single person is vulnerable to individual bias, whereas teams can debate methods and findings. Feedback meetings with the implementing agency are a poor substitute for such debate, as the agency is likely to be over-defensive about evaluation findings.

The evaluation's purpose offers some guidance on the balance of the team. If the main purpose is lesson-learning, it makes sense to involve more internal staff; if it is accountability, the independence of external evaluators becomes more important. In practice, however, most evaluations aim at both lesson-learning and accountability.

There are also no fixed rules about the appropriate skills mix in teams. Some people feel that a wide range of relevant technical skills is essential; others maintain that experience in evaluation methods is more important. In some kinds of risk reduction project, technical expertise may be valuable (e.g. science, engineering, architecture, nutrition, economics or social sciences). Evaluators need to be able to use quantitative and qualitative data and relevant data collection methods. Knowledge of local geography, society, cultures and institutions is helpful, especially since evaluators tend to draw on previous experience in identifying good practice.

Another problem noted in some evaluations is changes in team composition, as members come and go for operational, personal and institutional reasons. This is more likely in large evaluations and those taking place in more than one country. Inconsistency of method and analysis may result, which can place much of the burden of analysis and writing up on the team leader.

Resource availability and scope in M&E

The time allotted to evaluations is usually very short, particularly where external evaluators are used. A week to ten days in country is standard, covering briefings, interviews with key stakeholders and site visits. Where evaluations cover more than one country, time may be even more restricted. Much time can be taken up in arranging meetings and travelling to field sites, and if a visit coincides with political or other disturbances, valuable time is lost. Time limitations influence data collection methods: they may lead to greater emphasis on qualitative information and over-reliance on selective field evidence, agency documents and head-office interviews. Many evaluation reports highlight these limitations. Where evaluators identify the issue in advance, commissioning agencies should be prepared to negotiate the scope or emphasis of the evaluation.


Examples: Dealing with a limited timeframe

In 2002 the Canadian International Development Agency (CIDA) commissioned an evaluation of the Emergency Preparedness and Disaster Relief Coordination Program (PED) of the Pan American Health Organization (PAHO), which it was funding in partnership with other donors. The three-person evaluation team visited four countries in Central America, three countries in South America and three Caribbean countries during a three-week mission.

Through individual interviews and focus groups, the team consulted over 200 people working for CIDA, PAHO, Ministries of Health and other government departments, international agencies, NGOs and community associations.

The evaluation team thought that the terms of reference were too broad and ambitious for the allotted timeframe, and discussed this with CIDA at the outset of the evaluation. Rather than reducing the scope of the evaluation, CIDA agreed to give the team some latitude in deciding which issues to focus on. (See Gander et al. 2003: 2-4)


Where there is time, preparatory visits can be made to become acquainted with the area and project, meet local project staff and explain the evaluation to them, and carry out initial data collection. This happens only occasionally, mainly in more participatory evaluations.

Evaluations run as research projects are often the most successful, owing to the greater time and resources committed. SOS Sahel's analysis of a food security/cash-for-work initiative in Ethiopia employed six local researchers and an external lead researcher, and lasted six months (see Jenden 1995).

A series of evaluations during and after a project's lifetime is even more effective, as it allows longitudinal analysis, but this happens rarely. The Disaster Research Center's evaluation of FEMA's Project Impact initiative in the USA is a notable exception: the series of studies allowed researchers to assess the project's development and potential sustainability, and to re-analyse earlier evaluation data.

Evaluations can be carried out during implementation of a project (mid-term), at its end (final) or afterwards (ex-post). Most take place at the end of a project or of a phase within it, often after only two or three years. Longer-term post-project impact evaluations are rare, and many evaluations take place too early in a project's life to assess effectiveness or impact. For example, owing to donor regulations, a project in Cambodia promoting food security through flood mitigation and rice seed distribution had to be evaluated before the next harvest, when its full impact would have become apparent. Long-term follow-up provides a more comprehensive picture of impact.


Examples: Longer-term evaluation

An independent evaluation in 1997 of a rainwater harvesting initiative in Kenya, launched more than ten years earlier, covered:

  • impact on average sorghum yields, and comparison of yields between traditional sorghum gardens and those improved by rainwater harvesting (in good and bad years);
  • how the sorghum harvest was used, in good and bad years (e.g. to purchase food, seeds or livestock, to sell for cash, or to give to relatives and friends);
  • impact on diet;
  • impact on wealth;
  • gender issues in control and decision-making, relating to decisions about whether to improve a sorghum garden, when to begin planting, division of labour and control over disposal of the harvest;
  • impact on women's status (linked to the previous point);
  • how the creation of new sorghum gardens affected traditional land tenure arrangements;
  • positive and negative impact on the environment (water run-off, soil erosion, soil fertility).
(See Watson and Ndung'u 1997)


A difficulty with long-term impact assessments is that vulnerability is dynamic and affected by a range of external factors. The context at the time of evaluation may be very different from that at the time of implementation. Identification of such contextual changes is therefore an important part of impact evaluation.

Further reading

  • Gander, C. et al. (2003) Evaluation of IHA's (International Humanitarian Assistance Division) Disaster Preparedness Strategy: Towards a New Disaster Risk Management Approach for CIDA. CIDA, Canada.
  • Watson, C. and B. Ndung'u (1997) Rainwater Harvesting in Turkana: An Evaluation of Impact and Sustainability. ITDG, mimeo.
  • Disaster Research Center (various dates) Disaster-Resistant Community Initiative: Evaluation of the Pilot Phase. University of Delaware. Available at: www.udel.edu/DRC/projectimpact.html
  • Jenden, P. (1995) Cash-for-Work and Food Insecurity in Koisha, Southern Ethiopia (description of the SOS Sahel Food Security Study). ODI Relief and Rehabilitation Network, Network Paper 11. Available at: www.odihpn.org/documents/networkpaper011.pdf