Strengthening Health System Responses to Gender-based Violence in Eastern Europe and Central Asia

A resource package

5.4. Monitoring and evaluation during the different phases of an intervention

5.4.1 The cycle of an intervention

Monitoring and evaluation have to be integrated into all phases of an intervention, from the planning to the implementation phase and beyond. Figure 12 illustrates these different phases, using the concept of the project cycle. This concept can also be applied to interventions other than projects, such as larger-scale interventions in health facilities, communities and at the policy level.

Figure 12: The five stages of the project cycle

  • Phase 1 is the assessment stage. The situation needs to be analysed, and the health and human rights concerns that the intervention should address need to be identified.
  • Phase 2 is the strategic planning stage. The stakeholders involved in the implementation process identify the objectives they plan to achieve through the suggested interventions. For this purpose, implementers need to ensure that they have the information necessary to decide how to allocate money and effort to meet the identified objectives. In this phase, the foundation for monitoring and evaluation is created, and both need to be planned for and integrated into the intervention.
  • Phase 3 is the design stage. The intervention may be designed from scratch, or an already existing project, programme or policy may be revised or adapted to meet the specified objectives. At this stage, the following question needs to be answered: which strategies or activities should the intervention use to achieve these objectives? Monitoring and evaluation activities may include pilot testing and assessing the effectiveness and feasibility of alternative methods of service delivery, e.g. early detection methods or effective referral mechanisms.
  • Phase 4 is the implementation and monitoring stage. During this stage, staff members begin implementing the intervention. They adapt it as necessary to a particular setting, resolve problems that arise, and bring the intervention to a point where it is running smoothly. Information describing how the intervention is operating and how it can be improved is needed: To what extent has the intervention been implemented as designed? How much does implementation vary from site to site? How can the intervention become more efficient or effective? (A brief illustration of such site-to-site monitoring follows this list.)
  • Phase 5, the final phase, is the evaluation stage. The intervention has been established, and information is needed on the extent to which its highest-priority goals have been achieved. For monitoring and evaluation to be successful, strategic planning and the development of a monitoring and evaluation strategy should go hand in hand.
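As a purely illustrative sketch of the monitoring question raised in phase 4, the following Python snippet compares a simple implementation-fidelity indicator across sites. All site names and counts are hypothetical placeholders, not data from any real programme; in practice such indicators would come from the intervention's own routine monitoring system.

```python
# Hypothetical monitoring sketch: how much does implementation vary from site to site?
# All site names and counts below are invented placeholders for illustration only.

monitoring_data = {
    # site: (clients screened for violence, total clients seen)
    "Clinic A": (240, 300),
    "Clinic B": (90, 280),
    "Clinic C": (260, 310),
}

def screening_coverage(screened: int, seen: int) -> float:
    """Share of clients screened: a simple implementation-fidelity indicator."""
    return screened / seen if seen else 0.0

coverage = {site: screening_coverage(*counts) for site, counts in monitoring_data.items()}

for site, rate in sorted(coverage.items(), key=lambda item: item[1]):
    print(f"{site}: {rate:.0%} of clients screened")

# A large spread between sites (here, Clinic B lags far behind) flags where
# implementation support or retraining may be needed.
spread = max(coverage.values()) - min(coverage.values())
print(f"Spread between best and worst site: {spread:.0%}")
```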

Even though the monitoring and evaluation system should be set up during the strategic planning and design phases, evaluations take place at certain points in the cycle of the intervention, whereas monitoring is carried out continuously throughout the implementation period.

  • At the beginning of the planning stage (also known as ex-ante evaluation): the focus here is on assessing the planned intervention in terms of relevance, feasibility and potential impact or expected benefits. The evaluation serves as a second opinion on whether the intervention is viable. It includes checking whether the needs of the stakeholders have been assessed properly and whether the strategies and plans have been developed adequately. In the case of health sector initiatives to combat violence, an ex-ante evaluation could assess the incidence and patterns of violence, national policies and strategies, and collaboration and referral mechanisms, as well as compare envisaged intervention strategies with international good practices.
  • At the mid-term (or another) point during the intervention: the focus here is on examining the progress and performance of the intervention and identifying changes in the environment that might affect its effectiveness. The evaluation involves collecting and analysing data for performance indicators in order to find out to what extent the project is achieving the expected results. Sometimes a mid-term evaluation is conducted to explain an unusual event (e.g. the monitoring data might be showing a disturbing or remarkable trend). For example, a new health sector initiative against violence may lead to an increased number of reported cases. A mid-term evaluation can help health professionals analyse whether this increase is due to increased awareness and counselling services or whether the violence rate has in fact increased. The latter would most likely imply a change of strategy (e.g. a stronger focus on prevention). The evaluation would also address the question of what changes to the intervention are necessary to respond to the increased number of reported cases. (A minimal sketch of such an analysis follows this list.)
  • At the end of the intervention (also known as ex-post evaluation): the focus here is on reviewing the whole cycle within the context of its background, objectives, results, activities and inputs. The evaluation looks at how well the intervention did in terms of the expected outcomes, how sustainable these outcomes appear to be and what factors led to the results. In this case, all of the internationally agreed criteria (see section 5.1, figure 11) should be assessed.
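The following sketch illustrates, in Python, the kind of reasoning described above for a mid-term evaluation: distinguishing a rise in help-seeking from a rise in violence itself. All figures are hypothetical placeholders; in practice the reported-case counts would come from facility records and the prevalence estimates from independent population-based surveys.

```python
# Hypothetical mid-term evaluation sketch: are reported cases rising because
# awareness and services improved, or because violence itself increased?
# Every figure below is an invented placeholder for illustration only.

# Facility data: cases of violence reported to health services per year.
reported_cases = {"baseline": 120, "mid-term": 210}

# Population-based survey estimates of past-year prevalence (share of
# respondents disclosing violence), collected independently of services.
survey_prevalence = {"baseline": 0.19, "mid-term": 0.20}

report_growth = reported_cases["mid-term"] / reported_cases["baseline"] - 1
prevalence_growth = survey_prevalence["mid-term"] / survey_prevalence["baseline"] - 1

print(f"Reported cases grew by {report_growth:.0%}")
print(f"Survey prevalence grew by {prevalence_growth:.0%}")

# If reports rise much faster than prevalence, the increase likely reflects
# better detection and help-seeking; if prevalence itself rises, a change
# of strategy (e.g. a stronger prevention focus) may be needed.
if report_growth > prevalence_growth + 0.10:
    print("Pattern consistent with improved awareness and service uptake.")
elif prevalence_growth > 0.05:
    print("Pattern consistent with an actual increase in violence; revisit strategy.")
else:
    print("No clear signal; further investigation needed.")
```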

If possible, an evaluation should also be a learning event. This can be achieved by, for example, organizing stakeholder workshops to generate information as well as to communicate and discuss the key findings and lessons learned from the evaluation.

5.4.2 Developing and implementing an evaluation – steps and principles

The following steps should be considered when planning an evaluation:

  • Preparing the terms of reference for the evaluation: This involves defining the scope and purpose of the evaluation, identifying data sources, deciding on methodology and communication tools, selecting the evaluation team and designing the work plan and budget.
  • Designing the evaluation: This involves reviewing the intervention and data needs, deciding on the focus of the evaluation and the key questions to address, selecting appropriate data collection and analysis methods, and deciding how the results will be communicated.
  • Implementing the evaluation: This involves collecting and analysing the data, and reviewing and reporting the findings.
  • Following up on the evaluation: This involves drafting a plan to act on the findings, monitoring its implementation and managing any follow-up activities or consequent changes.

There are several key principles that should be applied to the design and implementation of evaluations.

  • Free and open evaluation process. Evaluations should be independent from the management and implementation of interventions. All steps of an evaluation should be embedded in transparent processes. In order to enhance credibility and accountability, evaluations should involve independent evaluators who are recruited through a transparent process. Mixed teams of internal and external evaluators can increase internal acceptance and foster institutional learning as well as a positive institutional culture for monitoring and evaluation.
  • Evaluation ethics. Evaluations must be undertaken with impartiality, integrity and honesty. All parties that oversee, manage and implement an evaluation must respect human rights and cultural diversity, customs, religious beliefs and practices.
  • Partnerships and mutual accountability. Partners are more likely to feel ownership of an evaluation, and to be willing to learn from it, when they form an active part of it. Consequently, in order to increase ownership and credibility, enhance utilization and build mutual accountability, a partnership approach should be applied early in the evaluation process. This implies, for example, that the different institutions affected by or working on a problem, e.g. effectively identifying survivors of violence, should be consulted before, during and after the evaluation. Evaluations conducted in partnership with other relevant stakeholders enhance shared understanding, learning and the application of recommendations.
  • Coordination and alignment. Coordination of evaluations aims to reduce transaction costs, promote partnerships, and enhance mutual accountability and alignment. Therefore, wherever possible, evaluations should take into account national and local evaluation plans and corresponding activities and policies; where feasible, they should build on these and also regard them as capacity-building opportunities.
  • Capacity development. Where necessary, capacities within health institutions should be built in order to foster ownership of and acceptance for the process and its results and to enhance institutional learning. It should be emphasized that evaluations serve institutional learning and the generation of lessons learned.
  • Ensuring quality. Quality assurance is the responsibility of the evaluation team selected to conduct a particular evaluation and should be written into its contract. This implies providing feedback on planned evaluation methods, cross-checking data and developing a coherent and readable evaluation report that provides lessons learned and relevant recommendations.

Source: OECD-DAC 2010