Strengthening Health System Responses to Gender-based Violence in Eastern Europe and Central Asia

A resource package

5.6. Designing and implementing a monitoring and evaluation system

The monitoring and evaluation system – that is, the clarification of what should be monitored and evaluated, by whom, how and when – should be set up during the planning phase or, at the latest, at the beginning of implementation. A solid analysis of the problem and its context should be carried out as part of strategy development and planning and can serve as a baseline for subsequent monitoring and evaluation. If no such analysis was undertaken, it is essential to carry one out at a later stage and make the necessary adjustments to the planned intervention.

A monitoring system is a way of steering and organizing the monitoring work so that it is less time-consuming and easier to implement. Monitoring systems vary in sophistication from a piece of paper and some notebooks or files to electronic filing systems and databases. What matters most is not how sophisticated the system is, but whether the information needed for decision-making is collected, reviewed systematically and used to make the necessary adaptations.
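As a purely illustrative sketch (all field and function names here are hypothetical, not taken from any standard tool), even a simple electronic monitoring system boils down to a handful of structured records that can be reviewed systematically:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MonitoringRecord:
    """One observation for one indicator, however the data are stored."""
    indicator: str          # e.g. "survivors referred to support services"
    period_start: date
    period_end: date
    value: Optional[float]  # None flags data still to be collected
    source: str             # e.g. "clinic register", "staff interview"
    notes: str = ""         # qualitative context behind the number

def missing_data(records: list[MonitoringRecord]) -> list[MonitoringRecord]:
    """Return the records that still need to be collected or verified."""
    return [r for r in records if r.value is None]
```

The same structure works just as well as columns in a paper register or a spreadsheet; the point is that every piece of information has a defined place and gaps are easy to spot.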

A well-designed and well-organized system will ensure that the right data are collected at the right time, during and after implementation of the intervention, and that these data help guide implementation and strategic decision-making. It will also ensure that staff involved in the intervention and other stakeholders are not overwhelmed by the amount of data gathered, and that a reasonable amount of time and money is spent on collecting and analysing the data and on collating and reporting the information.

5.6.1 Steps in setting up a monitoring and evaluation system

In order to set up a monitoring and evaluation system, health professionals should work through the steps below and answer the key question attached to each:

1. What do we need to assess?

Define the purpose and scope of the monitoring and evaluation system

Purposes could include accountability to funding agencies, partners and beneficiaries; informing strategic directions, to make changes if necessary; informing operational directions, to make changes if necessary; and empowering key stakeholders. Each purpose has different consequences for the process (e.g. if the purpose is to empower stakeholders, the process will be more participatory and learning-oriented). By ‘scope’ we mean the level of detail required, the level of stakeholder participation and the level of funding available (e.g. you might want to make the system highly participatory, but funding constraints limit the extent to which you can involve stakeholders).

2. What do we want to achieve?

Review the project concept and objectives

This involves asking questions such as: What is the intervention about? What is the theory of action underlying it? What are the assumptions?

3. Who needs what kind of information?

Assess the stakeholders’ key information needs

The most important question here is: What do management, other staff, beneficiaries and other stakeholders need to know and when?

4. What are we specifically looking at to measure achievement? Where are we today relative to our goals? When do we want to achieve what?

Formulate indicators and other data requirements

You need to formulate a list of criteria against which to measure effectiveness and efficiency, and determine the type of data (quantitative and qualitative) you will need to carry out this measurement (e.g. ‘number of survivors applying for services’ and ‘reasons for not applying’). The quality of the indicators should then be assessed against the SMART and SPICED criteria (see chapter 5.5.1 on the logical framework approach, box 29 on indicators).
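To make this step concrete, the sketch below (names and figures are invented for illustration) shows how an indicator can be stored together with its baseline and target, so that progress is computed rather than guessed:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str        # e.g. "number of survivors applying for services"
    baseline: float  # value at the start of the intervention
    target: float    # value the intervention aims to reach

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target distance covered so far."""
        if self.target == self.baseline:
            return 1.0
        return (current - self.baseline) / (self.target - self.baseline)

# Invented figures: applications rose from 40 to 70 against a target of 120.
applications = Indicator("survivors applying for services", baseline=40, target=120)
print(f"progress: {applications.progress(70):.0%}")  # -> progress: 38%
```

Qualitative data requirements, such as ‘reasons for not applying’, do not fit a numeric baseline-target pair and are better captured as recurring categories or narrative notes alongside the quantitative indicator.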

5. Who is responsible for data collection? In which sequence?

Organize the data collection and analysis

What methods will you use to ensure the right data are collected and analysed, and who will be responsible for this? Methods can be qualitative or quantitative, individual or group based, participatory or conventional. How will the various stakeholders be involved in these processes?
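One way to keep these decisions explicit is a simple data collection plan that records, for each piece of data, the method used, the person responsible and the frequency of collection. The sketch below is hypothetical; the entries would come from your own intervention and stakeholders:

```python
# A hypothetical data collection plan; replace the entries with those of
# the actual intervention and its stakeholders.
collection_plan = [
    # (what is collected,               method,                   responsible,         frequency)
    ("survivors applying for services", "clinic register count",  "records officer",   "monthly"),
    ("reasons for not applying",        "focus group discussion", "community liaison", "quarterly"),
    ("staff adherence to protocol",     "supervision checklist",  "head nurse",        "monthly"),
]

for what, method, who, when in collection_plan:
    print(f"{what}: {method}, collected {when} by the {who}")
```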

6. How are we doing relative to our targets? Why? What did we achieve and what needs to be done?

Organize critical reflection of events and processes

Critical reflection means asking not only ‘what happened’ and ‘why,’ but also ‘what does this mean’ and ‘what are we going to do about it?’ This assists in learning and managing for impact.

7. How can we ensure the information is systematically being disseminated and used to generate lessons learned?

Develop the communication and reporting process

You should decide whom you need to communicate with and report to during the monitoring and evaluation processes, and how to do this. Keep in mind that different stakeholders have different information needs and different reporting requirements. Evaluation results should be presented in an accessible format so that they can be systematically distributed internally and externally for learning and follow-up action, and to ensure transparency. In light of the lessons emerging from the evaluation, identify additional interested parties in the wider development community and target them to maximize the use of relevant findings.

8. What kinds of capacities are needed and should be strengthened?

Assess capacities and conditions for implementing the system

You need to be clear about what you need in terms of human capacities, incentives, structures, procedures and finance. You also need to assess the potential risks in the process of data collection, recording and reporting, such as re-traumatization of survivors or work overload among staff. Training staff members in participatory data collection methods will help build human capacity and will also be an incentive for them to become involved in participatory monitoring and evaluation. A low level of support for a project will have implications for the capacities and conditions needed to conduct monitoring, evaluation and impact assessment, so it may be necessary to consider how to convince stakeholders of the importance of these processes.

5.6.2 Addressing challenges in monitoring and evaluation

Monitoring and evaluation of initiatives addressing violence against women can be hampered by several obstacles, which have to be taken into account when setting up a monitoring and evaluation system. This section summarizes some of the major obstacles and suggests ways to tackle them.

  • There is a lack of comparable definitions, indicators and instruments, especially on the prevalence of different forms of violence, which makes comparisons across regions challenging. When developing a monitoring and evaluation system, concepts and terms therefore have to be thoroughly defined, where appropriate in line with international or national standards.
  • Many studies measure processes and outcomes, but not impact. For example, data may be available on the number of health professionals trained, but not on the effect the training had on their behaviour or on institutional procedures. Change is often measured at the individual level rather than at the community level. For results-based management, it is vital to broaden the focus and measure overall impact. When training health personnel, for example, the key is not simply to collect the evaluation forms but to determine whether the quality of services improved as a result of the trained staff’s greater knowledge.
  • Many indicators of behaviour change rely on self-reporting by either survivors or perpetrators. Given the sensitivity of GBV, this information can be biased: many participants may give socially desirable answers rather than disclose episodes of violence. Moreover, a ‘culture of silence’ surrounds GBV, and in some settings violent behaviours are viewed as “normal” or “acceptable”. There is thus considerable potential for under-reporting, especially where violence is hidden, as in female homicide cases and trafficking in women. Furthermore, rigorous statistical methods are frequently not used because data collection and analysis are costly. All of this must be taken into account when analysing such data.
  • Wherever possible, data should be triangulated through additional means of data collection.
  • Different kinds of interventions (policy and legal reforms; strengthening health, legal, security and support services; community mobilization; awareness-raising campaigns) and different contexts require different evaluation tools and methods. If possible, consult an experienced evaluator to identify the most appropriate research methods.
  • It is often difficult to determine the specific contribution of individual institutions or strategies to an observed outcome or impact, especially in the case of complex, multi-sectoral or integrated interventions. In such cases, present a reasoned argument for why the intervention is believed to have caused the change.
  • It is difficult to define what success means or looks like. This challenge has to be addressed when establishing a monitoring and evaluation system, either through the definition of objectives and indicators (see logical framework) or by selecting alternative approaches that allow the target group to define objectives flexibly (see outcome mapping and most significant change). For more information on evaluation approaches, see chapter 5.4.
  • Monitoring and evaluation plans often lack clear, appropriate conceptual frameworks. If possible, consult a monitoring and evaluation specialist to establish such a system, which can then be implemented by internal staff.
  • Interpreting data is often challenging and requires significant expertise and capacity that may not be available in-house.
  • Budgets often fail to allocate sufficient resources to monitoring and evaluation, which may cost from 10 to 40 per cent of the entire budget, depending on the goals and objectives of the programme and the scope and type of intervention and activities (see the worked example after this list). Given the important role that sound monitoring and evaluation can play in improving the effectiveness of an intervention, factoring in adequate funds is a worthwhile investment.
  • Certain evaluation methods commonly used to assess the impact of interventions may be unethical in the context of violence against women. For example, survivors of violence might face discrimination or be re-traumatized during evaluation interviews if the interviewer does not take their specific situation into consideration. The well-being of the persons interviewed should therefore always be at the forefront of any monitoring and evaluation effort (see chapter 5.2 on ethical considerations).
  • Behaviour change is long-term change, which often cannot be achieved through short-term interventions. Monitoring and evaluation systems should therefore be designed to measure both short-term and long-term impacts and consequently need to be planned as continuous processes accompanying interventions against violence.
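As a worked example of the budget point above (the total budget figure is invented for illustration), the 10 to 40 per cent range translates into a concrete allocation as follows:

```python
# Hypothetical total intervention budget, e.g. in US dollars.
total_budget = 200_000

# Monitoring and evaluation typically takes 10 to 40 per cent of the
# total, depending on the scope and type of intervention.
low_share, high_share = 0.10, 0.40
print(f"M&E allocation: {total_budget * low_share:,.0f} "
      f"to {total_budget * high_share:,.0f}")
# -> M&E allocation: 20,000 to 80,000
```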

(Adapted and amended from UN Women Virtual Knowledge Center, citing Gage and Dunn 2009, Frankel and Gage 2007, Watts 2008, WHO 2005)

5.6.3 Practical tips for planning and implementing monitoring and evaluation

Practical tips – monitoring

  • Ensure that the time and costs of the monitoring system are in balance with the total time and costs of the intervention.
  • Link monitoring to the operational plan.
  • Involve stakeholders in the monitoring process. This will help to create understanding, ownership and commitment when making changes in the operational plan.
  • Decide what data are essential to collect, and how to go about collecting, processing and reflecting on them together with the stakeholders involved in the intervention.
  • Organize shared learning events with stakeholders during the monitoring process.
  • Use monitoring data as a management tool, particularly at the level of operations (inputs, activities, outputs/expected results), to inform management and stakeholders about possible action that needs to be taken.
  • Ensure adequate and timely reporting to management and stakeholders, addressing their specific information needs.

Practical tips – evaluation

  • Evaluations should be planned assessments that focus on the extent to which an intervention has realized its objectives.
  • Ensure that, as far as possible, evaluations are viewed by the implementers of the intervention and by stakeholders as a learning mechanism to enhance strategic and operational management. If they are perceived only as external control, openness to reflecting on critical issues may be limited and the evaluation becomes a purely academic exercise with no further impact.
  • Plan evaluations carefully at the start of an intervention, preferably with stakeholders, ensuring that there are enough resources to conduct the evaluations properly, and also to create ownership and commitment.
  • Develop the evaluation process in collaboration with stakeholders, ensuring that their specific information needs are integrated and their views on data collection and analysis are considered; this will contribute positively to relevance, impact, usability, accessibility, sustainability, utility, effectiveness and efficiency.
  • Evaluation questions should be broad questions that focus attention on what you need to know, both positive and negative.
  • Involve stakeholders in implementing the evaluation process (e.g. by setting up a stakeholders’ evaluation committee and organizing stakeholder workshops).
  • Consider an evaluation as an important opportunity to learn and to interact with stakeholders.
  • Relate the recommendations generated through the evaluations back to the original evaluation questions.