Strengthening Health System Responses to Gender-based Violence in Eastern Europe and Central Asia

A resource package

5.2. Rationale and benefits of monitoring and evaluation

Monitoring and evaluation are essential to maintain or improve the quality of health system interventions against GBV and to understand whether these interventions have achieved their planned goals. Monitoring allows managers to track the progress of projects, programmes or policies against the planned goals. Especially when new approaches are used, such as innovative early detection programmes, it is vital to closely monitor both intended and unintended results and to test and revise the assumptions on which the intervention is based. Evaluations assess the strengths and weaknesses of projects, programmes or policies in order to improve their effectiveness. They are an important source of evidence on the performance of the project, programme or policy, and of the persons and institutions in charge of implementation.

Monitoring and evaluation can and should be integrated into the daily work of health professionals and other relevant stakeholders. Once set up, these systems can generate data and information that allow for greater transparency and accountability and help identify lessons learned. These insights can then be used to adjust the existing approach in order to make it more effective and efficient. In some cases, external support from experienced evaluators is required to establish a coherent system. Evaluations performed by external evaluators are also often perceived as more objective than those conducted from within the organization.

The way planning, monitoring and evaluation are performed has changed considerably in the last decade. Many institutions and organizations have moved from activity planning (what are we going to do?) to planning for overall results (what do we want to achieve?). The focus is therefore on the results and consequences of actions and implementation, rather than on the inputs (money, time, human resources) provided. This approach is called results-based management (RBM), and monitoring and evaluation play a vital role in it. For more information on RBM in the context of management, monitoring, evaluation and donor reporting, see UNDG 2011.

Measuring the degree to which an intervention, strategy or policy (for example, on combating violence against women) has achieved its planned results provides the basis for accurate reporting and for identifying lessons learned and obstacles faced. This knowledge can be used to plan new interventions as well as to adjust and tailor existing programmes to combat violence. Monitoring and evaluation are also valuable planning and management tools. Building monitoring systems and envisaging evaluations at the outset of an intervention helps to design and review objectives and to forecast future resource needs. It allows organizations to identify where they might have failed to have the intended impact, or where they have had an unintended impact, which may be positive or negative.

Monitoring and evaluation are valuable tools for supporting the learning process within organizations and beyond, as they help us understand what works and why. It is therefore vital that lessons learned from monitoring and evaluation are shared both within the organization and externally: with practitioners, policy- and decision-makers and, if applicable, donors.

Implementing a comprehensive monitoring and evaluation system on GBV requires gathering data from sources such as qualitative or multiple-choice questionnaires, surveys, focus group interviews with clients or staff, clinical observations, analysis of existing data (e.g. random record reviews), reviews of planning and strategic documents, and individual interviews with survivors. For an overview of quantitative and qualitative research strategies and methodologies in the context of GBV and health, see WHO/PATH (2005), Researching Violence against Women: A Practical Guide for Researchers and Activists. This manual seeks to provide researchers, activists, community workers and service providers interested in applied research with background information and tools for applying public health research methods to the study of GBV, with a focus on developing countries and other resource-poor settings. After addressing ethical considerations for researching GBV, the manual provides guidance on developing a research strategy. It also gives an overview of qualitative and quantitative approaches to research, shares tips for developing samples, and introduces tools and tips for qualitative and quantitative data collection and analysis.

The following list provides examples of questions that can be answered by monitoring and evaluation (UN Women Virtual Knowledge Center).

Monitoring and evaluating initiatives addressing violence against women can answer questions such as:

  • What interventions and strategies are effective in preventing and responding to violence against women and girls within health care systems?
  • What services are needed to help women and girls recover from violence?
  • What could be the role of different sectors and the health care sector in particular in addressing and preventing violence?
  • What factors (social, economic, political, cultural etc.) play a role in perpetuating vulnerability to violence or hindering access to services?
  • What kinds of investments produce more promising results than others and how much do they cost?

More specifically, the following lessons can be learned from monitoring:

  • Are the activities of the intervention being carried out as planned?
  • What services are provided, to whom, when, how often, for how long, in what context?
  • Is the quality of services adequate? Is the target population being reached?
  • Are women being further harmed or endangered because of the intervention?
  • Have there been any unforeseen consequences as a result of the activities?
  • Are activities leading to expected results?
  • Do the interventions or assumptions need to be amended in any way?
  • What results can be observed?

Other lessons can be learned from evaluations:

  • Why were activities implemented as planned or not, and why were some adapted?
  • Did the intervention have an impact? Why or why not? How and for whom did it have an impact?
  • To what extent can the measured or observed changes be attributed to the intervention?
  • Did the intervention have any unintended consequences?
  • Is the intervention cost-effective? How does its cost compare with alternative investments? In other words, could the results have been achieved with fewer inputs?

  • If the intervention was successful, can it be replicated in other settings and, if so, in which ones? Can it be adapted, replicated or built on to increase its reach or scope (for a larger population or a different region)?

The following example showcases an evaluation of a domestic violence intervention in the maternity and sexual health services of a UK hospital (Box 27).

Box 27: Evaluation of the MOZAIC Women’s Wellbeing Project, UK

The MOZAIC project was implemented from 2004 to 2007 as a partnership between the maternity and sexual health services of Guy’s and St. Thomas Foundation Trust and the 170 Community Project, an NGO providing specialized domestic violence support services on-site. As part of the intervention, clinical guidelines were introduced and a training programme was implemented to increase health professionals’ knowledge of domestic violence and to enable them to identify and document violence and to refer survivors to the MOZAIC Women’s Wellbeing Service, which provided on-site support and counselling. Male patients who disclosed domestic violence were also provided with counselling. In this setting, routine enquiry about domestic violence was practiced (Bacchus et al 2010). In 2007, an evaluation was undertaken. The partnership did not stop at the end of the project: the partners built on the results of the evaluation to further improve the intervention.

The evaluation took into account the perspectives of both hospital staff and service users. It used the following methods:

  • qualitative semi-structured interviews with service users and providers,
  • focus-group discussion among service providers,
  • review of patient records, and
  • questionnaires for health professionals, administered before, immediately after and six months after the training.

The evaluation revealed, among other things, the following findings:

  • The vast majority of participants rated their increase in knowledge as a result of the training as “very much” or “quite a lot”. At the same time, findings from six months later revealed a number of challenges in day-to-day clinical practice that prevented an effective response, such as the presence of partners or family members during consultations, language barriers, time constraints in busy clinics, and the reluctance of some women to trust health professionals.
  • Combined training of maternity staff and sexual health services staff was not found to be useful, mainly because sexual health professionals required specific skills that were not covered in the training, such as dealing with male patients experiencing domestic violence.
  • Maternity and sexual health services were not found to be early points of intervention to prevent domestic violence from occurring; rather, some women using maternity services revealed a long history of partner violence. These services were therefore found to provide “opportune” points of intervention for women survivors at different stages of seeking and accepting help.
  • Confidence and sensitivity on the part of health professionals was deemed helpful by patients in situations where they were reluctant to disclose abuse, as shown by the following quote from a 31 year old user of maternity services: “…I was in tears and she noticed the bruises on my arm… and she started questioning me and I said to her that I was fine and she said ‘No, you can talk to me’ and then she dug and dug and then I opened up to her.”
  • Survivors reported that the support provided by MOZAIC had initiated a process of re-assessing their personal situation and of gaining confidence in their ability to begin and sustain changes. They were able to tentatively explore options, such as temporarily leaving the abuser, contacting the police or seeking legal advice. At the same time, their ability to take decisions was impacted by the quality and availability of alternative options, such as housing or financial resources, also taking into account immigration status.
  • The evaluation also identified potential sources of harm in clinical practice, such as negative labelling and stereotyping by health professionals, failure to document cases adequately or breaches of confidentiality.

Source: Bacchus et al 2010