1. Outline of Impact Evaluation

1.2. Evaluation criteria and methods

There are many sets of evaluation standards and criteria that have been developed to support the better use of evaluations. The most well-known set is the OECD-DAC evaluation criteria, which were recently revised following an adaptation process in 2017-2019. The new set consists of six criteria for assessing development evaluations. Though originally developed for evaluation in international co-operation activities, these criteria are now widely used by many organisations and actors, having become a reference for all types of evaluation, including impact evaluation.

They are essentially a list of the different aspects of a policy, project or programme that an evaluation ought to cover. They are designed as a checklist to ensure that key issues are considered in each evaluation, although not all criteria are intended to be applied in every evaluation. The six OECD evaluation criteria are as follows:

  • Relevance (= is the intervention, policy or project doing the right thing?): The extent to which the objectives of an intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' policies; in other words, how well a development intervention is suited to the priorities and policies of the target group, recipient and donor.
  • Coherence (= how well does the intervention, policy or project fit?): The compatibility of the intervention with other interventions in a country, sector or institution.
  • Effectiveness (= is the intervention, policy or project achieving its objectives?): The extent to which the intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance.
  • Efficiency (= how well are resources being used?): The extent to which the intervention delivers, or is likely to deliver, results in an economic and timely way.
  • Impact (= what difference does the intervention, policy or project make?): The extent to which the intervention has generated, or is expected to generate, significant positive or negative, primary and secondary long-term effects, whether directly or indirectly, intended or unintended.
  • Sustainability (= will the benefits last?): The extent to which the net benefits of the intervention continue, or are likely to continue, beyond its termination.
In order to meet the purposes and the standard criteria of evaluation, the choice of methods is crucial.

An evaluation can use quantitative or qualitative methods, and often includes both. The terms ‘qualitative’ and ‘quantitative’ refer to the type of data generated in the social research process. Quantitative research produces data in the form of numbers, while qualitative research tends to produce data stated in prose or textual form. In order to produce these different types of data, qualitative and quantitative research tend to employ different methods (Garbarino et al., 2009).

Quantitative methods use data that can be collected through surveys or questionnaires, pre-tests and post-tests, observation, review of existing documents and databases, or the gathering of clinical data. Surveys may be self- or interviewer-administered and conducted face-to-face, by telephone, by mail, or online. Analysis of quantitative data involves statistical analysis, from basic descriptive statistics to complex analyses.

Quantitative data collected before and after an intervention can show its outcomes and impact. The strengths of quantitative data for evaluation purposes include their generalizability (if the sample represents the population), the ease of analysis, and their consistency and precision (if collected reliably). The limitations of using quantitative data for evaluation can include poor response rates from surveys, difficulty in obtaining documents, and difficulties in valid measurement. In addition, quantitative data do not provide an understanding of the programme’s context and may not be robust enough to explain complex issues or interactions. In some circumstances, for example in pilot projects with a small number of beneficiaries, the use of quantitative methods is very limited.

Qualitative methods use data collected through direct or participant observation, interviews, focus groups, case studies and written documents. Analyses of qualitative data include examining, comparing and contrasting, and interpreting patterns. Analysis will likely include the identification of themes, coding, clustering similar data, and reducing data to meaningful and important points, such as in grounded theory-building or other approaches to qualitative analysis.

The strengths of qualitative data include providing contextual data to explain complex issues as well as the “why” and “how” behind the “what.” The limitations of qualitative data for evaluation may include lack of generalizability, the time-consuming and costly nature of data collection, and the difficulty and complexity of data analysis and interpretation (Patton, 2002).

No single method is the perfect solution for producing evidence for the IE, and there is no inherent incompatibility between qualitative and quantitative methods; each brings valuable information to the evaluation. Indeed, recent developments in methodology have blurred the distinctions between quantitative and qualitative methods, making combinations of methods more feasible. A growing number of studies support the need to adopt a combined methodological approach, acknowledging that the best way to fulfil an evaluation mandate is to work within a broad methodological framework.

The mixed approach was used in the Woodie project. Considering the findings of the document analysis carried out on the Whistleblower protection and Open data policy, it appears more constructive to use a multi-approach perspective.