DEPARTMENT OF ECONOMIC AND SOCIAL SCIENCES & SOLVAY BUSINESS SCHOOL
A comparative analysis of European
Commission evaluations of Development
Co-operation programmes
2012-2014
Teodora Irina Virban
Student number: 0106917
Promotor prof. dr. Youri Devuyst
August 2014
Table of Contents
List of Tables .............................................................................................................................4
List of Figures............................................................................................................................4
Abstract......................................................................................................................................5
Introduction................................................................................................................................6
Chapter 1: Overview of Program Evaluation: definition and components..............................11
1.1. Types of Program Evaluation and different approaches ...............................................11
1.1.2. Definition of Program.............................................................................................12
1.1.1. Program Evaluation: definition, types and central concepts ..................................13
1.2. Program Evaluation Theories........................................................................................16
1.3. Program Evaluators.......................................................................................................18
1.4. Economic and Political Perspectives of Program Evaluation..........................22
Chapter 2: European Commission's approach to evaluations..................................................24
2.1. Evaluating EU Activities...............................................................................................24
2.1.1. European Commission evaluations: Definition and scope.....................................25
2.1.2. The Evaluation Function ........................................................................................30
2.1.3. Designing, conducting and finalizing an evaluation ..............................................32
2.2. Evaluation methods.......................................................................................................39
2.2.1. Methodology of evaluations...................................................................................40
2.2.2. Evaluating tools ......................................................................................................51
Chapter 3: The comparative analysis.......................................................................................55
3.1. The research methodology ............................................................................................55
3.2. The case study choice....................................................................................................57
3.3. The comparative analysis..............................................................................................60
Stage 1: Verifying that all of the four reports are integrally retrieved from the European
Commission platform, on the DG DEVCO “Reports” section. .......................................61
Stage 2: Examining each of the four reports in order to observe the respectfulness in their
composition with regard to the Evaluation Guidelines proposed by the EC....................62
Stage 3: Comparing the findings from the four summaries done in the previous stages .78
Stage 4: The elaborated comparative analysis..................................................................88
Conclusion .............................................................................................................................117
Annexes..................................................................................................................................134
Annex 1 – Evaluation Methodologies by country .................................................................134
Annex 2 - Evaluation questions and answers by country ......................................................138
Annex 2.1. Coverage of the evaluation criteria and questions ............................148
Annex 3 – Quality Assessment Grid by country ...................................................................150
Annex 4 – Terms of references representation by country....................................................154
Annex 5 – Data collection and analysis by country...............................................................155
Annex 6 – Main conclusions and recommendations .............................................................157
Annex 7 – Composition .........................................................................................................159
List of Tables
Table 1: Evaluation concepts through the eyes of Joseph S. Wholey / Source: created by
author using information from original source, retrieved on February 2014 (Shadish, Cook,
& Leviton, 1995, p. 225)_____________________________________________________18
Table 2: The definition of evaluation criteria _____________________________________45
Table 3: Indicators and intervention logic, retrieved in May 2014, from “Evaluation Methods
for European Union’s External Assistance”, Volume 1, p. 60________________________46
Table 4: Structure of a design table retrieved in May 2014, from “Evaluation Methods for
European Union’s External Assistance”, Volume 1, p. 65___________________________47
Table 5: Own Quality Assessment Grid _________________________________________91
Table 6: Similarities and differences based on the evaluation criteria between the four
countries, table constructed by author _________________________________________102
Table 7: Differences and their consequences in using data collection tools in the four cases,
table constructed by author _________________________________________________108
Table 8: Topics linking the recommendations by country, table constructed by author ___111
List of Figures
Figure 1: Evaluation, monitoring and audit functions of EC_________________________26
Figure 2: Evaluation criteria representation retrieved on October 2013 from “Evaluation
Methods for European Union’s External Assistance”, Volume 1, p. 50 ________________43
Abstract
At a time when authorities around the globe are making great efforts to bring nations closer together economically and politically, development co-operation programmes have become a subject of major importance. The programmes cover a wide range of areas, such as health and environmental systems, the empowerment of women, educational structures, peace building, food security, the eradication of poverty, trade and rural development, and justice and human rights. Given the complexity of these objectives, the involvement of many actors is required. The United Nations, the OECD and the European Commission, numerous civil society organizations and humanitarian aid NGOs, together with national associations and with the support of governments' policy makers, all belong to the group of actors carrying this global movement forward.
The present paper examines the outcomes of the European Commission's development co-operation programmes by focusing on one of their most pertinent aspects: their evaluation. The analysis rests on a comparison of such evaluations for four carefully chosen countries.
“We can evaluate anything – including evaluation itself”
(Shadish, Cook, & Leviton, 1995, p. 7)
Introduction
The paper will focus firstly on making a detailed assessment of a specific category of evaluations, namely the European Commission evaluations of specific development co-operation programmes, and secondly on establishing a comparison of four such evaluations on aspects discussed further in the paper.
The assessment aims to set strong foundations by presenting which tools are used for evaluating activities in the European Union and what results follow. More precisely, the study will address the guidelines set by the European Commission concerning project or programme evaluation in developing countries (the evaluations conducted on behalf of DEVCO1). The phases necessary in each evaluation process will therefore be thoroughly presented (the preparatory, desk, field, synthesis, dissemination and follow-up phases).
Moreover, the evaluation methodology will be clearly set out, most importantly recalling the key methodological elements used, namely relevance, effectiveness, efficiency, sustainability and impact, endorsed by the OECD-DAC2, as well as two additional ones proposed by the European Commission: added value and coherence. The Needs Assessment components and the Quality Assessment Grid will also be of importance. The knowledge and information gained will serve the fulfillment of the paper's aim, namely delivering an analysis of four such evaluations, in which the comparability, or at least the coherence, of the definitions used to frame these key elements is examined in parallel. Additionally, further research questions, each applied to all four cases in a comparative way, will conclude the paper.
1 DEVCO is the EuropeAid Development and Co-operation Directorate-General, a department of the European Commission described in more detail later in the paper.
2 OECD-DAC: the Organisation for Economic Co-operation and Development's Development Assistance Committee.
In order to achieve the goal of the paper, some related concepts and background information need to be introduced. Consequently, the formation of the European Union and the different roles of its current institutions will be brought onto the stage, the European Commission's components, functions and responsibilities will be carefully presented and, last but not least, the development co-operation programmes run by the European Commission will be highlighted. In addition, the category of developing countries within the global framework will be briefly presented, while underlining the reasons for choosing the four countries that are the subject of the thesis: Colombia, Jamaica, Nepal and the Philippines.
The European Union: creation, functioning and current status
The founding of the European Union has led to a completely new situation in our recent history. An impressive amount of work has been invested in the establishment of what we now know as the European Union. In 1957, six states signed the Treaty of Rome, which brought to life the European Economic Community and the European Atomic Energy Community. The purpose of such a pooling of powers was to secure peace and to put an end to the conflicts between France and Germany. In the following years a multitude of important steps were taken, such as a common policy for agriculture, fisheries and regional development, common external tariffs, a single market and the creation of an Economic and Monetary Union. Other states joined these agreements, and in 1992 the signing of the Maastricht Treaty led to the creation of what we now know as the European Union.
Currently, the European Union is composed of 28 countries, with five other countries recognized as candidates for membership. The institutional structure of the European Union is a very complex system that reflects the thorough integration of the Union at both the political and the economic level. The “Big-five” institutions are the
European Council, the Council, the European Commission, the European Parliament and the
Court of Justice of the EU.
While the European Council is considered to be the highest political-level body without a formal role in EU law-making, the Council of the EU is the main decision-making body of the EU. The European Parliament, for its part, has two main tasks, namely to share legislative powers with the Council of Ministers and the Commission, and to oversee the EU institutions (especially the Commission). Because EU laws and decisions are open to interpretation, disputes are likely to appear; the Court of Justice has the power to settle them.
Considering that the fifth component of the EU's institutional framework, the European Commission, is the subject of interest of this paper, a much more detailed description will be provided. The European Commission is the executive branch of the EU; it enforces the Treaties and drives European integration forward through the following activities:
i. it proposes legislation to the Council and Parliament;
ii. it administers and implements EU policies;
iii. it provides surveillance and enforcement of EU law in coordination with the EU
Court. (Baldwin & Wyplosz, 2009, p. 72)
The European Commission is at the heart of the so-called Community Method. It was designed as the independent “motor of the integration process, with the duty to act in the interest of the entire Union” (Devuyst, 2005, p. 12).
The bureaucratic composition of the Commission is complex; the work is separated into
departments and services. The departments are called Directorates-General (DGs), each
dealing with a different policy, while the services are in charge of more administrative
responsibilities.
The objective of the paper is to analyze a number of public reports produced for a specific DG, namely EuropeAid Development and Co-operation (DEVCO). As the official source of
DEVCO says, this DG “is the result of the merger of parts of the former Directorate-General
for Development and Relations with African, Caribbean and Pacific States with the former
EuropeAid Co-operation Office. EuropeAid is now responsible not only for defining EU
development policy but also for ensuring the effective programming and implementation of
aid” (European Commission). As a result, DEVCO now encompasses both areas, co-operation and development, and serves as a central point of contact for internal and external stakeholders on different matters. Moreover, the Development and Co-operation department is solely responsible for the European External Action Service.
The role of DEVCO is extremely important: its job is to provide state-of-the-art development policies for all emerging markets, to work towards a coherent development policy while making sure the mechanisms used are properly applied and renewed when needed, to continue putting forward beneficial development policies, and to remain responsive and involved in the world's problems. (European Commission)
The objective of the DG is to contribute to global well-being, acting when needed to reduce poverty, to promote sustainable development and to uphold peace, security and democracy. In addition, the external aid instruments also fall under DEVCO's responsibility. (European Commission)
EuropeAid Development and Co-operation is the main source for the evaluative analysis. The material used consists of the evaluation reports on four different countries, evaluating the development co-operation programmes carried out by the Commission in developing countries. The countries were carefully chosen so that they were evaluated in the same period of time and by two different evaluating consortia.
In conclusion, the goal of the paper is to carry out a comparative assessment of the countries' evaluation reports, aiming to show how faithfully they respect the basic concepts that are central to any evaluation, such as relevance, effectiveness, efficiency, sustainability and impact. The assessment will generate its own new set of findings, a quality assessment grid and a set of recommendations.
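As a purely illustrative sketch of how the comparative step might be organized, the Python snippet below tabulates the coverage of the seven evaluation criteria across the four country reports; the True/False values are placeholders and do not reflect the findings presented later in the thesis.

# Hypothetical sketch: tabulating coverage of the seven evaluation criteria
# (the five OECD-DAC criteria plus the two added by the European Commission)
# across the four country evaluation reports. The values below are
# placeholders, not findings of the thesis.

CRITERIA = ["relevance", "effectiveness", "efficiency", "sustainability",
            "impact", "EU added value", "coherence"]

coverage = {
    "Colombia":        {c: True for c in CRITERIA},
    "Jamaica":         {c: True for c in CRITERIA},
    "Nepal":           {c: True for c in CRITERIA},
    "The Philippines": {c: True for c in CRITERIA},
}

def coverage_table(cov):
    """Return one line per country, listing any criteria a report leaves unaddressed."""
    lines = []
    for country, covered in cov.items():
        missing = [c for c in CRITERIA if not covered[c]]
        status = "full coverage" if not missing else "missing " + ", ".join(missing)
        lines.append(country + ": " + status)
    return "\n".join(lines)

print(coverage_table(coverage))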
Chapter 1: Overview of Program Evaluation: definition and components
Human beings have always tended to organize their activities into well-ordered sequences of tasks. Together, these tasks form what we would now call a “project” or a “programme”: broadly speaking, a series of detailed activities assigned to a number of participants in order to achieve a specific goal while meeting the various objectives along the way. Naturally, shortly after the first projects were created, flaws and even real problems were detected, and the need to evaluate the way projects are conducted came to light. Evaluation thus came to be seen as a crucial component for the improvement of projects and programs.
One view of the emergence of social program evaluation sees it as a series of steps leading to the elimination of the causes of problems. The progression could be “(a) identifying a problem, (b) generating and implementing alternatives to reduce its symptoms, (c) evaluating these alternatives, and then (d) adopting those that results suggest will reduce the problem satisfactorily” (Shadish, Cook, & Leviton, 1995, p. 21). This approach is comparable to the projects people have carried out in their daily activities over the centuries. In turn, the same reasoning was recommended to be applied to larger targets, namely social programs, in order to evaluate their performance.
1.1. Types of Program Evaluation and different approaches
This section of the paper intends to provide a detailed picture of programs and program evaluation, to present in parallel different program evaluation theories, and to create an understanding of the economic and political perspectives of the evaluation process.
Finally, the section will conclude by summarizing the means of evaluating the evaluation of the European Commission's development co-operation programs, which is the final outcome desired from this research.
1.1.2. Definition of Program
“A program is an organized collection of activities designed to reach certain objectives” (Royse, Thyer, & Padgett, 2009, p. 5) that requires, above all, the involvement of human resources. In the authors' view, the characteristics of a good social service program are staffing (common to every program) and the following elements: budgets, stable funding, a recognized identity, a conceptual or theoretical foundation, a service psychology, systematic efforts at empirical evaluation of services and an evidence-based research foundation. Because the goal of the paper is to compare the evaluations done on specific programs, there is no need to provide more information about the definition and components of a program itself.
An important aspect of program evaluation is that it contains in its structure a certain theoretical model “that would have examined the problem – how and why it originated and what would work best to remedy the situation” (Royse, Thyer, & Padgett, 2009, p. 8).
From a more economic perspective, the program is defined in comparison to a project, to avoid any confusion; it is seen as “a group of related projects designed to accomplish a common goal over an extended period of time (..) The major differences lie in scale and time span.” (Larson & Gray, 2011, p. 6) A more precise clarification of this difference is made by identifying the organization's composition: “The parent organization has a change objective which may require contributions from several different areas, or several different types of project for its achievement.” (Turner, 2009, p. 324) The explanation is, in turn, simple and concise, and adds value to the present research on the evaluation of programs.
1.1.1. Program Evaluation: definition, types and central concepts
“Evaluation is a profession in the sense that it shares certain attributes with other professions and differs from purely academic specialties such as psychology or sociology.” (Austin, 1981, p. 17) From the logic behind this framing we can conclude that, at the time this study of project and programme evaluations was carried out, evaluation itself was not a standardized process. Moreover, it represented a broad field that required the involvement of representatives from assorted fields of activity. Naturally, since the study reflects the specific situation of the United States of America, its conclusions might not fully match the program evaluation procedures or techniques used outside America. This issue remains to be discussed further.
Angela Browne and Aaron Wildavsky, two of the authors of “The politics of program evaluation”, consider that an evaluation acquires a more solid identity once the evaluators answer five important questions about it: “When?”, “Where?”, “For whom?”, “What?” and “Why?”. (Palumbo, 1987, pp. 150-151)
Relating to the first question, “When?”, evaluations can be retrospective (they depend on past events and require the program to have already started), prospective (in contrast, they do not need a prior implementation) or continuous (they have a proactive role). The question “Where?” refers to the institution inside which these evaluations are done; they can range from informal to formal, that is, from small evaluation agencies to governmental evaluation authorities. “For whom?” specifies the stakeholders responsible for the performance and final realization of the evaluation; these can be sponsors, government representatives and so forth. The question “What?” is the most complex one, revealing a multitude of possible types of evaluation in terms of scope: pseudo-evaluation, quasi-evaluation, “goal-fixed evaluation”, comprehension or inferential evaluation, and so on (Palumbo, 1987, pp. 153-162). The last
question classifies programme evaluations through the eyes of the final “consumer”: they can be focused on utilization, or interactive, or aimed at an evaluability assessment or, further, at a learning objective.
Moving on to a more specific category of program evaluation, it is essential to introduce social programs, “designed to benefit the human condition” (Rossi, Lipsey, & Freeman, 2004, p. 6). Social programs were first active in fields such as education and public health but, with the rapid development of the economy, many other programs became oriented towards “urban development and housing, technological and cultural education, occupational training, and preventive health activities” (Rossi, Lipsey, & Freeman, 2004, p. 8). From a broad perspective, the evolution of social program evaluation moved the focus of interest from the local level to the national level and, in the current period, to the worldwide level. The funding of such programs is provided mostly by governments, civil society sponsorship and global organizations. The focus at the moment is to “ameliorate a social problem or to improve social conditions” (Rossi, Lipsey, & Freeman, 2004, p. 17), and it is the effectiveness of such programs that their evaluations most seek to establish.
In order to move the focus to the subject proposed by the paper, certain definitions need to be carefully mentioned; this provides the transition from the general to more individual cases. The European Commission proposes that evaluation be seen as “a key Smart Regulation tool, helping the Commission to assess whether EU actions are actually delivering the expected result in the most efficient and effective way” (European Commission), where Smart Regulation tools are created to help the Commission assess key concepts such as the effectiveness, efficiency and relevance of the implementation of policies, legislation, trade agreements and so forth. This category also includes the evaluation standards, which are similar to those applied by specialized organizations other than the Commission
itself. The guiding principles of these evaluation standards will be presented in more detail in
Chapter 2.
Going over the Evaluation Guidelines proposed by EuropeAid, we find the European Commission's definition of a “programme” and of its evaluation. The definition is simple and concrete, describing grouped efforts to achieve overall aims: “a programme is a set of simple, homogeneous interventions grouped to attain global objective” (European Commission). With regard to the evaluation of a homogeneous programme, it is stated to be a complicated process, both because of the multiple cases that need to be assessed and because of the “synergies effects between the different components of the programme” (European Commission). In this case, traditional techniques are employed (questionnaires and comparison groups), but with a visible reduction in the number of questions posed.
To conclude this section, the “12 Lessons from the OECD DAC”3 for “Evaluating Development Activities” will be briefly mentioned. Firstly, the Organisation for Economic Co-operation and Development is a very important worldwide actor in the development and co-operation fields, an organization that aims to contribute positively to global well-being through social and economic policies. Secondly, the Development Assistance Committee has as its objectives to achieve better quality and quantity of development co-operation, to provide analysis that is useful worldwide and to offer DAC members the possibility to share their expertise. (OECD, 2013, p. 5)
The lessons given by them are:
1. “Base development policy decisions on evidence“ (OECD, 2013, p. 9)
2. “Make learning part of the culture of development co-operation“ (OECD, 2013, p. 13)
3. “Define a clear role for evaluation“ (OECD, 2013, p. 17)
3 OECD-DAC: the Organisation for Economic Co-operation and Development's Development Assistance Committee.
4. “Match ambition with adequate resources“ (OECD, 2013, p. 21)
5. “Strengthen programme design and management systems“ (OECD, 2013, p. 23)
6. “Ask the right questions and be realistic about expected results“ (OECD, 2013, p. 25)
7. “Choose the right evaluation tools“ (OECD, 2013, p. 27)
8. “Work together“ (OECD, 2013, p. 29)
9. “Help strengthen partner country capacities and use them“ (OECD, 2013, p. 33)
10. “Act on evaluation findings“ (OECD, 2013, p. 35)
11. “Communicate evaluation results effectively“ (OECD, 2013, p. 39)
12. “Evaluate the evaluators“ (OECD, 2013, p. 43)
A more detailed exemplification of the 12 Lessons is not of interest for the goal of the paper. Nevertheless, the content and reasoning of the advice given by the OECD will be addressed and closely followed in the research effort undertaken to achieve the paper's objectives.
1.2. Program Evaluation Theories
A very solid and well-known evaluation theory was presented by Joseph S. Wholey (who holds a master's degree in mathematics and a PhD in mathematical philosophy from Harvard, USA). The theory, known as Wholey's theory of evaluation, has as its focal point the evaluation of government, that is, of the social programs conducted by the central power of a given state (once again, the United States of America). The objective of such an evaluation is to make sure that programs follow the required, standard course and that they perform in the public interest rather than merely increasing gains at the federal level. The theory itself has three main categories to be evaluated: the customer market, the policy market and the program management market, the latter having the biggest impact in the evaluation framework. (Shadish, Cook, & Leviton, 1995, p. 227)
Regarding evaluation carried out to test the improvement of programs, Joseph S. Wholey presents a list of concepts that he considers of great importance. The table below identifies and characterizes in more detail the list presented by Wholey in the chapter “Evaluation for Program Improvement” of the book “Foundations of Program Evaluation”.
Results Oriented Management: The management team is responsible for acting towards accomplishing the expected results of the program while making use of all the known content and available resources.
Performance-Oriented Evaluation: In the same vein, the evaluation becomes an added value for managers' performance.
Sequential Purchase of Information: The information can be purchased before its intended period of use (its future utility is higher than the price paid to acquire it), providing great assistance in the next steps.
Evaluability Assessment: Analyzing the program's capability to deliver results, what needs to be done to obtain those results and, most importantly, whether the evaluation will bring enhancements to the project/program.
Rapid Feedback Evaluation: The elements of performance (objectives and indicators) are re-analyzed and a better design is decided on.
Performance Monitoring: The process and the outcome are observed.
Intensive Evaluation: The actual evaluation is carried out, in which a rigorous package of tests assesses the link between the objective and the actual result of the process.
Service Delivery Assessment: A final evaluation, free of constraints, is done in light of the feedback received, without the standard line of assessment on objectives and indicators that the other stages fulfilled.
Table 1: Evaluation concepts through the eyes of Joseph S. Wholey / Source: created
by author using information from original source, retrieved on February 2014 (Shadish,
Cook, & Leviton, 1995, p. 225)
Scriven's theory of evaluation is another strong point in the evaluation literature. Michael Scriven developed a major theory in both an explicit and a general way. The evaluation sequence he proposes is first to have a clear idea of the criteria of merit, secondly to determine the standards and finally to examine the performance (Shadish, Cook, & Leviton, 1995, p. 94). His distinctive approach is to ask evaluators to know from the start the value of the object they will evaluate. Similarly to the concepts of another theorist, Nicholas Rescher, Scriven's theory of valuation brings together two aspects: first the object that is the subject of the evaluation, and second the actual valuation, involving the framework specific to the future evaluation. Moreover, the two aspects are completed by “a criterion of evaluation that embodies the standards in terms of which the standing of the object within the valuation framework is determined” (Rescher, 1969, p. 72). In conclusion, the methodologies used in the evaluation are of comparable value to the evaluation object, and the “criterion” needs to be carefully developed and respected.
1.3. Program Evaluators
Moving further and observing evaluation from a different angle, the evaluator represents an important part of the process. In Carol H. Weiss's book “Evaluation Research” we can find the means through which the evaluator exercises his or her expertise. Moreover, the focus is
on “ways by which the evaluator can help institute the conditions that permit sound research” (Weiss, 1972, p. viii).
Broadly speaking, the role of the evaluator is to collect all the data and transform them into a coherent report. It is frequently the case that the evaluation report is not taken into consideration, for a wide range of reasons. The constraints that get in the way of using evaluations can be the evaluator's perception of his or her role, the responsiveness to change of the organization concerned, the way the evaluation report itself is perceived and handled, the discrepancies between the findings and the intended next steps, or the tendency of many evaluations to highlight mostly negative findings (Weiss, 1972, p. 110). To form a realistic picture of the input evaluators bring to the evaluation itself, it is important to recall that most evaluators come from an academic research background, which means that they “tend to look to the academic community for recognition and reward” (Weiss, 1972, p. 111) and also “stop short of drawing conclusions when they report their results” (Weiss, 1972, p. 111). At the same time, other evaluators “perceive their role as encompassing the “selling” of their results” (Weiss, 1972, p. 113).
Hence, there can be different types of evaluators, differing in their expectations of involvement in a project: some aim only for recognition, others consider themselves responsible only for studying and analyzing the data, leaving the recommendations to others. There are also some very committed evaluators who want their work to have an impact upon the world. In addition, “evaluators’ ways of thinking are different from ordinary daily decision making, because they engage in a process of figuring out what is needed to address challenges through the systematic collection and use of data.” (Mertens & Wilson, 2012, p. 3)
In conclusion, after presenting some constraints that might appear when assessing the reports and after describing the characteristics of some evaluators, we can observe that there will always be differences between evaluations carried out in different organizations by different evaluators.
Discussing further the gap between an evaluation and its recommendations, we can notice that, even though the process is continuously improving, there are still situations in which the results are evasive or even ambiguous. This gap between the program evaluation and its actual recommendations, from Weiss's point of view, needs to be filled with “intuition, experience, gleaning from the research literature assumptions based on theory, ideology and a certain amount of plain guessing” (Weiss, 1972, p. 125). Hence, the evaluation will reveal the unresolved issues of current programmes and will underline what needs to be changed.
In the book “Evaluation: A Systematic Approach”, Peter Rossi and his co-authors propose a list of the people most likely to be involved in an evaluation. The taxonomy is composed of:
- Policymakers and decisionmakers: persons in charge of the stages of a program, from inception to closure and from expansion to restructuring (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Program sponsors: the organizations that provide the program's funds (sometimes acting in the same way as the first category) (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Evaluation sponsors: the organizations that fund the evaluation itself (sometimes the program sponsors and the evaluation sponsors are the same) (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Target participants: the individuals or groups that the program to be evaluated is aimed at (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Program managers: “Personnel responsible for overseeing and administrating the intervention program” (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Program staff: personnel in charge of service delivery or support (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Program competitors: organizations that employ the same resources in other programs (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Contextual stakeholders: the groups directly interested in the implementation of the program, such as “other agencies or groups, public officials, or citizens’ groups (..)” (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Evaluation and research community: experts in evaluation and researchers in the same field who assess the quality and reliability of the evaluation (Rossi, Lipsey, & Freeman, 2004, p. 49)
This general overview of the composition of the group of stakeholders involved in evaluating a program is a helpful reference point for comparing that structure with the one relevant to the paper: the personnel involved in the evaluation of European Commission development co-operation programmes.
In the case of the European Union's external assistance, the evaluative organization is split into four broad categories: evaluation manager, reference group, evaluation team and stakeholders (European Commission, p. 30). Briefly, they can be described as follows:
- Evaluation manager: the person responsible for the evaluation process on behalf of the European Commission (European Commission, p. 30)
- Reference group: composed of representatives from the country concerned and European Commission members, chosen by the evaluation manager to help with the supervision and administration of the process (European Commission, p. 30)
- Evaluation team: the persons responsible for collecting and analyzing data, answering the research questions and drafting the evaluation report, while constantly maintaining a two-way collaboration with the other groups (European Commission, p. 30)
- Stakeholders: the persons (individuals, groups or associations) interested in the implementation of the program (European Commission, p. 30)
In conclusion, we can state that there are not many differences in how the group involved in program evaluation is composed, between the general case and the case of the European Commission's evaluation methodologies.
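Purely as an illustration of that closeness, the short Python sketch below maps Rossi's general stakeholder taxonomy onto the four roles used in the evaluation of the EU's external assistance; the correspondences shown are an interpretive assumption, not an official mapping.

# Illustrative mapping between Rossi's general stakeholder taxonomy and the
# four roles defined for evaluations of the EU's external assistance.
# The correspondences are assumptions offered as a sketch only.

ec_role_for = {
    "policymakers and decisionmakers":   "reference group",
    "program sponsors":                  "reference group",
    "evaluation sponsors":               "evaluation manager",
    "evaluation and research community": "evaluation team",
    "program managers":                  "stakeholders",
    "program staff":                     "stakeholders",
    "target participants":               "stakeholders",
    "contextual stakeholders":           "stakeholders",
}

for general_role, ec_role in ec_role_for.items():
    print(f"{general_role:35} -> {ec_role}")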
1.4. Economic and Political Perspectives of Program Evaluation
Another aspect of project evaluation concerns its economic role and effects. To present the economic perspective, it is necessary to introduce the former Economic Development Institute (EDI), an instrument of the World Bank that is currently known as the World Bank Institute. The WBI is defined as “a global connector of knowledge, learning and innovation for poverty reduction” (The World Bank) and aims at connecting interested parties so that they work together and come up with solutions to development issues.
Returning to EDI, and keeping in mind that the concept of project evaluation appeared around the 1960s, the response of the developing countries to this sub-discipline of program evaluation “was that they had to plan, guide and direct their economies. Development planning was the watchword” (Davis, et al., 1997, p. 51). From an economic perspective, an indicator of great importance is the extent to which program evaluation in developing countries manages to establish acceptable levels of quotas, tariffs and other prohibitions or requirements.
Furthermore, in the twenty years after the introduction of project evaluation, “EDI was a direct provider of project training for officials from developing countries” (Shadish, Cook, & Leviton, 1995, p. 85). Training took place at EDI (Washington, D.C.), at regional level in collaboration with different trans-national institutions, and at national level in the emerging markets.
In conclusion, all of this was happening outside Europe before the European Union had reached its final unified form, with all the competences of the member states and all the common powers to act.
The politics of program evaluation is a very interesting aspect to present because of the similarities between politics and evaluation. Firstly, whenever an evaluation is carried out it is normally fed into a political decision, and evaluations ultimately become active in a political environment. Secondly, the very act of taking a position through an evaluation process is political, and the mutually supporting relationship between a program and its evaluation is also a political matter. Thirdly, in many cases “program evaluation” is negatively perceived as “political evaluation of programmes” (Palumbo, 1987, p. 12). The main reason this matter is controversial is that the association between politics and programme evaluation can have negative influences on the evaluators themselves. The problems range from researchers and analysts using the evaluation as an “ideological tool” to focusing only on the political goal of the evaluation. Hence, the role evaluation plays in policymaking shapes opinion on whether the involvement of evaluators in politics is beneficial or not. (Palumbo, 1987, p. 13)
Chapter 2: European Commission's approach to evaluations
This chapter will consist of a thorough presentation of the evaluation process, starting from its definition and purposes and arriving at the different temporal types of evaluation, namely ex ante, interim and ex post evaluation. Further on, the evaluation function will be described, together with its profile, tasks and roles. The creation of an evaluation report will be divided into the designing, conducting and finalizing stages.
Finally, the chapter will detail the evaluation methodologies used for the development co-operation programmes carried out by the Commission.
This body of information will serve to introduce Chapter 3, the “Comparative analysis”, the research subject of the paper. The progressive introduction of terms and classifications will make the analysis clearer and simplify the drawing of conclusions about the comparability of the evaluation reports.
2.1. Evaluating EU Activities
This section introduces an important body of knowledge concerning the evaluation of the interventions carried out by the European Commission. First, it defines the essential terms such as “evaluation” and the different categories it embodies: “ex ante evaluation”, “ex post evaluation” and “interim evaluation”. Second, the functions related to evaluation are presented in a comparative way, in order to underline the differences between evaluation and monitoring, control and audit. Finally, it presents a clear picture of the objectives, characteristics or functions, and the scope and steps of evaluation in the European Commission.
2.1.1. European Commission evaluations: Definition and scope
The definition of evaluation given by the European Commission offers a clear perspective on the usefulness of the function itself, namely evaluation as a “judgment of interventions according to their results, impacts and needs they aim to satisfy” (European Commission, 2004, p. 9). Evaluation is therefore an assessment of the elements mentioned above and serves four different purposes:
- to offer support in the design of the intervention, also considering its political implications;
- to contribute to the even and efficient distribution of resources;
- to help deliver a good quality intervention;
- to offer a thorough final report. (European Commission, 2004, p. 9)
Recently, the European Commission has stressed the need to use more management tools within its activities; a set of basic evaluation requirements can therefore be found in the following documents: the Financial Regulation, the Implementing Rules of the Financial Regulation, the Communication on Evaluation and the Communication on Evaluation Standards and Good Practices. A point to be discussed further is the European Commission's freedom to define a separate set of differentiated rules for each case. Moreover, while the first document is applicable to all institutions, the Implementing Rules document depends on the Financial Regulation, and the two have to be assessed together. The two Communication documents came first, so the rules of the Financial Regulation and its Implementing Rules prevail over them.
Moving further, the discussion focuses on the differences between the evaluation, monitoring and audit functions that the European Commission applies. Monitoring is a process performed during the intervention in order to obtain quantitative data on the process, but not on its effects; the objective is to make sure the intervention respects its objectives, offering
better performance and a set of best practices for future similar programmes (European Commission, 2004, p. 10). In addition, a new type of monitoring has been adopted, performance monitoring, which provides continuous feedback on the completion of activities in order to assess the performance of the programme. As a distinction between the evaluation and monitoring functions, it should be noted that neither basic monitoring nor performance monitoring addresses the impacts of the programmes' results, as evaluation does; they focus more on the implementation of the activities of a specific programme.
The audit function applies to a multitude of activities and can make use of inputs and outputs as well as of elements common to evaluation. The figure below offers a more solid picture of the functions analyzed up to this point.
Figure 1: Evaluation, monitoring and audit functions of EC
Source: diagram retrieved in October 2013 from “Evaluating EU Activities. A practical guide of Commission services”, published by the European Commission on the official website
In theory, both evaluation and audit have as their objective the determination of “economy, effectiveness and efficiency” (European Commission, 2004, p. 11) during the processes and with regard to the effects of a given programme. Examining the well-structured exposition of the use of the three functions, we can observe where the two functions differ: the audit assesses the immediate effects of the implementation of a programme, while the evaluation aims to determine its overall effect as well as “the relevance, utility and sustainability” (European Commission, 2004, p. 11), elements that will be detailed later in the paper.
The temporal variants of evaluation are the ex ante evaluation, the interim evaluation and the ex post evaluation. The objective here is to offer a broad understanding of the use and function of the three temporal categories of evaluation, but the accent falls on the first category, which gives rise to a range of different actions.
The ex ante evaluation intervenes before the actions of the Community are initiated, in “the preparation of proposals for new and renewed” such actions (European Commission, 2004, p. 12), while the interim evaluation intervenes in the middle of an activity, whether it is a “programme with a limited duration or a policy, which continue for a indefinite period” (European Commission, 2004, p. 13). Through its examining role during the process, the interim evaluation offers quality improvements for the present programmes and a set of new information for the next generation. The ex post evaluation, on the other hand, addresses the intervention as a whole in order to assess the impacts and their sustainability, what made it a success or a failure, and at the same time the indicators of efficiency and effectiveness. The time frames in which they are carried out are suggested by their names.
The ex ante evaluation brings more elements to our attention, firstly because of its numerous purposes and secondly because of the resemblance between its analysis processes and the preparation and design processes.
The purposes are in line with the stages that ex-ante evaluations go through; by listing the stages and the checklist for each, the objective of such an evaluation can be determined. The information below is retrieved from a European Commission publication named “Note on including requirements of ex-ante evaluation in external aid programming” and presents the elements of ex-ante evaluations in the Commission as published in the Implementing Rules of the Financial Regulation (Art. 21).
Stage 1: Analysis of the problem and needs assessment
The analysis of the problem is the crucial point in initiating an ex-ante evaluation; it consists of determining:
- the essential characteristics of the programme and the most probable factors influencing the key problem;
- the representative cluster of actors that can have interdependent relationships in the given situation;
- the cause-and-effect links between factors and actors, presented in a visual way (“problem tree”). (European Commission, 2005, p. 7)
The needs assessment closely follows the analysis of the problem; it aims to identify the target group and its actual needs, addressing elements such as the population and its sub-divisions and the people's situation, motivation and interest, in order to determine and rank their needs. (European Commission, 2005, p. 7)
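A minimal sketch of how such a “problem tree” could be represented and traversed is given below, assuming a simple nested structure; the node labels are hypothetical and not drawn from any of the four evaluations.

# Minimal, hypothetical representation of a "problem tree": the key problem
# at the root, with contributing causes nested beneath it. Labels are
# illustrative placeholders only.

problem_tree = {
    "problem": "low agricultural productivity",
    "causes": [
        {"problem": "limited access to irrigation",
         "causes": [{"problem": "under-funded rural infrastructure", "causes": []}]},
        {"problem": "weak extension services", "causes": []},
    ],
}

def print_tree(node, depth=0):
    # Print the key problem and its contributing causes, indented by depth.
    print("  " * depth + "- " + node["problem"])
    for cause in node["causes"]:
        print_tree(cause, depth + 1)

print_tree(problem_tree)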
Stage 2: Objective setting and related indicators
The objectives must be divided into more explicit categories according to their main concerns, such as general objectives (which produce impacts), specific objectives
(which produce results) or operational objectives (which produce outputs), a division that embodies the desired changes. (European Commission, 2005, p. 8)
Indicators are used to track progress, but before they are calculated the success criteria must be set; the latter express the judgment by which the action is classified as successful or unsuccessful. (European Commission, 2005, p. 8)
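The hierarchy of objectives, their expected effects and the associated indicators can be summarized as in the sketch below; the indicator names and success criteria are invented for illustration only.

# Sketch of the objective hierarchy described above: each level of objective
# produces a different kind of effect and is tracked through an indicator
# measured against a pre-set success criterion. All names and thresholds
# are hypothetical.

objective_levels = [
    {"level": "general objective", "produces": "impacts",
     "indicator": "poverty headcount ratio", "success_criterion": "falls by two points"},
    {"level": "specific objective", "produces": "results",
     "indicator": "share of farmers with irrigation access", "success_criterion": "reaches 60%"},
    {"level": "operational objective", "produces": "outputs",
     "indicator": "kilometres of irrigation canals built", "success_criterion": "reaches at least 120 km"},
]

for obj in objective_levels:
    print(f"{obj['level']}: produces {obj['produces']}; "
          f"indicator '{obj['indicator']}' is judged successful if it {obj['success_criterion']}")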
Stage 3: Added value of the Commission intervention
In this stage, three elements are observed: the coherence of the EC action, strong coordination, and complementarity; the purpose is thus to test for the existence of conflicts or synergies, the consistency of the characteristics specific to any intervention, and the uniqueness of the action. (European Commission, 2005, pp. 8-9)
Stage 4: Alternative options and risk assessment
At the ex-ante evaluation stage it is necessary to create a list of different mechanisms for dealing with the intervention, comparing them against the criteria of effectiveness, costs and risks. This is called “a list of possible options” (European Commission, 2005, p. 9).
The risk assessment is an important step in the ex-ante evaluation because it delivers a clearer view of what can go wrong during the process.
Stage 5: Lessons from the past
The lessons learned are crucial to the continuation of any intervention. They offer best practices and present experiences from previous actions, improving the quality and the results of the next generation of interventions.
Stage 6: Guaranteeing cost-effectiveness
While respecting the rules set out in the Financial Regulation, and even though the essential indicator in this field (the “cost-effectiveness ratio”) is harder to calculate, the ex-ante evaluation should pay attention to an estimate of the cost of the proposed intervention, in order to
conclude whether the objectives justify the cost and to examine whether a lower cost could have delivered the same outcomes. (European Commission, 2005, p. 10)
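As a hedged illustration of this check, the sketch below compares hypothetical options on a simple cost-per-unit-of-outcome ratio; the figures are invented and the ratio shown is only one simplified formulation of cost-effectiveness.

# Illustrative cost-effectiveness comparison: given a list of possible options
# with estimated costs and expected outcomes, compute a simple ratio of cost
# per unit of outcome. All figures are hypothetical.

options = [
    {"name": "option A", "cost_eur": 2_000_000, "expected_outcome_units": 5_000},
    {"name": "option B", "cost_eur": 1_400_000, "expected_outcome_units": 4_200},
    {"name": "option C", "cost_eur": 900_000, "expected_outcome_units": 2_100},
]

for opt in options:
    # Cost per unit of expected outcome: lower is more cost-effective on this criterion alone.
    opt["cost_effectiveness"] = opt["cost_eur"] / opt["expected_outcome_units"]

for opt in sorted(options, key=lambda o: o["cost_effectiveness"]):
    print(f"{opt['name']}: {opt['cost_effectiveness']:.0f} EUR per unit of outcome")

On this criterion alone the cheapest cost per unit wins, but risks and feasibility from the earlier stages still have to be weighed alongside it.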
Stage 7: Monitoring the intervention and future evaluation
The last stage is also very important, consisting of the daily examination of inputs and general provisions for monitoring and future evaluation.
The last aspect of this section is the scope of evaluation in the Commission, namely the requirement that evaluation be centred on activities. Within the Commission's structure, the services are in charge of evaluating the activities outside the European institutions, with the objective of accumulating performance information related to those activities. The decision to conduct such an evaluation is determined by the usefulness, the added value and the costs linked to the specific activity.
To sum up, we have seen a concise definition of evaluation as a process dealing with judgments concerning results, impacts and needs. Some evaluation aims were also presented, such as designing an intervention with regard to political matters, distributing resources evenly, and delivering a good quality action and a final report, with the basic evaluation requirements to be found in the four documents. The difference between evaluation, monitoring and audit was clearly structured, and the three temporal variants of evaluation were presented in more detail to build on previous knowledge.
2.1.2. The Evaluation Function
When discussing the evaluation function it is important to emphasise that it is present in each and every DG of the European Commission, while also underlining the features that differ from one DG to another. To do so, four elements must be described: the profile, the role, the tasks and the resources involved in the evaluation process.
The profile is determined by the set-up chosen in each DG, the evaluation function taking different shapes and operating in different modes accordingly. The role, in turn, is presented together with a description of the principle underlying evaluation.
Firstly, we need to understand what is considered to be an evaluation function. It is seen as a constituent of the DGs' and Services' structure "which enables them to fulfill responsibilities for co-ordinating, planning and exploiting their evaluation activities" (European Commission, 2004, p. 26). The immediate conclusion is that the evaluation function is a vital element in the organisation of the European Commission's activities, contributing to the proper functioning of the Community. In addition, the modes in which an evaluation function can be organised are distinct, ranging from internal evaluation (evaluation networks within a DG) to evaluations carried out between different services or outside the DG (inter-service networks or external evaluation networks). As a good-practice requirement regarding the results of an evaluation, the evaluation function needs to pursue the objectives while delivering reliable data on the relevance, efficiency, economy, effectiveness, consistency, sustainability and added value of the object evaluated (project, programme or activity). (European Commission, 2004, p. 28)
The tasks involved are chosen according to the Evaluation Standards that the function must apply. Some of the tasks an evaluation function has to fulfil, as presented by Standard A.6, are:
“Co-ordination of evaluation activities of the Commission services” or “Anticipating of the
decision-making needs of Commission’s services and establishment and implementation of
annual plans and multi-annual evaluation programmes” and even “Defining and promoting
the quality standards and methods”. (European Commission, 2004, pp. 29,31)
Furthermore, the execution of the tasks presented above relies on both financial and human resources. Before starting the evaluation process it is compulsory to identify and plan the use of these resources, a discussion that also encompasses the evaluation skills of the participants and the distribution of the resources.
2.1.3. Designing, conducting and finalizing an evaluation
This section offers a comprehensive picture of the development of an evaluation, starting from its creation (designing), moving to its actual realisation (conducting) and ending with its closure (finalising).
I. Designing an evaluation
The design phase starts from the definition of the mandate under which the evaluation takes place and continues with further details of the preparatory phase, namely the questions to be addressed and the drafting of the evaluation Terms of Reference (TOR).
Accordingly, the mandate is a preliminary step of an evaluation, before the TOR is concretised. It is a descriptive document containing information about the circumstances of setting up the evaluation project, its motives and objectives, the organisation of the work (timetable and the human resources involved) and what is expected from the exercise. A very important consequence of drawing up the mandate is that it involves the stakeholders, informing them and at the same time securing their support and a common set of objectives.
The second step in designing an evaluation is the clear formulation of the evaluation questions, an element of the structuring phase; these questions aim to establish and clarify what exactly is to be evaluated. Therefore, a set of evaluation issues is examined for all the temporal stages of a programme (ex-ante, interim and ex-post). The evaluation issues will be described in more detail when the Evaluation Guidelines are presented in Section 2.3 of the paper, but they can already be listed here: relevance, coherence, economy, effectiveness, efficiency, sustainability, utility, consistency, allocative/distributional effects and acceptability. (European Commission, 2004, p. 39)
The Terms of Reference (TOR) is a central document of the evaluation process, whose role is to present four very important aspects of that process: the origin, the scope, the aim and the allocation of roles within a programme. (European Commission, 2004, p. 43) These four essential elements are especially important in cases where the evaluator is external. Moreover, the TOR also contains the questions to be addressed and the specifications of the work, based on the chosen evaluation methodology.
The list of TOR components, as presented by the European Commission in the publication "Evaluating EU Activities. A practical guide for the Commission services" (p. 43), is shown below. More specifications will be developed in the next sections. Thus, the components are:
a). The purpose, objectives and justification for the evaluation (including the legal base),
b). A description of the activity to be evaluated,
c). The scope of the evaluation,
d). The main evaluation questions,
e). The overall approach for data collection and analysis,
f). The framework delimiting the work plan, organisation and budget of the process,
g). In the case of an external evaluator, clear selection criteria,
h). The structure of the final report and, if possible, also of the progress reports,
i). The expected use, and users, of the evaluation.
It should be pointed out, in the case of data collection and analysis, that the design of an evaluation process can be influenced by the time and resources available for the evaluation. For this reason, the combination of available budget and time span translates into possibilities and restrictions as follows:
- Short time span and small budget: desk studies, interviews, focus groups.
- Medium time span and budget: case studies, surveys, expert panels.
- Long time span and considerable budget: econometric models, cost-benefit and cost-effectiveness analysis.
This classification will help us identify which category each of the analysed evaluations belongs to; a minimal illustrative sketch of the mapping follows.
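Purely as an illustration (the Commission guidance describes this classification in prose, not as a lookup), the relationship between resources and feasible data-collection methods can be expressed as a simple mapping; the tier labels are the author's own shorthand.

```python
# Hypothetical mapping of evaluation resources to typical data-collection methods,
# mirroring the three-tier classification above. Labels and structure are illustrative.
METHODS_BY_RESOURCES = {
    ("short", "small"): ["desk studies", "interviews", "focus groups"],
    ("medium", "medium"): ["case studies", "surveys", "expert panels"],
    ("long", "considerable"): ["econometric models", "cost-benefit analysis",
                               "cost-effectiveness analysis"],
}

def feasible_methods(time_span: str, budget: str) -> list[str]:
    """Return the methods typically feasible for a given time span and budget tier."""
    return METHODS_BY_RESOURCES.get((time_span, budget), [])

print(feasible_methods("short", "small"))  # ['desk studies', 'interviews', 'focus groups']
```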
II. Conducting an evaluation
This stage first addresses the establishment of the steering group, then turns to the evaluation process itself, covering both the administrative aspects and the acceptance of the chosen methodology, and closes with the validation of the evaluation report.
A. The steering group aims to give the evaluator access to the information needed, to support the work performed (mainly with respect to the methodology used) and to participate actively in the quality assessment of the evaluation. Setting up a steering group is primarily a question of choosing the right members and, secondly, of assigning them appropriate tasks and responsibilities.
The group is generally composed of members of the European Commission who deal in their daily activities with subjects specific to the evaluation. There are also cases where input from members outside the Commission is required, but even then the chair has to be an internal member. The stakeholders can be managers, operators or agencies in charge of the implementation, or groups directly or indirectly affected by the intervention.
The specific tasks are determined by the group's involvement in the evaluation process; since its actions are required before, during and after the evaluation, the tasks are numerous. The group is responsible for converting the political issue or question into operational elements, and it also oversees and guides the team in charge of the evaluation process to ensure the proper implementation of the strategy set out in the TOR. (European Commission, 2004, p. 52)
B. Carrying out the evaluation is divided into the administrative set-up stage and the implementation stage.
a). The administrative set-up refers to the decision on who will carry out the evaluation; the three choices are straightforward: the evaluation can be done by internal evaluators, by external evaluators, or parts of it can be delegated to outside sources. The decision mostly depends on the weight of the summative and formative dimensions; it is wise to have external evaluators when the summative dimension is strong, whereas if it is weak but the formative dimension is strong, the in-house option is better suited. Nevertheless, in general an evaluation project combines both summative and formative aspects. (European Commission, 2004, p. 53)
The other determinant of how to administer an evaluation is the amount of resources available for conducting the project. For example, when only a short timeframe is available to carry out an evaluation, the in-house option is the most appropriate.
b). The implementation of the evaluation comprises the evaluation methodology, built from tools and techniques that help answer the questions within the evaluation framework, taking into account time, budget and data limitations. Furthermore, the methodology requires considerable "desk research" on secondary sources at the beginning of the evaluation, followed by the analysis of data collected in the field. (European Commission, 2004, p. 54)
It is essential to mention the four phases of evaluation as presented by the European Commission in "Evaluating EU Activities. A practical guide for the Commission services": the structuring phase, the data collection phase, the analysis of data and the formulation of judgments. They are broadly characterised below.
- The structuring phase: covers the preparatory work from setting the TOR to the delivery of the inception report; the main objectives are to provide the information needed by the users, to ensure that the evaluation method is consistent with the data collection and that the agreed tasks will contribute to the performance of the evaluation process.
- The data collection phase: begins after the inception report resulting from the structuring phase is delivered, and follows the methodology proposed by the evaluator and accepted by the steering group. The data are both qualitative and quantitative and are complemented by the evaluator's own judgment.
- The analysis of data: consists of processing the individual data collected through the various tools and techniques, and offers a broad analysis of information from documentary and statistical sources and other monitoring systems. The evaluator has to examine the causes and effects that occurred.
- The formulation of judgments: this final phase represents the reflection on conceptual issues in the light of the reports delivered. The evaluators judge the effectiveness, the efficiency, the utility, the sustainability and the relevance on the basis of the information provided. The tools and techniques used will be presented in Section 2.2.2, Evaluating tools.
C. The validation of evaluation reports concerns the different reports delivered by the evaluators at the different stages the evaluation has reached. The reports are identified below, as presented in the European Commission publication "Evaluating EU Activities. A practical guide for the Commission services":
a). The inception report: as discussed before, the inception report is delivered at the end of the structuring phase and sets out the methods to be used in implementing the evaluation, while some inception reports also present results of initial analyses; its essential purpose is to confirm that the evaluation plan is feasible.
b). The interim report: contains the analysis of primary and secondary data, which can already contribute to answering some evaluation questions.
c). The draft final report: contains the evaluator's conclusions concerning the questions posed in the TOR, a synthesis of the judgments and some recommendations relating to the results. It remains a draft because it still has to go through a preliminary quality assessment phase.
d). The final report: the version produced on the basis of the quality assessment and the steering group's discussion of the draft final report.
III. Finalizing an evaluation
The end of an evaluation project requires reporting the results, disseminating them and using
the evaluation findings. The three related aspects will be further discussed.
A. Reporting
The evaluation report consists of numerous items, such as the scope and context of the evaluation, the goal and objectives pursued, and the methodologies (questions to be asked, standards to be applied, rules to be obeyed, procedures to be followed) chosen to fulfil the goal.
The structure of a report needs to be clear and understandable, and it comprises the following items: the executive summary, the main report and the technical annexes.
According to the European Commission publication "Evaluating EU Activities", the main report is the most complex document, being organised into three groups of chapters as follows:
1. Introductory chapters on the evaluation objectives and scope
2. Descriptive chapters presenting the activity to be evaluated and the method used
3. Substantive chapters presenting the results of the analysis, their interpretation and
subsequent conclusions
B. Dissemination of evaluation results
The evaluations of the European Commission's development and co-operation programmes in third countries serve accountability, but at the same time they aim to conclude with recommendations and lessons from both successful and unsuccessful programmes, in order to correct and add more value to the next generation of programmes. Thus, dissemination and feedback represent an essential aspect to be discussed when analysing the evaluation processes of EC programmes. (OECD, 2003, p. 1)
The Guidelines on dissemination and feedback of evaluations present these features as carried out by the Evaluation Unit. The purpose of disseminating evaluation reports is thus to give stakeholders, both internal and external to the Services, assurance about the actual report.
In order to deliver the evaluation findings as coherent documents it is necessary to set up a dissemination strategy that begins, right from the design of the Terms of Reference, with determining where the reports will go and to which users they will be of use. The list below, presented in the publication "Evaluating EU Activities" published by the EC, names the parties potentially interested in receiving and using these evaluations:
a). Key policy-makers and interested institutional parties
b). Managers and operators of the intervention being evaluated
c). Addressees of the intervention (civil society, NGOs, private firms or individuals)
d). Other services of the Commission
e). Other interest groups (organisations, groups or individuals focused on these topics, academia-related groups or individuals)
The channels for delivering these reports are numerous, as the European Commission publication "Guidelines for dissemination and feedback of evaluations" presents: 1) distribution of the report via mail and e-mail, 2) the Internet, 3) newsletters, 4) seminars and workshops.
Feedback is provided by the Evaluation Unit and, as required by the Board of Directors of EuropeAid, aims at taking up the lessons and recommendations and implementing them "in new operations and in the decision-making processes" (OECD, 2003, p. 4), as well as at reporting back to the Evaluation Unit, in a systematic manner, on the status of use of these lessons and recommendations. In the feedback and follow-up field, timing and quality are the most important items to be considered.
C. Using of evaluation results
The biggest challenge when finalising an evaluation is to determine the most suitable way to deliver the evaluation report in a form likely to be used directly. Several factors influence the way the results are used, such as the level of involvement of the future, potential users in the evaluation process or their expectations of the evaluation. The involvement of users in the evaluation process shapes the actual construction and dissemination of the results. As a result of all of the above, the most important aspect of using evaluation results properly is to target the right audience from the beginning.
In conclusion, this section aimed to define the evaluation process while distinguishing it from other types of examination within a project. The different temporal categories of evaluation revealed their usefulness, and the evaluation function was presented with its profile, roles and tasks. The final discussion offered a detailed presentation of the stages that compose an evaluation process, from designing it, through its actual implementation, to finalising it. All of this information will be of importance when commencing the comparative analysis of the evaluation reports for the chosen countries.
2.2. Evaluation methods
The evaluation methods will be presented as proposed and publicly distributed by the European Commission, through the Joint Evaluation Unit composed of DG External Relations, DG Development and the EuropeAid Co-operation Office. In their "Evaluation methodology" section they put forward four volumes covering different aspects of evaluation. These volumes, entitled "Evaluation Methods for the European Union's External Assistance", are the following:
1). Volume 1– Methodological Bases for evaluation
2). Volume 2 – Guidelines for Geographic and Thematic evaluations
3). Volume 3 – Guidelines for project and programme evaluation
4). Volume 4 – Evaluation tools
Section 2.2.1 covers the essentials of the first three volumes, information on which the paper's objective can be built. The "Evaluation tools" will be discussed separately in the following section.
2.2.1. Methodology of evaluations
As previously mentioned, a clear and concrete structure will be built according to the guidelines retrieved from Volumes 1, 2 and 3 of the Evaluation Methods for the European Union's External Assistance.
Volume 1 – Methodological Bases for evaluation
The volume presents the process and methods used for evaluation. The process covers elements such as the object, the timing, the utilisation, and the players and their roles (European Commission, 2006, p. 14), while the methods consist of "intervention strategy; evaluation questions (usefulness, feasibility, formulation); judgment references (criteria and indicators); methodological design; data collection and analysis; value judgment (conclusions, lessons and recommendations); and finally quality assurance" (European Commission, 2006, p. 14)
I. The process
1. The first section presents the subject of the evaluation: it defines the evaluation of a programme and the scope of the evaluation, and it identifies the sectors, themes and cross-cutting issues.
The definition of a programme, of a homogeneous programme and of their evaluation was already given in Section 1.1.1. of the current paper.
Public interventions can be categorised by sector or theme, while cross-cutting issues are the subject of thematic evaluations. The sector classification depends on the nature of the activities and outputs, while the cross-cutting issues target the impacts.
The scope of the evaluation is defined in direct connection with the territory, the period under discussion and the regulations involved.
2. The second section refers to the timing of evaluations, discussing ex ante, interim and ex post evaluations, an aspect fully described in Section 2.1.1 of the current paper.
3. The third section discusses the use of an evaluation, describing the users, the types of use and the dissemination of the evaluation. Among the users we can mention policy makers and designers, managers, partners and operators, as well as the other actors involved. Depending on the use, an evaluation can either assist decision-making or support the formulation of judgments. The dissemination of an evaluation is thoroughly presented in Section 2.1.3 of the present paper.
4. The fourth section identifies the players and their roles in an evaluation process. The players, as detailed in Section 1.3, are the evaluation manager, the reference group, the evaluation team and the stakeholders. Their roles can be found in the mentioned section.
II. The methods
1. The first methodological element is the intervention strategy, which comprises the rationale of the intervention, its logic and the related policies.
The intervention rationale is set out in the programming documents and aims to "satisfy the needs, solve the problems or tackle the challenges that are considered to be priorities in a particular context and that cannot be addressed more effectively in another way" (European Commission, 2006, p. 37). The exact definition is quoted here for two reasons: firstly, to underline the usefulness and importance of starting an evaluation from a well-thought-out rationale and, secondly, to introduce the imperative of sorting priorities.
The logic of an intervention, on the other hand, refers primarily to the activities and secondarily to their effects (outputs, results and impacts). When the intervention logic is "faithful", it follows exactly the initial programming documents; it can also appear as a reconstructed logic, capturing other effects that might be encountered and modifying the initial objectives that the intervention aims to achieve. In that case it is mandatory to state that the logic is no longer "faithful". The logic can be organised by means of a logical framework, a diagram of objectives or a diagram of expected effects.
The related policies refer to the link between the evaluated intervention and other relevant, similar interventions, making it possible to respond quickly to questions of coherence, complementarity and relevance and to compare the quality of the current intervention with that of previous similar interventions.
2. The second methodological element is the evaluation questions, a part that covers the usefulness, origin and selection of questions, as well as the relation between the questions and the evaluation criteria and the preparation of an evaluation question. (European Commission, 2006, p. 43)
The evaluation questions are a very useful aspect of the evaluation process. They focus attention on a limited number of main aspects, leading to a narrower data collection process and a more precise examination, which in turn produces a more useful report. This focus is achieved with a maximum of around ten questions, covering the activities, the targeted group, the expected effects and the evaluation criteria. There are three categories of questions: those inferred directly from the intervention logic, those inferred indirectly from it, and other types of questions that do not focus on effects contained in the intervention logic.
The selection of questions follows three steps: identifying the questions, assessing the potential usefulness of their answers and assessing their feasibility. Simply stated, a question is worth posing either because its answer is unknown, because somebody is questioning something, or because its findings would be constructive.
The identification of questions draws on the analysis of the logic and rationale of the intervention, on the issues for which the evaluation is needed as stated in the TOR, and finally on the questions posed in the ex ante evaluation. (European Commission, 2006, p. 43)
Secondly, questions are also raised by the persons who commission the evaluation and by the evaluation team, and derived from the expectations of the reference group. Finally, the candidate questions are analysed to establish their level of usefulness and their feasibility; a simple filtering sketch is given below.
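As an illustration only (the Commission guidance describes these steps in prose, not as an algorithm), the three-step selection can be sketched as a simple filter; the scoring fields and thresholds are hypothetical.

```python
# Hypothetical sketch of the identify -> assess usefulness -> assess feasibility steps.
from dataclasses import dataclass

@dataclass
class CandidateQuestion:
    text: str
    usefulness: int   # e.g. 1 (low) to 5 (high), assigned by the reference group
    feasibility: int  # e.g. 1 (low) to 5 (high), assigned by the evaluation team

def select_questions(candidates: list[CandidateQuestion],
                     max_questions: int = 10) -> list[CandidateQuestion]:
    """Keep only questions judged both useful and feasible, capped at ~10 as Volume 1 suggests."""
    retained = [q for q in candidates if q.usefulness >= 3 and q.feasibility >= 3]
    retained.sort(key=lambda q: (q.usefulness, q.feasibility), reverse=True)
    return retained[:max_questions]
```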
The evaluation criteria represent a methodological element of great importance. The seven criteria are connected to the main "viewpoints" of the evaluation content and are presented graphically in the figure below:
Figure 2: Evaluation criteria representation retrieved on October 2013 from
“Evaluation Methods for European Union’s External Assistance”, Volume 1, p. 50
Examining Figure 2, we can observe the interdependency and close connection between the seven evaluation criteria and the viewpoints of an evaluation process. As mentioned before, the rationale of an intervention aims at determining the needs and at resolving problems and challenges, while the logic of the same intervention leads to the determination of the expected effects (outputs, results and impacts). In both cases the objectives are required to be achieved. Moreover, a reconstructed intervention logic is no longer "faithful" and introduces new objectives not contained in the strategy. In conclusion, we can observe how all the elements previously stated form a cycle to which the evaluation criteria are attached.
The first volume of the "Evaluation Methods for European Union's External Assistance", published by the Joint Evaluation Unit, identifies and defines the key concepts that form the basis of the evaluation process, namely the evaluation criteria. Their definitions are to be found in the table below, compiled by the author.
Evaluation criterion – Representation
- Relevance: "The extent to which the objectives of the development intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' and EC's policies."
- Effectiveness: "The extent to which the development intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance."
- Efficiency: "The extent to which outputs and/or the desired effects are achieved with the lowest possible use of resources/inputs (funds, expertise, time, administrative costs, etc.)"
- Sustainability: "The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits. The resilience to risk of the net benefit flows over time."
- Impact: "Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended."
- Coherence/Complementarity: Coherence within the Commission's development programme; coherence/complementarity with the partner country's policies, with other donors' interventions and with the other Community policies.
- Community value added: The extent to which the development intervention adds benefits to what would have resulted from Member States' interventions only in the partner country.
Table 2: The definition of the evaluation criteria
Retrieved in May 2014. Source: compiled by the author from "Evaluation Methods for European Union's External Assistance", Volume 1, pp. 50-52
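Since the comparative analysis later records which of these criteria each country report addresses, a minimal, purely illustrative checklist is sketched here; it is the author's own device, not an EC artefact.

```python
# Hypothetical coverage checklist based on the seven criteria of Table 2, used only to
# illustrate how a comparative analysis could record which criteria a report addresses.
EVALUATION_CRITERIA = [
    "relevance", "effectiveness", "efficiency", "sustainability",
    "impact", "coherence/complementarity", "community value added",
]

def coverage(report_criteria: set[str]) -> dict[str, bool]:
    """Mark, for one evaluation report, which of the seven criteria it explicitly covers."""
    return {criterion: criterion in report_criteria for criterion in EVALUATION_CRITERIA}

# Example with made-up data for a single country report:
print(coverage({"relevance", "effectiveness", "impact"}))
```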
The preparation of the evaluation questions follows a logical line. It begins with determining whether or not the questions are appropriate to the evaluation; secondly, the purpose of each question is stated. The next two steps ensure the connection of the questions with the intervention logic and with the evaluation criteria. Finally, the questions are written.
3. The third methodological element is the judgment references, which also consist of different aspects, namely the judgment criteria, the target levels, the indicators and the path from the evaluation question to criterion and indicator.
The judgment criterion, also called "reasoned assessment criterion", is an aspect of the evaluation that makes it possible to examine the success of the intervention, while the target level represents either an objective defined in a verifiable way, a comparable good practice or the actual best practice within the intervention. The indicators concern either the quantitative examination of the evaluation, expressed as ratios or rates, or the qualitative examination (the descriptive side of an evaluation). The presentation given in the first volume of the "Evaluation Methods for European Union's External Assistance" is reproduced below:
Table 3: Indicators and intervention logic, retrieved in May 2014, from “Evaluation
Methods for European Union’s External Assistance”, Volume 1, p. 60
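To make the notions of indicator and target level concrete, a minimal, purely illustrative calculation follows; the figures and names are invented by the author and are not taken from any EC report.

```python
# Hypothetical quantitative indicator checked against a target level.
def indicator_rate(achieved: float, planned: float) -> float:
    """A simple rate-type indicator: share of the planned outputs actually delivered."""
    return achieved / planned

# Invented example: 420 classrooms built out of 500 planned, target level set at 80 %.
rate = indicator_rate(achieved=420, planned=500)  # 0.84
meets_target = rate >= 0.80                       # True -> judged successful on this criterion
print(f"Indicator: {rate:.0%}, target met: {meets_target}")
```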
4. The methodological design presents how to design the table per question, the reasoning chain and how to optimise the overall design.
The methodological design is extremely useful for the evaluation team, supporting it in answering the questions and in concluding with its own interpretation. It is composed of the following: the chain of reasoning that will be followed; a strategy for collecting and analysing data; selected investigation areas; a series of specifically designed tools; and a work plan. (European Commission, 2006, p. 63)
The example of a design table for a given question proposed by the same document is shown in Table 4, providing a clear understanding of all the aspects of the methodological design:
Table 4: Structure of a design table retrieved in May 2014, from “Evaluation Methods
for European Union’s External Assistance”, Volume 1, p. 65
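As a rough illustration of what such a design table gathers for each question, one question's design could be represented as a structured record; the field names paraphrase Volume 1 and the example values are entirely invented, so this is not an official schema.

```python
# Hypothetical structure paraphrasing the per-question design table of Volume 1.
from dataclasses import dataclass

@dataclass
class QuestionDesign:
    question: str
    judgment_criteria: list[str]
    indicators: list[str]
    chain_of_reasoning: str
    investigation_areas: list[str]
    tools: list[str]            # e.g. "interviews", "focus groups", "case studies"
    work_plan: str

example = QuestionDesign(
    question="To what extent did budget support improve primary school enrolment?",
    judgment_criteria=["enrolment increased in targeted regions"],
    indicators=["net enrolment rate", "share of girls enrolled"],
    chain_of_reasoning="budget support -> sector budget -> school construction -> enrolment",
    investigation_areas=["ministry of education", "two sample regions"],
    tools=["documentary review", "interviews", "field visits"],
    work_plan="desk phase months 1-2, field phase month 3",
)
```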
In order to optimise the design there are different solutions, such as combining tools and questions and preparing the overall assessment while taking into consideration the allocation of resources and the cost and time constraints. Lastly, the development of tools points ahead to Volume 4 of the "Evaluation Methods for European Union's External Assistance", where the actual tools on which the evaluation is built are presented. In our case, there are also some preliminary stages before the evaluation tools are implemented.
5. The data collection element comprises the work plan, the frequent difficulties and their solutions, and the reliability of the data collected.
The work plan consists mostly of collecting data using the evaluation tools presented further on, such as interviews, field visits, questionnaires and observation. The frequent difficulties include access to information, cultural gaps, and a lack or weakness of data; for these issues diverse solutions can be adopted, depending on the case.
The reliability of the data is also a challenge the evaluation team can face, threatened by confirmation bias, self-censorship, informants' strategies, question-induced answers, empathy bias, unrepresentative samples and sample selection bias.
6. The analysis element encompasses the different analysis strategies, the analysis process and the validity of the analysis.
The change analysis strategy highlights the change observed in quantitative and/or qualitative indicators over time, while meta-analysis brings to light findings from other evaluations and studies. The last two analysis strategies, attribution analysis and contribution analysis, focus on cause and effect. The analysis process is composed of data processing, exploration, explanation and confirmation, while the validity of the analysis has three branches: external, internal and construct validity.
7. The judgment is composed of conclusions, lessons and recommendations (the latter related to the conclusions, but not a copy of them). The first two are based on the judgment criteria and at the same time respect ethical principles such as impartiality, legitimacy, protection of individuals and responsibility.
8. The quality assurance follows the "rules of the game" and is characterised by the approval of deliverables.
The rules of the game concern the approval of documents, the quality criteria and the dissemination of the quality assessment of the final report. (European Commission, 2006, p. 90) The quality criteria are: meeting needs, relevant scope, defensible design, reliable data, sound analysis, credible findings, valid conclusions, useful recommendations and a clear report. (European Commission, 2006, pp. 91-92)
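Since the comparative analysis later draws on quality assessment grids, a minimal illustrative sketch of such a checklist is given here; the rating labels and the summary rule are the author's assumptions, not the official grid.

```python
# Hypothetical quality assessment checklist based on the nine criteria listed above.
QUALITY_CRITERIA = [
    "meeting needs", "relevant scope", "defensible design", "reliable data",
    "sound analysis", "credible findings", "valid conclusions",
    "useful recommendations", "clear report",
]

def assess(scores: dict[str, str]) -> str:
    """Summarise a report's quality from per-criterion ratings such as 'good' or 'poor'."""
    missing = [c for c in QUALITY_CRITERIA if c not in scores]
    if missing:
        return f"Incomplete grid, missing: {', '.join(missing)}"
    weak = [c for c, rating in scores.items() if rating in ("poor", "unacceptable")]
    return "Acceptable overall" if not weak else f"Weak on: {', '.join(weak)}"
```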
In conclusion, the evaluation methodology is a complex, well-structured and clear process. The process itself covers the subject, timing and use of the evaluation and the players with their roles, while the methods are more numerous and more elaborate. They follow a clear rational line from the intervention strategy, through the evaluation questions and judgment references, to the methodological design and the data collection, arriving finally at the analysis, the judgment and the quality assurance.
Volumes 2 and 3
The second and third volumes of the publication "Evaluation Methods for European Union's External Assistance" have the same structure but different content. Volume 2 offers guidelines for geographic and thematic evaluations, while Volume 3 offers guidelines for project and programme evaluation in general.
Even though the four countries subject to evaluation in the current paper were also chosen on geographical grounds, the presentation will focus on Volume 3, namely the guidelines oriented more towards project and programme evaluation.
The structure of Volume 3 is divided into two parts: guidelines for the evaluation manager and guidelines for the evaluation team. Both parts cover the five mandatory phases of an evaluation of a project or programme. Since the paper's goal is to analyse the comparability and coherence of the four evaluation reports with regard to the evaluation guidelines, covering in detail all the phases for the manager and the team falls outside the proposed objectives. If information from these two volumes proves necessary, supplementary theoretical points will be added in Chapter 3 along with the flow of the comparative analysis.
A summary of the evaluation process is presented below; it consists of five phases, including Phase 0.
1. Preparatory phase (Phase 0)
The evaluation manager is appointed; he or she also has to name the reference group, decide on the TOR and contract the external evaluation team.
2. Desk phase (Phase 1)
The external evaluation team familiarises itself with the intervention logic by consulting the related official documents and proposes the evaluation questions and judgment criteria. The questions have to be approved by the reference group, after which the team settles on the indicators and offers partial answers to the questions. Shortly afterwards, the assumptions to be tested are clear and the next stage, data collection and analysis, can begin.
3. Field phase (Phase 2)
The evaluation team travels to the country or countries concerned and starts the work, which is to collect data, apply the evaluation techniques and begin testing the assumptions against the results.
4. Synthesis phase (Phase 3)
The final report is presented by the evaluation team. It includes the findings and the conclusions, addressing the questions asked and the overall assessment. The recommendations of the evaluation, clustered and ranked according to priority, are also provided. Afterwards, the quality assessment process takes place.
5. Dissemination and follow-up phase (Phase 4)
The evaluation reports, in all their forms, are sent to the policy-makers and to the services and partners concerned. The report is also made public and can be found on the European Commission's official website. From that point on, a follow-up is carried out on the application of the recommendations offered. (European Commission, 2006, p. 6)
When analysing a final report, it must be checked closely whether these phases were respected; a simple way to record that check is sketched below.
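Purely for illustration of that check (this is the author's own device, not an EC artefact), the five phases can be kept as an ordered checklist against which each country report is verified.

```python
# Hypothetical checklist of the five phases of Volume 3, used to record whether
# a given evaluation report shows evidence of each phase.
PHASES = [
    "Phase 0: preparation",
    "Phase 1: desk",
    "Phase 2: field",
    "Phase 3: synthesis",
    "Phase 4: dissemination and follow-up",
]

def phases_not_evidenced(evidence: dict[str, bool]) -> list[str]:
    """Return the phases for which the analysed report shows no evidence."""
    return [phase for phase in PHASES if not evidence.get(phase, False)]

# Invented example for one report:
print(phases_not_evidenced({"Phase 0: preparation": True, "Phase 1: desk": True}))
```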
In conclusion, the three volumes of "Evaluation Methods for the European Union's External Assistance" published by the Joint Evaluation Unit, itself composed of DG External Relations, DG Development and the EuropeAid Co-operation Office, represent a solid background for the current research paper. Their content will frequently be needed in order to draw conclusions on the findings of the research process.
2.2.2. Evaluating tools
This section presents in more detail the tools proposed for use in evaluations of the European Commission's development and co-operation programmes. These are set out in the fourth volume of the "Evaluation Methods for the European Union's External Assistance", created by the Joint Evaluation Unit (DG External Relations together with DG Development and the EuropeAid Co-operation Office).
Before presenting the applicability and purposes of the numerous tools, it is essential to emphasise that the tools can be used as individual instruments for collecting data, but in most cases they are combined in order to achieve more in-depth reasoning and multiple, more complex input possibilities.
Moreover, the tools are correlated with four functions that the evaluator employs. Depending on the type of tool used, the functions are divided into main and secondary. The four functions attributed to the tools are: organisation, observation/collection, analysis and judgment. (European Commission, 2006, p. 12)
The evaluation tools are defined next:
Stage 1: Verifying that all of the four reports are integrally retrieved from the European Commission platform, on the DG DEVCO “Reports” section ........ 61
Stage 2: Examining each of the four reports in order to observe the respectfulness in their composition with regard to the Evaluation Guidelines proposed by the EC ........ 62
Stage 3: Comparing the findings from the four summaries done in the previous stages ........ 78
Stage 4: The elaborated comparative analysis ........ 88
Conclusion ........ 117
Annexes ........ 134
Annex 1 – Evaluation Methodologies by country ........ 134
Annex 2 – Evaluation questions and answers by country ........ 138
Annex 2.1. Coverage of the evaluation criteria and questions ........ 148
Annex 3 – Quality Assessment Grid by country ........ 150
Annex 4 – Terms of reference representation by country ........ 154
Annex 5 – Data collection and analysis by country ........ 155
Annex 6 – Main conclusions and recommendations ........ 157
Annex 7 – Composition ........ 159
List of Tables

Table 1: Evaluation concepts through the eyes of Joseph S. Wholey / Source: created by the author using information from the original source, retrieved in February 2014 (Shadish, Cook, & Leviton, 1995, p. 225) ........ 18
Table 2: The definition of evaluation criteria ........ 45
Table 3: Indicators and intervention logic, retrieved in May 2014 from “Evaluation Methods for European Union’s External Assistance”, Volume 1, p. 60 ........ 46
Table 4: Structure of a design table, retrieved in May 2014 from “Evaluation Methods for European Union’s External Assistance”, Volume 1, p. 65 ........ 47
Table 5: Own Quality Assessment Grid ........ 91
Table 6: Similarities and differences based on the evaluation criteria between the four countries, table constructed by the author ........ 102
Table 7: Differences and their consequences in using data collection tools in the four cases, table constructed by the author ........ 108
Table 8: Topics linking the recommendations by country, table constructed by the author ........ 111

List of Figures

Figure 1: Evaluation, monitoring and audit functions of the EC ........ 26
Figure 2: Evaluation criteria representation, retrieved in October 2013 from “Evaluation Methods for European Union’s External Assistance”, Volume 1, p. 50 ........ 43
Abstract

In an age when authorities around the globe are working to bring nations closer together economically and politically, development co-operation programmes are a subject of major importance. These programmes cover a wide range of areas, such as health and environmental systems, the empowerment of women, educational structures, peace building, food security, the eradication of poverty, trade and rural development, and justice and human rights. Given the complexity of these objectives, the involvement of many actors is required: the United Nations, the OECD and the European Commission, numerous civil society organisations and humanitarian aid NGOs, together with national associations and governments’ policy makers, all belong to the group of actors carrying this global movement forward. The present paper aims to shed light on the outcomes of the European Commission’s development co-operation programmes by discussing one of their most pertinent aspects: the evaluation of development co-operation programmes. The analysis embeds a comparison of such evaluations for four carefully chosen countries.
“We can evaluate anything – including evaluation itself” (Shadish, Cook, & Leviton, 1995, p. 7)

Introduction

The paper focuses, firstly, on making a detailed assessment of a specific category of evaluations, the European Commission’s evaluations of development co-operation programmes, and secondly, on establishing a comparison of four such evaluations along the aspects discussed further in the paper. The assessment sets the groundwork by presenting which tools are used, and with what results, for evaluating activities in the European Union. More precisely, the study addresses the guidelines set by the European Commission for project and programme evaluation in developing countries (evaluations conducted on DEVCO’s behalf¹). The phases required in each evaluation process will therefore be presented in detail (the preparatory, desk, field, synthesis, and dissemination and follow-up phases). Moreover, the evaluation methodology will be clearly set out, recalling in particular the key methodological elements used: relevance, effectiveness, efficiency, sustainability and impact, endorsed by the OECD-DAC², as well as two additional criteria proposed by the European Commission, added value and coherence. The needs assessment components and the Quality Assessment Grid will also play an important role. The knowledge gained will serve the paper’s aim, namely delivering an analysis of four such evaluations in which the comparability, or at least the coherence, of the definitions used to frame these key elements is examined in parallel. Additionally, further research questions, applied to all cases in a comparative way, will conclude the paper.

1 DEVCO is the EuropeAid Development and Co-operation Directorate-General, a department of the European Commission described later in the paper.
2 The Organisation for Economic Co-operation and Development – Development Assistance Committee.
In order to achieve the goal of the paper, some related concepts and background need to be introduced. Consequently, the formation of the European Union and the roles of its current institutions will be brought into the picture, the European Commission’s components, functions and responsibilities will be carefully set out and, last but not least, the development co-operation programmes run by the European Commission will be highlighted. In addition, a short presentation of the category of developing countries within the global framework will be given, underlining the reasons for choosing the four countries that are the subject of the thesis: Colombia, Jamaica, Nepal and the Philippines.

The European Union: creation, functioning and current status

The founding of the European Union created a completely new situation in recent history, and an impressive amount of work has been invested in establishing what we now know as the EU. In 1957, six states signed the Treaty of Rome, which brought to life the European Economic Community and the European Atomic Energy Community. The grounds for such a unification of powers were to secure peace and to put an end to the conflicts between France and Germany. In the following years, a multitude of important steps were taken, such as common policies for agriculture, fisheries and regional development, common external tariffs, a single market and the creation of an Economic and Monetary Union. Other states joined these agreements, and in 1992 the signing of the Maastricht Treaty led to the creation of what we now know as the European Union. Currently, the European Union is composed of 28 countries and recognizes five other countries as candidates for membership.

The institutional structure of the European Union is a complex system that confirms the thorough integration of the Union at both the political and the economic level. The “Big-five” institutions are the European Council, the Council, the European Commission, the European Parliament and the Court of Justice of the EU. While the European Council is considered the highest political-level body without a formal role in EU law-making, the Council of the EU is the main decision-making body of the EU. The European Parliament, in turn, has two main tasks, namely to share legislative powers with the Council of Ministers and the Commission, and to oversee the EU institutions (especially the Commission). Because EU laws and decisions are open to interpretation, disputes are likely to arise; the Court of Justice has the power to settle them.

Since the fifth component of the EU’s institutional framework, the European Commission, is the subject of interest for this paper, a more detailed description is provided. The European Commission is the executive branch of the EU; it enforces the Treaties and drives European integration forward in the following ways:
i. it proposes legislation to the Council and Parliament;
ii. it administers and implements EU policies;
iii. it provides surveillance and enforcement of EU law in coordination with the EU Court. (Baldwin & Wyplosz, 2009, p. 72)

The European Commission is at the heart of the so-called Community Method. It was designed as the independent “motor of the integration process, with the duty to act in the interest of the entire Union” (Devuyst, 2005, p. 12). The bureaucratic composition of the Commission is complex; its work is separated into departments and services. The departments are called Directorates-General (DGs), each dealing with a different policy area, while the services are in charge of more administrative responsibilities.
The objective of the paper is to analyze a number of public reports produced for one particular DG, EuropeAid Development and Co-operation (DEVCO). As DEVCO’s official source states, this DG “is the result of the merger of parts of the former Directorate-General for Development and Relations with African, Caribbean and Pacific States with the former EuropeAid Co-operation Office. EuropeAid is now responsible not only for defining EU development policy but also for ensuring the effective programming and implementation of aid” (European Commission). As a result, DEVCO now encompasses both areas, co-operation and development, and acts as a central point for internal and external stakeholders on various matters. Moreover, the Development and Co-operation department is the sole interlocutor for the European External Action Service. The role of DEVCO is extremely important: its job is to provide state-of-the-art development policies for all emerging markets, to work towards a coherent development policy while making sure the mechanisms used are renewed when needed and properly applied, to keep putting forward beneficial new development policies, and to remain responsive and involved in the world’s problems. (European Commission) The DG’s objective is to contribute to the well-being of the globe, acting where needed to reduce poverty, to foster sustainable development and to support peace, security and democracy. In addition, the external aid instruments also form part of DEVCO’s remit. (European Commission)

EuropeAid Development and Co-operation is the main resource for the evaluative analysis. The sources used are the evaluation reports on four different countries, evaluations of the development co-operation programmes carried out by the Commission in developing countries. The countries were carefully chosen so as to have been evaluated in the same period of time and by two different types of evaluation consortiums.
In conclusion, the goal of the paper is to carry out a comparative assessment of the countries’ evaluation reports, aiming to show how well they respect the basic concepts central to any evaluation, such as relevance, effectiveness, efficiency, sustainability and impact. The assessment will generate its own new set of findings, a quality assessment grid and a set of recommendations.
Chapter 1: Overview of Program Evaluation: definition and components

From early on, human activity has tended to be organized into well-ordered sequences of tasks. Taken together, these tasks form what would now be defined as a “project” or a “programme”: broadly speaking, a series of detailed activities assigned to a number of participants in order to achieve a specific goal while respecting the various objectives along the way. As a logical next stage, shortly after the first projects were created, flaws and genuine problems were detected, and the need to evaluate the way projects are conducted came to light. Evaluation thus came to be seen as a crucial component for the improvement of projects and programs.

One point of view sees the emergence of the evaluation of social programs as a series of steps aimed at eliminating the causes of problems. The progression could be “(a) identifying a problem, (b) generating and implementing alternatives to reduce its symptoms, (c) evaluating these alternatives, and then (d) adopting those that results suggest will reduce the problem satisfactorily” (Shadish, Cook, & Leviton, 1995, p. 21). This concept is comparable to the projects people have carried out in their daily activities over the centuries. In turn, this reasoning was recommended to be applied to bigger targets, namely social programs, in order to evaluate their performance.

1.1. Types of Program Evaluation and different approaches

This section of the paper intends to provide a detailed picture of programs and program evaluation, to present in parallel different program evaluation theories, and to create an understanding of the economic and political perspectives on the evaluation process.
Finally, the section concludes by summarizing the means of evaluating the evaluation of the European Commission’s development co-operation programs, which is the final outcome sought by this research.

1.1.2. Definition of Program

“A program is an organized collection of activities designed to reach certain objectives” (Royse, Thyer, & Padgett, 2009, p. 5) and requires, above all, the involvement of human resources. In the authors’ view, the characteristics of a good social service program are staffing (common to every program) and the following elements: budgets, stable funding, a recognized identity, a conceptual or theoretical foundation, a service psychology, systematic efforts at empirical evaluation of services and an evidence-based research foundation. Because the goal of the paper is to compare the evaluations carried out on specific programs, there is no need to provide further detail on the definition and components of a program itself. An important aspect of program evaluation is that it contains in its structure a certain theoretical model “that would have examined the problem – how and why it originated and what would work best to remedy the situation” (Royse, Thyer, & Padgett, 2009, p. 8).

From a more economic perspective, and to avoid any confusion, a program is defined in contrast to a project; it is seen as “a group of related projects designed to accomplish a common goal over an extended period of time (..) The major differences lie in scale and time span.” (Larson & Gray, 2011, p. 6) A more precise clarification of this difference is made by looking at the organization’s composition: “The parent organization has a change objective which may require contributions from several different areas, or several different types of project for its achievement.” (Turner, 2009, p. 324) The explanation is, in turn, simple and concise, and adds value to the present research on the evaluation of programs.
1.1.1. Program Evaluation: definition, types and central concepts

“Evaluation is a profession in the sense that it shares certain attributes with other professions and differs from purely academic specialties such as psychology or sociology.” (Austin, 1981, p. 17) From the logic behind this framing, we can conclude that at the time this study of project and programme evaluations was made, evaluation itself was not a standardized process. Moreover, it represented a broad field that required the involvement of practitioners from assorted fields of activity. Admittedly, as the study was carried out against the specific situation of the United States of America, its conclusions may not fully match the procedures or techniques of program evaluation outside America. This issue remains to be discussed further.

Angela Browne and Aaron Wildavsky, two of the authors of “The politics of program evaluation”, consider that an evaluation acquires a more solid identity once the evaluators answer five important questions about it: “When?”, “Where?”, “For whom?”, “What?” and “Why?” (Palumbo, 1987, pp. 150-151). Regarding the first question, “When?”, evaluations can be retrospective (dependent on past events, so the program must already have started), prospective (which, in contrast, does not require prior implementation) or continuous (playing a proactive role). The question “Where?” refers to the institution within which the evaluations are done; they can range from informal to formal, that is, from evaluation in small agencies to evaluation by a government authority. “For whom?” specifies the stakeholders responsible for the performance and final delivery of the evaluation; they can be sponsors, government representatives and so forth. The question “What?” is the most complex one, revealing a multitude of possible evaluations when it comes to their scope: pseudo-evaluation, quasi-evaluation, “goal-fixed evaluation”, comprehension or inferential evaluation and so on (Palumbo, 1987, pp. 153-162). The last question classifies programme evaluations through the eyes of the final “consumer”: they can be utilization-focused, interactive, aimed at an evaluability assessment or, further, at a learning objective.

Moving to a more specific category of program evaluation, it is essential to introduce social programs, “designed to benefit the human condition” (Rossi, Lipsey, & Freeman, 2004, p. 6). Social programs were first active in fields such as education and public health but, with rapid economic development, many other programs became oriented towards “urban development and housing, technological and cultural education, occupational training, and preventive health activities” (Rossi, Lipsey, & Freeman, 2004, p. 8). From a broad perspective, the evolution of social program evaluation has shifted interest from the local level to the national level and, in the current period, to the worldwide level. The funding of such programs is supported in large part by governments and civil society sponsorship, up to global organizations bringing nations together. The focus at the moment is to “ameliorate a social problem or to improve social conditions” (Rossi, Lipsey, & Freeman, 2004, p. 17), with the corresponding evaluations above all probing the effectiveness of the programs concerned.

In order to move the focus to the subject proposed by the paper, certain definitions need to be mentioned carefully; this provides the transition from the general to more individual cases. The European Commission proposes that evaluation be seen as “a key Smart Regulation tool, helping the Commission to assess whether EU actions are actually delivering the expected result in the most efficient and effective way” (European Commission), where the Smart Regulation tools are created to help the Commission assess key concepts such as the effectiveness, efficiency and relevance of the implementation of policies, legislation, trade agreements and so forth. This category also includes the evaluation standards, standards similar to the ones applied by specialized organizations other than the Commission itself.
The guiding principles of these evaluation standards will be presented in more detail in Chapter 2.

Going through the Evaluation Guidelines proposed by EuropeAid, we find the European Commission’s definition of a “programme” and of its evaluation. The definition is simple and concrete, conveying the idea of grouped efforts to achieve overall aims: “a programme is a set of simple, homogeneous interventions grouped to attain global objective” (European Commission). With regard to the evaluation of a homogeneous programme, it is stated that this is a complicated process because of both the multiple cases and needs to be assessed and the “synergies effects between the different components of the programme” (European Commission). In this case, traditional techniques are employed (questionnaires and comparison groups), but with a visible reduction in the number of questions posed.

To conclude this section, the “12 Lessons” from the OECD DAC³ on “Evaluating Development Activities” will be briefly mentioned. Firstly, the Organisation for Economic Co-operation and Development is a very important worldwide actor in the development and co-operation fields, an organization that aims to contribute positively to global well-being through social and economic policies. Secondly, the Development Assistance Committee has as its objectives to achieve a better quality and quantity of development co-operation, to provide useful analysis of aid and assistance worldwide, and to offer DAC members the possibility of sharing their expertise. (OECD, 2013, p. 5) The lessons are:
1. “Base development policy decisions on evidence“ (OECD, 2013, p. 9)
2. “Make learning part of the culture of development co-operation“ (OECD, 2013, p. 13)
3. “Define a clear role for evaluation“ (OECD, 2013, p. 17)

3 The Organisation for Economic Co-operation and Development – Development Assistance Committee.
4. “Match ambition with adequate resources“ (OECD, 2013, p. 21)
5. “Strengthen programme design and management systems“ (OECD, 2013, p. 23)
6. “Ask the right questions and be realistic about expected results“ (OECD, 2013, p. 25)
7. “Choose the right evaluation tools“ (OECD, 2013, p. 27)
8. “Work together“ (OECD, 2013, p. 29)
9. “Help strengthen partner country capacities and use them“ (OECD, 2013, p. 33)
10. “Act on evaluation findings“ (OECD, 2013, p. 35)
11. “Communicate evaluation results effectively“ (OECD, 2013, p. 39)
12. “Evaluate the evaluators“ (OECD, 2013, p. 43)

A more detailed exemplification of the 12 Lessons is not of interest for the goal of the paper. Nevertheless, the content and reasoning of the advice given by the OECD will be addressed and closely followed in the research effort undertaken to achieve the paper’s objectives.

1.2. Program Evaluation Theories

A very solid and well-known evaluation theory was presented by Joseph S. Wholey (a respected scholar with a master’s degree in mathematics and a PhD in mathematical philosophy from Harvard, USA). The theory, known as Wholey’s theory of evaluation, has as its focal point the evaluation of government, that is, of the social programs conducted by the central power of a given state (once again, in the United States of America). This means that the objective of such an evaluation is to make sure the programs follow the required, standard course and perform in the public interest, rather than merely increasing gains at the federal level. The theory distinguishes three main categories to be evaluated: the customer market, the policy market and the program management market, the latter having the biggest impact in the evaluation framework. (Shadish, Cook, & Leviton, 1995, p. 227)
Regarding evaluation carried out to test the improvement of programs, Joseph S. Wholey presents a list of concepts that he considers of great importance. The table below identifies and characterizes in more detail the list presented by Wholey in the chapter “Evaluation for Program Improvement” of the book “Foundations of Program Evaluation”.

Results-Oriented Management – The management team is responsible for acting towards accomplishing the expected results of the program while making use of all known content and available resources.
Performance-Oriented Evaluation – In the same vein, the evaluation is an added value for managers’ performance.
Sequential Purchase of Information – Information can be purchased before its intended period of use (its future utility is higher than the price paid to acquire it), providing great assistance in the next steps.
Evaluability Assessment – Analyzing the program’s capability of delivering results, what needs to be done to obtain those results and, most importantly, whether the evaluation will bring improvements to the project/program.
Rapid Feedback Evaluation – The elements of performance (objectives and indicators) are re-analyzed and a better design is decided on.
Performance Monitoring – The process and the outcome are observed.
Intensive Evaluation – The actual evaluation is carried out, in which a rigorous package of tests assesses the link between the objective and the actual result of the process.
Service Delivery Assessment – A final evaluation, free of constraints, is carried out on the basis of the feedback received, without repeating the standard assessment of objectives and indicators already fulfilled by the previous stages.

Table 1: Evaluation concepts through the eyes of Joseph S. Wholey / Source: created by the author using information from the original source, retrieved in February 2014 (Shadish, Cook, & Leviton, 1995, p. 225)

Scriven’s theory of evaluation is another strong point in the evaluation literature. Michael Scriven developed a major theory in both an explicit and a general way. The evaluation sequence he proposes is, first, to have a clear idea of the criteria of merit, second, to determine the standards and, finally, to examine the performance. (Shadish, Cook, & Leviton, 1995, p. 94) His distinctive approach is to ask evaluators to know from the start the value of the object they will evaluate. Similar to the concepts of another theorist, Nicholas Rescher, Scriven’s valuation theory brings together two aspects: first the object, the subject of the evaluation, and second the actual valuation, involving the framework specific to the future evaluation. Moreover, the two aspects are completed by “a criterion of evaluation that embodies the standards in terms of which the standing of the object within the valuation framework is determined” (Rescher, 1969, p. 72). In conclusion, the methodologies used in the evaluation are of comparable value to the evaluation object, and the “criterion” needs to be carefully developed and respected.

1.3. Program Evaluators

Moving further and observing evaluation from a different angle, the evaluator is an important part of the process. In Carol H. Weiss’s book “Evaluation Research” we can find the means through which the evaluator exercises his or her expertise.
Moreover, the focus is on “ways by which the evaluator can help institute the conditions that permit sound research” (Weiss, 1972, p. viii). Broadly accepted, the role of the evaluator is to collect all the data and transform them into a coherent report. It is frequently the case that the evaluation report is not taken into consideration, for a wide range of reasons. The constraints that get in the way of using evaluations can be the evaluator’s perception of his or her role, the responsiveness to change of the organization in question, the way the evaluation report itself is perceived and handled, the discrepancies between the findings and the intended next steps, or the tendency of many evaluations to highlight mostly the negative findings. (Weiss, 1972, p. 110)

In order to create a realistic picture of the input evaluators bring to the evaluation itself, it is important to recall that most evaluators come from an academic research background, which means that they “tend to look to the academic community for recognition and reward” (Weiss, 1972, p. 111) and also “stop short of drawing conclusions when they report their results” (Weiss, 1972, p. 111). At the same time, other evaluators “perceive their role as encompassing the “selling” of their results” (Weiss, 1972, p. 113). Hence, there are different types of evaluators, differing in their expectations of involvement in a project: some aim only for fame, others consider themselves responsible only for studying and analyzing the data, leaving the recommendations to others, and there are also very committed evaluators who want their work to have an impact on the world. In addition, “evaluators’ ways of thinking are different from ordinary daily decision making, because they engage in a process of figuring out what is needed to address challenges through the systematic collection and use of data.” (Mertens & Wilson, 2012, p. 3) In conclusion, after presenting some constraints that may appear when reports are assessed and after describing the characteristics of some evaluators, we can see that there will always be differences between evaluations carried out in different organizations, by different evaluators.
Discussing further the gap between an evaluation and its recommendations, we can notice that even though the process is continuously improving, there are still situations in which the results are evasive or even ambiguous. This gap between the program evaluation and its actual recommendations, from Weiss’s point of view, needs to be filled with “intuition, experience, gleaning from the research literature assumptions based on theory, ideology and a certain amount of plain guessing” (Weiss, 1972, p. 125). Hence, the evaluation will reveal the unresolved issues of current programmes and underline what needs to be changed.

In the book “Evaluation: A Systematic Approach”, Peter Rossi and his co-authors propose a listing of the people most likely to be involved in an evaluation. The taxonomy is composed of:
- Policymakers and decisionmakers: persons in charge of the stages of a program, from inception to closure and from expansion to restructuring (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Program sponsors: the organizations that provide the funds for the program (they sometimes act in the same way as the first category) (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Evaluation sponsors: the organizations that fund the evaluation (sometimes the program sponsors and the evaluation sponsors are the same) (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Target participants: this category depends on the focus group or individuals targeted by the program to be evaluated (Rossi, Lipsey, & Freeman, 2004, p. 48)
- Program managers: “Personnel responsible for overseeing and administrating the intervention program” (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Program staff: personnel in charge of service delivery or of support (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Program competitors: the organizations that employ the same resources in other programs (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Contextual stakeholders: the groups directly interested in the implementation of the program, such as “other agencies or groups, public officials, or citizens’ groups (..) ” (Rossi, Lipsey, & Freeman, 2004, p. 49)
- Evaluation and research community: experts in evaluation and researchers in the same field who assess the quality and reliability of the evaluation (Rossi, Lipsey, & Freeman, 2004, p. 49)

This general overview of the composition of the group of stakeholders involved in evaluating a program is a helpful link for comparing this structure with the one that is the subject of the paper, namely the personnel involved in the evaluation of European Commission development co-operation programmes. In the case of the European Union’s external assistance, the evaluation organization is split into four broad categories: the evaluation manager, the reference group, the evaluation team and the stakeholders. (European Commission, p. 30) Briefly, their details are as follows:
- Evaluation manager: the person responsible for the evaluation process on behalf of the European Commission (European Commission, p. 30)
- Reference group: composed of representatives of the country at stake and European Commission members, chosen by the evaluation manager to help with the oversight and administration of the process (European Commission, p. 30)
- Evaluation team: the persons responsible for collecting and analyzing data, answering the research questions and drafting the evaluation report, in constant two-way collaboration with the other groups (European Commission, p. 30)
- Stakeholders: the persons (individuals, groups or associations) interested in the implementation of the program (European Commission, p. 30)
In conclusion, we can state that there are not many differences in how the group involved in program evaluation is composed, between the general case and the case of the European Commission’s evaluation methodologies.

1.4. Economic and Politic Perspectives of Program Evaluation

Another aspect of project evaluation is its economic involvement and effects. To shed light on the economic perspective it is necessary to introduce the former Economic Development Institute (EDI), an instrument of the World Bank currently known as the World Bank Institute. The WBI is defined as “a global connector of knowledge, learning and innovation for poverty reduction” (The World Bank) and aims to connect interested parties so that they work together and come up with solutions to development issues. Returning to EDI, and keeping in mind that the concept of project evaluation appeared around the 1960s, the response of the developing countries to such a sub-discipline of program evaluation “was that they had to plan, guide and direct their economies. Development planning was the watchword” (Davis, et al., 1997, p. 51). The indicators of great importance from an economic perspective are the levels at which program evaluation in developing countries manages to set acceptable quotas, tariffs and other prohibitions or requirements. Furthermore, in the roughly twenty years after the introduction of project evaluation, “EDI was a direct provider of project training for officials from developing countries” (Shadish, Cook, & Leviton, 1995, p. 85). Training took place at EDI (Washington, D.C.), at the regional level in collaboration with different trans-national institutions, and at the national level in the emerging markets.
In conclusion, all of this was happening outside Europe before the European Union had reached its final unified form, with all the competences of the member states and all the common powers to act.

The politics of program evaluation is a very interesting aspect to present because of the similarities between politics and evaluation. Firstly, whenever an evaluation is done it is normally integrated into a political decision, and evaluations ultimately become active in a political environment. Secondly, the very act of taking a position through an evaluation process is political, and the mutually supporting relationship between a program and its evaluation is likewise a political matter. Thirdly, in many cases “program evaluation” is perceived negatively as the “political evaluation of programmes”. (Palumbo, 1987, p. 12) The main reason this matter is controversial is that the association between politics and the evaluation of programmes can have negative influences on the evaluators themselves. The problems range from researchers and analysts using the evaluation as an “ideological tool” to focusing only on the political goal of the evaluation. Hence, the role played in policymaking determines the shared opinion on whether the involvement of evaluators in politics is beneficial or not. (Palumbo, 1987, p. 13)
Chapter 2: European Commission's approach to evaluations

This chapter consists of a thorough presentation of the evaluation process, starting from its definition and purposes and arriving at the definition of the different temporal types of evaluation, namely ex ante, interim and ex post evaluation. Further on, the evaluation function will be described, with its profile, tasks and roles. The creation of an evaluation report will be divided into designing, conducting and finalizing stages. Finally, the chapter will detail the evaluation methodologies used for the development co-operation programmes run by the Commission. This body of information will serve to introduce Chapter 3, the “Comparative analysis”, the research subject of the paper. The progressive introduction of terms and classifications will make the analysis clearer and simplify the drawing of conclusions on the comparability of the evaluation reports.

2.1. Evaluating EU Activities

This section introduces an important body of knowledge concerning the evaluation of interventions carried out by the European Commission. First, it defines the essential terms, such as “evaluation” and, of course, the different categories it embodies: “ex ante evaluation”, “ex post evaluation” and “interim evaluation”. Second, the functions related to evaluation are presented in a comparative way, in order to underline the different roles of evaluation as opposed to monitoring, control and audit. Finally, it presents a clear picture of the objectives, characteristics or functions, and the scope and steps of evaluation in the European Commission.
2.1.1. European Commission evaluations: Definition and scope

The definition of evaluation given by the European Commission offers a clear perspective on the usefulness of the function itself, namely evaluation as a “judgment of interventions according to their results, impacts and needs they aim to satisfy” (European Commission, 2004, p. 9). The evaluation is thus an assessment of the elements mentioned above, directed at four different targets:
- to offer support in the design of the intervention, also considering the political implications;
- to contribute to an even and efficient distribution of resources;
- to help deliver a good-quality intervention;
- to offer a thorough final report. (European Commission, 2004, p. 9)

Recently, the European Commission raised the need to use more management tools in its activities; a set of basic evaluation requirements can therefore be found in the following documents: the Financial Regulation, the Implementing Rules of the Financial Regulation, the Communication on Evaluation and the Communication on Evaluation Standards and Good Practices. What is further discussed is the European Commission’s liberty to define, for each case, a separate set of differentiated rules. Moreover, while the first document is applicable to all institutions, the Implementing Rules depend on the Financial Regulation, so the two have to be read together. The two Communication documents came first, and the rules of the Financial Regulation and of its Implementing Rules therefore prevail over them.

Moving further, the discussion focuses on the differences between the evaluation, monitoring and audit functions that the European Commission applies. Monitoring is a process performed during the intervention in order to obtain quantitative data on the process, but not on its effects; the objective is to make sure the intervention respects its objectives, supporting better performance and a set of best practices for future similar programmes. (European Commission, 2004, p. 10) In addition, a new type of monitoring has been adopted, performance monitoring, which provides continuous feedback on the completion of activities in order to assess the performance of the programme. As a distinctive feature between the evaluation and monitoring functions, it is necessary to mention that neither basic monitoring nor performance monitoring addresses the impacts of the programme’s results as evaluation does; they focus more on the implementation of the activities of a specific programme. Audit, in turn, is applied to a multitude of activities and can make use of inputs and outputs as well as of elements common to evaluation. The figure below offers a more solid picture of the functions analyzed up to this point.

Figure 1: Evaluation, monitoring and audit functions of EC
Source: diagram retrieved in October 2013 from “Evaluating EU Activities. A practical guide for the Commission services”, published by the European Commission on its official website.

In theory, evaluation and audit both have as their objective the determination of “economy, effectiveness and efficiency” (European Commission, 2004, p. 11) during the processes and with regard to the effects of a given programme. Examining the well-structured exposition of how the three functions are used, we can observe where the two differ: the audit assesses the immediate effects of the implementation of a programme, while the evaluation aims at determining its overall effect and also “the relevance, utility and sustainability” (European Commission, 2004, p. 11), elements that will be detailed later in the paper.

The temporal variants of evaluation are the ex ante evaluation, the interim evaluation and the ex post evaluation. The objective here is to offer a broad understanding of the use and function of the three temporal categories of evaluation, with the accent falling on the first category, as it gives rise to a number of distinct actions. The ex ante evaluation intervenes before the actions of the Community are initiated, in “the preparation of proposals for new and renewed” such actions (European Commission, 2004, p. 12), while the interim evaluation intervenes in the middle of an activity, whether it is a “programme with a limited duration or a policy, which continue for a indefinite period” (European Commission, 2004, p. 13). Through its role of examination during the process, the interim evaluation offers an improvement in quality for current programmes and a set of new information for the next generation. The ex post evaluation, on the other hand, addresses the intervention as a whole in order to assess the impacts and their sustainability, what turned out to be a success or a failure, and at the same time the indicators of efficiency and effectiveness. The time frames in which they apply are suggested by their names.
The ex ante evaluation brings more elements to our attention, firstly because of its numerous purposes and secondly because of the resemblance between its analytical process and the preparation and design processes. The purposes are in line with the stages that ex-ante evaluations go through; by listing the stages and the checklist for each, the objective of such an evaluation will become clear. The information below is retrieved from a European Commission publication named “Note on including requirements of ex-ante evaluation in external aid programming” and presents the elements of ex-ante evaluations in the Commission as published in the Implementing Rules of the Financial Regulation (Art. 21).

Stage 1: Analysis of the problem and needs assessment

The analysis of the problem is the crucial point in initiating an ex-ante evaluation; it consists of determining:
- the essential characteristics of the programme and the most probable factors influencing the key problem;
- the representative clusters of actors that can have an interdependent relationship in the given situation;
- the cause-and-effect links between factors and actors, presented in a visual way as a “problem tree” (a minimal illustrative sketch is given below). (European Commission, 2005, p. 7)

The needs assessment closely follows the analysis of the problem; it aims to establish the target group and its actual needs, addressing elements such as the population and its sub-divisions and the people’s situation, motivation and interest, in order to determine and rank their needs. (European Commission, 2005, p. 7)
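The guidance cited above does not prescribe any particular format for the problem tree; the minimal Python sketch below shows only one possible way to represent such a cause-and-effect hierarchy, and every node name in it is invented purely for illustration rather than taken from a Commission document.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProblemNode:
    """One factor in a problem tree: each node is an effect,
    and its children are the factors assumed to cause it."""
    description: str
    causes: List["ProblemNode"] = field(default_factory=list)

def print_tree(node: ProblemNode, depth: int = 0) -> None:
    """Walk the tree and print each factor indented under its effect."""
    print("  " * depth + "- " + node.description)
    for cause in node.causes:
        print_tree(cause, depth + 1)

# Hypothetical example: the key problem at the root, its causes below it.
key_problem = ProblemNode(
    "Low primary-school completion rate",  # invented key problem
    causes=[
        ProblemNode("Long travel distance to school",
                    causes=[ProblemNode("Few schools in rural districts")]),
        ProblemNode("Households cannot afford school materials"),
    ],
)
print_tree(key_problem)
```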
Stage 2: Objective setting and related indicators

The objectives of the evaluation must be divided into more explicit categories according to their main concerns, such as general objectives (producing impacts), specific objectives (producing results) and operational objectives (producing outputs), a division that embodies the desired changes. (European Commission, 2005, p. 8) The handling of indicators tracks progress, but before calculating them the success criteria must be set; the latter are meant to reveal the judgment by which the action is classified as successful or unsuccessful. (European Commission, 2005, p. 8)

Stage 3: Added value of the Commission intervention

In this stage, three elements are observed: the coherence of the EC action, strong coordination, and complementarity. The purpose is thus to test for the existence of conflicts or synergies, the alignment of the characteristics specific to any intervention, and the uniqueness of the action. (European Commission, 2005, pp. 8-9)

Stage 4: Alternative options and risk assessment

At the ex-ante evaluation level, it is necessary to create a list covering the different mechanisms for dealing with the intervention, setting them against the criteria of effectiveness, costs and risks; this is called “a list of possible options” (European Commission, 2005, p. 9). The risk assessment is an important step in the ex-ante evaluation because it delivers a clearer view of what can go wrong during the process.

Stage 5: Lessons from the past

Lessons learned are crucial to the continuation of any intervention. They offer best practices and present experiences from previous actions, improving the quality and results of the next generation of interventions.

Stage 6: Guaranteeing cost-effectiveness

While respecting the rules set out in the Financial Regulation, and although one essential indicator in this field (the “cost-effectiveness ratio”) is more difficult to calculate, the ex-ante evaluation should pay attention to estimating the cost of the proposed intervention, to concluding whether the objectives justify the cost, and to examining whether a lower cost could have achieved the same outcomes. (European Commission, 2005, p. 10)
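The note quoted above does not spell out how the cost-effectiveness ratio is computed; it is conventionally read as the cost per unit of expected outcome, which allows the options listed in Stage 4 to be ranked. The short sketch below assumes that reading; the option names, costs and outcome figures are purely hypothetical and not drawn from any EC document.

```python
# Hypothetical options with invented figures, purely for illustration.
options = {
    "Budget support":   {"cost_eur": 2_000_000, "expected_outcome_units": 500},
    "Project approach": {"cost_eur": 1_500_000, "expected_outcome_units": 300},
}

for name, data in options.items():
    # Cost-effectiveness ratio: cost per unit of expected outcome.
    # Under this reading, the lower the ratio, the stronger the case
    # that the objectives justify the cost.
    ratio = data["cost_eur"] / data["expected_outcome_units"]
    print(f"{name}: {ratio:,.0f} EUR per unit of expected outcome")
```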
Stage 7: Monitoring the intervention and future evaluation

The last stage is also very important, consisting of the daily examination of inputs and of general monitoring in view of the future evaluation.

The last aspect in this section is the scope of evaluation in the Commission, namely the requirement that evaluation be centred on activities. Within the Commission’s structure, the services are in charge of evaluating the activities outside the European institutions, with the objective of accumulating performance information related to those activities. The decision to conduct such an evaluation is determined by the usefulness, the added value and the costs linked to the specific activity.

Finally, we have seen a concise definition of evaluation as a process dealing with judgments concerning results, impacts and needs. Some evaluation aims were also presented, such as designing an intervention with regard to political matters, distributing resources evenly, and delivering a good-quality action and a final report, while the basic evaluation requirements can be found in the four documents mentioned. The difference between evaluation, monitoring and audit was clearly structured, and the three temporal variants of evaluation were presented in more detail to build on previous knowledge.

2.1.2. The Evaluation Function

When discussing the evaluation function it is necessary to emphasize that it is present in each and every DG of the European Commission, while also underlining the features of evaluation that differ across the various DGs. In order to do so, four elements must be thoroughly described, namely the profile, the role, the tasks and the resources involved in the evaluation process.
The profile is determined by the set-up chosen within each DG, the evaluation function taking different shapes and operating in different modes. Moreover, the role is presented together with a description of the principle underlying evaluation.

Firstly, we need to understand what is considered to be an evaluation function. It is seen as a constituent of the structure of the DGs and Services "which enables them to fulfill responsibilities for co-ordinating, planning and exploiting their evaluation activities" (European Commission, 2004, p. 26). The immediate conclusion is that the evaluation function is a vital element in the organization of the European Commission's activities, contributing to the proper functioning of the Community.

In addition, the modes in which an evaluation function can operate are distinct, ranging from internal evaluation – evaluation networks within a DG – to evaluations conducted between different services or outside the DG – inter-service networks or external evaluation networks. As a good-practice requirement regarding the results of an evaluation, the evaluation function needs to ensure that the objectives are targeted while delivering reliable data on the relevance, efficiency, economy, effectiveness, consistency, sustainability and added value of the object evaluated (project, programme or activity). (European Commission, 2004, p. 28)

The tasks involved are chosen according to the Evaluation Standards that the function has to apply. Some tasks that an evaluation function must fulfill, as presented by Standard A.6, are: "Co-ordination of evaluation activities of the Commission services", "Anticipating of the decision-making needs of Commission's services and establishment and implementation of annual plans and multi-annual evaluation programmes" and "Defining and promoting the quality standards and methods". (European Commission, 2004, pp. 29, 31)

Finally, the elements in charge of executing the tasks presented above are both financial and human resources.
Before the evaluation process starts, it is compulsory to identify these resources and plan their use, a discussion that also encompasses the evaluation skills of the participants and the distribution of resources.

2.1.3. Designing, conducting and finalizing an evaluation
This section offers a comprehensive picture of the development of an evaluation, starting from its creation (designing), moving to its actual realization (conducting) and ending with its closure (finalizing).

I. Designing an evaluation
The design phase starts with the definition of the mandate under which the evaluation takes place and continues with the details of the preparatory phase, namely the questions to be addressed and the settling of the evaluation Terms of Reference (TOR). The mandate is a preliminary step of an evaluation, prior to concretizing the TOR: it is a descriptive document containing information about the circumstances in which the evaluation project is set up, its motives and objectives, the organization of the work (the timetable and the human resources involved in the project) and what is expected from the action. A very important consequence of drawing up the mandate is that it involves the stakeholders, informing them and at the same time securing their support and common objectives.

The second step in designing an evaluation is the clear formulation of the evaluation questions, an element of the structuring phase; these questions aim to establish and clarify what exactly is to be evaluated. Evaluation issues are therefore examined for all the temporal stages of a programme (the ex-ante, interim and ex-post stages). These issues will be described in more detail when the Evaluation Guidelines are presented in Section 2.3 of the paper, but they can also be listed here:
relevance, coherence, economy, effectiveness, efficiency, sustainability, utility, consistency, allocative/distributional effects and acceptability. (European Commission, 2004, p. 39)

The Terms of Reference (TOR) is a central document of an evaluation process, whose role is to present four very important aspects of that process: the origin, the scope, the aim and the allocation of roles within a programme. (European Commission, 2004, p. 43) These four elements are particularly important in cases where the evaluator is external. The TOR also contains the questions to be addressed and the specifications of the work, based on the chosen evaluation methodology. The list of the TOR's components, as presented by the European Commission in the publication "Evaluating EU Activities. A practical guide for the Commission services" (p. 43), is shown below; further specifications are developed in the next sections. The components are:
a) The purpose, objectives and justification for the evaluation (including the legal base);
b) A description of the activity to be evaluated;
c) The scope of the evaluation;
d) The main evaluation questions;
e) The overall approach for data collection and analysis;
f) The framework delimiting the work plan, organization and budget of the process;
g) In the case of an external evaluator, clear selection criteria;
h) The structure of the final report and, if possible, also of the progress reports;
i) The expected use, and users, of the evaluation.

It should be pointed out, in the case of data collection and analysis, that the design of an evaluation process can be influenced by the time and resources available for the evaluation. For this reason, the connection between the available budget and the time span translates into the following possibilities and restrictions:
- Short time span and small budget: desk studies, interviews, focus groups.
- Medium time span and budget: case studies, surveys, expert panels.
- Long time span and considerable budget: econometric models, cost-benefit and cost-effectiveness analysis.
This classification will help us identify the category to which the analyzed evaluations belong.

II. Conducting an evaluation
This stage first addresses the establishment of the steering group, then moves to the actual evaluation process, covering both the administrative aspects and the acceptance of the chosen methodology, and closes with the validation of the evaluation report.

A. The steering group aims to give the evaluator access to the information needed, to support the work performed – particularly with respect to the methodology used – and is an active participant in the quality examination of the evaluation. Setting up a steering group is primarily a question of choosing the right members and, secondly, of assigning them proper tasks and responsibilities. The group is generally composed of members of the European Commission who deal in their daily activities with subjects specific to the evaluation. There are also cases where the input of members from outside the Commission is required, but even so the chair must be an internal member. The stakeholders can be managers, operators or agencies in charge of the implementation, or the groups directly or indirectly affected by the intervention. The group's specific tasks are determined by its involvement in the evaluation process; since its input is required before, during and after an evaluation, the tasks are numerous. It is responsible for converting the political issue or question into operational elements, and it also oversees and guides the team in charge of the evaluation process to ensure the proper implementation of the strategy set out in the TOR. (European Commission, 2004, p. 52)
B. Carrying out the evaluation is divided into the administrative set-up stage and the implementation stage.

a) The administrative set-up refers to the decision on who will carry out the evaluation; there are three straightforward options: the evaluation can be done by internal evaluators, by external evaluators, or parts of it can be delegated to outside sources. The decision mostly depends on the weight of the summative and formative dimensions: it is advisable to use external evaluators when the summative dimension is strong, while if it is weak but the formative dimension is strong, the in-house option is better suited. Nevertheless, in general an evaluation project combines both summative and formative aspects. (European Commission, 2004, p. 53) The other determinant of how an evaluation is administered is the amount of resources available for conducting the project. For example, when only a short timeframe is available for carrying out an evaluation, the in-house option is the most appropriate.

b) The implementation of the evaluation concerns the evaluation methodology, built from tools and techniques that help answer the questions within the evaluation framework, taking account of time, budget and data limitations. Furthermore, the methodology requires considerable desk research on secondary sources at the beginning of the actual evaluation, followed by the analysis of data collected in the field. (European Commission, 2004, p. 54)

It is essential to mention the four phases of evaluation as the European Commission presents them in "Evaluating EU Activities. A practical guide for the Commission services": the structuring phase, the data collection phase, the analysis of data and the formulation of judgments. They are broadly characterized below.
- The structuring phase: preparatory work from the setting of the TOR to the delivery of the inception report; its main objectives are to provide the information needed by the users, to ensure that the evaluation method is complementary with the data collection, and that the agreed tasks will contribute to the performance of the evaluation process.
- The data collection phase: it begins after the inception report resulting from the structuring phase is delivered, and it follows the methodology proposed by the evaluator and accepted by the steering group. The data are both qualitative and quantitative, complemented by the evaluator's own judgment.
- The analysis of data: it consists of the results of the individual data collected with the tools and techniques used, and also offers a broad analysis of information from documentary and statistical sources and other monitoring systems. The evaluator has to examine the causes and effects that occurred.
- The formulation of judgments: this final phase is the reflection on conceptual issues based on the reports delivered. The evaluators judge the effectiveness, efficiency, utility, sustainability and relevance of the information provided.
The tools and techniques used will be presented in Section 2.2.2, Evaluating tools.

C. The validation of evaluation reports concerns the different reports delivered by the evaluators at the different stages the evaluation has reached. The reports are identified below, as presented in the European Commission publication "Evaluating EU Activities. A practical guide for the Commission services":
a) The inception report: as discussed before, it is delivered at the end of the structuring phase and sets out the methods to be used in the implementation of the evaluation process, while some inception reports also present the results of initial analyses; it is essential for concluding that the evaluation plan is feasible.
b) The interim report: it comprises the analysis of primary and secondary data, which can already contribute to answering some evaluation questions.
c) The draft final report: it comprises the evaluator's conclusions concerning the questions posed in the TOR, a synthesis of the judgments and recommendations relating to the results.
It remains only a draft final report because it will go through a preliminary quality assessment phase.
d) The final report: the conclusions drawn from the results of the quality assessment and from the steering group's discussions about the draft final report.

III. Finalizing an evaluation
The end of an evaluation project requires reporting the results, disseminating them and using the evaluation findings. These three related aspects are discussed below.

A. Reporting
The evaluation report consists of numerous items, such as the scope and context of the evaluation, the goal pursued and the objectives, as well as the methodologies (questions to be asked, standards to be applied, rules to be obeyed, procedures to be followed) chosen to fulfill the goal. The structure of a report needs to be clear and understandable and comprises the following items: the executive summary, the main report and the technical annexes. According to the European Commission publication "Evaluating EU Activities", the main report is the most complex document, consisting of three groups of chapters:
1. Introductory chapters on the evaluation objectives and scope
2. Descriptive chapters presenting the activity to be evaluated and the method used
3. Substantive chapters presenting the results of the analysis, their interpretation and the subsequent conclusions

B. Dissemination of evaluation results
The evaluations of the European Commission's development and co-operation programmes in third countries serve accountability, but at the same time aim to conclude with recommendations and lessons from both successful and failed programmes, in order to correct and add more value to the next generation of programmes.
Thus, dissemination and feedback are an essential aspect to be discussed when analyzing the evaluation processes of EC programmes. (OECD, 2003, p. 1)

The guidelines on dissemination and feedback of evaluations present these features as carried out by the Evaluation Unit. The purpose of disseminating evaluation reports is to give stakeholders, both internal and external to the Services, assurance about the actual report. In order to deliver the evaluation findings as coherent documents it is necessary to set up a dissemination strategy that begins, right from the design of the Terms of Reference, by determining where the reports will go and to which users they will be of use. The list below, presented in the publication "Evaluating EU Activities" published by the EC, names the potential parties interested in receiving and using these evaluations:
a) Key policy-makers and interested institutional parties
b) Managers and operators of the intervention being evaluated
c) Addressees of the intervention (civil society, NGOs, private firms or individuals)
d) Other services of the Commission
e) Other interest groups (organizations, groups or individuals focused on these topics, academic groups or individuals)

The channels for delivering these reports are numerous, as the European Commission publication "Guideline for dissemination and feedback of evaluations" presents: 1) distribution of the report via mail and e-mail, 2) the Internet, 3) newsletters, 4) seminars and workshops. The feedback is provided by the Evaluation Unit and, as required by the Board of Directors of EuropeAid, aims at taking up the lessons and recommendations and implementing them "in new operations and in the decision-making processes" (OECD, 2003, p. 4), and also at reporting the status of use of these lessons and recommendations systematically to the Evaluation Unit.
With regard to feedback and follow-up, timing and quality are the most important items to be considered.

C. Using evaluation results
The biggest challenge when finalizing an evaluation is to determine the most suitable way to deliver the evaluation report in a form likely to be used directly. Several factors influence the way the results are used, such as the level of involvement of the future, potential users in the evaluation process or their expectations of the evaluation. The involvement of users in the evaluation process shapes the actual design of the dissemination of the results. In consequence, the most important aspect of using evaluation results properly is to target the right audience from the outset.

In conclusion, this section aimed to define the evaluation process while distinguishing between the different types of examination within a project. The different categories of temporal examination reveal their usefulness, and the evaluation function was presented with its profile, roles and tasks. The final discussion provided a detailed presentation of the stages that compose an evaluation process, from its design, through its actual implementation, to its finalization. All this information will be important when commencing the comparative analysis of the evaluation reports on the chosen countries.

2.2. Evaluation methods
The evaluation methods are presented as proposed and publicly distributed by the European Commission, through the Joint Evaluation Unit composed of DG External Relations, DG Development and the EuropeAid Co-operation Office. In their "Evaluation methodology" section they put forward four volumes covering different aspects of evaluation. These volumes, entitled "Evaluation Methods for the European Union's External Assistance", are the following:
1) Volume 1 – Methodological Bases for evaluation
2) Volume 2 – Guidelines for Geographic and Thematic evaluations
3) Volume 3 – Guidelines for project and programme evaluation
4) Volume 4 – Evaluation tools
Section 2.2.1 covers the essentials from the first three volumes, information on which the paper's objective can be built. The "Evaluation tools" are discussed separately in the next section.

2.2.1. Methodology of evaluations
As previously mentioned, a clear and concrete structure is built according to the guidelines retrieved from Volumes 1, 2 and 3 of the Evaluation Methods for the European Union's External Assistance.

Volume 1 – Methodological Bases for evaluation
The volume presents the process and the methods used for evaluation. The process covers the object of the evaluation, its timing, its utilization and the players with their roles (European Commission, 2006, p. 14), while the methods consist of "intervention strategy; evaluation questions (usefulness, feasibility, formulation); judgment references (criteria and indicators); methodological design; data collection and analysis; value judgment (conclusions, lessons and recommendations); and finally quality assurance" (European Commission, 2006, p. 14).

I. The process
1. The first section presents the subject of the evaluation: it defines the evaluation of a programme and the scope of the evaluation, and it also identifies the sectors, themes and cross-cutting issues. The definition of a programme, of a homogeneous programme and of their evaluation was already given in Section 1.1.1 of the current paper. Public interventions can be categorized by sector or theme, while cross-cutting issues are addressed through thematic evaluation.
The sector classification depends on the nature of the activities and outputs, while the cross-cutting issues target the impacts observed. The scope of the evaluation is defined in direct connection with the territory, the period under discussion and the regulations involved.
2. The second section refers to the timing of evaluations and discusses ex ante, interim and ex post evaluations, an aspect fully described in Section 2.1.1 of the current paper.
3. The third section discusses the use of an evaluation, describing the users, the types of use and the dissemination of the evaluation. Among the users we can mention policy makers and designers, managers, partners and operators, as well as other actors involved. Depending on the use, an evaluation can assist either decision-making or the formulation of judgments. The dissemination part of an evaluation is thoroughly presented in Section 2.1.3 of the present paper.
4. The fourth section identifies the players and their roles in an evaluation process. The players, as detailed in Section 1.3, are the evaluation manager, the reference group, the evaluation team and the stakeholders. Their roles can be found in the mentioned section.

II. The methods
1. The first element of the evaluation method is the intervention strategy, which comprises the intervention rationale, the intervention logic and the related policies. The intervention rationale is set out in the programming documents and aims to "satisfy the needs, solve the problems or tackle the challenges that are considered to be priorities in a particular context and that cannot be addressed more effectively in another way" (European Commission, 2006, p. 37). The exact definition was required here for two reasons: firstly, to underline the usefulness and importance of starting an evaluation from a correctly thought-out rationale and, secondly, to introduce the imperative of sorting priorities.
On the other hand, the logic of an intervention refers primarily to the activities and secondarily to their effects (outputs, results and impacts). When the logic of the intervention is "faithful" it follows exactly the initial programming documents; it can, however, also appear as a reconstruction of the logic, taking into account other effects encountered and modifying the initial objectives that the intervention aims to achieve. In that case it is mandatory to state that the logic is no longer "faithful". The logic can be organized in a logical framework, a diagram of objectives or a diagram of expected effects. The related policies refer to the link between the intervention and similar evaluated interventions, in order to respond quickly to questions regarding coherence, complementarity and relevance and to compare the quality of the current intervention with that of previous similar interventions.
2. The second element of the evaluation method is the evaluation questions, a part that encompasses the usefulness, origin and selection of questions, but also the relation between questions and evaluation criteria and the preparation of an evaluation question. (European Commission, 2006, p. 43) The evaluation questions are a very useful aspect of the evaluation process: they concentrate attention on a concise number of main aspects, allowing a narrower data collection process and a more precise examination, which results in a report of greater assistance. This useful focus is achieved with a maximum of ten questions covering the activities, the targeted group, the expected effects and the evaluation criteria. There are three categories of questions: those inferred directly from the intervention logic, those inferred indirectly from it, and other types of questions that do not focus on the effects contained in the intervention logic. The selection of questions follows these steps: identify the questions, assess the potential usefulness of the answers and assess the feasibility of the questions.
Simply stated, the main motive for posing a question is either that its answer is unknown, that somebody is calling something into question, or that its findings would be constructive. The identification of questions draws on the analysis of the logic and rationale of the intervention, on the issues for which the evaluation is needed and which are stated in the TOR, and finally on the questions posed in the ex ante evaluation. (European Commission, 2006, p. 43) Secondly, there are also questions raised by the persons who initiate the evaluation and by the evaluation team, and questions arising from the expectations of the reference group. Finally, the chosen questions are analyzed to establish their level of usefulness and their feasibility.

The evaluation criteria represent an element of the evaluation method of great importance. The seven criteria are connected to the main "viewpoints" of the examination; they are presented graphically in the figure below:

Figure 2: Evaluation criteria representation, retrieved in October 2013 from "Evaluation Methods for European Union's External Assistance", Volume 1, p. 50

Examining Figure 2, we can observe the interdependency and close connection between the seven evaluation criteria and the viewpoints of an evaluation process.
As mentioned before, the rationale of an intervention aims at determining the needs and at resolving the problems and challenges, while the logic of the same intervention determines the expected effects (outputs, results and impacts). For both, the objectives are required to be achieved. Moreover, a reconstructed intervention logic is no longer "faithful" and introduces new objectives not contained in the strategy. In conclusion, we can observe how all the elements stated previously fit together, along with the attribution of the evaluation criteria.

The first volume of the "Evaluation Methods for European Union's External Assistance", published by the Joint Evaluation Unit, identifies and defines the key concepts that form the basis of the evaluation process, namely the evaluation criteria. Their definitions can be found in the table below, generated by the author.

Evaluation criteria and their representation:
Relevance: "The extent to which the objectives of the development intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' and EC's policies."
Effectiveness: "The extent to which the development intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance."
Efficiency: "The extent to which outputs and/or the desired effects are achieved with the lowest possible use of resources/inputs (funds, expertise, time, administrative costs, etc.)"
Sustainability: "The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits. The resilience to risk of the net benefit flows over time."
Impact: "Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended."
Coherence/Complementarity: Coherence within the Commission's development programme; coherence/complementarity with the partner country's policies and with other donors' interventions, as well as with the other Community policies.
Community value added: The extent to which the development intervention adds benefits to what would have resulted from Member States' interventions alone in the partner country.
Table 2: The definition of the evaluation criteria. Retrieved in May 2014. Source: generated by the author from "Evaluation Methods for European Union's External Assistance", Volume 1, pp. 50-52

The preparation of evaluation questions follows a logical line. It begins with determining whether or not the questions are appropriate to the evaluation; secondly, the purpose of each question is stated. The next two steps ensure the connection of the questions with the logic of the intervention and with the evaluation criteria. Finally, the questions are written.
3. The third element of the evaluation method is the judgment references, which also consist of different aspects, namely the judgment criteria, the target levels, the indicators and the path from the actual question to the criterion and the indicator. The judgment criterion, also called "reasoned assessment criterion", is an aspect of the evaluation that makes it possible to assess success, while the target level represents either an objective defined in a verifiable way, a comparable good practice or the actual best practice within the intervention. The indicators focus either on the quantitative examination of the evaluation, being represented by ratios or rates, or on the qualitative examination (the descriptive side of an evaluation).
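As a purely illustrative sketch (the indicator and the figures are hypothetical and not taken from the guidelines), a quantitative indicator is typically a rate or ratio of the form

\[ \text{indicator value} = \frac{\text{observed quantity of interest}}{\text{reference population or baseline}} \times 100, \]

so that, for instance, 3,600 pupils completing primary school out of 4,500 pupils of completion age would give a completion rate of 80%; a qualitative indicator would instead describe, for example, stakeholders' perceptions of the quality of the schooling provided.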
Below, the summary given in the first volume of the "Evaluation Methods for European Union's External Assistance" is reproduced:

Table 3: Indicators and intervention logic, retrieved in May 2014 from "Evaluation Methods for European Union's External Assistance", Volume 1, p. 60

4. The methodological design presents the way to design the table per question and the chain of reasoning, and how to optimize the overall design. The methodological design is extremely useful to the evaluation team, supporting it in answering the questions and in concluding the process with its own interpretation. It is composed of the following: the chain of reasoning that will be followed; a strategy for collecting and analyzing data; selected investigation areas; a series of specifically designed tools; and a work plan. (European Commission, 2006, p. 63) The example of a design table for a given question proposed by the same document is reproduced in Table 4, providing a clear understanding of all the aspects of the methodological design:
Table 4: Structure of a design table, retrieved in May 2014 from "Evaluation Methods for European Union's External Assistance", Volume 1, p. 65

In order to optimize the design there are different solutions, such as combining tools and questions and preparing the overall assessment while taking into consideration the allocation of resources as well as the cost and time constraints. Lastly, the development of tools points to Volume 4 of the "Evaluation Methods for European Union's External Assistance", where the actual tools on which the evaluation is built are presented. In our case, there are also some preliminary stages before the evaluation tools are implemented.
5. The data collection element is made up of the work plan, the frequent difficulties and their solutions, and the reliability of the collected data.
The work plan consists mostly of the collection of data using the evaluation tools presented further on, such as interviews, field visits, questionnaires, observation and so on. The frequent difficulties can include access to information, a cultural gap, or a lack or weakness of data; for these issues there are various solutions to be adopted, depending on the case. The reliability of data is also a challenge that the evaluation team may face, in the form of confirmation bias, self-censorship, informants' strategies, question-induced answers, empathy bias, unrepresentative samples and sample selection bias.
6. The analysis element encompasses the different analysis strategies, the analysis process and the validity of the analysis. The change analysis strategy highlights the change recorded by quantitative and/or qualitative indicators as time passes, while meta-analysis brings to light findings from other evaluations and studies. The last two strategies, attribution analysis and contribution analysis, focus on cause and effect. The analysis process is composed of data processing, exploration, explanation and confirmation, while the validity of the analysis has three branches: external, internal and construct validity.
7. The judgment is composed of conclusions, lessons and recommendations (related to the conclusions, but not a copy of them). The first two are based on the judgment criteria and at the same time respect ethical principles such as impartiality, legitimacy, protection of individuals and responsibility.
8. The quality assurance follows the rules of the game and is characterized by the approval of deliverables. The rules of the game concern the approval of documents, the quality criteria and the dissemination of the quality assessment of the final report. (European Commission, 2006, p. 90)
The quality criteria are: meeting needs, relevant scope, defensible design, reliable data, sound analysis, credible findings, valid conclusions, useful recommendations and a clear report. (European Commission, 2006, pp. 91-92)

In conclusion, the evaluation methodology is a very complex, well-structured and clear process. The process itself consists of the subject, timing and use of the evaluation and the players with their roles, while the methods are more numerous and more composite. They follow a clear rational line from the intervention strategy, to the evaluation questions and the judgment references, then to the design and the data collection, arriving at the analysis, the judgment and the quality assurance.

Volumes 2 and 3
The second and third volumes of the publication "Evaluation Methods for European Union's External Assistance" have the same structure but different content. Volume 2 offers the guidelines for evaluations with a geographical or thematic focus, while the next volume offers the guidelines for project and programme evaluation in general. Even though the four countries subject to evaluation in the current paper were also chosen on geographical grounds, the presentation will target Volume 3, namely the guidelines oriented more towards project and programme evaluation. The structure of Volume 3 is divided into two parts: guidelines addressed to the evaluation manager and guidelines for the evaluation team. Both parts cover the five mandatory phases for carrying out an evaluation of a project or programme. Since the paper aims to analyze the comparability and coherence of the four evaluation reports with regard to the evaluation guidelines, covering in detail all the phases for the manager and his team falls outside the proposed objectives.
If information from these two volumes proves necessary, supplementary theoretical points will be added in Chapter 3 along the course of the comparative analysis. A summary of the evaluation process is presented below and consists of five phases, including Phase 0.
1. Preparatory phase (Phase 0)
The evaluation manager is appointed; he also has to name the reference group, decide on the TOR and engage the external evaluation team for the project.
2. Desk phase (Phase 1)
The external evaluation team becomes acquainted with the intervention logic by consulting the related official documents and proposes the evaluation questions and judgment criteria. These questions have to be approved by the reference group, after which the team intervenes again to settle on the indicators and to offer partial answers to the questions. Shortly after, the assumptions to be tested are clear and the next stage, data collection and analysis, is ready to begin.
3. Field phase (Phase 2)
The evaluation team travels to the country or countries concerned and starts the work, which is to collect data, apply the evaluation techniques and begin testing the assumptions against the results.
4. Synthesis phase (Phase 3)
The final report is presented by the evaluation team. It includes the findings and the conclusions with respect to the questions asked, together with the overall assessment. The evaluation's recommendations, clustered and ranked according to priority, are also provided. Afterwards, the quality assessment process takes place.
5. Dissemination and follow-up phase (Phase 4)
The evaluation reports, in all their forms, are sent to the policy-makers and to the concerned services and partners. The report is also made public and can be found on the European Commission's official web-site. From that point on, a follow-up is carried out on the application of the recommendations offered. (European Commission, 2006, p. 6) When analyzing a final report, it has to be closely checked whether these phases were respected.

In conclusion, the three volumes published by the Joint Evaluation Unit entitled "Evaluation Methods for the European Union's External Assistance", created jointly by DG External Relations, DG Development and the EuropeAid Co-operation Office, represent a solid background for the current research paper. Their content will frequently be needed in order to conclude on the findings of the research process.

2.2.2. Evaluating tools
This section presents in more detail the tools proposed for use in evaluations of the European Commission's development and co-operation programmes. They are set out in the fourth volume of the "Evaluation Methods for the European Union's External Assistance", created by the Joint Evaluation Unit (DG External Relations together with DG Development and the EuropeAid Co-operation Office). Before presenting the applicability and purposes of the numerous tools, it is essential to emphasise that the tools can be used as individual instruments for collecting data, but in most cases combinations are made in order to achieve more in-depth reasoning with multiple and more complex inputs. Moreover, the tools are correlated with four functions that the evaluator employs; depending on the type of tool used, the functions are divided into main and secondary. The four functions attributed to the tools are: organisation, observation/collection, analysis and judgment. (European Commission, 2006, p. 12) The evaluation tools are defined next: