Measuring Business Excellence
A review of program and project evaluation models
Roberto Linzalone and Giovanni Schiuma

To cite this document: Linzalone, R. and Schiuma, G. (2015), "A review of program and project evaluation models", Measuring Business Excellence, Vol. 19 Iss 3, pp. 90-99.
Permanent link to this document: http://dx.doi.org/10.1108/MBE-04-2015-0024
Therefore, if, on the one hand, PM has largely contributed to the development of effective theories and models, enabling the effective achievement of projects' outputs, on the other hand, evaluation models (EMs) are becoming more and more attractive, as they enable the evaluation of the effects of a project, both intended and unintended.
Looking at the literature on EMs, there is a large number of reviews describing EMs through different foci of analysis (e.g. theoretical stream, nature, approach, context, scope) (Chen, 2005; Shadish et al., 1991; Rossi et al., 2003). However, few literature reviews result in an effective collection and identification of EM frameworks. Building on a systematic literature review, this study presents a comprehensive analysis of the existing frameworks of EMs.
The paper is organized as follows. Section 2 provides definitions of program, project and their effects; Section 3 deals with the evaluation of programs and projects; Section 4 analyses the existing EMs and highlights the related research gaps; Section 5 reports the research methodology; Section 6 presents the results and findings, while Section 7 draws the conclusions.
2. Program, project, effects
Several definitions of "project" can be found in the management literature, each highlighting a peculiar aspect of it. While the Project Management Institute (PMI) states that a "project is a temporary initiative undertaken to create a unique product or service" (PMI, 2004), Bowen (1996) argues that a project is a unique set of activities designed to produce a definite result, with a clear start and end date and a clear allocation of resources. Another notable scholar defines a project as a complex accomplishment, unique and of limited duration, aimed at achieving a clear and agreed goal through a continuous process of planning and control of different resources under the interdependent constraints of time, cost and quality (Archibald, 2004). Commonly agreed characteristics of a project are clear and specific goals, a shared and clear project purpose and no technical ambiguity about the output to be produced (Archibald, 2004; Cantamessa et al., 2007).
Regarding the concept of program, PMI (2008) defines it as a group of related projects
managed in a coordinated manner to achieve a level of benefits and a level of control that
would not be possible by managing them individually. Therefore, the program is an
elementary part of a complex strategic initiative, whereas the project is an elementary part
of a program.
Projects' and programs' results can be distinguished into outputs, outcomes and impacts (W.K. Kellogg Foundation, 1998, 2004; Shapiro, 1996; Rossi et al., 2003). Outputs are the products and/or services delivered by the project implementation. Outcomes and impacts are both effects of the output that are observable over time in the project environment or on stakeholders. Outcomes are the specific changes in behavior, knowledge, skills and the state and level of activity/operation of the project target (i.e. participants, beneficiaries, companies, processes, etc.). Outcomes emerge in the short term (from 1 to 3 years) or in the long term (over a period of 4-6 years). Impact is the fundamental change, wanted or unwanted, intended or unintended, that occurs in organizations, communities or systems as a result of a project; it emerges in the long term, within 7-10 years (Kellogg Foundation, 2004). A "cause-effect" relation regulates the mechanism of creation of outcomes and impacts, whose structure can be linear or systemic (complex).
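To make the distinction operational, the following minimal Python sketch encodes the three effect types and the indicative Kellogg Foundation (2004) time horizons quoted above. It is purely illustrative: the class and field names are ours, not part of any cited framework.

```python
from dataclasses import dataclass
from enum import Enum

class EffectType(Enum):
    OUTPUT = "output"    # products/services delivered by the project itself
    OUTCOME = "outcome"  # changes in behavior, knowledge, skills of the target
    IMPACT = "impact"    # fundamental change in organizations/communities/systems

@dataclass
class Effect:
    description: str
    effect_type: EffectType
    years_after_project: float  # when the effect is observed

    def horizon(self) -> str:
        """Classify the effect against the indicative Kellogg (2004) horizons."""
        if self.effect_type is EffectType.OUTPUT:
            return "at delivery"
        if self.effect_type is EffectType.OUTCOME:
            return ("short-term (1-3 years)" if self.years_after_project <= 3
                    else "long-term (4-6 years)")
        return "long-term (within 7-10 years)"

# Hypothetical training project
print(Effect("Course delivered to 200 participants", EffectType.OUTPUT, 0).horizon())
print(Effect("Participants apply new skills on the job", EffectType.OUTCOME, 2).horizon())
```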
Referring to the concepts of outputs, outcomes and impacts, programs differ from projects because programs are focused on consequences (outcomes) rather than results (outputs) (Office of Government Commerce, 2005). Moreover, Cantamessa et al. (2007) argue that a project is usually linear in producing its effects, while a program has a non-linear relation between outputs and effects.
3. Program and project evaluation
Although there are documented evaluations of human interventions dating back to 2200 BC (Shadish et al., 1991), the issue of project and program evaluation became especially important in the USA in the 1960s, during the period of the social programs known as the Great Society, launched by the Kennedy and Johnson administrations. Extraordinary public investment in social programs was financed, but the impact of those investments remained largely unknown.
With particular regard to projects and programs, evaluation is the assessment and analysis of the effectiveness of an activity; it involves the formulation of judgments about its impact and progress. Evaluation is the comparison of the actual effects of a project against the agreed planned ones. Evaluation looks at what was planned, what has been achieved and how it has been achieved (Shapiro, 1996; Archibald, 2012).
In a wider perspective, evaluation is a family of research methods that are used to
systematically investigate the effectiveness of policies, programs, projects and other types
of social intervention, with the aim of achieving improvement in the social, economic and
everyday conditions of people's lives (Government Social Research Unit, 2007).
Many definitions of evaluation are linked to the concept of improvement, as evaluation
enables the review of policy and program development. Kahan and Goodstadt (2005)
conceive evaluation as a set of research questions and methods properly articulated to
review processes, activities and strategies, with the aim of achieving better results. So the
purpose of an evaluation is not just to find out what happened, but to use the information
to make the project better.
Evaluation of projects and programs can be implemented along their whole duration, according to three stages: ex-ante, interim and ex-post evaluation. The first stage aims to compare, select and finance alternative projects. The second aims to improve the strategy or the processes. The third aims to draw lessons, insights, judgments and awareness about the decisions and projects undertaken.
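As a minimal illustration (names and structure are ours, assuming only the stage definitions above), the three stages and their aims can be encoded as follows:

```python
from enum import Enum

class EvaluationStage(Enum):
    EX_ANTE = "ex-ante"   # before the project
    INTERIM = "interim"   # during the project
    EX_POST = "ex-post"   # after the project

# Aim of each stage, as stated in the text
STAGE_AIMS = {
    EvaluationStage.EX_ANTE: "compare, select and finance alternative projects",
    EvaluationStage.INTERIM: "improve the strategy or the processes",
    EvaluationStage.EX_POST: "draw lessons and judgments about the decisions taken",
}

for stage in EvaluationStage:
    print(f"{stage.value}: {STAGE_AIMS[stage]}")
```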
A fundamental role in the evaluation of projects and programs is played by EMs.
4. Programs' and projects' EMs
The terms "approach" and "model", when referred to evaluation, are used interchangeably, although there are some differences in meaning. An evaluation approach is the method, the mental attitude or the particular perspective by which the evaluation is carried out. A model represents the theoretical reconstruction or simulation of an abstract object, system or concept, which describes more or less closely the structure or function of the subject it represents. A model assists in the prediction of events, according to natural laws (social, economic, organizational, etc.), and in increasing the understanding of phenomena.
EMs are approaches that assist evaluators in designing and carrying out useful, defensible
program evaluations (Stufflebeam and Shinkfield, 2007).
Many evaluation approaches and models exist. The literature suggests that no one
approach is best for all situations. Rather, the best approach varies according to factors
such as fit with basic values, the intent of the evaluation, the nature of key stakeholders and
the available resources. In addition, it is not necessary to stick strictly to one approach:
evaluations might quite reasonably combine elements of different approaches or adapt to
local conditions (Rogers and Fraser, 2003).
A variety of classifications of EMs can be found in the literature. Kahan (2008) distinguishes EMs into: goal-based; goal-free; theory-based (logic model); utilization; collaborative; balanced scorecard; appreciative inquiry; and external context, input, process, product (CIPP). Stufflebeam and Shinkfield (2007) argue that EMs can be classified according to the following three major types, with respective sub-types:
1. Questions and/or methods-oriented:
   - Questions-oriented; and
   - Methods-oriented.

2. Improvement/accountability-oriented:
   - Decision/accountability-oriented;
   - Consumer-oriented; and
   - Accreditation/certification.

3. Social agenda/advocacy approaches:
   - Client-centered studies;
   - Constructivist evaluation;
   - Deliberative democratic evaluation; and
   - Utilization-focused evaluation.
Hentschel (1999) proposes four main types of EMs, according to the qualitative or quantitative nature of the data and of the method of evaluation. This distinction gives rise to the following four types of EMs: subjective welfare, standard household survey, ethnography and econometric anthropology.
The adoption of an evaluation model is required to represent and analyze the mechanisms of the system, and it allows an explicit analysis of the program through the analysis of its individual components (Dyehouse et al., 2009; Chen, 2005).

This is remarked by Chen (2005), who distinguishes between an input-output or "black box" evaluation and a "white box" evaluation. The first approach does not consider any model of evaluation: things go in and things come out. It is useful in identifying program merits, but cannot capture the "transformation processes that turn interventions into outcomes" (p. 231), and thus its evaluation findings lack robust explanatory power.

What is needed is an evaluation that considers the details of what occurs inside the "black box". The function of the EM is to make the system clearer and to allow a more explicit analysis of the program through the analysis of the components of the system, which is the promise of a "white box" approach. Furthermore, this type of analysis of the inner components and the logic of the system can enable the needed analyses, leading to the improvement of theoretical models (Dyehouse et al., 2009).
Despite the large number of classifications of EMs, which reflect theoretical dissertations, and the numerous population of EMs developed in heterogeneous project and program settings, the literature lacks a comprehensive review of EMs. To overcome this limitation, a review of EMs has been carried out.
5. Methodology
The research methodology used reflects the pattern of a systematic literature review
(Tranfield et al., 2003). According to it, the following five main phases characterize the
research:
1. planning the review;
2. identifying and evaluating studies;
3. extracting and synthesizing data;
4. reporting; and
5. utilizing the findings.
5.1 Planning the review
As a first step, a review committee composed of two members was set up. The committee mapped the field of study by identifying potential areas/sectors and players that use and apply evaluation models in projects and programs. Subsequently, a double-stage sampling procedure was used to identify and retrieve EMs according to a defined review protocol.
5.2 Identifying and evaluating studies
In the first stage, a broad scan of potential EMs was carried out through systematic searches for applications of EMs to programs and projects in many settings. Primary and secondary data were collected.

Search terms used to identify potential case examples of program/project evaluations included, but were not limited to, "evaluation model", "program evaluation", "project evaluation" and the name of the EM (once identified). Searches were conducted using simple search methods as well as Boolean operators (Reed and Baxter, 2009). The appearance of these terms was searched for using Web search engines.
The sources collected were scientific articles, reports on public and private projects, and the websites of associations, foundations, third-sector organizations, research institutions, national government agencies, government bodies and development agencies (at the local, national and international levels).
The aim of this first, broad scan was to identify the EMs used to evaluate projects and programs, together with their main identification data. The review was organized according to three main steps: scanning, normalization and systematization of the results.
To be comprehensive and reduce the number of false-negative findings (i.e. relevant sources not identified), the sampling was not limited to evaluation-related articles and/or websites (Miller and Campbell, 2006) but also included substantive sources in areas where evaluations of programs and projects might also be found (e.g. urban development, environment). Most of the sources pertained to the following settings: education, health, social development, research, development, military and guardianship, business, services and urban planning.
This stage yielded an initial population of 75 potential EMs that self-identified as being
project and/or program EMs.
The identification of the EMs was followed by the normalization of the data collected. Each identified model was analyzed and described according to a standard sampling template consisting of a data abstraction form (Miller and Campbell, 2006), that is, a card structured with the following fields: name of the model, source, nature (qualitative vs quantitative), field of application, methodology, pros and cons, and reference. The form's schema was derived from the focal research questions and developed during the final stages of the sampling process. It predominantly consisted of fixed items that were open-ended and required little judgment (low-inference items).
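As a sketch of the card structure just described (an illustration of the form's schema, not the authors' actual instrument), the fields map naturally onto a small record type:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """One record of the data abstraction form used to normalize each sampled EM."""
    name: str                     # name of the model
    source: str                   # who proposed it / where it was found
    nature: str                   # "qualitative", "quantitative" or "quali-quantitative"
    field_of_application: str
    methodology: str
    pros: List[str] = field(default_factory=list)
    cons: List[str] = field(default_factory=list)
    reference: str = ""

# Hypothetical entry, with identification data taken from Table I
bsc = ModelCard(
    name="Balanced scorecard",
    source="Kaplan and Norton (1992)",
    nature="quali-quantitative",
    field_of_application="business",  # illustrative value
    methodology="scorecard",          # illustrative value
    reference="Kaplan and Norton (1992)",
)
```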
All sampled EMs' data were, therefore, normalized, that is, transcribed according to the form. The normalization procedure also acted as a means of calibration, by which each reviewer worked independently on a small subsample of cases until no areas of ambiguity remained (Wilson, 2009). After the calibration procedure, each model was analyzed.
5.3 Extracting and synthesizing data
After the first scan of EMs, a screening was carried out, aimed at selecting and discarding the results not pertaining to socio-economic programs and projects.
Table I  Summary table of program/project EMs review

| Typology | EM | Source | Nature |
| --- | --- | --- | --- |
| Peer review (PR) | Direct PR; Modified direct PR; Ancillary PR; Traditional PR; Indirect PR; Pre-emptive PR | Various authors from the scientific literature (most dated source: Royal Society of Edinburgh, 1731) | Qualitative |
| Case study (CS) | Prospective CS; Retrospective CS | Le Play (1829) | Quali-quantitative |
| Technological forecasting | Scenario planning | Kahn (1950) | Qualitative |
| | Cross-impact matrices (or inter-dependency matrices) | Gordon and Hayward (1968) | Quali-quantitative |
| | Morphological analysis | Zwicky (1967) | Qualitative |
| Financial methods | Internal rate of return; Net present value; Payback period | Value-based management literature | Quantitative/financial |
| | Binomial option pricing model; Trinomial option pricing model | Cox et al. (1979) | Quantitative |
| Economic-based methods | Cost-benefit/cost-effectiveness analysis | Economic literature (most dated source: Dupuit, 1844) | Quantitative/financial |
| | Social accounting matrix | Stone and Brown (1962) | Quantitative |
| | Experimental economics; Data; Instrumental variables; Computational methods; Structural econometrics | Economic literature | Quantitative |
| | Contingent valuation | Ciriacy-Wantrup (1947) | Quantitative |
| Technological-based methods | Technology assessment; Technology dynamics; Technology forecasting | Technology management literature | Ex-ante, quantitative |
| Narrative methods | Storytelling | Social sciences | Qualitative |
| | Impact narratives | Social literature | Qualitative |
| | Most significant change | Davies (1996) | Qualitative |
| Ethnographic methods | Ethnographic evaluation | Social sciences | Qualitative |
| Behavioral methods | Outcome mapping | International Development Research Centre, Evaluation Unit (2001) | Qualitative |
| Scoring methods | Analytic hierarchy process | Saaty (1970s) | Quantitative |
| | Earned value analysis/management | US Department of Defense (1960s) | Quantitative |
| | Program assessment rating tool | US Office of Management (2002) | Quantitative |
| | Key performance indicators | Management literature | Quantitative |
| Scorecard methods | Balanced scorecard | Kaplan and Norton (1992) | Quali-quantitative |
| | Performance prism | Neely et al. (2002) | Quali-quantitative |
| Bibliometric methods | Main science and technology indicators | OECD (2013); World Bank (undated) | Quantitative |
| Pathways analysis | Participatory impact pathways analysis | Challenge Program on Water and Food | Qualitative |
| CPM/PERT | Critical path method/program evaluation and review technique | Catalytic Construction Company (1957); Booz, Allen and Hamilton, Inc. (1958) | Quantitative |
| Logic model/framework | Logical framework approach | Rosenberg (1969) | Qualitative |
| | Kellogg's logic model | Quigley, M., for the W.K. Kellogg Foundation (1998) | Quali-quantitative |
| | CIPP evaluation framework | Stufflebeam et al. (1966) | Qualitative |
| | Weaver's triangle | Weaver (undated) | Qualitative |
| TQM approach | Malcolm Baldrige Award/Model | Malcolm Baldrige National Quality Improvement | Quali-quantitative |
| | European Foundation for Quality Management excellence model | European Foundation for Quality Management | Quali-quantitative |

(continued)
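As an illustration of the "Financial methods" typology in Table I, the sketch below computes the net present value and payback period of a hypothetical stream of project cash flows using the standard textbook formulas; the code and figures are ours, not drawn from the reviewed sources.

```python
from typing import List

def npv(rate: float, cash_flows: List[float]) -> float:
    """Net present value; cash_flows[0] is the (negative) initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows: List[float]) -> float:
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        previous, cumulative = cumulative, cumulative + cf
        if cumulative >= 0 and t > 0:
            return t - 1 + (-previous / cf)  # linear interpolation within year t
    return float("inf")  # investment never recovered

flows = [-1000.0, 400.0, 400.0, 400.0, 400.0]  # hypothetical project
print(f"NPV at 10%: {npv(0.10, flows):.2f}")    # positive -> ex-ante acceptance
print(f"Payback: {payback_period(flows):.2f} years")
```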
All 75 potential and self-identified EMs were scrutinized to determine whether they met the inclusion criteria. To be considered for inclusion, an EM had to:

- not replicate other EMs under a different name; and
- not be a personalization/derivation of other EMs.

EMs not meeting these criteria were excluded from the sample. Two reviewers independently screened the 75 models to determine whether they met the inclusion criteria. The comparison of the EMs, carried out through a systematic comparison of the forms, revealed cases of variation and "personalization" of the same original model. These variants were discarded, leaving only the original model. Consensus between the two reviewers verified each inclusion or exclusion. The final sample obtained from this procedure yielded 57 EMs[1].
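The screening step can be pictured as a simple filter over the 75 candidate cards. The paper describes a manual, two-reviewer procedure, so the predicates below are hypothetical stand-ins for the reviewers' consensus judgments:

```python
def screen(candidates, replicates_existing, derives_from_existing):
    """Keep only original EMs: discard renamed replicas and personalizations.

    `replicates_existing` and `derives_from_existing` are hypothetical
    callables standing in for the reviewers' judgments on each candidate.
    """
    kept = []
    for em in candidates:
        if replicates_existing(em, kept) or derives_from_existing(em, kept):
            continue  # variant of an already-retained original model
        kept.append(em)
    return kept
```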
6. Reporting results and findings
The EMs included in the final sample were subjected to a further comparison to recognize and group models sharing a common and predominant element (e.g. approach, field of application, subject, methodology).

The results obtained are summarized in Table I. The collection resulted in many models; some of them share common subjects, contexts/fields of application, or evaluation and measurement approaches. These commonalities allowed them to be grouped into typologies.

The models differ in their responsiveness and effectiveness with respect to the evaluative needs of programs and projects. The proposed classification of the EMs into typologies helps identify the main evaluative characteristic of each model.
Quantitative models provide analytical relations between the elements, enabling evaluation reporting and the planning of achievable effects. Qualitative models support and enable the construction of the program theory. According to the final results of the review, EMs appear as a wide spectrum of items differentiated by strategic, contextual and methodological variables: what dimension is evaluated, what the purpose of the evaluation is and how the evaluation is conducted.
Table I (continued)

| Typology | EM | Source | Nature |
| --- | --- | --- | --- |
| Strategic | SWOT analysis | Humphrey (1967) | Qualitative |
| | Strategy map | Kaplan and Norton (1996) | Qualitative |
| | Critical success factor | Daniel (1961); Rockart (1979) | Qualitative |
| Breakdown/tree structures | Work breakdown structure | US Department of Defense (1957) | Qualitative |
| | Cost breakdown structure | Management literature | Quantitative |
| | Problem tree analysis | System analysis literature | Qualitative |
| Statistical | Six sigma | Motorola (1981) | Quali-quantitative |
| Multicriteria analysis | Multicriteria decision analysis | Early 1960s | Quantitative |
| Impact assessment | Environmental impact assessment | 1960s; National Environmental Policy Act, USA, 1969 | Quantitative |
| | Social impact assessment | 1960s; National Environmental Policy Act, USA, 1969 | Quali-quantitative |

Notes: CPM = critical path method; PERT = program evaluation and review technique; TQM = total quality management
References

Government Social Research Unit (2007), "Why do social experiments? Experiments and quasi-experiments for evaluating government policies and programmes", in HM Treasury, The Magenta Book: Guidance Notes for Policy Evaluation and Analysis, HM Treasury, London.

Hentschel, J. (1999), "Contextuality and data collection methods: a framework and application to health service utilization", Journal of Development Studies, Vol. 35 No. 4, pp. 64-94.

Kahan, B. (2008), "Excerpts from review of evaluation frameworks", Saskatchewan Ministry of Education, available at: http://idmbestpractices.ca/pdf/evaluation-frameworks-review.pdf (accessed 24 September 2014).

Kahan, B. and Goodstadt, M. (2005), The IDM Manual Basics, 3rd ed., Centre for Health Promotion, University of Toronto, Toronto.

Kaplan, R.S. and Norton, D.P. (1992), "The balanced scorecard: measures that drive performance", Harvard Business Review, January/February, pp. 71-79.

Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, MA.

Kellogg Foundation (1998), "Evaluation handbook", available at: www.wkkf.org (accessed November 2013).

Kellogg Foundation (2004), "Logic model development guide", available at: www.wkkf.org (accessed January 2014).

Miller, R.L. and Campbell, R. (2006), "Taking stock of empowerment evaluation: an empirical review", American Journal of Evaluation, Vol. 27 No. 3, pp. 296-319.

Neely, A.D., Adams, C. and Kennerley, M. (2002), The Performance Prism: The Scorecard for Measuring and Managing Stakeholder Relationships, Financial Times/Prentice Hall, London.

OECD (2013), Main Science and Technology Indicators, OECD Publishing, Paris, Vol. 2013 No. 1.

Office of Government Commerce (2005), Managing Successful Programmes: Delivering Business Change in Multi-Project Environments, HM Stationery Office, London.

Project Management Institute (2004), A Guide to the Project Management Body of Knowledge, 3rd ed., Project Management Institute, Newtown Square, PA.

Project Management Institute (2008), The Standard for Program Management, 2nd ed., Project Management Institute, Newtown Square, PA.

Reed, J.G. and Baxter, P.M. (2009), "Using reference databases", in Cooper, H., Hedges, L.V. and Valentine, J.C. (Eds), The Handbook of Research Synthesis and Meta-analysis, Russell Sage Foundation, New York, NY, pp. 73-102.

Rockart, J.F. (1979), "Chief executives define their own data needs", Harvard Business Review, Vol. 57 No. 2, pp. 81-93.

Rogers, P.J. and Fraser, D. (2003), "Appreciating appreciative inquiry", in Preskill, H. and Coghlan, A.T. (Eds), Using Appreciative Inquiry in Evaluation, Jossey-Bass, San Francisco, CA, pp. 75-84.

Rosenberg, L. (1969), for the US Agency for International Development (USAID).

Rossi, P.H., Lipsey, M.W. and Freeman, H.E. (2003), Evaluation: A Systematic Approach, Sage Publications, Thousand Oaks, CA.

Shackman, G. (2012), What Is Program Evaluation: A Beginner's Guide, The Global Social Change Research Project, Albany, NY.

Shadish, W.R., Cook, T.D. and Leviton, L.C. (1991), Foundations of Program Evaluation: Theories of Practice, Sage, Newbury Park, CA.

Shapiro, J. (1996), "Monitoring and evaluation", available at: www.civicus.org/new/media/Monitoring%20and%20Evaluation.pdf (accessed 24 April 2014).

Stone, R. and Brown, A. (1962), A Computable Model for Economic Growth, Cambridge Growth Project, Cambridge.

Stufflebeam, D.L. (1966), "A depth study of the evaluation requirement", Theory Into Practice, Vol. 5 No. 1, pp. 121-134.
Stufflebeam, D.L. and Shinkfield, A.J. (2007), Evaluation Theory, Models, and Applications, Jossey-Bass, San Francisco, CA.

Tranfield, D., Denyer, D. and Smart, P. (2003), "Towards a methodology for developing evidence-informed management knowledge by means of systematic review", British Journal of Management, Vol. 14 No. 3, pp. 207-222.

Wilson, D.B. (2009), "Systematic coding", in Cooper, H., Hedges, L.V. and Valentine, J.C. (Eds), The Handbook of Research Synthesis and Meta-analysis, Russell Sage Foundation, New York, NY, pp. 159-176.
Zwicky, F. (1967), in Zwicky, F. and Wilson, A. (Eds), New Methods of Thought and Procedure:
Contributions to the Symposium on Methodologies, Springer, Berlin.
Further reading
Coryn, C.L.S., Noakes, L.A., Westine, C.D. and Schroter, D.C. (2011), "A systematic review of theory-driven evaluation practice from 1990 to 2009", American Journal of Evaluation, Vol. 32 No. 2, pp. 199-226.

Darabi, A. (2002), "Teaching program evaluation: using a systems approach", American Journal of Evaluation, Vol. 23 No. 2, pp. 219-228.

Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families (ACF), US Department of Health & Human Services (HHS) (2010), "What is program evaluation?", in OPRE (Ed.), The Program Manager's Guide to Evaluation, Office of Planning, Research and Evaluation (OPRE), Washington, DC, pp. 6-12.

Rossi, M. (2004), I Progetti di Sviluppo, Franco Angeli, Milano.

Scriven, M. (2007), "Key evaluation checklist", available at: www.wmich.edu/evalctr/checklists (accessed 8 March 2014).
About the authors
Dr Roberto Linzalone is Senior Research Fellow in Management Engineering at the
University of Basilicata. His research interests focus on knowledge management, new
product development, project management and evaluation. Roberto received his degree in
Civil Engineering from the University of Basilicata and PhD in Advanced Systems of
Production from the Polytechnic University of Bari. He is a regular speaker at national and
international conferences and has authored and co-authored about 30 publications,
including books, articles, research reports and working papers. Roberto Linzalone is the
corresponding author and can be contacted at: roberto.linzalone@unibas.it
Dr Giovanni Schiuma is Professor of Arts-based Management and Director of the Innovation Insights Hub at the University of the Arts London. He is widely recognized as one of the world's leading experts in the arts in business and strategic knowledge management. He is an inspiring speaker and facilitator, with extensive research management expertise and an excellent ability to coordinate complex projects and lead research teams. He is creative and innovative, with an international mind-set and openness to addressing and solving key strategic research and organizational challenges. Giovanni is a leading international expert on knowledge management and intellectual capital strategy, and he is widely renowned for his work on the use of the arts for business, as well as his work on assessing and managing knowledge assets. He chairs the International Forum on Knowledge Assets Dynamics.