Ken Guy Presentation

  1. In Search of the Missing Link: The challenge of establishing the link between micro-level activities and impacts at the macro-level
     • Ken Guy, Wise Guys Ltd
     • Research and the Knowledge Based Society – Measuring the Link
     • European Conference on Good Practice in Research Evaluation and Indicators
     • NUI, Galway, May 24th 2004
  2. Route Map
     • Mission Impossible
     • Evaluation Approaches
     • Evaluations of the Effects of Single Instruments
     • Evaluations that Focus on the Functioning of Single Instruments
     • Evaluations of Multiple Instruments or Policy Portfolios
     • Future Evaluation Needs in a European Context
     • Intellectual Underpinnings
  3. Mission Impossible
     • Ex post evaluations invariably involve attempts to assess the downstream impacts of policy-induced events or activities
     • This often involves assessing the socio-economic impacts of publicly funded research and development (R&D) projects
     • Such assessments are rarely easy
     • Although it is sometimes possible to demonstrate a loose relationship or association over time between aggregate R&D spend and aspects of macro-level economic performance, the complexity of interactions within innovation systems makes it almost impossible to demonstrate a causal relationship between specific R&D spend and aggregate macro-economic performance
     • Demonstrating that specific socio-economic impacts can be attributed in part to individual R&D projects is also fraught with difficulties
  4. Direct and Indirect Impacts
     • Typically, the causality and attribution problems associated with complex social systems confound attempts to quantify relationships in any meaningful way
     • On a small number of occasions, however, it is possible to establish the existence of direct links at a project level between specific R&D inputs, R&D outputs, organisational outcomes and broader socio-economic impacts
     • Demonstrating the existence of indirect links in complex feedback systems is much more problematic
     • Moreover, whereas simple quantitative statements can often be made about R&D inputs and immediate outputs, precision and confidence levels decline very rapidly as one progresses from ‘first circle’ to ‘second circle’ and ‘third circle’ impacts
  5. Expanding Circles
     • First circle impacts fall on the research teams involved in projects, e.g. increased know-how, expanded research networks etc.
     • Second circle impacts fall on the organisations to which these research teams belong, e.g. improved productivity, product developments, increased sales etc.
     • Third circle impacts are those which affect the broader socio-economic environment
  6. Evaluation Approaches
     • The evaluation community has responded to the political imperative to assess the consequences of policy interventions by evolving a variety of pragmatic approaches and techniques designed – however imperfectly – to assess impacts
     • In practice, four classes of evaluation activity are of particular importance:
       - Evaluations of the effects of single instruments
       - Evaluations that focus on the functioning of single instruments
       - Evaluations of multiple instruments or policy portfolios
       - Comparisons of both single and multiple instruments across regions and countries
  7. Evaluation Design Steps
     • Deepen mutual understanding by sharing information about the object to be evaluated (e.g. by running over its history, policy context, rationale, aims and objectives, mode of implementation, barriers to implementation and expected scale of impacts)
     • Deepen understanding of the rationale for the evaluation, its timing, expected outcome, mode of delivery and target audiences, concentrating in particular on the role of the evaluation as a tool for accountability, fine-tuning, learning or strategy development
     • Discuss the strategic focus of the evaluation: the range of issues it should address and the questions it should answer
     • Discuss the tactical focus of the evaluation, i.e. the range of methods, tools and indicators dictated by the chosen strategic focus (typically the usual range of case studies, interview-based techniques, peer reviews, questionnaires, output counts and simple data analysis techniques, though there is an increasing need for more sophisticated indicator construction, data analysis and modelling approaches)
  8. Evaluation Issues [slide content not captured in the transcript]
  9. Evaluations of the Effects of Single Instruments
     • A common strategy in these evaluations is to identify the target audiences and collect data on:
       - Input variables (e.g. the resources committed by individual organisations and by the public sector)
       - Structural variables (e.g. size of firm, sector, nature of work etc.)
       - Goal variables (e.g. nominal data on the individual goals of target organisations and ordinal data on the importance of these goals)
       - Goal attainment (e.g. ordinal data on the extent to which individual organisational goals are met)
       - Output variables (e.g. lists and counts of the outputs generated by these initiatives, plus data concerning the value of outputs)
       - Process or progress variables (e.g. assessments of the influence of various obstacles or success factors on project progress)
       - Impact variables (e.g. lists, counts and scale assessments of different types of impact on different sets of actors)
     A sketch of how such a per-project record might be organised in code follows below.
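These variable classes map naturally onto a per-project data record. The following Python sketch is purely illustrative: the field names and ordinal scales are assumptions for exposition, not the coding scheme of any actual programme.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectRecord:
    """One evaluated project; all field names and scales are illustrative."""
    project_id: str
    # Input variables: resources committed to the project
    public_funding: float            # e.g. EUR
    private_funding: float           # e.g. EUR
    # Structural variables
    firm_size: int                   # employees
    sector: str
    # Goal variables: goal name -> importance (ordinal, 1 = low .. 5 = high)
    goals: dict = field(default_factory=dict)
    # Goal attainment: goal name -> extent met (ordinal, 1 .. 5)
    goal_attainment: dict = field(default_factory=dict)
    # Output variables: output type -> count (papers, patents, prototypes...)
    outputs: dict = field(default_factory=dict)
    # Process/progress variables: factor -> assessed influence (ordinal, 1 .. 5)
    progress_factors: dict = field(default_factory=dict)
    # Impact variables: impact type -> scale assessment (ordinal, 1 .. 5)
    impacts: dict = field(default_factory=dict)
```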
  10. Evaluation Indicators: Swedish ITYP Programme [slide content not captured in the transcript]
  11. Project Variables for an RTD Programme [slide content not captured in the transcript]
  15. Data Analysis
     • Once data on all the relevant variables have been collected, the tactic is to search for relationships between them
     • The common thrust of most evaluations of this type is to make statements of a causal nature linking inputs to eventual outcomes, accompanied by explanations of why these relationships exist and how certain factors influence their magnitude and direction
  16. Major Cluster of Significant Relationships: Brite-Euram and SMT [correlation diagram]
     • Correlation bands used in the diagram: significant if r >= 0.058; weak (0.058 <= r < 0.3); medium (0.3 <= r < 0.5); strong (r >= 0.5)
     • Variables linked: Nature (Core Projects, Exciting Projects, Costly Projects); Project Structure Factor; Ambition Factor; Opportunity Factor; Progress; Achievements (Exploitation-oriented, Business-oriented, Knowledge-oriented); Goal Attainment; Impact (Competence Impacts; First, Second and Third Circle Impacts)
     A minimal sketch of this kind of correlation search follows below.
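The diagram on slide 16 summarises pairwise correlations between project-level variables. As a minimal sketch of how such a search might be automated: the data here are randomly generated and the variable names invented; only the correlation bands are taken from the slide.

```python
import numpy as np
import pandas as pd

# Invented data: one row per project, columns are derived evaluation variables.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)),
                  columns=["ambition", "opportunity", "progress", "goal_attainment"])

def band(r: float) -> str:
    """Classify |r| using the bands quoted on slide 16."""
    a = abs(r)
    if a < 0.058:
        return "not significant"
    if a < 0.3:
        return "weak"
    if a < 0.5:
        return "medium"
    return "strong"

corr = df.corr()  # Pearson correlation matrix
for i, x in enumerate(corr.columns):
    for y in corr.columns[i + 1:]:
        r = corr.loc[x, y]
        print(f"{x} vs {y}: r = {r:+.3f} ({band(r)})")
```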
  17. Key Success Factors
     • The constitution of core projects: strategically important, necessary, short-term, applied, mission-oriented projects in core technology areas
     • Adventurous, risky and technically complex projects that project partners (and others) find exciting
     • Large, critical-mass projects with costly agendas and high budgets
     • Project structures in which interested and committed partners of proven technical and managerial competence have clear, complementary goals and deploy appropriate levels of financial and human resources
     • Projects with a focus on ambitious goals and interesting, technologically complex R&D agendas
     • Project partners capable of recognising windows of opportunity and using them to exploit project results
  18. Evaluations that Focus on the Functioning of Single Instruments
     • Evaluations concentrating on the impact of interventions focus attention on target audiences and explore how these have been affected
     • The efficiency of an initiative can then be assessed as a simple output/input ratio if commensurable units are used
     • Often, however, the focus of attention is not just the target audience but also (or even solely) the way in which a public authority has implemented the action as a whole; in this instance it is the ‘process efficiency’, or the efficiency of implementation of the initiative, that is the legitimate object of study
     • Process implementation evaluations rarely employ sophisticated methodologies; often they are qualitative in nature and resemble ‘investigative journalism’, relying on interviews, desk-top analyses of data collected via normal management monitoring activities, and comparisons with milestones in planning documents
  19. Evaluations of Multiple Instruments or Policy Portfolios
     • The need for holistic policy mixes to nurture the development of innovation systems, whether at regional, national or even international levels, implies a commensurate need for methodological approaches capable of evaluating the operation and impact of very complex combinations of policy instruments
     • To date, evaluation methodologies have not been up to this task, though some approaches do possess merit
     • Two approaches in particular deserve mention:
       - Aggregative methods
       - Macroeconomic methods
  20. Aggregative and Macroeconomic Methods
     • Aggregative methods attempt to ‘sum’ or ‘aggregate’ the impacts found by individual single-instrument evaluations, though this is rarely attempted in a quantitative fashion
     • Macroeconomic methods are of two broad types:
       - Macroeconometric models based on a set of econometrically estimated structural equations
       - Computable general equilibrium (CGE) models
     • These methods consider system-level variables such as R&D investment, productivity and employment, and use models of the relationships between them to estimate the impact of changes in one variable on another (e.g. the effect of a change in R&D investment on a ‘resultant’ change in a socio-economic impact variable)
     A deliberately minimal estimation sketch follows below.
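To make the macroeconometric idea concrete, here is a deliberately minimal sketch: a single structural equation relating productivity growth to R&D intensity, estimated by ordinary least squares. Real macroeconometric models involve systems of such equations; the data, coefficients and scenario below are all invented for illustration.

```python
import numpy as np

# Invented annual observations: R&D intensity (% of GDP) and
# labour productivity growth (%).
rng = np.random.default_rng(1)
rd = rng.uniform(1.0, 3.5, size=30)
prod_growth = 0.8 * rd + rng.normal(0.0, 0.5, size=30)

# One structural equation: prod_growth = a + b * rd + error, fit by OLS.
X = np.column_stack([np.ones_like(rd), rd])
coef, *_ = np.linalg.lstsq(X, prod_growth, rcond=None)
a, b = coef
print(f"intercept a = {a:.2f}, slope b = {b:.2f}")

# Using the fitted relationship for a 'what if' scenario: the estimated
# effect of raising R&D intensity by 0.5 percentage points.
print(f"implied change in productivity growth: {0.5 * b:+.2f} pp")
```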
  21. Composite Innovation System Indicators
     • One way forward in this sphere is to develop composite indicators that try to capture the essence of innovation system concepts that are often quite complex
     • These include:
       - Social and Human Capital
       - Research Capacity
       - Technology and Innovation Performance
       - Absorptive Capacity
  22. A Simple Innovation System [diagram arranging four components along two axes: public vs private sector, and knowledge creators vs knowledge users]
     • Social and Human Capital: Universities; S&T Training and Education
     • Research Capacity: Universities, Government Laboratories; Basic Scientific Research
     • Technology and Innovation Performance: Creative Firms; Applied RTD and Product/Process Development
     • Absorptive Capacity: Follower Firms, Intermediate and End Users; Market for Goods and Services
  23. Social and Human Capital Indicators
     • The concept of social and human capital is closely related to measures of levels of education in a country and their maintenance. A social and human capital proxy can be based on the average of three indicators:
       - A human capital investment indicator based on educational expenditure (percentage of GDP spent on education)
       - A more output-based education performance indicator (percentage of the working population with third-level degrees)
       - An informal training indicator (participation in lifelong learning)
  24. Research Capacity Indicators
     • The long-term strength of a country’s research system rests largely on factors such as:
       - The capacity of a country to deliver highly qualified researchers (science and engineering graduates as a percentage of the working population)
       - The amount of public resources a country is prepared to invest in R&D (GOVERD and HERD as a percentage of GDP)
       - The performance of a country’s national research system (number of publications per million population)
  25. Technological and Innovative Performance Indicators
     • Technological performance is reflected in:
       - The R&D performed by business (BERD as a percentage of GDP)
       - The number of patents obtained (triad patents per capita)
       - The innovation intensity of companies (innovation expenditure as a percentage of total sales)
  26. Absorptive Capacity Indicators
     • The concept of absorptive capacity should reflect a measure of the successful diffusion of new technologies throughout an economy. As such it can be represented by the weighted average of:
       - Sales of new-to-market products
       - An indicator measuring industry’s capacity to renew product ranges and adjust to technological change
       - A more process-oriented measure of technological improvement, namely labour productivity
       - A competitiveness indicator such as relative trade performance in high-tech goods
     A sketch of how such composite indicators can be constructed follows below.
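Slides 23 to 26 all follow the same recipe: rescale heterogeneous sub-indicators so they are comparable, then combine them with a simple or weighted average. A minimal sketch under those assumptions; the country names, values and weights below are all invented.

```python
import pandas as pd

# Invented country data; columns mirror sub-indicators from slides 23 and 26.
df = pd.DataFrame(
    {
        "edu_spend_gdp":       [5.1, 6.8, 4.4],    # % of GDP spent on education
        "tertiary_share":      [24.0, 31.0, 19.0], # % of workforce with degrees
        "lifelong_learning":   [8.0, 18.0, 5.0],   # participation rate, %
        "new_to_market_sales": [7.0, 12.0, 4.0],   # % of total sales
        "product_renewal":     [0.4, 0.7, 0.3],    # renewal-capacity index
        "labour_productivity": [38.0, 45.0, 30.0], # output per hour worked
        "hightech_trade":      [1.1, 1.6, 0.8],    # relative trade performance
    },
    index=["Country A", "Country B", "Country C"],
)

# Min-max rescaling puts every sub-indicator on a common 0-1 scale.
norm = (df - df.min()) / (df.max() - df.min())

# Social and human capital: plain average of its three sub-indicators.
social_human = norm[["edu_spend_gdp", "tertiary_share",
                     "lifelong_learning"]].mean(axis=1)

# Absorptive capacity: weighted average (weights are purely illustrative).
w = {"new_to_market_sales": 0.3, "product_renewal": 0.2,
     "labour_productivity": 0.3, "hightech_trade": 0.2}
absorptive = sum(weight * norm[col] for col, weight in w.items())

print(pd.DataFrame({"social_human_capital": social_human,
                    "absorptive_capacity": absorptive}))
```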
  27. Comparisons of Single/Multiple Instruments Across Regions and Countries
     • Comparing the relative efficiency, effectiveness and impact of similar policies in different innovation systems is conceptually more difficult than assessing impacts within any one innovation system
     • Comparing the impact of policy mixes composed of many different instruments is even trickier
     • Despite these difficulties, many contemporary efforts to evaluate RTD&I involve cross-country ‘benchmarking’ comparisons
     • Benchmarking is a term describing “a continuous, systematic process for comparing the performance of organisations, functions, economic processes, policies or sectors of business against the ‘best practice in the world’”
  28. Naïve Benchmarking
     • ‘Naive benchmarking’ efforts focus not on policy assessments but on simple comparisons and rankings of indicators representing particular characteristics of an innovation system or aspects of innovation system performance, e.g. R&D investment levels, patent counts, number of science parks etc. (a tiny illustration follows below)
     • More elaborate forms depend on similar comparisons and rankings of more complex composite indicators representing concepts such as research capacity or technological and innovative performance
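In code terms, naive benchmarking goes no further than a league table. A tiny illustration on invented values:

```python
import pandas as pd

# Invented indicator: triad patents per million population.
patents = pd.Series({"A": 32.0, "B": 58.0, "C": 12.0, "D": 41.0})

# A naive benchmark is just the ranking; it says nothing about the
# processes that produced these positions.
print(patents.rank(ascending=False).astype(int).sort_values())
```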
  29. Intelligent Benchmarking
     • ‘Intelligent benchmarking’ efforts do not place reliance on crude ranking and emulation activities
     • They are differentiated by the efforts made to understand the processes that underpin shifting indicator patterns in different settings
     • Moreover, since the relationships between variables are only weakly understood and not easily amenable to quantitative analysis, most forms of intelligent benchmarking make strong use of qualitative techniques to assess ‘best practices’ and involve concerted efforts to communicate and share these analyses among the stakeholders involved in different innovation systems
  30. Future Evaluation Needs in a European Context
     • The socio-economic goals set for the Fifth Framework Programme (FP5) were a strong driver of developments in the field of impact assessment
     • These high-level goals placed enormous pressure on policymakers and evaluators alike to demonstrate that FP5 was leading to significant socio-economic impacts
     • Despite the developments that have taken place in impact assessment methodologies, it is still possible to argue that efforts to establish quantitative assessments of the wider socio-economic impacts associated with upstream R&D inputs are not only difficult to implement but also misconceived
  31. The Rationale for Collaborative R&D
     • Such attempts are based on the notion that policies are inspired by a belief in the existence of simple causal chains between inputs and ultimate downstream impacts
     • In reality this is rarely true, and it is certainly not true for collaborative R&D programmes such as the EU Framework Programmes
     • Their rationale derives from an appreciation that policies are often needed to stimulate many of the knowledge-related interactions between the stakeholders of an innovation system if it is to function effectively (essentially rectifying market and information failures)
  32. Making Innovation Systems Work
     • Collaborative programmes thus aim to facilitate the creation and expansion of knowledge bases via the sharing of complementary assets between researchers in universities, firms and other ‘knowledge agents’
     • Certainly there is the presumption that these ‘knowledge goals’ and ‘networking goals’ will ultimately allow firms to exploit their enhanced knowledge bases in an effective and efficient manner, but it is also recognised that the path to such exploitation is not necessarily direct, with many other variables likely to affect both the economic fortunes of the firms involved and the sectors and markets within which they operate
  33. Implications for Evaluation
     • The most important implication for evaluation is that efforts to determine goal attainment should focus primarily on the goals related to the rationale underpinning collaborative R&D (i.e. knowledge and networking goals and the rectification of market and information failures)
     • The most important evaluation issues thus revolve around the degree of information sharing and learning in projects, the nature and extent of improved knowledge and networking capabilities, and the degree to which these capabilities are enhanced over and above initial levels
     • In the terminology used earlier, these are ‘first circle’ issues and impacts, not second or third circle ones
  34. Socio-economic Indicators
     • The next Five-Year Assessment of the FPs will still need to track changes in broad socio-economic indicators as a check that the rationale for the FPs remains sound
     • It should also be possible to check whether perceived impacts, aggregated up to regional or national level, are in line with the expectations and needs of individual regions and countries
     • This is currently hindered by a degree of sensitivity concerning the presentation and publication of FP results on a country-by-country basis
     • It is important to rectify this if impacts at the level of regional and national innovation systems are to be explored
  35. Benchmarking and the ERA
     • Benchmarking activities will play an important role in future evaluations of the FPs, given the rationale underpinning FP6 and the drive to create the European Research Area (ERA)
     • The ERA is based on a recognition that scientific and technological resources in the EU are fragmented and sub-critical, and that their effective allocation, distribution and incorporation into an efficient and effective European system of innovation is unlikely without a concerted effort by all Member States to rationalise, restructure and remobilise resources, avoid excessive duplication and focus on collective priorities
     • FP6 was designed to catalyse and facilitate this type of restructuring
  36. Is FP6 Working?
     • A key role of any future evaluation of FP6 and the ERA will thus be to ascertain whether or not the intended restructuring is occurring, and how FP6 has contributed to this process (with all the same provisos concerning estimates of causality and attribution)
     • Given the start date of FP6 and the time spans involved in collecting all the necessary data on structural change, it will not be possible during the forthcoming Five-Year Assessment to explore the impact of FP6 on structural change, but it will be important to establish baseline indicators that can be used to benchmark overall system performance in the years to come
  37. Baseline Indicators
     • These indicators should cover:
       - Human mobility patterns
       - Science infrastructure development
       - The distribution of sources of research funding and their recipients
       - The concentration and consolidation of resources into networks of excellence
       - The establishment of critical masses in strategic areas
       - The development of regional clusters of R&D and innovation actors
     • To date, the organisation of the Five-Year Assessment has been the responsibility of DG Research
     • In future, it will be imperative to enlist Member States in the task of collecting and compiling all the necessary benchmarking data at national (and regional) levels
  38. Intellectual Underpinnings
     • The socio-economic landscape of Europe at the start of the twenty-first century is full of challenges
     • In confronting many of these challenges, the role played by science, technology and innovation is crucial
     • From a policy perspective, it is vital to enhance our understanding of these activities and interactions if the appropriate policy levers are to be applied and improvements achieved, not only in the performance of scientific and technological activities themselves but also in the performance of all the socio-economic systems that depend upon them
  39. Evaluation’s Missing Link
     • Critically, the practice of evaluation also needs to be informed by the results of such research. Evaluation and impact assessment involve much more than the simple application of methodologies and analytical techniques; they need to be guided by the theories and hypotheses developed through systematic research into the structure and dynamics of innovation systems
     • Just as macro-level economic developments are inextricably associated with micro-level R&D activities (however difficult it might be to establish the ‘missing link’), the future health of evaluation and impact assessment practices ultimately depends on the strength of their own ‘missing link’ with the socio-economic research base
