Using experiments in innovation policy (short)


Published in: Technology, Education

  1. Using experiments in innovation policy (Albert Bravo-Biosca)
  2. Three principles for delivering good innovation policy: 1. Experiment, 2. Data, 3. Judgment
  3. Innovation policy and experimentation: two interpretations
     • Supporting experimentation in the economy and society
     • Using experimentation to learn what works better to support innovation
  4. Supporting experimentation in the economy and society
     "The task of industrial policy is as much about eliciting information from the private sector about significant externalities and their remedies as it is about implementing appropriate policies" (Rodrik, 2004)
  5. Innovation policy focused on information discovery
     The public sector as a partner/enabler in the innovation process, helping to reduce uncertainty in the private sector:
     – A project-based conception of innovation policy
     – Sunset clauses framed explicitly in terms of learning: "the policy ends when the learning ends"
     – Specific learning methods would include: experimental development funds, testbeds, challenge prizes, observatories…
  6. Using experimentation to learn what works better to support innovation
     • Large amounts of money are invested in schemes to support innovation, but there is very limited evidence on their effectiveness
     • Typical approach: introduce large new interventions without prior small-scale testing
     • Experimental approach: set up pilots to experiment with new instruments, evaluate them using rigorous methods, and scale up those that work (continuing to experiment to improve them)
     → The experimental approach is a smarter, cheaper and more effective way to develop better innovation policy instruments
  7. What is an experiment? A continuum of definitions:
     • Trying something new: no rigorous learning or evaluation strategy; no real "testing mindset"; a "pilot"
     • Trying something new and putting in place the systems to learn: testing a hypothesis; codifying and sharing the resulting knowledge; sometimes, but not always, with some form of control group
     • RCTs: a rigorous formal research design; randomized controlled trials, with the control group created by the programme manager/researcher using a lottery; field vs. "lab" experiments; different from a natural experiment
  8. What is a randomized controlled trial? Design → Randomize → Implement → Compare
     • Participants can be individuals, but also firms, public organizations, villages, regions, etc.
     • Participants are randomly placed in a "treatment" group (which receives the intervention) and a "control" group (which does not), and the impact of the treatment is estimated by comparing the behaviour and outcomes of the two groups
     • Different alternatives for running the lottery (e.g., individual vs. group-level randomization, etc.)
     • 1/0 vs. A/B experiment: the control group gets nothing (0) vs. an alternative intervention (B)
     • Collect data using surveys and/or administrative data sources and estimate the impact of the intervention
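The randomize-and-compare logic of the slide above can be sketched in a few lines. This is an illustrative sketch only: the firm names, seed and outcome figures are invented, and the impact estimate is the simplest possible one (a difference in mean outcomes).

```python
import random
import statistics

def randomize(participants, seed=42):
    """Randomly split participants into a treatment and a control group."""
    rng = random.Random(seed)   # a fixed, recorded seed keeps the lottery auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def estimate_impact(treatment_outcomes, control_outcomes):
    """Estimate the intervention's impact as the difference in mean outcomes."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical pilot: six firms, outcomes collected after the intervention
firms = ["firm_a", "firm_b", "firm_c", "firm_d", "firm_e", "firm_f"]
treatment, control = randomize(firms)
impact = estimate_impact([5.0, 6.0, 7.0], [4.0, 5.0, 6.0])  # invented outcome data
print(impact)  # 1.0
```

Because assignment is random, the two groups are comparable in expectation, so this simple difference is an unbiased estimate of the intervention's effect.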
  9. Why are RCTs useful?
     "Typical" evaluations:
     • Give a good answer to "how well did the programme participants perform?" (before and after)
     • Fail to provide a compelling answer to "what additional value did the programme generate?", which requires good knowledge of how participants would have performed in the absence of the programme
     • No credible control group (e.g., biased matching, selection biases)
     • Rely on recipient satisfaction surveys, "what if" questions, case studies
     RCTs:
     • The lottery in an RCT addresses selection biases
     • Differences between the treatment and control groups are the result of the intervention
     • Provide an accurate/unbiased estimate of the impact of the intervention
     • The "gold standard" for evaluation
     …even if they also have some weaknesses and do not always apply (so not the solution for everything, but still a very valuable tool, yet almost missing in the innovation policy area)
  10. RCTs can have two non-mutually exclusive aims:
      • Testing the impact of an intervention (focus: additionality; hypothesis, e.g., "the intervention has an effect")
      • Understanding the behaviour of individuals and what drives it, a "mechanism experiment" (hypothesis, e.g., "managers' actions are driven by inertia")
  11. Some misconceptions about RCTs (the criticism, and a potential response):
      • "Unethical"
        – Assumes the intervention benefits rather than harms recipients
        – Can provide an alternative treatment (compare two alternative interventions, or the same intervention under two different sets of conditions, rather than "all or nothing") → replaces decisions based mostly on "opinions" with "data"
        – There are often insufficient resources to support all potential recipients in any case
        – A lottery can be fairer (and cheaper) than some panel-based scoring approaches
        – Using resources on programmes that don't work deprives other, more effective programmes of funding → experimental pilots reduce this risk
      • "Expensive"
        – It is often the programme, not the evaluation, that is expensive
        – Data collection is expensive regardless of the evaluation method used
        – RCTs require smaller sample sizes → cheaper data collection
        – Analysis can be quite cheap (a simple comparison between groups), even if the initial design requires more work
      • "Findings are not applicable to other settings" (internal vs. external validity)
        – Context matters, as in any other type of evaluation, but some lessons can be generalized (still, multiple evaluations are always desirable)
      • "Cannot capture unexpected/unintended effects"
        – Innovation is uncertain, so it may be difficult ex ante to identify all potential effects; in contrast to before/after approaches, with an RCT you can collect data ex post on an unanticipated outcome of particular interest (even if not ideal)
      • "Doesn't tell you why there is an effect"
        – It is possible to design the RCT to find this out
      • "Doesn't use qualitative methods alongside"
        – Can be combined with qualitative methods; mixed methods can be the most informative approach
  12. Key questions to design an RCT:
      • What intervention do you want to test?
      • Does the control group benefit from an alternative intervention?
      • What is the outcome measure of interest?
      • Is data available for the outcome?
      • At what level should randomization be done?
      • How large should the treatment and control groups be?
      • Many other design choices are available (e.g., randomizing the "treatment" vs. "the promotion of the intervention" in a randomized encouragement design, etc.)
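The "how large should the groups be?" question is normally answered with a power calculation. A minimal sketch for a two-arm trial comparing mean outcomes, using the standard formula with a 5% two-sided significance level and 80% power (the effect size and standard deviation in the example are illustrative assumptions, not figures from the deck):

```python
import math

def sample_size_per_arm(min_effect, outcome_sd, z_alpha=1.96, z_beta=0.84):
    """Participants needed in each arm to detect a difference in means of
    `min_effect`, given outcome standard deviation `outcome_sd`, at a 5%
    two-sided significance level (z_alpha = 1.96) and 80% power (z_beta = 0.84)."""
    n = 2 * (z_alpha + z_beta) ** 2 * (outcome_sd / min_effect) ** 2
    return math.ceil(n)

# Detecting a half-standard-deviation effect needs roughly 63 firms per arm
print(sample_size_per_arm(min_effect=0.5, outcome_sd=1.0))  # 63
```

Note the quadratic trade-off: halving the minimum detectable effect quadruples the required sample, which is why the randomization level and outcome measure questions above matter so much for cost.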
  13. The use of RCTs is increasing around the world:
      • Health and development: JPAL, IPA, World Bank, Oxfam…
      • Social experimentation: the French experimentation fund for youth, UK job centres
      • Education: Harvard EdLabs, UK Education Endowment Foundation…
  14. Over the last 10 years, the JPAL network has worked with NGOs, governments and international organizations to conduct 445 randomized evaluations on poverty alleviation in 54 countries
  15. But there is very limited use of RCTs on innovation, entrepreneurship and business growth in advanced economies… even though it is feasible
  16. Creative Credits: Nesta's vouchers RCT
      • A business-to-business innovation voucher experiment run by Nesta
      • It awarded 150 vouchers of £4,000 each, with £1,000 co-funding from SMEs, to pay for collaborations with creative businesses
      • An RCT with longitudinal evaluation
      (Diagram: innovation voucher → business-led innovation project → build connections, with formal evaluation)
  17. Creative Credits: The results
      • High short-term input additionality: SMEs receiving a Credit were 78% more likely to undertake their project
      • Short-term output additionality: strong evidence of increased innovations after six months
      • No significant long-term additionality: no significant output, network or behavioural additionality after 12 months
      Source: Bakhshi et al (2013)
  18. Creative Credits: Methods
      • Mixed-methods evaluation → qualitative analysis is extremely useful to complement rigorous quantitative analysis, but cannot replace it
      • Traditional evaluation methods used in parallel gave a misleading, much more positive assessment of the scheme's impact, contradicting the RCT evaluation findings
      • Similar results to those obtained in the Dutch innovation vouchers RCT
      • See Bakhshi et al (2013) for the full results
  19. The UK is adopting RCTs in many different areas:
      • Behavioural experiments: the "nudge unit" (BIT), e.g., HMRC letters
      • Job centres: unemployment training
      • Education: 50 RCTs ongoing in 1,000+ schools
      • Business support: Growth Vouchers (BIS)
      • Innovation: Innovation Vouchers (TSB)
  20. Growth Vouchers
      • £30 million budget for a new BIS programme of advice for businesses, which will be run as a trial
      • 25,000 micro and small businesses on an equal cost-sharing basis
      • Vouchers will be available to firms with fewer than 50 staff that are first-time users of business advice
      • Aims: increase the use of business advice, and collect robust evidence
      • Research questions:
        – Does a subsidy encourage businesses to seek and use business advice?
        – What is the impact of advice on the outcome measures (sales, employment, turnover, profit)?
        – What type of advice is it most effective to subsidise?
  21. Innovation Vouchers
      • A Technology Strategy Board programme to connect UK SMEs with knowledge providers (university-based and others)
      • £5,000 vouchers (rolling programme)
      • Process:
        1. Very short application form (with evaluation questions embedded)
        2. Screening out of bad applicants
        3. Lottery to select recipients (good for evaluation, and has low administration costs)
        4. Track innovation behaviour, relationships with knowledge providers, and firm performance using a survey instrument and administrative data
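Step 3 above, the lottery, is trivially cheap to run, which is part of why it beats panel scoring on administration costs. A sketch of how such a draw might look (the applicant IDs, voucher count and seed are all hypothetical):

```python
import random

def voucher_lottery(eligible_applicants, n_vouchers, seed=2013):
    """Randomly select voucher recipients from the screened applicant pool.
    Non-selected eligible applicants form a natural control group for the RCT."""
    rng = random.Random(seed)  # a recorded seed keeps the draw auditable
    recipients = rng.sample(eligible_applicants, n_vouchers)
    control = [a for a in eligible_applicants if a not in recipients]
    return recipients, control

applicants = [f"sme_{i:03d}" for i in range(200)]   # hypothetical screened pool
recipients, control = voucher_lottery(applicants, n_vouchers=50)
print(len(recipients), len(control))  # 50 150
```

Because the draw happens only after screening, every applicant in the pool is programme-eligible, so the non-winners are a credible counterfactual rather than a pool of rejects.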
  22. Why haven't governments and researchers used more RCTs to understand innovation and its drivers, in contrast to other policy areas?
      • Governments: a lack of examples showcasing their feasibility and value has made governments and intermediary organizations very reluctant to consider using RCTs in this area
      • Researchers: very few academic researchers in related fields have developed the capabilities and support infrastructure necessary to set up and run experiments
      • Missing networks: the networks between researchers and practitioners are missing, so even when they would be interested in collaborating on an RCT, they typically don't know how to find each other
      • Insufficient knowledge: there is insufficient knowledge about when it is appropriate and feasible to use RCTs in this domain, and a widely held misperception that RCTs need to be expensive
      → A new global innovation, entrepreneurship and growth lab to tackle these four factors simultaneously
  23. Nesta is seeding a new international initiative for experiments on innovation, entrepreneurship and growth: use RCTs to build the evidence base on the most effective approaches to increase innovation, support entrepreneurship and accelerate business growth
  24. The approach: identify and pursue opportunities for experimentation, bringing together social science researchers interested in these questions and organizations (whether public or private) with the ability to undertake experiments as programme delivery partners. Experiments that:
      • Generate actionable insights for decision makers, by piloting new programmes and creating better evidence on their impact
      • Push the knowledge frontier forward, by giving researchers the opportunity to use RCTs to test different hypotheses on the drivers of innovation
  25. What will this new lab do?
      • Develop and run experiments: work with public programmes and other delivery organizations to support their adoption, matching them with interested researchers
      • Build a community of researchers that undertake RCTs
      • Showcase RCTs' value with real examples, to advocate wider use
      • Improve the knowledge base on how to do RCTs in this space, learning when they work and when they don't, and hence when to use them and when not to
      • Act as an aggregator and translator of the evidence generated through RCTs across countries
      → A version of the JPAL model, but focused on innovation, entrepreneurship and growth, with the aim of expanding the research and evaluation toolkit in these areas by facilitating the use of RCTs
  26. Thank you. Get in touch if you would like to find out more.