
Carrot, stick or competition: experimental evidence of performance contracts in a multitasking setting


Presentation at the iHEA World Congress, July 2015


  1. Carrot, stick or competition? Experimental evidence of performance contracts in a multitasking setting
     Mylène Lagarde, London School of Hygiene & Tropical Medicine
     Duane Blaauw, University of the Witwatersrand
  2. Motivation
     • Performance contracts increasingly used in health
     • A variety of designs used: bonuses, withheld payments, competitive tournaments, etc.
     • Several potential shortcomings of performance contracts:
       – Incomplete contracts with non-incentivised activities
       – Crowding-out effects on intrinsic motivation
     • Understanding the effects of P4P incentives is key to informing policy design
  3. Limits of the field…
     • Mixed evidence on the effect of performance contracts in health (Scott et al. 2011)
     • Many challenges in (observational) studies: self-selection, measurement error, other interventions, role of idiosyncratic contextual factors, etc.
     • Enormous variation in designs (size of incentives, targeted activities, payment attributes, etc.)
     • Political challenges associated with implementing RCTs on the supply side (China, Argentina, Benin!)
  4. … vs. advantages of the lab
     • Laboratory experiments allow a clear assessment of the impact of different incentives in a simple setting
     • Increasingly used by health economists as a way of testing simple incentives / policy designs (Hennig-Schmidt et al. 2011, Brosig-Koch et al. 2013, Keser et al. 2013, Green 2014, Lagarde and Blaauw 2014)
     • Some of these health economics experiments look at P4P
       – Only two employ a real-effort task, which can elicit intrinsic motivation (Green 2014, Lagarde and Blaauw 2014)
       – None tests the relative effects of different designs
  5. The “medical game”: a real-effort task
     • Data entry: 22 blood test results per laboratory form
     • Diagnostic identification
     • 10 minutes to enter the results of 10 laboratory reports + make the diagnosis for each
     Example laboratory report (ref. number 1), haematology and biochemistry results (test, result, units, reference range):
       – Full Blood Count: RED BLOOD CELLS 3.2 ×10^12/L (4.5–6.5); HAEMOGLOBIN 9.4 g/dL (13.8–18.8); HAEMATOCRIT 28.5 % (40–56); MCV 89.1 fL (79–100); MCH 29.4 pg (27–35); MCHC 33.0 g/dL (29–37); WHITE BLOOD CELLS 4.5 ×10^9/L (4.0–12.0); PLATELETS 261 ×10^9/L (150–450)
       – U&E: SODIUM 142.0 mmol/L (135–150); POTASSIUM 5.1 mmol/L (3.5–5.1); CHLORIDE 102.3 mmol/L (98–107); BICARBONATE 23.1 mmol/L (21–29); UREA 2.5 mmol/L (2.1–7.1); CREATININE 88.1 μmol/L (80–115)
       – Liver Function Test: BILIRUBIN - TOTAL 17.1 μmol/L (2–26); BILIRUBIN - CONJUGATED 5.7 μmol/L (1–7); ALT 10.5 IU/L (0–40); AST 24.6 IU/L (15–40); ALKALINE PHOSPHATASE 106.4 IU/L (53–128); TOTAL PROTEIN 70.5 g/L (60–80); ALBUMIN 40.8 g/L (35–50); GLOBULIN 29.7 g/L (19–35)
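As a rough illustration of how the incentivised performance measure (the number of correct entries) could be scored, the sketch below compares a submitted form with the reference report field by field. The dictionary representation, the exact-match rule and the count_correct_entries helper are hypothetical illustrations, not the scoring procedure actually used in the experiment.

```python
# Illustrative sketch only: scoring a submitted form against the reference
# laboratory report. The dict representation and exact-match rule are assumed.

def count_correct_entries(submitted: dict, reference: dict) -> int:
    """Count submitted test results that exactly match the reference values."""
    return sum(
        1
        for test, true_value in reference.items()
        if submitted.get(test) == true_value
    )

# Two of the 22 fields from the example report above
reference_form = {"HAEMOGLOBIN": 9.4, "SODIUM": 142.0}
submitted_form = {"HAEMOGLOBIN": 9.4, "SODIUM": 140.0}  # one data-entry error
print(count_correct_entries(submitted_form, reference_form))  # -> 1
```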
  6. Experimental design
     [Diagram: Period 1 has four arms of N=60 each (CONTROL, BONUS, FINE, TOURNAMENT); in Period 2, each arm is split between the same treatment and the same treatment plus an extra bonus per correct diagnosis (one Period 2 cell: N=31).]
  7. Experimental design
     [Same diagram as the previous slide, with the treatment payments:]
     Treatment  | Fixed payment | Conditional payment based on correct data entries
     Control    | 105           | None
     Tournament | 100           | 25 for the top 20%
     Bonus      | 90            | Sliding scale (from 100 entries)
     Fine       | 130           | Sliding scale
     Income neutrality across treatments (based on piloting)
  8. Bonus and fine payment functions
     [Figure: earnings (90R–140R) as a function of the number of correct entries (0–220).]
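A minimal sketch of what such payment schedules could look like, built only from the information on slides 7 and 8: the fixed payments (bonus 90R, fine 130R), a sliding scale starting at 100 correct entries, and the 90R–140R earnings range in the figure. The per-entry rate, the fine target, and the cap/floor below are assumptions chosen to reproduce that general shape; they are not the contracts actually used in the experiment.

```python
# Illustrative sketch of the bonus and fine payment schedules (NOT the exact
# functions used in the experiment). Base payments and the 100-entry threshold
# come from the slides; rate, target, cap and floor are assumptions.

def bonus_payment(correct_entries: int,
                  base: float = 90.0,    # fixed payment under the bonus contract (slide 7)
                  threshold: int = 100,  # sliding scale starts at 100 correct entries (slide 7)
                  rate: float = 0.5,     # assumed rand per correct entry above the threshold
                  cap: float = 140.0) -> float:
    """Fixed payment plus a per-entry bonus above the threshold, capped."""
    return min(base + rate * max(correct_entries - threshold, 0), cap)

def fine_payment(correct_entries: int,
                 base: float = 130.0,    # fixed payment under the fine contract (slide 7)
                 target: int = 180,      # assumed target below which fines apply
                 rate: float = 0.5,      # assumed rand deducted per missing entry
                 floor: float = 90.0) -> float:
    """Fixed payment minus a per-entry fine below the target, floored."""
    return max(base - rate * max(target - correct_entries, 0), floor)

if __name__ == "__main__":
    for n in (80, 120, 160, 200):
        print(n, bonus_payment(n), fine_payment(n))
```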
  9. Results
  10. Performance: number of correct entries
     [Figure: distribution of the number of correct entries (0–250) by treatment: Control, Bonus, Fine, Tournament.]
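For readers who want to reproduce this kind of by-treatment comparison, here is a minimal plotting sketch, assuming a tidy dataset with hypothetical columns correct_entries and treatment; it is not the authors' code.

```python
# Sketch only: density of correct entries by treatment arm.
# "experiment_data.csv" and its column names are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("experiment_data.csv")
sns.kdeplot(data=df, x="correct_entries", hue="treatment", common_norm=False)
plt.xlabel("Number of correct entries")
plt.tight_layout()
plt.savefig("performance_by_treatment.png", dpi=150)
```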
  11. Time spent on the different activities
  12. Efficiency: time per correct entry
  13. Regression results (standard errors in parentheses)
     Dependent variable | (1) Performance: number of correct entries | (2) Accuracy: % of correct data entries made | (3) Effort: time spent on data entry | (4) Efficiency: time per accurate entry
     Bonus      | 19.733*** (7.078) | 0.040 (0.032)    | 29.807*** (11.096) | -2.640** (1.264)
     Fine       | 20.933*** (7.078) | 0.092*** (0.032) | 4.465 (11.096)     | -3.296*** (1.264)
     Tournament | 34.733*** (7.078) | 0.091*** (0.032) | 15.683 (11.096)    | -3.693*** (1.264)
     Mean value in the control treatment | 96.983 | 0.887 | 390.009 | 6.968
     Observations | 240 | 240 | 240 | 239
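A minimal sketch of the kind of specification the table implies: each outcome regressed by OLS on treatment indicators, with the Control arm as the omitted category. The data file and column names (experiment_data.csv, treatment, correct_entries, etc.) are hypothetical placeholders; this is not the authors' estimation code.

```python
# Sketch only: OLS of each outcome on treatment dummies (Control = reference).
# File and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical dataset, one row per subject

outcomes = {
    "correct_entries": "(1) Performance",
    "accuracy": "(2) Accuracy",
    "entry_time": "(3) Effort",
    "time_per_entry": "(4) Efficiency",
}
for col, label in outcomes.items():
    model = smf.ols(
        f"{col} ~ C(treatment, Treatment(reference='Control'))", data=df
    ).fit()
    print(label)
    print(model.params)
```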
  14. Impact on the non-incentivised activity (diagnostic identification)
     [Figures: number of correct diagnoses and diagnosis accuracy by treatment; P=0.147, P=0.219, P=0.053.]
  15. Experimental design II
     • Evidence that P4P is effective at incentivising mundane tasks (process / box-ticking)
     • What about more intellectual ones?
  16. Experimental design II
  17. Experimental design II
  18. Impact on diagnoses
     [Figure: change in the number of correct diagnoses (Period 2 – Period 1).]
  19. Detrimental impact of P4P on the non-incentivised task? Two contrasting results:
     • Period 1: no negative effect on performance and effort on diagnostics
     • Period 2, CONTROL: negative impact on performance and effort on data entry
     • The importance of intrinsic motivation:
       – 83% of subjects found the diagnostic task “interesting”
       – vs. 59% for the data entry task
     [Figure: impact on data entry in the control treatment; change in the number of correct entries (Period 2 – Period 1), without vs. with the diagnostic bonus (value shown: 16.63).]
  20. Concluding remarks
     • Positive impact of P4P on a simple routine task that is perfectly observed and monitored
     • Results suggest that tournaments are the most effective design
     • No detrimental effect on the non-incentivised task in the presence of intrinsic motivation
     • Many important features not reproduced here, such as cherry-picking or gaming
     • Value of experiments in health policy design: test in the lab, then in the field
  21. Thank you
     Funded by
  22. Benefit-to-cost ratios
     Treatment  | Average number of correct test results | Average payment (ZAR) | Benefit-to-cost ratio
     CONTROL    | 95.75  | 105.00 | 0.91
     BONUS      | 116.72 | 113.67 | 0.99
     FINE       | 117.92 | 113.50 | 1.02
     TOURNAMENT | 131.71 | 105.00 | 1.25
