Transcript of "Evaluation of Health IT Implementation"

  1. 1. Evaluation of Health IT Implementation Nawanan Theera-Ampornpunt, MD, PhD
  2. 2. Informatics Evaluation Methods Book Friedman & Wyatt (2006)
  3. 3. Why Evaluate Projects? • Promotional: To encourage more use • Scholarly: To confirm or create scientific knowledge • Pragmatic: To know what works and what fails • Ethical: To ensure appropriateness & justify its use or its budget • Medicolegal: To reduce liability risks Friedman & Wyatt (2006)
  4. 4. Complexity of Evaluation in Informatics Friedman & Wyatt (2006) Medicine & Health care Evaluation Methodology Information Systems
  5. 5. Marchewka JT (2006) Project Life Cycle & SDLC
  6. 6. Marchewka JT (2006) IT Project Management Deliverables
  7. 7. • DeLone & McLean’s IS Success Model (1992;2003) • Revised model in 2003 adds “Service Quality” Various Ways to Measure Success DeLone & McLean (1992; 2003)
  8. 8. Health IT as Healthcare Interventions • Donabedian’s Model Donabedian (1966), Friedman & Wyatt (2006) Structure Processes Outcomes
  9. 9. Class Exercise • Can you provide some examples of measures in each aspect of Donabedian’s model that help evaluate health IT project success?
  10. 10. A Mindset for Evaluation • Tailor the study to the problem • Collect data useful for making decisions • Look for intended and unintended effects • Study the resource while it is under development and after it is deployed • Study the resource in the lab and in the field • Go beyond the developer’s point of view • Take the environment into account • Let the key issues emerge over time • Be methodologically catholic and eclectic Friedman & Wyatt (2006)
  11. 11. Evaluation vs. Traditional Research • Different goals • Who (clients or evaluators) determines the agenda • Evaluation actively seeks unanticipated effects as well as anticipated ones • Both lab and in-situ studies are important in evaluation • Evaluations often employ multiple data-collection paradigms Friedman & Wyatt (2006)
  12. 12. Evaluation Approaches • Objectivist vs. Subjectivist approaches • Objectivist characteristics – Information resources, users, and processes can be measured – Rational persons should agree on important measures and desirable outcomes – It is possible to disprove a hypothesis, but never to fully prove one – Quantitative measurement is superior to, and more precise than, qualitative methods – We can assess which resource is superior through comparisons Friedman & Wyatt (2006)
  13. 13. Evaluation Approaches • Objectivist vs. Subjectivist approaches • Subjectivist characteristics – What is observed depends fundamentally on the observer – Context is crucial – Different perspectives on desirable outcomes can be legitimately valid – Verbal description can be highly illuminating – Evaluation is viewed as an exercise in argument, rather than demonstration Friedman & Wyatt (2006)
  14. 14. Objectivist Approaches Objectivist • Comparison-Based Approach • Objectives-Based Approach (against stated goals) • Decision-Facilitation Approach (evaluation to resolve issues important for decision-making for further development) • Goal-Free Approach (purposefully blinded to intended effects) Friedman & Wyatt (2006)
  15. 15. Subjectivist Approaches Subjectivist • Quasi-Legal Approach (e.g. a mock trial) • Art Criticism Approach • Professional Review Approach (e.g. site visit by experienced peers) • Responsive/Illuminative Approach (derived from ethnography) Friedman & Wyatt (2006)
  16. 16. Objectivist Studies • Measurement studies – “Studies undertaken to develop and refine methods for making measurements” – E.g. development and validation of measurement methods, tools, questionnaires • Demonstration studies – Studies that use measurement “methods to address questions of direct importance in informatics” – Descriptive studies (no independent variables) – Comparative studies (investigator creates a contrasting set of conditions, as in experiments & quasi-experiments) – Correlational studies (explore hypothesized relationships among variables that were not manipulated) Friedman & Wyatt (2006)
  17. 17. Study Designs • Experiments – Randomized controlled trials • Quasi-Experiments – Non-randomized interventions – Investigator still controls assignment of subjects to interventions but not through randomization • Observational Studies – Investigator has no control over assignment of subjects into groups Friedman & Wyatt (2006)
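A minimal Python sketch of the simple randomization that distinguishes a true experiment from a quasi-experiment; the site names and arm labels are hypothetical, and real trials usually use blocked or stratified schemes:

```python
import random

def randomize(subject_ids, arms=("intervention", "control"), seed=42):
    """Simple randomization: each subject is assigned to an arm independently at random."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible and auditable
    return {sid: rng.choice(arms) for sid in subject_ids}

# Hypothetical example: six clinic sites to be allocated for a CPOE evaluation.
# Simple randomization can yield unbalanced arms with small samples;
# blocked randomization is a common remedy.
print(randomize(["site-A", "site-B", "site-C", "site-D", "site-E", "site-F"]))
```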
  18. 18.–21. Quasi-Experiments (figures) Harris et al. (2006)
  22. 22. Observational Studies • Cohort studies – Observe subjects with different exposures over time and compare outcomes • Case-control studies – Compare subjects with the outcome of interest (cases) and without it (controls) retrospectively to determine differences in exposure • Cross-sectional studies Mann (2003)
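As a rough illustration of the case-control comparison above, a minimal Python sketch that computes an odds ratio from an invented 2×2 table (all counts are hypothetical):

```python
# Hypothetical 2x2 case-control table (counts invented for illustration only):
#
#                 exposed   unexposed
#   cases            30         70
#   controls         15         85

a, b, c, d = 30, 70, 15, 85
odds_ratio = (a * d) / (b * c)  # (a/b) / (c/d)
print(f"Odds ratio = {odds_ratio:.2f}")  # ~2.43: exposure more common among cases
```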
  23. 23. Measurements Friedman & Wyatt (2006) Source: http://ibis.health.state.nm.us/resources/ReliabilityValidity.html
  24. 24. Measurement Validity & Reliability Source: http://ibis.health.state.nm.us/resources/ReliabilityValidity.html
  25. 25. Measurement Validity • Content Validity & Face Validity • Criterion-Related Validity – Predictive validation – Concurrent validation • Construct Validity – Convergent validity – Divergent/discriminant validity • Not the same as internal validity & external validity of scientific studies Friedman & Wyatt (2006)
  26. 26. Measurement Reliability • Test-retest reliability • Interrater reliability – E.g. Kappa, intraclass correlations • Internal consistency reliability – E.g. Cronbach’s alpha
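A minimal, standard-library-only Python sketch of the two reliability statistics named above, Cohen’s kappa and Cronbach’s alpha; the rater codes and scale scores are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters giving categorical ratings to the same items."""
    n = len(rater1)
    observed = sum(r1 == r2 for r1, r2 in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies
    p1, p2 = Counter(rater1), Counter(rater2)
    expected = sum((p1[c] / n) * (p2[c] / n) for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

def cronbachs_alpha(items):
    """Cronbach's alpha; `items` is a list of columns (one list of scores per item)."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical data: two reviewers classifying 8 alerts, and a 3-item usability scale
print(cohens_kappa(list("AABBCCAB"), list("AABBCABB")))        # ~0.61
print(cronbachs_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]))  # ~0.82
```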
  27. 27. Threats to Internal Validity: Biases • Assessment bias • Allocation and recruitment bias • The Hawthorne Effect (the tendency for humans to improve their performance if they know it is being studied) • Data collection biases – Checklist effect – Data completeness effect (more complete data in intervention cases than controls) – Feedback effect – Carryover effect (spillover effect) – Placebo effect – Second-look bias Friedman & Wyatt (2006)
  28. 28. Threats to Internal Validity Harris et al. (2006)
  29. 29. Threats to Internal Validity: Confounding Harris et al. (2006)
  30. 30. Threats to External Validity • Study generalizability – Sample representativeness – Intervention (including implementation strategies) – Context • Developers as evaluators Friedman & Wyatt (2006)
  31. 31. Making Conclusions • Internal and external validity • Correlation vs. causation • Acknowledgement of study limitations • Anticipated vs. unanticipated effects • Lessons learned
  32. 32. Special Study Methods Used in Informatics • Surveys – Study design: Cross-sectional vs. longitudinal – Subjects – Sampling methods • Census • Random sampling (simple, stratified, cluster) • Nonprobability sampling (purposive sampling, quota sampling, etc.) – Sampling frame
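A minimal Python sketch of stratified random sampling from a sampling frame, one of the methods listed above; the frame, department strata, and sample sizes are hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(frame, stratum_of, per_stratum, seed=1):
    """Draw a simple random sample of `per_stratum` units from each stratum of the frame."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for unit in frame:
        by_stratum[stratum_of(unit)].append(unit)
    sample = []
    for units in by_stratum.values():
        sample.extend(rng.sample(units, min(per_stratum, len(units))))
    return sample

# Hypothetical frame: (staff id, department) pairs; sample 2 staff per department
frame = [(i, dep) for dep in ("OPD", "IPD", "ER") for i in range(10)]
print(stratified_sample(frame, stratum_of=lambda u: u[1], per_stratum=2))
```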
  33. 33. Surveys • Survey Methodology – Survey delivery methods: paper, electronic (e-mail, web site) – Survey administration: self-administered vs. investigator-administered – Survey instrument (items) – Survey design – Item wording
  34. 34. Errors in Survey Studies • Sampling errors • Coverage errors • Nonresponse errors • Measurement errors • Processing errors OMB (2001)
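To put a number on the first of these error sources, a minimal Python sketch of sampling error (the margin of error of a proportion) under the usual normal approximation; the survey figures are invented:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion (simple random sample)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 52% of 400 responding clinicians say they use the new CPOE daily
p_hat, n = 0.52, 400
print(f"{p_hat:.0%} ± {margin_of_error(p_hat, n):.1%}")  # ~ ±4.9 percentage points
```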
  35. 35. Survey Book Dillman et al. (2008)
  36. 36. Special Study Methods Used in Informatics • Time and Motion Studies (Time-Motion Studies) • Economic Analysis – Cost-effectiveness analysis – Cost-benefit analysis – Cost-utility analysis – Economic impact analysis – Return on investment analysis
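A minimal Python sketch of two of the economic measures listed above, return on investment and an incremental cost-effectiveness ratio; all cost and effect figures are invented:

```python
def roi(benefit, cost):
    """Return on investment: net benefit as a fraction of cost."""
    return (benefit - cost) / cost

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical CPOE project: $500k cost, $650k benefit over the evaluation period
print(f"ROI  = {roi(650_000, 500_000):.0%}")  # 30%
# Hypothetical: new system costs $120 vs $100 per patient and prevents 0.02 vs 0.01 ADEs
print(f"ICER = {icer(120, 100, 0.02, 0.01):,.0f} per ADE prevented")  # 2,000
```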
  37. 37. Special Study Methods Used in Informatics • Qualitative Studies – Interviews – Focus groups – Usability evaluations – Content analysis
  38. 38. Special Study Methods Used in Informatics • Software Testing & Evaluation Methodology • Testing Levels – Unit testing – Integration testing – System testing – System integration testing http://en.wikipedia.org/wiki/Software_testing
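A minimal sketch of the unit-testing level using Python’s built-in unittest; the function under test (a hypothetical BMI helper from an EHR module) is invented for illustration:

```python
import unittest

def bmi(weight_kg, height_m):
    """Body mass index; raises on non-positive height."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

class TestBmi(unittest.TestCase):
    def test_normal_value(self):
        self.assertAlmostEqual(bmi(70, 1.75), 22.86, places=2)

    def test_rejects_invalid_height(self):
        with self.assertRaises(ValueError):
            bmi(70, 0)

if __name__ == "__main__":
    unittest.main()
```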
  39. 39. Software Testing & Evaluation • Software Testing Objectives – Installation testing – Compatibility testing – Smoke and sanity testing – Regression testing – User acceptance testing – Functional testing – Usability testing – Alpha & beta testing – Software performance testing – Security testing http://en.wikipedia.org/wiki/Software_testing
  40. 40. Software Testing & Evaluation • Approaches – White-box testing – Black-box testing – Gray-box testing http://en.wikipedia.org/wiki/Software_testing
  41. 41. Project Evaluation as Part of the Project’s KM • Nonaka SECI Model (diagram annotated with project phases: During Implementation, Near Go-Live & Post Go-Live; After Action Review (AAR) / Postmortem Meeting, Project Evaluation; Before & After Project Kick-off, During Project Planning; During Implementation, Near Go-Live; Training) Image source: Senoo et al. (2007) http://dx.doi.org/10.1108/14601060710776725
  42. 42. “Half the money I spend on advertising is wasted; the trouble is I don't know which half.” -- John Wanamaker http://www.quotationspage.com/quote/1992.html, http://en.wikipedia.org/wiki/John_Wanamaker
  43. 43. References • DeLone WH, McLean ER. Information systems success: the quest for the dependent variable. Inform Syst Res. 1992 Mar;3(1):60-95. • DeLone WH, McLean ER. The DeLone and McLean model of information systems success: a ten-year update. J Manage Inform Syst. 2003 Spring;19(4):9-30. • Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: the tailored design method. 3rd ed. Hoboken (NJ): Wiley; 2008. 512 p. • Donabedian A. Evaluating the quality of medical care. Milbank Mem Q. 1966;44:166-206. • Friedman CP, Wyatt JC. Evaluation methods in biomedical informatics. 2nd ed. New York (NY): Springer; 2006. 386 p. • Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc. 2006 Jan-Feb;13(1):16-23.
  44. 44. References • Mann CJ. Observational research methods. Research design II: cohort, cross sectional, and case-control studies. Emerg Med J. 2003;20:54-60. • Office of Management and Budget, Office of Information and Regulatory Affairs, Statistical Policy Office. Statistical policy working paper 31: Measuring and reporting sources of error in surveys. 2001 Jul.