Version 6 Spbt 2007.Prs

Presentation on certification testing within the pharmaceutical industry presented at the 2007 SPBT Conference

Published in: Technology, Business

Transcript

  • 1. Designing a Sales Representative Certification Program Through Validated Measurement and Comprehensive Assessment. Greg Sapnar, Bristol-Myers Squibb (Practitioner’s Perspective); Steven Just, Pedagogue Solutions (Psychometric & Legal Testing Perspective). Tuesday, June 19th, 2:00-3:30, Hollywood, Florida, Diplomat 4
  • 2. Designing a Sales Representative Certification Program. Log into Channel 17:  Press and release the “GO” button.  While the light is flashing red and green, enter the 2-digit channel code (17).  After the second digit is entered, press and release the “GO” button.  The light will flash green to confirm that it is programmed.
  • 3. What type of person are you? One who… 1. Makes things happen (95%) 2. Watches things happen (5%) 3. Wonders what happened (0%)
  • 4. Learning Objectives. At the completion of this workshop, participants will be able to:  State the definitions of the four different types of certification  Apply job certification to the certification of sales representatives  State the competitive advantages of having a “certified” sales force  Design a sales representative certification process  Implement a sales representative certification process
  • 5. Certification? Do you Certify?
  • 6. Do you certify your representatives on their job required knowledge? 1. Yes (59%) 2. No (36%) 3. I think so (5%) 4. I don’t know (0%)
  • 7. Do you have positive or negative consequences related to test results? 1. Yes (73%) 2. No (14%) 3. I think so (9%) 4. I don’t know (5%)
  • 8. Four Types of Certification
  • 9. Job Certification  Job certification focuses on job requirements and must be supported by documented evidence of job relevance – Relevance determined by job analysis – Relevance determined by SME consensus
  • 10. Certification should be viewed as a program rather than an activity
  • 11. Certification Development Framework. Phases: Groundwork (The Driver; Business Case) → Design (Requirements; Standards; Program Policy and Procedures; Governance & Administration; Re-certification & Maintenance; Global Considerations) → Develop (Develop Assessments) → Deliver (Deliver Assessments) → Evaluate (Evaluation). Reference: Hale, Judith (2000). Performance-Based Certification: How to Design a Valid, Defensible, Cost-Effective Program. San Francisco: Jossey-Bass/Pfeiffer.
  • 12. Certification Development Framework (framework diagram repeated from slide 11)
  • 13. Drivers and Business Cases to Support Certification of Pharmaceutical Representatives. Drivers:  Demand for high standards/ethics in healthcare  Need to differentiate in the industry  External forces making demands (customers, government)  Poor press of the pharmaceutical industry. Business Cases:  Support the corporate mission/high standards of healthcare  Develop qualified talent  Meet customers’ expectations/access  Prepare for rising tide of licensing discussions  Raise perceptions of pharmaceutical representatives
  • 14. Certification Development Framework (framework diagram repeated from slide 11)
  • 15. Drivers and Business Cases to Support Certification of Pharmaceutical Representatives. Requirements:  Skills  Knowledge areas  Education requirements  Other? Standards:  Minimum competency  Performance criteria  Minimum knowledge  Industry  Minimum education  Compliance
  • 16. Knowledge Measurement Strategy  Provide clear evidence we are delivering the knowledge and skills we need to achieve our company mission  Ensure business alignment through our Certification Governance Council  Implement a process which creates valid and reliable assessments  Design for defensibility
  • 17. Assessment Process & Resources Requirements. Knowledge Assessment (½-day meeting): Establish objectives → Validate objectives → Create test questions → Validate test → Establish cut score → Implement test → Analyze & interpret outcomes → Report outcomes. Role-Play Assessment (full-day meeting): Establish objectives → Validate objectives → Create role-play scenarios & checklist → Validate role-play scenarios & checklist → Establish cut scores for role-play → Train raters & establish rater reliability for role-play → Implement role-play evaluation → Analyze & interpret outcomes → Report outcomes. The above process illustrates basic requirements for test development. Rigor may be increased if needed to support higher levels of decision making.
  • 18. Certification Development Framework (framework diagram repeated from slide 11)
  • 19. Elements of a Certification/Assessment Strategy  Governance and administration  Define terms  Legal issues  Remediation and consequences  Make expectations explicit and public  Determine methods of testing  Establish assessment frequency  Assessment security  Job competency analysis  Create fair, valid and reliable assessments  Determine cut (passing) scores  Recertification  Program evaluation/Item analysis
  • 20. Governance and Administration
  • 21. Document Process  All stages of the test validation process must be documented because of the potential for internal and external disputes.  Documentation of the entire test development process is essential.  You may have a perfectly valid measurement tool, but if you do not have documentation to show how you ensured that validity, you have no legal defense of the test.  Documentation should be complete and time-relevant and may include: – Special forms to match each stage in the process – Summarization of each stage in the process
  • 22. Defining Terms
  • 23. Certification Development Framework Assessment Types:  Knowledge-based – Knows terms, rules, principles, concepts, and procedures  Skill-based – Can apply the terms, rules, principles, concepts, and procedures under controlled conditions such as in a simulation  Performance-based – Can apply the terms, rules, principles, concepts, and procedures consistently under real working conditions
  • 24. Common Terms  Assessment  Summative Assessment  Quiz  Diagnostic Assessment  Test  Rubric  Exam  Performance Assessment  Evaluation  Self-assessment  Pretest  High Stakes Assessment  Post-test  Certification  Formative Assessment
  • 25. Assessment  A systematic process for returning results in order to describe what students know or can do  An ongoing process aimed at measuring and improving student learning  Assessments can be in the form of a quiz, test, exam or evaluation
  • 26. Quiz A low-stakes diagnostic assessment in which the results are only to be used for self- or group-diagnosis and prescription.
  • 27. Test A medium stakes formative assessment designed to inform both the learner and (optionally) the instructor of the learner’s level of knowledge at an intermediary point in the instructional process. There are no long-term consequences for failure. Short-term consequences may include required remediation before proceeding with a learning activity.
  • 28. Exam A high-stakes summative assessment at the completion of a learning experience for which there are consequences for failure. Results of exams are made available to the learner’s direct supervisor and appropriate training department personnel. Exam results may have career-impacting consequences.
  • 29. Evaluation  An assessment that measures, compares, and judges  For example: – Role play evaluations – Smile sheets – Evaluation of a training program – Level 3 and 4 evaluations
  • 30. Legal Issues
  • 31. How do High Stakes Tests Differ From Other Types of Tests?  Always a summative assessment  Higher level of scrutiny  More rigorous development methodology  Potential legal consequences
  • 32. Legal Jeopardy  Individual  Group  Record-keeping requirements
  • 33. Individual Legal Issues  We live in a litigious society  Ensure that your hiring/promotion/dismissal decisions are based on sound science  Ensure that your record keeping is 100% accurate
  • 34. Group Legal Issues  Title VII of the Civil Rights Act of 1964 (as amended in 1991) prohibits basing employment decisions on race, gender, ethnicity, religion, or national origin  This has been interpreted to require that an employer’s selection procedures not result in disparate impact against any group unless the procedure is demonstrated to be “valid and consistent with business necessity.”
  • 35. Group Legal Issues  Selection procedures that result in adverse impact are presumed to be discriminatory  Once plaintiffs establish adverse impact, burden shifts to employer to demonstrate validity of process
  • 36. Record Keeping: 21 CFR Part 11  Fully auditable  Electronic signatures – Equivalent to a paper signature – Statement at signature time clarifies purpose  Legally defensible data  Fully versioned results
  • 37. Remediation and Consequences
  • 38. Do you have a formal system of remediation for students who fail a test? 1. Yes (56%) 2. No (33%) 3. Unsure (11%)
  • 39. Remediation  Must have a well-thought-out remediation plan  Should involve: – Trainer(s) – District Manager  Provide a multiple, but fixed, number of attempts to display mastery
  • 40. How many attempts do you allow for passing a test? Options: 1, 2, 3, 4, 5, a number greater than 5, as many as needed. Audience responses: 57%, 14%, 14%, 7%, 7%, 0%, 0%.
  • 41. Consequences  There must be consistent and increasing consequences for failure  At each “failure” you may involve higher levels of corporate management  Usually the final step is to involve HR
  • 42. Global Considerations
  • 43. Certification Development Framework (framework diagram repeated from slide 11)
  • 44. Job Competency Analysis
  • 45. Analyze Job Content  The most important part of the validation process is ensuring that the test items match the job; this is called content validation.  Content validation is a process that formally determines and reflects the judgements of experts regarding the content or competencies assessed by the test.  Subject matter experts for the content need to be identified.  In a formal process, the subject matter experts need to identify and list the tasks that must be performed to do the job successfully.
  • 46. Establish Content Validity of Objectives  The relevant tasks identified in step two are converted into instructional objectives, if the test is being developed in conjunction with a curriculum plan.  Subject Matter Experts (SMEs) must review the objectives and record their concurrence that the objectives match job competencies.
  • 47. Create Items  Test items are created to match each relevant objective. – Cognitive items (e.g., multiple-choice questions) are created to assess knowledge competencies. – Rating instruments, such as checklists, are created to measure whether skills are being demonstrated appropriately.
  • 48. Knowledge-based Assessments: Four Keys to Developing Valid Questions  Questions must be properly constructed  Questions must be content-validated by placement within a structure of learning objectives  Questions must be written at the proper cognitive level by categorization within Bloom’s Taxonomy  Thorough post-hoc statistical evaluation must be performed
  • 49. Skills Assessments: Four Keys to Valid Measurement  Rater training  Inter-rater reliability  Create a scoring rubric  Create behaviorally anchored rating scales (BARS)
  • 50. Creating Fair, Valid and Reliable Assessments
  • 51. Knowledge Assessments
  • 52. What is Validity?
  • 53. Validity  Construct Validity  Face Validity  Predictive Validity  Content Validity
  • 54. Construct Validity Are you measuring what you think you are measuring?
  • 55. Face Validity Will your exam appear fair to the test takers?
  • 56. Predictive Validity A quantitative measure of how well a test predicts some form of measurable behavior.
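As a hedged illustration, predictive validity is typically reported as a correlation between test scores and a later measure of job performance. A minimal Python sketch using entirely hypothetical exam scores and field-performance ratings:

    # Hedged illustration: correlation between certification exam scores and a
    # later job-performance measure (all numbers are made up).
    from math import sqrt

    exam_scores = [72, 85, 90, 64, 78, 88, 95, 70]          # hypothetical exam results
    performance = [3.1, 4.0, 4.4, 2.8, 3.5, 4.1, 4.6, 3.0]  # hypothetical field ratings

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # The closer r is to 1.0, the stronger the evidence of predictive validity.
    print(f"Predictive validity coefficient r = {pearson_r(exam_scores, performance):.2f}")
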
  • 57. Content Validity  The adequacy with which a domain of content is tested – Not a quantitative measure – Flows from valid learning objectives, attention to Bloom’s taxonomy and properly constructed questions. – Must ensure a “sensible” method of testing
  • 58. Pilot the Test
  • 59. Conduct Initial Test Pilot  Piloting a test has two purposes: – To find major flaws in the test or the testing system – To begin to establish the statistical validation of the test  At least 30 people should be involved in the initial pilot.  The more critical the test, the larger the number of test-takers to be included in the pilot group.
  • 60. Perform Item Analysis on Pilot  Item analysis looks at each test item to see how it functions as a satisfactory measure in the test.  The data most corporate test designers need to collect for cognitive tests come from three measures: – Difficulty index – the percentage of the test takers who answered a particular question correctly. – Distractor pattern – looks at the selection of individual distractors to uncover patterns in how participants choose or do not choose them; e.g., if a particular distractor is never chosen, it is too easily disregarded and should be replaced with one that is more plausible. – Point-biserial – the correlation between answering the item correctly and overall test score, showing whether high-scoring test takers answer the item correctly more often than low-scoring test takers; computer support is needed.
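A minimal Python sketch of these three measures for a single multiple-choice item; the responses, choices, and scores below are entirely hypothetical:

    # Illustrative item analysis for one multiple-choice item (fabricated data).
    from math import sqrt
    from collections import Counter

    # Each tuple: (choice selected for this item, total test score out of 20)
    responses = [("B", 18), ("B", 17), ("A", 9), ("B", 15), ("C", 11),
                 ("B", 19), ("A", 8), ("B", 14), ("D", 10), ("B", 16)]
    correct_choice = "B"
    n = len(responses)

    # Difficulty index: percentage of test takers answering the item correctly.
    difficulty = sum(1 for choice, _ in responses if choice == correct_choice) / n

    # Distractor pattern: how often each option was chosen (a never-chosen
    # distractor is too implausible and should be replaced).
    distractor_pattern = Counter(choice for choice, _ in responses)

    # Point-biserial: correlation between getting the item right (0/1)
    # and the total test score.
    correct = [1 if choice == correct_choice else 0 for choice, _ in responses]
    scores = [score for _, score in responses]
    mc, ms = sum(correct) / n, sum(scores) / n
    cov = sum((c - mc) * (s - ms) for c, s in zip(correct, scores))
    point_biserial = cov / (sqrt(sum((c - mc) ** 2 for c in correct)) *
                            sqrt(sum((s - ms) ** 2 for s in scores)))

    print(f"Difficulty index: {difficulty:.0%}")
    print(f"Distractor pattern: {dict(distractor_pattern)}")
    print(f"Point-biserial: {point_biserial:.2f}")
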
  • 61. What is Test Reliability?
  • 62. Reliability  Consistency over time  Consistency across forms  Consistency among items  Consistency among evaluators
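One common way to quantify consistency among items (not named on the slide) is an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch with invented 0/1 item scores:

    # Hedged sketch: Cronbach's alpha as one measure of consistency among items.
    # Rows = test takers, columns = items scored 0 (wrong) or 1 (right); all invented.
    item_scores = [
        [1, 1, 1, 0, 1],
        [1, 0, 1, 1, 1],
        [0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 1, 0, 0, 1],
        [1, 1, 1, 1, 0],
    ]

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    k = len(item_scores[0])                                              # number of items
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])

    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
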
  • 63. Setting Passing Scores
  • 64. Setting Passing Scores for Criterion-referenced Tests A criterion-referenced test is one in which scores are judged against a pre-set “mastery” level
  • 65. What is your passing test score? Options: 1. <80% 2. 80% 3. 85% 4. 90% 5. >90% 6. Varies from test to test. Audience responses: 47%, 26%, 26%, 0%, 0%, 0%.
  • 66. Who sets your passing test score? Options: 1. I do 2. Upper management 3. Training management 4. Therapeutic area 5. I haven’t a clue who sets it. Audience responses: 50%, 28%, 17%, 6%, 0%.
  • 67. Setting Cut Scores: The Three Most Common Methods  The Higher Authority Method: – “Our Vice President said it should be 90”  The Committee Method: – “90 seems about right”  The Received Wisdom Method: – “I don’t know how or when it got set, but it’s always been 90”
  • 68. Angoff Method  Identify judges who are familiar with the competency covered by the test.  For each item on the test each judge estimates the probability that a minimally competent person would get it right.  Sum the probabilities of each judge  Average the judges’ scores
  • 69. Angoff Method: Example
    Item     Judge 1   Judge 2   Judge 3
    1        .75       .80       .85
    2        .80       .90       1.00
    3        .75       .75       .90
    4        .90       .90       .80
    5        .95       .75       .85
    Total    4.15      4.10      4.40
    Percent  83%       82%       88%
    Averaging the totals for each judge gives Cut Score = 84%
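The same calculation is easy to script. A minimal Python sketch using the judge ratings from the example above; the variable names are illustrative only:

    # Angoff cut-score calculation using the ratings from the example slide.
    # Each list holds one judge's estimated probability that a minimally
    # competent person answers each item correctly.
    ratings = {
        "Judge 1": [0.75, 0.80, 0.75, 0.90, 0.95],
        "Judge 2": [0.80, 0.90, 0.75, 0.90, 0.75],
        "Judge 3": [0.85, 1.00, 0.90, 0.80, 0.85],
    }
    n_items = 5

    judge_percents = []
    for judge, probs in ratings.items():
        percent = sum(probs) / n_items * 100        # e.g. 4.15 / 5 = 83%
        judge_percents.append(percent)
        print(f"{judge}: total {sum(probs):.2f} -> {percent:.0f}%")

    cut_score = sum(judge_percents) / len(judge_percents)
    print(f"Recommended cut score: {cut_score:.0f}%")   # about 84% for these ratings
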
  • 70. Performance Testing
  • 71. Creating Valid Performance Tests  Create a scoring rubric  Create Behaviorally Anchored Rating Scales (BARS)  Train raters  Determine Inter-rater reliability
  • 72. Scoring Rubric  Accurate performance assessment requires a scoring model for the behaviors being assessed.  Typically displayed as a table with the performance criteria being judged down the left and the ratings across the top.
  • 73. Scoring Rubric: Example
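The rubric image itself is not reproduced in this transcript. As a purely hypothetical stand-in, a rubric can be represented exactly as described on the previous slide: criteria down the left, rating levels across the top, and descriptive behaviors in the cells. All criteria and behaviors below are invented, not the presenters' actual rubric:

    # Hypothetical scoring rubric: criteria down the left, rating levels across
    # the top, descriptive behaviors in each cell. All content is invented.
    RATING_LEVELS = ["Does not meet", "Meets", "Exceeds"]

    rubric = {
        "Opening the call": [
            "No agenda stated",
            "States an agenda tied to the physician's needs",
            "States agenda and confirms it with the physician",
        ],
        "Handling objections": [
            "Ignores or argues with the objection",
            "Acknowledges and addresses the objection",
            "Addresses the objection with approved supporting data",
        ],
    }

    def score_performance(observed):
        """Convert a rater's level choice per criterion into a total score."""
        return sum(RATING_LEVELS.index(level) for level in observed.values())

    observed = {"Opening the call": "Meets", "Handling objections": "Exceeds"}
    print(f"Total rubric score: {score_performance(observed)} "
          f"out of {len(rubric) * (len(RATING_LEVELS) - 1)}")
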
  • 74. The Descriptive Behaviors  The judgment criteria that go into the boxes of the rubric  These are the behaviors you expect the evaluatee to display for each judgment criterion
  • 75. Descriptive Behaviors: Example
  • 76. Rater Training  Effective rater training should ensure: – Thorough knowledge of scoring standards (validity) – Consistency of scores (reliability) – Neutrality (fairness)
  • 77. Inter-rater reliability  Best way to ensure this is to have a properly developed scoring rubric and effective rater training  The standard measure for inter-rater reliability is the kappa statistic, which approximates an intra-class correlation
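A minimal sketch of one common form, Cohen's kappa for two raters scoring the same set of role-plays pass/fail; all ratings are invented. Kappa adjusts the observed agreement for the agreement expected by chance:

    # Hedged illustration: Cohen's kappa for two raters (invented pass/fail ratings).
    from collections import Counter

    rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
    rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

    n = len(rater_a)
    observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: probability both raters pick the same category at random,
    # given each rater's own category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance_agreement = sum((freq_a[c] / n) * (freq_b[c] / n)
                           for c in set(rater_a) | set(rater_b))

    kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
    print(f"Observed agreement: {observed_agreement:.2f}, Cohen's kappa: {kappa:.2f}")
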
  • 78. Recertification
  • 79. Do You Retest Knowledge Periodically? 1. Yes (61%) 2. No (33%) 3. I think so (6%) 4. I don’t know (0%)
  • 80. Ebbinghaus Curve of Forgetting
  • 81. Ebbinghaus Curve of Forgetting
  • 82. Re-certification  Re-certification applies to credentials that have a time limit.  It usually involves re-training and re-assessment.
  • 83. Certification Development Framework (framework diagram repeated from slide 11)
  • 84. Some Early Results
  • 85. Secure, timed vs. unsecure, untimed  Eight participants took both the unvalidated, less secure exam (traditional pass score: 90%) and the validated exam (Angoff pass score: 80%)  Participants have been given alpha designations for the pilot  The traditional pass score is determined arbitrarily; the Angoff pass score is determined by SME consensus  Even though the Angoff pass score is 10% lower than the traditional pass score, 2 out of 8 participants would not have passed the Track 1 final exam, even though the validated exam was focused on required knowledge as determined by SMEs. (Chart comparing uncontrolled vs. controlled administration.)
  • 86. Certification Development Framework (framework diagram repeated from slide 11)
  • 87. Analyzing Results of the Tests  Do you need to know how well your questions discriminate between weak and strong students? – Point-biserial correlation  Do you need to know what percent of students answered each question correctly? – Difficulty level  Do you need to know where students have misinformation? – Choice distribution  Do you need to know where the group has strengths and weaknesses? – Score by learning objective/topic
  • 88. Analyzing Results of the Program  Do your test results show that your sales reps are meeting the standards you have defined? – Percent passing  Is the program perceived as credible in the organization? – Validity  Are your results consistent over time? – Reliability
  • 89. Finally…  Document your process  Take the time to construct good assessments  Take the time to validate your assessments  Take the time to set defensible passing scores  Pilot the assessments  Set the right expectations for the learners  Analyze results and revise assessments as necessary  Recertify periodically
  • 90. Questions? Gregory Sapnar Bristol-Myers Squibb Gregory.sapnar@bms.com 609-897-4307 Steven B. Just Ed.D. Pedagogue Solutions Sjust@pedagogue.com 609-921-7585 x12