Questions
1. Draw a mental image of an employee you know who is not performing adequately. Select an intersection point in the PAQ matrix that best portrays the employee's behavior by answering the vertical axis's "Does the employee have adequate job knowledge?" and the horizontal axis's "Does the employee have the proper attitude (desire) to perform the job?" What quadrant did the employee fall into?
2. Do performance problems always point to training?
3. Considering the employee performance problems in your organization, is there a pattern to the form they take (e.g., are most in one quadrant)? If there is a pattern, what does this tell you?
4. What implications does this model have for the role of a trainer as a problem-solver?
5. Should training be given if the problem clearly points to a quadrant besides training?
Achieving excellence through performance is accomplished in two major ways. The first is taking a proactive stance by unearthing or preventing counterproductive methods. For example, you might implement diversity and sexual harassment training programs before such issues become a problem within the organization.
There are four major causes of performance problems:
• Knowledge or Skills – The employee does not know how to perform the process correctly: a lack of skills, knowledge, or abilities.
• Process – The problem is not employee related, but is caused by working conditions, bad processes, etc.
• Resources – Lack of resources or technology.
• Motivation or Culture – The employee knows how to perform, but does so incorrectly.
Scope of the performance problem:
• Extent – Identify whether the problem is isolated or widespread among units throughout the Army.
• Gravity (seriousness) – Identify the safety, environmental, or security impact of the problem.
• Impact – Identify the specific impact on individual and unit performance. Check to see if there are mission consequences; if none, there may be no need to pursue the matter further.
Needs assessments and task/duty analyses have usually been triggered by new technologies, equipment, or people. Shouldn't these be continuous, ongoing functions?
Developing training should include analyses of the characteristics of the learners, the setting in which the work will be performed, and the tasks and duties which the trainees will be expected to perform. A complete review of the subject matter (and subject matter experts) is needed. Goals and performance objectives must be set, and a plan to evaluate the training should be developed. Instructional materials and strategies must be acquired, prepared, and pre-tested.
Internal evaluation – analyzing learning materials, student learning and achievements, and teacher effectiveness.
External evaluation – a method of judging the worth of a program at the end of the program; the focus is on the outcome.
Levels one and two (reaction and learning) are formative/internal evaluations; levels three and four (performance and impact) are summative/external evaluations.
Beginning with the end in mind, let's examine the results desired from training.
Measures how the learners react to the training. Method: attitude questionnaires that are passed out after most training classes. This level measures one thing: the learner's perception (reaction) of the course. Learners are keenly aware of what they need to know to accomplish a task. If the training program fails to satisfy their needs, a determination should be made as to whether it's the fault of the program's design or its delivery. This level is not indicative of the training's performance potential, as it does not measure what new skills the learners have acquired or what they have learned that will transfer back to the working environment. This has caused some evaluators to downplay its value. However, the interest, attention, and motivation of the participants are critical to the success of any training program. People learn better when they react positively to the learning environment. When a learning package is first presented, whether it be e-learning, classroom training, CBT, etc., the learner has to make a decision as to whether he or she will pay attention to it. If the goal or task is judged as important and doable, then the learner is normally motivated to engage in it (Markus & Ruvolo, 1990). However, if the task is presented as low-relevance or there is a low probability of success, then a negative effect is generated and motivation for task engagement is low. This differs somewhat from Kirkpatrick, who writes, "Reaction may best be considered as how well the trainees liked a particular training program" (1996). However, the less relevant the learning package is to a learner, the more effort has to be put into the design and presentation of the learning package. That is, if it is not relevant to the learner, then the learning package has to "hook" the learner through slick design, humor, games, etc. This is not to say that design, humor, or games are not important.
However, their use in a learning package should be to promote the "learning process," not to promote the "learning package" itself. And if a learning package is built on sound design, then it should help the learners to fix a performance gap. Hence, they should be motivated to learn! If not, something went dreadfully wrong during the planning and building processes. So if you find yourself having to hook the learners through slick design, then you probably need to reevaluate the purpose of the learning program.
Do not ask questions on teaching methods -- one might get high marks on "how much the learners have learned" (which tend to be valid) and low marks on "the course was carefully planned and well organized" (which tend not to be valid). Even if these "method or process" questions were valid, they do not tell us anything that the "result" questions cannot. The most unreliable question is, "How much did you enjoy the class?" Learners generally enjoy courses that are the most intellectually challenging and meaningful, yet they will also report enjoying a class that contributes little to their learning. Nevertheless, when the same learners are asked to assess their learning, provide a rating of the instructor and/or course, or assess its intellectual contributions, the students as a whole are able to distinguish "fluff" from "substance" (Kaplan, 1974; Naftulin, 1973). Immediate Feedback: to collect immediate feedback, end the session five minutes early and ask: 1) What major conclusion did you draw from today's session? 2) What major questions remain in your mind? (The Searle Center for Teaching Excellence, Northwestern University). These two questions 1) help learners ask themselves extremely valuable questions (what have I learned? what do I need to learn now?) and 2) provide the trainers with feedback, such as discovering that learners are drawing conclusions quite different from the ones intended. If so, you can begin the next session with responses to the patterns that emerge, or make adjustments in the way you train. Also, learners who have no previous experience give the most inconsistent feedback, partially because they have nothing on which to base their initial feedback. By using the two questions above in multiple training sessions, you help them "scaffold" their feedback so that they may improve upon it (the same principle as in scaffolding instruction).
This is the extent to which participants change attitudes, improve knowledge, and increase skill as a result of attending the program. It addresses the question: did the participants learn anything? The learning evaluation requires post-testing to ascertain what skills were learned during the training. In addition, the post-testing is only valid when combined with pre-testing, so that you can differentiate between what they already knew prior to training and what they actually learned during the training program. Measuring the learning that takes place in a training program is important in order to validate the learning objectives. Evaluating the learning that has taken place typically focuses on such questions as: What knowledge was acquired? What skills were developed or enhanced? What attitudes were changed? Learner assessments are created to allow a judgment to be made about the learner's capability for performance. There are two parts to this process: the gathering of information or evidence (testing the learner) and the judging of the information (what does the data represent?). This assessment should not be confused with evaluation. Assessment is about the progress and achievements of the individual learners, while evaluation is about the learning program as a whole (Tovey, 1997, p. 88). Evaluation in this process comes through the learner assessment that was built in the design phase. Note that the assessment instrument normally has more benefits for the designer than for the learner. Why? For the designer, building the assessment helps to define what the learning must produce. For the learner, assessments are statistical instruments that normally correlate poorly with the realities of performance on the job, and they rate learners low on the "assumed" correlatives of the job requirements (Gilbert, 1998). Thus, the next level is the preferred method of assuring that the learning transfers to the job, but sadly, it is quite rarely performed.
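Because pre- and post-testing go together, a common way to summarize Level Two results is a normalized gain: how much of each learner's possible improvement was actually achieved. A minimal sketch, using invented scores on a hypothetical 100-point assessment (nothing here comes from the presentation itself):

```python
def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: the share of the possible improvement actually achieved."""
    if max_score == pre:  # learner was already at the ceiling; no room to gain
        return 0.0
    return (post - pre) / (max_score - pre)

# Illustrative pre/post scores for three learners (fabricated data)
scores = {"Lee": (40, 75), "Pat": (60, 80), "Kim": (85, 90)}

for name, (pre, post) in scores.items():
    print(f"{name}: pre={pre} post={post} gain={learning_gain(pre, post):.2f}")
```

Comparing gains rather than raw post-test scores separates what was learned in class from what the learner already knew walking in, which is exactly the distinction the pre-test exists to make.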
In Kirkpatrick's original four levels of evaluation, he names this level "behavior." However, behavior is the action that is performed, while the final result of the behavior is the performance. Gilbert said that performance has two aspects -- behavior being the means and its consequence being the end (1998). If we were only worried about the behavioral aspect, then this could be done in the training environment. However, the consequence of the behavior (performance) is what we are really after -- can the learner now perform in the working environment? This evaluation involves testing the students' capabilities to perform learned skills while on the job, rather than in the classroom. Level three evaluations can be performed formally (testing) or informally (observation). It determines whether the correct performance is now occurring by answering the question, "Do people use their newly acquired learnings on the job?" It is important to measure performance because the primary purpose of training is to improve results by having the students learn new skills and knowledge and then actually apply them to the job. Learning new skills and knowledge is no good to an organization unless the participants actually use them in their work activities. Since level three measurements must take place after the learners have returned to their jobs, the actual measurements will typically involve someone closely involved with the learner, such as a supervisor. Although it takes a greater effort to collect this data than it does to collect data during training, its value is important to the training department and the organization, as the data provides insight into the transfer of learning from the classroom to the work environment and the barriers encountered when attempting to implement the new techniques learned in the program.
This is the final result that occurs. It measures the training program's effectiveness, that is, "What impact has the training achieved?" These impacts can include such items as money, efficiency, morale, teamwork, etc. While it is often difficult to isolate the results of a training program, it is usually possible to link training contributions to organizational improvements. Collecting, organizing, and analyzing level four information can be more difficult, time-consuming, and costly than the other three levels, but the results are often quite worthwhile when viewed in the full context of their value to the organization. As we move from level one to level four, the evaluation process becomes more difficult and time-consuming; however, it provides information of increasingly significant value. Perhaps the most frequently used type of measurement is Level one, because it is the easiest to measure; however, it provides the least valuable data. Measuring results that affect the organization is considerably more difficult, thus it is conducted less frequently, yet it yields the most valuable information. Each evaluation level should be used to provide a cross-section of data for measuring the training program. The first three levels of Kirkpatrick's evaluation -- Reaction, Learning, and Performance -- are largely "soft" measurements; however, decision-makers who approve such training programs prefer results (returns or impacts). That does not mean the first three are useless; indeed, their use is in tracking problems within the learning package: Reaction informs you how relevant the training is to the work the learners perform (it measures how well the training requirement analysis process worked). Learning informs you of the degree to which the training package worked to transfer KSAs from the training material to the learners (it measures how well the design and development processes worked).
The performance level informs you of the degree to which the learning can actually be applied to the learner's job (it measures how well the performance analysis process worked). Impact informs you of the "return" the organization receives from the training. Decision-makers prefer this harder "result," although not necessarily in dollars and cents. For example, a study of financial and information technology executives found that they consider both hard and soft "returns" when it comes to customer-centric technologies, but give more weight to non-financial (soft) metrics, such as customer satisfaction and loyalty (Hayes, 2003). Note the difference between "information" and "returns." That is, the first three levels give you "information" for improving the learning package, while the fourth level gives you "impacts." A hard result is generally given in dollars and cents, while soft results are more informational in nature; but instead of evaluating how well the training worked, the fourth level evaluates the impact the training has upon the organization. There are exceptions. For example, if the organizational vision is to provide learning opportunities (perhaps to increase retention), then a level-two or level-three evaluation could be used to provide a soft return. This final measurement of the training program might be met with a more "balanced" approach, or a "balanced scorecard" (Kaplan & Norton, 2001), which looks at the impact or return from four perspectives: Financial: a measurement, such as an ROI, that shows a monetary return, or the impact itself, such as how the output is affected; financial results can be either soft or hard. Customer: improving an area in which the organization differentiates itself from competitors to attract, retain, and deepen relationships with its targeted customers.
Internal: achieving excellence by improving such processes as supply-chain management, production processes, or support processes. Innovation and Learning: ensuring the learning package supports a climate for organizational change, innovation, and the growth of individuals.
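The "hard" financial measure mentioned here, ROI, is conventionally net program benefit expressed as a percentage of program cost. A minimal sketch with invented benefit and cost figures (the numbers are illustrative only, not from any real program):

```python
def training_roi(benefits: float, costs: float) -> float:
    """Classic training ROI: net benefit as a percentage of program cost."""
    return (benefits - costs) / costs * 100.0

benefits = 120_000.0  # e.g., estimated value of reduced rework over one year
costs = 80_000.0      # design, delivery, and learner time away from the job

print(f"ROI = {training_roi(benefits, costs):.0f}%")  # → ROI = 50%
```

The hard part is not the arithmetic but isolating the benefit figure, which is why the balanced scorecard pairs this financial number with the softer customer, internal, and learning perspectives.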
• ILT – only when face-to-face interaction and/or practice with feedback is required
• WBT/CBT – for content that is stable and not likely to change within the next 3-6 months, since there is more cost for developing, packaging, etc.
• Virtual Classroom – for content that is likely to change within the next 3 months or so
Lockstep Instruction
All the learners proceed at the same pace. It requires fewer instructors and is more easily managed than self-paced instruction, and it is often the medium of choice for one-shot training sessions. The main disadvantage is that the pace is set for average learners -- but there are no average learners to be found. It is also hard to meet individual learning requirements and styles.
Training Best Practices: Needs Assessment and Evaluation
Presented January 25, 2008
By Kristi L. Thompson, M.Ed.
Sierra Associates
www.KristiThompson.com
Objectives
• Participants will be provided the information and tools needed to conduct a basic Needs Assessment for course development.
• Participants will be provided the information and tools needed to conduct a Course Evaluation.
• Participants will be provided some tips on structuring budget requests.
Sierra Associates 2008
Learning is provided in order to improve performance on the present job. (Nadler, 1984)
What have you done today to enhance (or at least insure against the decline of) the relative overall useful-skill level of your work force vis-a-vis competitors?
– Tom Peters, Thriving on Chaos
To Train or Not
Vertical axis: Knowledge (Low to High). Horizontal axis: Employee attitude/desire to perform the job (Low to High).
• High knowledge, low attitude/desire – Problem: Low Motivation. Method: Assess personal job consequences/rewards.
• High knowledge, high attitude/desire – Problem: Systemic. Method: Consider system issues; the problem/system is out of the control of the employee.
• Low knowledge, low attitude/desire – Problem: Bad Fit. Method: Consider improper placement of the employee in the position.
• Low knowledge, high attitude/desire – Problem: Lack of Knowledge or Tools. Method: Training.
Training Approaches
Intervention Approach
• Training is an intervention for solving problems involving employees.
• Focus is on performance and/or organizational results as corrections to problems.
• Steps are taken when a performance issue is identified.
System Approach
• Training is part of a continuous improvement process, fully integrated into the regular process of organizational improvement.
• Some steps are done only once.
Instructional Design – Our Focus
• Analyze
  – Analyze the system (department, job, etc.) to gain a complete understanding of it
  – Compile an inventory of all tasks associated with each job (if needed)
  – Select tasks that need to be trained
  – Build performance measures for the tasks to be trained
  – Choose the instructional setting for the tasks to be trained
  – Estimate what it is going to cost to train the tasks
• Design
• Develop
• Implement
• Evaluate
  – Review and evaluate during each phase (analyze, design, develop, implement)
  – Perform evaluations
  – Revise the training system based on the evaluations
Full ISD model in Appendix
What is a Needs Assessment?
"Needs assessment is the systematic effort that we make to gather opinions and ideas from a variety of sources on performance problems or new systems and technologies." – Allison Rossett (1987)
Why do a Needs Assessment?
• To make sure we are applying the right solution to the problem
• To identify what learning will be accomplished
• To identify what changes in behavior and performance are expected
• To determine the expected economic costs and benefits
Steps of a Needs Assessment
• Conduct a Task Analysis and compile a Task Inventory
• Perform a Gap Analysis
• Select which tasks will be addressed
• Determine performance measures for the trained tasks
• Select a Training Method
• Estimate training costs
Task Analysis/Inventory
• What
  – The breakdown of performance into detailed levels of specificity
  – Statements of what will be done, how it will be done, and for what end result
    Ex: Produce weekly summary Help Desk reports for all calls using a provided Excel template, to be delivered to the IT Director on Fridays.
• Why
  – To determine the operational components of a job, skill, goal, or objective
  – To describe what they are and how they are performed
  – To describe the sequence and scope
• When
  – Whenever there are new processes or equipment, when job performance is below standards, or when requests for changes to current training or for new training are received
Task Inventory
Task Inventory – Example
Department: IT Help Desk   Date: July 7, 2007
Analyst: Barty Crouch   Department Supervisor: Susan Malfoy
Job Title: Administrative Assistant   Job Code: 0742
Task Number and Task:
0742-1 Types orders received by mail, telephone, or in person at a minimum rate of 45 WPM into a computer database under general supervision of the Help Desk Manager to document calls.
0742-2 Receives and answers customer questions under close supervision of the Help Desk Manager in order to provide good customer relations.
0742-3 Delivers triage and delivery information received by mail, telephone, or in person under general supervision of the Help Desk Manager to fulfill customer requests.
0742-4 Acts as liaison between the customer and various departments under close supervision of the Help Desk Manager in order to provide good customer satisfaction.
0742-5 Posts and maintains records in a computer database without supervision to provide the department with historical records for statistical needs.
…
Gap Analysis (Performance Behaviors)
• What
  – Comparison of actual performance against new or existing standards
• Why
  – To identify the performance gap between what is actually done and what is required or expected
• When
  – An intervention is required
  – New processes and/or procedures
  – New equipment/hardware
  – New applications
  – New technologies
  – Change in staffing
  – Reduction in productivity
  – Governmental mandates
  – Security breaches
  – Routinely, as part of a continuous improvement process
Gap Analysis Questions
• For each identified task:
  – Determine the current performance level on the task
  – Determine the desired performance level
Task: Produce weekly summary Help Desk reports for all calls using a provided Excel template, to be delivered to the IT Director on Fridays.
Current Performance Level: Employee often makes mistakes, with missing info approximately 50% of the time; employee repeatedly asks for assistance on producing the report, particularly on recalculating Pivot tables.
Desired Performance Level: Reports are fully complete and accurate every week; employee is able to produce the report with no assistance.
Task Selection
• What
  – A determination of which tasks will be addressed in the training
  – Directs course objectives
• Why
  – To determine the scope and content of the training
• When
  – An intervention is required
  – New processes and/or procedures
  – New equipment/hardware
  – New applications
  – New technologies
  – Change in staffing
  – Reduction in productivity
  – Governmental mandates
  – Security breaches
  – Routinely, as part of a continuous improvement process
Task Selection
• Required
  – The task/topic is required by law or for safety
• Risk
  – There is a high risk if the task is not done correctly
  – The task is critical
• Complexity
  – The task is difficult or complex
  – The task is done frequently
  – The task is time-consuming
  – The task is critical to the performance of the role/project
• Team Considerations
  – The task requires coordination with other staff or with other tasks
  – The task is part of a collective set of tasks
• Performance
  – The task is required for acceptable role performance
  – The task distinguishes star performers
Task Selection – Example
Task: Produce weekly summary Help Desk reports for all calls using a provided Excel template, to be delivered to the IT Director on Fridays.
Current Performance Level: Employee often makes mistakes, with missing info approximately 50% of the time; employee repeatedly asks for assistance on producing the report, particularly on recalculating Pivot tables.
Desired Performance Level: Reports are fully complete and accurate every week; employee is able to produce the report with no assistance.
Select – Criteria: Include – Performance
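One way to make the selection criteria above repeatable is a simple weighted scoring matrix: rate each candidate task 0-2 on each criterion and sum the weighted ratings. The weights and ratings below are invented for illustration; the presentation does not prescribe any particular scoring scheme.

```python
# Invented weights: legally required and high-risk tasks count most
CRITERIA_WEIGHTS = {"required": 3, "risk": 2, "complexity": 1, "team": 1, "performance": 2}

def task_score(ratings):
    """Weighted sum of 0-2 ratings, one rating per selection criterion."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

# Hypothetical ratings for the weekly-report task from the example
weekly_report = {"required": 0, "risk": 1, "complexity": 2, "team": 0, "performance": 2}
print(task_score(weekly_report))  # 0*3 + 1*2 + 2*1 + 0*1 + 2*2 = 8
```

Ranking all candidate tasks by score, or including every task above a cut-off, gives the "Select – Criteria" decision a defensible paper trail.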
Select Training Format
• What
  – A determination of the training delivery mode
• Why
  – Identify the best tool for the job
  – Determine training materials and format
  – Begin to understand possible costs
• When
  – For every training intervention or project
  – Can be done at the task level or just generally
Training Format Options
• One-on-one/small-group coaching
• Boot Camp – small, very specialized audience
• Job Aid
• Classroom
• Vendor course
• OJT
• Mentoring
• CBT
• e-Learning
• Text-based
• Video
• Active Learning
Training Format flowchart
Budget
• What do they want?
  – The greatest needs addressed with the least cost (ROI)
• Structure your requests by
  – Projects supported (the bigger the better)
  – Productivity increase expected (SWAG!)
  – Number of employees affected by a particular course/product (one product, many people)
  – Problems "solved," with metrics
Training Costs Worksheet
No money, no time, no resources
Step – Method:
1. Identify performance problems – Questionnaire/survey; focus group (see Lunchtime Assessment)
2. Select which tasks – Use the most reported from the survey
3. Develop performance measures for the course – You decide, based on time and resource limits
4. Select method of instruction/setting – Lecture/classroom
5. Estimate costs/budget – On location with an internal SME/trainer
Lunchtime Assessment
1. Gather a possible audience (pizza works well). Ask each person to write down their ten most important training needs in a particular area. Emphasize "specific needs." A good format is "I need to know how to…"
2. Capture the training needs on a whiteboard or flip chart. Avoid duplicate answers by questioning.
3. Use a weighted voting process (sticky dots) to prioritize the training needs across the group. Give each participant half as many dots as there are items, and tell participants to place their dots on the chart to vote for their priorities.
4. List the training needs in order of importance, with priority determined by the number of dots each need received.
A Step Beyond
• Have each person mark their top two needs on their list of ten most important, and then collect the lists.
• Share these Top Twos with the manager for personal development work.
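The dot-tally step above is just counting votes per need and sorting. A minimal sketch with made-up voters and training needs (all names and needs are hypothetical):

```python
from collections import Counter

# Each tuple is (voter, training need); all values are fabricated examples
votes = [
    ("A", "pivot tables"), ("A", "mail merge"), ("A", "pivot tables"),
    ("B", "pivot tables"), ("B", "macros"), ("B", "mail merge"),
    ("C", "macros"), ("C", "pivot tables"),
]

tally = Counter(need for _, need in votes)  # dots received per training need
for need, dots in tally.most_common():      # highest-priority need first
    print(f"{dots} dots: {need}")
```

The same tally works whether the "dots" come from a flip chart or a quick online poll; only the vote-collection step changes.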
Basic Data Gathering
Type of info needed, by source of information:
• Audience
  – Documentation: Review employee files or personnel records
  – Observation: Observe in the work environment
  – Interview: Interview/survey the audience or supervisors
• Tasks
  – Documentation: Review job descriptions, policy statements, and trouble reports
  – Observation: Observe the audience or expert performers performing
  – Interview: Interview an expert or other performers
• Content
  – Documentation: Review product plans, specifications, and marketing guidelines
  – Observation: Observe the expert or creators of the product/process
  – Interview: Interview SMEs, policymakers, marketers, or managers
IT Training Needs Assessment?
IT Trait – Process Modification:
• Obsolescence
  – Use vendor training for very specialized skills/tasks
  – Define supported tools/apps/skills and limit training to these
• Enormous amount of possible subject matter: new products daily, new services daily, CHANGES daily!
  – Define supported tools/apps/skills and limit training to these
  – For static info, develop documentation
  – For active info, consider the fastest training method
• Entire campus as a possible audience, with different needs/levels/applications for the same tasks
  – Identify audience groupings related to the skills/knowledge/performance needed
  – Tweak content/materials for each audience
  – Train the Trainer – use department IT stars
The linkage of Assessment and Evaluation
From Pamphlet 350-70-6: http://www-tradoc.monroe.army.mil/tpubs/pams/p350-70-6.pdf
Evaluation Along the Way
• Phase 1 – Analysis
  – Determine if it really is a training problem
  – Make sure there isn't already a solution available
• Phase 2 – Design
  – Ensure the course content links directly to the objectives
• Phase 3 – Development
  – Perform rapid prototyping (implement-evaluate-implement-evaluate, etc.), or
  – Have another training specialist check the solutions
• Phase 4 – Implementation
  – Use Level 1 evaluations (next slides)
Four Levels of Evaluation
• Reaction
• Learning
• Behavior or performance
• Outcomes or results
Level One – Reaction
• Measures and evaluates the learner's reactions and opinions about the training program itself
• Does not measure what new skills the learners have acquired or what they have learned that will transfer back to the working environment
• Method – Attitude questionnaires after classes
COWC Level One; Level One Questionnaire
Level One – Questions to Ask
• Rate the instructor
• Rate the course
• Estimate how much you have learned in the course
• Rate the effectiveness of the instructor in stimulating your interest in the subject
• Rate the effectiveness of this course in challenging you intellectually
Minimally:
• What major conclusion did you draw from today's session?
• What major questions remain in your mind?
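Once these questionnaires come back, the usual first pass is a mean rating per question. A minimal sketch, assuming a 1-5 rating scale and fabricated responses from five attendees:

```python
from statistics import mean

# Fabricated 1-5 ratings from five attendees (illustrative only)
responses = {
    "Rate the instructor": [5, 4, 4, 5, 3],
    "Rate the course": [4, 4, 3, 4, 4],
    "Estimate how much you have learned": [4, 5, 4, 4, 5],
}

for question, ratings in responses.items():
    print(f"{question}: mean {mean(ratings):.1f} (n={len(ratings)})")
```

Reporting the sample size alongside the mean matters here: with small classes, one or two outlier forms can move the average substantially.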
Level Two – Learning
• Measures and evaluates the changes in the participants' skills, knowledge, or attitudes as a result of the training: did they learn anything?
• Methods
  – Pre- and post-testing
    • Questions should be directly linked to the selected tasks and the course content
Level Three – Performance (Behavior)
• Measures and evaluates the transfer of the learning to the job or organization: can and do people use their newly acquired learning on the job?
• Methods
  – Formal (testing or manager interview)
  – Informal (observation)
  – Must take place after the learners have returned to their jobs
Manager Level 3 Evaluation; Employee Level 3 Evaluation
Level Four – Results
• Measures and evaluates the impact of the training on the productivity and profitability of the organization: what "return" has the organization received from the training?
• Methods
  – Develop a "balanced scorecard" with four perspectives:
    • Financial
    • Customer Satisfaction
    • Internal Perspective
    • Innovation and Learning
  – Link to the Vision/Mission statement on these aspects
  – Allow time for the behavior change to take place
  – Evaluate both before and after the program
Balanced Scorecard
• Financial
  – A measurement, such as an ROI, that shows a monetary return, or the impact itself, such as how the output is affected (can be either soft or hard results)
• Customer
  – Improving an area in which the organization differentiates itself from competitors to attract, retain, and deepen relationships with its targeted customers
• Internal
  – Achieve excellence by improving such processes as supply-chain management, production processes, or support processes
• Innovation and Learning
  – Ensuring the learning package supports a climate for organizational change, innovation, and the growth of individuals
VinFin Balanced Scorecard example
References
• Rossett, Allison (1987). Training Needs Assessment (Techniques in Training and Performance Development Series). Educational Technology Publications.
• Kirkpatrick, Donald (1994). Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler Publishers, Inc.
• Multimedia Courseware Development Guide: http://www.tradoc.army.mil/tpubs/pams/p350-70-2.doc
• Writing a learning objective standard that measures Learning Objective (LO) performance: http://www.asat.army.mil/briefings/Learning%20objective%20standard%20writing.d
• Big Dog's Performance Coaching Page: http://nwlink.com/~donclark/perform/coach.html
• Design Training Needs Assessment Surveys: http://adulted.about.com/od/trngneedsasst/a/TNAsurveys.htm
Resources
• First Things Fast: A Handbook for Performance Analysis by Allison Rossett (Nov 6, 1998)
• Job Aids and Performance Support: Moving From Knowledge in the Classroom to Knowledge Everywhere by Allison Rossett and Lisa Schafer (Nov 3, 2006)
• A Handbook of Job Aids by Allison Rossett and Jeannette Gautier-Downes (Jun 15, 1991)
• Beyond the Podium: Delivering Training and Performance to a Digital World by Allison Rossett and Kendra Sheldon (May 23, 2001)
• Article – What Is a Balanced Scorecard: http://www.balancedscorecard.org/BSCResources/AbouttheBalancedScorecard/tabid/55/Default.aspx
• Digital Tools Survey
Training Requirements
• Kristi Thompson is the Principal of Sierra Associates. She has over 20 years of experience in the development of stronger work environments. Kristi works with groups on workplace and relationship processes to redirect focus and perspective, resulting in higher morale, focus, and productivity. She has applied her expertise in high-performance manufacturing, the service industry, high-tech companies, academic institutions, and the medical field. She has presented on numerous topics such as Change, Conflict, Team Development, and Quality Improvement.
• Kristi received her Master of Education from the University of Massachusetts Amherst and her Bachelor of Science in Chemistry from Worcester Polytechnic Institute.
• Sierra Associates
  8 Munn Road
  Monson, MA 01057
  413 267 3585
  Cell: 508 789 4112
  www.KristiThompson.com
  firstname.lastname@example.org
Data Gathering Methods 1

Questionnaire/Survey
  Pros:
  – Can customize per audience population
  – Can be anonymous
  – Can and should ask different style questions
  – Efficient in collecting lots of data
  – The easiest way of collecting needs assessment data
  Cons:
  – Percent return is often less than 20%
  – Takes time and skill to develop well
  – Directed/limited answers
  – Should be limited to a 10-minute completion time
  – What people tell you and what they really do may differ
  – Can't probe deeper

Documentation review (e.g., job descriptions, performance appraisals)
  Pros:
  – Provides the management/HR perspective
  Cons:
  – May be difficult or impossible to obtain desired documents
  – May require interpretation/explanation

Interview managers/supervisors
  Pros:
  – Good for complex or undefined problems/areas
  – Can be modified "on the fly"
  – Personal touch
  Cons:
  – Time and resource intensive
Data Gathering Methods 2

Observations
  Pros:
  – The most direct method of collecting needs assessment data
  Cons:
  – Very resource consuming (time and people)

Review performance appraisals of lesser performers
  Pros:
  – Time efficient
  – Best used in conjunction with conversations with the manager
  Cons:
  – Dependent upon the quality of documentation and honesty in documents

Focus groups
  Pros:
  – Good for complex or undefined problems/areas
  – Can be modified "on the fly"
  – Personal touch
  – Ideas can build on each other
  – Relatively effective and efficient
  Cons:
  – Scheduling issues
  – Wide range of participants in attendance
  – What people tell you and what they really do may differ
  – Sometimes important to verify the results with observations and document analysis
Steps for a Full Analysis

1. Understand the client's business (analyze the system)
   Method: Interview the manager/department head
   Toolbox templates: System Overview Instrument; System Overview example
2. Conduct a Task Inventory
   Method: See next slides
   Toolbox template: Task Inventory
3. Select which tasks will be addressed
   Method: Discuss with the manager/department head/client
   Toolbox template: Task Selection
4. Build performance measures for the trained tasks
   Methods: Review performance appraisals; interview the manager/department head; ask HR/EAP
   Toolbox templates: Task Performance Measure; Task Performance Template example
Steps for a Needs Assessment

1. Select the method of instruction and the instruction setting (Toolbox template: Method Decision Flow)
   Lockstep: Classroom (in-house or vendor), Boot Camp, Video
   Self-paced: Personalized System of Instruction (PSI), Computer-Based Training (CBT), Text Instruction, e-learning or Internet Distance Learning (IDL)
   Job: Job Performance Aid (JPA), On-The-Job (OJT)
2. Estimate costs (Toolbox template: Training Cost Worksheet)
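The cost-estimation step typically totals a few line items. As a minimal sketch of such a worksheet (the line items and dollar figures below are hypothetical, not the contents of the Training Cost Worksheet template):

```python
def training_cost(development: float, delivery: float,
                  learners: int, hours_per_learner: float,
                  hourly_wage: float) -> float:
    """Total cost = development + delivery + learners' paid time in class."""
    learner_time_cost = learners * hours_per_learner * hourly_wage
    return development + delivery + learner_time_cost

# Illustrative figures only: $12,000 development, $5,000 delivery,
# 20 learners x 8 hours each at a $30/hour loaded wage.
print(training_cost(12_000, 5_000, 20, 8, 30))  # 21800
```

A worksheet like this also makes the method-selection trade-off in step 1 visible: self-paced options front-load development cost, while classroom options scale delivery and learner-time cost with each session.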