Towards Managing the Learning Process: Evaluation of Simulation Training

Rebecca J. Atkins¹; Malcolm R. Smith¹; Anthony R. Mildred²; & H. Peter Pfister¹
¹ School of Behavioural Sciences, The University of Newcastle
² NSW RailCorp
Rebecca.Atkins@newcastle.edu.au

Abstract. The progressive nature of the simulation industry requires advanced training techniques, the strategic core of which is the management of learning. A systematic approach to learning is further reliant on the development and implementation of effective evaluation strategies. Leaders in the field have identified the need for the re-development of evaluation methods, including outcome measures, system assessment, and the widespread acceptance and incorporation of evaluation techniques. The current need for training evaluation extends beyond the well-established validation of simulators as training tools. Effective evaluation structures increase the effective use of training systems and the application of skills acquired during the training program, further enabling the scientific validation of all facets of the training process. The development of effective training evaluation processes grounded in psychometrics and cognitive science is essential for the continuation of advanced training within the simulation community. This paper presents an external evaluation of an industry-developed evaluation process of a reality training centre utilising immersive simulation within the rail transport industry. The evaluation related to a recent round of a safety management system training program delivered across all levels of the rail workforce within an Australian State Rail Corporation. The measure was not found to provide an accurate assessment of identified program objectives. However, valuable insights into workforce perceptions of the training process, programs, and facilities were established.
Discussion highlights the identification and evaluation of the learning components of the training task, the effective and efficient use of the training facility, the enhancement of the training process, and the re-development of an effective training evaluation system. Evaluation essentially represents the ability to understand the multifaceted training process and the learning outcomes produced, allowing application of this knowledge for further advancement of simulation use and effective training techniques.

1. INTRODUCTION

The progressive nature of the simulation industry is reliant upon advanced training techniques centring on the cognitive concepts of human learning. Internationally, the emerging role of the behavioural sciences and the increasing presence of Human Factors (HF) within the cross-disciplinary industry of simulation are re-energising the pursuit of advancement through a focus on human and system performance [1],[2],[3]. The strategic core of this approach is the management of learning. Research has identified the progressive need within the simulation industry to extend training practices beyond the current technological focus to intensify the learning aspect of training [4],[5]. Such emphasis serves to maximise the use of simulators as training tools [6] and to further advance the industry and the performance outcomes of its professionals.

Psychologists within the simulation industry, and within the current context of rail transportation, aim to develop methods by which systems can be improved to better accommodate the cognitive, physiological, and behavioural limits of human operators, to maximise the effectiveness of training tools and minimise the frequency of human errors and resulting accidents [7]. The investigation and strategic improvement of training methods is a cost-effective means of improving human performance within a system and improving the functionality of the system itself: a process referred to as evaluation. The current lack of psychometric evaluation within the field of simulation training, and the limited acceptance and incorporation of evaluation techniques, requires persistent intervention [6],[8]. The need for the re-development of evaluation strategies has been identified by industry personnel, highlighted as a means of continued advancement [6]. This need extends beyond assessment of technology and the well-established validation of simulators as training tools [9],[10] to psychometrics and cognitive science, with the ongoing evaluation of all facets of the training process and particular emphasis on the role of learning [11],[12],[13]. Evaluation is an essential adjunct to simulation training aimed at optimising the effective use of technology and maximising performance outcomes.

1.1 Training and Evaluation

Training is the systematic process of instruction, practice, review, and examination [14]. Evaluation is an integral part of this process, offering a strategic approach to advancement [11]. Evaluation is the process of gathering information to ascertain the effectiveness and efficiency of training programs in order to make informed improvements to training [14]. Effective evaluation structures increase the effective use of training systems and the application of skills acquired during the training program. It is suggested that in general industry very few training programs are effectively evaluated and subsequently improved upon [11],[14]. Empirical evidence indicates evaluation is the most poorly performed element of training [8],[11],[15].
Indeed, evaluation is often not performed at all [8],[11],[15]. Early theories of training present evaluation as an essential and final phase of the training process that serves as the measure from which learning is examined and training effectiveness derived [12]. Kirkpatrick's [12] model of training evaluation remains dominant within industrial/organisational psychology [8],[15]. The model presents four levels of training evaluation, each increasing in depth. The four levels (presented in Figure 1) are: (1) how trainees feel about the program, (2) quantity of learning in the form of increased knowledge and understanding, (3) changes in behaviour, and (4) effects of behavioural change on the attainment of learning objectives. These can be further simplified as: (1) reactions and feelings, (2) learning, (3) behaviour, and (4) results. The higher levels of evaluation refer to changes in learning and behaviour, where learning is represented by the acquisition of skills and knowledge and the adoption of new behaviours [16]. The attainment of desired learning outcomes is vital to the effectiveness of training [8],[11],[13],[17]. The Kirkpatrick model highlights the fundamental aim of training programs: to achieve change and/or development in the cognitive knowledge, skills, and/or attitudes of trainees, indicating the achievement of learning outcomes [18]. Essentially, training effectiveness is determined by measuring how well programs meet the learning outcomes set for training and, ultimately, the level of behavioural transfer into the workplace [11],[13],[18].

Figure 1: Four level model of evaluation [12].

A best practice model of training has been proposed by Bramley [11], extending the Kirkpatrick model and emphasising the integral role of evaluation in the wider training process. According to the 'training cycle' model [11], the design of training involves the ordered process of: (1) identification of training needs, (2) setting learning objectives, (3) selection of appropriate training methods, (4) delivery of the program, and (5) evaluation of the program and the feedback of information. Integration of each of these steps is depicted in Figure 2. The model indicates that information derived through training evaluation (in particular higher level evaluation) has the potential to influence training programs, possible safety issues, required competencies, and the way work roles and organisational systems are defined. This has specific implications for the development of systems that control for human error and other HF considerations. Information gained through the evaluation of training has the potential to alter the design of the workplace to better accommodate the physiological and behavioural limits of human operators, maximising efficiency and effectiveness and improving safety [3].

Figure 2: The 'training cycle' model [11].

1.2 Training and Evaluation in Rail Transportation

Human error is well documented as the leading contributor to in excess of 80% of all transportation accidents [1],[5],[19]. HF investigations into complex modern transport systems have revealed the ability for dramatic reduction of error rates through effective training [1],[19]. Research has further shown effective training to be a cost-effective method for the reduction of HF-related transportation accidents [4]. Accurate statements as to training effectiveness within the Australian rail industry cannot be made to date, as no publicly available/published scientific evaluation has taken place. Without an effective evaluation process, there is no evidence to suggest the presented concepts or learning objectives of a training program will be transferred into behaviour [11]. This is cause for concern, particularly when training is used as an organisational strategy for the reduction of human error as a causal factor of accidents, as is the case with immersive simulation (IS) training within the Australian rail transport industry. The unquestionable importance of evaluation was stressed in the inquiry into the Glenbrook rail disaster. Employees who participated in safety training (which had not been evaluated) failed to exhibit key cognitive reasoning skills, resulting in failure to take appropriate action to avert the collision. The inquiry findings suggest effective evaluation of the training program may have averted the disaster [20]. Developing effective measurement tools to evaluate rail safety training is paramount and is a strong contributor to the rationale of the current study.

1.3 Background and Rationale for the Research

The use of IS as a key element of safety training is relatively new to the Australian rail transportation industry, with the Australian Rail Training Reality Centre (ARTRC) completing the first round of training, Safety Management System 2.1 (SMS 2.1), in 2001. Prior to this, interactive operator simulation was used throughout the early 1990s to improve operator performance in freight rail transportation, and interactive passenger train simulation was incorporated into driver training programs in the mid 1990s. No scientific investigation has been reported regarding the effectiveness of these programs. The IS currently utilised within the ARTRC SMS program offers non-interactive group training involving scenario/incident simulations using the immersive audio-visual capacity of the facility. The non-interactive basis of the simulation permits training (and associated benefits) to extend to trainees in both driving and support roles, as technical skills are not required to complete the training. Such an approach is aligned with program objectives to enhance cohesion and communication among the workforce. SMS training is further conducted with mixed workgroups, aimed at developing teamwork and a positive safety culture through increasing the level of understanding of how different work roles are integrated into the management of safe rail operations. The adoption of this technique in interactive simulation training has successfully enhanced teamwork between work roles in both aviation and medicine [5],[21]. The mixed group non-interactive IS training used at the ARTRC is, however, a unique concept that is previously untested as a learning tool. If such SMS training is to be reliable, the quality and effectiveness of the training must be ascertained.

1.4 Current Study

The aims and objectives of the current study were set by the ARTRC executive (the project body) to determine the outcomes of IS training and the mixed group training technique. The specified objectives of the program are that IS SMS training: (1) increases understanding, (2) increases learning, (3) increases awareness of individual responsibility for safety, (4) promotes discussion about safety, and (5) provides a positive response to mixed group training. The current research examined an existing dataset from round 2.4 of the SMS program (SMS 2.4). Analysis was conducted to establish the constructs of the training evaluation measurement tool (the SMS 2.4 Evaluation Questionnaire [SMS 2.4 EQ]) developed and implemented by the project body. The SMS 2.4 EQ has not previously been empirically tested. The purpose of the research was to examine the evaluation tool and develop recommendations for the optimisation of the evaluation process.

2. METHOD

2.1 Participants

2,427 employees across the rail workforce participated in SMS 2.4 as a required part of their employment. The sample consisted of 463 females, 1,947 males, and 17 who did not specify gender. Fourteen represented workgroups were simplified to: driver, station staff, guard, network ops, and other. Both males and females were represented across each of the workgroups, and an even spread of age, length of service, and experience in multiple work roles was recorded.

2.2 Materials

The evaluation measure, the SMS 2.4 EQ, was developed by the project body as a measurement tool to evaluate the SMS 2.4 IS training program. The design of the questionnaire is not based on the learning outcomes set for the training program and is limited to evaluating the IS experience. Furthermore, the questionnaire was not based on any previously validated evaluation tool(s). Hence, the validity of the SMS 2.4 EQ is not known, and no previous testing or development of the instrument has taken place. The SMS 2.4 EQ consists of 18 questions printed on a single-sided A4 sheet in eight-point font. The first series of questions relate to demographic information, ascertaining identification, age, gender, workgroup, and length and type of service. The remaining questions hold a more formalised (while varied) response structure, being either a forced-choice categorical selection or a mixture of four- or five-point Likert response scales. However, the Likert scale response options are not standardised between questions, varying according to item topics. Questionnaire items refer to impressions of mixed group training, perceptions of the IS scenarios in both SMS 2.4 and previous training rounds, increased awareness of safety defences, and impressions of learning.

2.3 Procedure

No manipulation or control of the IS process was conducted as part of the current experiment. The procedure was limited to analysis of the existing SMS 2.4 EQ dataset. The original dataset was collected by the program instructors at the completion of SMS 2.4 training. No independent data collection was conducted as part of this phase of the research.

2.4 Data Analysis

The dataset was analysed to evaluate the SMS 2.4 EQ. The original Microsoft Access database was transferred into Microsoft Excel and recoded into a numerical form that could be analysed in SPSS version 12.01. Seventy-one records were lost during the computer import/export process as a result of data corruption. Standard quality control processes were conducted on the recoded dataset to confirm accuracy.

3. RESULTS

3.1 SMS 2.4 EQ Evaluation

An Exploratory Factor Analysis was conducted on the dataset to identify the constructs measured by the SMS 2.4 EQ and to determine whether the evaluation tool would generate a factor structure that corresponded with the objectives of the IS training. Principal Component Analysis (PCA) using Varimax rotation was used to isolate the factors measured by the questionnaire items. The SMS 2.4 EQ yielded a 4-factor structure explaining 66.45% of the variance. The four factors identified were: Factor 1 – training systems and processes (7 items; Cronbach's alpha [CA] = 0.87); Factor 2 – previous IS training (2 items; CA = 0.76); Factor 3 – current IS training (1 item); and Factor 4 – training in mixed groups (1 item). From a technical perspective, the questions loading on Factor 1 are satisfactory.
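The factor-analytic procedure described above (PCA with Varimax rotation, plus Cronbach's alpha for internal consistency) can be sketched in Python. The original analysis was run in SPSS 12.01, so this is an illustrative reconstruction, not the authors' code: the 2,427 × 18 shape mirrors the questionnaire, but the response values here are randomly generated, and the retained-factor count is fixed at four to match the reported solution.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a factor-loading matrix (Kaiser, 1958)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion with respect to the rotation
        gradient = loadings.T @ (
            rotated ** 3 - rotated * ((rotated ** 2).sum(axis=0) / p))
        u, s, vt = np.linalg.svd(gradient)
        rotation = u @ vt
        if s.sum() - criterion < tol:
            break
        criterion = s.sum()
    return loadings @ rotation

# Synthetic stand-in for the EQ dataset: 2,427 respondents x 18 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(2427, 18)).astype(float)

# PCA via eigendecomposition of the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain four components (matching the reported 4-factor solution) and rotate.
n_factors = 4
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
rotated = varimax(loadings)
variance_explained = eigvals[:n_factors].sum() / eigvals.sum()
```

On real data the rotated loading matrix would show which items cluster on which factor, and `cronbach_alpha` would be applied to each multi-item cluster separately, as in the CA values reported above.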
The data for the remaining three Factors are at best suspect. Satisfactory CAs (a form of quality assurance) were noted for Factors 1 and 2. However, for Factors 3 and 4, comparative CAs could not be calculated as only one item loaded on each, although the single-item loadings on Factor 3 (alpha = .86) and Factor 4 (alpha = .88) were strong. Construction of the questionnaire made it difficult to compare items. Several questions require the choice of one of three options that are unique categorical responses to the particular question, resulting in a lack of comparative power. A standardised method of item responses is recommended for future questionnaires. Further PCA examination revealed many questions loaded on more than one Factor, indicating these items require revision to be more specific in order to provide meaningful information. Analysis of the SMS 2.4 EQ suggests that, in its current form, the evaluation tool is unable to measure training outcomes as intended. Examination reveals the questionnaire to be an emotive/reaction and perceptual measure of the overall training process, measuring positivity toward training. However, employee reactions to the program provide useful information for possible improvement of SMS training. Such information was derived through analysis of the data relating to each of the four Factors.

3.2 Factor 1: Training Systems and Processes

While the majority of items loading onto Factor 1 were intended to measure different aspects of the outcomes of the training experience, they are in fact measuring the same construct, identified as a measure of overall training systems and processes. Investigation of the results of these items presents a positive view of the overall training program, with a positivity high of 84% reported by participants, providing encouraging information from trainees about the use of the training facility and the IS program. Demographic variables were examined for Factor 1 responses using Chi-square analysis (CSA). A significant difference in the percentage of positive responses towards the training systems and processes was revealed between workgroups, χ²(16, N = 2,343) = 64.40, p < .001, where drivers (91%) had the highest rate of overall positive responses to SMS 2.4. Such a finding contrasts with previous anecdotal evidence suggesting drivers disliked training. CSA also revealed a significant difference between males and females on positive responses, χ²(4, N = 2,333) = 187.44, p < .001. Females consistently reported a lower rate of positivity (64%) compared to males (89%). A question measuring the perceived degree of improved learning from the IS experience yielded a positive response from only 49% of females (compared to 84% of males), suggesting the current IS training program is not positively accepted among the majority of female trainees. Such a result may have been contributed to by all the employee characters depicted in the training scenarios being male.

3.3 Factors 2 and 3: IS Training

An objective of the training program was that experiencing the IS scenarios would have a positive effect on understanding SMS, coinciding with Factors 2 and 3. Several questionnaire items specifically probed participant perceptions, opinions, and views toward previous and current IS training. Within the SMS 2.4 EQ design, response scales for these Factors differed. To allow for statistical comparison, item responses were recoded into a two-point scale of either a positive or negative response. Results indicate a positivity rating of 75% towards previous IS scenarios (Factor 2) and 85% toward SMS 2.4 (current) scenarios (Factor 3), suggesting trainees on average found the IS experience assisted their conceptual understanding, and also that the scenarios are improving from previous rounds of training. CSA revealed significant differences between workgroups in terms of positive response to IS training for Factor 2 (2 items) and Factor 3 (1 item). Factor 2 – item 1: χ²(4, N = 2,226) = 54.17, p < .001; drivers reported the most assistance from previous IS training (83%), a result that may have been contributed to by the corresponding scenario specifically relating to their work role, depicting a driver as the main character. Factor 2 – item 2: χ²(4, N = 2,311) = 37.34, p < .001; station staff reported the most assistance from previous IS training (80%), a result again potentially influenced by the referenced scenario centrally depicting and relating to station staff. Factor 3 – item 1: χ²(4, N = 2,427) = 36.27, p < .001; guards reported the most assistance from current IS training (89%), interestingly where a guard was depicted as the main character of the scenario. Such consistent findings strongly suggest impressions of the IS training experience are closely linked to trainees identifying with their work role in the scenario, further suggesting SMS is not meeting the identified objective of enhanced teamwork through increased understanding of different work roles. However, the SMS 2.4 EQ revealed that experiencing the IS scenarios has a positive effect on the development of understanding of SMS for individual work roles. Workers seeing a co-worker of the same status in the simulation scenario may have found increased affinity toward the character and the scenario in general. This aspect should be considered as a valuable guide in the construction of future training scenarios.

3.4 Factor 4: Training in Mixed Groups

Positive response to mixed group training within the IS environment is an objective of the SMS training program. The fourth Factor measured by the SMS 2.4 EQ specifically relates to mixed group training. Analysis was conducted to determine whether training in mixed groups had any effect on the perception of the training experience. Results indicate mixed group training was overall not received well; however, this may have been confounded by the limited data available on this construct (1 item). Overall, 47% of participants responded positively to the mixed workgroup training and 49% responded negatively, of which 46% stated that mixed group training took time away from gaining skills specific to their role and 3% reported that mixed group training was not useful at all.
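The recode-and-compare procedure used throughout these analyses (collapsing varied response scales to a positive/negative coding, then a Chi-square test of independence across demographic groups) can be sketched as follows. This is an illustrative reconstruction on synthetic data: the cut-point for "positive", the group proportions, and the response distribution are assumptions, not values from the original dataset; only the workgroup labels and the N mirror the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Synthetic stand-in for the EQ data: one 5-point Likert item plus each
# respondent's workgroup. Labels mirror the paper; values are random.
rng = np.random.default_rng(1)
groups = rng.choice(
    ["driver", "station staff", "guard", "network ops", "other"], size=2427)
likert = rng.integers(1, 6, size=2427)

# Collapse the varied response scales to a common two-point coding.
# The cut-point (4-5 = positive) is an assumption for illustration only.
positive = (likert >= 4).astype(int)

# Workgroup x positivity contingency table, then a Chi-square test of
# independence (the paper's CSA), yielding statistics of the form
# chi-squared(df, N) as reported in the results above.
labels = np.unique(groups)
table = np.array([[np.sum((groups == g) & (positive == b)) for b in (0, 1)]
                  for g in labels])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.3f}")
```

With five workgroups and a binary outcome, the degrees of freedom are (5 − 1)(2 − 1) = 4, matching the χ²(4, N) statistics reported for the Factor 2 and Factor 3 items.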
CSA revealed a significant difference in positive responses to mixed group training for different ages, χ²(6, N = 2,342) = 68.31, p < .001; trainees aged 36-45 years responded least positively (40%), while younger workers may have found the interaction with older, more experienced workers beneficial. CSA also revealed a significant difference between workgroups in positivity of response, χ²(8, N = 2,343) = 29.87, p < .001; drivers were the least positive workgroup toward the mixed group training experience (46%). Combining this evidence with the response of drivers to Factor 2 item 1 suggests drivers prefer more specific training within their workgroup. CSA further revealed a significant difference in positivity of response according to length of service, χ²(8, N = 2,343) = 39.56, p < .001; trainees with 5-20 years of experience were least positive toward mixed group training (42%). Workers with either a shorter (< 5 years) or longer (> 20 years) length of service responded more positively; interaction with employees of contrasting experience levels may have been perceived as beneficial. Overall, the results derived from Factor 4 indicate the mixed group aspect of the training program is an area which can be improved to promote teamwork and positive safety behaviour.

4. DISCUSSION

4.1 SMS 2.4 EQ Effectiveness

The SMS 2.4 EQ was developed to evaluate the objectives of the IS component of the SMS 2.4 training program. The derived factor structure of the evaluation questionnaire (EQ) failed to support this supposition. Essentially, the SMS 2.4 EQ measures the main factor of the perception and impression of training systems and processes. It is concluded the SMS 2.4 EQ does not validly measure the constructs that were specified as objectives for the IS program. The tool requires scientific development if a valid measure for evaluating the program is to be obtained. The SMS 2.4 EQ is likely an emotive/reaction and perceptual measure of the overall training process, measuring training positivity. The SMS 2.4 EQ, in its current form, is a tool for conducting the first level of evaluation (see Figure 1). Future directions based on the best-practice model of evaluation suggested by Kirkpatrick [12] should aim at increasing the depth of the EQ to a measure of higher-order learning and behavioural change. The incorporation of effective evaluation into the training process, as suggested by Bramley [11], would allow for the optimisation of the safety training process and the IS program. Development of evaluation tools exploring multidimensional constructs based on declarative knowledge, cognitive and behavioural skills, and affirmation-based learning outcomes is the next step in the development and improvement of the IS-based training program.

While the factor structure of the SMS 2.4 EQ is not a measure of the SMS 2.4 program objectives or learning outcomes, it is possible to speculate on some conclusions about the SMS 2.4 training program. Overall there is a positive response towards SMS training, while a difference does exist in degree of positivity between males and females. Initiating changes to future training programs to eliminate gender bias in the simulated scenarios may improve the positivity response in females. The results also suggest a lack of understanding of teamwork between roles within the rail system, with an identifiable preference for training within isolated workgroups and a negative response to training in mixed workgroups. Such a finding holds potential negative influence on applied safety behaviour [5]. The identification of predetermined learning outcomes would provide the most reliable dependent variable to use in future research and development of the SMS 2.4 EQ. A valid learning outcome evaluation tool would need to be developed as part of future research into the learning benefits of the IS SMS program.

4.2 Recommendations

In order to improve the current questionnaire, the following recommendations should be considered: (1) Improved construction of the EQ, including changes in the layout such as allowing more space between questions, incorporating sub-sections, and using larger font. (2) It is suggested that a standardised Likert scale be used in future EQs to allow for the expression of a magnitude of responses and ease of item comparison. The use of forced-choice categorical responses unnecessarily limits the information participants are able to express and also leads responses. Inclusion of open-ended questions would allow for greater transfer of employee feedback and not limit responses, further leading to identification of undiscovered concerns or information about the IS program. (3) Measuring multiple constructs with long questions creates participant confusion and a lack of clarity in the measurement tool, and results in items that are not pure measures. Short, concise questions would correct this issue in future tools. The addition of multiple questions based on predetermined constructs would lead to a more reliable factor structure. Using negatively and positively phrased questions would also control for a positivity bias. (4) Fatigue may have been a confounding variable in the evaluation process. The SMS 2.4 EQ was completed at the end of a lengthy training day. A possible alternative explanation of the results is that Factor 1 might actually reflect employee positivity towards finishing the day, and hence not positivity towards the training process. To minimise the likelihood of this confound, it is recommended future studies counterbalance between training groups the time of the IS experience and related evaluation during the training day. (5) The EQ was completed at the conclusion of the safety discussion. Group discussions may have had a social desirability bias effect. It is possible that some groups may have still been discussing responses on the questionnaire with other members of the group, leading to bias towards opinions projected by dominant in-groups. To control for this, participants could be asked to complete the questionnaire independently after the conclusion of training, without discussion. (6) A bias caused by fear of reprisal is likely to have increased the positivity of participant responses.
The request for employees to write their employee number on the SMS 2.4 EQ is likely to have inflated positive responses due to fear of negative repercussions if they reported not liking the training. Program instructors collecting and having access to completed, identifiable questionnaires may also have contributed to this bias. Future studies could limit this effect by not requesting employee numbers and allowing participants to submit EQs anonymously.

4.3 Conclusion

This research represents the first psychometric evaluation of the SMS 2.4 EQ. The EQ is likely a measure of the emotive reaction towards the overall training process and not the outcomes of IS as intended. Insufficient evidence was available to support conclusions as to whether IS was beneficial to employee safety behaviour or learning. Future research is required to determine whether there is an actual benefit to learning (and subsequent behavioural transfer) as a result of being exposed to the IS program at the ARTRC. Currently, the mixed group training structure is being negatively received by employees. Overall, however, it appears the training program is being positively received.

4.4 Industry Applications

Industries dominated by simulation training are reliant on effective training tools and structures, and also on the skilled functioning and reasoning of human operators. Evaluation is the process through which training effectiveness is derived and the mechanism for strategically improving training processes and performance outcomes. Industrial accidents such as the Glenbrook rail disaster remind us of the human cost of industry and the consequences of ineffective training, or training which has not been scientifically evaluated [20]. The identified lack of psychometric evaluation of training within the simulation industry and general industry [6],[8],[11],[14],[15],[20] does not represent a safety-oriented approach to the active pursuit of advancement. Strategic direction for the evaluation and subsequent improvement of training is accessible through best practice models such as Kirkpatrick's model of training evaluation [12] and Bramley's training cycle model [11]. The theoretical concepts presented in these models have practical applications to industry and direct the development of new, or redevelopment of existing, training evaluation measures. The development of valid evaluation structures is a complex process reliant on psychometric examination and the revision of developing tools. Effective evaluation is an essential step towards the strategic management of learning; it is essential such tools are incorporated into the wider training process, enabling continued and proactive advancement for a progressive industry.

REFERENCES

1. Helmreich, R.L. (2000). On error management: Lessons from aviation. BMJ, 320, 781-785.
2. McFadden, K.L., & Towell, E.R. (1999). Aviation human factors: A framework for the new millennium. Journal of Air Transport Management, 5, 177-184.
3. Reason, J. (1995). A systems approach to organizational error. Ergonomics, 38, 1708-1721.
4. Helmreich, R.L., & Foushee, H.C. (1993). Why crew resource management? In E.L. Wiener, B.G. Kanki, & R.L. Helmreich (Eds.), Cockpit Resource Management (pp. 3-45). San Diego, USA: Academic Press, Inc.
5. Helmreich, R.L., & Merritt, A.C. (1998). Culture at work. Aldershot, UK: Avebury.
6. Bent, J. (2003). The under-employed full flight simulator. Paper presented at the Sixth International AAvPA Symposium, Sydney, Australia.
7. Reason, J. (1997). Managing the risks of organizational accidents. Aldershot, UK: Ashgate Publishing Limited.
8. Salas, E., & Cannon-Bowers, J.A. (2001). The science of training: A decade of progress. Annual Review of Psychology, 52, 471-499.
9. Valverde, H.H. (1973). A review of flight simulator transfer of training studies. Human Factors, 15(6), 510-523.
10. Salas, E., Bowers, C.A., & Rhodenizer, L. (1998). It is not how much you have but how you use it: Toward a rational use of simulation to support training. International Journal of Aviation Psychology, 8(3), 197-208.
11. Bramley, P. (1991). Evaluating training effectiveness: Translating theory into practice. Sydney: McGraw-Hill Book Company.
12. Kirkpatrick, D.L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training and Development, 13, 3-9.
13. Kirkpatrick, D.L. (2004). A training and development classic: How to start an objective evaluation of your training program. Training & Development, 58(5), 1-3.
14. Goldstein, I.L. (1986). Training in organisations (2nd ed.). San Francisco: Brooks/Cole.
15. Arthur, W., Jr., Bennett, W., Jr., Edens, P.S., & Bell, S.T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234-245.
16. Schwartz, B., & Robbins, S.J. (1995). Psychology of learning and behaviour. London: Norton & Co.
17. Machles, D. (2003). Evaluating the effectiveness of safety training. Occupational Health & Safety, 72(6), 54.
18. Kraiger, K., Ford, J.K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78(2), 311-328.
19. Edkins, G.D., & Pollock, C.M. (1997). The influence of sustained attention on railway accidents. Accident Analysis & Prevention, 29(4), 533-539.
20. McInerney, P.A. (2001). Special Commission of Inquiry into the Glenbrook Rail Accident. Sydney, Australia.
21. Leape, L. (1994). Error in medicine. JAMA, 272(23), 1851-1857.