Addressing the Proficiency Gap in Maritime Training
Kalyan Chatterjea
EMAS Training Academy & Simulation Centre
Abstract
To be proficient at sea we need a combination of underpinning knowledge,
relevant technical skills and the necessary soft skills that make us good shipboard
team players, capable of managing tasks in a safe manner. During maritime training,
it is important to assess all three areas to establish the proficiency gaps relating
to the learning objectives/goals. These identified deficiencies can then guide us
in tweaking our learning artifacts more effectively to fill the gaps. This paper
presents some of the tools that have been used successfully in classroom and
simulator-based training, in both formative and summative situations, at the
EMAS Academy.
Background
Traditional knowledge-based training (KBT) in maritime education was
supplemented, with the advent of the STCW Convention, by skill-based training
(SBT) on simulators (Bjorklund, Rodahl, and Robertson 1987; Cross 2014). With
the appearance of simulators for training, there was a tendency among maritime
instructors to reduce the importance of knowledge-based components in the
classroom. The emphasis was put on simulator-based skill acquisition, which was
perceived to improve performance. Yet in KBT, learners try to process facts, figures
and basic information, which become important for informed decision-making later
during performance at the workplace. It is argued that the learner applies the
knowledge and information, acquired during the KBT-phase, in a real situation (or
in a near-authentic simulation scenario) and gets an opportunity to reflect on this
application of knowledge during the performance stage. Eventually, this reflection
should help the learner assimilate the domain knowledge and associated skills. It
can be argued further that the link between knowledge, activity and the
learning objective grows stronger when adequate emphasis is placed on both theory
and practice.
In aviation, it was established that mastering theory and practice is not enough:
non-technical skills (NoTechs) also form an important component required to
accomplish task goals (Airbus 2012; Helmreich and Merritt 2000). This is now
being advocated in maritime practice (Grech, Horberry, and Smith 2002; Barnett
2005; Gregory and Shanahan 2010) and mandated in the STCW 2010 revision.
Hence, it can perhaps be claimed that for successful shipboard task performance,
learners need a combination of underpinning knowledge, relevant technical skills
and the necessary soft skills as prerequisites.
In adult learning, the concept of assessment may carry negative connotations, as
it can be associated with anxiety and awkwardness in front of peers or instructors.
Yet with the demand for accountability at every learning centre, assessment has
become an integral part of each learning curriculum. Additionally, assessments are
credited with improving both learning for learners and teaching for instructors.
Pupils need to know how their learning is progressing. Teachers also need to know how
their pupils are progressing, to guide both their own teaching and the pupils’ further
learning. (Assessment Reform Group 2002)
The concept of 'assessment for learning' has been advanced by a number of
proponents of the positive aspects of testing, both during lessons and for final
grading (Black and Wiliam 1998; Bohemia and Harman 2009). Bohemia and
Harman suggested the following six assessment scenarios, which could enhance
learning environments and help measure the gaps in learning applicable to KBT,
SBT and NoTechs.
1. An emphasis on authenticity and complexity in the content and methods of
assessment, rather than reproduction of knowledge and reductive measurement
2. The use of high-stakes summative assessment rigorously but sparingly, rather
than as the main driver for learning
3. Offering students extensive opportunities to engage in the kinds of tasks that
develop and demonstrate their learning, thus building their confidence and
capabilities before they are summatively assessed
4. Rich feedback derived from formal mechanisms, e.g. tutor comments on
assignments and student self-review logs
5. Rich informal feedback, e.g. peer review of draft writing and collaborative
project work, which provides students with a continuous flow of feedback on
'how they are doing'
6. Developing students' abilities to direct their own learning, evaluate their own
progress and attainments, and support the learning of others
In the next section, we describe the tools and methods used to conduct these
assessments at the EMAS Academy.
Knowledge-based Assessments
These are traditional assessments, in which we use objective testing with a
Classroom Response System (CRS, sometimes called a personal response system,
student response system, or audience response system). A CRS is a set of hardware
and software that facilitates teaching activities such as the following (a minimal
sketch of the data flow appears after the list):
➢ Instructor poses an objective question (multiple-choice, true-false, multiple
response, mapping etc.) to the learners via a computer/projector.
➢ Each learner submits an answer to the question using a hand-held
transmitter (a “clicker”) that beams the student's response to a receiver
attached to the instructor’s computer.
➢ Software on the instructor’s computer collects the students’ answers and
produces a bar chart showing how many students chose each of the answer
choices.
➢ The teacher makes “on the fly” instructional choices in response to the bar
chart by, for example, leading students in a discussion of the merits of each
answer choice or asking students to discuss the question in small groups.
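
To make this data flow concrete, here is a minimal Python sketch that tallies a
batch of clicker responses and prints a text bar chart of the kind the instructor
sees. It is an illustration only: the response format and the function names are
assumptions, not the actual CRS software.

    from collections import Counter

    def tally_responses(responses):
        """Count how many learners chose each answer option."""
        return Counter(responses)

    def show_bar_chart(counts, options="ABCD"):
        """Print a simple text bar chart of the response distribution."""
        for option in options:
            n = counts.get(option, 0)
            print(f"{option}: {'#' * n} ({n})")

    # Example: answers beamed in from twelve clickers for one question
    responses = ["A", "C", "C", "B", "C", "A", "D", "C", "C", "B", "C", "A"]
    show_bar_chart(tally_responses(responses))

In practice the receiver software updates such a chart live as responses arrive,
which is what lets the instructor make the “on the fly” choices described above.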
Figure 1. Showing the CRS – a clicker and a multiple-response objective question
Figure 1 shows a multiple-response question used in the Resource Management
Course at the EMAS Academy.
Figure 2. Showing a clicker & a multiple-response objective question from the BRM Course
Clicker questions can cover the following types (a small tagging sketch follows the list):
➢ Recall questions – ask learners to recall facts, figures and information. They
rarely generate discussion, and no higher-order thinking is required.
➢ Conceptual questions – check students' understanding of concepts and help to
identify misconceptions. These could generate discussion among learners and
clarification from the instructor.
➢ Application questions – ask learners to make a decision or to choose from a
given scenario. Real-world scenarios can be given and the learners asked to
choose the appropriate action. This could generate further interaction among
students and with the instructor.
➢ Higher-order questions – ask learners to analyse relationships among multiple
concepts. This could generate further interaction among students and with the
instructor.
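
When building a question bank for the CRS, tagging each item with its type makes
the balance between recall and higher-order items easy to audit. The structure
below is a hypothetical Python sketch, not part of any CRS product; the example
questions are invented.

    from dataclasses import dataclass

    QUESTION_TYPES = ("recall", "conceptual", "application", "higher-order")

    @dataclass
    class ClickerQuestion:
        text: str           # question shown to learners
        options: list       # answer choices, e.g. ["A", "B", "C", "D"]
        correct: set        # correct letters (supports multiple response)
        qtype: str          # one of QUESTION_TYPES

    bank = [
        ClickerQuestion("State the minimum safe CPA used on this bridge.",
                        ["A", "B", "C", "D"], {"B"}, "recall"),
        ClickerQuestion("Given the scenario shown, choose all appropriate actions.",
                        ["A", "B", "C", "D"], {"A", "C"}, "application"),
    ]

    # Audit how the bank is spread across the four types
    for qtype in QUESTION_TYPES:
        print(qtype, sum(q.qtype == qtype for q in bank))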
Figure 3. Showing the results to the participants immediately after conducting the test
It is claimed that, for learners to receive maximum benefit from feedback, it
should be supplied as soon as possible after a test is performed. Positive feedback
is important, but negative feedback is equally significant, since an unaware learner
may go on applying a misconception over and over before discovering its nature.
Immediate feedback is often the most important characteristic of a drill or tutorial.
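
A minimal Python sketch of such a drill follows, with feedback supplied
immediately after each answer rather than at the end; the question and the
wording of the feedback are invented for illustration.

    def run_drill(questions):
        """Ask each question and give feedback at once, so a misconception
        is corrected before the learner rehearses it again."""
        score = 0
        for text, correct, explanation in questions:
            answer = input(f"{text} ").strip().upper()
            if answer == correct:
                score += 1
                print("Correct.")
            else:
                # Immediate negative feedback, with the reason
                print(f"Incorrect: {explanation}")
        print(f"Score: {score}/{len(questions)}")

    run_drill([
        ("Does a higher-order question require analysis? (Y/N)", "Y",
         "higher-order questions ask learners to analyse relationships."),
    ])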
Skill-based Assessments on Simulators
In order to evaluate a trainee's performance a criterion or standard is required
against which the achievements can be measured. Setting this criterion value is
essential but at the same time difficult and complex. Many factors will
influence the criterion value and they can possibly change in time as well.
Furthermore the criterion for certain phenomena might be quite different for
the various levels of training performed on the simulator system. (Cross 2011)
As related above, the authentic environment on a maritime simulator does not
always lend itself to objective assessment. However, some simulator manufacturers
do produce objective assessment tools, which can be programmed to automate
assessment and additionally deliver coaching messages during a simulation
exercise; a branching technique allows these messages to be adapted so that
appropriate instructions appear at the right points in the exercise. Figures 4 to 6
show the sequence of objective assessment on the Kongsberg Big View Simulator
at the EMAS Academy.
Figure 4. Showing the assessment points in the exercise & use of logic gates to trigger
assessment.
Figure 5. Showing the draining of the compressed air system in the engine room on the
Kongsberg Big View Simulator.
Figure 6. Showing the draining of the compressed air system in the engine control room
and the associated logic circuit.
Figure 7. Showing the trigger activating assessment.
Figure 8. Showing allocation of marks.
Figure 9. Showing the final result sheet from the simulator.
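
The Kongsberg tooling is configured graphically, as Figures 4 to 9 show. Purely
to convey the underlying idea, the Python sketch below models one assessment
point as a logic gate over simulator signals, with a mark allocation and a
branching coaching message; all signal names and values are assumptions for
illustration, not the Kongsberg interface.

    def air_system_drained(state):
        """AND gate: both drain valves open and receiver pressure low."""
        return (state["drain_valve_1_open"] and
                state["drain_valve_2_open"] and
                state["receiver_pressure_bar"] < 1.0)

    def assess(state, marks=10):
        """Award the allocated marks when the gate fires; otherwise branch
        to a coaching message matched to what the trainee has missed."""
        if air_system_drained(state):
            return marks, "Assessment passed: compressed air system drained."
        if not state["drain_valve_1_open"]:
            return 0, "Coaching: open drain valve 1 first."
        if not state["drain_valve_2_open"]:
            return 0, "Coaching: drain valve 2 is still shut."
        return 0, "Coaching: keep draining until the pressure falls."

    score, message = assess({"drain_valve_1_open": True,
                             "drain_valve_2_open": False,
                             "receiver_pressure_bar": 6.2})
    print(score, message)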
Soft-skill Assessments on Simulators
Soft skills were first highlighted in the aviation industry when even experienced
pilots were seen to commit errors. These skills (sometimes referred to as the human
element) are also known as non-technical skills, which define behavioural
competencies covering personal effectiveness, communication skills, creative
problem-solving, strategic planning, leadership and team-building skills. Soft
skills relate to a person's ability to interact effectively with team members. They
are now finding acceptance in maritime practice (included in the STCW 2010
revision) and in other safety-critical industries, e.g. medical, nuclear power,
process and even railways, where they are covered in Resource Management Courses.
Assessing these skills is not easy; in aviation, behavioural markers are used for
this assessment.
At the EMAS Academy these non-technical skills are grouped into the categories
shown in the following diagram (Chatterjea, Labor, and Vidal 2013).
Figure 10. Showing the groupings and categories for Behavioural Markers (developed at the
EMAS Academy).
Figure 11. Showing examples of behavioural markers for good practice and for poor practice.
Figure 12. Showing ratings of behavioural markers.
Figure 13. Showing a marking sheet for behavioural markers.
Figure 14. Showing actual recordings of behavioural markers for two runs.
Figure 15. Showing comparison of behavioural markers for two runs.
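
As a concrete, entirely hypothetical illustration of the comparison in Figures 14
and 15, the Python sketch below holds marker ratings from two runs and reports
the trend per category. The category names and the 1–4 rating scale are
assumptions, loosely following the groupings of Figure 10.

    # Hypothetical behavioural-marker ratings (1 = poor ... 4 = good)
    run_1 = {"communication": 2, "leadership": 3,
             "situation awareness": 2, "decision making": 2}
    run_2 = {"communication": 3, "leadership": 3,
             "situation awareness": 4, "decision making": 3}

    for category, before in run_1.items():
        after = run_2[category]
        trend = ("improved" if after > before
                 else "unchanged" if after == before else "declined")
        print(f"{category:20s} {before} -> {after} ({trend})")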
Conclusions
Assessment in maritime training is a complex area, and assessment validity and
reliability will remain subjects for continuous research and deliberation. This
paper has shared some of the efforts being carried out at the EMAS Academy on
various aspects of assessment. A publication sharing our experience in this area
is now available from Amazon.com (Chatterjea, Labor, and Vidal 2013).
[http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=Kalyan
%20Chatterjea.]
References
Airbus. 2012. “Human Performance: Error Management.” Flight Operations Briefing
Notes. Accessed October 7, 2012.
http://www.airbus.com/fileadmin/media_gallery/files/safety_library_items/AirbusSafetyLib_-FLT_OPS-HUM_P
ER-SEQ07.pdf
Assessment Reform Group. 2002. “Testing, Motivation and Learning”. University of
Cambridge Faculty of Education.
http://assessmentreformgroup.files.wordpress.com/2012/01/tml.pdf.
Barnett, Michael L. 2005. “Searching for the Root Causes of Maritime Casualties.”
WMU Journal of Maritime Affairs 4 (2): 131–45. Accessed June 26, 2012.
http://www.solent.ac.uk/research/mhfr/resources/humanerror.pdf
Bjorklund, R, K Rodahl, and B J Robertson. 1987. “Effects of Maritime Simulator
Training on Performance.” In Trondheim, Norway: MARSIM 1987. Accessed April 1,
2014.
http://trid.trb.org/view.aspx?id=396688
Black, Paul, and Dylan Wiliam. 1998. “Inside the Black Box: Raising Standards
Through Classroom Assessment.” Phi Delta Kappan 80 (2): 139–44.
http://faa-training.measuredprogress.org/documents/10157/15652/InsideBlackBox.pdf.
Chatterjea, Kalyan, Captain Alex G. Labor, and Captain Francisco J. Vidal. 2013.
Bridge Resource Management: Teamwork and Leadership. Cengage Learning Asia Pte
Ltd.
http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=Kalyan
%20Chatterjea.
Cross, Stephen J. 2011. “Quality MET Through Quality Simulator Applications.” In
Rijeka.
http://www.pfri.uniri.hr/imla19/doc/015.pdf.
Cross, Stephen J. 2014. “STCW and Simulators.” Maritime Institute Willem Barentsz.
Accessed April 1, 2014.
http://www.nhl.nl/nhl/7448/miwb/mstc/stcw-and-simulators.html
Grech, Michelle R., Tim Horberry, and Andrew Smith. 2002. “Human Error in
Maritime Operations: Analyses of Accident Reports Using the Leximancer Tool.”
Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46 (19):
1718–21. Accessed March 8, 2013.
https://www.leximancer.com/wiki/images/4/44/HFES2002_MGRECH.pdf
Gregory, Dik, and Paul Shanahan. 2010. The Human Element: A Guide to Human
Behaviour in the Shipping Industry. [London]: TSO for the Maritime and Coastguard
Agency. Accessed June 26, 2013.
http://www.dft.gov.uk/mca/the_human_element_a_guide_to_human_behaviour_in_the_shipping_industry.pdf
Helmreich, Robert L., and Ashleigh C. Merritt. 2000. “Safety and Error
Management: The Role of Crew Resource Management.” Aviation Resource
Management 1: 107–19. Accessed October 10, 2012.
http://homepage.psy.utexas.edu/homepage/group/helmreichlab/publications/pubfiles/pub250.pdf.
Lincoln, Mary. 2009. “Aligning ICT in Assessment with Teaching and Learning:
Enhancing Student Achievement in the Middle Years.” In Canberra, Australia.
http://www.acsa.edu.au/pages/images/Mary%20Lincoln%20-%20Alignment.pdf.