Certifying Navigational Skills: A Video-based Study on Assessments in Simulated Environments

C. Sellberg, O. Lindmark & M. Lundin
Chalmers University of Technology, Gothenburg, Sweden
University of Gothenburg, Gothenburg, Sweden

http://www.transnav.eu
the International Journal on Marine Navigation and Safety of Sea Transportation
Volume 13, Number 4, December 2019
DOI: 10.12716/1001.13.04.23

ABSTRACT: In Maritime Education and Training (MET), where students are trained for professions with high standards of safety, the use of simulators is taken to provide opportunities for safe and cost-effective training. Although the use of simulators for training and certifying technical proficiency and so-called non-technical skills is well established and regulated by international standards, previous research suggests that simulator-based assessment has been poorly implemented in the MET system. Now the challenge is to contribute with knowledge about how to conduct consistent, unbiased, and transparent assessments of navigational skills and competencies. However, in current research it is not evident how training of non-technical skills in simulated environments should be assessed. The aim of this study is to explore the pedagogical challenges instructors face when assessing students' navigational skills and competencies in a simulated environment. The study is based on video-recorded data from the certification part of a navigation course for second-year master mariner students. A situated approach to cognition and learning is employed to analyze the co-construction of assessment in the simulated exercises by means of instructors' questions and students' answers. Results reveal an assessment practice where the students are still developing their navigational skills with instructional support from examiners whilst being certified on using Radar equipment in accordance with COLREG.

1 INTRODUCTION

In this study we explore the instructional challenges examiners in Maritime Education and Training (MET) face when assessing students' navigational competencies and skills in a simulated environment. In MET, the international convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW) provides regulations for performance-based competency tests, highlighting that such tests should be performed in a simulator (Regulation I/12). In navigation courses, the STCW code emphasizes both technical proficiency and so-called non-technical skills as the minimum standard for certification as master mariners. The term non-technical skills has been defined by Flin, O'Connor and Crichton (2008) as cognitive, social and personal skills that contribute to safe and efficient task performance in safety-critical domains. The learning objectives concerning non-technical skills to be assessed through simulation-based tests are: content knowledge, application and intent of the regulations for preventing collisions at sea (COLREG), in addition to skills in team and resource management for contributing to a safe watch. Such skills include, for example, the abilities to understand orders and to use the appropriate internal communication and alarm systems, i.e. clear and concise communication at all times and orders "in a seamanlike manner".

However, in a recent overview of the field, Sellberg (2017) found that several aspects of the
current training and assessment system stand out as alarming. For example, Emad and Roth (2008) conclude that MET fails to achieve its learning objectives. Rather, MET has actually changed the learning objectives to help students pass competence tests in accordance with STCW. Ghosh, Bowles, Ranmuthugala and Brooks (2014) argue that this change in learning objectives has led to the use of assessment methods that fail to develop the professional skills that would enable seafarer students to put their competence from MET to use in workplace contexts on board ships. Moreover, simulator-based competence tests in MET are claimed to be lacking in validity, reliability and security (Gekara, Bloor & Sampson, 2011; Sampson, Gekara & Bloor, 2011). The argument is that the current MET system fails to train students' so-called higher cognitive skills (e.g. comprehension, application, analysis, synthesis and evaluation), which are highlighted in STCW. What Gekara et al. (2011) identify is thus a focus on aspects such as maintaining vessel course and speed, safe distance from other vessels and the required draft. Furthermore, the scenarios in the competence test are argued to be very similar to the scenarios carried out during training. Based on such results, Gekara et al. (2011) conclude that the current MET system favors "examination coaching" and "rote learning" rather than high-quality training or the effective evaluation of "essential knowledge and skills" (p. 98). Now the challenge is, as argued by Øvergård, Nazir and Solberg (2017), to contribute with knowledge about how to conduct consistent, unbiased and transparent assessments of navigational skills and competencies. However, in current research it is not yet evident how training of non-technical skills in simulated environments could or should be assessed (Conceição et al., 2017).
Against this background, the aim of this study is to explore the pedagogical challenges instructors face when assessing master mariner students' navigational competencies and skills in a simulated environment. Drawing on a situated approach to cognition and learning, which implies analyzing interactional details by means of video data, gives us the opportunity to explore how assessment practices unfold during simulations (Heath, Hindmarsh & Luff, 2010). Specifically, by scrutinizing competency tests in the simulator environment in their own right, avoiding theoretical distinctions of technical versus non-technical skills in the analytical process, the aim is to deliver an adequate explication of existing assessment practices in simulator environments. The study is based on video data collected in a navigation course for second-year master mariner students, and captures 30 students being certified on using Radar and ARPA (Automatic Radar Plotting Aid) equipment in a bridge operation simulator.
2 LITERATURE BACKGROUND

In safety-critical domains, such as aviation, medicine and the offshore industry, different models for rating non-technical skills in simulated environments have been developed to make sure that each team member receives as "fair and objective" an assessment as possible (Flin et al., 2003, p. 109). The basic premises of such models are twofold: 1) only observable behavior can be assessed, and 2) the assessment system should have a rating system that indicates acceptable and unacceptable behavior. In the maritime domain, which is in focus in this study, a similar model has been developed, focusing on rating the non-technical skills of naval cadets.
Table 1. Behavioral marker system for rating non-technical skills developed by Conceição et al. (2017).
_______________________________________________
Skill                Behavioral marker
_______________________________________________
Leadership           Takes the initiative
                     Sets intentions and goals
                     Establishes and controls standards
Situation awareness  Monitors and reports changes of situations
                     Collects external information
                     Identifies potential danger or problems
Communication        Shares information
                     Keeps a continuous, clear and effective flow of information
                     Promotes a constructive environment for communications
Teamwork             Considers all the elements of the team
                     Coordinates the tasks of the team
                     Assesses the capabilities and corrects procedures
Decision making      Establishes alternative lines of action
                     Assesses and verifies consequences of decisions and actions
                     Considers and shares with others risks of different lines of action
_______________________________________________
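The premise that only observable behavior can be assessed lends itself to a simple checklist representation. The following is our own minimal sketch, not part of Conceição et al.'s instrument: the `MARKERS` structure and `summarize` function are illustrative names, and real marker systems use trained raters and rating scales rather than a mechanical tally.

```python
# Illustrative only: the markers of Table 1 as a checklist. An examiner
# records which behaviors were actually observed, and each skill is
# summarized as the fraction of its markers that were observed.
MARKERS = {
    "Leadership": [
        "Takes the initiative",
        "Sets intentions and goals",
        "Establishes and controls standards",
    ],
    "Situation awareness": [
        "Monitors and reports changes of situations",
        "Collects external information",
        "Identifies potential danger or problems",
    ],
    "Communication": [
        "Shares information",
        "Keeps a continuous, clear and effective flow of information",
        "Promotes a constructive environment for communications",
    ],
    "Teamwork": [
        "Considers all the elements of the team",
        "Coordinates the tasks of the team",
        "Assesses the capabilities and corrects procedures",
    ],
    "Decision making": [
        "Establishes alternative lines of action",
        "Assesses and verifies consequences of decisions and actions",
        "Considers and shares with others risks of different lines of action",
    ],
}

def summarize(observed):
    """Return, per skill, the share of its behavioral markers observed."""
    return {
        skill: sum(marker in observed for marker in markers) / len(markers)
        for skill, markers in MARKERS.items()
    }
```

Even in this toy form, the sketch makes the model's two premises concrete: only listed, observable behaviors enter the rating, and the summary separates acceptable from unacceptable performance by how many markers were seen.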
Models in these domains traditionally draw on a classic cognitivist view where technical and non-technical skills are seen as different and separable sets of competencies (Flin et al., 2008). However, in the assessment model by Conceição et al. (2017) in Table 1 above, technical skills are not included, as technical skills are "already objectively evaluated through the compliance of procedures and the effectiveness of the decisions and actions" (p. 257). In contrast, in studies adopting a situated approach to cognition and learning, technical and non-technical skills are viewed as inherently intertwined and difficult, if not impossible, to separate in the navigational work trained and assessed in simulators (see e.g. Hontvedt, 2015; Sellberg & Lundin, 2018). For example, Sellberg and Lundin (2018) found that episodes which at first glance seemed to consist of quite technical instructions on the use of radar technologies and on keeping a safe distance to other vessels, when analyzed in detail, entailed instructions on so-called non-technical skills such as situation awareness and decision making. The analytical focus on interactional details is characteristic of the situated approach, drawing on the method of systematic video analysis of naturally occurring learning practices to render knowledge and competencies observable in work practices (Stahl, 2005). Shifting the perspective from measuring the development of various skills, which is the prime focus of cognitive perspectives, to analyzing the details of training for work practice puts emphasis on simulator-based training as an interactional achievement, highlighting the role of the instructors' continuous assessments and instructional strategies in the simulator environment in supporting student learning towards a profession (Hontvedt, 2015; Sellberg & Lundin, 2017).
In research on simulator-based assessments in aviation, assessment models with behavioral markers for rating non-technical skills have proven to be difficult for examiners to use. Several studies have revealed large disagreements and discrepancies in assessment outcomes between flight examiners (e.g. Roth & Mavin, 2015; Weber et al., 2013; Roth, 2015). For example, Weber et al. (2013) showed that examiners apply the same or similar reasons to arrive at different assessments, or use different reasons to arrive at the same assessment. Hence, inter-rater reliability between examiners tends to be moderate to low. When interviewing flight examiners on how they conduct assessment, Mavin and Roth (2014) found that cockpit performance was discussed as holistic events rather than separable skills. Moreover, through observation of flight examinations, interviews and the use of think-aloud protocols during examinations, Roth (2015) found that flight examiners based their assessments of non-technical skills, such as situation awareness and decision making, on a large number of observations put together into a coherent storyline, even when using rating scales. Hence, before uncritically putting assessment models into use in MET, there is a need for empirical studies of current assessment practices to identify how assessments are interactionally achieved and thus how they can be developed.
3 METHOD AND DATA

The methodological approach in this study is guided by Heath et al.'s (2010) principles for using video in research. Following these principles, our aim is to explore human–technology interactions "in the wild," in this case naturally occurring examinations in the simulator. Moreover, these principles put emphasis on the relationship between temporal, material and social aspects of activities. This makes video-recorded data an important source for analysis, since video creates stable records of the verbal, visual and material practices under study, enabling detailed and collaborative analysis (Heath et al., 2010).
Video-recorded data of (training and) examinations in one navigation course for second-year master mariners at a Swedish university was collected. While the training sessions in the bridge operation simulator have been analyzed in prior studies (e.g. Sellberg & Lundin, 2017; Sellberg & Lundin, 2018), this study draws on video data from simulator-based competence tests at the end of the course. During the competence tests, students are being certified on using Radar and ARPA equipment in different traffic and weather conditions, which is one of the learning objectives. Another learning objective outlined in the syllabus is the ability to interpret and apply COLREG in various situations. In the empirical data, the students are performing a simulated crossing of the Dover Strait Traffic Separation Scheme (TSS). In TSS lanes, the crossing should be done at as close to a right angle as practicable, according to COLREG. There are two main reasons for this: first, to show the intention to cross the lane to surrounding vessels in the strait; second, to minimize the amount of time spent crossing a TSS lane. A safe and effective crossing means adjusting the ship's speed and/or course in accordance with the situation. How to make a safe and effective crossing is thus not straightforward but a judgement to be carried out by the student. For example, it can be considered effective to slow down if this is done in order to avoid making adjustments to one's own ship's course. In another situation, a slight change of course can be preferred over a change in speed. Rather, in accordance with the practices of "good seamanship" (Taylor, 1998), the overall goal is to maintain the traffic flow in the TSS.
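The rationale for the right-angle rule can be made concrete with elementary geometry: the track length through a lane, and hence the time spent inside it, grows as the heading deviates from the perpendicular. The following calculation is purely illustrative; the lane width and speed are made-up numbers, not values from the examined scenario.

```python
import math

def time_in_lane(width_nm, speed_kn, deviation_deg):
    """Hours spent crossing a lane of width `width_nm` (nautical miles)
    at `speed_kn` knots, when the track deviates from the perpendicular
    by `deviation_deg` degrees."""
    # The track through the lane has length width / cos(deviation),
    # so any deviation from a right-angle crossing lengthens the exposure.
    return width_nm / (speed_kn * math.cos(math.radians(deviation_deg)))

# Illustrative numbers only: a 5 NM-wide lane crossed at 12 knots.
perpendicular = time_in_lane(5, 12, 0)   # shortest possible crossing
skewed = time_in_lane(5, 12, 45)         # 45 degrees off the perpendicular
assert skewed > perpendicular
```

The same relation shows why slowing down is a genuine trade-off: reducing speed lengthens the time in the lane just as a skewed heading does, which is why the choice between adjusting course and adjusting speed is a situated judgement rather than a rule to apply mechanically.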
During the competence tests in the simulator, one or two examiners, who were also instructors during the course, usually monitor the students from an adjacent instructor's room. In the instructor's room, several computer screens display different aspects of the activities on the five different bridges: instrument settings, video surveillance of the students' work on the bridge, as well as how the students view the marine environment through visual lookout. The examiners also have an overall view of the scenario through a screen showing the actions of each vessel from a bird's-eye perspective. Performance during the test is assessed based on several instances: 1) observable actions taken during the scenario, 2) interviews with students on the bridge, and 3) observations. For this purpose, two different assessment sheets are used, one for assessing the setting of different instruments, and one for assessing the students' understanding of the use of instruments and the traffic situation at hand. At the end of the course, the students are also examined by means of a theoretical, written examination on COLREG. However, the theoretical tests have been left out of this study.
Table 2. An overview of the video-recorded examinations.
_______________________________________________
Day    Student  Examiner(s)  No of     No of  No of
       group                 students  pass   fail
_______________________________________________
Day 1  Group 1  n=1          n=5       n=4    n=1
Day 1  Group 2  n=1          n=5       n=5    n=0
Day 2  Group 3  n=1          n=5       n=5    n=0
Day 2  Group 4  n=1          n=5       n=3    n=2
Day 3  Group 5  n=2          n=5       n=4    n=1
Day 3  Group 6  n=2          n=5       n=1    n=4
_______________________________________________
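The totals in Table 2 can be tallied in a few lines; this is a plain arithmetic check of the table against the 30-student total reported in the text, nothing more:

```python
# Per-group (pass, fail) counts transcribed from Table 2, Groups 1-6.
groups = [(4, 1), (5, 0), (5, 0), (3, 2), (4, 1), (1, 4)]

passed = sum(p for p, f in groups)  # 22 students passed
failed = sum(f for p, f in groups)  # 8 students failed
total = passed + failed             # 30 students in all, matching the text
```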
Three different sessions of competence tests in the simulator were recorded, in all 30 students (see Table 2). In order to capture the activities that take place during the competence tests, wall-mounted GoPro cameras were used on each of the five bridges in the simulator, and a fixed camera was placed in the instructor's room to capture the examiners' use of the monitoring technologies.
4 ANALYTIC RESULTS

In all, the video data analyzed in this study consist of 85 episodes of examiner–student interactions and 3 instances of student–student interactions during the competence tests. While most examiner–student interactions in the data corpus were routinely performed in order to fill out the assessment sheets, there were also instances of examiner–student interactions where the examiner visited some students
on their bridges and intervened in the midst of the test (n=8). These occasions were not part of the observation and interview parts of the competence test. In previous studies on simulator-based maritime training, the instructors' monitoring of students' ongoing activities makes it possible to attend to specific details of the students' conduct, which lays the ground for making assessments (Sellberg & Lundin, 2017; Sellberg & Lundin, 2018). Such assessments represent a continuous and ongoing process that is grounded in the instructors' abilities to recognize the fit or gap between the learning objectives and the students' activities in the simulator. In this way, these assessments lay the ground for supporting each student with corrections, clarifications or approvals of their performance. This instructional work is argued to foster the students into the maritime work practice and discourses of what constitutes "good seamanship", and is seen as essential for training the next generation of mariners (Sellberg & Lundin, 2017; Sellberg & Lundin, 2018). In this study, we found that this kind of instructive work, i.e. intervention on the bridge based on observations made in the instructor's room, also takes place during examinations, which calls for further analysis. The eight episodes identified in the data corpus were transcribed and analyzed with attention to the sequential organization of talk and bodily conduct to gain knowledge of how these interventions or visits enter into the examination practice in the simulators (cf. Heath et al., 2010). In the following text we will analyze two of the eight episodes where the instructors pay the student a visit on the bridge in the midst of the competence tests.

In the first episode, the examiner enters one of the bridges and confronts the student about the slow speed when approaching the point of beginning to cross the TSS lane:
Excerpt 1.
01. EXAMINER: Why do you drive so slowly?
There is no reason to
02. STUDENT: Nooo
03. EXAMINER: When you are about to cross
th-then… you’re trailing for a speed you
04. shouldn’t keep
05. ((3 seconds pause))
06. Right?
07. STUDENT: Yeah but I’ve trailed on the
speed I’m going right now but…
08. EXAMINER: Yes you have but should you
cross so slowly?
09. STUDENT: Yes I don’t really need to
10. EXAMINER: Need to? You shouldn’t! You
should pass as quickly as possible…
11. Well yes
In the first turn, the examiner topicalizes the issue for intervening in the midst of the examination, which is speed, or the lack thereof, by asking the student "why do you drive so slowly?". The instructor directly accounts for the relevancy of such a question: "There is no reason to" (line 01). The student responds with some uncertainty: "nooo". The student's response can be seen as inadequate, either as revealing a problem with the examiner's formulation of the question, or the student's lack of understanding of the "too slow" situation. The examiner tries to clarify the student's problem: "you're trailing for a speed you shouldn't keep" (lines 03-04), making explicit a wrong behavior in the situation. A 3-second pause follows, and with a "right?" (line 06) the instructor almost rushes the student to answer the question or to make some indication of understanding this incorrect behavior. In his response, the student accounts for using the trail function in relation to the current speed (line 07). This response is evidently taken as insufficient, as the examiner continues by again asking the student if he should cross so slowly (line 08). The examiner turns the student's account of the current (slow) speed into a question of what could and what should be done. In this manner the instructor identifies a lack in the student's understanding of the situation. The student thus seems to display an understanding of such a distinction: "Yes I don't really need to" (line 09), which is then treated as a correctable matter by the examiner in line 10. The sharp remark "you shouldn't!" and the following account of the reason why are delivered in a higher tone of voice, highlighting that the student's slow speed is inappropriate, as well as unsafe, in this particular situation. This episode ends as the instructor rushes out and the student is left alone again. After the examiner's intervention, the student increases his speed and performs a crossing that receives a passing grade.
In the second episode, the examiner enters the student's bridge at the same time as the student is frenetically pushing the steering gear to avoid a close-quarters situation with another vessel in the strait. In this episode the student provides an account directly as the examiner enters the bridge, showing an understanding of the situation and of the intervention of the examiner:
Excerpt 2.
12. STUDENT: Yeah… I didn’t see that bastard comin’
13. EXAMINER: No! Cause you don’t look ahead
14. What are you doing here? ((points to the right side of the radar
15. display))
16. You should be there… ((points towards the center of the radar
17. display))
18. ((2 second pause))
19. … in the center
20. STUDENT: ((sighs)) yes I really messed up there
21. I’m tryin’ to salvage the situation right now
22. EXAMINER: yeah that’s that
23. STUDENT: ((gasping))
The student anticipates the critique to be delivered by the examiner, saying that he "didn't see that bastard comin'" (line 12), and reveals his understanding of the problem the examiner is about to address. The evaluation delivered by the examiner suggests a reason why the student "didn't see that bastard comin'", which is failing to look ahead (line 13). Hence, the negative evaluation concerns an issue closely tied to gaining and maintaining situation awareness (Conceição et al., 2017). The examiner then uses the starboard radar display to point out the incorrect position, followed by the preferred position of the student's vessel (lines 14-19). The student responds to this with a deep sigh, and comments that he "messed up" and is "tryin' to salvage the situation" (lines 20-21), and thus agrees with the examiner's problem definition of not looking ahead in line 13 and the suggested correct alternative in lines 16-19. The examiner closes the intervention (line 22) and, as he leaves the room, the student gasps, revealing his frustration with the current situation (line 23). The student is however able to correct the situation, continues the crossing according to protocol and passes the exam.
However, it is worth noting that the students in both episodes passed the examination. Of the eight students who were corrected on their performance after the examiner had identified some kind of trouble with their TSS crossing during the competence test, six passed the examination and two failed.
5 CONCLUSION AND DISCUSSION

In this study we have explored authentic instances of examiner–student interactions during simulator-based competence tests. While, in previous research, the instructors' continuous and ongoing process of monitoring, assessing and correcting students during training is argued to be essential for fostering students into the maritime work practice, findings from the current study show that this kind of instructive work also takes place during individual certifications in the simulator environment. The examiners' interventions during the competence tests in the simulator found in the data corpus were organized as brief corrections with clear directives for improvement. In that sense, they differ from instructions during training, which are oriented towards developing students' professional reasoning (e.g. Sellberg & Lundin, 2017; Sellberg & Lundin, 2018). However, the students' needing and receiving instructional support during competence tests in the simulator suggests that students are still developing their professional competence at this point in training. Hence, there are reasons to consider the practice of assessing students who are only halfway through their education for the purpose of professional certification.
Although this analysis is based on a small sample of video-recorded data, the data corpus offers a complex and interesting starting point for analyzing the existing assessment practices in MET. In these data, preliminary findings show that corrections during competence tests are regularly made, but not all students are provided with this kind of support. In regard to this, the findings in the empirical data raise critical and important questions about what it means to produce as "fair and objective" an assessment as possible (Flin et al., 2003, p. 109). While the examiners work systematically as instructors to support students' learning throughout the course, the overall goal of the competence test is to conduct consistent, unbiased, and transparent assessments (Øvergård et al., 2017). Development of assessment tools that support the examiners' work is one possible part of the solution, as proposed by Øvergård et al. (2017). Another part of the solution is to develop the examiners' knowledge of how to conduct valid and reliable assessments of performance in the simulator. In regard to this challenge, there are reasons to be careful before putting different assessment models for rating non-technical skills to use in MET (cf. Conceição et al., 2017). As pointed out in the background, results from aviation reveal a number of problems when using these models as a ground for making assessments (e.g. Mavin & Roth, 2014). Hence, there is a need for future studies that analyze the current assessment practices to identify areas of improvement, and to develop a practice where simulator-based assessments of competence ensure the validity and reliability of MET certificates.
ACKNOWLEDGEMENTS

This research is funded by FORTE (Swedish Research Council for Health, Working Life and Welfare), project no: 2018-01198.
REFERENCES

Conceição, V. P., Basso, J. C., Lopes, C. F., & Dahlman, J. (2017). Development of a behavioural marker system for rating cadets' non-technical skills. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation. Doi: 10.12716/1001.11.02.07
Emad, G., & Roth, W.-M. (2008). Contradictions in the practices of training for and assessment of competency: A case study from the maritime domain. Education + Training. Doi: 10.1108/00400910810874026
Flin, R., O'Connor, P., & Crichton, M. (2008). Safety at the sharp end: A guide to non-technical skills. Aldershot, England: Ashgate.
Flin, R., Martin, L., Goeters, K.-M., Hörmann, H.-J., Amalberti, R., Valot, C. & Nijhuis, H. (2003). Development of the NOTECHS (non-technical skills) system for assessing pilots' CRM skills. Human Factors and Aerospace Safety, 3(2), 97-119.
Gekara, V. O., Bloor, M. & Sampson, H. (2011). Computer-based assessment in safety-critical industries: The case of shipping. Journal of Vocational Education & Training. Doi: 10.1080/13636820.2010.536850
Ghosh, S., Bowles, M., Ranmuthugala, D. & Brooks, B. (2014). Reviewing seafarer assessment methods to determine the need for authentic assessment. Australian Journal of Maritime & Ocean Affairs. Doi: 10.1080/18366503.2014.888133
Heath, C., Hindmarsh, J. & Luff, P. (2010). Video in qualitative research: Analysing social interaction in everyday life. London: SAGE Publications Ltd.
Hontvedt, M. (2015). Professional vision in simulated environments: Examining professional maritime pilots' performance of work tasks in a full-mission ship simulator. Learning, Culture and Social Interaction. Doi: 10.1016/j.lcsi.2015.07.003
Mavin, T., & Roth, W.-M. (2014). A holistic view of cockpit performance: An analysis of the assessment discourse of flight examiners. International Journal of Aviation Psychology. Doi: 10.1080/10508414.2014.918434
Roth, W.-M. (2015). Flight examiners' methods of ascertaining pilot proficiency. The International Journal of Aviation Psychology. Doi: 10.1080/10508414.2015.1162642
Roth, W.-M. & Mavin, T. (2015). Peer assessment of aviation performance: Inconsistent for good reasons. Cognitive Science. Doi: 10.1111/cogs.12152
Sampson, H., Gekara, V. & Bloor, M. (2011). Watertight or sinking? A consideration of the standards of the contemporary assessment practices underpinning seafarer licence examinations and their implications for employers. Maritime Policy & Management. Doi: 10.1080/03088839.2010.533713
Sellberg, C. (2017). Simulators in bridge operation training and assessment: A systematic review and qualitative synthesis. WMU Journal of Maritime Affairs. Doi: 10.1007/s13437-016-0114-8
Sellberg, C. & Lundin, M. (2017). Demonstrating professional intersubjectivity: The instructor's work in simulator-based learning environments. Learning, Culture and Social Interaction. Doi: 10.1016/j.lcsi.2017.02.003
Sellberg, C., & Lundin, M. (2018). Tasks and instructions on the simulated bridge: Discourses of temporality in maritime training. Discourse Studies. Doi: 10.1177/1461445617734956
Stahl, G. (2005). Group cognition in computer-assisted collaborative learning. Journal of Computer Assisted Learning. Doi: 10.1111/j.1365-2729.2005.00115.x
Taylor, D. H. (1998). Rules and regulations in maritime collision avoidance: New directions for bridge team training. Journal of Navigation, 51(1), 67-72.
Weber, D., Roth, W.-M., Mavin, T. & Dekker, S. (2013). Should we pursue interrater reliability or diversity? An empirical study of pilot performance assessment. Aviation in Focus, 4(2), 34-58.
Øvergård, K. I., Nazir, S., & Solberg, A. S. (2017). Towards automated performance assessment for maritime navigation. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation. Doi: 10.12716/1001.11.02.03