1 INTRODUCTION
The maritime industry constantly performs challenging operations with much potential for human error. These operations require a delicate interplay between human and technological factors, organized in a sociotechnical system, to achieve complex goals: e.g. to successfully transport hazardous cargo in constrained and shallow waters alongside heavy traffic. Sociotechnical systems are characterized by high numbers of dynamic and interdependent tasks that are necessary to successfully perform a wide range of complex operations. All components of these systems must work separately and in mutual dependency with each other. In the maritime domain, technical errors are less prevalent than human errors, which dictates the emphasis that must be placed on training and assessment of operators' error performance.
Human error happens all the time and is an inevitable part of human nature. Within the maritime industry, however, human errors generate consequences so critical and severe that they are worth spending time and resources to prevent and mitigate (Kim and Nazir 2016). Such consequences include costly damage to equipment, loss of lives, severe injuries, and environmental pollution.
Human error is involved in 80-85% of maritime accidents (Hanzu-Pazara et al. 2008). Consequently, considerable resources are spent to improve human performance and reduce human error.
Error and human reliability have been researched from multiple perspectives (i.e. preventive or reactive) and at multiple levels (i.e. individual, team, and all the way to the organizational or societal level). This is necessary considering that human performance is influenced at all levels of analysis, from individual cognitive patterns to organizational structure.
Human Error in Pilotage Operations
J. Ernstsen & S. Nazir
University College of Southeast Norway, Vestfold, Norway
ABSTRACT: Pilotage operations require close interaction between humans and machines. This complex sociotechnical system is necessary to safely and efficiently maneuver a vessel in constrained waters. A sociotechnical system consists of interdependent human and technical variables that must continuously work together to be successful. This complexity is prone to errors, and statistics show that most of these errors in the maritime domain are due to human components in the system (80-85%). This explains the attention on research to reduce human errors. The current study deployed the systematic human error reduction and prediction approach (SHERPA) to shed light on error types and error remedies apparent in pilotage operations. Data was collected using interviews and observation. Hierarchical task analysis was performed and 55 tasks were analyzed using SHERPA. Findings suggest that communication and action omission errors are the most prominent human errors in pilotage operations. Practical and theoretical implications of the results are discussed.
http://www.transnav.eu
the International Journal
on Marine Navigation
and Safety of Sea Transportation
Volume 12
Number 1
March 2018
DOI:10.12716/1001.12.01.05
Ultimately, it requires strenuous effort to pinpoint when and where errors are likely to happen. The types of errors and the probability of human error occurring can be found through careful analysis of tasks and system requirements. This yields designers and trainers information about which specific tasks and system characteristics need fortification. This proactive approach to human error is valuable for the maritime industry (with much competition and scarce resources) considering the cost of consequences, despite the effort needed to implement measures against human errors.
Experts and novices are both prone to errors. Experience is an essential part of expertise, and the road to becoming an expert involves developing mental schemas. The schemas help the operator by reducing the time taken to recognize situations and to make decisions and corrective actions accordingly (Nazir et al. 2013). Experts have sophisticated ways of subconsciously knowing what to do, often characterized by experts saying that "they just know". Their schemas allow them to understand situations triggered by small, subtle cues within the environment. As opposed to experts, novices have mental schemas that are less effective, thus relying on more attention and cognitive resources to perceive, understand, and predict the same situation. This difference manifests in the antecedents related to the errors made in complicated situations: where experts can perceive subtle environmental cues to understand the situation while monitoring, novices must pay closer attention to catch the same cues. Experts, who use less attention and rely on mental patterns, can be misguided when perceiving or interpreting environmental cues, consequently making a poor decision and action. Novices are less likely to make the same mistake, as they devote more resources to the environment and interpret the cues more consciously, but this makes novices more prone to overload, which in turn makes them oblivious to important environmental cues about the situation. In complex maritime operations, understanding these characteristics is of paramount importance to effectively implement measures that reduce the probability and mitigate the consequences of human errors.
Pilotage is a renowned, complicated operation (Sharma and Nazir 2017). The dynamic nature of pilotage operations, i.e. that the safest option often is to keep going, puts pressure on continuously maintaining situation awareness. Loss of it, for instance through the mechanisms depicted above, may result in an accident. There are many examples of accidents during pilotage operations, e.g. the Godafoss, Federal Kivalina and Crete Cement accidents (Accident Investigation Board 2010a; Accident Investigation Board 2010b; Accident Investigation Board 2012). To assess human reliability in an operation, one must understand the operation itself. Thus, a depiction of a generic pilotage operation follows.
Pilotage operations can be broken down into eight main tasks: order and get the pilot aboard, develop group relationship, install the pilot, assess environment and weather, decide route, supervise navigation, coordinate tugboats, and berthing (Ernstsen et al. In Press). Developing group relationship and assessing environment and weather are non-sequential, continuous tasks, while the other tasks are usually performed in the sequence shown in Figure 1 below.
Figure 1. Timeline of tasks in pilotage operation
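The eight-task breakdown above lends itself to a simple machine-readable representation. The sketch below is illustrative only: the task names follow the list above, but the data structure and the `continuous` flag are assumptions introduced here, not part of the original analysis.

```python
# Illustrative sketch of the pilotage task breakdown described above.
# The "continuous" flag marks the two non-sequential tasks that run
# in parallel throughout the operation (cf. Figure 1).
PILOTAGE_TASKS = [
    {"task": "Order and get the pilot aboard", "continuous": False},
    {"task": "Develop group relationship",     "continuous": True},
    {"task": "Install the pilot",              "continuous": False},
    {"task": "Assess environment and weather", "continuous": True},
    {"task": "Decide route",                   "continuous": False},
    {"task": "Supervise navigation",           "continuous": False},
    {"task": "Coordinate tugboats",            "continuous": False},
    {"task": "Berthing",                       "continuous": False},
]

# Sequential tasks are performed in the order listed; continuous
# tasks are maintained alongside them for the whole operation.
sequence = [t["task"] for t in PILOTAGE_TASKS if not t["continuous"]]
parallel = [t["task"] for t in PILOTAGE_TASKS if t["continuous"]]
```

Such a structure makes the later step of analyzing each bottom-level task for error modes straightforward to organize.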
Pilotage operations are dynamic with many interdependent tasks. They also involve much subtle and non-transparent feedback from the system, making it more challenging and mentally intensive to perceive, assess, understand, and decide the proper course of action. For instance, a radar with imprecise settings may detect noise that could be either waves or fishing vessels to an untrained eye. Thus, operators in pilotage operations are heavily dependent on individual skills and knowledge of the operation, as well as efficient collaboration, to successfully bring the vessel to berth or out of the port. This complexity gives much potential for human errors, which emphasizes the need to understand the nature of such errors.
Human error research vastly increased after complex accidents in the 70s and 80s, e.g. Three Mile Island and Chernobyl. The focus changed from technical malfunctions to acknowledging the role of human factors. After this, accident investigations began to look for errors caused by human components, whether found at the sharp or the blunt end. Error research became popular, and as a consequence, many theories were developed according to how error is conceptually applied, e.g. Rasmussen (1983); Reason (1990); Sanders and Moray (1991); Wickens et al. (2015); Woods et al. (1994).
Hollnagel (2000) attempted a novel view of error, looking at errors as contextual factors influencing (normal) performance variability, and argued that one needs to understand how these factors influence behavior in order to understand how situational changes impact performance variability (as opposed to coining it "human error"). As mentioned, pilotage operations are complex and dynamic, with a multitude of interdependent tasks. This dictates a need to understand which environmental circumstances affect human reliability, to allow pinpointed training and design alterations.
Human reliability is the positive orientation of human error. Human reliability assessment (HRA) is a broad name for ways to find and predict human errors in a system. The increase in human error research has resulted in a high number of human reliability assessment methods, and most can be divided into quantitative or qualitative approaches to understanding and predicting human error. For instance,
Bell and Holroyd (2009) found 72 tools related to human reliability. Please see Aalipour et al. (2016) for a short review of more HRA examples. The basic functions of most HRA methods are: (1) to identify human errors associated with the operation, (2) to predict the likelihood of their occurrence, and (3), if necessary, to reduce that likelihood (Park and Jung 1996). Quantitative approaches to human error are mostly concerned with human error probabilities, which, according to Bell and Holroyd (2009), are defined as depicted in Equation 1:
P_HE = N_E / N_EO  (1)
where N_E is the number of errors and N_EO is the number of opportunities for errors. However, finding data to calculate error probability is challenging and often burdened by much subjectivity. A countermeasure is to first thoroughly understand which error types are prone to occur in the operation under analysis before attempting to calculate error probabilities.
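As a minimal illustration of Equation 1, the human error probability can be computed directly from counts of errors and error opportunities. The function name and the example figures below are hypothetical, chosen only to show the arithmetic:

```python
def human_error_probability(n_errors, n_opportunities):
    """Equation 1: P_HE = N_E / N_EO."""
    if n_opportunities <= 0:
        raise ValueError("number of error opportunities must be positive")
    return n_errors / n_opportunities

# Hypothetical example: 3 observed errors over 200 task opportunities.
p_he = human_error_probability(3, 200)  # 0.015
```

In practice, as noted above, obtaining reliable values for N_E and N_EO is the hard part; the computation itself is trivial.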
The complexity of HRA increases as the operation becomes more intertwined in a sociotechnical framework, as there are more interdependent and dynamic variables influencing human reliability. This makes it even more difficult to thoroughly understand which error types exist. To find them in a complex system, however, SHERPA is a suitable human reliability assessment method for maritime operations.
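SHERPA works by classifying each bottom-level task from a hierarchical task analysis against a behaviour taxonomy (action, checking, retrieval, information communication, selection) and recording plausible error modes, consequences, and remedies. A minimal record for one analyzed task could look like the sketch below; the field names, the concrete task, and the remedy text are illustrative assumptions, not the published SHERPA tables:

```python
# Hypothetical SHERPA-style record for a single analyzed task.
# The behaviour classes follow the standard SHERPA taxonomy; the
# content of this particular entry is invented for illustration.
sherpa_entry = {
    "task": "Communicate intended route to bridge team",
    "behaviour": "information communication",
    "error_mode": "information not communicated",
    "consequence": "bridge team unaware of planned maneuver",
    "probability": "medium",
    "criticality": "high",
    "remedy": "closed-loop communication (read-back/hear-back)",
}

def is_critical(entry):
    # Flag entries whose consequences the analyst rated as high criticality.
    return entry["criticality"] == "high"
```

Collecting one such record per task makes it easy to tally which error modes dominate an operation, which is essentially what the analysis in this paper reports.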
The main contribution of the current paper is to perform SHERPA to identify error types for the eight tasks associated with a pilotage operation, as mentioned above. A SHERPA can shed novel light on complex operations through a consistent analysis of tasks, error types and the potential consequences associated with tasks. The goal is to provide information about human errors in pilotage operations.
2 METHOD
Data was collected using interviews and observation. The interviews were unstructured and open-ended. The interview was designed to gather information regarding the tasks and goals associated with a pilotage operation and the cognitive demands on the pilots and captains, respectively. The interviewees were presented with a scenario of a 30,000 deadweight oil tanker with the goal of berthing at the Slagentangen oil refinery in Norway. It is a standard scenario that most captains and pilots have experienced or at least can relate to. A definition of a medium-sized accident was introduced midway through the scenario talk-through. The participants were asked to rank the accident at level 4 on a 10-level scale, with level 10 being the accident with the highest consequences, e.g. explosion and loss of life or severe casualties.
All interviews began with a review of informed consent to participate and to audio record the interview. The average length was 1 hour and 15 minutes; the longest was 1 hour and 37 minutes and the shortest 1 hour and 5 minutes. When data saturation was achieved, a shift occurred towards validation of the data to ensure a valid representation of the piloting operation. The same interviewer was used to ensure consistency. The data collection process and storage were approved by the Norwegian Centre for Research Data.
Observation was used to collect data and to validate and verify findings following the task analysis. The observation scenario was to follow a pilot on a car cargo vessel leaving Oslo Port bound for Hvasser pilot station. The researcher was attentive to the occurrence of tasks that had been identified from the interview data. The observation was open, and the researcher could ask questions throughout the voyage to ensure a consistent and elaborate understanding of the operation.
2.1 Sampling and response rate
The snowball approach was used to gather interviewees (i.e. asking interviewees to suggest colleagues/friends fitting the interviewee criteria). Eight interviewees with piloting expertise and four interviewees with captain expertise contributed to the analysis. Becoming a captain or pilot requires much experience; thus all applicable interviewees were deemed subject matter experts considering their work positions. The interviews shifted slightly during the research, consistent with the iterative development of much qualitative research: the development and validation of the task analysis and, further, the validation of SHERPA. Most interviews were conducted in person; however, due to geographical separation, three interviews were done using FaceTime®.
2.2 Structure and analysis of results
Interview data were transcribed verbatim. More efficient transcription methods were used as data saturation approached, e.g. transcription of only the relevant sections of the dataset. Tasks and functions for the task analysis were identified with both a grounded approach (i.e. bottom-up) and a theoretical/practical evaluation (top-down), where the information is evaluated by subject matter experts.
2.2.1 Content analysis and task analysis
Content analysis is a common way to analyze textual data, where the basic principle is to code data into categories. Categories can be grounded directly in the text itself or related to established theories. The interview transcriptions were coded to converge the tasks and goals revealed in the interviews. The record was broken down to be analyzed with the purpose of identifying emerging categories within the dataset. This was used to structure and provide input to the task analysis. Content analysis is a powerful way to reduce confirmation bias when interpreting interview data. The information from the content analysis was used to structure the hierarchical task analysis.
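The coding step described above, i.e. assigning interview segments to emergent categories and letting category frequencies inform the structure of the task analysis, can be sketched as a simple tally. The category names and segment data below are hypothetical examples, not the study's actual codes:

```python
from collections import Counter

# Hypothetical coded interview segments: (segment_id, category) pairs.
coded_segments = [
    (1, "route planning"),
    (2, "communication with bridge team"),
    (3, "route planning"),
    (4, "tugboat coordination"),
    (5, "communication with bridge team"),
    (6, "route planning"),
]

# Category frequencies indicate which tasks recur across interviews
# and therefore feed into the hierarchical task analysis.
category_counts = Counter(cat for _, cat in coded_segments)
top_category, top_count = category_counts.most_common(1)[0]
```

Frequencies alone do not decide the hierarchy, of course; as the text notes, subject matter experts evaluated the categories top-down as well.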