Ship Recognition and Tracking System for Intelligent Ship Based on Deep Learning Framework

B. Liu, S.Z. Wang, Z.X. Xie, J.S. Zhao & M.F. Li
Merchant Marine College, Shanghai Maritime University, Shanghai, China

http://www.transnav.eu
the International Journal on Marine Navigation and Safety of Sea Transportation
Volume 13, Number 4, December 2019
DOI: 10.12716/1001.13.04.01

ABSTRACT: Automatically recognizing and tracking dynamic targets at sea is an important task for intelligent navigation, and a prerequisite and foundation for the realization of autonomous ships. Nowadays, radar is the typical perception system used to detect targets, but the radar echo cannot depict a target's shape and appearance, which limits the decision-making ability of ship collision avoidance. Therefore, a visual perception system based on camera video is very useful for further supporting the autonomous ship navigation system. However, ship recognition and tracking remains a challenging task in the navigational application field because of long-distance detection and the motion of the ship itself. An effective and stable approach is required to resolve this problem. In this paper, a novel ship recognition and tracking system is proposed using a deep learning framework. In this framework, a deep residual network and a cross-layer jump connection policy are employed to extract advanced ship features, which enhances classification accuracy and thus improves the performance of object recognition. Experimentally, the superiority of the proposed ship recognition and tracking system was confirmed by comparing it with state-of-the-art algorithms on a large number of ship video datasets.

1 INTRODUCTION

With the rapid development of information technology such as big data, artificial intelligence and deep learning, the shipbuilding industry is moving towards informatization and intelligence (Alexander 2011, Russell 2010, Lecun 2015). Intelligent ship visual perception is the premise and foundation of unmanned navigation (Zhang 2010). It can markedly reduce marine traffic accidents caused by human factors, optimize ship routes, reduce fuel consumption and the cost of ship operation, and improve the safety of ship navigation. Ship recognition and tracking is an indispensable part of intelligent ship visual perception. It can identify dangerous ship types, monitor the surrounding environment and support reasonable ship collision avoidance decisions, which is of great significance to the promotion and development of intelligent ships.

In recent years, in order to deal with the visual perception challenges of intelligent ships, research institutions and scholars have processed visual perception data in the context of intelligent ships (Johansson 1973). Ship detection, ship tracking and ship type recognition have been extensively studied. In ship detection, the emergence of visual mechanisms provides a good research direction for detecting surface targets in visible video sequences. Kim et al. proposed an adaptive focusing region-of-interest detection algorithm and achieved good detection results (Kim et al. 2015). Li accelerated class detection and recognition by sharing a convolutional neural network, which provided a new idea for water surface target detection (Li et al. 2016).
Some scholars mined video containing water targets according to a large-perspective learning mechanism, and generated the most likely set of water surface targets using the maximum likelihood probability method (T'Jampens et al. 2016, Zou et al. 2016). Other scholars use a multi-view method to extract multiple features of water targets (such as texture features, structural features, color features, etc.); sparse learning and multi-task learning are then used to fuse the features, eliminate false targets and retain the detected water targets (Albrecht 2011, Hong 2015, Bergamasco 2016).
In ship tracking, the traditional method is to abstract the tracked ship as a particle through the automatic identification system (AIS) and radar (Xiao et al. 2015). Bolme et al. applied the correlation filtering algorithm to the tracking field, using a single gray feature to represent the target for tracking (Bolme et al. 2010). In order to overcome the shortcomings of traditional ship tracking algorithms, Chen et al. proposed a ship moving position tracking algorithm based on support vector machine regression and game theory; a support vector machine was used to estimate the position of the ship to be tracked in order to improve the accuracy of ship position tracking (Chen et al. 2017). Chen et al. fused an instruction filter and a back-stepping method to construct a robust adaptive neural network tracking controller for ship course (Chen et al. 2016). In ship type recognition, the above goal is achieved by fusing sensor data such as the automatic identification system (AIS) and radar (Robards 2016, Shu 2017, Sang 2015). Jiang et al. proposed a ship type recognition method based on structural feature analysis, which can effectively extract high-resolution COSMO-SkyMed image features of bulk carriers, container ships and tankers (Jiang et al. 2014). Chen et al. took into account the computational complexity, recognition accuracy and the differences in features extracted by various algorithms in ship type recognition, and used a support vector machine algorithm to fuse the ship features extracted by the above operators (Chen et al. 2016). The methods mentioned above have achieved certain results for ship visual perception under specific conditions. However, in the era of intelligent ships, higher requirements have been put forward for ship detection, tracking and ship type recognition. It is necessary to detect and recognize small ship targets by using relevant information collection and sensing technology to determine potential collision risk and help the decision-making system of intelligent ships identify ship targets of interest.
With the increasing scale of maritime traffic and the increasing complexity and diversity of the surrounding environment of ships in voyage, the types and navigation states of nearby ships are at present mainly judged by the crew. This introduces subjective errors and cannot meet the basic requirements of intelligent ships. In this paper, a visual perception system for intelligent ships is constructed based on a deep learning framework. The shipborne camera can monitor the information around the ship and identify other ship types in real time. This paper improves the shallow network structure and multi-scale target prediction method of traditional deep learning, and introduces the idea of the residual network (He et al. 2015), which effectively overcomes the problems of gradient vanishing and gradient explosion and improves the ability of data feature learning. The network depth is increased by cross-layer connections, and advanced ship features are extracted for combined learning. On this basis, target region prediction and classification prediction are integrated into a single neural network model, so that the global information of the image is used for target recognition. Fast target detection and ship type recognition are realized with high accuracy.
2 INTELLIGENT VISUAL PERCEPTION SYSTEM FOR SHIPS

The rapid development of computer vision theory and technology (Moeslund et al. 2001) provides favorable technical support for the data visualization of intelligent ships. Intelligent ships perceive the surrounding environment and their own state through various sensors, and make decisions on the perceived environment so as to realize assisted sailing and active safety, and even autonomous sailing. Figure 1 shows the framework of the proposed intelligent ship visual perception system based on computer vision.
Rearanti‐collision
warning
Ultrasonic camera
Frontvessel
identification
Frontanti‐
collisionwarning
Anti‐collisionwarning
onbothsides
AIS
camera
camera
radar
Figure1. Schematic diagram of ship visual perception
system.
The intelligent ship visual perception system collects video image information during navigation through visual sensors installed around the ship, and processes it together with the automatic identification system (AIS) and radar (Merchant et al. 2012). Vessel tracking and recognition in the navigation area is the premise and foundation of intelligent ship visual perception. By installing cameras on both sides of the ship, the dynamic information of surrounding ships is monitored and tracked. Possible dangers can be detected, and the collision risk between a target ship and the own ship can be further determined, providing the crew with graded danger signals to help them make correct judgments. On this basis, the types of other ships in crossing and encounter situations are identified, and the heading and speed of the own ship are adjusted dynamically and in time to avoid the risk of collision and ensure the safety of navigation. As shown in Figure 2, the visual perception workflow of a ship is built around a shipborne camera, which senses the surrounding environment to obtain the traffic situation near the ship and to detect and identify the type of other ships.
Figure 2. Visual perception flow chart of the shipborne camera (port and starboard cameras feed the visual perception system, which collects ship target information and performs ship detection, ship tracking and ship type identification before outputting control information).
3 TARGET RECOGNITION AND TRACKING FRAMEWORK BASED ON DEEP LEARNING

In recent years, with the introduction of computer vision and deep learning algorithms into the field of target tracking and recognition, great breakthroughs in performance have gradually been made, which provides new ideas for research on the visual perception of intelligent ships. In the visual perception task for maritime traffic, ships in the video sequence must be detected quickly, efficiently and accurately through the shipborne camera, and their types identified in real time, to help intelligent ships judge collision risk more accurately and ensure safe navigation.
A traditional deep learning network framework mainly includes an input layer, hidden layers and an output layer (Xu et al. 2016). The number of network layers is relatively small, which cannot meet the basic requirements of intelligent ships. Considering the changes in ship imaging size, illumination and angle of view, the overlap of ship images, and human involvement in crossing and encounter situations, Figure 3 presents a network framework based on a deep learning structure for intelligent ship tracking and recognition.
Convol ution layer
BNlayer
Relulayer
Add
Residual connection
ęę
Convol ution layer Relulayer
Trainingmod el
Multi‐sca leprediction
ęę
Fully connect edlaye r
Matchingtargeting
ęę
Classificationidentification
Pu nishmen tmec hanism
Trackingrecognitionmodel
Result
output
Ship
image
Figure3.Targetrecognitionandtrackingframeworkbased
ondeeplearning.
3.1 Training Model

In order to ensure the recognition accuracy and tracking stability of the visual perception system, a residual structure is employed in the deep neural network model to ensure that the network structure is both deep and convergent. The input samples are convolved to extract the corresponding features, and combined learning is then carried out to obtain the feature map model of the object, which initializes the subsequent tracking and recognition model.
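The residual idea used here can be sketched in a few lines of Python (an illustrative sketch, not the authors' implementation; `layer` is a stand-in for the convolution + BN + ReLU stack):

```python
def residual_block(u, layer):
    """Cross-layer jump connection: the block learns the residual
    F(u) = H(u) - u, so its output is H(u) = F(u) + u."""
    return [f + x for f, x in zip(layer(u), u)]

# If the stacked layers output zeros, the block reduces to the identity
# mapping, which is what keeps very deep networks convergent:
out = residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
# out == [1.0, 2.0, 3.0]
```

Because each block only has to learn a correction on top of the identity, stacking many such blocks does not degrade the signal the way plain stacked layers do.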
The convolution layer is the core component of the neural network structure. The number of training parameters of the neural network is reduced by the sharing of receptive fields and weights. In convolutional networks, the neurons of a layer extract the local features at different locations of the previous layer's feature map to produce the next feature map. In order to effectively overcome the shortcomings of deep neural network training and accelerate the convergence of network training, Batch Normalization (BN) operations are added after each convolution layer to normalize the distribution of the input data to a mean of 0 and a variance of 1.
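As a minimal pure-Python sketch (not the authors' code; real BN also learns a per-channel scale and shift), the normalization step maps a batch of activations to zero mean and unit variance:

```python
import math

def batch_normalize(batch, eps=1e-5):
    """Normalize a batch of activations to mean 0 and variance 1,
    as done after each convolution layer; eps avoids division by zero."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

normalized = batch_normalize([2.0, 4.0, 6.0, 8.0])
# the result has mean ~0 and variance ~1
```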
On this basis, a cross-layer jump connection method is added. By using the residual function F(u) = H(u) - u, the layer-by-layer training of the deep neural network structure is changed into stage-by-stage training. The network structure is divided into several sub-segments, each containing a relatively small number of network layers and a part of the total learning deficit (total loss), which ultimately achieves a relatively small overall loss. For the ship training network, the mean square error is used as the loss function, composed of coordinate error, IOU error and classification error. The expression is as follows:
$$
\begin{aligned}
loss ={} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
{}+{} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
{}+{} & \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2
 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\
{}+{} & \sum_{i=0}^{S^2} 1_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
\tag{1}
$$
Among them, the first two lines represent the coordinate error: the first line is the prediction of the center coordinates of the bounding box, and the second line is the prediction of the width and height of the bounding box. The third and fourth lines represent the confidence loss of the bounding box, and the fifth line is the error of the predicted category. If there is no target in a cell, the classification error is not back-propagated. Of the predicted bounding boxes, only the one with the highest IOU with the ground-truth box is back-propagated; the rest are not.
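A simplified, single-box reading of loss (1) can be sketched as follows (an illustrative sketch: the full loss sums over all S² cells and B boxes and includes the class term, and the dictionary layout here is an assumption for readability):

```python
import math

def yolo_style_loss(pred, truth, has_obj, l_coord=5.0, l_noobj=0.5):
    """One cell/box slice of loss (1): coordinate error on (x, y) and on
    sqrt(w), sqrt(h), plus confidence error weighted by whether the cell
    contains an object. Class error omitted for brevity."""
    if not has_obj:
        # only the down-weighted confidence term survives for empty cells
        return l_noobj * (pred["C"] - truth["C"]) ** 2
    coord = (pred["x"] - truth["x"]) ** 2 + (pred["y"] - truth["y"]) ** 2
    size = ((math.sqrt(pred["w"]) - math.sqrt(truth["w"])) ** 2
            + (math.sqrt(pred["h"]) - math.sqrt(truth["h"])) ** 2)
    conf = (pred["C"] - truth["C"]) ** 2
    return l_coord * (coord + size) + conf
```

Taking square roots of the width and height keeps small-box errors from being drowned out by large boxes, and the λ weights rebalance coordinate error against the many cells that contain no object.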
3.2 Ship Recognition and Tracking Model

From the visual perception model obtained by training the network, feature maps of a certain size can be obtained from the input image. Drawing on the idea of the Yolo algorithm (Redmon et al. 2015), the input image is divided into grids of corresponding size. Each grid predicts prior boxes (clustered values) on the feature map, each containing the four coordinate predictions t_x, t_y, t_w, t_h. The process of obtaining b_x, b_y, b_w, b_h from the actual predictions t_x, t_y, t_w, t_h is expressed as:
$$b_x = \sigma(t_x) + c_x \tag{2}$$

$$b_y = \sigma(t_y) + c_y \tag{3}$$

$$b_w = P_w e^{t_w} \tag{4}$$

$$b_h = P_h e^{t_h} \tag{5}$$
Among them, c_x and c_y are the indices of the grid cell whose upper-left corner contains the center coordinates of the border, and t_x and t_y are the center coordinates of the predicted border. σ(·) represents the logistic function, which normalizes the coordinates to 0-1. The final b_x and b_y are normalized values relative to the grid position. The width and height of the predicted border are t_w and t_h, while P_w and P_h are the width and height of the candidate (prior) box. The final b_w and b_h are normalized values relative to the candidate box.
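Equations (2)-(5) decode directly into a few lines of Python (a minimal sketch; the function and argument names are ours, not the authors'):

```python
import math

def sigmoid(t):
    """Logistic function, squashing an offset into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw network outputs into a bounding box per (2)-(5):
    center offsets are squashed and added to the grid-cell corner
    (cx, cy); width and height scale the prior box (pw, ph)
    exponentially."""
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# With all-zero offsets the box sits at the cell corner + 0.5 and
# keeps the prior's size:
box = decode_box(0.0, 0.0, 0.0, 0.0, cx=3, cy=4, pw=2.0, ph=1.5)
# box == (3.5, 4.5, 2.0, 1.5)
```

The sigmoid on t_x, t_y confines each predicted center to its own grid cell, which stabilizes training, while the exponential keeps widths and heights positive.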
In order to prevent drift of the ship tracking frame and make the tracking of a moving target more robust, a penalty mechanism is constructed to process the model features and to represent and learn them, so as to track the ship in the video sequence. In order to obtain a better tracking effect, we introduce the coordinate prediction value into the cost function. The penalty mechanism is as follows:

$$\min \sum_{K=1}^{N} f_L\left(D^{K}, U^{K} V^{K}\right) + \lambda_1 \left\|P\right\|_1 + \lambda_2 \left\|Q^{T}\right\|_2 \tag{6}$$

$$U^{K} = P^{K} Q^{K} \tag{7}$$
For each feature K in the model, D^K is used to represent the ship image tracking sequence, V^K is the feature matrix of the ship feature map, and U^K is the matrix representation of the sequence D^K. f_L(·) is a cost function used to evaluate the degree of difference between the ship target and the ship feature map matrix in the Kth feature. P^K is the matrix representation of the Kth feature; the global representation matrix P is obtained by stacking the P^K horizontally, similarly to the global coefficient matrix Q. The term ||P||_1 represents the independence of each feature of the model, and ||Q^T||_2 represents the abnormal results of model tracking. The parameter λ_1 controls the penalty degree of the global representation matrix P, and λ_2 is the penalty coefficient corresponding to the global coefficient matrix Q.
For the recognition of ship types at sea, the spatial distributions of ships can overlap, so that the same detection frame corresponds to two different ships. In that case only one ship type can be identified, resulting in a decline in recognition rate. In this paper, multi-label classification is used to predict the target category, and a multi-label, multi-classification logistic regression layer is added to the network structure. The sigmoid function is used as the logistic regression unit to classify each category. At the same time, the cross-entropy cost function is used to measure the difference between the predicted value and the actual value of the neural network. The expressions are as follows:
$$y = \frac{1}{1+e^{-x}} \tag{8}$$

$$J = -\frac{1}{m}\sum_{i=1}^{m}\left[T_i \log f(x_i) + (1-T_i)\log\left(1-f(x_i)\right)\right] \tag{9}$$
Among them, m is the total number of samples, T is the label with a value of 0 or 1, i indexes the ship samples, and f(x) represents the predicted output.
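Equations (8) and (9) can be written out directly (a minimal sketch in plain Python; names are ours):

```python
import math

def sigmoid(x):
    """Logistic regression unit of equation (8)."""
    return 1.0 / (1.0 + math.exp(-x))

def cross_entropy(labels, preds):
    """Cross-entropy cost of equation (9): labels T_i are 0 or 1,
    preds f(x_i) are sigmoid outputs in (0, 1)."""
    m = len(labels)
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(labels, preds)) / m
```

Because each class gets its own independent sigmoid rather than one shared softmax, an overlapping detection frame can legitimately carry more than one ship-type label.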
4 EXPERIMENTAL ANALYSIS

4.1 Ship Data Image Set

At present, there are few ship-related data sets among object detection data sets, and even fewer image sets for merchant ship recognition tasks. Therefore, this study collects images of five common types of merchant ships as sample sets by means of web search: container ships, bulk carriers, oil tankers, LNG vessels and fishing vessels. In this study, 7402 images were collected, including 2320 images of container ships, 1050 images of tankers, 1140 images of liquefied natural gas vessels, 1860 images of general cargo vessels and 1032 images of fishing vessels. 80% of the images of each ship type are selected as the training set, and the remaining 20% as the test set. Figure 4 shows pictures of different ship types.
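The 80/20 per-type split described above can be sketched as follows (an illustrative sketch; the file names and seed are hypothetical, not from the paper):

```python
import random

def split_dataset(images, train_frac=0.8, seed=42):
    """Shuffle and split one ship type's image list into a training set
    (80%) and a test set (20%), applied per ship type so each class
    keeps its proportion."""
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# e.g. for the 2320 container-ship images (hypothetical file names):
train, test = split_dataset([f"container_{i}.jpg" for i in range(2320)])
# len(train) == 1856, len(test) == 464
```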
Figure 4. Training sample set of different ship type pictures.
4.2 Experimental Platform and Parameter Settings

The experimental platform of this study is the Windows 10 operating system with 16 GB RAM, a CPU with a main frequency of 3.2 GHz, an NVIDIA GTX 1050Ti GPU with 4 GB of video memory, and PyCharm (2018 version) as the test platform. The specifications and parameters of the shipborne camera are listed in Table 1.
Table 1. Camera specifications.
_______________________________________________
Pixel                  5 million
Lens size              8 mm
Monitoring angle       41.5°
Monitoring distance    20-25 m
_______________________________________________
The parameter setting of the deep network structure is the main task of network training. Following the idea of transfer learning, a pre-trained network framework can be fine-tuned with one's own training data on top of an existing base network, which achieves a better training effect. This study is based on the pre-trained Darknet model. Some parameters are initialized as shown in Table 2.
Table 2. Initialization tuning settings for network structure parameters.
_______________________________________________
Parameter        Initial value
_______________________________________________
momentum         0.9
decay            0.0005
angle            0
saturation       1.5
exposure         1.5
hue              0.1
learning_rate    0.001
burn_in          1000
max_batches      500200
policy           steps
steps            40000, 45000
_______________________________________________
4.3 Video Data Source for Ship Monitoring

The experimental data are based on video data collected by the cameras on both sides of the container ship YUFENG of Shanghai Maritime University. Figure 5 shows the installation positions of the cameras on the container ship. The collected surveillance video is divided into two groups to evaluate the performance of the ship detection algorithm. The first group of ship surveillance video is used to evaluate the detection performance of the ship detection model under different traffic conditions in a good navigation environment. The second group is taken in a foggy navigation environment and is used to test the robustness and accuracy of the ship detection model under very low visibility.
Figure 5. Installation location of the shipborne cameras.
4.4 Analysis of Experimental Results

In order to verify the validity and reliability of the detection, the training pictures contain various meteorological and environmental scenarios. Owing to the characteristics of convolutional neural networks, illumination, sea surface environment and other important factors are learned automatically by the model. In addition, since the batch normalization operation is included in our training process, the generalization ability of the model is greatly improved, and the effects of different light intensities can be effectively overcome. According to the average loss curve over the number of iterations in the training process, shown in Figure 6, the loss stabilizes around 0.3 when the number of iterations reaches 12,000. As the number of iterations increases further, the value of the average loss function remains basically unchanged and stays stable, showing that the algorithm converges quickly in the training process.
The recall-precision curve is a performance index of a classifier, used here to reflect the accuracy of ship type recognition. In this experiment, four common types of ships were selected: container ship, bulk carrier, oil tanker and fishing vessel. As shown in Figure 7, the curves relating recall and precision of the improved method are compared with those of the original method. From the experimental data, it can be seen that the area enclosed by the precision and recall curves of ship detection with this method is larger than that of the original method, reflecting that the AP values on the data are obviously larger than those of the original method. The recall rate can reach 85% without loss of precision, and when the recall rate reaches 80%, the precision can still reach 80%, which fully illustrates the accuracy of this method.
Figure 6. Average loss function curve.

Figure 7. Comparison of PR curves of the enhanced visual perception method: (a) PR curve of fishing ship; (b) PR curve of bulk freighter; (c) PR curve of container ship; (d) PR curve of oil tanker.
4.5 Comparative Experiments with Different Methods

To illustrate the effectiveness of this method, its recognition performance is further validated by comparison with commonly used deep learning target recognition methods: K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), traditional Convolutional Neural Network (CNN) and Deep Convolutional Neural Network (DCNN) are used to compare different ship types. Under the conditions of this experiment, a detection is uniformly counted as correct when the IOU value is greater than 0.75. The results are shown in Table 3.
Table 3. Comparison of recognition accuracy of different ship type recognition algorithms.
_______________________________________________
                    KNN     ANN     CNN     DCNN    Proposed method
_______________________________________________
General cargo ship  34.20%  30.10%  80.00%  86.00%  89.50%
Bulk cargo ship     31.20%  33.10%  71.20%  72.50%  88.20%
Container ship      53.20%  61.20%  85.20%  90.70%  96.80%
LNG ship            42.10%  37.00%  63.20%  66.70%  82.70%
Oil tanker          46.10%  45.30%  79.50%  84.60%  90.50%
Average             40.90%  42.60%  76.70%  81.40%  89.50%
_______________________________________________
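The IOU > 0.75 correctness criterion used above can be computed as follows (a minimal sketch; the (x1, y1, x2, y2) corner format is an assumption, not specified in the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection counts as correct when iou > 0.75."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Identical boxes give IOU 1.0; two 2x2 boxes overlapping by half
# give 2 / (4 + 4 - 2) = 1/3, which would not count as correct here.
```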
Among the above five ship recognition tasks, the lowest recognition accuracies of the KNN and ANN algorithms are for bulk carriers (31.2%) and general cargo ships (30.1%) respectively, while the CNN method has its lowest recognition accuracy for LNG ships, at only 63.2%. The recognition accuracy of the DCNN algorithm for bulk carriers and LNG ships is 72.5% and 66.7% respectively, while the accuracy of the proposed method for these two ship types is 88.2% and 82.7% respectively, and 96.8% for container ships. The comparison between the shallow methods (KNN, ANN) and the convolution-based methods (CNN, DCNN) shows that shallow ship type recognition methods cannot extract the characteristics of different ship types very well, whereas the method based on this model can better find the deep features of different ship types and can obtain a better ship recognition effect.
4.6 Experimental Results

Key frames are extracted from shipborne camera surveillance video in different environments and traffic flows to evaluate the detection performance of the algorithm. The system detects ships in the video sequence. Figure 8 shows that, in a video scene with a good navigation environment, the ship tracking process shows a good tracking effect and accurately displays the ship type in real time.

To verify the robustness and accuracy of the proposed algorithm, ship detection experiments were also carried out on shipborne camera surveillance video during navigation in fog. Figure 9 shows that the proposed algorithm can effectively overcome the effects of hazy weather and illumination changes, and still tracks ships and identifies ship types under low visibility. On the basis of identifying ship types, further ship visual perception tasks are processed: as shown in Figure 10, various ship types and important parts of ships are accurately identified.
Figure 8. Tracking and recognition results of the system in a video scene with a good navigation environment: (a) frame 166; (b) frame 427; (c) frame 877.

Figure 9. Tracking and recognition results of the system in a foggy navigation environment video scene: (a) frame 211; (b) frame 524; (c) frame 660.

Figure 10. Recognition results of ship type and position.
5 CONCLUSION

In this paper, an intelligent ship vision enhancement system based on a deep learning framework is proposed to solve the problem of ship tracking and recognition for intelligent navigation visual perception tasks. It effectively overcomes the difficulties caused by different illumination, different weather, wind and wave conditions, and human involvement. Future research will integrate radar, infrared and AIS data to obtain longer-distance marine vessel monitoring and real-time display of a ship's geographical location under poor visual conditions.
ACKNOWLEDGEMENTS

The research presented in this paper has been supported by the Shanghai Shuguang Plan Project (15SG44), the National Natural Science Foundation of China (51709167), the Natural Science Foundation of Shanghai (18ZR1417100), the Shanghai Pujiang Program (18PJD017), the Shanghai Science and Technology Innovation Action Plan (18DZ1206101), and the Young Teacher Training Program of Shanghai Municipal Education Commission (ZZHS18053).
REFERENCES

[1] Albrecht T, West G A, Tan T, et al. 2011. Visual maritime attention using multiple low-level features and naive Bayes classification. Digital Image Computing: Techniques and Applications (DICTA), International Conference on. IEEE.
[2] Bergamasco F, Benetazzo A, Barbariol F, et al. 2016. Multi-view horizon-driven sea plane estimation for stereo wave imaging on moving vessels. Computers & Geosciences 95: 105-117.
[3] Bolme D S, Beveridge J R, Draper B A, et al. 2010. Visual object tracking using adaptive correlation filters.
[4] Chen Weiqiang, Chen Jun, Zhang Wei, et al. 2016. Robust tracking control for ship heading adaptive neural network. Journal of Ship Engineering (09): 15-20.
[5] Chen Wenting, Liu Nantong, Ji Kefeng, et al. 2014. Ship recognition for SAR image based on multi-classifier fusion. Remote Sensing Information (5): 90-95.
[6] Chen Xiaojun, Yang Zhangqiong. 2017. Application of support vector regression and game theory in ship moving position tracking. Ship Science and Technology (08): 19-21.
[7] He K, Zhang X, Ren S, et al. 2015. Deep residual learning for image recognition.
[8] Hong Z, Chen Z, Wang C, et al. 2015. Multi-store tracker (MUSTer): A cognitive psychology inspired approach to object tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[9] Hong Z, Mei X, Prokhorov D, et al. 2013. Tracking via robust multi-task multi-view joint sparse representation: 649-656.
[10] Jiang Shaofeng, Wang Chao, Wu Fan, et al. 2014. COSMO-SkyMed image commercial ship classification algorithm based on structural feature analysis. Remote Sensing Technology and Application 29(4): 607-615.
[11] Johansson G. 1973. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics 14(2): 201-211.
[12] Kim D, Kim H, Jung S, et al. 2015. A vision-based detection algorithm for moving jellyfish in underwater environment. Ubiquitous Robots and Ambient Intelligence (URAI), 2015 12th International Conference on. IEEE.
[13] Lecun Y, Bengio Y, Hinton G. 2015. Deep learning. Nature 521(7553): 436.
[14] Li X, Shang M, Hao J, et al. 2016. Accelerating fish detection and recognition by sharing CNNs with objectness learning. OCEANS 2016 Shanghai. IEEE.
[15] Mayer-Schönberger V, Cukier K. 2014. Big data: A revolution that will transform how we live, work, and think. Mathematics & Computer Education 47(17): 181-183.
[16] Merchant N D, Witt M J, Blondel P, et al. 2012. Assessing sound exposure from shipping in coastal waters using a single hydrophone and Automatic Identification System (AIS) data. Marine Pollution Bulletin 64(7): 1320-1329.
[17] Moeslund T B, Granum E. 2001. A survey of computer vision-based human motion capture. Computer Vision & Image Understanding 81(3): 231-268.
[18] Robards M, Silber G, Adams J, et al. 2016. Conservation science and policy applications of the marine vessel Automatic Identification System (AIS) - a review. 92(1): 75-103.
[19] Russell S J, Norvig P. 2010. Artificial intelligence: a modern approach. Applied Mechanics & Materials 263(5): 2829-2833.
[20] Sang L, Wall A, Mao Z, et al. 2015. A novel method for restoring the trajectory of the inland waterway ship by using AIS data. 110: 183-194.
[21] Shu Y, Daamen W, Ligteringen H, et al. 2017. Influence of external conditions and vessel encounters on vessel behavior in ports and waterways using Automatic Identification System data. 131: 1-14.
[22] T'Jampens R, Hernandez F, Vandecasteele F, et al. 2016. Automatic detection, tracking and counting of birds in marine video content. Image Processing Theory Tools and Applications (IPTA), 2016 6th International Conference on. IEEE.
[23] Xiao F, Han L, Gulijk C V, et al. 2015. Comparison study on AIS data of ship traffic behavior. Ocean Engineering 95(3): 84-93.
[24] Xu C, Lu C, Liang X, et al. 2016. Multi-loss regularized deep neural network. IEEE Transactions on Circuits & Systems for Video Technology 26(12): 2273-2283.
[25] Zhang Z, Zhang X W, Liang R Y, et al. 2010. Research and implementation of ship-lock monitoring system based on SVM and visual perception. Modern Electronics Technique.
[26] Zou Z, Shi Z. 2016. Ship detection in spaceborne optical image with SVD networks. IEEE Transactions on Geoscience and Remote Sensing 54(10): 5832-5845.