levels observed/forecasted at the stations located upstream (Gdańsk Głowa, Tczew, Chełmno and Toruń). The influence of the sea was modelled using the sea levels recorded in Gdańsk Port Północny. The model calibration and verification were carried out for a 24-hour lead time.
3 ARTIFICIAL NEURAL NETWORK METHOD
Artificial neural networks (ANN) were created as an attempt to reproduce the performance of the human brain. A detailed description of the method is included in the papers by Tadeusiewicz (1995/6), Sztobryn (1999, 2001, 2003), Sztobryn and Krzysztofik (2001), and on the website of the STATISTICA program (2010).
The basis for the operation and software is the model of a single neuron, called the McCulloch-Pitts model after the names of its authors.
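As a minimal illustration (not part of the paper's STATISTICA implementation), a McCulloch-Pitts unit sums weighted inputs and fires when the sum reaches a fixed threshold; the function name and the AND-gate weights below are chosen for the example only:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Classic McCulloch-Pitts unit: weighted sum followed by a hard threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# An AND gate realised with a single McCulloch-Pitts neuron.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcculloch_pitts([a, b], [1, 1], 2))  # fires only for (1, 1)
```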
The structure of an ANN is composed of input data (the so-called input layer), the hidden layers and the output layer. The input layer contains the parameters (i.e. the input vector) which, in the opinion of the investigator, influence the modelled phenomenon. It has to be emphasized that there are no limits on the selection of parameters (in contrast to models of mathematical physics). Generally, the list of such parameters is very long; moreover, the parameters are often connected with each other and internally correlated. Reduction of the input parameters, also called reduction of the input data space dimension, is one of the elementary but also one of the most difficult tasks in the calibration of an ANN model. Modelling the hidden layer or layers through selection of a suitable number of hidden neurons is the next step in the calibration of the model. The number of output neurons is determined by the character of the phenomenon; in the analysed case it was the water level with a 24-hour lead time.
Building the ANN model was done in the following stages:
– reduction of the input data dimension (finding the final and optimal input data vector),
– selection of the neural network structure (i.e. deciding how many hidden layers with how many neurons are included in the model),
– selection of the activation function of the neuron layers (i.e. the function transforming the input vector to the output inside the individual neuron),
– selection of the learning method (the way of comparing and correcting the error, equal to the difference between the modelled and observed values, for calibrating the network).
The final work was the analysis of the results with respect to quality and implementation into the operational/routine work of the forecast service.
Over 100 parameters affecting the water level changes in the Świbno cross-section were selected. They represented the hydrology (levels, their changes and water table drops) of the Lower Wisła (from the Toruń profile) and of the Gulf of Gdańsk (Hel and Gdańsk), as well as the meteorological conditions, current and forecasted, for the Gulf of Gdańsk.
Reduction of the dimension was carried out applying 3 methods: correlation, a genetic algorithm, and model sensitivity testing (i.e. testing the model's ability to give a good simulation/forecast for input data with and without the tested parameters).
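The correlation method can be sketched as follows (an illustrative stand-alone example, not the paper's code): each candidate input series is correlated with the target series, and only candidates whose Pearson coefficient exceeds a chosen threshold are kept. The candidate names, values and the 0.6 threshold are assumptions for the example:

```python
import math

def pearson_r(x, y):
    """Standard Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reduce_by_correlation(candidates, target, r_min=0.6):
    """Keep only the candidate inputs whose |r| with the target exceeds r_min."""
    return {name: series for name, series in candidates.items()
            if abs(pearson_r(series, target)) >= r_min}

# Hypothetical candidate predictors for the water level at the target station.
target = [520, 535, 560, 540, 510, 500]
candidates = {
    "level_upstream": [515, 530, 558, 542, 512, 503],  # strongly correlated
    "random_noise":   [3, -7, 1, 9, -2, 4],            # unrelated
}
print(sorted(reduce_by_correlation(candidates, target)))  # ['level_upstream']
```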
Based on a literature review and a comparison of ANN learning methods (the learning method is the way of calculating and reducing the errors between the known, i.e. observed, and the modelled output values), the backpropagation method was chosen, as well as a multilayer perceptron ANN structure (meaning that the investigated structure consisted of 3 layers).
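A minimal sketch of backpropagation for such a 3-layer perceptron is given below. This is an illustration only, not the paper's STATISTICA implementation; the hidden-layer size, learning rate, epoch count and the toy target function are assumptions. The tanh hidden / logistic output pairing mirrors the activation functions listed in Table 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000):
    """Tiny multilayer perceptron (tanh hidden layer, logistic output)
    trained with plain backpropagation on the squared error."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass through both layers.
        h = np.tanh(X @ W1 + b1)
        out = logistic(h @ W2 + b2)
        # Backward pass: propagate the output error through both layers.
        err = out - y                        # dE/d(out) for 0.5*(out - y)^2
        d_out = err * out * (1 - out)        # logistic derivative
        d_hid = (d_out @ W2.T) * (1 - h**2)  # tanh derivative
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(0)
    return W1, b1, W2, b2

# Toy problem: targets scaled into (0, 1), as a logistic output requires.
X = rng.uniform(-1, 1, (200, 2))
y = logistic(1.5 * X[:, :1] - 0.8 * X[:, 1:])
W1, b1, W2, b2 = train_mlp(X, y)
pred = logistic(np.tanh(X @ W1 + b1) @ W2 + b2)
print("RMS after training:", float(np.sqrt(((pred - y) ** 2).mean())))
```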
The next problem was the division of the whole data population into 3 series: learning and testing series (for model calibration) and a validation series (independent data used for model verification). A 70-15-15% division was applied, meaning that 70% of the population formed the learning series and 15% each the testing and validation series. To compare the quality of performance and the reliability of the forecast, statistical indicators were applied, calculated for each of the three series separately: the root mean square error (RMS) and the correlation coefficient R (the standard Pearson correlation coefficient with p = 0.92 confidence interval).
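The 70-15-15 division and the two indicators can be sketched as follows (a stand-alone illustration; the shuffling, the seed and the stand-in records are assumptions, not the paper's procedure):

```python
import math
import random

def split_70_15_15(records, seed=1):
    """Shuffle and divide the data into learning / testing / validation series."""
    data = records[:]
    random.Random(seed).shuffle(data)
    n = len(data)
    n_learn, n_test = int(0.70 * n), int(0.15 * n)
    return (data[:n_learn],
            data[n_learn:n_learn + n_test],
            data[n_learn + n_test:])

def rms_error(obs, model):
    """Root mean square error between observed and modelled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def pearson_r(obs, model):
    """Standard Pearson correlation coefficient R."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(model) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    return cov / (so * sm)

records = list(range(1000))              # stand-in for (input, level) records
learn, test, valid = split_70_15_15(records)
print(len(learn), len(test), len(valid))  # 700 150 150
```

The indicators are then computed on each of the three series separately, so that the validation figures reflect data the network never saw during calibration.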
4 RESULTS OF MODELLING
500 networks were tested with respect to the influence from the river; the results for the 5 best representations are presented in Table 2. It includes the characteristics of the 5 best networks (with a changeable number of hidden neurons and one output neuron) representing the water level in Świbno with a 24-hour lead time and a 70-15-15 division of the data.
The first column in Table 2 specifies the grid structure. Thus MLP stands for multilayer perceptron, with one hidden layer in this case. The symbols 5-34-1 show the number of neurons in the input, hidden and output layers.
Table 2. Characteristics of the 5 best calibrated networks
__________________________________________________________________________________________________
network      correlation  correlation  correlation    MRE for   MRE for  MRE for        number of  activation   activation
             (learning    (tested      (validation    learning  tested   validation     used       function     function
             series)      series)      series = model series    series   series = model learning   for hidden   for output
                                       verification)                     verification   periods    layer        layer
__________________________________________________________________________________________________
1            2            3            4              5         6        7              8          9            10
__________________________________________________________________________________________________
MLP 5-34-1   0.99         0.99         0.99           4.81      5.71     5.97           910        Tanh         Logistic
MLP 5-35-1   0.99         0.99         0.99           5.03      5.78     6.52           816        Tanh         Logistic
MLP 5-31-1   0.99         0.99         0.99           5.52      6.22     6.53           950        Tanh         Logistic
MLP 5-35-1   0.99         0.99         0.99           5.23      6.49     6.90           934        Tanh         Logistic
MLP 5-26-1   0.99         0.99         0.99           5.63      6.58     6.91           819        Tanh         Logistic
__________________________________________________________________________________________________