Distributed Evolutionary Algorithm for Path Planning in Navigation Situation

R. Śmierzchalski, Ł. Kuczkowski, P. Kolendo & B. Jaworski
Gdansk University of Technology, Gdansk, Poland

ABSTRACT: This article presents the use of a multipopulation distributed evolutionary algorithm for path planning in a navigation situation. The algorithm works with a partially exchanged population and migration between independently evolving populations. The paper compares a multipopulation algorithm with a classic single population algorithm and examines the impact on the ultimate solution. It is shown that using several independent populations leads to an improvement of the ultimate solution compared to a single population approach. The concept was tested on a problem of maritime collision avoidance.

http://www.transnav.eu
the International Journal on Marine Navigation and Safety of Sea Transportation
Volume 7, Number 2, June 2013
DOI: 10.12716/1001.07.02.17

1 INTRODUCTION

One of the ways of solving optimization tasks is using an evolutionary algorithm with many populations. The multipopulation distributed evolutionary algorithm (DGA) is one such approach, first presented in (Tanese 1989a; Tanese 1989b) as a method of parallel computation on a simple genetic algorithm (Goldberg 1989). The proposed solution divided the global population into several subpopulations. A simple genetic algorithm was then applied to each subpopulation on a single processor, which computed a single evolution cycle. In every generation a few individuals were exchanged between subpopulations (in a synchronous or asynchronous way). That research showed that the genetic algorithm defined in this way provided better results, in terms of computation time and result quality, than a single population algorithm. In (Forrest and Mitchell 1991) Forrest and Mitchell repeated the research on the multipopulation algorithm using Tanese's functions. Their work proved that Tanese's results could be attributed to the fitness function used in her research and to the parallelization of the calculations. In (Belding 1995) Theodore C. Belding continued the work on DGA using the Royal Road functions, called R1, R2, R3, R4, which were presented in detail in (Mitchell, Forrest and Holland 1992; Forrest and Mitchell 1993; Mitchell and Holland 1993; Mitchell, Holland and Forrest 1994). The author showed that in the case of complex tasks (the R3 and R4 functions) the multipopulation algorithm reached far better results than a single population algorithm; in the other cases the results were comparable.

One example of an application of a multipopulation evolutionary algorithm is a two-step variation used for planning networks of parallel connected computers (Cochran, Horng and Fowler 2003). In the first step the algorithm optimizes using a weighted fitness function; in the second, based on the results from the previous step, subpopulations are created which evolve in parallel. The paper compares the achieved results with solutions given by other methods and shows that the DGA provided better results. In (Martikainen and Ovaska 2006) a hierarchical two-population evolutionary algorithm operating on a base and an elite population was presented. The elite population consisted of the best individuals and the base one contained individuals of lesser fitness score. Both populations are evolved using different methods, i.e. different mutation and crossover probabilities. The paper shows the superiority of the multipopulation variation over the single population one while using the same number of generations.
The multipopulation distributed evolutionary algorithm can work both in parallel and sequential modes. Different variations are used to solve complex optimization tasks. In (Gehring and Bortfeldt 2002) Gehring and Bortfeldt showed a multipopulation algorithm that calculated the optimal container loading sequence. The DGA modification used by them provided results better than those of a single population algorithm. There are many publications, such as (Whitley 1997; Cantu-Paz 1999; Martikainen 2006), that give an overview of the currently used variations of multipopulation algorithms. Different ways of individual migration and dedicated genetic operators that allow for an exchange of individuals between populations are shown there.

One of the tasks that the multipopulation algorithm can be used for is remote control of a mobile object (e.g. a mobile robot or an autonomous overwater ship). It consists in leading an object from a starting position to its destination or the operation (mission) area. In order to do this, an optimal path (by criteria such as length) has to be plotted. This path has to avoid obstacles: static and dynamic constraints of the environment. The dynamic constraints can present themselves as other movable objects travelling along their own trajectories with certain speeds. The problem can be examined in two modes: offline and online. The offline mode of path planning is carried out in an environment where the movement parameters of other dynamic objects are known and constant. The online mode takes into account the changes in the environment and the uncertainty of the movement of other objects. As a result, constant monitoring of the environment's changes and the other objects' parameters is required. In the event of changes, a modification of the previously plotted path takes place.

The problem was reduced to an optimization task with static (islands, forbidden areas) and dynamic (other ships, changeable weather conditions) constraints (Śmierzchalski 1997). In order to solve the presented problem an adaptive evolutionary method was used (Goldberg 1989; Michalewicz 1996), which operated based on the Evolutionary Planner/Navigator (νEP/N++) (Xiao and Michalewicz 1999). This algorithm uses the evolutionary algorithm library GALib (Wall 1996). Based on the available library components a multipopulation distributed evolutionary algorithm (Tanese 1989a; Tanese 1989b; Belding 1995) was examined and its results were compared with a single population version.

This article presents a multipopulation distributed evolutionary algorithm used in the problem of maritime path planning. It is organized so that in chapter two the maritime path planning problem is presented. The evolutionary method and the multipopulation variant are described in chapters three and four. Chapter five defines the simulation environment, while chapter six presents the results. Chapter seven concludes the paper.
2 PATH PLANNING FOR A COLLISION SCENARIO

The problem of collision avoidance consists of plotting a path P, as a part of a given route, which the mobile object covers from the initial position (start point) (x0, y0) to the actual destination point (xe, ye). The path is composed of a sequence of linear segments pi (i = 1,...,n), interconnected by the turning points (xi, yi). The start and destination points are chosen by the operator. Taking the above into account, path P is feasible (belongs to the safe paths set) if every segment pi (i = 1,...,n) remains in the area of the environment and does not cross any dynamic or static constraint. The paths that do not meet this requirement are considered unfeasible (Cochran, Horng and Fowler 2003).
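The feasibility rule above can be sketched in a few lines. This is an illustrative sketch, not the νEP/N++ implementation: the function names, the representation of constraints as lists of polygon edges, and the `in_environment` predicate are all assumptions for illustration.

```python
# Feasibility test for a path given as turning points (x_i, y_i):
# every segment must stay inside the environment and cross no constraint edge.

def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1 = _orient(q1, q2, p1)
    d2 = _orient(q1, q2, p2)
    d3 = _orient(p1, p2, q1)
    d4 = _orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def path_is_feasible(turning_points, constraint_edges, in_environment):
    """turning_points: [(x, y), ...]; constraint_edges: [((x, y), (x, y)), ...]."""
    for p1, p2 in zip(turning_points, turning_points[1:]):
        if not (in_environment(p1) and in_environment(p2)):
            return False  # segment endpoint leaves the environment area
        if any(segments_cross(p1, p2, q1, q2) for q1, q2 in constraint_edges):
            return False  # segment crosses a static or dynamic constraint
    return True
```

A path that detours around a triangular "island" is classified as feasible, while one cutting through it is not.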
According to the collision avoidance rules, when an encountered object is in the area of observation and the object's course crosses own path at an unsafe distance, we consider the object to be on a potential collision course (target 1, point pk, Fig. 1).
Figure 1. Potential collision scenario.
The safe distance from the own ship depends on the adopted collision danger level (usually a distance of 5-8 nautical miles in front of the bow and 2-4 nautical miles astern, depending on the size and type of vessel and the ratio of own and target ships' speeds) (Śmierzchalski 1998). The distance of the closest approach and the time needed to reach this distance also have to be considered. The collision danger condition is reduced to detecting if (Lenart 1986):

Dmin ≤ Dkr   (1)

TDmin ≤ Tkr   (2)

where:
Dmin - closest approach distance,
TDmin - time to achieve the closest approach distance,
Dkr, Tkr - critical values of Dmin and TDmin set by the system's operator.

In the case of the collision avoidance task, the objects that present a collision danger are interpreted as mobile objects travelling with a certain speed and course.
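Conditions (1) and (2) can be evaluated with standard relative-motion geometry for the closest point of approach (CPA). The sketch below is a minimal illustration of that test; the CPA formulas are textbook relative-motion kinematics, and all names and the Cartesian position/velocity representation are assumptions, not code from the paper.

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (Dmin, TDmin): closest approach distance and time to reach it."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                     # identical velocities: range stays constant
        return math.hypot(rx, ry), 0.0
    t = -(rx * vx + ry * vy) / v2     # time minimizing |r + v*t|
    t = max(t, 0.0)                   # CPA already passed -> evaluate at "now"
    return math.hypot(rx + vx * t, ry + vy * t), t

def collision_danger(own_pos, own_vel, tgt_pos, tgt_vel, d_kr, t_kr):
    """Danger conditions (1) and (2): Dmin <= Dkr and TDmin <= Tkr."""
    d_min, t_dmin = cpa(own_pos, own_vel, tgt_pos, tgt_vel)
    return d_min <= d_kr and t_dmin <= t_kr
```

A head-on target closing at the own ship violates both limits, while a distant target with the same velocity vector never does.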
According to the movement concept, the own ship should cover the given route in a given time. On the other hand it should move safely along the given path in order to avoid objects that present a threat of collision. Path planning for an object in a collision scenario has to be a compromise between a deviation from the given route and the safety of the travel. Thus the problem is defined as a multicriteria optimization task, which includes the safety and the economy of the movement. The total cost of a path's fitness considers the safety cost and the cost of moving along the given path. The safety cost is calculated based on the distance from the constraints, and the cost of travel takes into account: the total path length P, the maximum turn angle between the segments pi and the time needed to cover the route (Śmierzchalski 1998).
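The two-part cost above can be sketched as follows. This is a hedged illustration only: the paper states which terms enter the safety cost and the travel cost, but the exact functional forms, weights, and the point-based distance measure below are assumptions made for the example.

```python
import math

def travel_cost(turning_points, speed):
    """Economy terms: total path length, maximum turn angle, travel time."""
    segs = list(zip(turning_points, turning_points[1:]))
    length = sum(math.dist(a, b) for a, b in segs)
    def heading(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    turns = [abs(heading(*segs[k + 1]) - heading(*segs[k]))
             for k in range(len(segs) - 1)]
    max_turn = max(turns, default=0.0)
    time = length / speed
    return length + max_turn + time  # illustrative equal weighting

def safety_cost(turning_points, constraint_points, safe_dist):
    """Penalty growing as the path approaches any constraint point."""
    d = min(math.dist(p, c) for p in turning_points for c in constraint_points)
    return 0.0 if d >= safe_dist else (safe_dist - d) ** 2

def path_cost(turning_points, constraint_points, speed=20.0, safe_dist=5.0):
    """Total cost = safety cost + cost of moving along the path."""
    return travel_cost(turning_points, speed) + safety_cost(
        turning_points, constraint_points, safe_dist)
```

With a 10-unit straight path at 20 knots, the travel cost is 10 (length) + 0 (no turn) + 0.5 (time) = 10.5, and the safety penalty activates only inside the safe distance.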
3 SINGLE POPULATION ALGORITHM WITH PARTIALLY REPLACED POPULATION

An evolutionary algorithm processes a set of solutions called the population. The environment in which it operates is defined based on the task at hand (fitness function, constraints). Each individual (single population member) represents a different solution of the problem. Based on the fitness function each individual is assigned a parameter called the fitness score. The fitness score determines the quality of the solution represented by each member. At the moment of the algorithm's initialisation the initial conditions are set and each individual is randomly generated. Afterwards the following steps are executed: reproduction, genetic operations, evaluation and succession (Fig. 2). In the reproduction phase a temporary population is created, to which random individuals from the base population are copied. It is possible to introduce more than one copy of any individual. The greater the fitness score, the greater the chance of selecting a particular member of the population. In the next step the temporary population is processed by the genetic operators, which modify individuals. The νEP/N++ program has the following genetic operators built in: hard mutation, soft mutation, adding a gene, gene removal, gene position swap, smoothing and single point crossover (Śmierzchalski 1997). The set of solutions calculated in this way is called the child population, which is then evaluated. The succession phase creates a new base population. In the single population algorithm with a partially replaced population the new population gathers individuals from the child population and the old base population. The number of individuals added to the base population is defined by the user. An individual passes over only if its fitness score is greater than the fitness score of the worst individual of the old base population. These phases of the algorithm are repeated in a loop until the termination condition is met (the algorithm runs for a specific number of generations or a desired fitness score has been achieved). In the GALib library an algorithm like this is called a Steady State GA.
Figure 2. Single population algorithm diagram.
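One generation of the partial-replacement scheme described above can be sketched as follows. Only the succession rule (a child enters the base population only if it beats the worst member) follows the text; the float-valued individuals, the toy perturbation standing in for the genetic operators, and all names are assumptions, not GALib or νEP/N++ code.

```python
import random

def steady_state_generation(base, fitness, n_children, rng):
    """One generation: roulette reproduction, toy variation, partial succession.

    base: list of individuals; returns the new base population (same size)."""
    # Reproduction: roulette selection proportional to fitness score.
    weights = [fitness(ind) for ind in base]
    temp = rng.choices(base, weights=weights, k=n_children)
    # Genetic operations (toy stand-in): perturb each selected individual.
    children = [ind + rng.uniform(-1.0, 1.0) for ind in temp]
    # Succession: a child replaces the worst member only if it is better.
    new_base = sorted(base, key=fitness)  # worst member first
    for child in children:
        if fitness(child) > fitness(new_base[0]):
            new_base[0] = child
            new_base.sort(key=fitness)
    return new_base
```

Because a child only ever displaces the worst member, the fitness of the worst individual can never decrease between generations.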
4 MULTIPOPULATION ALGORITHM

The multipopulation algorithm provides the ability to run simultaneous, independent evolutions of many populations. To achieve this νEP/N++ uses the partial exchange evolutionary algorithm. A general diagram of the multipopulation algorithm is shown in Fig. 3. In the first phase of the algorithm a user-defined number of populations of randomly generated individuals is initialized. After that the evolution process is applied to each population separately. This process, as in the single population algorithm, consists of the following steps: reproduction, crossover, mutation, individual evaluation and selection of the new base population. When the evolution cycle of all populations is completed, the migration and the succession of the superior population take place.
The considered algorithm utilizes the stepping stone migration (Fig. 4a) (Tanese 1989a; Tanese 1989b). From each population, starting with the zero one, a specific number of individuals is passed on to the neighbouring (next in the set) population. This process is repeated until the last population exports its individuals to the zero population.
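The ring-shaped exchange described above can be sketched directly. This is a minimal illustration of stepping stone migration under stated assumptions: the source does not say how arrivals are absorbed, so the policy that migrants replace the receiver's worst members, like the function names, is an assumption for this example.

```python
def stepping_stone_migration(populations, mig, fitness):
    """Pass each population's best `mig` individuals to the next one in the
    ring; the last population exports back to population 0. Mutates in place."""
    n = len(populations)
    # Pick all emigrants first, so individuals move exactly one step per call
    # instead of cascading around the whole ring within a single migration.
    emigrants = [sorted(pop, key=fitness, reverse=True)[:mig]
                 for pop in populations]
    for i in range(n):
        dest = populations[(i + 1) % n]
        dest.sort(key=fitness)        # worst members first
        dest[:mig] = emigrants[i]     # assumed policy: arrivals displace the worst
    return populations
```

With three populations and mig = 1, each population's best individual moves one step along the ring in a single call.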
In the next step, the succession of the superior population takes place (Fig. 4b). This population stores the best solutions and does not undergo the evolution process. From each inferior population (a population affected by the evolution process) a certain number of best adapted individuals is selected. Afterwards the algorithm verifies if any of the chosen individuals has a fitness score better than the worst adapted member of the superior population. If so, the worse individual is replaced by the better one. The ultimate solution is the individual of the superior population with the greatest fitness score.
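The superior-population succession just described can be sketched as follows; the function and parameter names are illustrative assumptions, but the replacement rule (an offered individual enters only if it beats the worst member of the superior population) follows the text.

```python
def elite_succession(superior, inferiors, k, fitness):
    """Offer the best `k` individuals of each inferior population to the
    superior population; each offer is accepted only if it beats the worst
    superior member. Returns the updated superior population (a new list)."""
    superior = sorted(superior, key=fitness)           # worst member first
    for pop in inferiors:
        candidates = sorted(pop, key=fitness, reverse=True)[:k]
        for cand in candidates:
            if fitness(cand) > fitness(superior[0]):
                superior[0] = cand
                superior.sort(key=fitness)
    return superior

def best_solution(superior, fitness):
    """The ultimate solution: the superior member with the greatest fitness."""
    return max(superior, key=fitness)
```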
Figure 3. The multipopulation evolutionary algorithm diagram.

Figure 4. a) Stepping stone migration, b) elite population succession.
5 SIMULATION ENVIRONMENT

The multipopulation distributed evolutionary algorithm presented has been used to solve the problem of maritime path planning. The algorithm research requires the selection of appropriate test tasks. Three environments representing close-to-real maritime scenarios were selected. The following parameters were considered: ψ - course, υ - speed.

Environment 1 (Fig. 5a) presents the problem of static forbidden area avoidance. The introduced constraint represents an island. Environment 2 (Fig. 5b) reflects a problem of avoiding a collision with dynamic objects (representing target ships) travelling in opposite directions. One of the objects is travelling with ψ - 180° and υ - 12 knots (target 1) and the other with ψ - 0°, υ - 8 knots (target 2). Environment 3 (Fig. 5c) is a combination of both previous situations, thus it consists of a static obstacle and dynamic ships travelling with ψ - 140° and υ - 8 knots (target 1) and with ψ - and υ - 12 knots (target 2).

Figure 5. Environments: a) 1, b) 2, c) 3.
6 SIMULATIONS

The summary of solutions presented in the tables below was prepared based on the environment type and the initial populations, marked as a and b. Only the analysed parameters were changed. The results below use the following abbreviations: SSGA - Steady State Genetic Algorithm, DGA - Deme Genetic Algorithm, env - the type of environment (based on the enumeration from the previous chapter), init - the initial population, pop - the number of populations, mig - the number of migrating individuals, t - calculation time, best - fitness score of the best individual, F - fitness score. Independently of the simulation, the following algorithm parameters were set: single population size: 30 individuals, crossover probability: 0.8, mutation probability: 0.15, number of individuals replaced in the population: 6, selector: proportional roulette, algorithm termination - SSGA: 1000 generations, DGA: 200 generations, initial own ship's speed: 20 knots.

In each figure the path of the individual with the highest fitness score is bolded. The position of the dynamic objects is displayed for the best member of the population. The best calculated solution for each environment is presented in graphical form in Fig. 6.
Figure 6. The best calculated solution for each environment.
The goal of the undertaken research was to compare the results of SSGA and DGA and, for the DGA, to establish the impact of the number of populations and the number of migrating individuals on the solution's quality and the computing time. Based on the achieved results a convergence analysis (the ratio between the ultimate fitness score and the number of generations needed to achieve it) was made for the SSGA and DGA algorithms.
6.1 Single and multipopulation algorithm comparison

In the first phase SSGA was compared with the DGA. In the analysed simulations the DGA was operating with the following parameters: pop = 5, mig = 3. The use of DGA in comparison to SSGA extended the calculation time on average 2.5 times. Though the DGA's evolution process was shorter (SSGA: 1000 generations, DGA: 200 generations), it performs a higher number of fitness score calculations for each population. The strong selection pressure used in SSGA results in many copies of the best individual in the population, whose fitness scores do not need to be recalculated. By using several independent populations and exchanging individuals between them, DGA introduces a higher variety of solutions. At the same time, the probability of creating new individuals during the genetic operations increases. Individuals created this way require evaluation, which extends the calculation process. The time it takes (on average 130 seconds) is acceptable for the problem of collision avoidance and meets the near real time requirement (Śmierzchalski and Michalewicz 2000).

After comparing the single population with the multipopulation algorithm it was concluded that regardless of the environment and the initial population, better results were achieved with DGA (Tab. 1). The average score improved by 18%.
Table 1. SSGA and DGA comparison
_______________________________________________
env  init   SSGA            DGA pop=5, mig=3
            t[s]   best     t[s]   best
_______________________________________________
1    a      18     236.58   55     192.38
     b      16     216.56   44     199.47
2    a      42     198.89   129    192.68
     b      32     277.02   130    175.85
3    a      54     316.35   217    260.99
     b      59     315.35   218    248.01
_______________________________________________
6.2 The impact of the number of populations on the DGA's performance

For this research the mig parameter was set to 3. The results are presented in Tab. 2. The algorithm improved or kept the same level of fitness score with the increasing number of populations, though this rule doesn't apply to all cases: simulations with pop = 5 provided the solutions of the lowest fitness score, while pop = 8 provided the best result. Taking the above into account, increasing the number of populations noticeably extends the calculation time but doesn't guarantee an improvement of the solution. Based on the research one can conclude that using pop = 3 to 5 allows one to achieve the best results (Tab. 2).
Table 2. Impact of the number of populations on the DGA performance
_______________________________________________
env  init  DGA pop=2,    DGA pop=5,    DGA pop=8,
           mig=3         mig=3         mig=3
           t[s]  best    t[s]  best    t[s]  best
_______________________________________________
1    a     23    205.16  55    192.38  75    201.49
     b     24    201.03  44    199.47  91    197.07
2    a     54    223.14  129   192.68  201   192.86
     b     51    176.44  130   175.85  212   176.5
3    a     82    258.82  217   260.99  334   272.67
     b     90    255.43  218   248.01  330   252.63
_______________________________________________
6.3 The impact of the number of migrating individuals on the DGA's performance

Simulations were run with mig = 3, 6 and 15 individuals (Tab. 3). The results showed that the number of migrating individuals doesn't affect the ultimate solution. In the case of init b, both env 1 and 3 produced similar fitness scores. Using init a gives varied solutions. It was noticed that increasing the number of migrating individuals helps to smooth the algorithm's convergence characteristic, which eliminates the step-like changes of the best individual's fitness score (Fig. 8).
Table 3. The impact of the number of migrating individuals on the DGA's performance
_______________________________________________
env  init  DGA pop=5,    DGA pop=5,    DGA pop=5,
           mig=3         mig=6         mig=15
           t[s]  best    t[s]  best    t[s]  best
_______________________________________________
1    a     55    192.38  48    203.02  59    202.45
     b     44    199.47  52    195.07  48    198.96
2    a     129   192.68  127   163.25  134   163.31
     b     130   175.85  126   175.92  132   175.96
3    a     217   260.99  217   308.07  288   239.96
     b     218   248.01  234   250.57  223   252.37
_______________________________________________
6.4 The course of the algorithm's convergence

Fig. 7 and Fig. 8 show the diagrams of the solution's convergence depending on the algorithm used, its parameters, the initial population and the environment. Intermediate results were recorded every 50 generations for the SSGA (Fig. 7) and every 10 for the DGA (Fig. 8). This way 20 samples were obtained for each algorithm. Based on Fig. 7 it was concluded that using SSGA results in poor diversity of solutions through the process of evolution. After the exploration phase (regardless of the environment, up until about the 200th generation) it stagnates within a local minimum and the best result does not improve. The only exception was the simulation made for env 2 using init a, where a sudden improvement of the best solution takes place between the 850th and 900th generation due to the mutation operator. In DGA the best fitness score is constantly improving. This is due to the number of populations, which diversifies the solutions. Based on Fig. 8, one can deduce that DGA in comparison with SSGA requires significantly fewer generations to achieve the ultimate fitness score: for env 2 it was about 70 and for env 3 about 100 generations. While using DGA the solutions provided are consistent regardless of the initial population.
Figure 7. Convergence of the SSGA: a) init a, b) init b (trend: 1 - env 1, 2 - env 2, 3 - env 3).
Figure 8. Convergence of the DGA: a) env 1, init a, b) env 1, init b, c) env 2, init a, d) env 2, init b, e) env 3, init a, f) env 3, init b (trend: 1 - pop=2, mig=3; 2 - pop=5, mig=3; 3 - pop=5, mig=6; 4 - pop=5, mig=15; 5 - pop=8, mig=3).

Figure 9. Juxtaposition of the calculation times for the simulations (horizontal axis: 1 - SSGA; 2 - DGA pop=2, mig=3; 3 - DGA pop=5, mig=3; 4 - DGA pop=5, mig=6; 5 - DGA pop=5, mig=15; 6 - DGA pop=8, mig=3).
6.5 Calculation time analysis

The next research step was to analyse the program's performance depending on the environment, the initial population, the number of populations and the number of migrating individuals. Fig. 9 presents a juxtaposition of the calculation times for selected simulations. It was concluded that calculation times depend on the environment's complexity and, to a small degree, on the initial population. Calculations for env 1, which consists of a single static obstacle, took the shortest amount of time. The calculation time extension observed for env 2 and 3 is connected with triangulating the positions of the dynamic obstacles and the points of potential collision.

In Tab. 4 a percentage comparison between the calculation time and the fitness score of the best individual is presented. All the data are shown relative to the single population algorithm. For example, using DGA with pop = 2 and init a (first row in the table) for env 1, the calculation time extended by 27% and the solution improved by 13% in comparison to the result achieved by SSGA.
In terms of computing time, the best solutions were provided by the SSGA. Using DGA with pop = 2 results in an average solution improvement of 14% and an increase of the calculation time by 45%. Similarly, for pop = 5 we get an 18% solution improvement and a 260% calculation time increase, and for pop = 8, 16% and 450% respectively.

From the data above one can conclude that it is best to use 2 or 3 populations. Increasing the number of populations increases the computation time disproportionately to the quality of the solution.
Table 4. The quality of the solution depending on the calculation time
_______________________________________________
init  pop  env 1          env 2          env 3
      t[%]  best[%]  t[%]  best[%]  t[%]  best[%]
_______________________________________________
a     2    27.7   13.2   28.5   12.1   51.8   18.1
b     2    50     7.1    59.3   36.3   52.5   19.0
a     5    200    15.7   209.5  12.9   345.6  14.7
b     5    200    8.6    304.1  36.5   281.3  20.6
a     8    316.6  14.8   378.5  35     518.5  13.8
b     8    468.7  9      562.5  36.2   459.3  19.8
_______________________________________________
7 CONCLUSIONS

The undertaken research has shown that using a distributed genetic algorithm improves the ultimate fitness score in comparison with an algorithm working on a single population. This is achieved regardless of the initial population and the environment's type. It was also shown that DGA requires far fewer generations to reach a solution comparable to or better than the SSGA's. The convergence analysis of both types of algorithms shows that the distributed approach has a positive impact on maintaining the population's diversity. At the same time the deviations in the characteristic of the search for the optimum have decreased. Increasing the number of evolved populations improves the ultimate fitness score; however, a critical number of populations which prevented further improvement of the results was shown. Changing the number of migrating individuals had no effect on the final solution, but it did shape the algorithm's convergence characteristic: migrating 20-40% of the population size prevented sudden changes of the best fitness score value. After comparing the results of all simulations it was concluded that the greatest impact on the calculation time comes from the environment's complexity and the number of populations. It was also recognized that using more than 2-3 populations brings an unsatisfactory improvement of the ultimate solution, taking into account the extended calculation time.
REFERENCES

Belding T.C. (1995). "The Distributed Genetic Algorithm Revisited." Proc. of 6th Int. Conf. on Genetic Algorithms: 114-121.
Cantu-Paz E. (1999). "Topologies, Migration Rates, and Multi-Population Parallel Genetic Algorithms." Proceedings of GECCO.
Cochran J.K., Horng S. and Fowler J.W. (2003). "A Multi-Population Genetic Algorithm to Solve Multi-Objective Scheduling Problems for Parallel Machines." Computers and Operations Research, Vol. 30: 1087-1102, Oxford, UK.
Forrest S. and Mitchell M. (1991). "The performance of genetic algorithms on Walsh polynomials: Some anomalous results and their explanation." In Belew R. & Booker L. (Eds.), Proceedings of the Fourth International Conference on Genetic Algorithms: 182-189, San Mateo.
Forrest S. and Mitchell M. (1993). "Relative building-block fitness and the Building Block hypothesis." In Whitley L.D. (Ed.), Foundations of Genetic Algorithms 2: 109-126, San Mateo, CA: Morgan Kaufmann.
Gehring H. and Bortfeldt A. (2002). "A parallel genetic algorithm for solving the container loading problem." International Transactions in Operational Research, Vol. 9, No. 4: 497-511.
Goldberg D.E. (1989). "Genetic Algorithms in Search, Optimization, and Machine Learning." Boston: Addison-Wesley Longman Publishing Co., Inc.
Lenart A.S. (1986). "Wybrane problemy analizy i syntezy okrętowych systemów antykolizyjnych" [Selected problems of analysis and synthesis of ship anti-collision systems]. Budownictwo okrętowe nr XLIV, Zeszyty naukowe Politechniki Gdańskiej nr 405, Gdańsk 1986.
Martikainen J. (2006). "Methods for Improving Reliability of Evolutionary Computation Algorithms and Accelerating Problem Solving." Ph.D. Thesis, Helsinki University of Technology, Dep. of Electrical and Communications Engineering, Espoo.
Martikainen J. and Ovaska S.J. (2006). "Hierarchical two-population genetic algorithm." International Journal of Computational Intelligence Research, Vol. 2, No. 4.
Michalewicz Z. (1996). "Genetic Algorithms + Data Structures = Evolution Programs." Springer-Verlag.
Mitchell M., Forrest S. and Holland J.H. (1992). "The royal road for genetic algorithms: Fitness landscapes and GA performance." In Proc. of the First European Conference on Artificial Life: 245-254, Cambridge, MIT Press.
Mitchell M. and Holland J.H. (1993). "When will a genetic algorithm outperform hill climbing?" Santa Fe Institute working paper 93-06-037, Santa Fe, NM: Santa Fe Institute.
Mitchell M., Holland J.H. and Forrest S. (1994). "When will a genetic algorithm outperform hill climbing?" Advances in Neural Information Processing Systems 6, San Mateo, CA: Morgan Kaufmann.
Śmierzchalski R. (1997). "Trajectory planning for ship in collision situations at sea by evolutionary computation." In Proc. of the IFAC MCMC'97, Brijuni, Croatia.
Śmierzchalski R. (1998). "Synteza metod i algorytmów wspomagania decyzji nawigatora w sytuacji kolizyjnej na morzu" [Synthesis of methods and algorithms for supporting the navigator's decisions in a collision situation at sea]. DSc. dissertation, Gdynia.
Śmierzchalski R. and Michalewicz Z. (2000). "Modeling of a Ship Trajectory in Collision Situations at Sea by Evolutionary Algorithm." IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3.
Tanese R. (1989a). "Distributed Genetic Algorithms." Proc. of 3rd Int. Conf. on Genetic Algorithms: 432-439.
Tanese R. (1989b). "Distributed Genetic Algorithms." Ph.D. Thesis, University of Michigan, Ann Arbor.
Wall M. (1996). "GAlib: A C++ Library of Genetic Algorithm Components." MIT.
Whitley D. (1997). "Island Model Genetic Algorithms and Linearly Separable Problems." Proc. of AISB Workshop on Evolutionary Computation.
Xiao J. and Michalewicz Z. (1999). "An Evolutionary Computation Approach to Planning and Navigation." Chapter in Soft Computing and Mechatronics, Physica-Verlag.