Page 36 - Fister jr., Iztok, and Andrej Brodnik (eds.). StuCoSReC. Proceedings of the 2017 4th Student Computer Science Research Conference. Koper: University of Primorska Press, 2017
…self-adaptive differential evolution (MAS-jDE) for solving DOPs. These are a class of problems where the environment changes over time. Autonomous agents that are capable of detecting these changes on their own try to solve DOPs by using jDE and maintaining populations of various sizes. Evolutionary progress can typically stagnate when using agents with smaller populations. Therefore, we tried to mitigate this problem by implementing two mechanisms: detection of stagnation by calculating the population's diversity, and migration of individuals between agents. Experiments with various configurations of multi-agent systems were conducted, where the number of agents and the population sizes were varied. Both parameters were selected such that the algorithms in each configuration spent the same number of fitness function evaluations. In this way, a fair comparison between different MAS-jDE configurations can be expected. Additionally, the performance of the proposed MAS-jDE algorithm was also compared to other state-of-the-art algorithms for solving DOPs.

The remainder of the paper is organized as follows. Section 2 describes DOPs. Section 3 describes the original DE, its extension jDE, and the proposed MAS-jDE algorithm. Section 4 illustrates the results of the experiments that were conducted and provides a discussion of them. Finally, Section 5 concludes the paper by summarizing the performed work and outlining directions for future work.

2. PROBLEM DEFINITION

This paper is devoted to solving DOPs. Li et al. [7] defined this kind of problem as follows:

DOP = f(x, φ, t),   (1)

where DOP is the dynamic optimization problem, f is an evaluation function, x is a feasible solution in the solution set X, φ (the so-called change type) is a system control parameter determining the solutions' distribution in the fitness landscape, and t is real-world time. Changes in the environment can be the result of a change of the system control parameters describing the environment's state, or of a change in the evaluation function. These changes can occur either after some predefined number of generations or, more commonly, in each generation, where online responses are desired from the evolutionary algorithms.

The goal of an algorithm for solving DOPs is to detect the change of environment and find the new optimum. A problem frequently occurring in evolutionary algorithms for DOPs is stagnation of the population. The evolutionary process moves the population to, or very near to, a local or the global optimum, which causes a very small diversity of the population. When an environmental change occurs, a population with small diversity cannot explore the search space efficiently enough. Usually this means that the algorithm will not be able to find the next global optimum.

3. THE PROPOSED ALGORITHM

The proposed MAS-jDE algorithm combines knowledge from two domains: evolutionary algorithms (EAs) [3] and multi-agent systems (MAS) [12]. In line with this, three algorithms are described first: differential evolution (DE), followed by self-adaptive differential evolution (jDE), which is an extension of DE. Finally, the proposed MAS-jDE is presented from the multi-agent aspect.

3.1 Differential evolution

DE is a well-established EA, developed by Storn and Price [11]. It is a population-based algorithm whose population is defined as a set of real-valued vectors representing different solutions of the problem:

x_i = {x_{i,1}, x_{i,2}, …, x_{i,D}},   (2)

for i = 1, …, NP, where NP represents the population size and D is the dimensionality of the problem. As in other EAs, the first step in DE is a random initialization of the population. Until the termination condition is satisfied, the population undergoes the operation of three evolutionary operators creating trial vectors: mutation, crossover and selection.

Mutation creates a mutant vector v_i^{(G+1)} from the parent population. Different DE mutation strategies have been used in the literature [10, 11], but in the original article [11] the authors used the so-called 'DE/rand/1/bin':

v_i^{(G+1)} = x_{r0}^{(G)} + F · (x_{r1}^{(G)} − x_{r2}^{(G)}),   (3)

where r0, r1 and r2 are random integers in the range [1, NP] for which r0 ≠ r1 ≠ r2 holds, and the scale factor F is a real number in the range F ∈ [0, 2].

The crossover operator introduces a little diversity into the population of solutions. It creates a trial vector u_{i,j}^{(G+1)} from a combination of elements taken from either the mutant vector or the corresponding parent vector as:

u_{i,j}^{(G+1)} = v_{i,j}^{(G+1)}   if rand(0, 1) ≤ CR or j = j_rand,
                  x_{i,j}^{(G)}    otherwise,   (4)

where the crossover rate CR ∈ [0, 1) determines the probability of creating a trial vector element from the mutant vector. The index j_rand is a randomly selected integer in the range [1, D] that ensures the algorithm makes at least one change in the trial vector with regard to the previous generation.

Selection compares the fitness function values of the newly created trial vector u_i^{(G+1)} and the parent vector x_i^{(G)}. For a minimization problem, it is defined as follows:

x_i^{(G+1)} = u_i^{(G+1)}   if f(u_i^{(G+1)}) ≤ f(x_i^{(G)}),
              x_i^{(G)}    otherwise.   (5)

The fitness function depends on the problem to be solved.

3.2 Self-adaptive DE

Brest et al. [1] presented the self-adaptive differential evolution algorithm (jDE), where the control parameters F_i and CR_i are added to the representation of each individual and adapted during the evolution process using the following equations:

F_i^{(G+1)} = F_l + rand_1 · F_u   if rand_2 < τ_1,
              F_i^{(G)}            otherwise,   (6)

CR_i^{(G+1)} = rand_3       if rand_4 < τ_2,
               CR_i^{(G)}   otherwise,   (7)
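To illustrate the DOP definition of Eq. (1), the following is a minimal hypothetical example (not from the paper): a sphere function whose optimum drifts with time t, so that a change of the control parameter φ corresponds here to a shift of the optimum's position. The function name and the `speed` parameter are our own illustrative choices.

```python
import numpy as np

def dynamic_sphere(x, t, speed=0.1):
    """Hypothetical DOP: a sphere function whose optimum drifts over time.

    The optimum sits at (speed * t, ..., speed * t), so an algorithm must
    track it as t advances; an environmental change manifests as a change
    of the evaluation function between generations.
    """
    optimum = np.full_like(x, speed * t, dtype=float)
    return float(np.sum((x - optimum) ** 2))

# At t = 0 the optimum is at the origin; at a later time it has moved,
# so a population converged on the old optimum is suddenly suboptimal.
```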
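The three operators of Eqs. (3)-(5) can be sketched as one DE/rand/1/bin generation in NumPy; this is a minimal illustration assuming a minimization problem, and all function and parameter names are ours rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

def de_generation(pop, fitness, f, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin (Eqs. (3)-(5)); minimizes f."""
    NP, D = pop.shape
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(NP):
        # Mutation (Eq. 3): three mutually distinct indices, all different from i.
        r0, r1, r2 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        v = pop[r0] + F * (pop[r1] - pop[r2])
        # Binomial crossover (Eq. 4): j_rand guarantees at least one mutant gene.
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Selection (Eq. 5): greedy one-to-one replacement.
        fu = f(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

# Usage: minimize the sphere function in D = 5 dimensions.
sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, (20, 5))
fit = np.array([sphere(x) for x in pop])
for _ in range(50):
    pop, fit = de_generation(pop, fit, sphere)
```

Because selection is elitist per individual, no individual's fitness can ever worsen between generations.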
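The self-adaptation rules of Eqs. (6) and (7) can be sketched as follows; this is our own illustration, using the default values F_l = 0.1, F_u = 0.9 and τ_1 = τ_2 = 0.1 suggested by Brest et al. [1].

```python
import numpy as np

rng = np.random.default_rng(7)

def jde_adapt(F_i, CR_i, F_l=0.1, F_u=0.9, tau1=0.1, tau2=0.1):
    """Self-adaptation of the jDE control parameters (Eqs. (6) and (7)).

    Each individual carries its own F_i and CR_i; with probability tau1
    (resp. tau2) the parameter is resampled before mutation/crossover,
    otherwise the inherited value is kept.
    """
    if rng.random() < tau1:
        F_i = F_l + rng.random() * F_u   # new F in [0.1, 1.0)
    if rng.random() < tau2:
        CR_i = rng.random()              # new CR in [0, 1)
    return F_i, CR_i
```

In this scheme better parameter values tend to survive, because they propagate with the individuals they helped to improve.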
StuCoSReC Proceedings of the 2017 4th Student Computer Science Research Conference 36
Ljubljana, Slovenia, 11 October