A SURVEY ON DIRECTIONAL INFORMATION TECHNIQUES
FOR MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS
Nguyen Long1*, Nguyen Xuan Hung2, Cao Van Huan2, Nguyen Duc Dinh3
Abstract: In many disciplines, optimization problems often have two or more objectives that are normally in conflict with each other and that we wish to optimize simultaneously. These problems are called multi-objective optimization problems (MOPs). In fact, MOPs normally give rise not to one solution but to a set of solutions (called a Pareto optimal set, POS) which, in the absence of any further information, are all equally good. Evolutionary algorithms have been very popular for solving MOPs [8, 12], mainly due to their ease of use, population-based operation and wide applicability. Evolutionary algorithms can find an entire set of Pareto optimal solutions in a single run, instead of requiring a series of separate runs as in traditional mathematical programming techniques. Recently, guided techniques have been discussed, conceptualized and used to steer multi-objective evolutionary algorithms (MOEAs) towards the POS during the search process. Generally, guiding information is derived from the population, individuals, archives or decision makers, and is then used to drive MOEAs quickly towards the POS during their evolutionary process. Good guidance lets a MOEA obtain a set of solutions with good convergence and diversity. This is a difficult task, since the evolutionary process is randomized, so the balance between convergence and diversity is hard to maintain during the search. This paper discusses the determination and effective usage of guiding information in MOEAs, and surveys the use of directional information for guiding the search to improve the balance between convergence and diversity in multi-objective evolutionary algorithms.
Keywords: MOEAs, Gradient descent directions, Improvement directions, PDE, DMEA-II.
1. INTRODUCTION
In optimization, evolutionary algorithms (EAs) are effective tools for solving optimization problems. Because EAs work on a population with a stochastic mechanism, they can be used effectively on difficult problems whose optimal sets have a complex shape in objective space. Their wide randomized range keeps the search from being biased towards local optima, which is why EAs are suitable for global optimization problems. When solving multi-objective problems, EAs can be adapted to obtain a set of approximated Pareto optimal solutions. However, EAs also have some drawbacks: the obtained solutions only approximate the Pareto optimal set, so they are not exactly the desired optima of the problem, and a high number of generations is required to get a good set of solutions. To mitigate these disadvantages, hybrid models combine MOEAs with additional search mechanisms to improve the performance of the algorithms. Search techniques widely discussed and used in multi-objective optimization include particle swarm optimization (PSO), ant colony optimization, gradient-based directions, differential evolution and directions of improvement. These techniques are used to guide the evolutionary process quickly towards Pareto
Tạp chí Nghiên cứu KH&CN quân sự, Số Đặc san CNTT, 12 - 2017
optimal fronts in objective space (or Pareto optimal sets in decision space) and to avoid being trapped in local optima. Such guidance improves the exploitation and exploration characteristics of MOEAs and the quality of the obtained solutions. Using guided information is therefore a promising technique for obtaining good approximated solutions. In fact, there are many approaches to using guided information in MOEAs; one important kind is directional information, which includes gradient descent directions, differential directions in differential evolution, and directions of improvement. The use of these directions is summarized in the following sections. The remainder of this paper is organized as follows. A description of guided directions is given in Section II. The advantages and disadvantages of using directional information are indicated in Section III. Finally, research directions and trends are given in the conclusions.
2. TECHNIQUES OF USING GUIDED DIRECTIONS
2.1. Using gradient descent directions
In optimization, gradient-based methods determine search directions based on the gradient vectors of the objective functions at a selected point. Because the gradient vector is the direction of fastest increase of an objective function at a point, the negative gradient gives the direction of fastest decrease; in other words, moving a solution along these directions allows an optimal point to be found. In multi-objective optimization algorithms, the aim of the evolutionary process is to obtain an approximated Pareto optimal set. During the search, if a selected point is Pareto optimal it is kept for the next generation; otherwise, the point needs to be moved towards the Pareto optimal front to become an approximated Pareto optimal solution. A widely used local search method uses gradient-based directions at a point, which help the algorithm move the selected point towards the Pareto optimal front. The determination and use of these directions are described in two popular methods: the steepest descent method and the Pareto descent method. In gradient-based methods for multi-objective problems, the steepest descent direction at a point is defined as a combination of the descent directions at that point, often determined by solving a quadratic programming problem. Descent directions are illustrated in Fig. 1.
Figure 1. Illustration of descent directions; $g_i$ denotes the vector $-\nabla f_i(x)$.
The use of steepest descent directions in multi-objective optimization is proposed in [15]. The idea of this method is to determine the steepest descent direction at a selected point as a linear combination of the negative gradient components; this direction improves all objectives simultaneously. The steepest descent direction is determined as follows:

d^* = -\sum_{i=1}^{m} \lambda_i^* \nabla f_i(x)    (1)

The authors use the concept of Lagrange multipliers to calculate the m components of the weight vector \lambda:

\lambda^* = \arg\min_{\lambda} \frac{1}{2} \left\| \sum_{i=1}^{m} \lambda_i \nabla f_i(x) \right\|^2, \quad \text{s.t. } \lambda_i \geq 0, \ \sum_{i=1}^{m} \lambda_i = 1    (2)
This is a quadratic programming problem with m variables, m linear inequalities and one equality constraint. In another proposal [16], the authors define Pareto descent directions and use them to drive the evolutionary process towards the Pareto optimal front. Here, the Pareto descent directions at a point x in decision space are defined as descent directions to which no other descent direction is superior in improving all objective functions; there are often multiple Pareto descent directions. In other words, a direction d is a Pareto descent direction if it can be represented as a convex combination of the steepest descent directions [15] of the objective functions, i.e. there exist w_i \geq 0 (i = 1, 2, \dots, m), not all zero, such that:

d = \sum_{i=1}^{m} w_i (-\nabla f_i(x))    (3)
Since both the complete set of descent directions and all convex combinations of the steepest descent directions form convex cones, their union, namely the complete set of Pareto descent directions, also forms a convex cone. The descent cone is illustrated in Fig. 2.
Figure 2. Illustration of Pareto descent directions (d2, d3); $g_i$ denotes the vector $-\nabla f_i(x)$.
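To make the construction above concrete, the quadratic program (2) has a closed-form solution in the two-objective case. The sketch below is our own illustration (not code from the surveyed papers); the function name and the example gradients are invented for demonstration:

```python
import numpy as np

def steepest_descent_direction(g1, g2):
    """Closed-form solution of the QP in Eq. (2) for m = 2 objectives:
    minimise ||l*g1 + (1-l)*g2||^2 over l in [0, 1]."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients: any convex weight works
        lam = 0.5
    else:
        lam = float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2), lam

# Two conflicting objective gradients at the current point (made-up values)
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d, lam = steepest_descent_direction(g1, g2)
# d is a descent direction for both objectives: d . g_i < 0
assert d @ g1 < 0 and d @ g2 < 0
```

For more than two objectives the same QP is usually handed to a generic quadratic-programming solver, since no simple closed form exists.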
The authors propose a local search model that uses both Pareto descent directions and descent directions when solving multi-objective problems. In this proposal, the directions are determined in three cases: all Pareto descent directions lie in the feasible region, some Pareto descent directions lie in the infeasible region, and no Pareto descent direction lies in the feasible region.
When all Pareto descent directions are in the feasible region, a convex combination of the steepest descent directions gives a descent direction, determined by a linear programming problem:
d = \sum_{i=1}^{m} w_i g_i, \quad w_i \geq 0 \ (i = 1, 2, \dots, m), \ \sum_{i=1}^{m} w_i \leq 1, \ w \neq 0    (4)

where w = (w_1, \dots, w_m) is the weight vector to be determined and g_i = -\nabla f_i(x). When the selected solution is located on the boundary of the feasible region, some Pareto descent directions fall into the infeasible region; to solve the multi-objective problem effectively, these infeasible directions need to be ignored during the search. For this task, an additional constraint is used:

n \cdot d \geq 0    (5)

where n is the normalized inward normal vector of the boundary of the feasible region and d is a Pareto descent direction determined by the definition in [16]. If there are no Pareto descent directions in the feasible region, descent directions are used for the search; they are determined by a linear programming problem:

\max_{d} \ \min_{i} \ d \cdot (-\nabla f_i(x)), \ i = 1, 2, \dots, m, \quad \text{s.t. } -1 \leq d_j \leq 1 \ (j = 1, 2, \dots, n)    (6)

where d = (d_1, d_2, \dots, d_n) is the direction vector. When the optimal value is zero, no descent direction exists at the point; otherwise the directions found by the above linear programming problem are descent directions. The three cases are demonstrated in Fig. 3.
Figure 3. Demonstration of direction determination in three cases:
All Pareto descent directions are in feasible region (left),
Some Pareto descent directions are in infeasible region (middle),
No Pareto descent directions in feasible region (right).
In [27], the authors introduce a directed search based on gradient information. In this method, a search direction v ∈ R^n in decision space is determined such that:

\lim_{t \to 0} \frac{f_i(x + t v) - f_i(x)}{t} = d_i, \quad i = 1, 2, \dots, k    (7)

where d ∈ R^k is a desired direction in objective space, f_i : R^n \to R is the i-th objective function of the multi-objective problem, and t is a step-length parameter. Using the Jacobian matrix J(x), (7) can be written in matrix-vector form:

J(x) v = d    (8)
The system of equations above is solved to find the direction v. When the number of variables is larger than the number of objectives (k \leq n), v can be determined via the pseudo-inverse:

v = J^+(x) d    (9)

where J^+(x) = J(x)^T (J(x) J(x)^T)^{-1} \in R^{n \times k} is the Moore-Penrose pseudo-inverse of J(x) \in R^{k \times n}. When the problem contains constraints g_1, \dots, g_p : R^n \to R that are active at x, an extended system is solved to find v, with J(x) v = d and \nabla g_j(x) \cdot v \leq 0 \ (j = 1, 2, \dots, p), using the augmented Jacobian:

\tilde{J}(x) = \begin{pmatrix} \nabla f_1(x)^T \\ \vdots \\ \nabla f_k(x)^T \\ \nabla g_1(x)^T \\ \vdots \\ \nabla g_p(x)^T \end{pmatrix} \in R^{(k+p) \times n}    (10)
This search direction is used in the directed descent method, which can be incorporated with traditional methods such as the weighted sum or goal programming. The authors use this direction-based technique to propose a multi-objective algorithm in which an individual at a selected point in decision space is moved along the search direction to approach the Pareto optimal set.
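The core of the directed search step, Eqs. (7)-(9), is a linear solve. A minimal sketch (our own illustration; the Jacobian `J` and target direction `d` below are made-up values) using the Moore-Penrose pseudo-inverse:

```python
import numpy as np

def directed_search_step(jac, d):
    """Solve J(x) v = d (Eq. (8)) for the decision-space direction v.
    With more variables than objectives (n > k), the least-squares
    solution v = J^+ d of Eq. (9) is used (J^+: Moore-Penrose inverse)."""
    return np.linalg.pinv(jac) @ d

# Hypothetical 2-objective, 3-variable problem: J is the 2x3 Jacobian at x
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
d = np.array([-1.0, -1.0])        # desired direction in objective space
v = directed_search_step(J, d)
# Moving along v realises the requested objective-space change:
assert np.allclose(J @ v, d)
```

Because `pinv` returns the minimum-norm solution, the resulting v is the shortest decision-space step that produces the requested objective-space movement.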
Gradient-based algorithms for multi-objective problems have a good convergence rate, but they run into difficulties when the MOPs have a complex surface in objective space, many local optima, or non-differentiable objectives; for such difficult MOPs there is no way to determine the gradient vectors, so gradient-based algorithms cannot be used. Based on a population and a stochastic mechanism, EAs can solve these difficult MOPs; however, EAs also have drawbacks: the obtained solutions only approximate the Pareto optimal set, so they are not exactly the desired optima, and a high number of generations is required to get a good set of solutions. In other words, EAs may have a poor convergence rate. This raises the idea of combining gradient-based directions with an evolutionary algorithm in a hybrid algorithm. Such algorithms inherit the advantages of both gradient-based and evolutionary methods to solve multi-objective problems effectively: an evolutionary algorithm that incorporates gradient information in a local search converges more quickly to the Pareto optimal front. Below, several gradient-based directions used to guide the evolutionary process quickly towards the Pareto optimal front while avoiding local optima are reviewed.
Following this idea, the authors of [2] propose an evolutionary algorithm combined with discrete gradient information. In this proposal, the discrete gradient is used as a local search operator for (\mu + \lambda) and (\mu, \lambda) evolution strategies, where \mu is the number of randomly selected parents and \lambda is the number of generated offspring. The algorithm applies the discrete gradient to all individuals in the population in the first generation; in later generations, it is applied only to the best individuals found. A number of offspring are generated by recombination and mutation, and the \mu best individuals are then selected from (\mu + \lambda) (parents and newly generated offspring) or only from \lambda (newly generated offspring). The authors use the learning rates \tau = (2\sqrt{n})^{-1/2} and \tau' = (2n)^{-1/2} to adapt the step lengths \sigma in the mutation of individual i:

x'_{i,j} = x_{i,j} + \sigma'_{i,j} N_j(0,1)
\sigma'_{i,j} = \sigma_{i,j} \exp(\tau' N(0,1) + \tau N_j(0,1))    (11)
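A hedged sketch of the self-adaptive mutation of Eq. (11) (this follows the standard evolution-strategy formulation as we read it from the source; the learning-rate constants are the usual ES recommendation and may differ from the exact settings in [2]):

```python
import numpy as np

rng = np.random.default_rng(0)

def es_mutate(x, sigma):
    """Self-adaptive ES mutation as in Eq. (11): the step lengths sigma
    are mutated log-normally, then the decision variables are perturbed."""
    n = len(x)
    tau_prime = 1.0 / np.sqrt(2.0 * n)        # global learning rate
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))     # coordinate-wise learning rate
    global_step = tau_prime * rng.standard_normal()
    new_sigma = sigma * np.exp(global_step + tau * rng.standard_normal(n))
    new_x = x + new_sigma * rng.standard_normal(n)
    return new_x, new_sigma

x = np.zeros(10)
sigma = np.full(10, 0.1)
child, child_sigma = es_mutate(x, sigma)
assert child.shape == x.shape and np.all(child_sigma > 0)
```

The log-normal update keeps every step length strictly positive, which is why the strategy parameters can evolve alongside the decision variables.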
In [17], the authors combine evolutionary computation with approximated gradients to generate offspring for the next generation. Here, gradients are not calculated exactly; they are estimated using the best solutions in neighboring populations. The proposal is implemented in four main steps. Step 1: a center point c and a radius r are initialized, and the iteration counter is set to k = 1. Step 2: a population of \lambda randomly placed individuals is created, each point defined by a vector such that:

x_i = c + \alpha r u_i, \quad i \in \{1, 2, \dots, \lambda\}    (12)

where \alpha is a random number in (0, 1), u_i is a random unit vector and k is the current iteration. Step 3 (approximating the gradient): evaluate f(x_i) for i = 1, 2, \dots, \lambda. When there exists x_b with \min_i f(x_i) \leq f(c), set x_b such that f(x_b) = \min_i f(x_i). An approximated gradient step is then defined from the difference:

g = x_b - c    (13)

Step 4: a new center point and a new radius are created by:

c_{k+1} = c_k + \gamma g, \quad \forall k \geq 1    (14)

where \gamma is a control parameter, and

r_{k+1} = \| c_{k+1} - c_k \|, \quad \forall k \geq 1    (15)
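The four steps above can be sketched as follows. This is an illustrative reconstruction on a single-objective test function, not the authors' code; the parameter names `gamma` and `n_points` and the radius-halving fallback are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_gradient_step(f, c, r, gamma=0.5, n_points=8):
    """One iteration of the gradient-approximation scheme of [17]:
    sample points around the centre c (Eq. (12)), take the best one as
    x_b, use g = x_b - c as the approximate descent step (Eq. (13)),
    then move the centre and update the radius (Eqs. (14)-(15))."""
    n = len(c)
    # Step 2: random points x_i = c + alpha * r * u_i on unit directions u_i
    u = rng.standard_normal((n_points, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    points = c + r * u * rng.random((n_points, 1))
    # Step 3: best sampled point defines the approximate gradient step
    values = np.array([f(p) for p in points])
    best = points[np.argmin(values)]
    if f(best) > f(c):                 # no improvement found: shrink radius
        return c, r * 0.5
    g = best - c
    # Step 4: new centre and new radius
    c_new = c + gamma * g
    r_new = float(np.linalg.norm(c_new - c))
    return c_new, r_new

f = lambda x: float(np.sum(x ** 2))   # convex test function, optimum at 0
c, r = np.array([2.0, 2.0]), 1.0
for _ in range(50):
    c, r = approx_gradient_step(f, c, r)
assert f(c) < f(np.array([2.0, 2.0]))
```

Because the centre only moves towards an improving sample, the objective value at the centre is non-increasing across iterations on a convex function.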
In [3, 4], the authors introduce three approaches to using gradient information with MOEAs:
- Using conjugate gradients: a local search method that uses the gradient of a single objective, randomly selected among the objectives of the MOP. This method depends on the relationship between the objectives: when one objective is improved, it may help to improve the remaining objectives.
- Using an alternating repeated line-search: the idea of this approach is to reduce the probability of improving a single objective while making the value of another objective worse; the objective that is searched locally can be changed during the local search. The authors propose to perform a line-search (i.e. find a local minimum with respect to a single direction) in a single, alternately chosen objective, in the direction of the negative gradient of that objective. This process is repeated until a multi-objective local minimum is found.
- Using a combined repeated line-search: a line-search is performed in a promising direction; when it terminates, a new line-search is initiated in a new promising direction found at the new location. The set of non-dominated directions (improvement directions) is described as follows:
d(w) = -\sum_{i=1}^{m} w_i \frac{\nabla f_i(x)}{\| \nabla f_i(x) \|}, \quad w_i \in [0, 1], \ \sum_{i=1}^{m} w_i = 1    (16)

When the line-search terminates, a random weight vector w is drawn (each w_i \in [0, 1] with \sum_i w_i = 1), and a new direction is determined by substituting this vector into the expression above. The line-search is not used to minimize a single objective; instead, it is used to find non-dominated solutions along the searched line, so a multi-objective problem has to be solved along each line-search.
In [5], the authors modify the concept of the descent cone when using gradients: the descent cone is defined as the intersection of the negative half-spaces generated by the gradient vectors of all objectives. The authors indicate that offspring generated by a MOEA have to be located in this descent cone. A gradient approximation based on neighborhood information is proposed, which costs considerably less than solving the quadratic programming problem of [15].
In another proposal [28], the authors introduce a method that uses the generating techniques for Pareto optimal solutions of [30] and [26] to improve the mutation of NSGA-II. This proposal uses Simultaneous Perturbation (SP) [29] for gradient estimation:

\hat{g}_i(x) = \frac{f(x + c \Delta) - f(x)}{c \Delta_i}    (17)

where \hat{g}_i(x) is the i-th component of the gradient estimate and \Delta is a random n-dimensional perturbation vector satisfying the conditions in [29]; each component of \Delta is generated from a Bernoulli distribution. The step size c at iteration k is calculated as c_k = c / (k + 1)^{\gamma}. The randomized gradient of [30] is used as follows: a set of non-dominated solutions is determined and used to approximate the Pareto optimal set, and from each non-dominated solution x_j a new individual x_{j+1} is generated by:

x_{j+1} = x_j - t_j \sum_{i=1}^{m} u_i \nabla f_i(x_j)    (18)

where u = (u_1, \dots, u_m) is a random vector uniformly distributed in [0, 1] and t_j is the step length at generation j, calculated as t_j = C / \sqrt{j} with C a positive constant. The generating method of [26] is also used:

dX_t = -q(X_t) dt + \epsilon dB_t, \quad X_0 = x_0    (19)

where \epsilon > 0, B_t is a Brownian motion, and -q(x) is a direction in decision space that is a descent direction for all objective functions; this descent direction is determined by a quadratic problem: q(x) = \sum_{i=1}^{m} \hat{\alpha}_i \nabla f_i(x), where \hat{\alpha} minimizes \| q(x) \|. These mutation operators are used in NSGA-II, giving two new versions: T-NSGA-FD and T-NSGA-SP.
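The SP estimate of Eq. (17) can be sketched as follows (our reading of the one-sided estimator from the garbled source; on a quadratic function the averaged estimate approaches the true gradient 2x):

```python
import numpy as np

rng = np.random.default_rng(2)

def sp_gradient(f, x, c):
    """Simultaneous-Perturbation estimate of the gradient (Eq. (17)):
    all n coordinates are perturbed at once by a Bernoulli +/-1 vector,
    so only two function evaluations are needed per estimate."""
    delta = rng.choice([-1.0, 1.0], size=len(x))   # Bernoulli perturbation
    df = f(x + c * delta) - f(x)
    return df / (c * delta)           # per-coordinate division by c*Delta_i

f = lambda x: float(x @ x)            # true gradient is 2x
x = np.array([1.0, 0.0])
est = np.mean([sp_gradient(f, x, c=0.01) for _ in range(2000)], axis=0)
assert np.allclose(est, 2 * x, atol=0.25)
```

The attraction over finite differences is cost: one estimate needs two evaluations of f regardless of the dimension n, at the price of higher variance per estimate.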
Based on the directed search method of [27], the authors of [19] improve it and combine it with MOEA/D [31]. The directed search is improved by a different direction determination:

\nu(x) = \sum_{i=1}^{r} \lambda_i v_i, \quad \text{where } v_i = J^+(x) d_i    (20)

where the d_i \in R^k are guiding vectors in objective space collected in T(x) \in R^{k \times r}, k is the number of objectives, r is the number of search directions, and \lambda \in R^r is the step-length vector for the search directions. When the search directions are found, the selected solution x is moved to a new location by x' = x + t \nu(x), where the step length t is calculated as in [22]. The directed search method is applied within MOEA/D as a local search; for the combination, two additional control parameters are given: the first is the rate of applying the local search in each generation, and the second is the number of solutions in the population selected for the local search in each generation.
In [24, 25], the authors introduce two versions of a MOEA that uses descent directions; these are suitable for non-differentiable MOPs, since a specific descent direction determination is introduced to avoid derivative calculation. A new decision vector is determined as:

x_{t+1} = x_t + \eta_t \sum_{j=1}^{M} \lambda_j d_j    (21)

where x_{t+1} is the newly generated decision vector, x_t is the current randomly selected decision vector, \lambda_j is a number randomly generated from a distribution on [0, 1], d_j is a descent direction for objective j, and \eta_t is the step length at iteration t, calculated as \eta_t = \max(1, t(1 - t/T)), where T is the desired number of iterations. To avoid derivative calculation, the authors propose a descent direction determination for each objective. First, the population is divided into \alpha equal sub-populations. At the current generation, for each sub-population, all points are evaluated on all objective functions to obtain a leader x_{lead,j}: the point with the lowest value for objective j. Then a coordinate search is used to find a descent direction for each x_{lead,j} (variables are randomly generated in random order, then checked for selection), and the descent direction for individual i on objective j is determined as:

d_{i,j} = x_{lead,j} - x_i + \delta_j    (22)

where x_{lead,j} is the leader point of the sub-population and \delta_j is the perturbation found by the coordinate search for x_{lead,j}. This procedure is repeated for the other objectives; at the end, M descent directions are associated with each solution in the population.
2.2. Using differential directions
One widely used method that uses directional information to guide MOEAs is differential evolution (DE). In this method, guided directions are determined from the differences between selected solutions in the population. Differential evolution algorithms differ from other evolutionary algorithms in their mutation and recombination operators: instead of a generic perturbation, the weighted differences of vectors in decision space are used to perturb the population. In other words, the main technique in differential evolution is the use of differential directions, determined from selected points in decision space, for the perturbation. The common steps in differential evolution are as follows:
Step 1: Set i = 1.
Step 2: Randomly select r1, r2, r3 \in \{1, 2, \dots, N\} such that r1 \neq r2 \neq r3 \neq i, where i is the index of the current individual in the population.
Step 3: Generate a new child individual u_{i,t+1} from the selected parents x_{r1,t}, x_{r2,t}, x_{r3,t} and the current individual x_{i,t} at generation t:

u_{i,t+1} = x_{i,t} + K (x_{r3,t} - x_{i,t}) + F (x_{r1,t} - x_{r2,t})    (23)

where K and F are control parameters, often generated from a uniform distribution on [0, 1]; x_{r3,t} is called the main parent, and x_{r1,t}, x_{r2,t} are called supporting parents.
Step 4: Set i = i + 1.
Step 5: Repeat from Step 2 while more child individuals are needed and i \neq N.
In differential evolution, the population is randomly generated within the ranges of the decision variables. At each generation the population is perturbed: the index i of the current individual x_{i,t} is initialized to 1 in the first step. In the second step, three distinct individuals with indexes r1, r2, r3 are randomly selected from the population (of size N). The parameter K is a combination rate between x_{r3,t} and the current individual x_{i,t}; the parameter F is a step length for the vector (x_{r1,t} - x_{r2,t}). In Step 3, these selected individuals are used to generate the new individual u_{i,t+1} for the next generation t + 1. In single-objective optimization, the new individual u_{i,t+1} is compared to the current individual x_{i,t}; if the new individual is better, it replaces the current one in the next generation. In Step 5 the termination condition is checked: the loop repeats until enough individuals have been generated or all N individuals have been processed.
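Steps 1-5 above reduce to a single perturbation rule per individual, which can be sketched as follows (an illustrative sketch with invented names, not code from any of the surveyed algorithms):

```python
import numpy as np

rng = np.random.default_rng(3)

def de_offspring(pop, i, K, F):
    """One DE perturbation following Eq. (23): pick three mutually
    distinct supporting parents r1, r2, r3 (all different from i) and
    combine their difference vectors with the current individual."""
    n = len(pop)
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                            size=3, replace=False)
    return (pop[i]
            + K * (pop[r3] - pop[i])        # pull toward the main parent
            + F * (pop[r1] - pop[r2]))      # differential perturbation

pop = rng.random((10, 4))                   # 10 individuals, 4 variables
child = de_offspring(pop, i=0, K=0.4, F=0.8)
assert child.shape == (4,)
```

Note how the difference vector (pop[r1] - pop[r2]) automatically scales with the spread of the population: perturbations are large while the population is dispersed and shrink as it converges.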
In multi-objective optimization, the dominance relation between the new individual and the current individual is examined for the replacement. Some multi-objective evolutionary algorithms that use differential evolution are briefly described below. Following the concept of differential evolution, the authors of [1] introduce a MOEA using DE, modifying standard DE as follows:
- A Gaussian distribution N(0.5, 0.15) is used for population initialization.
- The parent individuals x_{r1,t}, x_{r2,t}, x_{r3,t} are selected only among the non-dominated solutions of each generation.
- Offspring are generated by:

u_{i,t+1} = x_{i,t} + K (x_{r3,t} - x_{i,t}) + F (x_{r1,t} - x_{r2,t})    (24)

as in standard DE, but the step length F for the vector (x_{r1,t} - x_{r2,t}) is randomly picked from the Gaussian distribution N(0, 1).
- The boundary constraints are preserved either by reversing the sign if a variable is less than 0, or by repeatedly subtracting 1 if it is greater than 1, until the variable is within its boundaries.
- Offspring u_{i,t+1} are placed into the population only if they dominate the main parent.
A combined version of NSGA-II with DE is introduced in [11]. In this proposal, the individual x_{i,t} of standard DE is replaced by x_{best,t}, the individual with the best dominance rank in the population at generation t, and the individual x_{r1,t} is skipped. Offspring for generation t + 1 are determined by:

u_{i,t+1} = x_{best,t} + F (x_{r2,t} - x_{r3,t}), \quad r2 \neq r3, \ r2, r3 \in [1, N]    (25)

where r2, r3 are randomly selected in [1, N] and x_{best,t} is the individual with the best dominance rank in the population. In addition, the crossover operator generates a trial vector by:

u_{j,i,t+1} = v_{j,i,t+1} \ \text{if} \ r(j) \leq CR \ \text{or} \ j = rnd(i); \quad u_{j,i,t+1} = x_{j,i,t} \ \text{if} \ r(j) > CR \ \text{and} \ j \neq rnd(i)    (26)

where CR is the crossover rate, j indexes the j-th decision variable of solution i, r(j) is a number randomly picked in [0, 1], and rnd(i) is a randomly chosen variable index: the mutant component v_{j,i,t+1} is used when r(j) \leq CR (or j = rnd(i)); otherwise the parent component x_{j,i,t} is used.
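The binomial crossover of Eq. (26) can be sketched as follows (our own illustration; `j_rand` plays the role of rnd(i), guaranteeing that at least one mutant component survives):

```python
import numpy as np

rng = np.random.default_rng(4)

def binomial_crossover(x, v, cr):
    """Trial-vector construction of Eq. (26): variable j is taken from
    the mutant v when r(j) <= CR, or when j is a randomly fixed index
    (so at least one mutant gene always survives); otherwise it is
    inherited from the parent x."""
    n = len(x)
    j_rand = rng.integers(n)                 # the guaranteed mutant index
    mask = rng.random(n) <= cr
    mask[j_rand] = True
    return np.where(mask, v, x)

x = np.zeros(5)                              # parent genes (all 0)
v = np.ones(5)                               # mutant genes (all 1)
u = binomial_crossover(x, v, cr=0.5)
assert u.sum() >= 1                          # at least one mutant gene
```

Without the guaranteed index, a low CR could reproduce the parent unchanged, stalling the search.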
In [21], the authors introduce a new version of MOEA/D that uses DE (MOEA/D-DE). In this proposal, a method for individual replacement and a randomized search rate in the mutation operator are suggested. In MOEA/D, the maximum number of solutions that can be replaced by a child solution is the size of the neighborhood T, a disadvantage shown in [20]. MOEA/D-DE improves the replacement mechanism by adding a bound n_r, much smaller than T: a high-quality child solution can replace at most n_r current solutions, which helps diversity maintenance. The Euclidean distances from the currently selected solution to the other solutions are calculated, and the n_r solutions with the shortest distances are selected; they enter the next generation through the mutation operator:

u_{i,t+1} = x_{i,t} + F (x_{r1,t} - x_{r2,t})    (27)

where r1, r2 \in [1, N], r1 \neq r2, and F, the step-length parameter for (x_{r1,t} - x_{r2,t}), is picked from a distribution on [0, 1] each generation.
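The bounded-replacement idea, selecting the n_r nearest solutions by Euclidean distance, can be sketched as follows (illustrative only; the helper name and example values are our own):

```python
import numpy as np

def nr_nearest(pop, x, nr):
    """Bounded replacement in the spirit of MOEA/D-DE: only the nr
    solutions closest (in Euclidean distance) to the current solution x
    are candidates for replacement by a high-quality child, which
    prevents one child from taking over a large part of the population."""
    dist = np.linalg.norm(pop - x, axis=1)
    return np.argsort(dist)[:nr]

pop = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [0.5, 0.5]])
idx = nr_nearest(pop, np.array([0.0, 0.0]), nr=2)
assert 0 in idx and 3 in idx       # the two closest candidates
```

Capping replacements at n_r << T trades a little convergence speed for much better diversity, which is the point made in [20].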
2.3. Using directions of improvement
The effectiveness of using directional information has been discussed in multi-objective evolutionary algorithm design. Besides gradient-based techniques and differential evolution, the use of directions of improvement has been raised and indicated as a promising technique for obtaining a set of good approximated Pareto optimal solutions; it helps MOEAs maintain their exploitation and exploration characteristics. From the perspective of improving MOEAs, a direction of improvement is defined as a direction determined from the population in decision space or objective space. These directions are archived and used to move solutions along them, improving the convergence rate and diversity of MOEAs. Several proposals use directions of improvement. In [18], based on the concept of differential evolution, the authors propose using directions of improvement for the two important characteristics of MOEAs, convergence and diversity. In this proposal, two types of direction are defined for NSGA-II [13]:
Directional convergence: in NSGA-II, at each generation the individuals are ranked into a number of classes by the dominance relation; individuals of the same rank are put into the same class. A directional convergence is defined as the vector (x_{r3,t} - x_{i,t}), where x_{i,t} is the currently selected individual and x_{r3,t} is the main parent in DE, randomly selected from the higher-ranked individuals in the population (see Fig. 4). This direction points towards the Pareto optimal set; its purpose is to guide the evolutionary process quickly towards the Pareto optimal set, improving the convergence rate of the MOEA.
Directional spread: this direction is defined as the vector (x_{r1,t} - x_{r2,t}), where the two individuals x_{r1,t}, x_{r2,t} are randomly selected from the current population such that x_{r1,t}, x_{r2,t} and x_{i,t} are of the same rank (see Fig. 4). This direction is used to spread solutions along the Pareto optimal set, improving the diversity of the MOEA.
Figure 4. Illustration of directional convergence and directional spread; f is the current individual, individuals a and b are of the same rank, and c, d, e are of the same rank. Directional convergence (left) and directional spread (right) in decision space.
Using the directions of improvement: the authors give three ways to use the directions of improvement:
- NSDE-DC: a version of NSGA-II that uses only directional convergence; the indexes r1, r2 such that r1 \neq r2 \neq r3 \neq i are randomly selected among the indexes of the current population.
- NSDE-DS: a version of NSGA-II that uses only directional spread; the index r3 such that r1 \neq r2 \neq r3 \neq i is randomly selected among the indexes of the current population.
- NSDE-DCS: a version of NSGA-II that uses both directional convergence and directional spread.
In all three variants, offspring are generated by:

u_{i,t+1} = x_{i,t} + K (x_{r3,t} - x_{i,t}) + F (x_{r1,t} - x_{r2,t})    (28)

In the experimental validation, the authors set F = 0.8 and K = 0.4.
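Eq. (28) with both direction types can be sketched as follows (illustrative; the rank structure is supplied by the caller rather than computed by non-dominated sorting here):

```python
import numpy as np

rng = np.random.default_rng(5)

def nsde_dcs_offspring(x_i, better_rank, same_rank, K=0.4, F=0.8):
    """NSDE-DCS offspring (Eq. (28)): the directional-convergence term
    K * (x_r3 - x_i) points from the current individual toward a
    randomly chosen higher-ranked one, and the directional-spread term
    F * (x_r1 - x_r2) uses two individuals of equal rank."""
    x_r3 = better_rank[rng.integers(len(better_rank))]
    r1, r2 = rng.choice(len(same_rank), size=2, replace=False)
    return x_i + K * (x_r3 - x_i) + F * (same_rank[r1] - same_rank[r2])

x_i = np.array([1.0, 1.0])
better = np.array([[0.2, 0.2], [0.3, 0.1]])   # higher-ranked individuals
peers = np.array([[0.9, 1.1], [1.1, 0.9], [1.0, 1.2]])  # same rank as x_i
child = nsde_dcs_offspring(x_i, better, peers)
assert child.shape == (2,)
```

Dropping the spread term (r1 = r2) gives NSDE-DC, and drawing x_r3 from the same rank instead gives NSDE-DS, so the three variants differ only in where the parents come from.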
In [9, 6, 7], the authors introduce a guidance technique for multi-objective algorithms in both decision space and objective space within a local search model. In this proposal, decision space is divided into several non-overlapping spheres; each sphere s_i is defined by a pair of a centroid and a radius: S = [s_1, s_2, \dots, s_n] with s_i = (c_i, r_i). At the initialization step, all radii r_i are set to the same value r. Each sphere defines a local area in decision space. To ensure that a sphere does not overlap with the others, a constraint is used:

d_{AB} \geq 2r    (29)

where d_{AB} is the Euclidean distance between the centroids of spheres A and B. The points in a sphere are uniformly distributed; with d_{min} the minimum distance from a point to the others in the same sphere, d_{ij} the distance between two points i and j, and c_X the centroid of sphere X, the constraint condition is described as:

d_{i,c_X} \leq r, \quad d_{j,c_X} \leq r, \quad d_{ij} \geq d_{min}    (30)
The authors use Cartesian coordinate to generate values for points x = (x1, x2, . .
. , xn) in a sphere bases on the radius of the sphere and a set of − 1 randomized
angles. The local model is definedas a system of spheres in decision and they are
kept by a movement operator. The main steps are described as bellows:
Step 1: Define spheres: The parameters are declared: number of spheres
SP, initialized radius such that satisfies (29) and the minimized distance
between two spheres β.
Step 2: Initilize spheres: Spheres are initialized by a method of randomly
angle generating which satisfies the condition in ( 30).
Step 3: Implement evolutionary iterations for each sphere.
Step 4: Move spheres belong the predefined strategy, a non-communicated
strategy is suggested, where a centroid is calculated which bases on entire
non-dominated in the sphere:
=
∑ (
)
(31)
where, is a set of N obtained non-dominated individuals, a radius ri of the
sphere i is determined:
=
, >
, ℎ
(32)
Step 5: The termination condition is examined; if it is not satisfied, go
back to Step 3.
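The centroid movement in Step 4 and the radius adaptation can be sketched as follows; this is a minimal illustration under our reading of Eqs. (31)-(32), with helper names and toy data that are not from the original papers:

```python
import numpy as np

def nd_indices(objs):
    """Indices of the non-dominated rows of `objs` (minimization)."""
    objs = np.asarray(objs)
    idx = []
    for i, p in enumerate(objs):
        if not any(np.all(q <= p) and np.any(q < p)
                   for j, q in enumerate(objs) if j != i):
            idx.append(i)
    return idx

def move_sphere(centroid, radius, pop, objs):
    """Step 4 sketch: the new centroid is the mean of the non-dominated
    decision vectors (Eq. 31); the radius grows when the centroid moves
    further than the current radius (Eq. 32). Returns the new sphere and
    the direction of improvement (old centroid -> new centroid)."""
    pop = np.asarray(pop, dtype=float)
    nd = pop[nd_indices(objs)]
    new_c = nd.mean(axis=0)                      # Eq. (31)
    step = np.linalg.norm(new_c - centroid)
    new_r = step if step > radius else radius    # Eq. (32)
    return new_c, new_r, new_c - centroid        # direction of improvement

# Tiny example with identity objectives f(x) = x; the last point is dominated.
pop = [[0.2, 0.9], [0.8, 0.3], [0.9, 0.9]]
c, r, d = move_sphere(np.array([0.5, 0.5]), 0.1, pop, objs=pop)
print(c)   # -> [0.5 0.6], the centroid of the two non-dominated points
```

The returned vector d is exactly the per-sphere direction of improvement that the combined direction of Eq. (33) sums over.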
Figure 5. The movement of a centroid to determine a direction of improvement: I)
the current sphere, where the big black dot is its centroid; II) the transition state, in
which the big white dot is the new centroid; III) the new sphere with newly
generated offsprings (small black dots). The big white arrow is the determined
direction of improvement.
In this proposal, the movement direction of a sphere is determined by the vector
created from a pair of an old centroid and a new one (see Fig. 5); this
direction is defined as a direction of improvement. The combined direction of the
system is defined as a combination of all directions of improvement of the spheres:
d⃗ = Σ_{i=1..SP} d⃗_i    (33)
where SP is the number of spheres in decision space. Based on the concept of
particle swarm optimization (PSO) rules [14], a new individual is generated by an
adaptation:
x⃗_new = x⃗_c + d⃗    (34)
where d⃗ is the direction vector, x⃗_c is an individual generated by
a crossover operator, and x⃗_new is the new individual generated by the above
adaptation.
The directions of improvement, which are created by the movement directions of the
spheres, are used to guide the evolutionary process towards different areas of the
Pareto optimal front in objective space; this helps to maintain the diversity of the
population during generations.
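Equations (33)-(34) amount to summing the per-sphere improvement directions and nudging a crossover offspring along the result; a small sketch (the sphere directions and the offspring below are illustrative values, not from the proposal):

```python
import numpy as np

# Directions of improvement of each sphere (old centroid -> new centroid).
sphere_directions = [np.array([0.1, 0.0]),
                     np.array([0.0, 0.2]),
                     np.array([0.1, 0.1])]

# Eq. (33): the combined direction is the sum over all SP spheres.
d = np.sum(sphere_directions, axis=0)

# Eq. (34): a PSO-like adaptation shifts a crossover offspring x_c
# along the combined direction to produce the new individual.
x_c = np.array([0.5, 0.5])     # offspring produced by a crossover operator
x_new = x_c + d
print(x_new)                   # -> [0.7 0.8]
```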
With the aim of improving the exploitation and exploration of MOEAs, the
authors in [10] introduce a direction-based multi-objective evolutionary algorithm
(DMEA), in which two types of improvement directions are used. These directions
of improvement are defined as below:
Convergence direction: this direction is defined as a direction from a solution
to a better solution in objective space; in multi-objective optimization problems,
the convergence direction is a direction from a dominated solution to a non-
dominated solution. If non-dominated solutions are maintained globally, this
direction is a global direction. In unconstrained MOPs, guiding the movement of a
dominated solution along this direction helps the evolutionary process to find better
areas in decision space. An illustration of the convergence direction is shown in
Fig. 6, where A is a dominated solution, B, C, D, E, F, etc. are non-dominated
solutions, and all directions from A to B, C, D, E, F, etc. are determined as
convergence directions.
Spread direction: this direction is defined as a direction between two equivalent
solutions; in multi-objective optimization problems, the spread direction is a
direction from one non-dominated solution to another non-dominated solution. If a
solution is perturbed along this direction, it helps the population to spread more
widely; in other words, the obtained solutions will be distributed along the Pareto
optimal front in objective space. An illustration of the spread direction is shown in
Fig. 6, where all directions between B, C, D, E, F, etc. are determined as spread
directions.
Figure 6. An illustration of convergence (black arrows) and spread (hollow
arrows) directions in objective space (left) and decision variable space (right).
The proposed directions of improvement are used as below:
A dominated solution Par is selected with variables Par(i) in decision space,
i ∈ {1, 2, . . . , n}, a perturbation rate 0 < p < 1, and a convergence direction d_1;
an individual S_1 is perturbed along the convergence direction by:

S_1(i) ≡ Par(i) + σ · RND(p) · d_1(i)    (35)

where σ is a scalar parameter randomly picked from the distribution U(0, 2); it is a
step length to avoid local optimums. RND(p) is a randomized number based on the
distribution U(0, 1); it equals 1 if U(0, 1) < p and 0 otherwise. A non-dominated
solution Par is selected with variables Par(i) and perturbed along a spread direction
d_2 by:

S_2(i) ≡ Par(i) + σ · RND(p) · d_2(i)    (36)
In this proposal, the directions of improvement are calculated by:

d_1 ≡ (x_a − x_b) / |x_a − x_b|    (37)

d_2 ≡ (x_b − x_c) / |x_b − x_c|    (38)
where x_a, x_b and x_c are three randomly selected solutions in the archive (an
external population used to keep elitist solutions during generations). As an
additional technique, a system of rays in objective space is used for the niching
operator in DMEA. Based on the distance from solutions to the nearest
ray, a ray-based niching is introduced and used during the search. This is an
interesting technique that is incorporated with the directions of improvement to
effectively guide the evolutionary process and maintain the exploitation and
exploration of the algorithm. In the new version of DMEA, DMEA-II [23], an
explicit niching operator is used with a set of rays that divides the space
evenly for the selection of non-dominated solutions to fill the solution archive and
the population of the next generation. We found that, with DMEA-II, solutions will
effectively converge to the Pareto optimal set under the guidance of the ray system.
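Under our reading of Eqs. (35)-(38), one DMEA-style perturbation step can be sketched as follows; the archive solutions, the helper names and the parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def unit(v):
    """Normalize a difference vector, as in Eqs. (37)-(38)."""
    return v / np.linalg.norm(v)

def perturb(par, direction, p, rng):
    """Eqs. (35)-(36): per-variable perturbation along a direction.
    sigma ~ U(0, 2) is the step length; RND(p) gates each variable
    (1 with probability p, 0 otherwise)."""
    par = np.asarray(par, dtype=float)
    sigma = rng.uniform(0.0, 2.0)
    gate = (rng.uniform(0.0, 1.0, size=par.shape) < p).astype(float)
    return par + sigma * gate * direction

rng = np.random.default_rng(0)

# Three solutions drawn from the archive (toy values, not from the paper).
x_a = np.array([1.0, 0.0])
x_b = np.array([0.0, 1.0])
x_c = np.array([0.5, 0.5])

d1 = unit(x_a - x_b)     # Eq. (37): used here as the convergence direction
d2 = unit(x_b - x_c)     # Eq. (38): used here as the spread direction

child = perturb([0.3, 0.7], d1, p=0.9, rng=rng)
print(child.shape)       # -> (2,)
```

The same `perturb` helper serves both Eq. (35) and Eq. (36); only the direction passed in changes.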
3. ADVANTAGES AND DISADVANTAGES
Through the above short survey on techniques of using directional information as
guided directions for multi-objective algorithms, the advantages and disadvantages
of each technique are indicated as below:
3.1. Gradient based direction technique
Solving multi-objective optimization problems with gradient-based directions was
discussed and used early in different approaches. In fact, gradient-based multi-
objective algorithms have some advantages: these algorithms can be used to solve
complex differentiable MOPs; since gradient-based directions are used, the multi-
objective algorithms achieve a good convergence rate; and when incorporated with
an evolution strategy in a hybridized MOEA, the algorithms can have a good
convergence rate and avoid local optimums during the search. However, there are
some difficulties in using gradient-based directions: the algorithms cannot be
used with non-differentiable MOPs, and a high computational cost is required to
determine gradient-based directions. Hybridized MOEAs that use gradient
directions in a local search model still have difficulties in determining descent
directions, Pareto descent directions and directed directions, in handling
non-differentiable MOPs, and in maintaining the balance between exploitation and
exploration of the algorithms globally.
3.2. Differential direction technique
To date, evolutionary algorithms that use the concept of differential evolution (DE)
are known as powerful and effective algorithms for solving single-objective
optimization problems. In multi-objective optimization, although it retains the same
advantages as in single-objective optimization, a multi-objective evolutionary
algorithm that uses differential directions based on the concept of differential
evolution has some difficulties: MOEAs with DE have a high convergence rate, but
it is difficult to keep the population diverse. This disadvantage can be overcome if
some mechanisms for maintaining the diversity of the population are incorporated
into the algorithms. One other difficulty is that MOEAs with DE only work on
real-valued decision spaces, so they cannot be used when the decision space is a
binary space. This difficulty can be solved by using an additional coding technique
for a space transformation.
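The differential direction discussed here is the scaled difference vector used in DE mutation; below is a minimal sketch of the classic DE/rand/1 operator (a textbook formulation, not code from any surveyed algorithm), which also makes clear why the scheme needs a real-valued decision space:

```python
import numpy as np

def de_rand_1(pop, F, rng):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3).
    The scaled difference F * (x_r2 - x_r3) is the differential
    direction; it is only meaningful on real-valued vectors."""
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

rng = np.random.default_rng(1)
pop = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = de_rand_1(pop, F=0.5, rng=rng)
print(v.shape)   # -> (2,)
```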
3.3. Direction of improvement technique
The use of directions of improvement in MOEAs is known as an effective
technique because of the aim of these directions. The directions of
improvement are used to guide the evolutionary process so that the population
converges quickly and is uniformly distributed along the Pareto optimal front. This
helps to improve the convergence rate and diversity of the obtained population.
Using directions of improvement overcomes almost all the difficulties of using
gradient-based directions and differential directions: it is quite simple to determine
directions of improvement, because these directions are determined by the
dominance relationships of solutions (or individuals) in the population (or an
external population); and the directions of improvement drive the movement of
solutions along two aspects: getting close to, and being uniformly distributed
along, the Pareto optimal front. This helps to ensure the convergence rate and the
diversity of the population, so it promises a good approximated Pareto optimal set,
the primary aim of improving MOEAs in the multi-objective optimization area.
However, there are some difficulties in using directions of improvement: keeping
the balance between exploitation and exploration is difficult because the
evolutionary process follows a stochastic mechanism. This difficulty might reduce
the convergence rate and the diversity of the population. Almost all directions of
improvement are used in a local search model for MOEAs, so they might not be
effective for global multi-objective optimization.
In recent proposals that use directions of improvement, the authors in [18]
defined directions of improvement based on dominance relationships within ranked
classes of individuals, so the search is local. Another local search using directions
of improvement appears in [9, 6, 7], where the direction of improvement is
determined by the movement of local spheres in decision space. A global search
using directions of improvement is introduced in [10], where the directions of
improvement are determined by the dominance relationships of individuals across
the entire main population and the archive during generations.
4. CONCLUSIONS
This paper analyzes the use of directional information to guide the evolutionary
process in MOEAs. The paper also indicates that the use of directions of
improvement is an effective technique: it promises that MOEAs can obtain a good
approximated Pareto optimal front through search guidance that follows
directions of improvement for the exploitation and exploration of the algorithms.
It helps the population to improve in convergence rate and diversity; however,
the balance between exploitation and exploration needs to be kept during the
evolutionary process.
In fact, when directions of improvement are used, offspring are often generated
along each direction at a balanced rate [18, 10]. It is clear that this is a 'hard'
constraint of the algorithms, since the quality of the population in terms of
convergence and diversity differs at different states of the evolutionary process. In
the early states, the population is quite diverse, so the convergence rate needs to be
increased at that time. Otherwise, in the late states, the population has strongly
converged to the Pareto optimal front, so the population needs to become more
diverse to keep the balance between convergence and diversity. Hence, the search
guidance needs to be used adaptively to keep the balance between exploitation and
exploration throughout the evolutionary process.
Besides search guidance that follows the directions of improvement automatically,
the search guidance needs to reach the decision maker's preferred regions in
objective space for a completely effective guidance. The use of directions of
improvement needs to be incorporated with a search mechanism that guides the
evolutionary process to converge to the DM's preferred region, i.e., to be close to
and uniformly distributed over some parts of the Pareto optimal front. This
combined technique is meaningful and effective when solving a real-world
multi-objective optimization problem.
Summarizing the use of guided directions as search guidance in MOEAs, this paper
indicates that the use of directions for guiding MOEAs is a promising approach.
There is a need for more investigation on how to achieve effective guidance
from both aspects: 1) automatically guiding the evolutionary process to make the
MOEA balanced between exploitation and exploration; 2) combining the decision
maker's preference with improvement directions to guide the MOEAs during the
optimization process toward the most preferred region in objective space.
REFERENCES
[1]. H.A. Abbass, R.A. Sarker, and C.S. Newton. “PDE: A pareto-frontier
differential evolution approach for multi-objective optimization problems”. In
Proceedings of the IEEE Congress on Evol. Compt (CEC2001), volume 2,
pages 971–978, Piscataway, NJ, 2001. IEEE Press.
[2]. Hussein A Abbass, Adil M Bagirov, and Jiapu Zhang. “The discrete
gradient evolutionary strategy method for global optimization”. In
IEEE Congress on Evolutionary Computation (1), pages 435–442, 2003.
[3]. P.A.N. Bosman and D. Thierens. “Multi-objective optimization with the
naive midea”. In J.A. Lozano et al, editor, Towards a New Evolutionary
Computation. Advances in Estimation of Distribution Algorithms, pages 123–
157. Springer-Verlag, Berlin,2006.
[4]. Peter A. N. Bosman and Edwin D. de Jong. “Exploiting gradient information
in numerical multi– objective evolutionary optimization”. In Proceedings of
the 2005 Conference on Genetic and Evolutionary Computation, GECCO ’05,
pages 755–762, New York, NY, USA, 2005. ACM.
[5]. Martin Brown and Robert E Smith. “Effective use of directional information
in multi-objective evolutionary computation”. In Genetic and Evolutionary
Computation (GECCO 2003), pages 778–789. Springer, 2003.
[6]. Lam T Bui, Hussein A Abbass, and Daryl Essam. “Local models-an approach
to distributed multi-objective optimization”. Computational Optimization and
Applications, 42(1):105–139, 2009.
[7]. Lam T Bui, Hussein A Abbass, and Daryl Essam. “Localization for solving
noisy multi-objective optimization problems”. Evolutionary computation,
17(3): 379–409, 2009.
[8]. Lam T. Bui and Sameer Alam. “Multi-Objective Optimization in Computational
Intelligence: Theory and Practice”. Information Science Reference. IGI
Global, 2008.
[9]. Lam Thu Bui, Kalyanmoy Deb, Hussein A Abbass, and Daryl Essam.
“Interleaving guidance in evolutionary multi-objective optimization”. Journal
of Computer Science and Technology, 23(1):44–63, 2008.
[10]. Lam Thu Bui, Jing Liu, Axel Bender, Michael Barlow, Slawomir Wesolkowski,
and Hussein A. Abbass. “DMEA: A direction-based multiobjective evolutionary
algorithm”. Memetic Computing, pages 271–285, 2011.
[11]. Fan Yang Chung Kwan and Che Chang. “A differential evolution variant of
NSGA-II for real-world multiobjective optimization”. In Proceedings of ACAL’07,
the 3rd Australian Conference on Progress in Artificial Life,
pages 345–356, 2007.
[12]. K. Deb. “Multiobjective Optimization using Evolutionary Algorithms”. John
Wiley and Son Ltd, New York, 2001.
[13]. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. “A fast and elitist
multiobjective genetic algorithm: NSGA-II”. Evolutionary Computation, IEEE
Transactions on, 6(2):182–197, 2002.
[14]. R. C. Eberhart and Y. Shi. “Particle swarm optimization: developments,
applications and resources”. In Proceedings of the Congress on Evolutionary
Computation. IEEE Press, 2001.
[15]. Jorg Fliege and Benar Fux Svaiter. “Steepest descent methods for
multicriteria optimization”. Mathematical Methods of Operations Research,
51(3):479–494, 2000.
[16]. Ken Harada and Shigenobu Kobayashi. “Local search for multiobjective
function optimization: Pareto descent method”. In The 8th Annual Conference
on Genetic and Evolutionary Computation (GECCO-2006), pages 659–666.
ACM Press, 2006.
[17]. Joel Hewlett, Bogdan Wilamowski, and G Dundar. “Merge of evolutionary
computation with gradient based method for optimization problems”. In
Industrial Electronics, 2007. ISIE 2007. IEEE International Symposium on,
pages 3304–3309. IEEE, 2007.
[18]. Antony W Iorio and Xiaodong Li. “Incorporating directional information
within a differential evolution algorithm for multi-objective optimization”. In
Proceedings of the 8th annual conference on Genetic and evolutionary
computation, pages 691–698. ACM, 2006.
[19]. Adriana Lara, Sergio Alvarado, Shaul Salomon, Gideon Avigad, Carlos A
Coello Coello, and Oliver Schütze. “The gradient free directed search
method as local search within multi-objective evolutionary algorithms”. In
EVOLVE-A Bridge between Probability, Set Oriented Numerics, and
Evolutionary Computation II, pages 153–168. Springer, 2013.
[20]. Hui Li and Qingfu Zhang. “Multiobjective optimization problems with
complicated pareto sets, MOEA/D and NSGA-II”. Evolutionary Computation,
IEEE Transactions on, 13(2):284–302, 2009.
[21]. Bo Liu, Francisco V Fernández, Qingfu Zhang, Murat Pak, Suha Sipahi, and
Georges Gielen. “An enhanced MOEA/D-DE and its application to
multiobjective analog cell sizing”. In Evolutionary Computation (CEC), 2010
IEEE Congress on, pages 1–7. IEEE, 2010.
[22]. Erick Mejia and O Schutze. “A predictor corrector method for the
computation of boundary points of a multi-objective optimization problem”.
In Electrical Engineering Computing Science and Automatic Control (CCE),
2010 7th International Conference on, pages 395–399. IEEE, 2010.
[23]. Long Nguyen, Lam T Bui, and Hussein A Abbass. “DMEA-II: the direction-
based multi-objective evolutionary algorithm-II”. Soft Computing, 18(11):
2119–2134, 2014.
[24]. Roman Denysiuk, Lino Costa, and Isabel Espirito Santo. “DDMOA: Descent
directions based multiobjective algorithm”. In Proceedings of the
Conference on Computational and Mathematical Methods in Science and
Engineering (CMMSE 12), pages 460–471. IEEE, 2012.
[25]. Roman Denysiuk, Lino Costa, and Isabel Espirito Santo. “DDMOA2: Improved
descent directions based multiobjective algorithm”. In 13th International
Conference on Computational and Mathematical Methods in Science and
Engineering. IEEE, 2013.
[26]. Stefan Schäffler, Reinhart Schultz, and Klaus Weinzierl. “Stochastic method
for the solution of unconstrained vector optimization problems”. Journal of
Optimization Theory and Applications, 114(1):209–222, 2002.
[27]. O. Schutze, A. Lara, and Carlos A. Coello Coello. “On the influence of the
number of objectives on the hardness of a multiobjective optimization
problem”. Evolutionary Computation, IEEE Transactions on, 15(4):444–455,
Aug 2011.
[28]. Pradyumn Kumar Shukla. “On gradient based local search methods in
unconstrained evolutionary multi-objective optimization”. In Evolutionary
Multi-Criterion Optimization, pages 96–110. Springer, 2007.
[29]. James C Spall. “Implementation of the simultaneous perturbation algorithm
for stochastic optimization”. Aerospace and Electronic Systems, IEEE
Transactions on, 34(3):817–823, 1998.
[30]. G Timmel. “Ein stochastisches Suchverfahren zur Bestimmung der optimalen
Kompromisslösungen bei statischen polykriteriellen Optimierungsaufgaben”.
Wiss. Z. TH Ilmenau, 6(1):5, 1980.
[31]. Q. F. Zhang and H. Li. “MOEA/D: A multi-objective evolutionary algorithm
based on decomposition”. Evolutionary Computation, IEEE Transactions on,
11(6):712–731, 2007.
ABSTRACT
A SURVEY ON TECHNIQUES OF USING DIRECTIONAL INFORMATION
FOR MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS
In many disciplines, optimization problems often have more than two
objectives, which usually conflict with each other and require finding the
optimal values of the objectives simultaneously. These problems are called
multi-objective optimization problems (MOPs). In fact, a MOP usually yields
not one but a set of solutions, called the Pareto optimal set (POS).
Evolutionary algorithms have become popular for solving multi-objective
optimization problems [8, 12] because of their ease of use, population-based
operation and wide applicability. Evolutionary algorithms allow finding a set
of Pareto optimal solutions in a single run of the algorithm, instead of
performing several runs as with traditional mathematical programming
techniques. Recently, guidance techniques have been discussed, studied and
used to guide multi-objective evolutionary algorithms during the search
toward the Pareto optimal set. In general, the guidance information is taken
from the population, individuals, the external archive and the decision
maker; this information is then used to guide the evolutionary process to
quickly converge to the POS. A good guidance technique steers the
multi-objective evolutionary algorithm to obtain a Pareto optimal solution
set of good quality in terms of convergence and diversity. This is difficult,
because the evolutionary process is stochastic in nature and it is very hard
to maintain the balance between convergence and diversity during the search.
This paper surveys and discusses how to determine and effectively use
guidance information in multi-objective evolutionary algorithms. The paper
also focuses on the use of directions of improvement to best guide the
search process in order to maintain the balance between convergence and
diversity of multi-objective evolutionary algorithms.
Keywords: MOEAs, Gradient descent direction, Direction of improvement, Directional information, PDE, DMEA-II.
Received 16 August 2017
Revised 26 November 2017
Accepted 28 November 2017
Address: 1National Defense Academy;
2Le Quy Don Technical University;
3MITI, Military Academy of Science and Technology.
* Email: longn@mta.edu.vn.