Ewa Gajda-Zagórska, Konrad Gądek, Radosław Łazarz, Michał Idzik, Robert Schaefer
We propose treating the Hierarchical Genetic Algorithm (HGS) [kolodziej2001hierarchical, HGSGeneticSearchReinforced, HGSRealNumber, jojczyk2012global, HGSSimpletons] as a meta-optimizing tool for Multi-objective Evolutionary Algorithms (MOEAs). We argue that using HGS helps to find better Pareto set approximations and/or to find them faster. We evaluate several standard multi-objective algorithms [zitzler2004tutorial] on well-known benchmark problems [zitzler2000comparison] and then compare the results with those obtained when using HGS.
Evolutionary Algorithms (EAs) are an invaluable tool providing solutions for complex problems for which no better methods are known, or for which even approximate methods are too complex to apply. EAs are not only general, they are also relatively easy to use and implement: in practice, the only requirement is providing a fitness function that indicates how fit a given solution is.
Some problem domains involve multiple, often conflicting, objectives, each to be optimized. One popular method is to steer an EA in a particular direction by combining the results for all objectives into a single value. Unfortunately, the results of such a search strategy give little insight into the problem and reveal no other, possibly more promising, solutions with different balances between objectives. This is why multi-objective evolutionary algorithms (MOEAs) are used. MOEAs work by maintaining and returning a set of promising solutions, each with a different balance between objectives. The result of such a search may give us more insight into the problem. Moreover, given the results of a MOEA, it is trivial to find (nearly) optimal solutions for any desired balance between objectives.
Given such requirements, MOEAs are somewhat more complicated. Besides trying to optimize a set of solutions, they need to maintain the diversity of that set so that the results remain meaningful and accurate [zitzler2004indicator]. There is even more to consider, e.g. what happens if some results yield exactly the same objective values as others while the decision vectors are not equal [preuss2006pareto, rudolph2007capabilities], how to specify the trade-off between minimizing the distance to the Pareto-optimal set and maximizing the diversity of the set [zitzler2004indicator], or how to deal with the NP-hard problem of calculating the hypervolume indicator [bradstreet2011hypervolume].
Multiple algorithms exist that try to solve the same problems in different ways. They tend to exhibit different performance characteristics, and thus multiple comparisons and analyses are available [zitzler2000comparison, zitzler1999multiobjective, laumanns2002running].
Our goal was to look again at popular algorithms and verify their performance. We implemented them in a way that allows the Hierarchical Genetic Algorithm to work cooperatively with them, in order to analyze whether this yields differences in performance. In this paper, we summarize our work and give brief conclusions.
Let us consider the parameter space\[ \vec{x} = (x_1, x_2, \ldots, x_m) \in \mathbf{X}\subset\mathbb{R}^m \]and the objective space\[ \vec{y} = (y_1, y_2, \ldots, y_n) \in \mathbf{Y}\subset\mathbb{R}^n. \]Given a fitness function\[
f\colon \mathbf{X}\to\mathbf{Y}
\]we want to find those vectors in the parameter space that simultaneously minimize all components of the objective vector.
Dominance is a strict partial order relation between objective vectors. We say that \(\vec{y}^{(1)}\) dominates \(\vec{y}^{(2)}\) if every component of the former vector is at least as good as the corresponding component of the latter and at least one of the components is strictly better. More formally:\[
\vec{y}^{(1)}\prec\vec{y}^{(2)} \stackrel{\mathrm{def}}{\iff} \left(\forall i \in \{1, 2, \ldots, n\}\colon y^{(1)}_i \leq y^{(2)}_i\right)\\
\land
\left(\exists i \in \{1, 2, \ldots, n\}\colon y^{(1)}_i < y^{(2)}_i\right)
\]
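This componentwise test translates directly into code. The following is a minimal sketch (the function name and the minimization convention are our illustrative assumptions, not taken from any of the implementations discussed):

```python
from typing import Sequence

def dominates(y1: Sequence[float], y2: Sequence[float]) -> bool:
    """True iff objective vector y1 dominates y2 (minimization assumed):
    every component of y1 is no worse, and at least one is strictly better."""
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

# (1, 2) dominates (2, 2); (1, 3) and (2, 2) are mutually non-dominated.
```

Note that the relation is irreflexive: a vector never dominates itself, since no component can be strictly better.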
We can extend this relation to the parameter space: a parameter vector dominates another if the objective vector corresponding to the former dominates the objective vector corresponding to the latter:\[
\vec{x^{(1)}}\prec\vec{x^{(2)}}\iff f(\vec{x^{(1)}}) \prec f(\vec{x^{(2)}})
\]
A solution is Pareto-optimal iff no component of its objective vector can be further minimized without increasing the value of some other component(s). Equivalently, a solution is optimal if there is no other solution that dominates it. The set of non-dominated decision vectors is called the Pareto set and the set of the corresponding non-dominated objective vectors is called the Pareto front.
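For a finite set of objective vectors, the Pareto front can be extracted by filtering out every dominated vector. A naive quadratic-time sketch (the function names are our own, chosen for illustration):

```python
from typing import List, Sequence, Tuple

def dominates(y1: Sequence[float], y2: Sequence[float]) -> bool:
    """True iff y1 dominates y2 under minimization."""
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

def pareto_front(ys: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Return the non-dominated subset of a finite set of objective vectors."""
    return [y for y in ys if not any(dominates(other, y) for other in ys)]

points = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)]
# (3, 3) is dominated by (2, 2) and (2, 5) by (1, 4),
# so the front is [(1, 4), (2, 2), (4, 1)].
```

Production MOEAs use faster non-dominated sorting schemes, but this captures the definition directly.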
A MOEA is an algorithm that, given the fitness/objective function \(f\) and a description of its domain \(\mathbf{X}\) and codomain \(\mathbf{Y}\), returns an approximation of the Pareto set based on evaluations of that function.
NSGA-II [deb2000fast] is a much improved version of the Non-dominated Sorting Genetic Algorithm (NSGA).
TO BE EXTENDED
SPEA2 [zitzler2001spea2] stands for Strength Pareto Evolutionary Algorithm 2 and is an improved version of SPEA. Its main concept is to keep a non-dominated archive of constant size throughout the whole evolutionary process, truncating solutions that are too close to each other. Analysis shows that it is a significant improvement over its predecessor and is comparable in performance to NSGA-II.
The Indicator-Based Evolutionary Algorithm (IBEA) [zitzler2004indicator] was created because contemporary algorithms made implicit assumptions about the trade-off between minimizing the distance to the Pareto-optimal set and maximizing the diversity of the optimized set. IBEA instead introduces a customizable indicator as the performance goal.
HGS
TO BE EXTENDED
Move some refs from abstract here.
Relation \(\prec\) is:
Take a look at the difference: SPEA2 vs NSGA-II (Arabic vs Roman numerals).
Shall we describe binary indicator?
\[
I\colon \Omega \times \Omega \to \mathbb{R}
\]
\[
I_{\epsilon^+}(A, B) = \min \left\{
\epsilon \in \mathbb{R} \mid \forall x^2 \in B \; \exists x^1 \in A\colon f_i(x^1) - \epsilon \leq f_i(x^2)\ \mathrm{for}\ i \in \{1, 2, \ldots, n\}
\right\}
\]
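Equivalently, \(I_{\epsilon^+}(A, B)\) can be computed in closed form as a max-min-max over objective values. A sketch under our own naming, operating directly on objective vectors rather than decision vectors:

```python
from typing import Sequence

def eps_indicator(A: Sequence[Sequence[float]],
                  B: Sequence[Sequence[float]]) -> float:
    """Additive epsilon indicator I_eps+(A, B): the smallest epsilon such
    that every vector in B is weakly dominated by some vector in A shifted
    by epsilon in every objective (minimization assumed)."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )

# A set that already weakly dominates B needs no shift:
# eps_indicator([(1, 2), (2, 1)], [(2, 2)]) == 0
```

A negative value indicates that A strictly dominates B with room to spare; a positive value measures how far A must be shifted to cover B.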
A binary quality indicator is dominance preserving iff
\[
x^1 \prec x^2 \Rightarrow I(\{x^1\}, \{x^2\}) < I(\{x^2\}, \{x^1\})
\]and\[
x^1 \prec x^2 \Rightarrow \forall x^3 \in X\colon I(\{x^3\}, \{x^1\}) \geq I(\{x^3\}, \{x^2\})
\]
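One can check the first condition numerically for the additive \(\epsilon\)-indicator: if \(x^1 \prec x^2\), then \(I(\{x^1\}, \{x^2\}) < I(\{x^2\}, \{x^1\})\). A self-contained sketch (function name and example values are ours):

```python
def eps_indicator(A, B):
    """Additive epsilon indicator on sets of objective vectors (minimization)."""
    return max(min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
               for b in B)

# Suppose f(x1) = (1, 2) dominates f(x2) = (2, 2):
forward = eps_indicator([(1, 2)], [(2, 2)])   # shift needed for {x1} to cover {x2}
backward = eps_indicator([(2, 2)], [(1, 2)])  # shift needed for {x2} to cover {x1}
# forward == 0 < 1 == backward, as dominance preservation requires.
```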