<h1>Mentales habitudes - Tag: statistics</h1>
<h2>Multi-criteria meta-parameter tuning for mono-objective stochastic metaheuristics</h2>
<p><em>2008-11-10, by nojhan. Tags: conferences, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>Here are some explanations about the work I have presented <a href="http://metah.nojhan.net/post/2008/11/06/2nd-International-Conference-on-Metaheuristics-and-Nature-Inspired-Computing">
at the META'08 conference</a>. This post is based on the notes I used for my
presentation.</p>
<p>In all the automatic parameter setting methods, the problem of finding the
best parameter set is considered as an optimization problem with only one
objective: generally finding the best optimum, or reducing the uncertainty of
the results. Sometimes, one tries to improve the speed. More rarely, speed,
precision or robustness are aggregated into one criterion, with an ad hoc formula.
In fact, one can set parameters according to several objectives (improve speed,
improve robustness, etc.). One cannot find a set of parameters fitting all the
potential uses of a single algorithm on a single problem instance. Thus,
parameter setting is a multi-objective problem.</p>
<p>The key point here is that it is easier to set the parameters of a solver
than to solve the problem directly. The simplest example of this idea is when
you want to solve a continuous optimization problem with hundreds of variables,
with a metaheuristic that has 3 parameters. Moreover, you only have to tune
your parameters once, even if you will solve many problem instances later.</p>
<p>In this work, I only consider speed and precision, although the method may
handle any performance metric.</p>
<p>What is crucial in our method is that we do not want to aggregate the
criteria; instead, we want the Pareto front corresponding to all the
non-dominated parameter sets. I use plots representing the Pareto front, which I
will sometimes call the « performance » front, or performance
profile.</p>
<p>The idea is that one can then compare several algorithms more rigorously, by
comparing their respective performance fronts. We also benefit from having
a cursor, scaling from a behaviour oriented towards speed, at one extreme, to
precision, at the other. Even more interesting is the performance profile
projected on the parameter space. One can see that every algorithm has its
very own profile, which tells a lot about how it behaves.</p>
<p><a href="http://metah.nojhan.net/public/parameters_all.png"><img src="http://metah.nojhan.net/public/./.parameters_all_s.jpg" alt="Performance profiles of 4 metaheuristics" style="display:block; margin:0 auto;" title="Performance profiles of 4 metaheuristics: a Simulated Annealing, a Genetic Algorithm, two Estimation of Distribution Algorithms. Produced by NSGA-II, with median estimation, on the Rosenbrock-2D problem, using the parameter corresponding to the sampling density for each method with an absolute time stopping criterion." /></a>
<em>The figure above shows the performance profiles of 4 metaheuristics: a
Simulated Annealing, a Genetic Algorithm, and two Estimation of Distribution
Algorithms (produced by NSGA-II, with median estimation, on the Rosenbrock-2D
problem, using the parameter corresponding to the sampling density for each
method, with an absolute time stopping criterion).</em></p>
<p>Our results suggest that the choice of the stopping criterion has a drastic
influence on the relevance of the performance profile, so it must be chosen
carefully. Similarly, the method cannot naturally find a unique profile for a
set of problem instances, but is strictly valid only for an instance of a given
problem. Finally, we note that the performance profiles are often convex in the
objective space, which could indicate that aggregation may be useful.</p>
<p>The proposed method makes it possible to aggregate all parameters into one,
determining the position within the performance profile, ranging from a
behaviour strongly oriented towards production (fast, inaccurate) to one
oriented towards design (slow, accurate). The projection of the profile onto
the parameter space can also reveal the impact of parameters on performance, or
dependencies between parameters. Such information may be very relevant to
better understand some complex metaheuristics. It also becomes possible to
compare several metaheuristics, by plotting their performance profiles on the
same scale. Statistical validation also gains additional dimensions of
discrimination.</p>
<p>As perspectives, the computational cost of the meta-optimizer remains to be
reduced, using dedicated methods (SPO, racing, etc.). It is also possible to
extend the method by taking robustness into account as a supplementary
objective, and by checking the possibility of rebuilding correlations across a
set of instances.</p>
<p>Finally, here are the slides. I use light slides without a lot of text, so I
suggest that you read the notes while looking at the presentation. You will
find the abstract, the extended abstract and the slides on my professional
website, at the <a href="http://www.nojhan.net/pro/spip.php?article29" hreflang="en">corresponding publication page</a>.</p>
<object type="application/x-shockwave-flash" data="http://s3.amazonaws.com/slideshare/ssplayer.swf?id=118457&doc=meta08dreoparameters-1226652498095339-8" height="348" width="425"><param name="movie" value="http://s3.amazonaws.com/slideshare/ssplayer.swf?id=118457&doc=meta08dreoparameters-1226652498095339-8" /></object>
<h2>2nd International Conference on Metaheuristics and Nature Inspired Computing</h2>
<p><em>2008-11-06, by nojhan. Tags: conferences, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>I've just attended the <a href="http://www2.lifl.fr/META08/" hreflang="en">META 2008 international conference on metaheuristics and nature inspired
computing</a>.</p>
<p>The weather was nice in Tunisia, we had a place to sleep, a restaurant and a
swimming pool; the organization was just fine. The acceptance rate was 60%,
with 116 accepted papers for 130 attendees, and one short review per paper (at
least for mine).</p>
<p>OK, now let's talk about what is really exciting: science.</p>
<p>I was more than pleased to attend two tutorials, given by Dick Thierens
and Thomas Stützle, both of which were about the use of stochastic local
search.</p>
<p>What was definitely interesting is that these two talented researchers
insisted a lot on the need for a rigorous experimental approach for the design
and the validation of metaheuristics. That's good news for our research domain:
the idea that metaheuristics should be employed in a scientific manner rather
than in an artistic one is gaining more and more importance.</p>
<p>First, they both said that a good way to tackle a hard optimization problem
is to employ a bottom-up approach: start first with a simple local search, then
use metaheuristic operators to improve the results.</p>
<p>Thomas Stützle, in particular, insisted on the crucial need for rigorous
parameter setting and experimental validation with statistical tests. That's
definitely a very important point.</p>
<p>Another good point made by Thomas is the use of the term "algorithm
engineering" to describe a rigorous design and evaluation approach for
optimization algorithms. I was searching for a nice term to name it, and I
think this one is a good candidate. The bad news at this conference is that,
despite these two incredible tutorials, very few people were speaking about
algorithm engineering. I was presenting a <a href="http://www.nojhan.net/pro/spip.php?article29" hreflang="en">new method for
parameter setting and behaviour understanding</a>, but I was in an unrelated
"metaheuristics for real-world problems" session. I haven't seen other works
specifically dedicated to such subjects.</p>
<p>Worse, I attended several presentations with very poor
experimental work. Some people keep claiming that their stochastic algorithm is
better only by showing the best result found. More often, there is a mean and a
standard deviation, but without a statistical test. But there is hope: since
2001 (when <a href="http://mistic.heig-vd.ch/taillard/codes.dir/comparaison.dir/comparaison.htm" hreflang="en">some works by Éric D. Taillard</a> definitively introduced
experimental validation for metaheuristics, at least for me), I find that the
proportion of better experimental designs is increasing in the literature.</p>
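<p>To illustrate the kind of test that is missing from such presentations, here is a minimal two-sample permutation test in pure Python. The run values are hypothetical, and this is only one possible choice; rank-based tests such as Mann-Whitney are equally common for comparing stochastic runs.</p>

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of means.

    Returns the fraction of random label shufflings whose mean
    difference is at least as extreme as the observed one
    (an empirical p-value).
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_perm

# Hypothetical final objective values over repeated stochastic runs:
algo_a = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15]
algo_b = [0.21, 0.19, 0.22, 0.20, 0.23, 0.18, 0.21, 0.24]
p = permutation_test(algo_a, algo_b)
print(p)  # a small p-value: the gap is unlikely to be chance alone
```

Reporting such a p-value alongside the mean and standard deviation is a much stronger claim than showing the best run found.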
<p>Anyway, my wish is that there will be more and more special sessions on
algorithm engineering in future conferences on metaheuristics. In the meantime,
there is the <a href="http://iridia.ulb.ac.be/~sls/sls2009" hreflang="en">2nd
"Engineering Stochastic Local Search Algorithms" conference</a>, in September
2009, in Brussels, which seems really interesting...</p>
<h2>Metaheuristic validation in a nutshell</h2>
<p><em>2008-06-18, by nojhan. Tags: design, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>People using metaheuristics often forget that the price to pay for their
ease of adaptation to a new problem is the <strong>hard validation
work</strong>. There are several things to keep in mind when using a
metaheuristic, especially when one wants to <strong>prove</strong> that it
works <strong>in practice</strong>.</p>
<p>This (kind of) mind map tries to list what you should do, and a short set of
the main tools to do it. It is not always mandatory to use all the tools:
sometimes it is just a matter of choice (as for parameter setting), sometimes
the more you do, the better (as for performance assessment).</p>
<p>The graphic has been drawn in SVG, and I have put some references in a very
small font at the bottom of some boxes. Thus, it is more comfortable to
view it in Firefox or in Inkscape, and to zoom where needed. Try the <a href="http://metah.nojhan.net/public/metaheuristics_doe_en.svg">SVG version</a>.</p>
<p><a href="http://metah.nojhan.net/public/metaheuristics_doe_en.png"><img src="http://metah.nojhan.net/public/./.metaheuristics_doe_en_m.jpg" alt="Metaheuristic design" style="display:block; margin:0 auto;" /></a></p>
<h2>Random draw in a sphere</h2>
<p><em>2007-03-28, by nojhan. Tags: statistics.</em></p>
<p>When adapting combinatorial metaheuristics to continuous problems, one
sometimes uses a sphere as an approximation of the "neighbourhood". The idea is
thus to draw the neighbours around a solution, for example in order to apply a
simple mutation in a genetic algorithm.</p>
<p>Sometimes, one chooses to use a uniform law, but how does one draw random
vectors <em>uniformly</em> in a hypersphere?</p>
<p>The first idea that comes to mind is to use a polar coordinate system and
draw the radius <em>r</em> and the angles
<em>a<sub>1</sub>, a<sub>2</sub>, …, a<sub>N</sub></em> with a
uniform law. Then, one converts the coordinates into the rectangular system,
<em>x<sub>1</sub>, x<sub>2</sub>, …, x<sub>N</sub></em>.</p>
<p>The result may be interesting for metaheuristic design, but it is not a uniform
distribution:</p>
<img src="/public/randHS_false.png" />
<p>The correct method is to draw each <em>x<sub>i</sub></em> according to:
<em>x<sub>i</sub>=(r<sup>1/N</sup>·a<sub>i</sub>)/√(∑<sub>j</sub>a<sub>j</sub><sup>2</sup>)</em><br />
(in LaTeX: <code>$x_{i}=\frac{r^{\frac{1}{N}}\cdot
a_{i}}{\sqrt{{\displaystyle \sum _{j=1}^{N}a_{j}^{2}}}}$</code>)</p>
<p>with <em>r</em> uniformly drawn in <em>U(0,1)</em> and each
<em>a<sub>j</sub></em> following a normal law <em>N(0,1)</em>: the
denominator is the Euclidean norm of the Gaussian vector, which gives a
uniform direction, and <em>r<sup>1/N</sup></em> gives the uniform radial
density.</p>
<p>The result is then a true uniform distribution:</p>
<img alt="" src="http://metah.nojhan.net/public/randHS_ok.png" /><br />
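<p>The draw described above can be sketched in a few lines of Python. This is a minimal, illustrative implementation using only the standard library, not code from the original post.</p>

```python
import math
import random

def uniform_in_sphere(n, rng=random):
    """Draw one point uniformly inside the unit n-dimensional sphere.

    A vector of independent N(0,1) draws gives a uniform direction
    once divided by its Euclidean norm; scaling by r**(1/n) with
    r ~ U(0,1) gives the uniform radial density.
    """
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(ai * ai for ai in a))
    radius = rng.random() ** (1.0 / n)
    return [radius * ai / norm for ai in a]

# Every drawn point lies inside the unit sphere:
pts = [uniform_in_sphere(3) for _ in range(1000)]
assert all(sum(x * x for x in p) <= 1.0 for p in pts)
```

Scaling the result by the neighbourhood radius and translating it to the current solution yields the mutation draw discussed above.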
<p>Credit goes to <a href="http://www.mauriceclerc.net/">Maurice Clerc</a> for
detecting and solving the problem.</p>