Mentales habitudes - Tag: performance assessment
<h2>Multi-criteria meta-parameter tuning for mono-objective stochastic metaheuristics</h2>
<p><em>2008-11-10, by nojhan. Filed under: Divers. Tags: conferences, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>Here are some explanations about the work I presented <a href="http://metah.nojhan.net/post/2008/11/06/2nd-International-Conference-on-Metaheuristics-and-Nature-Inspired-Computing">at the META'08 conference</a>. This post is based on the notes I used for my presentation.</p>
<p>In all automatic parameter setting methods, finding the best parameter set is treated as an optimization problem with a single objective: generally finding the best optimum, or reducing the uncertainty of the results. Sometimes one tries to improve the speed. More rarely, speed, precision or robustness are aggregated into a single criterion with an ad hoc formula. In fact, one can set parameters according to several objectives (improving speed, improving robustness, etc.), and one cannot find a single parameter set fitting all the potential uses of a given algorithm on a given problem instance. Thus, parameter setting is a multi-objective problem.</p>
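<p>To make this multi-objective view concrete, here is a minimal sketch in Python, where <code>run_solver</code> is a hypothetical callable standing in for one stochastic run of the tuned metaheuristic (the function and parameter names are mine, for illustration): a single parameter set is scored on two criteria at once, speed and precision.</p>

```python
import statistics

def evaluate_parameters(run_solver, params, n_runs=30):
    """Score one parameter set on two objectives at once.

    `run_solver(params)` is a hypothetical callable standing in for one
    stochastic run of the tuned algorithm; it returns a pair
    (evaluations_used, best_value_found).
    """
    costs, values = [], []
    for _ in range(n_runs):
        evals, best = run_solver(params)
        costs.append(evals)
        values.append(best)
    # Median estimation: more robust than the mean against the
    # heavy-tailed behaviour of stochastic runs.
    return statistics.median(costs), statistics.median(values)
```

<p>A meta-optimizer can then search the parameter space for sets whose (speed, precision) pairs are mutually non-dominated.</p>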
<p>The key point here is that it is easier to set the parameters of a solver than to solve the problem directly. The simplest example of this idea is when you want to solve a continuous optimization problem with hundreds of variables using a metaheuristic that has only 3 parameters. Moreover, you only have to tune your parameters once, even if you will solve many problem instances later.</p>
<p>In this work, I only consider speed and precision, although the method can handle any performance metric.</p>
<p>What is crucial in our method is that we do not want to aggregate the criteria; instead, we want the Pareto front corresponding to all the non-dominated parameter sets. I use plots representing the Pareto front, which I will sometimes call the « performance » front, or performance profile.</p>
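<p>For illustration, a minimal non-dominated filter (a sketch, assuming both criteria are to be minimized, e.g. running time and error) can be written as:</p>

```python
def pareto_front(points):
    """Return the non-dominated points among (time, error) pairs.

    A point p is dominated if some other distinct point q is at least
    as good on both criteria (with both criteria minimized, a distinct
    q that is <= on both is strictly better on at least one).
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

<p>Each surviving point is one parameter set lying on the « performance » front.</p>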
<p>The idea is that one can then compare several algorithms more rigorously, by comparing their respective performance fronts. We also benefit from having a cursor, scaling the behaviour from one oriented towards speed, at one extreme, to one oriented towards precision, at the other. Even more interesting is the performance profile projected onto the parameter space. One can see that every algorithm has its very own profile, which tells a lot about how it behaves.</p>
<p><a href="http://metah.nojhan.net/public/parameters_all.png"><img src="http://metah.nojhan.net/public/./.parameters_all_s.jpg" alt="Performance profiles of 4 metaheuristics" style="display:block; margin:0 auto;" title="Performance profiles of 4 metaheuristics: a Simulated Annealing, a Genetic Algorithm, two Estimation of Distribution Algorithms. Produced by NSGA-II, with median estimation, on the Rosenbrock-2D problem, using the parameter corresponding to the sampling density for each method with an absolute time stopping criterion." /></a>
<em>The figure above shows the performance profiles of 4 metaheuristics: a Simulated Annealing, a Genetic Algorithm, and two Estimation of Distribution Algorithms (produced by NSGA-II, with median estimation, on the Rosenbrock-2D problem, using the parameter corresponding to the sampling density for each method, with an absolute time stopping criterion).</em></p>
<p>Our results suggest that the choice of the stopping criterion has a drastic influence on the usefulness of the performance profile; it must be chosen carefully. Similarly, the method cannot naturally find a unique profile for a set of problem instances, but is strictly valid only for a single instance of a given problem. Finally, we note that the performance profiles are often convex in the objective space, which could indicate that aggregation may be useful.</p>
<p>The proposed method makes it possible to aggregate all parameters into a single one, determining the position within the performance profile, from a behaviour strongly oriented towards production (fast, inaccurate) to one oriented towards design (slow, accurate). The projection of the profile onto the parameter space can also reveal the impact of parameters on performance, or dependencies between parameters. Such information may be very relevant for better understanding some complex metaheuristics. It also becomes possible to compare several metaheuristics by plotting their performance profiles on the same scale. Statistical validation also gains additional dimensions of discrimination.</p>
<p>As perspectives, it remains to reduce the computational demands of the meta-optimizer, using dedicated methods (SPO, racing, etc.). It is also possible to extend the method by taking robustness into account as a supplementary objective, and by checking the possibility of recovering correlations across a set of instances.</p>
<p>Finally, here are the slides. I use light slides without a lot of text, so I suggest that you read the notes while looking at the presentation. You will find the abstract, the extended abstract and the slides on my professional website, on the <a href="http://www.nojhan.net/pro/spip.php?article29" hreflang="en">corresponding publication page</a>.</p>
<object type="application/x-shockwave-flash" data="http://s3.amazonaws.com/slideshare/ssplayer.swf?id=118457&doc=meta08dreoparameters-1226652498095339-8" height="348" width="425"><param name="movie" value="http://s3.amazonaws.com/slideshare/ssplayer.swf?id=118457&doc=meta08dreoparameters-1226652498095339-8" /></object>
<h2>2nd International Conference on Metaheuristics and Nature Inspired Computing</h2>
<p><em>2008-11-06, by nojhan. Filed under: Bibliographie. Tags: conferences, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>I've just attended the <a href="http://www2.lifl.fr/META08/" hreflang="en">META 2008 international conference on metaheuristics and nature inspired computing</a>.</p>
<p>The weather was nice in Tunisia, we had a place to sleep, a restaurant and a swimming pool; the organization was just fine. The acceptance rate was 60%, with 116 accepted papers for 130 attendees, and one short review per paper (at least for mine).</p>
<p>OK, now let's talk about what is really exciting: science.</p>
<p>I was more than pleased to attend two tutorials, given by Dick Thierens and Thomas Stützle, both of which were about the use of stochastic local search.</p>
<p>What was definitely interesting is that these two talented researchers insisted a lot on the need for a rigorous experimental approach to the design and validation of metaheuristics. That's good news for our research domain: the idea that metaheuristics should be employed in a scientific manner rather than an artistic one is gaining more and more importance.</p>
<p>First, they both said that a good way to tackle a hard optimization problem is to employ a bottom-up approach: start with a simple local search, then use metaheuristic operators to improve the results.</p>
<p>Thomas Stützle, in particular, insisted on the crucial need for rigorous parameter setting and experimental validation with statistical tests. That's definitely a very important point.</p>
<p>Another good point made by Thomas is the use of the term "algorithm engineering" to describe a rigorous approach to the design and evaluation of optimization algorithms. I had been searching for a nice term to name it, and I think this one is a good candidate. The bad news at this conference is that, despite these two excellent tutorials, very few people were speaking about algorithm engineering. I was presenting a <a href="http://www.nojhan.net/pro/spip.php?article29" hreflang="en">new method for parameter setting and behaviour understanding</a>, but I was in an unrelated "metaheuristics for real-world problems" session. I haven't seen other works specifically dedicated to such subjects.</p>
<p>Worse, I attended several presentations with very poor experimental work. Some people keep claiming that their stochastic algorithm is better while only showing the best result found. More often, there is a mean and a standard deviation, but no statistical test. But there is hope: since 2001 (when <a href="http://mistic.heig-vd.ch/taillard/codes.dir/comparaison.dir/comparaison.htm" hreflang="en">some works by Éric D. Taillard</a> definitely introduced experimental validation for metaheuristics, at least for me), I find that the proportion of sound experimental plans in the literature is increasing.</p>
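<p>To be concrete, here is a minimal sketch of such a test in pure Python: a two-sided Mann-Whitney U test (rank-sum, normal approximation, no tie correction), which compares the full distributions of final values of two stochastic algorithms over repeated runs, instead of only their means. In practice one would rather use a vetted implementation, such as the one in SciPy.</p>

```python
import math

def _midranks(values):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test (normal approximation, no tie
    correction; reasonable for samples of about 20 runs or more)."""
    n1, n2 = len(a), len(b)
    r = _midranks(list(a) + list(b))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu + 0.5) / sigma       # continuity correction
    p = math.erfc(-z / math.sqrt(2))  # 2 * Phi(z), two-sided p-value
    return u, min(1.0, p)
```

<p>A small p-value indicates that the difference between the two samples of results is unlikely to be due to the stochasticity of the algorithms alone.</p>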
<p>Anyway, my wish is that there will be more and more special sessions on algorithm engineering at future conferences on metaheuristics. In the meantime, there is the <a href="http://iridia.ulb.ac.be/~sls/sls2009" hreflang="en">2nd "Engineering Stochastic Local Search Algorithms" conference</a>, in September 2009, in Brussels, which seems really interesting...</p>
<h2>Metaheuristic validation in a nutshell</h2>
<p><em>2008-06-18, by nojhan. Filed under: Divers. Tags: design, metaheuristic, parameter-setting, performance assessment, statistics, validation.</em></p>
<p>People using metaheuristics often forget that the price to pay for their ease of adaptation to a new problem is the <strong>hard validation work</strong>. There are several things to keep in mind when using a metaheuristic, especially when one wants to <strong>prove</strong> that it works <strong>in practice</strong>.</p>
<p>This (kind of) mind map tries to list what you should do, along with a short set of the main tools to do it. It is not always mandatory to use all the tools: sometimes it is just a matter of choice (as for parameter setting), and sometimes the more you do, the better (as for performance assessment).</p>
<p>The graphic has been drawn in SVG, and I have put some references in a very small font at the bottom of some boxes. Thus, it is more comfortable to view it in Firefox or in Inkscape, and to zoom where needed. Try the <a href="http://metah.nojhan.net/public/metaheuristics_doe_en.svg">SVG version</a>.</p>
<p><a href="http://metah.nojhan.net/public/metaheuristics_doe_en.png"><img src="http://metah.nojhan.net/public/./.metaheuristics_doe_en_m.jpg" alt="Metaheuristic design" style="display:block; margin:0 auto;" /></a></p>
<h2>Error metrics</h2>
<p><em>2007-07-05, by nojhan. Filed under: Divers. Tags: performance assessment.</em></p>
<p>Many metrics are used to assess the quality of the approximation found by metaheuristics. Two of them are used very often: the distance to the true optimum according to its <em>position</em>, and according to its <em>value</em>.</p>
<p>Unfortunately, the shape of the objective function can vary a lot in real-world problems, making these metrics difficult to interpret. For example, if the optimum lies in a very deep valley (in value), a solution close to it in position does not necessarily mean that the algorithm has learned the shape of that valley well. Conversely, a solution close to the optimum in value is not necessarily located in the same valley.</p>
<p>One metric that can counter these drawbacks is a distance taking into account the parameters of the problem as well as the value dimension.</p>
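<p>As an illustration, such a combined metric could be sketched as follows (a hypothetical weighted Euclidean distance mixing the positional and value dimensions; the weight, like the type of distance, remains problem-dependent):</p>

```python
import math

def combined_error(solution, optimum, f, value_weight=1.0):
    """Distance to the optimum mixing position and value.

    `solution` and `optimum` are points of the search space, `f` is the
    objective function, and `value_weight` balances the value dimension
    against the positional one (a problem-dependent choice).
    """
    d_position = math.dist(solution, optimum)
    d_value = abs(f(solution) - f(optimum))
    return math.hypot(d_position, value_weight * d_value)
```

<p>A solution in a distant valley then scores poorly even if its value is close to the optimal one, and vice versa.</p>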
<p>Anyway, the question of which type of distance to use depends on the problem.</p>
<h2>Metaheuristics and experimental research</h2>
<p><em>2006-12-10, by nojhan. Filed under: Divers. Tags: evolutionary computation, performance assessment.</em></p>
<p>Springer has just published a book on "<a href="http://www.springer.com/east/home/computer/foundations?SGWID=5-156-22-142872140-detailsPage=ppmmedia" hreflang="en">Experimental Research in Evolutionary Computation</a>", written by <a href="http://ls11-www.cs.uni-dortmund.de/people/tom/" hreflang="en">Thomas Bartz-Beielstein</a>.</p>
<p>Thomas Bartz-Beielstein works on the statistical analysis of the behaviour of metaheuristics (see <a href="http://ls11-www.cs.uni-dortmund.de/people/tom/#Talks_and_Tutorials" hreflang="en">his tutorials</a> at GECCO and CEC), and the publication of his book is a really great thing. I haven't read it yet, but the table of contents seems really promising. There is a true need for such work in the metaheuristics community, and in stochastic optimization in general.</p>
<p>A friend told me that the lack of experimental culture in the computer science community was a form of consensus, perhaps because the theoretical aspects of mathematics were seen as the "only way to make true science". This is a real problem when you deal with stochastic algorithms applied to real-world problems. Despite the fact that several early papers called for more rigorous experimental studies of metaheuristics (<a href="http://mistic.heig-vd.ch/taillard/" hreflang="en">E.D. Taillard</a> wrote papers on this problem several years ago, for example), the community does not seem to react quickly.</p>
<p>Yet things are changing: after the series of CEC special sessions on benchmarks for metaheuristics, there are more and more papers on how to test stochastic optimization algorithms and present the results. I think this book comes at the right time... the next step will be to promote the dissemination of the result data (and code!), in an open format, along with the papers.</p>
<h2>Metaheuristics &amp; benchmarks at CEC</h2>
<p><em>2006-08-30, by nojhan. Filed under: Divers. Tags: benchmark, performance assessment.</em></p>
<p>The IEEE Congress on Evolutionary Computation (CEC) is a well-known event that takes place every year.</p>
<p>Since 2005, there has been an interesting group of special sessions, organized by <a href="http://www.ntu.edu.sg/home/EPNSugan/">Ponnuthurai Nagaratnam Suganthan</a>:</p>
<ul>
<li>2005, <a href="http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-05/CEC05.htm">Real-Parameter
Optimization</a>,</li>
<li>2006, <a href="http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-06/CEC06.htm">Constrained
Real-Parameter Optimization</a>,</li>
<li>2007, <a href="http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-07/CEC07.htm">Performance
Assessment of Multi-Objective Optimization Algorithms</a>.</li>
</ul>
<p>What is really interesting in these sessions is the systematic presence of an <em>implemented</em>, general-purpose benchmark, built after discussion among researchers.</p>
<p>This is an extremely necessary practice which is, unfortunately, not yet generalized. Indeed, it is the first step toward a rigorous performance assessment of metaheuristics (the second being a true statistical approach, and the third a careful presentation of the data).</p>