Mentales habitudes


Tag - performance assessment


Monday, November 10, 2008

Multi-criteria meta-parameter tuning for mono-objective stochastic metaheuristics

Here are some explanations of the work I presented at the META'08 conference. This post is based on the notes I used for my presentation.

In all automatic parameter setting methods, the problem of finding the best parameter set is treated as an optimization problem with a single objective, generally finding the best optimum or reducing the uncertainty of the results. Sometimes one tries to improve speed. More rarely, speed, precision or robustness are aggregated into one criterion with an ad hoc formula. In fact, one can set parameters according to several objectives (improving speed, improving robustness, etc.). One cannot find a set of parameters fitting all the potential uses of a single algorithm on a single problem instance. Thus, parameter setting is a multi-objective problem.

The key point here is that it is easier to set the parameters of a solver than to solve the problem directly. The simplest example of this idea is when you want to solve a continuous optimization problem with hundreds of variables with a metaheuristic that has only 3 parameters. Moreover, you only have to tune the parameters once, even if you will solve many problem instances later.

In this work, I only consider speed and precision, although the method can handle any performance metric.

What is crucial in our method is that we do not want to aggregate the criteria; instead, we want the Pareto front corresponding to all the non-dominated parameter sets. I use plots representing this Pareto front, which I will sometimes call the « performance » front, or performance profile.
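
To make this concrete, here is a minimal Python sketch (not the implementation used for this work) of how non-dominated parameter sets can be extracted from measured (time, error) pairs, both objectives being minimized:

```python
def pareto_front(measures):
    """Return the non-dominated entries of `measures`.

    `measures` is a list of (params, (time, error)) tuples, where both
    time and error are to be minimized.  An entry is kept only if no
    other entry is at least as good on both objectives and strictly
    better on at least one of them.
    """
    front = []
    for params, (t, e) in measures:
        dominated = any(
            (t2 <= t and e2 <= e) and (t2 < t or e2 < e)
            for _, (t2, e2) in measures
        )
        if not dominated:
            front.append((params, (t, e)))
    return front

# Hypothetical measurements: parameter value -> (median time, median error).
measures = [(0.1, (2.0, 0.50)), (0.5, (5.0, 0.10)),
            (0.9, (9.0, 0.09)), (0.3, (6.0, 0.40))]  # the last one is dominated
print(pareto_front(measures))
```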

The idea is that one can then compare several algorithms more rigorously, by comparing their respective performance fronts. We also benefit from having a cursor scaling the behaviour from speed-oriented, at one extreme, to precision-oriented, at the other. Even more interesting is the performance profile projected onto the parameter space: every algorithm has its very own profile, which tells a lot about how it behaves.

[Figure: performance profiles of 4 metaheuristics.] The figure shows the performance profiles of four metaheuristics: a simulated annealing, a genetic algorithm and two estimation of distribution algorithms (fronts produced by NSGA-II, with median estimation, on the Rosenbrock 2D problem, using the parameter corresponding to the sampling density of each method and an absolute-time stopping criterion).
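
For illustration, here is a minimal sketch of the "median estimation" step, assuming a hypothetical run_metaheuristic(params) function that performs one stochastic run under a fixed time budget and returns a (time to target, final error) pair; this is not the code used to produce the figure:

```python
import statistics

def estimate_performance(run_metaheuristic, params, runs=30):
    """Estimate (speed, precision) for one candidate parameter set.

    `run_metaheuristic(params)` is assumed to return a pair
    (time_to_target, final_error) for a single stochastic run.
    Medians over several runs are used as robust estimators of the
    two objectives handed over to the multi-objective optimizer.
    """
    times, errors = [], []
    for _ in range(runs):
        t, e = run_metaheuristic(params)
        times.append(t)
        errors.append(e)
    return statistics.median(times), statistics.median(errors)
```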

Our results suggest that the choice of the stopping criterion has a drastic influence on the usefulness of the performance profile, so it must be chosen carefully. Similarly, the method cannot naturally produce a single profile for a set of problem instances; it is strictly valid only for one instance of a given problem. Finally, we note that the performance profiles are often convex in the objective space, which could indicate that aggregation may be useful.

The proposed method makes it possible to aggregate all the parameters into a single one, determining the position within the performance profile, from a behaviour strongly oriented towards production (fast, inaccurate) to one oriented towards design (slow, accurate). The projection of the profile onto the parameter space can also reveal the impact of the parameters on performance, or dependencies between parameters. Such information may be very relevant to better understand some complex metaheuristics. It also becomes possible to compare several metaheuristics by plotting their performance profiles on the same scale. Statistical validation also gains additional dimensions of discrimination.
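
As a sketch of this "cursor" idea (my own illustration of the principle, using the same front format as the pareto_front sketch above), a single scalar in [0, 1] can be mapped to a compromise on the performance front, from the production-oriented extreme to the design-oriented one:

```python
def select_on_front(front, cursor):
    """Pick a parameter set on the performance front.

    `front` is a list of (params, (time, error)) tuples, assumed
    non-dominated.  `cursor` is in [0, 1]: 0 selects the fastest
    (production-oriented) point, 1 the most accurate
    (design-oriented) one.
    """
    ordered = sorted(front, key=lambda item: item[1][0])  # sort by time
    index = round(cursor * (len(ordered) - 1))
    return ordered[index][0]

# With the front from the previous sketch:
# select_on_front(front, 0.0) -> fastest settings
# select_on_front(front, 1.0) -> most accurate settings
```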

As perspectives, it remains to reduce the computational cost of the meta-optimizer, using dedicated methods (SPO, racing, etc.). It is also possible to extend the method by taking robustness into account as a supplementary objective, and by checking whether correlations can be reconstructed over a set of instances.

Finally, here are the slides. I use light slides without a lot of text, so I suggest that you read the notes while looking at the presentation. You will find the abstract, the extended abstract and the slides on my professional website, on the corresponding publication page.

Thursday, November 6, 2008

2nd International Conference on Metaheuristics and Nature Inspired Computing

I've just attended the META 2008 international conference on metaheuristics and nature inspired computing.

The weather was nice in Tunisia, we had a place to sleep, a restaurant and a swimming pool; the organization was just fine. The acceptance rate was 60%, with 116 accepted papers, around 130 attendees and one short review per paper (at least for mine).

OK, now let's talk about what is really exciting: science.

I was more than pleased to attend two tutorials, given by Dick Thierens and Thomas Stützle, both of which were about the use of stochastic local search.

What was definitely interesting is that these two talented researchers insisted a lot on the need for a rigorous experimental approach to the design and validation of metaheuristics. That's good news for our research domain: the idea that metaheuristics should be employed in a scientific manner rather than in an artistic one is gaining more and more importance.

First, they both said that a good way to tackle a hard optimization problem is to employ a bottom-up approach: start with a simple local search, then use metaheuristic operators to improve the results.
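
As an illustration of this bottom-up approach (my own sketch, not code from the tutorials), the skeleton below starts from a plain local search and then wraps it with a perturbation operator, in the spirit of an iterated local search; f, neighbors and perturb are hypothetical problem-specific functions:

```python
def local_search(f, x, neighbors):
    """Plain hill-climbing: move to a better neighbor while one exists."""
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if f(y) < f(x):
                x, improved = y, True
                break
    return x

def iterated_local_search(f, x0, neighbors, perturb, iterations=100):
    """Wrap the local search with a perturbation operator (the bottom-up step)."""
    best = local_search(f, x0, neighbors)
    for _ in range(iterations):
        candidate = local_search(f, perturb(best), neighbors)
        if f(candidate) < f(best):
            best = candidate
    return best
```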

Thomas Stützle, in particular, insisted on the crucial need for rigorous parameter setting and experimental validation with statistical tests. That's definitely a very important point.

Another good point made by Thomas is the use of the term "algorithm engineering" to describe a rigorous approach to the design and evaluation of optimization algorithms. I was searching for a nice term to name it, and I think this one is a good candidate. The bad news at this conference is that, despite these two incredible tutorials, there were very few people speaking about algorithm engineering. I was presenting a new method for parameter setting and behaviour understanding, but I was placed in an unrelated "metaheuristics for real-world problems" session. I haven't seen other works specifically dedicated to such subjects.

Worse, I attended several presentations with very poor experimental work. Some people keep claiming their stochastic algorithm is better only by showing the best result found. More often, there is a mean and a standard deviation, but no statistical test. But there is hope: since 2001 (when some works by Éric D. Taillard definitely introduced experimental validation for metaheuristics, at least for me), I find that the proportion of sound experimental plans is increasing in the literature.
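
For reference, here is a minimal sketch of such a comparison with SciPy: a non-parametric Mann-Whitney U test on the final objective values of repeated runs, instead of comparing only the best results or raw means (the numbers below are made up for the example):

```python
from scipy.stats import mannwhitneyu

# Hypothetical final objective values of 15 independent runs per algorithm.
results_a = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15,
             0.14, 0.13, 0.11, 0.17, 0.12, 0.14, 0.13]
results_b = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10, 0.12, 0.11,
             0.10, 0.09, 0.13, 0.11, 0.12, 0.10, 0.11]

# Non-parametric test: no normality assumption on the run distributions.
statistic, p_value = mannwhitneyu(results_a, results_b, alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the two algorithms is significant.")
else:
    print("No significant difference can be claimed.")
```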

Anyway, my wish is that there will be more and more special sessions on algorithm engineering in future conferences on metaheuristics. In the meantime, there is the 2nd "Engineering Stochastic Local Search Algorithms" conference, in September 2009, in Brussels, which seems really interesting...

Wednesday, June 18, 2008

Metaheuristic validation in a nutshell

People using metaheuristics often forget that the price to pay for their ease of adaptation to a new problem is the hard validation work. There are several things to keep in mind when using a metaheuristic, especially when one wants to prove that it works in practice.

This (kind of) mind map tries to list what you should do, along with a short set of the main tools to do it. It is not always mandatory to use all the tools: sometimes it is just a matter of choice (as for parameter setting), sometimes the more you do, the better (as for performance assessment).

The graphic has been drawn in SVG, and I have put some references in a very small font at the bottom of some boxes. Thus, it is more comfortable to view it in Firefox or in Inkscape, and to zoom where needed. Try the SVG version.

[Figure: the "Metaheuristic design" mind map.]

Thursday, July 5, 2007

Error metrics

Many metrics are used to assess the quality of the approximation found by metaheuristics. Two of them are used very often: the distance to the true optimum in position, and the distance in value.

Unfortunately, the shape of the objective function can vary a lot in real-world problems, making these metrics difficult to interpret. For example, if the optimum lies in a very deep valley (in value), a solution close to it in position does not necessarily mean that the algorithm has learned the shape of that valley well. Conversely, a solution close to the optimum in value is not necessarily located in the same valley.

One metric that can mitigate these drawbacks is a distance taking into account the parameters of the problem as well as the value dimension.
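
As a sketch of such a metric, assuming a continuous problem, one can compute a Euclidean distance in the joint space of the (normalized) solution parameters and the (normalized) objective value, so that neither dimension dominates the other:

```python
import math

def combined_distance(x, fx, x_opt, f_opt, x_scale, f_scale):
    """Distance to the optimum in the joint (position, value) space.

    `x` and `x_opt` are parameter vectors, `fx` and `f_opt` the
    corresponding objective values; `x_scale` and `f_scale` are
    rough ranges used to normalize each dimension.
    """
    position_part = sum(((a - b) / s) ** 2
                        for a, b, s in zip(x, x_opt, x_scale))
    value_part = ((fx - f_opt) / f_scale) ** 2
    return math.sqrt(position_part + value_part)

# Example: a solution close in value but far in position still gets
# a large distance, and vice versa.
print(combined_distance([2.0, 3.0], 0.01, [0.0, 0.0], 0.0,
                        x_scale=[10.0, 10.0], f_scale=1.0))
```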

Anyway, the question of which type of distance to use depends on the problem.

Sunday, December 10, 2006

Metaheuristics and experimental research

Springer has just published a book on "Experimental Research in Evolutionary Computation", written by Thomas Bartz-Beielstein.

Thomas Bartz-Beielstein works on the statistical analysis of the behaviour of metaheuristics (see his tutorials at GECCO and CEC), and the publication of his book is a really great thing. I haven't read it yet, but the table of contents seems really promising. There is a true need for such work in the metaheuristics community, and in stochastic optimization in general.

A friend told me that the lack of experimental culture in the computer science community was a form of consensus, perhaps because the theoretical aspects of mathematics were seen as the "only way to make true science". This is a real problem when you deal with stochastic algorithms applied to real-world problems. Despite the fact that several early papers called for more rigorous experimental studies of metaheuristics (E.D. Taillard wrote papers on this problem several years ago, for example), the community does not seem to react quickly.

Yet things are changing: after the series of CEC special sessions on benchmarks for metaheuristics, there are more and more papers on how to test stochastic optimization algorithms and present the results. I think this book comes timely... the next step will be to promote the dissemination of the result data (and code!), in an open format, along with the papers.

Wednesday, August 30, 2006

Metaheuristics & benchmarks at CEC

The IEEE Congress on Evolutionary Computation (CEC) is a well-known event that takes place every year.

Since 2005, there has been an interesting group of special sessions, organized by Ponnuthurai Nagaratnam Suganthan.

What is really interesting in these sessions is the systematic presence of an implemented, general-purpose benchmark, built after discussion among researchers.

This is an extremely necessary practice which is, unfortunately, not generalized. Indeed, it is the first step toward a rigorous performance assessment of metaheuristics (the second one being a true statistical approach, and the third one a careful data presentation).