Mentales habitudes


Monday, 19 October 2015

Change of subject

This blog had been abandoned for 6 years, for a whole host of reasons that I will not go into here. It will nevertheless try to resume some semblance of activity.

I am therefore changing its subject, its language and its name, in order to host more general reflections. The old posts are kept, because I don't like throwing things away.

Monday, 10 November 2008

Multi-criteria meta-parameter tuning for mono-objective stochastic metaheuristics

Here are some explanations of the work I presented at the META'08 conference. This post is based on the notes I used for my presentation.

In all automatic parameter setting methods, the problem of finding the best parameter set is considered as an optimization problem with only one objective: generally finding the best optimum, or reducing the uncertainty of the results. Sometimes one tries to improve speed. More rarely, speed, precision or robustness are aggregated into one criterion with an ad hoc formula. In fact, one can set parameters according to several objectives (improve speed, improve robustness, etc.). One cannot find a set of parameters fitting all the potential uses of a single algorithm on a single problem instance. Thus, parameter setting is a multi-objective problem.

The key point here is that it is easier to set the parameters of a solver than to solve the problem directly. The simplest example of this idea is when you want to solve a continuous optimization problem with hundreds of variables, using a metaheuristic that has 3 parameters. Moreover, you only have to tune your parameters once, even if you will solve many problem instances later.

In this work, I only consider speed and precision, although the method can handle any performance metric.

What is crucial in our method is that we do not want to aggregate the criteria; instead, we want the Pareto front corresponding to all the non-dominated parameter sets. I use plots representing this Pareto front, which I will sometimes call the "performance" front, or performance profile.
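To make this concrete, here is a minimal sketch (in Python; the function name and the measurement values are hypothetical) of how one can extract the non-dominated parameter sets from a table of (time, error) measurements, both objectives being minimized:

```python
def pareto_front(points):
    """Return the non-dominated points, with both objectives minimized.

    `points` is a list of (time, error) tuples, one per parameter set.
    A point is dominated if another point is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical measurements: (median time, median error) per parameter set.
measurements = [(1.0, 0.50), (2.0, 0.10), (1.5, 0.30), (3.0, 0.09), (2.5, 0.20)]
print(pareto_front(measurements))
# -> [(1.0, 0.5), (2.0, 0.1), (1.5, 0.3), (3.0, 0.09)]; (2.5, 0.20) is dominated
```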

The idea is that one can then compare several algorithms more rigorously, by comparing their respective performance fronts. We also benefit from having a cursor, scaling from a behaviour oriented towards speed at one extreme, to precision at the other. Even more interesting is the performance profile projected onto the parameter space. One can see that every algorithm has its very own profile, which tells a lot about how it behaves.

[Figure: performance profiles of 4 metaheuristics] The figure above shows the performance profiles of 4 metaheuristics: a simulated annealing, a genetic algorithm and two estimation of distribution algorithms (produced by NSGA-II, with median estimation, on the Rosenbrock 2D problem, using for each method the parameter corresponding to the sampling density, with an absolute-time stopping criterion).

Our results suggest that the choice of the stopping criterion has a drastic influence on the usefulness of the performance profile; it must be chosen carefully. Similarly, the method cannot naturally find a unique profile for a set of problem instances, but is strictly valid only for one instance of a given problem. Finally, we note that the performance profiles are often convex in the objective space, which could indicate that aggregation may be useful.

The proposed method allows aggregating all parameters into a single one, determining the position within the performance profile, from a behaviour strongly oriented towards production (fast, inaccurate) to one oriented towards design (slow, accurate). The projection of the profile onto the parameter space can also reveal the impact of parameters on performance, or dependencies between parameters. Such information may be very relevant to better understand some complex metaheuristics. It also becomes possible to compare several metaheuristics, by plotting their performance profiles on the same scale. Statistical validation also gains additional dimensions of discrimination.

Looking forward, it remains to reduce the computational cost of the meta-optimizer, using dedicated methods (SPO, racing, etc.). It is also possible to extend the method by taking robustness into account as a supplementary objective, and by checking the possibility of rebuilding correlations over a set of instances.

Finally, here are the slides. I use light slides without a lot of text, so I suggest that you read the notes while looking at the presentation. You will find the abstract, the extended abstract and the slides on my professional website, on the corresponding publication page.

Thursday, 11 September 2008

The ultimate metaheuristic?

There exist a lot of different algorithm families that can be called "metaheuristics"; strictly speaking, there is a very, very, very large number of metaheuristic instances.

Defining a metaheuristic "family" is a difficult problem: when may I call this or that algorithm an evolutionary one? Are estimation of distribution algorithms a sub-family of genetic algorithms? What is the difference between ant colony optimization and stochastic gradient ascent? Etc.

Despite the difficulty of classifying metaheuristics, there are some interesting characteristics shared by stochastic metaheuristics. Indeed, they all iteratively manipulate a sample of the objective function.[1]

For example, simulated annealing is often depicted as a probabilistic descent algorithm, but it is more than that. Indeed, simulated annealing is based on the Metropolis-Hastings algorithm, which is a way of sampling any probability distribution, as long as you can calculate its density at any point. Thus, simulated annealing uses an approximation of the objective function as a probability density function to generate a sampling. This is even more obvious if you consider a step-by-step decrease of the temperature. Estimation of distribution algorithms are another obvious example: they explicitly manipulate samplings, but one can have the same thoughts about evolutionary algorithms, even if they manipulate the sampling rather implicitly.
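To illustrate this sampling view, here is a minimal sketch (in Python; the toy one-dimensional objective and the fixed temperature are assumptions for the example) of the Metropolis rule underlying simulated annealing: the visited points are distributed approximately according to exp(-f(x)/T), a density built from the objective function itself.

```python
import math
import random

def metropolis_sample(f, x0, temperature, n_steps=10000, step=0.5):
    """Sample points whose density is proportional to exp(-f(x)/T).

    This is one fixed-temperature stage of simulated annealing, seen as
    a sampler of the objective function rather than a descent method.
    """
    x, sample = x0, []
    for _ in range(n_steps):
        candidate = x + random.gauss(0.0, step)
        # Metropolis acceptance rule: always accept improvements,
        # accept degradations with probability exp(-delta/T).
        delta = f(candidate) - f(x)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        sample.append(x)
    return sample

# Toy objective: the visited points concentrate around the minimum at x = 0.
sample = metropolis_sample(lambda x: x * x, x0=5.0, temperature=0.5)
```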

The diagram tries to illustrate this idea: (a) a descent algorithm can have the same sampling behaviour as one iteration of (b) a "population" method.

Given these common processes, is it possible to design a kind of "universal" metaheuristic? Theoretically, the answer is yes. For example, in the continuous domain, consider an estimation of distribution algorithm using a mixture of Gaussian kernels: it can learn any probability density function (possibly needing an infinite number of kernels). Thus, by carefully choosing the function to use at each iteration and the selection operator, one can reproduce the behaviour of any stochastic metaheuristic.

Of course, choosing the correct mixture (and the other parameters) is a very difficult problem in practice. But I find it an interesting idea that the problem of designing a metaheuristic can be reduced to a configuration problem.
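As a rough illustration, here is a minimal continuous EDA whose model is a mixture of Gaussian kernels centred on the selected points (a kernel density estimate); the fixed bandwidth, population sizes and search bounds are all assumptions for the sketch, not part of any "universal" design:

```python
import random

def kernel_eda(f, dim=2, pop_size=100, keep=30, bandwidth=0.3, iterations=50):
    """Minimal continuous EDA: the model is a mixture of Gaussian kernels,
    one kernel per selected point (i.e. a kernel density estimate)."""
    # Initial sample drawn uniformly in [-5, 5]^dim.
    population = [[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(iterations):
        # Selection: keep the best points according to the objective.
        population.sort(key=f)
        selected = population[:keep]
        # Sampling: draw each new point from a randomly chosen kernel.
        population = [[coord + random.gauss(0.0, bandwidth)
                       for coord in random.choice(selected)]
                      for _ in range(pop_size)]
    return min(population, key=f)

# Toy run on the 2-D sphere function; the optimum is at the origin.
best = kernel_eda(lambda x: sum(c * c for c in x))
```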

Notes

[1] Johann Dréo, Patrick Siarry, "Stochastic metaheuristics as sampling techniques using swarm intelligence", in Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, Felix T. S. Chan, Manoj Kumar Tiwari (Eds.), Advanced Robotic Systems International, I-Tech Education and Publishing, Vienna, Austria, ISBN 978-3-902613-09-7, December 2008.

Wednesday, 18 June 2008

Metaheuristic validation in a nutshell

People using metaheuristics often forget that the price to pay for their ease of adaptation to a new problem is hard validation work. There are several things to keep in mind when using a metaheuristic, especially when one wants to prove that it works in practice.

This (kind of) mind map tries to list what you should do, along with a short set of the main tools for doing it. It is not always mandatory to use all the tools; sometimes it is just a matter of choice (as for parameter setting), and sometimes the more you do, the better (as for performance assessment).

The graphic has been drawn in SVG, and I have put some references in a very small font at the bottom of some boxes. It is thus more comfortable to view it in Firefox or in Inkscape, and to zoom where needed. Try the SVG version.

Metaheuristic design

Monday, 3 March 2008

The problem with spreading new metaheuristics

Marcelo De Brito had interesting thoughts about what he calls the New Wave Of Genetic Algorithms. He is surprised that when "evolutionary computation" is applied to a new problem, the first algorithm used is the good old canonical genetic algorithm, despite the active research on Estimation of Distribution Algorithms. Julian Togelius writes that it may be because people do not understand other algorithms, or even know that anything else exists.

I think that is definitely true. This subject is a kind of hobby for me. Indeed, having come from ecology to applied mathematics, I feel like a kind of generalist researcher, not able to be the best somewhere, but trying to be as good as possible in several fields. Concerning the field of what Marcelo calls NWOGA, I would like to emphasize some other things.

As David E. Goldberg says in his courses, genetic algorithm is the term everybody uses. For specialists, a GA is just a kind of "evolutionary algorithm" with specific rules, defined more by history than by anything else.

The literature on evolutionary computation is quite large, the first algorithms having been designed in 1965 (evolution strategies, followed by evolutionary programming in 1966), which makes it difficult to spread deep changes to basic concepts.

There exist many more stochastic algorithms for global optimization than just evolutionary ones. I prefer to call them stochastic metaheuristics, or simply metaheuristics, because this leads to far less bias than a metaphoric naming (cf. the previous post on the classification of metaheuristics).

For example, during my PhD thesis, I became convinced that some Ant Colony Optimization algorithms are just equivalent to Estimation of Distribution Algorithms when it comes to continuous problems. Moreover, I am now convinced that a lot of metaheuristics share common stochastic sampling processes that are not specifically related to evolution. For example, mathematically, Simulated Annealing is just a kind of EDA using an approximation of the objective function as a model (or the other way around, of course).

As Julian says: I know roughly what an EDA does, but I couldn't sit down and implement one on the spot. This is, in my humble opinion, one of the most important things to keep in mind. Indeed, there are more and more papers claiming that a correct parameter setting of a metaheuristic can match the performance of any competing metaheuristic.

Thus, the true discriminatory criterion is not some fantasised intrinsic capability, but the ease of implementation and parameter setting on a specific problem. In other words, choose the algorithm you like, but be aware that a lot of other ones exist.

Friday, 12 October 2007

Classification of metaheuristics

I eventually found some time to try a graphical representation of how metaheuristics could be classified.

Here is a slidified version, which shows each class independently:

And a static image version: Graphical classification of metaheuristics

Note that some metaheuristics are not completely contained in certain classes; this indicates that the method could be considered part of the class or not, depending on your point of view.

I have reported the following metaheuristics:

  • genetic programming, genetic algorithms, differential evolution,
  • evolution strategies,
  • estimation of distribution algorithms,
  • particle swarm optimization,
  • ant colony optimization,
  • simulated annealing,
  • tabu search,
  • GRASP,
  • variable neighborhood search,
  • iterated, stochastic and guided local search.

And the following classes:

  • metaheuristics vs local search,
  • population metaheuristics vs trajectory-based ones,
  • evolutionary computation or not,
  • nature-inspired methods or not,
  • dynamic objective function vs static ones,
  • memory-based algorithms vs memory-less,
  • implicit, explicit or direct metaheuristics.

I proposed the last class myself, so it may not be well known. You will find more information about it in the following paper: "Adaptive Learning Search, a new tool to help comprehending metaheuristics", J. Dreo, J.-P. Aumasson, W. Tfaili, P. Siarry, International Journal on Artificial Intelligence Tools, Vol. 16, No. 3, June 2007.

I didn't place a stochastic category, as it seems a bit difficult to represent graphically. Indeed, a lot of methods can be "stochasticized" or "derandomized" in several ways.

There are surely several omissions or errors; feel free to give your point of view with a trackback, an email, or by modifying the SVG source version (comments are disabled due to spam that I didn't have time to fight properly).

Friday, 27 July 2007

Hybridization: estimation of distribution as a meta-model filter generator for metaheuristics?

An interesting idea is to use a meta-model (an a priori representation of the problem) as a filter to bias the sample produced by a metaheuristic. This approach seems especially promising for engineering problems, where computing the objective function is very expensive.

One simple form of meta-model is a probability density function approximating the shape of the objective function. This PDF can thus be used to filter out bad points before evaluation.

Why not, then, directly use an EDA to generate the sample? Because one can imagine that the problem's shape is not well known, and that using a complex PDF is impossible (too expensive to compute, for example). Then, using a classical indirect metaheuristic (say, an evolutionary algorithm) should be preferable (computationally inexpensive) for the sample generation. If one knows a good approximation to use for the distribution of the EDA (not too computationally expensive), one can imagine getting the best of both worlds.

An example could be a problem with real variables: using an EDA with a multivariate normal distribution is computationally expensive (mainly due to the estimation of the covariance), and using a mixture of Gaussian kernels makes it difficult to have an a priori on the problem. Thus, why not use an indirect metaheuristic to handle the sample generation, and use a meta-model whose parameters are estimated from the previous sample, according to a chosen distribution?
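A minimal sketch of this hybridization (in Python; the diagonal Gaussian model, the mutation operator and all the sizes are assumptions for the example): an evolutionary-style mutation proposes candidates cheaply, and a normal model estimated on the previously evaluated sample filters out the least promising ones before the expensive evaluations.

```python
import math
import random

def gaussian_model(sample):
    """Estimate an axis-aligned (diagonal) normal model from a sample of points."""
    dim = len(sample[0])
    means = [sum(p[i] for p in sample) / len(sample) for i in range(dim)]
    variances = [sum((p[i] - means[i]) ** 2 for p in sample) / len(sample) + 1e-9
                 for i in range(dim)]
    def density(x):
        # Unnormalized density: sufficient for ranking candidates.
        return math.exp(-sum((x[i] - means[i]) ** 2 / (2 * variances[i])
                             for i in range(dim)))
    return density

def filtered_generation(parents, expensive_f, n_candidates=100, n_kept=20):
    """Generate candidates by mutation, filter them with the meta-model,
    and only evaluate the survivors with the expensive objective."""
    model = gaussian_model(parents)  # meta-model built from the previous sample
    candidates = [[c + random.gauss(0.0, 0.5) for c in random.choice(parents)]
                  for _ in range(n_candidates)]
    survivors = sorted(candidates, key=model, reverse=True)[:n_kept]
    return sorted(survivors, key=expensive_f)  # expensive calls happen only here
```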

One more hybridization to try...

Thursday, 5 July 2007

Error metrics

Many metrics are used to assess the quality of the approximations found by metaheuristics. Two of them are used really often: the distance to the true optimum in position, and the distance in value.

Unfortunately, the shape of the objective function can vary a lot in real-world problems, making these metrics difficult to interpret. For example, if the optimum lies in a very deep valley (in value), a solution close to it in position does not necessarily mean that the algorithm has learned the shape of that valley well. Conversely, a solution close to the optimum in value is not necessarily in the same valley.

One metric that can counter these drawbacks is a distance taking into account the parameters of the problem as well as the value dimension.

Anyway, the type of distance to use depends on the problem.
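For instance, here is a minimal sketch (in Python; the normalization scales are assumptions that must be adapted to the problem at hand) of such a combined metric: the Euclidean distance in the joint (position, value) space.

```python
import math

def combined_error(solution, value, optimum, optimum_value,
                   position_scale=1.0, value_scale=1.0):
    """Distance to the optimum in the joint (position, value) space.

    The scales normalize the two components; choosing them is
    problem-dependent, as noted above.
    """
    position_term = sum(((s - o) / position_scale) ** 2
                        for s, o in zip(solution, optimum))
    value_term = ((value - optimum_value) / value_scale) ** 2
    return math.sqrt(position_term + value_term)
```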

Wednesday, 28 March 2007

Random draw in a sphere

When adapting combinatorial metaheuristics to continuous problems, one sometimes uses a sphere as an approximation of the "neighbourhood". The idea is then to draw the neighbours around a solution, for example in order to apply a simple mutation in a genetic algorithm.

Sometimes one chooses a uniform distribution, but how does one draw random vectors uniformly in a hypersphere?

The first idea that comes to mind is to use a polar coordinate system and draw the radius r and the angles a_1, ..., a_N with a uniform distribution. Then one converts the coordinates to the rectangular system, x_1, ..., x_N.

The result may be interesting for metaheuristic design, but it is not a uniform distribution:

<img src="/public/randHS_false.png" />

The correct method is to draw each coordinate according to

$x_{i}=\frac{r^{1/N}\, a_{i}}{\sqrt{\sum_{j=1}^{N} a_{j}^{2}}}$

with r drawn uniformly from U(0,1) and each a_j drawn from the standard normal distribution N(0,1). (The Gaussian draws, once normalized, give a uniformly distributed direction, and the exponent 1/N corrects the radial density.)

The result is then a true uniform distribution:

<img src="/public/randHS_ok.png />

Credit goes to Maurice Clerc for detecting and solving the problem.
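Here is a minimal sketch of the correct draw (in Python with NumPy; the vectorized form is simply my own arrangement of the formula above):

```python
import numpy as np

def uniform_in_sphere(n_points, dim, radius=1.0, rng=np.random.default_rng()):
    """Draw points uniformly inside a hypersphere of the given radius.

    Directions come from normalized Gaussian vectors (isotropic), and the
    radius is rescaled by r**(1/dim) to compensate for the fact that the
    volume grows with the distance to the centre.
    """
    a = rng.normal(size=(n_points, dim))                # isotropic directions
    a /= np.linalg.norm(a, axis=1, keepdims=True)       # project onto the sphere
    r = rng.uniform(size=(n_points, 1)) ** (1.0 / dim)  # radial correction
    return radius * r * a

points = uniform_in_sphere(1000, dim=2)  # scatter-plot this to check uniformity
```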

Thursday, 21 December 2006

Random and learning

The Machine Learning (Theory) blog, by John Langford, has a very interesting post on "Explicit Randomization in Learning algorithms".

The post and its comments talk about machine learning, but can largely be applied to metaheuristics. The page lists several reasons for using randomization, some of which are of special interest for metaheuristics:

  1. symmetry breaking as a way to make decisions, which is of great importance for metaheuristics, which must learn and choose where the "promising regions" are;
  2. overfit avoidance, which is related to the intensification/diversification balance problem;
  3. adversary defeating and bias suppression, which can be interpreted as trying to design a true meta-heuristic (i.e. one that can be applied to several problems without major changes).

Of course, it should be possible to design a completely deterministic algorithm that makes decisions, achieves a correct i/d balance and can tackle all problems... Even if this forces one to integrate the problems themselves into the algorithm, it should be possible. The drawback is that it would be computationally intractable.

In fact, metaheuristics (and, as far as I understand, machine learning algorithms) are located somewhere between random search algorithms and deterministic ones. The compromise between these two tendencies depends on the problem and on the available computational effort.

Tuesday, 19 December 2006

Metaheuristics and machine-learning

Metaheuristics and machine learning algorithms share a large number of characteristics, like stochastic processes, the manipulation of probability density functions, etc.

One of the interesting evolutions of research on metaheuristics these years is the increasing bridge-building with machine learning. I see at least two interesting pathways: the use of metaheuristics in machine learning, and the use of machine learning in metaheuristics.

The first point is not really new: machine learning heavily uses optimization, and it was natural to try stochastic algorithms where local search or exact algorithms failed. Nevertheless, there is now a sufficient literature to organize special sessions in symposia. For 2007, there will be a special session on Genetics-Based Machine Learning at CEC, and a track on Genetics-Based Machine Learning and Learning Classifier Systems at GECCO. These events are centered around "genetic" algorithms (see the posts on the IlliGAL blog: 1, 2), despite the fact that several papers use other metaheuristics, like simulated annealing; but this is a common drawback, and does not affect the interest of the subject.

The second point is less exploited, but I find it of great interest. A simple example of what can be done with machine learning inside a metaheuristic is given by estimation of distribution algorithms. In these metaheuristics, a probability density function is used to explicitly build a new sample of the objective function (a "population", in evolutionary computation terminology) at each iteration. It is then crucial to build a probability density function that is related to the structure of the objective function (the "fitness landscape"). There, it would be really interesting to build the model of the PDF itself from a selected sample, using a machine learning algorithm. There are some interesting papers on this.

If you mix these approaches with the problem of estimating a Boltzmann distribution (the basis of simulated annealing), you should have an awesome research field...

Sunday, 10 December 2006

Metaheuristics and experimental research

Springer has just published a book on "Experimental Research in Evolutionary Computation", written by Thomas Bartz-Beielstein.

Thomas Bartz-Beielstein works on the statistical analysis of the behaviour of metaheuristics (see his tutorials at GECCO and CEC), and the publication of his book is a really great thing. I haven't read it yet, but the table of contents seems really promising. There is a true need for such work in the metaheuristics community, and in stochastic optimization in general.

A friend told me that the lack of experimental culture in the computer science community was a form of consensus, perhaps because the theoretical aspects of mathematics were seen as "the only way to make true science". This is a real problem when you deal with stochastic algorithms applied to real-world problems. Despite the fact that several papers called early on for more rigorous experimental studies of metaheuristics (E.D. Taillard wrote papers on this problem several years ago, for example), the community does not seem to react quickly.

Yet things are changing: after the series of CEC special sessions on benchmarks for metaheuristics, there are more and more papers on how to test stochastic optimization algorithms and present the results. I think this book comes at the right time... the next step will be to promote the dissemination of the result data (and code!), in an open format, along with the papers.

Wednesday, 30 August 2006

Metaheuristics & benchmarks at CEC

The IEEE Congress on Evolutionary Computation (CEC) is a well-known event that takes place every year.

Since 2005, there has been an interesting group of special sessions, organized by Ponnuthurai Nagaratnam Suganthan.

What is really interesting in these sessions is the systematic presence of an implemented, generalist benchmark, built after discussion between researchers.

This is an extremely necessary practice, which is, unfortunately, not generalized. Indeed, it is the first step toward a rigorous performance assessment of metaheuristics (the second one being a true statistical approach, and the third a careful data presentation).

Wednesday, 23 August 2006

What are metaheuristics?

Despite the title of this blog, the term metaheuristic is not really well defined.

One of the first occurrences of the term can (of course) be found in a paper by Fred Glover[1]: Future Paths for Integer Programming and Links to Artificial Intelligence[2]. In the section concerning tabu search, he talks about a meta-heuristic:

Tabu search may be viewed as a "meta-heuristic" superimposed on another heuristic. The approach undertakes to transcend local optimality by a strategy of forbidding (or, more broadly, penalizing) certain moves.

In the AI field, a heuristic is a specific method that helps solve a problem (from the Greek for "to find"), but how must we understand the word meta? Well, in Greek, it means "after", "beyond" (as in metaphysics) or "about" (as in metadata). Reading Glover, metaheuristics seem to be heuristics beyond heuristics, which seems to be a good old definition, but what is the definition nowadays? The literature is really prolific on this subject, and the definitions are numerous.

There are at least three tendencies:

  1. one that considers that the most important part of metaheuristics is the gathering of several heuristics,
  2. another that promotes the fact that metaheuristics are designed as generalist methods that can tackle several problems without major changes in their design,
  3. a last one that uses the term only for evolutionary algorithms hybridized with local searches (methods that are called memetic algorithms from the other points of view).

The last one is quite minor in the generalist literature; it can mainly be found in the field of evolutionary computation. Separating out the two other tendencies is more difficult.

Here are some definitions gathered from more or less generalist papers:

"iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space" (Osman and Laporte, 1996[3])

"(metaheuristics) combine basic heuristic methods in higher level frameworks aimed at efficiently and effectively exploring a search space" (Blum and Roli, 2003[4])

"a metaheuristic can be seen as a general-purpose heuristic method designed to guide an underlying problem-specific heuristic (...) A metaheuristic is therefore a general algorithmic framework which can be applied to different optimization problems with relative few modifications to make them adapted to a specific problem." (Dorigo and Stützle, 2004[5])

"(metaheuristics) apply to all kinds of problems (...) are, at least to some extent, stochastic (...) direct, i.e. they do not resort to the calculation of the gradients of the objective function (...) inspired by analogies: with physics, biology or ethology" (Dréo, Siarry, Petrowski and Taillard, 2006[6])

One can summarize by enumerating the expected characteristics:

  • optimization algorithms,
  • with an iterative design,
  • combining low level heuristics,
  • aiming to tackle a wide range of "hard" problems.

As pointed out by the last reference, a large majority of metaheuristics (not to say all) use at least one stochastic (probabilistic) process and do not use more information than the solutions and the associated value(s) of the objective function.

Talking about combining heuristics seems appropriate for Ant Colony Optimization, which specifically needs one (following Dorigo's point of view); it is less obvious for Evolutionary Algorithms. One can consider that mutation, or even the method's strategy itself, is a heuristic, but isn't that too generalistic to be called a heuristic?

If we set aside the difficulty of demarcating what can be called a heuristic and what the scope of the term meta is, one can simply look at the use of the term among specialists. Despite the fact that the definition could apply to several fields (data mining, machine learning, etc.), the term is used for optimization algorithms. This is perhaps the best reason of all: the term makes it possible to separate a research field from others, thus adding a little bit of marketing...

I would thus use this definition:

Metaheuristics are algorithms designed to tackle "hard" optimization problems with the help of iterative stochastic processes. These methods manipulate direct samples of the objective function, and can be applied to several problems without major changes in their design.

Notes

[1] A recurrent joke says that whatever your new idea is, it has already been written down by Glover.

[2] Computers & Operations Research, Vol. 13, No. 5, pp. 533-549, 1986

[3] Metaheuristics: A bibliography, Annals of Operations Research, vol. 63, pp. 513-623, 1996

[4] Metaheuristics in combinatorial optimization: Overview and conceptual comparison, ACM Computing Surveys, vol. 35, issue 3, 2003

[5] Ant Colony Optimization, MIT Press, 2004

[6] Metaheuristics for Hard Optimization, Springer, 2006

Tuesday, 1 August 2006

About this blog

This blog is an attempt to publish thoughts about metaheuristics and to share them with others. Indeed, blogs are fun, blogs are popular, ok... but most of all, blogs can be very useful for researchers, who constantly need to communicate and share ideas and information.

Metaheuristics are (well, that's one definition among others, but in my opinion the best one) iterative (stochastic) algorithms for "hard" optimization. Well-known metaheuristics are the so-called "genetic algorithms" (let's call them evolutionary ones), but they are not the only class: don't forget simulated annealing, tabu search, ant colony algorithms, estimation of distribution, etc.

This blog will try to focus on the theory, design, understanding, application, implementation and use of metaheuristics. I hope it will be useful to other people (researchers as well as practitioners), and will be a place to share thoughts.

Welcome aboard, and let's sleep with metaheuristics.