Mentales habitudes

Thursday, 11 September 2008

The ultimate metaheuristic?

There exist many different algorithm families that can be called "metaheuristics"; strictly speaking, there is a very, very, very large number of metaheuristic instances.

Defining what counts as a metaheuristic "family" is a difficult problem: when may I call this or that algorithm an evolutionary one? Is estimation of distribution a sub-family of genetic algorithms? What is the difference between ant colony optimization and stochastic gradient ascent? Etc.

Despite the difficulty of classifying metaheuristics, there are some interesting characteristics shared by stochastic metaheuristics. Indeed, they all iteratively manipulate a sample of the objective function.[1]

For example, simulated annealing is often depicted as a probabilistic descent algorithm, but it is more than that. Indeed, simulated annealing is based on the Metropolis-Hastings algorithm, which is a way of sampling any probability distribution, as long as you can calculate its density at any point. Thus, simulated annealing uses an approximation of the objective function as a probability density function to generate a sample. This is even more obvious if you consider a step-by-step decrease of the temperature. Estimation of distribution algorithms are another obvious example: they explicitly manipulate samples, but one can have the same thoughts about evolutionary algorithms, even if they manipulate the sample rather implicitly.
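To make this reading concrete, here is a minimal sketch (in Python; the function name and parameter values are illustrative choices of mine, not canonical) of simulated annealing as one Metropolis move per step of a decreasing temperature schedule. The chain samples a pdf proportional to exp(-f(x)/T), and lowering T progressively turns sampling into descent:

    import math
    import random

    def metropolis_sa(f, x0, temperatures, step=0.5):
        """Simulated annealing read as Metropolis sampling of
        p_T(x) ~ exp(-f(x) / T), with a step-by-step temperature decrease.
        Minimizes f, starting from x0; all names here are illustrative."""
        x, fx = x0, f(x0)
        chain = []
        for T in temperatures:
            # One Metropolis move: propose a neighbour, accept with the usual ratio.
            y = x + random.gauss(0.0, step)
            fy = f(y)
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
            chain.append(x)  # the chain is a sample of p_T, not just a descent path
        return x, chain

    # Usage: a one-dimensional objective with a geometric cooling schedule.
    best, chain = metropolis_sa(lambda x: x * x, x0=5.0,
                                temperatures=[10.0 * 0.99 ** k for k in range(2000)])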

The diagram tries to illustrate this idea: (a) a descent algorithm can have the same sampling behaviour as an iteration of (b) a "population" method.

Given these common processes, is it possible to design a kind of "universal" metaheuristic? Theoretically, the answer is yes. For example, in the continuous domain, consider an estimation of distribution algorithm using a mixture of Gaussian kernels: it can learn any probability density function (possibly requiring an infinite number of kernels). Thus, by carefully choosing the mixture to use at each iteration and the selection operator, one can reproduce the behaviour of any stochastic metaheuristic.
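As a sketch of this construction, here is one very simple instance (assuming one Gaussian kernel per selected point, a fixed kernel width and truncation selection; all names and parameter values are mine):

    import random

    def kernel_eda(f, dim=2, pop=100, keep=0.3, width=0.3, iterations=50):
        """Estimation of distribution with a mixture of Gaussian kernels:
        the model is a uniform mixture of kernels centred on the selected
        points, so sampling it means picking a kernel and perturbing it."""
        sample = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
        for _ in range(iterations):
            # Selection operator: keep the best fraction of the sample.
            sample.sort(key=f)
            selected = sample[: int(keep * pop)]
            # Build the next sample from the kernel mixture.
            sample = [[xi + random.gauss(0.0, width)
                       for xi in random.choice(selected)]
                      for _ in range(pop)]
        return min(sample, key=f)

    # Usage: minimize the sphere function in two dimensions.
    print(kernel_eda(lambda x: sum(xi * xi for xi in x)))

Changing the kernel widths, the number of kernels and the selection operator is exactly the "configuration problem" mentioned just below.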

Of course, choosing the correct mixture (and the other parameters) is a very difficult problem in practice. But I find it interesting that the problem of designing a metaheuristic can thus be reduced to a configuration problem.

Notes

[1] Johann Dréo, Patrick Siarry, "Stochastic metaheuristics as sampling techniques using swarm intelligence", in Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, Felix T. S. Chan, Manoj Kumar Tiwari (Eds.), Advanced Robotic Systems International / I-Tech Education and Publishing, Vienna, Austria, ISBN 978-3-902613-09-7, December 2008.

Tuesday, 19 December 2006

Metaheuristics and machine learning

Metaheuristics and machine-learning algorithms share a large number of characteristics, like stochastic processes, the manipulation of probability density functions, etc.

One of the interesting evolutions of metaheuristics research these years is the increasing bridge-building with machine learning. I see at least two interesting pathways: the use of metaheuristics in machine learning and the use of machine learning in metaheuristics.

The first point is not really new: machine learning heavily uses optimization, and it was natural to try stochastic algorithms where local search or exact algorithms failed. Nevertheless, there is now enough literature to organize special sessions at symposia. For 2007, there will be a special session on Genetics-Based Machine Learning at CEC, and a track on Genetics-Based Machine Learning and Learning Classifier Systems at GECCO. These events are centered around "genetic" algorithms (see the posts on the IlliGAL blog: 1, 2), despite the fact that several papers use other metaheuristics, like simulated annealing; but this is a common drawback and does not affect the interest of the subject.

The second point is less exploited, but I find it of great interest. A simple example of what can be done with machine learning inside a metaheuristic is given by estimation of distribution algorithms. In these metaheuristics, a probability density function is used to explicitly build a new sample of the objective function (a "population", in the evolutionary computation terminology) at each iteration. It is then crucial to build a probability density function that is related to the structure of the objective function (the "fitness landscape"). Hence, it would be really interesting to build the model of the pdf itself from a selected sample, using a machine-learning algorithm; there are some interesting papers on the subject.
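As a hedged sketch of one such iteration (using scikit-learn's GaussianMixture as the machine-learning algorithm; the function name, the selection scheme and the parameter values are illustrative, not taken from any particular paper):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def ml_eda_step(f, sample, keep=0.3, components=3):
        """One EDA iteration where the pdf model is learned from the
        selected sample by a machine-learning algorithm (here, EM on a
        Gaussian mixture model)."""
        n = len(sample)
        # Selection: keep the best fraction of the current population.
        ranks = np.argsort([f(x) for x in sample])
        selected = sample[ranks][: int(keep * n)]
        # Learning: fit the mixture model to the selected points.
        model = GaussianMixture(n_components=components).fit(selected)
        # Sampling: draw the next "population" from the learned pdf.
        new_sample, _ = model.sample(n)
        return new_sample

    # Usage: iterate on the sphere function.
    rng = np.random.default_rng(0)
    sample = rng.uniform(-5, 5, size=(200, 2))
    for _ in range(30):
        sample = ml_eda_step(lambda x: float(np.sum(x * x)), sample)

Any density model that can be both fitted and sampled could play the same role as the mixture here.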

If you mix these approaches with the problem of estimating a Boltzmann distribution (the basis of simulated annealing), you get an awesome research field...
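For reference, the Boltzmann distribution in question links the objective function f to a pdf at temperature T (this is the standard textbook form, not something specific to this post):

    p_T(x) = \frac{e^{-f(x)/T}}{\int e^{-f(y)/T}\,\mathrm{d}y}

As T goes to zero, p_T concentrates its mass on the minima of f, so estimating or sampling this family of distributions is optimization in disguise; that is precisely where the two fields meet.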

Tuesday, 1 August 2006

About this blog

This blog is an attempt to publish thoughts about metaheuristics and to share them with others. Indeed, blogs are fun, blogs are popular, ok... but most of all, blogs can be very useful for researchers, who constantly need to communicate and share ideas and information.

Metaheuristics are (well, that's one definition among others, but in my opinion the best one) iterative (stochastic) algorithms for "hard" optimization. Well-known metaheuristics are the so-called "genetic algorithms" (let's call them evolutionary ones), but these are not the only class: don't forget simulated annealing, tabu search, ant colony algorithms, estimation of distribution, etc.

This blog will try to focus on the theory, the design, the understanding, the application, the implementation and the use of metaheuristics. I hope this blog will be useful to other people (researchers as well as users), and will be a place to share thoughts.

Welcome aboard, and let's sleep with metaheuristics.