## The ultimate metaheuristic?

Par nojhan le jeudi 11 septembre 2008, 14:56 - Divers - Lien permanent

There exist many different algorithm families that can be called
"metaheuristics"; strictly speaking, there is a very, very, very large number
of metaheuristic *instances*.

Defining what a metaheuristic "family" is is a difficult problem: when may I call a given algorithm an evolutionary one? Is estimation of distribution a sub-family of genetic algorithms? What is the difference between ant colony optimization and stochastic gradient ascent? Etc.

Despite the difficulty of classifying
metaheuristics, there are some interesting characteristics shared by
stochastic metaheuristics. Indeed, they all iteratively manipulate a
sample of the objective function^{[1]}.

For example, simulated annealing is
often depicted as a probabilistic descent
algorithm, but it is more than that. Indeed, simulated annealing is based
on the Metropolis-Hastings
algorithm, which is a way of sampling any probability distribution, as
long as you can calculate its density at any point. Thus, **simulated
annealing uses an approximation of the objective function as a probability
density function to generate a sampling**.
This is even more obvious if you consider a step-by-step decrease of the
temperature. Estimation of
distribution algorithms are another obvious example: they explicitly manipulate
samplings, but one can have the same thoughts about evolutionary algorithms, even if they
manipulate the sampling rather implicitly.
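To make this concrete, here is a minimal sketch of simulated annealing written so that the Metropolis step is visible: at a fixed temperature T, the accept/reject rule is exactly a Metropolis-Hastings move targeting the density proportional to exp(-f(x)/T). The function name, the geometric cooling schedule, and the uniform proposal are all illustrative choices of mine, not prescriptions from the post.

```python
import math
import random

def simulated_annealing(f, x0, temperatures, step=0.5):
    """Minimize f by Metropolis sampling of exp(-f(x)/T) at decreasing T.

    At each fixed T, the accept/reject rule below is a Metropolis-Hastings
    step targeting the density proportional to exp(-f(x)/T); the visited
    states thus form a sample of that distribution.
    """
    x, fx = x0, f(x0)
    for T in temperatures:
        candidate = x + random.uniform(-step, step)
        fc = f(candidate)
        # Metropolis acceptance: always accept improvements, accept
        # degradations with probability exp(-(fc - fx) / T).
        if fc <= fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = candidate, fc
    return x, fx

# Usage: a hand-picked geometric cooling schedule on a toy objective.
random.seed(42)
temps = [10.0 * 0.99 ** k for k in range(2000)]
x_best, f_best = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                     x0=0.0, temperatures=temps)
```

At high temperature the chain wanders like a sample of a flattened version of the objective; as T shrinks, the sampled density concentrates on the minima, which is the "descent" reading of the same process.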

The diagram tries to illustrate this idea: (a) a descent algorithm can have the same sampling behaviour as an iteration of (b) a "population" method.

Given these common processes, is it possible to design a kind of "universal"
metaheuristic? Theoretically, the answer is yes. For example, in the
continuous domain, consider an estimation of distribution algorithm using a
mixture of Gaussian kernels:
it can learn any probability density function (possibly needing an infinite
number of kernels). Thus, by carefully choosing the function to use at each
iteration and the selection operator, **one can reproduce the behaviour
of any stochastic metaheuristic**.
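A bare-bones sketch of such a continuous EDA, assuming a single Gaussian kernel rather than a full mixture (the degenerate one-kernel case of the argument above); the name `gaussian_eda` and all parameter values are my own illustrative choices.

```python
import random
import statistics

def gaussian_eda(f, n_pop=50, n_select=10, n_iter=100, bounds=(-10.0, 10.0)):
    """Minimal continuous EDA: sample, select, re-estimate the distribution.

    A single Gaussian is fitted here for brevity; replacing it with a
    mixture of kernels (e.g. one kernel per selected point, as in kernel
    density estimation) is what would let the sampling distribution
    approximate any density, hence mimic other stochastic metaheuristics.
    """
    mu = (bounds[0] + bounds[1]) / 2.0
    sigma = (bounds[1] - bounds[0]) / 2.0
    for _ in range(n_iter):
        pop = [random.gauss(mu, sigma) for _ in range(n_pop)]
        pop.sort(key=f)                       # selection by objective value
        elite = pop[:n_select]
        mu = statistics.fmean(elite)          # re-estimate the sampling
        sigma = max(statistics.pstdev(elite), 1e-12)  # avoid collapse to 0
    return mu

# Usage: minimize a toy quadratic.
random.seed(1)
best = gaussian_eda(lambda x: (x - 3.0) ** 2)
```

The "configuration" view of the next paragraph corresponds to choosing the kernel family, the number of kernels, and the selection operator in a loop of exactly this shape.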

Of course, choosing the correct mixture (and the other parameters) is a very
difficult problem in practice. But I find it interesting that **the
problem of designing a metaheuristic can be reduced to a configuration
problem**.

#### Notes

[1] Johann Dréo, Patrick Siarry, "Stochastic metaheuristics as sampling techniques using swarm intelligence", in "Swarm Intelligence: Focus on Ant and Particle Swarm Optimization", Felix T. S. Chan, Manoj Kumar Tiwari (Eds.), Advanced Robotic Systems International, I-Tech Education and Publishing, Vienna, Austria, ISBN 978-3-902613-09-7, December 2008.