The ultimate metaheuristic?
urn:md5:6b05629a17a75cbaed8aa9566830da02
2008-09-11T14:56:00+02:00
nojhan
Divers
descent algorithm, estimation of distribution, evolutionary computation, metaheuristic, Metropolis-Hastings algorithm, mixture of gaussian kernel, sampling, simulated annealing
<p>There exist many different algorithm families that can be called
"metaheuristics"; strictly speaking, there is a very, very, very large number
of <a href="http://metah.nojhan.net/tag/metaheuristic">metaheuristic</a> <em>instances</em>.</p>
<p>Defining what a metaheuristic "family" is remains a difficult problem: when may I
call this or that algorithm an evolutionary one? Is estimation of
distribution a sub-family of genetic algorithms? What is the difference between
ant colony optimization and stochastic gradient ascent? Etc.</p>
<p>Despite the <a href="http://metah.nojhan.net/post/2007/10/12/Classification-of-metaheuristics">difficulty of classifying
metaheuristics</a>, there are some interesting characteristics shared by
stochastic metaheuristics. Indeed, they all iteratively manipulate a
sample of the objective function<sup>[<a href="http://metah.nojhan.net/post/2008/09/11/#pnote-276667-1" id="rev-pnote-276667-1" name="rev-pnote-276667-1">1</a>]</sup>.</p>
<p>For example, <a href="http://metah.nojhan.net/tag/simulated%20annealing">simulated annealing</a> is
often depicted as a probabilistic <a href="http://metah.nojhan.net/tag/descent%20algorithm">descent
algorithm</a>, but it is more than that. Indeed, simulated annealing is based
on the <a href="http://metah.nojhan.net/tag/Metropolis-Hastings%20algorithm">Metropolis-Hastings
algorithm</a>, which is a way of sampling any probability distribution, as
long as you can compute its density at any point. Thus, <strong>simulated
annealing uses an approximation of the objective function as a probability
density function to generate a <a href="http://metah.nojhan.net/tag/sampling">sampling</a></strong>.
This is even more obvious if you consider a step-by-step decrease of the
temperature. <a href="http://metah.nojhan.net/tag/estimation%20of%20distribution">Estimation of
distribution</a> algorithms are another obvious example: they explicitly manipulate
samplings, but one can have the same thoughts about <a href="http://metah.nojhan.net/tag/evolutionary%20computation">evolutionary algorithms</a>, even if they
manipulate the sampling rather implicitly.</p>
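<p>A minimal Python sketch of this sampling view of simulated annealing: the Metropolis acceptance rule accepts points as if drawing them from a density proportional to exp(-f(x)/T), so cooling T concentrates the samples around the minima of f. The quadratic objective, the step size and the geometric cooling schedule are illustrative choices, not taken from the text.</p>

```python
import math
import random

def simulated_annealing(f, x0, temperatures, step=0.5):
    """Minimize f by Metropolis-Hastings sampling of exp(-f(x)/T).

    At a fixed temperature T, the accepted points are (approximately)
    samples from a density proportional to exp(-f(x)/T); lowering T
    concentrates that density around the minima of f.
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for T in temperatures:
        # Symmetric proposal: a uniform move around the current point.
        y = x + random.uniform(-step, step)
        fy = f(y)
        # Metropolis acceptance rule for the density exp(-f/T).
        if fy <= fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Example: minimize a 1-D quadratic (hypothetical settings).
random.seed(0)
schedule = [1.0 * 0.99 ** k for k in range(2000)]
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                  x0=0.0, temperatures=schedule)
```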
<p><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/f/f9/Metaheuristic_parcours-population.png/250px-Metaheuristic_parcours-population.png" alt="" /></p>
<p>The diagram tries to illustrate this idea: (a) a descent algorithm can have
the same sampling behaviour as an iteration of (b) a "population" method.</p>
<p>Given these common processes, is it possible to design a kind of "universal"
metaheuristic? Theoretically, the answer is yes. For example, in the
continuous domain, consider an estimation of distribution algorithm using a
<a href="http://metah.nojhan.net/tag/mixture%20of%20gaussian%20kernel">mixture of Gaussian kernels</a>:
it can learn any probability density function (possibly needing an infinite
number of kernels). Thus, by carefully choosing the function to use at each
iteration and the selection operator, <strong>one can reproduce the behaviour
of any stochastic metaheuristic</strong>.</p>
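<p>The iteration described above can be sketched in a few lines: sample a population from the current mixture of Gaussian kernels, select the best points, then place a kernel on each selected point to form the next mixture. All parameters (population size, bandwidth, truncation selection, the quadratic test objective) are assumptions for illustration only.</p>

```python
import random

def gaussian_kernel_eda(f, n_samples=100, n_select=20, n_iters=50,
                        bandwidth=0.3, bounds=(-10.0, 10.0)):
    """1-D estimation of distribution using a mixture of Gaussian kernels.

    Each iteration: (1) sample from the current mixture, (2) keep the
    best points under f, (3) center one Gaussian kernel on each kept
    point to build the next mixture.
    """
    # Start from a uniform sample over the search bounds.
    centers = [random.uniform(*bounds) for _ in range(n_select)]
    for _ in range(n_iters):
        # Sample the mixture: pick a kernel uniformly, then draw from it.
        population = [random.gauss(random.choice(centers), bandwidth)
                      for _ in range(n_samples)]
        # Truncation selection: keep the n_select best points.
        population.sort(key=f)
        centers = population[:n_select]
    return min(centers, key=f)

# Example: minimize a 1-D quadratic (hypothetical settings).
random.seed(0)
best = gaussian_kernel_eda(lambda x: (x - 3.0) ** 2)
```

<p>Changing the selection operator and the bandwidth schedule is what lets such a mixture-based algorithm mimic other stochastic metaheuristics.</p>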
<p>Of course, choosing the correct mixture (and the other parameters) is a very
difficult problem in practice. But I find it interesting that <strong>the
problem of designing a metaheuristic can be reduced to a configuration
problem</strong>.</p>
<div class="footnotes">
<h4>Notes</h4>
<p>[<a href="http://metah.nojhan.net/post/2008/09/11/#rev-pnote-276667-1" id="pnote-276667-1" name="pnote-276667-1">1</a>] Johann Dréo, Patrick Siarry, "<a href="http://www.nojhan.net/pro/spip.php?article26" hreflang="en">Stochastic
metaheuristics as sampling techniques using swarm intelligence</a>", in
"Swarm Intelligence: Focus on Ant and Particle Swarm Optimization", Felix T. S.
Chan, Manoj Kumar Tiwari (Eds.), Advanced Robotic Systems International, I-Tech
Education and Publishing, Vienna, Austria, ISBN 978-3-902613-09-7, December
2008</p>
</div>