Mentales habitudes - Tag - simulated annealing
2015-10-19T19:25:25+02:00
nojhan
urn:md5:12147
Dotclear
The ultimate metaheuristic?
urn:md5:6b05629a17a75cbaed8aa9566830da02
2008-09-11T14:56:00+02:00
nojhan
Divers
descent algorithm, estimation of distribution, evolutionary computation, metaheuristic, Metropolis-Hastings algorithm, mixture of gaussian kernel, sampling, simulated annealing
<p>There exist many different families of algorithms that can be called
"metaheuristics"; strictly speaking, there is a very, very, very large number
of <a href="http://metah.nojhan.net/tag/metaheuristic">metaheuristics</a> <em>instances</em>.</p>
<p>Defining what a metaheuristic "family" is is a difficult problem: when may I
call this or that algorithm an evolutionary one? Is estimation of
distribution a sub-family of genetic algorithms? What is the difference between
ant colony optimization and stochastic gradient ascent? Etc.</p>
<p>Despite the <a href="http://metah.nojhan.net/post/2007/10/12/Classification-of-metaheuristics">difficulty of classifying
metaheuristics</a>, there are some interesting characteristics shared by
stochastic metaheuristics. Indeed, they all iteratively manipulate a
sample of the objective function<sup>[<a href="http://metah.nojhan.net/post/2008/09/11/#pnote-276667-1" id="rev-pnote-276667-1" name="rev-pnote-276667-1">1</a>]</sup>.</p>
<p>For example, <a href="http://metah.nojhan.net/tag/simulated%20annealing">simulated annealing</a> is
often depicted as a probabilistic <a href="http://metah.nojhan.net/tag/descent%20algorithm">descent
algorithm</a>, but it is more than that. Indeed, simulated annealing is based
on the <a href="http://metah.nojhan.net/tag/Metropolis-Hastings%20algorithm">Metropolis-Hastings
algorithm</a>, which is a way of sampling any probability distribution, as
long as you can calculate its density at any point. Thus, <strong>simulated
annealing uses an approximation of the objective function as a probability
density function to generate a <a href="http://metah.nojhan.net/tag/sampling">sampling</a></strong>.
This is even more obvious if you consider a step-by-step decrease of the
temperature. <a href="http://metah.nojhan.net/tag/estimation%20of%20distribution">Estimation of
distribution</a> algorithms are another obvious example: they explicitly manipulate
samplings, but one can have the same thoughts about <a href="http://metah.nojhan.net/tag/evolutionary%20computation">evolutionary algorithms</a>, even if they
manipulate the sampling rather implicitly.</p>
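<p>This view of simulated annealing can be sketched in a few lines: the algorithm below performs a Metropolis-style random walk whose stationary distribution is proportional to exp(-f(x)/T), cooling the temperature between steps. It is a minimal one-dimensional illustration, not a tuned implementation; the cooling schedule, step size, and seed are arbitrary assumptions.</p>

```python
import math
import random

def metropolis_sa(f, x0, temps, step=0.5, rng=random):
    """Minimise f by Metropolis sampling of exp(-f(x)/T) while the
    temperature T decreases (a minimal sketch, 1-D, symmetric proposal)."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for t in temps:
        # Propose a neighbour; accept with the Metropolis probability.
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Usage: minimise a quadratic with a geometric cooling schedule.
rng = random.Random(42)
temps = [10.0 * 0.95 ** k for k in range(2000)]
x, fx = metropolis_sa(lambda v: (v - 3.0) ** 2, x0=-5.0, temps=temps, rng=rng)
```

<p>At high temperature the walk samples the whole landscape; as T shrinks the sampling concentrates around the minima, which is exactly the "sample of the objective function" viewpoint.</p>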
<p><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/f/f9/Metaheuristic_parcours-population.png/250px-Metaheuristic_parcours-population.png" alt="" /></p>
<p>The diagram tries to illustrate this idea: (a) a descent algorithm can have
the same sampling behaviour as one iteration of a (b) "population" method.</p>
<p>Given these common processes, is it possible to design a kind of "universal"
metaheuristic? Theoretically, the answer is yes. For example, in the
continuous domain, consider an estimation of distribution algorithm using a
<a href="http://metah.nojhan.net/tag/mixture%20of%20gaussian%20kernel">mixture of Gaussian kernels</a>:
it can learn any probability density function (possibly needing an infinite
number of kernels). Thus, by carefully choosing the function to use at each
iteration and the selection operator, <strong>one can reproduce the behaviour
of any stochastic metaheuristic</strong>.</p>
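<p>As a toy illustration of the kernel-mixture idea, the sketch below runs a one-dimensional estimation of distribution algorithm whose sampling distribution is a mixture of Gaussian kernels, one kernel centred on each selected point. The population size, bandwidth, and iteration count are illustrative assumptions, not recommended settings.</p>

```python
import random

def kernel_eda(f, pop_size=50, n_select=10, n_iter=60, bandwidth=0.3, rng=random):
    """1-D EDA sketch: the sampling density is a mixture of Gaussian
    kernels, one centred on each selected point (equal weights)."""
    # Initial sample drawn uniformly over the search interval.
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(n_iter):
        # Selection: keep the n_select best points of the sample.
        selected = sorted(pop, key=f)[:n_select]
        # Sampling: draw each new point from a randomly chosen kernel.
        pop = [rng.gauss(rng.choice(selected), bandwidth) for _ in range(pop_size)]
    return min(pop, key=f)

# Usage: the sampling distribution concentrates around the optimum at -2.
rng = random.Random(1)
best = kernel_eda(lambda v: (v + 2.0) ** 2, rng=rng)
```

<p>Changing the kernel bandwidth, the selection operator, or the mixture weights is what would let such an algorithm mimic the sampling behaviour of other stochastic metaheuristics.</p>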
<p>Of course, choosing the correct mixture (and the other parameters) is a very
difficult problem in practice. But I find it interesting that <strong>the
problem of designing a metaheuristic can be reduced to a configuration
problem</strong>.</p>
<div class="footnotes">
<h4>Notes</h4>
<p>[<a href="http://metah.nojhan.net/post/2008/09/11/#rev-pnote-276667-1" id="pnote-276667-1" name="pnote-276667-1">1</a>] Johann Dréo, Patrick Siarry, "<a href="http://www.nojhan.net/pro/spip.php?article26" hreflang="en">Stochastic
metaheuristics as sampling techniques using swarm intelligence</a>", in
"Swarm Intelligence: Focus on Ant and Particle Swarm Optimization", Felix T. S.
Chan, Manoj Kumar Tiwari (Eds.), Advanced Robotic Systems International, I-Tech
Education and Publishing, Vienna, Austria, ISBN 978-3-902613-09-7, December
2008</p>
</div>
Metaheuristics and machine-learning
urn:md5:6a9a108df90979ccc7d46e7b1cf8dcd6
2006-12-19T00:00:00+01:00
nojhan
Divers
evolutionary computation, machine learning, simulated annealing
<p>Metaheuristics and machine-learning algorithms share a large number of
characteristics, like stochastic processes, manipulation of probability density
functions, etc.</p>
<p>One of the interesting evolutions of research on metaheuristics these
years is the increasing bridge-building with machine learning. I see at least
two interesting pathways: the use of metaheuristics in machine learning and the
use of machine learning in metaheuristics.</p>
<p>The first point is not really new: machine learning heavily uses
optimization, and it was natural to try stochastic algorithms where local
search or exact algorithms failed. Nevertheless, there is now a sufficient
literature to organize special sessions at some symposia. For 2007,
there will be a <em><a href="http://seal.tst.adfa.edu.au/~alar/gbml2007/" hreflang="en">special session on Genetics-Based Machine Learning</a></em> at
CEC, and a <em><a href="http://www.sigevo.org/gecco-2007/organizers-tracks.html" hreflang="en">track
on Genetics-Based Machine Learning and Learning Classifier Systems</a></em> at
GECCO. These events are centered around "genetic" algorithms (see the posts on
the IlliGAL blog: <a href="http://www-illigal.ge.uiuc.edu/system/components/com_jd-wp/wp-trackback.php?p=745" hreflang="en">1</a>, <a href="http://www-illigal.ge.uiuc.edu/system/components/com_jd-wp/wp-trackback.php?p=746" hreflang="en">2</a>), despite the fact that several papers use
other metaheuristics, like simulated annealing; but this is a common drawback,
and does not affect the interest of the subject.</p>
<p>The second point is less exploited, but I find it of great interest. A
simple example of what can be done with machine learning inside a metaheuristic
is given by estimation of distribution algorithms. In these
metaheuristics, a probability density function is used to explicitly build a
new sample of the objective function (a "population", in the evolutionary
computation terminology) at each iteration. It is then crucial to build a
probability density function that is related to the structure of the objective
function (the "fitness landscape"). There, it should be really interesting to
build the model of the pdf itself from a selected sample, using a
machine-learning algorithm. There are some interesting papers talking about
that.</p>
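<p>A minimal stand-in for that idea: at each iteration, "learn" a density model from the selected sample (here, a single normal distribution fitted by maximum likelihood, the simplest possible model) and draw the next population from it. A real machine-learning model, such as a mixture fitted by expectation-maximization, would replace the two-line fit; everything else here (sizes, bounds, the variance floor) is an illustrative assumption.</p>

```python
import random
import statistics

def learned_pdf_eda(f, pop_size=40, n_select=10, n_iter=50, rng=random):
    """EDA sketch where the sampling pdf is a model learned from the
    selected sample: here a single normal, fitted by maximum likelihood."""
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(n_iter):
        selected = sorted(pop, key=f)[:n_select]
        # "Learning" step: fit the model's parameters to the selected sample.
        mu = statistics.fmean(selected)
        sigma = max(statistics.pstdev(selected), 1e-3)  # floor keeps sampling alive
        # Sampling step: draw the next population from the learned model.
        pop = [rng.gauss(mu, sigma) for _ in range(pop_size)]
    return min(pop, key=f)

# Usage: the learned model contracts around the optimum at 1.5.
rng = random.Random(7)
best = learned_pdf_eda(lambda v: (v - 1.5) ** 2, rng=rng)
```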
<p>If you mix these approaches with the problem of estimating a Boltzmann
distribution (the basis of simulated annealing), you should have an awesome
research field...</p>
About this blog
urn:md5:6371356d53a94f98925fb66da14eeb9f
2006-08-01T00:00:00+02:00
nojhan
Divers
ant colony algorithm, estimation of distribution, evolutionary computation, simulated annealing
<p>This blog is an attempt to publish thoughts about metaheuristics and to
share them with others. Indeed, blogs are fun, blogs are popular, ok... but
most of all, blogs can be very useful for researchers, who constantly need to
communicate and share ideas and information.</p>
<p>Metaheuristics are (well, that's one definition among others, but in my
opinion the best one) iterative (stochastic) algorithms for "hard"
optimization. Well-known metaheuristics are the so-called "genetic algorithms"
(let's call them <em>evolutionary</em> ones), but these are not the only class:
don't forget simulated annealing, tabu search, ant colony algorithms, estimation
of distribution, etc.</p>
<p>This blog will try to focus on the <em>theory</em>, the <em>design</em>,
the <em>understanding</em>, the <em>application</em>, the
<em>implementation</em> and the <em>use</em> of metaheuristics. I hope this
blog will be profitable to other people (researchers as well as users), and
will be a place to share thoughts.</p>
<p>Welcome aboard, and let's sleep with metaheuristics.</p>