## The problem with spreading new metaheuristics

By nojhan on Monday, March 3, 2008, 11:19 - Miscellaneous - Permanent link

Marcelo De Brito had interesting thoughts about what he calls the New Wave Of Genetic Algorithms. He is surprised that when "evolutionary computation" is applied to a new problem, the first algorithm used is the good old canonical genetic algorithm, despite the active research on Estimation of Distribution Algorithms. Julian Togelius writes that it may be because people do not understand other algorithms, or do not even know that anything else exists.

I think that is definitely true. This subject is a kind of hobby for me. Indeed, as I came from ecology to applied mathematics, I feel like a generalist researcher, not able to be the best in any one field, but trying to be as good as possible across several. Concerning the field of what Marcelo calls the NWOGA, I would like to emphasize a few other things.

As David E. Goldberg says in his courses, "genetic algorithm" is the term everybody uses. For specialists, a GA is just one kind of "evolutionary algorithm", with specific rules that are defined more by *history* than by anything else.

The literature on evolutionary computation is quite large, the first algorithms having been designed in 1965 (evolution strategies, followed by evolutionary programming in 1966), which makes it difficult to spread deep changes to basic concepts.

There exist far more stochastic algorithms for global optimization than just evolutionary ones. I prefer to call them stochastic metaheuristics, or simply metaheuristics, because this leads to far less bias than metaphorical naming (cf. the previous post on the classification of metaheuristics).

For example, during my PhD thesis, I became convinced that some Ant Colony Optimization algorithms are simply equivalent to Estimation of Distribution Algorithms when applied to continuous problems. Moreover, I am now convinced that many metaheuristics share common stochastic sampling processes that are not specifically related to evolution. For example, mathematically, Simulated Annealing is just a kind of EDA using an approximation of the objective function as a model (or the other way around, of course).
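To illustrate the shared sampling structure, here is a minimal sketch of Simulated Annealing (the sphere objective and all parameter values are my own illustrative choices, not from any particular reference). Each iteration samples a candidate from a distribution centred on the current point and keeps or discards it stochastically: the same sample-select-update loop that an EDA runs at the population level.

```python
import math
import random

def sphere(x):
    """Objective to minimize: sum of squares, optimum at the origin."""
    return sum(xi * xi for xi in x)

def simulated_annealing(f, x0, temp=1.0, cooling=0.995,
                        steps=5000, step_size=0.5):
    """Minimal SA: sample a neighbour, accept it stochastically."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    for _ in range(steps):
        # Sample a candidate from a "model" centred on the current
        # point (here, an isotropic Gaussian perturbation).
        cand = [xi + random.gauss(0.0, step_size) for xi in x]
        fc = f(cand)
        # Metropolis rule: always accept improvements,
        # sometimes accept worsenings, depending on the temperature.
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        temp *= cooling  # geometric cooling schedule
    return best

random.seed(1)
print(sphere(simulated_annealing(sphere, [2.0, 2.0])))
```

Seen this way, the "model" is degenerate (a single point plus a perturbation kernel), which is what makes the comparison with population-based samplers natural.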

As Julian says: "I know roughly what an EDA does, but I couldn't sit down and implement one on the spot". This is, in my humble opinion, one of the most important things to keep in mind. Indeed, there are more and more papers claiming that, with correct parameter setting, a metaheuristic can reach the performance of any competing metaheuristic.

Thus, the true discriminatory criterion is not some fantasised intrinsic capability, but the ease of implementation and parameter setting *on a specific problem*. In other words, choose the algorithm you like, but be aware that many others exist.

## Commentaires

Hi, Nojhan!

First of all: Thank you for your nice comment and the follow-up post here on your blog! :)

Your points are very interesting.

I would like to point you, if you permit me, to the "official" date in which the evolution strategy came to this crazy, noisy, grubby, discontinuous, and chaotic world:

June 12th, 1964, at the Technical University of Berlin.

That was the day when the so-called Experimentum Crucis took place.

For more information about the history of Evolution Strategies, please see here:

http://geneticargonaut.blogspot.com...

The text was written in a hurry, so please excuse some grammar errors. :)

As we are talking about the NWOGA/EDAs, let me show you the following: EDAs came to life in the early 1990s, the first paper/technical report dating from 1993 (see: http://citeseer.ist.psu.edu/juels93...). But, I hope I am not wrong, all the math/statistics embedded in EDAs was already available in the 1980s (even in the 1970s and 1960s!)! So, why did EDAs come out only in the early 1990s? Interesting, isn't it?

My (very) short opinion: It had to do with the (cybernetic?) Zeitgeist of that time. :)

Best Regards!

Marcelo