The problem with spreading new metaheuristics
Marcelo De Brito had interesting thoughts about what he calls the New Wave Of Genetic Algorithms. He is surprised that when "evolutionary computation" is applied to a new problem, the first algorithm used is the good old canonical genetic algorithm, despite the active research on Estimation of Distribution Algorithms. Julian Togelius writes that it may be because people do not understand other algorithms, or do not even know that anything else exists.
I think that is definitely true. This subject is a kind of hobby for me. Indeed, having come from ecology to applied mathematics, I feel like a generalist researcher, unable to be the best anywhere, but trying to be as good as possible in several fields. Concerning the field of what Marcelo called NWOGA, I would like to emphasize some other things.
As David E. Goldberg says in his courses, "genetic algorithm" is the term everybody uses. For specialists, a GA is just one kind of "evolutionary algorithm", with specific rules that are defined more by history than by anything else.
The literature on evolutionary computation is quite large, the first algorithms having been designed in 1965 (evolution strategies, followed by evolutionary programming in 1966), which makes it difficult to spread deep changes to basic concepts.
There exist many more stochastic algorithms for global optimization than just evolutionary ones. I prefer to call them stochastic metaheuristics, or simply metaheuristics, because this leads to far less bias than a metaphoric naming (cf. the previous post on the classification of metaheuristics).
For example, during my PhD thesis, I was convinced that some Ant Colony Optimization algorithms were simply equivalent to Estimation of Distribution Algorithms when applied to continuous problems. Moreover, I am now convinced that many metaheuristics share common stochastic sampling processes that are not specifically related to evolution. For example, mathematically, Simulated Annealing is just a kind of EDA using an approximation of the objective function as a model (or the other way around, of course).
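To make the "common stochastic sampling process" point concrete, here is a minimal simulated annealing loop for a continuous problem. This is a sketch under my own choices (Gaussian proposals, a geometric cooling schedule, illustrative parameter values), not a canonical reference implementation; the names are mine. The point is that SA, too, is just a loop that samples candidate solutions and biases the sampling by the objective.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0,
                        cooling=0.995, iters=2000):
    """Minimal continuous simulated annealing: propose a Gaussian move,
    accept it with the Metropolis probability, cool the temperature."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a candidate by perturbing the current point.
        cand = [v + random.gauss(0.0, step) for v in x]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-(fc - fx) / t) (Metropolis rule).
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best

# Example: minimise the sphere function from a poor starting point.
best = simulated_annealing(lambda x: sum(v * v for v in x), [2.0, -3.0])
```

Squint at the loop and it reads like any population-of-one sampler: a probabilistic model (the Gaussian proposal around the current point), a sampling step, and a selection rule shaped by the objective.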
As Julian says:
"I know roughly what an EDA does, but I couldn't sit down and implement one on the spot."

This is, in my humble opinion, one of the more important things to keep in mind. Indeed, there are more and more papers claiming that, with a correct parameter setting, a metaheuristic can match the performance of any competing metaheuristic.
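For what it is worth, a basic continuous EDA really can be sat down and implemented on the spot. The sketch below is my own illustrative version of the simplest scheme (fit an independent Gaussian per dimension to the elite samples, then resample), with made-up names and parameter values; it is not any specific published EDA.

```python
import random
import statistics

def gaussian_eda(objective, dim, pop_size=50, elite_frac=0.2,
                 generations=100):
    """Minimal continuous EDA: fit a per-dimension Gaussian to the
    elite samples, then resample the population from that model."""
    # Start with a broad model centred on the origin.
    means = [0.0] * dim
    stdevs = [1.0] * dim
    n_elite = max(2, int(pop_size * elite_frac))
    best = None
    for _ in range(generations):
        # Sample a whole population from the current probabilistic model.
        pop = [[random.gauss(m, s) for m, s in zip(means, stdevs)]
               for _ in range(pop_size)]
        pop.sort(key=objective)
        elite = pop[:n_elite]
        if best is None or objective(elite[0]) < objective(best):
            best = elite[0]
        # Re-estimate the model from the elite samples.
        for d in range(dim):
            vals = [x[d] for x in elite]
            means[d] = statistics.fmean(vals)
            stdevs[d] = max(statistics.stdev(vals), 1e-12)
    return best

# Example: minimise the sphere function in three dimensions.
best = gaussian_eda(lambda x: sum(v * v for v in x), dim=3)
```

The whole algorithm is the sample/select/re-estimate loop; everything else (model family, elite fraction, variance floor) is a design choice, which is precisely where parameter setting comes in.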
Thus, the true discriminating criterion is not some fantasised intrinsic capability, but the ease of implementation and of parameter setting on a specific problem. In other words, choose the algorithm you like, but be aware that many others exist.