I've just attended the META 2008 international conference on metaheuristics and nature-inspired computing.

The weather was nice in Tunisia, we had a place to sleep, a restaurant and a swimming pool; the organization was just fine. The acceptance rate was 60%, with 116 accepted papers, 130 attendees, and one short review per paper (at least for mine).

OK, now let's talk about what is really exciting: science.

I was more than pleased to attend two tutorials, given by Dick Thierens and Thomas Stützle, both about the use of stochastic local search.

What was definitely interesting is that these two talented researchers insisted a lot on the need for a rigorous experimental approach to the design and validation of metaheuristics. That's good news for our research domain: the idea that metaheuristics should be employed in a scientific manner rather than an artistic one is gaining more and more importance.

First, they both said that a good way to tackle a hard optimization problem is to employ a bottom-up approach: start with a simple local search, then add metaheuristic operators to improve the results.
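
As a minimal sketch of what such a starting point could look like (this is illustrative code of mine, not code from the tutorials), here is a first-improvement local search on a bit-string problem:

```python
import random

def local_search(evaluate, solution, max_iters=1000):
    """First-improvement hill climbing: flip one bit at a time,
    keep the flip only if it improves the score."""
    best = evaluate(solution)
    for _ in range(max_iters):
        i = random.randrange(len(solution))
        solution[i] ^= 1      # apply the move (flip one bit)
        score = evaluate(solution)
        if score > best:
            best = score      # keep the improving move
        else:
            solution[i] ^= 1  # revert the move
    return solution, best

# Example: maximize the number of ones (OneMax).
solution, score = local_search(sum, [random.randint(0, 1) for _ in range(32)])
print(score)
```

Only once this baseline has been measured would one layer metaheuristic operators on top, such as restarts or the perturbation step of iterated local search.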

Thomas Stützle, in particular, insisted on the crucial need for rigorous parameter setting and experimental validation with statistical tests. That's definitely a very important point.
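
To make the parameter setting part concrete, a rigorous approach could start with a full-factorial scan of the parameter space, with replicated runs per setting, so that each configuration is summarized by a distribution rather than a single lucky run. A hedged sketch (the algorithm stub and the parameter names are made up for illustration):

```python
import itertools
import random
import statistics

def run_metaheuristic(temperature, cooling, seed):
    """Stand-in for one run of a real stochastic algorithm; returns a score."""
    random.seed(seed)
    return random.gauss(100 - abs(temperature - 10.0) - 50 * (0.99 - cooling), 5)

grid = {"temperature": [1.0, 10.0, 100.0], "cooling": [0.90, 0.99]}
for temperature, cooling in itertools.product(*grid.values()):
    # 30 replications per setting: report a distribution, not a single run.
    scores = [run_metaheuristic(temperature, cooling, seed) for seed in range(30)]
    print(f"T={temperature:<6} cooling={cooling}: "
          f"mean={statistics.mean(scores):.1f}, sd={statistics.stdev(scores):.1f}")
```

The means and standard deviations alone are still not a comparison, though; that is where statistical tests come in.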

Another good point made by Thomas is the use of the term "algorithm engineering" to describe a rigorous approach to the design and evaluation of optimization algorithms. I had been searching for a nice term for this; I think this one is a good candidate. The bad news at this conference is that, despite these two incredible tutorials, very few people were speaking about algorithm engineering. I presented a new method for parameter setting and behaviour understanding, but in an unrelated "metaheuristics for real-world problems" session. I haven't seen other works specifically dedicated to such subjects.

Worse, I attended several presentations with very poor experimental work. Some people keep claiming their stochastic algorithm is better only by showing the best result found. More often, there is a mean and a standard deviation, but no statistical test, although a simple non-parametric test would already do the job (see the sketch below). But there is hope: since 2001 (when works by Éric D. Taillard definitively introduced experimental validation for metaheuristics, at least for me), I find that the proportion of sound experimental designs in the literature is increasing.
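
For instance, to compare the final objective values of two stochastic algorithms over independent runs, a non-parametric test such as Mann-Whitney U makes no normality assumption on the run distributions. A minimal sketch, assuming scipy is available (the numbers are made up):

```python
from scipy.stats import mannwhitneyu

# Final objective values over independent runs (made-up numbers).
algo_a = [412, 405, 398, 420, 401, 415, 399, 408, 411, 403]
algo_b = [395, 402, 388, 401, 390, 397, 385, 399, 392, 396]

stat, p_value = mannwhitneyu(algo_a, algo_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
# Claim that A beats B only if p is below the chosen significance level,
# not because A's best (or even mean) run happens to look better.
```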

Anyway, my wish is that there will be more and more special sessions on algorithm engineering at future conferences on metaheuristics. In the meantime, there is the 2nd "Engineering Stochastic Local Search Algorithms" conference, in September 2009, in Brussels, which seems really interesting...