Mentales habitudes

Monday, November 17, 2008

More on "stochastic local search" definition

Petr Pošík left an interesting comment on a previous post: "Metaheuristic" or "Stochastic Local Search"?. As bloggers always love worthwhile comments from insightful readers, I reproduce it here, as an article, along with my answer.

I would like to present here an idea that takes the "stochastic" and "local" notions even further than Thomas and Dirk. In that view, even GAs are local search techniques! Why?

It has a lot to do with the definition of the neighborhood of the current state of the algorithm. The state may be seen as the position of the algorithm in the search space. In classical local search, the state is given by a single point. In population-based techniques, the state is the whole population. New solutions are usually generated only by local modifications of the state, i.e. the search takes place only in the local neighborhood of the algorithm state.

Yes, of course, the local neighborhood of the population as a whole is very broad and such a local search is much less prone to premature convergence and exhibits a kind of global behaviour. But, IMHO, it is still a local search...
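To make this "state" view concrete, here is a minimal sketch (the names and parameter values are only illustrative, not part of the comment): both a classical local search and a population-based method generate new solutions by locally perturbing their state, the difference being what the state is.

    import random

    def local_search_step(x, f, sigma=0.1):
        """State = a single point; the neighbourhood is a small Gaussian perturbation."""
        candidate = [xi + random.gauss(0.0, sigma) for xi in x]
        return candidate if f(candidate) < f(x) else x

    def population_step(population, f, sigma=0.1):
        """State = the whole population; offspring are local modifications of it."""
        parents = random.sample(population, 2)
        child = [(a + b) / 2.0 + random.gauss(0.0, sigma) for a, b in zip(*parents)]
        worst = max(population, key=f)          # replace the worst individual if improved
        if f(child) < f(worst):
            population[population.index(worst)] = child
        return population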

I completely understand your view: if there is a non-zero probability of generating any solution in the search space, then the search must eventually find the global optimum, and thus performs a global search.

This is the case for many stochastic local search algorithms in continuous domains which use e.g. a normal distribution. But if you set the variance too high, you will get very slow convergence to that global optimum, which is not desirable in practice. If you set the variance too low, you will quickly find a local optimum, but you will have to wait virtually infinitely long for the global one, again undesirable. Or, to put it another way, would you still call a "global optimization" technique "global" if it takes zillions of years to find that global solution?

In practice, I think, for continuous spaces we have to resort to algorithms that exhibit local behavior since we always have some hard constraints on the length of the optimization run. In my eyes, this perfectly justifies the name "stochastic local search".
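As an illustration of this variance trade-off, here is a rough sketch of a continuous stochastic local search using Gaussian moves (the objective and the values of sigma are only an example): with a small sigma it quickly settles in a nearby local optimum, with a large one it wanders and converges very slowly.

    import math
    import random

    def stochastic_local_search(f, x0, sigma, iterations=10_000):
        x, fx = x0, f(x0)
        for _ in range(iterations):
            y = [xi + random.gauss(0.0, sigma) for xi in x]
            fy = f(y)
            if fy < fx:          # accept only improving moves
                x, fx = y, fy
        return x, fx

    def rastrigin(x):
        """A classical multimodal test function (illustrative choice)."""
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

    # Small sigma: fast convergence to a local optimum; large sigma: slow global wandering.
    print(stochastic_local_search(rastrigin, [4.0, 4.0], sigma=0.05))
    print(stochastic_local_search(rastrigin, [4.0, 4.0], sigma=5.0))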

In fact, as far as I understand them, both Dirk Thierens and Thomas Stützle share your point of view. For them, GAs are a kind of stochastic local search.

I find the classification into "descent" algorithms and "population" ones quite artificial. For example, simulated annealing is implemented as a descent algorithm, but actually performs a sampling of the objective function, which is commonly seen as a characteristic of population algorithms.
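A minimal simulated annealing sketch (only an illustration, with arbitrary parameter values) shows the point: the implementation is a single-point descent, yet the temperature-controlled acceptance rule makes it sample the objective function rather than strictly descend.

    import math
    import random

    def simulated_annealing(f, x0, sigma=0.5, t0=10.0, cooling=0.999, iterations=20_000):
        x, fx = x0, f(x0)
        t = t0
        for _ in range(iterations):
            y = [xi + random.gauss(0.0, sigma) for xi in x]
            fy = f(y)
            # Worse solutions are accepted with a Boltzmann probability:
            # this is where the sampling of the objective function happens.
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
            t *= cooling
        return x, fx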

More rigorously, I think that every stochastic metaheuristic tries to avoid local optima; the stochastic processes are there precisely for that purpose[1].

Maybe the problem is the definition of local search. In my humble opinion, a local search is a method that searches for a local optimum. This can be opposed to global search, where one wants to find a global optimum.

You are right when you point out that this is related to ergodicity, as only an ergodic[2] algorithm may converge[3], and thus be able to reliably find a global optimum. Thus, yes, I would say that a true and rigorous optimization method is global if, and only if, it is at least quasi-ergodic[4]. Pure random search is the basic global optimization method, which a good metaheuristic should at least outperform.
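For reference, pure random search can be sketched in a few lines (bounds and budget here are arbitrary): it is trivially quasi-ergodic on a bounded domain, since any solution can be sampled at every iteration.

    import random

    def pure_random_search(f, bounds, budget=100_000):
        """Sample uniformly within the given (low, high) bounds and keep the best point."""
        best_x, best_fx = None, float("inf")
        for _ in range(budget):
            x = [random.uniform(lo, hi) for lo, hi in bounds]
            fx = f(x)
            if fx < best_fx:
                best_x, best_fx = x, fx
        return best_x, best_fx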

Here, it is the stochastic operators manipulating the solutions that permit the global behaviour. This is not related to the encoding of the solutions or the neighbourhood structure, even if that is a crucial part of the algorithm.

Thus, a "stochastic local search" may be defined as a local search seeking a global optimum, which is a paradoxical definition. I would rather keep the implementation and the mathematical bases separate, and thus talk of "stochastic search" or "stochastic metaheuristic".

Or else, one may want to use a paradoxical definition, as a matter of fun, which is also a good reason to do so :-)

Notes

[1] There may be probabilistic choices that are not related to this task, but they are not linked to the iterative aspects of the search algorithm, and are thus not stochastic in this sense

[2] i.e. that can evaluate any solution

[3] i.e. that has a non-null probability of finding the global optimum in finite time, or to put it differently, that can find the global optimum after a time that may tend towards infinity (for continuous problems; for discrete ones it is bounded by the size of the instance)

[4] i.e. that can reach any solution in a finite number of iterations

Friday, November 7, 2008

"Metaheuristic" or "Stochastic Local Search"?

During their excellent tutorials at META 2008, both Thomas and Dirk talked about "stochastic local search" and seemed rather uncomfortable with the term "metaheuristic". They seem to reserve it for very high-level, well-known algorithms.

I'm not sure that using the term "stochastic" along with "local" is a good idea. In all these algorithms, the use of probabilistic processes aims at avoiding local optima. Thus, stochastic algorithms are no longer "local search", but "global search". While I agree that starting from local search techniques is a very good approach, I would say that once you introduce stochastic processes, you enter the field of metaheuristics. On the other hand, the paradoxical use of "stochastic" along with "local" may be interesting from a marketing point of view... but I like paradoxes.

Anyway, although there would be a lot more to say about the problem of nomenclature in our field (who says "everything is evolutionary"?), this is not very important. I think I will continue using "metaheuristics" until a common term establishes itself in the literature.

Wednesday, August 23, 2006

What are metaheuristics?

Despite the title of this blog, the term metaheuristic is not really well defined.

One of the first occurrences of the term can (of course) be found in a paper by Fred Glover[1]: Future Paths for Integer Programming and Links to Artificial Intelligence[2]. In the section concerning tabu search, he talks about a "meta-heuristic":

Tabu search may be viewed as a "meta-heuristic" superimposed on another heuristic. The approach undertakes to transcend local optimality by a strategy of forbidding (or, more broadly, penalizing) certain moves.

In the AI field, a heuristic is a specific method that helps solve a problem (from the Greek for "to find"), but how should we understand the word meta? Well, in Greek, it means "after", "beyond" (as in metaphysics) or "about" (as in metadata). Reading Glover, metaheuristics seem to be heuristics beyond heuristics, which seems a good old definition, but what is the definition nowadays? The literature is really prolific on this subject, and the definitions are numerous.

There are at least three tendencies:

  1. one that considers that the most important aspect of metaheuristics is the combination of several heuristics,
  2. another that emphasizes that metaheuristics are designed as general-purpose methods, which can tackle several problems without major changes in their design,
  3. a last one that uses the term only for evolutionary algorithms hybridized with local searches (methods that are called memetic algorithms from the other points of view).

The last one is quite marginal in the general literature and can mainly be found in the field of evolutionary computation; separating out the two other tendencies is more difficult.

Here are some definitions gathered from more or less general papers:

"iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space" (Osman and Laporte, 1996[3])

"(metaheuristics) combine basic heuristic methods in higher level frameworks aimed at efficiently and effectively exploring a search space" (Blum and Roli, 2003[4])

"a metaheuristic can be seen as a general-purpose heuristic method designed to guide an underlying problem-specific heuristic (...) A metaheuristic is therefore a general algorithmic framework which can be applied to different optimization problems with relative few modifications to make them adapted to a specific problem." (Dorigo and Stützle, 2004[5])

"(metaheuristics) apply to all kinds of problems (...) are, at least to some extent, stochastic (...) direct, i.e. they do not resort to the calculation of the gradients of the objective function (...) inspired by analogies: with physics, biology or ethology" (Dréo, Siarry, Petrowski and Taillard, 2006[6])

One can summarize by enumerating the expected characteristics:

  • optimization algorithms,
  • with an iterative design,
  • combining low level heuristics,
  • aiming to tackle a wide range of "hard" problems.

As pointed out by the last reference, a large majority of metaheuristics (not to say all of them) use at least one stochastic (probabilistic) process and do not use more information than the solutions and the associated value(s) of the objective function.

Talking about combining heuristics seems appropriate for Ant Colony Optimization, which specifically needs one (following Dorigo's point of view), but it is less obvious for Evolutionary Algorithms. One can consider that mutation, or even the method's strategy itself, is a heuristic, but isn't it too general to be called a heuristic?

If we set aside the difficulty of demarcating what can be called a heuristic and what the scope of the term meta is, one can simply look at how the term is used among specialists. Although the definition could apply to several fields (data mining, machine learning, etc.), the term is used for optimization algorithms. This is perhaps the best reason of all: the term allows a research field to set itself apart from others, thus adding a little bit of marketing...

I would thus use this definition:

Metaheuristics are algorithms designed to tackle "hard" optimization problems with the help of iterative stochastic processes. These methods manipulate direct samples of the objective function and can be applied to several problems without major changes in their design.
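To fix ideas, such a definition can be sketched as a generic template (all names here are mine and purely illustrative): the loop only manipulates candidate solutions and their objective values (no gradients, no problem-specific structure), and switching problems amounts to swapping the objective and the stochastic sampling and variation operators, without touching the loop itself.

    import random
    from typing import Callable, List

    def generic_metaheuristic(objective: Callable[[List[float]], float],
                              sample: Callable[[], List[float]],
                              vary: Callable[[List[float]], List[float]],
                              budget: int = 10_000) -> List[float]:
        """Iterative stochastic search using only direct samples of the objective."""
        best = sample()
        best_value = objective(best)            # direct sample of the objective
        for _ in range(budget):
            candidate = vary(best)              # stochastic variation operator
            value = objective(candidate)
            if value < best_value:
                best, best_value = candidate, value
        return best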

Notes

[1] A recurring joke says that whatever your new idea is, it has already been written down by Glover

[2] Computers & Operations Research, Vol. 13, No. 5, pp. 533-549, 1986

[3] Metaheuristic: A bibliography, Annals of Operations Research, vol. 63, pp. 513-623, 1996

[4] Metaheuristics in combinatorial optimization: Overview and conceptual comparison, ACM Computing Surveys, vol. 35, issue 3, 2003

[5] Ant Colony Optimization, MIT Press, 2004

[6] Metaheuristics for Hard Optimization, Springer, 2006