The Machine Learning (Theory) blog, by John Langford, has a very interesting post on "Explicit Randomization in Learning algorithms".

The post and its comments discuss machine learning, but much of it can be applied to metaheuristics as well. The page lists several reasons for using randomization, some of which are of special interest for metaheuristics:

  1. symmetry breaking as a way to make decisions, which is of great importance for metaheuristics, since they must learn where the "promising regions" are (see the sketch after this list);
  2. overfitting avoidance, which is related to the intensification/diversification balance problem;
  3. adversary defeating and bias suppression, which can be interpreted as trying to design a true metaheuristic (i.e. one that can be applied to several problems without major changes).

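As a concrete illustration of the first point, here is a minimal sketch of randomized symmetry breaking in a bit-flip hill climber. It is only an assumption-laden toy: the `onemax` objective and all the names are mine, not from Langford's post; the point is simply that ties between equally good neighbors are resolved uniformly at random rather than by a fixed rule.

```python
import random

def onemax(bits):
    """Toy objective: number of 1-bits (purely illustrative)."""
    return sum(bits)

def hill_climb(n_bits, steps, objective=onemax, seed=0):
    """Bit-flip hill climber that breaks ties among equally good
    neighbors uniformly at random, instead of always flipping the
    lowest index: the random choice removes a systematic bias in
    which direction the search commits to first."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        # Score every single-bit-flip neighbor.
        scores = []
        for i in range(n_bits):
            neighbor = current[:]
            neighbor[i] ^= 1
            scores.append(objective(neighbor))
        best = max(scores)
        if best <= objective(current):
            break  # local optimum reached
        # Symmetry breaking: pick uniformly among all equally best flips.
        ties = [i for i, s in enumerate(scores) if s == best]
        current[rng.choice(ties)] ^= 1
    return current

print(hill_climb(n_bits=16, steps=100))  # converges to all ones
```

Deterministically flipping, say, the lowest-index tied bit would bias the search toward one corner of the space; the random choice treats symmetric alternatives equally.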
Of course, it should be possible to design a completely deterministic algorithm that makes decisions, achieves a correct intensification/diversification balance and can tackle all problems... Even if this forces one to integrate the problems themselves into the algorithm, it should be possible. The drawback is that it would be computationally intractable.

In fact, metaheuristics (and, as far as I understand, machine-learning algorithms) sit somewhere between purely random search and fully deterministic algorithms. The compromise between these two tendencies depends on the problem and on the available computational effort.
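To make that spectrum tangible, here is another hedged toy sketch (the `epsilon` knob and all names are mine, not a standard algorithm): a single parameter interpolates between pure random search and a fully deterministic greedy ascent.

```python
import random

def epsilon_search(start, neighbors_of, objective, epsilon, steps, seed=0):
    """Interpolates between the two extremes: epsilon = 1.0 is pure
    random search, epsilon = 0.0 a fully deterministic greedy ascent.
    The right setting depends on the problem and on the budget."""
    rng = random.Random(seed)
    current, best = start, start
    for _ in range(steps):
        candidates = neighbors_of(current)
        if rng.random() < epsilon:
            current = rng.choice(candidates)           # diversification
        else:
            current = max(candidates, key=objective)   # intensification
        best = max(best, current, key=objective)
    return best

# Toy usage: maximize -(x - 3)^2 over the integers by unit moves.
print(epsilon_search(
    start=0,
    neighbors_of=lambda x: [x - 1, x + 1],
    objective=lambda x: -(x - 3) ** 2,
    epsilon=0.2,
    steps=200,
))  # should reach the optimum, 3
```

Setting epsilon to 1.0 recovers blind random search, 0.0 a deterministic greedy method; anything in between is a crude form of the compromise that metaheuristics actually implement.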