7.9. Summary

All the meta-heuristics seen in this chapter sit on top of a Local Search. In or-tools, we made the choice to trigger a meta-heuristic only after a local optimum has been reached.

We have seen in detail the three meta-heuristics implemented in or-tools with a Metaheuristic class:

  • Tabu Search;
  • Simulated Annealing and
  • Guided Local Search.
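To make the interplay between a meta-heuristic and the underlying Local Search concrete, here is a minimal, library-independent sketch (in Python) of the Metropolis acceptance rule at the heart of Simulated Annealing. The function name and signature are ours for illustration, not part of the or-tools API:

```python
import math
import random

def accept(candidate_cost, current_cost, temperature, rng=random):
    """Metropolis acceptance rule used by Simulated Annealing.

    Improving (or equal-cost) moves are always accepted; worsening
    moves are accepted with probability exp(-delta / temperature),
    so high temperatures diversify and low temperatures intensify.
    """
    if candidate_cost <= current_cost:
        return True
    delta = candidate_cost - current_cost
    return rng.random() < math.exp(-delta / temperature)
```

As the temperature is lowered during the search, worsening moves become ever less likely to be accepted, which is exactly the intensification-versus-diversification trade-off discussed below.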

For each of them, we have examined our implementation in full detail.

We have also seen how we implemented Large Neighborhood Search with specialized LocalSearchOperators.
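The essence of Large Neighborhood Search can be summarized in a short, generic skeleton. This is a sketch of the destroy-and-repair loop only, not the or-tools implementation; the `destroy` and `repair` callbacks are hypothetical placeholders for what the specialized LocalSearchOperators do:

```python
def lns(solution, cost, destroy, repair, iterations=1000):
    """Generic Large Neighborhood Search skeleton.

    destroy(solution) relaxes ("frees") a part of the solution;
    repair(partial) rebuilds a complete solution, ideally solving
    the freed part to optimality. Only improving candidates are kept.
    """
    best, best_cost = solution, cost(solution)
    for _ in range(iterations):
        candidate = repair(destroy(best))
        candidate_cost = cost(candidate)
        if candidate_cost < best_cost:
            best, best_cost = candidate, candidate_cost
    return best, best_cost
```

The strength of LNS lies in the repair step: because a large part of the solution is rebuilt at once, the search can jump much further than a classical small-neighborhood move.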

Because our meta-heuristics never stop on their own, we have seen how to implement customized SearchLimits that stop the search when a specific stopping criterion is reached. When you don't know what to do, we have seen that you can use default search strategies that evolve dynamically. In particular, we have seen the DefaultIntegerSearch DecisionBuilder and how restarting a search might help.
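The idea behind a customized search limit can be sketched in a few lines. The class below is an illustration only, not the or-tools SearchLimit class: it combines two common stopping criteria, a wall-clock budget and a cap on consecutive checks without improvement:

```python
import time

class StoppingCriterion:
    """Illustrative stopping criterion for a never-ending meta-heuristic.

    Stops when the wall-clock budget is exhausted or when no improving
    solution has been seen for `stall_limit` consecutive checks.
    """
    def __init__(self, max_seconds=None, stall_limit=None):
        self.max_seconds = max_seconds
        self.stall_limit = stall_limit
        self.start = time.monotonic()
        self.best = float('inf')
        self.stall = 0

    def check(self, current_cost):
        """Return True if the search should stop now."""
        if current_cost < self.best:
            self.best, self.stall = current_cost, 0
        else:
            self.stall += 1
        if (self.max_seconds is not None
                and time.monotonic() - self.start >= self.max_seconds):
            return True
        if self.stall_limit is not None and self.stall >= self.stall_limit:
            return True
        return False
```

In practice such a criterion is queried once per candidate solution, so the stall counter directly measures how long the meta-heuristic has been wandering without progress.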

7.9.1. A little word on the efficiency of meta-heuristics

As we have seen in this chapter, our three meta-heuristics aren't a silver bullet, and this is to be expected: our implementations are generic by design. To solve a problem efficiently, you really need to exploit the structure of the problem to help the solver as much as possible. We have also seen that the quality of the LocalSearchOperators involved is crucial: the meta-heuristics can only guide the Local Search. If the Local Search visits only a small part of the search space, the meta-heuristics are helpless.

The strength of meta-heuristics based on Local Search comes from learning while searching. First of all, most meta-heuristics depend on some parameters, and finding the right parameters to solve a specific instance is in itself a hard problem. Second, these parameters can be changed dynamically during the search, and the most efficient searches evolve dynamically. Think about intensification versus diversification: Tabu Search learns which features to forbid or keep, and Guided Local Search penalizes some features (and thus rewards others). Learning which parameters to favor happens during the search; this is known as reactive search. One could say that escaping a local optimum is only a (although very important) byproduct of this learning process: the meta-heuristic "understands" it is stuck and seeks to move past this local optimum, taking the history of the search into account.
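The kind of feature-based learning mentioned above can be illustrated with the classical Guided Local Search penalty update: at each local optimum, the features with maximal utility cost/(1 + penalty) get their penalty increased. This is a generic textbook sketch, not the or-tools code; the dictionaries and the `in_solution` predicate are our own illustrative interface:

```python
def gls_update_penalties(features, costs, penalties, in_solution):
    """One Guided Local Search penalty step at a local optimum.

    Among the features present in the current solution, those with
    maximal utility cost / (1 + penalty) are penalized. Frequently
    penalized features thus become progressively less attractive to
    penalize again, spreading the penalties over time.
    Assumes at least one feature of the solution is present.
    """
    utilities = {
        f: costs[f] / (1 + penalties[f])
        for f in features if in_solution(f)
    }
    max_utility = max(utilities.values())
    for f, u in utilities.items():
        if u == max_utility:
            penalties[f] += 1
    return penalties
```

The penalties are then added (with a weight) to the objective seen by the Local Search, which is exactly how a local optimum of the original objective stops being a local optimum of the augmented one.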

Google or-tools open source library, User's Manual
