7.4. Tabu Search (TS)

This meta-heuristic was invented in the eighties and has been quite successful at producing very good solutions for most problems. Well implemented, it can be very efficient. We describe here our generic, and therefore simple, implementation. We’ll again develop and discuss the whole code. Not only will you know exactly what to expect from our implementation, but it can also serve as an example for your own implementation of a meta-heuristic with the or-tools library.

7.4.1. The basic idea[1]

The basic idea is to avoid being trapped in a local optimum by making some features of a solution tabu: we don’t want to produce any solution with these features for a certain period of time. This period of time is called a tenure. If we choose these features well, not only do we have the guarantee that we will not reproduce the local optimum again (because it has these features) but we might also get out of the vicinity of this local optimum and explore more promising neighborhoods. This is called the diversification phase: we seek solutions with different features from those obtained previously. Once you find a good solution, you might want to explore solutions “close” to it. If you manage to identify which features are important for a solution, you might want to keep them to explore similar solutions. This is called the intensification phase: we seek solutions with features similar (or identical) to those of one (or several) given solution(s). We keep two lists: one for forbidden features and one for features to keep in the next solutions. The Tabu Search oscillates between these two phases: diversification to avoid being trapped in local optima and to explore other parts of the search space, and intensification to search a promising neighborhood in more detail.
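
The following toy example is not or-tools code: it is a minimal, self-contained C++ sketch (the cost function, the interval [0, 20], the tenure and the neighborhood are all invented for the illustration) of the diversification side of these mechanics: recently visited values are tabu for a fixed tenure, and the aspiration criterion overrides the tabu status of any move that improves the best solution found so far.

#include <initializer_list>
#include <iostream>
#include <map>

// Toy cost function: minimize |x - 7| over x in [0, 20].
int Cost(int x) { return x < 7 ? 7 - x : x - 7; }

int main() {
  const int kTenure = 4;          // a visited value stays tabu for 4 iterations
  int current = 20;
  int best = current;
  std::map<int, int> tabu_until;  // value -> last iteration at which it is tabu
  for (int iter = 1; iter <= 30; ++iter) {
    int chosen = -1;
    for (int next : {current - 1, current + 1}) {   // the two neighbors
      if (next < 0 || next > 20) continue;
      const bool is_tabu =
          tabu_until.count(next) > 0 && tabu_until[next] >= iter;
      const bool aspiration = Cost(next) < Cost(best);
      if (is_tabu && !aspiration) continue;  // tabu move and no aspiration: skip
      if (chosen == -1 || Cost(next) < Cost(chosen)) chosen = next;
    }
    if (chosen == -1) break;               // every neighbor is tabu: stop
    tabu_until[current] = iter + kTenure;  // forbid coming back to current
    current = chosen;
    if (Cost(current) < Cost(best)) best = current;
  }
  std::cout << "best x = " << best << " with cost " << Cost(best) << std::endl;
  return 0;
}

Started at x = 20, the walk reaches the optimum x = 7, is then pushed away by the tabu entries, and stops when both neighbors of the current value are tabu; best still reports the optimum.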

We have only scratched the surface of the Tabu Search meta-heuristic. If you want to know more about TS, the classical book by Glover and Laguna [Glover1997] is still a good reference. Another good reference is [Gendreau2005]. For an up-to-date account of this topic, search the Internet: there are plenty of documents about TS.

7.4.2. Our implementation

Our implementation only deals with IntVar variables. Because our implementation is quite generic and can be used with any LocalSearchOperator and any problem, the features of a solution we consider are the given IntVar variables and their values. In intensification mode, we keep certain variables fixed, i.e. bound to a certain value, and in diversification mode, we forbid some variables to take some values. Actually, we mix both modes: we use two tenures, keep_tenure and forbid_tenure. We also keep two lists of variables, keep_tabu_list_ and forbid_tabu_list_: variables in keep_tabu_list_ must keep their values and variables in forbid_tabu_list_ cannot take the corresponding values. To see which variables are selected and added to those lists, you have to look at the code below. They are kept in the lists for a certain tenure. The tenure can be seen as a certain amount of time or a certain number of iterations. In our case, an iteration is counted each time a new neighbor (candidate solution) is selected by the Local Search algorithm. We keep a variable stamp_ that counts the number of iterations. Because we only start the Tabu Search after the first local optimum is reached, this variable stays at 0 until then. After that, every time a candidate solution is selected (including the starting solution when the Local Search starts again after finding a local optimum), this variable is incremented.

Our aspiration criterion is simple: a solution is accepted, no matter what, if it is better than any other solution encountered so far. Because of this aspiration criterion, our TabuSearch Metaheuristic only adds constraints: no variable has its domain trimmed or its value fixed directly. These additional constraints always allow a better solution to be accepted. To do so, we use a Boolean IntVar aspiration variable and a Boolean IntVar tabu variable: one of the two has to be equal to 1.

7.4.2.1. Some helpers

To record which value a given variable should keep (or avoid) and for how long, we define a simple VarValue struct:

struct VarValue {
  VarValue(IntVar* const var, int64 value, int64 stamp)
        : var_(var), value_(value), stamp_(stamp) {}
  IntVar* const var_;
  const int64 value_;
  const int64 stamp_;
};

Our lists will be declared as:

typedef std::list<VarValue> TabuList;

The std::list can be used as a FIFO list, i.e. a queue[2]. You can add an element at the beginning of the queue/list with its push_front() method and remove the element at the end of the queue/list with its pop_back() method. To update a list with respect to the time/iterations/tenures, we chop off the tail of the list whenever an element is outdated:

void AgeList(int64 tenure, TabuList* list) {
  while (!list->empty() && list->back().stamp_ < stamp_ - tenure) {
    list->pop_back();
  }
}

The method actually called is AgeLists(): it updates both lists and increments the stamp_ variable.

void AgeLists() {
  AgeList(keep_tenure_, &keep_tabu_list_);
  AgeList(forbid_tenure_, &forbid_tabu_list_);
  ++stamp_;
}
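
To see the effect of this chopping, here is a small standalone toy, not part of the TabuSearch class (plain ints stand in for the VarValue entries): each iteration pushes the current stamp at the front and then ages the list with a tenure of 3, so the list never holds more than tenure + 1 entries.

#include <iostream>
#include <list>

int main() {
  const int tenure = 3;
  std::list<int> tabu;  // each element is the stamp at which it was added
  for (int stamp = 0; stamp < 8; ++stamp) {
    tabu.push_front(stamp);                        // newest entry at the front
    while (!tabu.empty() && tabu.back() < stamp - tenure) {
      tabu.pop_back();                             // chop off outdated entries
    }
    std::cout << "stamp " << stamp << ": " << tabu.size()
              << " active entries" << std::endl;
  }
  return 0;
}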

7.4.2.2. The constructor and the variables

Let’s start with the (private) variables of the TabuSearch class:

const std::vector<IntVar*> vars_;
Assignment assignment_;
int64 last_;
TabuList keep_tabu_list_;
int64 keep_tenure_;
TabuList forbid_tabu_list_;
int64 forbid_tenure_;
double tabu_factor_;
int64 stamp_;
bool found_initial_solution_;

vars_ are the variables to use in both Tabu lists. Because of the genericity of our implementation, these variables must be used (changed) by the LocalSearchOperator[3]. We keep an Assignment with the last found solution and store its objective value in the last_ variable. Together with the variables inherited from the Metaheuristic class, this allows us to play with the last_, current_ and best_ values.

Both Tabu lists and the corresponding tenures have quite explicit names: keep_tabu_list_ and keep_tenure_ for the list of features to keep, and forbid_tabu_list_ and forbid_tenure_ for the list of forbidden features.

The tabu_factor_ variable is the fraction of the Tabu variables that must follow the Tabu guidelines. If it equals 1.0, all of them must follow the Tabu guidelines; in other words, no violation of the Tabu criteria is allowed (except when the aspiration criterion kicks in). If it equals 0.0, none of them has to follow the Tabu guidelines; in other words, the Tabu criteria can be fully violated. For instance, if the two Tabu lists currently contribute 20 reified constraints and tabu_factor_ is 0.6, at least 12 of them must hold (unless the aspiration criterion applies).

stamp_ counts the number of iterations. Until the Tabu Search is really launched, this counter stays at 0. Finally, found_initial_solution_ is a Boolean that indicates whether an initial solution has been found.

The constructor is quite straightforward:

TabuSearch(Solver* const s,
           bool maximize,
           IntVar* objective,
           int64 step,
           const std::vector<IntVar*>& vars,
           int64 keep_tenure,
           int64 forbid_tenure,
           double tabu_factor)

step denotes the decrease (or increase) of the objective value required after each feasible solution found. Because it is not dynamically changed, you probably want to keep this variable at 1. This value is kept in the inherited variable step_.

Now that we have met the variables and helpers, we can discuss the callbacks of the TabuSearch class. We order them below so as to gradually build a big picture of our algorithmic implementation, not in the order in which they are called.

7.4.2.3. LocalOptimum()

The LocalOptimum() method is called whenever a nested Local Search is finished. If one SearchMonitor returns true in its LocalOptimum callback, the Local Search is restarted and the search continues.

bool LocalOptimum() {
  AgeLists();
  if (maximize_) {
    current_ = kint64min;
  } else {
    current_ = kint64max;
  }
  return found_initial_solution_;
}

This method launches the Tabu Search. Indeed, by calling AgeLists(), the stamp_ variable is incremented and all the code guarded by the if (0 != stamp_) condition (see below) can be executed. As the Local Search will be relaunched, this method also updates the current_ variable. Finally, if an initial solution has been found, the method returns true to continue the Local Search. If no solution has been found previously, the search is aborted.

7.4.2.4. AcceptNeighbor()

The AcceptNeighbor() method is called whenever a candidate solution is selected to be the next solution in the Local Search algorithm[4]. More precisely, it is called in the Next() method of the FindOneNeighbor DecisionBuilder. This is called an iteration in the Local Search.

void TabuSearch::AcceptNeighbor() {
  if (0 != stamp_) {
    AgeLists();
  }
}

If we have already reached a local optimum in the Local Search, we age both lists and increment the stamp_ variable to acknowledge the iteration.

7.4.2.5. AtSolution()

The AtSolution() method is called whenever a solution is found and accepted in the Local Search. It is called in the NextSolution() method of the current search of the CP Solver.

 1  bool AtSolution() {
 2    if (!Metaheuristic::AtSolution()) {
 3      return false;
 4    }
 5    found_initial_solution_ = true;
 6    last_ = current_;
 7
 8    if (0 != stamp_) {
 9      for (int i = 0; i < vars_.size(); ++i) {
10        IntVar* const var = vars_[i];
11        const int64 old_value = assignment_.Value(var);
12        const int64 new_value = var->Value();
13        if (old_value != new_value) {
14          VarValue keep_value(var, new_value, stamp_);
15          keep_tabu_list_.push_front(keep_value);
16          VarValue forbid_value(var, old_value, stamp_);
17          forbid_tabu_list_.push_front(forbid_value);
18        }
19      }
20    }
21    assignment_.Store();
22
23    return true;
24  }

On lines 2 to 6, we update the current_, last_ and best_ values. We also set found_initial_solution_ since we have a solution and hence an initial solution. On line 21 we store the current solution. This allows us to keep the values of the variables and compare them with a new solution, as we do on lines 8 to 20. These lines are only processed if we have already reached our first local optimum (if (0 != stamp_) {...}). If this is the case, we consider, among the variables given to the constructor, those that have changed since the last solution (if (old_value != new_value) {...}). We force those variables to keep their new values for keep_tenure iterations and forbid them to take their old values for forbid_tenure iterations. For instance, if a variable had value 3 in the previous solution and 5 in the new one, the pair (variable, 5) is pushed on keep_tabu_list_ and the pair (variable, 3) on forbid_tabu_list_. Depending on the value of forbid_tenure, you can forbid a variable to take several values.

Because we want the search to resume, we return true.

7.4.2.6. ApplyDecision()

The ApplyDecision() method is called when a Decision is about to be applied. This is the place to add the constraints.

 1  void TabuSearch::ApplyDecision(Decision* const d) {
 2    Solver* const s = solver();
 3    if (d == s->balancing_decision()) {
 4      return;
 5    }
 6
 7    IntVar* aspiration = s->MakeBoolVar();
 8    if (maximize_) {
 9      s->AddConstraint(
10        s->MakeIsGreaterOrEqualCstCt(objective_, best_ + step_,
11                                     aspiration));
12    } else {
13      s->AddConstraint(
14        s->MakeIsLessOrEqualCstCt(objective_, best_ - step_,
15                                  aspiration));
16    }
17
18    std::vector<IntVar*> tabu_vars;
19    for (const VarValue& vv : keep_tabu_list_) {
20      IntVar* tabu_var = s->MakeBoolVar();
21      Constraint* keep_cst =
22          s->MakeIsEqualCstCt(vv.var_, vv.value_, tabu_var);
23      s->AddConstraint(keep_cst);
24      tabu_vars.push_back(tabu_var);
25    }
26    for (const VarValue& vv : forbid_tabu_list_) {
27      IntVar* tabu_var = s->MakeBoolVar();
28      Constraint* forbid_cst =
29          s->MakeIsDifferentCstCt(vv.var_, vv.value_, tabu_var);
30      s->AddConstraint(forbid_cst);
31      tabu_vars.push_back(tabu_var);
32    }
33    if (tabu_vars.size() > 0) {
34      IntVar* tabu = s->MakeBoolVar();
35      s->AddConstraint(s->MakeIsGreaterOrEqualCstCt(
36          s->MakeSum(tabu_vars)->Var(),
37          tabu_vars.size() * tabu_factor_,
38          tabu));
39      s->AddConstraint(s->MakeGreaterOrEqual(s->MakeSum(aspiration, tabu),
40                                             1LL));
41    }
42
43    if (maximize_) {
44      const int64 bound = (current_ > kint64min) ?
45          current_ + step_ : current_;
46      s->AddConstraint(s->MakeGreaterOrEqual(objective_, bound));
47    } else {
48      const int64 bound = (current_ < kint64max) ?
49          current_ - step_ : current_;
50      s->AddConstraint(s->MakeLessOrEqual(objective_, bound));
51    }
52
53    if (found_initial_solution_) {
54      s->AddConstraint(s->MakeNonEquality(objective_, last_));
55    }
56  }

After quickly screening the content of this method, you might be curious about why we didn’t encapsulate any code in an if (0 != stamp_) {...} block. The reason is simple: we need to be able to “revert” to “any” solution at any time if the aspiration criterion is used. At the same time (lines 43 to 51), we force the solver to produce better solutions, as an OptimizeVar SearchMonitor would[5]. We also add (lines 53 to 55) a constraint forbidding solutions with the same objective value as the last one, and thus avoid plateaus (i.e. successive solutions with the exact same objective value). Let’s discuss the other lines.

Lines 3 to 5 are technical. Whenever we get the unique BalancingDecision, we know that we shouldn’t do anything and let it pass. Lines 7 to 16 are about our aspiration criterion: we accept a neighbor (candidate solution) if it improves the best solution found so far. These constraints allow solutions that will be found in the future and are considered Tabu to be accepted anyway. Lines 18 to 41 add the constraints for the Tabu Search mechanism itself: first (lines 19 to 25) to keep some features, then (lines 26 to 32) to forbid some other features. Each time, a BoolVar variable corresponding to the added equality (lines 21 to 23) or inequality (lines 28 to 30) is created and added to the std::vector<IntVar*> tabu_vars list. This list is then used (lines 35 to 38) to control how many of these variables (at least tabu_vars.size() * tabu_factor_) must really follow the Tabu criteria. Finally, line 39 adds a constraint to balance the Tabu and aspiration criteria.

7.4.3. First results

The code found in the file jobshop_ts1.cc is very similar to the code in jobshop_ls1.cc from the previous chapter. We only use the SwapIntervals LocalSearchOperator in the Local Search to solve the Job-Shop Problem[6] because we want to quickly reach a local optimum and compare the Local Search with and without Tabu Search.

As in jobshop_ls1.cc, we don’t devise any specialized search strategy and we use the basic Solver::CHOOSE_RANDOM_RANK_FORWARD strategy to rank all tasks and then the basic Solver::CHOOSE_FIRST_UNBOUND with Solver::ASSIGN_MIN_VALUE to schedule each task at its earliest start time.
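
As a reminder, this strategy can be set up along the following lines. This is a hedged sketch: names such as all_sequences and objective_var stand for the sequences and the makespan variable of the example and are not quoted verbatim from jobshop_ts1.cc.

// Rank all the tasks on all the machines, choosing the sequences randomly.
DecisionBuilder* const rank_all_tasks = solver.MakePhase(
    all_sequences, Solver::CHOOSE_RANDOM_RANK_FORWARD);
// Then schedule: assigning the objective (makespan) its minimal value
// propagates the earliest start time to every ranked task.
DecisionBuilder* const schedule = solver.MakePhase(
    objective_var, Solver::CHOOSE_FIRST_UNBOUND, Solver::ASSIGN_MIN_VALUE);
DecisionBuilder* const main_phase = solver.Compose(rank_all_tasks, schedule);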

One main difference with the code in jobshop_ls1.cc is that we don’t use an OptimizeVar SearchMonitor but rely on the TabuSearch Metaheuristic to minimize the objective_var IntVar variable.

What variables will we use with the TabuSearch class? As it only accepts (pointers to) IntVars, we use the associated IntVars that represent the “ranking” of the IntervalVars in the ranked sequences[7]. This is exactly what the Next() method of the SequenceVar class returns:

std::vector<IntVar*> tabu_vars;
for (int seq = 0; seq < all_sequences.size(); ++seq) {
  SequenceVar* seq_var = all_sequences[seq];
  for (int interval = 0; interval < seq_var->size(); ++interval) {
    IntVar* next = seq_var->Next(interval);
    tabu_vars.push_back(next);
  }
}

To create a TabuSearch instance, there is the following factory method:

SearchMonitor* Solver::MakeTabuSearch(bool maximize, IntVar* const v,
                           int64 step, const std::vector<IntVar*>& vars,
                           int64 keep_tenure, int64 forbid_tenure,
                           double tabu_factor) {
    return RevAlloc(new TabuSearch(this, maximize, v, step, vars,
                                   keep_tenure, forbid_tenure,
                                   tabu_factor));
}

We use it like so:

SearchMonitor* tabu_search = solver.MakeTabuSearch(false,
                                                    objective_var,
                                                    1,
                                                    tabu_vars,
                                                    FLAGS_keep_tenure,
                                                    FLAGS_forbid_tenure,
                                                    FLAGS_tabu_factor);
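
The TabuSearch monitor is then simply passed, together with the other SearchMonitors, to Solve() when the Local Search is launched. Here is a hedged sketch: ls_phase, limit and collector are assumed names for the Local Search DecisionBuilder, the SearchLimit and the SolutionCollector of the example, not code quoted from jobshop_ts1.cc.

std::vector<SearchMonitor*> monitors;
monitors.push_back(tabu_search);   // the TabuSearch meta-heuristic
monitors.push_back(limit);         // e.g. the NoImprovementLimit SearchLimit
monitors.push_back(collector);     // keeps the best solution found
// ls_phase is the Local Search DecisionBuilder, built as in jobshop_ls1.cc.
solver.Solve(ls_phase, monitors);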

Let’s compare the results of jobshop_ls1.cc and jobshop_ts1.cc for different parameters of the Tabu Search on the problem defined in file abz9. But first, recall that the Local Search in jobshop_ls1.cc found a local optimum (26th solution, after 35 seconds[8]) with a value of 1051 (and the Local Search stops there). For the file abz9, we have 300 Tabu variables.

We now use the Tabu Search to try to improve this result. Whenever the Local Search reaches this local optimum, the Tabu Search kicks in and hopefully gets us out of this local optimum trap. We use the NoImprovementLimit SearchLimit with the number of solutions accepted without any improvement in the objective value set to 30 and no time limit.

The next table compares some results obtained with different Tabu parameters to solve the problem in file abz9.

K    F    TF    #Sol   Time (s)   #Best   value   Time (s)
10   5    1.0   123    202.235    92      948     145.135
10   10   1.0   123    199.682    93      948     143.416
5    10   1.0   157    294.816    127     987     249.965
15   10   1.0   123    289.651    93      948     237.480
10   0    1.0   123    154.212    92      948     119.058
10   30   1.0   77     199.009    46      1025    62.842
10   30   0.6   61     90.186     30      1049    40.498
10   10   0.6   61     106.086    30      1049    40.574
10   10   0.2   61     92.421     30      1049    40.576
10   10   0.0   57     87.766     26      1051    35.517

The first three columns show the Tabu parameters: K is the keep_tenure value, F the forbid_tenure and TF the tabu_factor. The next two columns give respectively the total number of solutions generated (#Sol) and the total time in seconds (Time (s)) that the corresponding algorithm took (when stopped by our SearchLimit). The last three columns are about the best solution found: its number (#Best), its value (value) and the number of seconds needed to find it (Time (s)).

Some preliminary comments. First of all, the Tabu Search improves the search in most cases, a good sign. The only case (last line) where it doesn’t improve the search is when we use a tabu_factor of 0.0: no Tabu criterion needs to be fulfilled. The Tabu Search relaunches the Local Search without any improvement and the NoImprovementLimit kicks in after 30 more solutions. Second, the Tabu Search is quite robust. The first five lines give similar results for similar parameters. It seems that keeping some features of a solution is more important than forbidding some. Third, softening the Tabu criterion (tabu_factor < 1) doesn’t seem to work in this particular case.

Can we do better than our best Local Search described in jobshop_ls3.cc? Its best objective value was 931. You can read all about it in the section Results.

We added a Tabu Search method in file jobshop_ts2.cc to complete the code in jobshop_ls3.cc[9]. With the default parameters and the Tabu parameters keep_tenure=10, forbid_tenure=5 and tabu_factor=1.0, we obtain an objective value of 924 (in 201.604 seconds). Much better than 931 but still very far from the optimal value of 679. And we had to work roughly twice as hard to go from 931 to 924. This is typical. To get even better results, one must implement specialized algorithms.

Footnotes

[1]We only give you a very basic idea, barely enough to understand our implementation. Actually, some Tabu Search experts might not even agree with our presentation of the main idea.
[2]FIFO stands for First In, First Out. Basically, a queue is a data structure that allows you to add elements and retrieve them in their order of appearance.
[3]Variables that are not changed by the LocalSearchOperator will not enter any Tabu list.
[4]I.e. filtered and accepted, see the section The basic Local Search algorithm and the callback hooks for the SearchMonitors if needed.
[5]Although the OptimizeVar SearchMonitor does not add constraints.
[6]If you don’t remember anything about the Job-Shop Problem, don’t panic. You can read about it in the previous chapter and if you prefer you can completely skip the problem definition and our implementation to solve it (but not the use of LocalSearchOperators as our implementation only works with Local Search). We only discuss here the use of the TabuSearch meta-heuristic.
[7]You might want to reread the section Variables about variables used for scheduling in or-tools.
[8]You might be surprised by these 35 seconds if you compare them to the 51 seconds needed in the previous chapter. Of course, we used another computer.
[9]We use an OptimizeVar in file jobshop_ts2.cc because the initial solution is found by Local Search. This OptimizeVar variable is not used in the Tabu Search.

Bibliography

[Glover1997]F. Glover and M. Laguna. Tabu Search, Kluwer Academic Publishers, 1997.
[Gendreau2005]M. Gendreau and J.-Y. Potvin. Tabu search. In E. K. Burke and G. Kendall, editors, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques. Springer-Verlag, 2005.