1.2. What is constraint programming?

Constraint Programming (CP) is an emerging field in operations research[1]. It is based on feasibility (i.e. finding a feasible solution) rather than optimization (i.e. finding an optimal solution) and focuses on the constraints and the variables' domains rather than the objective function.

Although quite new, CP already possesses a strong theoretical foundation, a widespread and very active community around the world with dedicated scientific journals and conferences, and an arsenal of different solving techniques.

CP has been successfully applied in planning and scheduling and in numerous other problems with heterogeneous constraints (see section Real examples for a description of some of its achievements).

The problems CP (mostly) deals with are called Constraint Satisfaction Problems (CSPs). Roughly, a CSP is a mathematical model with constraints, and the goal is to find a feasible solution, i.e. to assign values to the variables of the model such that every constraint is satisfied. One of the most well-known such problems is the Boolean SATisfiability Problem (boolean SAT). (See Wikipedia's Constraint satisfaction problem and Boolean satisfiability problem entries.)
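To make this concrete, a tiny CSP can be sketched as domains plus constraints, solved by brute-force enumeration. This is only an illustrative toy (the variable names and the two constraints are invented for the example); real solvers are far more clever than exhaustive search:

```python
from itertools import product

# A toy CSP: three variables with domain {0, 1, 2} and two constraints.
# (Hypothetical example, not tied to any particular solver library.)
domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}
constraints = [
    lambda a: a["x"] != a["y"],           # x and y must differ
    lambda a: a["x"] + a["y"] == a["z"],  # z is their sum
]

def solve(domains, constraints):
    """Enumerate assignments; return the first satisfying every constraint."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None  # the CSP is infeasible

solution = solve(domains, constraints)
```

Note that `solve` stops at the first feasible assignment: feasibility, not optimality, is the goal.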

Warning

This section was written with different readers in mind: those described in the preface, but also our colleagues from operations research who are new to CP. From time to time, we compare CP with their field and use some jargon. Don't be afraid if you don't understand these asides; just read on.

Constraint Programming does optimization too!

When faced with an optimization problem, CP first finds a feasible solution x_0 with an objective value of z(x_0). It then adds the constraint z(x) < z(x_0) and tries to find a feasible solution for this enhanced model.

The same trick is applied again and again until the addition of the constraint z(x) < z(x_i) for a feasible solution x_i renders the model infeasible, i.e. there is no feasible solution left. The last feasible solution x_i is thus an optimal solution.
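This loop can be sketched in a few lines. The model below (minimize z(x) = x_0 + x_1 subject to x_0 · x_1 ≥ 2 with x_0, x_1 ∈ {1, …, 4}) is a toy invented for illustration, and `first_feasible` stands in for a real solver's feasibility search:

```python
from itertools import product

def first_feasible(extra_constraints):
    """Return the first assignment satisfying the model plus extra constraints."""
    for x in product(range(1, 5), repeat=2):
        if x[0] * x[1] >= 2 and all(c(x) for c in extra_constraints):
            return x
    return None  # infeasible

def objective(x):
    return x[0] + x[1]

extra = []
best = first_feasible(extra)
while True:
    bound = objective(best)
    extra.append(lambda x, b=bound: objective(x) < b)  # add z(x) < z(x_i)
    candidate = first_feasible(extra)
    if candidate is None:  # model became infeasible:
        break              # the last solution found is optimal
    best = candidate
```

Each iteration tightens the bound on the objective; when the tightened model has no feasible solution, `best` holds an optimum.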

1.2.1. Strength of Constraint Programming

Two of the main assets of CP are:

  • the ease to model a problem and
  • the possibility to add heterogeneous constraints.

1.2.1.1. The ease to model a problem

If you are used to (non-)linear programming, you know how difficult it can be to model some constraints (forcing them to be linear, using a big M for disjunctions, replacing one constraint by a bunch of linear constraints, relying on the direction of optimization (minimizing or maximizing), etc.). None of this happens in CP, because constraints can be of any kind. They don't even have to be numerical and can deal with symbolic variables! This allows you to model your problems in a very natural fashion.

One of the most well-known global constraints is the \text{AllDifferent} constraint. This constraint ensures that the variables have different values in a feasible solution. For instance \text{AllDifferent}(t_0, t_1, t_2) forces the three variables t_0, t_1 and t_2 to have different values. Say that t_0, t_1 and t_2 can take the integer values in [0,2].

Compare

\text{AllDifferent}(t_0, t_1, t_2)

to the classical way (see [Williams2001]) of translating this constraint in linear integer programming for instance:

\begin{array}{rccl}
  t_i - \sum_{j=0}^2 j \lambda_{ij} & = & 0 & \forall \, i\\
  \sum_{j=0}^2 \lambda_{ij}         & = & 1 & \forall \, i\\
  \sum_{i=0}^2 \lambda_{ij}         & \leqslant & 1 & \forall \, j
\end{array}

To model the \text{AllDifferent}(t_0, \ldots, t_{n-1}) constraint[2] with t_i \in \, [0, n-1], we already need n^2 auxiliary variables \lambda_{ij}:

\lambda_{ij} = \left\{
\begin{array}{l l}
1 & \quad \text{if $t_i$ takes value $j$}\\
0 & \quad \text{otherwise}\\
\end{array} \right.

and 3n linear constraints!
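As a quick sanity check of the encoding above, we can verify by brute force (for n = 3) that it accepts exactly the assignments where the t_i are pairwise different. Given t, the first two families of constraints force λ_{ij} = 1 exactly when t_i takes value j, so we can build λ directly from t; the third family then does the real work:

```python
from itertools import product

n = 3

def linear_feasible(t):
    """Check the lambda encoding of AllDifferent for assignment t (toy check)."""
    # lambda_{ij} = 1 iff t_i takes value j, as forced by the first two
    # families of constraints.
    lam = [[1 if t[i] == j else 0 for j in range(n)] for i in range(n)]
    rows = all(t[i] == sum(j * lam[i][j] for j in range(n)) for i in range(n))
    one_value = all(sum(lam[i][j] for j in range(n)) == 1 for i in range(n))
    at_most_once = all(sum(lam[i][j] for i in range(n)) <= 1 for j in range(n))
    return rows and one_value and at_most_once

# The encoding agrees with AllDifferent on every assignment:
for t in product(range(n), repeat=n):
    assert linear_feasible(t) == (len(set(t)) == n)
```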

Of course, if \text{AllDifferent}(t_0, t_1, t_2) were simply replaced by its linear integer programming translation, it would only be syntactic sugar, but it is not. Specialized and efficient propagation algorithms were (and still are!) developed to ensure that t_0, t_1 and t_2 keep different values during the search.
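One of the simplest such propagation rules can be sketched as follows: whenever a variable's domain shrinks to a single value, remove that value from every other domain, and repeat until a fixpoint. This is only a minimal illustration; production solvers use much stronger filtering (e.g. Régin's matching-based algorithm for AllDifferent):

```python
def propagate_all_different(domains):
    """Prune a list of domains (sets) in place; return False on a wipe-out."""
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(domains):
            if len(d) == 1:
                (v,) = d  # variable i is fixed to v
                for j, other in enumerate(domains):
                    if j != i and v in other:
                        other.discard(v)  # v is no longer available to j
                        changed = True
                        if not other:
                            return False  # empty domain: infeasible
    return True

doms = [{0}, {0, 1}, {0, 1, 2}]
ok = propagate_all_different(doms)
```

Here fixing t_0 = 0 forces t_1 = 1, which in turn forces t_2 = 2, all without any search.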

Numerous specialized and general global constraints exist. The Global Constraint Catalog references 354 global constraints at the time of writing.

Because CP deals locally[3] with each constraint, adding constraints, even on the fly (i.e. during the search), is not a problem. This makes CP a perfect framework for prototyping and testing ideas: you can change the model without changing (too much of) your search strategy/algorithm.

1.2.1.2. The possibility to add heterogeneous constraints

Because the types of relationships among variables that can be modelled in CP are quite broad[4], you can play with quite heterogeneous constraints and mix all types of variables.

One of the curiosities of CP is its ability to deal with meta-constraints: constraints on constraints!

Take for instance the \text{Element} constraint. Let [x_0, \ldots, x_{n-1}] be an array of integer variables with domain \{0,\ldots, n-1\}, y an integer variable with domain contained in \{0,\ldots, n-1\} and z an integer variable with domain \{0,\ldots, n-1\}. The \text{Element} constraint assigns the y^{\text{th}} variable in [x_0, \ldots, x_{n-1}] to z, i.e.:

z = x_y.

If you change y or the array [x_0, \ldots, x_{n-1}], z changes accordingly. But remember that this is an equality, so it works the other way around too: if you change z, then y and/or the array [x_0, \ldots, x_{n-1}] will have to change!
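This two-way behaviour can be sketched as a filtering step over plain Python sets (a toy, not any solver's API): given domains for the x_i, y and z, keep only the values that appear in some combination consistent with z = x_y:

```python
from itertools import product

def filter_element(x_domains, y_domain, z_domain):
    """Return pruned (x_domains, y_domain, z_domain) keeping supported values."""
    supports = []  # all consistent (x values, y, z) combinations
    for xs in product(*x_domains):
        for y in y_domain:
            z = xs[y]
            if z in z_domain:
                supports.append((xs, y, z))
    new_x = [set(xs[i] for xs, _, _ in supports) for i in range(len(x_domains))]
    new_y = {y for _, y, _ in supports}
    new_z = {z for _, _, z in supports}
    return new_x, new_y, new_z

# x_0 in {0, 1}, x_1 = 2, y in {0, 1}; fixing z = 2 propagates *backwards*
# onto y, since only x_1 can possibly equal 2:
x_doms = [{0, 1}, {2}]
y_dom = {0, 1}
z_dom = {2}
x_doms, y_dom, z_dom = filter_element(x_doms, y_dom, z_dom)
```

After filtering, y is forced to 1: the constraint has propagated from z back to y, exactly the "other way around" described above.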

This technique is called reification and you can learn more about it in the chapter Reification.

The ease to model a problem and the possibility to add heterogeneous constraints sometimes make CP the preferred or only framework to model some difficult problems with a lot of side-constraints.