## Abstract-only Presentations

*Text for each abstract is provided below the list of authors
and titles*

Jomu George Mani Paret and Otmane Ait Mohamed

Domain reduction of input variables of constraint satisfaction problem by using a new consistency algorithm

Alejandro Arbelaez and Philippe Codognet

Parallel Local Search for SAT

Philippe Jegou, Lionel Paris, Nicolas Prcovic and Cyril Terrioux

On a syntactic characterization of arc consistency in CSPs

Timo Berthold

Measuring the impact of primal heuristics

Victor Zverovich and Robert Fourer

Connecting Constraint Solvers to AMPL

Lars Kotthoff and Barry O'Sullivan

Constraint-based Clustering

Steve Prestwich, Marco Laumanns and Ban Kawas

Symmetry Breaking in Scenario Generation

Touhid Ahmed

Modeling and Simulation Based Fitness Evaluation in Genetic Algorithm for Combinatorial Optimization

John Chinneck and Mubashsharul Shafique

Towards a Fast Heuristic for Global Optimization and MINLP

Jomu George Mani Paret and Otmane Ait Mohamed

**Domain reduction of input variables of constraint satisfaction problem by using a new consistency algorithm**

Coverage is used to obtain information about the execution of hardware description language (HDL) statements. It helps to determine how well the test cases verified the design under verification. Coverage-directed test generation (CDTG) techniques analyze coverage results and adapt the test generation process to improve coverage. One of the important components of a CDTG technique is the constraint solver. The constraint solver in CDTG requires large amounts of memory and takes a long time to generate results, which makes attaining the required coverage difficult. In this paper a new methodology (based on consistency search) to attain coverage is developed. This methodology is based on a new, efficient consistency algorithm called GACCC-2 (generalized arc consistency on conjunction of constraints). The main advantage of this methodology is that both memory usage and the time to generate a solution are reduced. The effectiveness of the algorithm was tested on 3-SAT problem instances and some benchmark CSPs. The proposed methodology was then used in the verification of a mini-MIPS processor, and some very encouraging results were obtained.

Alejandro Arbelaez and Philippe Codognet

**Parallel Local Search for SAT**

In this paper we study the performance of parallel local search solvers for SAT. Unlike previous work in the area, which focused on few cores (up to 16), in this paper we exploit parallelism up to 512 cores.

Philippe Jegou, Lionel Paris, Nicolas Prcovic and Cyril Terrioux

**On a syntactic characterization of arc consistency in CSPs**

It is now well known that the efficiency of CSP resolution relies on the use and implementation of efficient filtering algorithms for local consistencies such as AC (arc consistency). However, systematically applying such filtering during search can sometimes be very expensive. It might therefore be relevant to avoid the systematic execution of filterings after each choice point in a search tree. In this paper, to avoid this, we propose a syntactic characterization of the notion of arc consistency in CSPs. We show how it is possible to express, on the basis of an ordering of the variable domains, whether or not a CSP satisfies arc consistency. The main advantage of this syntactic expression is that detecting inconsistency in a MAC-type search does not require running a filtering algorithm during the search, thereby improving the efficiency of the search.
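For context, the filtering whose systematic execution the authors aim to avoid is standard arc consistency propagation. A minimal Python sketch of the classic AC-3 algorithm (the variable names and the example constraint are illustrative, not taken from the paper):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 filtering. `domains` maps a variable to a set of values;
    `constraints` maps a directed arc (x, y) to a predicate pred(vx, vy)
    that is True when the pair of values is allowed."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # remove values of x that have no support in the domain of y
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False  # domain wipe-out: inconsistency detected
            # re-examine arcs pointing at x, whose support may have vanished
            queue.extend(arc for arc in constraints
                         if arc[1] == x and arc[0] != y)
    return True

# Example: x < y with domains {1,2,3} prunes to x in {1,2}, y in {2,3}
doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
arcs = {('x', 'y'): lambda a, b: a < b, ('y', 'x'): lambda b, a: b > a}
ac3(doms, arcs)
```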

Timo Berthold

**Measuring the impact of primal heuristics**

In modern MIP solvers like the branch-cut-and-price framework SCIP, primal heuristics play a major role in finding and improving feasible solutions in the early steps of the solution process. However, classical performance measures for MIP, such as time to optimality or number of branch-and-bound nodes, reflect the impact of primal heuristics on the overall solving process rather badly. Reasons for this are that they typically depend on the convergence of the dual bound and that they only consider instances which can actually be solved within a given time limit. We discuss the question of how the quality of a primal heuristic should be evaluated and introduce a new performance measure, the "primal integral". It depends on the quality of solutions found during the solving process as well as on the point in time when they are found. Thereby, it assesses the impact of primal heuristics on the ability to find feasible solutions of good quality, in particular early during search. Finally, we discuss computational results for different classes of primal heuristics that are implemented in SCIP.
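In the formulation common in the literature (the talk's exact definition may differ in details), the primal integral accumulates a scaled primal gap over solving time, so that finding good incumbents early yields a small value. Sketched in LaTeX, with $\tilde{x}(t)$ the incumbent at time $t$ and $x_{\mathrm{opt}}$ an optimal or best known solution:

```latex
p(t) =
\begin{cases}
  1 & \text{if no incumbent exists at time } t,\\[2pt]
  0 & \text{if } c^{\top}\tilde{x}(t) = c^{\top}x_{\mathrm{opt}},\\[2pt]
  \dfrac{\bigl|c^{\top}\tilde{x}(t) - c^{\top}x_{\mathrm{opt}}\bigr|}
        {\max\bigl\{\,\bigl|c^{\top}\tilde{x}(t)\bigr|,\;
                     \bigl|c^{\top}x_{\mathrm{opt}}\bigr|\,\bigr\}}
    & \text{otherwise,}
\end{cases}
\qquad
P(T) = \int_{0}^{T} p(t)\,\mathrm{d}t .
```

A smaller $P(T)$ means feasible solutions of good quality were found earlier in the solving process.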

Victor Zverovich and Robert Fourer

**Connecting Constraint Solvers to AMPL**

We describe a recent project to connect the AMPL optimization modeling language to the constraint solvers IBM ILOG CP, Gecode and JaCoP. Our interfaces take advantage of logical forms such as constraint disjunctions, counting operators, and all-different constraints that were introduced into AMPL in the 1990s to complement conventional algebraic equalities and inequalities; the availability of these forms permits CP and MIP models to be solved and compared in the same context. While general-purpose modeling languages have previously been connected to constraint solvers, our work is distinguished by the AMPL language's connections to multiple CP solvers in addition to a broad variety of linear and nonlinear OR solvers. Because CP solvers show a great diversity of interface requirements, we incorporate new ideas for facilities to support conversion from AMPL's general expression-tree representation of logical constraints to the structures and calls required by different solvers. A number of examples illustrate the use of AMPL's logical forms in writing CP problems naturally and in trying them out on different solver platforms.

Lars Kotthoff and Barry O'Sullivan

**Constraint-based Clustering**

Machine learning and constraint programming are almost completely independent research fields. However, there are significant opportunities for synergy between them. In this presentation, we introduce a constraint programming approach to the classification problem in machine learning. Specifically, we treat classification as a clustering problem.

Previous approaches have used constraints as a means of representing background knowledge to improve the quality of the resulting clustering. We show how to use such constraints to not only guide the machine learning algorithms, but replace them entirely. Our approach uses an off-the-shelf constraint solver to find the clustering that reflects as much background knowledge as possible. A second formulation allows us to optimise for the objectives commonly used in machine learning algorithms, such as maximising the inter-cluster distances.

We present evaluation results of our approaches on a variety of well-known benchmarks covering a range of different application domains. Our approaches can significantly outperform standard clustering methods used in machine learning in terms of the quality of the resulting clustering and classification. In addition, the constraint programming formulation provides much more flexibility and customisation opportunities than standard machine learning approaches.
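As an illustration of the general idea (not the authors' formulation), a brute-force Python sketch that plays the role of the constraint solver: it enforces hypothetical must-link/cannot-link background constraints and, as in the second formulation above, optimises an inter-cluster distance objective. Exhaustive enumeration only scales to toy instances; a real solver would search this space intelligently.

```python
import math
from itertools import product

def constrained_clustering(points, k, must_link, cannot_link):
    # Enumerate all assignments of points to k clusters, keep those
    # satisfying the pairwise background constraints, and return the
    # assignment maximising the minimum inter-cluster distance.
    best, best_score = None, -1.0
    n = len(points)
    for assign in product(range(k), repeat=n):
        if any(assign[i] != assign[j] for i, j in must_link):
            continue  # must-link violated
        if any(assign[i] == assign[j] for i, j in cannot_link):
            continue  # cannot-link violated
        inter = [math.dist(points[i], points[j])
                 for i in range(n) for j in range(i + 1, n)
                 if assign[i] != assign[j]]
        if inter and min(inter) > best_score:
            best, best_score = assign, min(inter)
    return best
```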

Steve Prestwich, Marco Laumanns and Ban Kawas

**Symmetry Breaking in Scenario Generation**

Several symmetry breaking methods have been devised for Constraint Programming which can dramatically reduce the size of a problem. We propose applying these methods to the random variables of a stochastic problem, an approach we call stochastic symmetry breaking. This can be seen as a scenario reduction method and an alternative to scenario sampling. We apply it to a real-world problem: finding an optimal investment plan for a transportation network, given that a future earthquake probabilistically destroys links in the network. The problem has a billion scenarios and was previously solved approximately by taking a million samples. We exploit a form of symmetry called dynamic interchangeability to reduce the number of generated scenarios to a few hundred, making sampling unnecessary and allowing the exact solution of cases previously considered intractable.

Touhid Ahmed

**Modeling and Simulation Based Fitness Evaluation in Genetic Algorithm for Combinatorial Optimization**

Optimization is the process of making a system more effective in terms of efficiency, utilization and throughput. It involves searching a solution space for each variable involved. As the number of variables increases, the size of the search space grows exponentially. Many traditional optimization techniques exist, such as linear programming, mathematical optimization and operational research. An alternative approach is to use Artificial Intelligence (AI) techniques. The goal of using AI for optimization is to search for a better solution while reducing time and effort costs.

The Genetic Algorithm (GA) is an AI search tool for finding solutions to optimization problems. Despite the merit of its intelligent search mechanism, GA often falls short in evaluating the fitness of dynamic systems that exhibit random behavior and multiple performance criteria. This paper proposes a framework for a hybrid system that uses simulation to compensate for the shortcomings of GA in fitness evaluation.

A GA-generated solution will be fitted into a simulation model and tested against relevant performance standards. The simulation result will be fed back to the GA to produce the next generation of solutions. These solutions will be evaluated further through simulation, and this cycle of solution generation and evaluation will continue until a satisfactory solution is derived.
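The generate-simulate-feedback cycle described above can be sketched as follows. This is an illustrative toy, not the paper's framework: a noisy "one-max" bit-string problem stands in for a real simulation model, and the GA operators are deliberately minimal.

```python
import random

rng = random.Random(42)

def simulate(solution, runs=30):
    # Toy stand-in for a simulation model: the "true" performance is the
    # number of ones in the bit string, observed through Gaussian noise
    # and averaged over several replications.
    return sum(sum(solution) + rng.gauss(0, 1) for _ in range(runs)) / runs

def ga_with_simulation(n_bits=12, pop_size=20, generations=40,
                       mutation_rate=0.2):
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # fitness evaluation is delegated entirely to the simulation model
        scored = sorted(pop, key=simulate, reverse=True)
        parents = scored[:pop_size // 2]           # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)          # one-point crossover
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:       # single-bit mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = parents + children                   # feedback into next generation
    return max(pop, key=simulate)

best = ga_with_simulation()
```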

John Chinneck and Mubashsharul Shafique

**Towards a Fast Heuristic for Global Optimization and MINLP**

We present a multi-start heuristic for global nonlinear optimization that relies on these elements: (i) an initial random scatter of points, (ii) a fast method for moving from an initial point to a point that is close to feasibility, (iii) a method for clustering the near-feasible points that tends to identify disjoint feasible regions, (iv) a simple heuristic for improving the best point in each cluster, and (v) a high-quality local optimizer. The method is inherently parallelizable, well-suited to very large models, and very fast, mainly because it requires few expensive local optimizer launches. In our testing to date, the method is generally able to find the global optimum (or another high quality point) in little time. We plan to extend this work to mixed-integer nonlinear programming by the addition of spatial branching, with the goal of finding the MINLP optimum (or another high quality solution) using only a very small number of local optimizer launches. Because of the nature of step (ii), we are able to approximate basins of attraction, leading to novel spatial branching approaches.
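Elements (i)-(iii) of the heuristic can be illustrated with a toy sketch. Everything here is invented for illustration: the two-disk feasible region stands in for general nonlinear constraints, and the feasibility-seeking step is simple finite-difference descent on an infeasibility measure, a stand-in for the authors' fast method.

```python
import math
import random

rng = random.Random(1)

def infeasibility(p):
    # distance to the nearest of two disjoint feasible unit disks
    centers = [(0.0, 0.0), (5.0, 0.0)]
    return min(max(0.0, math.dist(p, c) - 1.0) for c in centers)

def move_to_feasibility(p, step=0.25, iters=100):
    # element (ii): descend the infeasibility measure until near-feasible
    p = list(p)
    for _ in range(iters):
        base = infeasibility(p)
        if base < 1e-3:
            break
        h, g = 1e-4, []
        for i in range(2):                     # finite-difference gradient
            q = list(p)
            q[i] += h
            g.append((infeasibility(q) - base) / h)
        norm = math.hypot(*g) or 1.0
        p = [pi - step * gi / norm for pi, gi in zip(p, g)]
    return tuple(p)

def cluster(points, radius=2.5):
    # element (iii): greedy clustering whose groups tend to coincide
    # with the disjoint feasible regions
    clusters = []
    for p in points:
        for c in clusters:
            if math.dist(p, c[0]) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

scatter = [(rng.uniform(-3.0, 8.0), rng.uniform(-3.0, 3.0))
           for _ in range(40)]                 # element (i): random scatter
near_feasible = [move_to_feasibility(p) for p in scatter]
regions = cluster([p for p in near_feasible if infeasibility(p) < 1e-2])
```

In the full heuristic, the best point of each recovered region would then be improved (iv) and handed to a local optimizer (v), so that only one expensive launch per region is needed.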