SAT/CSP-ICTAI-2014 Accepted Papers with Abstracts
Ignacio Araya. Estimating Upper Bounds for Improving the Filtering in Interval Branch and Bound Optimizers
Abstract:
When interval branch and bound solvers are used
for solving constrained global optimization,
upper bounding the objective function is an important mechanism
which helps to globally reduce the search space.
Each time a new upper bound $UB$ is found during the search,
a constraint related to the objective function
$f_{obj}(x)<UB$ is added in order to prune non-optimal regions.
We observed experimentally that if we knew a close-to-optimal
value in advance (without necessarily knowing the corresponding solution),
then the performance of the solver could be significantly improved.
Thus, in this work we propose a simple mechanism
for estimating upper bounds in order to accelerate the convergence
of interval branch and bound solvers. The proposal is validated
through a series of experiments.
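The pruning mechanism described here is easy to picture in miniature. The sketch below uses a toy one-dimensional problem; the objective, its interval lower bound, and the externally supplied estimate are our own illustrative assumptions, not the paper's method:

    # A minimal, illustrative interval branch-and-bound loop (1-D, toy).
    # Not the authors' solver: the objective, its interval extension and
    # the upper-bound "estimate" below are simplified assumptions.

    def f(x):                      # objective at a point
        return (x - 1.0) ** 2 + 0.5

    def f_interval(lo, hi):        # naive interval lower bound of f on [lo, hi]
        # (x-1)^2 is minimal at x=1 if 1 is in [lo, hi], else at the closer endpoint
        if lo <= 1.0 <= hi:
            return 0.5
        return min(f(lo), f(hi))

    def branch_and_bound(lo, hi, eps=1e-6, ub_estimate=None):
        # ub_estimate plays the role of a close-to-optimal value known in
        # advance: boxes whose lower bound exceeds it are pruned immediately.
        ub = ub_estimate if ub_estimate is not None else float("inf")
        boxes, best = [(lo, hi)], None
        while boxes:
            lo, hi = boxes.pop()
            if f_interval(lo, hi) >= ub:      # prune: f_obj(x) < UB is violated
                continue
            mid = (lo + hi) / 2.0
            ub_candidate = f(mid)             # any feasible point tightens UB
            if ub_candidate < ub:
                ub, best = ub_candidate, mid
            if hi - lo > eps:                 # branch: split the box in two
                boxes += [(lo, mid), (mid, hi)]
        return best, ub

    print(branch_and_bound(-10.0, 10.0))                  # no estimate
    print(branch_and_bound(-10.0, 10.0, ub_estimate=0.6)) # estimate prunes earlier

With the estimate, boxes whose interval lower bound already exceeds 0.6 are discarded before any midpoint has been evaluated, which is the effect the abstract exploits.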
Abstract:
Partial constraint satisfaction and soft constraints make it possible to deal
with over-constrained problems in practice. Constraint relationships have been introduced to
provide a qualitative approach to specifying preferences over the
constraints that should be satisfied. In contrast to quantitative
approaches like weighted or fuzzy CSPs, the preferences just rely on a
directed acyclic graph. The approach is particularly aimed at
scenarios where soft-constraint problems stemming from several
independently modeled agents have to be aggregated into one problem in
a multi-agent system. Existing transformations into weighted CSPs
introduce unintended additional preference decisions.
We first illustrate the application of constraint relationships
in a case study from energy management along with deficiencies of existing work.
We then show how to embed
constraint relationships into the soft-constraint frameworks of partial valuation structures
and, further, into c-semirings by means of free constructions. We finally provide a prototypical
implementation of heuristics for the well-known branch-and-bound algorithm
along with an empirical evaluation.
Abstract:
This paper describes a computer-assisted non-existence proof of 9-input sorting networks consisting of 24 comparators, hence showing that the 25-comparator sorting network found by Floyd in 1964 is optimal. As a corollary, we obtain that the 29-comparator network found by Waksman in 1969 is optimal when sorting 10 inputs. This closes the two smallest open instances of the optimal-size sorting network problem, which have been open since the results of Floyd and Knuth from 1966 proving optimality for sorting networks of up to 8 inputs. The proof involves a combination of two methodologies: one based on exploiting the abundance of symmetries in sorting networks, and the other based on an encoding of the problem to that of satisfiability of propositional logic. We illustrate that, while each of these can single-handedly solve smaller instances of the problem, it is their combination that leads to the more efficient solution that scales to handle 9 inputs.
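The SAT-based side of such proofs rests on the zero-one principle: a comparator network on n channels sorts every input iff it sorts all 2^n binary inputs, so sortedness can be checked, and encoded, over Boolean vectors only. A small sketch of that check; the 4-channel example network is a standard one, not taken from the paper:

    # Checking that a comparator network sorts, via the 0/1 principle.
    from itertools import product

    def apply_network(network, vector):
        v = list(vector)
        for i, j in network:           # comparator (i, j) orders channels i < j
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    def sorts(network, n):
        # zero-one principle: testing all binary vectors suffices
        return all(apply_network(network, bits) == sorted(bits)
                   for bits in product((0, 1), repeat=n))

    net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]   # 5 comparators, 4 channels
    print(sorts(net4, 4))   # True: 5 comparators sort 4 inputs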
Abstract:
An original encoding of multiple contraction in Boolean logic through Partial-Max-SAT is proposed.
Multiple contraction of a set of clauses $\Delta$ by a set of formulas $\Gamma$ delivers one maximum cardinality subset of $\Delta$ from which no
formula of $\Gamma$ can be deduced. Equivalently, multiple contraction can be defined as
the extraction of one maximum cardinality subset of $\Delta$ that is satisfiable together with a given set of formulas.
Notably, the encoding schema allows multiple contraction
to be computed through a number of calls to a SAT solver that is bounded by the number of formulas in $\Gamma$, plus one call to Partial-Max-SAT.
By contrast, in the worst case, a direct approach requires computing, for each formula $\gamma$ in $\Gamma$,
all inclusion-maximal subsets of $\Delta$ that do not entail $\gamma$.
Extensive experimental results show that the encoding allows multiple contraction to be computed in a way that is practically viable in many cases
and outperforms the direct approach.
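The problem statement translates almost literally into code. The brute-force sketch below stands in for the Partial-Max-SAT call (soft clauses = $\Delta$, hard part = the given set of formulas); the tiny DIMACS-style evaluator and the example formulas are illustrative assumptions:

    # Multiple contraction as maximum-cardinality satisfiable subset:
    # keep a largest subset of Delta consistent with a given hard set.
    from itertools import combinations, product

    def satisfiable(clauses, n_vars):
        # clauses: lists of non-zero ints, DIMACS-style literals
        for assign in product((False, True), repeat=n_vars):
            if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    def multiple_contraction(delta, hard, n_vars):
        # largest subset of delta that is satisfiable together with hard
        for k in range(len(delta), -1, -1):
            for subset in combinations(delta, k):
                if satisfiable(list(subset) + hard, n_vars):
                    return list(subset)

    delta = [[1], [-1, 2], [-2]]        # x1, x1 -> x2, not x2
    hard  = [[-2]]                      # contract by x2: its negation goes hard
    print(multiple_contraction(delta, hard, n_vars=2))   # [[1], [-2]]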
Michael Sioutis. Triangulation versus Graph Partitioning for Tackling Large Real World Qualitative Spatial Networks
Abstract:
There has been interest in recent literature in tackling very large real world qualitative spatial networks, primarily because of the real datasets that have been, and are to be, offered by the Semantic Web community and scale up to millions of nodes. The state-of-the-art techniques for tackling such large networks employ the following two approaches for retaining the sparseness of their underlying graphs and reasoning with them: (i) graph triangulation and sparse matrix implementation, and (ii) graph partitioning and parallelization. Regarding the latter approach, an implementation has been offered by Nikolaou and Koubarakis recently [AAAI, 2014]. However, although their implementation looks promising and has room for improvement, and the authors were well intentioned, a series of mistakes in the evaluation process resulted in the wrong conclusion that it is able to provide fast consistency checking for very large qualitative spatial networks with respect to the state-of-the-art, which was also the main result of that paper. In this paper, we review the two aforementioned approaches and contradict the result of Nikolaou and Koubarakis by properly re-evaluating them with the benchmark dataset of that paper. Thus, we establish a clear view of the state-of-the-art solutions for reasoning with large real world qualitative spatial networks efficiently, which is the main result of this paper.
André Abramé and Djamal Habet. Local Max-Resolution in Branch and Bound Solvers for Max-SAT
Abstract:
One of the most critical components of Branch & Bound (BnB) solvers for Max-SAT is the estimation of the lower bound. At each node of the search tree, they detect inconsistent subsets (IS) of the formula by unit-propagation-based methods and apply a treatment to them. Depending on the structure of an IS, the current best performing BnB solvers either transform it by several max-resolution steps and keep the changes in the corresponding subtree, or simply remove the clauses of the subset from the formula and restore them before the next decision. The formula obtained after the latter treatment is not equivalent to the original one, and the number of detectable remaining inconsistencies may be reduced.
In this paper, instead of applying such a removal, we propose to fully exploit all the inconsistent subsets by applying the well-known max-resolution inference rule to transform them locally in the current node of the search tree. The expected benefits of this transformation are a more accurate lower bound estimation and a reduction of the number of decisions needed to solve an instance. We experimentally show the interest of our approach on weighted and unweighted Max-SAT instances and discuss the results obtained.
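For reference, the max-resolution rule applied here, in the formulation known from the Max-SAT literature (the notation below is ours, not the paper's), replaces the two premises, with $A = a_1 \vee \dots \vee a_s$ and $B = b_1 \vee \dots \vee b_t$, by the resolvent plus compensation clauses:

\[
\frac{x \vee A \qquad \neg x \vee B}
     {\begin{array}{l}
        A \vee B, \\
        x \vee A \vee \neg b_1,\; x \vee A \vee b_1 \vee \neg b_2,\; \dots,\; x \vee A \vee b_1 \vee \dots \vee b_{t-1} \vee \neg b_t, \\
        \neg x \vee B \vee \neg a_1,\; \dots,\; \neg x \vee B \vee a_1 \vee \dots \vee a_{s-1} \vee \neg a_s
      \end{array}}
\]

Unlike classical resolution, the premises are consumed by the rule, which is what makes the transformation preserve the number of clauses falsified by every assignment.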
Abstract:
The problem of finding small unsatisfiable subsets of a set of constraints is important for various applications in computer science and artificial intelligence. We study, from a parameterized complexity point of view, the problem of identifying whether a given instance of the constraint satisfaction problem (CSP) has an unsatisfiable subset of size at most k. We show that the problem of finding small unsatisfiable subsets of a CSP instance is harder than the corresponding problem for CNF formulas. Moreover, we show that the problem is not fixed-parameter tractable when restricted to any maximal tractable Boolean constraint language (for which the problem is nontrivial). We show that the problem is hard even when the maximum number of occurrences of any variable is bounded by a constant, a restriction which leads to fixed-parameter tractability in the case of CNF formulas. Finally, we relate the problem of finding small unsatisfiable subsets to the problem of identifying variable assignments that are already enforced by a small number of constraints (backbones), or that are already ruled out by a small number of constraints (anti-backbones).
Amina Kemmar, Willy Ugarte, Samir Loudni, Thierry Charnois, Yahia Lebbah, Patrice Boizumault and Bruno Crémilleux. Mining Relevant Sequence Patterns with CP-based Framework
Abstract:
Sequential pattern mining under various constraints is a challenging data mining task. This paper provides a generic framework based on constraint programming to discover sequence patterns defined by constraints on local patterns (e.g., gap, regular expressions) or by constraints involving combinations of local patterns, such as relevant subgroups and top-k patterns. This framework enables the user to mine both kinds of patterns in a declarative way. The solving step is done by exploiting the machinery of Constraint Programming. For complex patterns involving combinations of local patterns, we improve the mining step by using dynamic CSP. Finally, we present two case studies in biomedical information extraction and stylistic analysis in linguistics.
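As one concrete example of a local constraint handled by such frameworks, gap-constrained embedding of a pattern in a sequence can be checked directly. The sketch below is our own illustration of that check, not the paper's CP model:

    # Does `pattern` occur as a subsequence of `sequence` with at most
    # max_gap symbols between consecutive matched items?

    def embeds(pattern, sequence, max_gap):
        def search(p_idx, start):
            if p_idx == len(pattern):
                return True
            # after the first item, the next match must land within max_gap
            end = len(sequence) if p_idx == 0 else min(len(sequence),
                                                       start + max_gap + 1)
            for i in range(start, end):
                if sequence[i] == pattern[p_idx] and search(p_idx + 1, i + 1):
                    return True
            return False
        return search(0, 0)

    print(embeds("abc", "axbxxc", max_gap=1))  # False: c sits 2 symbols after b
    print(embeds("abc", "axbxxc", max_gap=2))  # True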
Abstract:
An extension and several variants of Dalal's Quad polynomial fragments of SAT are presented. First, the stability of Quad fragments is investigated. The extension is then as follows: the fixed total orderings of clauses in Quad are accompanied by specific additional separate orderings of maximal sub-clauses. Interestingly, the resulting fragments extend Quad without degrading its worst-case complexity. Several other variants of Quad that give rise to additional different polynomial fragments are then investigated.
Abstract:
We study how to find a solution to a constraint
problem without modeling it. Constraint acquisition systems
such as Conacq or ModelSeeker are not able to solve a single
instance of a problem because they require positive examples
to learn. The recent QuAcq algorithm for constraint acquisition
does not require positive examples to learn a constraint
network. It is thus able to solve a constraint problem without
modeling it: we simply exit from QuAcq as soon as a complete
example is classified as positive by the user. In this paper, we
propose ASK&SOLVE, an elicitation-based solver that tries to
find the best tradeoff between learning and solving to converge
as soon as possible on a solution. We propose several strategies
to speed up ASK&SOLVE. Finally, we give an experimental
evaluation that shows that our approach improves the state of
the art.
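The exit condition described above, stop as soon as a complete example is classified positive, can be pictured with a toy elicitation loop. The hidden target constraint, the candidate generator, and the naive no-good learning below are illustrative stand-ins, not the QuAcq or ASK&SOLVE internals:

    # Toy elicitation loop: propose complete assignments, exit on the
    # first one the (simulated) user classifies as positive.
    from itertools import product

    def user_says_yes(assignment):          # simulated user; hidden target: x != y
        x, y = assignment
        return x != y

    def ask_and_solve(domain):
        forbidden = set()                   # learned no-goods (negative examples)
        for candidate in product(domain, repeat=2):
            if candidate in forbidden:
                continue
            if user_says_yes(candidate):    # positive example: a solution, stop
                return candidate
            forbidden.add(candidate)        # negative example: learn and move on
        return None

    print(ask_and_solve(domain=(0, 1, 2)))  # (0, 1), after rejecting (0, 0)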
Thorsten Ehlers, Dirk Nowotka and Philipp Sieweck. Communication in Massively-Parallel SAT Solving
Abstract:
The exchange of learnt clauses is a key feature in parallel SAT solving.
We present a graph-based approach to this topic.
Each solver thread corresponds to a node in this graph. Communication between two solvers is allowed if the respective nodes are connected by an edge. This yields another dimension in controlling the amount of communication. We show results for this approach, gaining significant speedups for up to 256 parallel solvers.
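The scheme is straightforward to sketch: clauses travel only along edges of the communication graph. The ring topology and the per-node inbox below are our own illustration, not the paper's implementation:

    # Graph-restricted clause exchange: a solver may only forward learnt
    # clauses to its neighbours in the communication graph.

    def ring_topology(n):
        # nodes 0..n-1, each connected to its two ring neighbours
        return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

    class Exchanger:
        def __init__(self, graph):
            self.graph = graph
            self.inbox = {i: [] for i in graph}   # clauses waiting per solver

        def export_clause(self, sender, clause):
            for neighbour in self.graph[sender]:  # only along edges
                self.inbox[neighbour].append(clause)

        def import_clauses(self, receiver):
            clauses, self.inbox[receiver] = self.inbox[receiver], []
            return clauses

    ex = Exchanger(ring_topology(4))
    ex.export_clause(0, [1, -2])        # solver 0 learns a clause
    print(ex.import_clauses(1))         # neighbour 1 receives it: [[1, -2]]
    print(ex.import_clauses(2))         # non-neighbour 2 does not: []

Denser graphs mean more sharing; sparser ones mean less overhead, which is the extra control dimension the abstract refers to.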
Achref El Mouelhi, Philippe Jégou and Cyril Terrioux. Hidden Tractable Classes: from Theory to Practice
Abstract:
Tractable classes constitute an important issue in CP, at least from a theoretical viewpoint. But they are not actually used in practice: either their treatment is too costly in terms of time complexity, or, even if there exist efficient algorithms to manage them, they do not appear in real problems. We propose here to address this issue thanks to the notion of hidden tractable classes. Such classes are based on a known tractable class C and a transformation t, and are defined by sets of instances P such that their transformation using t is in C, that is, t(P) in C. We propose a general framework to study such notions. We then focus our study on the tractable class BTP and several transformations, namely the filterings classically used in CP. We show that the use of filterings sometimes highlights the occurrence of BTP in the benchmarks generally considered for solver comparisons, i.e. that BTP is sometimes ``hidden'' in the benchmarks. Thus, this approach allows us to extend the set of known tractable classes.
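The defining condition, t(P) in C, reads directly as a membership test. A minimal sketch, with the filterings and the class test passed in as functions (all placeholders, not the paper's BTP machinery):

    # Is the instance "hidden" in a tractable class under some filtering?

    def hidden_in_class(instance, filterings, in_class):
        """Return the first filtering t with t(instance) in the class, if any."""
        for name, t in filterings:
            if in_class(t(instance)):
                return name            # instance is hidden in the class under t
        return None

    # Toy stand-ins: an "instance" is a set of tuples; the "class" is
    # instances with at most 2 tuples; "AC" removes tuples marked bad.
    filterings = [("identity", lambda p: p),
                  ("AC", lambda p: {t for t in p if "bad" not in t})]
    in_class = lambda p: len(p) <= 2
    print(hidden_in_class({("a",), ("b",), ("bad",)}, filterings, in_class))  # AC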
David Lesaint, Deepak Mehta, Barry O'Sullivan and Vincent Vigneron. A Decomposition Approach for Discovering Discriminative Motifs in a Sequence Database
Abstract:
A considerable effort has been invested over the years in ad hoc algorithms for itemset and pattern mining. Constraint Programming has recently been proposed as a means to tackle itemset mining tasks within a common modelling framework. We follow this approach to address the discovery of discriminative n-ary motifs in databases of labeled sequences. We define an n-ary motif as a mapping of n patterns to n class-wide embeddings, and we restrict the interpretation of constraints on a motif to the sequences embedding all patterns. We formulate core constraints that minimize redundancy between motifs and introduce a general constraint optimization framework to compute common and exclusive motifs. We cast the discovery of closed and replication-free motifs in this framework, for which we propose a two-stage approach based on Constraint Programming. Experimental results on datasets of protein sequences demonstrate the efficiency of the approach.
Abstract:
Many network design problems arising in the fields of transportation, distribution and logistics require clients to be connected to facilities through a set of carriers subject to distance and capacity constraints. Here a carrier could be a cable, a vehicle, a salesman, etc. The distance from a facility to a client using a carrier could be expressed as signal loss, time spent, path length, etc. The capacity of a carrier could be interpreted as the maximum number of commodities that a carrier can carry, the maximum number of clients or links that a single carrier can visit, etc.
The main decisions are to determine the number of carriers, assign clients to carriers, and design a network for each carrier subject to distance, capacity and some side constraints. In this paper, we focus on the Cable Routing Problem (CRP), which is NP-hard. We present a constraint-based local search algorithm and
two efficient local move operators. The effectiveness of our approach is demonstrated by experimenting with 300 instances of CRP taken from real-world passive optical network deployments in Ireland. The results show that our algorithm scales to very large problem instances and computes good quality solutions in a very limited time.
Jeremias Berg and Matti Järvisalo. SAT-Based Approaches to Treewidth Computation: An Evaluation
Abstract:
Treewidth is an important structural property of graphs, tightly connected to computational tractability in various constraint satisfaction formalisms (e.g., constraint programming, Boolean satisfiability, and answer set programming) as well as in probabilistic inference. An obstacle to harnessing treewidth as a way of efficiently solving bounded-treewidth instances of NP-hard problems is that deciding the treewidth of a given graph, and hence computing an optimal tree-decomposition, is itself an NP-complete problem. In this paper, we study the applicability of Boolean satisfiability (SAT) based approaches to determining the treewidths of graphs, and at the same time obtaining an associated optimal tree-decomposition. Extending earlier studies, we evaluate various SAT and MaxSAT based strategies for treewidth computation, and compare these approaches to practical dedicated exact algorithms for the problem.
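One classical route such encodings follow is the characterization of treewidth via elimination orderings: eliminating a vertex turns its remaining neighbours into a clique, the width of an ordering is the largest such neighbourhood, and the treewidth is the minimum width over all orderings. A brute-force sketch of this characterization (our illustration, feasible only for tiny graphs):

    # Treewidth via elimination orderings (exponential; tiny graphs only).
    from itertools import permutations

    def width_of_ordering(n, edges, order):
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        eliminated, width = set(), 0
        for v in order:
            nbrs = adj[v] - eliminated
            width = max(width, len(nbrs))
            for a in nbrs:                     # fill-in: clique on neighbours
                for b in nbrs:
                    if a != b:
                        adj[a].add(b)
            eliminated.add(v)
        return width

    def treewidth_brute_force(n, edges):
        return min(width_of_ordering(n, edges, order)
                   for order in permutations(range(n)))

    cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(treewidth_brute_force(4, cycle4))    # 2: a cycle has treewidth 2

SAT and MaxSAT encodings replace the enumeration of orderings by a search over propositional variables describing one.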
Abstract:
We address multi-objective constraint optimization problems (MO-COPs). Solving an MO-COP traditionally consists in computing the set of all Pareto optimal solutions, which is exponentially large in the general case. This causes two main problems: the first is the time complexity concern, the second a lack of decisiveness. In this paper, we formalize the notion of an MO-COP operator, which associates every MO-COP with a subset of Pareto optimal solutions satisfying some desirable additional properties. Then, we present two specific classes of MO-COP operators that give preference to some subsets of Pareto optimal solutions. These operators correspond to two classical doctrines in Decision Theory: utilitarianism and egalitarianism. They compute solutions much more efficiently than classical operators computing all Pareto optimal solutions.
In practice, they return very few solutions, even for problems involving a high number of objectives.
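On a concrete Pareto set the two doctrines are easy to contrast: utilitarianism keeps the solutions maximizing the sum of the objectives, egalitarianism those maximizing the worst objective. A minimal sketch, assuming maximization and a made-up Pareto front:

    # Utilitarian vs. egalitarian selection over a Pareto set of
    # objective vectors (maximization assumed).

    def utilitarian(pareto_set):
        best = max(sum(v) for v in pareto_set)
        return [v for v in pareto_set if sum(v) == best]

    def egalitarian(pareto_set):
        best = max(min(v) for v in pareto_set)
        return [v for v in pareto_set if min(v) == best]

    pareto = [(9, 1), (6, 5), (4, 7), (2, 8)]
    print(utilitarian(pareto))   # [(6, 5), (4, 7)]: highest total utility
    print(egalitarian(pareto))   # [(6, 5)]: best worst-case objective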
Vincent Armant and Ken Brown. Minimizing the Driving Distance in Ride Sharing Systems
Abstract:
Reducing the number of cars driving on roads is an important objective for smart sustainable cities, both for reducing emissions and for improving traffic flow. To assist with this aim, ride-sharing systems match intending drivers with prospective passengers. The matching problem becomes more complex when drivers can pick up and drop off several passengers, when both drivers and passengers have to travel within a time window, and when participants are willing to switch roles. We present a mixed integer programming model for this switching rider problem, with the objective of minimizing the total distance driven by the population. We show how the potential saving in kilometers increases as the driver flexibility and the density of the distribution of participants increase. Further, we show how breaking symmetries among the switchers improves performance, gaining over an order of magnitude speed-up in solving time and allowing approximately 50% more participants to be handled in the same computation time.
Abstract:
We consider a network design problem arising in the development of an all-optical future generation Internet network called a flexgrid. An optical island is a set of core nodes that can be fully interconnected by transparent wavelength routes. We present a mathematical model for finding an optimal optical island, show that it is an NP-hard problem, and present a decomposition of the problem. In a first phase, we choose network links and route
demands over the resulting network. In the second phase, we allocate demands to individual fibres and spectrum segments on the fibres.
This so-called routing and spectrum assignment (RSA) problem is a generalisation of the well-known routing and wavelength assignment (RWA) problem of conventional optical networks. Flexgrid optical networks allow
higher-capacity connection requests to be bundled by allocating channels over a number of contiguous frequency slots, providing increased throughput as long as the connection length is below technological limits.
We solve the first part of the decomposition with a large neighbourhood search, and the second with a CP model using a single GEOST global constraint. Results for Ireland and Italy show that solutions of high quality can be found by this decomposition.
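The contiguity requirement at the heart of RSA is simple to state: a demand needing k slots must receive k consecutive free slots on every fibre along its route. A first-fit sketch for a single fibre (our illustration, unrelated to the paper's GEOST model):

    # First-fit allocation of k contiguous frequency slots on one fibre.

    def first_fit_contiguous(occupied, n_slots, k):
        run = 0
        for s in range(n_slots):
            run = 0 if s in occupied else run + 1
            if run == k:
                return list(range(s - k + 1, s + 1))  # the allocated channel
        return None                                   # no contiguous gap left

    print(first_fit_contiguous({0, 3}, n_slots=8, k=3))  # [4, 5, 6]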
Victor Reyes and Ignacio Araya. Probing-based variable selection heuristics for NCSPs
Abstract:
Interval branch \& bound solvers are commonly used for solving numerical constraint satisfaction problems. They alternate pruning/contraction and branching steps in order to find small boxes containing all the solutions of the problem.
The branching basically consists in generating two subproblems by dividing the domain of one variable in two. The selection of this variable is the topic of this work. Several heuristics have been proposed so far, most of them using local information from the current node (e.g., domain sizes, partial derivative images over the current box, etc.). We propose, instead, an approach based on past information. This information is provided by a preprocessing phase of the algorithm (probing) and is used during the search. Simply put, our algorithm attempts to identify the most important variables in a series of cheap test runs. As a result of probing, the variables are weighted. These weights are then considered by the selection
heuristic during the search. Experiments demonstrate the interest of using techniques based on past information in interval branch \& bound solvers.
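The probing idea can be pictured as a voting phase followed by a weight-driven selection. In the sketch below, the cheap preliminary run is a stub that merely produces a per-variable score, a stand-in for whatever depth-limited run supplies the scores in practice:

    # Probing: cheap test runs vote on variables; the branching heuristic
    # then prefers the heaviest variable. The scoring stub is illustrative.
    import random

    def probe(variables, runs=20, seed=0):
        rng = random.Random(seed)
        weights = {v: 0.0 for v in variables}
        for _ in range(runs):
            # stub for a cheap run: a score per variable, e.g. the
            # contraction achieved when splitting on it
            scores = {v: rng.random() * (2.0 if v == "x" else 1.0)
                      for v in variables}
            best = max(scores, key=scores.get)
            weights[best] += 1.0            # the winning variable gets a vote
        return weights

    def select_variable(weights):
        return max(weights, key=weights.get)   # heaviest variable first

    w = probe(["x", "y", "z"])
    print(w, "->", select_variable(w))   # "x" should accumulate the most votes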
Minas Dasygenis and Kostas Stergiou. Building Portfolios for Parallel Constraint Solving by Varying the Local Consistency Applied
Abstract:
Portfolio based approaches to constraint solving aim at exploiting the variability in performance displayed by different solvers or different parameter settings of a single solver. Such approaches have been quite successful in both sequential and parallel processing modes. Given the increasingly larger number of available processors for parallel processing, an important challenge when designing portfolios is to identify solver parameters that offer diversity in the exploration of the search space and to generate different solver configurations by automatically tuning these parameters. In this paper we propose, for the first time, a way to build portfolios for parallel solving by parameterizing the local consistency property applied during search. To achieve this we exploit heuristics for adaptive propagation proposed in [1]. We show how this approach can result in the easy automatic generation of portfolios that display large performance variability. We make an experimental comparison against a standard sequential solver as well as portfolio based methods that use randomization of the variable ordering heuristic as the source of diversity. Results demonstrate that our method consistently outperforms the sequential solver and in most cases is more efficient than the other portfolio approaches.
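Operationally, a portfolio of this kind is the same instance launched under different propagation settings, with the first answer winning. A minimal sketch with a stubbed solver; the configuration names are placeholders, with forward checking versus stronger consistencies being one plausible axis:

    # Run one configuration per worker; take the first result to arrive.
    import multiprocessing as mp

    def solve(args):
        instance, consistency_level = args
        # stub: a real worker would search `instance` enforcing the given
        # local consistency; here we only report which configuration ran
        return consistency_level, "solved %s with %s" % (instance,
                                                         consistency_level)

    if __name__ == "__main__":
        configs = ["FC", "AC", "maxRPC", "adaptive"]   # placeholder levels
        with mp.Pool(len(configs)) as pool:
            # imap_unordered yields results in completion order: keep the first
            first = next(pool.imap_unordered(solve,
                                             [("inst1", c) for c in configs]))
        print(first)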
Abstract:
Conflict-based clause learning is known to be an important component of modern SAT solving. Because of the exponential blow-up of the size of the learnt clause database, maintaining a relevant and polynomially bounded set of learnt clauses is crucial for the efficiency of clause learning based SAT solvers. In this paper, we first compare several criteria for selecting the most relevant learnt clauses with a simple random selection strategy. We then propose new criteria allowing us to select relevant clauses w.r.t. a given search state, and we use such strategies as a means to diversify the search in a portfolio based parallel solver. An experimental evaluation comparing the classical ManySAT solver with one augmented with multiple deletion strategies shows the interest of such an approach.
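A typical shape for such deletion strategies: score each learnt clause and periodically keep only the best-ranked part of the database. The LBD-like score and the halving policy below mirror common practice and are not the paper's exact criteria:

    # Keep the learnt-clause database bounded: rank by a relevance score
    # and delete the worst half at each reduction.

    def lbd(clause, level_of):
        # literal block distance: distinct decision levels in the clause
        return len({level_of[abs(lit)] for lit in clause})

    def reduce_db(learnts, level_of, keep_ratio=0.5):
        ranked = sorted(learnts, key=lambda c: lbd(c, level_of))  # small LBD first
        keep = max(1, int(len(ranked) * keep_ratio))
        return ranked[:keep]

    level_of = {1: 0, 2: 1, 3: 1, 4: 2}          # decision level per variable
    learnts = [[1, -2], [2, -3], [1, -3, 4], [-1, 2, -4]]
    print(reduce_db(learnts, level_of))          # keeps the lowest-LBD half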
Carlos Mencía and Joao Marques-Silva. Efficient Relaxations of Over-Constrained CSPs
Abstract:
Constraint Programming is becoming the preferred solving technology in a variety of application domains. It is not unusual that a CSP modeling some real-life problem is found to be infeasible or over-constrained. In this scenario, users may be interested in identifying the causes responsible for the inconsistency, or in getting some advice so that they can reformulate their problem to render it feasible. This paper is concerned with the latter issue, which plays a very important role in the analysis of over-constrained problems. Concretely, we study the problem of computing a minimal exclusion set of constraints (MESC) from an infeasible CSP. A MESC is a set-wise minimal set of constraints whose removal makes the original problem feasible. We provide an overview of existing techniques for MESC extraction and consider additional alternatives and optimizations. Our main contribution is the adaptation of the best-performing algorithm for SAT to work in CSP. We also integrate a technique that improves its efficiency. The results of an experimental study indicate considerable improvements over the state of the art.
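In the SAT literature a MESC corresponds to a minimal correction set (MCS), and the basic linear algorithm transfers directly: grow a satisfiable subset constraint by constraint, and the constraints that cannot be added form a set-wise minimal exclusion set. A sketch with a brute-force consistency check standing in for a real solver call:

    # Basic linear search for a minimal exclusion set of constraints.
    from itertools import product

    def consistent(constraints, domains):
        # constraints: predicates over a full assignment (dict var -> value)
        for values in product(*domains.values()):
            assignment = dict(zip(domains, values))
            if all(c(assignment) for c in constraints):
                return True
        return False

    def basic_mesc(constraints, domains):
        kept, excluded = [], []
        for c in constraints:
            if consistent(kept + [c], domains):
                kept.append(c)              # grows into a maximal satisfiable set
            else:
                excluded.append(c)          # dropping these restores feasibility
        return excluded

    domains = {"x": (0, 1), "y": (0, 1)}
    cs = [lambda a: a["x"] == 1, lambda a: a["x"] == a["y"],
          lambda a: a["x"] == 0]            # jointly inconsistent
    print(len(basic_mesc(cs, domains)))     # 1: removing one constraint suffices

Since every excluded constraint is inconsistent with the final kept set, adding any of them back breaks feasibility, which is exactly the set-wise minimality a MESC requires.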
Hiep Nguyen, Thomas Schiex, Christian Bessiere and Simon de Givry. Maintaining Virtual Arc Consistency Dynamically During Search
Abstract:
Virtual Arc Consistency (VAC) is a recent local
consistency for processing cost function networks (aka weighted
constraint networks) that exploits a simple but powerful connection
with standard constraint networks. It has made it possible to
close hard frequency assignment benchmarks and is capable of
directly solving networks of submodular functions. The algorithm
enforcing VAC is an iterative algorithm that solves a sequence of
standard constraint networks. This algorithm has been improved
by exploiting the idea of dynamic arc consistency between
iterations, leading to the dynamic VAC algorithm. When VAC is
maintained during search, the difference between two adjacent
nodes in the search tree is also limited. In this paper, we show
that the incrementality of dynamic VAC can also be useful when
maintaining VAC during search, and we present results showing
that doing so can effectively
accelerate the search.