The 2016 pseudo-Boolean competition is organized as a special event of the SAT 2016 conference.
This edition of the competition will be very similar to the PB12 competition. We especially encourage the submission of new solvers and new instances.
This edition of the competition will introduce two novelties:
Parallel solvers are welcome in the competition. Two rankings will be generated: one based on wall-clock time, the other based on CPU time. In both rankings, the main criterion is the number of instances solved within a given timeout.
The wall-clock time ranking will promote solvers which use all available resources to give an answer as quickly as possible. The CPU time ranking will promote solvers which use resources as efficiently as possible; this latter ranking is the one used in the previous competitions. In the wall-clock ranking, the timeout is imposed on wall-clock time; in the CPU ranking, it is imposed on CPU time. Parallel solvers are expected to perform better in the first ranking, and sequential solvers in the second.
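The difference between the two clocks can be illustrated with a small Python sketch (purely illustrative; the competition environment measures times with its own tools):

```python
import time

def measure(fn):
    """Return (wall_seconds, cpu_seconds) spent in fn()."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

# Sleeping (like a solver waiting on idle threads) consumes wall-clock
# time but almost no CPU time...
wall, cpu = measure(lambda: time.sleep(0.2))
print(f"sleep: wall={wall:.2f}s cpu={cpu:.2f}s")

# ...while a busy computation consumes both at roughly the same rate.
wall, cpu = measure(lambda: sum(i * i for i in range(10**6)))
print(f"busy:  wall={wall:.2f}s cpu={cpu:.2f}s")
```

A parallel solver running on k cores can accumulate up to k seconds of CPU time per second of wall-clock time, which is why the two rankings favour different designs.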
Benchmarks used in the competition are always preprocessed because we do not want to discriminate solvers on their ability to deal with syntactical corner cases.
However, in real life, users will submit instances which may contain redundancies (e.g. multiple occurrences of a variable in a constraint) or even syntax errors. This year, we shall report on the ability of each solver to cope with this kind of problem. In particular, we will test whether the solver can deal with instances that do not comply with the "Specific rules for the PB competition" described in the competition format. These rules are guaranteed during the competition, but not in real life. This test phase will be completely independent of the ranking; its goal is to encourage authors to write robust solvers that can be put in the hands of any user.
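As an illustration of one such redundancy, repeated occurrences of a variable in a constraint can be merged by summing their coefficients. The helper below is hypothetical (the official normalization rules are in the competition format document), but it sketches the idea:

```python
from collections import defaultdict

def normalize(terms):
    """Merge repeated occurrences of a variable in a PB constraint.

    `terms` is a list of (coefficient, variable) pairs, e.g. the
    left-hand side of  +1 x1 +2 x1 >= 1,  which a robust solver should
    treat as  +3 x1 >= 1.  (Illustrative helper, not the competition's
    official normalizer.)
    """
    merged = defaultdict(int)
    for coef, var in terms:
        merged[var] += coef
    # Drop variables whose coefficients cancelled out.
    return [(c, v) for v, c in merged.items() if c != 0]

print(normalize([(1, "x1"), (2, "x1"), (-1, "x2")]))
# [(3, 'x1'), (-1, 'x2')]
```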
For questions, comments and any other issue regarding the PB competition, please get in touch with the organizer.
Results of the competition

Some web pages are currently incomplete. We are working on this issue.
- Presentation of the competition results at the SAT16 conference
- Results of the PBS/PBO track
- Results of the WBO track
- PB instances collected in 2016
Deadlines

The timetable of the 2016 evaluation is detailed below.

- Opening of the registration site: April 1st, 2016
- Submitters must request a login by: April 15, 2016
- Benchmark submission deadline: May 1st, 2016
- Solvers running: May 2016
- Solver descriptions must be provided by: June 11, 2016
- Results presented at the SAT 2016 conference: July 5-July 8, 2016
Call for solvers
We encourage the submission of any kind of pseudo-Boolean solver (complete or incomplete solvers, working on linear or non-linear constraints, etc.). There are a few simple requirements to ensure that a submitted solver will correctly operate in the evaluation environment. It is expected that any pseudo-Boolean solver can be easily modified to conform to these requirements. People interested in submitting a solver can contact the organizers to get some help.

All the details about the input format, the output of the solver and its running environment are available in this PDF file.
Sample code, parsers for a few languages and a few programming tips are also available.
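As a taste of the input format, linear OPB constraints look like `+1 x1 +2 x2 >= 2 ;`, with comment lines introduced by `*`. The sketch below is a deliberately simplified reader (the authoritative grammar is in the PDF linked above; objectives and non-linear products are ignored here):

```python
import re

# Matches one weighted literal such as "+1 x1" or "-2 x5".
TERM = re.compile(r"([+-]\d+)\s+(x\d+)")

def parse_opb(text):
    """Parse linear OPB constraints into (terms, operator, rhs) triples.

    Simplified sketch: handles only ">=" and "=" constraints and
    skips comment lines starting with '*'.
    """
    constraints = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):   # '*' introduces a comment
            continue
        lhs, op, rhs = re.match(r"(.*?)(>=|=)\s*([+-]?\d+)\s*;", line).groups()
        terms = [(int(c), v) for c, v in TERM.findall(lhs)]
        constraints.append((terms, op, int(rhs)))
    return constraints

example = """\
* #variable= 2 #constraint= 1
+1 x1 +2 x2 >= 2 ;
"""
print(parse_opb(example))
# [([(1, 'x1'), (2, 'x2')], '>=', 2)]
```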
Benchmarks to test your solvers (as well as the results of the competitions) are available from the sites of the previous competitions.
Solver submitters implicitly accept that the results of their solver be publicly available on the evaluation web site.
Solver submitters are strongly encouraged to make their solver publicly available, either in source code or in executable format. However, this is not a requirement for entering the evaluation.
Solvers submitted by the organizers
Organizers may submit their solvers, but must publish on April 1st, 2016 either a precise description of their solver, its source code, its binary, or a fingerprint of one of these.
Call for benchmarks
Benchmarks may be submitted in any reasonable format. The organizers will do their best to translate the submitted benchmarks to the input format adopted for the evaluation. A benchmark submitted in the format of the evaluation will still be normalized to ensure consistency among all the input files.
We would particularly appreciate the submission of:
- industrial benchmarks
- benchmarks generators (written in any reasonable language)
- benchmark sets with a wide distribution of the number of clauses, cardinality constraints and pseudo-Boolean constraints
- benchmarks with non-linear constraints
Submitted benchmarks will be made available on the evaluation web site shortly after the beginning of the evaluation. We cannot accept benchmarks which are not publicly available (because anyone must be able to reproduce the experiments of the evaluation).
Benchmark submitters should provide a description of their benchmark set, to be made available on the evaluation web site.
Benchmarks from the previous competitions can be downloaded from the sites of those competitions.
CRIL, Université d'Artois, France
olivier.roussel at cril.univ-artois.fr / home page