Pseudo-Boolean Evaluation 2007
The third pseudo-Boolean evaluation will be organized as a special
event of the SAT 2007
conference.
As in the previous evaluations (PB05, PB06), the goal is to assess the state of the art in the field of pseudo-Boolean solvers.
A special issue of the JSAT journal will be dedicated to the evaluation and competition events related to the SAT 2007 conference.
The next competition of pseudo-Boolean solvers will be organized in 2009; this decision was made to leave more time for new solvers and benchmarks to appear.
For questions, comments and any other issues regarding the PB evaluation, please get in touch with pbeval@cril.univ-artois.fr.
Results of the evaluation were revealed at the SAT 2007 conference. The following material is available:
- the results of the evaluation
- the slides of the presentation at SAT 2007
- the posters presented at SAT 2007
- the benchmarks used during the evaluation
Some new benchmarks were introduced in the PB07 evaluation; most benchmarks come from the PB06 evaluation (easy benchmarks were not used again).
Deadlines
The timetable of the 2007 evaluation is detailed below.
Deadlines were extended by approximately one month because a few contestants did not receive the call for participation. Early and late submission dates have been introduced: only solvers submitted before the early submission date will be tested before the evaluation starts.
Opening of the registration site | January 5, 2007
Submitters must request a login on the registration site by | March 16, 2007
Early submission date (solvers submitted before this date will be tested on a limited number of benchmarks; if a bug is discovered, submitters will be allowed to fix it before the evaluation starts) | March 23, 2007
Late submission date (solvers submitted after the early deadline and before this date will not be tested; they directly enter the evaluation) | March 30, 2007
Solvers running | April 2007
Solver descriptions must be provided by | April 27, 2007
Results announced during the SAT 2007 conference | May 28-31, 2007
Notable changes
We report here the most important changes in the evaluation rules compared to the PB06 evaluation. As some other minor points were also modified, you are invited to read the call for solvers and benchmarks carefully and in full.
- The input file format is extended to represent non-linear pseudo-Boolean constraints (that is, constraints containing products of Boolean variables). Solvers that handle only linear constraints are strongly encouraged to implement some linearization scheme so that they may also enter the non-linear categories (this should be quite easy). Parsers provided on this site will support a default linearization scheme to ease the transition.
- The medium integers category (MEDINT) has been removed because too few instances belonged to that category in the PB06 evaluation.
- The parser interface is slightly modified. Variable identifiers are passed as integers rather than strings. Some callbacks are added to deal with non-linear constraints.
- More flexibility has been introduced in the command line used to
run a solver.
- The environment variables which give the time and memory limits are now called TIMEOUT and MEMLIMIT (see the sketch after this list).
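As a rough illustration of how a solver might pick up these limits, here is a minimal Python sketch; the units and the fallback defaults below are assumptions made for the example, so consult the call for solvers for the authoritative definition.

    import os

    # Read the evaluation's resource limits from the environment.
    # The names TIMEOUT and MEMLIMIT come from the rules above; the
    # units (seconds, megabytes) and the default values used when the
    # variables are unset are assumptions for this illustration only.
    timeout = int(os.environ.get("TIMEOUT", "1800"))
    memlimit = int(os.environ.get("MEMLIMIT", "900"))

    print(f"c time limit: {timeout}s, memory limit: {memlimit} MB")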
We stress that you are not required to modify your solver to add support for non-linear constraints. For linear pseudo-Boolean instances, the file format is exactly the same as in the PB06 evaluation.
For solvers which don't support non-linear constraints, we will preprocess the instances to perform a basic linearization of the constraints (sketched below). That way, all solvers will be run on non-linear instances. You really don't have to worry about non-linear constraints unless you want to implement some specific algorithms.
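As a concrete illustration, the standard linearization replaces each product x1 x2 ... xn by a fresh variable z together with constraints forcing z to equal the conjunction of the factors. The Python sketch below shows this general technique only; the function name and the constraint representation are hypothetical, and the actual preprocessing used in the evaluation may differ in its details.

    def linearize_product(xs, z):
        """Return pseudo-Boolean constraints forcing the fresh Boolean
        variable z to equal the product (conjunction) of the variables
        in xs. Each constraint is a (terms, rhs) pair meaning
        sum(coeff * var) >= rhs, where terms is a list of (coeff, var)
        pairs. This representation is chosen for illustration only."""
        constraints = []
        # z <= x_i for every factor:  +1 x_i  -1 z  >= 0
        for x in xs:
            constraints.append(([(1, x), (-1, z)], 0))
        # z >= x_1 + ... + x_n - (n - 1):  +1 z  -1 x_1 ... -1 x_n >= 1 - n
        constraints.append(([(1, z)] + [(-1, x) for x in xs], 1 - len(xs)))
        return constraints

    # Example: replace the product x1*x2 by the fresh variable x5.
    for terms, rhs in linearize_product(["x1", "x2"], "x5"):
        print(" ".join(f"{c:+d} {v}" for c, v in terms), ">=", rhs, ";")

For a product of two variables, this yields the three constraints +1 x1 -1 x5 >= 0, +1 x2 -1 x5 >= 0 and +1 x5 -1 x1 -1 x2 >= -1, which together force x5 = x1*x2.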
Basically, a solver submitted to PB06 can be submitted to PB07 without any modification.
Call for solvers
We encourage the submission of any kind of pseudo-Boolean solver
(complete or incomplete solvers, working on linear or non-linear
constraints, etc.). There are a few simple requirements to ensure
that a submitted solver will correctly operate in the evaluation
environment. It is expected that any pseudo-Boolean solver can be
easily modified to conform to these requirements. People interested in
submitting a solver can contact the organizers to get some help.
All the details about the input format, the output of the solver and its running environment are available on a separate page. Sample code, parsers for a few languages and a few programming tips are also available (support for non-linear pseudo-Boolean instances is currently being developed; a preliminary version is already available, and an update will follow).
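For orientation only, a small instance in the evaluation's input format looks roughly like the sketch below. This is reconstructed from the documented OPB conventions (a leading comment line declaring the numbers of variables and constraints, an optional objective line starting with min:, and semicolon-terminated constraints); the separate page above remains the authoritative reference. In the non-linear extension, a term may contain a product of variables.

    * #variable= 4 #constraint= 2
    min: +1 x1 +2 x2 +3 x3 ;
    +1 x1 +1 x2 +1 x3 >= 2 ;
    * the term "+2 x1 x4" below contains the product x1*x4
    +2 x1 x4 -1 x3 >= 0 ;

Solvers are expected to report their result on standardized output lines (a status line and, for satisfiable instances, value lines, much as in the SAT competitions); again, the exact requirements are given on the separate page.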
A preliminary set of benchmarks is available for
testing the solvers
(currently only linear pseudo-Boolean instances; an update will follow)
Some conformance tests are available (they currently comply only with last year's evaluation; an update will follow).
Solver submitters implicitly accept that the results of their solver will be made publicly available on the evaluation web site.
Solver submitters are strongly encouraged to make their solver publicly
available, either in source code or in executable format. However, this
is not a requirement for entering the evaluation.
Solvers submitted by the organizers
To raise the standard of the evaluation and clarify the dual role of organizer and submitter, the following rule has been adopted:
Organizers may submit a solver only if that solver is made publicly available (either in source code or in executable form) before the competition starts (the deadline is March 16, 2007).
Vasco Manquinho already submitted a new version of BSOLO (compressed Linux executable).
Olivier Roussel submitted a new solver named Oree (compressed Linux executable); as this solver was not properly tested, don't be too confident in its answers.
Call for benchmarks
Benchmarks may be submitted in any reasonable format. The organizers
will do their best to translate the submitted benchmarks to the input
format adopted for the evaluation. A benchmark submitted in the format
of the evaluation will still be normalized to ensure consistency among
all the input files.
We would particularly appreciate the submission of:
- industrial benchmarks
- benchmark generators (written in any reasonable language)
- benchmark sets with a wide distribution of the number of clauses, cardinality constraints and pseudo-Boolean constraints
- benchmarks with non-linear constraints
Submitted benchmarks will be made available on the evaluation web site
shortly after the beginning of the evaluation. We cannot accept
benchmarks which are not publicly available (because anyone must be
able to reproduce the experiments of the evaluation).
Benchmark submitters may provide a description of their benchmark sets to be made available on the evaluation web site.
Organization
To reach the organizers, send an email to pbeval@cril.univ-artois.fr.
Vasco MANQUINHO
INESC-ID, Lisboa, Portugal
vmm at sat.inesc.pt

Olivier ROUSSEL
CRIL, Université d'Artois, France
olivier.roussel at cril.univ-artois.fr