The 2024 pseudo-Boolean competition is organized as a special event of the SAT 2024 conference.
Eight years after the last competition of pseudo-Boolean solvers, there were requests to organize a new edition. The goal is to assess the state of the art in the field of pseudo-Boolean solvers, in the same spirit as the previous competitions and evaluations (PB16, PB Evaluation 2015 (original site no longer available; archive on Zenodo), PB12, PB11, PB10, PB09, PB07, PB06, PB05).
Main changes (in comparison with PB16)
- A steering committee (Carlos Ansótegui, Johannes Fichte, Matti Järvisalo, Jakob Nordström, Olivier Roussel) is in charge of defining the rules of the competition, before the actual competition starts.
- During the competition, a jury is in charge of addressing unexpected issues. This jury is composed of the steering committee members who have no conflict of interest with the submitted solvers. The jury will only have access to anonymized data. The jury has the final word on every decision taken during the competition. It has the right to disqualify a solver.
- A warm-up phase will be organized before the actual competition. The goal is to test the solvers, the competition rules, and the competition environment, and to make sure that the actual competition will run smoothly. The rules of the competition may be slightly adjusted after the warm-up. The results of this warm-up will be available only to contestants and the jury, and will be erased when the actual competition starts.
- Four tracks will be opened unconditionally: PBS (PB Satisfaction) and PBO (PB Optimization) for linear constraints, each with and without certificates of unsatisfiability/optimality. Certificates of unsatisfiability/optimality must be generated in the VeriPB format. These tracks are open to both sequential and parallel solvers.
- Other tracks (WBO, non-linear constraints) may be opened if there is sufficient interest.
- Keep in mind that coefficients in PB constraints can contain an arbitrary number of digits. This year, there are no distinct categories for small and big integers (SMALLINT/BIGINT) because there is no single definition that suits all solvers. Solvers are expected to use at least 64-bit integers, but they are not required to use arbitrary-precision integers. When an instance contains integers that are too large for the solver, it must detect this at parse time and report an 's UNSUPPORTED' answer. This answer will be used to identify the instances supported by all solvers and to generate a separate ranking on those. The new keyword "intsize" in the competition format gives information on the size of the integers required to handle an instance.
- The first line of an instance in the competition format has been extended with two new keywords: "#equal", which indicates how many equality constraints appear in the file, and "intsize" (see above); an example is given after this list. Benchmarks with this new information are available below.
- Solvers may optionally print diagnostic lines. These lines follow the pattern "d VARIABLE VALUE" and are used to collect relevant information, such as the number of conflicts for a CDCL solver (e.g. "d CONFLICTS 8465873"). These diagnostics will be collected and made available after the competition for further analysis.
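For illustration, the extended first comment line of an instance, and a few diagnostic lines a solver might print, could look as follows. This is only a sketch: the keyword layout mimics the existing "#variable="/"#constraint=" convention, the intsize value is made up, and all diagnostic names except CONFLICTS are hypothetical; the format documents linked in the call for solvers below are authoritative.

```
* #variable= 3 #constraint= 2 #equal= 1 intsize= 4
```

```
d CONFLICTS 8465873
d DECISIONS 12094783
d RESTARTS 1523
```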
For questions, comments and any other issue regarding the PB competition, please get in touch with the organizer
Special Track in the Pragmatics of SAT Workshop
The Pragmatics of SAT (PoS) Workshop introduces this year a special "competition" track, intended to attract submissions describing the solvers participating in this year's competitions associated with the main SAT 2024 conference: the SAT Competition, the MaxSAT Evaluation, the Model Counting Competition, and the Competition of Pseudo-Boolean Solvers.
The submissions in this track are expected to be 8 pages long and should describe the rationale behind the design choices made when implementing the solvers participating in the evaluations. We believe PoS would be a great forum for bringing solver developers together for joint discussions about which algorithms and data structures do or do not work well in practice (in their tools), from the perspective of the corresponding competitions, especially since the competition results are announced shortly after at SAT 2024. Based on the competition results (in terms of tool correctness and possibly performance), the corresponding tool description papers will be included in the CEUR-WS proceedings of the workshop.
Benchmarks available in the (slightly new) PB24 format
Here are the benchmarks from the past competitions, with the two new keywords on the first comment line.
- normalized-extraPB12.tar (567 MiB)
- normalized-PB06.tar (197 MiB)
- normalized-PB07.tar (19 MiB)
- normalized-PB09.tar (74 MiB)
- normalized-PB10.tar (677 MiB) [updated: previous archive contained duplicate, uncompressed files]
- normalized-PB11.tar (377 MiB)
- normalized-PB12.tar (1.3 GiB) [updated: previous archive contained duplicate, uncompressed files]
- normalized-PB15eval.tar (82 MiB)
- normalized-PB16.tar (118 MiB)
- normalized-PB24.tar (2.5 GiB)
- SampleByIntSize.tar (128 MiB): a sample with one instance per category and value of intsize
- normalized-WBO.tar (80 MiB): WBO instances
Results of the competition
- Slides of the presentation at SAT 24.
- Detailed results of the PBS/PBO (decision/optimization) tracks.
- Detailed results of the WBO track.
- Details about the benchmark selection and the rest of the competition
- Solver descriptions and links (to be completed)
- DLS-PBO [solver description]
- Exact [solver home page] [solver description] [author home page]
- IPBHS [solver description]
- mixed-bag, pb-oll-rs [solver description], [solver sources], [author home page]
- NaPS [solver home page] [solver description]
- ParLS-PBO [solver description]
- Picat [solver home page] [solver description]
- RoundingSat [solver home page]
- Sat4j [solver home page] [author home page]
- SCIP, FiberSCIP [solver home page]
- (coming soon) description of new instances and benchmark generators
Registration and submission
Deadlines
The timetable of the 2024 competition is outlined below.

| Milestone | Date |
| --- | --- |
| Submission of solvers to the warm-up phase (no strict deadline, submit when ready) | June 2024 |
| Submission of solvers and benchmarks to the actual competition | July 7th, 2024 |
| Solver descriptions | June 2024 |
| Results announced during the SAT 2024 conference | August 21-24, 2024 |
Call for solvers
We encourage the submission of any kind of pseudo-Boolean solver (complete or incomplete solvers, sequential or parallel, working on linear or non-linear constraints, etc.). There are a few simple requirements to ensure that a submitted solver will correctly operate in the competition environment. It is expected that any pseudo-Boolean solver can be easily modified to conform to these requirements. People interested in submitting a solver can contact the organizers to get some help.
- The general OPB format is described in this PDF file. Only a subset of this format will be used in the competition (see next item). However, this general format may be used to submit instances to the competition.
- The restricted OPB format used in the competition is described in this PDF file (a short example is given after this list).
- All the details about the output of the solver and its running environment are available in this PDF file.
- Sample code, parsers for a few languages, and a few programming tips are also available.
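To give a flavour of the restricted format (the PDF files above remain the authoritative references), here is a tiny hand-made optimization instance; the intsize value is made up for this sketch:

```
* #variable= 3 #constraint= 2 #equal= 1 intsize= 4
min: +1 x1 +2 x2 +1 x3 ;
+1 x1 +2 x2 -1 x3 >= 1 ;
+1 x1 +1 x2 +1 x3 = 2 ;
```

On such an instance, an optimization solver would typically print "o" lines with improving objective values, an "s" status line, and a "v" line with the values of the variables, for example:

```
o 3
s OPTIMUM FOUND
v x1 x2 -x3
```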
Solvers will be ranked on the number of definitive answers they give in each category (UNSAT or SAT in the DEC category, OPT in the OPT category). Ties are broken by the cumulative CPU/wall-clock time taken to give these answers.
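As a minimal sketch of this ranking rule (with made-up solver names and numbers):

```python
# Rank solvers: more definitive answers first; ties are broken by the
# cumulative time taken to give those answers. All data is hypothetical.
results = {
    "solverA": (120, 4321.0),  # (definitive answers, cumulative seconds)
    "solverB": (120, 3987.5),
    "solverC": (117, 2500.0),
}
ranking = sorted(results, key=lambda s: (-results[s][0], results[s][1]))
print(ranking)  # ['solverB', 'solverA', 'solverC']
```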
Portfolios (meta-solvers calling other independent solvers) will be ranked separately. Authors will have to indicate which solvers are used in the portfolio.
Solver submitters implicitly accept that the results of their solver be made publicly available on the competition web site. A solver cannot be withdrawn from the competition once it has started.
When a solver gives even a single wrong answer in a given category of benchmarks, its results in that category cannot be trusted. Therefore, such a solver will not appear in the final ranking, but its results will still be published on the competition web site. Authors will be allowed to submit a fixed version of their solver (with a maximum of 3 fixed versions).
Solver submitters are strongly encouraged to make their solver publicly available, either in source code or in executable format. However, this is not a requirement for entering the competition.
Solvers submitted by the organizers
Organizers may submit their own solvers, but must publish on April 1st, 2024 either a precise description of their solver, the source code, the binary, or a fingerprint of one of these.
No solver has been submitted by the organizer.
Call for benchmarks
Benchmarks may be submitted in any reasonable format. The organizers will do their best to translate the submitted benchmarks to the input format adopted for the competition. A benchmark submitted in the format of the competition will still be normalized to ensure consistency among all the input files.
We would particularly appreciate the submission of:
- industrial benchmarks
- benchmarks generators (written in any reasonable language)
- benchmark sets with a wide distribution of the numbers of clauses, cardinality constraints, and pseudo-Boolean constraints
- benchmarks with non-linear constraints
Submitted benchmarks will be made available on the competition web site after the competition. We cannot accept benchmarks that are not publicly available, because anyone must be able to reproduce the experiments of the competition.
Benchmark submitters should submit a description of their benchmark set, to be made available on the competition web site.
Benchmarks of the previous competitions can be downloaded from the previous competitions sites.
Organizer
Olivier ROUSSEL (CRIL, Université d'Artois, France), olivier.roussel at cril.fr / home page