Pseudo-Boolean Competition 2024

The 2024 pseudo-Boolean competition is organized as a special event of the SAT 2024 conference.

Eight years after the last competition of pseudo-Boolean solvers, there were requests to organize a new edition. The goal is to assess the state of the art in the field of pseudo-Boolean solvers, in the same spirit as the previous competitions and evaluations (PB16, PB Evaluation 2015 (archive on Zenodo), PB12, PB11, PB10, PB09, PB07, PB06, PB05).

Main changes (in comparison with PB16)

  • A steering committee (Carlos Ansótegui, Johannes Fichte, Matti Järvisalo, Jakob Nordström, Olivier Roussel) is in charge of defining the rules of the competition, before the actual competition starts.
  • During the competition, a jury is in charge of addressing unexpected issues. The jury is composed of the members of the steering committee who have no conflict of interest with submitted solvers, and it will only have access to anonymized data. The jury has the final word on every decision taken during the competition, and it has the right to disqualify a solver.
  • A warm-up phase will be organized before the actual competition. Its goal is to test the solvers, the competition rules, and the competition environment, and to make sure that the actual competition will run smoothly. The rules of the competition may be slightly adjusted after the warm-up. The results of this warm-up will be available only to contestants and the jury, and will be erased when the actual competition starts.
  • Four tracks will be opened unconditionally: PBS (PB Satisfaction) and PBO (PB Optimization) for linear constraints, each with and without a certificate of unsatisfiability/optimality. Certificates of unsatisfiability/optimality must be generated in the VeriPB format. These tracks are open to both sequential and parallel solvers.
  • Other tracks (WBO, non-linear constraints) may be opened if there is sufficient interest.
  • Keep in mind that coefficients in PB constraints can contain an arbitrary number of digits. This year, there are no distinct categories for small and big integers (SMALLINT/BIGINT), because there is no single definition that suits all solvers. Solvers are expected to use at least 64-bit integers, but they are not required to use arbitrary-precision integers. When an instance contains integers that are too large for the solver, it must detect this at parse time and report an 's UNSUPPORTED' answer. This answer will be used to identify the instances supported by all solvers and to generate a separate ranking on these. The new keyword "intsize" in the competition format gives information on the size of the integers required to handle an instance.
  • The first line of an instance in the competition format has been extended with two new keywords: "#equal", which indicates how many equality constraints appear in the file, and "intsize" (see above). Benchmarks with this new information are available below.
  • Solvers may optionally print diagnostic lines. These lines follow the pattern "d VARIABLE VALUE" and are used to collect relevant information, such as the number of conflicts for a CDCL solver (e.g. "d CONFLICTS 8465873"). These diagnostics will be collected and made available after the competition for further analysis.
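The points above can be sketched in code. The snippet below is a minimal, illustrative Python sketch of how a solver front end might read the extended header line, reject an instance whose "intsize" exceeds its fixed integer width, and emit a diagnostic line; the exact header layout, the 64-bit limit, and all function names are assumptions for the example, not part of the competition specification.

```python
import re

MAX_BITS = 64  # assumed fixed-width integer limit of a hypothetical solver

def parse_header(line):
    """Extract the header keywords from the first comment line.
    Layout assumed here for illustration:
      * #variable= 5 #constraint= 4 #equal= 1 intsize= 37
    """
    fields = {}
    for key in ("#variable", "#constraint", "#equal", "intsize"):
        m = re.search(re.escape(key) + r"=\s*(\d+)", line)
        if m:
            fields[key] = int(m.group(1))
    return fields

def answer_for(line):
    """Return the 's UNSUPPORTED' answer when the instance needs
    integers wider than MAX_BITS bits; otherwise return None."""
    fields = parse_header(line)
    if fields.get("intsize", 0) > MAX_BITS:
        return "s UNSUPPORTED"
    return None

header = "* #variable= 5 #constraint= 4 #equal= 1 intsize= 128"
print(answer_for(header))     # -> s UNSUPPORTED
print("d CONFLICTS 8465873")  # example diagnostic line
```

A solver using arbitrary-precision integers would simply never return "s UNSUPPORTED", since no "intsize" value exceeds its capacity.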

For questions, comments and any other issue regarding the PB competition, please get in touch with the organizer.

Special Track in the Pragmatics of SAT Workshop

The Pragmatics of SAT Workshop introduces this year a special “competition” track, which is supposed to attract submissions detailing the solvers participating in this year’s competitions associated with the main SAT-2024 conference: SAT Competition, MaxSAT Evaluation, Model Counting Competition, and the Competition of Pseudo-Boolean Solvers.

The submissions in this track are expected to be 8 pages long and are supposed to describe the rationale behind the design choices made when implementing the solvers participating in the evaluations. We believe PoS would be a great forum for bringing solver developers together to participate in joint discussions about what algorithms and data structures work well or do not work well in practice (in their tools), from the perspective of the corresponding competitions, especially since the competition results are to be announced shortly after at SAT 2024. Based on the competition results (in terms of tool correctness and possibly their performance), the corresponding tool description papers will be included in CEUR-WS proceedings of the workshop.

Benchmarks available in the (slightly new) PB24 format

Here are the benchmarks from the past competitions, with the two new keywords on the first comment line. Other sources of instances (translated to the PB24 format) will be available soon.

Registration and submission

Registration and submission of solvers/benchmarks is open! Just follow this link.


The timetable of the 2024 competition is outlined below.

  • Submission of solvers to the warm-up phase (no strict deadline, submit when ready): April 2024
  • Submission of solvers and benchmarks to the actual competition: June 17th, 2024
  • Solver descriptions: June 2024
  • Announcement of the results during the SAT 2024 conference: August 21-24, 2024

Call for solvers

We encourage the submission of any kind of pseudo-Boolean solver (complete or incomplete solvers, sequential or parallel, working on linear or non-linear constraints, etc.). There are a few simple requirements to ensure that a submitted solver will correctly operate in the competition environment. It is expected that any pseudo-Boolean solver can be easily modified to conform to these requirements. People interested in submitting a solver can contact the organizers to get some help.

Solvers will be ranked on the number of definitive answers (UNSAT, SAT in the DEC category, OPT in the OPT category) they give in each category. Ties are broken on the cumulative CPU/wall-clock time taken to give these answers.
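This ranking rule can be sketched as a simple sort: more definitive answers first, and among solvers with the same count, less total time first. The solver names and figures below are made up for the example.

```python
# Illustrative ranking: sort by number of definitive answers (descending),
# then break ties on cumulative CPU/wall-clock time (ascending).
# All data is hypothetical.
results = {
    "solverA": (120, 5400.0),   # (definitive answers, total seconds)
    "solverB": (120, 4980.5),
    "solverC": (97, 3000.0),
}
ranking = sorted(results, key=lambda s: (-results[s][0], results[s][1]))
print(ranking)  # -> ['solverB', 'solverA', 'solverC']
```

Here solverB beats solverA on the tie-break (same 120 answers, but less time), while solverC ranks last despite being fastest, because the answer count is compared first.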

Portfolios (meta-solvers calling other independent solvers) will be ranked separately. Authors will have to indicate which solvers are used in the portfolio.

Solver submitters implicitly accept that the results of their solver be made publicly available on the competition web site. A solver cannot be withdrawn from the competition once it has started.

When a solver gives even a single wrong answer in a given category of benchmarks, its results in that category cannot be trusted. Therefore, such a solver will not appear in the final ranking, but its results will still be published on the competition web site. Authors will be allowed to submit a fixed version of their solver (with a maximum of 3 fixed versions).

Solver submitters are strongly encouraged to make their solver publicly available, either in source code or in executable format. However, this is not a requirement for entering the competition.

Solvers submitted by the organizers

Organizers may submit their own solvers, but must publish by April 1st, 2024 either a precise description of their solver, the source code, the binary of their solver, or a fingerprint of one of these.

No solver has been submitted by the organizer.

Call for benchmarks

Benchmarks may be submitted in any reasonable format. The organizers will do their best to translate the submitted benchmarks to the input format adopted for the competition. A benchmark submitted in the format of the competition will still be normalized to ensure consistency among all the input files.

We would particularly appreciate the submission of:

Submitted benchmarks will be made available on the competition web site after the competition. We cannot accept benchmarks which are not publicly available (because anyone must be able to reproduce the experiments of the competition).

Benchmark submitters should submit a description of their benchmark set, to be made available on the competition web site.

Benchmarks of the previous competitions can be downloaded from the previous competitions sites.


CRIL, Université d'Artois, France
olivier.roussel at / home page