DMC: A Distributed Model Counter
DMC is a distributed model counter that computes the number of models of an input CNF formula. DMC can take advantage of a (possibly large) number of sequential model counters running on (possibly heterogeneous) computing units spread over a network of computers. To ensure an efficient workload distribution, the model counting task is shared between the model counters following a policy close to work stealing. The number and the sizes of the messages exchanged by the jobs are minimized.
For more details, see the paper:
J.-M. Lagniez, P. Marquis and N. Szczepanski, "DMC: A Distributed Model Counter". Proceedings of IJCAI'18.
Resources for DMC (V 1.0)
- the benchmarks themselves.
- log files.
- the runtime code of our counter.
- a table synthesizing the empirical results.
- statistics about the communication.
1) Benchmarks used
The subfolder instances.tar.gz contains the 40 CNF benchmarks bench.cnf used in our experiments. Each instance must conform to the standard DIMACS CNF format: a header line "p cnf #vars #clauses", followed by one clause per line, each clause being a list of non-zero integer literals terminated by 0. For example, the following instance has 3 variables and 3 clauses:
p cnf 3 3
1 2 3 0
-1 -2 0
-2 -3 0
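To make the notion of model count concrete, here is a small brute-force Python sketch (for illustration only; DMC itself proceeds very differently) that enumerates all assignments of the 3-variable example above and counts those satisfying every clause:

```python
from itertools import product

def count_models(clauses, num_vars):
    """Count the assignments satisfying every clause, by exhaustive enumeration.

    A clause is a list of non-zero integer literals: literal v means
    "variable v is true" and -v means "variable v is false".
    """
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        # A clause is satisfied when at least one of its literals is true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# The example instance above: (1 v 2 v 3), (-1 v -2), (-2 v -3).
print(count_models([[1, 2, 3], [-1, -2], [-2, -3]], 3))  # 4
```

This exhaustive approach is exponential in the number of variables, which is precisely why dedicated (and here, distributed) model counters such as DMC are needed.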
2) Log Files
The subfolder logs.tar.gz contains one subfolder per model counter used in the experiments; the name of each subfolder is of the form MC#cores3600s, where MC is the model counter (CountAntom, dCountAntom, or dmc), # is the number of cores (2, 4, 8, 16, 32, 64, or 128), and 3600s recalls the time-out we have considered. For each instance bench.cnf used in the experiments, each subfolder contains two files, bench.log and bench.err.
Each file bench.log reports various pieces of information about the communications performed during the distributed computation of the number of models of the corresponding benchmark. Among them, one can find the number of variables and the number of clauses, as well as the number of models (if found within 3600s). The number of models is given by a line starting with "s", followed by the number of models itself.
Each file bench.err reports various pieces of information about the model counting jobs done by the workers during the computation. Among them, one can find the wall-clock time spent solving the corresponding benchmark, if the computation succeeded. This is given by a line starting with "real", followed by the time (in seconds).
N.B.: Because the various jobs wrote into the same two files bench.log and bench.err during the distributed computation, lines coming from one job may have been interleaved with lines coming from other jobs, making the files difficult to decipher. Nevertheless, the key pieces of information reported in the files (number of models, wall-clock time) do not suffer from this problem, and can be read safely without any risk of confusion.
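Since the "s" and "real" lines are not affected by the interleaving, the model count and the wall-clock time can be recovered with a simple scan of the two files. The following Python sketch shows one way to do it; the file excerpts at the bottom are illustrative, and it assumes (as described above) that the time on the "real" line is printed directly in seconds:

```python
import re

def parse_model_count(log_text):
    """Return the model count from a bench.log, or None if not found.

    The count is reported on a line starting with 's', followed by
    the number of models itself.
    """
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "s" and parts[1].isdigit():
            return int(parts[1])  # Python ints handle arbitrarily large counts
    return None

def parse_wall_clock(err_text):
    """Return the wall-clock time in seconds from a bench.err,
    reported on a line starting with 'real', or None if not found."""
    for line in err_text.splitlines():
        m = re.match(r"real\s+([0-9.]+)", line)
        if m:
            return float(m.group(1))
    return None

# Illustrative excerpts of bench.log and bench.err:
log_excerpt = "c [MASTER] STOP RUNNING\ns 24\nc [WORKER 1]: STOP RUNNING\n"
err_excerpt = "real 12.34\n"
print(parse_model_count(log_excerpt))  # 24
print(parse_wall_clock(err_excerpt))   # 12.34
```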
3) How to use dmc
The folder dmc.tar.gz contains the object files (.o) corresponding to DMC. In order to run DMC, the following pieces of software must be installed:
- g++ 4.9.4 with C++ 2011 support
- zlib http://zlib.net/
- OpenMPI https://www.open-mpi.org/software/
A makefile is given in the folder dmc.tar.gz. In order to get the run-time code of DMC, execute the command "make" in a terminal. A run-time code named DeMoniaC will be generated in the current directory. For running DMC on 2 computers named (respectively) node1 and node2 (as made precise during the configuration of MPI), execute the following command:
mpirun -np #1 --host node1 DeMoniaC -dmc bench.cnf : -np #2 --host node2 DeMoniaC -dmc bench.cnf
#1 is the number of cores of node1, and #2 is the number of cores of node2. For running DMC on more than 2 computers, it is enough to repeat the sequence : -np #2 --host node2 DeMoniaC -dmc bench.cnf in the command line, replacing 2 by 3, 4, 5, etc. as expected.
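For instance, assuming three computers named node1, node2, and node3 with 8, 8, and 16 cores respectively (the node names and core counts here are purely illustrative), the command would read:

```shell
mpirun -np 8  --host node1 DeMoniaC -dmc bench.cnf : \
       -np 8  --host node2 DeMoniaC -dmc bench.cnf : \
       -np 16 --host node3 DeMoniaC -dmc bench.cnf
```

Each colon-separated segment of the mpirun command line launches DeMoniaC on one host with the requested number of processes.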
$ mpirun -np 3 ./dmc -dmc ais6.cnf
c Integer mode
c Integer mode
c Integer mode
c
c Option list
c Caching: 1
c Variable heuristic: VSADS
c Phase heuristic: TRUE
c Partitioning heuristic: YES + graph reduction + equivalence simplication
c
c Benchmark Information
c Number of variables: 61
c Number of clauses: 581
c Number of literals: 1351
c [WORKER 2]: PREPROCESSING
c [WORKER 1]: PREPROCESSING
c [WORKER 2] initialization done
c [MASTER] Initialization done
-1: U R
c [WORKER 1] initialization done
2: W R
c [WORKER 2]: IS RUNNING ON A NEW PROBLEM: tps = 0.00205517 |C| = 426 |V| = 61
c [WORKER 2] ---------------------------------------------------------------------------------------------------------------------------------
c [WORKER 2] #compile | time | #posHit | #negHit | #split | size cache | Mem(MB) | #equivCall | #Dec. Node | #paritioner | wtime | ptime | share pb | share sol | wait pb |
c [WORKER 2] ---------------------------------------------------------------------------------------------------------------------------------
-1: R R
c [WORKER 2] wait for the master 2 1.5974e-05 at 5.96046e-06
c [WORKER 2] the worker send 1 in 5.38826e-05
-1: R U
1: R W
c [WORKER 1]: IS RUNNING ON A NEW PROBLEM: tps = 0.00586295 |C| = 79 |V| = 59
c [WORKER 1] ---------------------------------------------------------------------------------------------------------------------------------
c [WORKER 1] #compile | time | #posHit | #negHit | #split | size cache | Mem(MB) | #equivCall | #Dec. Node | #paritioner | wtime | ptime | share pb | share sol | wait pb |
c [WORKER 1] ---------------------------------------------------------------------------------------------------------------------------------
-1: U W
c [MASTER] STOP RUNNING
[MASTER] Expression evaluation: 2
s 24
c [WORKER 2] --------------------------------------------c [WORKER 1] ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
c [WORKER 1]
c [WORKER 1] Statistics
c [WORKER 1] Compilation Information
c [WORKER 1] Number of recursive call: 5
c [WORKER 1] Number of split formula: 0
c [WORKER 1] Number of decision: 2
c [WORKER 1] Number of paritioner calls: 1
c [WORKER 1] Number of positive hit 0
c [WORKER 1] Number of negative hit 2
-------
c [WORKER 2]
c [WORKER 2] Statistics
c [WORKER 2] Compilation Information
c [WORKER 2] Number of recursive call: 102
c [WORKER 2] Number of split formula: 0
c [WORKER 2] Number of decision: 51
c [WORKER 2] Number of paritioner calls: 1
c [WORKER 2] Number of positive hit 0
c [WORKER 2] Number of negative hit 51
c [WORKER 2]
c [WORKER 2] Final time: 0.008040
c [WORKER 2] Final time communication: 0.000119
c [WORKER 1]
c [WORKER 1] Final time: 0.007135
c [WORKER 1] Final time communication: 0.005502
c [WORKER 1] Final time communication share problem: 0.000000
c [WORKER 1] Final time communication share solution: 0.000019
c [WORKER 1] Final time communication wait for a problem: 0.005483
c [WORKER 1]
c [WORKER 1]: STOP RUNNING
c [WORKER 2] Final time communication share problem: 0.000064
c [WORKER 2] Final time communication share solution: 0.000055
c [WORKER 2] Final time communication wait for a problem: 0.000000
c [WORKER 2]
c [WORKER 2]: STOP RUNNING
The number of models computed by DMC is reported on the output line starting with "s". For the input ais6.cnf, this line is "s 24", which means that ais6.cnf has 24 models.
4-5) Results
The file tableResuts.pdf contains a table which provides the results obtained. For each benchmark (benchmarks are gathered by families), the table reports its number of variables, its number of clauses, and then the wall-clock time needed to compute its number of models using DMC equipped with 1, 2, 4, 8, 16, 32, 64, and 128 cores (in sequence), then CountAntom (run on 8 cores), and finally dCountAntom (run on 128 cores). For DMC running on n cores with n > 1, whenever a speedup exists (the ratio of the 1-core wall-clock time to the n-core one), it is also reported.
The file tableIdle.pdf contains a table which provides, for each benchmark, some statistics about the sum of the communication times and the time during which the processing units were idle when the model counter used was DMC run on 128 cores.