Benchmark
non-incremental/QF_BV/20210312-Bouvier/vlsat3_g04.smt2
Publications:
[1] Pierre Bouvier, Hubert Garavel, and Hernan Ponce de Leon.
"Automatic Decomposition of Petri Nets into Automata Networks -
A Synthetic Account". Proceedings PETRI NETS 2020, LNCS 12152,
Springer. https://doi.org/10.1007/978-3-030-51831-8_1
[2] Hubert Garavel. "Nested-Unit Petri Nets". Journal of Logical
and Algebraic Methods in Programming, vol. 104, Elsevier, 2019.
https://doi.org/10.1016/j.jlamp.2018.11.005
In [1], several methods have been proposed for decomposing an ordinary,
safe Petri net into a flat, unit-safe NUPN [2]. These methods are
implemented in a complete tool chain involving SAT solvers, SMT solvers,
and tools for graph coloring and maximal-clique finding. From a data set
of more than 12,000 NUPN models, more than 51,000 SMT formulas were
generated, out of which a subset of 1,200 interesting formulas was
carefully selected for use as SMT-LIB 2.6 benchmarks.
Original filename: vlsat3_g04.smt2
Specific parameters for the present benchmark:
- number of places: 224
- number of units: 89
- number of edges in the concurrency graph: 15608
- number of variables: 224
- number of uninterpreted functions: 0
- number of asserts: 15832
- total number of operators in asserts: 79416
| Benchmark | |
| --- | --- |
| Size (bytes) | 1937522 |
| Compressed Size (bytes) | 64750 |
| License | Creative Commons Attribution 4.0 International (CC-BY-4.0) |
| Category | industrial |
| First Occurrence | 2021-07-18 |
| Generated By | Pierre Bouvier |
| Generated On | 2021-03-12 00:00:00 |
| Generator | — |
| Dolmen OK | 1 |
| strict Dolmen OK | 1 |
| check-sat calls | 1 |
| Status | sat |
| Inferred Status | sat |
| Size (bytes) | 1937514 |
| Compressed Size (bytes) | 64759 |
| Max. Term Depth | 3 |
| Asserts | 15832 |
| Declared Functions | 0 |
| Declared Constants | 224 |
| Declared Sorts | 0 |
| Defined Functions | 0 |
| Defined Recursive Functions | 0 |
| Defined Sorts | 0 |
| Constants | 0 |
| Declared Datatypes | 0 |
Symbols
| Symbol | Count |
| --- | --- |
| = | 15608 |
| distinct | 224 |
| extract | 88 |
| bvand | 15608 |
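The symbol counts line up with the structural parameters above: the 15,608 occurrences of `=` and `bvand` match the number of edges in the concurrency graph, and the 224 occurrences of `distinct` match the number of places, giving 15,608 + 224 = 15,832 asserts in total. The following is a minimal hypothetical sketch of what such constraints could look like in SMT-LIB 2.6; the variable names, the 89-bit width (one bit per unit), and the constraint shapes are assumptions for illustration, not taken from the actual file:

```smt2
; Hypothetical sketch only: names, bit-width, and constraint shapes are
; guesses based on the statistics above, not the contents of vlsat3_g04.smt2.
(set-logic QF_BV)
; one bit-vector variable per place (224 in the real benchmark),
; assumed here to have one bit per unit (89 units)
(declare-const p1 (_ BitVec 89))
(declare-const p2 (_ BitVec 89))
; each place is assigned at least one unit: its variable is non-zero
(assert (distinct p1 (_ bv0 89)))
(assert (distinct p2 (_ bv0 89)))
; two concurrent places (an edge of the concurrency graph) share no unit:
; the bitwise AND of their variables is zero
(assert (= (bvand p1 p2) (_ bv0 89)))
(check-sat)
```

Any of the QF_BV solvers in the evaluations below can be run directly on the benchmark file to reproduce the `sat` result.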
Evaluations
| Evaluation | Rating | Solver | Variant | Result | Wallclock (s) | CPU Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| SMT-COMP 2021 | | Bitwuzla | Bitwuzla-fixed_default | sat ✅ | 10.39370 | 10.39380 |
| | | MathSAT | mathsat-5.6.6_default | sat ✅ | 8.42855 | 8.42833 |
| | | STP | STP 2021.0_default | sat ✅ | 112.29200 | 423.96000 |
| | | Z3 | z3-4.8.11_default | sat ✅ | 3.72130 | 3.72116 |
| SMT-COMP 2023 | 0.17 (5/6) | Bitwuzla | Bitwuzla-fixed_default | sat ✅ | 17.78640 | 17.77760 |
| | | cvc5 | cvc5-default-2023-05-16-ea045f305_sq | sat ✅ | 44.82680 | 44.82670 |
| | | STP | STP 2022.4_default | sat ✅ | 13.49980 | 13.49930 |
| | | | STP 2022.4_default | sat ✅ | 13.70610 | 13.70280 |
| | | UltimateEliminator | UltimateIntBlastingWrapper+SMTInterpol_default | unknown ❌ | 294.06900 | 919.57000 |
| | | Yices2 | Yices 2 for SMTCOMP 2023_default | sat ✅ | 1.27270 | 1.27243 |
| | | Z3-Owl | z3-Owl-Final_default | sat ✅ | 142.28900 | 142.27300 |
| | | | z3-Owl-Final_default | sat ✅ | 157.50900 | 157.49000 |
| SMT-COMP 2025 | 0.22 (7/9) | Bitwuzla | Bitwuzla | sat ✅ | 4.42169 | 4.29224 |
| | | | Bitwuzla-MachBV-base | sat ✅ | 6.43584 | 6.31267 |
| | | Bitwuzla-MachBV | Bitwuzla-MachBV | sat ✅ | 5.95845 | 5.83352 |
| | | BVDecide | bv_decide | sat ✅ | 220.97841 | 220.84255 |
| | | | bv_decide-nokernel | sat ✅ | 222.98717 | 222.89456 |
| | | cvc5 | cvc5 | sat ✅ | 18.60726 | 18.48745 |
| | | SMTInterpol | SMTInterpol | unknown ❌ | 60.11946 | 200.57500 |
| | | Yices2 | Yices2 | sat ✅ | 4.48082 | 4.35812 |
| | | Z3alpha | Z3-alpha | sat ✅ | 185.18754 | 217.32525 |
| | | Z3 | Z3-alpha-base | unknown ❌ | 1201.28598 | 1201.00240 |
| | | | Z3-Owl-base | unknown ❌ | 1201.28387 | 1201.06104 |
| | | | z3siri-base | unknown ❌ | 1201.28771 | 1201.00980 |
| | | Z3-Owl | Z3-Owl | sat ✅ | 60.66044 | 60.52269 |