# Benchmark `non-incremental/QF_BV/20210312-Bouvier/vlsat3_g88.smt2`
## Publications

[1] Pierre Bouvier, Hubert Garavel, and Hernan Ponce de Leon. "Automatic Decomposition of Petri Nets into Automata Networks - A Synthetic Account". Proceedings of PETRI NETS 2020, LNCS 12152, Springer. https://doi.org/10.1007/978-3-030-51831-8_1

[2] Hubert Garavel. "Nested-Unit Petri Nets". Journal of Logical and Algebraic Methods in Programming, vol. 104, Elsevier, 2019. https://doi.org/10.1016/j.jlamp.2018.11.005
In [1], several methods were proposed for decomposing an ordinary, safe Petri net into a flat, unit-safe NUPN [2]. These methods are implemented in a complete tool chain involving SAT solvers, SMT solvers, and tools for graph coloring and maximal-clique finding. From a data set of more than 12,000 NUPN models, more than 51,000 SMT formulas were generated, from which a subset of 1200 interesting formulas was carefully selected for use as SMT-LIB 2.6 benchmarks.
Original filename: vlsat3_g88.smt2
Specific parameters for the present benchmark (an illustrative encoding sketch follows this list):
- number of places: 194
- number of units: 52
- number of edges in the concurrency graph: 16724
- number of variables: 194
- number of uninterpreted functions: 0
- number of asserts: 16918
- total number of operators in asserts: 84610
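The symbol counts in the table further below suggest roughly one equality over a `bvand` per concurrency-graph edge (16,724 of each), plus per-variable constraints. As a purely illustrative sketch of how such constraints can be posed to an SMT solver, here is a toy instance in Python with the `z3` bindings; the bit-vector width, the encoding, and all names are assumptions for illustration, not the generator's actual output:

```python
# Toy illustration only: NOT the vlsat3 generator's actual encoding.
# One bit-vector constant per place; one (= (bvand p q) 0) assert per
# concurrency-graph edge; a distinct constraint over the variables.
from z3 import BitVec, BitVecVal, Distinct, Solver

WIDTH = 8                                             # hypothetical width
places = [BitVec(f"p{i}", WIDTH) for i in range(4)]   # 4 places, not 194
edges = [(0, 1), (1, 2), (2, 3)]                      # tiny concurrency graph

s = Solver()
for i, j in edges:
    # Concurrent places must not share any set bit (hypothetical reading).
    s.add((places[i] & places[j]) == BitVecVal(0, WIDTH))
s.add(Distinct(*places))                              # distinct codes per place
print(s.check())                                      # prints: sat
```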
| Benchmark | |
| --- | --- |
| Size (bytes) | 1444793 |
| Compressed Size (bytes) | 60745 |
| License | Creative Commons Attribution 4.0 International (CC-BY-4.0) |
| Category | industrial |
| First Occurrence | 2021-07-18 |
| Generated By | Pierre Bouvier |
| Generated On | 2021-03-12 00:00:00 |
| Generator | (none) |
| Dolmen OK | 1 |
| strict Dolmen OK | 1 |
| check-sat calls | 1 |
| Status | sat |
| Inferred Status | sat |
| Size (bytes) | 1444785 |
| Compressed Size (bytes) | 60734 |
| Max. Term Depth | 3 |
| Asserts | 16918 |
| Declared Functions | 0 |
| Declared Constants | 194 |
| Declared Sorts | 0 |
| Defined Functions | 0 |
| Defined Recursive Functions | 0 |
| Defined Sorts | 0 |
| Constants | 0 |
| Declared Datatypes | 0 |
## Symbols

| Symbol | Occurrences |
| --- | --- |
| = | 16724 |
| distinct | 194 |
| extract | 51 |
| bvand | 16724 |
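These counts are consistent with the parameters above: 16,724 `=`/`bvand` pairs match the number of concurrency-graph edges, and together with the 194 `distinct` occurrences they account for the 16,918 asserts (16724 + 194 = 16918). A rough way to reproduce such a tally from a local copy of the file is a plain token count; exact numbers depend on how the file is tokenized:

```python
# Rough operator tally over the SMT-LIB file; the path assumes a local copy.
import re
from collections import Counter

with open("vlsat3_g88.smt2") as f:
    # Split the s-expression text into atoms (anything between
    # whitespace and parentheses).
    tokens = re.findall(r"[^\s()]+", f.read())

counts = Counter(tokens)
for op in ("=", "distinct", "extract", "bvand"):
    print(op, counts[op])
```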
## Evaluations

| Evaluation | Rating | Solver | Variant | Result | Wallclock (s) | CPU Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| SMT-COMP 2021 | 0.50 (2/4) | Bitwuzla | Bitwuzla-fixed_default | sat ✅ | 6.95871 | 6.95721 |
| | | MathSAT | mathsat-5.6.6_default | unknown ❌ | 1200.02000 | 1198.85000 |
| | | STP | STP 2021.0_default | sat ✅ | 66.81550 | 249.56600 |
| | | Z3 | z3-4.8.11_default | unknown ❌ | 1200.11000 | 1199.87000 |
| SMT-COMP 2023 | 0.33 (4/6) | Bitwuzla | Bitwuzla-fixed_default | sat ✅ | 9.34445 | 9.34296 |
| | | cvc5 | cvc5-default-2023-05-16-ea045f305_sq | unknown ❌ | 1200.03000 | 1199.79000 |
| | | STP | STP 2022.4_default | sat ✅ | 9.62950 | 9.63000 |
| | | | STP 2022.4_default | sat ✅ | 9.60104 | 19.11070 |
| | | UltimateEliminator | UltimateIntBlastingWrapper+SMTInterpol_default | unknown ❌ | 280.14300 | 849.51300 |
| | | Yices2 | Yices 2 for SMTCOMP 2023_default | sat ✅ | 2.09612 | 4.16609 |
| | | Z3-Owl | z3-Owl-Final_default | sat ✅ | 538.76500 | 538.60800 |
| | | | z3-Owl-Final_default | sat ✅ | 174.94500 | 174.87400 |
| SMT-COMP 2025 | 0.33 (6/9) | Bitwuzla | Bitwuzla | sat ✅ | 2.40643 | 2.28667 |
| | | | Bitwuzla-MachBV-base | sat ✅ | 6.74649 | 6.62855 |
| | | Bitwuzla-MachBV | Bitwuzla-MachBV | sat ✅ | 3.50864 | 3.38304 |
| | | BVDecide | bv_decide | sat ✅ | 252.28345 | 252.13075 |
| | | | bv_decide-nokernel | sat ✅ | 238.49094 | 238.32327 |
| | | cvc5 | cvc5 | sat ✅ | 17.83052 | 17.70808 |
| | | SMTInterpol | SMTInterpol | unknown ❌ | 62.12455 | 209.00940 |
| | | Yices2 | Yices2 | sat ✅ | 3.54000 | 3.42387 |
| | | Z3alpha | Z3-alpha | sat ✅ | 110.63123 | 233.58342 |
| | | Z3 | Z3-alpha-base | unknown ❌ | 1201.29945 | 1201.00875 |
| | | | Z3-Owl-base | unknown ❌ | 1201.26600 | 1200.97610 |
| | | | z3siri-base | unknown ❌ | 1201.29264 | 1200.92810 |
| | | Z3-Owl | Z3-Owl | unknown ❌ | 1201.78256 | 1201.19771 |
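To reproduce one of these runs locally, the benchmark can be handed to any SMT-LIB 2.6 solver. A minimal sketch using Z3's Python bindings follows; the file path assumes a local copy, the 1200 s limit mirrors the competition timeout, and note that the stock Z3 entries above reported unknown within that budget, so a long run is to be expected:

```python
# Minimal sketch: check the benchmark with Z3's Python API.
from z3 import Solver, parse_smt2_file

s = Solver()
s.set("timeout", 1200 * 1000)          # competition-style limit, in ms
s.add(parse_smt2_file("vlsat3_g88.smt2"))
print(s.check())                       # stock Z3 reported unknown in SMT-COMP
```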