Benchmark
non-incremental/QF_BV/20210312-Bouvier/vlsat3_a50.smt2
Publications:
[1] Pierre Bouvier, Hubert Garavel, and Hernán Ponce de León.
"Automatic Decomposition of Petri Nets into Automata Networks -
A Synthetic Account". Proceedings of PETRI NETS 2020, LNCS 12152,
Springer. https://doi.org/10.1007/978-3-030-51831-8_1
[2] Hubert Garavel. "Nested-Unit Petri Nets". Journal of Logical
and Algebraic Methods in Programming, vol. 104, Elsevier, 2019.
https://doi.org/10.1016/j.jlamp.2018.11.005
In [1], several methods were proposed for decomposing an ordinary,
safe Petri net into a flat, unit-safe NUPN [2]. These methods are
implemented in a complete tool chain involving SAT solvers, SMT
solvers, and tools for graph coloring and maximal-clique finding.
From a data set of more than 12,000 NUPN models, more than 51,000
SMT formulas were generated, from which a subset of 1,200
interesting formulas was carefully selected for use as SMT-LIB 2.6
benchmarks.
Original filename: vlsat3_a50.smt2
Specific parameters for the present benchmark:
- number of places: 140
- number of units: 19
- number of edges in the concurrency graph: 8610
- number of variables: 140
- number of uninterpreted functions: 0
- number of asserts: 8750
- total number of operators in asserts: 43614
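
These parameters line up with the operator counts listed under
Symbols below: the 8,750 asserts split into 8,610 edge constraints
(one = and one bvand per edge of the concurrency graph) and 140
per-place distinct constraints (8,610 + 140 = 8,750). The following
is a minimal sketch of what a formula of this shape could look like
in SMT-LIB 2.6; the encoding, the names, and the 19-bit width
(mirroring the 19 units) are assumptions for illustration, not the
literal contents of the file, and the 18 occurrences of extract are
omitted:

```smt2
; Hedged sketch, not the literal benchmark contents: one bit-vector
; constant per place, 19 bits wide to mirror the 19 units (assumed).
(set-logic QF_BV)
(declare-const p1 (_ BitVec 19))
(declare-const p2 (_ BitVec 19))
; Hypothetical per-place constraint: every place belongs to some unit.
(assert (distinct p1 (_ bv0 19)))
(assert (distinct p2 (_ bv0 19)))
; Hypothetical per-edge constraint: concurrent places p1 and p2 are
; assigned to disjoint sets of units.
(assert (= (bvand p1 p2) (_ bv0 19)))
(check-sat)
```

In the actual file, 140 such bit-vector constants and 8,750 such
asserts form a constraint system whose recorded status is unsat.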
| Benchmark | |
| --- | --- |
| Size | 455833 |
| Compressed Size | 29196 |
| License | Creative Commons Attribution 4.0 International (CC-BY-4.0) |
| Category | industrial |
| First Occurrence | 2021-07-18 |
| Generated By | Pierre Bouvier |
| Generated On | 2021-03-12 00:00:00 |
| Generator | — |
| Dolmen OK | 1 |
| strict Dolmen OK | 1 |
| check-sat calls | 1 |
| Status | unsat |
| Inferred Status | unsat |
| Size | 455825 |
| Compressed Size | 29202 |
| Max. Term Depth | 3 |
| Asserts | 8750 |
| Declared Functions | 0 |
| Declared Constants | 140 |
| Declared Sorts | 0 |
| Defined Functions | 0 |
| Defined Recursive Functions | 0 |
| Defined Sorts | 0 |
| Constants | 0 |
| Declared Datatypes | 0 |
Symbols

| Symbol | Count |
| --- | --- |
| = | 8610 |
| distinct | 140 |
| extract | 18 |
| bvand | 8610 |
Evaluations

| Evaluation | Rating | Solver | Variant | Result | Wallclock (s) | CPU Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| SMT-COMP 2021 | 0.50 (2/4) | Bitwuzla | Bitwuzla-fixed_default | unsat ✅ | 158.48300 | 158.41600 |
| | | MathSAT | mathsat-5.6.6_default | unknown ❌ | 1200.08000 | 1144.55000 |
| | | STP | STP 2021.0_default | unsat ✅ | 290.73700 | 1152.80000 |
| | | Z3 | z3-4.8.11_default | unknown ❌ | 1200.11000 | 1200.00000 |
| SMT-COMP 2022 | 0.29 (5/7) | Bitwuzla | Bitwuzla-fixed_default | unsat ✅ | 196.98600 | 196.93400 |
| | | Bitwuzla | Bitwuzla-wrapped_default | unsat ✅ | 364.38700 | 364.37900 |
| | | cvc5 | cvc5-default-2022-07-02-b15e116-wrapped_sq | unknown ❌ | 1200.01000 | 1199.91000 |
| | | MathSAT | MathSAT-5.6.8_default | unsat ✅ | 1198.65000 | 1198.53000 |
| | | STP | STP 2022.4_default | unsat ✅ | 150.60500 | 150.60900 |
| | | Yices2 | Yices 2.6.2 for SMTCOMP 2021_default | unsat ✅ | 371.43800 | 371.43200 |
| | | Z3++BV | z3++bv_0702_default | unknown ❌ | 1200.11000 | 1200.02000 |
| | | Z3 | z3-4.8.17_default | unsat ✅ | 326.62000 | 326.59600 |
| SMT-COMP 2023 | 0.33 (4/6) | Bitwuzla | Bitwuzla-fixed_default | unsat ✅ | 298.50900 | 298.43500 |
| | | cvc5 | cvc5-default-2023-05-16-ea045f305_sq | unsat ✅ | 1139.38000 | 1139.11000 |
| | | STP | STP 2022.4_default | unsat ✅ | 283.67200 | 283.66600 |
| | | STP | STP 2022.4_default | unsat ✅ | 278.26500 | 278.25200 |
| | | UltimateEliminator | UltimateIntBlastingWrapper+SMTInterpol_default | unknown ❌ | 1200.04000 | 1287.81000 |
| | | Yices2 | Yices 2 for SMTCOMP 2023_default | unknown ❌ | 1200.03000 | 1199.95000 |
| | | Z3-Owl | z3-Owl-Final_default | unknown ❌ | 1200.11000 | 1199.88000 |
| | | Z3-Owl | z3-Owl-Final_default | unsat ✅ | 260.85400 | 260.83500 |
| SMT-COMP 2024 | 0.67 (2/6) | Bitwuzla | Bitwuzla | unsat ✅ | 146.40826 | 146.30479 |
| | | cvc5 | cvc5 | unknown ❌ | 1201.71481 | 1200.99503 |
| | | SMTInterpol | SMTInterpol | unknown ❌ | 1202.72162 | 1489.54482 |
| | | STP | STP | unsat ✅ | 113.62818 | 113.51271 |
| | | Yices2 | Yices2 | unknown ❌ | 1201.21370 | 1200.88002 |
| | | Z3alpha | Z3-alpha | unknown ❌ | 1201.71383 | 1201.02151 |