Benchmark
non-incremental/QF_BV/20190429-UltimateAutomizerSvcomp2019/byte_add_1_true-unreach-call_true-no-overflow_true-termination.i_1.smt2
Generated by the tool Ultimate Automizer [1,2], which implements
an automata-theoretic approach [3] to software verification.
This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer: to satisfy the restrictions of SMT-COMP, we had to
drop, e.g., the commands for getting values (resp. models),
unsatisfiable cores, and interpolants.

2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)

[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
    Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
    Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
    and the Search for Perfect Interpolants (Competition Contribution).
    TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
    Checking for People Who Love Automata. CAV 2013: 36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
    TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
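For readers unfamiliar with the dropped commands, the sketch below shows what they look like in SMT-LIB 2. It is illustrative only: the constant and label names are invented, and `get-interpolants` is a solver extension (e.g., of SMTInterpol), not part of the core standard.

```smt2
; unsat-core production must be enabled before asserting
(set-option :produce-unsat-cores true)
(set-logic QF_BV)
(declare-fun x () (_ BitVec 8))
; named asserts so the core can refer to them
(assert (! (bvsgt x (_ bv0 8)) :named A))
(assert (! (= x (_ bv0 8)) :named B))
(check-sat)        ; unsat
(get-unsat-core)   ; e.g. (A B)
; after a sat answer (with :produce-models true) one could instead ask:
; (get-value (x))
; (get-model)
; solver extension for Craig interpolants (e.g., SMTInterpol):
; (get-interpolants A B)
```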
| Benchmark | |
| --- | --- |
| Size | 3397 |
| Compressed Size | 1255 |
| License | Creative Commons Attribution 4.0 International (CC-BY-4.0) |
| Category | industrial |
| First Occurrence | 2020-07-06 |
| Generated By | — |
| Generated On | — |
| Generator | — |
| Dolmen OK | 1 |
| strict Dolmen OK | 1 |
| check-sat calls | 1 |
| Status | unsat |
| Inferred Status | unsat |
| Size | 3389 |
| Compressed Size | 1262 |
| Max. Term Depth | 14 |
| Asserts | 2 |
| Declared Functions | 0 |
| Declared Constants | 8 |
| Declared Sorts | 0 |
| Defined Functions | 0 |
| Defined Recursive Functions | 0 |
| Defined Sorts | 0 |
| Constants | 0 |
| Declared Datatypes | 0 |
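As a rough guide to how these statistics fit together, a QF_BV script of the same shape (8 declared constants, 2 asserts, a single check-sat, status unsat) could look like the hypothetical sketch below. The identifiers and terms are invented for illustration and are much shallower than the real file's depth-14 terms.

```smt2
(set-logic QF_BV)
(set-info :status unsat)

; 8 declared bit-vector constants, matching "Declared Constants = 8"
(declare-fun a () (_ BitVec 8))
(declare-fun b () (_ BitVec 8))
(declare-fun c () (_ BitVec 8))
(declare-fun d () (_ BitVec 8))
(declare-fun e () (_ BitVec 8))
(declare-fun f () (_ BitVec 8))
(declare-fun g () (_ BitVec 8))
(declare-fun h () (_ BitVec 8))

; 2 asserts, matching "Asserts = 2"; a + a always equals a << 1 on
; bit-vectors, so the two asserts contradict each other and the single
; (check-sat) answers unsat
(assert (= b (bvadd a a)))
(assert (not (= b (bvshl a (_ bv1 8)))))

(check-sat)
(exit)
```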
Symbols

| Symbol | Count |
| --- | --- |
| not | 2 |
| or | 1 |
| and | 1 |
| = | 10 |
| let | 2 |
| extract | 17 |
| bvor | 3 |
| bvadd | 4 |
| bvsgt | 2 |
| bvshl | 3 |
| bvlshr | 6 |
| zero_extend | 17 |
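The symbol counts suggest byte-level arithmetic lifted to wider bit-vectors: `zero_extend` and `extract` dominate, wrapped around `bvadd`, `bvshl`, and `bvlshr`. A hypothetical fragment in that style, using the operators counted above (names invented, not taken from the actual file), might be:

```smt2
(set-logic QF_BV)
(declare-fun x () (_ BitVec 8))
(declare-fun y () (_ BitVec 8))

(assert
  ; 16-bit sum of two zero-extended bytes
  (let ((s (bvadd ((_ zero_extend 8) x) ((_ zero_extend 8) y))))
    (and
      ; the carry byte of s equals s shifted right by 8, truncated
      (= ((_ extract 15 8) s)
         ((_ extract 7 0) (bvlshr s (_ bv8 16))))
      ; reassembling high and low bytes with bvshl/bvor gives s back
      (= s (bvor (bvshl ((_ zero_extend 8) ((_ extract 15 8) s))
                        (_ bv8 16))
                 ((_ zero_extend 8) ((_ extract 7 0) s)))))))

(assert (not (bvsgt x y)))  ; uses not / bvsgt, like the original script

(check-sat)  ; sat for this toy fragment (the real benchmark is unsat)
```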
Evaluations

| Evaluation | Rating | Solver | Variant | Result | Wallclock (s) | CPU Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| SMT-COMP 2025 | | Bitwuzla | Bitwuzla | unsat ✅ | 0.28899 | 0.16155 |
| SMT-COMP 2025 | | Bitwuzla | Bitwuzla-MachBV-base | unsat ✅ | 0.24985 | 0.13079 |
| SMT-COMP 2025 | | Bitwuzla-MachBV | Bitwuzla-MachBV | unsat ✅ | 0.33068 | 0.21044 |
| SMT-COMP 2025 | | BVDecide | bv_decide | unsat ✅ | 0.78613 | 0.61189 |
| SMT-COMP 2025 | | BVDecide | bv_decide-nokernel | unsat ✅ | 0.67432 | 0.49813 |
| SMT-COMP 2025 | | cvc5 | cvc5 | unsat ✅ | 0.25695 | 0.13912 |
| SMT-COMP 2025 | | SMTInterpol | SMTInterpol | unsat ✅ | 10.44597 | 27.57202 |
| SMT-COMP 2025 | | Yices2 | Yices2 | unsat ✅ | 0.27613 | 0.15387 |
| SMT-COMP 2025 | | Z3alpha | Z3-alpha | unsat ✅ | 0.39360 | 0.27964 |
| SMT-COMP 2025 | | Z3 | Z3-alpha-base | unsat ✅ | 0.31783 | 0.19254 |
| SMT-COMP 2025 | | Z3 | Z3-Owl-base | unsat ✅ | 0.31085 | 0.18879 |
| SMT-COMP 2025 | | Z3 | z3siri-base | unsat ✅ | 0.30254 | 0.17627 |
| SMT-COMP 2025 | | Z3-Owl | Z3-Owl | unsat ✅ | 0.80321 | 0.68095 |