Benchmark

non-incremental/QF_BVFPLRA/20190429-UltimateAutomizerSvcomp2019/digits_bad_while_false-unreach-call.i_AllErrorsAtOnce_Iteration3_TraceCheck_0.smt2

|
Generated by the tool Ultimate Automizer [1,2] which implements
an automata theoretic approach [3] to software verification.

This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer. In order to satisfy the restrictions of SMT-COMP,
we have to drop, e.g., the commands for getting values (resp. models),
unsatisfiable cores, and interpolants.

2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)

[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
     Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
     Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
     and the Search for Perfect Interpolants - (Competition Contribution).
     TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
     Checking for People Who Love Automata. CAV 2013:36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
     TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
|
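
The header above notes that the queries for values/models, unsatisfiable cores,
and interpolants were stripped from the script. For orientation only, a minimal
sketch (not taken from this benchmark) of what such follow-up commands look like
in SMT-LIB; interpolant queries are solver-specific extensions, e.g. SMTInterpol's
get-interpolants:

    (set-option :produce-unsat-cores true)
    (set-logic QF_BV)
    (declare-const x (_ BitVec 32))
    ; a trivially unsatisfiable named assertion
    (assert (! (bvsgt x x) :named a0))
    (check-sat)        ; unsat
    (get-unsat-core)   ; lists named assertions in a core, e.g. (a0)
    ; after a sat answer one would instead query model values with (get-value (x)),
    ; which requires (set-option :produce-models true)
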
Benchmark
Size: 6072
Compressed Size: 1472
License: Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category: industrial
First Occurrence: 2020-07-06
Generated By:
Generated On:
Generator:
Dolmen OK: 1
strict Dolmen OK: 1
check-sat calls: 1
Query 1
Status: unsat
Inferred Status: unsat
Size: 6064
Compressed Size: 1471
Max. Term Depth: 9
Asserts: 27
Declared Functions: 0
Declared Constants: 21
Declared Sorts: 0
Defined Functions: 0
Defined Recursive Functions: 0
Defined Sorts: 0
Constants: 0
Declared Datatypes: 0

Symbols

false: 1    ite: 1    not: 5    =: 21
bvadd: 8    bvmul: 12    bvsrem: 4    bvsgt: 1
sign_extend: 6    fp.sub: 4    fp.mul: 8    fp.div: 1
fp.eq: 5    roundNearestTiesToEven: 1    roundTowardZero: 8    fp.to_sbv: 8
to_fp: 20
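
The symbol counts above show bit-vector arithmetic next to floating-point
operations, with to_fp/fp.to_sbv conversions and two rounding modes, as is
typical for C code mixing integer and double arithmetic. A minimal illustrative
sketch of such terms in SMT-LIB, assuming nothing about the actual benchmark
beyond its logic QF_BVFPLRA:

    (set-logic QF_BVFPLRA)
    (declare-const n (_ BitVec 32))
    (declare-const d (_ FloatingPoint 11 53))
    ; bit-vector arithmetic on the sign-extended value
    (assert (bvsgt (bvadd ((_ sign_extend 32) n) (_ bv1 64))
                   (bvsrem ((_ sign_extend 32) n) (_ bv10 64))))
    ; convert to double with round-to-nearest, compute, and compare
    (assert (fp.eq d
                   (fp.div roundNearestTiesToEven
                           ((_ to_fp 11 53) roundNearestTiesToEven n)
                           ((_ to_fp 11 53) roundNearestTiesToEven (_ bv10 32)))))
    ; truncate back to a signed 32-bit vector (roundTowardZero), as a C cast would
    (assert (= n
               ((_ fp.to_sbv 32) roundTowardZero
                                 (fp.sub roundNearestTiesToEven
                                         d
                                         (fp.mul roundNearestTiesToEven d d)))))
    (check-sat)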

Evaluations

Evaluation    | Rating     | Solver   | Variant                                    | Result     | Wallclock (s) | CPU Time (s)
SMT-COMP 2020 | 0.50 (2/4) | COLIBRI  | COLIBRI 20.5.25_default                    | unknown ❌ | 0.31294       | 0.31277
SMT-COMP 2020 | 0.50 (2/4) | CVC4     | CVC4-sq-final_default                      | unsat ✅   | 0.02483       | 0.02513
SMT-COMP 2020 | 0.50 (2/4) | MathSAT  | MathSAT5_default.sh                        | unsat ✅   | 0.01846       | 0.01841
SMT-COMP 2021 | 0.50 (2/4) | COLIBRI  | COLIBRI_21_06_23_default                   | unknown ❌ | 0.32675       | 0.32725
SMT-COMP 2021 | 0.50 (2/4) | COLIBRI  | COLIBRI 20.5.25_default                    | unknown ❌ | 0.31717       | 0.31778
SMT-COMP 2021 | 0.50 (2/4) | COLIBRI  | COLIBRI_21_05_28_default                   | unknown ❌ | 0.32791       | 0.32843
SMT-COMP 2021 | 0.50 (2/4) | CVC4     | CVC4-sq-final_default                      | unsat ✅   | 0.02622       | 0.02652
SMT-COMP 2021 | 0.50 (2/4) | MathSAT  | mathsat-5.6.6_default                      | unsat ✅   | 0.01889       | 0.01886
SMT-COMP 2022 | 0.20 (4/5) | Bitwuzla | Bitwuzla-wrapped_default                   | unsat ✅   | 0.03848       | 0.03845
SMT-COMP 2022 | 0.20 (4/5) | COLIBRI  | COLIBRI 22_06_18_default                   | unknown ❌ | 0.32953       | 0.32965
SMT-COMP 2022 | 0.20 (4/5) | cvc5     | cvc5_default                               | unsat ✅   | 0.02721       | 0.02768
SMT-COMP 2022 | 0.20 (4/5) | cvc5     | cvc5-default-2022-07-02-b15e116-wrapped_sq | unsat ✅   | 0.03035       | 0.03093
SMT-COMP 2022 | 0.20 (4/5) | MathSAT  | MathSAT-5.6.8_default                      | unsat ✅   | 0.01728       | 0.01723
SMT-COMP 2022 | 0.20 (4/5) | Z3       | z3-4.8.17_default                          | unsat ✅   | 1.76788       | 1.76957
SMT-COMP 2023 | 0.33 (2/3) | Bitwuzla | Bitwuzla-fixed_default                     | unsat ✅   | 0.31667       | 0.31659
SMT-COMP 2023 | 0.33 (2/3) | COLIBRI  | COLIBRI 2023_05_10_default                 | unknown ❌ | 0.34578       | 0.34526
SMT-COMP 2023 | 0.33 (2/3) | cvc5     | cvc5-default-2023-05-16-ea045f305_sq       | unsat ✅   | 0.02514       | 0.02559
SMT-COMP 2024 | 0.33 (2/3) | Bitwuzla | Bitwuzla                                   | unsat ✅   | 0.55997       | 0.46018
SMT-COMP 2024 | 0.33 (2/3) | COLIBRI  | COLIBRI                                    | unknown ❌ | 0.49704       | 0.36802
SMT-COMP 2024 | 0.33 (2/3) | cvc5     | cvc5                                       | unsat ✅   | 0.22651       | 0.12649
SMT-COMP 2025 | 0.25 (3/4) | Bitwuzla | Bitwuzla                                   | unsat ✅   | 0.55983       | 0.43423
SMT-COMP 2025 | 0.25 (3/4) | COLIBRI  | COLIBRI                                    | unknown ❌ | 0.47696       | 0.35375
SMT-COMP 2025 | 0.25 (3/4) | Colibri2 | colibri2                                   | unsat ✅   | 13.65174      | 13.52740
SMT-COMP 2025 | 0.25 (3/4) | cvc5     | cvc5                                       | unsat ✅   | 0.26007       | 0.14449