Benchmark

non-incremental/QF_BVFPLRA/20190429-UltimateAutomizerSvcomp2019/newton_2_8_false-unreach-call_true-termination.i_AllErrorsAtOnce_Iteration1_TraceCheck_0.smt2

|
Generated by the tool Ultimate Automizer [1,2] which implements
an automata theoretic approach [3] to software verification.

This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer. In order to satisfy the restrictions of
SMT-COMP, we had to drop, e.g., the commands for getting
values (resp. models), unsatisfiable cores, and interpolants.

2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)

[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
     Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
     Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
     and the Search for Perfect Interpolants - (Competition Contribution).
     TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
     Checking for People Who Love Automata. CAV 2013: 36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
     TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
|
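
The source note above mentions commands that were dropped to meet SMT-COMP's
restrictions. The following is a minimal sketch (not taken from this benchmark;
the constant and label names are invented) of the kinds of commands the note
refers to. (get-value ...) and (get-unsat-core) are standard SMT-LIB, while
interpolant queries are a solver-specific extension (e.g., SMTInterpol's
get-interpolants), which is why none of them may appear in a competition script:

  (set-option :produce-unsat-cores true)
  (set-logic QF_FP)
  (declare-const x (_ FloatingPoint 8 24))
  ; Two contradictory bounds, so the unsat-core query below is well-defined.
  (assert (! (fp.lt x ((_ to_fp 8 24) roundNearestTiesToEven 0.0)) :named A))
  (assert (! (fp.gt x ((_ to_fp 8 24) roundNearestTiesToEven 1.0)) :named B))
  (check-sat)               ; unsat
  (get-unsat-core)          ; dropped for SMT-COMP; would return (A B)
  ; (get-value (x))         ; dropped; only meaningful after a sat result
  ; (get-interpolants A B)  ; dropped; solver extension, not standard SMT-LIB
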
Benchmark

Size               6683
Compressed Size    1625
License            Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category           industrial
First Occurrence   2020-07-06
Generated By
Generated On
Generator
Dolmen OK          1
strict Dolmen OK   1
check-sat calls    1
Query 1

Status                        unknown
Inferred Status               sat
Size                          6675
Compressed Size               1619
Max. Term Depth               9
Asserts                       25
Declared Functions            0
Declared Constants            24
Declared Sorts                0
Defined Functions             0
Defined Recursive Functions   0
Defined Sorts                 0
Constants                     0
Declared Datatypes            0

Symbols

Symbol                   Count
false                    1
ite                      1
not                      3
and                      1
=                        23
let                      9
/                        1
fp.add                   8
fp.sub                   6
fp.mul                   22
fp.div                   14
fp.neg                   1
fp.lt                    2
fp.gt                    1
roundNearestTiesToEven   1
to_fp                    17
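
The counts above appear to be per-symbol occurrence counts within the query.
For orientation, here is a hypothetical fragment in the same logic
(QF_BVFPLRA) using a few of the listed symbols; the constants x and y are
invented for illustration and do not come from the benchmark:

  (set-logic QF_BVFPLRA)
  (set-info :status unknown)
  (declare-const y Real)
  (declare-const x (_ FloatingPoint 8 24))   ; 32-bit IEEE float
  ; Round a real into a float, then compare a rounded product against a
  ; rounded constant, mirroring the to_fp / fp.mul / fp.lt entries above.
  (assert (= x ((_ to_fp 8 24) roundNearestTiesToEven y)))
  (assert (fp.lt (fp.mul roundNearestTiesToEven x x)
                 ((_ to_fp 8 24) roundNearestTiesToEven 2.0)))
  (check-sat)   ; the query's single check-sat call
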

Evaluations

Evaluation      Rating       Solver     Variant                                      Result       Wallclock (s)   CPU Time (s)
SMT-COMP 2020   0.25 (3/4)   COLIBRI    COLIBRI 20.5.25_default                      sat ✅       12.48600        12.48140
SMT-COMP 2020   0.25 (3/4)   CVC4       CVC4-sq-final_default                        sat ✅       5.19083         5.19125
SMT-COMP 2020   0.25 (3/4)   MathSAT    MathSAT5_default.sh                          sat ✅       24.46430        24.46390
SMT-COMP 2021   0.25 (3/4)   COLIBRI    COLIBRI_21_06_23_default                     sat ✅       19.17280        19.16520
SMT-COMP 2021   0.25 (3/4)   COLIBRI    COLIBRI 20.5.25_default                      sat ✅       12.31710        12.31470
SMT-COMP 2021   0.25 (3/4)   COLIBRI    COLIBRI_21_05_28_default                     sat ✅       21.26170        21.26060
SMT-COMP 2021   0.25 (3/4)   CVC4       CVC4-sq-final_default                        sat ✅       5.42206         5.42175
SMT-COMP 2021   0.25 (3/4)   MathSAT    mathsat-5.6.6_default                        sat ✅       22.77370        22.77420
SMT-COMP 2022   0.20 (4/5)   Bitwuzla   Bitwuzla-wrapped_default                     sat ✅       12.69840        12.69810
SMT-COMP 2022   0.20 (4/5)   COLIBRI    COLIBRI 22_06_18_default                     unknown ❌   1202.09000      1201.99000
SMT-COMP 2022   0.20 (4/5)   cvc5       cvc5_default                                 sat ✅       55.45260        55.44500
SMT-COMP 2022   0.20 (4/5)   cvc5       cvc5-default-2022-07-02-b15e116-wrapped_sq   sat ✅       185.62400       185.62500
SMT-COMP 2022   0.20 (4/5)   MathSAT    MathSAT-5.6.8_default                        sat ✅       27.13600        27.13430
SMT-COMP 2022   0.20 (4/5)   Z3         z3-4.8.17_default                            sat ✅       27.37250        27.37210
SMT-COMP 2023   0.33 (2/3)   Bitwuzla   Bitwuzla-fixed_default                       sat ✅       20.65360        20.65240
SMT-COMP 2023   0.33 (2/3)   COLIBRI    COLIBRI 2023_05_10_default                   unknown ❌   1200.02000      1199.89000
SMT-COMP 2023   0.33 (2/3)   cvc5       cvc5-default-2023-05-16-ea045f305_sq         sat ✅       435.15400       435.05200
SMT-COMP 2024                Bitwuzla   Bitwuzla                                     sat ✅       25.07934        24.96099
SMT-COMP 2024                COLIBRI    COLIBRI                                      sat ✅       1064.42470      1064.32562
SMT-COMP 2024                cvc5       cvc5                                         sat ✅       10.81600        10.71596
SMT-COMP 2025   0.25 (3/4)   Bitwuzla   Bitwuzla                                     sat ✅       18.14000        18.00200
SMT-COMP 2025   0.25 (3/4)   COLIBRI    COLIBRI                                      sat ✅       4.71045         4.59090
SMT-COMP 2025   0.25 (3/4)   Colibri2   colibri2                                     unknown ❌   2.64671         2.52638
SMT-COMP 2025   0.25 (3/4)   cvc5       cvc5                                         sat ✅       28.30860        28.18580
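
A note on reading the table (inferred from the numbers, not stated on this
page): the parenthesized fraction next to each rating appears to be the share
of solvers that solved the query that year, with the rating as its complement,
e.g. 0.20 = 1 - 4/5 for SMT-COMP 2022; no rating is shown for SMT-COMP 2024.
The COLIBRI entries near 1200 s in 2022 and 2023 coincide with a 20-minute
wallclock limit, suggesting the unknown results there are timeouts.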