Benchmark

non-incremental/QF_BVFPLRA/20190429-UltimateAutomizerSvcomp2019/square_5_true-unreach-call_true-termination.i_AllErrorsAtOnce_Iteration1_TraceCheck_0.smt2

|
Generated by the tool Ultimate Automizer [1,2] which implements
an automata theoretic approach [3] to software verification.

This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer. In order to satisfy the restrictions of
SMT-COMP, we had to drop, e.g., the commands for getting
values (resp. models), unsatisfiable cores, and interpolants.

2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)

[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
     Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
     Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
     and the Search for Perfect Interpolants - (Competition Contribution).
     TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
     Checking for People Who Love Automata. CAV 2013: 36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
     TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
|
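
The note above says that, for the SMT-COMP version of the script, the commands for retrieving values (resp. models), unsatisfiable cores, and interpolants were dropped. For orientation, the standard SMT-LIB forms of the first three are sketched below; the term names x and y are placeholders, and interpolant retrieval is a non-standard, solver-specific command that is not shown. This is an illustrative sketch, not an excerpt from the original Ultimate Automizer output.

  (get-value (x y))  ; after a sat result: values of selected terms (needs (set-option :produce-models true))
  (get-model)        ; after a sat result: a complete model (also needs :produce-models)
  (get-unsat-core)   ; after an unsat result: an unsatisfiable core (needs (set-option :produce-unsat-cores true))
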
Benchmark
  Size:              3364
  Compressed Size:   1240
  License:           Creative Commons Attribution 4.0 International (CC-BY-4.0)
  Category:          industrial
  First Occurrence:  2020-07-06
  Generated By:
  Generated On:
  Generator:
  Dolmen OK:         1
  strict Dolmen OK:  1
  check-sat calls:   1
Query 1
  Status:                       unsat
  Inferred Status:              unsat
  Size:                         3356
  Compressed Size:              1238
  Max. Term Depth:              8
  Asserts:                      9
  Declared Functions:           0
  Declared Constants:           8
  Declared Sorts:               0
  Defined Functions:            0
  Defined Recursive Functions:  0
  Defined Sorts:                0
  Constants:                    0
  Declared Datatypes:           0

Symbols (with occurrence counts)

  false   1    ite     1     not    4    or      1
  and     1    =       7     /      5    fp.add  2
  fp.sub  2    fp.mul  10    fp.lt  2    fp.geq  2
  roundNearestTiesToEven  1   to_fp  7
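
The symbol counts show that the single query is dominated by SMT-LIB floating-point arithmetic: the fp.* operators, the roundNearestTiesToEven rounding mode, and conversions via to_fp. As an illustrative sketch only (not an excerpt from this benchmark), a small QF_BVFPLRA script exercising the same operators could look as follows; the constants x and y and the concrete constraints are invented for the example.

  (set-logic QF_BVFPLRA)
  ; two IEEE 754 double-precision unknowns (11-bit exponent, 53-bit significand)
  (declare-const x (_ FloatingPoint 11 53))
  (declare-const y (_ FloatingPoint 11 53))
  ; convert the real literal 1.0 to a double, rounding to nearest, ties to even
  (assert (fp.geq x ((_ to_fp 11 53) roundNearestTiesToEven 1.0)))
  ; require x * x < x + y, with both operations rounded to nearest
  (assert (fp.lt (fp.mul roundNearestTiesToEven x x)
                 (fp.add roundNearestTiesToEven x y)))
  (check-sat)

Like the benchmark, such a script ends in a single (check-sat) call; for this benchmark, both the recorded and the inferred status of that call are unsat.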

Evaluations

Evaluation     Rating      Solver    Variant                                     Result      Wallclock (s)  CPU Time (s)
SMT-COMP 2020  0.25 (3/4)  COLIBRI   COLIBRI 20.5.25_default                     unsat ✅    0.49601        0.49531
SMT-COMP 2020  0.25 (3/4)  CVC4      CVC4-sq-final_default                       unsat ✅    867.97100      867.85900
SMT-COMP 2020  0.25 (3/4)  MathSAT   MathSAT5_default.sh                         unsat ✅    664.37700      664.24600
SMT-COMP 2021  0.25 (3/4)  COLIBRI   COLIBRI_21_06_23_default                    unsat ✅    2.91697        2.90648
SMT-COMP 2021  0.25 (3/4)  COLIBRI   COLIBRI 20.5.25_default                     unsat ✅    0.50407        0.50461
SMT-COMP 2021  0.25 (3/4)  COLIBRI   COLIBRI_21_05_28_default                    unsat ✅    2.93261        2.93073
SMT-COMP 2021  0.25 (3/4)  CVC4      CVC4-sq-final_default                       unsat ✅    870.89000      870.83700
SMT-COMP 2021  0.25 (3/4)  MathSAT   mathsat-5.6.6_default                       unsat ✅    519.14600      518.66100
SMT-COMP 2022  0.20 (4/5)  Bitwuzla  Bitwuzla-wrapped_default                    unknown ❌  1200.03000     1200.02000
SMT-COMP 2022  0.20 (4/5)  COLIBRI   COLIBRI 22_06_18_default                    unsat ✅    1.17680        1.17687
SMT-COMP 2022  0.20 (4/5)  cvc5      cvc5_default                                unsat ✅    367.91700      367.88300
SMT-COMP 2022  0.20 (4/5)  cvc5      cvc5-default-2022-07-02-b15e116-wrapped_sq  unsat ✅    468.93500      468.62600
SMT-COMP 2022  0.20 (4/5)  MathSAT   MathSAT-5.6.8_default                       unsat ✅    687.66700      687.48700
SMT-COMP 2022  0.20 (4/5)  Z3        z3-4.8.17_default                           unsat ✅    996.06900      995.98800
SMT-COMP 2023  0.33 (2/3)  Bitwuzla  Bitwuzla-fixed_default                      unknown ❌  1200.08000     1199.99000
SMT-COMP 2023  0.33 (2/3)  COLIBRI   COLIBRI 2023_05_10_default                  unsat ✅    1.46938        1.46791
SMT-COMP 2023  0.33 (2/3)  cvc5      cvc5-default-2023-05-16-ea045f305_sq        unsat ✅    611.45900      611.44700
SMT-COMP 2024              Bitwuzla  Bitwuzla                                    unsat ✅    165.57028      165.46308
SMT-COMP 2024              COLIBRI   COLIBRI                                     unsat ✅    1.25616        1.15629
SMT-COMP 2024              cvc5      cvc5                                        unsat ✅    387.93986      387.73036
SMT-COMP 2025              Bitwuzla  Bitwuzla                                    unsat ✅    568.46578      568.17972
SMT-COMP 2025              COLIBRI   COLIBRI                                     unsat ✅    0.90598        0.78491
SMT-COMP 2025              Colibri2  colibri2                                    unsat ✅    1.17977        1.05262
SMT-COMP 2025              cvc5      cvc5                                        unsat ✅    527.17932      526.98575