Benchmark

non-incremental/QF_ANIA/20190429-UltimateAutomizerSvcomp2019/s3_clnt.blast.01_false-unreach-call.i.cil.c_AllErrorsAtOnce_Iteration11_TraceCheck_0.smt2

|
Generated by the tool Ultimate Automizer [1,2] which implements
an automata theoretic approach [3] to software verification.

This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer. In order to satisfy the restrictions of
SMT-COMP, we had to drop, e.g., the commands for getting
values (resp. models), unsatisfiable cores, and interpolants.

2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)

[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
     Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
     Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
     and the Search for Perfect Interpolants - (Competition Contribution).
     TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
     Checking for People Who Love Automata. CAV 2013: 36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
     TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
|
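
For orientation, here is a minimal, hand-written sketch of the overall
shape of such a trace-check script. It is not an excerpt of the benchmark;
all names and assertions are invented for illustration. The commented-out
commands at the end stand for the kinds of queries (values/models,
unsatisfiable cores, interpolants) that the description above says were
dropped for SMT-COMP.

    (set-logic QF_ANIA)
    (set-info :status unknown)
    ; Hypothetical constants standing for program variables on the error trace.
    (declare-const mem (Array Int Int))
    (declare-const p Int)
    (declare-const n Int)
    ; One feasibility condition per trace step (the real file has 669 asserts).
    (assert (<= 0 p))
    (assert (= (select (store mem p n) p) n))
    (assert (>= n (mod p 4)))
    (check-sat)
    ; Dropped for SMT-COMP (not part of this script):
    ; (get-value (p n))
    ; (get-unsat-core)
    ; (get-interpolants ...)  ; solver-specific extension, e.g. in SMTInterpol
    (exit)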
Benchmark

Size: 98725
Compressed Size: 9295
License: Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category: industrial
First Occurrence: 2020-07-06
Generated By:
Generated On:
Generator:
Dolmen OK: 1
strict Dolmen OK: 1
check-sat calls: 1

Query 1

Status: unknown
Inferred Status: sat
Size: 98717
Compressed Size: 9298
Max. Term Depth: 13
Asserts: 669
Declared Functions: 0
Declared Constants: 294
Declared Sorts: 0
Defined Functions: 0
Defined Recursive Functions: 0
Defined Sorts: 0
Constants: 0
Declared Datatypes: 0

Symbols

true: 11       false: 3      not: 90       and: 1
=: 174         let: 66       mod: 7        +: 418
-: 4           <: 5          <=: 260       >=: 227
select: 841    store: 173
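
These counts fit the script's QF_ANIA logic: select and store are the
array operations, while mod, +, and the comparison predicates come from
integer arithmetic. As a hedged illustration (the assertions below are
invented, not taken from the file), recall that SMT-LIB's mod is the
Euclidean remainder, so (mod m n) always lies in [0, |n|) for n != 0:

    (set-logic QF_ANIA)
    (declare-const a (Array Int Int))
    (declare-const i Int)
    ; Euclidean remainder: (mod -7 4) = 1, since -7 = 4 * (-2) + 1.
    (assert (= (mod (- 7) 4) 1))
    ; Read-over-write: storing (+ i 1) at index i and reading it back.
    (assert (= (select (store a i (+ i 1)) i) (+ i 1)))
    (check-sat)   ; sat
    (exit)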

Evaluations

Evaluation     Rating      Solver       Variant                                      Result      Wallclock (s)  CPU Time (s)
SMT-COMP 2020  0.50 (2/4)  Alt-Ergo     Alt-Ergo-SMTComp-2020_default                unknown ❌      1.46698       4.95364
                           CVC4         CVC4-sq-final_default                        sat ✅          7.73367       7.73140
                           CVC4         CVC4-2019-06-03-d350fe1-wrapped-sq_default   sat ✅         29.44280      29.42510
                           MathSAT      MathSAT5_default.sh                          unknown ❌      0.03125       0.03117
                           Z3           z3-4.8.8_default                             sat ✅        101.17500     101.14400
SMT-COMP 2021  0.25 (3/4)  CVC4         CVC4-2019-06-03-d350fe1-wrapped-sq_default   sat ✅        132.19800     132.16400
                           CVC4         CVC4-sq-final_default                        sat ✅         15.73730      15.73590
                           cvc5         cvc5-fixed_default                           sat ✅        514.00200     513.74700
                           MathSAT      mathsat-5.6.6_default                        unknown ❌      0.02870       0.02865
                           Z3           z3-4.8.11_default                            sat ✅        163.09000     163.06500
SMT-COMP 2022  0.50 (2/4)  CVC4         CVC4-sq-final_default                        sat ✅         95.85990      95.84760
                           cvc5         cvc5-default-2022-07-02-b15e116-wrapped_sq   unknown ❌   1200.08000    1199.93000
                           MathSAT      MathSAT-5.6.8_default                        unknown ❌      0.02832       0.02827
                           Z3           z3-4.8.17_default                            sat ✅         61.88830      61.88290
SMT-COMP 2023              CVC4         CVC4-sq-final_default                        sat ✅         53.75790      53.75120
                           cvc5         cvc5-default-2023-05-16-ea045f305_sq         sat ✅         48.52840      48.52660
                           SMTInterpol  smtinterpol-2.5-1272-g2d6d356c_default       sat ✅          1.93695       5.70101
                           Yices2       Yices 2 for SMTCOMP 2023_default             sat ✅          6.27550       6.27288
SMT-COMP 2024              cvc5         cvc5                                         sat ✅        795.61681     795.49081
                           SMTInterpol  SMTInterpol                                  sat ✅          1.96332       5.57830
                           Yices2       Yices2                                       sat ✅          3.33377       3.23164
SMT-COMP 2025              cvc5         cvc5                                         sat ✅         49.58024      49.44736
                           SMTInterpol  SMTInterpol                                  sat ✅          1.33545       3.61156
                           Yices2       Yices2                                       sat ✅          2.98507       2.86005