Benchmark
non-incremental/QF_BVFPLRA/20190429-UltimateAutomizerSvcomp2019/sine_1_false-unreach-call_true-termination.i_AllErrorsAtOnce_Iteration1_TraceCheck_0.smt2
|
Generated by the tool Ultimate Automizer [1,2], which implements
an automata-theoretic approach [3] to software verification.
This SMT script belongs to a set of SMT scripts that was generated by
applying Ultimate Automizer to benchmarks [4] from SV-COMP 2019 [5,6].
This script might _not_ contain all SMT commands that are used by
Ultimate Automizer. In order to satisfy the restrictions of
SMT-COMP, we have to drop, e.g., the commands for getting
values (resp. models), unsatisfiable cores, and interpolants.
2019-04-27, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)
[1] https://ultimate.informatik.uni-freiburg.de/automizer/
[2] Matthias Heizmann, Yu-Fang Chen, Daniel Dietsch, Marius Greitschus,
Jochen Hoenicke, Yong Li, Alexander Nutz, Betim Musa, Christian
Schilling, Tanja Schindler, Andreas Podelski: Ultimate Automizer
and the Search for Perfect Interpolants - (Competition Contribution).
TACAS (2) 2018: 447-451
[3] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
Checking for People Who Love Automata. CAV 2013: 36-52
[4] https://github.com/sosy-lab/sv-benchmarks
[5] Dirk Beyer: Automatic Verification of C and Java Programs: SV-COMP 2019.
TACAS (3) 2019: 133-155
[6] https://sv-comp.sosy-lab.org/2019/
|
| Benchmark | |
| --- | --- |
| Size | 3341 |
| Compressed Size | 1292 |
| License | Creative Commons Attribution 4.0 International (CC-BY-4.0) |
| Category | industrial |
| First Occurrence | 2020-07-06 |
| Generated By | — |
| Generated On | — |
| Generator | — |
| Dolmen OK | 1 |
| strict Dolmen OK | 1 |
| check-sat calls | 1 |
| Status | sat |
| Inferred Status | sat |
| Size | 3333 |
| Compressed Size | 1291 |
| Max. Term Depth | 9 |
| Asserts | 9 |
| Declared Functions | 0 |
| Declared Constants | 8 |
| Declared Sorts | 0 |
| Defined Functions | 0 |
| Defined Recursive Functions | 0 |
| Defined Sorts | 0 |
| Constants | 0 |
| Declared Datatypes | 0 |
Symbols

| Symbol | Count |
| --- | --- |
| false | 1 |
| ite | 1 |
| not | 4 |
| or | 1 |
| and | 1 |
| = | 7 |
| let | 4 |
| / | 2 |
| fp.add | 2 |
| fp.sub | 1 |
| fp.mul | 6 |
| fp.div | 3 |
| fp.neg | 2 |
| fp.leq | 1 |
| fp.lt | 1 |
| fp.geq | 1 |
| fp.gt | 1 |
| roundNearestTiesToEven | 1 |
| to_fp | 6 |
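The counts above list only the SMT-LIB operators that actually occur in the script. As a reading aid, the following is a minimal QF_BVFPLRA sketch, not an excerpt of the benchmark, that exercises the same vocabulary: floating-point constants, real-to-float conversion via `to_fp` under `roundNearestTiesToEven`, and `fp.*` arithmetic and comparisons. The constants `x` and `y` are hypothetical and chosen only for illustration.

```smt2
; Minimal sketch of the operator vocabulary; not an excerpt of the benchmark.
(set-logic QF_BVFPLRA)
(set-info :status sat)
(declare-const x (_ FloatingPoint 11 53))
(declare-const y (_ FloatingPoint 11 53))
; y = x * x, rounded to nearest, ties to even
(assert (= y (fp.mul roundNearestTiesToEven x x)))
; y <= 1/2, with the real constant converted to a double via to_fp
(assert (fp.leq y ((_ to_fp 11 53) roundNearestTiesToEven (/ 1.0 2.0))))
; x > 0.0
(assert (fp.gt x ((_ to_fp 11 53) roundNearestTiesToEven 0.0)))
(check-sat)
(exit)
```

Any of the solvers listed in the evaluations below, invoked directly on an `.smt2` file (e.g., `z3 sketch.smt2` or `cvc5 sketch.smt2`), should report `sat` for this sketch, mirroring the expected status of the benchmark itself.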
Evaluations

| Evaluation | Rating | Solver | Variant | Result | Wallclock (s) | CPU Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| SMT-COMP 2020 | 0.25 (3/4) | COLIBRI | COLIBRI 20.5.25_default | sat ✅ | 1.36157 | 1.36038 |
| SMT-COMP 2020 | | CVC4 | CVC4-sq-final_default | sat ✅ | 33.17500 | 33.17360 |
| SMT-COMP 2020 | | MathSAT | MathSAT5_default.sh | sat ✅ | 2.50980 | 2.50909 |
| SMT-COMP 2021 | 0.25 (3/4) | COLIBRI | COLIBRI_21_06_23_default | sat ✅ | 0.64584 | 0.64627 |
| SMT-COMP 2021 | | COLIBRI | COLIBRI 20.5.25_default | sat ✅ | 1.17291 | 1.17346 |
| SMT-COMP 2021 | | COLIBRI | COLIBRI_21_05_28_default | sat ✅ | 0.65069 | 0.65109 |
| SMT-COMP 2021 | | CVC4 | CVC4-sq-final_default | sat ✅ | 87.81660 | 87.80730 |
| SMT-COMP 2021 | | MathSAT | mathsat-5.6.6_default | sat ✅ | 177.51200 | 177.49200 |
| SMT-COMP 2022 | | Bitwuzla | Bitwuzla-wrapped_default | sat ✅ | 1.09379 | 1.09360 |
| SMT-COMP 2022 | | COLIBRI | COLIBRI 22_06_18_default | sat ✅ | 0.46081 | 0.46090 |
| SMT-COMP 2022 | | cvc5 | cvc5_default | sat ✅ | 14.51840 | 14.51600 |
| SMT-COMP 2022 | | cvc5 | cvc5-default-2022-07-02-b15e116-wrapped_sq | sat ✅ | 29.23490 | 29.23200 |
| SMT-COMP 2022 | | MathSAT | MathSAT-5.6.8_default | sat ✅ | 3.32519 | 3.32494 |
| SMT-COMP 2022 | | Z3 | z3-4.8.17_default | sat ✅ | 6.91464 | 6.91551 |
| SMT-COMP 2023 | | Bitwuzla | Bitwuzla-fixed_default | sat ✅ | 10.51740 | 10.51530 |
| SMT-COMP 2023 | | COLIBRI | COLIBRI 2023_05_10_default | sat ✅ | 0.50715 | 0.50758 |
| SMT-COMP 2023 | | cvc5 | cvc5-default-2023-05-16-ea045f305_sq | sat ✅ | 27.09690 | 27.08980 |
| SMT-COMP 2024 | | Bitwuzla | Bitwuzla | sat ✅ | 1.53003 | 1.42974 |
| SMT-COMP 2024 | | COLIBRI | COLIBRI | sat ✅ | 0.60005 | 0.49874 |
| SMT-COMP 2024 | | cvc5 | cvc5 | sat ✅ | 3.83117 | 3.73131 |
| SMT-COMP 2025 | | Bitwuzla | Bitwuzla | sat ✅ | 1.27400 | 1.15000 |
| SMT-COMP 2025 | | COLIBRI | COLIBRI | sat ✅ | 0.74041 | 0.60782 |
| SMT-COMP 2025 | | Colibri2 | colibri2 | sat ✅ | 1.55796 | 1.43929 |
| SMT-COMP 2025 | | cvc5 | cvc5 | sat ✅ | 6.28008 | 6.15653 |