Benchmark

non-incremental/QF_NRA/LassoRanker/SV-COMP/gcd1_true-termination.c_Iteration3_Loop_4-pieceTemplate.smt2

SMT script generated by Ultimate LassoRanker [1]
Ultimate LassoRanker is a tool that synthesizes ranking functions for 
linear lasso programs and implements the techniques presented in [2] and [3].
To generate these SMT scripts, Ultimate LassoRanker was used as a
subprocedure of the termination analyzer Ultimate BuchiAutomizer [4], which
implements the techniques presented in [5].

This SMT script belongs to a set of SMT scripts that was generated by applying
revision 11505 of BuchiAutomizer to
- the benchmarks from the demonstration category on termination of SV-COMP
2014 [6], available at [7],
- the benchmarks from [8], which are available at [9], and
- benchmarks from the repository of Ultimate LassoRanker.
This set contains only those SMT scripts that we considered difficult because
LassoRanker ran into a timeout after 10 seconds on them.

2014-05-03, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)


[1] http://ultimate.informatik.uni-freiburg.de/LassoRanker/
[2] Jan Leike and Matthias Heizmann. Ranking Templates for Linear Loops. In 
TACAS 2014.
[3] Matthias Heizmann, Jochen Hoenicke, Jan Leike and Andreas Podelski. Linear 
Ranking for Linear Lasso Programs. In ATVA 2013.
[4] http://ultimate.informatik.uni-freiburg.de/BuchiAutomizer/
[5] Matthias Heizmann, Jochen Hoenicke and Andreas Podelski. Termination
Analysis by Learning Terminating Programs. In CAV 2014.
[6] Dirk Beyer: Status Report on Software Verification - (Competition Summary 
SV-COMP 2014). TACAS 2014
[7] https://svn.sosy-lab.org/software/sv-benchmarks/trunk/c/termination-crafted/
[8] Marc Brockschmidt, Byron Cook, Carsten Fuhs: Better Termination Proving 
through Cooperation. CAV 2013:413-429
[9] http://verify.rwth-aachen.de/brockschmidt/Cooperating-T2/
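
Why these scripts land in nonlinear real arithmetic: the ranking-template
approach of [2] and [3] eliminates the universally quantified correctness
conditions of a ranking function with Farkas' lemma (respectively Motzkin's
transposition theorem). The Farkas multipliers are themselves unknowns, so
they are multiplied with the unknown template and supporting-invariant
coefficients, which produces QF_NRA constraints. The toy script below only
sketches this shape; it is not an excerpt of the benchmark, all identifiers
are hypothetical, and side conditions (e.g. feasibility of the premise) are
omitted.

(set-logic QF_NRA)
; unknowns of the synthesis problem (hypothetical names)
(declare-fun c_x () Real)  ; coefficient of program variable x in the ranking template
(declare-fun c_0 () Real)  ; constant part of the ranking template
(declare-fun s_x () Real)  ; coefficient of x in a supporting invariant
(declare-fun s_0 () Real)  ; constant part of the supporting invariant
(declare-fun lam () Real)  ; Farkas multiplier
; Farkas elimination of "forall x: s_x*x + s_0 >= 0  implies  c_x*x + c_0 >= 0"
(assert (>= lam 0))                  ; multiplier must be non-negative
(assert (= c_x (* lam s_x)))         ; coefficients of x must cancel: a product of two unknowns
(assert (>= (- c_0 (* lam s_0)) 0))  ; constant part remains non-negative
(check-sat)
(get-model)

Roughly speaking, a model of such a script assigns values to the template
coefficients and thus yields a ranking-function candidate, which matches the
sat status recorded for this query below.
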
Benchmark

Size: 83956
Compressed Size: 5995
License: Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category: industrial
First Occurrence: 2014-07-21
Dolmen OK: 1
strict Dolmen OK: 1
check-sat calls: 1

Query 1

Status: sat
Inferred Status: sat
Size: 83948
Compressed Size: 6010
Max. Term Depth: 10
Asserts: 64
Declared Functions: 0
Declared Constants: 551
Declared Sorts: 0
Defined Functions: 0
Defined Recursive Functions: 0
Defined Sorts: 0
Constants: 0
Declared Datatypes: 0

Symbols

or: 161, and: 63, =: 469, let: 109,
+: 632, -: 470, *: 1056, <: 63,
<=: 63, >: 64, >=: 497
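
For orientation, the statistics above (one check-sat call, 64 asserts, 551
declared constants, no declared functions or sorts, and heavy use of let,
and, or and the nonlinear * operator) correspond to a script of roughly the
following shape. This is an illustration under those assumptions, not an
excerpt of the benchmark; all identifiers are invented, and the real script
is far larger (about 84 kB).

(set-logic QF_NRA)
(set-info :status sat)
; the real script declares roughly 550 further constants (presumably
; Real-sorted, given the QF_NRA logic)
(declare-fun lambda_0 () Real)
(declare-fun coeff_0 () Real)
(declare-fun const_0 () Real)
; its 64 asserts combine let-bindings with and/or and nonlinear products
(assert (>= lambda_0 0))
(assert (let ((t (+ (* lambda_0 coeff_0) const_0)))
          (or (<= t 0) (and (> coeff_0 0) (>= t 1)))))
(check-sat)
(exit)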

Evaluations

Solver | Variant | Result | Wallclock (s) | CPU time (s)

SMT-COMP 2014, rating 0.75 (1/4):
CVC3 | CVC3 default | unknown ❌ | 7.01207 | 6.99694
CVC4 | CVC4 f7118b2 default | unknown ❌ | 0.14966 | 0.13898
raSAT | raSAT-main-track-final default.sh | unknown ❌ | 0.01992 | 0.00700
Z3 | Z3-4.3.2.a054b099c1d6-x64-debian-6.0.6-SMT-COMP-2014 default | sat ✅ | 1.26624 | 1.25881

SMT-COMP 2015, rating 0.67 (2/6):
CVC3 | CVC3 default | unknown ❌ | 2402.95000 | 2403.58000
CVC4 | CVC4-master-2015-06-15-9b32405-main default | unknown ❌ | 0.14942 | 0.14698
CVC4 | CVC4-experimental-2015-06-15-ff5745a-main default | unknown ❌ | 0.15026 | 0.14798
raSAT | raSAT default.sh | unknown ❌ | 2400.01000 | 2400.99000
SMT-RAT | SMT-RAT-final default | unknown ❌ | 2400.01000 | 2401.01000
Yices2 | Yices2-NL default | sat ✅ | 456.23300 | 456.31700
Z3 | z3 4.4.0 default | sat ✅ | 0.39552 | 0.39594

SMT-COMP 2016, rating 0.60 (2/5):
CVC4 | CVC4-master-2016-05-27-cfef263-main default | unknown ❌ | 0.14706 | 0.14730
raSAT | raSAT 0.3 default.sh | unknown ❌ | 2400.02000 | 2401.45000
raSAT | raSAT 0.4 exp - final default.py | unknown ❌ | 2400.03000 | 4815.66000
SMT-RAT | SMT-RAT default | unknown ❌ | 2400.06000 | 2401.43000
Yices2 | Yices-2.4.2 default | sat ✅ | 11.43520 | 11.44200
Z3 | z3-4.4.1 default | sat ✅ | 0.37773 | 0.37924

SMT-COMP 2017, rating 0.40 (3/5):
CVC4 | CVC4-smtcomp2017-main default | sat ✅ | 0.33689 | 0.33509
SMT-RAT | SMTRAT-comp2017_2 default | unknown ❌ | 600.07200 | 600.05300
veriT | veriT+raSAT+Redlog default | unknown ❌ | 600.11500 | 735.72000
Yices2 | Yices2-Main default | sat ✅ | 8.41077 | 8.41000
Z3 | z3-4.5.0 default | sat ✅ | 0.41713 | 0.41596

SMT-COMP 2018, rating 0.40 (3/5):
CVC4 | master-2018-06-10-b19c840-competition-default_default | sat ✅ | 0.67343 | 0.67370
SMT-RAT | SMTRAT-Rat-final_default | unknown ❌ | 1200.01000 | 1199.89000
SMT-RAT | SMTRAT-MCSAT-final_default | unknown ❌ | 1200.01000 | 1199.92000
veriT | veriT+raSAT+Reduce_default | unknown ❌ | 1200.01000 | 1199.94000
Yices2 | Yices 2.6.0_default | sat ✅ | 5.82889 | 5.82886
Z3 | z3-4.7.1_default | sat ✅ | 0.62531 | 0.62527

SMT-COMP 2020, rating 0.43 (4/7):
CVC4 | CVC4-sq-final_default | sat ✅ | 0.86873 | 0.86540
MathSAT | MathSAT5_default.sh | sat ✅ | 0.63280 | 0.63272
Par4 | Par4-wrapped-sq_default | sat ✅ | 0.74837 | 2.05000
SMT-RAT | smtrat-CDCAC_default | unknown ❌ | 1200.09000 | 1200.01000
SMT-RAT | smtrat-MCSAT_default | unknown ❌ | 1200.11000 | 1200.00000
veriT | veriT+raSAT+Redlog_default | unknown ❌ | 1200.02000 | 1199.96000
Yices2 | Yices 2.6.2 bug fix_default | unknown ❌ | 1200.12000 | 1199.97000
Z3 | z3-4.8.8_default | sat ✅ | 1106.14000 | 1106.04000

SMT-COMP 2021, rating 0.70 (3/10):
MathSAT | mathsat-5.6.6_default | sat ✅ | 0.47219 | 0.47215
Par4 | Par4-wrapped-sq_default | sat ✅ | 0.95247 | 2.74000
SMT-RAT | smtrat-MCSAT_default | unknown ❌ | 1200.02000 | 1199.90000
veriT | veriT+raSAT+Redlog_default | unknown ❌ | 1200.01000 | 1199.93000
Z3 | z3-4.8.11_default | sat ✅ | 2.58119 | 2.58014

SMT-COMP 2022, rating 0.33 (6/9):
cvc5 | cvc5-default-2022-07-02-b15e116-wrapped_sq | sat ✅ | 0.91041 | 0.91086
MathSAT | MathSAT-5.6.8_default | sat ✅ | 0.42674 | 0.42666
NRA-LS | NRA-LS-FINAL_default | sat ✅ | 1.53029 | 1.53012
Par4 | Par4-wrapped-sq_default | sat ✅ | 1.49025 | 2.07000
SMT-RAT | SMT-RAT-MCSAT_default | unknown ❌ | 1200.02000 | 1200.04000
veriT | veriT+raSAT+Redlog_default | unknown ❌ | 1200.07000 | 1199.95000
Yices2 | Yices 2.6.2 for SMTCOMP 2021_default | unknown ❌ | 1200.02000 | 1199.95000
Z3 | z3-4.8.17_default | sat ✅ | 0.65920 | 0.66109
Z3++ | z3++0715_default | sat ✅ | 0.45099 | 0.45105

SMT-COMP 2024, rating 0.20 (4/5):
cvc5 | cvc5 | sat ✅ | 0.64069 | 0.54107
SMTInterpol | SMTInterpol | unknown ❌ | 0.90237 | 2.13847
SMT-RAT | SMT-RAT | sat ✅ | 36.36869 | 36.26716
Yices2 | Yices2 | sat ✅ | 1.07279 | 0.97248
Z3alpha | Z3-alpha | sat ✅ | 0.41395 | 0.31417

SMT-COMP 2025, rating 0.17 (5/6):
cvc5 | cvc5 | sat ✅ | 0.49327 | 0.36905
SMTInterpol | SMTInterpol | unknown ❌ | 0.87637 | 1.92561
SMT-RAT | SMT-RAT | sat ✅ | 41.42492 | 41.30231
Yices2 | Yices2 | sat ✅ | 1.42394 | 1.30199
Z3alpha | Z3-alpha | sat ✅ | 0.91524 | 1.18099
Z3 | Z3-alpha-base | sat ✅ | 0.37034 | 0.24213
Z3 | z3siri-base | sat ✅ | 0.38135 | 0.25358