Benchmark

non-incremental/QF_NIA/LassoRanker/LeeJonesBen-Amram-POPL2001-Ex2_true-termination.c_Iteration1_Loop+nonterminationTemplate_0.smt2

SMT script generated by Ultimate LassoRanker [1].
Ultimate LassoRanker is a tool that analyzes termination and nontermination of
lasso-shaped programs. This script contains the SMT commands that Ultimate 
LassoRanker used while checking if a lasso-shaped program has a geometric 
nontermination argument. (See [2] for a preliminary definition of
geometric nontermination argument.)
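As a hedged illustration of the idea (a toy sketch, not the benchmark's actual program or the certificate format of [2]): a geometric nontermination argument describes an infinite execution of a lasso's loop in closed form as a geometric series, so that the loop guard can be shown to hold at every iteration. The program, its guard, and the closed form below are all hypothetical.

```python
# Hypothetical lasso loop "while x >= 1: x = 2 * x", started at x = 1.
# Its execution is the geometric sequence x_t = x0 * 2**t, which
# satisfies the guard x >= 1 for every t -- the essence of a geometric
# nontermination argument is to certify exactly such a closed form.

def state_after(t, x0=1, ratio=2):
    """Closed-form state of the sketched loop after t iterations."""
    return x0 * ratio ** t

def guard(x):
    """Loop guard of the sketched lasso loop."""
    return x >= 1

# Spot-check the closed form on a finite prefix: every reachable state
# satisfies the guard, so the loop is taken again at every step.
assert all(guard(state_after(t)) for t in range(64))
```

This only spot-checks a prefix; the actual certificate checked by the SMT script establishes the property for all iterations at once.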

This SMT script belongs to a set of SMT scripts that was generated by applying
Ultimate Buchi Automizer [3,4] to benchmarks from SV-COMP 2016 [5,6],
which are available at [7]. Ultimate Buchi Automizer takes omega-traces
(lasso-shaped programs) and uses LassoRanker to check whether the
lasso-shaped program terminates.
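To make the term "lasso-shaped program" concrete: it is a straight-line stem followed by a single loop. The following Python sketch is a hypothetical example of that shape, not the SV-COMP benchmark this script was generated from.

```python
# Hypothetical lasso-shaped program: a straight-line stem followed by a
# single loop (the shape of omega-trace that Buchi Automizer hands to
# LassoRanker). This particular lasso terminates for every integer
# input, because x strictly decreases inside the loop.

def lasso(x):
    # stem: one straight-line update
    x = x + 1
    # loop: runs while the guard x >= 0 holds
    while x >= 0:
        x = x - 2
    return x
```

For this example a termination argument is easy (x decreases by 2 each iteration); LassoRanker automates that reasoning, and, when termination fails, searches for a nontermination argument instead.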

2016-04-30, Matthias Heizmann (heizmann@informatik.uni-freiburg.de)


[1] https://ultimate.informatik.uni-freiburg.de/LassoRanker/
[2] Jan Leike, Matthias Heizmann: Geometric Series as Nontermination
Arguments for Linear Lasso Programs. CoRR abs/1405.4413 (2014)
http://arxiv.org/abs/1405.4413
[3] http://ultimate.informatik.uni-freiburg.de/BuchiAutomizer/
[4] Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Software Model
Checking for People Who Love Automata. CAV 2013:36-52
[5] http://sv-comp.sosy-lab.org/2016/
[6] Dirk Beyer: Reliable and Reproducible Competition Results with BenchExec
and Witnesses (Report on SV-COMP 2016). TACAS 2016: 887-904
[7] https://github.com/dbeyer/sv-benchmarks
Size: 38138
Compressed Size: 2886
License: Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category: industrial
First Occurrence: 2014-07-21
Generated By:
Generated On:
Generator:
Dolmen OK: 1
strict Dolmen OK: 1
check-sat calls: 1
Query 1
Status: sat
Inferred Status: sat
Size: 38130
Compressed Size: 2899
Max. Term Depth: 8
Asserts: 1
Declared Functions: 0
Declared Constants: 30
Declared Sorts: 0
Defined Functions: 0
Defined Recursive Functions: 0
Defined Sorts: 0
Constants: 0
Declared Datatypes: 0

Symbols

true: 1, or: 3, and: 4, =: 4
+: 418, -: 96, *: 1430, >=: 91
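Per-symbol occurrence counts like the ones above can be derived from the script text itself. A minimal sketch using only the Python standard library, with a deliberately naive tokenizer (it ignores string literals and quoted symbols, which is adequate for a quick tally):

```python
import re
from collections import Counter

def symbol_counts(smt2_text,
                  symbols=("true", "or", "and", "=", "+", "-", "*", ">=")):
    """Tally occurrences of the given symbols in an SMT-LIB script.

    Naive tokenization: any maximal run of characters that is neither
    whitespace nor a parenthesis counts as one token.
    """
    tokens = re.findall(r"[^\s()]+", smt2_text)
    counts = Counter(tokens)
    return {s: counts[s] for s in symbols}

example = "(assert (and (>= x 0) (= y (+ x 1)) (or true (>= y 1))))"
# symbol_counts(example) tallies, e.g., two occurrences of >=
```

Running this over the benchmark's single query would reproduce the table above.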

Evaluations

Evaluation       Rating      Solver      Variant                                              Result      Wallclock   CPU Time
SMT-COMP 2016    0.43 (4/7)  AProVE      AProVE NIA 2014 default                              sat ✅       0.62290     1.49571
                             CVC4        CVC4-master-2016-05-27-cfef263-main default          sat ✅       0.66283     0.66350
                             ProB        ProB competition                                     unknown ❌   2400.06000  2401.66000
                             raSAT       raSAT 0.3 default.sh                                 unknown ❌   2400.03000  2401.41000
                             raSAT       raSAT 0.4 exp - final default.py                     unknown ❌   2400.03000  4812.12000
                             SMT-RAT     SMT-RAT default                                      unknown ❌   2400.09000  2401.48000
                             Yices2      Yices-2.4.2 default                                  sat ✅       0.02993     0.02988
                             Z3          z3-4.4.1 default                                     sat ✅       50.47810    50.50620
SMT-COMP 2017    0.20 (4/5)  AProVE      AProVE NIA 2014 default                              sat ✅       0.69586     1.77536
                             CVC4        CVC4-smtcomp2017-main default                        sat ✅       0.04549     0.04428
                             SMT-RAT     SMTRAT-comp2017_2 default                            unknown ❌   600.07200   599.96000
                             Yices2      Yices2-Main default                                  sat ✅       0.67349     0.67275
                             Z3          z3-4.5.0 default                                     sat ✅       99.11030    99.11340
SMT-COMP 2018    0.20 (4/5)  AProVE      AProVE NIA 2014_default                              sat ✅       0.63442     1.51198
                             CVC4        master-2018-06-10-b19c840-competition-default_default sat ✅      0.09945     0.09963
                             SMT-RAT     SMTRAT-Rat-final_default                             unknown ❌   1200.03000  1199.92000
                             Yices2      Yices 2.6.0_default                                  sat ✅       0.05697     0.05691
                             Z3          z3-4.7.1_default                                     sat ✅       62.18730    62.18310
SMT-COMP 2022                cvc5        cvc5-default-2022-07-02-b15e116-wrapped_sq           sat ✅       0.16646     0.16702
                             MathSAT     MathSAT-5.6.8_default                                sat ✅       0.04548     0.04540
                             Par4        Par4-wrapped-sq_default                              sat ✅       0.04134     0.00664
                             Yices2      Yices 2.6.2 for SMTCOMP 2021_default                 sat ✅       0.03231     0.03218
                             Yices-ismt  yices-ismt-0721_default                              sat ✅       0.03845     0.03861
                             Z3          z3-4.8.17_default                                    sat ✅       38.45090    38.45180
                             Z3++        z3++0715_default                                     sat ✅       0.03321     0.03326