Benchmark

non-incremental/UFBVDTLIA/20241211-verus/verismo/tspec__math__bits_pverismo_tspec.math.bits_p.bit64_shl_auto._01.smt2

Benchmarks generated by the Rust verifier Verus (https://verus-lang.github.io/verus/guide/) on the project Verismo (https://www.usenix.org/conference/osdi24/presentation/zhou) 
and processed using Mariposa (https://github.com/secure-foundations/mariposa).
This benchmark was originally run with Z3 using the following options:
    (set-option :auto_config false)
    (set-option :smt.mbqi false)
    (set-option :smt.case_split 3)
    (set-option :smt.qi.eager_threshold 100.0)
    (set-option :smt.delay_units true)
    (set-option :smt.arith.solver 2)
    (set-option :smt.arith.nl false)
    (set-option :pi.enabled false)
    (set-option :rewriter.sort_disjunctions false)
    (set-option :sat.euf true)
    (set-option :tactic.default_tactic sat)
    (set-option :smt.ematching false)
    (set-option :smt.case_split 0)
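For reproduction, Z3 accepts these `(set-option ...)` settings as `key=value` pairs on the command line. A minimal Python sketch that builds the corresponding invocation (the benchmark filename is shortened for illustration; note that `smt.case_split` appears twice in the original option list, so it is kept as an ordered list rather than a dict):

```python
# Ordered list of Z3 options, transcribed from the (set-option ...) lines above.
OPTIONS = [
    ("auto_config", "false"),
    ("smt.mbqi", "false"),
    ("smt.case_split", "3"),
    ("smt.qi.eager_threshold", "100.0"),
    ("smt.delay_units", "true"),
    ("smt.arith.solver", "2"),
    ("smt.arith.nl", "false"),
    ("pi.enabled", "false"),
    ("rewriter.sort_disjunctions", "false"),
    ("sat.euf", "true"),
    ("tactic.default_tactic", "sat"),
    ("smt.ematching", "false"),
    ("smt.case_split", "0"),  # repeated in the original list; Z3 applies the later value
]

def z3_command(benchmark_path):
    """Build the argv for running Z3 on the benchmark with the options above."""
    return ["z3"] + [f"{k}={v}" for k, v in OPTIONS] + [benchmark_path]

# With z3 installed, subprocess.run(z3_command(path)) would execute the solver.
cmd = z3_command("bit64_shl_auto._01.smt2")
```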
Benchmark

Size: 34719
Compressed Size: 5588
License: Creative Commons Attribution 4.0 International (CC-BY-4.0)
Category: industrial
First Occurrence: 2025-08-11
Generated By: Amar Shah
Generated On: 2024-12-11 00:00:00
Generator: Verus
Dolmen OK: 1
strict Dolmen OK: 1
check-sat calls: 1

Query 1

Status: unsat
Inferred Status: None
Size: 34711
Compressed Size: 5602
Max. Term Depth: 72
Asserts: 83
Declared Functions: 74
Declared Constants: 85
Declared Sorts: 9
Defined Functions: 0
Defined Recursive Functions: 0
Defined Sorts: 0
Constants: 0
Declared Datatypes: 2

Symbols

Bool: 5      not: 2      or: 69      and: 98
=>: 235      =: 270      distinct: 1 forall: 63
Int: 30      +: 2        -: 6        <: 75
<=: 17       >: 1        >=: 2       BitVec: 4
concat: 2    bvand: 3    bvult: 66   bvugt: 1
bvshl: 72    Char: 3

Evaluations

Evaluation     Rating      Solver       Variant      Result      Wallclock (s)  CPU Time (s)
SMT-COMP 2025  0.50 (1/2)  cvc5         cvc5         unsat ✅    0.28850        0.16932
                           SMTInterpol  SMTInterpol  unknown ❌  96.69171       363.54999