Getting started

Kani is an open-source verification tool that uses model checking to analyze Rust programs. Kani is particularly useful for verifying unsafe code blocks in Rust, where the "unsafe superpowers" are unchecked by the compiler. Some example properties you can prove with Kani include memory safety properties (e.g., null pointer dereferences, use-after-free, etc.), the absence of certain runtime errors (i.e., index out of bounds, panics), and the absence of some types of unexpected behavior (e.g., arithmetic overflows). Kani can also prove custom properties provided in the form of user-specified assertions. As Kani uses model checking, Kani will either prove the property, disprove the property (with a counterexample), or may run out of resources.

Kani uses proof harnesses to analyze programs. Proof harnesses are similar to test harnesses, especially property-based test harnesses.

Project Status

Kani is currently under active development. Releases are published here. Major changes to Kani are documented in the RFC Book.

There is support for a fair amount of Rust language features, but not all (e.g., concurrency). Please see Limitations for a detailed list of supported features.

Kani releases every two weeks. As part of every release, Kani will synchronize with a recent nightly release of Rust, and so is generally up-to-date with the latest Rust language features.

If you encounter issues when using Kani, we encourage you to report them to us.


Installation

Kani offers an easy installation option on three platforms:

  • x86_64-unknown-linux-gnu (Most Linux distributions)
  • x86_64-apple-darwin (Intel Mac OS)
  • aarch64-apple-darwin (Apple Silicon Mac OS)

Other platforms are either not yet supported or require that you build Kani from source. To use Kani in your GitHub CI workflows, see GitHub CI Action.


Dependencies

The following must already be installed:

  • Python version 3.7 or newer and the package installer pip.
  • Rust 1.58 or newer installed via rustup.
  • ctags is required for Kani's --visualize option to work correctly. Universal ctags is recommended.

Installing the latest version

To install the latest version of Kani, run:

cargo install --locked kani-verifier
cargo kani setup

This will build the kani and cargo-kani binaries and place them in ~/.cargo/bin (in a typical environment). The second step (cargo kani setup) will download the Kani compiler and other necessary dependencies, and place them under ~/.kani/ by default. A custom path can be specified using the KANI_HOME environment variable.

Installing an older version

To install an older version of Kani, run:

cargo install --locked kani-verifier --version <VERSION>
cargo kani setup

Checking your installation

After you've installed Kani, you can try running it by creating a test file:

// File:
fn main() {
    assert!(1 == 2);
}

Run Kani on the single file:

kani <file>

You should get a result like this one:

Check 1: main.assertion.1
         - Status: FAILURE
         - Description: "assertion failed: 1 == 2"

Fix the test and you should see a result like this one:

VERIFICATION:- SUCCESSFUL

Next steps

If you're learning Kani for the first time, you may be interested in our tutorial.

Installing from source code

If you were able to install Kani normally, you do not need to build Kani from source. You probably want to proceed to the Kani tutorial.


Dependencies

In general, the following dependencies are required to build Kani from source.

NOTE: These dependencies may be installed by running the scripts shown below and don't need to be manually installed.

  1. Cargo installed via rustup
  2. CBMC (latest release)
  3. CBMC Viewer (latest release)
  4. Kissat (Release 3.1.1)

Kani has been tested on Ubuntu and macOS platforms.

Install dependencies on Ubuntu

Support is available for Ubuntu 18.04, 20.04, and 22.04. The simplest way to install dependencies (especially if you're using a fresh VM) is to follow our CI scripts:

git clone https://github.com/model-checking/kani.git
cd kani
git submodule update --init
./scripts/setup/ubuntu/install_deps.sh
# If you haven't already (or in a fresh shell):
source $HOME/.cargo/env

Install dependencies on macOS

Support is available for macOS 11. You need to have Homebrew installed already.

git clone https://github.com/model-checking/kani.git
cd kani
git submodule update --init
./scripts/setup/macos/install_deps.sh
# If you haven't already (or in a fresh shell):
source $HOME/.cargo/env

Build and test Kani

Build the Kani package:

cargo build-dev

Then, optionally, run the regression tests:

./scripts/kani-regression.sh

This script has a lot of noisy output, but on a successful run you'll see at the end of the execution:

All Kani regression tests completed successfully.

Adding Kani to your path

To use a locally-built Kani from anywhere, add the Kani scripts to your path:

export PATH=$(pwd)/scripts:$PATH

Next steps

If you're learning Kani for the first time, you may be interested in our tutorial.

GitHub Action

Kani offers a GitHub Action for running Kani in CI. As of now, only Ubuntu 20.04 with x86_64-unknown-linux-gnu is supported for Kani in CI.

Using Kani in your GitHub workflow

Our GitHub Action is available in the GitHub Marketplace.

The following workflow snippet will checkout your repository and run cargo kani on it whenever a push or pull request occurs. Replace <MAJOR>.<MINOR> with the version of Kani you want to run with.

name: Kani CI
on: [push, pull_request]

jobs:
  kani:
    runs-on: ubuntu-20.04
    steps:
      - name: 'Checkout your code.'
        uses: actions/checkout@v3

      - name: 'Run Kani on your code.'
        uses: model-checking/kani-github-action@v<MAJOR>.<MINOR>

This will run cargo kani on the code you checked out.


The action takes the following optional parameters:

  • command: The command to run. Defaults to cargo kani. Most often, you will not need to change this.
  • working-directory: The directory to execute the command in. Defaults to .. Useful if your repository has multiple crates, and you only want to run on one of them.
  • args: The arguments to pass to the given ${command}. See cargo kani --help for a full list of options. Useful options include:
    • --output-format=terse to generate terse output.
    • --tests to run on proofs inside the test module (needed for running Bolero).
    • --workspace to run on all crates within your repository.
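For instance, the parameters above can be combined in the action's with: block (the directory name is a placeholder):

```yaml
      - name: 'Run Kani on your code.'
        uses: model-checking/kani-github-action@v<MAJOR>.<MINOR>
        with:
          working-directory: my-crate   # hypothetical crate directory
          args: --output-format=terse
```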


Common issues and workarounds:

  • Kani takes too long for my CI: Try running Kani on a schedule with the desired frequency.
  • Kani silently crashes with no logs: A few possible reasons:
    • Kani ran out of RAM. GitHub offers up to 7GB of RAM, but Kani may use more. Run locally to confirm.
    • GitHub terminates jobs longer than 6 hours.
    • Otherwise, consider filing an issue here.

Using Kani

At present, Kani can be used in two ways:

  • On a single crate (the kani command)
  • On a package (the cargo kani command)

If you plan to integrate Kani in your projects, the recommended approach is to use cargo kani. If you're already using cargo, this will handle dependencies automatically, and it can be configured (if needed) in Cargo.toml. But kani is useful for small examples/tests.

Usage on a package

Kani is integrated with cargo and can be invoked from a package as follows:

cargo kani [OPTIONS]

This works like cargo test except that it will analyze all proof harnesses instead of running all test harnesses.

Common command line flags

Common to both kani and cargo kani are many command-line flags:

  • --concrete-playback=[print|inplace]: Experimental, --enable-unstable feature that generates a Rust unit test case that plays back a failing proof harness using a concrete counterexample. If used with print, Kani will only print the unit test to stdout. If used with inplace, Kani will automatically add the unit test to the user's source code, next to the proof harness. For more detailed instructions, see the debugging verification failures section.

  • --visualize: Experimental, --enable-unstable feature that generates an HTML report providing traces (i.e., counterexamples) for each failure found by Kani.

  • --tests: Build in "test mode", i.e. with cfg(test) set and dev-dependencies available (when using cargo kani).

  • --harness <name>: By default, Kani checks all proof harnesses it finds. You can switch to checking a single harness using this flag.

  • --default-unwind <n>: Set a default global upper loop unwinding bound for proof harnesses. This can force termination when CBMC tries to unwind loops indefinitely.

Run cargo kani --help to see a complete list of arguments.

Usage on a single crate

For small examples or initial learning, it's very common to run Kani on just one source file. The command line format for invoking Kani directly is the following:

kani <file> [OPTIONS]

This will build and run all proof harnesses found within.

Configuration in Cargo.toml

Users can add a default configuration to the Cargo.toml file for running harnesses in a package. Kani will extract any arguments from these sections:

  • [workspace.metadata.kani.flags]
  • [package.metadata.kani.flags]

For example, if you want to set a default loop unwinding bound (when it's not otherwise specified), you can achieve this by adding the following lines to the package's Cargo.toml:

[package.metadata.kani.flags]
default-unwind = 1

The options here are the same as on the command line (cargo kani --help), and flags (that is, command line arguments that don't take a value) are enabled by setting them to true.
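As an illustrative sketch (the specific values are examples, not recommendations), a valued option and a flag can sit side by side in the same section:

```toml
[package.metadata.kani.flags]
default-unwind = 3
# A command-line flag that takes no value is enabled by setting it to true:
tests = true
```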

Starting with Rust 1.80 (or nightly-2024-05-05), every reachable #[cfg] is automatically checked to ensure it matches the expected config names and values. To avoid warnings on cfg(kani), we recommend adding the check-cfg lint configuration to your crate's Cargo.toml as follows:

[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(kani)'] }

For more information please consult this blog post.

The build process

When Kani builds your code, it does three important things:

  1. It sets cfg(kani) for target crate compilation (including dependencies).
  2. It injects the kani crate.
  3. It sets cfg(kani_host) for host build targets such as any build script and procedural macro crates.

A proof harness (which you can learn more about in the tutorial) is a function annotated with #[kani::proof], much like a test is annotated with #[test]. But you may experience a similar problem using Kani as you would with dev-dependencies: if you try writing #[kani::proof] directly in your code, cargo build will fail because it doesn't know what the kani crate is.

This is why we recommend the same conventions as are used when writing tests in Rust: wrap your proof harnesses in cfg(kani) conditional compilation:

#[cfg(kani)]
mod verification {
    use super::*;

    #[kani::proof]
    pub fn check_something() {
        // ....
    }
}
This will ensure that a normal build of your code will be completely unaffected by anything Kani-related.

This conditional compilation with cfg(kani) (as seen above) is still required for Kani proofs placed under tests/. When this code is built by cargo test, the kani crate is not available, and so it would otherwise cause build failures. (Whereas the use of dev-dependencies under tests/ does not need to be gated with cfg(test) since that code is already only built when testing.)

Verification results

Running Kani on a harness produces an output that includes a set of checks as follows:

Check 1: example.assertion.1
         - Status: <status>
         - Description: <description>
         - Location: <location>

Kani determines the verification result for the harness based on the result (i.e., <status>) of each individual check (also known as "properties"). If all checks are successful then the overall verification result of the harness is successful. Otherwise the verification fails, which indicates issues with the code under verification.

Check results

The result (or Status) of a check in Kani can be one of the following:

  1. SUCCESS: This indicates that the check passed (i.e., the property holds). Note that in some cases, the property may hold vacuously. This can occur because the property is unreachable, or because the harness is over-constrained.


#[kani::proof]
fn success_example() {
    let mut sum = 0;
    for i in 1..4 {
        sum += i;
    }
    assert_eq!(sum, 6);
}

The output from Kani indicates that the assertion holds:

Check 4: success_example.assertion.4
         - Status: SUCCESS
         - Description: "assertion failed: sum == 6"

  2. FAILURE: This indicates that the check failed (i.e., the property doesn't hold). In this case, please see the debugging verification failures section for more help.


#[kani::proof]
fn failure_example() {
    let arr = [1, 2, 3];
    assert_ne!(arr.len(), 3);
}

The assertion doesn't hold as Kani's output indicates:

Check 2: failure_example.assertion.2
         - Status: FAILURE
         - Description: "assertion failed: arr.len() != 3"

  3. UNREACHABLE: This indicates that the check is unreachable (i.e., the property holds vacuously). This occurs when there is no possible execution trace that can reach the check's line of code. This may be because the function that contains the check is unused, or because the harness does not trigger the condition under which the check is invoked. Kani currently checks reachability for the following assertion types:
    1. Assert macros (e.g. assert, assert_eq, etc.)
    2. Arithmetic overflow checks
    3. Negation overflow checks
    4. Index out-of-bounds checks
    5. Divide/remainder-by-zero checks


#[kani::proof]
fn unreachable_example() {
    let x = 5;
    let y = x + 2;
    if x > y {
        assert!(x < 8);
    }
}

The output from Kani indicates that the assertion is unreachable:

Check 2: unreachable_example.assertion.2
         - Status: UNREACHABLE
         - Description: "assertion failed: x < 8"

  4. UNDETERMINED: This indicates that Kani was not able to conclude whether the property holds or not. This can occur when the Rust program contains a construct that is not currently supported by Kani. See Rust feature support for Kani's current support of the Rust language features.


#[kani::proof]
fn undetermined_example() {
    let mut x = 0;
    unsupp(&mut x);
    assert!(x == 0);
}

fn unsupp(x: &mut u8) {
    unsafe {
        std::arch::asm!("nop");
    }
}

The output from Kani indicates that the assertion is undetermined due to the missing support for inline assembly in Kani:

Check 2: undetermined_example.assertion.2
         - Status: UNDETERMINED
         - Description: "assertion failed: x == 0"

Cover property results

Kani provides a kani::cover macro that can be used for checking whether a condition may occur at a certain point in the code.

The result of a cover property can be one of the following:

  1. SATISFIED: This indicates that Kani found an execution that triggers the specified condition.

The following example uses kani::cover to check if it's possible for x and y to hold the values 24 and 72, respectively, after 3 iterations of the while loop, which turns out to be the case.

#[kani::proof]
fn cover_satisfied_example() {
    let mut x: u8 = kani::any();
    let mut y: u8 = kani::any();
    y /= 2;
    let mut i = 0;
    while x != 0 && y != 0 {
        kani::cover!(i > 2 && x == 24 && y == 72);
        if x >= y { x -= y; }
        else { y -= x; }
        i += 1;
    }
}


Check 1: cover_satisfied_example.cover.1
         - Status: SATISFIED
         - Description: "cover condition: i > 2 && x == 24 && y == 72"
         - Location: src/ in function cover_satisfied_example

  2. UNSATISFIABLE: This indicates that Kani proved that the specified condition is impossible.

The following example uses kani::cover to check if it's possible to have a UTF-8 encoded string consisting of 5 bytes that correspond to a string with a single character.

#[kani::proof]
fn cover_unsatisfiable_example() {
    let bytes: [u8; 5] = kani::any();
    let s = std::str::from_utf8(&bytes);
    if let Ok(s) = s {
        kani::cover!(s.chars().count() <= 1);
    }
}

This is not possible, as such a string would contain at least two characters.

Check 46: cover_unsatisfiable_example.cover.1
         - Status: UNSATISFIABLE
         - Description: "cover condition: s.chars().count() <= 1"
         - Location: src/ in function cover_unsatisfiable_example

  3. UNREACHABLE: This indicates that the cover property itself is unreachable (i.e. it is vacuously unsatisfiable).

In contrast to an UNREACHABLE result for assertions, an unreachable (or an unsatisfiable) cover property may indicate an incomplete proof.

Example: In this example, a kani::cover call is unreachable because if the outer if condition holds, then the non-empty range r2 is strictly larger than the non-empty range r1, in which case, the condition in the inner if condition is impossible.

#[kani::proof]
fn cover_unreachable_example() {
    let r1: std::ops::Range<i32> = kani::any()..kani::any();
    let r2: std::ops::Range<i32> = kani::any()..kani::any();
    kani::assume(!r1.is_empty());
    kani::assume(!r2.is_empty());
    if r2.start > r1.end {
        if r2.end < r1.end {
            kani::cover!(r2.contains(&0));
        }
    }
}

Check 3: cover_unreachable_example.cover.1
         - Status: UNREACHABLE
         - Description: "cover condition: r2.contains(&0)"
         - Location: src/ in function cover_unreachable_example

  4. UNDETERMINED: This is the same as the UNDETERMINED result for normal checks (see Check results above).

Verification summary

Kani reports a summary at the end of the verification report, which includes the overall results of all checks, the overall results of cover properties (if the package includes cover properties), and the overall verification result, e.g.:

 ** 0 of 786 failed (41 unreachable)

 ** 0 of 1 cover properties satisfied



Kani tutorial

NOTE: This tutorial expects you to have followed the Kani installation instructions first.

This tutorial will step you through a progression from simple to moderately complex tasks with Kani. It's meant to ensure you can get started, and see at least some simple examples of how typical proofs are structured. It will also teach you the basics of "debugging" proof harnesses, which mainly consists of diagnosing and resolving non-termination issues with the solver.

You may also want to read the Application section to better understand what Kani is and how it can be applied on real code.

First steps

Kani is unlike the testing tools you may already be familiar with. Much of testing is concerned with thinking of new corner cases that need to be covered. With Kani, all the corner cases are covered from the start, and the new concern is narrowing down the scope to something manageable for the verifier.

Consider this first program (which can be found under first-steps-v1):

fn estimate_size(x: u32) -> u32 {
    if x < 256 {
        if x < 128 {
            return 1;
        } else {
            return 3;
        }
    } else if x < 1024 {
        if x > 1022 {
            panic!("Oh no, a failing corner case!");
        } else {
            return 5;
        }
    } else {
        if x < 2048 {
            return 7;
        } else {
            return 9;
        }
    }
}

Think about the test harness you would need to write to test this function. You would need to figure out a whole set of arguments to call the function with that would exercise each branch. You would also need to keep that test harness up-to-date with the code, in case some of the branches change. And if this function was more complicated—for example, if some of the branches depended on global state—the test harness would be even more onerous to write.

We can try to property test a function like this, but if we're naive about it (and consider all possible u32 inputs), then it's unlikely we'll ever find the bug.

    proptest! {
        #[test]
        fn doesnt_crash(x: u32) {
            estimate_size(x);
        }
    }

# cargo test
test tests::doesnt_crash ... ok

There's only 1 in 4 billion inputs that fail, so it's vanishingly unlikely the property test will find it, even with a million samples.

Let's write a Kani proof harness for estimate_size. This is a lot like a test harness, but now we can use kani::any() to represent all possible u32 values:

#[kani::proof]
fn check_estimate_size() {
    let x: u32 = kani::any();
    estimate_size(x);
}

# cargo kani
Runtime decision procedure: 0.00116886s

Check 3: estimate_size.assertion.1
         - Status: FAILURE
         - Description: "Oh no, a failing corner case!"

Kani has immediately found a failure. Notably, we haven't had to write explicit assertions in our proof harness: by default, Kani will find a host of erroneous conditions, including a reachable call to panic or a failing assert. Had Kani run successfully on this harness, that would amount to a mathematical proof that there is no input that could cause a panic in estimate_size.

Getting a trace

By default, Kani only reports failures, not how the failure happened. In this running example, it seems obvious what we're interested in (the value of x that caused the failure) because we just have one unknown input at the start (similar to the property test), but that's kind of a special case. In general, understanding how a failure happened requires exploring a full (potentially large) execution trace.

An execution trace is a record of exactly how a failure can occur. Nondeterminism (like a call to kani::any(), which could return any value) can appear in the middle of its execution. A trace is a record of exactly how execution proceeded, including concrete choices (like 1023) for all of these nondeterministic values.

To get a trace for a failing check in Kani, run:

cargo kani --visualize --enable-unstable

This command runs Kani and generates an HTML report that includes a trace. Open the report with your preferred browser. Under the "Errors" heading, click on the "trace" link to find the trace for this failure.

From this trace report, we can filter through it to find relevant lines. A good rule of thumb is to search for either kani::any() or assignments to variables you're interested in. At present time, an unfortunate amount of generated code is present in the trace. This code isn't a part of the Rust code you wrote, but is an internal implementation detail of how Kani runs proof harnesses. Still, searching for kani::any() quickly finds us these lines:

let x: u32 = kani::any();
x = 1023u

Here we're seeing the line of code and the value assigned in this particular trace. Like property testing, this is just one example of a failure. To proceed, we recommend fixing the code to avoid this particular issue and then re-running Kani to see if you find more issues.

Exercise: Try other failures

We put an explicit panic in this function, but it's not the only kind of failure Kani will find. Try a few other types of errors.

For example, instead of panicking we could try explicitly dereferencing a null pointer:

unsafe { return *(0 as *const u32) };

Notably, however, the Rust compiler emits a warning here:

warning: dereferencing a null pointer
  --> src/
10 |    unsafe { return *(0 as *const u32) };
   |                    ^^^^^^^^^^^^^^^^^^ this code causes undefined behavior when executed
   = note: `#[warn(deref_nullptr)]` on by default

Still, it's just a warning, and we can run the code without test failures just as before. But Kani still catches the issue:

Check 2: estimate_size.pointer_dereference.1
         - Status: FAILURE
         - Description: "dereference failure: pointer NULL"

Exercise: Can you find an example where the Rust compiler will not complain, and Kani will?

Click to show one possible answer
return 1 << x;

Overflow (in addition, multiplication, or, in this case, bit-shifting by too much) is also caught by Kani:

Check 1: estimate_size.assertion.1
         - Status: FAILURE
         - Description: "attempt to shift left with overflow"

Check 3: estimate_size.undefined-shift.1
         - Status: FAILURE
         - Description: "shift distance too large"

Assertions, Assumptions, and Harnesses

It seems a bit odd that our example function is tested against billions of possible inputs, when it really only seems to be designed to handle a few thousand. Let's encode this fact about our function by asserting some reasonable upper bound on our input, after we've fixed our bug. (New code available under first-steps-v2):

fn estimate_size(x: u32) -> u32 {
    assert!(x < 4096);

    if x < 256 {
        if x < 128 {
            return 1;
        } else {
            return 3;
        }
    } else if x < 1024 {
        if x > 1022 {
            return 4;
        } else {
            return 5;
        }
    } else {
        if x < 2048 {
            return 7;
        } else {
            return 9;
        }
    }
}

Now we've explicitly stated our previously implicit expectation: this function should never be called with inputs that are too big. But if we attempt to verify this modified function, we run into a problem:

Check 3: estimate_size.assertion.1
         - Status: FAILURE
         - Description: "assertion failed: x < 4096"

What we want is a precondition for estimate_size. That is, something that should always be true every time we call the function. By putting the assertion at the beginning, we ensure the function immediately fails if that expectation is not met.

But our proof harness will still call this function with any integer, even ones that just don't meet the function's preconditions. That's... not a useful or interesting result. We know that won't work already. How do we go back to successfully verifying this function?

This is the purpose of writing a proof harness. Much like property testing (which would also fail in this assertion), we need to set up our preconditions, call the function in question, then assert our postconditions. Here's a revised example of the proof harness, one that now succeeds:

#[kani::proof]
fn verify_success() {
    let x: u32 = kani::any();
    kani::assume(x < 4096);
    let y = estimate_size(x);
    assert!(y < 10);
}

But now we must wonder if we've really fully tested our function. What if we revise the function, but forget to update the assumption in our proof harness to cover the new range of inputs?

Fortunately, Kani is able to report a coverage metric for each proof harness. Try running:

cargo kani --visualize --harness verify_success

The beginning of the report includes coverage information. Clicking through to the file will show fully-covered lines in green. Lines not covered by our proof harness will show in red.

Try changing the assumption in the proof harness to x < 2048. Now the harness won't be testing all possible cases. Rerun cargo kani --visualize. Look at the report: you'll see we no longer have 100% coverage of the function.


Summary

In this section:

  1. We saw Kani find panics, assertion failures, and even some other failures like unsafe dereferencing of null pointers.
  2. We saw Kani find failures that testing could not easily find.
  3. We saw how to write a proof harness and use kani::any().
  4. We saw how to get a failing trace using kani --visualize.
  5. We saw how proof harnesses are used to set up preconditions with kani::assume().
  6. We saw how to obtain coverage metrics and use them to ensure our proofs are covering as much as they should be.

Failures that Kani can spot

In the last section, we saw Kani spot two major kinds of failures: assertions and panics. If the proof harness allows some program execution that results in a panic, then Kani will report that as a failure. In addition, we saw (very briefly) a couple of other kinds of failures: null pointer dereferences and overflows. In this section, we're going to expand on these additional checks, to give you an idea of what other problems Kani will find.

Bounds checking and pointers

Rust is safe by default, and so includes dynamic (run-time) bounds checking where needed. Consider this Rust code (available here):

/// Wrap "too-large" indexes back into a valid range for the array
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    if a.len() == 0 {
        return 0;
    }
    return a[i % a.len() + 1];
}

We can again write a simple property test against this code:

    proptest! {
        #[test]
        fn doesnt_crash(i: usize, a: Vec<u32>) {
            get_wrapped(i, &a);
        }
    }

This property test will immediately find a failing case, thanks to Rust's built-in bounds checking.

But what if we change this function to use unsafe Rust?

return unsafe { *a.get_unchecked(i % a.len() + 1) };

Now the error becomes invisible to this test:

# cargo test
test bounds_check::tests::doesnt_crash ... ok

The property test still causes an out-of-bounds access, but this undefined behavior does not necessarily cause an immediate crash. (This is part of why undefined behavior is so difficult to debug.) Through the use of unsafe code, we removed the runtime check for an out of bounds access. It just turned out that none of the randomly generated tests triggered behavior that actually crashed. But if we write a Kani proof harness:

#[kani::proof]
fn bound_check() {
    let size: usize = kani::any();
    kani::assume(size < 4096);
    let index: usize = kani::any();
    let array: Vec<u32> = vec![0; size];
    get_wrapped(index, &array);
}

And run this proof with:

cargo kani --harness bound_check

We still see a failure from Kani, even without Rust's runtime bounds checking.

Also, notice there were many checks in the verification output. (At time of writing, 351.) This is a result of using the standard library Vec implementation, which means our harness actually used quite a bit of code, short as it looks. Kani is inserting a lot more checks than appear as asserts in our code, so the output can be large.

We get the following summary at the end:

 ** 1 of 351 failed
Failed Checks: dereference failure: pointer outside object bounds
 File: "./src/", line 11, in bounds_check::get_wrapped


Notice that, for Kani, this has gone from a simple bounds-checking problem to a pointer-checking problem. Kani will check operations on pointers to ensure they're not potentially invalid memory accesses. Any unsafe code that manipulates pointers will, as we see here, raise failures if its behavior is actually a problem.

Consider trying a few more small exercises with this example:

  1. Exercise: Switch back to the normal/safe indexing operation and re-try Kani. How does Kani's output change, compared to the unsafe operation? (Try predicting the answer, then seeing if you got it right.)
  2. Exercise: Remember how to get a trace from Kani? Find out what inputs it failed on.
  3. Exercise: Fix the error, run Kani, and see a successful verification.
  4. Exercise: Try switching back to the unsafe code (now with the error fixed) and re-run Kani. Does it still verify successfully?
Click to see explanation for exercise 1

Having switched back to the safe indexing operation, Kani reports two failures:

 ** 2 of 350 failed
Failed Checks: index out of bounds: the length is less than or equal to the given index
 File: "./src/", line 11, in bounds_check::get_wrapped
Failed Checks: dereference failure: pointer outside object bounds
 File: "./src/", line 11, in bounds_check::get_wrapped


The first is Rust's runtime bounds checking for the safe indexing operation. The second is Kani's check to ensure the pointer operation is actually safe. This pattern (two checks for similar issues in safe Rust code) is common to see, and we'll see it again in the next section.

NOTE: While Kani will always be checking for both properties, in the future the output here may change. You might have noticed that the bad pointer dereference can't happen, since the bounds check would panic first. In the future, Kani's output may report only the bounds checking failure in this example.

Click to see explanation for exercise 2

Having run cargo kani --harness bound_check --visualize and clicked on one of the failures to see a trace, there are three things to immediately notice:

  1. This trace is huge. Because the standard library Vec is involved, there's a lot going on.
  2. The top of the trace file contains some "trace navigation tips" that might be helpful in navigating the trace.
  3. There's a lot of generated code and it's really hard to just read the trace itself.

To navigate this trace to find the information you need, we again recommend searching for things you expect to be somewhere in the trace:

  1. Search the page for kani::any or <variable_of_interest> = such as size = or let size. We can use this to find out what example values lead to a problem. In this case, where we just have a couple of kani::any values in our proof harness, we can learn a lot just by seeing what these are. In this trace we find (and the values you get may be different):
Step 36: Function bound_check, File src/, Line 37
let size: usize = kani::any();
size = 2464ul

Step 39: Function bound_check, File src/, Line 39
let index: usize = kani::any();
index = 2463ul

You may see different values here, as it depends on the solver's behavior.

  2. Try searching for failure:. This will be near the end of the page. You can now search upwards from a failure to see what values certain variables had. Sometimes it can be helpful to change the source code to add intermediate variables, so their value is visible in the trace. For instance, you might want to compute the index before indexing into the array. That way you'd see in the trace exactly what value is being used.

These two techniques should help you find both the nondeterministic inputs, and the values that were involved in the failing assertion.

Overflow and math errors

Consider a different variant on the function above:

fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    return a[i % a.len()];
}

We've corrected the out-of-bounds access, but now we've omitted the "base case": what to return for an empty list. Kani will spot this not as a bounds error, but as a mathematical error: on an empty list, the modulus operator (%) will cause a division by zero.

  1. Exercise: Try to run Kani on this version of get_wrapped, to see what this kind of failure looks like.
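One way to repair the function is to restore the missing base case. The sketch below is only one possible convention (returning 0 for an empty slice is our assumption here, not the tutorial's prescribed solution):

```rust
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    if a.is_empty() {
        // Restore the "base case" so `a.len()` is never 0 in the modulus.
        return 0;
    }
    a[i % a.len()]
}

fn main() {
    assert_eq!(get_wrapped(5, &[]), 0); // no longer divides by zero
    assert_eq!(get_wrapped(5, &[10, 20, 30]), 30); // 5 % 3 == 2
}
```

With this guard in place, the division-by-zero failure Kani reports for the empty slice disappears.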

Rust can also perform runtime safety checks for integer overflows, much like it does for bounds checks. (Though Rust disables this by default in --release mode, it can be re-enabled.) Consider this code (available here):

fn simple_addition(a: u32, b: u32) -> u32 {
    return a + b;
}

A trivial function, but if we write a property test for it, we immediately find inputs where it fails, thanks to Rust's dynamic checks. Kani will find these failures as well. Here's the output from Kani:

# cargo kani --harness add_overflow
 ** 1 of 2 failed
Failed Checks: attempt to add with overflow
 File: "./src/", line 7, in overflow::simple_addition


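As an aside, the runtime overflow checks mentioned above can be re-enabled for release builds through Cargo's standard profile settings:

```toml
# Cargo.toml: re-enable integer overflow panics in --release builds
[profile.release]
overflow-checks = true
```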
This issue can be fixed using Rust's alternative mathematical functions with explicit overflow behavior. For instance, if the wrapping behavior is intended, you can write a.wrapping_add(b) instead of a + b. Kani will then report no issues.
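The explicit-overflow alternatives behave as follows; this is a minimal standalone sketch using the standard wrapping_add and checked_add methods:

```rust
fn main() {
    // wrapping_add: overflow wraps around modulo 2^32 instead of panicking.
    assert_eq!(u32::MAX.wrapping_add(1), 0);
    // checked_add: overflow is reported as None instead of panicking.
    assert_eq!(u32::MAX.checked_add(1), None);
    assert_eq!(1u32.checked_add(2), Some(3));
}
```

Each method makes the intended overflow behavior explicit in the source, which is exactly what Kani needs in order to stop flagging the addition.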

Exercise: Classic overflow failure

A classic example of a subtle bug that persisted in many implementations for a very long time is "finding the midpoint" in quick sort. This often naively looks like this (code available here):

fn find_midpoint(low: u32, high: u32) -> u32 {
    return (low + high) / 2;
}

cargo kani --harness midpoint_overflow

Kani immediately spots the bug in the above code.

  1. Exercise: Fix this function so it no longer overflows. (Hint: depending on which approach you take, you may need to add the assumption that high > low to your proof harness. Don't add that right away, see what happens if you don't. Just keep it in mind.)
  2. Exercise: Prove your new implementation actually finds the midpoint correctly by adding an assertion to the test harness.
Click to see solutions for these exercises

A very common approach for resolving the overflow issue looks like this:

return low + (high - low) / 2;

But if you naively try this (try it!), you'll find a new underflow error: high - low might result in a negative number, but has type u32. Hence, the need to add the assumption we suggested above, to make that impossible. (Adding an assumption, though, means there's a new way to "use it wrong." Perhaps we'd like to avoid that! Can you avoid the assumption?)

After that, you might wonder how to "prove your new implementation correct." After all, what does "correct" even mean? Often we're using a good approximation of correct, such as the equivalence of two implementations (often one much "simpler" than the other somehow). Here's one possible assertion we could write in the proof harness (where a and b are the nondeterministic inputs passed to find_midpoint):

assert!(result as u64 == (a as u64 + b as u64) / 2);

You might have even come up with this approach to avoiding the overflow issue in the first place! Having two different implementations, using different approaches, but proven to yield the same results, gives us greater confidence that we compute the correct result.
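For the curious, here is a sketch of one well-known overflow-free formulation that also sidesteps the high > low assumption; it is one possible approach, not necessarily the tutorial's reference solution:

```rust
fn find_midpoint(low: u32, high: u32) -> u32 {
    // Halve each operand first, then add back the unit lost when both are odd.
    // This equals (low + high) / 2 computed in wider arithmetic, but never overflows.
    low / 2 + high / 2 + (low % 2 + high % 2) / 2
}

fn main() {
    assert_eq!(find_midpoint(0, 10), 5);
    assert_eq!(find_midpoint(u32::MAX, u32::MAX), u32::MAX); // no overflow at the extremes
    assert_eq!(find_midpoint(3, 4), 3); // matches (3 + 4) / 2
}
```

This version satisfies the u64-widening specification from the assertion above for all inputs, in either argument order.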

Failures that Kani cannot spot

Check out Limitations for information on the checks that Kani does not perform. Notably, Kani is not prioritizing all Rust-specific notions of undefined behavior.


In this section:

  1. We saw Kani spot out-of-bounds accesses.
  2. We saw Kani spot actually-unsafe dereferencing of a raw pointer to invalid memory.
  3. We got more experience reading the traces that Kani generates, to debug a failing proof harness.
  4. We saw Kani spot a division by zero error and an overflowing addition.
  5. As an exercise, we tried proving an assertion (finding the midpoint) that was not completely trivial.

Loops, unwinding, and bounds

Consider code like this (available here):

fn initialize_prefix(length: usize, buffer: &mut [u8]) {
    // Let's just ignore invalid calls
    if length > buffer.len() {
        return;
    }

    for i in 0..=length {
        buffer[i] = 0;
    }
}

This code has an off-by-one error that only occurs on the last iteration of the loop (when called with an input that will trigger it). We can try to find this bug with a proof harness like this:

#[kani::proof]
#[kani::unwind(1)] // deliberately too low
fn check_initialize_prefix() {
    const LIMIT: usize = 10;
    let mut buffer: [u8; LIMIT] = [1; LIMIT];

    let length = kani::any();
    kani::assume(length <= LIMIT);

    initialize_prefix(length, &mut buffer);
}

But we've just used a new attribute (#[kani::unwind(1)]) that requires some explanation. When we run cargo kani on this code as we have written it, we see an odd verification failure:

 ** 1 of 67 failed (66 undetermined)
Failed Checks: unwinding assertion loop 0


If we try removing this "unwind" annotation and re-running Kani, the result is worse: non-termination. Kani simply doesn't produce a result.

The problem we're struggling with stems from the technique Kani uses to verify code: Kani cannot handle code with "unbounded" loops, and what "bounded" means can be quite subtle. A loop has to have a constant number of iterations that's "obviously constant" enough for the verifier to actually figure out. In practice, very few loops are like this.

To verify programs like this with Kani as it exists today, we need to do two things:

  1. Set an upper bound on the size of the problem. We've actually already done part of this: our proof harness above seems to be trying to set an upper LIMIT of 10.
  2. Tell Kani about this limit if (or when) it's not able to figure it out on its own. This is the purpose of the kani::unwind annotation.

Bounding proofs like this means we may no longer be proving as much as we originally hoped. Who's to say, if we prove everything works up to size 10, that there isn't a novel bug lurking, reachable only with problems of size 11+? Perhaps! But, let's get back to the issue at hand.

By putting #[kani::unwind(1)] on the proof harness, we've placed an upper bound of 1 loop iteration. The "unwinding assertion" failure that Kani reports is because this bound is not high enough. The code tries to execute more than 1 loop iteration. (And, because the unwinding isn't high enough, many of the other properties Kani is verifying become "undetermined": we don't really know if they're true or false, because we can't get far enough.)

Exercise: Try increasing the bound. Where might you start? How high do you need to go to get rid of the "unwinding assertion" failure?

Click to see explanation for the exercise

Since the proof harness is trying to limit the array to size 10, an initial unwind value of 10 seems like the obvious place to start. But that's not large enough for Kani, and we still see the "unwinding assertion" failure.

At size 11, the "unwinding assertion" goes away, and now we can see the actual failure we're trying to find too. We'll explain why we see this behavior in a moment.

Once we have increased the unwinding limit high enough, we're left with these failures:

 ** 1 of 68 failed
Failed Checks: index out of bounds: the length is less than or equal to the given index
 File: "./src/", line 12, in initialize_prefix


Exercise: Fix the off-by-one error, and get the (bounded) proof to go through.

We now return to the question: why is 11 the unwinding bound?

Kani needs the unwinding bound to be "one more than" the number of loop iterations. We previously had an off-by-one error that tried to do 11 iterations on an array of size 10. So... the unwinding bound needed to be 11, then.
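The off-by-one arithmetic is easy to check in plain Rust: an inclusive range 0..=length executes length + 1 iterations, which is where the 11 comes from for an array of size 10:

```rust
fn main() {
    let length = 10;
    let mut iterations = 0;
    // The buggy loop iterates over the inclusive range 0..=length.
    for _i in 0..=length {
        iterations += 1;
    }
    // 11 iterations on an array of size 10: the unwind bound must cover all of them.
    assert_eq!(iterations, 11);
}
```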

NOTE: Presently, there are some situations where "number of iterations of a loop" can be less obvious than it seems. This can be easily triggered with use of break or continue within loops. Often this manifests itself as needing "two more" or "three more" iterations in the unwind bound than seems like it would actually run. In those situations, we might still need a bound like kani::unwind(13), despite looking like a loop bounded to 10 iterations.

The approach we've taken here is a general method for getting a bounded proof to go through:

  1. Put an actual upper bound on the problem itself. Here that's accomplished via LIMIT in our proof harness. We don't create a slice any bigger than that, and that's what we loop over.
  2. Start at a reasonable guess for a kani::unwind bound, and increase until the unwinding assertion failure goes away.
  3. Or, if that starts to take too long to verify, decrease your problem's bound, to accommodate the verifier's performance.

Unwinding value specification

The best approach to supplying Kani with unwind bounds is using the annotation kani::unwind, as we show above.

However, you might want to supply one via the command line when experimenting. In that case, you can use --default-unwind x to set an unwind bound for every proof harness that does not have an explicit bound.

Or you can override a harness's bound, but only when running a specific harness:

cargo kani --harness check_initialize_prefix --unwind 11

Finally, you might be interested in defaulting the unwind bound to 1, to force termination (and force supplying a bound) on all your proof harnesses. You can do this by putting this into your Cargo.toml file:

default-unwind = 1

Bounded proof

Before we finish, it's worth revisiting the implications of what we've done here. Kani frequently needs to do "bounded proof", which contrasts with unbounded or full verification.

We've written a proof harness that shows initialize_prefix has no errors on input slices of size 10, but no higher. The particular size we choose is usually determined by balancing the level of assurance we want, versus runtime of Kani. It's often not worth running proofs for large numbers of iterations, unless either very high assurance is necessary, or there's reason to suspect larger problems will contain novel failure modes.

Exercise: Try increasing the problem size (both the unwind and the LIMIT constant). When does it start to take more than a few seconds?

Click to see explanation for the exercise

On your friendly neighborhood author's machine, a LIMIT of 100 takes about 3.8 seconds end-to-end. This is a relatively simple bit of code, though, and it's not uncommon for some proofs to scale poorly even to 5 iterations.

One consequence of this, however, is that Kani often scales poorly to "big string problems" like parsing. Often a parser will need to consume inputs larger than 10-20 characters to exhibit strange behaviors.


In this section:

  1. We saw Kani fail to terminate.
  2. We saw how #[kani::unwind(1)] can help force Kani to terminate (with a verification failure).
  3. We saw "unwinding assertions" verify that we've set the unwinding limit high enough.
  4. We saw how to put a practical bound on problem size in our proof harness.
  5. We saw how to pick an unwinding size large enough to successfully verify that bounded proof.

Nondeterministic variables

Kani is able to reason about programs and their execution paths by allowing users to create nondeterministic (also called symbolic) values using kani::any(). Kani is a "bit-precise" model checker, which means that Kani considers all the possible bit-value combinations that would be valid if assigned to a variable's memory contents. In other words, kani::any() should not produce values that are invalid for the type (which would lead to Rust undefined behavior).

Out of the box, Kani includes kani::any() implementations for most primitive and some std types. In this tutorial, we will show how to use kani::any() to create symbolic values for other types.

Safe nondeterministic variables

Let's say you're developing an inventory management tool, and you would like to start verifying properties about your API. Here is a simple example (available here):

use std::num::NonZeroU32;
use vector_map::VecMap;

pub type ProductId = u32;

pub struct Inventory {
    /// Every product in inventory must have a non-zero quantity
    pub inner: VecMap<ProductId, NonZeroU32>,
}

impl Inventory {
    pub fn update(&mut self, id: ProductId, new_quantity: NonZeroU32) {
        self.inner.insert(id, new_quantity);
    }

    pub fn get(&self, id: &ProductId) -> Option<NonZeroU32> {
        self.inner.get(id).cloned()
    }
}

Let's write a fairly simple proof harness, one that just ensures we successfully get the value we inserted with update:

    #[kani::proof]
    pub fn safe_update() {
        // Empty to start
        let mut inventory = Inventory { inner: VecMap::new() };

        // Create non-deterministic variables for id and quantity.
        let id: ProductId = kani::any();
        let quantity: NonZeroU32 = kani::any();
        assert!(quantity.get() != 0, "NonZeroU32 is internally a u32 but it should never be 0.");

        // Update the inventory and check the result.
        inventory.update(id, quantity);
        assert!(inventory.get(&id).unwrap() == quantity);
    }

We use kani::any() twice here:

  1. id has type ProductId, which is really just a u32, and so any value is fine.
  2. quantity, however, has type NonZeroU32. In Rust, it would be undefined behavior to have a value of 0 for this type.

We included an extra assertion that the value returned by kani::any() here is actually non-zero. If we run this, we'll see that verification succeeds.

cargo kani --harness safe_update

kani::any() is safe Rust, and so Kani only implements it for types where type invariants are enforced. For NonZeroU32, this means we never return a 0 value. The assertion we wrote in this harness was just an extra check we added to demonstrate this fact, not an essential part of the proof.
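The type invariant that kani::any() respects here is the same one enforced by NonZeroU32's safe constructor in plain Rust:

```rust
use std::num::NonZeroU32;

fn main() {
    // The safe constructor refuses to build the invalid value 0...
    assert!(NonZeroU32::new(0).is_none());
    // ...and any non-zero bit pattern is a valid NonZeroU32.
    let q = NonZeroU32::new(5).unwrap();
    assert_eq!(q.get(), 5);
}
```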

Custom nondeterministic types

While kani::any() is the only method Kani provides to inject non-determinism into a proof harness, Kani only ships with implementations for a few std types where we can guarantee safety. When you need nondeterministic variables of types that don't have a kani::any() implementation available, you have the following options:

  1. Implement the kani::Arbitrary trait for your type, so you and downstream crates can use kani::any() with your type.
  2. Implement the bolero_generator::TypeGenerator trait. This will enable you and downstream crates to use Kani via Bolero.
  3. Write a function that builds an object from non-deterministic variables.

We recommend the first approach for most cases, since it is simple and conventional. It also enables you to use parameterized types, such as Option<MyType>, and arrays. Kani includes a derive macro that allows you to automatically derive kani::Arbitrary for structures and enumerations, as long as all their fields also implement the kani::Arbitrary trait. One downside of this approach today is that the kani crate ships with Kani but is not yet available on crates.io, so you need to annotate the Arbitrary implementation with a #[cfg(kani)] attribute. For the derive macro, use #[cfg_attr(kani, derive(kani::Arbitrary))].

The second approach is recommended for cases where you would also like to be able to apply fuzzing or property testing. The benefits of doing so were described in this blog post. Like kani::Arbitrary, this trait can also be used with a derive macro. One thing to be aware of is that this trait allows users to generate arbitrary values that include pointers. In those cases, only the values pointed to are arbitrary, not the pointers themselves.

Finally, the last approach is recommended when you need to pass in parameters, like bounds on the size of the data structure. (Which we'll discuss more in the next section.) This approach is also necessary when you need to generate a nondeterministic variable of a type that you're importing from another crate, since Rust doesn't allow you to implement a trait defined in an external crate for a type that you don't own.

Either way, inside this function you would simply return an arbitrary value by generating arbitrary values for its components. To generate a nondeterministic struct, you would just generate nondeterministic values for each of its fields. For complex data structures like vectors or other containers, you can start with an empty one and add a (bounded) nondeterministic number of entries.

For example, for a simple enum you can just annotate it with the Arbitrary derive attribute:

#[derive(Copy, Clone)]
#[cfg_attr(kani, derive(kani::Arbitrary))]
pub enum Rating {
    One,
    Two,
    Three,
}

But if the same enum is defined in an external crate, you can use a simple trick:

    pub fn any_rating() -> Rating {
        match kani::any() {
            0 => Rating::One,
            1 => Rating::Two,
            _ => Rating::Three,
        }
    }

All we're doing here is making use of a nondeterministic integer to decide which variant of Rating to return.

NOTE: If we thought of this code as generating a random value, this function looks heavily biased. We'd overwhelmingly generate a Three, because it matches "all other integers besides 0 and 1." But Kani just sees 3 meaningful possibilities, none of which is treated differently from the others. The "proportion" of integers does not matter.

Bounding nondeterministic variables

You can use kani::any() for [T; N] (if implemented for T) because this array type has an exact and constant size. But if you wanted a slice ([T]) up to size N, you can no longer use kani::any() for that. Likewise, there is no implementation of kani::any() for more complex data structures like Vec.

The trouble with a nondeterministic vector is that you usually need to bound the size of the vector, for the reasons we investigated in the last chapter. The kani::any() function does not have any arguments, and so cannot be given an upper bound.

This does not mean you cannot have a nondeterministic vector. It just means you have to construct one. Our example proof harness above constructs a nondeterministic Inventory of size 1, simply by starting with the empty Inventory and inserting a nondeterministic entry.


Try writing a function to generate a (bounded) nondeterministic inventory (from the first example):

fn any_inventory(bound: u32) -> Inventory {
   // fill in here
}

One thing you'll quickly find is that the bounds must be very small. Kani does not (yet!) scale well to nondeterministic-size data structures involving heap allocations. A proof harness like safe_update above, but starting with any_inventory(2) will probably take a couple of minutes to prove.

A hint for this exercise: you might choose two different behaviors, "size of exactly bound" or "size up to bound". Try both!

A solution can be found in


In this section:

  1. We saw how kani::any() will return "safe" values for each of the types Kani implements it for.
  2. We saw how to implement kani::Arbitrary or just write a function to create nondeterministic values for other types.
  3. We noted that some types cannot implement kani::any() as they need a bound on their size.
  4. We did an exercise to generate nondeterministic values of bounded size for Inventory.

Debugging verification failures

When the result of a certain check comes back as a FAILURE, Kani offers different options to help debug:

  • --concrete-playback. This experimental feature generates a Rust unit test case that plays back a failing proof harness using a concrete counterexample.
  • --visualize. This feature generates an HTML text-based trace that enumerates the execution steps leading to the check failure.

Concrete playback

When concrete playback is enabled, Kani will generate unit tests for assertions that failed during verification, as well as cover statements that are reachable.

These tests can then be executed using Kani's playback subcommand.


In order to enable this feature, run Kani with the -Z concrete-playback --concrete-playback=[print|inplace] flag. After getting a verification failure, Kani will generate a Rust unit test case that plays back a failing proof harness with a concrete counterexample. The concrete playback modes mean the following:

  • print: Kani will just print the unit test to stdout. You will then need to copy this unit test into the same module as your proof harness. This is also helpful if you just want to quickly find out which values were assigned by kani::any() calls.
  • inplace: Kani will automatically copy the unit test into your source code. Before running this mode, you might find it helpful to have your existing code committed to git. That way, you can easily remove the unit test with git revert. Note that Kani will not copy the unit test into your source code if it detects that the exact same test already exists.

After the unit test is in your source code, you can run it with the playback subcommand. To debug it, there are a couple of options:

To manually compile and run the test, you can use Kani's playback subcommand:

cargo kani playback -Z concrete-playback -- ${unit_test_func_name}

The output from this command is similar to cargo test. The output will have a line in the beginning like Running unittests {files} ({binary}).

You can further debug the binary with tools like rust-gdb or lldb.


Running kani -Z concrete-playback --concrete-playback=print on the following source file:

#[kani::proof]
fn proof_harness() {
    let a: u8 = kani::any();
    let b: u16 = kani::any();
    assert!(a / 2 * 2 == a &&
            b / 2 * 2 == b);
}

yields a concrete playback Rust unit test similar to the one below:

#[test]
fn kani_concrete_playback_proof_harness_16220658101615121791() {
    let concrete_vals: Vec<Vec<u8>> = vec![
        // 133
        vec![133],
        // 35207
        vec![135, 137],
    ];
    kani::concrete_playback_run(concrete_vals, proof_harness);
}

Here, 133 and 35207 are the concrete values that, when substituted for a and b, cause an assertion failure. vec![135, 137] is the byte array representation of 35207.
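The byte arrays are the little-endian encodings of the concrete values, which we can confirm in plain Rust (135 + 137 × 256 = 35207):

```rust
fn main() {
    // Concrete playback stores each nondeterministic value as its raw bytes,
    // least significant byte first.
    assert_eq!(u16::from_le_bytes([135, 137]), 35207);
    // A u8 value occupies a single byte, hence a one-element vector.
    assert_eq!(u8::from_le_bytes([133]), 133);
}
```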

Request for comments

This feature is experimental and is therefore subject to change. If you have ideas for improving the user experience of this feature, please add them to this GitHub issue. We are tracking the existing feature requests in this GitHub milestone.


  • This feature does not generate unit tests for failing non-panic checks (e.g., UB checks). This is because such checks would not trigger runtime errors during concrete playback. Kani generates warning messages in these cases.
  • This feature does not support generating unit tests for multiple assertion failures within the same harness. This limitation might be removed in the future. Kani generates warning messages for this.
  • This feature requires that you use the same Kani version to generate the test and to playback. Any extra compilation option used during verification must be used during playback.


This section is the main reference for Kani. It contains sections that informally describe its main features.


In Kani, attributes are used to mark functions as harnesses and control their execution. This section explains the attributes available in Kani and how they affect the verification process.

At present, the available Kani attributes are the following:

  • #[kani::proof]
  • #[kani::should_panic]
  • #[kani::unwind(<number>)]
  • #[kani::solver(<solver>)]
  • #[kani::stub(<original>, <replacement>)]


The #[kani::proof] attribute specifies that a function is a proof harness.

Proof harnesses are similar to test harnesses, especially property-based test harnesses, and they may use functions from the Kani API (e.g., kani::any()). A proof harness is the smallest verification unit in Kani.

When Kani is run, either through kani or cargo kani, it'll first collect all proof harnesses (i.e., functions with the attribute #[kani::proof]) and then attempt to verify them.


If we run Kani on this example:

#[kani::proof]
fn my_harness() {
    assert!(1 + 1 == 2);
}

We should see a line in the output that says Checking harness my_harness... (assuming my_harness is the only harness in our code). This will be followed by multiple messages that come from CBMC (the verification engine used by Kani) and the verification results.

Using any other Kani attribute without #[kani::proof] will result in compilation errors.


The #[kani::proof] attribute can only be added to functions without parameters.


The #[kani::should_panic] attribute specifies that a proof harness is expected to panic.

This attribute allows users to exercise negative verification. It's analogous to how #[should_panic] allows users to exercise negative testing for Rust unit tests.

This attribute only affects the overall verification result. In particular, using the #[kani::should_panic] attribute will return one of the following results:

  • VERIFICATION:- FAILED (encountered no panics, but at least one was expected) if there were no failed checks.
  • VERIFICATION:- FAILED (encountered failures other than panics, which were unexpected) if there were failed checks but not all of them were related to panics.
  • VERIFICATION:- SUCCESSFUL (encountered one or more panics as expected) otherwise.

At the moment, to determine if a check is related to a panic, we check if its class is assertion. The class is the second member in the property name, the triple printed after Check X: as <function>.<class>.<number>. For example, the class in Check 1: my_harness.assertion.1 is assertion, so this check is considered to be related to a panic.

NOTE: The #[kani::should_panic] attribute is only recommended for writing harnesses which complement existing harnesses that don't use the same attribute. In other words, it's only recommended to write negative harnesses after having written positive harnesses that successfully verify interesting properties about the function under verification.


The #[kani::should_panic] attribute verifies that there are one or more failed checks related to panics. At the moment, it's not possible to pin it down to specific panics. Therefore, it's possible that the panics detected with #[kani::should_panic] aren't the ones that were originally expected after a change in the code under verification.


Let's assume we're using the Device from this example:

struct Device {
    is_init: bool,
}

impl Device {
    fn new() -> Self {
        Device { is_init: false }
    }

    fn init(&mut self) {
        // A device can only be initialized once; a second call panics.
        assert!(!self.is_init);
        self.is_init = true;
    }
}

We may want to verify that calling device.init() more than once should result in a panic. We can do so with the following harness:

#[kani::proof]
#[kani::should_panic]
fn cannot_init_device_twice() {
    let mut device = Device::new();
    device.init();
    device.init();
}

Running Kani on it will produce the result VERIFICATION:- SUCCESSFUL (encountered one or more panics as expected).


The #[kani::unwind(<number>)] attribute specifies that all loops must be unwound up to <number> times.

By default, Kani attempts to unwind all loops automatically. However, this unwinding process doesn't always terminate. The #[kani::unwind(<number>)] attribute will:

  1. Disable automatic unwinding.
  2. Unwind all loops up to <number> times.

After the unwinding stage, Kani will attempt to verify the harness. If the #[kani::unwind(<number>)] attribute was specified, there's a chance that one or more loops weren't unwound enough times. In that case, there will be at least one failed unwinding assertion (there's one unwinding assertion for each loop), causing verification to fail.

Check the Loops, unwinding and bounds section for more information about unwinding.


Let's assume we've written this code which contains a loop:

fn my_sum(vec: &Vec<u32>) -> u32 {
    let mut sum = 0;
    for elem in vec {
        sum += elem;
    }
    sum
}

#[kani::proof]
fn my_harness() {
    let vec = vec![1, 2, 3];
    let sum = my_sum(&vec);
    assert!(sum == 6);
}

Running this example on Kani will produce a successful verification result. In this case, Kani automatically finds the required unwinding value (i.e., the number of times it needs to unwind all loops). This means that the #[kani::unwind(<number>)] attribute isn't needed, as we'll see soon. In general, the required unwinding value is equal to the maximum number of iterations for all loops, plus one. The required unwinding value in this example is 4: the 3 iterations in the for elem in vec loop, plus 1.

Let's see what happens if we force a lower unwinding value with #[kani::unwind(3)]:

#[kani::proof]
#[kani::unwind(3)]
fn my_harness() {
    let vec = vec![1, 2, 3];
    let sum = my_sum(&vec);
    assert!(sum == 6);
}

As we mentioned, trying to verify this harness causes an unwinding failure:

 ** 1 of 187 failed (186 undetermined)
Failed Checks: unwinding assertion loop 0
 File: "/home/ubuntu/devices/src/", line 32, in my_sum

[Kani] info: Verification output shows one or more unwinding failures.
[Kani] tip: Consider increasing the unwinding value or disabling `--unwinding-assertions`.

Kani cannot verify the harness because there is at least one unwinding assertion failure. But, if we use #[kani::unwind(4)], which is the right unwinding value we computed earlier:

#[kani::proof]
#[kani::unwind(4)]
fn my_harness() {
    let vec = vec![1, 2, 3];
    let sum = my_sum(&vec);
    assert!(sum == 6);
}

We'll get a successful result again:

 ** 0 of 186 failed



#[kani::solver(<solver>)]

Changes the solver to be used by Kani's verification engine (CBMC).

This may change the verification time required to verify a harness.

At present, <solver> can be one of:

  • minisat: MiniSat.
  • cadical (default): CaDiCaL.
  • kissat: kissat.
  • bin="<SAT_SOLVER_BINARY>": A custom solver binary, "<SAT_SOLVER_BINARY>", that must be in the PATH.


Kani will use the CaDiCaL solver in the following example:

#[kani::proof]
#[kani::solver(cadical)]
fn check() {
    let mut a = [2, 3, 1];
    a.sort();
    assert_eq!(a[0], 1);
    assert_eq!(a[1], 2);
    assert_eq!(a[2], 3);
}

Changing the solver may result in different verification times depending on the harness.

Note that the default solver may vary depending on Kani's version. We highly recommend users to annotate their harnesses if the choice of solver has a major impact on performance, even if the solver used is the current default one.

#[kani::stub(<original>, <replacement>)]

Replaces the function/method with name <original> with the function/method with name <replacement> during compilation.

Check the Stubbing section for more information about stubbing.


Stubbing (or mocking) is an unstable feature which allows users to specify that certain items should be replaced with stubs (mocks) of those items during verification. At present, the only items where stubbing can be applied are functions and methods (see limitations for more details).

When to consider stubbing

In general, we have identified three reasons where users may consider stubbing:

  • Unsupported features: The code under verification contains features that Kani does not support, such as inline assembly.
  • Bad performance: The code under verification contains features that Kani supports, but it leads to bad verification performance (for example, deserialization code).
  • Compositional reasoning: The code under verification contains code that has been verified separately. Stubbing the code that has already been verified with a less complex version that mimics its behavior can result in reduced verification workloads.

In most cases, stubbing enables users to verify code that otherwise would be impractical to verify. Although definitions for mocking (normally used in testing) and stubbing may slightly differ depending on who you ask, we often use both terms interchangeably.


The stubbing feature can be enabled by using the --enable-stubbing option when calling Kani. Since it's an unstable feature, it requires passing the --enable-unstable option in addition to --enable-stubbing.

At present, the only component of the stubbing feature is the #[kani::stub(<original>, <replacement>)] attribute, which allows you to specify the pair of functions/methods that must be stubbed in a harness.

The #[kani::stub(...)] attribute

The stub attribute #[kani::stub(<original>, <replacement>)] is the main tool of the stubbing feature.

It indicates to Kani that the function/method with name <original> should be replaced with the function/method with name <replacement> during the compilation step. The names of these functions/methods are resolved using Rust's standard name resolution rules. This includes support for imports like use foo::bar as baz, as well as imports of multiple versions of the same crate.

This attribute must be specified on a per-harness basis. This provides a high degree of flexibility for users, since they are given the option to stub the same item with different replacements (or not use stubbing at all) depending on the proof harness. In addition, the attribute can be specified multiple times per harness, so that multiple (non-conflicting) stub pairings are supported.

An example: stubbing random

Let's see a simple example where we use the rand::random function to generate an encryption key.

#[kani::proof]
fn encrypt_then_decrypt_is_identity() {
    let data: u32 = kani::any();
    let encryption_key: u32 = rand::random();
    let encrypted_data = data ^ encryption_key;
    let decrypted_data = encrypted_data ^ encryption_key;
    assert_eq!(data, decrypted_data);
}

At present, Kani fails to verify this example due to issue #1781.

However, we can work around this limitation thanks to the stubbing feature:

fn mock_random<T: kani::Arbitrary>() -> T {
    kani::any()
}

#[kani::proof]
#[kani::stub(rand::random, mock_random)]
fn encrypt_then_decrypt_is_identity() {
    let data: u32 = kani::any();
    let encryption_key: u32 = rand::random();
    let encrypted_data = data ^ encryption_key;
    let decrypted_data = encrypted_data ^ encryption_key;
    assert_eq!(data, decrypted_data);
}

Here, the #[kani::stub(rand::random, mock_random)] attribute indicates to Kani that it should replace rand::random with the stub mock_random. Note that this is a fair assumption to make: rand::random is expected to return any u32 value, just like kani::any.

Now, let's run it through Kani:

cargo kani --enable-unstable --enable-stubbing --harness encrypt_then_decrypt_is_identity

The verification result is composed of a single check: the assertion corresponding to assert_eq!(data, decrypted_data).

Check 1: encrypt_then_decrypt_is_identity.assertion.1
         - Status: SUCCESS
         - Description: "assertion failed: data == decrypted_data"
         - Location: src/ in function encrypt_then_decrypt_is_identity

 ** 0 of 1 failed


Kani shows that the assertion is successful, avoiding any issues that appear if we attempt to verify the code without stubbing.


In the following, we describe all the limitations of the stubbing feature.

Usage restrictions

The usage of stubbing is limited to the verification of a single harness. Therefore, users are required to pass the --harness option when using the stubbing feature.

In addition, this feature isn't compatible with concrete playback.


Support for stubbing is currently limited to functions and methods. All other items aren't supported.

The following are examples of items that could be good candidates for stubbing, but aren't supported:

  • Types
  • Macros
  • Traits
  • Intrinsics

We acknowledge that support for method stubbing isn't as ergonomic as it could be. A common problem when attempting to define method stubs is that we don't have access to the private fields of an object (i.e., the fields in self). One workaround is to use the unsafe function std::mem::transmute, as in this example:

struct Foo {
    x: u32,
}

impl Foo {
    pub fn m(&self) -> u32 {
        self.x
    }
}

struct MockFoo {
    pub x: u32,
}

fn mock_m(foo: &Foo) -> u32 {
    let mock: &MockFoo = unsafe { std::mem::transmute(foo) };
    mock.x
}

#[kani::proof]
#[kani::stub(Foo::m, mock_m)]
fn my_harness() { ... }

However, this isn't recommended since it's unsafe and error-prone. In general, we don't recommend stubbing private functions/methods. Doing so can lead to brittle proofs: private functions/methods are subject to change or removal even in minor version upgrades (they aren't part of the public API). Therefore, proofs that rely on stubbing private functions/methods might incur a high maintenance burden.

Error conditions

Given a set of original-replacement pairs, Kani will exit with an error if:

  1. a specified original function does not exist;
  2. a specified replacement stub does not exist;
  3. the user specifies conflicting stubs for the same harness (e.g., if the same original function is mapped to multiple replacement functions); or
  4. the signature of the replacement stub is not compatible with the signature of the original function/method (see next section).

Stub compatibility and validation

We consider a stub and a function/method to be compatible if all the following conditions are met:

  • They have the same number of parameters.
  • They have the same return type.
  • Each parameter in the stub has the same type as the corresponding parameter in the original function/method.
  • The stub must have the same number of generic parameters as the original function/method. However, a generic parameter in the stub is allowed to have a different name than the corresponding parameter in the original function/method. For example, the stub bar<A, B>(x: A, y: B) -> B is considered to have a type compatible with the function foo<S, T>(x: S, y: T) -> T.
  • The bounds for each type parameter don't need to match; however, all calls to the original function must also satisfy the bounds of the stub.

The final point is the most subtle. We don't require that a type parameter in the signature of the stub implements the same traits as the corresponding type parameter in the signature of the original function/method. However, Kani will reject a stub if a trait mismatch leads to a situation where a statically dispatched call to a trait method cannot be resolved during monomorphization. For example, this restriction rules out the following harness:

fn foo<T>(_x: T) -> bool {
    false
}

trait DoIt {
    fn do_it(&self) -> bool;
}

fn bar<T: DoIt>(x: T) -> bool {
    x.do_it()
}

#[kani::proof]
#[kani::stub(foo, bar)]
fn harness() {
    assert!(!foo("hello"));
}

The call to the trait method DoIt::do_it is unresolvable in the stub bar when the type parameter T is instantiated with the type &str. On the other hand, this approach provides some flexibility, such as allowing our earlier example of mocking rand::random: both rand::random and mock_random have type () -> T, but in the first case T is restricted such that the type Standard implements Distribution<T>, whereas in the latter case T has to implement kani::Arbitrary. This trait mismatch is allowed because at this call site T is instantiated with u32, which implements kani::Arbitrary.


You may be interested in applying Kani if you're in this situation:

  1. You're working on a moderately important project in Rust.
  2. You've already invested heavily in testing to ensure correctness.
  3. You want to invest further, to gain a much higher degree of assurance.

If you haven't already, we also recommend techniques like property testing and fuzzing (e.g. with bolero). These yield good results, are very cheap to apply, and are often easy to adopt and debug.

In this section, we explain how Kani compares with other tools and suggest where to start applying Kani in real code.

Comparison with other tools

Fuzzing (for example, with cargo-fuzz) is an unguided approach to random testing. A fuzzer generally provides an input of random bytes, and then examines fairly generic properties (such as "doesn't crash" or "doesn't commit undefined behavior") about the resulting program.

Fuzzers generally get their power through a kind of evolutionary algorithm that rewards new mutant inputs that "discover" new branches of the program under test. Fuzzers are excellent for testing security boundaries, precisely because they make no validity assumptions (hence, they are "unguided") when generating the input.

Property testing (for example, with Proptest) is a guided approach to random testing. "Guided" in the sense that the test generally provides a strategy for generating random values that constrains their range. The purpose of this strategy is to either focus on interesting values, or avoid failing assertions that only hold for a constrained set of inputs. Tests in this style do actually state properties: For all inputs (of some constrained kind), this condition should hold.

Property testing is often quite effective, but the engine can't fully prove the property: it can only randomly sample a few of those values to test (though property testing libraries frequently give interesting "edge cases" a higher probability, making them more effective at bug-finding).

Model checking is similar to these techniques in how you use them, but it's non-random and exhaustive (though often only up to some bound on input or problem size). Thus, properties checked with a model checker are effectively proofs. Instead of naively trying all possible concrete inputs (which could be infeasible and blow up exponentially), model checkers like Kani will cleverly encode program traces as symbolic "SAT/SMT" problems, and hand them off to SAT/SMT solvers. SAT/SMT solving is an NP-complete problem, but many practical programs can be model-checked within milliseconds to seconds (with notable exceptions: you can easily try to reverse a cryptographic hash with a model checker, but good luck getting it to terminate!)

Model checking allows you to prove non-trivial properties about programs, and check those proofs in roughly the same amount of time as a traditional test suite would take to run. The downside is that many types of properties can quickly become "too large" to practically model-check, so writing "proof harnesses" (very similar to property tests and fuzzer harnesses) requires some skill: you have to understand why the solver is not terminating and fix the structure of the problem you're giving it so that it does. This process basically boils down to "debugging" the proof.

Looking for concurrency?

At present, Kani does not support verifying concurrent code. Two tools of immediate interest are Loom and Shuttle. Loom attempts to check all possible interleavings, while Shuttle chooses interleavings randomly. The former is sound (like Kani), but the latter is more scalable to large problem spaces (like property testing).

Other tools

The Rust Formal Methods Interest Group maintains a list of interesting Rust verification tools.

Where to start on real code

It can be daunting to find the right place to start writing proofs for a real-world project. This section will try to help you get over that hurdle.

In general, you're trying to do three things:

  1. Find a place where it'd be valuable to have a proof.
  2. Find a place where it won't be too difficult to prove something, just to start.
  3. Figure out what a feasible longer-term goal might be.

By far, the best strategy is to follow your testing. Places where proof will be valuable are often places where you've written a lot of tests, because they're valuable there for the same reasons. Likewise, code structure changes to make functions more unit-testable will also make functions more amenable to proof. Often, by examining existing unit tests (and especially property tests), you can easily find a relatively self-contained function that's a good place to start.

Where is proof valuable?

  1. Where complicated things happen with untrusted user input. These are often the critical "entry points" into the code. These are also places where you probably want to try other techniques (e.g., fuzz testing).

  2. Where unsafe is used extensively. These are often places where you'll already have concentrated a lot of tests.

  3. Where you have a complicated implementation that accomplishes a much simpler abstract problem. Ideal places for property testing, if you haven't tried that already. But the usual style of property tests you often write here (generate large random lists of operations, then compare between concrete and abstract model) won't be practical to directly port to model checking.

  4. Where normal testing "smells" intractable.

Where is it easier to start?

  1. Find crates or files with smaller lists of dependencies. Dependencies can sometimes blow up the tractability of proofs. This can usually be handled, but requires a lot more investment to make it happen, and so isn't a good place to start.

  2. Don't forget to consider starting with your dependencies. Sometimes the best place to start won't be your code, but the code that you depend on. If it's used by more projects than just yours, it will be valuable to more people, too!

  3. Find well-tested code. When you make changes to improve the unit-testability of code, you also make it more amenable to proof.

Here are some things to avoid, when starting out:

  1. Lots of loops, or at least nested loops. As we saw in the tutorial, right now we often need to put upper bounds on loops to make more limited claims.

  2. Inductive data structures. These are data structures with unbounded size (e.g., linked lists or trees.) These can be hard to model since you need to set bounds on their size, similar to what happens with loops.

  3. Input/Output code. Kani doesn't model I/O, so if your code depends on behavior like reading/writing to a file, you won't be able to prove anything. This is one obvious area where testability helps provability: often we separate I/O and "pure" computation into different functions, so we can unit-test the latter.

  4. Deeper call graphs. Functions that call a lot of other functions can require more investment to make tractable. They may not be a good starting point.

  5. Significant global state. Rust tends to discourage this, but it still exists in some forms.

Your first proof

A first proof will likely start in the following form:

  1. Nondeterministically initialize variables that will correspond to function inputs, with as few constraints as possible.
  2. Call the function in question with these inputs.
  3. Don't (yet) assert any post-conditions.

Running Kani on this simple starting point will help figure out:

  1. What unexpected constraints might be needed on your inputs (using kani::assume) to avoid "expected" failures.
  2. Whether you're over-constrained. Check the coverage report using --visualize. Ideally you'd see 100% coverage, and if not, it's usually because you've assumed too much (thus over-constraining the inputs).
  3. Whether Kani will support all the Rust features involved.
  4. Whether you've started with a tractable problem. (Remember to try setting #[kani::unwind(1)] to force early termination and work up from there.)

Once you've got something working, the next step is to prove more interesting properties than just what Kani covers by default. You accomplish this by adding new assertions (not just in your harness, but also to the code being run). Even if a proof harness has no post-conditions being asserted directly, the assertions encountered along the way can be meaningful proof results by themselves.

Examples of the use of Kani

On the Kani blog, we've documented worked examples of applying Kani:

  1. The Rectangle example of the Rust Book
  2. A Rust standard library CVE
  3. Verifying a part of Firecracker

Developer documentation

Kani is an open source project open to external contributions.

The easiest way to contribute is to report any issue you encounter while using the tool. If you want to contribute to its development, we recommend looking into these issues.

In this chapter, we provide documentation that might be helpful for Kani developers (including external contributors):

  1. Coding conventions.
  2. Useful command-line instructions for Kani/CBMC/Git.
  3. Development setup recommendations for working with cbmc.
  4. Development setup recommendations for working with rustc.
  5. Guide for testing in Kani.
  6. Transition to StableMIR.

NOTE: The developer documentation is intended for Kani developers and not users. At present, the project is under heavy development and some items discussed in this documentation may stop working without notice (e.g., commands or configurations). Therefore, we recommend that users not rely on them.

Coding conventions


We automate most of our formatting preferences. Our CI will run format checkers for PRs and pushes. These checks are required for merging any PR.

For Rust, we use rustfmt which is configured via the rustfmt.toml file. We are also in the process of enabling clippy. Because of that, we still have a couple of lints disabled (see .cargo/config for the updated list).

We also have a bit of C and Python code in our repository. For C we use clang-format and for Python scripts we use autopep8. See .clang-format and pyproject.toml for their configuration.


We recognize that in some cases, the formatting and lint automation may not be applicable to some specific code. In those cases, we usually prefer explicitly allowing exceptions by locally disabling the check, e.g., using an #[allow] annotation instead of disabling a lint at the crate or project level.

All source code files begin with a copyright and license notice. If this is a new file, please add the following notice:

// Copyright Kani Contributors
// SPDX-License-Identifier: Apache-2.0 OR MIT

When modifying a file from another project, please keep their headers as is and append the following notice after them:

// ... existing licensing headers ...

// Modifications Copyright Kani Contributors
// See GitHub history for details.

Note: The comment escape characters will depend on the type of file you are working with. E.g.: for Rust, start the header with //, but for Python, start with #.

We also have automated checks for the copyright notice. There are a few file types where this rule doesn't apply. You can see that list in the copyright-exclude file.

Code for soundness

We are developing Kani to provide assurance that critical Rust components are verifiably free of certain classes of security and correctness issues. Thus, it is critical that we provide a verification tool that is sound. For the class of errors that Kani can verify, we should not produce a "No Error" result if there was in fact an error in the code being verified, i.e., it has no "False Negatives".

Because of that, we bias on the side of correctness. Any incorrect modeling that may trigger an unsound analysis and that cannot be fixed in the short term should be mitigated. Here are a few ways we do that.

Compilation errors

Make sure to add user-friendly errors for constructs that we can't handle. For example, Kani cannot handle the panic unwind strategy, and it will fail compilation if the crate uses this configuration.

In general, it's preferred that error messages follow these guidelines used for rustc development. If the errors are being emitted from kani-compiler, you should use the compiler error message utilities (e.g., the Session::span_err method). However, if the errors are being emitted from kani-driver, you should use the functions provided in the util module in kani-driver.

Internal compiler errors

Even though this doesn't provide users the best experience, you are encouraged to add checks in the compiler for any assumptions you make during development. Those checks can be in the form of assert!() or unreachable!() statements. Please provide a meaningful message to help users understand why something failed, and try to explain, at least with a comment, why this is the case.

We don't formally use any specific formal representation of function contract, but whenever possible we do instrument the code with assertions that may represent the function pre- and post-conditions to ensure we are modeling the user code correctly.

Verification errors

In cases where Kani fails to model a certain instruction or local construct that doesn't have a global effect, we encode this failure as a verification error. That is, using codegen_unimplemented(), we generate an assertion failure in place of the construct we are unable to model, which blocks the execution whenever this construct is reached.

This will allow users to verify their crate successfully as long as that construct is not reachable in any harness. If a harness has at least one possible execution path that reaches such construct, Kani will fail the verification, and it will mark all checks, other than failed checks, with UNDETERMINED status.

Create detailed issues for "TODO" tasks

It is OK to add "TODO" comments as long as they don't compromise user experience or the tool correctness. When doing so, please create an issue that captures the task. Add details about the task at hand including any impact to the user. Finally, add the link to the issue that captures the "TODO" task as part of your comment.


// TODO: This function assumes type cannot be ZST. Check if that's always the case.
assert!(!typ.is_zst(), "Unexpected ZST type");

Performant but readable

We aim at writing code that is performant but also readable and easy to maintain. Avoid compromising the code quality if the performance gain is not significant.

Here are few tips that can help the readability of your code:

  • Sort match arms, enum variants, and struct fields alphabetically.
  • Prefer concise but meaningful names.
  • Prefer exhaustive matches.
  • Prefer declarative over imperative programming.

Working with CBMC

This section describes how to access more advanced CBMC options from Kani.

CBMC arguments

Kani is able to handle common CBMC arguments as if they were its own (e.g., --default-unwind <n>), but sometimes it may be necessary to use CBMC arguments which are not handled by Kani.

To pass additional arguments for CBMC, you pass --cbmc-args to Kani. Note that this "switches modes" from Kani arguments to CBMC arguments: Any arguments that appear after --cbmc-args are considered to be CBMC arguments, so all Kani arguments must be placed before it.

Thus, the command line format to invoke cargo kani with CBMC arguments is:

cargo kani [<kani-args>]* --cbmc-args [<cbmc-args>]*

NOTE: In cases where CBMC is not expected to emit a verification output, you have to use Kani's argument --output-format old to turn off the post-processing of output from CBMC.

Individual loop bounds

Setting --default-unwind <n> affects every loop in a harness. Once you know a particular loop is causing trouble, sometimes it can be helpful to provide a specific bound for it.

In the general case, specifying just the highest bound globally for all loops shouldn't cause any problems, except that the solver may take more time because all loops will be unwound to the specified bound.

In situations where you need to optimize for the solver, individual bounds for each loop can be provided on the command line. To do so, we first need to know the labels assigned to each loop with the CBMC argument --show-loops:

# kani src/ --output-format old --cbmc-args --show-loops
Loop _RNvCs6JP7pnlEvdt_3lib17initialize_prefix.0:
  file ./src/ line 11 column 5 function initialize_prefix

Loop _RNvMs8_NtNtCswN0xKFrR8r_4core3ops5rangeINtB5_14RangeInclusivejE8is_emptyCs6JP7pnlEvdt_3lib.0:
  file $RUST/library/core/src/ops/ line 540 column 9 function std::ops::RangeInclusive::<Idx>::is_empty

Loop gen-repeat<[u8; 10]::16806744624734428132>.0:

This command shows us the labels of the loops involved. Note that, as mentioned in CBMC arguments, we need to use --output-format old to avoid post-processing the output from CBMC.

NOTE: At the moment, these labels are constructed using the mangled name of the function and an index. Mangled names are likely to change across different versions, so this method is highly unstable.

Then, we can use the CBMC argument --unwindset label_1:bound_1,label_2:bound_2,... to specify an individual bound for each loop as follows:

kani src/ --cbmc-args --unwindset _RNvCs6JP7pnlEvdt_3lib17initialize_prefix.0:12

Working with rustc

Kani is developed on top of the Rust compiler, which is not distributed on crates.io and depends on bootstrapping mechanisms to properly build its components. Thus, our dependencies on rustc crates are not declared in our Cargo.toml.

Below are a few hacks that will make it easier to develop on top of rustc.

Code analysis for rustc definitions

IDEs rely on cargo to find dependencies and sources to provide proper code analysis and code completion. In order to get these features working for rustc crates, you can do the following:


Add the following to the rust-analyzer extension settings in settings.json:

    "rust-analyzer.rustc.source": "discover",
    "": "workspace_and_dependencies",

Ensure that any packages that use rustc data structures have the following lines set in their Cargo.toml:

[package.metadata.rust-analyzer]
# This package uses rustc crates.
rustc_private = true

You may also need to install the rustc-dev package using rustup

rustup toolchain install nightly --component rustc-dev

Debugging in VS code

To debug Kani in VS code, first install the CodeLLDB extension. Then add the following lines at the start of the main function (see the CodeLLDB manual for details):

    let url = format!(
        "vscode://vadimcn.vscode-lldb/launch/config?{{'request':'attach','pid':{}}}",
        std::process::id()
    );
    std::process::Command::new("code").arg("--open-url").arg(url).output().unwrap();
    std::thread::sleep(std::time::Duration::from_secs(2)); // Wait for the debugger to attach

Note that pretty printing for the Rust nightly toolchain (which Kani uses) is not very good as of June 2022. For example, a vector may be displayed as vec![{...}, {...}] on nightly Rust, when it would be displayed as vec![Some(0), None] on stable Rust. Hopefully, this will be fixed soon.

CLion / IntelliJ

This is not a great solution, but it works for now. Edit the Cargo.toml of the package that you're working on and add artificial dependencies on the rustc packages that you would like to explore.

# This configuration doesn't exist so it shouldn't affect your build.
# Adjust the path here to point to a local copy of the rust compiler.
# The best way is to use the rustup path. Replace <toolchain> with the
# proper name to your toolchain.
rustc_driver = { path = "~/.rustup/toolchains/<toolchain>/lib/rustlib/rustc-src/rust/compiler/rustc_driver" }
rustc_interface = { path = "~/.rustup/toolchains/<toolchain>/lib/rustlib/rustc-src/rust/compiler/rustc_interface" }

Don't forget to rollback the changes before you create your PR.

EMACS (with use-package)

First, the Cargo.toml and rustup toolchain steps are identical to the VS Code setup. Install the rust-analyzer binary under ~/.cargo/bin/.

On EMACS, add the following to your EMACS lisp files. They will install the necessary packages using the use-package manager.

;; Install LSP
(use-package lsp-mode
  :commands lsp)
(use-package lsp-ui)

;; Install Rust mode
(use-package toml-mode)
(use-package rust-mode)

(setq lsp-rust-server 'rust-analyzer)
(setenv "PATH" (concat (getenv "PATH") ":/home/USER/.cargo/bin/"))

If EMACS complains that it cannot find certain packages, try running M-x package-refresh-contents.

For LSP to be able to find rustc_private files used by Kani, you will need to modify the variable lsp-rust-analyzer-rustc-source. Run M-x customize-variable, type in lsp-rust-analyzer-rustc-source, click Value Menu and change it to Path. Paste in the path to the Cargo.toml of rustc's source code. You can find the source code under .rustup, and the path should end with compiler/rustc/Cargo.toml. Important: make sure that this rustc is the same version and architecture as what Kani uses. If not, LSP features like definition lookup may break.

This ends the basic install for EMACS. You can test your configuration with the following steps.

  1. Open up a Rust file with at least one rustc_private import.
  2. Activate LSP mode with M-x lsp.
  3. When asked about the root of the project, pick one of them. Make sure that whichever root you pick has a Cargo.toml with rustc_private=true added.
  4. If LSP asks if you want to watch all files, select yes. For less powerful machines, you may want to adjust that later.
  5. On the file with rustc_private imports, do the following. If both work, then you are set up.
    • Hover mouse over the rustc_private import. If LSP is working, you should get information about the imported item.
    • With text cursor over the same rustc_private import, run M-x lsp-find-definition. This should jump to the definition within rustc's source code.

LSP mode can integrate with flycheck for instant error checking and company for auto-complete. Consider adding the following to the configuration.

(use-package flycheck
  :hook (prog-mode . flycheck-mode))

(use-package company
  :hook (prog-mode . company-mode))

The clippy linter can be added by changing the LSP install to:

(use-package lsp-mode
  :commands lsp
  :custom
  (lsp-rust-analyzer-cargo-watch-command "clippy"))

Finally, lsp-mode can run rust-analyzer via TRAMP for remote development. We found this way of using rust-analyzer to be unstable as of 2022-06. If you want to give it a try, you will need to register a new LSP client for that remote with the following code.

(lsp-register-client
 (make-lsp-client
	:new-connection (lsp-tramp-connection "/full/path/to/remote/machines/rust-analyzer")
	:major-modes '(rust-mode)
	:remote? t
	:server-id 'rust-analyzer-remote))

For further details, please see

Custom rustc

There are a few reasons why you may want to use your own copy of rustc. E.g.:

  • Enable more verbose logs.
  • Use a debug build to allow you to step through rustc code.
  • Test changes to rustc.

We will assume that you already have a Kani setup and that the variable KANI_WORKSPACE contains the path to your Kani workspace.

It's highly recommended that you start from the commit that corresponds to the current rustc version from your workspace. To get that information, run the following command:

cd ${KANI_WORKSPACE} # Go to your Kani workspace.
rustc --version # This will print the commit id. Something like:
# rustc 1.60.0-nightly (0c292c966 2022-02-08)
#                       ^^^^^^^^^ this is used as the ${COMMIT_ID} below
# E.g.: export COMMIT_ID=0c292c966

First, you need to clone and build stage 2 of the compiler. You should tweak the configuration to satisfy your use case.

git clone https://github.com/rust-lang/rust.git
cd rust
git checkout ${COMMIT_ID:?"Missing rustc commit id"}
./configure --enable-extended --tools=src,rustfmt,cargo --enable-debug
./x.py build -i --stage 2

Now create a custom toolchain (here we name it custom-toolchain):

# Use x86_64-apple-darwin for macOS
rustup toolchain link custom-toolchain build/x86_64-unknown-linux-gnu/stage2
cp build/x86_64-unknown-linux-gnu/stage2-tools-bin/* build/x86_64-unknown-linux-gnu/stage2/bin/

Finally, override the current toolchain in your kani workspace and rebuild kani:

rustup override set custom-toolchain
cargo clean
cargo build-dev

Rust compiler utilities to debug kani-compiler

Enable rustc logs

In order to enable logs, you can just define the RUSTC_LOG variable, as documented here:

Note that, depending on the level of logs you would like to get (debug and trace are not enabled by default), you'll need to build your own version of rustc as described above. For logs that are related to kani-compiler code, use the KANI_LOG variable.

Debugging type layout

In order to print the type layout computed by the Rust compiler, you can pass the following flag to rustc: -Zprint-type-sizes. This flag can be passed to kani or cargo kani by setting the RUSTFLAGS environment variable.

RUSTFLAGS=-Zprint-type-sizes kani

When enabled, the compiler will print messages that look like:

print-type-size type: `std::option::Option<bool>`: 1 bytes, alignment: 1 bytes
print-type-size     variant `Some`: 1 bytes
print-type-size         field `.0`: 1 bytes
print-type-size     variant `None`: 0 bytes

Inspecting the MIR

You can easily visualize the MIR that is used as an input to code generation by setting the Rust flag --emit mir. I.e.:

RUSTFLAGS=--emit=mir kani

The compiler will generate a few files, but we recommend looking at the files that have the suffix kani.mir. Those files will include the entire MIR collected by our reachability analysis, including functions from all dependencies and the std library. One limitation is that we dump one copy of the MIR for each specialization of a generic function, even though the MIR body itself doesn't change.

Transition to StableMIR

We have partnered with the Rust compiler team in the initiative to introduce stable APIs to the compiler that can be used by third-party tools, which is known as the Stable MIR Project, or just StableMIR. This means that we are starting to use the new APIs introduced by this project as is, despite them not being stable yet.

StableMIR APIs

For now, the StableMIR APIs are exposed as a crate in the compiler named stable_mir. This crate includes the definition of structures and methods to be stabilized, which are expected to become the stable APIs in the compiler. To reduce the migration burden, these APIs are somewhat close to the original compiler interfaces. However, some changes have been made to make these APIs cleaner and easier to use.

For example:

  1. The usage of the compiler context (aka TyCtxt) is transparent to the user. The StableMIR implementation caches this context in a thread local variable, and retrieves it whenever necessary.
    • Because of that, code that uses the StableMIR has to be invoked inside a run call.
  2. The DefId has been specialized into multiple types, making its usage less error prone. E.g.: FnDef represents the definition of a function, while StaticDef is the definition of a static variable.
    • Note that the same DefId may be mapped to different definitions according to its context. For example, an InstanceDef and a FnDef may represent the same function definition.
  3. Methods that used to be exposed as part of TyCtxt are now part of a type. For example, the function TyCtxt.instance_mir is now Instance::body.
  4. There is no need for explicit instantiation (monomorphization) of items from an Instance::body. This method already instantiates all types and resolves all constants before converting the body to stable APIs.


Since the new APIs require converting internal data to a stable representation, the APIs were also designed to avoid needless conversions, and to allow extra information to be retrieved on demand.

For example, Ty is just an identifier, while TyKind is a structure that can be retrieved via the Ty::kind method. TyKind is a more structured object, so it is only generated when the kind method is invoked. Since this translation is not cached, many of the functions that the Rust compiler used to expose on Ty are now only available on TyKind; users should cache the TyKind themselves to avoid needless translations.
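The pattern can be illustrated with a self-contained sketch (the names Ty, TyKind, and Context below are illustrative stand-ins, not the actual stable_mir definitions): the identifier is cheap to copy, and the structured kind is built on demand each time it is requested.

```rust
use std::collections::HashMap;

// Illustrative stand-ins, not the real `stable_mir` types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Ty(usize);

#[derive(Clone, Debug, PartialEq)]
enum TyKind {
    Bool,
    Ref(Ty),
}

struct Context {
    // Plays the role of the compiler's interned type data.
    kinds: HashMap<usize, TyKind>,
}

impl Context {
    // Translation happens on every call; nothing is cached on `Ty` itself,
    // so callers that need `TyKind` repeatedly should cache it themselves.
    fn kind(&self, ty: Ty) -> TyKind {
        self.kinds[&ty.0].clone()
    }
}

fn main() {
    let mut kinds = HashMap::new();
    kinds.insert(0, TyKind::Bool);
    kinds.insert(1, TyKind::Ref(Ty(0)));
    let cx = Context { kinds };

    let t = Ty(1); // just an identifier: Copy, one word
    // The structured representation is produced only when asked for:
    let cached = cx.kind(t); // user-side caching avoids repeated translation
    assert_eq!(cached, TyKind::Ref(Ty(0)));
    println!("{:?}", cached);
}
```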

From our initial experiments with the transition of the reachability algorithm to StableMIR, there is a small performance penalty for using StableMIR over the internal Rust compiler APIs. However, the StableMIR APIs are still fairly efficient, and the penalty did not impact overall compilation time.

Interface with internal APIs

To reduce the burden of migrating to StableMIR, and to allow StableMIR to be used together with internal APIs, there are two helpful methods to convert StableMIR constructs to internal rustc and back:

  • rustc_internal::internal(): Convert a Stable item into an internal one.
  • rustc_internal::stable(): Convert an internal item into a Stable one.

Both of these methods live in the rustc_internal module of the compiler's rustc_smir crate. Note that there is no plan to stabilize these methods, and there is no guarantee on their support and coverage.

The conversion is not implemented for all items, and some conversions may be incomplete. Please proceed with caution when using these methods.

Besides that, do not invoke any other rustc_smir methods except run; this crate's methods are not meant to be invoked externally. Note that run itself will eventually be replaced by a Stable driver.

Creating and modifying StableMIR items

For now, StableMIR should only be used to get information from the compiler. Do not try to create or modify items directly, as it may not work. This may result in incorrect behavior or an internal compiler error (ICE).

Naming conventions in Kani

As we adopt StableMIR, we would like to introduce a few conventions to make the code easier to maintain. Whenever there is a name conflict, for example, Ty or codegen_ty, use a suffix to indicate which API you are using: Stable for StableMIR and Internal for rustc internal APIs.

A module should default its naming to either Stable or Internal APIs. That is, modules that have been migrated to StableMIR don't need to add the Stable suffix to stable items, while those that haven't should add the Stable suffix, with no Internal suffix needed.

For example, the codegen::typ module will likely include methods:

codegen_ty(&mut self, Ty) and codegen_ty_stable(&mut self, TyStable) to handle internal and stable APIs.

Command cheat sheets

Development work in the Kani project depends on multiple tools. Regardless of your familiarity with the project, the commands below may be useful for development purposes.



# Error "'rustc' panicked at 'failed to lookup `SourceFile` in new context'"
# or similar error? Cleaning artifacts might help.
# Otherwise, comment out the line below.
cargo clean
cargo build-dev


# Full regression suite
# Delete regression test caches (Linux)
rm -r build/x86_64-unknown-linux-gnu/tests/
# Delete regression test caches (macOS)
rm -r build/x86_64-apple-darwin/tests/
# Test suite run (we can only run one at a time)
# cargo run -p compiletest -- --suite ${suite} --mode ${mode}
cargo run -p compiletest -- --suite kani --mode kani
# Build documentation
cd docs


These can help understand what Kani is generating or encountering on an example or test file:

# Enable `debug!` macro logging output when running Kani:
kani --debug
# Use KANI_LOG for a finer grain control of the source and verbosity of logs.
# E.g.: The command below will print all logs from the kani_middle module.
KANI_LOG="kani_compiler::kani_middle=trace" kani
# Keep CBMC Symbol Table and Goto-C output (.json and .goto)
kani --keep-temps
# Generate "C code" from CBMC IR (.c)
kani --gen-c
# Generate a ${INPUT}.kani.mir file with a human friendly MIR dump
# for all items that are compiled to the respective goto-program.
RUSTFLAGS="--emit mir" kani ${INPUT}.rs

The KANI_REACH_DEBUG environment variable can be used to debug Kani's reachability analysis. If defined, Kani will generate a DOT graph ${INPUT}.dot with the graph traversed during reachability analysis. If defined and not empty, the graph will be filtered to end at functions that contain the substring from KANI_REACH_DEBUG.

Note that this will only work on debug builds.
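The filtering idea can be sketched as follows (illustrative only, not Kani's actual implementation): starting from every function whose name contains the debug substring, walk the call graph backwards and keep only the nodes that reach a match.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Illustrative sketch of the filtering described above: keep only the part
// of the call graph that connects to a function whose name contains the
// debug substring.
fn filter_graph(edges: &[(&str, &str)], substring: &str) -> HashSet<String> {
    // Reverse adjacency: callee -> callers.
    let mut rev: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(from, to) in edges {
        rev.entry(to).or_default().push(from);
    }
    // Start from every function matching the substring, walk backwards.
    let mut keep: HashSet<String> = HashSet::new();
    let mut queue: VecDeque<&str> = edges
        .iter()
        .flat_map(|(a, b)| [*a, *b])
        .filter(|name| name.contains(substring))
        .collect();
    while let Some(node) = queue.pop_front() {
        if keep.insert(node.to_string()) {
            if let Some(callers) = rev.get(node) {
                queue.extend(callers.iter().copied());
            }
        }
    }
    keep
}

fn main() {
    let edges = [("main", "helper"), ("helper", "target_fn"), ("main", "other")];
    let kept = filter_graph(&edges, "target");
    // Everything on a path to `target_fn` is kept; unrelated nodes are dropped.
    assert!(kept.contains("main") && kept.contains("helper") && kept.contains("target_fn"));
    assert!(!kept.contains("other"));
    println!("kept {} nodes", kept.len());
}
```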

# Generate a DOT graph ${INPUT}.dot with the graph traversed during reachability analysis

# Generate a DOT graph ${INPUT}.dot with the sub-graph traversed during the reachability analysis
# that connects to the given target.


# See CBMC IR from a C file:
goto-cc file.c -o file.out
goto-instrument --print-internal-representation file.out
# or (for json symbol table)
cbmc --show-symbol-table --json-ui file.out
# or (an alternative concise format)
cbmc --show-goto-functions file.out
# Recover C from goto-c binary
goto-instrument --dump-c file.out > file.gen.c


The Kani project follows the squash and merge option for pull request merges. As a result:

  1. The title of your pull request will become the main commit message.
  2. The messages from commits in your pull request will appear by default as a bulleted list in the main commit message body.

But the main commit message body is editable at merge time, so you don't have to worry about "typo fix" messages because these can be removed before merging.

# Set up your git fork
git remote add fork https://github.com/${USER}/kani.git
# Reset everything. Don't have any uncommitted changes!
git clean -xffd
git submodule foreach --recursive git clean -xffd
git submodule update --init
# Need to update local branch (e.g. for an open pull request?)
git fetch origin
git merge origin/main
# Or rebase, but that requires a force push,
# and because we squash and merge, an extra merge commit in a PR doesn't hurt.
# Checkout a pull request locally without the github cli
git fetch origin pull/$ID/head:pr/$ID
git switch pr/$ID
# Push to someone else's pull request
git remote add $USER $GIT_URL_FOR_THAT_USER
# Search only git-tracked files
git grep codegen_panic
# Accidentally commit to main?
# "Move" commit to a branch:
git checkout -b my_branch
# Fix main:
git branch --force main origin/main

cargo kani assess

Assess is an experimental new feature to gather data about Rust crates, to aid the start of proof writing.

In the short-term, assess collects and dumps tables of data that may help Kani developers understand what's needed to begin writing proofs for another project. For instance, assess may help answer questions like:

  1. Does Kani successfully build all of the crates involved in this project? If not, why not?
  2. Does Kani support all the Rust language features necessary to do verification with this project? If not, which are most important?

In the long-term, assess will become a user-facing feature, and help Kani users get started writing proofs. We expect that users will have the same questions as above, but in the long term, hopefully the answers to those trend towards an uninteresting "yes." So the new questions might be:

  1. Is this project ready for verification? Projects need to be reasonably well-tested first. Our operating hypothesis is that code currently covered by unit tests is the code that could become covered by proofs.
  2. How much of a given project (consisting of multiple packages or workspaces), or which of the user's projects, might be verifiable? If a user wants to start trying Kani, but they have the choice of several different packages where they might try, we can help find the package with the lowest-hanging fruit.
  3. Given a package, where in that package's code should the user look, in order to write the first (or next) proof?

These long-term goals are only "hinted at" with the present experimental version of assess. Currently, we only get as far as finding out which tests successfully verify (concretely) with Kani. This might indicate tests that could be generalized and converted into proofs, but we currently don't do anything to group, rank, or otherwise heuristically prioritize what might be most "interesting." (For instance, we'd like to eventually compute coverage information and use that to help rank the results.) As a consequence, the output of the tool is very hard to interpret, and likely not (yet!) helpful to new or potential Kani users.

Using Assess

To assess a package, run:

cargo kani --enable-unstable assess

As a temporary hack (arguments shouldn't work like this), to assess a single cargo workspace, run:

cargo kani --enable-unstable --workspace assess

To scan a collection of workspaces or packages that are not part of a shared workspace, run:

cargo kani --enable-unstable assess scan

The only difference between 'scan' and 'regular' assess is how the packages to build are located. All versions of assess produce the same output and metrics. Assess will normally build just like cargo kani or cargo build, whereas scan will find all cargo packages beneath the current directory, even in unrelated workspaces. Thus, 'scan' may be helpful when the user has a choice of packages and is looking for the easiest one to get started with (in addition to the Kani developer use-case of aggregating statistics across many packages).

(Tip: Assess may need to run for a while, so try using screen, tmux or nohup to avoid terminating the process if, for example, an ssh connection breaks. Some tests can also consume huge amounts of RAM when run through Kani, so you may wish to use ulimit -v 6000000 to prevent any process from using more than 6GB. You can also limit the number of concurrent tests by providing e.g. -j 4, currently as a prepended argument, like --enable-unstable or --workspace in the examples above.)

What assess does

Assess builds all the packages requested in "test mode" (i.e. --tests), and runs all the same tests that cargo test would, except through Kani. This gives end-to-end assurance we're able to actually build and run code from these packages, skipping nothing of what the verification process would need, except that the harnesses don't have any nondeterminism (kani::any()) and consequently don't "prove" much. The interesting signal comes from what tests cannot be analyzed by Kani due to unsupported features, performance problems, crash bugs, or other issues that get in the way.

Currently, assess forces termination by using unwind(1) on all tests, so many tests will fail with unwinding assertions.

Current Assess Results

Assess produces a few tables of output (both visually in the terminal, and in a more detailed json format) so far:

Unsupported features

 Unsupported feature           |   Crates | Instances
                               | impacted |    of use
 caller_location               |       71 |       239
 simd_bitmask                  |       39 |       160

The unsupported features table aggregates information about features that Kani does not yet support. These correspond to uses of codegen_unimplemented in the kani-compiler, and appear as warnings during compilation.

Unimplemented features are not necessarily actually hit by (dynamically) reachable code, so an immediate future improvement on this table would be to count the features actually hit by failing test cases, instead of just those features reported as existing in code by the compiler. In other words, the current unsupported features table is not what we want to see, in order to perfectly prioritize implementing these features, because we may be counting features that no proof would ever hit. A perfect signal here isn't possible: there may be code that looks statically reachable, but is never dynamically reachable, and we can't tell. But we can use test coverage as an approximation: well-tested code will hopefully cover most of the dynamically reachable code. The operating hypothesis of assess is that code covered by tests is code that could be covered by proof, and so measuring unsupported features by those actually hit by a test should provide a better "signal" about priorities. Implicitly deprioritizing unsupported features because they aren't covered by tests may not be a bug, but a feature: we may simply not want to prove anything about that code, if it hasn't been tested first, and so adding support for that feature may not be important.

A few notes on terminology:

  1. "Crates impacted" here means "packages in the current workspace (or scan) where the building of that package (and all of its dependencies) ultimately resulted in this warning." For example, if only assessing a single package (not a workspace) this could only be 1 in this column, regardless of the number of dependencies.
  2. "Instances of use" likewise means "total instances found while compiling this package's tests and all the (reachable) code in its dependencies."
  3. These counts are influenced by (static) reachability: if code is not potentially reachable from a test somehow, it will not be built and will not be counted.

Test failure reasons

 Reason for failure           | Number of tests
 unwind                       |              61
 none (success)               |               6
 assertion + overflow         |               2

The test failure reasons table indicates why, when assess ran a test through Kani, it failed to verify. Notably:

  1. Because we force termination with unwind(1), we expect unwind to rank highly.
  2. We do report the number of tests that succeed in this table, to aid understanding of how well things went overall.
  3. The reported reason is the "property class" of the CBMC property that failed. So assertion means an ordinary assert!() was hit (or something else with this property class).
  4. When multiple properties fail, they are aggregated with +, such as assertion + overflow.
  5. Currently this table does not properly account for should_fail tests, so assertion may actually be "success": the test should hit an assertion and did.
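For instance, the aggregation in point 4 amounts to joining the distinct failing property classes (a sketch, not the actual assess code):

```rust
fn main() {
    // Hypothetical failed-property classes collected from one test run.
    let mut classes = vec!["overflow", "assertion"];
    // Sort and deduplicate so the aggregated reason is deterministic.
    classes.sort();
    classes.dedup();
    let reason = classes.join(" + ");
    assert_eq!(reason, "assertion + overflow");
    println!("{reason}");
}
```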

Promising test cases

 Candidate for proof harness                           | Location
 float::tests::f64_edge_cases                          | src/
 float::tests::f32_edge_cases                          | src/
 integer::tests::test_integers                         | src/

This table is the most rudimentary so far, but is the core of what long-term assess will help accomplish. Currently, this table just presents (with paths displayed in a clickable manner) the tests that successfully "verify" with Kani. These might be good candidates for turning into proof harnesses. This list is presently unordered; the next step for improving it would be to find even a rudimentary way of ranking these test cases (e.g. perhaps by code coverage).

How Assess Works

kani-compiler emits *.kani-metadata.json for each target it builds. This format can be found in the kani_metadata crate, shared by kani-compiler and kani-driver. This is the starting point for assess.

Assess obtains this metadata by essentially running cargo kani:

  1. With --all-features turned on
  2. With unwind always set to 1
  3. In test mode, i.e. --tests
  4. With test-case reachability mode. Normally Kani looks for proof harnesses and builds only those. Here we switch to building only the test harnesses instead.

Assess starts by getting all the information from these metadata files. This is enough by itself to construct a rudimentary "unsupported features" table. But assess also uses it to discover all the test cases, and (instead of running proof harnesses) it then runs all these test harnesses under Kani.

Assess produces a second metadata format, called (unsurprisingly) "assess metadata" (found in kani-driver under src/assess/). This format records the results of what assess does.

This metadata can be written to a json file by providing --emit-metadata <file> to assess. Likewise, scan can be told to write out this data with the same option.

Assess metadata is an aggregatable format. It does not apply to just one package, as assess can work on a workspace of packages. Likewise, scan uses and produces the exact same format, across multiple workspaces.

So far all assess metadata comes in the form of "tables" which are built with TableBuilder<T: TableRow>. This is documented further in src/assess/
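The builder pattern described above can be sketched as follows (illustrative only; the real TableBuilder and TableRow in kani-driver differ in detail, and FeatureRow here is a hypothetical row type): rows with the same key are merged, which is what makes the format aggregatable across packages and workspaces.

```rust
// Illustrative sketch of the table-building pattern, not the actual code.
trait TableRow {
    fn key(&self) -> String;
    fn merge(&mut self, other: Self);
}

struct TableBuilder<T: TableRow> {
    rows: Vec<T>,
}

impl<T: TableRow> TableBuilder<T> {
    fn new() -> Self {
        TableBuilder { rows: Vec::new() }
    }

    // Rows with the same key are merged; distinct keys are appended.
    fn add(&mut self, row: T) {
        if let Some(existing) = self.rows.iter_mut().find(|r| r.key() == row.key()) {
            existing.merge(row);
        } else {
            self.rows.push(row);
        }
    }
}

// Hypothetical row type, loosely modeled on the unsupported-features table.
struct FeatureRow {
    feature: String,
    instances: usize,
}

impl TableRow for FeatureRow {
    fn key(&self) -> String {
        self.feature.clone()
    }
    fn merge(&mut self, other: Self) {
        self.instances += other.instances;
    }
}

fn main() {
    let mut table = TableBuilder::new();
    table.add(FeatureRow { feature: "caller_location".into(), instances: 3 });
    table.add(FeatureRow { feature: "caller_location".into(), instances: 2 });
    // The two rows were merged into one aggregated entry.
    assert_eq!(table.rows.len(), 1);
    assert_eq!(table.rows[0].instances, 5);
    println!("{}: {}", table.rows[0].feature, table.rows[0].instances);
}
```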

Using Assess on the top-100 crates

There is a script in the Kani repo for this purpose.

This will clone the top-100 crates to /tmp/top-100-experiment and run assess scan on them:


If you'd like to preserve the results, you can direct scan to use a different directory with an environment variable:

ASSESS_SCAN="~/top-100-experiment" ./scripts/exps/

To re-run the experiment, it suffices to be in the experiment directory:

cd ~/top-100-experiment && ~/kani/scripts/exps/


Testing in Kani is carried out in multiple ways. There are at least two very good reasons to do it:

  1. Software regression: A regression is a type of bug that appears after a change is introduced where a feature that was previously working has unexpectedly stopped working.

    Regression testing allows one to prevent a software regression from happening by running a comprehensive set of working tests before any change is committed to the project.

  2. Software metrics: A metric is a measure of software characteristics which are quantitative and countable. Metrics are particularly valuable for project management purposes.

We recommend reading our section on Regression Testing if you're interested in Kani development. To run Kani on a large number of remotely hosted crates, please see Repository Crawl.

Regression testing

Kani relies on an extensive range of tests to perform regression testing. Regression testing can be executed by running the command:


The script executes different testing commands, which we classify into several categories; see below for a description of each one.

Note that regression testing is run whenever a Pull Request is opened, updated or merged into the main branch. Therefore, it's a good idea to run regression testing locally before submitting a Pull Request for Kani.

Kani testing suites

The Kani testing suites are the main testing resource for Kani. In most cases, the tests contained in the Kani testing suites are single Rust files that are run using the following command:

kani <options>

Command-line options can be passed to the test by adding a special comment to the file. See testing options for more details.

In particular, the Kani testing suites are composed of:

  • kani: The main testing suite for Kani. The test is a single Rust file that's run through Kani. In general, the test passes if verification with Kani is successful, otherwise it fails.
  • firecracker: Works like kani but contains tests inspired by Firecracker code.
  • prusti: Works like kani but contains tests from the Prusti tool.
  • smack: Works like kani but contains tests from the SMACK tool.
  • kani-fixme: Similar to kani, but runs ignored tests from the kani testing suite (i.e., tests with fixme or ignore in their name). Allows us to detect when a previously not supported test becomes supported. More details in "Fixme" tests.
  • expected: Similar to kani but with an additional check which ensures that lines appearing in *.expected files appear in the output generated by kani.
  • ui: Works like expected, but focuses on the user interface (e.g., warnings) instead of the verification output.
  • cargo-kani: This suite is designed to test the cargo-kani command. As such, this suite works with packages instead of single Rust files. Arguments can be specified in the Cargo.toml configuration file. Similar to the expected suite, we look for *.expected files for each harness in the package.
  • cargo-ui: Similar to cargo-kani, but focuses on the user interface like the ui test suite.
  • script-based-pre: This suite is useful to execute script-based tests, and also allows checking expected output and exit codes after running them. The suite uses the exec mode, described in more detail here.

We've extended compiletest (the Rust compiler testing framework) to work with these suites. That way, we take advantage of all compiletest features (e.g., parallel execution).

Testing stages

The process of running single-file tests is split into three stages:

  • check: This stage uses the Rust front-end to detect if the example is valid Rust code.
  • codegen: This stage uses the Kani back-end to determine if we can generate GotoC code.
  • verify: This stage uses CBMC to obtain a verification result.

If a test fails, the error message will include the stage where it failed:

error: test failed: expected check success, got failure

When working on a test that's expected to fail, there are two options to indicate an expected failure. The first one is to add a comment

// kani-<stage>-fail

at the top of the test file, where <stage> is the stage where the test is expected to fail.

The other option is to use the predicate kani::expect_fail(cond, message) included in the Kani library. The cond in kani::expect_fail is a condition that you expect not to hold during verification. The testing framework expects one EXPECTED FAIL message in the verification output for each use of the predicate.

NOTE: kani::expect_fail is only useful to indicate failure in the verify stage; errors in other stages will be considered testing failures.
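A sketch of what such a test might look like; the kani module below is a plain-Rust stand-in (so the example runs outside Kani) that only mimics the EXPECTED FAIL message behavior described above, and expect_fail_message is a hypothetical helper added for observability.

```rust
// Stand-in for the real `kani` library so this example runs as plain Rust;
// under Kani you would use the real library instead.
mod kani {
    // Hypothetical helper: formats the message the framework looks for.
    pub fn expect_fail_message(cond: bool, message: &str) -> Option<String> {
        if !cond {
            Some(format!("EXPECTED FAIL: {message}"))
        } else {
            None
        }
    }

    // `expect_fail(cond, message)`: `cond` is a condition that you expect
    // NOT to hold during verification; the testing framework expects one
    // EXPECTED FAIL message per use of the predicate.
    pub fn expect_fail(cond: bool, message: &str) {
        if let Some(msg) = expect_fail_message(cond, message) {
            println!("{msg}");
        }
    }
}

fn main() {
    let x: u32 = 5;
    // We expect `x < 5` to be falsified: for x = 5 it does not hold.
    kani::expect_fail(x < 5, "x is at least 5");
}
```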

Testing options

Many tests will require passing command-line options to Kani. These options can be specified in single Rust files by adding a comment at the top of the file:

// kani-flags: <options>

For example, to use an unwinding value of 4 in a test, we can write:

// kani-flags: --default-unwind 4

For cargo-kani tests, the preferred way to pass command-line options is adding them to Cargo.toml. See Usage on a package for more details.

"Fixme" tests

Any test containing fixme or ignore in its name is considered a test that's not supported for some reason (i.e., it returns an unexpected verification result).

However, "fixme" tests included in the kani folder are run via the kani-fixme testing suite. kani-fixme works on test files from kani but:

  1. Only runs tests whose name contains fixme or ignore (ignoring the rest).
  2. The expected outcome is failure. In other words, a test is successful if it fails.

We welcome contributions with "fixme" tests which demonstrate a bug or unsupported feature in Kani. Ideally, the test should include some comments regarding:

  • The expected result of the test.
  • The actual result of the test (e.g., interesting parts of the output).
  • Links to related issues.

To include a new "fixme" test in kani you only need to ensure its name contains fixme or ignore. If your changes to Kani cause a "fixme" test to become supported, you only need to rename it so the name does not contain fixme nor ignore.
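The naming rule can be expressed as a simple predicate (a sketch, not the actual compiletest code):

```rust
// Sketch of the naming rule: a test belongs to the kani-fixme suite
// if its file name contains "fixme" or "ignore".
fn is_fixme_test(name: &str) -> bool {
    name.contains("fixme") || name.contains("ignore")
}

fn main() {
    assert!(is_fixme_test("fixme_slice_index.rs"));
    assert!(is_fixme_test("ignore_overflow.rs"));
    // Renaming a test so it contains neither word moves it back
    // to the regular kani suite.
    assert!(!is_fixme_test("slice_index.rs"));
    println!("naming rule checks passed");
}
```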

Rust unit tests

These tests follow the Rust unit testing style.

At present, Kani runs unit tests from the following packages:

  • cprover_bindings
  • kani-compiler
  • cargo-kani

Python unit tests

We use the Python unit testing framework to test the CBMC JSON parser.

Script-based tests

These are tests which are run using scripts. Scripting gives us the ability to perform ad-hoc checks that cannot be done otherwise. They are currently used for:

  • Standard library codegen
  • Firecracker virtio codegen
  • Diamond dependency

In fact, most of them are equivalent to running cargo kani and performing checks on the output. The downside to scripting is that these tests will always be run, even if there have not been any changes since the last time the regression was run.

NOTE: With the addition of the exec mode for compiletest (described below), we'll be migrating these script-based tests to other suites using the exec mode. The exec mode allows us to take advantage of compiletest features while executing script-based tests (e.g., parallel execution).

The exec mode

The exec mode in compiletest allows us to execute script-based tests, in addition to checking expected output and exit codes after running them.

In particular, tests are expected to be placed directly under the test directory (e.g., script-based-pre) in a directory with a config.yml file, which should contain:

  • script: The path to the script to be executed.
  • expected (optional): The path to the .expected file to use for output comparison.
  • exit_code (optional): The exit code to be returned by executing the script (a zero exit code is expected if not specified).

For example, let's say we want to test the script:

echo "Exiting with code 1!"
exit 1

In this case, we'll create a folder that contains the config.yml file:

script: exit-one.sh
expected: exit-one.expected
exit_code: 1

where exit-one.expected is simply a text file such as:

Exiting with code 1!

If expected isn't specified, the output won't be checked. If exit_code isn't specified, the exec mode will check the exit code was zero.

Note that all paths specified in the config.yml file are local to the test directory, which is the working directory assumed when executing the test. This is meant to avoid problems when executing the test manually.
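The check performed by the exec mode can be sketched like this (illustrative only; the real compiletest implementation differs). The run_exec_test helper and its arguments mirror the script, expected, and exit_code fields of config.yml described above.

```rust
use std::process::Command;

// Sketch of the exec-mode check: run a script, verify its exit code,
// and (optionally) verify that each expected line appears in its output.
fn run_exec_test(script: &str, expected: Option<&str>, exit_code: i32) -> bool {
    let output = Command::new("sh")
        .arg("-c")
        .arg(script)
        .output()
        .expect("failed to run script");
    // `exit_code` defaults to zero when unspecified in config.yml.
    if output.status.code() != Some(exit_code) {
        return false;
    }
    match expected {
        // Each line of the expected file must appear in the output.
        Some(text) => {
            let stdout = String::from_utf8_lossy(&output.stdout);
            text.lines().all(|line| stdout.contains(line))
        }
        None => true, // no `expected` file: output is not checked
    }
}

fn main() {
    // Mirrors the exit-one example: prints a message, exits with code 1.
    let ok = run_exec_test(
        "echo 'Exiting with code 1!'; exit 1",
        Some("Exiting with code 1!"),
        1,
    );
    assert!(ok);
    println!("exec-mode check passed");
}
```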

(Experimental) Testing with a Large Number of Repositories

This section explains how to run Kani on a large number of crates downloaded from git forges. You may want to do this if you are going to test Kani's ability to handle Rust features found in projects out in the wild.

The first half explains how to use data from crates.io to pick targets. The second half explains how to use a script to run Kani on a list of selected repositories.

Picking Repositories

In picking repositories, you may want to select by metrics like popularity, or by the presence of certain features. In this section, we will explain how to select the top crates by download count.

We will use the db-dump method of getting data from crates.io, as it is zero cost to their website and gives us SQL access. To start, have the following programs set up on your computer:

  • docker
  • docker-compose.
  1. Start PostgreSQL. Paste in the following yaml file as docker-compose.yaml. version: '3.3' may need to change.
version: '3.3'
services:
  db:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - crates-data:/var/lib/postgresql/data
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
volumes:
  crates-data:
    driver: local

Then, run the following to start the setup.

docker-compose up -d

Once set up, run docker ps to find the container's name. We will refer to the name as $CONTAINER_NAME from now on.

  1. Download the actual data from crates.io. First, run the following command to get a shell in the container: docker exec -it --user postgres $CONTAINER_NAME bash. Now, run the following to grab and install the data into the database. Please note that this may take a while.

    wget https://static.crates.io/db-dump.tar.gz
    tar -xf db-dump.tar.gz
    psql postgres -f */schema.sql
    psql postgres -f */import.sql
  2. Extract the data. In the same docker shell, run the following to extract the top 1k repositories. Other SQL queries may be used if you want a different criterion.

    \copy (SELECT name, repository, downloads FROM crates
    WHERE repository LIKE 'http%' ORDER BY downloads DESC LIMIT 1000)
    to 'top-1k.csv' csv header;
  3. Clean the data. The above query will capture duplicates and paths that are deeper than the repository root. You can clean these out.

    • URLs from the CSV: cat top-1k.csv | awk -F ',' '{ print $2 }' | grep 'http.*'
    • Remove long paths: sed 's/tree\/master.*$//g'
    • Once processed, you can dedup with sort | uniq

Running the List of Repositories

In this step we will download the list of repositories using a script

Make sure to have Kani ready to run. For that, see the build instructions.

From the repository root, you can run the script with ./scripts/exps/ $URL_LIST_FILE, where $URL_LIST_FILE points to a line-delimited list of URLs you want to run Kani on. Repositories that give warnings or errors can be grepped for with "STDERR Warnings" and "Error exit in" respectively.

Performance comparisons with benchcomp

While Kani includes a performance regression suite, you may wish to test Kani's performance using your own benchmarks or with particular versions of Kani. You can use the benchcomp tool in the Kani repository to run several 'variants' of a command on one or more benchmark suites; automatically parse the results of each of those suites; and take actions or emit visualizations based on those results.

Example use-cases

  1. Run one or more benchmark suites with the current and previous versions of Kani. Exit with a return code of 1 or print a custom summary to the terminal if any benchmark regressed by more than a user-configured amount.
  2. Run benchmark suites using several historical versions of Kani and emit a graph of performance over time.
  3. Run benchmark suites using different SAT solvers, command-line flags, or environment variables.


Benchcomp provides the following features to support your performance-comparison workflow:

  • Automatically copies benchmark suites into fresh directories before running each variant, to ensure that built artifacts do not affect subsequent runtimes
  • Parses the results of different 'kinds' of benchmark suite and combines those results into a single unified format. This allows you to run benchmarks from external repositories, suites of pre-compiled GOTO-binaries, and other kinds of benchmark all together and view their results in a single dashboard.
  • Driven by a single configuration file that can be sent to colleagues or checked into a repository to be used in continuous integration.
  • Extensible, allowing you to write your own parsers and visualizations.
  • Caches all previous runs and allows you to re-create visualizations for the latest run without actually re-running the suites.

Quick start

Here's how to run Kani's performance suite twice, comparing the last released version of Kani with the current HEAD.

git worktree add new HEAD
git worktree add old $(git describe --tags --abbrev=0)

tools/benchcomp/bin/benchcomp --config tools/benchcomp/configs/perf-regression.yaml

This uses the perf-regression.yaml configuration file that we use in continuous integration. After running the suite twice, the configuration file causes benchcomp to terminate with a return code of 1 if any of the benchmarks regressed on metrics such as success (a boolean), solver_runtime, and number_vccs (numerical). Additionally, the config file directs benchcomp to print out a Markdown table that GitHub's CI summary page renders into a table.

The rest of this documentation describes how to modify benchcomp for your own use cases, including writing a configuration file; writing a custom parser for your benchmark suite; and writing a custom visualization to examine the results of a performance comparison.

benchcomp command line

benchcomp is a single command that runs benchmarks, parses their results, combines these results, and emits visualizations. benchcomp also provides subcommands to run these steps individually. Most users will want to invoke benchcomp in one of two ways:

  • benchcomp without any subcommands, which runs the entire performance comparison process as depicted below
  • benchcomp visualize, which runs the visualization step on the results of a previous benchmark run without actually re-running the benchmarks. This is useful when tweaking the parameters of a visualization, for example changing the threshold of what is considered to be a regression.

The subcommands run and collate are also available. The diagram below depicts benchcomp's order of operation.

[Diagram: benchcomp's pipeline. In the benchcomp run stage, suite_1 is run with variants a and b, and suite_2 is run with variants c and d. Each run produces output files that the suite's parser (suite_1_parser.py or suite_2_parser.py) turns into a per-run YAML file (suite_1a.yaml, suite_1b.yaml, suite_2c.yaml, suite_2d.yaml). The benchcomp collate stage combines these files into result.yaml, from which the benchcomp visualize stage produces graph.svg, summary.md, and an 'exit 1 on regression' check.]

Running benchcomp invokes run, collate, and visualize behind the scenes. If you have previously run benchcomp, then running benchcomp visualize will emit the visualizations in the config file using the previous result.yaml.

In the diagram above, two different suites (1 and 2) are both run using two variants, i.e. combinations of command, working directory, and environment variables. Benchmark suite 2 requires a totally different command line from suite 1; for example, suite_1 might contain Kani harnesses invoked through cargo kani, while suite_2 might contain CBMC harnesses invoked through a different tool. Users would therefore define different variants (c and d) for invoking suite_2, and also specify a different parser to parse the results. No matter how different the benchmark suites are, the collate stage combines their results so that they can later be compared.

Example config file

Users must specify the actual suites to run, the parsers used to collect their results, and the visualizations to emit in a file called benchcomp.yaml or a file passed to the -c/--config flag. The next section describes the schema for this configuration file. A run similar to the diagram above might be achieved using the following configuration file:

# Compare a range of Kani and CBMC benchmarks when
# using Cadical versus the default SAT solver

variants:
  variant_a:
    config:
      directory: kani_benchmarks
      command_line: scripts/
      env: {}

  variant_b:
    config:
      directory: kani_benchmarks
      command_line: scripts/
      # This variant uses a hypothetical environment variable that
      # forces Kani to use the cadical SAT solver
      env:
        KANI_SOLVER: cadical

  variant_c:
    config:
      directory: cbmc_benchmarks
      env: {}

  variant_d:
    config:
      directory: cbmc_benchmarks
      env:
        EXTERNAL_SAT_SOLVER: cadical

run:
  suites:
    suite_1:
      parser:
        module: kani_perf
      variants: [variant_a, variant_b]

    suite_2:
      parser:
        module: cbmc_litani_parser
      variants: [variant_c, variant_d]

visualize:
  - type: dump_graph
    out_file: graph.svg

  - type: dump_markdown_results_table
    extra_columns: []

  - type: error_on_regression
    variant_pairs:
    - [variant_a, variant_b]
    - [variant_c, variant_d]

benchcomp configuration file

benchcomp's operation is controlled through a YAML file---benchcomp.yaml by default or a file passed to the -c/--config option. This page lists the different visualizations that are available.


A variant is a single invocation of a benchmark suite. Benchcomp runs several variants so that their performance can be compared later. A variant consists of a command line, working directory, and environment. Benchcomp invokes the command using the operating system environment, updated with the keys and values in env. If any values in env contain strings of the form ${var}, Benchcomp expands them to the value of the environment variable $var.

variants:
  variant_1:
    config:
      command_line: echo "Hello, world"
      directory: /tmp
      env:
        PATH: /my/local/directory:${PATH}
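The ${var} expansion rule described above can be sketched in a few lines of Python. This is an illustration of the substitution behavior, not benchcomp's actual implementation:

```python
import re

def expand_env(value: str, environment: dict) -> str:
    """Replace each ${var} in value with environment's value for var."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda match: environment.get(match.group(1), ""),
                  value)

# Expanding the PATH entry from the variant above
print(expand_env("/my/local/directory:${PATH}", {"PATH": "/usr/bin"}))
# -> /my/local/directory:/usr/bin
```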


After benchcomp has finished parsing the results, it writes them to result.yaml by default. Before visualizing the results (see below), benchcomp can filter them by piping them into an external program.

To filter results before visualizing them, add filters to the configuration file.

filters:
  - command_line: ./scripts/
  - command_line: cat

The value of filters is a list of dicts. Currently the only legal key for each of the dicts is command_line. Benchcomp invokes each command_line in order, passing the results as a JSON file on stdin, and interprets the stdout as a YAML-formatted modified set of results. Filter scripts can emit either YAML (which might be more readable while developing the script), or JSON (which benchcomp will parse as a subset of YAML).
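A filter is just a program that reads results on stdin and writes modified results to stdout. The sketch below is illustrative, not one of benchcomp's scripts: it drops every benchmark whose runtime is below a threshold for all variants. The result structure it assumes (benchmarks mapping to per-variant metrics) is a simplification:

```python
import json

def filter_results(results: dict, min_runtime: float = 1.0) -> dict:
    """Drop benchmarks whose runtime is below min_runtime for every variant."""
    results["benchmarks"] = {
        name: bench
        for name, bench in results.get("benchmarks", {}).items()
        if any(runtime >= min_runtime
               for runtime in bench["metrics"]["runtime"].values())
    }
    return results

# In a real filter script, the results would arrive as JSON on stdin:
#   json.dump(filter_results(json.load(sys.stdin)), sys.stdout)
example = {"benchmarks": {
    "bench_1": {"metrics": {"runtime": {"variant_1": 0.2, "variant_2": 0.3}}},
    "bench_2": {"metrics": {"runtime": {"variant_1": 2.0, "variant_2": 2.4}}},
}}
print(json.dumps(sorted(filter_results(example)["benchmarks"])))
# -> ["bench_2"]
```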

Built-in visualizations

The following visualizations are available; these can be added to the visualize list of benchcomp.yaml.

Detailed documentation for these visualizations follows.


Scatterplot configuration options


Print Markdown-formatted tables displaying benchmark results

For each metric, this visualization prints out a table of benchmarks, showing the value of the metric for each variant, combined with an optional scatterplot.

The 'out_file' key is mandatory; specify '-' to print to stdout.

'extra_columns' can be an empty dict. The sample configuration below assumes that each benchmark result has a 'success' and a 'runtime' metric for both variants, 'variant_1' and 'variant_2'. It adds a 'ratio' column to the table for the 'runtime' metric, and a 'change' column to the table for the 'success' metric. The 'text' lambda is called once for each benchmark, with a single argument: a dict that maps variant names to the value of that variant for a particular metric. The lambda returns a string that is rendered in the benchmark's row in the new column. This allows you to emit arbitrary text or Markdown formatting in response to particular combinations of values for different variants, such as regressions or performance improvements.
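Concretely, the 'ratio' lambda from the sample configuration below is ordinary Python, called once per benchmark with a dict mapping variant names to that benchmark's metric values (the values here are invented):

```python
# The 'ratio' column's text lambda, written out as plain Python. It bolds
# the ratio (with Markdown ** markers) when variant_2 regressed by 1.5x.
ratio = lambda b: (str(b["variant_2"] / b["variant_1"])
                   if b["variant_2"] < (1.5 * b["variant_1"])
                   else "**" + str(b["variant_2"] / b["variant_1"]) + "**")

print(ratio({"variant_1": 10, "variant_2": 5}))   # within threshold: 0.5
print(ratio({"variant_1": 5, "variant_2": 10}))   # regressed, bolded: **2.0**
```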

'scatterplot' takes the values 'off' (default), 'linear' (linearly scaled axes), or 'log' (logarithmically scaled axes).

Sample configuration:

- type: dump_markdown_results_table
  out_file: "-"
  scatterplot: linear
  extra_columns:
    runtime:
    - column_name: ratio
      text: >
        lambda b: str(b["variant_2"]/b["variant_1"])
        if b["variant_2"] < (1.5 * b["variant_1"])
        else "**" + str(b["variant_2"]/b["variant_1"]) + "**"
    success:
    - column_name: change
      text: >
        lambda b: "" if b["variant_2"] == b["variant_1"]
        else "newly passing" if b["variant_2"]
        else "regressed"

Example output:

## runtime

| Benchmark |  variant_1 | variant_2 | ratio |
| --- | --- | --- | --- |
| bench_1 | 5 | 10 | **2.0** |
| bench_2 | 10 | 5 | 0.5 |

## success

| Benchmark |  variant_1 | variant_2 | change |
| --- | --- | --- | --- |
| bench_1 | True | True |  |
| bench_2 | True | False | regressed |
| bench_3 | False | True | newly passing |


Print the YAML-formatted results to a file.

The 'out_file' key is mandatory; specify '-' to print to stdout.

Sample configuration:

- type: dump_yaml
  out_file: '-'


Terminate benchcomp with a return code of 1 if any benchmark regressed.

This visualization checks whether any benchmark regressed from one variant to another. Sample configuration:

- type: error_on_regression
  variant_pairs:
  - [variant_1, variant_2]
  - [variant_1, variant_3]
  checks:
  - metric: runtime
    test: "lambda old, new: new / old > 1.1"
  - metric: passed
    test: "lambda old, new: False if not old else not new"

This says to check whether any benchmark regressed when run under variant_2 compared to variant_1. A benchmark is considered to have regressed if the value of the 'runtime' metric under variant_2 is 10% higher than the value under variant_1. Furthermore, the benchmark is also considered to have regressed if it was previously passing, but is now failing. These same checks are performed on all benchmarks run under variant_3 compared to variant_1. If any of those lambda functions returns True, then benchcomp will terminate with a return code of 1.
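The check semantics can be sketched directly in Python: each test lambda receives the old and new values of a metric and returns True when the benchmark regressed (the numeric values below are invented):

```python
# The two test lambdas from the sample configuration, as plain Python
runtime_regressed = lambda old, new: new / old > 1.1
passed_regressed = lambda old, new: False if not old else not new

# 10% runtime threshold from the configuration above
assert runtime_regressed(10.0, 11.5)        # 15% slower: regression
assert not runtime_regressed(10.0, 10.5)    # 5% slower: within threshold

# previously passing, now failing: regression
assert passed_regressed(True, False)
# previously failing: cannot regress further
assert not passed_regressed(False, False)
```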


Run an executable command, passing the performance metrics as JSON on stdin.

This allows you to write your own visualization, which reads a result file on stdin and does something with it, e.g. writing out a graph or other output file.

Sample configuration:

- type: run_command
  command: ./

Custom parsers

Benchcomp ships with built-in parsers that retrieve the results of a benchmark suite after the run has completed. You can also create your own parser, either to run locally or to check into the Kani codebase.

Built-in parsers

You specify which parser should run for each benchmark suite in benchcomp.yaml. For example, if you're running the kani performance suite, you would use the built-in kani_perf parser to parse the results:

      variants: [variant_1, variant_2]
      parser:
        module: kani_perf

Custom parsers

A parser is a program that benchcomp runs inside the root directory of a benchmark suite, after the suite run has completed. The parser should retrieve the results of the run (by parsing output files etc.) and print the results out as a YAML document. You can use your executable parser by specifying the command key rather than the module key in your benchcomp.yaml file:

      variants: [variant_1, variant_2]
      parser:
        command: ./

The kani_perf parser mentioned above, in tools/benchcomp/benchcomp/parsers/, is a good starting point for writing a custom parser, as it also works as a standalone executable. Here is an example output from an executable parser:

metrics:
  runtime: {}
  success: {}
  errors: {}
benchmarks:
  bench_1:
    metrics:
      runtime: 32
      success: true
      errors: []
  bench_2:
    metrics:
      runtime: 0
      success: false
      errors: ["compilation failed"]

The above format is different from the final result.yaml file that benchcomp writes, because the above file represents the output of running a single benchmark suite using a single variant. Your parser will run once for each variant, and benchcomp combines the dictionaries into the final result.yaml file.
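The combination step can be pictured as follows: each per-variant run produces a dict mapping benchmark names to metric values, and the collation merges them so that each metric maps variant names to values. This is an illustrative sketch of the transformation, not benchcomp's actual code:

```python
def combine(per_variant: dict) -> dict:
    """Merge {variant: {bench: {metric: value}}} into
    {bench: {metric: {variant: value}}}."""
    combined = {}
    for variant, benches in per_variant.items():
        for bench, metrics in benches.items():
            for metric, value in metrics.items():
                combined.setdefault(bench, {}).setdefault(metric, {})[variant] = value
    return combined

# Invented per-variant results for a single benchmark
result = combine({
    "variant_1": {"bench_1": {"runtime": 32, "success": True}},
    "variant_2": {"bench_1": {"runtime": 40, "success": True}},
})
print(result["bench_1"]["runtime"])
# -> {'variant_1': 32, 'variant_2': 40}
```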

Contributing custom parsers to Kani

To turn your executable parser into one that benchcomp can invoke as a module, ensure that it has a main(working_directory) method that returns a dict (the same dict that it would print out as a YAML file to stdout). Save the file in tools/benchcomp/benchcomp/parsers using python module naming conventions (filename should be an identifier and end in .py).
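A minimal module-style parser might look like the sketch below. The output-file naming convention, module name, and metric names are invented for illustration; only the main(working_directory) entry point returning a dict is the required interface:

```python
# Hypothetical module parser (e.g. tools/benchcomp/benchcomp/parsers/my_parser.py)
import pathlib

def main(working_directory):
    """Return the suite's results as a dict: the same structure an
    executable parser would print to stdout as YAML."""
    working_directory = pathlib.Path(working_directory)
    benchmarks = {}
    # Invented convention: one <name>.out file per benchmark, whose first
    # line is the benchmark's runtime in seconds.
    for out_file in working_directory.glob("*.out"):
        runtime = float(out_file.read_text().splitlines()[0])
        benchmarks[out_file.stem] = {
            "metrics": {"runtime": runtime, "success": True},
        }
    return {
        "metrics": {"runtime": {}, "success": {}},
        "benchmarks": benchmarks,
    }
```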


Like other tools, Kani comes with some limitations. In some cases, these limitations are inherent because of the techniques it's based on, or the undecidability of the properties that Kani seeks to prove. In other cases, it's just a matter of time and effort to remove these limitations (e.g., specific unsupported Rust language features).

In this chapter, we do the following to document these limitations:

Undefined Behaviour

The Effect of Undefined Behaviour on Program Verification

Rust has a broad definition of undefined behaviour (UB). The Rust documentation warns that UB can have unexpected, non-local effects:

Note: Undefined behavior affects the entire program. For example, calling a function in C that exhibits undefined behavior of C means your entire program contains undefined behaviour that can also affect the Rust code. And vice versa, undefined behavior in Rust can cause adverse affects on code executed by any FFI calls to other languages.

If a program has UB, the semantics of the rest of the program are undefined. As a result, if the program under verification contains UB then, in principle, the program (including its representation in MIR analyzed by Kani) has no semantics and hence could do anything, including violating the guarantees checked by Kani. This means that verification results are subject to the proviso that the program under verification does not contain UB.

What forms of Undefined Behaviour can Rust Exhibit

Rust’s definition of UB is so broad that Rust has the following warning:

Warning The following list is not exhaustive. There is no formal model of Rust's semantics for what is and is not allowed in unsafe code, so there may be more behavior considered unsafe. The following list is just what we know for sure is undefined behavior. Please read the Rustonomicon before writing unsafe code.

Given the lack of a formal semantics for UB, and given Kani's focus on memory safety, there are classes of UB which Kani does not detect. A non-exhaustive list of these, based on the non-exhaustive list from the Rust documentation, is:

  • Data races.
    • Kani focuses on sequential code.
  • Breaking the pointer aliasing rules.
    • Kani can detect if misuse of pointers causes memory safety or assertion violations, but does not track reference lifetimes.
  • Mutating immutable data.
    • Kani can detect if modification of immutable data causes memory safety or assertion violations, but does not track reference lifetimes.
  • Invoking undefined behavior via compiler intrinsics.
    • Kani makes a best effort attempt to check the preconditions of compiler intrinsics, but does not guarantee to do so in all cases.
  • Executing code compiled with platform features that the current platform does not support (see target_feature).
    • Kani relies on rustc to check for this case.
  • Calling a function with the wrong call ABI or unwinding from a function with the wrong unwind ABI.
    • Kani relies on rustc to check for this case.
  • Producing an invalid value, even in private fields and locals.
    • Kani won't create invalid values with kani::any() but it also won't complain if you transmute an invalid value to a Rust type (for example, a 0 to NonZeroU32).
  • Incorrect use of inline assembly.
    • Kani does not support inline assembly.
  • Using uninitialized memory.

Kani makes a best-effort attempt to detect some cases of UB:

Rust feature support

The table below tries to summarize the current support in Kani for the Rust language features according to the Rust Reference. We use the following values to indicate the level of support:

  • Yes: The feature is fully supported. We are not aware of any issue with it.
  • Partial: The feature is at least partially supported. We are aware of some issues with it.
  • No: The feature is not supported. Some support may be available but analyses should not be trusted.

As with all software, bugs may be found anywhere regardless of the level of support. In such cases, we would greatly appreciate it if you filed a bug report.

| Reference | Feature | Support | Notes |
| --- | --- | --- | --- |
| 3.1 | Macros By Example | Yes | |
| 3.2 | Procedural Macros | Yes | |
| 4 | Crates and source files | Yes | |
| 5 | Conditional compilation | Yes | |
| 6.2 | Extern crates | Yes | |
| 6.3 | Use declarations | Yes | |
| 6.5 | Type aliases | Yes | |
| 6.9 | Constant items | Yes | |
| 6.10 | Static items | Yes | |
| 6.13 | External blocks | Yes | |
| 6.14 | Generic parameters | Yes | |
| 6.15 | Associated Items | Yes | |
| 8.2.1 | Literal expressions | Yes | |
| 8.2.2 | Path expressions | Yes | |
| 8.2.3 | Block expressions | Yes | |
| 8.2.4 | Operator expressions | Yes | |
| 8.2.5 | Grouped expressions | Yes | |
| 8.2.6 | Array and index expressions | Yes | |
| 8.2.7 | Tuple and index expressions | Yes | |
| 8.2.8 | Struct expressions | Yes | |
| 8.2.9 | Call expressions | Yes | |
| 8.2.10 | Method call expressions | Yes | |
| 8.2.11 | Field access expressions | Yes | |
| 8.2.12 | Closure expressions | Yes | |
| 8.2.13 | Loop expressions | Yes | |
| 8.2.14 | Range expressions | Yes | |
| 8.2.15 | If and if let expressions | Yes | |
| 8.2.16 | Match expressions | Yes | |
| 8.2.17 | Return expressions | Yes | |
| 8.2.18 | Await expressions | No | See Notes - Concurrency |
| 10.1.1 | Boolean type | Yes | |
| 10.1.2 | Numeric types | Yes | |
| 10.1.3 | Textual types | Yes | |
| 10.1.4 | Never type | Yes | |
| 10.1.5 | Tuple types | Yes | |
| 10.1.6 | Array types | Yes | |
| 10.1.7 | Slice types | Yes | |
| 10.1.8 | Struct types | Yes | |
| 10.1.9 | Enumerated types | Yes | |
| 10.1.10 | Union types | Yes | |
| 10.1.11 | Function item types | Yes | |
| 10.1.12 | Closure types | Partial | See Notes - Advanced features |
| 10.1.13 | Pointer types | Partial | See Notes - Advanced features |
| 10.1.14 | Function pointer types | Partial | See Notes - Advanced features |
| 10.1.15 | Trait object types | Partial | See Notes - Advanced features |
| 10.1.16 | Impl trait type | Partial | See Notes - Advanced features |
| 10.1.17 | Type parameters | Partial | See Notes - Advanced features |
| 10.1.18 | Inferred type | Partial | See Notes - Advanced features |
| 10.2 | Dynamically Sized Types | Partial | See Notes - Advanced features |
| 10.3 | Type layout | Yes | |
| 10.4 | Interior mutability | Yes | |
| 10.5 | Subtyping and Variance | Yes | |
| 10.6 | Trait and lifetime bounds | Yes | |
| 10.7 | Type coercions | Partial | See Notes - Advanced features |
| 10.9 | Lifetime elision | Yes | |
| 11 | Special types and traits | Partial | |
| | Operator Traits | Partial | |
| | Deref and DerefMut | Yes | |
| 15.1 | Unsafe functions | Yes | |
| 15.2 | Unsafe blocks | Yes | |
| 15.3 | Behavior considered undefined | Partial | |
| | Data races | No | See Notes - Concurrency |
| | Dereferencing dangling raw pointers | Yes | |
| | Dereferencing unaligned raw pointers | No | |
| | Breaking pointer aliasing rules | No | |
| | Mutating immutable data | No | |
| | Invoking undefined behavior via compiler intrinsics | Partial | See Notes - Intrinsics |
| | Executing code compiled with platform features that the current platform does not support | No | |
| | Producing an invalid value, even in private fields and locals | No | |

Notes on partially or unsupported features

Code generation for unsupported features

Kani aims to be an industrial verification tool. Industrial crates often include unsupported features in parts of their code that do not need to be verified. In general, this should not prevent users from using Kani to verify their code.

Because of that, the general rule is that Kani generates an assert(false) statement followed by an assume(false) statement when compiling any unsupported feature. assert(false) will cause verification to fail if the statement is reachable during the verification stage, while assume(false) will block any further exploration of the path. However, the analysis will not be affected if the statement is not reachable from the code under verification, so users can still verify components of their code that do not use unsupported features.

In a few cases, Kani aborts execution if the analysis could be affected in some way because of an unsupported feature (e.g., global ASM).


Kani does not support assembly code for now. We may add support for it in the future, but at present there are no plans to do so.

Check out the tracking issues for inline assembly (the asm! macro) and global assembly (the global_asm! macro) to learn more about the current status.


Concurrency

Concurrent features are currently out of scope for Kani. In general, the verification of concurrent programs continues to be an open research problem, where most tools that analyze concurrent code lack support for other features. Because of this, Kani emits a warning whenever it encounters concurrent code and compiles it as if it were sequential code.

Standard library functions

Kani overrides a few common functions (e.g., print macros) to provide a more verification-friendly implementation.

Advanced features

The semantics around some advanced features (traits, types, etc.) from Rust are not formally defined which makes it harder to ensure that we can properly model all their use cases.

In particular, there are some outstanding issues to note here:

  • Sanity check Variant type in projections #448.
  • Unexpected fat pointer results in #277, #327 and #676.

We are particularly interested in bug reports concerning these features, so please file a bug report if you're aware of one.

Panic strategies

Rust has two different strategies when a panic occurs:

  1. Stack unwinding (default): Walks back the stack, cleaning up the data from each function it encounters.
  2. Aborting: Immediately ends the program without cleaning up.

Currently, Kani does not support stack unwinding. This has some implications regarding memory safety since programs sometimes rely on the unwinding logic to ensure there is no resource leak or persistent data inconsistency. Check out this issue for updates on stack unwinding support.

Uninitialized memory

Reading uninitialized memory is considered undefined behavior in Rust. At the moment, Kani cannot detect if memory is uninitialized, but in practice this is mitigated by the fact that all memory is initialized with nondeterministic values. Therefore, any code that depends on uninitialized data will exhibit nondeterministic behavior. See this issue for more details.


At present, we are aware of some issues with destructors, in particular those related to advanced features.


Please refer to Intrinsics for information on the current support in Kani for Rust compiler intrinsics.

Floating point operations

Kani supports floating point numbers, but some supported operations on floats are "over-approximated"; this includes trigonometric functions such as sin and cos, as well as sqrt. As a result, the verifier can raise errors that cannot actually happen when the code runs normally. For instance (#1342), the sin and cos functions essentially return a non-deterministic value between -1 and 1: they largely ignore their input and give very conservative answers. This range certainly includes the "real" value, so proof soundness is preserved, but Kani may raise spurious errors that cannot actually occur. This makes Kani unsuitable for verifying some kinds of properties (e.g., precision) of numerical algorithms. Proofs that fail because of this problem can sometimes be repaired by introducing "stubs" for these functions that return a more acceptable approximation. However, note that the actual behavior of these functions can vary by platform, OS, architecture, and compiler, so introducing an "overly precise" approximation may introduce unsoundness: actual system behavior may produce values different from the stub's approximation.


The tables below try to summarize the current support in Kani for Rust intrinsics. We define the level of support similarly to how we indicate Rust feature support:

  • Yes: The intrinsic is fully supported. We are not aware of any issue with it.
  • Partial: The intrinsic is at least partially supported. We are aware of some issues with it.
  • No: The intrinsic is not supported.

In general, code generation for unsupported intrinsics follows the rule described in Rust feature support - Code generation for unsupported features.

Any intrinsic not appearing in the tables below is considered not supported. Please open a feature request if your code depends on an unsupported intrinsic.

Compiler intrinsics

| Intrinsic | Support | Notes |
| --- | --- | --- |
| atomic_and_seqcst | Partial | See Atomics |
| atomic_and_acquire | Partial | See Atomics |
| atomic_and_acqrel | Partial | See Atomics |
| atomic_and_release | Partial | See Atomics |
| atomic_and_relaxed | Partial | See Atomics |
| atomic_cxchg_acqrel_acquire | Partial | See Atomics |
| atomic_cxchg_acqrel_relaxed | Partial | See Atomics |
| atomic_cxchg_acqrel_seqcst | Partial | See Atomics |
| atomic_cxchg_acquire_acquire | Partial | See Atomics |
| atomic_cxchg_acquire_relaxed | Partial | See Atomics |
| atomic_cxchg_acquire_seqcst | Partial | See Atomics |
| atomic_cxchg_relaxed_acquire | Partial | See Atomics |
| atomic_cxchg_relaxed_relaxed | Partial | See Atomics |
| atomic_cxchg_relaxed_seqcst | Partial | See Atomics |
| atomic_cxchg_release_acquire | Partial | See Atomics |
| atomic_cxchg_release_relaxed | Partial | See Atomics |
| atomic_cxchg_release_seqcst | Partial | See Atomics |
| atomic_cxchg_seqcst_acquire | Partial | See Atomics |
| atomic_cxchg_seqcst_relaxed | Partial | See Atomics |
| atomic_cxchg_seqcst_seqcst | Partial | See Atomics |
| atomic_cxchgweak_acqrel_acquire | Partial | See Atomics |
| atomic_cxchgweak_acqrel_relaxed | Partial | See Atomics |
| atomic_cxchgweak_acqrel_seqcst | Partial | See Atomics |
| atomic_cxchgweak_acquire_acquire | Partial | See Atomics |
| atomic_cxchgweak_acquire_relaxed | Partial | See Atomics |
| atomic_cxchgweak_acquire_seqcst | Partial | See Atomics |
| atomic_cxchgweak_relaxed_acquire | Partial | See Atomics |
| atomic_cxchgweak_relaxed_relaxed | Partial | See Atomics |
| atomic_cxchgweak_relaxed_seqcst | Partial | See Atomics |
| atomic_cxchgweak_release_acquire | Partial | See Atomics |
| atomic_cxchgweak_release_relaxed | Partial | See Atomics |
| atomic_cxchgweak_release_seqcst | Partial | See Atomics |
| atomic_cxchgweak_seqcst_acquire | Partial | See Atomics |
| atomic_cxchgweak_seqcst_relaxed | Partial | See Atomics |
| atomic_cxchgweak_seqcst_seqcst | Partial | See Atomics |
| atomic_fence_seqcst | Partial | See Atomics |
| atomic_fence_acquire | Partial | See Atomics |
| atomic_fence_acqrel | Partial | See Atomics |
| atomic_fence_release | Partial | See Atomics |
| atomic_load_seqcst | Partial | See Atomics |
| atomic_load_acquire | Partial | See Atomics |
| atomic_load_relaxed | Partial | See Atomics |
| atomic_load_unordered | Partial | See Atomics |
| atomic_max_seqcst | Partial | See Atomics |
| atomic_max_acquire | Partial | See Atomics |
| atomic_max_acqrel | Partial | See Atomics |
| atomic_max_release | Partial | See Atomics |
| atomic_max_relaxed | Partial | See Atomics |
| atomic_min_seqcst | Partial | See Atomics |
| atomic_min_acquire | Partial | See Atomics |
| atomic_min_acqrel | Partial | See Atomics |
| atomic_min_release | Partial | See Atomics |
| atomic_min_relaxed | Partial | See Atomics |
| atomic_nand_seqcst | Partial | See Atomics |
| atomic_nand_acquire | Partial | See Atomics |
| atomic_nand_acqrel | Partial | See Atomics |
| atomic_nand_release | Partial | See Atomics |
| atomic_nand_relaxed | Partial | See Atomics |
| atomic_or_seqcst | Partial | See Atomics |
| atomic_or_acquire | Partial | See Atomics |
| atomic_or_acqrel | Partial | See Atomics |
| atomic_or_release | Partial | See Atomics |
| atomic_or_relaxed | Partial | See Atomics |
| atomic_singlethreadfence_seqcst | Partial | See Atomics |
| atomic_singlethreadfence_acquire | Partial | See Atomics |
| atomic_singlethreadfence_acqrel | Partial | See Atomics |
| atomic_singlethreadfence_release | Partial | See Atomics |
| atomic_store_seqcst | Partial | See Atomics |
| atomic_store_release | Partial | See Atomics |
| atomic_store_relaxed | Partial | See Atomics |
| atomic_store_unordered | Partial | See Atomics |
| atomic_umax_seqcst | Partial | See Atomics |
| atomic_umax_acquire | Partial | See Atomics |
| atomic_umax_acqrel | Partial | See Atomics |
| atomic_umax_release | Partial | See Atomics |
| atomic_umax_relaxed | Partial | See Atomics |
| atomic_umin_seqcst | Partial | See Atomics |
| atomic_umin_acquire | Partial | See Atomics |
| atomic_umin_acqrel | Partial | See Atomics |
| atomic_umin_release | Partial | See Atomics |
| atomic_umin_relaxed | Partial | See Atomics |
| atomic_xadd_seqcst | Partial | See Atomics |
| atomic_xadd_acquire | Partial | See Atomics |
| atomic_xadd_acqrel | Partial | See Atomics |
| atomic_xadd_release | Partial | See Atomics |
| atomic_xadd_relaxed | Partial | See Atomics |
| atomic_xchg_seqcst | Partial | See Atomics |
| atomic_xchg_acquire | Partial | See Atomics |
| atomic_xchg_acqrel | Partial | See Atomics |
| atomic_xchg_release | Partial | See Atomics |
| atomic_xchg_relaxed | Partial | See Atomics |
| atomic_xor_seqcst | Partial | See Atomics |
| atomic_xor_acquire | Partial | See Atomics |
| atomic_xor_acqrel | Partial | See Atomics |
| atomic_xor_release | Partial | See Atomics |
| atomic_xor_relaxed | Partial | See Atomics |
| atomic_xsub_seqcst | Partial | See Atomics |
| atomic_xsub_acquire | Partial | See Atomics |
| atomic_xsub_acqrel | Partial | See Atomics |
| atomic_xsub_release | Partial | See Atomics |
| atomic_xsub_relaxed | Partial | See Atomics |
| cosf32 | Partial | Results are overapproximated; this test explains how |
| cosf64 | Partial | Results are overapproximated; this test explains how |
| exp2f32 | Partial | Results are overapproximated |
| exp2f64 | Partial | Results are overapproximated |
| expf32 | Partial | Results are overapproximated |
| expf64 | Partial | Results are overapproximated |
| logf32 | Partial | Results are overapproximated |
| logf64 | Partial | Results are overapproximated |
| offset | Partial | Doesn't check all UB conditions |
| powf32 | Partial | Results are overapproximated |
| powf64 | Partial | Results are overapproximated |
| ptr_offset_from | Partial | Doesn't check all UB conditions |
| raw_eq | Partial | Cannot detect uninitialized memory |
| sinf32 | Partial | Results are overapproximated; this test explains how |
| sinf64 | Partial | Results are overapproximated; this test explains how |
| transmute | Partial | Doesn't check all UB conditions |
| unaligned_volatile_load | No | See Notes - Concurrency |
| unaligned_volatile_store | No | See Notes - Concurrency |
| volatile_copy_memory | No | See Notes - Concurrency |
| volatile_copy_nonoverlapping_memory | No | See Notes - Concurrency |
| volatile_load | Partial | See Notes - Concurrency |
| volatile_set_memory | No | See Notes - Concurrency |
| volatile_store | Partial | See Notes - Concurrency |


Atomics

All atomic intrinsics are compiled as an atomic block in which the operation is performed. However, as noted in Notes - Concurrency, Kani's support for concurrent verification is limited and not used by default. Verification of code containing atomic intrinsics should not be trusted, given that Kani assumes the code to be sequential.

Platform intrinsics

Intrinsics from the platform_intrinsics feature.

| Intrinsic | Support | Notes |
| --- | --- | --- |
| simd_rem | Yes | Doesn't check for floating point overflow #2669 |

Unstable features

In general, unstable Rust features are out of scope and any support for them available in Kani should be considered unstable as well.

The following are examples of unstable features that are not supported in Kani:

  • Generators
  • C-variadics


As explained in Comparison with other tools, Kani is based on a technique called model checking, which verifies a program without actually executing it. It does so through encoding the program and analyzing the encoded version. The encoding process often requires "modeling" some of the library functions to make them suitable for analysis. Typical examples of functionality that requires modeling are system calls and I/O operations. In some cases, Kani performs such encoding through overriding some of the definitions in the Rust standard library.

The following table lists some of the symbols that Kani overrides and a description of their behavior compared to the std versions:

| Symbol | Description |
| --- | --- |
| assert, assert_eq, and assert_ne macros | Skips string formatting code, generates a more informative message and performs some instrumentation |
| debug_assert, debug_assert_eq, and debug_assert_ne macros | Rewritten as the equivalent assert* macro |
| print, eprint, println, and eprintln macros | Skips string formatting and I/O operations |
| unreachable macro | Skips string formatting and invokes panic!() |
| std::process::{abort, exit} functions | Invokes panic!() to abort the execution |

Crates documentation

Kani currently ships with a kani crate that provides APIs allowing users to write and configure their harnesses. These APIs are tightly coupled with each Kani version, so they are not published yet.

You can find their latest documentation here:

  • kani: This crate provides APIs to write Kani harnesses.


This section collects frequently asked questions about Kani. Please consider opening an issue if you have a question that you would like to see answered here.


Kani doesn't fail after kani::assume(false). Why?

kani::assume(false) (or kani::assume(cond) where cond is a condition that results in false in the context of the program) won't cause errors in Kani. Instead, such an assumption has the effect of blocking all the symbolic execution paths from the assumption. Therefore, all checks after the assumption should appear as UNREACHABLE. That's the expected behavior for kani::assume(false) in Kani.

If you didn't expect certain checks in a harness to be UNREACHABLE, we recommend using the kani::cover macro to determine what conditions are possible in case you've over-constrained the harness.

I implemented the kani::Arbitrary trait for a type that's not from my crate, and got the error only traits defined in the current crate can be implemented for types defined outside of the crate. What does this mean? What can I do?

This error is due to a violation of Rust's orphan rules for trait implementations, which are explained here. In that case, you'll need to write a function that builds an object from non-deterministic variables. Inside this function you would simply return an arbitrary value by generating arbitrary values for its components.

For example, let's assume the type you're working with is this enum:

#[derive(Copy, Clone)]
pub enum Rating {
    One,
    Two,
    Three,
}

Then, you can match on a non-deterministic integer (supplied by kani::any) to return non-deterministic Rating variants:

    pub fn any_rating() -> Rating {
        match kani::any() {
            0 => Rating::One,
            1 => Rating::Two,
            _ => Rating::Three,
        }
    }
More details about this option, which is also useful in other cases, can be found here.

If the type comes from std (Rust's standard library), you can open a request for adding Arbitrary implementations to the Kani library. Otherwise, there are more involved options to consider:

  1. Importing a copy of the external crate that defines the type, then implementing Arbitrary there.
  2. Contributing the Arbitrary implementation to the external crate that defines the type.