Getting started
Kani is an open-source verification tool that uses model checking to analyze Rust programs. Kani is particularly useful for verifying unsafe code blocks in Rust, where the "unsafe superpowers" are unchecked by the compiler. Some example properties you can prove with Kani include memory safety properties (e.g., null pointer dereferences, use-after-free, etc.), the absence of certain runtime errors (i.e., index out of bounds, panics), and the absence of some types of unexpected behavior (e.g., arithmetic overflows). Kani can also prove custom properties provided in the form of user-specified assertions. As Kani uses model checking, Kani will either prove the property, disprove the property (with a counterexample), or may run out of resources.
Kani uses proof harnesses to analyze programs. Proof harnesses are similar to test harnesses, especially property-based test harnesses.
Project Status
Kani is currently under active development. Releases are published here. Major changes to Kani are documented in the RFC Book. We also publish updates on Kani use cases and features on our blog.
There is support for a fair amount of Rust language features, but not all (e.g., concurrency). Please see Limitations for a detailed list of supported features.
Kani releases every two weeks. As part of every release, Kani will synchronize with a recent nightly release of Rust, and so is generally up-to-date with the latest Rust language features.
If you encounter issues when using Kani, we encourage you to report them to us.
Installation
Kani offers an easy installation option on three platforms:
- x86_64-unknown-linux-gnu (Most Linux distributions)
- x86_64-apple-darwin (Intel Mac OS)
- aarch64-apple-darwin (Apple Silicon Mac OS)
Other platforms are either not yet supported or require instead that you build from source. To use Kani in your GitHub CI workflows, see GitHub CI Action.
Dependencies
The following must already be installed:
- Python version 3.7 or newer and the package installer pip.
- Rust 1.58 or newer installed via rustup.
Installing the latest version
To install the latest version of Kani, run:
cargo install --locked kani-verifier
cargo kani setup
This will build the kani and cargo-kani binaries and place them in ~/.cargo/bin (in a typical environment).
The second step (cargo kani setup) will download the Kani compiler and other necessary dependencies, and place them under ~/.kani/ by default.
A custom path can be specified using the KANI_HOME environment variable.
Installing an older version
To install an older version of Kani, run:
cargo install --locked kani-verifier --version <VERSION>
cargo kani setup
Checking your installation
After you've installed Kani, you can try running it by creating a test file:
// File: test.rs
#[kani::proof]
fn main() {
    assert!(1 == 2);
}
Run Kani on the single file:
kani test.rs
You should get a result like this one:
[...]
RESULTS:
Check 1: main.assertion.1
- Status: FAILURE
- Description: "assertion failed: 1 == 2"
[...]
VERIFICATION:- FAILED
Fix the test and you should see a result like this one:
[...]
VERIFICATION:- SUCCESSFUL
Next steps
If you're learning Kani for the first time, you may be interested in our tutorial.
Installing from source code
If you were able to install Kani normally, you do not need to build Kani from source. You probably want to proceed to the Kani tutorial.
Dependencies
In general, the following dependencies are required to build Kani from source.
NOTE: These dependencies may be installed by running the scripts shown below and don't need to be manually installed.
Kani has been tested on the Ubuntu and macOS platforms.
Install dependencies on Ubuntu
Support is available for Ubuntu 20.04, 22.04, and 24.04. The simplest way to install dependencies (especially if you're using a fresh VM) is to follow our CI scripts:
# git clone git@github.com:model-checking/kani.git
git clone https://github.com/model-checking/kani.git
cd kani
git submodule update --init
./scripts/setup/ubuntu/install_deps.sh
# If you haven't already (or from https://rustup.rs/):
./scripts/setup/install_rustup.sh
source $HOME/.cargo/env
Install dependencies on macOS
Support is available for macOS 11. You need to have Homebrew installed already.
# git clone git@github.com:model-checking/kani.git
git clone https://github.com/model-checking/kani.git
cd kani
git submodule update --init
./scripts/setup/macos/install_deps.sh
# If you haven't already (or from https://rustup.rs/):
./scripts/setup/install_rustup.sh
source $HOME/.cargo/env
Build and test Kani
Build the Kani package:
cargo build-dev
Then, optionally, run the regression tests:
./scripts/kani-regression.sh
This script has a lot of noisy output, but on a successful run you'll see at the end of the execution:
All Kani regression tests completed successfully.
Adding Kani to your path
To use a locally-built Kani from anywhere, add the Kani scripts to your path:
export PATH=$(pwd)/scripts:$PATH
Next steps
If you're learning Kani for the first time, you may be interested in our tutorial.
GitHub Action
Kani offers a GitHub Action for running Kani in CI.
As of now, only Ubuntu 20.04 with x86_64-unknown-linux-gnu
is supported for Kani in CI.
Using Kani in your GitHub workflow
Our GitHub Action is available in the GitHub Marketplace.
The following workflow snippet will checkout your repository and run cargo kani
on it whenever a push or pull request occurs.
Replace <MAJOR>.<MINOR>
with the version of Kani you want to run with.
name: Kani CI
on:
  pull_request:
  push:
jobs:
  run-kani:
    runs-on: ubuntu-20.04
    steps:
      - name: 'Checkout your code.'
        uses: actions/checkout@v3
      - name: 'Run Kani on your code.'
        uses: model-checking/kani-github-action@v<MAJOR>.<MINOR>
This will run cargo kani
on the code you checked out.
Options
The action takes the following optional parameters:
- command: The command to run. Defaults to cargo kani. Most often, you will not need to change this.
- working-directory: The directory to execute the command in. Defaults to ".". Useful if your repository has multiple crates, and you only want to run on one of them.
- args: The arguments to pass to the given ${command}. See cargo kani --help for a full list of options. Useful options include:
  - --output-format=terse to generate terse output.
  - --tests to run on proofs inside the test module (needed for running Bolero).
  - --workspace to run on all crates within your repository.
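For example, the optional parameters above might be combined as follows. This is an illustrative sketch, not an official snippet: the directory name my-crate and the chosen flag are placeholders you would adapt to your repository.

```yaml
jobs:
  run-kani:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - name: 'Run Kani on one crate, with terse output.'
        uses: model-checking/kani-github-action@v<MAJOR>.<MINOR>
        with:
          working-directory: my-crate   # illustrative: run only in this crate
          args: --output-format=terse   # one of the useful options listed above
```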
FAQ
- Kani takes too long for my CI: Try running Kani on a schedule with your desired frequency.
- Kani silently crashes with no logs: a few possible reasons:
- Kani ran out of RAM. GitHub offers up to 7GB of RAM, but Kani may use more. Run locally to confirm.
- GitHub terminates jobs longer than 6 hours.
- Otherwise, consider filing an issue here.
Using Kani
At present, Kani can be used in two ways:
- On a single crate with the kani command.
- On a Cargo package with the cargo kani command.
If you plan to integrate Kani in your projects, the recommended approach is to use cargo kani.
If you're already using cargo, this will handle dependencies automatically, and it can be configured (if needed) in Cargo.toml.
But kani is useful for small examples/tests.
Usage on a package
Kani is integrated with cargo and can be invoked from a package as follows:
cargo kani [OPTIONS]
This works like cargo test
except that it will analyze all proof harnesses instead of running all test harnesses.
Common command line flags
Common to both kani and cargo kani are many command-line flags:
- --concrete-playback=[print|inplace]: Experimental, --enable-unstable feature that generates a Rust unit test case that plays back a failing proof harness using a concrete counterexample. If used with print, Kani will only print the unit test to stdout. If used with inplace, Kani will automatically add the unit test to the user's source code, next to the proof harness. For more detailed instructions, see the concrete playback section.
- --tests: Build in "test mode", i.e. with cfg(test) set and dev-dependencies available (when using cargo kani).
- --harness <name>: By default, Kani checks all proof harnesses it finds. You can switch to checking a single harness using this flag.
- --default-unwind <n>: Set a default global upper loop unwinding bound for proof harnesses. This can force termination when CBMC tries to unwind loops indefinitely.
Run cargo kani --help
to see a complete list of arguments.
Usage on a single crate
For small examples or initial learning, it's very common to run Kani on just one source file. The command line format for invoking Kani directly is the following:
kani filename.rs [OPTIONS]
This will build filename.rs
and run all proof harnesses found within.
Configuration in Cargo.toml
Users can add a default configuration to the Cargo.toml
file for running harnesses in a package.
Kani will extract any arguments from these sections:
[workspace.metadata.kani.flags]
[package.metadata.kani.flags]
For example, if you want to set a default loop unwinding bound (when it's not otherwise specified), you can achieve this by adding the following lines to the package's Cargo.toml
:
[package.metadata.kani.flags]
default-unwind = 1
The options here are the same as on the command line (cargo kani --help
), and flags (that is, command line arguments that don't take a value) are enabled by setting them to true
.
Starting with Rust 1.80 (or nightly-2024-05-05), every reachable #[cfg] is automatically checked to ensure it matches the expected config names and values.
To avoid warnings on cfg(kani)
, we recommend adding the check-cfg
lint config in your crate's Cargo.toml
as follows:
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(kani)'] }
For more information please consult this blog post.
The build process
When Kani builds your code, it does three important things:
- It sets cfg(kani) for target crate compilation (including dependencies).
- It injects the kani crate.
- It sets cfg(kani_host) for host build targets such as any build script and procedural macro crates.
A proof harness (which you can learn more about in the tutorial) is a function annotated with #[kani::proof], much like a test is annotated with #[test].
But you may experience a similar problem using Kani as you would with dev-dependencies
: if you try writing #[kani::proof]
directly in your code, cargo build
will fail because it doesn't know what the kani
crate is.
This is why we recommend the same conventions as are used when writing tests in Rust: wrap your proof harnesses in cfg(kani)
conditional compilation:
#[cfg(kani)]
mod verification {
    use super::*;

    #[kani::proof]
    pub fn check_something() {
        // ....
    }
}
This will ensure that a normal build of your code will be completely unaffected by anything Kani-related.
This conditional compilation with cfg(kani)
(as seen above) is still required for Kani proofs placed under tests/
.
When this code is built by cargo test
, the kani
crate is not available, and so it would otherwise cause build failures.
(Whereas the use of dev-dependencies
under tests/
does not need to be gated with cfg(test)
since that code is already only built when testing.)
Verification results
Running Kani on a harness produces an output that includes a set of checks as follows:
RESULTS:
Check 1: example.assertion.1
- Status: <status>
- Description: <description>
- Location: <location>
[...]
Kani determines the verification result for the harness based on the
result (i.e., <status>
) of each individual check (also known as "properties"). If all
checks are successful then the overall verification result of the harness is successful. Otherwise the
verification fails, which indicates issues with the code under verification.
Check results
The result (or Status
) of a check in Kani can be one of the following:
SUCCESS: This indicates that the check passed (i.e., the property holds). Note that in some cases, the property may hold vacuously. This can occur because the property is unreachable, or because the harness is over-constrained.
Example:
fn success_example() {
    let mut sum = 0;
    for i in 1..4 {
        sum += i;
    }
    assert_eq!(sum, 6);
}
The output from Kani indicates that the assertion holds:
Check 4: success_example.assertion.4
- Status: SUCCESS
- Description: "assertion failed: sum == 6"
FAILURE: This indicates that the check failed (i.e., the property doesn't hold). In this case, please see the concrete playback section for more help.
Example:
fn failure_example() {
    let arr = [1, 2, 3];
    assert_ne!(arr.len(), 3);
}
The assertion doesn't hold as Kani's output indicates:
Check 2: failure_example.assertion.2
- Status: FAILURE
- Description: "assertion failed: arr.len() != 3"
UNREACHABLE: This indicates that the check is unreachable (i.e., the property holds vacuously). This occurs when there is no possible execution trace that can reach the check's line of code. This may be because the function that contains the check is unused, or because the harness does not trigger the condition under which the check is invoked. Kani currently checks reachability for the following assertion types:
- Assert macros (e.g. assert, assert_eq, etc.)
- Arithmetic overflow checks
- Negation overflow checks
- Index out-of-bounds checks
- Divide/remainder-by-zero checks
Example:
fn unreachable_example() {
    let x = 5;
    let y = x + 2;
    if x > y {
        assert!(x < 8);
    }
}
The output from Kani indicates that the assertion is unreachable:
Check 2: unreachable_example.assertion.2
- Status: UNREACHABLE
- Description: "assertion failed: x < 8"
UNDETERMINED: This indicates that Kani was not able to conclude whether the property holds or not. This can occur when the Rust program contains a construct that is not currently supported by Kani. See Rust feature support for Kani's current support of the Rust language features.
Example:
fn undetermined_example() {
    let mut x = 0;
    unsupp(&mut x);
    assert!(x == 0);
}

fn unsupp(x: &mut u8) {
    unsafe {
        std::arch::asm!("nop");
    }
}
The output from Kani indicates that the assertion is undetermined due to the missing support for inline assembly in Kani:
Check 2: undetermined_example.assertion.2
- Status: UNDETERMINED
- Description: "assertion failed: x == 0"
Cover property results
Kani provides a kani::cover
macro that can be used for checking whether a condition may occur at a certain point in the code.
The result of a cover property can be one of the following:
SATISFIED: This indicates that Kani found an execution that triggers the specified condition.
The following example uses kani::cover
to check if it's possible for x
and y
to hold the values 24 and 72, respectively, after 3 iterations of the while
loop, which turns out to be the case.
#[kani::unwind(256)]
fn cover_satisfied_example() {
    let mut x: u8 = kani::any();
    let mut y: u8 = kani::any();
    y /= 2;
    let mut i = 0;
    while x != 0 && y != 0 {
        kani::cover!(i > 2 && x == 24 && y == 72);
        if x >= y { x -= y; }
        else { y -= x; }
        i += 1;
    }
}
Results:
Check 1: cover_satisfied_example.cover.1
- Status: SATISFIED
- Description: "cover condition: i > 2 && x == 24 && y == 72"
- Location: src/main.rs:60:9 in function cover_satisfied_example
UNSATISFIABLE: This indicates that Kani proved that the specified condition is impossible.
The following example uses kani::cover
to check if it's possible to have a UTF-8 encoded string consisting of 5 bytes that correspond to a string with a single character.
#[kani::unwind(6)]
fn cover_unsatisfiable_example() {
    let bytes: [u8; 5] = kani::any();
    let s = std::str::from_utf8(&bytes);
    if let Ok(s) = s {
        kani::cover!(s.chars().count() <= 1);
    }
}
This is not possible, as such a string will contain at least two characters.
Check 46: cover_unsatisfiable_example.cover.1
- Status: UNSATISFIABLE
- Description: "cover condition: s.chars().count() <= 1"
- Location: src/main.rs:75:9 in function cover_unsatisfiable_example
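The reason is that UTF-8 encodes each character (Unicode scalar value) in at most four bytes, so five valid bytes always span at least two characters. This can be checked in plain Rust; the sketch below is ours, separate from the book's example:

```rust
fn main() {
    // UTF-8 uses at most four bytes per character (Unicode scalar value).
    assert_eq!(char::MAX.len_utf8(), 4);

    // So any valid 5-byte UTF-8 string must contain at least two characters.
    let s = std::str::from_utf8("𝄞a".as_bytes()).unwrap(); // 4 + 1 bytes
    assert_eq!(s.len(), 5);
    assert!(s.chars().count() >= 2);
}
```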
UNREACHABLE: This indicates that the cover property itself is unreachable (i.e., it is vacuously unsatisfiable).
In contrast to an UNREACHABLE
result for assertions, an unreachable (or an unsatisfiable) cover property may indicate an incomplete proof.
Example:
In this example, a kani::cover
call is unreachable because if the outer if
condition holds, then the non-empty range r2
is strictly larger than the non-empty range r1
, in which case, the condition in the inner if
condition is impossible.
#[kani::unwind(6)]
fn cover_unreachable_example() {
    let r1: std::ops::Range<i32> = kani::any()..kani::any();
    let r2: std::ops::Range<i32> = kani::any()..kani::any();
    kani::assume(!r1.is_empty());
    kani::assume(!r2.is_empty());
    if r2.start > r1.end {
        if r2.end < r1.end {
            kani::cover!(r2.contains(&0));
        }
    }
}
Check 3: cover_unreachable_example.cover.1
- Status: UNREACHABLE
- Description: "cover condition: r2.contains(&0)"
- Location: src/main.rs:90:13 in function cover_unreachable_example
UNDETERMINED: This is the same as the UNDETERMINED result for normal checks (see the Check results section above).
Verification summary
Kani reports a summary at the end of the verification report, which includes the overall results of all checks, the overall results of cover properties (if the package includes cover properties), and the overall verification result, e.g.:
SUMMARY:
** 0 of 786 failed (41 unreachable)
** 0 of 1 cover properties satisfied
VERIFICATION:- SUCCESSFUL
Crates documentation
Kani currently ships with a kani crate that provides APIs to allow users to write and configure their harnesses.
These APIs are tightly coupled with each Kani version, so they are not yet published on https://crates.io.
You can find their latest documentation here:
- kani: This crate provides APIs to write Kani harnesses.
Tutorial
NOTE: This tutorial expects you to have followed the Kani installation instructions first.
This tutorial will step you through a progression from simple to moderately complex tasks with Kani. It's meant to ensure you can get started, and see at least some simple examples of how typical proofs are structured. It will also teach you the basics of "debugging" proof harnesses, which mainly consists of diagnosing and resolving non-termination issues with the solver.
You may also want to read the Application section to better understand what Kani is and how it can be applied on real code.
First steps
Kani is unlike the testing tools you may already be familiar with. Much of testing is concerned with thinking of new corner cases that need to be covered. With Kani, all the corner cases are covered from the start, and the new concern is narrowing down the scope to something manageable for the verifier.
Consider this first program (which can be found under first-steps-v1
):
fn estimate_size(x: u32) -> u32 {
    if x < 256 {
        if x < 128 {
            return 1;
        } else {
            return 3;
        }
    } else if x < 1024 {
        if x > 1022 {
            panic!("Oh no, a failing corner case!");
        } else {
            return 5;
        }
    } else {
        if x < 2048 {
            return 7;
        } else {
            return 9;
        }
    }
}
Think about the test harness you would need to write to test this function. You would need to figure out a whole set of arguments to call the function with that would exercise each branch. You would also need to keep that test harness up-to-date with the code, in case some of the branches change. And if this function were more complicated (for example, if some of the branches depended on global state), the test harness would be even more onerous to write.
We can try to property test a function like this, but if we're naive about it (and consider all possible u32
inputs), then it's unlikely we'll ever find the bug.
proptest! {
    #![proptest_config(ProptestConfig::with_cases(10000))]
    #[test]
    fn doesnt_crash(x: u32) {
        estimate_size(x);
    }
}
# cargo test
[...]
test tests::doesnt_crash ... ok
There's only 1 in 4 billion inputs that fail, so it's vanishingly unlikely the property test will find it, even with a million samples.
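That "vanishingly unlikely" claim is easy to sanity-check numerically. The sketch below is our own back-of-the-envelope calculation, using the one million samples mentioned above:

```rust
fn main() {
    // Exactly one failing input (x == 1023) among the 2^32 possible u32 values.
    let p_hit_once = 1.0 / 2f64.powi(32);
    // Probability that one million independent uniform samples all miss it:
    let p_all_miss = (1.0 - p_hit_once).powi(1_000_000);
    let p_found = 1.0 - p_all_miss;
    // Roughly 0.02%: the property test will almost certainly pass.
    assert!(p_found < 0.000_3);
    println!("chance of finding the bug: {p_found:.6}");
}
```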
Let's write a Kani proof harness for estimate_size
.
This is a lot like a test harness, but now we can use kani::any()
to represent all possible u32
values:
#[cfg(kani)]
#[kani::proof]
fn check_estimate_size() {
    let x: u32 = kani::any();
    estimate_size(x);
}
# cargo kani
[...]
Runtime decision procedure: 0.00116886s
RESULTS:
Check 3: estimate_size.assertion.1
- Status: FAILURE
- Description: "Oh no, a failing corner case!"
[...]
VERIFICATION:- FAILED
Kani has immediately found a failure.
Notably, we haven't had to write explicit assertions in our proof harness: by default, Kani will find a host of erroneous conditions which include a reachable call to panic
or a failing assert
.
Had Kani run successfully on this harness, that would amount to a mathematical proof that there is no input that could cause a panic in estimate_size.
By default, Kani only reports failures, not how the failure happened. In this example, it would be nice to get a concrete example of a value of
x
that triggers the failure. Kani offers an (experimental) concrete playback feature that serves this purpose. As an exercise, try applying concrete playback to this example and see what Kani outputs.
Exercise: Try other failures
We put an explicit panic in this function, but it's not the only kind of failure Kani will find. Try a few other types of errors.
For example, instead of panicking we could try explicitly dereferencing a null pointer:
unsafe { return *(0 as *const u32) };
Notably, however, the Rust compiler emits a warning here:
warning: dereferencing a null pointer
--> src/lib.rs:10:29
|
10 | unsafe { return *(0 as *const u32) };
| ^^^^^^^^^^^^^^^^^^ this code causes undefined behavior when executed
|
= note: `#[warn(deref_nullptr)]` on by default
Still, it's just a warning, and we can run the code without test failures just as before. But Kani still catches the issue:
[...]
RESULTS:
[...]
Check 2: estimate_size.pointer_dereference.1
- Status: FAILURE
- Description: "dereference failure: pointer NULL"
[...]
VERIFICATION:- FAILED
Exercise: Can you find an example where the Rust compiler will not complain, and Kani will?
Click to show one possible answer
return 1 << x;
Overflow (in addition, multiplication or, in this case, bit-shifting by too much) is also caught by Kani:
RESULTS:
[...]
Check 1: estimate_size.assertion.1
- Status: FAILURE
- Description: "attempt to shift left with overflow"
Check 3: estimate_size.undefined-shift.1
- Status: FAILURE
- Description: "shift distance too large"
[...]
VERIFICATION:- FAILED
Assertions, Assumptions, and Harnesses
It seems a bit odd that our example function is tested against billions of possible inputs, when it really only seems to be designed to handle a few thousand.
Let's encode this fact about our function by asserting some reasonable upper bound on our input, after we've fixed our bug.
(New code available under first-steps-v2
):
fn estimate_size(x: u32) -> u32 {
    assert!(x < 4096);
    if x < 256 {
        if x < 128 {
            return 1;
        } else {
            return 3;
        }
    } else if x < 1024 {
        if x > 1022 {
            return 4;
        } else {
            return 5;
        }
    } else {
        if x < 2048 {
            return 7;
        } else {
            return 9;
        }
    }
}
Now we've explicitly stated our previously implicit expectation: this function should never be called with inputs that are too big. But if we attempt to verify this modified function, we run into a problem:
[...]
RESULTS:
[...]
Check 3: estimate_size.assertion.1
- Status: FAILURE
- Description: "assertion failed: x < 4096"
[...]
VERIFICATION:- FAILED
What we want is a precondition for estimate_size
.
That is, something that should always be true every time we call the function.
By putting the assertion at the beginning, we ensure the function immediately fails if that expectation is not met.
But our proof harness will still call this function with any integer, even ones that just don't meet the function's preconditions. That's... not a useful or interesting result. We know that won't work already. How do we go back to successfully verifying this function?
This is the purpose of writing a proof harness. Much like property testing (which would also fail in this assertion), we need to set up our preconditions, call the function in question, then assert our postconditions. Here's a revised example of the proof harness, one that now succeeds:
#[cfg(kani)]
#[kani::proof]
fn verify_success() {
    let x: u32 = kani::any();
    kani::assume(x < 4096);
    let y = estimate_size(x);
    assert!(y < 10);
}
Summary
In this section:
- We saw Kani find panics, assertion failures, and even some other failures like unsafe dereferencing of null pointers.
- We saw Kani find failures that testing could not easily find.
- We saw how to write a proof harness and use
kani::any()
. - We saw how proof harnesses are used to set up preconditions with
kani::assume()
.
Failures that Kani can spot
In the last section, we saw Kani spot two major kinds of failures: assertions and panics. If the proof harness allows some program execution that results in a panic, then Kani will report that as a failure. In addition, we saw (very briefly) a couple of other kinds of failures: null pointer dereferences and overflows. In this section, we're going to expand on these additional checks, to give you an idea of what other problems Kani will find.
Bounds checking and pointers
Rust is safe by default, and so includes dynamic (run-time) bounds checking where needed. Consider this Rust code (available here):
/// Wrap "too-large" indexes back into a valid range for the array
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    if a.len() == 0 {
        return 0;
    }
    return a[i % a.len() + 1];
}
We can again write a simple property test against this code:
proptest! {
    #[test]
    fn doesnt_crash(i: usize, a: Vec<u32>) {
        get_wrapped(i, &a);
    }
}
This property test will immediately find a failing case, thanks to Rust's built-in bounds checking.
But what if we change this function to use unsafe Rust?
return unsafe { *a.as_ptr().add(i % a.len() + 1) };
Now the error becomes invisible to this test:
# cargo test
[...]
test bounds_check::tests::doesnt_crash ... ok
The property test still causes an out-of-bounds access, but this undefined behavior does not necessarily cause an immediate crash. (This is part of why undefined behavior is so difficult to debug.) Through the use of unsafe code, we removed the runtime check for an out of bounds access. It just turned out that none of the randomly generated tests triggered behavior that actually crashed. But if we write a Kani proof harness:
#[cfg(kani)]
#[kani::proof]
fn bound_check() {
    let size: usize = kani::any();
    kani::assume(size < 4096);
    let index: usize = kani::any();
    let array: Vec<u32> = vec![0; size];
    get_wrapped(index, &array);
}
And run this proof with:
cargo kani --harness bound_check
We still see a failure from Kani, even without Rust's runtime bounds checking.
Also, notice there were many checks in the verification output. (At time of writing, 345.) This is a result of using the standard library
Vec
implementation, which means our harness actually used quite a bit of code, short as it looks. Kani is inserting a lot more checks than appear as asserts in our code, so the output can be large.
We get the following summary at the end:
SUMMARY:
** 1 of 345 failed (8 unreachable)
Failed Checks: dereference failure: pointer outside object bounds
File: "./src/bounds_check.rs", line 11, in bounds_check::get_wrapped
VERIFICATION:- FAILED
Notice that, for Kani, this has gone from a simple bounds-checking problem to a pointer-checking problem. Kani will check operations on pointers to ensure they're not potentially invalid memory accesses. Any unsafe code that manipulates pointers will, as we see here, raise failures if its behavior is actually a problem.
Consider trying a few more small exercises with this example:
- Exercise: Switch back to the normal/safe indexing operation and re-try Kani. How does Kani's output change, compared to the unsafe operation? (Try predicting the answer, then seeing if you got it right.)
- Exercise: Try Kani's experimental concrete playback feature on this example.
- Exercise: Fix the error, run Kani, and see a successful verification.
- Exercise: Try switching back to the unsafe code (now with the error fixed) and re-run Kani. Does it still verify successfully?
Click to see explanation for exercise 1
Having switched back to the safe indexing operation, Kani reports a bounds check failure:
SUMMARY:
** 1 of 343 failed (8 unreachable)
Failed Checks: index out of bounds: the length is less than or equal to the given index
File: "src/bounds_check.rs", line 11, in bounds_check::get_wrapped
VERIFICATION:- FAILED
Click to see explanation for exercise 2
cargo kani -Z concrete-playback --concrete-playback=inplace --harness bound_check
produces the following test:
#[test]
fn kani_concrete_playback_bound_check_4752536404478138800() {
    let concrete_vals: Vec<Vec<u8>> = vec![
        // 1ul
        vec![1, 0, 0, 0, 0, 0, 0, 0],
        // 18446744073709551615ul
        vec![255, 255, 255, 255, 255, 255, 255, 255],
    ];
    kani::concrete_playback_run(concrete_vals, bound_check);
}
which indicates that substituting the concrete values size = 1 and index = 18446744073709551615 (that is, u64::MAX, or 2^64 - 1) in our proof harness will produce the out-of-bounds access.
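Replaying those concrete values in plain Rust makes the failure visible without Kani. The following standalone demonstration is ours, not Kani's playback output; it reproduces the buggy function so the file compiles on its own:

```rust
/// The buggy function from above, reproduced so this file stands alone.
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    if a.len() == 0 {
        return 0;
    }
    return a[i % a.len() + 1]; // the off-by-one "+ 1"
}

fn main() {
    std::panic::set_hook(Box::new(|_| {})); // keep the panic message quiet
    let array: Vec<u32> = vec![0; 1]; // size = 1
    let index = usize::MAX;           // the counterexample's index
    // index % 1 == 0, so the access is a[1] in a length-1 array.
    let result = std::panic::catch_unwind(|| get_wrapped(index, &array));
    assert!(result.is_err()); // safe Rust catches it as a bounds-check panic
}
```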
Overflow and math errors
Consider a different variant on the function above:
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    return a[i % a.len()];
}
We've corrected the out-of-bounds access, but now we've omitted the "base case": what to return on an empty list.
Kani will spot this not as a bound error, but as a mathematical error: on an empty list the modulus operator (%
) will cause a division by zero.
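The panic itself is easy to reproduce in plain Rust, without Kani. This standalone sketch is ours and does not reveal what Kani's output looks like, so the exercise below still stands:

```rust
/// The variant without the empty-slice guard.
fn get_wrapped(i: usize, a: &[u32]) -> u32 {
    a[i % a.len()] // a.len() == 0 turns this into a remainder by zero
}

fn main() {
    std::panic::set_hook(Box::new(|_| {})); // keep the panic message quiet
    assert_eq!(get_wrapped(7, &[10, 20, 30]), 20); // 7 % 3 == 1, so a[1]
    let caught = std::panic::catch_unwind(|| get_wrapped(0, &[]));
    assert!(caught.is_err()); // remainder with a divisor of zero panics
}
```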
- Exercise: Try to run Kani on this version of
get_wrapped
, to see what this kind of failure looks like.
Rust can also perform runtime safety checks for integer overflows, much like it does for bounds checks.
(Though Rust disables this by default in --release
mode, it can be re-enabled.)
Consider this code (available here):
fn simple_addition(a: u32, b: u32) -> u32 {
    return a + b;
}
A trivial function, but if we write a property test for it, we immediately find inputs where it fails, thanks to Rust's dynamic checks. Kani will find these failures as well. Here's the output from Kani:
# cargo kani --harness add_overflow
[...]
SUMMARY:
** 1 of 2 failed
Failed Checks: attempt to add with overflow
File: "./src/overflow.rs", line 7, in overflow::simple_addition
VERIFICATION:- FAILED
This issue can be fixed using Rust's alternative mathematical functions with explicit overflow behavior.
For instance, if the wrapping behavior is intended, you can write a.wrapping_add(b)
instead of a + b
.
Kani will then report no issues.
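These explicit-overflow methods are part of Rust's standard integer API. A quick standalone illustration of the behaviors involved:

```rust
fn main() {
    assert_eq!(u32::MAX.wrapping_add(1), 0);          // wrap around on overflow
    assert_eq!(u32::MAX.checked_add(1), None);        // report overflow as None
    assert_eq!(u32::MAX.saturating_add(1), u32::MAX); // clamp at the maximum
    assert_eq!(3u32.wrapping_add(4), 7);              // normal case is unchanged
}
```

Choosing one of these makes the intended overflow behavior part of the code, so Kani (and readers) no longer have to guess whether wrapping was deliberate.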
Exercise: Classic overflow failure
A classic example of a subtle bug that persisted in many implementations for a very long time is "finding the midpoint" in quick sort. This often naively looks like this (code available here):
fn find_midpoint(low: u32, high: u32) -> u32 {
    return (low + high) / 2;
}
cargo kani --harness midpoint_overflow
Kani immediately spots the bug in the above code.
- Exercise: Fix this function so it no longer overflows. (Hint: depending on which approach you take, you may need to add the assumption that high > low to your proof harness. Don't add that right away; see what happens if you don't. Just keep it in mind.)
- Exercise: Prove your new implementation actually finds the midpoint correctly by adding an assertion to the test harness.
Click to see solutions for these exercises
A very common approach for resolving the overflow issue looks like this:
return low + (high - low) / 2;
But if you naively try this (try it!), you'll find a new underflow error: high - low
might result in a negative number, but has type u32
.
Hence, the need to add the assumption we suggested above, to make that impossible.
(Adding an assumption, though, means there's a new way to "use it wrong." Perhaps we'd like to avoid that! Can you avoid the assumption?)
After that, you might wonder how to "prove your new implementation correct." After all, what does "correct" even mean? Often we're using a good approximation of correct, such as the equivalence of two implementations (often one much "simpler" than the other somehow). Here's one possible assertion we could write in the proof harness:
assert!(result as u64 == (a as u64 + b as u64) / 2);
You might have even come up with this approach to avoiding the overflow issue in the first place! Having two different implementations, using different approaches, but proven to yield the same results, gives us greater confidence that we compute the correct result.
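Putting the pieces together, one possible fixed implementation looks like the following sketch. The harness is gated behind #[cfg(kani)] since it uses Kani-only APIs; the variable names a and b follow the assertion shown above:

```rust
// One possible overflow-free midpoint (requires high >= low):
fn find_midpoint(low: u32, high: u32) -> u32 {
    // high - low cannot underflow once we know high >= low, and
    // adding half the gap to low cannot overflow u32.
    low + (high - low) / 2
}

// Sketch of a harness with the assumption and the equivalence
// assertion from the text (only compiled under Kani):
#[cfg(kani)]
#[kani::proof]
fn midpoint_correct() {
    let a: u32 = kani::any();
    let b: u32 = kani::any();
    kani::assume(b >= a);
    let result = find_midpoint(a, b);
    // Widened arithmetic cannot overflow, so it serves as the
    // "simpler" reference implementation.
    assert!(result as u64 == (a as u64 + b as u64) / 2);
}
```

Note that a + (b - a) / 2 and (a + b) / 2 agree under integer division whenever b >= a, which is exactly what the assertion checks.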
Failures that Kani cannot spot
Check out Limitations for information on the checks that Kani does not perform. Notably, Kani is not prioritizing all Rust-specific notions of undefined behavior.
Summary
In this section:
- We saw Kani spot out-of-bounds accesses.
- We saw Kani spot actually-unsafe dereferencing of a raw pointer to invalid memory.
- We saw Kani spot a division by zero error and an overflowing addition.
- As an exercise, we tried proving an assertion (finding the midpoint) that was not completely trivial.
Loops, unwinding, and bounds
Consider code like this (available here):
fn initialize_prefix(length: usize, buffer: &mut [u8]) {
// Let's just ignore invalid calls
if length > buffer.len() {
return;
}
for i in 0..=length {
buffer[i] = 0;
}
}
This code has an off-by-one error that only occurs on the last iteration of the loop (when called with an input that will trigger it). We can try to find this bug with a proof harness like this:
#[cfg(kani)]
#[kani::proof]
#[kani::unwind(1)] // deliberately too low
fn check_initialize_prefix() {
const LIMIT: usize = 10;
let mut buffer: [u8; LIMIT] = [1; LIMIT];
let length = kani::any();
kani::assume(length <= LIMIT);
initialize_prefix(length, &mut buffer);
}
But we've just used a new attribute (#[kani::unwind(1)]) that requires some explanation.
When we run cargo kani on this code as we have written it, we see an odd verification failure:
SUMMARY:
** 1 of 67 failed (66 undetermined)
Failed Checks: unwinding assertion loop 0
VERIFICATION:- FAILED
If we try removing this "unwind" annotation and re-running Kani, the result is worse: non-termination. Kani simply doesn't produce a result.
The problem we're struggling with stems from the technique Kani uses to verify code: Kani cannot handle code with "unbounded" loops, and what counts as "bounded" can be quite subtle. A loop has to have a constant number of iterations that's "obviously constant" enough for the verifier to actually figure this out. In practice, very few loops are like this.
To verify programs like this with Kani as it exists today, we need to do two things:
- Set an upper bound on the size of the problem. We've actually already done part of this: our proof harness above seems to be trying to set an upper LIMIT of 10.
- Tell Kani about this limit if (or when) it's not able to figure it out on its own. This is the purpose of the kani::unwind annotation.
Bounding proofs like this means we may no longer be proving as much as we originally hoped. Who's to say, if we prove everything works up to size 10, that there isn't a novel bug lurking, reachable only with problems of size 11+? Perhaps! But, let's get back to the issue at hand.
By putting #[kani::unwind(1)] on the proof harness, we've placed an upper bound of 1 loop iteration.
The "unwinding assertion" failure that Kani reports is because this bound is not high enough.
The code tries to execute more than 1 loop iteration.
(And, because the unwinding isn't high enough, many of the other properties Kani is verifying become "undetermined": we don't really know if they're true or false, because we can't get far enough.)
Exercise: Try increasing the bound. Where might you start? How high do you need to go to get rid of the "unwinding assertion" failure?
Click to see explanation for the exercise
Since the proof harness is trying to limit the array to size 10, an initial unwind value of 10 seems like the obvious place to start. But that's not large enough for Kani, and we still see the "unwinding assertion" failure.
At size 11, the "unwinding assertion" goes away, and now we can see the actual failure we're trying to find too. We'll explain why we see this behavior in a moment.
Once we have increased the unwinding limit high enough, we're left with these failures:
SUMMARY:
** 1 of 68 failed
Failed Checks: index out of bounds: the length is less than or equal to the given index
File: "./src/lib.rs", line 12, in initialize_prefix
VERIFICATION:- FAILED
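The same off-by-one failure that Kani reports can be reproduced in plain Rust: the moment length equals the buffer length, the inclusive loop indexes one past the end and panics. A small standalone demonstration (outside any harness):

```rust
// The buggy function from the text: `0..=length` is inclusive, so
// the loop also writes buffer[length], one element past the prefix.
fn initialize_prefix(length: usize, buffer: &mut [u8]) {
    // Let's just ignore invalid calls
    if length > buffer.len() {
        return;
    }
    for i in 0..=length {
        buffer[i] = 0; // out of bounds when i == buffer.len()
    }
}
```

Calling initialize_prefix(9, &mut [1u8; 10]) succeeds (and, tellingly, zeroes all ten elements), while initialize_prefix(10, ...) panics with an index-out-of-bounds error.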
Exercise: Fix the off-by-one error, and get the (bounded) proof to go through.
We now return to the question: why is 11 the unwinding bound?
Kani needs the unwinding bound to be "one more than" the number of loop iterations. We previously had an off-by-one error that tried to do 11 iterations on an array of size 10. So... the unwinding bound needed to be 11, then.
NOTE: Presently, there are some situations where the "number of iterations of a loop" can be less obvious than it seems. This can be easily triggered by use of break or continue within loops. Often this manifests as needing "two more" or "three more" iterations in the unwind bound than the loop seems like it would actually run. In those situations, we might still need a bound like kani::unwind(13), despite the loop looking bounded to 10 iterations.
The approach we've taken here is a general method for getting a bounded proof to go through:
- Put an actual upper bound on the problem itself. Here that's accomplished via LIMIT in our proof harness. We don't create a slice any bigger than that, and that's what we loop over.
- Start at a reasonable guess for a kani::unwind bound, and increase until the unwinding assertion failure goes away.
- Or, if that starts to take too long to verify, decrease your problem's bound to accommodate the verifier's performance.
Unwinding value specification
The best approach to supplying Kani with unwind bounds is the kani::unwind annotation, as we show above.
You might want to supply one via the command line when experimenting, however.
In that case, you can use --default-unwind x to set an unwind bound for every proof harness that does not have an explicit bound.
Or you can override a harness's bound, but only when running that specific harness:
cargo kani --harness check_initialize_prefix --unwind 11
Finally, you might be interested in defaulting the unwind bound to 1, to force termination (and force supplying a bound) on all your proof harnesses. You can do this by putting this into your Cargo.toml file:
[workspace.metadata.kani.flags]
default-unwind = 1
Bounded proof
Before we finish, it's worth revisiting the implications of what we've done here. Kani frequently needs to do "bounded proof", which contrasts with unbounded or full verification.
We've written a proof harness that shows initialize_prefix has no errors on input slices of size 10, but no higher.
The particular size we choose is usually determined by balancing the level of assurance we want, versus runtime of Kani.
It's often not worth running proofs for large numbers of iterations, unless either very high assurance is necessary, or there's reason to suspect larger problems will contain novel failure modes.
Exercise: Try increasing the problem size (both the unwind and the LIMIT constant). When does it start to take more than a few seconds?
Click to see explanation for the exercise
On your friendly neighborhood author's machine, a LIMIT of 100 takes about 3.8 seconds end-to-end.
This is a relatively simple bit of code, though, and it's not uncommon for some proofs to scale poorly even to 5 iterations.
One consequence of this, however, is that Kani often scales poorly to "big string problems" like parsing. Often a parser will need to consume inputs larger than 10-20 characters to exhibit strange behaviors.
Summary
In this section:
- We saw Kani fail to terminate.
- We saw how #[kani::unwind(1)] can help force Kani to terminate (with a verification failure).
- We saw "unwinding assertions" verify that we've set the unwinding limit high enough.
- We saw how to put a practical bound on problem size in our proof harness.
- We saw how to pick an unwinding size large enough to successfully verify that bounded proof.
Nondeterministic variables
Kani is able to reason about programs and their execution paths by allowing users to create nondeterministic (also called symbolic) values using kani::any().
Kani is a "bit-precise" model checker, which means that Kani considers all the possible bit-value combinations that would be valid if assigned to a variable's memory contents.
In other words, kani::any() should not produce values that are invalid for the type (which would lead to Rust undefined behavior).
Out of the box, Kani includes kani::any() implementations for most primitive and some std types.
In this tutorial, we will show how to use kani::any() to create symbolic values for other types.
Safe nondeterministic variables
Let's say you're developing an inventory management tool, and you would like to start verifying properties about your API. Here is a simple example (available here):
use std::num::NonZeroU32;
use vector_map::VecMap;
pub type ProductId = u32;
pub struct Inventory {
/// Every product in inventory must have a non-zero quantity
pub inner: VecMap<ProductId, NonZeroU32>,
}
impl Inventory {
pub fn update(&mut self, id: ProductId, new_quantity: NonZeroU32) {
self.inner.insert(id, new_quantity);
}
pub fn get(&self, id: &ProductId) -> Option<NonZeroU32> {
self.inner.get(id).cloned()
}
}
Let's write a fairly simple proof harness, one that just ensures we successfully get the value we inserted with update:
#[kani::proof]
#[kani::unwind(3)]
pub fn safe_update() {
// Empty to start
let mut inventory = Inventory { inner: VecMap::new() };
// Create non-deterministic variables for id and quantity.
let id: ProductId = kani::any();
let quantity: NonZeroU32 = kani::any();
assert!(quantity.get() != 0, "NonZeroU32 is internally a u32 but it should never be 0.");
// Update the inventory and check the result.
inventory.update(id, quantity);
assert!(inventory.get(&id).unwrap() == quantity);
}
We use kani::any() twice here:
- id has type ProductId, which is actually just a u32, and so any value is fine.
- quantity, however, has type NonZeroU32. In Rust, it would be undefined behavior to have a value of 0 for this type.
We included an extra assertion that the value returned by kani::any() here was actually non-zero.
If we run this, you'll notice that verification succeeds.
cargo kani --harness safe_update
kani::any() is safe Rust, and so Kani only implements it for types where type invariants are enforced.
For NonZeroU32, this means we never return a 0 value.
The assertion we wrote in this harness was just an extra check we added to demonstrate this fact, not an essential part of the proof.
Custom nondeterministic types
While kani::any() is the only method Kani provides to inject nondeterminism into a proof harness, Kani only ships with implementations for a few std types where we can guarantee safety.
When you need nondeterministic variables of types that don't have a kani::any() implementation available, you have the following options:
- Implement the kani::Arbitrary trait for your type, so you and downstream crates can use kani::any() with your type.
- Implement the bolero_generator::TypeGenerator trait. This will enable you and downstream crates to use Kani via Bolero.
- Write a function that builds an object from nondeterministic variables.
We recommend the first approach for most cases.
The first approach is simple and conventional. It will also enable you to use your type with parameterized types, such as Option<MyType>, and arrays.
Kani includes a derive macro that allows you to automatically derive kani::Arbitrary for structures and enumerations as long as all their fields also implement the kani::Arbitrary trait.
One downside of this approach today is that the kani crate ships with Kani, but it's not yet available on crates.io.
So you need to annotate the Arbitrary implementation with a #[cfg(kani)] attribute.
For the derive macro, use #[cfg_attr(kani, derive(kani::Arbitrary))].
The second approach is recommended for cases where you would also like to be able to apply fuzzing or property testing.
The benefits of doing so were described in this blog post.
Like kani::Arbitrary, this trait can also be used with a derive macro.
One thing to be aware of is that this trait allows users to generate arbitrary values that include pointers.
In those cases, only the values pointed to are arbitrary, not the pointers themselves.
Finally, the last approach is recommended when you need to pass in parameters, like bounds on the size of the data structure. (Which we'll discuss more in the next section.) This approach is also necessary when you need to generate a nondeterministic variable of a type that you're importing from another crate, since Rust doesn't allow you to implement a trait defined in an external crate for a type that you don't own.
Either way, inside this function you would simply return an arbitrary value by generating arbitrary values for its components. To generate a nondeterministic struct, you would just generate nondeterministic values for each of its fields. For complex data structures like vectors or other containers, you can start with an empty one and add a (bounded) nondeterministic number of entries.
For example, for a simple enum you can just annotate it with the Arbitrary derive attribute:
#[derive(Copy, Clone)]
#[cfg_attr(kani, derive(kani::Arbitrary))]
pub enum Rating {
One,
Two,
Three,
}
But if the same enum is defined in an external crate, you can use a simple trick:
pub fn any_rating() -> Rating {
match kani::any() {
0 => Rating::One,
1 => Rating::Two,
_ => Rating::Three,
}
}
All we're doing here is making use of a nondeterministic integer to decide which variant of Rating to return.
NOTE: If we thought of this code as generating a random value, this function would look heavily biased. We'd overwhelmingly generate a Three, because it matches "all other integers besides 0 and 1." But Kani just sees 3 meaningful possibilities, none of which is treated any differently from the others. The "proportion" of integers does not matter.
Bounding nondeterministic variables
You can use kani::any() for [T; N] (if implemented for T) because this array type has an exact and constant size.
But if you wanted a slice ([T]) up to size N, you can no longer use kani::any() for that.
Likewise, there is no implementation of kani::any() for more complex data structures like Vec.
The trouble with a nondeterministic vector is that you usually need to bound the size of the vector, for the reasons we investigated in the last chapter.
The kani::any() function does not have any arguments, and so cannot be given an upper bound.
This does not mean you cannot have a nondeterministic vector. It just means you have to construct one.
Our example proof harness above constructs a nondeterministic Inventory of size 1, simply by starting with the empty Inventory and inserting a nondeterministic entry.
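The general construction pattern (start with an empty container, then add a bounded number of entries) can be sketched outside Kani. Here any_u32 is a hypothetical placeholder standing in for kani::any(), fixed to a constant value so the sketch compiles and runs in ordinary Rust:

```rust
// Placeholder for a nondeterministic value; under Kani this would
// simply be kani::any().
fn any_u32() -> u32 {
    42
}

// Build a vector with a nondeterministic length of at most `bound`.
fn any_vec(bound: usize) -> Vec<u32> {
    let len = (any_u32() as usize) % (bound + 1); // 0..=bound entries
    let mut v = Vec::new();
    for _ in 0..len {
        v.push(any_u32());
    }
    v
}
```

Under Kani, every entry (and the length itself) would be symbolic, so a single harness call covers all vectors up to the bound.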
Exercise
Try writing a function to generate a (bounded) nondeterministic inventory (from the first example):
fn any_inventory(bound: u32) -> Inventory {
// fill in here
}
One thing you'll quickly find is that the bounds must be very small.
Kani does not (yet!) scale well to nondeterministic-size data structures involving heap allocations.
A proof harness like safe_update above, but starting with any_inventory(2), will probably take a couple of minutes to prove.
A hint for this exercise: you might choose two different behaviors, "size of exactly bound" or "size up to bound". Try both!
A solution can be found in exercise_solution.rs.
Summary
In this section:
- We saw how kani::any() will return "safe" values for each of the types Kani implements it for.
- We saw how to implement kani::Arbitrary or just write a function to create nondeterministic values for other types.
- We noted that some types cannot implement kani::any() as they need a bound on their size.
- We did an exercise to generate nondeterministic values of bounded size for Inventory.
Reference
This section is the main reference for Kani. It contains sections that informally describe its main features.
Attributes
In Kani, attributes are used to mark functions as harnesses and control their execution. This section explains the attributes available in Kani and how they affect the verification process.
At present, the available Kani attributes are the following:
#[kani::proof]
#[kani::should_panic]
#[kani::unwind(<number>)]
#[kani::solver(<solver>)]
#[kani::stub(<original>, <replacement>)]
#[kani::proof]
The #[kani::proof] attribute specifies that a function is a proof harness.
Proof harnesses are similar to test harnesses, especially property-based test harnesses, and they may use functions from the Kani API (e.g., kani::any()).
A proof harness is the smallest verification unit in Kani.
When Kani is run, either through kani or cargo kani, it'll first collect all proof harnesses (i.e., functions with the attribute #[kani::proof]) and then attempt to verify them.
Example
If we run Kani on this example:
#[kani::proof]
fn my_harness() {
assert!(1 + 1 == 2);
}
We should see a line in the output that says Checking harness my_harness... (assuming my_harness is the only harness in our code).
This will be followed by multiple messages that come from CBMC (the verification engine used by Kani) and the verification results.
Using any other Kani attribute without #[kani::proof] will result in compilation errors.
Limitations
The #[kani::proof] attribute can only be added to functions without parameters.
#[kani::should_panic]
The #[kani::should_panic] attribute specifies that a proof harness is expected to panic.
This attribute allows users to exercise negative verification.
It's analogous to how #[should_panic] allows users to exercise negative testing for Rust unit tests.
This attribute only affects the overall verification result.
In particular, using the #[kani::should_panic] attribute will return one of the following results:
- VERIFICATION:- FAILED (encountered no panics, but at least one was expected) if there were no failed checks.
- VERIFICATION:- FAILED (encountered failures other than panics, which were unexpected) if there were failed checks but not all of them were related to panics.
- VERIFICATION:- SUCCESSFUL (encountered one or more panics as expected) otherwise.
At the moment, to determine if a check is related to a panic, we check if its class is assertion.
The class is the second member in the property name, the triple that's printed after Check X:: <function>.<class>.<number>.
For example, the class in Check 1: my_harness.assertion.1 is assertion, so this check is considered to be related to a panic.
NOTE: The #[kani::should_panic] attribute is only recommended for writing harnesses which complement existing harnesses that don't use the same attribute. In other words, it's only recommended to write negative harnesses after having written positive harnesses that successfully verify interesting properties about the function under verification.
Limitations
The #[kani::should_panic] attribute verifies that there are one or more failed checks related to panics.
At the moment, it's not possible to pin it down to specific panics.
Therefore, it's possible that the panics detected with #[kani::should_panic] aren't the ones that were originally expected after a change in the code under verification.
Example
Let's assume we're using the Device from this example:
struct Device {
is_init: bool,
}
impl Device {
fn new() -> Self {
Device { is_init: false }
}
fn init(&mut self) {
assert!(!self.is_init);
self.is_init = true;
}
}
We may want to verify that calling device.init() more than once results in a panic. We can do so with the following harness:
#[kani::proof]
#[kani::should_panic]
fn cannot_init_device_twice() {
let mut device = Device::new();
device.init();
device.init();
}
Running Kani on it will produce the result VERIFICATION:- SUCCESSFUL (encountered one or more panics as expected).
#[kani::unwind(<number>)]
The #[kani::unwind(<number>)] attribute specifies that all loops must be unwound up to <number> times.
By default, Kani attempts to unwind all loops automatically.
However, this unwinding process doesn't always terminate.
The #[kani::unwind(<number>)] attribute will:
- Disable automatic unwinding.
- Unwind all loops up to <number> times.
After the unwinding stage, Kani will attempt to verify the harness.
If the #[kani::unwind(<number>)] attribute was specified, there's a chance that one or more loops weren't unwound enough times.
In that case, there will be at least one failed unwinding assertion (there's one unwinding assertion for each loop), causing verification to fail.
Check the Loops, unwinding and bounds section for more information about unwinding.
Example
Let's assume we've written this code which contains a loop:
fn my_sum(vec: &Vec<u32>) -> u32 {
let mut sum = 0;
for elem in vec {
sum += elem;
}
sum
}
#[kani::proof]
fn my_harness() {
let vec = vec![1, 2, 3];
let sum = my_sum(&vec);
assert!(sum == 6);
}
Running this example on Kani will produce a successful verification result.
In this case, Kani automatically finds the required unwinding value (i.e., the number of times it needs to unwind all loops).
This means that the #[kani::unwind(<number>)] attribute isn't needed, as we'll see soon.
In general, the required unwinding value is equal to the maximum number of iterations for all loops, plus one.
The required unwinding value in this example is 4: the 3 iterations in the for elem in vec loop, plus 1.
Let's see what happens if we force a lower unwinding value with #[kani::unwind(3)]:
#[kani::proof]
#[kani::unwind(3)]
fn my_harness() {
let vec = vec![1, 2, 3];
let sum = my_sum(&vec);
assert!(sum == 6);
}
As we mentioned, trying to verify this harness causes an unwinding failure:
SUMMARY:
** 1 of 187 failed (186 undetermined)
Failed Checks: unwinding assertion loop 0
File: "/home/ubuntu/devices/src/main.rs", line 32, in my_sum
VERIFICATION:- FAILED
[Kani] info: Verification output shows one or more unwinding failures.
[Kani] tip: Consider increasing the unwinding value or disabling `--unwinding-assertions`.
Kani cannot verify the harness because there is at least one unwinding assertion failure.
But, if we use #[kani::unwind(4)], which is the right unwinding value we computed earlier:
#[kani::proof]
#[kani::unwind(4)]
fn my_harness() {
let vec = vec![1, 2, 3];
let sum = my_sum(&vec);
assert!(sum == 6);
}
We'll get a successful result again:
SUMMARY:
** 0 of 186 failed
VERIFICATION:- SUCCESSFUL
#[kani::solver(<solver>)]
Changes the solver to be used by Kani's verification engine (CBMC).
This may change the verification time required to verify a harness.
At present, <solver> can be one of:
- minisat: MiniSat.
- cadical (default): CaDiCaL.
- kissat: kissat.
- bin="<SAT_SOLVER_BINARY>": a custom solver binary, "<SAT_SOLVER_BINARY>", that must be in path.
Example
Kani will use the CaDiCaL solver in the following example:
#[kani::proof]
#[kani::solver(cadical)]
fn check() {
let mut a = [2, 3, 1];
a.sort();
assert_eq!(a[0], 1);
assert_eq!(a[1], 2);
assert_eq!(a[2], 3);
}
Changing the solver may result in different verification times depending on the harness.
Note that the default solver may vary depending on Kani's version. We highly recommend that users annotate their harnesses if the choice of solver has a major impact on performance, even if the solver used is the current default one.
#[kani::stub(<original>, <replacement>)]
Replaces the function/method with name <original> with the function/method with name <replacement> during compilation.
Check the Stubbing section for more information about stubbing.
Experimental Features
We elaborate on some of the more commonly used experimental features in Kani.
This is not an exhaustive list; to see all of Kani's experimental features, run cargo kani --help.
To use an experimental feature, invoke Kani with the --unstable or -Z flag followed by the name of the feature.
Coverage
Recall our estimate_size example from First steps, where we wrote a proof harness constraining the range of inputs to integers less than 4096:
#[cfg(kani)]
#[kani::proof]
fn verify_success() {
let x: u32 = kani::any();
kani::assume(x < 4096);
let y = estimate_size(x);
assert!(y < 10);
}
We must wonder if we've really fully tested our function. What if we revise the function, but forget to update the assumption in our proof harness to cover the new range of inputs?
Fortunately, Kani is able to report a coverage metric for each proof harness.
In the first-steps-v2 directory, try running:
cargo kani --coverage -Z source-coverage --harness verify_success
which verifies the harness, then prints coverage information for each line.
In this case, we see that each line of estimate_size is followed by FULL, indicating that our proof harness provides full coverage.
Try changing the assumption in the proof harness to x < 2048.
Now the harness won't be testing all possible cases.
Rerun the command.
You'll see this line:
src/lib.rs, 24, NONE
which indicates that the proof no longer covers line 24, which addresses the case where x >= 2048.
Stubbing
Stubbing (or mocking) is an unstable feature which allows users to specify that certain items should be replaced with stubs (mocks) of those items during verification. At present, the only items where stubbing can be applied are functions and methods (see limitations for more details).
When to consider stubbing
In general, we have identified three reasons where users may consider stubbing:
- Unsupported features: The code under verification contains features that Kani does not support, such as inline assembly.
- Bad performance: The code under verification contains features that Kani supports, but it leads to bad verification performance (for example, deserialization code).
- Compositional reasoning: The code under verification contains code that has been verified separately. Stubbing the code that has already been verified with a less complex version that mimics its behavior can result in reduced verification workloads.
In most cases, stubbing enables users to verify code that otherwise would be impractical to verify. Although definitions for mocking (normally used in testing) and stubbing may slightly differ depending on who you ask, we often use both terms interchangeably.
Components
The stubbing feature can be enabled by using the --enable-stubbing option when calling Kani.
Since it's an unstable feature, it requires passing the --enable-unstable option in addition to --enable-stubbing.
At present, the only component of the stubbing feature is the #[kani::stub(<original>, <replacement>)] attribute, which allows you to specify the pair of functions/methods that must be stubbed in a harness.
The #[kani::stub(...)] attribute
The stub attribute #[kani::stub(<original>, <replacement>)] is the main tool of the stubbing feature.
It indicates to Kani that the function/method with name <original> should be replaced with the function/method with name <replacement> during the compilation step.
The names of these functions/methods are resolved using Rust's standard name resolution rules.
This includes support for imports like use foo::bar as baz, as well as imports of multiple versions of the same crate.
This attribute must be specified on a per-harness basis. This provides a high degree of flexibility for users, since they are given the option to stub the same item with different replacements (or not use stubbing at all) depending on the proof harness. In addition, the attribute can be specified multiple times per harness, so that multiple (non-conflicting) stub pairings are supported.
An example: stubbing random
Let's see a simple example where we use the rand::random function to generate an encryption key.
#[cfg(kani)]
#[kani::proof]
fn encrypt_then_decrypt_is_identity() {
let data: u32 = kani::any();
let encryption_key: u32 = rand::random();
let encrypted_data = data ^ encryption_key;
let decrypted_data = encrypted_data ^ encryption_key;
assert_eq!(data, decrypted_data);
}
At present, Kani fails to verify this example due to issue #1781.
However, we can work around this limitation thanks to the stubbing feature:
#[cfg(kani)]
fn mock_random<T: kani::Arbitrary>() -> T {
kani::any()
}
#[cfg(kani)]
#[kani::proof]
#[kani::stub(rand::random, mock_random)]
fn encrypt_then_decrypt_is_identity() {
let data: u32 = kani::any();
let encryption_key: u32 = rand::random();
let encrypted_data = data ^ encryption_key;
let decrypted_data = encrypted_data ^ encryption_key;
assert_eq!(data, decrypted_data);
}
Here, the #[kani::stub(rand::random, mock_random)] attribute indicates to Kani that it should replace rand::random with the stub mock_random.
Note that this is a fair assumption to make: rand::random is expected to return any u32 value, just like kani::any.
Now, let's run it through Kani:
cargo kani --enable-unstable --enable-stubbing --harness encrypt_then_decrypt_is_identity
The verification result is composed of a single check: the assertion corresponding to assert_eq!(data, decrypted_data).
RESULTS:
Check 1: encrypt_then_decrypt_is_identity.assertion.1
- Status: SUCCESS
- Description: "assertion failed: data == decrypted_data"
- Location: src/main.rs:18:5 in function encrypt_then_decrypt_is_identity
SUMMARY:
** 0 of 1 failed
VERIFICATION:- SUCCESSFUL
Kani shows that the assertion is successful, avoiding any issues that appear if we attempt to verify the code without stubbing.
Limitations
In the following, we describe all the limitations of the stubbing feature.
Usage restrictions
The usage of stubbing is limited to the verification of a single harness.
Therefore, users are required to pass the --harness option when using the stubbing feature.
In addition, this feature isn't compatible with concrete playback.
Support
Support for stubbing is currently limited to functions and methods. All other items aren't supported.
The following are examples of items that could be good candidates for stubbing, but aren't supported:
- Types
- Macros
- Traits
- Intrinsics
We acknowledge that support for method stubbing isn't as ergonomic as it could be.
A common problem when attempting to define method stubs is that we don't have access to the private fields of an object (i.e., the fields in self).
One workaround is to use the unsafe function std::mem::transmute, as in this example:
struct Foo {
x: u32,
}
impl Foo {
pub fn m(&self) -> u32 {
0
}
}
struct MockFoo {
pub x: u32,
}
fn mock_m(foo: &Foo) -> u32 {
let mock: &MockFoo = unsafe { std::mem::transmute(foo) };
return mock.x;
}
#[cfg(kani)]
#[kani::proof]
#[kani::stub(Foo::m, mock_m)]
fn my_harness() { ... }
However, this isn't recommended since it's unsafe and error-prone. In general, we don't recommend stubbing for private functions/methods. Doing so can lead to brittle proofs: private functions/methods are subject to change or removal even in version minor upgrades (they aren't part of the APIs). Therefore, proofs that rely on stubbing for private functions/methods might incur a high maintenance burden.
Error conditions
Given a set of original-replacement pairs, Kani will exit with an error if:
- a specified original function does not exist;
- a specified replacement stub does not exist;
- the user specifies conflicting stubs for the same harness (e.g., if the same original function is mapped to multiple replacement functions); or
- the signature of the replacement stub is not compatible with the signature of the original function/method (see next section).
Stub compatibility and validation
We consider a stub and a function/method to be compatible if all the following conditions are met:
- They have the same number of parameters.
- They have the same return type.
- Each parameter in the stub has the same type as the corresponding parameter in the original function/method.
- The stub must have the same number of generic parameters as the original function/method. However, a generic parameter in the stub is allowed to have a different name than the corresponding parameter in the original function/method. For example, the stub bar<A, B>(x: A, y: B) -> B is considered to have a type compatible with the function foo<S, T>(x: S, y: T) -> T.
- The bounds for each type parameter don't need to match; however, all calls to the original function must also satisfy the bounds of the stub.
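To illustrate these rules, here is a small sketch (the function and stub names are hypothetical, mirroring the foo/bar signatures above) of a stub whose generic parameters have different names but identical arity and return type, and so is compatible. The harness is guarded by cfg(kani), so the file also compiles and runs with plain rustc:

```rust
// Hypothetical original function: returns its second argument.
fn foo<S, T>(_x: S, y: T) -> T {
    y
}

// Compatible stub: same number of parameters and same return type;
// the generic parameters are merely named A, B instead of S, T.
fn bar<A, B>(_x: A, y: B) -> B {
    y
}

// Only compiled when running under Kani.
#[cfg(kani)]
#[kani::proof]
#[kani::stub(foo, bar)]
fn harness() {
    assert_eq!(foo(1u8, 2u32), 2);
}

fn main() {
    // Outside Kani, both functions behave identically here.
    assert_eq!(foo("a", 7), bar("b", 7));
}
```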
The final point is the most subtle. We don't require that a type parameter in the signature of the stub implements the same traits as the corresponding type parameter in the signature of the original function/method. However, Kani will reject a stub if a trait mismatch leads to a situation where a statically dispatched call to a trait method cannot be resolved during monomorphization. For example, this restriction rules out the following harness:
fn foo<T>(_x: T) -> bool {
false
}
trait DoIt {
fn do_it(&self) -> bool;
}
fn bar<T: DoIt>(x: T) -> bool {
x.do_it()
}
#[kani::proof]
#[kani::stub(foo, bar)]
fn harness() {
assert!(foo("hello"));
}
The call to the trait method DoIt::do_it is unresolvable in the stub bar when the type parameter T is instantiated with the type &str.
On the other hand, this approach provides some flexibility, such as allowing our earlier example of mocking rand::random: both rand::random and my_random have type () -> T, but in the first case T is restricted such that the type Standard implements Distribution<T>, whereas in the latter case T has to implement kani::Arbitrary. This trait mismatch is allowed because at this call site T is instantiated with u32, which implements kani::Arbitrary.
Contracts
Consider the following example:
fn gcd(mut max: u8, mut min: u8) -> u8 {
if min > max {
std::mem::swap(&mut max, &mut min);
}
let rest = max % min;
if rest == 0 { min } else { gcd(min, rest) }
}
Let's assume we want to verify some code that calls gcd. In the worst case, the number of steps (recursions) in gcd approaches 1.5 times the number of bits needed to represent the input numbers. So, for two large 64-bit numbers, a single call to gcd can take almost 96 iterations. It would be very expensive for Kani to unroll each of these iterations and then perform symbolic execution. Instead, we can write contracts with guarantees about gcd's behavior. Once Kani verifies that gcd's contracts are correct, it can replace each invocation of gcd with its contracts, which reduces verification time for gcd's callers.
For example, perhaps we want to ensure that the returned result does indeed divide both max and min. In that case, we could write contracts like these:
#[kani::requires(min != 0 && max != 0)]
#[kani::ensures(|result| *result != 0 && max % *result == 0 && min % *result == 0)]
#[kani::recursion]
fn gcd(mut max: u8, mut min: u8) -> u8 { ... }
Since gcd performs max % min (and perhaps swaps those values), passing zero as an argument could cause a division by zero. The requires contract tells Kani to restrict the range of nondeterministic inputs to nonzero ones so that we don't run into this error. The ensures contract is what actually checks that the result is a correct divisor for the inputs. (The recursion attribute is required when using contracts on recursive functions.)
Then, we would write a harness to verify those contracts, like so:
#[kani::proof_for_contract(gcd)]
fn check_gcd() {
let max: u8 = kani::any();
let min: u8 = kani::any();
gcd(max, min);
}
and verify it by running kani -Z function-contracts.
Once Kani verifies the contracts, we can use Kani's stubbing feature to replace all invocations of gcd with its contracts, for instance:
// Assume foo() invokes gcd().
// By using stub_verified, we tell Kani to replace
// invocations of gcd() with its verified contracts.
#[kani::proof]
#[kani::stub_verified(gcd)]
fn check_foo() {
let x: u8 = kani::any();
foo(x);
}
By leveraging the stubbing feature, we can replace the (expensive) gcd call with a verified abstraction of its behavior, greatly reducing verification time for foo.
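For intuition about what the ensures clause promises, the postcondition can also be spot-checked on a few concrete inputs with plain Rust; unlike Kani's exhaustive check, this is only a sanity sketch over hand-picked values:

```rust
// Same gcd as above.
fn gcd(mut max: u8, mut min: u8) -> u8 {
    if min > max {
        std::mem::swap(&mut max, &mut min);
    }
    let rest = max % min;
    if rest == 0 { min } else { gcd(min, rest) }
}

fn main() {
    // Spot-check the `ensures` property: the result is nonzero
    // and divides both inputs.
    for (max, min) in [(12u8, 8u8), (7, 21), (100, 75)] {
        let result = gcd(max, min);
        assert!(result != 0 && max % result == 0 && min % result == 0);
    }
    assert_eq!(gcd(12, 8), 4);
}
```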
There is far more to learn about contracts. We highly recommend reading our blog post about contracts (from which this gcd example is taken). We also recommend looking at the contracts module in our documentation.
Concrete Playback
When the result of a certain check comes back as a FAILURE, Kani offers the concrete-playback option to help debug. This feature generates a Rust unit test case that plays back a failing proof harness using a concrete counterexample.
When concrete playback is enabled, Kani will generate unit tests for assertions that failed during verification, as well as cover statements that are reachable.
These tests can then be executed using Kani's playback subcommand.
Usage
In order to enable this feature, run Kani with the -Z concrete-playback --concrete-playback=[print|inplace] flag.
After getting a verification failure, Kani will generate a Rust unit test case that plays back a failing
proof harness with a concrete counterexample.
The concrete playback modes mean the following:
- print: Kani will just print the unit test to stdout. You will then need to copy this unit test into the same module as your proof harness. This is also helpful if you just want to quickly find out which values were assigned by kani::any() calls.
- inplace: Kani will automatically copy the unit test into your source code. Before running this mode, you might find it helpful to have your existing code committed to git. That way, you can easily remove the unit test with git revert. Note that Kani will not copy the unit test into your source code if it detects that the exact same test already exists.
After the unit test is in your source code, you can run it with the playback subcommand.
To debug it, there are a couple of options:
- You can try Kani's experimental extension provided for VSCode.
- Otherwise, you can debug the unit test on the command line.
To manually compile and run the test, you can use Kani's playback subcommand:
cargo kani playback -Z concrete-playback -- ${unit_test_func_name}
The output from this command is similar to cargo test. The output will have a line in the beginning like Running unittests {files} ({binary}). You can further debug the binary with tools like rust-gdb or lldb.
Example
Running kani -Z concrete-playback --concrete-playback=print on the following source file:
#[kani::proof]
fn proof_harness() {
let a: u8 = kani::any();
let b: u16 = kani::any();
assert!(a / 2 * 2 == a &&
b / 2 * 2 == b);
}
yields a concrete playback Rust unit test similar to the one below:
#[test]
fn kani_concrete_playback_proof_harness_16220658101615121791() {
let concrete_vals: Vec<Vec<u8>> = vec![
// 133
vec![133],
// 35207
vec![135, 137],
];
kani::concrete_playback_run(concrete_vals, proof_harness);
}
Here, 133 and 35207 are the concrete values that, when substituted for a and b, cause an assertion failure. vec![135, 137] is the byte array representation of 35207.
Request for comments
This feature is experimental and is therefore subject to change. If you have ideas for improving the user experience of this feature, please add them to this GitHub issue. We are tracking the existing feature requests in this GitHub milestone.
Limitations
- This feature does not generate unit tests for failing non-panic checks (e.g., UB checks). This is because such checks would not trigger runtime errors during concrete playback. Kani generates warning messages for this.
- This feature does not support generating unit tests for multiple assertion failures within the same harness. This limitation might be removed in the future. Kani generates warning messages for this.
- This feature requires that you use the same Kani version to generate the test and to play it back. Any extra compilation option used during verification must also be used during playback.
Application
You may be interested in applying Kani if you're in this situation:
- You're working on a moderately important project in Rust.
- You've already invested heavily in testing to ensure correctness.
- You want to invest further, to gain a much higher degree of assurance.
If you haven't already, we also recommend techniques like property testing and fuzzing (e.g., with bolero). These yield good results, are very cheap to apply, and are often easy to adopt and debug.
In this section, we explain how Kani compares with other tools and suggest where to start applying Kani in real code.
Comparison with other tools
Fuzzing (for example, with cargo-fuzz) is an unguided approach to random testing. A fuzzer generally provides an input of random bytes, and then examines fairly generic properties (such as "doesn't crash" or "doesn't commit undefined behavior") of the resulting program. Fuzzers generally get their power through a kind of evolutionary algorithm that rewards new mutant inputs that "discover" new branches of the program under test. Fuzzers are excellent for testing security boundaries, precisely because they make no validity assumptions (hence, they are "unguided") when generating the input.
Property testing (for example, with Proptest) is a guided approach to random testing. "Guided" in the sense that the test generally provides a strategy for generating random values that constrains their range. The purpose of this strategy is to either focus on interesting values, or avoid failing assertions that only hold for a constrained set of inputs. Tests in this style do actually state properties: For all inputs (of some constrained kind), this condition should hold.
Property testing is often quite effective, but the engine can't fully prove the property: it can only randomly sample a few of those values to test (though property testing libraries frequently give interesting "edge cases" a higher probability, making them more effective at bug-finding).
Model checking is similar to these techniques in how you use them, but it's non-random and exhaustive (though often only up to some bound on input or problem size). Thus, properties checked with a model checker are effectively proofs. Instead of naively trying all possible concrete inputs (which could be infeasible and blow up exponentially), model checkers like Kani will cleverly encode program traces as symbolic "SAT/SMT" problems, and hand them off to SAT/SMT solvers. SAT/SMT solving is an NP-complete problem, but many practical programs can be model-checked within milliseconds to seconds (with notable exceptions: you can easily try to reverse a cryptographic hash with a model checker, but good luck getting it to terminate!)
Model checking allows you to prove non-trivial properties about programs, and check those proofs in roughly the same amount of time as a traditional test suite would take to run. The downside is many types of properties can quickly become "too large" to practically model-check, and so writing "proof harnesses" (very similar to property tests and fuzzer harnesses) requires some skill to understand why the solver is not terminating and fix the structure of the problem you're giving it so that it does. This process basically boils down to "debugging" the proof.
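To make the contrast concrete, here is a toy sketch: random sampling can miss a failing input, whereas an exhaustive check over the input space (which a model checker achieves symbolically rather than by brute force) cannot. The property below is deliberately false for odd values:

```rust
// Property: for every u8, x / 2 * 2 == x. This fails for odd x.
fn property(x: u8) -> bool {
    x / 2 * 2 == x
}

fn main() {
    // A random test might happen to sample only even values;
    // an exhaustive check over the (small) u8 space cannot miss the bug.
    let counterexamples: Vec<u8> = (0..=u8::MAX).filter(|&x| !property(x)).collect();
    assert_eq!(counterexamples.len(), 128); // every odd u8 fails
    assert_eq!(counterexamples[0], 1); // smallest counterexample
}
```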
Looking for concurrency?
At present, Kani does not support verifying concurrent code. Two tools of immediate interest are Loom and Shuttle. Loom attempts to check all possible interleavings, while Shuttle chooses interleavings randomly. The former is sound (like Kani), but the latter is more scalable to large problem spaces (like property testing).
Other tools
The Rust Formal Methods Interest Group maintains a list of interesting Rust verification tools.
Where to start on real code
It can be daunting to find the right place to start writing proofs for a real-world project. This section will try to help you get over that hurdle.
In general, you're trying to do three things:
- Find a place where it'd be valuable to have a proof.
- Find a place where it won't be too difficult to prove something, just to start.
- Figure out what a feasible longer-term goal might be.
By far, the best strategy is to follow your testing. Places where proof will be valuable are often places where you've written a lot of tests, because they're valuable there for the same reasons. Likewise, code structure changes to make functions more unit-testable will also make functions more amenable to proof. Often, by examining existing unit tests (and especially property tests), you can easily find a relatively self-contained function that's a good place to start.
Where is proof valuable?
- Where complicated things happen with untrusted user input. These are often the critical "entry points" into the code. These are also places where you probably want to try other techniques (e.g., fuzz testing).
- Where unsafe is used extensively. These are often places where you'll already have concentrated a lot of tests.
- Where you have a complicated implementation that accomplishes a much simpler abstract problem. These are ideal places for property testing, if you haven't tried that already. But the usual style of property tests you often write here (generate large random lists of operations, then compare between concrete and abstract model) won't be practical to directly port to model checking.
- Where normal testing "smells" intractable.
Where is it easier to start?
- Find crates or files with smaller lists of dependencies. Dependencies can sometimes blow up the tractability of proofs. This can usually be handled, but requires a lot more investment to make it happen, and so isn't a good place to start.
- Don't forget to consider starting with your dependencies. Sometimes the best place to start won't be your code, but the code that you depend on. If it's used by more projects than just yours, it will be valuable to more people, too!
- Find well-tested code. When you make changes to improve the unit-testability of code, that also makes it more amenable to proof.
Here are some things to avoid, when starting out:
- Lots of loops, or at least nested loops. As we saw in the tutorial, right now we often need to put upper bounds on loops to make more limited claims.
- Inductive data structures. These are data structures with unbounded size (e.g., linked lists or trees). These can be hard to model since you need to set bounds on their size, similar to what happens with loops.
- Input/output code. Kani doesn't model I/O, so if your code depends on behavior like reading/writing to a file, you won't be able to prove anything. This is one obvious area where testability helps provability: often we separate I/O and "pure" computation into different functions, so we can unit-test the latter.
- Deeper call graphs. Functions that call a lot of other functions can require more investment to make tractable. They may not be a good starting point.
- Significant global state. Rust tends to discourage this, but it still exists in some forms.
Your first proof
A first proof will likely start in the following form:
- Nondeterministically initialize variables that will correspond to function inputs, with as few constraints as possible.
- Call the function in question with these inputs.
- Don't (yet) assert any post-conditions.
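In Rust terms, that starting shape might look like the following sketch (the function under test, checked_add_one, is a made-up placeholder for your own code):

```rust
// Hypothetical function under test.
fn checked_add_one(x: u8) -> Option<u8> {
    x.checked_add(1)
}

// The first harness: nondeterministic input, call the function,
// no post-conditions asserted yet. Guarded by cfg(kani) so the
// file still compiles with plain rustc.
#[cfg(kani)]
#[kani::proof]
fn check_add_one() {
    let x: u8 = kani::any();
    let _ = checked_add_one(x);
}

fn main() {
    // Plain execution for illustration; Kani explores all inputs.
    assert_eq!(checked_add_one(1), Some(2));
    assert_eq!(checked_add_one(u8::MAX), None);
}
```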
Running Kani on this simple starting point will help figure out:
- What unexpected constraints might be needed on your inputs (using kani::assume) to avoid "expected" failures.
- Whether you're over-constrained. Check the coverage report using --coverage -Z source-coverage. Ideally you'd see 100% coverage, and if not, it's usually because you've assumed too much (thus over-constraining the inputs).
- Whether Kani will support all the Rust features involved.
- Whether you've started with a tractable problem.
(Remember to try setting #[kani::unwind(1)] to force early termination and work up from there.)
Once you've got something working, the next step is to prove more interesting properties than just what Kani covers by default. You accomplish this by adding new assertions (not just in your harness, but also to the code being run). Even if a proof harness has no post-conditions being asserted directly, the assertions encountered along the way can be meaningful proof results by themselves.
Examples of the use of Kani
On the Kani blog, we've documented worked examples of applying Kani:
Developer documentation
Kani is an open source project open to external contributions.
The easiest way to contribute is to report any issue you encounter while using the tool. If you want to contribute to its development, we recommend looking into these issues.
In this chapter, we provide documentation that might be helpful for Kani developers (including external contributors):
- Coding conventions.
- Useful command-line instructions for Kani/CBMC/Git.
- Development setup recommendations for working with cbmc.
- Development setup recommendations for working with rustc.
- Guide for testing in Kani.
- Transition to StableMIR.
NOTE: The developer documentation is intended for Kani developers, not users. At present, the project is under heavy development and some items discussed in this documentation may stop working without notice (e.g., commands or configurations). Therefore, we recommend that users not rely on them.
Coding conventions
Formatting
We automate most of our formatting preferences. Our CI will run format checkers for PRs and pushes. These checks are required for merging any PR.
For Rust, we use rustfmt, which is configured via the rustfmt.toml file. We are also in the process of enabling clippy. Because of that, we still have a couple of lints disabled (see .cargo/config for the updated list).
We also have a bit of C and Python code in our repository. For C we use clang-format, and for Python scripts we use autopep8. See .clang-format and pyproject.toml for their configuration.
Exceptions
We recognize that in some cases, the formatting and lint automation may not be applicable to specific code. In those cases, we usually prefer explicitly allowing exceptions by locally disabling the check, e.g., by using an #[allow] annotation instead of disabling a lint at the crate or project level.
Copyright notice
All source code files begin with a copyright and license notice. If this is a new file, please add the following notice:
// Copyright Kani Contributors
// SPDX-License-Identifier: Apache-2.0 OR MIT
When modifying a file from another project, please keep their headers as is and append the following notice after them:
// ... existing licensing headers ...
// Modifications Copyright Kani Contributors
// See GitHub history for details.
Note: The comment syntax will depend on the type of file you are working with. E.g., for Rust, start the header with //, but for Python, start with #.
We also have automated checks for the copyright notice. There are a few file types where this rule doesn't apply. You can see that list in the copyright-exclude file.
Code for soundness
We are developing Kani to provide assurance that critical Rust components are verifiably free of certain classes of security and correctness issues. Thus, it is critical that we provide a verification tool that is sound. For the class of errors that Kani can verify, we should not produce a "No Error" result if there was in fact an error in the code being verified, i.e., it has no "False Negatives".
Because of that, we err on the side of correctness. Any incorrect modeling that may trigger an unsound analysis and that cannot be fixed in the short term should be mitigated. Here are a few of the ways we do that.
Compilation errors
Make sure to add user-friendly errors for constructs that we can't handle. For example, Kani cannot handle the panic unwind strategy, and it will fail compilation if the crate uses this configuration.
In general, it's preferred that error messages follow these guidelines used for rustc development. If the errors are being emitted from kani-compiler, you should use the compiler error message utilities (e.g., the Session::span_err method). However, if the errors are being emitted from kani-driver, you should use the functions provided in the util module in kani-driver.
Internal compiler errors
Even though this doesn't provide users the best experience, you are encouraged to add checks in the compiler for any assumptions you make during development. Those checks can be in the form of assert!() or unreachable!() statements. Please provide a meaningful message to help users understand why something failed, and try to explain, at least with a comment, why this is the case.
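For example, such internal checks might look like this sketch (both helper functions are hypothetical, not actual kani-compiler code):

```rust
// Hypothetical helper: assumption stated in the assertion message,
// so a failure points developers at the invariant that broke.
fn check_not_zst(layout_size: usize) {
    // Assumption: callers filter out zero-sized types first.
    assert!(layout_size > 0, "unexpected ZST: expected caller to filter zero-sized types");
}

// Hypothetical helper: explain why the remaining arm cannot happen,
// instead of using a bare panic.
fn lower_bool(value: u8) -> bool {
    match value {
        0 => false,
        1 => true,
        _ => unreachable!("booleans are always encoded as 0 or 1, got {value}"),
    }
}

fn main() {
    check_not_zst(4);
    assert!(lower_bool(1));
    assert!(!lower_bool(0));
}
```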
We don't use any specific formal representation of function contracts, but whenever possible we do instrument the code with assertions that may represent the function pre- and post-conditions, to ensure we are modeling the user code correctly.
Verification errors
In cases where Kani fails to model a certain instruction or local construct that doesn't have a global effect, we encode this failure as a verification error. That is, we generate an assertion failure instead of the construct we are modeling, using codegen_unimplemented(), which blocks the execution whenever this construct is reached. This allows users to verify their crate successfully as long as that construct is not reachable in any harness. If a harness has at least one possible execution path that reaches such a construct, Kani will fail the verification and mark all checks, other than the failed checks, with UNDETERMINED status.
Create detailed issues for "TODO" tasks
It is OK to add "TODO" comments as long as they don't compromise user experience or the tool correctness. When doing so, please create an issue that captures the task. Add details about the task at hand including any impact to the user. Finally, add the link to the issue that captures the "TODO" task as part of your comment.
E.g.:
// TODO: This function assumes type cannot be ZST. Check if that's always the case.
// https://github.com/model-checking/kani/issues/XXXX
assert!(!typ.is_zst(), "Unexpected ZST type");
Performant but readable
We aim to write code that is performant but also readable and easy to maintain. Avoid compromising code quality if the performance gain is not significant.
Here are a few tips that can help the readability of your code:
- Sort match arms, enum variants, and struct fields alphabetically.
- Prefer concise but meaningful names.
- Prefer exhaustive matches.
- Prefer declarative over imperative programming.
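As one illustration of these tips, an exhaustive match (sketched here with a made-up enum) turns a forgotten case into a compile error rather than a silent fallthrough:

```rust
// Made-up enum for illustration; variants sorted alphabetically.
enum Status {
    Done,
    Ready,
    Running,
}

// Exhaustive match: adding a new `Status` variant becomes a compile
// error here, instead of silently falling into a `_ => ...` arm.
fn label(status: &Status) -> &'static str {
    match status {
        Status::Done => "done",
        Status::Ready => "ready",
        Status::Running => "running",
    }
}

fn main() {
    assert_eq!(label(&Status::Running), "running");
    assert_eq!(label(&Status::Done), "done");
}
```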
Working with CBMC
This section describes how to access more advanced CBMC options from Kani.
CBMC arguments
Kani is able to handle common CBMC arguments as if they were its own (e.g., --default-unwind <n>), but sometimes it may be necessary to use CBMC arguments which are not handled by Kani.
To pass additional arguments for CBMC, you pass --cbmc-args to Kani. Note that this "switches modes" from Kani arguments to CBMC arguments: any arguments that appear after --cbmc-args are considered to be CBMC arguments, so all Kani arguments must be placed before it.
Thus, the command line format to invoke cargo kani with CBMC arguments is:
cargo kani [<kani-args>]* --cbmc-args [<cbmc-args>]*
NOTE: In cases where CBMC is not expected to emit a verification output, you have to use Kani's argument --output-format old to turn off the post-processing of output from CBMC.
Individual loop bounds
Setting --default-unwind <n> affects every loop in a harness.
Once you know a particular loop is causing trouble, sometimes it can be helpful to provide a specific bound for it.
In the general case, specifying just the highest bound globally for all loops shouldn't cause any problems, except that the solver may take more time because all loops will be unwound to the specified bound.
In situations where you need to optimize for the solver, individual bounds for
each loop can be provided on the command line. To do so, we first need to know
the labels assigned to each loop with the CBMC argument --show-loops:
# kani src/lib.rs --output-format old --cbmc-args --show-loops
[...]
Loop _RNvCs6JP7pnlEvdt_3lib17initialize_prefix.0:
file ./src/lib.rs line 11 column 5 function initialize_prefix
Loop _RNvMs8_NtNtCswN0xKFrR8r_4core3ops5rangeINtB5_14RangeInclusivejE8is_emptyCs6JP7pnlEvdt_3lib.0:
file $RUST/library/core/src/ops/range.rs line 540 column 9 function std::ops::RangeInclusive::<Idx>::is_empty
Loop gen-repeat<[u8; 10]::16806744624734428132>.0:
This command shows us the labels of the loops involved. Note that, as mentioned in CBMC arguments, we need to use --output-format old to avoid post-processing the output from CBMC.
NOTE: At the moment, these labels are constructed using the mangled name of the function and an index. Mangled names are likely to change across different versions, so this method is highly unstable.
Then, we can use the CBMC argument --unwindset label_1:bound_1,label_2:bound_2,... to specify an individual bound for each loop as follows:
kani src/lib.rs --cbmc-args --unwindset _RNvCs6JP7pnlEvdt_3lib17initialize_prefix.0:12
Working with rustc
Kani is developed on top of the Rust compiler, which is not distributed on crates.io and depends on bootstrapping mechanisms to properly build its components. Thus, our dependencies on rustc crates are not declared in our Cargo.toml.
Below are a few hacks that will make it easier to develop on top of rustc.
Code analysis for rustc definitions
IDEs rely on cargo to find dependencies and sources to provide proper code analysis and code completion. In order to get these features working for rustc crates, you can do the following:
VSCode
Add the following to the rust-analyzer extension settings in settings.json:
"rust-analyzer.rustc.source": "discover",
"rust-analyzer.workspace.symbol.search.scope": "workspace_and_dependencies",
Ensure that any packages that use rustc data structures have the following set in their Cargo.toml:
[package.metadata.rust-analyzer]
# This package uses rustc crates.
rustc_private=true
You may also need to install the rustc-dev package using rustup:
rustup toolchain install nightly --component rustc-dev
Debugging in VS code
To debug Kani in VS code, first install the CodeLLDB extension.
Then add the following lines at the start of the main function (see the CodeLLDB manual for details):
{
let url = format!(
"vscode://vadimcn.vscode-lldb/launch/config?{{'request':'attach','sourceLanguages':['rust'],'waitFor':true,'pid':{}}}",
std::process::id()
);
std::process::Command::new("code").arg("--open-url").arg(url).output().unwrap();
}
Note that pretty printing for the Rust nightly toolchain (which Kani uses) is not very good as of June 2022. For example, a vector may be displayed as vec![{...}, {...}] on nightly Rust, when it would be displayed as vec![Some(0), None] on stable Rust. Hopefully, this will be fixed soon.
CLion / IntelliJ
This is not a great solution, but it works for now (see https://github.com/intellij-rust/intellij-rust/issues/1618 for more details). Edit the Cargo.toml of the package that you're working on and add artificial dependencies on the rustc packages that you would like to explore.
# This configuration doesn't exist so it shouldn't affect your build.
[target.'cfg(KANI_DEV)'.dependencies]
# Adjust the path here to point to a local copy of the rust compiler.
# The best way is to use the rustup path. Replace <toolchain> with the
# proper name of your toolchain.
rustc_driver = { path = "~/.rustup/toolchains/<toolchain>/lib/rustlib/rustc-src/rust/compiler/rustc_driver" }
rustc_interface = { path = "~/.rustup/toolchains/<toolchain>/lib/rustlib/rustc-src/rust/compiler/rustc_interface" }
Don't forget to roll back the changes before you create your PR.
Emacs (with use-package)
First, the Cargo.toml and rustup toolchain steps are identical to VS Code. Install the rust-analyzer binary under ~/.cargo/bin/.
In Emacs, add the following to your Emacs Lisp files. They will install the necessary packages using the use-package manager.
;; Install LSP
(use-package lsp-mode
:commands lsp)
(use-package lsp-ui)
;; Install Rust mode
(use-package toml-mode)
(use-package rust-mode)
(setq lsp-rust-server 'rust-analyzer)
(setenv "PATH" (concat (getenv "PATH") ":/home/USER/.cargo/bin/"))
If Emacs complains that it cannot find certain packages, try running M-x package-refresh-contents.
For LSP to be able to find rustc_private files used by Kani, you will need to modify the variable lsp-rust-analyzer-rustc-source. Run M-x customize-variable, type in lsp-rust-analyzer-rustc-source, click Value Menu, and change it to Path. Paste in the path to the Cargo.toml of rustc's source code. You can find the source code under .rustup, and the path should end with compiler/rustc/Cargo.toml. Important: make sure that this rustc is the same version and architecture as what Kani uses. If not, LSP features like definition lookup may break.
This ends the basic install for EMACS. You can test your configuration with the following steps.
- Open a Rust file with at least one rustc_private import.
- Activate LSP mode with M-x lsp.
- When asked about the root of the project, pick one of them. Make sure that whichever root you pick has a Cargo.toml with rustc_private=true added.
- If LSP asks if you want to watch all files, select yes. For less powerful machines, you may want to adjust that later.
- On the file with rustc_private imports, do the following. If both work, then you are set up.
  - Hover the mouse over a rustc_private import. If LSP is working, you should get information about the imported item.
  - With the text cursor over the same rustc_private import, run M-x lsp-find-definition. This should jump to the definition within rustc's source code.
LSP mode can integrate with flycheck for instant error checking and company for auto-completion. Consider adding the following to the configuration.
(use-package flycheck
:hook (prog-mode . flycheck-mode))
(use-package company
:hook (prog-mode . company-mode)
:config
(global-company-mode))
The clippy linter can be added by changing the LSP install to:
(use-package lsp-mode
:commands lsp
:custom
(lsp-rust-analyzer-cargo-watch-command "clippy"))
Finally, lsp-mode can run rust-analyzer via TRAMP for remote development. We found this way of using rust-analyzer to be unstable as of 2022-06. If you want to give it a try, you will need to add a new LSP client for the remote machine with the following code.
(lsp-register-client
(make-lsp-client
:new-connection (lsp-tramp-connection "/full/path/to/remote/machines/rust-analyzer")
:major-modes '(rust-mode)
:remote? t
:server-id 'rust-analyzer-remote))
For further details, please see https://emacs-lsp.github.io/lsp-mode/page/remote/.
Custom rustc
There are a few reasons why you may want to use your own copy of rustc, e.g.:
- Enable more verbose logs.
- Use a debug build to allow you to step through rustc code.
- Test changes to rustc.
We will assume that you already have a Kani setup and that the variable KANI_WORKSPACE contains the path to your Kani workspace.
It's highly recommended that you start from the commit that corresponds to the current rustc version in your workspace. To get that information, run the following command:
cd ${KANI_WORKSPACE} # Go to your Kani workspace.
rustc --version # This will print the commit id. Something like:
# rustc 1.60.0-nightly (0c292c966 2022-02-08)
# ^^^^^^^^^ this is used as the ${COMMIT_ID} below
# E.g.:
COMMIT_ID=0c292c966
First you need to clone and build stage 2 of the compiler. You should tweak the configuration to satisfy your use case. For more details, see https://rustc-dev-guide.rust-lang.org/building/how-to-build-and-run.html and https://rustc-dev-guide.rust-lang.org/building/suggested.html.
git clone https://github.com/rust-lang/rust.git
cd rust
git checkout ${COMMIT_ID:?"Missing rustc commit id"}
./configure --enable-extended --tools=src,rustfmt,cargo --enable-debug --set=llvm.download-ci-llvm=true
./x.py build -i --stage 2
Now create a custom toolchain (here we name it custom-toolchain):
# Use x86_64-apple-darwin for MacOs
rustup toolchain link custom-toolchain build/x86_64-unknown-linux-gnu/stage2
cp build/x86_64-unknown-linux-gnu/stage2-tools-bin/* build/x86_64-unknown-linux-gnu/stage2/bin/
Finally, override the current toolchain in your Kani workspace and rebuild Kani:
cd ${KANI_WORKSPACE}
rustup override set custom-toolchain
cargo clean
cargo build-dev
Rust compiler utilities to debug kani-compiler
Enable `rustc` logs
To enable logs, define the `RUSTC_LOG` variable, as documented here: https://rustc-dev-guide.rust-lang.org/tracing.html.
Note that, depending on the log level you want (`debug` and `trace` are not enabled by default), you'll need to build your own version of `rustc` as described above.
For logs related to `kani-compiler` code, use the `KANI_LOG` variable instead.
Debugging type layout
To print the type layout computed by the Rust compiler, pass the `-Zprint-type-sizes` flag to `rustc`. This flag can be passed to `kani` or `cargo kani` by setting the `RUSTFLAGS` environment variable:
RUSTFLAGS=-Zprint-type-sizes kani test.rs
When enabled, the compiler will print messages that look like:
print-type-size type: `std::option::Option<bool>`: 1 bytes, alignment: 1 bytes
print-type-size variant `Some`: 1 bytes
print-type-size field `.0`: 1 bytes
print-type-size variant `None`: 0 bytes
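To see this output on a concrete input, you can compile a small file like the following (the file name is illustrative). The niche optimization explains why `Option<bool>` needs only one byte:

```rust
// layout.rs (illustrative name): compile with
// `RUSTFLAGS=-Zprint-type-sizes kani layout.rs` to get layout lines
// for `Option<bool>` like the ones shown above.
fn main() {
    // `Option<bool>` occupies a single byte: `None` reuses one of the
    // invalid bit patterns of `bool` (the "niche" optimization), so no
    // separate discriminant byte is needed.
    let x: Option<bool> = Some(true);
    assert_eq!(std::mem::size_of::<Option<bool>>(), 1);
    assert!(x.is_some());
}
```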
Inspecting the MIR
You can visualize the MIR that is used as input to code generation by setting the Rust flag `--emit mir`. I.e.:
RUSTFLAGS=--emit=mir kani test.rs
The compiler will generate a few files, but we recommend looking at the files with the suffix `.kani.mir`.
Those files will include the entire MIR collected by our reachability analysis, including functions from all dependencies and the `std` library.
One limitation is that we dump one copy of the MIR for each specialization of a function, even though the MIR body itself doesn't change.
Transition to StableMIR
We have partnered with the Rust compiler team in the initiative to introduce stable APIs to the compiler that can be used by third-party tools, known as the Stable MIR Project, or just StableMIR. This means that we are starting to use the new APIs introduced by this project as-is, even though they are not yet stable.
StableMIR APIs
For now, the StableMIR APIs are exposed as a crate in the compiler named `stable_mir`.
This crate includes the definition of structures and methods to be stabilized, which are expected to become the stable APIs in the compiler.
To reduce the migration burden, these APIs are somewhat close to the original compiler interfaces. However, some changes have been made to make these APIs cleaner and easier to use.
For example:
- The usage of the compiler context (aka `TyCtxt`) is transparent to the user. The StableMIR implementation caches this context in a thread-local variable and retrieves it whenever necessary.
  - Because of that, code that uses StableMIR has to be invoked inside a `run` call.
- The `DefId` has been specialized into multiple types, making its usage less error-prone. E.g.: `FnDef` represents the definition of a function, while `StaticDef` is the definition of a static variable.
  - Note that the same `DefId` may be mapped to different definitions according to its context. For example, an `InstanceDef` and a `FnDef` may represent the same function definition.
- Methods that used to be exposed as part of `TyCtxt` are now part of a type. For example, the function `TyCtxt.instance_mir` is now `Instance::body`.
- There is no need for explicit instantiation (monomorphization) of items from an `Instance::body`. This method already instantiates all types and resolves all constants before converting them to stable APIs.
Performance
Since the new APIs require converting internal data to a stable representation, they were also designed to avoid needless conversions and to allow extra information to be retrieved on demand.
For example, `Ty` is just an identifier, while `TyKind` is a structure that can be retrieved via the `Ty::kind` method. The `TyKind` is a more structured object, so it is only generated when the `kind` method is invoked. Since this translation is not cached, many of the functions that the Rust compiler used to expose on `Ty` are now only part of `TyKind`. The reason is that there is no cache for `TyKind`, and users should do the caching themselves to avoid needless translations.
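The caller-side caching pattern can be sketched with a toy analogue (the types below are illustrative stand-ins, not the real `stable_mir` API):

```rust
use std::collections::HashMap;

// A toy analogue of the Ty / TyKind split: `Ty` is a cheap, copyable
// identifier, and the richer `TyKind` is materialized on demand.
// Callers that need the kind repeatedly should cache it themselves.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Ty(usize);

#[derive(Clone, Debug, PartialEq)]
enum TyKind {
    Bool,
    Int,
}

// Stand-in for the (uncached) translation performed by `Ty::kind`.
fn kind(ty: Ty) -> TyKind {
    if ty.0 == 0 { TyKind::Bool } else { TyKind::Int }
}

fn main() {
    let tys = [Ty(0), Ty(1), Ty(0)];
    // Caller-side cache so each identifier is translated only once.
    let mut cache: HashMap<Ty, TyKind> = HashMap::new();
    for ty in tys {
        let k = cache.entry(ty).or_insert_with(|| kind(ty));
        println!("{:?}", k);
    }
    // Two distinct identifiers were translated, not three.
    assert_eq!(cache.len(), 2);
}
```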
From our initial experiments with the transition of the reachability algorithm to StableMIR, there is a small penalty for using StableMIR over internal Rust compiler APIs. However, the StableMIR APIs are still fairly efficient, and the difference did not impact overall compilation time.
Interface with internal APIs
To reduce the burden of migrating to StableMIR, and to allow StableMIR to be used together with internal APIs, there are two helpful methods to convert StableMIR constructs to internal rustc and back:
- `rustc_internal::internal()`: Convert a stable item into an internal one.
- `rustc_internal::stable()`: Convert an internal item into a stable one.
Both of these methods live in the `rustc_internal` module of the compiler's `rustc_smir` crate.
Note that there is no plan to stabilize any of these methods, and there is also no guarantee on their support and coverage. The conversion is not implemented for all items, and some conversions may be incomplete. Please proceed with caution when using these methods.
Besides that, do not invoke any other `rustc_smir` methods, except for `run`. This crate's methods are not meant to be invoked externally.
Note that the `run` method will also eventually be replaced by a stable driver.
Creating and modifying StableMIR items
For now, StableMIR should only be used to get information from the compiler. Do not try to create or modify items directly, as it may not work. This may result in incorrect behavior or an internal compiler error (ICE).
Naming conventions in Kani
As we adopt StableMIR, we would like to introduce a few conventions to make the code easier to maintain.
Whenever there is a name conflict, for example, `Ty` or `codegen_ty`, use a suffix to indicate which API you are using: `Stable` for StableMIR and `Internal` for `rustc` internal APIs.
A module should default its naming to either stable APIs or internal APIs. I.e.: modules that have been migrated to StableMIR don't need to add the `Stable` suffix to stable items, while those that haven't been migrated should add `Stable`, but no `Internal` suffix is needed.
For example, the `codegen::typ` module will likely include the methods `codegen_ty(&mut self, Ty)` and `codegen_ty_stable(&mut self, TyStable)` to handle internal and stable APIs, respectively.
Command cheat sheets
Development work in the Kani project depends on multiple tools. Regardless of your familiarity with the project, the commands below may be useful for development purposes.
Kani
Build
# Error "'rustc' panicked at 'failed to lookup `SourceFile` in new context'"
# or similar error? Cleaning artifacts might help.
# Otherwise, comment out the line below.
cargo clean
cargo build-dev
Test
# Full regression suite
./scripts/kani-regression.sh
# Delete regression test caches (Linux)
rm -r build/x86_64-unknown-linux-gnu/tests/
# Delete regression test caches (macOS)
rm -r build/x86_64-apple-darwin/tests/
# Test suite run (we can only run one at a time)
# cargo run -p compiletest -- --suite ${suite} --mode ${mode}
cargo run -p compiletest -- --suite kani --mode kani
# Build documentation
cd docs
./build-docs.sh
Debug
These can help understand what Kani is generating or encountering on an example or test file:
# Enable `debug!` macro logging output when running Kani:
kani --debug file.rs
# Use KANI_LOG for a finer grain control of the source and verbosity of logs.
# E.g.: The command below will print all logs from the kani_middle module.
KANI_LOG="kani_compiler::kani_middle=trace" kani file.rs
# Keep CBMC Symbol Table and Goto-C output (.json and .goto)
kani --keep-temps file.rs
# Generate "C code" from CBMC IR (.c)
kani --gen-c file.rs
# Generate a ${INPUT}.kani.mir file with a human friendly MIR dump
# for all items that are compiled to the respective goto-program.
RUSTFLAGS="--emit mir" kani ${INPUT}.rs
The `KANI_REACH_DEBUG` environment variable can be used to debug Kani's reachability analysis.
If defined, Kani will generate a DOT graph `${INPUT}.dot` with the graph traversed during reachability analysis.
If defined and not empty, the graph will be filtered to end at functions that contain the substring from `KANI_REACH_DEBUG`.
Note that this will only work on debug builds.
# Generate a DOT graph ${INPUT}.dot with the graph traversed during reachability analysis
KANI_REACH_DEBUG= kani ${INPUT}.rs
# Generate a DOT graph ${INPUT}.dot with the sub-graph traversed during the reachability analysis
# that connect to the given target.
KANI_REACH_DEBUG="${TARGET_ITEM}" kani ${INPUT}.rs
CBMC
# See CBMC IR from a C file:
goto-cc file.c -o file.out
goto-instrument --print-internal-representation file.out
# or (for json symbol table)
cbmc --show-symbol-table --json-ui file.out
# or (an alternative concise format)
cbmc --show-goto-functions file.out
# Recover C from goto-c binary
goto-instrument --dump-c file.out > file.gen.c
Git
The Kani project follows the squash and merge option for pull request merges. As a result:
- The title of your pull request will become the main commit message.
- The messages from commits in your pull request will appear by default as a bulleted list in the main commit message body.
But the main commit message body is editable at merge time, so you don't have to worry about "typo fix" messages because these can be removed before merging.
# Set up your git fork
git remote add fork git@github.com:${USER}/kani.git
# Reset everything. Don't have any uncommitted changes!
git clean -xffd
git submodule foreach --recursive git clean -xffd
git submodule update --init
# Need to update local branch (e.g. for an open pull request?)
git fetch origin
git merge origin/main
# Or rebase, but that requires a force push,
# and because we squash and merge, an extra merge commit in a PR doesn't hurt.
# Checkout a pull request locally without the github cli
git fetch origin pull/$ID/head:pr/$ID
git switch pr/$ID
# Push to someone else's pull request
git remote add $USER $GIT_URL_FOR_THAT_USER
git push $USER $LOCAL_BRANCH:$THEIR_PR_BRANCH_NAME
# Search only git-tracked files
git grep codegen_panic
# Accidentally commit to main?
# "Move" commit to a branch:
git checkout -b my_branch
# Fix main:
git branch --force main origin/main
cargo kani assess
Assess is an experimental new feature to gather data about Rust crates, to aid the start of proof writing.
In the short-term, assess collects and dumps tables of data that may help Kani developers understand what's needed to begin writing proofs for another project. For instance, assess may help answer questions like:
- Does Kani successfully build all of the crates involved in this project? If not, why not?
- Does Kani support all the Rust language features necessary to do verification with this project? If not, which are most important?
In the long-term, assess will become a user-facing feature, and help Kani users get started writing proofs. We expect that users will have the same questions as above, but in the long term, hopefully the answers to those trend towards an uninteresting "yes." So the new questions might be:
- Is this project ready for verification? Projects need to be reasonably well-tested first. Our operating hypothesis is that code currently covered by unit tests is the code that could become covered by proofs.
- How much of a given project (consisting of multiple packages or workspaces) or which of the user's projects might be verifiable? If a user wants to start trying Kani but has the choice of several different packages, we can help find the package with the lowest-hanging fruit.
- Given a package, where in that package's code should the user look, in order to write the first (or next) proof?
These long-term goals are only "hinted at" with the present experimental version of assess. Currently, we only get as far as finding out which tests successfully verify (concretely) with Kani. This might indicate tests that could be generalized and converted into proofs, but we currently don't do anything to group, rank, or otherwise heuristically prioritize what might be most "interesting." (For instance, we'd like to eventually compute coverage information and use that to help rank the results.) As a consequence, the output of the tool is very hard to interpret, and likely not (yet!) helpful to new or potential Kani users.
Using Assess
To assess a package, run:
cargo kani --enable-unstable assess
As a temporary hack (arguments shouldn't work like this), to assess a single cargo workspace, run:
cargo kani --enable-unstable --workspace assess
To scan a collection of workspaces or packages that are not part of a shared workspace, run:
cargo kani --enable-unstable assess scan
The only difference between 'scan' and 'regular' assess is how the packages to build are located.
All versions of assess produce the same output and metrics.
Assess will normally build just like `cargo kani` or `cargo build`, whereas `scan` will find all cargo packages beneath the current directory, even in unrelated workspaces.
Thus, 'scan' may be helpful in the case where the user has a choice of packages and is looking for the easiest to get started with (in addition to the Kani developer use-case, of aggregating statistics across many packages).
(Tip: Assess may need to run for a while, so try using `screen`, `tmux` or `nohup` to avoid terminating the process if, for example, an ssh connection breaks.
Some tests can also consume huge amounts of RAM when run through Kani, so you may wish to use `ulimit -v 6000000` to prevent any process from using more than 6 GB.
You can also limit the number of concurrent tests by providing e.g. `-j 4`, currently as a prepended argument, like `--enable-unstable` or `--workspace` in the examples above.)
What assess does
Assess builds all the packages requested in "test mode" (i.e. `--tests`), and runs all the same tests that `cargo test` would, except through Kani.
This gives end-to-end assurance that we're able to actually build and run code from these packages, skipping nothing of what the verification process would need, except that the harnesses don't have any nondeterminism (`kani::any()`) and consequently don't "prove" much.
The interesting signal comes from which tests cannot be analyzed by Kani due to unsupported features, performance problems, crash bugs, or other issues that get in the way.
Currently, assess forces termination by using `unwind(1)` on all tests, so many tests will fail with unwinding assertions.
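For instance, a hypothetical unit-test-style check like the following (not from any real crate) passes under `cargo test`, but under assess's `unwind(1)` bound Kani would report it under the `unwind` failure reason, because the loop exceeds the bound before the assertion is ever proved:

```rust
// A toy unit-test-style check (illustrative, not from the Kani repo).
// `cargo test` would pass this; run through assess with `unwind(1)`,
// the 4-iteration loop exceeds the unwind bound, so the test shows up
// as an `unwind` failure rather than a proof.
fn sum_to(n: u32) -> u32 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}

fn main() {
    assert_eq!(sum_to(4), 10);
}
```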
Current Assess Results
Assess produces a few tables of output (both visually in the terminal, and in a more detailed json format) so far:
Unsupported features
======================================================
Unsupported feature | Crates | Instances
| impacted | of use
-------------------------------+----------+-----------
caller_location | 71 | 239
simd_bitmask | 39 | 160
...
The unsupported features table aggregates information about features that Kani does not yet support.
These correspond to uses of `codegen_unimplemented` in `kani-compiler`, and appear as warnings during compilation.
Unimplemented features are not necessarily hit by (dynamically) reachable code, so an immediate future improvement on this table would be to count the features actually hit by failing test cases, instead of just those features reported as existing in code by the compiler. In other words, the current unsupported features table is not what we want to see in order to perfectly prioritize implementing these features, because we may be counting features that no proof would ever hit.
A perfect signal here isn't possible: there may be code that looks statically reachable but is never dynamically reachable, and we can't tell. But we can use test coverage as an approximation: well-tested code will hopefully cover most of the dynamically reachable code. The operating hypothesis of assess is that code covered by tests is code that could be covered by proof, and so measuring unsupported features by those actually hit by a test should provide a better "signal" about priorities.
Implicitly deprioritizing unsupported features because they aren't covered by tests may not be a bug, but a feature: we may simply not want to prove anything about that code if it hasn't been tested first, and so adding support for such a feature may not be important.
A few notes on terminology:
- "Crates impacted" here means "packages in the current workspace (or scan) where the building of that package (and all of its dependencies) ultimately resulted in this warning." For example, if only assessing a single package (not a workspace), this column could only be `1`, regardless of the number of dependencies.
- "Instances of use" likewise means "total instances found while compiling this package's tests and all the (reachable) code in its dependencies."
- These counts are influenced by (static) reachability: if code is not potentially reachable from a test somehow, it will not be built and will not be counted.
Test failure reasons
================================================
Reason for failure | Number of tests
------------------------------+-----------------
unwind | 61
none (success) | 6
assertion + overflow | 2
...
The test failure reasons table indicates why, when assess ran a test through Kani, it failed to verify. Notably:
- Because we force termination with `unwind(1)`, we expect `unwind` to rank highly.
- We do report the number of tests succeeding in this table, to aid understanding of how well things went overall.
- The reported reason is the "property class" of the CBMC property that failed. So `assertion` means an ordinary `assert!()` was hit (or something else with this property class).
- When multiple properties fail, they are aggregated with `+`, such as `assertion + overflow`.
- Currently this table does not properly account for `should_fail` tests, so `assertion` may actually mean "success": the test should hit an assertion and did.
Promising test cases
=============================================================================
Candidate for proof harness | Location
-------------------------------------------------------+---------------------
float::tests::f64_edge_cases | src/float.rs:226
float::tests::f32_edge_cases | src/float.rs:184
integer::tests::test_integers | src/integer.rs:171
This table is the most rudimentary so far, but is the core of what long-term assess will help accomplish. Currently, this table just presents (with paths displayed in a clickable manner) the tests that successfully "verify" with Kani. These might be good candidates for turning into proof harnesses. This list is presently unordered; the next step for improving it would be to find even a rudimentary way of ranking these test cases (e.g. perhaps by code coverage).
How Assess Works
`kani-compiler` emits `*.kani-metadata.json` for each target it builds.
This format can be found in the `kani_metadata` crate, shared by `kani-compiler` and `kani-driver`.
This is the starting point for assess.
Assess obtains this metadata by essentially running a `cargo kani`:
- With `--all-features` turned on
- With `unwind` always set to `1`
- In test mode, i.e. `--tests`
- With test-case reachability mode. Normally Kani looks for proof harnesses and builds only those. Here we switch to building only the test harnesses instead.
Assess starts by getting all the information from these metadata files. This is enough by itself to construct a rudimentary "unsupported features" table. But assess also uses it to discover all the test cases, and (instead of running proof harnesses) it then runs all these test harnesses under Kani.
Assess produces a second metadata format, called (unsurprisingly) "assess metadata". (Found in `kani-driver` under `src/assess/metadata.rs`.)
This format records the results of what assess does.
This metadata can be written to a json file by providing `--emit-metadata <file>` to `assess`. Likewise, `scan` can be told to write out this data with the same option.
Assess metadata is an aggregatable format. It does not apply to just one package, as assess can work on a workspace of packages. Likewise, `scan` uses and produces the exact same format, across multiple workspaces.
So far all assess metadata comes in the form of "tables", which are built with `TableBuilder<T: TableRow>`.
This is documented further in `src/assess/table_builder.rs`.
Using Assess on the top-100 crates
There is a script in the Kani repo for this purpose.
It will clone the top-100 crates to `/tmp/top-100-experiment` and run assess scan on them:
./scripts/exps/assess-scan-on-repos.sh
If you'd like to preserve the results, you can direct scan to use a different directory with an environment variable:
ASSESS_SCAN="~/top-100-experiment" ./scripts/exps/assess-scan-on-repos.sh
To re-run the experiment, it suffices to be in the experiment directory:
cd ~/top-100-experiment && ~/kani/scripts/exps/assess-scan-on-repos.sh
Testing
Testing in Kani is carried out in multiple ways. There are at least two very good reasons to do it:
- Software regression: A regression is a type of bug that appears after a change is introduced, where a feature that was previously working unexpectedly stops working. Regression testing prevents software regressions by running a comprehensive set of working tests before any change is committed to the project.
- Software metrics: A metric is a quantitative, countable measure of software characteristics. Metrics are particularly valuable for project management purposes.
We recommend reading our section on Regression Testing if you're interested in Kani development. To run Kani on a large number of remotely hosted crates, please see Repository Crawl.
Regression testing
Kani relies on a quite extensive range of tests to perform regression testing. Regression testing can be executed by running the command:
./scripts/kani-regression.sh
The `kani-regression.sh` script executes different kinds of testing commands; see below for a description of each one.
Note that regression testing is run whenever a Pull Request is opened, updated or merged into the main branch. Therefore, it's a good idea to run regression testing locally before submitting a Pull Request for Kani.
Kani testing suites
The Kani testing suites are the main testing resource for Kani. In most cases, the tests contained in the Kani testing suites are single Rust files that are run using the following command:
kani file.rs <options>
Command-line options can be passed to the test by adding a special comment to the file. See testing options for more details.
In particular, the Kani testing suites are composed of:
- `kani`: The main testing suite for Kani. Each test is a single Rust file that's run through Kani. In general, the test passes if verification with Kani is successful, otherwise it fails.
- `firecracker`: Works like `kani` but contains tests inspired by Firecracker code.
- `prusti`: Works like `kani` but contains tests from the Prusti tool.
- `smack`: Works like `kani` but contains tests from the SMACK tool.
- `kani-fixme`: Similar to `kani`, but runs ignored tests from the `kani` testing suite (i.e., tests with `fixme` or `ignore` in their name). Allows us to detect when a previously unsupported test becomes supported. More details in "Fixme" tests.
- `expected`: Similar to `kani` but with an additional check which ensures that lines appearing in `*.expected` files appear in the output generated by `kani`.
- `ui`: Works like `expected`, but focuses on the user interface (e.g., warnings) instead of the verification output.
- `cargo-kani`: This suite is designed to test the `cargo-kani` command. As such, it works with packages instead of single Rust files. Arguments can be specified in the `Cargo.toml` configuration file. Similar to the `expected` suite, we look for `*.expected` files for each harness in the package.
- `cargo-ui`: Similar to `cargo-kani`, but focuses on the user interface like the `ui` test suite.
- `script-based-pre`: This suite executes script-based tests, and also allows checking expected output and exit codes after running them. The suite uses the `exec` mode, described in more detail here.
We've extended `compiletest` (the Rust compiler testing framework) to work with these suites. That way, we take advantage of all `compiletest` features (e.g., parallel execution).
Testing stages
The process of running single-file tests is split into three stages:
- `check`: This stage uses the Rust front-end to detect if the example is valid Rust code.
- `codegen`: This stage uses the Kani back-end to determine if we can generate GotoC code.
- `verify`: This stage uses CBMC to obtain a verification result.
If a test fails, the error message will include the stage where it failed:
error: test failed: expected check success, got failure
When working on a test that's expected to fail, there are two options to indicate an expected failure. The first one is to add a comment `// kani-<stage>-fail` at the top of the test file, where `<stage>` is the stage where the test is expected to fail.
The other option is to use the predicate `kani::expect_fail(cond, message)` included in the Kani library. The `cond` in `kani::expect_fail` is a condition that you expect not to hold during verification. The testing framework expects one `EXPECTED FAIL` message in the verification output for each use of the predicate.
NOTE: `kani::expect_fail` is only useful to indicate failure in the `verify` stage; errors in other stages will be considered testing failures.
Testing options
Many tests will require passing command-line options to Kani. These options can be specified in single Rust files by adding a comment at the top of the file:
// kani-flags: <options>
For example, to use an unwinding value of 4 in a test, we can write:
// kani-flags: --default-unwind 4
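As a concrete (hypothetical) example of a test that needs such a bound, a loop with three iterations is only fully unrolled with an unwind value of at least 4:

```rust
// test.rs (illustrative): with `// kani-flags: --default-unwind 4` at
// the top of the file, Kani can unroll the 3-iteration loop completely
// (loop bound + 1) and prove the assertion in `main`.
fn triangle() -> u32 {
    let mut sum = 0;
    for i in 0..3 {
        sum += i;
    }
    sum
}

fn main() {
    assert_eq!(triangle(), 3);
}
```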
For `cargo-kani` tests, the preferred way to pass command-line options is adding them to `Cargo.toml`. See Usage on a package for more details.
"Fixme" tests
Any test containing `fixme` or `ignore` in its name is considered a test that isn't supported for some reason (i.e., it returns an unexpected verification result). However, "fixme" tests included in the `kani` folder are run via the `kani-fixme` testing suite. `kani-fixme` works on test files from `kani` but:
- Only runs tests whose name contains `fixme` or `ignore` (ignoring the rest).
- The expected outcome is failure. In other words, a test is successful if it fails.
We welcome contributions with "fixme" tests which demonstrate a bug or unsupported feature in Kani. Ideally, the test should include some comments regarding:
- The expected result of the test.
- The actual result of the test (e.g., interesting parts of the output).
- Links to related issues.
To include a new "fixme" test in `kani`, you only need to ensure its name contains `fixme` or `ignore`. If your changes to Kani cause a "fixme" test to become supported, you only need to rename it so the name contains neither `fixme` nor `ignore`.
Rust unit tests
These tests follow the Rust unit testing style.
At present, Kani runs unit tests from the following packages:
- `cprover_bindings`
- `kani-compiler`
- `cargo-kani`
Python unit tests
We use the Python unit testing framework to test the CBMC JSON parser.
Script-based tests
These are tests which are run using scripts. Scripting gives us the ability to perform ad-hoc checks that cannot be done otherwise. They are currently used for:
- Standard library codegen
- Firecracker virtio codegen
- Diamond dependency
In fact, most of them are equivalent to running `cargo kani` and performing checks on the output. The downside to scripting is that these tests will always be run, even if there have not been any changes since the last time the regression was run.
NOTE: With the addition of the `exec` mode for `compiletest` (described below), we'll be migrating these script-based tests to other suites using the `exec` mode. The `exec` mode allows us to take advantage of `compiletest` features (e.g., parallel execution) while executing script-based tests.
The `exec` mode
The `exec` mode in `compiletest` allows us to execute script-based tests, in addition to checking expected output and exit codes after running them.
In particular, tests are expected to be placed directly under the test directory (e.g., `script-based-pre`) in a directory with a `config.yml` file, which should contain:
- `script`: The path to the script to be executed.
- `expected` (optional): The path to the `.expected` file to use for output comparison.
- `exit_code` (optional): The exit code expected from executing the script (a zero exit code is expected if not specified).
For example, let's say we want to test the script `exit-one.sh`:
echo "Exiting with code 1!"
exit 1
In this case, we'll create a folder that contains the `config.yml` file:
script: exit-one.sh
expected: exit-one.expected
exit_code: 1
where `exit-one.expected` is simply a text file such as:
Exiting with code 1!
If `expected` isn't specified, the output won't be checked. If `exit_code` isn't specified, the `exec` mode will check that the exit code was zero.
Note that all paths specified in the `config.yml` file are local to the test directory, which is the working directory assumed when executing the test. This is meant to avoid problems when executing the test manually.
(Experimental) Testing with a Large Number of Repositories
This section explains how to run Kani on a large number of crates downloaded from git forges. You may want to do this if you are going to test Kani's ability to handle Rust features found in projects out in the wild.
The first half explains how to use data from crates.io to pick targets; the second half explains how to use a script to run Kani on a list of selected repositories.
Picking Repositories
In picking repositories, you may want to select by metrics like popularity or by the presence of certain features. In this section, we will explain how to select the top repositories by download count.
We will use the `db-dump` method of getting data from crates.io, as it is zero-cost to their website and gives us SQL access. To start, have the following programs set up on your computer:
- docker
- docker-compose
- Start PostgreSQL. Paste in the following YAML file as `docker-compose.yaml` (the `version: '3.3'` line may need to change).
version: '3.3'
services:
db:
image: postgres:latest
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- crates-data:/var/lib/postgresql/data
logging:
driver: "json-file"
options:
max-size: "50m"
volumes:
crates-data:
driver: local
Then, run the following to start the setup:
docker-compose up -d
Once set up, run `docker ps` to find the container's name. We will refer to the name as `$CONTAINER_NAME` from now on.
- Download actual data from crates.io. First, run the following command to get a shell in the container:
docker exec -it --user postgres $CONTAINER_NAME bash
Now, run the following to grab and install the data into the database. Please note that this may take a while.
wget https://static.crates.io/db-dump.tar.gz
tar -xf db-dump.tar.gz
psql postgres -f */schema.sql
psql postgres -f */import.sql
- Extract the data. In the same docker shell, run the following to extract the top 1k repositories. Other SQL queries may be used if you want another criterion.
\copy (SELECT name, repository, downloads FROM crates WHERE repository LIKE 'http%' ORDER BY DOWNLOADS DESC LIMIT 1000) to 'top-1k.csv' csv header;
- Clean the data. The above query will capture duplicates and paths that are deeper than the repository. You can clean these out.
  - URL from CSV: `cat top-1k.csv | awk -F ',' '{ print $2 }' | grep -v 'http.*'`
  - Remove long paths: `sed 's/tree\/master.*$//g'`
  - Once processed, you can dedup with `sort | uniq --unique`
Running the List of Repositories
In this step we will download the list of repositories using a script assess-scan-on-repos.sh
Make sure to have Kani ready to run. For that, see the build instructions.
From the repository root, you can run the script with
./scripts/exps/assess-scan-on-repos.sh $URL_LIST_FILE
where `$URL_LIST_FILE` points to a line-delimited list of URLs you want to run Kani on. Repositories that produce warnings or errors can be grepped for with "STDERR Warnings" and "Error exit in" respectively.
Performance comparisons with benchcomp
While Kani includes a performance regression suite, you may wish to test Kani's performance using your own benchmarks or with particular versions of Kani.
You can use the `benchcomp` tool in the Kani repository to run several 'variants' of a command on one or more benchmark suites, automatically parse the results of each of those suites, and take actions or emit visualizations based on those results.
Example use-cases
- Run one or more benchmark suites with the current and previous versions of Kani. Exit with a return code of 1 or print a custom summary to the terminal if any benchmark regressed by more than a user-configured amount.
- Run benchmark suites using several historical versions of Kani and emit a graph of performance over time.
- Run benchmark suites using different SAT solvers, command-line flags, or environment variables.
Features
Benchcomp provides the following features to support your performance-comparison workflow:
- Automatically copies benchmark suites into a fresh directory before running each variant, to ensure that built artifacts do not affect subsequent runtimes
- Parses the results of different 'kinds' of benchmark suite and combines those results into a single unified format. This allows you to run benchmarks from external repositories, suites of pre-compiled GOTO-binaries, and other kinds of benchmark all together and view their results in a single dashboard.
- Driven by a single configuration file that can be sent to colleagues or checked into a repository to be used in continuous integration.
- Extensible, allowing you to write your own parsers and visualizations.
- Caches all previous runs and allows you to re-create visualizations for the latest run without actually re-running the suites.
Quick start
Here's how to run Kani's performance suite twice, comparing the last released version of Kani with the current HEAD.
cd $KANI_SRC_DIR
git worktree add new HEAD
git worktree add old $(git describe --tags --abbrev=0)
tools/benchcomp/bin/benchcomp --config tools/benchcomp/configs/perf-regression.yaml
This uses the `perf-regression.yaml` configuration file that we use in continuous integration.
After running the suite twice, benchcomp terminates with a return code of 1 if any of the benchmarks regressed on metrics such as `success` (a boolean), `solver_runtime`, and `number_vccs` (numerical).
Additionally, the config file directs benchcomp to print out a Markdown table that GitHub's CI summary page renders as a table.
The rest of this documentation describes how to modify `benchcomp` for your own use cases, including writing a configuration file, writing a custom parser for your benchmark suite, and writing a custom visualization to examine the results of a performance comparison.
`benchcomp` command line
`benchcomp` is a single command that runs benchmarks, parses their results, combines these results, and emits visualizations. `benchcomp` also provides subcommands to run these steps individually.
Most users will want to invoke `benchcomp` in one of two ways:
- `benchcomp` without any subcommands, which runs the entire performance comparison process as depicted below
- `benchcomp visualize`, which runs the visualization step on the results of a previous benchmark run without actually re-running the benchmarks. This is useful when tweaking the parameters of a visualization, for example changing the threshold of what is considered to be a regression.
The subcommands `run` and `collate` are also available.
The diagram below depicts `benchcomp`'s order of operation.
Running `benchcomp` invokes `run`, `collate`, and `visualize` behind the scenes.
If you have previously run `benchcomp`, then running `benchcomp visualize` will emit the visualizations in the config file using the previous `result.yaml`.
In the diagram above, two different suites (1 and 2) are both run using two variants, that is, combinations of command, working directory, and environment variables.
Benchmark suite 2 requires a totally different command line from suite 1; for example, `suite_1` might contain Kani harnesses invoked through `cargo kani`, while `suite_2` might contain CBMC harnesses invoked through `run_cbmc_proofs.py`.
Users would therefore define different variants (`c` and `d`) for invoking `suite_2`, and also specify a different parser to parse the results.
No matter how different the benchmark suites are, the `collate` stage combines their results so that they can later be compared.
Example config file
Users must specify the actual suites to run, the parsers used to collect their results, and the visualizations to emit in a file called `benchcomp.yaml`, or in a file passed to the `-c/--config` flag.
The next section describes the schema for this configuration file.
A run similar to the diagram above might be achieved using the following configuration file:
# Compare a range of Kani and CBMC benchmarks when
# using Cadical versus the default SAT solver
variants:
  variant_a:
    config:
      directory: kani_benchmarks
      command_line: scripts/kani-perf.sh
      env: {}
  variant_b:
    config:
      directory: kani_benchmarks
      command_line: scripts/kani-perf.sh
      # This variant uses a hypothetical environment variable that
      # forces Kani to use the cadical SAT solver
      env:
        KANI_SOLVER: cadical
  variant_c:
    config:
      directory: cbmc_benchmarks
      command_line: run_cbmc_proofs.py
      env: {}
  variant_d:
    config:
      directory: cbmc_benchmarks
      command_line: run_cbmc_proofs.py
      env:
        EXTERNAL_SAT_SOLVER: cadical
run:
  suites:
    suite_1:
      parser:
        module: kani_perf
      variants: [variant_a, variant_b]
    suite_2:
      parser:
        module: cbmc_litani_parser
      variants: [variant_c, variant_d]
visualize:
- type: dump_graph
  out_file: graph.svg
- type: dump_markdown_results_table
  out_file: summary.md
  extra_columns: []
- type: error_on_regression
  variant_pairs:
  - [variant_a, variant_b]
  - [variant_c, variant_d]
`benchcomp` configuration file
`benchcomp`'s operation is controlled through a YAML file: `benchcomp.yaml` by default, or a file passed to the `-c/--config` option.
This page lists the different visualizations that are available.
Variants
A variant is a single invocation of a benchmark suite. Benchcomp runs several variants, so that their performance can be compared later. A variant consists of a command line, working directory, and environment. Benchcomp invokes the command using the operating system environment, updated with the keys and values in `env`. If any values in `env` contain strings of the form `${var}`, Benchcomp expands them to the value of the environment variable `$var`.
variants:
  variant_1:
    config:
      command_line: echo "Hello, world"
      directory: /tmp
      env:
        PATH: /my/local/directory:${PATH}
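The `${var}` expansion rule can be sketched as follows. This is not benchcomp's actual implementation, just an illustration of the substitution semantics; treating unset variables as the empty string is an assumption.

```python
import re

def expand_env(value, environ):
    """Expand ${var} references in a variant's env value, using the
    given environment mapping. Unset variables become "" (an assumption)."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: environ.get(m.group(1), ""),
                  value)
```

With `PATH` set to `/usr/bin` in the surrounding environment, the `PATH` value in the example above would expand to `/my/local/directory:/usr/bin`.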
Filters
After benchcomp has finished parsing the results, it writes them to `results.yaml` by default.
Before visualizing the results (see below), benchcomp can filter the results by piping them into an external program.
To filter results before visualizing them, add a `filters` entry to the configuration file.
filters:
- command_line: ./scripts/remove-redundant-results.py
- command_line: cat
The value of `filters` is a list of dicts.
Currently the only legal key for each of the dicts is `command_line`.
Benchcomp invokes each `command_line` in order, passing the results as a JSON file on stdin, and interprets the stdout as a YAML-formatted modified set of results.
Filter scripts can emit either YAML (which might be more readable while developing the script) or JSON (which benchcomp will parse as a subset of YAML).
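As a sketch, a filter script might look like the following. The idea of dropping benchmarks by a `flaky_` name prefix is purely illustrative, and the script emits JSON, which benchcomp parses as a subset of YAML.

```python
#!/usr/bin/env python3
"""Example filter: drop benchmarks whose name starts with 'flaky_'.
Reads the results as JSON on stdin and writes the modified results
to stdout as JSON (a subset of YAML)."""
import json
import sys

def filter_results(results):
    # Keep every benchmark except those with the (hypothetical)
    # 'flaky_' prefix; leave the rest of the results untouched.
    results["benchmarks"] = {
        name: data
        for name, data in results.get("benchmarks", {}).items()
        if not name.startswith("flaky_")
    }
    return results

if __name__ == "__main__":
    text = sys.stdin.read()
    if text:
        json.dump(filter_results(json.loads(text)), sys.stdout)
```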
Built-in visualizations
The following visualizations are available; these can be added to the `visualize` list of `benchcomp.yaml`.
Detailed documentation for these visualizations follows.
dump_markdown_results_table
Print Markdown-formatted tables displaying benchmark results
For each metric, this visualization prints out a table of benchmarks, showing the value of the metric for each variant, combined with an optional scatterplot.
The 'out_file' key is mandatory; specify '-' to print to stdout.
'extra_columns' can be an empty dict. The sample configuration below assumes that each benchmark result has a 'success' and 'runtime' metric for both variants, 'variant_1' and 'variant_2'. It adds a 'ratio' column to the table for the 'runtime' metric, and a 'change' column for the 'success' metric. The 'text' lambda is called once for each benchmark. It accepts a single argument (a dict) that maps variant names to the value of that variant for a particular metric, and returns a string that is rendered in the benchmark's row in the new column. This allows you to emit arbitrary text or Markdown formatting in response to particular combinations of values for different variants, such as regressions or performance improvements.
'scatterplot' takes the values 'off' (default), 'linear' (linearly scaled axes), or 'log' (logarithmically scaled axes).
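Conceptually, each extra-column cell is produced by evaluating the configured lambda against one benchmark's per-variant values. The helper below illustrates that contract; it is a sketch, not benchcomp's implementation.

```python
def render_extra_column(text_lambda_src, variant_values):
    """Evaluate a 'text' lambda (stored as a string in the YAML config)
    against one benchmark's per-variant values for a metric, returning
    the text for that benchmark's cell in the extra column."""
    fn = eval(text_lambda_src)  # the config stores the lambda as a string
    return fn(variant_values)
```

With the 'ratio' lambda from the sample configuration below and the values `{"variant_1": 5, "variant_2": 10}`, the rendered cell is the bold string `**2.0**`, since 10 exceeds 1.5 times 5.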
Sample configuration:
visualize:
- type: dump_markdown_results_table
  out_file: "-"
  scatterplot: linear
  extra_columns:
    runtime:
    - column_name: ratio
      text: >
        lambda b: str(b["variant_2"]/b["variant_1"])
        if b["variant_2"] < (1.5 * b["variant_1"])
        else "**" + str(b["variant_2"]/b["variant_1"]) + "**"
    success:
    - column_name: change
      text: >
        lambda b: "" if b["variant_2"] == b["variant_1"]
        else "newly passing" if b["variant_2"]
        else "regressed"
Example output:
## runtime
| Benchmark | variant_1 | variant_2 | ratio |
| --- | --- | --- | --- |
| bench_1 | 5 | 10 | **2.0** |
| bench_2 | 10 | 5 | 0.5 |
## success
| Benchmark | variant_1 | variant_2 | change |
| --- | --- | --- | --- |
| bench_1 | True | True | |
| bench_2 | True | False | regressed |
| bench_3 | False | True | newly passing |
dump_yaml
Print the YAML-formatted results to a file.
The 'out_file' key is mandatory; specify '-' to print to stdout.
Sample configuration:
visualize:
- type: dump_yaml
out_file: '-'
error_on_regression
Terminate benchcomp with a return code of 1 if any benchmark regressed.
This visualization checks whether any benchmark regressed from one variant to another. Sample configuration:
visualize:
- type: error_on_regression
  variant_pairs:
  - [variant_1, variant_2]
  - [variant_1, variant_3]
  checks:
  - metric: runtime
    test: "lambda old, new: new / old > 1.1"
  - metric: passed
    test: "lambda old, new: False if not old else not new"
This says to check whether any benchmark regressed when run under variant_2 compared to variant_1. A benchmark is considered to have regressed if the value of the 'runtime' metric under variant_2 is 10% higher than the value under variant_1. Furthermore, the benchmark is also considered to have regressed if it was previously passing, but is now failing. These same checks are performed on all benchmarks run under variant_3 compared to variant_1. If any of those lambda functions returns True, then benchcomp will terminate with a return code of 1.
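The check semantics can be sketched as follows, under the assumption that each variant's results map benchmark names to metric values. This is an illustration, not benchcomp's actual code.

```python
def regressions(checks, old_results, new_results):
    """Apply each check's 'test' lambda (stored as a string) to the old
    and new values of its metric; return the (benchmark, metric) pairs
    for which the lambda returned True, i.e. the regressions."""
    regressed = []
    for check in checks:
        metric = check["metric"]
        test = eval(check["test"])
        for bench, old_metrics in old_results.items():
            new_metrics = new_results[bench]
            if metric in old_metrics and test(old_metrics[metric],
                                              new_metrics[metric]):
                regressed.append((bench, metric))
    return regressed
```

If this list is non-empty for any variant pair, benchcomp exits with a return code of 1.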
run_command
Run an executable command, passing the performance metrics as JSON on stdin.
This allows you to write your own visualization, which reads a result file on stdin and does something with it, e.g. writing out a graph or other output file.
Sample configuration:
visualize:
- type: run_command
  command: ./my_visualization.py
Custom parsers
Benchcomp ships with built-in parsers that retrieve the results of a benchmark suite after the run has completed. You can also create your own parser, either to run locally or to check into the Kani codebase.
Built-in parsers
You specify which parser should run for each benchmark suite in `benchcomp.yaml`.
For example, if you're running the Kani performance suite, you would use the built-in `kani_perf` parser to parse the results:
suites:
  my_benchmark_suite:
    variants: [variant_1, variant_2]
    parser:
      module: kani_perf
Custom parsers
A parser is a program that benchcomp runs inside the root directory of a benchmark suite, after the suite run has completed.
The parser should retrieve the results of the run (by parsing output files etc.) and print the results out as a YAML document.
You can use your executable parser by specifying the `command` key rather than the `module` key in your `benchcomp.yaml` file:
suites:
  my_benchmark_suite:
    variants: [variant_1, variant_2]
    parser:
      command: ./my-cool-parser.sh
The `kani_perf` parser mentioned above, in `tools/benchcomp/benchcomp/parsers/kani_perf.py`, is a good starting point for writing a custom parser, as it also works as a standalone executable.
Here is an example output from an executable parser:
metrics:
  runtime: {}
  success: {}
  errors: {}
benchmarks:
  bench_1:
    metrics:
      runtime: 32
      success: true
      errors: []
  bench_2:
    metrics:
      runtime: 0
      success: false
      errors: ["compilation failed"]
The above format is different from the final `result.yaml` file that benchcomp writes, because the above file represents the output of running a single benchmark suite using a single variant.
Your parser will run once for each variant, and benchcomp combines the dictionaries into the final `result.yaml` file.
Contributing custom parsers to Kani
To turn your executable parser into one that benchcomp can invoke as a module, ensure that it has a `main(working_directory)` method that returns a dict (the same dict that it would print out as a YAML file to stdout).
Save the file in `tools/benchcomp/benchcomp/parsers` using Python module naming conventions (the filename should be an identifier and end in `.py`).
Limitations
Like other tools, Kani comes with some limitations. In some cases, these limitations are inherent because of the techniques it's based on, or the undecidability of the properties that Kani seeks to prove. In other cases, it's just a matter of time and effort to remove these limitations (e.g., specific unsupported Rust language features).
In this chapter, we do the following to document these limitations:
- Discuss the effect of Rust undefined behaviour.
- Summarize the current support for Rust features.
- Explain the need for overrides and list all overridden symbols.
Undefined Behaviour
The Effect of Undefined Behaviour on Program Verification
Rust has a broad definition of undefined behaviour (UB). The Rust documentation warns that UB can have unexpected, non-local effects:
Note: Undefined behavior affects the entire program. For example, calling a function in C that exhibits undefined behavior of C means your entire program contains undefined behaviour that can also affect the Rust code. And vice versa, undefined behavior in Rust can cause adverse affects on code executed by any FFI calls to other languages.
If a program has UB, the semantics of the rest of the program are undefined. As a result, if the program under verification contains UB then, in principle, the program (including its representation in MIR analyzed by Kani) has no semantics and hence could do anything, including violating the guarantees checked by Kani. This means that verification results are subject to the proviso that the program under verification does not contain UB.
What forms of Undefined Behaviour can Rust Exhibit
Rust’s definition of UB is so broad that Rust has the following warning:
Warning The following list is not exhaustive. There is no formal model of Rust's semantics for what is and is not allowed in unsafe code, so there may be more behavior considered unsafe. The following list is just what we know for sure is undefined behavior. Please read the Rustonomicon (https://doc.rust-lang.org/nomicon/index.html) before writing unsafe code.
Given the lack of a formal semantics for UB, and given Kani's focus on memory safety, there are classes of UB which Kani does not detect. A non-exhaustive list of these, based on the non-exhaustive list from the Rust documentation, is:
- Data races.
- Kani focuses on sequential code.
- Breaking the pointer aliasing rules (http://llvm.org/docs/LangRef.html#pointer-aliasing-rules).
- Kani can detect if misuse of pointers causes memory safety or assertion violations, but does not track reference lifetimes.
- Mutating immutable data.
- Kani can detect if modification of immutable data causes memory safety or assertion violations, but does not track reference lifetimes.
- Invoking undefined behavior via compiler intrinsics.
- Kani makes a best effort attempt to check the preconditions of compiler intrinsics, but does not guarantee to do so in all cases.
- Executing code compiled with platform features that the current platform does not support (see target_feature).
- Kani relies on `rustc` to check for this case.
- Calling a function with the wrong call ABI or unwinding from a function with the wrong unwind ABI.
- Kani relies on `rustc` to check for this case.
- Producing an invalid value, even in private fields and locals.
- Kani won't create invalid values with `kani::any()`, but it also won't complain if you `transmute` an invalid value to a Rust type (for example, a `0` to `NonZeroU32`).
- Incorrect use of inline assembly.
- Kani does not support inline assembly.
- Using uninitialized memory.
- See the corresponding section in our Rust feature support.
Kani makes a best-effort attempt to detect some cases of UB:
- Evaluating a dereference expression (`*expr`) on a raw pointer that is dangling or unaligned.
- Kani can detect invalid dereferences, but may not detect them in place expression context.
- Invoking undefined behavior via compiler intrinsics.
Rust feature support
The table below tries to summarize the current support in Kani for the Rust language features according to the Rust Reference. We use the following values to indicate the level of support:
- Yes: The feature is fully supported. We are not aware of any issue with it.
- Partial: The feature is at least partially supported. We are aware of some issues with it.
- No: The feature is not supported. Some support may be available but analyses should not be trusted.
As with all software, bugs may be found anywhere regardless of the level of support. In such cases, we would greatly appreciate that you filed a bug report.
Reference | Feature | Support | Notes |
---|---|---|---|
3.1 | Macros By Example | Yes | |
3.2 | Procedural Macros | Yes | |
4 | Crates and source files | Yes | |
5 | Conditional compilation | Yes | |
6.1 | Modules | Yes | |
6.2 | Extern crates | Yes | |
6.3 | Use declarations | Yes | |
6.4 | Functions | Yes | |
6.5 | Type aliases | Yes | |
6.6 | Structs | Yes | |
6.7 | Enumerations | Yes | |
6.8 | Unions | Yes | |
6.9 | Constant items | Yes | |
6.10 | Static items | Yes | |
6.11 | Traits | Yes | |
6.12 | Implementations | Yes | |
6.13 | External blocks | Yes | |
6.14 | Generic parameters | Yes | |
6.15 | Associated Items | Yes | |
7 | Attributes | Yes | |
8.1 | Statements | Yes | |
8.2.1 | Literal expressions | Yes | |
8.2.2 | Path expressions | Yes | |
8.2.3 | Block expressions | Yes | |
8.2.4 | Operator expressions | Yes | |
8.2.5 | Grouped expressions | Yes | |
8.2.6 | Array and index expressions | Yes | |
8.2.7 | Tuple and index expressions | Yes | |
8.2.8 | Struct expressions | Yes | |
8.2.9 | Call expressions | Yes | |
8.2.10 | Method call expressions | Yes | |
8.2.11 | Field access expressions | Yes | |
8.2.12 | Closure expressions | Yes | |
8.2.13 | Loop expressions | Yes | |
8.2.14 | Range expressions | Yes | |
8.2.15 | If and if let expressions | Yes | |
8.2.16 | Match expressions | Yes | |
8.2.17 | Return expressions | Yes | |
8.2.18 | Await expressions | No | See Notes - Concurrency |
9 | Patterns | Partial | #707 |
10.1.1 | Boolean type | Yes | |
10.1.2 | Numeric types | Yes | |
10.1.3 | Textual types | Yes | |
10.1.4 | Never type | Yes | |
10.1.5 | Tuple types | Yes | |
10.1.6 | Array types | Yes | |
10.1.7 | Slice types | Yes | |
10.1.8 | Struct types | Yes | |
10.1.9 | Enumerated types | Yes | |
10.1.10 | Union types | Yes | |
10.1.11 | Function item types | Yes | |
10.1.12 | Closure types | Partial | See Notes - Advanced features |
10.1.13 | Pointer types | Partial | See Notes - Advanced features |
10.1.14 | Function pointer types | Partial | See Notes - Advanced features |
10.1.15 | Trait object types | Partial | See Notes - Advanced features |
10.1.16 | Impl trait type | Partial | See Notes - Advanced features |
10.1.17 | Type parameters | Partial | See Notes - Advanced features |
10.1.18 | Inferred type | Partial | See Notes - Advanced features |
10.2 | Dynamically Sized Types | Partial | See Notes - Advanced features |
10.3 | Type layout | Yes | |
10.4 | Interior mutability | Yes | |
10.5 | Subtyping and Variance | Yes | |
10.6 | Trait and lifetime bounds | Yes | |
10.7 | Type coercions | Partial | See Notes - Advanced features |
10.8 | Destructors | Partial | |
10.9 | Lifetime elision | Yes | |
11 | Special types and traits | Partial | |
Box<T> | Yes | ||
Rc<T> | Yes | ||
Arc<T> | Yes | ||
Pin<T> | Yes | ||
UnsafeCell<T> | Partial | ||
PhantomData<T> | Partial | ||
Operator Traits | Partial | ||
Deref and DerefMut | Yes | ||
Drop | Partial | ||
Copy | Yes | ||
Clone | Yes | ||
14 | Linkage | Yes | |
15.1 | Unsafe functions | Yes | |
15.2 | Unsafe blocks | Yes | |
15.3 | Behavior considered undefined | Partial | |
Data races | No | See Notes - Concurrency | |
Dereferencing dangling raw pointers | Yes | ||
Dereferencing unaligned raw pointers | No | ||
Breaking pointer aliasing rules | No | ||
Mutating immutable data | No | ||
Invoking undefined behavior via compiler intrinsics | Partial | See Notes - Intrinsics | |
Executing code compiled with platform features that the current platform does not support | No | ||
Producing an invalid value, even in private fields and locals | No |
Notes on partially or unsupported features
Code generation for unsupported features
Kani aims to be an industrial verification tool. Most industrial crates may include unsupported features in parts of their code that do not need to be verified. In general, this should not prevent users from using Kani to verify their code.
Because of that, the general rule is that Kani generates an `assert(false)` statement followed by an `assume(false)` statement when compiling any unsupported feature. `assert(false)` will cause verification to fail if the statement is reachable during the verification stage, while `assume(false)` will block any further exploration of the path. However, the analysis will not be affected if the statement is not reachable from the code under verification, so users can still verify components of their code that do not use unsupported features.
In a few cases, Kani aborts execution if the analysis could be affected in some way because of an unsupported feature (e.g., global ASM).
Assembly
Kani does not support assembly code for now. We may add it in the future but at present there are no plans to do so.
Check out the tracking issues for inline assembly (the `asm!` macro) and global assembly (the `global_asm!` macro) to learn more about the current status.
Concurrency
Concurrent features are currently out of scope for Kani. In general, the verification of concurrent programs continues to be an open research problem, and most tools that analyze concurrent code lack support for other features. Because of this, Kani emits a warning whenever it encounters concurrent code and compiles as if it were sequential code.
Standard library functions
Kani overrides a few common functions (e.g., print macros) to provide a more verification-friendly implementation.
Advanced features
The semantics around some advanced features (traits, types, etc.) from Rust are not formally defined which makes it harder to ensure that we can properly model all their use cases.
In particular, there are some outstanding issues to note here:
- Sanity check `Variant` type in projections #448.
- Unexpected fat pointer results in #277, #327 and #676.
We are particularly interested in bug reports concerning these features, so please file a bug report if you're aware of one.
Panic strategies
Rust has two different strategies when a panic occurs:
- Stack unwinding (default): Walks back the stack cleaning up the data from each function it encounters.
- Aborting: Immediately ends the program without cleaning up.
Currently, Kani does not support stack unwinding. This has some implications regarding memory safety since programs sometimes rely on the unwinding logic to ensure there is no resource leak or persistent data inconsistency. Check out this issue for updates on stack unwinding support.
Uninitialized memory
Reading uninitialized memory is considered undefined behavior in Rust. At the moment, Kani cannot detect if memory is uninitialized, but in practice this is mitigated by the fact that all memory is initialized with nondeterministic values. Therefore, any code that depends on uninitialized data will exhibit nondeterministic behavior. See this issue for more details.
Destructors
At present, we are aware of some issues with destructors, in particular those related to advanced features.
Intrinsics
Please refer to Intrinsics for information on the current support in Kani for Rust compiler intrinsics.
Floating point operations
Kani supports floating point numbers, but some supported operations on floats are "over-approximated": the trigonometric functions such as `sin` and `cos`, as well as `sqrt`.
This means the verifier can raise errors that cannot actually happen when the code is run normally.
For instance (#1342), the `sin` and `cos` functions basically return a nondeterministic value between -1 and 1.
In other words, they largely ignore their input and give very conservative answers.
This range certainly includes the "real" value, so proof soundness is still preserved, but it means Kani could raise spurious errors that cannot actually happen.
This makes Kani unsuitable for verifying some kinds of properties (e.g. precision) about numerical algorithms.
Proofs that fail because of this problem can sometimes be repaired by introducing "stubs" for these functions that return a more acceptable approximation.
However, note that the actual behavior of these functions can vary by platform/os/architecture/compiler, so introducing an "overly precise" approximation may introduce unsoundness: actual system behavior may produce different values from the stub's approximation.
Intrinsics
The tables below try to summarize the current support in Kani for Rust intrinsics. We define the level of support similar to how we indicate Rust feature support:
- Yes: The intrinsic is fully supported. We are not aware of any issue with it.
- Partial: The intrinsic is at least partially supported. We are aware of some issues with it.
- No: The intrinsic is not supported.
In general, code generation for unsupported intrinsics follows the rule described in Rust feature support - Code generation for unsupported features.
Any intrinsic not appearing in the tables below is considered not supported. Please open a feature request if your code depends on an unsupported intrinsic.
Compiler intrinsics
Name | Support | Notes |
---|---|---|
abort | Yes | |
add_with_overflow | Yes | |
arith_offset | Yes | |
assert_inhabited | Yes | |
assert_uninit_valid | Yes | |
assert_zero_valid | Yes | |
assume | Yes | |
atomic_and_seqcst | Partial | See Atomics |
atomic_and_acquire | Partial | See Atomics |
atomic_and_acqrel | Partial | See Atomics |
atomic_and_release | Partial | See Atomics |
atomic_and_relaxed | Partial | See Atomics |
atomic_cxchg_acqrel_acquire | Partial | See Atomics |
atomic_cxchg_acqrel_relaxed | Partial | See Atomics |
atomic_cxchg_acqrel_seqcst | Partial | See Atomics |
atomic_cxchg_acquire_acquire | Partial | See Atomics |
atomic_cxchg_acquire_relaxed | Partial | See Atomics |
atomic_cxchg_acquire_seqcst | Partial | See Atomics |
atomic_cxchg_relaxed_acquire | Partial | See Atomics |
atomic_cxchg_relaxed_relaxed | Partial | See Atomics |
atomic_cxchg_relaxed_seqcst | Partial | See Atomics |
atomic_cxchg_release_acquire | Partial | See Atomics |
atomic_cxchg_release_relaxed | Partial | See Atomics |
atomic_cxchg_release_seqcst | Partial | See Atomics |
atomic_cxchg_seqcst_acquire | Partial | See Atomics |
atomic_cxchg_seqcst_relaxed | Partial | See Atomics |
atomic_cxchg_seqcst_seqcst | Partial | See Atomics |
atomic_cxchgweak_acqrel_acquire | Partial | See Atomics |
atomic_cxchgweak_acqrel_relaxed | Partial | See Atomics |
atomic_cxchgweak_acqrel_seqcst | Partial | See Atomics |
atomic_cxchgweak_acquire_acquire | Partial | See Atomics |
atomic_cxchgweak_acquire_relaxed | Partial | See Atomics |
atomic_cxchgweak_acquire_seqcst | Partial | See Atomics |
atomic_cxchgweak_relaxed_acquire | Partial | See Atomics |
atomic_cxchgweak_relaxed_relaxed | Partial | See Atomics |
atomic_cxchgweak_relaxed_seqcst | Partial | See Atomics |
atomic_cxchgweak_release_acquire | Partial | See Atomics |
atomic_cxchgweak_release_relaxed | Partial | See Atomics |
atomic_cxchgweak_release_seqcst | Partial | See Atomics |
atomic_cxchgweak_seqcst_acquire | Partial | See Atomics |
atomic_cxchgweak_seqcst_relaxed | Partial | See Atomics |
atomic_cxchgweak_seqcst_seqcst | Partial | See Atomics |
atomic_fence_seqcst | Partial | See Atomics |
atomic_fence_acquire | Partial | See Atomics |
atomic_fence_acqrel | Partial | See Atomics |
atomic_fence_release | Partial | See Atomics |
atomic_load_seqcst | Partial | See Atomics |
atomic_load_acquire | Partial | See Atomics |
atomic_load_relaxed | Partial | See Atomics |
atomic_load_unordered | Partial | See Atomics |
atomic_max_seqcst | Partial | See Atomics |
atomic_max_acquire | Partial | See Atomics |
atomic_max_acqrel | Partial | See Atomics |
atomic_max_release | Partial | See Atomics |
atomic_max_relaxed | Partial | See Atomics |
atomic_min_seqcst | Partial | See Atomics |
atomic_min_acquire | Partial | See Atomics |
atomic_min_acqrel | Partial | See Atomics |
atomic_min_release | Partial | See Atomics |
atomic_min_relaxed | Partial | See Atomics |
atomic_nand_seqcst | Partial | See Atomics |
atomic_nand_acquire | Partial | See Atomics |
atomic_nand_acqrel | Partial | See Atomics |
atomic_nand_release | Partial | See Atomics |
atomic_nand_relaxed | Partial | See Atomics |
atomic_or_seqcst | Partial | See Atomics |
atomic_or_acquire | Partial | See Atomics |
atomic_or_acqrel | Partial | See Atomics |
atomic_or_release | Partial | See Atomics |
atomic_or_relaxed | Partial | See Atomics |
atomic_singlethreadfence_seqcst | Partial | See Atomics |
atomic_singlethreadfence_acquire | Partial | See Atomics |
atomic_singlethreadfence_acqrel | Partial | See Atomics |
atomic_singlethreadfence_release | Partial | See Atomics |
atomic_store_seqcst | Partial | See Atomics |
atomic_store_release | Partial | See Atomics |
atomic_store_relaxed | Partial | See Atomics |
atomic_store_unordered | Partial | See Atomics |
atomic_umax_seqcst | Partial | See Atomics |
atomic_umax_acquire | Partial | See Atomics |
atomic_umax_acqrel | Partial | See Atomics |
atomic_umax_release | Partial | See Atomics |
atomic_umax_relaxed | Partial | See Atomics |
atomic_umin_seqcst | Partial | See Atomics |
atomic_umin_acquire | Partial | See Atomics |
atomic_umin_acqrel | Partial | See Atomics |
atomic_umin_release | Partial | See Atomics |
atomic_umin_relaxed | Partial | See Atomics |
atomic_xadd_seqcst | Partial | See Atomics |
atomic_xadd_acquire | Partial | See Atomics |
atomic_xadd_acqrel | Partial | See Atomics |
atomic_xadd_release | Partial | See Atomics |
atomic_xadd_relaxed | Partial | See Atomics |
atomic_xchg_seqcst | Partial | See Atomics |
atomic_xchg_acquire | Partial | See Atomics |
atomic_xchg_acqrel | Partial | See Atomics |
atomic_xchg_release | Partial | See Atomics |
atomic_xchg_relaxed | Partial | See Atomics |
atomic_xor_seqcst | Partial | See Atomics |
atomic_xor_acquire | Partial | See Atomics |
atomic_xor_acqrel | Partial | See Atomics |
atomic_xor_release | Partial | See Atomics |
atomic_xor_relaxed | Partial | See Atomics |
atomic_xsub_seqcst | Partial | See Atomics |
atomic_xsub_acquire | Partial | See Atomics |
atomic_xsub_acqrel | Partial | See Atomics |
atomic_xsub_release | Partial | See Atomics |
atomic_xsub_relaxed | Partial | See Atomics |
blackbox | Yes | |
bitreverse | Yes | |
breakpoint | Yes | |
bswap | Yes | |
caller_location | No | |
ceilf32 | Yes | |
ceilf64 | Yes | |
copy | Yes | |
copy_nonoverlapping | Yes | |
copysignf32 | Yes | |
copysignf64 | Yes | |
cosf32 | Partial | Results are overapproximated; this test explains how |
cosf64 | Partial | Results are overapproximated; this test explains how |
ctlz | Yes | |
ctlz_nonzero | Yes | |
ctpop | Yes | |
cttz | Yes | |
cttz_nonzero | Yes | |
discriminant_value | Yes | |
drop_in_place | No | |
exact_div | Yes | |
exp2f32 | Partial | Results are overapproximated |
exp2f64 | Partial | Results are overapproximated |
expf32 | Partial | Results are overapproximated |
expf64 | Partial | Results are overapproximated |
fabsf32 | Yes | |
fabsf64 | Yes | |
fadd_fast | Yes | |
fdiv_fast | Partial | #809 |
float_to_int_unchecked | Partial | #3629 |
floorf32 | Yes | |
floorf64 | Yes | |
fmaf32 | Partial | Results are overapproximated |
fmaf64 | Partial | Results are overapproximated |
fmul_fast | Partial | #809 |
forget | Yes | |
frem_fast | No | |
fsub_fast | Yes | |
likely | Yes | |
log10f32 | Partial | Results are overapproximated |
log10f64 | Partial | Results are overapproximated |
log2f32 | Partial | Results are overapproximated |
log2f64 | Partial | Results are overapproximated |
logf32 | Partial | Results are overapproximated |
logf64 | Partial | Results are overapproximated |
maxnumf32 | Yes | |
maxnumf64 | Yes | |
min_align_of | Yes | |
min_align_of_val | Yes | |
minnumf32 | Yes | |
minnumf64 | Yes | |
move_val_init | No | |
mul_with_overflow | Yes | |
nearbyintf32 | Yes | |
nearbyintf64 | Yes | |
needs_drop | Yes | |
nontemporal_store | No | |
offset | Partial | Doesn't check all UB conditions |
powf32 | Partial | Results are overapproximated |
powf64 | Partial | Results are overapproximated |
powif32 | Partial | Results are overapproximated |
powif64 | Partial | Results are overapproximated |
pref_align_of | Yes | |
prefetch_read_data | No | |
prefetch_read_instruction | No | |
prefetch_write_data | No | |
prefetch_write_instruction | No | |
ptr_guaranteed_eq | Yes | |
ptr_guaranteed_ne | Yes | |
ptr_offset_from | Partial | Doesn't check all UB conditions |
raw_eq | Partial | Cannot detect uninitialized memory |
rintf32 | Yes | |
rintf64 | Yes | |
rotate_left | Yes | |
rotate_right | Yes | |
roundf32 | Yes | |
roundf64 | Yes | |
rustc_peek | No | |
saturating_add | Yes | |
saturating_sub | Yes | |
sinf32 | Partial | Results are overapproximated; this test explains how |
sinf64 | Partial | Results are overapproximated; this test explains how |
size_of | Yes | |
size_of_val | Yes | |
sqrtf32 | Partial | Results are overapproximated |
sqrtf64 | Partial | Results are overapproximated |
sub_with_overflow | Yes | |
transmute | Partial | Doesn't check all UB conditions |
truncf32 | Yes | |
truncf64 | Yes | |
try | No | #267 |
type_id | Yes | |
type_name | Yes | |
typed_swap | Yes | |
unaligned_volatile_load | No | See Notes - Concurrency |
unaligned_volatile_store | No | See Notes - Concurrency |
unchecked_add | Yes | |
unchecked_div | Yes | |
unchecked_mul | Yes | |
unchecked_rem | Yes | |
unchecked_shl | Yes | |
unchecked_shr | Yes | |
unchecked_sub | Yes | |
unlikely | Yes | |
unreachable | Yes | |
variant_count | No | |
volatile_copy_memory | No | See Notes - Concurrency |
volatile_copy_nonoverlapping_memory | No | See Notes - Concurrency |
volatile_load | Partial | See Notes - Concurrency |
volatile_set_memory | No | See Notes - Concurrency |
volatile_store | Partial | See Notes - Concurrency |
wrapping_add | Yes | |
wrapping_mul | Yes | |
wrapping_sub | Yes | |
write_bytes | Yes | |
Atomics
All atomic intrinsics are compiled as an atomic block in which the operation is performed. However, as noted in Notes - Concurrency, Kani's support for concurrent verification is limited and not enabled by default. Verification results for code containing atomic intrinsics should not be trusted, since Kani assumes the code is sequential.
Platform intrinsics
Intrinsics from the platform_intrinsics feature.
Name | Support | Notes |
---|---|---|
simd_add | Yes | |
simd_and | Yes | |
simd_div | Yes | |
simd_eq | Yes | |
simd_extract | Yes | |
simd_ge | Yes | |
simd_gt | Yes | |
simd_insert | Yes | |
simd_le | Yes | |
simd_lt | Yes | |
simd_mul | Yes | |
simd_ne | Yes | |
simd_or | Yes | |
simd_rem | Yes | Doesn't check for floating point overflow #2669 |
simd_shl | Yes | |
simd_shr | Yes | |
simd_shuffle* | Yes | |
simd_sub | Yes | |
simd_xor | Yes | |
Unstable features
In general, unstable Rust features are out of scope and any support for them available in Kani should be considered unstable as well.
The following are examples of unstable features that are not supported in Kani:
- Generators
- C-variadics
Overrides
As explained in Comparison with other tools, Kani is based on a technique called model checking, which verifies a program without actually executing it. It does so by encoding the program and analyzing the encoded version. The encoding process often requires "modeling" some library functions to make them suitable for analysis. Typical examples of functionality that requires modeling are system calls and I/O operations. In some cases, Kani performs this encoding by overriding some of the definitions in the Rust standard library.
The following table lists some of the symbols that Kani
overrides and a description of their behavior compared to the std
versions:
Name | Description |
---|---|
assert , assert_eq , and assert_ne macros | Skips string formatting code, generates a more informative message and performs some instrumentation |
debug_assert , debug_assert_eq , and debug_assert_ne macros | Rewritten as the equivalent assert* macro |
print , eprint , println , and eprintln macros | Skips string formatting and I/O operations |
unreachable macro | Skips string formatting and invokes panic!() |
std::process::{abort, exit} functions | Invokes panic!() to abort the execution |
FAQs
This section collects frequently asked questions about Kani. Please consider opening an issue if you have a question you would like to see answered here.
Questions
Kani doesn't fail after kani::assume(false). Why?
kani::assume(false) (or kani::assume(cond) where cond is a condition that evaluates to false in the context of the program) won't cause errors in Kani. Instead, such an assumption has the effect of blocking all the symbolic execution paths from the assumption onward. Therefore, all checks after the assumption should appear as UNREACHABLE. That's the expected behavior for kani::assume(false) in Kani.
If you didn't expect certain checks in a harness to be UNREACHABLE, we recommend using the kani::cover macro to determine which conditions are possible in case you've over-constrained the harness.
I implemented the kani::Arbitrary trait for a type that's not from my crate, and got the error "only traits defined in the current crate can be implemented for types defined outside of the crate". What does this mean? What can I do?
This error is due to a violation of Rust's orphan rules for trait implementations, which are explained here. In this case, you'll need to write a function that builds an object from non-deterministic variables. Inside this function, you simply return an arbitrary value by generating arbitrary values for its components.
For example, let's assume the type you're working with is this enum:
#[derive(Copy, Clone)]
pub enum Rating {
One,
Two,
Three,
}
Then, you can match on a non-deterministic integer (supplied by kani::any) to return non-deterministic Rating variants:
pub fn any_rating() -> Rating {
match kani::any() {
0 => Rating::One,
1 => Rating::Two,
_ => Rating::Three,
}
}
More details about this option, which is also useful in other cases, can be found here.
If the type comes from std (Rust's standard library), you can open a request for adding Arbitrary implementations to the Kani library.
Otherwise, there are more involved options to consider:
- Importing a copy of the external crate that defines the type, then implementing Arbitrary there.
- Contributing the Arbitrary implementation to the external crate that defines the type.