Introduction

Kani is an open-source verification tool that uses automated reasoning to analyze Rust programs. In order to integrate feedback from developers and users on future changes to Kani, we decided to follow a light-weight "RFC" (request for comments) process.

When to create an RFC

You should create an RFC in one of two cases:

  1. The change you are proposing would be a "one way door": e.g. a major change to the public API, a new feature that would be difficult to modify once released, etc.
  2. The change you are making has a significant design component, and would benefit from a design review.

Bugs and improvements to existing features do not require an RFC. If you are in doubt, feel free to create a feature request and discuss the next steps in the new issue. Your PR reviewer may also request an RFC if your change appears to fall into category 1 or 2.

You do not necessarily need to create an RFC immediately. It is our experience that it is often best to write some "proof of concept" code to test out possible ideas before writing the formal RFC.

The RFC process

This is the overall workflow for the RFC process:

    Create RFC ──> Receive Feedback  ──> Accepted?
                        │  ∧                  │ Y
                        ∨  │                  ├───> Implement ───> Test + Feedback ───> Stabilize?
                       Revise                 │ N                                          │ Y
                                              └───> Close PR                               ├───> RFC Stable
                                                                                           │ N
                                                                                           └───> Remove feature
  1. Create an RFC
    1. Create a tracking issue for your RFC (e.g.: Issue-1588).
    2. Start from a fork of the Kani repository.
    3. Copy the template file (rfc/src/template.md) to rfc/src/rfcs/<ID_NUMBER><my-feature>.md.
    4. Fill in the details according to the template instructions.
    • For the first RFC version, we recommend that you leave the "Software Design" section empty.
    • Focus on the user impact and user experience. Include a few usage examples if possible.
    5. Add a link to the new RFC inside rfc/src/SUMMARY.md
    6. Submit a pull request.
  2. Build consensus and integrate feedback.
    1. RFCs should get approved by at least 2 Kani developers.
    2. Once the RFC has been approved, update the RFC status and merge the PR.
    3. If the RFC is not approved, close the PR without merging.
  3. Feature implementation.
    1. Start implementing the new feature in your fork.
    2. Create a new revision of the RFC to add details about the "Software Design".
    3. It is OK to implement the feature incrementally over multiple PRs. Just ensure that every pull request has a testable end-to-end flow and that it is properly tested.
    4. In the implementation stage, the feature should only be accessible if the user explicitly passes -Z <FEATURE_ID> as an argument to Kani.
    5. Document how to use the feature.
    6. Keep the RFC up-to-date with the decisions you make during implementation.
  4. Test and Gather Feedback.
    1. Fix major issues related to the new feature.
    2. Gather user feedback and make necessary adjustments.
    3. Resolve RFC open questions.
    4. Add regression tests to cover all expected behaviors and unit tests whenever possible.
  5. Stabilization.
    1. Propose to stabilize the feature once it is well tested and its UX has received positive feedback.
    2. Create a new PR that removes the -Z <FEATURE_ID> guard and that marks the RFC status as "STABLE".
      1. Make sure the RFC reflects the final implementation and user experience.
    3. In some cases, we might decide not to incorporate a feature (E.g.: performance degradation, bad user experience, better alternative). In those cases, please update the RFC status to "CANCELLED as per <PR_LINK | ISSUE_LINK>" and remove the code that is no longer relevant.
    4. Close the tracking issue.
  • Feature Name: Fill in a pretty name and a unique ident [1]. Example: New Feature (new_feature)
  • Feature Request Issue: Link to issue
  • RFC PR: Link to original PR
  • Status: One of the following: [Under Review | Unstable | Stable | Cancelled]
  • Version: [0-9]* Increment this version whenever you open a new PR to update the RFC (not at every revision). Start with 0.
  • Proof-of-concept: Optional field. If you have implemented a proof of concept, add a link here

Summary

Short (1-2 sentences) description of the feature. What is this feature about?

User Impact

Imagine this as your elevator pitch directed to users as well as Kani developers.

  • Why are we doing this?
  • Why should users care about this feature?
  • How will this benefit them?
  • What is the downside?

If this RFC is related to a change in the architecture without major user impact, think about the long-term impact for users, i.e., what future work this will enable.

  • If you are unsure you need an RFC, please create a feature request issue and discuss the need with other Kani developers.

User Experience

This should be a description of how users will interact with the feature. Users should be able to read this section and understand how to use the feature. Do not include implementation details in this section, nor discuss the rationale behind the chosen UX.

Please include:

  • High level user flow description.
  • Any new major functions or attributes that will be added to the Kani library.
  • New command line options or subcommands (no need to mention the unstable flag).
  • List failure scenarios and how they are presented (e.g., compilation errors, verification failures, and possible failed user interactions).
  • Substantial changes to existing functionality or Kani output.

If the RFC is related to architectural changes and there are no visible changes to UX, please state so. No further explanation is needed.

Software Design

This is the beginning of the technical portion of the RFC. From now on, your main audience is Kani developers, so it's OK to assume readers know Kani architecture.

Please provide a high-level description of your design.

  • What are the main components that will be modified? (E.g.: changes to kani-compiler, kani-driver, metadata, proc-macros, installation...)
  • Will there be changes to the components' interfaces?
  • Any changes to how these components communicate?
  • What corner cases do you anticipate?

We recommend that you leave this section empty for the first version of your RFC.

Rationale and alternatives

This is the section where you discuss the decisions you made.

  • What are the pros and cons of the UX? What would be the alternatives?
  • What is the impact of not doing this?
  • Any pros / cons on how you designed this?

Open questions

List of open questions + an optional link to an issue that captures the work required to address the open question. Capture the details of each open question in their respective issue, not here.

Example:

  • Is there any use case that isn't handled yet?
  • Is there any part of the UX that still needs some improvement?

Make sure all open questions are addressed before stabilization.

Out of scope / Future Improvements

Optional Section: List of extensions and possible improvements that you predict for this feature that are out of the scope of this RFC.

Feel free to add as many items as you want, but please refrain from adding too much detail. If you want to capture your thoughts or start a discussion, please create a feature request. You are welcome to add a link to the new issue here.

[1] This unique ident should be used to enable features proposed in the RFC using -Z <ident> until the feature has been stabilized.
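For instance, with the template's example ident new_feature, users would opt in to the unstable feature with something like:

kani harness.rs -Z new_feature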


Summary

Fix linking issues with the Rust standard library in a scalable manner by generating a goto-program only for code that is reachable from the user's harnesses.

User Impact

The main goal of this RFC is to enable Kani users to link against all supported constructs from the std library. Currently, Kani will only link to items that are either generic or have an inline annotation.

The approach introduced in this RFC will have the following secondary benefits.

  • Reduce spurious warnings about unsupported features for cases where the feature is not reachable from any harness.
  • In verification mode, we will likely see a reduction in compilation time and memory consumption by pruning the inputs of symtab2gb and goto-instrument.
    • Compared to linking against the standard library goto-models that take more than 5 GB.
  • In a potential assessment mode, only analyze code that is reachable from all public items in the target crate.

One downside is that, because we will include a pre-compiled version of the std library, our release bundle will double in size (see Rationale and Alternatives for more information on the size overhead). This will negatively impact the time taken to set up Kani (triggered either the first time a user invokes kani | cargo-kani, or by an explicit invocation of the setup subcommand).

User Experience

Once this RFC has been stabilized, users shall use Kani in the same manner as they do today. Until then, we will add an unstable option --mir-linker to enable the cross-crate reachability analysis and the generation of the goto-program only when compiling the target crate.
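For example, until stabilization, users would opt in with something like:

cargo kani --mir-linker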

Kani setup will likely take longer and require more disk space, as mentioned in the section above. This change will not be guarded by the --mir-linker option above.

Detailed Design

In a nutshell, we will no longer generate a goto-program for every crate we compile. Instead, we will generate the MIR for every crate, and we will generate only one goto-program. This model will only include items reachable from the target crate's harnesses.

The current system flow for a crate verification is the following (Kani here represents either the kani or cargo-kani executable):

  1. Kani compiles the user crate as well as all its dependencies. For every crate compiled, kani-compiler will generate a goto-program. This model includes everything reachable from the crate's public functions.
  2. After that, Kani links all models together by invoking goto-cc. This step will also link against Kani's C library.
  3. For every harness, Kani invokes goto-instrument to prune the linked model to only include items reachable from the given harness.
  4. Finally, Kani instruments and verifies each harness model via goto-instrument and cbmc calls.

After this RFC, the system flow would be slightly different:

  1. Kani compiles the user crate dependencies up to the MIR translation. I.e., for every crate compiled, kani-compiler will generate an artifact that includes the MIR representation of all items in the crate.
  2. Kani will generate the goto-program only while compiling the target user crate. It will generate one goto-program that includes all items reachable from any harness in the target crate.
  3. goto-cc will still be invoked to link the generated model against Kani's C library.
  4. Steps #3 and #4 above will be performed without any change.

This feature will require three main changes to Kani which are detailed in the sub-sections below.

Kani's Sysroot

Kani currently uses rustup sysroot to gather information from the standard library constructs. The artifacts from this sysroot include the MIR for generic items as well as for items that may be included in a crate compilation (e.g.: functions marked with #[inline] annotation). The artifacts do not include the MIR for items that have already been compiled to the std shared library. This leaves a gap that cannot be filled by the kani-compiler; thus, we are unable to translate these items into goto-program.

In order to fill this gap, we must compile the standard library from scratch. This RFC proposes a method similar to the one MIRI implements. We will generate our own sysroot using the -Z always-encode-mir compilation flag. This sysroot will be pre-compiled and included in our release bundle.

We will also compile Kani's libraries (kani and std) with -Z always-encode-mir and against the new sysroot.
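A conceptual sketch of such a build, using cargo's unstable build-std support (illustrative only; the exact invocation will be part of our release tooling):

RUSTFLAGS="-Z always-encode-mir" cargo build -Z build-std --target x86_64-unknown-linux-gnu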

Cross-Crate Reachability Analysis

kani-compiler will include a new reachability module to traverse over the local and external MIR items. This module will monomorphize all generic code as it's performing the traversal.

The traversal logic will be customizable, allowing different starting points to be used. The two options to be included in this RFC are starting from all local harnesses (tagged with #[kani::proof]) and starting from all public functions in the local crate.

The kani-compiler behavior will be customizable via a new flag:

--reachability=[ harnesses | pub_fns | none | legacy | tests ]

where:

  • harnesses: Use the local harnesses as the starting points for the reachability analysis.
  • pub_fns: Use the public local functions as the starting points for the reachability analysis.
  • none: This will be the default value if --reachability flag is not provided. It will skip reachability analysis. No goto-program will be generated. This will be used to compile dependencies up to the MIR level. kani-compiler will still generate artifacts with the crate's MIR.
  • tests: Use the functions marked as tests with #[test] as the starting points for the analysis.
  • legacy: Mimics rustc behavior by invoking rustc_monomorphizer::collect_and_partition_mono_items() to collect the items to be generated. This will not include many items that go beyond the crate boundary. This option is only being kept for internal usage in some of our compiler tests. It cannot be used as part of the end-to-end verification flow, and it will be removed in the future.
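To illustrate the difference between the harnesses and pub_fns starting points, consider this hypothetical crate:

#[kani::proof]
fn my_harness() {
    helper(); // `harnesses`: my_harness is a starting point
}

fn helper() {} // reachable from my_harness, so kept under `harnesses`

pub fn public_api() {} // a starting point under `pub_fns`; pruned under
                       // `harnesses`, since no harness reaches it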

These flags will not be exposed to the final user. They will only be used for the communication between kani-driver and kani-compiler.

Dependencies vs Target Crate Compilation

The flags described in the section above will be used by kani-driver to implement the new system flow. For that, we propose the following mechanism:

  • For standalone kani, we will pass the option --reachability=harnesses to kani-compiler.

  • For cargo-kani, we will replace

    cargo build <FLAGS>
    

    with:

    cargo rustc <FLAGS> -- --reachability=harnesses
    

    to build everything. This command will compile all dependencies without the --reachability argument, and it will only pass the harnesses value to the compiler when compiling the target crate.

Rationale and Alternatives

Not doing anything is not an alternative, since this fixes a major gap in Kani's usability.

Benefits

  • The MIR linker will allow us to fix the linking issues with Rust's standard library.
  • Once stabilized, the MIR linker will be transparent to the user.
  • It will enable more powerful and precise static analysis in kani-compiler.
  • It won't require any changes to our dependencies.
  • This will fix the harnesses' dependency on the #[no_mangle] annotation (Issue-661).

Risks

Failures in the linking stage would not impact the tool's soundness. I anticipate the following failure scenarios:

  • ICE (Internal compiler error): Some logic is incorrectly implemented and the linking stage crashes. Although this is a bad experience for the user, this will not impact the verification result.
  • Missing items: This would either result in ICE during code generation or a verification failure if the missing item is reachable.
  • Extra items: This shouldn't impact the verification results, and they should be pruned by CBMC's reachability analysis. This is already the case today. In extreme cases, this could include a symbol that we cannot compile and cause an ICE.

The new reachability code would be highly dependent on the rustc unstable APIs, which could increase the cost of the upstream synchronization. That said, the APIs that would be required are already used today.

Finally, this implementation relies on a few unstable options from cargo and rustc. These APIs are used by other tools such as MIRI, so we don't see a high risk that they would be removed.

Alternatives

The other options explored were:

  1. Pre-compile the standard library, and the kani library, and ship the generated *symtab.json files.
  2. Pre-compile the standard library, and the kani library, convert the standard library and dependencies to goto-program (via symtab2gb) and link them into one single goto-program. Ship the generated model.

Both would still require shipping the compiler metadata (via rlib or rmeta) for the kani library, its dependencies, and kani_macro.so.

Both alternatives are very similar; they only differ in the artifact that would be shipped. They require generating and shipping a custom sysroot; however, there is no need to implement the reachability algorithm.

We implemented a prototype for the MIR linker and one for the alternatives. Both prototypes generate the sysroot as part of the cargo kani flow.

We performed a small experiment (on a c5.4xlarge ec2 instance running Ubuntu 20.04) to assess the options.

For this experiment, we used the following harness:

#[kani::proof]
#[kani::unwind(4)]
pub fn check_format() {
    assert!("2".parse::<u32>().unwrap() == 2);
}

The experiment showed that the MIR linker approach is much more efficient.

See the table below for the breakdown of time (in seconds) taken for each major step of the harness verification:

Stage                    | MIR Linker | Alternative 1
compilation              | 22.2s      | 64.7s
goto-program generation  | 2.4s       | 90.7s
goto-program linking     | 0.8s       | 33.2s
code instrumentation     | 0.8s       | 33.1s
verification             | 0.5s       | 8.5s

It is possible that goto-cc time can be improved, but this would also require further experimentation and expertise that we don't have today.

Every option would require a custom sysroot to either be built or shipped with Kani. The table below shows the size of the sysroot files for alternative #2 (goto-program files) vs compiler artifacts (*.rmeta files) with -Z always-encode-mir for x86_64-unknown-linux-gnu (on Ubuntu 18.04).

File Type   | Raw size | Compressed size
symtab.json | 950M     | 26M
symtab.out  | 84M      | 24M
*.rmeta     | 92M      | 25M

These results were obtained by looking at the artifacts generated during the same experiment.

Open questions

  • Should we build or download the sysroot during kani setup? We include pre-built MIR artifacts for the std library.
  • What's the best way to enable support for running Kani on an entire workspace? We decided to run cargo rustc per package.
  • Should we codegen all static items no matter what? We only generate code for static items that are collected by the reachability analysis. Static objects can only be initialized via constant functions, so skipping them shouldn't drop any side effects (see the example after this list).
  • What's the best way to handle cargo kani --tests? We are going to use the test profile and iterate over all the targets available in the crate:
    • cargo rustc --profile test -- --reachability=harnesses
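For illustration, statics are restricted to constant initializers, e.g.:

static ZEROED: u32 = 0;                // literal initializer
static TABLE: [u8; 4] = make_table();  // `const fn` evaluated at compile time

const fn make_table() -> [u8; 4] {
    [0, 1, 2, 3]
}

Because these initializers run at compile time, skipping code generation for unreachable statics cannot drop a runtime side effect.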

Future possibilities

  • Split the goto-program into two or more items to optimize compilation result caching.
    • Dependencies: One model will include items from all the crate dependencies. This model will likely be more stable and require fewer updates.
    • Target crate: The model for all items in the target crate.
  • Do the analysis per-harness. This might be adequate once we have a mechanism to cache translations.
  • Add an option to include external functions in the analysis starting point in order to enable verification when calls are made from C to Rust.
  • Contribute the reachability analysis code back to upstream.

Summary

Allow users to specify that certain functions and methods should be replaced with mock functions (stubs) during verification.

Scope

In scope:

  • Replacing function bodies
  • Replacing method bodies (which means that the new method body will be executed, whether the method is invoked directly or through a vtable)

Out of scope:

  • Replacing type definitions
  • Replacing macro definitions
  • Mocking traits
  • Mocking intrinsics

User impact

We anticipate that function/method stubbing will have a substantial positive impact on the usability of Kani:

  1. Users might need to stub functions/methods containing features that Kani does not support, such as inline assembly.
  2. Users might need to stub functions/methods containing code that Kani supports in principle, but which in practice leads to bad verification performance (for example, if it contains deserialization code).
  3. Users could use stubbing to perform compositional reasoning: prove the behavior of a function/method f, and then in other proofs---that call f indirectly---use a stub of f that mocks that behavior but is less complex.

In all cases, stubbing would enable users to verify code that cannot currently be verified by Kani (or at least not within a reasonable resource bound). Even without stubbing types, the ability to stub functions/methods can help provide verification-friendly abstractions for standard data structures. For example, Issue 1673 suggests that some Kani proofs run more quickly if Vec::new is replaced with Vec::with_capacity; function stubbing would allow us to make this substitution everywhere in a codebase (and not just in the proof harness).
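As a sketch of how that substitution might look with the #[kani::stub] attribute proposed in this RFC (vec_new_stub and its capacity are illustrative, not part of the proposal):

fn vec_new_stub<T>() -> Vec<T> {
    // Hypothetical replacement: pre-allocate instead of starting empty.
    Vec::with_capacity(16)
}

#[cfg(kani)]
#[kani::proof]
#[kani::stub(Vec::new, vec_new_stub)]
fn my_harness() { /* ... */ }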

In what follows, we give an example of stubbing external code, using the annotations we propose in this RFC. We are able to run this example on a modified version of Kani using a proof-of-concept MIR-to-MIR transformation implementing stubbing (the prototype does not support stub-related annotations; instead, it reads the stub mapping from a file). This example stubs out a function that returns a random number. This is the type of function that is commonly stubbed in other verification and program analysis projects, along with system calls, timer functions, logging calls, and deserialization methods---all of which we should be able to handle. See the appendix at the end of this RFC for an extended example involving stubbing out a deserialization method.

Mocking randomization

The crate rand is widely used (150M downloads). However, Kani cannot currently handle code that uses it (Kani users have run into this; see Issue 1727). Consider this example:

#[cfg(kani)]
#[kani::proof]
fn random_cannot_be_zero() {
    assert_ne!(rand::random::<u32>(), 0);
}

For unwind values less than 2, Kani encounters an unwinding assertion error (there is a loop used to seed the random number generator); if we set an unwind value of 2, Kani fails to terminate within 5 minutes.

Using stubbing, we can specify that the function rand::random should be replaced with a mocked version:

#[cfg(kani)]
fn mock_random<T: kani::Arbitrary>() -> T {
    kani::any()
}

#[cfg(kani)]
#[kani::proof]
#[kani::stub(rand::random, mock_random)]
fn random_cannot_be_zero() {
    assert_ne!(rand::random::<u32>(), 0);
}

Under this substitution, Kani has a single check, which proves that the assertion can fail. Verification time is 0.02 seconds.

User experience

This feature is currently limited to stubbing functions and methods. We anticipate that the annotations we propose here could also be used for stubbing types, although the underlying technical approach might have to change.

Stubs will be specified per harness; that is, different harnesses can use different stubs. This is one of the main design points. Users might want to mock the behavior of a function within one proof harness, and then mock it a different way for another harness, or even use the original function definition. It would be overly restrictive to impose the same stub definitions across all proof harnesses. A good example of this is compositional reasoning: in some harnesses, we want to prove properties of a particular function (and so want to use its actual implementation), and in other harnesses we want to assume that that function has those properties.

Users will specify stubs by attaching the #[kani::stub(<original>, <replacement>)] attribute to each harness function. The arguments original and replacement give the names of functions/methods. They will be resolved using Rust's standard name resolution rules; this includes supporting imports like use foo::bar as baz, as well as imports of multiple versions of the same crate (in which case a name would resolve to a function/method in a particular version). The attribute may be specified multiple times per harness, so that multiple (non-conflicting) stub pairings are supported.

For example, this code specifies that the function mock_random should be used in place of the function rand::random and the function my_mod::bar should be used in place of the function my_mod::foo for the harness my_mod::my_harness:

#[cfg(kani)]
fn mock_random<T: kani::Arbitrary>() -> T {
    kani::any()
}

mod my_mod {

    fn foo(x: u32) -> u32 { ... }

    fn bar(x: u32) -> u32 { ... }

    #[cfg(kani)]
    #[kani::proof]
    #[kani::stub(rand::random, super::mock_random)]
    #[kani::stub(foo, bar)]
    fn my_harness() { ... }

}

We will support the stubbing of private functions and methods. While this provides flexibility that we believe will be necessary in practice, it can also lead to brittle proofs: private functions/methods can change or disappear in even minor version upgrades (thanks to refactoring), and so proofs that depend on them might have a high maintenance burden. In the documentation, we will discourage stubbing private functions/methods except if absolutely necessary.

Stub sets

As a convenience, we will provide a macro kani::stub_set that allows users to specify sets of stubs that can be applied to multiple harnesses:

kani::stub_set!(my_io_stubs(
    stub(std::fs::read, my_read),
    stub(std::fs::write, my_write),
));

When declaring a harness, users can use the #[kani::use_stub_set(<stub_set_name>)] attribute to apply the stub set:

#[cfg(kani)]
#[kani::proof]
#[kani::use_stub_set(my_io_stubs)]
fn my_harness() { ... }

The name of the stub set will be resolved through the module path (i.e., they are not global symbols), using Rust's standard name resolution rules.

A similar mechanism can be used to aggregate stub sets:

kani::stub_set!(all_my_stubs(
    use_stub_set(my_io_stubs),
    use_stub_set(my_other_stubs),
));

Error conditions

Given a set of original-replacement pairs, Kani will exit with an error if

  1. a specified original function/method does not exist;
  2. a specified replacement stub does not exist;
  3. the user specifies conflicting stubs for the same harness (e.g., if the same original function is mapped to multiple replacement functions); or
  4. the signature of the replacement stub is not compatible with the signature of the original function/method (see next section).

Stub compatibility and validation

When considering whether a function/method can be replaced with some given stub, we want to allow some measure of flexibility, while also ensuring that we can provide the user with useful feedback if stubbing results in malformed code. We consider a stub and a function/method to be compatible if all the following conditions are met:

  • They have the same number of parameters.
  • They have the same return type.
  • Each parameter in the stub has the same type as the corresponding parameter in the original function/method.
  • The stub must have the same number of generic parameters as the original function/method. However, a generic parameter in the stub is allowed to have a different name than the corresponding parameter in the original function/method. For example, the stub bar<A, B>(x: A, y: B) -> B is considered to have a type compatible with the function foo<S, T>(x: S, y: T) -> T.
  • The bounds for each type parameter don't need to match; however, all calls to the original function must also satisfy the bounds of the stub.

The final point is the most subtle. We do not require that a type parameter in the signature of the stub implements the same traits as the corresponding type parameter in the signature of the original function/method. However, Kani will reject a stub if a trait mismatch leads to a situation where a statically dispatched call to a trait method cannot be resolved during monomorphization. For example, this restriction rules out the following harness:

fn foo<T>(_x: T) -> bool {
    false
}

trait DoIt {
    fn do_it(&self) -> bool;
}

fn bar<T: DoIt>(x: T) -> bool {
    x.do_it()
}

#[kani::proof]
#[kani::stub(foo, bar)]
fn harness() {
    assert!(foo("hello"));
}

The call to the trait method DoIt::do_it is unresolvable in the stub bar when the type parameter T is instantiated with the type &str. On the other hand, our approach provides some flexibility, such as allowing our earlier example of mocking randomization: both rand::random and mock_random have the type () -> T, but in the first case T is restricted such that the type Standard implements Distribution<T>, whereas in the latter case T has to implement kani::Arbitrary. This trait mismatch is allowed because at this call site T is instantiated with u32, which implements kani::Arbitrary.

Pedagogy

To teach this feature, we will update the documentation with a section on function and method stubbing, including simple examples showing how stubbing can help Kani handle code that currently cannot be verified, as well as a guide to best practices for stubbing.

Detailed design

We expect that this feature will require changes primarily to kani-compiler, with some less invasive changes to kani-driver. We will modify kani-compiler to collect stub mapping information (from the harness attributes) before code generation. Since stubs are specified on a per-harness basis, we need to generate multiple versions of code if the harnesses do not all agree on their stub mappings; accordingly, we will update kani-compiler to generate multiple versions of code as appropriate. To do the stubbing, we will plug in a new MIR-to-MIR transformation that replaces the bodies of specified functions with their replacements. This can be achieved via rustc's query mechanism: if the user wants to replace foo with bar, then when the compiler requests the MIR for foo, we instead return the MIR for bar. kani-compiler will also be responsible for checking for the error conditions previously enumerated.
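To make the query idea concrete, here is a self-contained toy model of the redirect logic (not rustc's actual API; DefId, Mir, and body_for are stand-ins):

use std::collections::HashMap;

type DefId = &'static str; // stand-in for rustc's DefId
type Mir = &'static str;   // stand-in for a MIR body

// When the compiler asks for a function's body, consult the harness's stub
// map first and return the replacement's body if one is registered.
fn body_for(def_id: DefId, stubs: &HashMap<DefId, DefId>, mir: &HashMap<DefId, Mir>) -> Mir {
    let target = stubs.get(def_id).copied().unwrap_or(def_id);
    mir[target]
}

fn main() {
    let mir = HashMap::from([("foo", "mir_of_foo"), ("bar", "mir_of_bar")]);
    let stubs = HashMap::from([("foo", "bar")]); // harness: replace foo with bar
    assert_eq!(body_for("foo", &stubs, &mir), "mir_of_bar");
}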

We will also need to update the metadata that kani-compiler generates, so that it maps each harness to the generated code that has the right stub mapping for that harness (since there will be multiple versions of generated code). The metadata will also list the stubs applied in each harness. kani-driver will need to be updated to process this new type of metadata and invoke the correct generated code for each harness. We can also update the results report to include the stubs that were used.

We anticipate that this design will evolve and be iterated upon.

Rationale and alternatives: user experience

Stubbing is a de facto necessity for verification tools, and the lack of stubbing has a negative impact on the usability of Kani.

Benefits

  • Because stubs are specified by annotating the harness, the user is able to specify stubs for functions they do not have source access to (like library functions). This contrasts with annotating the function to be replaced (such as with function contracts).
  • The current design provides the user with flexibility, as they can specify different sets of stubs to use for different harnesses. This is important if users are trying to perform compositional reasoning using stubbing, since in some harnesses a function/method should be fully verified, while in other harnesses its behavior should be mocked.
  • The stub selections are located adjacent to the harness, which makes it easy to understand which replacements are going to happen for each harness.

Risks

  • Users can always write stubs that do not correctly correspond to program behavior, and so a successful verification does not actually mean the program is bug-free. This is similar to other specification bugs. All the stubbing code will be available, so it is possible to inspect the assumptions it makes.

Comparison to function contracts

  • In many cases, stubs are more user-friendly than contracts. With contracts, it is sometimes necessary to explicitly provide information that is automatically captured in Rust (such as which memory is written). Furthermore, contract predicates constitute a DSL of their own that needs to be learned; using stubbing, we can stick to using just Rust.
  • Function contracts sometimes come with a mechanism for verifying that a function satisfies its contract (for example, CBMC provides this). While we do not plan to provide such a feature, it is possible to emulate this by writing proof harnesses comparing the behavior of the original function and the stub. Furthermore, our approach provides additional flexibility, as it is not always actually desirable for a stub to be an overapproximation of the function (e.g., we might want the stub to exhibit a certain behavior within a particular harness) or to have a consistent behavior across all harnesses.
  • The currently proposed function contract mechanism does not provide a way to specify contracts on external functions. In principle, it would be possible to extend it to do so. Doing so would require some additional UX design decisions (e.g., "How do users specify this?"); with stubbing there does not need to be a distinction between local and external functions.

Alternative #1: Annotate stubbed functions

In this alternative, users add an attribute #[kani::stub_by(<replacement>)] to the function that should be replaced. This approach is similar to annotating a function with a contract specifying its behavior (the stub acts like a programmatic contract). The major downside with this approach is that it would not be possible to stub external code. We see this as a likely use case that needs to be supported: users will want to replace std library functions or functions from arbitrary external crates.

Alternative #2: Annotate stubs

In this alternative, users add an attribute #[kani::stub_of(<original>)] to the stub function itself, saying which function it replaces:

#[cfg(kani)]
#[kani::stub_of(rand::random)]
fn mock_random<T: kani::Arbitrary>() -> T { ... }

The downside is that this stub must be uniformly applied across all harnesses and the stub specifications might be spread out across multiple files. It would also require an extra layer of indirection to use a function as a stub if the user does not have source code access to it.

Alternative #3: Annotate harnesses and stubs

This alternative combines the proposed solution and Alternative #2. Users annotate the stub (as in Alternative #2) and specify, for each harness, which stubs to use via an annotation #[kani::use_stubs(<stub>+)] placed above the harness.

This could be combined with modules, so that a module can be used to group stubs together, and then harnesses could pull in all the stubs in the module:

#[cfg(kani)]
mod my_stubs {

  #[kani::stub_of(foo)]
  fn stub1() { ... }

  #[kani::stub_of(bar)]
  fn stub2() { ... }

}

#[cfg(kani)]
#[kani::proof]
#[kani::use_stubs(my_stubs)]
fn my_harness() { ... }

The benefit is that stubs are specified per harness, and (using modules) it might be possible to group stubs together. The downside is that multiple annotations are required and the stub mappings themselves are remote from the harness (at the harness you would know what stub is being used, but not what it is replacing). There are also several issues that would need to be resolved:

  • How do you mock multiple functions with the same stub? (Say harness A wants to use stub1 to mock foo, and harness B wants to use stub1 to mock bar.)
  • How do you combine stub sets defined via modules? Would you use the module hierarchy?
  • If you use modules to define stub sets, are these modules regular modules or not? In particular, given that modules can contain other constructs than functions, how should we interpret the extra stuff?

Alternative #4: Specify stubs in a file

One alternative would be to specify stubs in a file that is passed to kani-driver via a command line option. Users would specify per-harness stub pairings in the file; JSON would be a possible format. Using a file would eliminate the need for kani-compiler to do an extra pass to extract harness information from the Rust source code before doing code generation; the rest of the implementation would stay the same. It would also allow the same harness to be run with different stub selections (by supplying a different file). The disadvantage is that the stub selection is remote from the harness itself.

Rationale and alternatives: stubbing mechanism

Our approach is based on a MIR-to-MIR transformation. Some advantages are that it operates over a relatively simple intermediate representation and rustc has good support for plugging in MIR-to-MIR transformations, so it would not require any changes to rustc itself. At this stage of the compiler, names have been fully resolved, and there is no problem with swapping in the body of a function defined in one crate for a function defined in another. Another benefit is that it should be possible to extend the compiler to integrate --concrete-playback with the abstractions (although doing so is out of scope for the current proposal).

The major downside with the MIR-to-MIR transformation is that it does not appear to be possible to stub types at that stage (there is no way to change the definition of a type through the MIR). Thus, our proposed approach will not be a fully general stubbing solution. However, it is technically feasible and relatively clean, and provides benefits over having no stubbing at all (as can be seen in the examples in the first part of this document).

Furthermore, it could be used as part of a portfolio of stubbing approaches, where users stub local types using conditional compilation (see Alternative #1), and Kani provides a modified version of the standard library with verification-friendly versions of types like std::vec::Vec.

Alternative #1: Conditional compilation

In this baseline alternative, we do not provide any stubbing mechanism at all. Instead, users can effectively stub local code (functions, methods, and types) using conditional compilation. For example, they could use #[cfg(kani)] to turn off the original definition and turn on the replacement definition when Kani is running, similarly to the ghost state approach taken in the Tokio Bytes proof.

The disadvantage with this approach is that it does not provide any way to stub external code, which is one of the main motivations of our proposed approach.

Alternative #2: Source-to-source transformation

In this alternative, we rewrite the source code before it even gets to the compiler. The advantage with this approach is that it is very flexible, allowing us to stub functions, methods, and types, either by directly replacing them, or appending their replacements and injecting appropriate conditional compilation guards.

This approach entails less user effort than Alternative #1, but it has the same downside that it requires all source code to be available. It also might be difficult to inject code in a way that names are correctly resolved (e.g., if the replacement code comes from a different crate). Also, source code is difficult to work with (e.g., unexpanded macros).

On the last two points, we might be able to take advantage of an existing source analysis platform like rust-analyzer (which has facilities like structural search replace), but this would add more (potentially fragile) dependencies to Kani.

Alternative #3: AST-to-AST or HIR-to-HIR transformation

In this alternative, we implement stubbing by rewriting the AST or High-Level IR (HIR) of the program. The HIR is a more compiler-friendly version of the AST; it is what is used for type checking. To swap out a function, method, or type at this level, it looks like it would be necessary to add another pass to rustc that takes the initial AST/HIR and produces a new AST/HIR with the appropriate replacements.

The advantage with this approach is, like source transformations, it would be very flexible. The downside is that it would require modifying rustc (as far as we know, there is not an API for plugging in a new AST/HIR pass), and would also require performing the transformations at a very syntactic level: although the AST/HIR would likely be easier to work with than source code directly, it is still very close to the source code and not very abstract. Furthermore, provided we supported stubbing across crate boundaries, it seems like we would run into a sequencing issue: if we were trying to stub a function in a dependency, we might not know until after we have compiled that dependency that we need to modify its AST/HIR; furthermore, even if we were aware of this, the replacement AST/HIR code would not be available at that time (the AST/HIR is usually just constructed for the crate currently being compiled).

Open questions

  • Would there ever be the need to stub a particular monomorphization of a function, as opposed to the polymorphic function?
  • How can the user verify that the stub is an abstraction of the original function/method? Sometimes it might be important that a stub is an overapproximation or underapproximation of the replaced code. One possibility would be writing proofs about stubs (possibly relating their behavior to that of the code they are replacing).

Limitations

  • Our proposed approach will not work with --concrete-playback (for now).
  • We are only able to apply abstractions to some dependencies if the user enables the MIR linker.

Future possibilities

  • It would increase the utility of stubbing if we supported stubs for types. The source code annotations could likely stay the same, although the underlying technical approach performing these substitutions might be significantly more complex.

  • It would probably make sense to provide a library of common stubs for users, since many applications might want to stub the same functions and mock the same behaviors (e.g., rand::random can be replaced with a function returning kani::any).

  • We could provide special classes of stubs that are likely to come up in practice:

    • unreachable: assert the function is unreachable.
    • havoc_locals: return nondeterministic values and assign nondeterministic values to all mutable arguments.
    • havoc: similar to havoc_locals but also assign nondeterministic values to all mutable global variables.
    • uninterpret: treat function as an uninterpreted function.
  • How can we provide a good user experience for accessing private fields of self in methods? It is possible to do so using std::mem::transmute (see below); this is clunky and error-prone, and it would be good to provide better support for users.

    struct Foo {
        x: u32,
    }
    
    impl Foo {
        pub fn m(&self) -> u32 {
            0
        }
    }
    
    struct MockFoo {
        pub x: u32,
    }
    
    fn mock_m(foo: &Foo) -> u32 {
        let mock: &MockFoo = unsafe { std::mem::transmute(foo) };
        mock.x
    }
    
    #[cfg(kani)]
    #[kani::proof]
    #[kani::stub(Foo::m, mock_m)]
    fn my_harness() { ... }
    

Appendix: an extended example

In this example, we mock a serde_json (96M downloads) deserialization method so that we can prove a property about the following Firecracker function that parses a configuration from some raw data:

fn parse_put_vsock(body: &Body) -> Result<ParsedRequest, Error> {
    METRICS.put_api_requests.vsock_count.inc();
    let vsock_cfg = serde_json::from_slice::<VsockDeviceConfig>(body.raw()).map_err(|err| {
        METRICS.put_api_requests.vsock_fails.inc();
        err
    })?;

    // Check for the presence of deprecated `vsock_id` field.
    let mut deprecation_message = None;
    if vsock_cfg.vsock_id.is_some() {
        // vsock_id field in request is deprecated.
        METRICS.deprecated_api.deprecated_http_api_calls.inc();
        deprecation_message = Some("PUT /vsock: vsock_id field is deprecated.");
    }

    // Construct the `ParsedRequest` object.
    let mut parsed_req = ParsedRequest::new_sync(VmmAction::SetVsockDevice(vsock_cfg));
    // If `vsock_id` was present, set the deprecation message in `parsing_info`.
    if let Some(msg) = deprecation_message {
        parsed_req.parsing_info().append_deprecation_message(msg);
    }

    Ok(parsed_req)
}

We manually mocked some of the Firecracker types with simpler versions to reduce the number of dependencies we had to pull in (e.g., we removed some enum variants, unused struct fields). With these changes, we were able to prove that the configuration data has a vsock ID if and only if the parsing metadata includes a deprecation message:

#[cfg(kani)]
fn get_vsock_device_config(action: RequestAction) -> Option<VsockDeviceConfig> {
    match action {
        RequestAction::Sync(vmm_action) => match *vmm_action {
            VmmAction::SetVsockDevice(dev) => Some(dev),
            _ => None,
        },
        _ => None,
    }
}

#[cfg(kani)]
#[kani::proof]
#[kani::unwind(2)]
#[kani::stub(serde_json::from_slice, mock_deserialize)]
fn test_deprecation_vsock_id_consistent() {
    // We are going to mock the parsing of this body, so might as well use an empty one.
    let body: Vec<u8> = Vec::new();
    if let Ok(res) = parse_put_vsock(&Body::new(body)) {
        let (action, mut parsing_info) = res.into_parts();
        let config = get_vsock_device_config(action).unwrap();
        assert_eq!(
            config.vsock_id.is_some(),
            parsing_info.take_deprecation_message().is_some()
        );
    }
}

Crucially, we did this by stubbing out serde_json::from_slice and replacing it with our mock version below, which ignores its input and creates a "symbolic" configuration struct:

#[cfg(kani)]
fn symbolic_string(len: usize) -> String {
    let mut v: Vec<u8> = Vec::with_capacity(len);
    for _ in 0..len {
        v.push(kani::any());
    }
    unsafe { String::from_utf8_unchecked(v) }
}

#[cfg(kani)]
fn mock_deserialize(_data: &[u8]) -> serde_json::Result<VsockDeviceConfig> {
    const STR_LEN: usize = 1;
    let vsock_id = if kani::any() {
        None
    } else {
        Some(symbolic_string(STR_LEN))
    };
    let guest_cid = kani::any();
    let uds_path = symbolic_string(STR_LEN);
    let config = VsockDeviceConfig {
        vsock_id,
        guest_cid,
        uds_path,
    };
    Ok(config)
}

The proof takes 170 seconds to complete (using Kissat as the backend SAT solver for CBMC).


Summary

A new Kani API that allows users to check that a certain condition can occur at a specific location in the code.

User Impact

Users typically want to gain confidence that a proof checks what it is supposed to check, i.e. that properties are not passing vacuously due to an over-constrained environment.

A new Kani macro, cover, will be created that can be used for checking that a certain condition can occur at a specific location in the code. The purpose of the macro is to verify, for example, that assumptions are not ruling out those conditions, e.g.:

let mut v: Vec<i32> = Vec::new();
let len: usize = kani::any();
kani::assume(len < 5);
for _i in 0..len {
    v.push(kani::any());
}
// make sure we can get a vector of length 5
kani::cover!(v.len() == 5);

This is typically used to ensure that verified checks are not passing vacuously, e.g. due to overconstrained pre-conditions.

The special case of verifying that a certain line of code is reachable can be achieved using kani::cover!() (which is equivalent to kani::cover!(true)), e.g.

match x {
    val_1 => ...,
    val_2 => ...,
    ...
    val_i => kani::cover!(), // verify that `x` can take the value `val_i`
}

Similar to Rust's assert macro, a custom message can be specified, e.g.

kani::cover!(x > y, "x can be greater than y");

User Experience

The cover macro instructs Kani to find at least one possible execution that satisfies the specified condition at that line of code. If no such execution is possible, the check is reported as unsatisfiable.

Each cover statement will be reported as a check whose description is cover condition: cond and whose status is:

  • SATISFIED (green): if Kani found an execution that satisfies the condition.
  • UNSATISFIABLE (yellow): if Kani proved that the condition cannot be satisfied.
  • UNREACHABLE (yellow): if Kani proved that the cover statement itself cannot be reached.

For example, for the following cover statement:

kani::cover!(a == 0);

An example result is:

Check 2: main.cover.2
         - Status: SATISFIED
         - Description: "cover condition: a == 0"
         - Location: foo.rs:9:5 in function main
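By contrast, a cover condition that the harness's constraints rule out is reported as UNSATISFIABLE, e.g. (illustrative):

let x: u8 = kani::any();
kani::assume(x < 10);
kani::cover!(x == 255); // UNSATISFIABLE: the assumption excludes this value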

Impact on Overall Verification Status

By default, unsatisfiable and unreachable cover checks will not impact verification success or failure. This is to avoid getting verification failure for harnesses for which a cover check is not relevant. For example, on the following program, verification should not fail for another_harness_that_doesnt_call_foo because the cover statement in foo is unreachable from it.

#[kani::proof]
fn a_harness_that_calls_foo() {
    foo();
}

#[kani::proof]
fn another_harness_that_doesnt_call_foo() {
    // ...
}

fn foo() {
    kani::cover!( /* some condition */);
}

The --fail-uncoverable option will allow users to fail the verification if a cover property is unsatisfiable or unreachable. This option will be integrated within the framework of Global Conditions, which is used to define properties that depend on other properties.

Using the --fail-uncoverable option will enable the global condition with name fail_uncoverable. Following the format for global conditions, the outcome will be one of the following:

  1. - fail_uncoverable: FAILURE (expected all cover statements to be satisfied, but at least one was not)
  2. - fail_uncoverable: SUCCESS (all cover statements were satisfied as expected)

Note that the criteria to achieve a SUCCESS status depends on all properties of the "cover" class having a SATISFIED status. Otherwise, we return a FAILURE status.

Inclusion in the Verification Summary

Cover checks will be reported separately in the verification summary, e.g.

SUMMARY:
 ** 1 of 206 failed (2 unreachable)
 Failed Checks: assertion failed: x[0] == x[1]

 ** 30 of 35 cover statements satisfied (1 unreachable) <--- NEW

In this example, 5 of the 35 cover statements were found to be unsatisfiable, and one of those 5 is additionally unreachable.

Interaction with Other Checks

If one or more unwinding assertions fail or an unsupported construct is found to be reachable (which indicate an incomplete path exploration), and Kani found the condition to be unsatisfiable or unreachable, the result will be reported as UNDETERMINED.

Detailed Design

The implementation will touch the following components:

  • Kani library: The cover macro will be added there, along with a cover function with a rustc_diagnostic_item.
  • kani-compiler: The cover function will be handled via a hook and code-generated as two assertions (cover(cond) becomes __CPROVER_assert(false); __CPROVER_assert(!cond)). The purpose of the __CPROVER_assert(false) is to determine whether the cover statement is reachable. If it is, the second assertion, __CPROVER_assert(!cond), indicates whether the condition is satisfiable or not.
  • kani-driver: The CBMC output parser will extract cover properties through their property class, and their result will be set based on the result of the two assertions:
    • The first (reachability) assertion is proven: report as FAILURE (UNREACHABLE)
    • The first assertion fails, and the second one is proven: report as FAILURE to indicate that the condition is unsatisfiable
    • The first assertion fails, and the second one fails: report as SUCCESS to indicate that the condition is satisfiable

Rationale and alternatives

  • What are the pros and cons of this design? CBMC has its own cover API (__CPROVER_cover), for which SUCCESS is reported if an execution is found, and FAILURE is reported otherwise. However, using this API currently requires running CBMC in a separate "cover" mode. Having to run CBMC in that mode would complicate the Kani driver as it will have to perform two CBMC runs, and then combine their results into a single report. Thus, the advantage of the proposed design is that it keeps the Kani driver simple. In addition, the proposed solution does not depend on a feature in the backend, and thus will continue to work if we were to integrate a different backend.

  • What is the impact of not doing this? The current workaround to accomplish the same effect of verifying that a condition can be covered is to use assert!(!cond). However, if the condition can indeed be covered, verification would fail due to the failure of the assertion.

Open questions

  • ~Should we allow format arguments in the macro, e.g. kani::cover!(x > y, "{} can be greater than {}", x, y)? Users may expect this to be supported since the macro looks similar to the assert macro, but Kani doesn't include the formatted string in the message description, since it's not available at compile time.~
    • For now, this macro will not accept format arguments, since this is not well handled by Kani. This is an extension of this API that can easily be added later on if Kani ever supports runtime formatting.

Other Considerations

We need to make sure the concrete playback feature can be used with cover statements that were found to be coverable.

Future possibilities

The new cover API subsumes the current kani::expect_fail function. Once it's implemented, we should be able to get rid of expect_fail, and all the related code in compiletest that handles the EXPECTED FAILURE message in a special manner.


Summary

A new option that allows users to verify programs without unwinding loops by synthesizing loop contracts for those loops.

User Impact

Currently, Kani does not support verification of programs with unbounded control flow (e.g., loops with dynamic bounds). Kani unrolls all unbounded loops up to a global threshold (the unwinding number) specified by the user and then verifies the unrolled program, which limits the set of programs it can verify.

A new Kani flag --synthesize-loop-contracts will be created that can be used to enable the goto-level loop-contract synthesizer goto-synthesizer. The idea of loop contracts is that, instead of unwinding loops, we abstract them as non-loop structures that can cover arbitrary iterations of the loops. The loop contract synthesizer, when enabled, will attempt to synthesize loop contracts for all loops. CBMC can then apply the synthesized loop contracts and verify the program without unwinding any loop. So, the synthesizer will help to verify programs that would otherwise require Kani to unwind loops a very large number of times to cover all iterations.

For example, the number of executed iterations of the loop in the following harness is dynamically bounded by an unbounded variable y. Only an unwinding value of i32::MAX can guarantee to cover all iterations of the loop, and hence satisfy the unwinding assertions. However, unwinding the loop i32::MAX times would result in a goto program too large for CBMC to verify.

#[kani::proof]
fn main() {
    let mut y: i32 = kani::any_where(|i| *i > 0);

    while y > 0 {
        y = y - 1;
    }
    assert!(y == 0);
}

With the loop-contract synthesizer, Kani can synthesize the loop invariant y >= 0, with which it can prove the post-condition y == 0 without unwinding the loop.
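To see why y >= 0 works here, these are, roughly, the checks the synthesizer's verification step performs on the candidate invariant:

  1. It holds on loop entry: y > 0 implies y >= 0.
  2. It is inductive: if y >= 0 and the loop guard y > 0 hold, then after y = y - 1 we still have y >= 0.
  3. It is strong enough: if y >= 0 holds and the guard fails (y <= 0), then y == 0, so the assertion is proved.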

Also, loop contracts could improve Kani's verification time, since each loop will be abstracted to a single iteration, as opposed to being unwound a large number of times. For example, we can easily determine that the following loop is bounded by an unwinding value of 5000. Kani can verify the program in a few minutes by unwinding the loop 5000 times. With loop contracts, we only need to verify the single abstract iteration of the loop, which leads to a smaller query. As a result, Kani with the synthesizer can verify the program in a few seconds.

#[kani::proof]
#[kani::unwind(5000)]
fn main() {
    let mut y: i32 = 5000;

    while y > 0 {
        y = y - 1;
    }
    assert!(y == 0);
}

The goto-synthesizer is an enumeration-based synthesizer. It enumerates candidate invariants from a pre-designed search space described by a given regular tree grammar and verifies whether each candidate is an inductive invariant. Therefore, it has the following limitations:

  1. the search space is not complete, so it may fail to find a working candidate. The current search space consists of only conjunctions of linear inequalities built from the variables in the loop, which is not expressive enough to capture all loop invariants. For example, the loop invariant a[i] == 0 contains an array access and cannot be captured by the current search space. However, we can easily extend the search space to include more complex expressions, at the cost of an exponential increase in the running time of the synthesizer.
  2. the synthesizer suffers from the same limitations as loop-contract verification in CBMC. For example, it does not support unbounded quantifiers or dynamic allocations in the loop body.

User Experience

Users will be able to use the new command-line flag --synthesize-loop-contracts to run the synthesizer, which will attempt to synthesize loop contracts and verify the program with the synthesized loop contracts.

Limit Resource Used by Synthesizer for Termination

Without a resource limit, an enumerative synthesizer may run forever trying to exhaust a search space with an infinite number of candidates, especially when no solution exists in the space. So, to guarantee termination, we provide the option --limit-synthesis-time T, which limits the running time of the synthesizer to at most T seconds.
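For example, a hypothetical invocation that gives the synthesizer at most 60 seconds:

kani test.rs --synthesize-loop-contracts --limit-synthesis-time 60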

Output of Kani when the Synthesizer is Enabled

When the flag --synthesize-loop-contracts is provided, Kani will report different results for the following cases:

  1. When there exists some loop invariant in the candidate space with which all assertions can be proved, Kani will synthesize the loop contracts, verify the program with the synthesized loop contracts, and report verification SUCCESS;
  2. When no working candidate has been found in the search space within the specified limits, Kani will report the verification result with the best-effort-synthesized loop contracts. Note that, as loop contracts are over-approximations of the loop, the violated assertions in this case may be spurious. So we will report the violated assertions as UNDETERMINED instead of FAILED.

The question of how to print the synthesized loop contracts upon user request is discussed in Open questions.

Detailed Design

The synthesizer goto-synthesizer is implemented in the CBMC repository; it takes a goto binary as input and outputs a new goto binary with the synthesized loop contracts applied. Currently, Kani invokes goto-instrument to instrument the goto binary main.goto into a new goto binary main_instrumented.goto, and then invokes cbmc on main_instrumented.goto to get the verification result. The synthesis will happen between calling goto-instrument and calling cbmc. That is, we invoke goto-synthesizer on main_instrumented.goto to produce a new goto binary main_synthesized.goto, and then call cbmc on main_synthesized.goto instead.
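The resulting pipeline looks roughly as follows, where <flags> stands for whatever options each tool receives (a sketch; exact invocations may differ):

goto-instrument <flags> main.goto main_instrumented.goto
goto-synthesizer <flags> main_instrumented.goto main_synthesized.goto
cbmc <flags> main_synthesized.goto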

When invoking goto-synthesizer, we pass the following parameters to it using its built-in flags:

  • the resource limit of the synthesis;
  • the solver options to specify what SAT solver we use to verify invariant candidates.

The enumerator used in the synthesizer enumerates candidates from the language of the following grammar template.

NT_Bool -> NT_Bool && NT_Bool | NT_int == NT_int 
            | NT_int <= NT_int | NT_int < NT_int 
            | SAME_OBJECT(terminals_ptr, terminals_ptr)
            
NT_int  -> NT_int + NT_int | terminals_int | LOOP_ENTRY(terminals_int)
            | POINTER_OFFSET(terminals_ptr) | OBJECT_SIZE(terminals_ptr)
            | POINTER_OFFSET(LOOP_ENTRY(terminals_ptr)) | 1

where terminals_ptr are all pointer variables in the scope, and terminals_int are all integer variables in the scope. For every candidate invariant, goto-synthesizer applies it to the GOTO program and runs CBMC to verify the program.

  • If all checks in the program pass, goto-synthesizer returns the candidate as a solution.
  • If the inductive checks pass but some of the other checks fail, the candidate invariant is inductive. We keep it as an inductive invariant clause.
  • If the inductive checks fail, we discard the candidate. When the resource limit is reached, goto-synthesizer returns the conjunction of all inductive clauses as the best-effort-synthesized loop contracts.

We use the following example to illustrate how the synthesizer works.

#[kani::proof]
fn main() {
    let mut y: i32 = kani::any_where(|i| *i > 0);

    while y > 0 {
        y = y - 1;
    }
    assert!(y == 0);
}

As there is only one variable y in the scope, the grammar template above will be instantiated to the following grammar:

NT_Bool -> NT_Bool && NT_Bool | NT_int == NT_int 
            | NT_int <= NT_int | NT_int < NT_int 
NT_int  -> NT_int + NT_int | y | LOOP_ENTRY(y) | 1

The synthesizer will enumerate candidates derived from NT_Bool in the following order.

y == y
y == LOOP_ENTRY(y)
y == 1
...
1 <= y + 1
...

The synthesizer then verifies with CBMC if the candidate is an inductive invariant that can be used to prove the post-condition y == 0. For example, the candidate y == y is verified to be an inductive invariant, but cannot be used to prove the post-condition y == 0. The candidate y == 1 is not inductive. The synthesizer rejects all candidates until it finds the candidate 1 <= y + 1, which can be simplified to y >= 0. y >= 0 is an inductive invariant that can be used to prove the post-condition. So the synthesizer will return y >= 0 and apply it to the goto model to get main_synthesized.goto.
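The following toy program mimics this enumerate-and-check loop for the harness above, replacing CBMC with brute force over a small integer range (illustrative only; the real synthesizer works on goto programs):

fn main() {
    let candidates: [(&str, fn(i64) -> bool); 3] = [
        ("y == 1", |y| y == 1),         // rejected: not inductive
        ("y == y", |y| y == y),         // inductive, but too weak for the post-condition
        ("1 <= y + 1", |y| 1 <= y + 1), // inductive and proves y == 0
    ];
    for (name, inv) in candidates {
        // The invariant holds on loop entry (the harness guarantees y > 0).
        let base = (1..100).all(|y| inv(y));
        // The invariant is preserved by one iteration of `y = y - 1`.
        let step = (-100..100).all(|y| !(inv(y) && y > 0) || inv(y - 1));
        // The invariant plus the negated loop guard implies the post-condition.
        let post = (-100..100).all(|y| !(inv(y) && y <= 0) || y == 0);
        println!("{name}: inductive = {}, proves post = {}", base && step, post);
    }
}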

For assigns clauses, the synthesizer will first use alias analysis to determine an initial set of assigns targets. During the following iterations, if any assignable-check is violated, the synthesizer will extract the assigns target from the violated check.

Then Kani will call cbmc on main_synthesized.goto to verify the program with the synthesized loop contracts.

Rationale and alternatives

  • Different candidate space. The candidate grammar introduced above only contains a restricted set of operators, which works well for array-manipulating programs with only the pointer-checks instrumented by goto-instrument, but is probably not enough for other user-written checks. We may want to include array indexing, pointer dereference, or other arithmetic operators in the candidate grammar to synthesize a larger set of loop invariants. However, there is a trade-off between the size of the candidate space we enumerate and the running time of the enumeration. We will collect more data to decide what operators we should include in the candidate grammar. Once we settle on additional candidate grammars, we will provide users options to choose which grammar to use.

Open questions

How does the synthesizer work with unwinding numbers? There may exist some loops for which the synthesizer cannot find loop contracts, but some small unwinding numbers are enough to cover all executions of the loops. In this case, we may want to unwind some loops in the program while synthesizing loop contracts for other loops. It requires us to have a way to identify and specify which loops we want to unwind.

In C programs, we identify loops by the loop ID, which is a pair (function name, loop number). However, in Rust programs, loops are usually in library functions such as Iterator::for_each, and a library function may be called from different places in the program. We may want to unwind the loop in some calls but not in others.
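For illustration (do_work is a hypothetical helper), both calls below reach the same loop inside Iterator::for_each, yet we may want to unwind it for the first call and synthesize a contract for the second:

fn harness() {
    (0..10).for_each(|i| do_work(i)); // small bound: unwinding is cheap
    (0..1_000_000).for_each(|i| do_work(i)); // huge bound: a contract is preferable
}

fn do_work(_i: i32) {}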

How do we output the synthesized loop contracts? To better earn users' trust, we want to report which loop contracts were synthesized and used to verify the given programs. Currently, goto-synthesizer can dump the synthesized loop contracts into a JSON file. Here is an example of the dumped loop contracts. It contains the locations of the loops' source files, and the synthesized invariant clauses and assigns clauses for loops identified by loop numbers.

{
    "sources": [ "/Users/qinhh/Repos/playground/kani/synthesis/base_2/test.rs" ],
    "functions": [
      {
        "main": [ "loop 1 invariant y >= 0", 
                  "loop 1 assigns var_9,var_10,var_11,x,y,var_12" ]
      }
    ],
    "output": "stdout"
}

There are three challenges here if we want to also dump synthesized loop contracts in Kani.

  1. We need to have a consistent way to identify loops.
  2. We need to dump loop invariants in Rust instead of C.
  3. There are many auxiliary variables we added in Kani-compiled GOTO, such as var_9, var_10, var_11, and var_12 in the above JSON file. We need to translate them back to the original variables they represent.

Future possibilities

User-provided loop contracts. If we have a good answer for how to identify loops and dump synthesized loop contracts, we could probably also allow users to provide the loop contracts they wrote to Kani, and verify programs with user-provided loop contracts.

When users want to unwind some loops, we could also introduce macros to enable/disable unwinding for certain blocks of code.

#[kani::proof]
#[kani::unwind(10)]
fn check() {
    // unwinding starts as enabled, so all loops in this block will be unwound up to 10 times
    #[kani::disable_unwinding]
    // unwinding is disabled for all loops in this block of code
    #[kani::enable_unwinding]
    // it is enabled in this block of code until the end of the program
}

Invariant caching. The loop invariant could be broken when users modify their code. However, we could probably cache previously working loop invariants and attempt to reuse them when users modify their code. Even if the cached loop invariants are not enough to prove the post-condition, they could still be used as a starting point for the synthesizer to find new loop invariants.

1. We say an integer variable is unbounded if there is no other bound on its value besides the width of its bit-vector representation.


Summary

Users may want to express that a verification harness should panic. This RFC proposes a new harness attribute #[kani::should_panic] that informs Kani about this expectation.

User Impact

Users may want to express that a verification harness should panic. In general, a user adding such a harness wants to demonstrate that the verification fails because a panic is reachable from the harness.

Let's refer to this concept as negative verification, so the relation with negative testing becomes clearer. Negative testing can be exercised in Rust unit tests using the #[should_panic] attribute. If the #[should_panic] attribute is added to a test, cargo test will check that the execution of the test results in a panic. This capability doesn't exist in Kani at the moment, but it would be useful for the same reasons (e.g., to show that invalid inputs result in verification failures, or increase the overall verification coverage).

We propose an attribute that allows users to exercise negative verification in Kani.

We also acknowledge that, in other cases, users may want to express more granular expectations for their harnesses. For example, a user may want to specify that a given check is unreachable from the harness. An ergonomic mechanism for informing Kani about such expectations is likely to require other improvements in Kani (a comprehensive classification for checks reported by Kani, a language to describe expectations for checks and cover statements, and general output improvements). Moving forward, we consider that such a mechanism and this proposal solve different problems, so they don't need to be discussed together. This is further discussed in the rationale and alternatives and future possibilities sections.

User Experience

The scope of this functionality is limited to the overall verification result. The rationale section discusses the granularity of failures, and how this attribute could be extended.

Single Harness

Let's look at this code:

struct Device {
    is_init: bool,
}

impl Device {
    fn new() -> Self {
        Device { is_init: false }
    }

    fn init(&mut self) {
        assert!(!self.is_init);
        self.is_init = true;
    }
}

#[kani::proof]
fn cannot_init_device_twice() {
    let mut device = Device::new();
    device.init();
    device.init();
}

This is what a negative harness may look like. The user wants to verify that calling device.init() more than once should result in a panic.

NOTE: We could convert this into a Rust unit test and add the #[should_panic] attribute to it. However, there are a few good reasons to have a verification-specific attribute that does the same:

  1. To ensure that other unexpected behaviors don't occur (e.g., overflows).
  2. Because #[should_panic] cannot be used if the test harness contains calls to Kani's API.
  3. To ensure that a panic still occurs after stubbing out code which is expected to panic.

Currently, this example produces a VERIFICATION:- FAILED result. In addition, it will return a non-successful exit code.

#[kani::proof]
#[kani::should_panic]
fn cannot_init_device_twice() {
    let mut device = Device::new();
    device.init();
    device.init();
}

Since we added #[kani::should_panic], running this example would produce a successful exit code.

Now, we've considered two ways to represent this result in the verification output. Note that it's important that we provide the user with this feedback:

  1. (Expectation) Was Kani expecting the harness to panic?
  2. (Outcome) What's the actual result that Kani produced after the analysis? This avoids a potential scenario where the user doesn't know for sure whether the attribute had an effect when verifying the harness.

Therefore, the representation must make clear both the expectation and the outcome. Below, we show how we'll represent this result.

The #[kani::should_panic] attribute essentially behaves as a property that depends on other properties. This makes it well-suited for integration within the framework of Global Conditions.

Using the #[kani::should_panic] attribute will enable the global condition with name should_panic. Following the format for global conditions, the outcome will be one of the following:

  1. - `should_panic`: FAILURE (encountered no panics, but at least one was expected) if there were no failures.
  2. - `should_panic`: FAILURE (encountered failures other than panics, which were unexpected) if there were failures but not all of them had prop.property_class() == "assertion".
  3. - `should_panic`: SUCCESS (encountered one or more panics as expected) otherwise.

Note that the criterion for achieving a SUCCESS status depends on all failed properties having the property class "assertion". If they don't, then the failed properties may contain UB, so we return a FAILURE status instead.
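A sketch of this outcome computation in Rust (type and function names are hypothetical; the actual logic lives in kani-driver):

enum Status { Success, Failure }

struct Check { property_class: &'static str, failed: bool }

fn should_panic_outcome(checks: &[Check]) -> Status {
    let failures: Vec<&Check> = checks.iter().filter(|c| c.failed).collect();
    if failures.is_empty() {
        Status::Failure // encountered no panics, but at least one was expected
    } else if failures.iter().all(|c| c.property_class == "assertion") {
        Status::Success // encountered one or more panics as expected
    } else {
        Status::Failure // failures other than panics, which may indicate UB
    }
}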

Multiple Harnesses

When there are multiple harnesses, we'll implement the single-harness changes in addition to the following ones. Currently, a "Summary" section appears (see footnote 1) after reporting the results for each harness:

Verification failed for - harness3
Verification failed for - harness2
Verification failed for - harness1
Complete - 0 successfully verified harnesses, 3 failures, 3 total.

Harnesses marked with #[kani::should_panic] won't show in this section unless the expected result differs from the actual result. The summary will count harnesses that match their expectation as "successfully verified harnesses".

Therefore, if we added #[kani::should_panic] to all harnesses in the previous example, we'd see this output:

Complete - 3 successfully verified harnesses, 0 failures, 3 total.

Multiple panics

In a verification context, an execution can branch into multiple executions that depend on a condition. This may result in a situation where different panics are reachable, as in this example:

#[kani::proof]
#[kani::should_panic]
fn branch_panics() {
    let b: bool = kani::any();

    do_something();

    if b {
        call_panic_1(); // leads to a panic-related failure
    } else {
        call_panic_2(); // leads to a different panic-related failure
    }
}

Note that we could safeguard against these situations by checking that only one panic-related failure is reachable. However, users have expressed that a coarse version (i.e., checking that at least one panic can be reached) is preferred. Users also anticipate that #[kani::should_panic] will be used to exercise smoke testing in many cases. Additionally, restricting #[kani::should_panic] to the verification of single panic-related failures could be confusing for users and reduce its overall usefulness.

Availability

This feature will only be available as an attribute. That means this feature won't be available as a CLI option (i.e., --should-panic). There are good reasons to avoid the CLI option:

  • It'd make the design and implementation unnecessarily complex.
  • It'd only be useful when combined with --harness to filter negative harnesses.
  • We could have trouble extending its functionality (see Future possibilities for more details).

Pedagogy

The #[kani::should_panic] attribute will become one of the most basic attributes in Kani. As such, it'll be mentioned in the tutorial and added to the dedicated section planned in #2208.

In general, we'll also advise against negative verification when a harness can be written both as a regular (positive) harness and a negative one. The feature, as presented in this proposal, won't allow checking that the panic failure is due to the specific panic we expected. So there could be cases where the panic changes, but it goes unnoticed while running Kani. Because of that, it'll be preferred that users write positive harnesses instead.

Detailed Design

At a high level, we expect modifications in the following components:

  • kani-compiler: Changes required to (1) process the new attribute, and (2) extend HarnessMetadata with a should_panic: bool field.
  • kani-driver: Changes required to (1) edit information about harnesses printed by kani-driver, (2) edit verification output when post-processing CBMC verification results, and (3) return the appropriate exit status after post-processing CBMC verification results.

We don't expect these changes to require new dependencies. Besides, we don't expect this design to change unless we decide to extend the attribute with further fields (see Future possibilities for more details).

Rationale and alternatives

This proposal would enable users to exercise negative verification with a relatively simple mechanism. Not adding such a mechanism could impact Kani's usability by limiting the harnesses that users can write.

Alternative #1: Generic failures

This proposal doesn't consider generic failures but only panics. In principle, it's not clear that a mechanism for generic failures would be useful. Such a mechanism would allow users to expect UB in their harness, but there isn't a clear motivation for doing that.

Alternative #2: Name

We have considered two alternatives for the "expectation" part of the attribute's name: should and expect. We avoid expect altogether for the following reasons:

  • We may consider adding the expected argument to #[kani::should_panic].
  • We may consider a more granular approach to indicate expectations regarding individual checks and cover statements in the future. One possible name for the attribute is #[kani::expect].
  • We heavily use this word for testing in Kani: there is an expected mode, which works with *.expected files. Other modes also use such files.

Alternative #3: The expected argument

We could consider an expected argument, similar to the #[should_panic] attribute. To be clear, the #[should_panic] attribute may receive an argument expected which allows users to specify the expected panic string:

    #[test]
    #[should_panic(expected = "Divide result is zero")]
    fn test_specific_panic() {
        divide_non_zero_result(1, 10);
    }

In principle, we anticipate that we'll extend this proposal to include the expected argument at some point. The implementation could compare the expected string against the panic string.

At present, the only technical limitation is that panic strings printed in Kani aren't formatted. One option is to use substrings to compare. However, the long-term solution is to use concrete playback to replay the panic and match against the expected panic string. By doing this, we would achieve feature parity with Rust's #[should_panic].

Alternative #4: Granularity

As mentioned earlier, users may want to express more granular expectations for their harnesses.

There could be problems with this proposal if we attempt to do both:

  • What if users don't want to only check for failures (e.g., reachability)?
  • In the previous case, would they expect the overall verification to fail or not?
  • How do we want these expectations to be declared?

We don't have sufficient data about the use-case considered in this alternative. This proposal can also contribute to collecting this data: once users can expect panics, they may want to expect other things.

Alternative #5: Kani API

This functionality could be part of the Kani API instead of being an attribute. For example, some contributors proposed a function that takes a predicate closure to filter executions and check that they result in a panic.

However, such a function couldn't be used in external code, limiting its usability to the user's code.

Open questions

Once the feature is available, it'd be good to gather user feedback to answer these questions:

  • Do we need a mechanism to express more granular expectations?
  • If we need such a mechanism, do we really want to collapse them into one feature?

Resolved questions

  • What is the best representation to use for this feature? A representation that changes the overall result seems to be preferred, according to feedback we received during a discussion.
  • Do we want to extend #[kani::should_panic] with an expected field? Yes, but not in this version.
  • Do we want to allow multiple panic-related failures with #[kani::should_panic]? Yes (this is now discussed in User Experience).

Future possibilities

  • The attribute could be an argument to kani::proof (#[kani::proof(should_panic)] reads very well).
  • Add an expected argument to #[kani::should_panic], and replay the harness with concrete playback to get the actual panic string.
2. Double negation may not be the best representation, but it's at least accurate with respect to the original result.

1. This summary is printed in both the default and terse outputs.


Summary

Provide a standard option for users to enable experimental APIs and features in Kani, and ensure that those APIs are off by default.

User Impact

Add an opt-in model for users to try experimental APIs. The goal is to enable users to try features that aren't stable yet, which allows us to get valuable feedback during the development of new features and APIs.

The opt-in model empowers the users to control when some instability is acceptable, which makes Kani UX more consistent and safe.

Currently, each new unstable feature introduces a new switch. Some of them look like --enable-<feature>, while others are plain switches that allow further feature configuration, such as --<feature-config>=[value]. For example, today we have the following unstable switches: --enable-stubbing, --concrete-playback, --gen-c. In all cases, users are still required to provide the additional --enable-unstable option. Some unstable features are included in the --help section, and only a few mention the requirement to include --enable-unstable. There is no way to list all unstable features. The transition to stable switches is also ad-hoc.

In order to reduce friction, we will also standardize how users opt in to any Kani unstable feature. We will use syntax similar to that used by the Rust compiler and Cargo. As part of this work, we will also deprecate and remove the --enable-unstable option.

Note that although Kani is still on v0, which means that everything is somewhat unstable, this allows us to set different bars for what kind of changes are expected, as well as what kind of support we will provide for a feature.

User Experience

Users will have to invoke Kani with:

-Z <feature_identifier>

in order to enable any unstable feature in Kani, including unstable APIs in the Kani library. For unstable command line options, we will add -Z unstable-options, similar to the Rust compiler. E.g.:

-Z unstable-options --concrete-playback=print

Users will also be able to enable unstable features in their Cargo.toml, in the unstable table under the kani table. E.g.:

[package.metadata.kani.unstable]
unstable-options = true

[workspace.metadata.kani]
flags = { concrete-playback = true }
unstable = { unstable-options = true }

To mark an API as unstable, we will add the following attribute to it:

#[kani::unstable(feature="<IDENTIFIER>", issue="<TRACKING_ISSUE_NUMBER>", reason="<DESCRIPTION>")]
pub fn unstable_api() {}

This is similar to the interface used by the standard library.

If the user tries to use an unstable feature in Kani without explicitly enabling it, Kani will trigger an error. For unstable APIs, the error will be triggered during the crate compilation.

Detailed Design

We will add the -Z option to both kani-driver and kani-compiler. Kani driver will pass the information to the compiler.

For unstable APIs, the compiler will check if any reachable function uses an unstable feature that was not enabled. If that is the case, the compiler will trigger a compilation error.

We will also change the compiler to only generate code for harnesses that match the harness filter. The filter is already passed to the compiler, but it is currently only used for stubbing.

API Stabilization

Once an API has been stabilized, we will remove the unstable attributes from the given API. If the user tries to enable a feature that was already stabilized, Kani will print a warning stating that the feature has been stabilized.

API Removal

If we decide to remove an API that is marked as unstable, we should follow a regular deprecation path (using the #[deprecated] attribute), and keep the unstable flag + attributes until we are ready to remove the feature completely.
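A sketch of that removal path, reusing the placeholders from the attribute shown earlier (values are illustrative):

#[deprecated(note = "this API will be removed; see the tracking issue")]
#[kani::unstable(feature = "<IDENTIFIER>", issue = "<TRACKING_ISSUE_NUMBER>", reason = "<DESCRIPTION>")]
pub fn unstable_api() {}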

Rationale and Alternatives

For this RFC, the suggestion is to only enable experimental features globally for simplicity of use and implementation.

For now, we will trigger a compilation error if an unstable API is reachable from a user crate unless the user opts in to the unstable feature.

We could allow users to specify experimental features on a per-harness basis, but it could be tricky to make it clear to the user which harness may be affected by which feature. The extra granularity would also be painful when we decide a feature is no longer experimental, whether it is stabilized or removed. In those cases, users would have to edit each harness that enables the affected feature.

Open questions

  • Should we also add a stable attribute that documents when an API was stabilized?

Future possibilities

  • Delay the error due to the usage of an unstable API, and only fail at runtime if the API is reachable.
  • Allow users to enable unstable features on a per-harness basis.

Summary

A new section in Kani's output to summarize the status of properties that depend on other properties. We use the term global conditions to refer to such properties.

User Impact

The addition of new options that affect the overall verification result depending on certain property attributes demands some consideration. In particular, the addition of a new option to fail verification if there are uncoverable (i.e., unsatisfiable or unreachable) cover properties (requested in #2299) is posing new challenges to our current architecture and UI.

This concept isn't made explicit in Kani, but it already exists in some ways. For example, the kani::should_panic attribute is a global condition because it can be described in terms of other properties (checks). The request in #2299 is essentially another global condition, and we may expect more to be requested in the future.

In this RFC, we propose a new section in Kani's output focused on reporting global conditions. The goal is for users to receive useful information about hyperproperties without it becoming overwhelming. This will help users better understand the options that are enabled through global conditions and ease the addition of such options to Kani.

User Experience

The output will refer to properties that depend on other properties as "global conditions", which is a simpler term. The options to enable different global conditions will be decided on a case-by-case basis (see footnote 1).

The main UI change in this proposal is a new GLOBAL CONDITIONS section that won't be printed if no global conditions have been enabled. This section will only appear in Kani's default output after the RESULTS section (used for individual checks) and have the format:

GLOBAL CONDITIONS:
 - `<name>`: <status> (<reason>)
 - `<name>`: <status> (<reason>)
 [...]

where:

  • <name> is the name given to the global condition.
  • <status> is the status determined for the global condition.
  • <reason> is an explanation that depends on the status of the global condition.

For example, let's assume we implement the option requested in #2299. A concrete example of this output would be:

GLOBAL CONDITIONS:
 - `fail_uncoverable`: SUCCESS (all cover statements were satisfied as expected)

A FAILED status in any enabled global condition will cause verification to fail. In that case, the overall verification result will point out that one or more global conditions failed, as in:

VERIFICATION:- FAILURE (one or more global conditions failed)

This last UI change will also be implemented for the terse output. Finally, checks that cause an enabled global condition to fail will be reported using the same interface we use for failed checks (see footnote 2).

Global conditions which aren't enabled won't appear in the GLOBAL CONDITIONS section. Their status will be computed regardless (see footnote 3), and we may consider showing this status when the --verbose option is passed.

The documentation of global conditions will depend on how they're enabled, which is decided on a case-by-case basis. However, we may consider adding a new Global conditions subsection to the Reference section that collects all of them, so it's easier for users to consult them in one place.

Detailed Design

The only component to be modified is kani-driver since that's where verification results are built and determined. But we should consider moving this logic into another crate.

We don't need new dependencies. The corner cases will depend on the specific global conditions to be implemented.

Rationale and alternatives

As mentioned earlier, we're proposing this change to help users understand global conditions and how they're determined. In many cases, global conditions empower users to write harnesses which weren't possible to write before. As an example, the #[kani::should_panic] attribute allowed users to write harnesses expecting panic-related failures.

Also, we don't really know if more global conditions will be requested in the future. We may consider discarding this proposal and waiting for the next feature that can be implemented as a global condition to be requested.

Alternative: Global conditions as regular checks

One option we've considered in the past is to report global conditions as regular checks. While it's technically doable, it doesn't feel appropriate for global conditions to be reported through regular checks, since a higher degree of visibility is generally desirable.

Open questions

No open questions.

Future possibilities

A redesign of Kani's output is likely to change the style/architecture to report global conditions.

3. The results for global conditions would be computed during postprocessing based on the results of other checks. Global conditions' checks aren't part of the SAT query, so this computation won't impact verification time.

2. We do not discuss the specific interface used to report the failed checks because it needs improvements (for both global conditions and standard verification). In particular, the description field is the only information printed for properties (such as cover statements) without trace locations. There are additional improvements we should consider: printing the actual status (for global conditions, this won't always be FAILED), avoiding the repetition of Failed Checks:, etc. This comment discusses problems with the current interface on some examples.

1. In other words, global conditions won't force a specific mechanism to be enabled. For example, if the #[kani::should_panic] attribute is converted into a global condition, it will continue to be enabled through the attribute itself. Other global conditions may be enabled through CLI flags only (e.g., --fail-uncoverable), or a combination of options in general.


Summary

Add verification-based line coverage reports to Kani.

User Impact

Nowadays, users can't easily obtain verification-based coverage reports in Kani. Generally speaking, coverage reports show which parts of the code under verification are covered and which are not. Because of that, coverage is often seen as a great metric to determine the quality of a verification effort.

Moreover, some users prefer using coverage information for harness development and debugging. That's because coverage information provides users with a more familiar way to interpret verification results.

This RFC proposes adding a new option for verification-based line coverage reports to Kani. As mentioned earlier, we expect users to employ this coverage-related option at several stages of a verification effort:

  • Learning: New users are more familiar with coverage reports than property-based results.
  • Development: Some users prefer coverage results to property-based results since they are easier to interpret.
  • CI Integration: Users may want to enforce a minimum percentage of code coverage for new contributions.
  • Debugging: Users may find coverage reports particularly helpful when inputs are over-constrained (missing some corner cases).
  • Evaluation: Users can easily evaluate where and when more verification work is needed (some projects aim for 100% coverage).

Moreover, adding this option directly to Kani, instead of relying on other tools, is likely to:

  1. Increase the speed of development
  2. Improve testing for coverage features

Which translates into faster and more reliable coverage options for users.

User Experience

The goal is for Kani to generate a code coverage report per harness in a well-established format, such as LCOV, and possibly a summary in the output. For now, we will focus on an interim solution that will enable us to assess the results of our instrumentation and enable integration with the Kani VS Code extension.

High-level changes

For the first version, this experimental feature will report verification results alongside coverage reports. Because of that, we'll add a new section Coverage results that shows coverage results for each individual harness.

In the following, we describe an experimental output format. Note that the final output format and overall UX is to be determined.

Experimental output format for coverage results

The Coverage results section for each harness will produce coverage information in a CSV format as follows:

<file>, <line>, <status>

where <status> is either FULL, PARTIAL or NONE.
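For example, a harness covering three lines of a file src/lib.rs (a made-up path and made-up line numbers) might produce:

src/lib.rs, 10, FULL
src/lib.rs, 11, PARTIAL
src/lib.rs, 12, NONE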

As mentioned, this format is designed for evaluating the native instrumentation-based design and is likely to be substituted with another well-established format as soon as possible.

Users are not expected to consume this output directly. Instead, coverage data is to be consumed by the Kani VS Code extension and displayed as in the VS Code Extension prototype.

How to activate and display coverage information in the extension is out of scope for this RFC. That said, a proof-of-concept implementation is available here.

Detailed Design

Architecture

We will add a new unstable --coverage verification option to Kani, which will require -Z line-coverage until this feature is stabilized. We will also add a new --coverage-checks option to kani-compiler, which will result in the injection of coverage checks before each Rust statement and terminator (see footnote 1). This option will be supplied by kani-driver when the --coverage option is selected. These options will cause Kani to inject coverage checks during compilation and postprocess them to produce the coverage results sections described earlier.

Coverage Checks

Coverage checks are a new class of checks similar to cover checks. The main difference is that users cannot directly interact with coverage checks (i.e., they cannot add or remove them manually). Coverage checks are encoded as an assert(false) statement (to test reachability) with a fixed description. In addition, coverage checks are:

  • Hidden from verification results.
  • Postprocessed to produce coverage results.

In the following, we describe the injection and postprocessing procedures to generate coverage results.

Injection of Coverage Checks

The injection of coverage checks will be done while generating code for basic blocks. This allows us to add one coverage check before each statement and terminator, which provides the most accurate results (see footnote 1). It's not completely clear how this compares to the coverage instrumentation done in the Rust compiler, but an exploration of the compiler APIs revealed that they're quite similar (see footnote 2).

Postprocessing Coverage Checks

The injection of coverage checks often results in one or more checks per line (assuming a well-formatted program). We'll postprocess these checks so that, for each line:

  • if all checks are SATISFIED: return FULL
  • if all checks are UNSATISFIED: return NONE
  • otherwise: return PARTIAL

We won't report coverage status for lines which don't include a coverage check.
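A sketch of this per-line aggregation in Rust (type and function names are hypothetical; the actual postprocessing lives in kani-driver):

#[derive(PartialEq)]
enum CheckResult { Satisfied, Unsatisfied }

enum LineStatus { Full, Partial, None }

fn line_status(checks: &[CheckResult]) -> LineStatus {
    // Callers skip lines without any coverage check; those are not reported.
    if checks.iter().all(|c| *c == CheckResult::Satisfied) {
        LineStatus::Full
    } else if checks.iter().all(|c| *c == CheckResult::Unsatisfied) {
        LineStatus::None
    } else {
        LineStatus::Partial
    }
}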

Rationale and alternatives

Benefits from a native coverage solution

Kani has relied on cbmc-viewer to report coverage information since the beginning. In essence, cbmc-viewer consumes data from coverage-focused invocations of CBMC and produces an HTML report containing (1) coverage information and (2) counterexample traces. Recently, there have been some issues with the coverage information reported by cbmc-viewer (e.g., #2048 or #1707), forcing us to mark the --visualize option as unstable and disable coverage results in the reports (in #2206).

However, it's possible for Kani to report coverage information without cbmc-viewer, as explained before. This would give Kani control over both ends:

  • The instrumentation performed on the program. Eventually, this would allow us to report more precise coverage information (maybe similar to Rust's instrument-based code coverage).
  • The format of the coverage report to be generated. Similarly, this would allow us to generate coverage data in different formats (see #1706 for GCOV, or #1777 for LCOV). While technically this is also doable from cbmc-viewer's output, development is likely to be faster this way.

Coverage through cbmc-viewer

As an alternative, we could fix and use cbmc-viewer to report line coverage.

Most of the issues with cbmc-viewer are generally due to:

  1. Missing locations due to non-propagation of locations in either Kani or CBMC.
  2. Differences in the definition of a basic block in CBMC and Rust's MIR.
  3. Scarce documentation for coverage-related options (i.e., --cover <option>) in CBMC.
  4. Limited testing with Rust code in cbmc-viewer.

Note that (1) is not exclusive to coverage results from cbmc-viewer. Finding checks with missing locations and propagating them if possible (as suggested in this comment) should be done regardless of the approach used for line coverage reports.

In contrast, (2) and (3) can be considered the main obstacles for Kani contributors developing coverage options on top of cbmc-viewer and CBMC. It's not clear how much effort this would involve, but (3) is likely to require substantial documentation contributions. And (4) shouldn't be an issue if we decided to invest in cbmc-viewer.

Finally, the following downside must be considered: cbmc-viewer can report line coverage but the path to report region-based coverage may involve a complete rewrite.

Other output formats

One of the long-term goals for this feature is to provide a UX that is familiar for users. This is particularly relevant when talking about output formats. Some services and frameworks working with certain coverage output formats have become quite popular.

However, this version doesn't consider common output formats (i.e., GCOV or LCOV) since coverage results will only be consumed by the Kani VS Code Extension at first. But other output formats will be considered in the future.

Open questions

Open questions:

  • Do we want to report line coverage as COVERED/UNCOVERED or FULL/PARTIAL/NONE?
  • Should we report coverage results and verification results or not? Doing both is likely to result in worse performance. We have to perform an experimental evaluation with hard benchmarks.
  • Should we instrument dependencies or not? Doing so is likely to result in worse performance. We have to perform an experimental evaluation.
  • What should be the final UX for this feature? For instance, we could print a coverage summary and generate a report file per harness. But it's not clear if individual results are relevant to users, so another possibility is to automatically combine results.
  • What's the most appropriate and well-established output format we can emit?
  • Determine if there are cases in which coverage information is confusing for users (due to, e.g., constant propagation or other compiler optimizations). How can we work around such cases?
  • Do we want to report coverage information for dependencies? For CI, most users may be only interested in their code. Most coverage frameworks have an aggregation tool with an option to exclude dependencies from coverage metrics.

Feedback to gather before stabilization:

Future possibilities

We expect many incremental improvements in the coverage area:

  1. Consuming the output produced in coverage results from the Kani VS Code extension.
  2. Building a tool that produces coverage results by combining the coverage results of more than one harness.
  3. Including span information in coverage checks and building region-based coverage reports.
  4. Adding new user-requested coverage formats such as GCOV #1706 or LCOV #1777.
1. We have experimented with different options for injecting coverage checks. For example, we have tried injecting one before each basic block, or one before each statement, etc. The proposed option (one before each statement AND each terminator) gives us the most accurate results.

2. In particular, comments in CoverageSpan and generate_coverage_spans hint that the initial set of spans comes from Statements and Terminators. This comment goes into detail about the attempt to use the compiler APIs.

  • Feature Name: Function Contracts
  • Feature Request Issue: #2652 and Milestone
  • RFC PR: #2620
  • Status: Unstable
  • Version: 1
  • Proof-of-concept: features/contracts
  • Feature Gate: -Zfunction-contracts, enforced by compile-time error (see footnote 1)

Summary

Function contracts are a means to specify and check function behavior. On top of that, the specification can then be used as a sound (see footnote 2) abstraction to replace the concrete implementation, similar to stubbing.

This allows for modular verification.

User Impact

Function contracts provide an interface for a verified, sound (see footnote 2) function abstraction. This is similar to stubbing, but with verification of the abstraction instead of blind trust. This allows for modular verification, which paves the way for the following two ambitious goals.

  • Scalability: A function contract is an abstraction (sound overapproximation) of a function's behavior. After verifying the contract against its implementation we can subsequently use the (cheaper) abstraction instead of the concrete implementation when analyzing its callers. Verification is thus modularized and even cacheable.
  • Unbounded Verification: Contracts enable inductive reasoning for recursive functions where the first call is checked against the contract and recursive calls are stubbed out using the abstraction.

Function contracts are completely optional with no user impact if unused. This RFC proposes the addition of new attributes, and functions, that shouldn't interfere with existing functionalities.

User Experience

A function contract specifies the behavior of a function as a predicate that can be checked against the function implementation and also used as an abstraction of the implementation at the call sites.

The lifecycle of a contract is split into three phases: specification, verification, and call abstraction, which we will explore using this example:

fn my_div(dividend: u32, divisor: u32) -> u32 {
  dividend / divisor
}
  1. In the first phase we specify the contract. Kani provides two new annotations: requires (preconditions) to describe the expectations this function has of the calling context, and ensures (postconditions) to approximate function outputs in terms of function inputs.

    #[kani::requires(divisor != 0)]
    #[kani::ensures(result <= dividend)]
    fn my_div(dividend: u32, divisor: u32) -> u32 {
      dividend / divisor
    }
    

    requires here indicates this function expects its divisor input to never be 0, or it will not execute correctly (for instance panic or cause undefined behavior).

    ensures puts a bound on the output, relative to the dividend input.

    Conditions in contracts are Rust expressions which reference the function arguments and, in the case of ensures, the return value of the function. The return value is a special variable called result (see open questions for a discussion about (re)naming). Syntactically, Kani supports any Rust expression, including function calls, type definitions, etc. However, they must be side-effect free (see also side effects here) or Kani will throw a compile error.

    Multiple requires and ensures clauses are allowed on the same function; they are implicitly logically conjoined.

  2. Next, Kani ensures that the function implementation respects all the conditions specified in its contract.

    To perform this check Kani needs a suitable harness to verify the function in. The harness is mainly responsible for providing the function arguments, but it must also set up a valid heap that pointers may refer to and properly initialize static variables.

    Kani requires us, as users, to provide this harness; this is a limitation of this proposal. See also future possibilities for a discussion of the arising soundness issues and their remedies.

    Harnesses for checking contracts are defined with the proof_for_contract(TARGET) attribute, which references TARGET, the function whose contract is to be checked.

    #[kani::proof_for_contract(my_div)]
    fn my_div_harness() {
      my_div(kani::any(), kani::any())
    }
    

    Similar to a verification harness for any other function, we are supposed to create all possible input combinations the function can encounter and then call the function at least once with those abstract inputs. If we forget to call my_div, Kani reports an error. Unlike other harnesses, we only need to create suitable data structures; we don't need to add any checks, as Kani will use the conditions we specified in the contract.

    Kani inserts preconditions (requires) as kani::assume before the call to my_div, limiting inputs to those the function is actually defined for. It inserts postconditions (ensures) as kani::assert checks after the call to my_div, enforcing the contract.

    The expanded version of our harness that Kani generates looks roughly like this:

    #[kani::proof]
    fn my_div_harness() {
      let dividend = kani::any();
      let divisor = kani::any();
      kani::assume(divisor != 0); // requires
      let result = my_div(dividend, divisor);
      kani::assert(result <= dividend); // ensures
    }
    

    Kani verifies the expanded harness like any other harness, giving the green light for the next step: call abstraction.

  3. In the last phase the verified contract is ready for us to use to abstract the function at its call sites.

    Kani requires that there has to be at least one associated proof_for_contract harness for each abstracted function, otherwise an error is thrown. In addition, by default, it requires all proof_for_contract harnesses to pass verification before attempting verification of any harnesses that use the contract as a stub.

    A possible harness that uses our my_div contract could be the following:

    #[kani::proof]
    #[kani::stub_verified(my_div)]
    fn use_div() {
      let v = vec![...];
      let some_idx = my_div(v.len() - 1, 3);
      v[some_idx];
    }
    

    At a call site where the contract is used as an abstraction Kani kani::asserts the preconditions (requires) and produces a nondeterministic value (kani::any) which satisfies the postconditions.

    Mutable memory is similarly made non-deterministic, discussed later in havocking.

    An expanded stubbing of my_div looks like this:

    fn my_div_stub(dividend: u32, divisor: u32) -> u32 {
      kani::assert(divisor != 0); // pre-condition
      kani::any_where(|result| { /* post-condition */ result <= dividend })
    }
    

    Notice that this performs no actual computation for my_div (other than the conditions) which allows us to avoid something potentially costly.

Also notice that Kani was able to express both contract checking and abstracting with existing capabilities; the important feature is the enforcement. The checking is, by construction, performed against the same condition that is later used as the abstraction, which ensures soundness (see the discussion on lingering threats to soundness in the future section) and guards against abstractions diverging from their checks.

Write Sets and Havocking

Functions can have side effects on data reachable through mutable references or pointers. To overapproximate all such modifications a function could apply to pointed-to data, the verifier "havocs" those regions, essentially replacing their content with non-deterministic values.

Let us consider a simple example of a pop method.

impl<T> Vec<T> {
  fn pop(&mut self) -> Option<T> {
    ...
  }
}

This function can, in theory, modify any memory behind &mut self, so this is what Kani will assume it does by default. It infers the "write set", that is, the set of memory locations a function may modify, from the types of the function arguments. As a result, any data pointed to by a mutable reference or pointer is considered part of the write set (see footnote 3). In addition, a static analysis of the source code discovers any static mut variables the function or its dependencies reference and adds all pointed-to data to the write set as well.

During havocking the verifier replaces all locations in the write set with non-deterministic values. Kani emits a set of automatically generated postconditions which encode the expectations of the Rust type system and assumes them for the havocked locations to ensure they are valid. This encompasses both the limits on what values are acceptable for a given type (such as char or the possible values of an enum discriminant) and lifetime constraints.
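For intuition, havocking a typed location behaves like assigning it a kani::any() value, which only produces values that are valid for the type (a Rust-level sketch; the actual havoc happens at the goto level):

fn havoc_char(c: &mut char) {
    // kani::any::<char>() never yields an invalid bit pattern for char,
    // mirroring the type-system postconditions assumed for havocked data.
    *c = kani::any();
}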

While the inferred write set is sound and sufficient for successful contract checking (see footnote 4), in many cases this inference is too coarse-grained. In the case of pop, every value in the vector will be made non-deterministic.

To address this, the proposal also adds modifies and frees clauses which limit the scope of havocking. Both clauses represent an assertion that the function will modify only the specified memory regions. Similar to requires/ensures, the verifier enforces the assertion in the checking stage to ensure soundness. When the contract is used as an abstraction, the modifies clause is used as the write set to havoc.

In our pop example, the only modified memory location is the last element, and only if the vector was not already empty, which would be specified as follows.

impl<T> Vec<T> {
  #[modifies(if !self.is_empty() => (*self).buf.ptr.pointer.pointer[self.len])]
  #[modifies(if self.is_empty())]
  fn pop(&mut self) -> Option<T> {
    ...
  }
}

The #[modifies(when = CONDITION, targets = { MODIFIES_RANGE, ... })] clause consists of a CONDITION and zero or more comma-separated MODIFIES_RANGEs, each of which is essentially a place expression.

Place expressions describe a position in the abstract program memory. You may think of them as what goes to the left of an assignment. They are composed of the name of one function argument (or static variable) and zero or more projections (dereference *, field access .x and slice indexing [1] (see footnote 5)).

If no when is provided, the condition defaults to true, meaning the modifies ranges apply to all invocations of the function. If targets is omitted, it defaults to {}, i.e. an empty set of targets, meaning that under this condition the function modifies no mutable memory.

Because place expressions are restricted to using projections only, Kani must break Rust's pub/no-pub encapsulation here (see footnote 6). If need be, we can reference fields that are usually hidden, without an error from the compiler.

In addition to a place expression, a MODIFIES_RANGE can also be terminated with more complex slice expressions as the last projection. This only applies to *mut pointers to arrays. For instance, this is needed for Vec::truncate, where the entire latter section of the allocation is assigned (dropped).

impl<T> Vec<T> {
  #[modifies(self.buf.ptr.pointer.pointer[len..])]
  fn truncate(&mut self, len: usize) {
    ...
  }
}

[..] denotes the entirety of an allocation; [i..], [..j] and [i..j] are ranges of pointer offsets (see footnote 5). The slice indices are offsets scaled by the size of T, i.e. in Rust p[i..j] would be equivalent to std::slice::from_raw_parts(p.offset(i), j - i). i must be less than or equal to j.
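As a worked example of this equivalence (with illustrative values), i = 1 and j = 3 cover j - i = 2 elements starting at offset 1:

fn main() {
    let xs: [u32; 4] = [10, 20, 30, 40];
    let p = xs.as_ptr();
    let (i, j) = (1, 3);
    // Corresponds to the place expression p[1..3] in a MODIFIES_RANGE.
    let slice = unsafe { std::slice::from_raw_parts(p.add(i), j - i) };
    assert_eq!(slice, &[20, 30]);
}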

A #[frees(when = CONDITION, targets = { PLACE, ... })] clause works similarly to modifies, but denotes memory that is deallocated. Like modifies, it applies only to pointers, but unlike modifies it does not admit slice syntax, only place expressions, because the whole allocation has to be freed.

History Expressions

Kani's contract language contains additional support for reasoning about changes to mutable memory. One case where this is necessary is whenever ensures needs to refer to state before the function call. By default, variables in the ensures clause are interpreted in the post-call state, whereas history expressions are interpreted in the pre-call state.

Returning to our pop function from before, we may wish to describe in which cases the result is Some. However, that depends on whether self is empty before pop is called. To do this, Kani provides the old(EXPR) pseudo function (see this section for a discussion on naming), which evaluates EXPR before the call (e.g. to pop) and makes the result available to ensures. It is used like so:

impl<T> Vec<T> {
  #[kani::ensures(old(self.is_empty()) || result.is_some())]
  fn pop(&mut self) -> Option<T> {
    ...
  }
}

old allows evaluating any Rust expression in the pre-call context, so long as it is free of side effects. See also this explanation. The borrow checker enforces that the mutations performed by e.g. pop cannot be observed by the history expression, as that would defeat the purpose. If you wish to return borrowed content from old, make a copy instead (using e.g. clone()).

Note also that old is syntax, not a function, and is implemented as an extraction and lifting during code generation. It can reference e.g. pop's arguments, but not local variables. Compare the following:

Invalid ❌: #[kani::ensures({ let x = self.is_empty(); old(x) } || result.is_some())]
Valid ✅: #[kani::ensures(old({ let x = self.is_empty(); x }) || result.is_some())]

And it will only be recognized as old(...), not as let old1 = old; old1(...) etc.

Workflow and Attribute Constraints Overview

  1. By default, kani or cargo kani first verifies all contract harnesses (proof_for_contract) reachable from the file or present in the local workspace, respectively.
  2. Each contract (from the local crate; see footnote 7) that is used in a stub_verified is required to have at least one associated contract harness. Kani reports any missing contract harnesses as errors.
  3. Kani verifies all regular harnesses if their stub_verified contracts passed steps 1 and 2.

When specific harnesses are selected (with --harness) contracts are not verified.

Kani reports a compile time error if any of the following constraints are violated:

  • A function may have any number of requires, ensures, modifies and frees attributes. Any function with at least one such annotation is considered as "having a contract".

    Harnesses (general or for contract checking) may not have any such annotation.

  • A harness may have up to one proof_for_contract(TARGET) annotation where TARGET must "have a contract". One or more proof_for_contract harnesses may have the same TARGET.

    A proof_for_contract harness may use any harness attributes, including stub and stub_verified, though the TARGET may not appear in either.

  • Kani checks that TARGET is reachable from the proof_for_contract harness, but it does not warn if abstracted functions use TARGET8.

  • A proof_for_contract harness may not also carry the kani::proof attribute (it is already implied by proof_for_contract).

  • A harness may have multiple stub_verified(TARGET) attributes. Each TARGET must "have a contract". No TARGET may appear twice. Each local TARGET is expected to have at least one associated proof_for_contract harness which passes verification, see also the discussion on when to check contracts in open questions.

  • Harnesses may combine stub(S_TARGET, ..) and stub_verified(V_TARGET) annotations, though no target may occur in S_TARGETs and V_TARGETs simultaneously.

  • For mutually recursive functions using stub_verified, Kani will check their contracts in non-deterministic order and assume each time the respective other check succeeded.
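
To illustrate these constraints, here is a hedged sketch of a contract, a harness that checks it, and a regular harness that relies on the verified contract (all names are illustrative):

#[kani::requires(divisor != 0)]
#[kani::ensures(result <= dividend)]
fn my_div(dividend: u32, divisor: u32) -> u32 {
  dividend / divisor
}

// Checks the contract of `my_div`; `kani::proof` is implied.
#[kani::proof_for_contract(my_div)]
fn check_my_div() {
  my_div(kani::any(), kani::any());
}

// A regular harness that abstracts `my_div` by its verified contract.
#[kani::proof]
#[kani::stub_verified(my_div)]
fn harness_with_my_div() {
  let _quotient = my_div(kani::any(), 2);
}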

Detailed Design

Kani implements the functionality of function contracts in three places.

  1. Code generation in the requires and ensures macros (kani_macros).
  2. GOTO level contracts using CBMC's contract language generated in kani-compiler for modifies clauses.
  3. Dependencies and ordering among harnesses in kani-driver to enforce contract checking before replacement. Also plumbing between compiler and driver for enforcement of assigns clauses.

Code generation in kani_macros

The requires and ensures macros perform code generation directly during macro expansion, creating a check and a replace function which use assert and assume as described in the user experience section. Both are attached to the function they check or replace via the kanitool::checked_with and kanitool::replaced_with attributes respectively. See also the discussion about why we decided to generate check and replace functions like this.

The code generation in the macros is straightforward, save two aspects: old and the borrow checker.

The special old builtin function is implemented as an AST rewrite. Consider the below example:

impl<T> Vec<T> {
  #[kani::ensures(self.is_empty() || self.len() == old(self.len()) - 1)]
  fn pop(&mut self) -> Option<T> {
    ...
  }
}

The ensures macro performs an AST rewrite consisting of an extraction of the expressions in old and a replacement with a fresh local variable, creating the following:

impl<T> Vec<T> {
  fn check_pop(&mut self) -> Option<T> {
    let old_1 = self.len();
    let result = Self::pop(self);
    kani::assert(self.is_empty() || self.len() == old_1 - 1);
    result
  }
}

Nested invocations of old are prohibited (Kani throws an error) and the expression inside may only refer to the function arguments and not other local variables in the contract (Rust will report those variables as not being in scope).

The borrow checker also ensures for us that none of the temporary variables borrow in a way that would be able to observe the modification in pop, which would occur for instance if the user wrote old(self). Instead of borrowing, copies should be created (e.g. old(self.clone())). This is only enforced for safe Rust though.

The second part relevant for the implementation is how we deal with the borrow checker for postconditions. They reference the arguments of the function after the call, which is problematic if part of an input is borrowed mutably into the return value. For instance the Vec::split_at_mut function does this, and a sensible contract for it might look as follows:

impl<T> Vec<T> {
  #[kani::ensures(self.len() == result.0.len() + result.1.len())]
  fn split_at_mut(&mut self, i: usize) -> (&mut [T], &mut [T]) {
    ...
  }
}

This contract refers simultaneously to self and the result. Since the method however borrows self mutably, it would no longer be accessible in the postcondition. To work around this we strategically break the borrowing rules using a new hidden builtin kani::unchecked_deref with the signature fn <T>(&T) -> T, which is essentially a C-style dereference operation. Breaking the borrow checker like this is safe for two reasons:

  1. Postconditions are not allowed to perform mutation, and
  2. Postconditions are of type bool, meaning they cannot leak references to the arguments and cause the race conditions the Rust type system tries to prevent.

The "copies" of arguments created by unsafe_deref are stored as fresh local variables and their occurrence in the postcondition is renamed. In addition a mem::forget is emitted for each copy to avoid a double free.

Recursion

Kani verifies contracts for recursive functions inductively. Reentry of the function is detected with a function-specific static variable. Upon detecting reentry we use the replacement of the contract instead of the function body.

Kani generates an additional wrapper around the function to add this detection. The wrapper exists so that we can place the modifies contract on check_pop and replace_pop instead of recursion_wrapper; this prevents CBMC from triggering its own recursion induction, which would skip our replacement checks.

#[checked_with = "recursion_wrapper"]
#[replaced_with = "replace_pop"]
fn pop(&mut self) { ... }

fn check_pop(&mut self) { ... }

fn replace_pop(&mut self) { ... }

fn recursion_wrapper(&mut self) {
  static mut IS_ENTERED: bool = false;

  if unsafe { IS_ENTERED } {
    replace_pop(self)
  } else {
    unsafe { IS_ENTERED = true; }
    let result = check_pop(self);
    unsafe { IS_ENTERED = false; }
    result
  }
}

Note that this is insufficient to verify all types of recursive functions, as the contract specification language has no support for inductive lemmas (such as ACSL's "inductive predicates", section 2.6.3). Inductive lemmas are usually needed for recursive data structures.

Changes to Other Components

Contract enforcement and replacement (kani::proof_for_contract(f), kani::stub_verified(f)) both dispatch to the stubbing logic, stubbing f with the generated check and replace function respectively. If f has no contract, Kani throws an error.

For write sets Kani relies on CBMC. modifies clauses (whether derived from types or from explicit clauses) are emitted from the compiler as GOTO contracts in the artifact. The driver then invokes goto-instrument with the names of the GOTO-level functions to enforce or replace the memory contracts. The compiler communicates these names via harness metadata.

Code used in contracts is required to be side effect free, which means it must not perform I/O, mutate memory (&mut vars and such) or (de)allocate heap memory. This is enforced in two layers. First, with an MIR traversal over all code reachable from a contract expression; an error is thrown if known side-effecting actions are performed, such as ptr::write, malloc, free, or calls to functions which we cannot check (e.g. extern "C"), with the exception of known side effect free functions in e.g. the standard library.
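
For example, a contract along these lines would be rejected by that traversal (a hedged sketch of ours):

// Rejected: the contract expression performs I/O.
#[kani::ensures({ println!("checking"); result > 0 })]
fn one() -> u32 { 1 }

// Accepted: the expression is free of side effects.
#[kani::ensures(result > 0)]
fn two() -> u32 { 2 }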

Rationale and alternatives

Kani-side implementation vs CBMC

Instead of generating check and replace functions in Kani, we could use the contract instrumentation provided by CBMC.

We tried this earlier but came up short because it is difficult to implement while supporting arbitrary Rust syntax. We exported the conditions into functions so that Rust would do the parsing/type checking/lowering for us and then called the lowered function in the CBMC contract.

The trouble is that CBMC's old is only supported directly in the contract, not in functions called from the contract. This means we either need to inline the contract function body, which is brittle in the presence of control flow, or we must extract the old expressions, evaluate them in the contract directly and pass the results to the check function. However this means we must restrict the expressions in old, because we now need to lower them by hand, and even if we could let rustc do it, CBMC's old has no support for function calls in its argument expression.

Expanding all contract macros at the same time

Instead of expanding contract macros one at a time and layering the checks, we could expand all subsequent ones together with the outermost one in one go.

This is however brittle with respect to renaming. If a user writes use kani::requires as my_requires and then adds multiple #[my_requires(condition)] annotations, the macro would not collect them properly, since it can only match syntactically and does not know about the use; neither can we restrict this kind of use or warn the user. By contrast, the collection with kanitool::checked_with is safe, because that attribute is generated by our macro itself, so we can rely on the fact that it uses the canonical representation.

Generating nested functions instead of siblings

Instead of generating the check and replace functions as siblings to the contracted function we could nest them like so

fn my_div(dividend: u32, divisor: u32) -> u32 {
  fn my_div_check_5e3713(dividend: u32, divisor: u32) -> u32 {
    ...
  }
  ...
}

This could be beneficial if we want to allow contracts on trait impl items, in which case generating sibling functions is not allowed. On the other hand it makes contracts on trait definitions harder to implement, because there is no body available in which to nest the functions. Ultimately we may need both mechanisms to support both cases.

What is required to make this work is an additional pass over the condition that replaces every self with a fresh identifier, which then becomes the first argument of the nested function. In addition there are open questions as to how to resolve the nested names inside the compiler.
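
A hedged sketch of what that rewrite might produce (the fresh name self_0 and the suffix are illustrative):

impl<T> Vec<T> {
  fn pop(&mut self) -> Option<T> {
    // Every `self` in the contract has been replaced by `self_0`,
    // which is now the first argument of the nested check function.
    fn pop_check_5e3713<T>(self_0: &mut Vec<T>) -> Option<T> {
      ...
    }
    ...
  }
}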

Explicit command line checking/substitution vs attributes

Instead of adding the new special proof_for_contract attribute we could instead have done one of the following:

  1. Check contracts on the command line like CBMC does. This makes contract checking a separate kani invocation with something like a --check-contract flag that directs the system to instrument the function. This is a very flexible design, but also easily used incorrectly. Nothing in the source indicates which harnesses are supposed to be used for which contract, users must remember to invoke the check, and they are also responsible for ensuring they really do verify all contracts they will later be replacing.
  2. Check contracts with a #[kani::proof] harness. This would have used e.g. a #[kani::for_contract] attribute on a #[kani::proof]. Since #[kani::for_contract] is only valid on a proof, we decided to just imply it and save the user some headache. Contract checking harnesses are not meant to be reused for other purposes anyway, and if the user really wants to, they can just factor out the actual contents of the harness to reuse it.

Polymorphism during contract checking

A current limitation with how contracts are enforced means that if the target of a proof_for_contract is polymorphic, only one monomorphization is permitted to occur in the harness. This does not limit the target to a single occurrence, but to a single instantiation of its generic parameters.

This is because we rely on CBMC for enforcing the modifies contract. At the GOTO level all monomorphized instances are distinct functions and CBMC only allows checking one function contract at a time, hence this restriction.

User supplied harnesses

We make the user supply the harnesses for checking contracts. This is our major source of unsoundness if corner cases are not adequately covered. Having Kani generate the harnesses automatically is a non-trivial task (because heaps are hard) and will be the subject of future improvements.

In limited cases we could generate harnesses, for instance if only bounded types (integers, booleans, enums, tuples, structs, references and their combinations) were used. We could restrict the use of contracts to cases where only such types are involved in the function inputs and outputs, however this would drastically limit the applicability, as even simple heap data structures such as Vec, String and even &[T] and &str (slices) would be out of scope. These data structures however are ubiquitous and users can avoid the unsoundness with relative confidence by overprovisioning (generating inputs that are several times larger than what they expect the function will touch).

Open questions

  • Returning kani::any() in a replacement isn't great, because it wouldn't work for references, as they can't have an Arbitrary implementation. Plus the soundness then relies on a correct implementation of Arbitrary. Instead it may be better to allow the user to specify type invariants, which can then be used to generate correct values in replacement but also be checked as part of the contract checking.
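
    A hedged sketch of what such an invariant might look like (the Invariant trait here is illustrative, not an existing Kani API):

    trait Invariant {
        fn is_valid(&self) -> bool;
    }

    struct Percent(u8);

    impl Invariant for Percent {
        // Replacement could draw only values satisfying this predicate,
        // and contract checking could assert it on returned values.
        fn is_valid(&self) -> bool {
            self.0 <= 100
        }
    }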

  • Making result special. Should we use special syntax here, like @result or kani::result()? With the latter I worry that people may get confused, because it is syntactic and not subject to the usual use renaming and import semantics. Alternatively we could let the user pick the name with an additional argument to ensures, e.g. ensures(my_result_var, CONDITION)

    Similar concerns apply to old, which may be more appropriate to be special syntax, e.g. @old.

    See #2597

  • How to check the right contracts at the right time. By default kani and cargo kani check all contracts in a crate/workspace. This represents the safest option for the user but may be too costly in some cases.

    The user should be provided with options to disable contract checking for the sake of efficiency. Such options may look like this:

    • By default (kani/cargo kani) all local contracts are checked, harnesses are only checked if the contracts they depend on succeeded their check.
    • With harness selection (--harness) only those contracts which the selected harnesses depend on are checked.
    • For high assurance passing a --paranoid flag also checks contracts for dependencies (other crates) when they are used in abstractions.
    • Per harness the users can disable the checking for specific contracts via attribute, like #[stub_verified(TARGET, trusted)] or #[stub_unverified(TARGET)]. This also plays nicely with cfg_attr.
    • On the command line users can similarly disable contract checks by passing (multiple times) --trusted TARGET to skip checking those contracts.
    • The bold (or naïve) user can skip all contracts with --all-trusted.
    • For the lawyer that is only interested in checking contracts and nothing else a --litigate flag checks only contract harnesses.

    Aside: I'm obviously having some fun here with the names, happy to change, it's really just about the semantics.

  • Can old accidentally break scope? The old function cannot reference local variables. For instance #[ensures({let x = ...; old(x)})] cannot work as an AST rewrite because the expression in old is lifted out of its context into one where the only bound variables are the function arguments (see also history expressions). In most cases this will be a compiler error complaining that x is unbound; however, if there is also a function argument named x, the code generation may silently succeed and then confusingly fail verification. For instance #[ensures({ let x = 1; old(x) == x })] on a function that has an argument named x would not hold.

    To handle this correctly we would need an extra check that detects if old references local variables. That would also enable us to provide a better error message than the default "cannot find value x in this scope".

  • Can panicking be expected behavior? Usually preconditions are used to rule out panics but it is conceivable that a user would want to specify that a function panics under certain conditions. Specifying this would require an extension to the current interface.

  • UB checking. With unsafe Rust it is possible to break the type system guarantees in Rust without causing immediate errors. Contracts must be cognizant of this and enforce the guarantees as part of the contract, or require users to explicitly defer such checks to use sites. The latter case requires dedicated support because the potential UB must be reflected in the havoc.

  • modifies clauses over patterns. Modifies clauses mention values bound in the function header and as a user I would expect that if I use a pattern in the function header then I can use the names bound in that pattern as base variables in the modifies clause. However modifies clauses are implemented as assigns clauses in CBMC which does not have a notion of function header patterns. Thus it is necessary to project any modifies ranges deeper by the fields used in the matched pattern.
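
    For example (a hedged sketch of ours), a user might expect to write:

    struct Pair { fst: u32, snd: u32 }

    // `fst` is bound by the header pattern; Kani would have to project
    // the clause onto the corresponding field of the actual argument.
    #[modifies(fst)]
    fn set_fst(Pair { fst, .. }: &mut Pair, v: u32) {
        *fst = v;
    }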

Future possibilities

  • Quantifiers: Quantifiers are like logic-level loops and a powerful reasoning helper. CBMC has support for both exists and forall, but the code generation is difficult. The most ergonomic and easy way to implement quantifiers on the Rust side is as higher-order functions taking Fn(T) -> bool, where T is some arbitrary type that can be quantified over. This interface is familiar to developers, but the code generation is tricky, as CBMC level quantifiers only allow certain kinds of expressions. This necessitates a rewrite of the Fn closure to a compliant expression.
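
    A hedged sketch of such a higher-order interface (kani::forall as written here is hypothetical, not an existing API):

    // Hypothetical: the closure would be rewritten into a CBMC-level quantifier.
    #[kani::ensures(kani::forall(|i: usize| i >= v.len() || v[i] <= result))]
    fn max_element(v: &[u32]) -> u32 {
      ...
    }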

  • Letting the user supply the harnesses for checking contracts is a source of unsoundness, if corner cases are not adequately covered. Ideally Kani would generate the check harness automatically, but this is difficult both because heap data structures are potentially infinite, and also because it must observe user-level type invariants.

    A complete solution for this is not known to us but there are ongoing investigations into harness generation mechanisms in CBMC.

    Non-inductive function inputs could be generated from the type alone, as safe Rust's type constraints describe a finite space.

    For dealing with pointers one applicable mechanism could be memory predicates to declaratively describe the state of the heap both before and after the function call.

    In CBMC's implementation memory predicates are part of the pre/postconditions. This does not easily translate to Kani, since we handle pre/postconditions manually and mainly in proc-macros. There are multiple ways to bridge this gap, perhaps the easiest being to add memory predicates separately to Kani instead of as part of pre/postconditions, so they can be handled by forwarding them to CBMC. However this is also tricky, because memory predicates describe pointers and pointers only, meaning that if a pointer is encapsulated in a structure (such as Vec or RefCell) there is no way of specifying the target of the predicate without breaking encapsulation (similar to modifies). In addition there are limitations on the pointer predicates in CBMC itself; for instance they cannot be combined with quantifiers.

    A better solution would be for the data structure to declare its own invariants at definition site which are automatically swapped in on every contract that uses this type.

  • What about mutable trait inputs (with respect to memory access patterns), e.g. a &mut impl AccessMe?

  • Trait contracts: Our proposal could be extended easily to handle simple trait contracts. The macros would generate new trait methods with default implementations, similar to the functions they generate today. Using sealed types we can prevent the user from overriding the generated contract methods. Contracts for the trait and contracts on its impls are combined by abstracting the original method depending on context: the occurrence inside the contract generated from the trait method is replaced by the impl contract; any other occurrence is replaced by the just-altered trait method contract.
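
    A hedged sketch of the surface syntax this might take (hypothetical, not part of the proposal):

    trait Shape {
        // The macro would generate a defaulted, sealed contract-checking
        // method for `area`, which implementations inherit.
        #[kani::requires(self.width() > 0)]
        #[kani::ensures(result > 0)]
        fn area(&self) -> u32;

        fn width(&self) -> u32;
    }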

  • Cross Session Verification Caching: This proposal focuses on scalability benefits within a single verification session, but those verification results could be cached across sessions and speed up verification for large projects using contracts in the future.

  • Inductive Reasoning: Describing recursive functions can require that the contract also recurse, describing a fixpoint logic. This is needed for instance for linked data structures like linked lists or trees. Consider for instance a reachability predicate for a linked list:

    struct LL<T> { head: T, next: *const LL<T> }
    
    fn reachable<T: PartialEq>(list: &LL<T>, t: &T) -> bool {
        list.head == *t
        || unsafe { list.next.as_ref() }.map_or(false, |p| reachable(p, t))
    }
    }
    
    
  • Compositional Contracts: The proposal in this document lacks a comprehensive handling of type parameters. Contract checking harnesses require monomorphization. However this means the contract is only checked against a finite number of instantiations of any type parameter (at most as many as contract checking harnesses were defined). There is nothing preventing the user from using different instantiations of the function's type parameters.

    A function (f()) can only interact with its type parameters P through the traits (T) they are constrained over. We can require T to carry contracts on each method T::m(). During checking we can use a synthetic type that abstracts T::m() with its contract. This way we check f() against T's contract. When we later abstract f(), we can ensure that any instantiation of P has passed verification of the contract of T::m(). This makes the substitution safe even if the particular type has not been used in a checking harness.

    For higher order functions this gets a bit more tricky, as closures are ad-hoc defined types. Here the contract for the closure could be attached to f() and then checked for each closure that may be provided. However this does not work as long as the user has to provide the harnesses, as they cannot recreate the closure type.


1

Enforced gates mean that any use of the constructs (functions, annotations, macros) introduced in this RFC is an error unless the corresponding -Z feature flag is passed.

2

The main remaining threat to soundness in the use of contracts, as defined in this proposal, is the reliance on user-supplied harnesses for contract checking (explained in item 2 of user experience). A more thorough discussion on the dangers and potential remedies can be found in the future section.

3

For inductively defined types the write set inference will only add the first "layer" to the write set. If you wish to modify deeper layers of a recursive type an explicit modifies clause is required.

4

While inferred memory footprints are sound for both safe and unsafe Rust, certain features in unsafe Rust (e.g. RefCell) are inferred incorrectly and will lead to a failing contract check.

5

Slice indices can be place expressions referencing function arguments, constants and integer arithmetic expressions. Take for example this Vec method (places simplified vs. actual implementation in std): fn truncate(&mut self, len: usize). A relatively precise contract for this method can be achieved with slice indices like so: #[modifies(self.buf[len..self.len], self.len)]

6

Breaking the pub encapsulation has unfortunate side effects because it means the contract depends on non-public elements, which are not expected to be stable and can change drastically even in minor versions. For instance, suppose your project depends on crate a, which in turn depends on crate b, and a::foo has a contract that takes as input a pointer data structure b::Bar. Then a::foo's assigns contract must reference internal fields of b::Bar. Say your project depends on the replacement of a::foo: if b changes the internal representation of Bar in a minor version update, cargo could bump your version of b, breaking the contract of a::foo (it now crashes because it e.g. references non-existent fields). You cannot easily update the contract for a::foo, since it is in a third-party crate; in fact even the author of a could not properly update to the new contract, since their old version specification would still admit the new, broken version of b. They would have to yank the old version and explicitly pin the exact minor version of b, which defeats the whole purpose of semantic versioning.

7

Contracts for functions from external crates (crates from outside the workspace, which is not quite the definition of extern crate in Rust) are not checked by default. The expectation is that the library author providing the contract has performed this check. See also the open questions for a discussion on defaults and checking external contracts.

8

Kani cannot report occurrences of a to-be-checked contract function inside abstracted functions as errors, because this mechanism is needed to verify mutually recursive functions.