Testing
As far as commands go, cargo supports test and bench to run a crate's tests and benchmarks. Tests are specified in the code by creating a module inside a module and annotating it with #[cfg(test)], so it is only compiled for test builds. Each test function is then annotated with either #[test] or #[bench]; the latter takes a mutable reference to a Bencher, a benchmark-runner type that collects statistics over repeated runs (benchmarks still require the nightly toolchain, hence the feature flag below):
```rust
#![feature(test)]
extern crate test;

pub fn my_add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;
    use test::Bencher;

    #[test]
    fn this_works() {
        assert_eq!(my_add(1, 1), 2);
    }

    #[test]
    #[should_panic(expected = "attempt to add with overflow")]
    fn this_does_not_work() {
        assert_eq!(my_add(std::i32::MAX, std::i32::MAX), 0);
    }

    #[bench]
    fn how_fast(b: &mut Bencher) {
        b.iter(|| my_add(42, 42))
    }
}
```
After running cargo test, the output is as expected:
```
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running target/debug/deps/ch2-6372277a4cd95206

running 3 tests
test tests::how_fast ... ok
test tests::this_works ... ok
test tests::this_does_not_work ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
In this example, the tests import and call a function from their parent module, my_add, via use super::*. One of the tests even expects a panic (caused by an integer overflow) to occur, which is why the #[should_panic] annotation has been added.
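Panicking assertions are not the only way for a test to fail: since the 2018 edition, a #[test] function may also return a Result, and returning an Err marks the test as failed. A minimal sketch of this pattern (the parse_port helper is an illustrative name, not part of the example above):

```rust
use std::num::ParseIntError;

// Illustrative helper: parse a TCP port number from a string.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

#[cfg(test)]
mod result_tests {
    use super::*;

    // Returning Result lets the test body use `?` instead of unwrap();
    // if the function returns an Err, the test fails.
    #[test]
    fn parses_a_valid_port() -> Result<(), ParseIntError> {
        assert_eq!(parse_port("8080")?, 8080);
        Ok(())
    }
}
```

This keeps test bodies free of unwrap() noise when they exercise fallible APIs.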
On top of this, cargo supports doctests, a special form of testing. One of the most tedious chores when refactoring is updating the examples in the documentation, which is why they frequently stop working. Borrowed from Python's doctest module, Rust's doctests are a solution to this dilemma: by actually running the code shown in an example, they make sure that everything printed in the documentation can be executed—creating a black-box test at the same time.
Every item in Rust can be annotated with a special documentation comment (///), which is used to generate the documentation published at docs.rs (https://docs.rs/).
This documentation has sections (indicated by a Markdown header, #). By convention, runnable examples go into a section called Examples, and rustdoc will compile and run any fenced code block it finds in a documentation comment:
```rust
/// # A new Section
/// this [markdown](https://daringfireball.net/projects/markdown/) is picked up by `Rustdoc`
```
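Doctests can also hide setup lines from the rendered documentation: inside a doc-comment code block, lines starting with # are compiled and executed but not displayed. A small sketch of this, where the function double and the crate name my_crate are hypothetical:

```rust
/// Doubles a number.
///
/// # Examples
///
/// ```
/// # // this line is compiled and run by rustdoc, but hidden when rendered
/// assert_eq!(my_crate::double(21), 42);
/// ```
pub fn double(x: i32) -> i32 {
    x * 2
}
```

Hidden lines keep published examples short while the full setup still runs under cargo test.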
We can now add another test to the preceding sample by creating a few lines of documentation:
```rust
/// # A Simple Addition
///
/// Adds two integers.
///
/// # Arguments
///
/// - *a* the first term, needs to be `i32`
/// - *b* the second term, also an `i32`
///
/// # Returns
/// The sum of *a* and *b*.
///
/// # Panics
/// The addition is not done safely; overflows will panic!
///
/// # Examples
///
/// ```rust
/// assert_eq!(ch2::my_add(1, 1), 2);
/// ```
pub fn my_add(a: i32, b: i32) -> i32 {
    a + b
}
```
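As the Panics section warns, my_add overflows on large inputs. For comparison, a sketch of an overflow-aware variant—my_add_checked is not part of the original example—built on the standard library's checked_add, which returns None instead of panicking:

```rust
/// Adds two integers, returning `None` on overflow instead of panicking.
pub fn my_add_checked(a: i32, b: i32) -> Option<i32> {
    // checked_add performs the addition and signals overflow via Option
    a.checked_add(b)
}
```

The caller then decides how to handle overflow: my_add_checked(std::i32::MAX, 1) evaluates to None rather than aborting the program.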
The cargo test command will now run the code in examples as well:
```
$ cargo test
   Compiling ch2 v0.1.0 (file:///home/cm/workspace/Mine/rust.algorithms.data.structures/code/ch2)
    Finished dev [unoptimized + debuginfo] target(s) in 0.58s
     Running target/debug/deps/ch1-8ed0f81f04655fe4

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

     Running target/debug/deps/ch2-3ddb7f7cbab6792d

running 3 tests
test tests::how_fast ... ok
test tests::this_does_not_work ... ok
test tests::this_works ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

   Doc-tests ch2

running 1 test
test src/lib.rs - my_add (line 26) ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
For larger tests or black-box tests, it's also possible (and recommended) to put the tests into a tests subfolder at the project root. cargo will pick up the files there automatically, compile each one as its own integration-test crate, and run the tests accordingly.
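A sketch of such an integration test follows; the file name tests/smoke.rs is hypothetical. Because each file under tests/ is compiled as a separate crate, a real integration test would import the library's public API (for the chapter's crate, use ch2::my_add;); to keep this sketch self-contained, it tests a local copy of the function instead:

```rust
// tests/smoke.rs -- cargo compiles this file as its own test binary.
// A real integration test would import the library: `use ch2::my_add;`
// Local stand-in for the crate's public function:
fn my_add(a: i32, b: i32) -> i32 {
    a + b
}

// No #[cfg(test)] module is needed here: everything under tests/
// is only ever built for testing.
#[test]
fn adds_through_the_public_api() {
    assert_eq!(my_add(40, 2), 42);
}
```

Because these tests live outside src/, they can only reach the crate's public interface, which makes them true black-box tests.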
On top of tests, other commands (code metrics, linting, and so on) are often required and recommended. For that, cargo provides a third-party command interface: any executable named cargo-&lt;name&gt; on the PATH can be invoked as cargo &lt;name&gt;.