
Welcome to the C++ Runtime Testing library tour! Here, you can find out how and why you should use this library.

Tests & expectations

The core of the library stands in the "test" and "expect" macros. Here is the most basic example of a test case you can write using this library:

#include <kktest>

void testCase() {
    test("This is a passing test!", [&]() {
        expect(1 + 2 == 3);
    });

    test("This is a failing test!", [&]() {
        expect(3 * 3 == 6);
    });
}

Note: both test and expect are macros, and therefore belong to no namespace.

test is the main macro of the library, and it does exactly what it says: it defines a test. Although not obvious from the implementation, the signature is similar to:

void test(std::string description, std::function<void()> testFunc);

Inside the test function, operations can be executed safely, since any possible failure is caught and reported.

After executing different operations, you generally want to verify that some state is the way you want it to be. For this, the expect macro can be called to assert that a boolean expression evaluates to true. The signature is simple:

void expect(bool expr);

Once an expectation fails, the test stops, reports the failure and is considered a failed test. All expectations inside a test must pass for the test to be considered passed.
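
As a small illustrative sketch (using only the test and expect macros introduced above, not an example from the library itself), this is how a failed expectation short-circuits the rest of a test:

#include <kktest>

void testCase() {
    test("Stops at the first failing expectation", [&]() {
        expect(1 + 1 == 2);  // passes, execution continues
        expect(2 + 2 == 5);  // fails: the test stops here and is reported as failed
        expect(1 == 1);      // never evaluated
    });
}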

Removing boilerplate: setUp and tearDown

Imagine the following test suite:

#include <algorithm>
#include <vector>

#include <kktest>

void testCase() {
    std::vector<int> v;

    test("After pushing back 3, 4 and 5, v has size 3", [&]() {
        v.push_back(3);
        v.push_back(4);
        v.push_back(5);
        expect(v.size() == 3);
        v.clear();
    });

    test("After pushing back 3, 4 and 5, v is equal to {3, 4, 5}", [&]() {
        v.push_back(3);
        v.push_back(4);
        v.push_back(5);
        expect(v == std::vector<int>{3, 4, 5});
        v.clear();
    });

    test("After pushing back 3, 4 and 5, v does not contain '6'", [&]() {
        v.push_back(3);
        v.push_back(4);
        v.push_back(5);
        expect(std::find(v.begin(), v.end(), 6) == v.end());
        v.clear();
    });
}

It is clear that there is a lot of duplicated code in all three tests, both before the main expectation and after it. Writing that code multiple times makes the tests bug-prone and harder to read and maintain. Worse, what if a test fails on an expect call? Then the vector will not be cleared, since execution halts and goes directly to the next test. This is where two new macros, setUp and tearDown, come in:

#include <algorithm>
#include <vector>

#include <kktest>

void testCase() {
    std::vector<int> v;

    setUp([&]() {
        v.push_back(3);
        v.push_back(4);
        v.push_back(5);
    });

    tearDown([&]() {
        v.clear();
    });

    test("After pushing back 3, 4 and 5, v has size 3", [&]() {
        expect(v.size() == 3);
    });

    test("After pushing back 3, 4 and 5, v is equal to {3, 4, 5}", [&]() {
        expect(v == std::vector<int>{3, 4, 5});
    });

    test("After pushing back 3, 4 and 5, v does not contain '6'", [&]() {
        expect(std::find(v.begin(), v.end(), 6) == v.end());
    });
}

This gives us the same effect, but in a safer and cleaner manner: the setUp is executed before each test and the tearDown after each test (no matter whether it passed or failed).
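
To illustrate that last point, here is a minimal sketch (not part of the suite above) where the test fails on its second expectation, yet the tearDown still runs and clears the vector:

#include <vector>

#include <kktest>

void testCase() {
    std::vector<int> v;

    setUp([&]() {
        v.push_back(1);         // runs before every test
    });

    tearDown([&]() {
        v.clear();              // runs after every test, even a failed one
    });

    test("Fails, but the vector is still cleared afterwards", [&]() {
        expect(v.size() == 1);  // passes
        expect(v.empty());      // fails: the test stops, the tearDown above still executes
    });
}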

Groups

Set-ups and tear-downs are useful, but so far every test in the file has to share the same setUp and tearDown. Generally, you don't want to create a test file only for the "after 3 push_backs" tests, but rather for a larger piece of functionality (say, the whole std::vector, or at least several configurations of one of its methods). You can organize this using the group macro:

#include <vector>

#include <kktest>

void testCase() {
    std::vector<int> v;

    setUp([&]() {
        v = std::vector<int>{}; // always start with a clean vector
    });

    test("Vector is initially empty", [&] {
        expect(v.empty());
        expect(v.size() == 0);
    });

    test("After one push_back, vector is not empty anymore", [&]() {
        v.push_back(3);
        expect(!v.empty());
        expect(v.size() != 0);
    });

    group("After inserting 5 elements", [&]() {
        setUp([&]() {
            v.insert(v.end(), {1, 2, 3, 4, 5});
        });

        test("Size of the vector is 5", [&]() {
            expect(v.size() == 5);
        });

        test("v.at(4) does not throw", [&]() {
            try {
                v.at(4);
            } catch(...) {
                fail("It did throw.");
            }
        });
    });

    test("v.at(4) throws on empty vector", [&]() {
        try {
            v.at(4);
            fail("Did not throw.");
        } catch(...) { /* All ok; */ }
    });
}

As you can see above, we used the setUp macro at the top-level scope to make sure the vector is clean at the start of each test. Inside, we defined a group of tests, and within that group we defined another setUp. Both setUps are executed before each test inside the group, but only the outer one runs for tests outside the group. They are applied in definition order: the top-most one first, then the in-group one.

You can nest as many layers of groups as you want, and place as many groups as you want on the same layer; they all behave the same way: every test defined between a group's start and end is preceded by the group's setUp and followed by the group's tearDown. In the test output, the descriptions of all groups enclosing a test are prepended to the test's own description:

group.cpp:10: Vector is initially empty: PASSED
group.cpp:15: After one push_back, vector is not empty anymore: PASSED
group.cpp:26: After inserting 5 elements::Size of the vector is 5: PASSED
group.cpp:30: After inserting 5 elements::v.at(4) does not throw: PASSED
group.cpp:39: v.at(4) throws on empty vector: PASSED
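
As a short sketch of nesting (an illustrative example, not part of the suite above), the three setUps below all run before the innermost test, in definition order:

#include <vector>

#include <kktest>

void testCase() {
    std::vector<int> v;

    setUp([&]() {
        v.clear();                  // outermost set-up: runs first
    });

    group("Outer group", [&]() {
        setUp([&]() {
            v.push_back(1);         // runs second, for every test in this group
        });

        group("Inner group", [&]() {
            setUp([&]() {
                v.push_back(2);     // runs third, only for tests in the inner group
            });

            test("All three setUps ran, in definition order", [&]() {
                expect(v == std::vector<int>{1, 2});
            });
        });
    });
}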

Output, return code and reports

The testing driver logs the result of each test while executing. In addition, if passed the -r or --report= option, the suite creates a JSON report (named report.json by default) that encapsulates all the information available about the test run. For example, the output for the first program shown in this tour is:

base.cpp:5: This is a passing test!: PASSED
base.cpp:9: This is a failing test!: FAILED
	base.cpp:10: 3 * 3 == 6 is false

Process finished with exit code 1

The exit code is 0 if all tests pass; otherwise, it equals the number of failed tests. The report for the example above looks something like this:

{
  "numTests": 2,
  "numFailedTests": 1,
  "tests": [
    {
      "description": "This is a passing test!",
      "file": "base.cpp",
      "line": 5,
      "passed": true
    },
    {
      "description": "This is a failing test!",
      "file": "base.cpp",
      "line": 9,
      "passed": false,
      "failureMessage": "base.cpp:10: 3 * 3 == 6 is false"
    }
  ]
}