This is a boilerplate project to allow for a flexible and powerful testing setup while developing applications on AO.
It is based on busted, so you can write your tests entirely in Lua.
This boilerplate is suitable for
- unit tests (including fuzzing)
- integration tests
In the sections below we describe some key concepts and explain design decisions around this test setup. With AO being a relatively new technology, there are only a few established best-practice patterns that can be applied directly to AO process development. In many cases, the go-to way of solving a given problem has yet to be established within the builder community.
⚠️ The project is under development and may have bugs or design shortcomings. Feel free to contribute by opening issues or PR-ing. We welcome any effort aimed at making this a better tool for the community.
The files `process.lua` and `rewards.lua` contain some code you would typically have in your AO process (handlers and global state you are adding to the process).

The files `process_test.lua` and `rewards_test.lua` contain examples of integration tests and unit tests, respectively.
An integration test example
```lua
-- application code (process.lua)
Greetings = Greetings or 0 -- global state, initialized once

Handlers.add(
  "greet",
  Handlers.utils.hasMatchingTag("Action", "Greet"),
  function(msg)
    Greetings = Greetings + 1
    LastGreeting = msg.Data
  end
)
```

```lua
-- integration test code (process_test.lua)
it("should increment Greetings count on a Greet message", function()
  ao.send({ Target = ao.id, Action = "Greet", Data = "Hello" })
  assert.are.equal(_G.Greetings, 1)
end)
```
A unit test example
```lua
-- application code (rewards.lua)
mod.nextReward = function(msg)
  local maximumEntry = mod.getMaximumEntry()
  if maximumEntry and maximumEntry.value >= mod.Threshold then
    return maximumEntry.value * mod.RewardFactor
  else
    return 0
  end
end
```

```lua
-- unit test code (rewards_test.lua)
it("should return 0 reward if we are below threshold", function()
  rewards.Threshold = 42
  rewards.RewardFactor = 10

  -- mock this for the purpose of the test
  local originalGetMaximum = rewards.getMaximumEntry
  rewards.getMaximumEntry = function()
    return { value = 41 }
  end

  assert.are.equal(rewards.nextReward(), 0)

  rewards.getMaximumEntry = originalGetMaximum
end)
```
If you start from scratch with a new project, use `process.lua` as the place to define and add your Handlers. Use files like `rewards.lua` for creating lib-like modules that implement helpers, handler execution, or handler matching logic.
- Add the `scripts/` and `test/` directories from this repository to your project.
- Replace `process.lua` with your main process file. If you prefer another name, sync the name of `test/process_test.lua` accordingly.
- Include your lib-like modules. They can simply be required into the `_test.lua` files: `local rewards = require "rewards"`
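After these steps, a project could look roughly like this (`rewards.lua` stands in for your own lib-like modules):

```
your-project/
├── scripts/               (copied from this repository)
├── process.lua            (your main process file)
├── rewards.lua            (example lib-like module)
└── test/
    ├── setup.lua
    ├── process_test.lua   (integration tests)
    ├── rewards_test.lua   (unit tests)
    └── mocked-env/        (mocked ao and mocked processes)
```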
This testing setup assumes that
- your repo has a single "main" process
- the main process handlers are added in the file `process.lua`
- `process.lua` potentially requires Lua modules (lib-like) in order to define the handlers
The main goal is to test as thoroughly as needed without compromising good design patterns in the actual application code.
As in other types of programming, we do find a need to make design choices partially based on testability. Without any proper "units" there can be no unit testing at all.
Along these lines is one main suggestion we are making upfront:
In order to easily create unit tests, it's better to have the top-level Lua file only define handlers, while execution functions live in one or multiple dedicated modules. Additionally, we suggest splitting out any `ao.send({})` that occurs as part of handler execution into its own function within the dedicated module that is to be unit tested. That function can then be mocked in unit tests, so that it does something other than `ao.send({})` (ideally, something verifiable with assertions).

This principle makes it possible to "unit-test" the execution of specific handlers without needing to mock the message that triggers them.
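As a rough sketch of this split (the handler, module function, and tag names below are illustrative, not the actual code of this repository):

```lua
-- process.lua: only wires up the handler
local rewards = require "rewards"

Handlers.add(
  "payout",
  Handlers.utils.hasMatchingTag("Action", "Payout"),
  rewards.handlePayout
)

-- rewards.lua: execution logic lives in the module
mod.handlePayout = function(msg)
  local amount = mod.nextReward(msg)
  mod.sendReward(msg.From, amount) -- outgoing message isolated in its own function
end

mod.sendReward = function(target, amount)
  ao.send({ Target = target, Action = "Reward", Quantity = tostring(amount) })
end
```

In a unit test, `rewards.sendReward` can then be replaced with a stub that records its arguments, so the assertions target what would have been sent rather than the `ao.send({})` call itself.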
By building in the AO paradigm we've come to the following conceptual differentiation between types of testing:
1. Integration tests between processes (requires mocking other processes and part of ao itself)
2. Integration tests between modules (libs) that serve a single process - these can also be viewed as unit tests on the single process itself
3. Unit tests for modules (libs) that serve a single process
If a unit (module) internally uses functions or variables which are also exposed, this setup allows for ad-hoc changes to these functions and values as needed for different tests. See `rewards_test.lua` for an example of how to leverage this in order to perform fuzzing.
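A fuzzing-style test along these lines could look like this (a simplified sketch reusing the names from the example above; the actual test in `rewards_test.lua` may differ):

```lua
it("should never pay a reward when the maximum entry is below threshold", function()
  rewards.Threshold = 42
  rewards.RewardFactor = 10

  for _ = 1, 1000 do
    -- overwrite the exposed function with a random value below the threshold
    local fuzzedValue = math.random(0, rewards.Threshold - 1)
    rewards.getMaximumEntry = function()
      return { value = fuzzedValue }
    end
    assert.are.equal(0, rewards.nextReward())
  end
end)
```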
Additional unit test files can be added. They should be named similarly, as specified in `test/setup.lua`.
The approach is to test one main app process (`process.lua`) with its actual Handlers and imported modules, without modifying its original code in any way. Other processes that our app process interacts with are mocked in `test/mocked-env/processes`.

Inter-process communication occurs via the familiar `ao.send({...})` call.
In addition to the familiar key-value pairs of its argument, `ao.send` also supports
- a `From = "xyz...321"`, which allows you to impersonate accounts when sending messages. This enables, for instance, testing of access control.
- a `Timestamp = 1234`, which allows you to simulate the specific timing of messages. This enables, for instance, testing of cooldown logic.
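For example (only the `From` and `Timestamp` keys are specific to this setup; the actions and values are illustrative):

```lua
-- impersonate a non-owner wallet, e.g. to verify access control
ao.send({ Target = ao.id, From = "some-other-wallet", Action = "Withdraw" })

-- simulate a message arriving at a specific point in time
ao.send({ Target = ao.id, Action = "Claim", Timestamp = 1720000000000 })
```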
`ao` is mocked such that
- messages targeting our app process are handled according to its actual handlers (the ones being tested)
- messages targeting mock processes are handled via the `handle(msg)` function they expose for the purpose of being used in integration tests
- messages targeting users can be handled by storing them in a global value like `_G.LastMessageToOwner`, or something more sophisticated like `_G.MessagesToWallets` if the test assertions require more info than just the last received message
- the process environment `ao.env.Process` can be set as needed in the test file (see `process_test.lua`); this is especially useful if custom tags are passed into the tested process when it is spawned
You are free to implement the internal state of mocked processes as you see fit, such that subsequent calls to `handle(msg)` yield realistic results.
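As an illustration, a mocked token-like process could keep a simple balances table so that a balance query after a transfer reflects that transfer. This is only a sketch of the `handle(msg)` convention; the mocked processes shipped in `test/mocked-env/processes` differ in their details:

```lua
-- e.g. test/mocked-env/processes/mytoken.lua (illustrative)
return function(processId)
  local balances = {}

  return {
    id = processId,
    handle = function(msg)
      if msg.Action == "Transfer" then
        balances[msg.Recipient] = (balances[msg.Recipient] or 0) + (tonumber(msg.Quantity) or 0)
      elseif msg.Action == "Balance" then
        -- reply to the sender so the app process can react to the balance
        ao.send({
          Target = msg.From,
          Action = "Balance-Response",
          Data = tostring(balances[msg.Target] or 0)
        })
      end
    end
  }
end
```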
You can skip the execution of `ao.send()` entirely for any particular test file by setting `_G.IsInUnitTest = true`. This is useful if you want to perform a unit test of Type 2:

A specific handler is implemented as a separate function as described above. It performs internal state updates of many kinds, involving multiple modules. During execution it sends out messages that are not essential to the correct execution flow (logging, syncing, success confirmations, etc.).
By using the unit test flag, it's possible to test these handler execution functions without having to mock each of these non-essential `ao.send()` calls.
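A minimal sketch of how this could look (the handler execution function and the asserted counter are illustrative names, not part of this repository):

```lua
-- at the top of the unit test file
_G.IsInUnitTest = true  -- ao.send() is skipped for every test in this file

it("should update internal state without sending confirmations", function()
  -- call the handler execution function directly (illustrative name)
  rewards.handlePayout({ From = "some-wallet" })

  -- assert on internal state only; logging/confirmation messages were skipped
  assert.are.equal(1, rewards.PayoutsCount)
end)
```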
You can use `_G.VirtualTime` to test logic related to time (time locks, cooldown periods, etc.). Messages sent via `ao.send()` will use the `_G.VirtualTime` value if it is not `nil`. However, an explicit `Timestamp = 1234` in the message payload overrides any virtual time value.
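A sketch of how this looks in a test (the `Claim` action is illustrative):

```lua
-- every message in this test behaves as if sent at this timestamp
_G.VirtualTime = 1720000000000
ao.send({ Target = ao.id, Action = "Claim" })

-- an explicit Timestamp takes precedence over _G.VirtualTime
ao.send({ Target = ao.id, Action = "Claim", Timestamp = 1720000060000 })
```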
`_G.MainProcessId` is an arbitrary ID assigned to the app process so that `ao` and the mocked processes can reference it.

And this is how we set up mocked process references:

```lua
_G.AoCredProcessId = 'AoCred-123xyz'

_G.Processes = {
  [_G.AoCredProcessId] = require 'aocred' (_G.AoCredProcessId)
}
```
With all testing logic being in Lua, it's possible to leverage the native `print` function. We wrap it in a higher-order function that allows specifying a verbosity level for each log call. See the function `_G.printVerb` at the top of the example `test/*_test.lua` files.
For example, any logging related to message passing and handling is done with level 2 verbosity; see `printVerb(2)('> LOG: ' .. 'SKIPPING DEFAULT HANDLER')` in `test/mocked-env/ao/handlers.lua`.
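Conceptually, the wrapper can be as small as the following sketch (the name of the variable holding the current verbosity level is an assumption; check the example test files for the actual definition):

```lua
_G.CurrentVerbosity = 2  -- assumed name; tune per run, or even mid-test

_G.printVerb = function(level)
  level = level or 1
  return function(...)
    -- only print if the message's level is within the configured verbosity
    if level <= _G.CurrentVerbosity then
      print(...)
    end
  end
end
```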
By using this approach you can also change the verbosity of your test suite while it is executing, so that only specific tests or parts of a test run with high verbosity while others keep a low or zero verbosity. That way you can focus more effectively on failing tests or parts of tests when debugging.

Also, since `_G.printVerb` is a global function, you can temporarily include this logging in your application code while debugging failing tests.
`test/mocked-env/ao/ao.lua` contains the minimum functionality needed to simulate message communication. It borrows from the setup of the aos-test-kit.

We replicate some code from aos: `test/mocked-env/ao/handlers.lua` and `test/mocked-env/ao/handlers-utils.lua` are used for matching the actual handlers of the app process.
In order to keep things simple, the default handlers associated with each process (`_default` and `_eval`) are not added to the app process, so they never kick in as they would in production. For the purpose of testing, we don't consider them essential.
Testing can also be performed with the ao-loader from https://github.com/permaweb/ao; see the npm package and how it is used in the aos-test-kit.

While the aos-test-kit does facilitate TDD while developing an AO process, it offers less flexibility in terms of
- systematically setting up the global state of the process to be tested (app process)
- making assertions on the global state of the process to be tested (you shouldn't need to go through Evals for this purpose)
- allowing inter-process interaction with mocked processes

Additionally, the present setup makes it possible to write the tests entirely in Lua, which may be preferable in some cases.
This project is licensed under the MIT License - see the LICENSE.md file for details.