Development Workflows
Run `make help` or simply `make` to see all of the options here. The main ones are summarized below.
This "loop" could be employed in most cases while making any changes to the code or to the tests.
make build
Sometimes, you might end up with stale compilation output (e.g. `.zo` files). If these are present, Racket will use them in preference to the corresponding source modules (which may happen to be more up to date). A common symptom of this is getting strange errors that don't make any sense, or errors lingering that you thought you had fixed. To address this, you can "clean" all compiled output prior to building again, by using:

```
make clean
```
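For example, a full clean rebuild using these targets looks like:

```
# Remove stale compiled output (.zo files), then rebuild from source:
make clean
make build
```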
A faster option could be to do:

```
racket -y module-you-are-trying-to-run.rkt
```

This tells Racket, "[y]es, please recompile all dependent modules" before running this file.
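For instance, to run the core language module directly (using `flow.rkt`, mentioned below, purely as an illustration; adjust the path to wherever the module actually lives):

```
# Recompile any out-of-date dependencies, then run this module:
racket -y flow.rkt
```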
To run tests:

```
make test
make test-flow
```

The latter is just an example, but it is the one you'll most commonly want to use, as it runs the fastest and tests the core language (which is contained in the `flow.rkt` module). You may want to run `make test` only at later stages in development. For other modules you can test, run `make help` or simply `make` to see all the options.
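Putting these together, a typical iteration using only the targets covered so far might be:

```
# Fast loop while developing: rebuild, then test just the core language.
make build
make test-flow

# At later stages, run the full test suite:
make test
```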
The following loop may be employed while making changes to the documentation. The docs are in Scribble files in `qi-doc/`. After making any additions or changes:

```
make build-docs
make docs
```
The remaining workflows concern benchmarking and profiling. You'd typically only use these when you're optimizing performance in general or modifying the implementation of a particular form.

You will need to install the SDK before running the benchmarks. The SDK is just a collection of Racket dependencies (e.g. for command line scripting, coverage reporting, etc.) that are needed in order for the profiling scripts to work. You could install these dependencies manually yourself, but the SDK collects them into a Racket package (`qi-sdk`) so that you can use Raco to do it for you, instead.
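For instance, something along these lines should work (a hypothetical invocation; the exact package path may differ, and there may be a dedicated make target for this, so check `make help`):

```
# Install the SDK package and its dependencies via raco, assuming
# qi-sdk/ in the repository checkout is the package directory:
raco pkg install --auto qi-sdk/
```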
Once the SDK is installed, you can run the benchmarks:

```
make new-benchmarks
```

This uses Dominik's Variable Length Input Benchmarking package (`vlibench`) to run a rigorous series of benchmarks and get a statistically accurate picture of the competitive performance of deforested operations against undeforested ones (e.g. using just the Racket functions `map`, `filter`, etc.).
These are the same benchmarks that are run in our CI workflow. On GitHub, this currently takes about 20 minutes. On your machine, it could take more or less time, depending on your hardware.
During development, you may want to get a rough idea of performance rather than a rigorous report for presentation purposes. Use this:

```
make new-benchmarks-preview
```

This uses a "preview" profile that presents a rough picture of the performance of deforested operations, and it should run in under a minute.
```
make benchmark
```

This runs comprehensive benchmarks for all aspects of the language, but it does so in a basic way that isn't as rigorous as the "new" benchmarks in `vlibench`. Eventually, it may be a good idea to transition these benchmarks to `vlibench`, or at least to create a parallel set of such benchmarks using `vlibench`.
These benchmarks are not rigorous, but they take only a few seconds to run and can give a rough "smoke test" idea of performance, so they may be useful during the early stages of development. The relevant targets are:

```
make benchmark-competitive
make benchmark-forms
make benchmark-selected-forms
```