Development Workflows

Siddhartha Kasivajhula edited this page Jan 1, 2025 · 6 revisions

Run make help or simply make to see all of the available targets. The main ones are summarized below.

Dev Loop

Use this loop for most changes to the code or the tests.

Rebuilding

  make build

Cleaning

Sometimes, you might end up with stale compilation output (e.g. .zo files). If these are present, Racket will use them in preference to the corresponding source modules, even when the sources are more recent. A common symptom is strange errors that make no sense, or errors persisting after you thought you'd fixed them. To address this, you can "clean" all compiled output before building again:

  make clean
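For intuition, cleaning essentially amounts to removing Racket's per-directory compiled/ caches (make clean is the supported way; its exact behavior is defined in the Makefile). A minimal sketch of the idea, run against a scratch directory so nothing real is touched (the file names here are made up for the demo):

```shell
# Simulate a source tree containing stale compiled output.
demo=$(mktemp -d)
mkdir -p "$demo/compiled"
touch "$demo/flow.rkt" "$demo/compiled/flow_rkt.zo"

# "Cleaning" removes the compiled/ caches; source files are untouched,
# so the next build recompiles everything from source.
find "$demo" -type d -name compiled -exec rm -rf {} +
```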

A faster alternative is:

  racket -y module-you-are-trying-to-run.rkt

This tells Racket, "[y]es, please recompile all dependent modules" before running this file.

Running Tests

Run all tests

  make test

Run tests for a specific module

  make test-flow

This is just an example, but it is the one you'll most commonly want to use, as it runs the fastest and tests the core language (which is contained in the flow.rkt module). You may want to run make test only at later stages in development. For other modules you can test, run make help or simply make to see all the options.

Docs Loop

This loop may be employed while making changes to the documentation. The docs are in Scribble files in qi-doc/. After making any additions or changes:

Rebuilding

  make build-docs

Viewing Docs

  make docs

Performance Loop

You'd typically only use these when you're optimizing performance in general or modifying the implementation of a particular form.

Running Benchmarks

You will need to install the SDK before running the benchmarks. The SDK is just a collection of Racket dependencies (e.g. for command line scripting, coverage reporting, etc.) that are needed for the profiling scripts to work. You could install these dependencies manually, but the SDK collects them into a Racket package (qi-sdk) so that you can have raco do it for you instead.

Run deforestation benchmarks

This uses Dominik's Variable Length Input Benchmarking package (vlibench) to run a rigorous series of benchmarks and get a statistically accurate picture of competitive performance of deforested operations against undeforested ones (e.g. using just the Racket functions map, filter, etc.).

  make new-benchmarks

These are the same benchmarks that are run in our CI workflow. On GitHub, this currently takes about 20 minutes. On your machine, it could take more or less time, depending on your hardware.

During development, you may want to get a rough idea of performance rather than a rigorous report for presentation purposes. Use this:

  make new-benchmarks-preview

This uses a "preview" profile that gives a rough idea of the performance of deforested operations, and should run in under a minute.

Run basic benchmarks

  make benchmark

This runs comprehensive benchmarks for all aspects of the language, but it does it in a basic way that isn't as rigorous as the "new" benchmarks in vlibench. Eventually, it may be a good idea to transition these benchmarks to vlibench, or at least create a parallel set of such benchmarks using vlibench.

These benchmarks are not rigorous, but they take a few seconds to run and can give a rough "smoke test" idea of performance, so they may be useful during the early stages of development.

Run just the basic competitive benchmarks against Racket

  make benchmark-competitive

Run just the basic benchmarks for forms of the language

  make benchmark-forms

Run just the basic benchmarks for selected forms

  make benchmark-selected-forms