
ritual-net/infernet-monorepo


Infernet Monorepo

This monorepo includes all of the libraries & frameworks used for building & running containers in Infernet Nodes.

Overview

Currently, the repository structure is as follows (not all files / directories are shown):

  • infernet_services/: Infernet-compatible containers and tests.
    • consumer-contracts/: Contracts used for onchain (web3) testing.
    • deploy/: Deployment files for test Nodes.
    • services/: Source code for containers.
      • css_inference_service/
      • ezkl_proof_service/
      • hf_inference_client_service/
      • onnx_inference_service/
      • tgi_client_inference_service/
      • torch_inference_service/
    • test_services/: Source code for testing-only containers, such as the local Anvil chain instance.
  • tests/: Source code for container tests, Node E2E tests, and common test_library.
  • libraries/: All of the Python libraries.
    • infernet_ml/: Source code for the infernet-ml library.
    • infernet_cli/: Source code for the infernet-cli library & CLI tool.
    • infernet_client/: Source code for the infernet-client library & CLI tool.
    • ritual_arweave/: Source code for the ritual-arweave library & CLI tool.
    • infernet_pyarweave/: Source code for the infernet-pyarweave library.
  • scripts/: Makefile scripts used for publishing packages, deploying services, generating docs, etc.
  • tools/: Miscellaneous scripts used for auto-generation and deployment of library and service documentation pages.
  • pyproject.toml: Top-level pyproject.toml primarily used by rye to handle various tasks regarding monorepo management. This is akin to the top-level package.json file in JS monorepos.

Development

Pre-requisites

To develop in the monorepo, we suggest you install the following prerequisites:

Deployer Key

Only applies to the Ritual team.

For publishing libraries to the Ritual PyPI repository, you will need a service account with access to our private GCP project. Ask the team to provide you with a pypi-deployer-key.json file, or to grant you access to create one yourself. Place the file in the top-level directory of this repository.

Secrets

Only applies to the Ritual team.

To test libraries and services, you will need a .env file with secrets.

First, authenticate with the gcloud CLI. Run

gcloud auth login

and follow the steps in your browser. Then, initialize the repository:

make init-repo

If successful, you should now see the .env file in the top-level directory of this repository.
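Since later make targets read secrets from this file, a quick presence check before running tests can save a confusing failure. A minimal sketch:

```shell
# Check that the secrets file exists before running tests.
# (Sketch: assumes you are in the repository's top-level directory.)
if [ -f .env ]; then
  ENV_STATUS="present"
else
  ENV_STATUS="missing (run 'gcloud auth login' and 'make init-repo' first)"
fi
echo ".env is $ENV_STATUS"
```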

Python Libraries

The libraries/ portion of this repository was scaffolded using rye. Rye comes with built-in support for packaging and installing Python packages. It also supports workspaces, which lets us keep multiple Python libraries in the same repository under a monorepo structure.
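For illustration, a Rye workspace is declared in the top-level pyproject.toml. A minimal sketch (the project name and members glob below are assumptions for illustration, not copied from this repository's actual file):

```toml
[project]
name = "infernet-monorepo"
version = "0.1.0"

# Hypothetical workspace declaration: Rye treats every package
# matching the glob as a member of this workspace.
[tool.rye.workspace]
members = ["libraries/*"]
```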

For installation and usage documentation of the libraries, please refer to:

  1. infernet-cli
  2. infernet-client
  3. infernet-ml
  4. ritual-arweave

The following sections detail how to develop, build, and test the Python libraries.

Development setup

If developing on a library or running tests, you can set up the development environment by running:

make setup-library-env

This will create a new uv environment under the .venv directory. Activate it by running:

source .venv/bin/activate

If modifying the dependencies in a library's pyproject.toml (or to simply bump third-party library versions), you can update the requirements.lock file with:

make update-library-lockfile

Testing

You can run tests for a library as follows:

make test-library

You can run the pre-commit scripts for a library as follows:

make pre-commit-library
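Putting the setup and testing steps above together, a typical development loop might look like the following sketch (the helper function name is ours, not part of the repo; run it from the monorepo root):

```shell
# Hypothetical helper bundling the library development loop described above.
library_dev_loop() {
  make setup-library-env        # create the uv environment under .venv
  . .venv/bin/activate          # activate it
  make update-library-lockfile  # refresh requirements.lock after dependency changes
  make pre-commit-library       # run lint/format hooks
  make test-library             # run the test suite
}
```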

Building

You can build a library by running:

make build-library

This will create a dist folder and a .tar.gz file for the library.
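To confirm the build produced an artifact, you can look for the generated tarballs. A small sketch (the libraries/<name>/dist layout is our assumption based on the description above):

```shell
# Count built source distributions under any library's dist/ folder.
# (Assumes artifacts land in libraries/<name>/dist/, per the text above.)
ARTIFACT_COUNT=$(find libraries -path '*/dist/*.tar.gz' 2>/dev/null | wc -l)
echo "found $ARTIFACT_COUNT built artifact(s)"
```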

Build System: By default, Rye uses hatchling to package and build the libraries.

Publishing

To publish a library, you can run:

make publish-library

Note that you will need a PyPI account and key to publish a library.

Services

All of the services are located in the infernet_services directory. These services are Infernet-compatible containers that work out-of-the-box, and cover many of the common use-cases for ML workflows.

For documentation on the services & how to use them, please refer to the Infernet Services Documentation. The following sections detail how to develop, build, and test the Infernet Services.

Development setup

If developing on a service or running tests, you can set up the development environment by running:

make setup-services-test-env

This will create a new uv environment under the .venv directory. Activate it by running:

source .venv/bin/activate

If modifying the dependencies in a service's requirements.txt (or to simply bump third-party library versions), you can update the requirements.lock file with:

make update-service-lockfile

Testing

You can run tests for a service as follows:

make test-service

You can run the pre-commit scripts for a service as follows:

make pre-commit-service

Building

You can build a service's Docker image as follows:

make build-service

Running

To run a service, we suggest you configure & deploy it via an Infernet Node. This is very similar to how testing a service is set up, except that you have full control over the node and container configurations.

Configure the service

Create a config.json file under the infernet_services/deploy directory. To learn about the possible config params, refer to the configuration documentation.

Hint: You can start by generating the config.json used for testing, and then modify it. To do so, run

PYTHONPATH=infernet_services/tests python infernet_services/tests/<service_name>/conftest.py

replacing <service_name> with the name of the service you are configuring.
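Alternatively, you can write a placeholder config.json by hand and check that it parses. The field shown below is illustrative only; consult the configuration documentation for the authoritative schema.

```shell
# Write a placeholder config.json and check that it is valid JSON.
# The "containers" key is illustrative; see the configuration docs
# for the real schema.
mkdir -p infernet_services/deploy
cat > infernet_services/deploy/config.json <<'EOF'
{
  "containers": []
}
EOF
python3 -m json.tool infernet_services/deploy/config.json
```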

Deploy the node

make deploy-node

This should deploy the Infernet Node on port 4000, along with the services configured in config.json. We recommend using Docker Desktop to monitor and manage the containers.
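Once the node is up, a quick reachability check can confirm the deployment. This sketch assumes the node's REST API listens on port 4000 and exposes a /health endpoint; verify the endpoint against the node's API documentation:

```shell
# Probe the node's API on port 4000; fall back to a message if unreachable.
# (/health is an assumed endpoint; check the node's API docs.)
RESPONSE=$(curl -s http://localhost:4000/health) || RESPONSE="node not reachable on port 4000"
echo "$RESPONSE"
```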