contractor is a CLI tool for generating and enriching OpenAPI specifications from source code using an LLM.
The following pipelines are currently available:
- `build`: build or update an OpenAPI spec from the project code
- `enrich`: enrich an existing OpenAPI artifact based on the project structure and implementation
The CLI is installed as the `contractor` command.
- Python >=3.10,<3.15
- Poetry
- a running LiteLLM proxy
- an LLM backend with an OpenAI-compatible API
Before running contractor, you need to:
- Start the LiteLLM proxy
- Configure it
- Make sure the required models are available through the backend
- Install the project dependencies with Poetry
In this repository, LiteLLM is usually started via:
```shell
deploy/litellm/run.sh
```

This is not mandatory. You can start the proxy in any convenient way, as long as:

- LiteLLM is reachable by the application
- the model names in the config match what is passed to `--model`
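One quick way to confirm both conditions is to ask the proxy which models it exposes. The sketch below does this through the OpenAI-compatible `GET /v1/models` endpoint; the proxy URL and API key are assumptions (LiteLLM listens on port 4000 by default), not values taken from this repository.

```python
import json
import urllib.request

# Assumed defaults; adjust to your proxy setup.
PROXY_URL = "http://localhost:4000"
API_KEY = "sk-litellm-changeme"


def model_names(models_response: dict) -> list[str]:
    """Extract model names from an OpenAI-style GET /v1/models payload."""
    return [entry["id"] for entry in models_response.get("data", [])]


def fetch_model_names(base_url: str = PROXY_URL, api_key: str = API_KEY) -> list[str]:
    """Query the proxy for the models it currently exposes."""
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return model_names(json.load(resp))
```

If the name you plan to pass to `--model` does not appear in `fetch_model_names()`, the proxy and the CLI disagree about the configuration.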
File `litellm_config.yaml`:

```yaml
model_list:
  - model_name: lm-studio-nemotron
    litellm_params:
      model: openai/nvidia/nemotron-3-nano
      api_key: lm-studio
      api_base: http://localhost:1234/v1
      tpm: 100000
      rpm: 10
  - model_name: lm-studio-openai
    litellm_params:
      model: openai/openai/gpt-oss-20b
      api_key: lm-studio
      api_base: http://localhost:1234/v1
      tpm: 100000
      rpm: 10
  - model_name: lm-studio-qwen3.5
    litellm_params:
      model: openai/qwen/qwen3.5-35b-a3b
      api_key: lm-studio
      api_base: http://localhost:1234/v1
      tpm: 100000
      rpm: 10

litellm_settings:
  num_retries: 3
  request_timeout: 300
```

- `model_name`: the name later used in the CLI via `--model`
- `api_base`: the OpenAI-compatible API endpoint
- `request_timeout: 300`: useful for long-running tasks
- `num_retries: 3`: the number of retry attempts on errors
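A malformed config is a common source of confusing proxy errors. Assuming the file has already been parsed into a dict (e.g. with `yaml.safe_load`), a small sanity check might look like this; the function name and checks are illustrative, not part of the project:

```python
def check_litellm_config(config: dict) -> list[str]:
    """Return a list of problems found in a parsed litellm_config.yaml dict.

    `config` is assumed to be the result of yaml.safe_load() on the file above.
    """
    problems: list[str] = []
    seen: set[str] = set()
    for entry in config.get("model_list", []):
        name = entry.get("model_name")
        if not name:
            problems.append("model entry without model_name")
            continue
        if name in seen:
            problems.append(f"duplicate model_name: {name}")
        seen.add(name)
        params = entry.get("litellm_params", {})
        # Every entry needs at least a backend model and an endpoint.
        for key in ("model", "api_base"):
            if key not in params:
                problems.append(f"{name}: missing litellm_params.{key}")
    return problems
```

An empty return value means the basics are in place; it does not guarantee the backend actually serves those models.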
```shell
podman run --rm -d \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -e LITELLM_MASTER_KEY="sk-litellm-changeme" \
  -e LITELLM_SALT_KEY="sk-random-hash-changeme" \
  --network="host" \
  "ghcr.io/berriai/litellm:main-stable" \
  --config /app/config.yaml
```

If the repository already contains a ready-made script, you can use it:
```shell
deploy/litellm/run.sh
```

Install the dependencies:

```shell
poetry install
```

After installation, the CLI is available as:

```shell
poetry run contractor --help
```

If the Poetry environment is activated, you can simply run:

```shell
contractor --help
```

General usage:

```shell
contractor \
  --pipeline <pipeline-name> \
  --project-path <path-to-project> \
  --folder-name <project-relative-folder> \
  --user-id <user-id> \
  --model <model-name>
```
- `--pipeline`: the pipeline name. Currently available: `build`, `enrich`
- `--project-path`: path to the project directory
- `--folder-name`: path inside the project that will be used in task templates. Default: `/`
- `--artifact`: path to an existing OpenAPI file. Used by pipelines that require an input artifact, such as `enrich`
- `--user-id`: user identifier for the runner. Default: `cli-user`
- `--model`: model name from the LiteLLM config. Default: `lm-studio-qwen3.5`
```shell
contractor \
  --pipeline build \
  --project-path /path/to/project \
  --folder-name /src \
  --model lm-studio-qwen3.5
```

It is better to place the project in a separate, isolated folder so the agent does not accidentally wander into neighboring projects.
```shell
contractor \
  --pipeline enrich \
  --project-path /path/to/project \
  --folder-name /src \
  --artifact /path/to/openapi.yaml \
  --model lm-studio-qwen3.5
```

The CLI validates:

- that `--project-path` exists and is a directory
- that `--folder-name` exists inside `--project-path`
- that `--artifact` exists and is a file
- that `--artifact` is only provided for pipelines that require it
After that, the new pipeline will automatically appear in `--pipeline`.
- `contractor/main.py`: CLI entrypoint
- `contractor/agents/`: agents
- `contractor/runners/`: pipeline runners
- `contractor/tasks/`: YAML task definitions
- `contractor/tools/`: tools for agents
- `contractor/utils/`: utilities
It is important that the corresponding model is actually available through the backend.
1. Start the backend with the model.
2. Start the LiteLLM proxy.
3. Install dependencies:

    ```shell
    poetry install
    ```

4. Check the CLI:

    ```shell
    poetry run contractor --help
    ```

5. Run the required pipeline:

    ```shell
    poetry run contractor \
      --pipeline build \
      --project-path /path/to/project \
      --model lm-studio-qwen3.5
    ```
Use:

```shell
poetry run contractor --help
```

or activate the Poetry virtual environment.
Check that:

- the LiteLLM proxy is running
- the name passed in `--model` matches `model_name` in `litellm_config.yaml`
- the backend is actually reachable via `api_base`
For `enrich`, you need to provide an existing file:

```shell
--artifact /path/to/openapi.yaml
```

You can:

- increase `request_timeout` in the LiteLLM config
- choose a different model
- limit the analysis scope via `--folder-name`
Install dependencies:

```shell
poetry install
```

Run tests:

```shell
poetry run pytest
```

Checks:

```shell
poetry run ruff check .
poetry run mypy .
```

The list of pipelines is centralized in the pipeline registry in `contractor/main.py`.
To add a new pipeline, it is enough to:
- implement a function that returns a `TaskRunner`
- add it to `get_pipelines()`
Example:
```python
def get_pipelines() -> dict[str, PipelineSpec]:
    return {
        "build": PipelineSpec(builder=oas_building_pipeline),
        "enrich": PipelineSpec(builder=oas_enrichment_pipeline, requires_artifact=True),
        "my-new-pipeline": PipelineSpec(builder=my_new_pipeline),
    }
```
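The definition of `PipelineSpec` is not shown here. Judging from its use in the example above, it is presumably a small record type; a plausible sketch (field names inferred from the example, everything else an assumption) is:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass(frozen=True)
class PipelineSpec:
    """Sketch of a pipeline registry entry, inferred from the example above."""
    builder: Callable[..., Any]       # function that returns a TaskRunner
    requires_artifact: bool = False   # whether the pipeline needs --artifact
```

Keeping `requires_artifact` on the spec lets the CLI validate `--artifact` generically instead of hard-coding pipeline names.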