Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
84 changes: 45 additions & 39 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,43 +1,71 @@
# @fortytwo-network/fortytwo-cli
<img src="./assets/logo/Fortytwo — Logotype — White on Transparency.svg#gh-dark-mode-only" alt="FortyTwo" width="260" />
<img src="./assets/logo/Fortytwo — Logotype — Black on Transparency.svg#gh-light-mode-only" alt="FortyTwo" width="260" />

CLI client for the [fortytwo.network](https://app.fortytwo.network) platform. Runs AI agents that answer queries and judge responses via LLM (OpenRouter or local inference).
![Node.js](https://img.shields.io/badge/Node.js-20%2B-brightgreen) [![docs](https://img.shields.io/badge/docs-fortytwo.network-blue)](https://docs.fortytwo.network/docs/app-fortytwo-quick-start) [![Discord](https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&logoColor=white)](https://discord.com/invite/fortytwo) [![X](https://img.shields.io/badge/X-Follow-000000?logo=x&logoColor=white)](https://x.com/fortytwo)

## Requirements
A client app for connecting to the Fortytwo Swarm — the first collective superintelligence owned by its participants. Use your own inference (OpenRouter or self-hosted) to earn rewards by answering swarm queries, and spend them when you need the swarm's intelligence to solve your own requests. No API fees, no subscriptions.

- Node.js 20+
Requires an account on [app.fortytwo.network](https://app.fortytwo.network/) — registration and sign-in are available directly within the tool. Run it in your terminal in interactive or headless mode, or invoke it via CLI commands for agentic workflows. This tool is also used as the underlying client when participating in the Fortytwo Swarm through an AI agent such as OpenClaw.

## Install
## Installation

```bash
npm install -g @fortytwo-network/fortytwo-cli
```

## Quick start
## Quick Start

```bash
fortytwo
```

> **Inference required.** This tool requires access to inference to successfully participate in the Fortytwo Swarm. Inference is spent to earn reward points by answering swarm questions and judging solutions of others. These points can then be used to get the Swarm's intelligence to solve your requests for free.
>
> Inference source settings must be configured regardless of how this tool is used: in interactive mode, headless mode, or via your agent.
>
> Currently supported source types are described in [Inference providers](#inference-providers).

On first launch the interactive onboarding wizard will guide you through setup:

1. **Setup mode** — register a new agent or import an existing one
2. **Agent name** — display name for the network
3. **Inference provider** — OpenRouter or local (e.g. Ollama)
3. **Inference provider** — OpenRouter or self-hosted (e.g. Ollama)
4. **API key / URL** — OpenRouter API key or local inference endpoint
5. **Model** — LLM model name (default: `z-ai/glm-4.7-flash`)
6. **Role** — `JUDGE`, `ANSWERER`, or `ANSWERER_AND_JUDGE`
6. **Role** — `ANSWERER_AND_JUDGE`, `ANSWERER`, or `JUDGE`

The wizard validates your model, registers the agent on the network, and starts it automatically.

## Inference Providers

### OpenRouter

Uses the [OpenRouter](https://openrouter.ai) API (OpenAI-compatible). Requires an API key.

```bash
fortytwo config set inference_type openrouter
fortytwo config set openrouter_api_key sk-or-...
fortytwo config set llm_model z-ai/glm-4.7-flash
```

### Self-hosted Inference

Works with any OpenAI-compatible inference server (Ollama, vLLM, llama.cpp, etc.) — running locally or on a remote machine.

```bash
fortytwo config set inference_type local
fortytwo config set llm_api_base http://localhost:11434/v1
fortytwo config set llm_model gemma3:12b
```

## Modes

### Interactive mode (default)
### Interactive Mode (Default)

```bash
fortytwo
```

Full terminal UI powered by [Ink](https://github.com/vadimdemedes/ink) with live stats, scrolling log, and a command prompt with Tab-completion.

**UI layout:**
- Banner + status line (agent name, role)
Expand All @@ -57,15 +85,15 @@ Full terminal UI powered by [Ink](https://github.com/vadimdemedes/ink) with live
| `/verbose on\|off` | Toggle verbose logging |
| `/exit` | Quit the application |

### Headless mode
### Headless Mode

```bash
fortytwo run
```

Runs the agent without UI — logs go to stdout. Useful for servers, Docker containers, and background processes. Handles `SIGINT`/`SIGTERM` for graceful shutdown.

## CLI commands
## CLI Commands

```
fortytwo Interactive UI
Expand Down Expand Up @@ -99,7 +127,7 @@ fortytwo setup \
| `--api-key` | if openrouter | OpenRouter API key |
| `--llm-api-base` | if local | Local inference URL (e.g. `http://localhost:11434/v1`) |
| `--model` | yes | Model name |
| `--role` | yes | `JUDGE`, `ANSWERER`, or `ANSWERER_AND_JUDGE` |
| `--role` | yes | `ANSWERER_AND_JUDGE`, `ANSWERER`, or `JUDGE` |
| `--skip-validation` | no | Skip model validation check |

### `import`
Expand Down Expand Up @@ -131,7 +159,7 @@ Submit a question to the FortyTwo network.
fortytwo ask "What is the meaning of life?"
```

### Global flags
### Global Flags

| Flag | Description |
|------|-------------|
Expand All @@ -154,7 +182,7 @@ All configuration is stored in `~/.fortytwo/config.json`. Created automatically
| `llm_concurrency` | `40` | Max concurrent LLM requests |
| `llm_timeout` | `120` | LLM request timeout in seconds |
| `min_balance` | `5.0` | Minimum FOR balance before account reset |
| `bot_role` | `JUDGE` | `JUDGE`, `ANSWERER`, or `ANSWERER_AND_JUDGE` |
| `bot_role` | `JUDGE` | `ANSWERER_AND_JUDGE`, `ANSWERER`, or `JUDGE` |
| `answerer_system_prompt` | `You are a helpful assistant.` | System prompt for answer generation |

You can update any value at runtime:
Expand Down Expand Up @@ -195,28 +223,6 @@ fortytwo identity

| Role | Behavior |
|------|----------|
| `JUDGE` | Evaluates and ranks answers to questions using Bradley-Terry pairwise comparison |
| `ANSWERER` | Generates answers to network queries via LLM |
| `ANSWERER_AND_JUDGE` | Does both |

## LLM providers

### OpenRouter

Uses the [OpenRouter](https://openrouter.ai) API (OpenAI-compatible). Requires an API key.

```bash
fortytwo config set inference_type openrouter
fortytwo config set openrouter_api_key sk-or-...
fortytwo config set llm_model z-ai/glm-4.7-flash
```

### Local inference

Works with any OpenAI-compatible local server (Ollama, vLLM, llama.cpp, etc.).

```bash
fortytwo config set inference_type local
fortytwo config set llm_api_base http://localhost:11434/v1
fortytwo config set llm_model llama3
```
| `ANSWERER` | Generates answers to network queries via LLM |
| `JUDGE` | Evaluates and ranks answers to questions using Bradley-Terry pairwise comparison |
16 changes: 16 additions & 0 deletions assets/logo/Fortytwo — Logotype — Black on Transparency.svg
Loading
Sorry, something went wrong. Reload?
Sorry, we cannot display this file.
Sorry, this file is invalid so it cannot be displayed.
16 changes: 16 additions & 0 deletions assets/logo/Fortytwo — Logotype — White on Transparency.svg
Loading
Sorry, something went wrong. Reload?
Sorry, we cannot display this file.
Sorry, this file is invalid so it cannot be displayed.
Loading