20 changes: 9 additions & 11 deletions README.md
@@ -29,7 +29,7 @@ Intended use cases:
- .env.example — UI env variables (VITE_API_BASE_URL, etc.)
- Dockerfile — Builds static assets; serves via nginx with API upstream templating
- docker/entrypoint.sh, docker/nginx.conf.template — Runtime nginx config
- llm/ — Internal library for LLM interactions (OpenAI client, zod).
- llm/ — Internal library for LLM interactions (LiteLLM-compatible OpenAI client, zod).
- shared/ — Shared types/helpers for UI/Server.
- json-schema-to-zod/ — Internal helper library.
- docs/ — Platform documentation
@@ -58,7 +58,7 @@ Intended use cases:
- React 19, Vite 7, Tailwind CSS 4.1, Radix UI
- Storybook 10 for component documentation
- LLM:
- LiteLLM server (ghcr.io/berriai/litellm) or OpenAI (@langchain/* tooling)
- LiteLLM server (ghcr.io/berriai/litellm) providing adapters for upstream providers
- Tooling:
- pnpm 10.5 (corepack-enabled), Node.js 20
- Vitest 3 for testing; ESLint; Prettier
@@ -98,8 +98,7 @@ pnpm install
2) Configure environments:
- Server: copy packages/platform-server/.env.example to .env, then set:
- AGENTS_DATABASE_URL (required) — e.g. postgresql://agents:agents@localhost:5443/agents
- LLM_PROVIDER — litellm or openai (no default)
- LITELLM_BASE_URL, LITELLM_MASTER_KEY (required for LiteLLM path)
- LITELLM_BASE_URL, LITELLM_MASTER_KEY (required for LiteLLM provisioning)
- Optional: CORS_ORIGINS, VAULT_* (see packages/platform-server/src/core/services/config.service.ts and .env.example)
- UI: copy packages/platform-ui/.env.example to .env and set:
- VITE_API_BASE_URL — e.g. http://localhost:3010
@@ -135,11 +134,10 @@ Server listens on PORT (default 3010; see packages/platform-server/src/index.ts
- Use published images from GHCR (see .github/workflows/docker-ghcr.yml):
- ghcr.io/agynio/platform-server
- ghcr.io/agynio/platform-ui
- Example: server (env must include AGENTS_DATABASE_URL, LLM_PROVIDER, LITELLM_BASE_URL, LITELLM_MASTER_KEY):
- Example: server (env must include AGENTS_DATABASE_URL, LITELLM_BASE_URL, LITELLM_MASTER_KEY):
```bash
docker run --rm -p 3010:3010 \
-e AGENTS_DATABASE_URL=postgresql://agents:agents@host.docker.internal:5443/agents \
-e LLM_PROVIDER=litellm \
-e LITELLM_BASE_URL=http://host.docker.internal:4000 \
-e LITELLM_MASTER_KEY=sk-dev-master-1234 \
ghcr.io/agynio/platform-server:latest
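# Equivalent run, keeping secrets out of shell history (assumes ./server.env
# holds the same four variables shown above):
#   docker run --rm -p 3010:3010 --env-file ./server.env ghcr.io/agynio/platform-server:latest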
@@ -156,11 +154,10 @@ docker run --rm -p 8080:80 \
Key environment variables (server) from packages/platform-server/.env.example and src/core/services/config.service.ts:
- Required:
- AGENTS_DATABASE_URL — Postgres connection for platform-server
- LLM_PROVIDER — litellm or openai
- LITELLM_BASE_URL — LiteLLM root URL (must not include /v1; default host in docker-compose is 127.0.0.1:4000)
- LITELLM_MASTER_KEY — admin key for LiteLLM
- Optional LLM:
- OPENAI_API_KEY, OPENAI_BASE_URL
- LITELLM_MASTER_KEY — admin key for LiteLLM (virtual key alias `agyn_key` is provisioned automatically)
- Optional LiteLLM tuning:
- LITELLM_MODELS, LITELLM_KEY_DURATION, LITELLM_MAX_BUDGET, LITELLM_RPM_LIMIT, LITELLM_TPM_LIMIT, LITELLM_TEAM_ID
- Graph store:
- GRAPH_REPO_PATH (default ./data/graph)
- GRAPH_BRANCH (default graph-state)
@@ -176,6 +173,7 @@ Key environment variables (server) from packages/platform-server/.env.example an
- NCPS_URL_SERVER, NCPS_URL_CONTAINER (default http://ncps:8501)
- NCPS_PUBKEY_PATH (default /pubkey), fetch/refresh/backoff settings
- NIX_ALLOWED_CHANNELS, NIX_* cache limits
- `/api/nix/resolve-repo` supports public GitHub repositories only; private repositories are not supported.
- CORS:
- CORS_ORIGINS — comma-separated allowed origins
- Misc:
@@ -259,7 +257,7 @@ pnpm --filter @agyn/platform-server run prisma:generate
- Local compose: docker-compose.yml includes all supporting services required for dev workflows.
- Server container:
- Image: ghcr.io/agynio/platform-server
- Required env: AGENTS_DATABASE_URL, LLM_PROVIDER, LITELLM_BASE_URL, LITELLM_MASTER_KEY, optional Vault and CORS
- Required env: AGENTS_DATABASE_URL, LITELLM_BASE_URL, LITELLM_MASTER_KEY (optional Vault and CORS vars supported)
- Exposes 3010; healthcheck verifies TCP connectivity
- UI container:
- Image: ghcr.io/agynio/platform-ui
14 changes: 10 additions & 4 deletions docs/contributing/style_guides.md
@@ -82,8 +82,8 @@ Our repo currently uses:

```ts
// Bad: implicit any, unvalidated env, side effects in module scope
const key = process.env.OPENAI_API_KEY; // string | undefined
export const client = new OpenAI({ apiKey: key });
const masterKey = process.env.LITELLM_MASTER_KEY; // string | undefined
export const client = new OpenAI({ apiKey: masterKey, baseURL: process.env.LITELLM_BASE_URL });

export function handle(data) {
return data.id;
@@ -94,15 +94,21 @@ export function handle(data) {
// Good: validated config, explicit types, controlled side effects
import { z } from 'zod';

const Config = z.object({ OPENAI_API_KEY: z.string().min(1) });
const Config = z.object({
LITELLM_BASE_URL: z.string().url(),
LITELLM_MASTER_KEY: z.string().min(1),
});
const cfg = Config.parse(process.env);

export interface Item { id: string }
export function getId(item: Item): string {
return item.id;
}

export const client = new OpenAI({ apiKey: cfg.OPENAI_API_KEY });
export const client = new OpenAI({
apiKey: cfg.LITELLM_MASTER_KEY,
baseURL: `${cfg.LITELLM_BASE_URL.replace(/\/$/, '')}/v1`,
});
```
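
As a usage sketch continuing the example above (the `gpt-5` name is an assumed LiteLLM model alias, not something the repo guarantees), downstream code calls the exported client without touching `process.env` again:

```ts
// Usage sketch: reuse the validated client; no further env access needed.
// `gpt-5` is assumed to be a model alias configured in LiteLLM.
export async function ping(): Promise<string> {
  const completion = await client.chat.completions.create({
    model: 'gpt-5',
    messages: [{ role: 'user', content: 'ping' }],
  });
  return completion.choices[0]?.message?.content ?? '';
}
```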

## Tooling
17 changes: 6 additions & 11 deletions docs/litellm-setup.md
@@ -21,26 +21,24 @@ Networking and ports
- To expose on your LAN (not recommended without auth/TLS), edit docker-compose.yml and change the litellm ports mapping to either `0.0.0.0:4000:4000` or just `4000:4000`.

Initial configuration (via UI)
- Create a provider key: add your real OpenAI (or other) API key under Providers.
- Create a provider key: add your real upstream API key (e.g., OpenAI, Anthropic, Azure OpenAI) under Providers.
- Create a model alias if desired:
- Choose any name you prefer (e.g., gpt-5) and point it to a real backend model target (e.g., gpt-4o, gpt-4o-mini, or openai/gpt-4o).
- In the Agents UI, the Model field now accepts free-text. Enter either your alias name (e.g., gpt-5) or a provider-prefixed identifier (e.g., openai/gpt-4o-mini). The UI does not validate availability; runtime will surface errors if misconfigured.
- Choose any name you prefer (e.g., gpt-5) and point it to a real backend model target (e.g., openai/gpt-4o-mini).
- In the Agents UI, the Model field accepts free-text. Enter either your alias name (e.g., gpt-5) or a provider-prefixed identifier (e.g., openai/gpt-4o-mini). The UI does not validate availability; runtime surfaces errors if misconfigured.

App configuration: LiteLLM admin requirements
- Set `LLM_PROVIDER=litellm` on the platform server.
- LiteLLM administration env vars are required at boot:
- `LITELLM_BASE_URL=http://localhost:4000`
- `LITELLM_MASTER_KEY=sk-<master-key>`
- The server provisions virtual keys by calling LiteLLM's admin API. Missing either env produces a `503 litellm_missing_config` response for the LLM settings API and disables UI writes.
- Optional overrides for generated virtual keys:
- The platform server always operates against LiteLLM. Missing either env produces a `503 litellm_missing_config` response for the LLM settings API and disables UI writes.
- Virtual keys are provisioned automatically under the fixed alias `agyn_key`. Optional overrides for generated keys:
- `LITELLM_MODELS=gpt-5` (comma-separated list)
- `LITELLM_KEY_DURATION=30d`
- `LITELLM_KEY_ALIAS=agents-${process.pid}`
- Limits: `LITELLM_MAX_BUDGET`, `LITELLM_RPM_LIMIT`, `LITELLM_TPM_LIMIT`, `LITELLM_TEAM_ID`
- Runtime requests use `${LITELLM_BASE_URL}/v1` with either the master key or the generated virtual key.
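
For a quick sanity check that runtime requests reach `${LITELLM_BASE_URL}/v1`, a minimal sketch (assuming the default local base URL from docker-compose and an alias named `gpt-5`; a provisioned virtual key works in place of the master key):

```ts
import OpenAI from 'openai';

// Connectivity sketch against the local LiteLLM proxy; adjust base URL, key, and model to your setup.
const client = new OpenAI({
  apiKey: process.env.LITELLM_MASTER_KEY ?? 'sk-dev-master-1234',
  baseURL: 'http://localhost:4000/v1',
});

async function main(): Promise<void> {
  const res = await client.chat.completions.create({
    model: 'gpt-5', // alias or provider-prefixed id, e.g. openai/gpt-4o-mini
    messages: [{ role: 'user', content: 'ping' }],
  });
  console.log(res.choices[0]?.message?.content);
}

main().catch((err) => {
  console.error(err); // misconfigured aliases surface here as provider errors
  process.exit(1);
});
```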

Model naming guidance
- Use the exact LiteLLM model name as configured in the LiteLLM UI. For OpenAI via LiteLLM, provider prefixes may be required (e.g., openai/gpt-4o-mini).
- Use the exact LiteLLM model name as configured in the LiteLLM UI (e.g., openai/gpt-4o-mini).
- Aliases are supported; enter the alias in the UI if you created one (e.g., gpt-5).
- Provider identifiers should match the canonical keys exposed by LiteLLM's `/public/providers` endpoint. The platform normalizes a few historical aliases (for example, `azure_openai` now maps to `azure`), but using the official key avoids sync errors.
- Provider names are handled case-insensitively and persisted as lowercase canonical keys.
@@ -50,9 +48,6 @@ Agent configuration behavior
- Agents respect the configured model end-to-end. If you set a model in the Agent configuration, the runtime binds that model to both the CallModel and Summarization nodes and will not silently fall back to the default (gpt-5).
- Ensure the chosen model or alias exists in LiteLLM; misconfigured names will surface as runtime errors from the provider.

Direct OpenAI mode
- Set `LLM_PROVIDER=openai` and provide `OPENAI_API_KEY` (and optional `OPENAI_BASE_URL`). No LiteLLM envs are read in this mode.

Persistence verification
- The LiteLLM DB persists to the named volume litellm_pgdata.
- Stop and start services; your providers, virtual keys, and aliases should remain.
13 changes: 5 additions & 8 deletions docs/product-spec.md
@@ -102,14 +102,11 @@ Upgrade and migration

Configuration matrix (server env vars)
- Required
- GITHUB_APP_ID
- GITHUB_APP_PRIVATE_KEY (PEM; multiline ok)
- GITHUB_INSTALLATION_ID
- GH_TOKEN
- LLM_PROVIDER (litellm | openai)
- If `LLM_PROVIDER=litellm`: LITELLM_BASE_URL and LITELLM_MASTER_KEY
- If `LLM_PROVIDER=openai`: OPENAI_API_KEY (OPENAI_BASE_URL optional)
- AGENTS_DATABASE_URL
- LITELLM_BASE_URL (LiteLLM root without /v1)
- LITELLM_MASTER_KEY (admin key; virtual key alias `agyn_key` is managed automatically)
- Optional
- GITHUB_APP_ID / GITHUB_APP_PRIVATE_KEY / GITHUB_INSTALLATION_ID / GH_TOKEN (only for GitHub App integrations)
- GRAPH_REPO_PATH (default ./data/graph)
- GRAPH_BRANCH (default graph-state)
- GRAPH_AUTHOR_NAME / GRAPH_AUTHOR_EMAIL
@@ -131,7 +128,7 @@ HTTP API and sockets (pointers)
Runbooks
- Local dev
- Prereqs: Node 18+, pnpm, Docker, Postgres.
- Set: LLM_PROVIDER=litellm, LITELLM_BASE_URL, LITELLM_MASTER_KEY, GITHUB_*, GH_TOKEN, AGENTS_DATABASE_URL. Optional VAULT_* and DOCKER_MIRROR_URL.
- Set: AGENTS_DATABASE_URL, LITELLM_BASE_URL, LITELLM_MASTER_KEY. Optional: VAULT_*, DOCKER_MIRROR_URL, GitHub App env vars when integrations are enabled.
- Start deps (compose or local Postgres)
- Server: pnpm -w -F @agyn/platform-server dev
- UI: pnpm -w -F @agyn/platform-ui dev
10 changes: 7 additions & 3 deletions packages/platform-server/.env.example
@@ -10,12 +10,16 @@
# Node id (nodeId) is required for deterministic upsert and is provided by the Agent node.
AGENTS_DATABASE_URL=

# LLM provider must be explicit: 'openai' or 'litellm'. No default.
LLM_PROVIDER=

# LiteLLM admin setup (replace master key with your actual secret)
LITELLM_BASE_URL=http://127.0.0.1:4000
LITELLM_MASTER_KEY=sk-dev-master-1234
# Optional LiteLLM tuning (virtual key alias agyn_key is automatic)
# LITELLM_MODELS=gpt-4o
# LITELLM_KEY_DURATION=30d
# LITELLM_MAX_BUDGET=100
# LITELLM_RPM_LIMIT=600
# LITELLM_TPM_LIMIT=90000
# LITELLM_TEAM_ID=

# Optional: GitHub integration (App or PAT). Safe to omit for local dev.
# GITHUB_APP_ID=
10 changes: 3 additions & 7 deletions packages/platform-server/README.md
@@ -104,8 +104,7 @@ Persistent conversation state (Prisma)
- `pnpm --filter @agyn/platform-server prisma studio`
- Best-effort: if AGENTS_DATABASE_URL is not set or DB errors occur, reducers fall back to in-memory only.
- Local dev:
- LLM_PROVIDER must be set explicitly to 'openai' or 'litellm'. There is no default.
- When `LLM_PROVIDER=litellm`, the server expects `LITELLM_BASE_URL` and `LITELLM_MASTER_KEY`.
- Provide `LITELLM_BASE_URL` and `LITELLM_MASTER_KEY` for LiteLLM administration.
- In docker-compose development the admin base defaults to `http://127.0.0.1:4000` if unset.
- For all other environments, set an explicit `LITELLM_BASE_URL` and master key.

@@ -118,16 +117,13 @@ LITELLM_BASE_URL=http://127.0.0.1:4000
LITELLM_MASTER_KEY=sk-dev-master-1234
```

Replace `sk-dev-master-1234` with your actual LiteLLM master key if it differs.
Replace `sk-dev-master-1234` with your actual LiteLLM master key if it differs. The server provisions a virtual key using the fixed alias `agyn_key`; override TTL, allowed models, and rate limits via `LITELLM_KEY_DURATION`, `LITELLM_MODELS`, `LITELLM_MAX_BUDGET`, `LITELLM_RPM_LIMIT`, `LITELLM_TPM_LIMIT`, and `LITELLM_TEAM_ID`.

## Context item payload guard

LiteLLM call logging, summarization, and tool execution persist context items as JSON blobs inside Postgres. The persistence layer now strips all `\u0000` (null bytes) from `contentText`, `contentJson`, and `metadata` prior to writes so Prisma does not reject the payload.

- Sanitization runs automatically for every `contextItem.create`/`update`.
- Enable a hard guard during development by setting `CONTEXT_ITEM_NULL_GUARD=1`. When the guard is active the server throws `ContextItemNullByteGuardError` if any unsanitized payload reaches the repository, ensuring new call sites cannot bypass the sanitizer.

Set the flag while running targeted tests or during local debugging to immediately catch regressions that would otherwise surface as Prisma `null byte in string` errors at runtime.
- Sanitization runs automatically for every `contextItem.create`/`update`, and the null-byte guard is always enforced (no runtime toggle).
- GitHub integration is optional. If no GitHub env is provided, the server boots and logs that GitHub is disabled. Any GitHub-dependent feature will error at runtime until credentials are configured.
- Shell tool streaming persistence:
- Tool stdout/stderr chunks are stored via Prisma when the `tool_output_*` tables exist.
@@ -24,7 +24,7 @@ import { LiveGraphRuntime } from '../src/graph-core/liveGraph.manager';
import { ConfigService, configSchema } from '../src/core/services/config.service';
import { LLMSettingsService } from '../src/settings/llm/llmSettings.service';

process.env.LLM_PROVIDER = process.env.LLM_PROVIDER || 'litellm';
process.env.AGENTS_DATABASE_URL = process.env.AGENTS_DATABASE_URL || 'postgres://localhost:5432/test';
process.env.NCPS_ENABLED = process.env.NCPS_ENABLED || 'false';
process.env.CONTAINERS_CLEANUP_ENABLED = process.env.CONTAINERS_CLEANUP_ENABLED || 'false';
@@ -164,7 +163,6 @@ describe('App bootstrap smoke test', () => {

const configService = new ConfigService().init(
configSchema.parse({
llmProvider: process.env.LLM_PROVIDER || 'litellm',
litellmBaseUrl: process.env.LITELLM_BASE_URL || 'http://127.0.0.1:4000',
litellmMasterKey: process.env.LITELLM_MASTER_KEY || 'sk-dev-master-1234',
agentsDatabaseUrl: process.env.AGENTS_DATABASE_URL || 'postgres://localhost:5432/test',
@@ -11,14 +11,12 @@ import { ConfigService } from '../src/core/services/config.service';
describe('LLM settings controller (admin-status endpoint)', () => {
let app: NestFastifyApplication;
const previousEnv = {
llmProvider: process.env.LLM_PROVIDER,
agentsDbUrl: process.env.AGENTS_DATABASE_URL,
litellmBaseUrl: process.env.LITELLM_BASE_URL,
litellmMasterKey: process.env.LITELLM_MASTER_KEY,
};

beforeAll(async () => {
process.env.LLM_PROVIDER = 'litellm';
process.env.AGENTS_DATABASE_URL = 'postgres://localhost:5432/test';
process.env.LITELLM_BASE_URL = process.env.LITELLM_BASE_URL || 'http://127.0.0.1:4000';
process.env.LITELLM_MASTER_KEY = process.env.LITELLM_MASTER_KEY || 'sk-dev-master-1234';
@@ -38,7 +36,6 @@ describe('LLM settings controller (admin-status endpoint)', () => {
afterAll(async () => {
await app.close();
ConfigService.clearInstanceForTest();
process.env.LLM_PROVIDER = previousEnv.llmProvider;
process.env.AGENTS_DATABASE_URL = previousEnv.agentsDbUrl;
process.env.LITELLM_BASE_URL = previousEnv.litellmBaseUrl;
process.env.LITELLM_MASTER_KEY = previousEnv.litellmMasterKey;
@@ -11,14 +11,12 @@ import type { LiteLLMModelRecord } from '../src/settings/llm/types';
describe('LLM settings controller (models endpoint)', () => {
let app: NestFastifyApplication;
const previousEnv = {
llmProvider: process.env.LLM_PROVIDER,
agentsDbUrl: process.env.AGENTS_DATABASE_URL,
litellmBaseUrl: process.env.LITELLM_BASE_URL,
litellmMasterKey: process.env.LITELLM_MASTER_KEY,
};

beforeAll(async () => {
process.env.LLM_PROVIDER = 'litellm';
process.env.AGENTS_DATABASE_URL = 'postgres://localhost:5432/test';
process.env.LITELLM_BASE_URL = process.env.LITELLM_BASE_URL || 'http://127.0.0.1:4000';
process.env.LITELLM_MASTER_KEY = process.env.LITELLM_MASTER_KEY || 'sk-dev-master-1234';
@@ -39,7 +37,6 @@ describe('LLM settings controller (models endpoint)', () => {
afterAll(async () => {
await app.close();
ConfigService.clearInstanceForTest();
process.env.LLM_PROVIDER = previousEnv.llmProvider;
process.env.AGENTS_DATABASE_URL = previousEnv.agentsDbUrl;
process.env.LITELLM_BASE_URL = previousEnv.litellmBaseUrl;
process.env.LITELLM_MASTER_KEY = previousEnv.litellmMasterKey;
@@ -14,7 +14,6 @@ const respondJson = (payload: unknown, init?: ResponseInit) =>

describe('LiteLLMProvisioner bootstrap (DI smoke)', () => {
const requiredEnv: Record<string, string> = {
LLM_PROVIDER: 'litellm',
LITELLM_BASE_URL: 'http://127.0.0.1:4000',
LITELLM_MASTER_KEY: 'sk-test',
AGENTS_DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/agents_test',
@@ -76,7 +76,6 @@ const createAgentFixture = async () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'openai',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
@@ -40,7 +40,6 @@ describe('Agent busy gating (wait mode)', () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'openai',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
@@ -34,7 +34,6 @@ describe('AgentNode error termination handling', () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'openai',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
@@ -115,7 +115,6 @@ const createAgentFixture = async () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'openai',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
@@ -42,7 +42,6 @@ describe('AgentNode termination auto-send', () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'litellm',
litellmBaseUrl: 'http://127.0.0.1:4000',
litellmMasterKey: 'sk-dev-master-1234',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
@@ -35,7 +35,6 @@ describe('AgentNode termination flow', () => {
provide: ConfigService,
useValue: new ConfigService().init(
configSchema.parse({
llmProvider: 'openai',
agentsDatabaseUrl: 'postgres://user:pass@host/db',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
@@ -30,7 +30,6 @@ class StubProvisioner extends LLMProvisioner {

describe('Agent thread model binding', () => {
const baseConfig = {
llmProvider: 'openai',
litellmBaseUrl: 'http://localhost:4000',
litellmMasterKey: 'sk-test',
} as Partial<ConfigService>;