**File:** `documentation/docs/tutorials/configure-agent-llm-configuration.mdx` (new file, 249 additions)
---
sidebar_position: 6
---

# Configure LLM Providers for an Agent

Agents can be configured to use one or more LLM Service Providers registered at the organization level. The configuration process differs slightly between **Platform-hosted** and **External** agents,
but both follow the same pattern: attach an org-level provider to the agent with an optional name, description, and guardrails.

## Prerequisites

- At least one LLM Service Provider registered at the org level (see [Register an LLM Service Provider](./register-llm-service-provider.mdx))
- An agent created in a project (Platform-hosted or External)

---

## Overview: Agent Types

| Type | Description |
|---|---|
| **Platform** | Agent code is built and deployed by the platform from a GitHub repository. The platform injects LLM credentials as environment variables. |
| **External** | Agent is deployed and managed externally. The platform registers it and provides the invoke URL + API key for the LLM provider. |

---

## Configuring LLM for a Platform-Hosted Agent

### Step 1: Open the Agent

1. Navigate to your project (**Projects** → select project → **Agents**).
2. Click on a **Platform**-tagged agent.
3. In the left sidebar, click **Configure**.

### Step 2: Add an LLM Provider

The **Configure** page displays the **LLM Providers** section listing all LLM providers currently attached to this agent.

1. Click **+ Add Provider**.
2. Fill in the **Basic Details**:

| Field | Description | Example |
|---|---|---|
| **Name** | A logical name for this LLM binding within the agent | `OpenAI GPT5` |
| **Description** | Optional description | `Primary reasoning model` |

3. Under **LLM Service Provider**, click **Select a Provider**.
- A side panel opens listing all org-level LLM Service Providers with their template, rate limiting status, and guardrails.
- Select the desired provider and close the panel.

4. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach guardrails specific to this agent's use of the provider.

5. Click **Save**.

### Step 3: Use the Provider in Agent Code

After saving, the platform generates **environment variables** that are automatically injected into the agent's deployment runtime. You can view these on the LLM provider detail page under **Environment Variables References**:

| Variable Name | Description |
|---|---|
| `<NAME>_API_KEY` | API Key for authenticating with the LLM provider |
| `<NAME>_BASE_URL` | Base URL of the LLM Provider API endpoint |

Where `<NAME>` is derived from the provider name (uppercased, e.g., `OPENAI_GPT5` for a provider named `OpenAI GPT5`).
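As an illustration, the derivation can be sketched in Python (assuming the platform simply uppercases the binding name and replaces spaces with underscores, as in the example above):

```python
def binding_env_prefix(name: str) -> str:
    """Derive the environment-variable prefix for an LLM binding.

    Assumption: the platform uppercases the binding name and replaces
    spaces with underscores (e.g. "OpenAI GPT5" -> "OPENAI_GPT5").
    """
    return name.strip().upper().replace(" ", "_")

# "OpenAI GPT5" -> variables OPENAI_GPT5_API_KEY and OPENAI_GPT5_BASE_URL
prefix = binding_env_prefix("OpenAI GPT5")
```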

If your agent already reads a different environment variable name, edit the system-provided variable name and click **Save**.

**Python code snippet** (shown in the UI):

```python
import os
from openai import OpenAI

# Read the platform-injected credentials for the "OpenAI GPT5" binding.
api_key = os.environ.get('OPENAI_GPT5_API_KEY')
base_url = os.environ.get('OPENAI_GPT5_BASE_URL')

client = OpenAI(
    base_url=base_url,
    api_key="",  # the real key is sent via the API-Key header below
    default_headers={"API-Key": api_key, "Authorization": ""}
)
```

> **Note**: The platform also provides an **AI Prompt** snippet — a ready-made prompt you can paste into an AI coding assistant to automatically update your code to use the injected environment variables.

### Step 4: Build and Deploy

1. After configuring the LLM provider, click **Build** in the sidebar.
2. Click **Trigger a Build** to build the agent from its GitHub source.
3. Once the build completes, click **Deploy** to deploy to the target environment.
4. The deployed agent URL appears on the **Overview** page (e.g., `http://default-default.localhost:19080/agent-name`).

---

## Configuring LLM for an External Agent

### Step 1: Create and Register the Agent

1. Navigate to your project (**Projects** → select project → **Agents**).
2. Click **+ Add Agent**.
3. On the **Add a New Agent** screen, select **Externally-Hosted Agent**.
> This option is for connecting an existing agent running outside the platform to enable observability and governance.
4. Fill in the **Agent Details**:

| Field | Description | Example |
|---|---|---|
| **Name** | A unique identifier for the agent | `my-external-agent` |
| **Description** *(optional)* | Short description of what this agent does | `Customer support bot` |

5. Click **Register**.

After registration, the agent is created with status **Registered** and the **Setup Agent** panel opens automatically.

---

### Step 2: Instrument the Agent (Setup Agent)

The **Setup Agent** panel provides a **Zero-code Instrumentation Guide** to connect your agent to the platform for observability (traces). Select your language from the **Language** dropdown (Python or Ballerina).

#### Python

1. **Install the AMP instrumentation package**:

   ```bash
   pip install amp-instrumentation
   ```

   This installs the tooling needed to instrument your agent and export traces.

2. **Generate API Key** — choose a **Token Duration** (default: 1 year) and click **Generate**. Copy the token immediately — it will not be shown again.

3. **Set environment variables**:

   ```bash
   export AMP_OTEL_ENDPOINT="http://localhost:22893/otel"
   export AMP_AGENT_API_KEY="<your-generated-token>"
   ```

   This sets the trace endpoint and the agent-specific API key so traces can be exported securely.
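A minimal startup check (a sketch, not part of the AMP package) can fail fast when either variable is missing, rather than silently dropping traces:

```python
import os

# Variables required by the AMP instrumentation (from the steps above).
REQUIRED_VARS = ("AMP_OTEL_ENDPOINT", "AMP_AGENT_API_KEY")

def missing_amp_vars() -> list[str]:
    """Return the names of required AMP variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

if missing_amp_vars():
    print("Warning: traces will not be exported; missing:",
          ", ".join(missing_amp_vars()))
```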


#### Ballerina

1. **Import the Amp module** in your Ballerina program:

   ```ballerina
   import ballerinax/amp as _;
   ```

2. **Add the following to `Ballerina.toml`**:

   ```toml
   [build-options]
   observabilityIncluded = true
   ```

3. **Update `Config.toml`**:

   ```toml
   [ballerina.observe]
   tracingEnabled = true
   tracingProvider = "amp"
   ```

4. **Generate API Key** — choose a **Token Duration** and click **Generate**. Copy the token immediately — it will not be shown again.

5. **Set environment variables**:

   ```bash
   export BAL_CONFIG_VAR_BALLERINAX_AMP_OTELENDPOINT="http://localhost:22893/otel"
   export BAL_CONFIG_VAR_BALLERINAX_AMP_APIKEY="<your-generated-token>"
   ```

You can reopen the Setup Agent panel at any time from the agent **Overview** page by clicking **Setup Agent**.

---

### Step 3: Add an LLM Provider

1. In the left sidebar, click **Configure**.
2. The **Configure Agent** page shows the **LLM Providers** section (empty for a new agent).
3. Click **+ Add Provider**.
4. Fill in the **Basic Details**:

| Field | Description | Example |
|---|---|---|
| **Name** | A logical name for this LLM binding | `openai-provider` |
| **Description** | Optional description | `Main model for customer queries` |

5. Under **LLM Service Provider**, click **Select a Provider**.
- A side panel opens listing all org-level LLM Service Providers, showing the template (e.g., OpenAI), deployment time, rate limiting status, and guardrails.
- Select the desired provider.

6. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach content safety policies.

7. Click **Save**.

---

### Step 4: Connect Your Agent Code to the LLM

Immediately after saving, the provider detail page is shown with a **Connect to your LLM Provider** section containing everything needed to call the LLM from your agent code:

| Field | Description |
|---|---|
| **Endpoint URL** | The gateway URL for this provider — use this as the base URL in your LLM client |
| **Header Name** | The HTTP header to pass the API key (`API-Key`) |
| **API Key** | The generated client key — **copy it now**, it will not be shown again |
| **Example cURL** | A ready-to-use cURL command showing the Endpoint URL, Header Name, and API Key together |

Example cURL:

```bash
curl -X POST <endpoint-url> \
  --header "API-Key: <your-api-key>" \
  -d '{"your": "data"}'
```

Configure your agent's LLM client using the Endpoint URL as the base URL and pass the API Key in the `API-Key` header on every request.
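The same call can be sketched in Python using only the standard library. The endpoint value, environment variable names, and payload shape here are placeholders for illustration, not the provider's actual API schema:

```python
import json
import os
import urllib.request

# Placeholders -- substitute the Endpoint URL and API Key shown in the
# "Connect to your LLM Provider" section.
endpoint_url = os.environ.get("LLM_ENDPOINT_URL", "http://localhost:19080/llm-provider")
api_key = os.environ.get("LLM_API_KEY", "<your-api-key>")

payload = json.dumps({"your": "data"}).encode("utf-8")

# Build a POST request carrying the key in the API-Key header.
request = urllib.request.Request(
    endpoint_url,
    data=payload,
    headers={"API-Key": api_key, "Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # send once the gateway is reachable
```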

Below the connection details, the page also shows:

- **LLM Service Provider**: the linked org-level provider (name, template, rate limiting and guardrails status)
- **Guardrails**: agent-level guardrails attached to this LLM binding

### Step 5: Run the Agent

Run your agent as usual. For example, a Python agent started with AMP instrumentation:

```bash
amp-instrument python main.py
```

---

## Managing Attached LLM Providers

From the **Configure Agent** page, the LLM Providers table shows all attached providers with:

- **Name**: The logical name given to this LLM binding.
- **Description**: Optional description.
- **Created**: When the binding was created.
- **Actions**: Delete icon to remove the provider from the agent.

Multiple providers can be attached to a single agent, allowing the agent code to use different LLMs for different tasks by referencing their respective environment variable names (platform agents) or endpoint URLs and API keys (external agents).
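For a platform agent with two bindings, such task-based selection might look like the following sketch (the binding names and task labels are hypothetical):

```python
import os

# Hypothetical binding names; each maps to the env-var prefix the
# platform injects for that provider binding.
TASK_PROVIDERS = {
    "reasoning": "OPENAI_GPT5",
    "summarization": "CLAUDE_SONNET",
}

def credentials_for(task: str) -> tuple[str, str]:
    """Return (base_url, api_key) for the provider bound to a task."""
    prefix = TASK_PROVIDERS[task]
    return os.environ[f"{prefix}_BASE_URL"], os.environ[f"{prefix}_API_KEY"]
```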

---

## Notes

- LLM provider credentials are **never exposed** to agent code directly — only the injected environment variables are available at runtime.
- For platform agents, environment variables are re-injected on each deployment; no manual secret management is required.
- For external agents, the Endpoint URL routes traffic through the AI Gateway, enabling centralized rate limiting, access control, and guardrails configured at the org level.
- The external agent API Key shown after saving is a **one-time display** — it cannot be retrieved again. If lost, delete the LLM provider binding and re-add it to generate a new key.
- The **Setup Agent** instrumentation step is for observability (traces) only and is independent of LLM configuration.
- Guardrails added at the agent-LLM binding level are applied **in addition to** any guardrails configured on the provider itself.
**File:** `documentation/docs/tutorials/register-ai-gateway.mdx` (new file, 136 additions)
---
sidebar_position: 4
---

# Register an AI Gateway

AI Gateways are organization-level infrastructure components that route LLM traffic through a controlled proxy. You can register multiple gateways (e.g., for different environments or teams), and each LLM Service Provider is exposed through a gateway's invoke URL.

Agent Manager currently supports the **WSO2 AI Gateway** (https://github.com/wso2/api-platform/tree/gateway/v0.9.0/docs/ai-gateway).

## Prerequisites

Before registering a gateway, ensure you have:

- Admin access to the WSO2 Agent Manager Console
- One of the following available depending on your chosen deployment method:
- **Quick Start / Docker**: cURL, unzip, Docker installed and running
- **Virtual Machine**: cURL, unzip, and a Docker-compatible container runtime (Docker Desktop, Rancher Desktop, Colima, or Docker Engine + Compose plugin)
- **Kubernetes**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+

---

## Step 1: Navigate to AI Gateways

1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
2. Switch to the organization level by closing the project selection in the top navigation.
3. In the left sidebar, click **AI Gateways** under the **INFRASTRUCTURE** section.

> The AI Gateways page lists all registered gateways with their Name, Status, and Last Updated time.


Agent Manager ships with a pre-configured AI Gateway that is ready to use out of the box.


---

## Step 2: Add a New AI Gateway

1. Click the **+ Add AI Gateway** button (top right).
2. Fill in the **Gateway Details** form:

| Field | Description | Example |
|---|---|---|
| **Name** | A descriptive name for the gateway | `Production AI Gateway` |
| **Virtual Host** | The FQDN or IP address where the gateway will be reachable | `api.production.example.com` |
| **Critical production gateway** | Toggle to mark this gateway as critical for production deployments | Enabled / Disabled |

3. Click **Create AI Gateway**.

---

## Step 3: Configure and Start the Gateway

After creating the gateway, you are taken to the gateway detail page. It shows:

- **Virtual Host**: The internal cluster URL for the gateway runtime.
- **Environments**: The environments (e.g., `Default`) this gateway serves.

The **Get Started** section provides instructions to deploy the gateway process using one of the methods below.

### Quick Start (Docker)

**Prerequisites**: cURL, unzip, Docker installed and running.

**Step 1 – Download the Gateway**

```bash
curl -sLO https://github.com/wso2/api-platform/releases/download/ai-gateway/v0.9.0/ai-gateway-v0.9.0.zip && \
unzip ai-gateway-v0.9.0.zip
```

**Step 2 – Configure the Gateway**

Generate a registration token by clicking **Reconfigure** on the gateway detail page. This produces a `configs/keys.env` file with the token and connection details.

**Step 3 – Start the Gateway**

```bash
cd ai-gateway-v0.9.0
docker compose --env-file configs/keys.env up
```

---

### Virtual Machine

**Prerequisites**: cURL, unzip, and a Docker-compatible container runtime:

- Docker Desktop (Windows / macOS)
- Rancher Desktop (Windows / macOS)
- Colima (macOS)
- Docker Engine + Compose plugin (Linux)

Verify the runtime is available:

```bash
docker --version
docker compose version
```

Then follow the same **Download → Configure → Start** steps as Quick Start above.

---

### Kubernetes

**Prerequisites**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+.

**Configure**: Click **Reconfigure** to generate a gateway registration token.

**Install the Helm chart**:

```bash
helm install gateway oci://ghcr.io/wso2/api-platform/helm-charts/gateway --version 0.9.0 \
--set gateway.controller.controlPlane.host="" \
--set gateway.controller.controlPlane.port=443 \
--set gateway.controller.controlPlane.token.value="your-gateway-token" \
--set gateway.config.analytics.enabled=true
```

Replace `your-gateway-token` with the token generated in the Reconfigure step.

---

## Verifying the Gateway

Once running, the gateway appears in the **AI Gateways** list with status **Active**. The gateway detail page shows the virtual host URL, which is the base URL for all LLM provider invoke URLs routed through this gateway.

---

## Notes

- A **Default AI Gateway** is pre-provisioned in new organizations.
- Each gateway can serve multiple environments (e.g., `Default`, `Production`).
- The registration token (generated via **Reconfigure**) is environment-specific and must be kept secret.
- Marking a gateway as **Critical production gateway** helps signal its importance for operational monitoring.