From 05d0cbed9eaf31c49827fd1d0d81f6a888e842a2 Mon Sep 17 00:00:00 2001 From: Menaka Jayawardena Date: Thu, 19 Mar 2026 15:20:20 +0530 Subject: [PATCH] Add ai gateway and llm provider docs --- .../configure-agent-llm-configuration.mdx | 249 ++++++++++++++++++ .../docs/tutorials/register-ai-gateway.mdx | 136 ++++++++++ .../register-llm-service-provider.mdx | 152 +++++++++++ documentation/sidebars.ts | 5 +- .../configure-agent-llm-configuration.mdx | 249 ++++++++++++++++++ .../tutorials/register-ai-gateway.mdx | 136 ++++++++++ .../register-llm-service-provider.mdx | 152 +++++++++++ .../version-v0.9.x-sidebars.json | 5 +- 8 files changed, 1082 insertions(+), 2 deletions(-) create mode 100644 documentation/docs/tutorials/configure-agent-llm-configuration.mdx create mode 100644 documentation/docs/tutorials/register-ai-gateway.mdx create mode 100644 documentation/docs/tutorials/register-llm-service-provider.mdx create mode 100644 documentation/versioned_docs/version-v0.9.x/tutorials/configure-agent-llm-configuration.mdx create mode 100644 documentation/versioned_docs/version-v0.9.x/tutorials/register-ai-gateway.mdx create mode 100644 documentation/versioned_docs/version-v0.9.x/tutorials/register-llm-service-provider.mdx diff --git a/documentation/docs/tutorials/configure-agent-llm-configuration.mdx b/documentation/docs/tutorials/configure-agent-llm-configuration.mdx new file mode 100644 index 00000000..d8ee0968 --- /dev/null +++ b/documentation/docs/tutorials/configure-agent-llm-configuration.mdx @@ -0,0 +1,249 @@ +--- +sidebar_position: 6 +--- + +# Configure LLM Providers for an Agent + +Agents can be configured to use one or more LLM Service Providers registered at the organization level. The configuration process differs slightly between **Platform-hosted** and **External** agents, +but both follow the same pattern: attach an org-level provider to the agent with an optional name, description, and guardrails. 
+ +## Prerequisites + +- At least one LLM Service Provider registered at the org level (see [Register an LLM Service Provider](./register-llm-service-provider.mdx)) +- An agent created in a project (Platform-hosted or External) + +--- + +## Overview: Agent Types + +| Type | Description | +|---|---| +| **Platform** | Agent code is built and deployed by the platform from a GitHub repository. The platform injects LLM credentials as environment variables. | +| **External** | Agent is deployed and managed externally. The platform registers it and provides the invoke URL + API key for the LLM provider. | + +--- + +## Configuring LLM for a Platform-Hosted Agent + +### Step 1: Open the Agent + +1. Navigate to your project (**Projects** → select project → **Agents**). +2. Click on a **Platform**-tagged agent. +3. In the left sidebar, click **Configure**. + +### Step 2: Add an LLM Provider + +The **Configure** page displays the **LLM Providers** section listing all LLM providers currently attached to this agent. + +1. Click **+ Add Provider**. +2. Fill in the **Basic Details**: + + | Field | Description | Example | + |---|---|---| + | **Name** | A logical name for this LLM binding within the agent | `OpenAI GPT5` | + | **Description** | Optional description | `Primary reasoning model` | + +3. Under **LLM Service Provider**, click **Select a Provider**. + - A side panel opens listing all org-level LLM Service Providers with their template, rate limiting status, and guardrails. + - Select the desired provider and close the panel. + +4. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach guardrails specific to this agent's use of the provider. + +5. Click **Save**. + +### Step 3: Use the Provider in Agent Code + +After saving, the platform generates **environment variables** that are automatically injected into the agent's deployment runtime. 
You can view these on the LLM provider detail page under **Environment Variables References**:

| Variable Name | Description |
|---|---|
| `<NAME>_API_KEY` | API Key for authenticating with the LLM provider |
| `<NAME>_BASE_URL` | Base URL of the LLM Provider API endpoint |

Where `<NAME>` is derived from the provider name (uppercased, with spaces replaced by underscores; e.g., `OPENAI_GPT5` for a provider named `OpenAI GPT5`).

If your agent is already configured to read a different environment variable name, update the system-provided variable name and click **Save**.

**Python code snippet** (shown in the UI):

```python
import os
from openai import OpenAI

apikey = os.environ.get('OPENAI_GPT5_API_KEY')
url = os.environ.get('OPENAI_GPT5_BASE_URL')

client = OpenAI(
    base_url=url,
    api_key="",
    default_headers={"API-Key": apikey, "Authorization": ""}
)
```

> **Note**: The platform also provides an **AI Prompt** snippet — a ready-made prompt you can paste into an AI coding assistant to automatically update your code to use the injected environment variables.

### Step 4: Build and Deploy

1. After configuring the LLM provider, click **Build** in the sidebar.
2. Click **Trigger a Build** to build the agent from its GitHub source.
3. Once the build completes, click **Deploy** to deploy to the target environment.
4. The deployed agent URL appears on the **Overview** page (e.g., `http://default-default.localhost:19080/agent-name`).

---

## Configuring LLM for an External Agent

### Step 1: Create and Register the Agent

1. Navigate to your project (**Projects** → select project → **Agents**).
2. Click **+ Add Agent**.
3. On the **Add a New Agent** screen, select **Externally-Hosted Agent**.
   > This option is for connecting an existing agent running outside the platform to enable observability and governance.
4.
Fill in the **Agent Details**: + + | Field | Description | Example | + |---|---|---| + | **Name** | A unique identifier for the agent | `my-external-agent` | + | **Description** *(optional)* | Short description of what this agent does | `Customer support bot` | + +5. Click **Register**. + +After registration, the agent is created with status **Registered** and the **Setup Agent** panel opens automatically. + +--- + +### Step 2: Instrument the Agent (Setup Agent) + +The **Setup Agent** panel provides a **Zero-code Instrumentation Guide** to connect your agent to the platform for observability (traces). Select your language from the **Language** dropdown (Python or Ballerina). + +#### Python + +1. **Install the AMP instrumentation package**: + ```bash + pip install amp-instrumentation + ``` + Provides the ability to instrument your agent and export traces. + +2. **Generate API Key** — choose a **Token Duration** (default: 1 year) and click **Generate**. Copy the token immediately — it will not be shown again. + +3. **Set environment variables**: + ```bash + export AMP_OTEL_ENDPOINT="http://localhost:22893/otel" + export AMP_AGENT_API_KEY="" + ``` + Sets the agent endpoint and agent-specific API key so traces can be exported securely. + + +#### Ballerina + +1. **Import the Amp module** in your Ballerina program: + ```ballerina + import ballerinax/amp as _; + ``` + +2. **Add the following to `Ballerina.toml`**: + ```toml + [build-options] + observabilityIncluded = true + ``` + +3. **Update `Config.toml`**: + ```toml + [ballerina.observe] + tracingEnabled = true + tracingProvider = "amp" + ``` + +4. **Generate API Key** — choose a **Token Duration** and click **Generate**. Copy the token immediately. + +5. 
**Set environment variables**: + ```bash + export BAL_CONFIG_VAR_BALLERINAX_AMP_OTELENDPOINT="http://localhost:22893/otel" + export BAL_CONFIG_VAR_BALLERINAX_AMP_APIKEY="" + ``` + +You can reopen the Setup Agent panel at any time from the agent **Overview** page by clicking **Setup Agent**. + +--- + +### Step 3: Add an LLM Provider + +1. In the left sidebar, click **Configure**. +2. The **Configure Agent** page shows the **LLM Providers** section (empty for a new agent). +3. Click **+ Add Provider**. +4. Fill in the **Basic Details**: + + | Field | Description | Example | + |---|---|---| + | **Name** | A logical name for this LLM binding | `openai-provider` | + | **Description** | Optional description | `Main model for customer queries` | + +5. Under **LLM Service Provider**, click **Select a Provider**. + - A side panel opens listing all org-level LLM Service Providers, showing the template (e.g., OpenAI), deployment time, rate limiting status, and guardrails. + - Select the desired provider. + +6. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach content safety policies. + +7. Click **Save**. 
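
Once the binding is saved, the agent reaches the LLM through the gateway rather than the upstream provider API. As a rough sketch of the request shape (standard library only; the endpoint URL, API key, and model name below are hypothetical placeholders, and the real values appear on the provider detail page described in the next step):

```python
import json
import os
import urllib.request

# Hypothetical placeholders: substitute the Endpoint URL and API Key shown
# on the provider detail page after saving.
ENDPOINT_URL = os.environ.get("LLM_GATEWAY_URL", "http://localhost:19080/my-provider")
API_KEY = os.environ.get("LLM_GATEWAY_API_KEY", "changeme")

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request routed through the AI Gateway."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{ENDPOINT_URL}/chat/completions",
        data=body,
        headers={"API-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello!")
# urllib.request.urlopen(req) would send the call once real values are set.
```

The same two values plug into any OpenAI-compatible SDK as the client's base URL and a default `API-Key` header.
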
+ +--- + +### Step 4: Connect Your Agent Code to the LLM + +Immediately after saving, the provider detail page is shown with a **Connect to your LLM Provider** section containing everything needed to call the LLM from your agent code: + +| Field | Description | +|---|---| +| **Endpoint URL** | The gateway URL for this provider — use this as the base URL in your LLM client | +| **Header Name** | The HTTP header to pass the API key (`API-Key`) | +| **API Key** | The generated client key — **copy it now**, it will not be shown again | +| **Example cURL** | A ready-to-use cURL command showing the Endpoint URL, Header Name, and API Key together | + +Example cURL: + +```bash +curl -X POST \ + --header "API-Key: " \ + -d '{"your": "data"}' +``` + +Configure your agent's LLM client using the Endpoint URL as the base URL and pass the API Key in the `API-Key` header on every request. + +Below the connection details, the page also shows: + +- **LLM Service Provider**: the linked org-level provider (name, template, rate limiting and guardrails status) +- **Guardrails**: agent-level guardrails attached to this LLM binding + +### Step 5: Run the Agent + +Run your agent. + +Example: Python agent with instrumentation + +```bash +amp-instrument python main.py +``` + +--- + +## Managing Attached LLM Providers + +From the **Configure Agent** page, the LLM Providers table shows all attached providers with: + +- **Name**: The logical name given to this LLM binding. +- **Description**: Optional description. +- **Created**: When the binding was created. +- **Actions**: Delete icon to remove the provider from the agent. + +Multiple providers can be attached to a single agent, allowing the agent code to use different LLMs for different tasks by referencing their respective environment variable names (platform agents) or endpoint URLs and API keys (external agents). 
+ +--- + +## Notes + +- LLM provider credentials are **never exposed** to agent code directly — only the injected environment variables are available at runtime. +- For platform agents, environment variables are re-injected on each deployment; no manual secret management is required. +- For external agents, the Endpoint URL routes traffic through the AI Gateway, enabling centralized rate limiting, access control, and guardrails configured at the org level. +- The external agent API Key shown after saving is a **one-time display** — it cannot be retrieved again. If lost, delete the LLM provider binding and re-add it to generate a new key. +- The **Setup Agent** instrumentation step is for observability (traces) only and is independent of LLM configuration. +- Guardrails added at the agent-LLM binding level are applied **in addition to** any guardrails configured on the provider itself. \ No newline at end of file diff --git a/documentation/docs/tutorials/register-ai-gateway.mdx b/documentation/docs/tutorials/register-ai-gateway.mdx new file mode 100644 index 00000000..f9c4f2de --- /dev/null +++ b/documentation/docs/tutorials/register-ai-gateway.mdx @@ -0,0 +1,136 @@ +--- +sidebar_position: 4 +--- + +# Register an AI Gateway + +AI Gateways are organization-level infrastructure components that route LLM traffic through a controlled proxy. You can register multiple gateways (e.g., for different environments or teams), and each LLM Service Provider is exposed through a gateway's invoke URL. 

Agent Manager currently supports the **WSO2 AI Gateway** (https://github.com/wso2/api-platform/tree/gateway/v0.9.0/docs/ai-gateway).

## Prerequisites

Before registering a gateway, ensure you have:

- Admin access to the WSO2 Agent Manager Console
- One of the following available depending on your chosen deployment method:
  - **Quick Start / Docker**: cURL, unzip, Docker installed and running
  - **Virtual Machine**: cURL, unzip, and a Docker-compatible container runtime (Docker Desktop, Rancher Desktop, Colima, or Docker Engine + Compose plugin)
  - **Kubernetes**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+

---

## Step 1: Navigate to AI Gateways

1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
2. Switch to the organization level by closing the projects section in the top navigation.
3. In the left sidebar, click **AI Gateways** under the **INFRASTRUCTURE** section.

   > The AI Gateways page lists all registered gateways with their Name, Status, and Last Updated time.

> **Note**: Agent Manager comes with a pre-configured AI gateway that is ready to use out of the box.

---

## Step 2: Add a New AI Gateway

1. Click the **+ Add AI Gateway** button (top right).
2. Fill in the **Gateway Details** form:

   | Field | Description | Example |
   |---|---|---|
   | **Name** | A descriptive name for the gateway | `Production AI Gateway` |
   | **Virtual Host** | The FQDN or IP address where the gateway will be reachable | `api.production.example.com` |
   | **Critical production gateway** | Toggle to mark this gateway as critical for production deployments | Enabled / Disabled |

3. Click **Create AI Gateway**.

---

## Step 3: Configure and Start the Gateway

After creating the gateway, you are taken to the gateway detail page. It shows:

- **Virtual Host**: The internal cluster URL for the gateway runtime.
- **Environments**: The environments (e.g., `Default`) this gateway serves.

The **Get Started** section provides instructions to deploy the gateway process using one of the methods below.

### Quick Start (Docker)

**Prerequisites**: cURL, unzip, Docker installed and running.

**Step 1 – Download the Gateway**

```bash
curl -sLO https://github.com/wso2/api-platform/releases/download/ai-gateway/v0.9.0/ai-gateway-v0.9.0.zip && \
unzip ai-gateway-v0.9.0.zip
```

**Step 2 – Configure the Gateway**

Generate a registration token by clicking **Reconfigure** on the gateway detail page. This produces a `configs/keys.env` file with the token and connection details.

**Step 3 – Start the Gateway**

```bash
cd ai-gateway-v0.9.0
docker compose --env-file configs/keys.env up
```

---

### Virtual Machine

**Prerequisites**: cURL, unzip, and a Docker-compatible container runtime:

- Docker Desktop (Windows / macOS)
- Rancher Desktop (Windows / macOS)
- Colima (macOS)
- Docker Engine + Compose plugin (Linux)

Verify the runtime is available:

```bash
docker --version
docker compose version
```

Then follow the same **Download → Configure → Start** steps as Quick Start above.

---

### Kubernetes

**Prerequisites**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+.

**Configure**: Click **Reconfigure** to generate a gateway registration token.

**Install the Helm chart**:

```bash
helm install gateway oci://ghcr.io/wso2/api-platform/helm-charts/gateway --version 0.9.0 \
  --set gateway.controller.controlPlane.host="<control-plane-host>" \
  --set gateway.controller.controlPlane.port=443 \
  --set gateway.controller.controlPlane.token.value="your-gateway-token" \
  --set gateway.config.analytics.enabled=true
```

Replace `<control-plane-host>` with your Agent Manager control plane host, and `your-gateway-token` with the token generated in the Reconfigure step.

---

## Verifying the Gateway

Once running, the gateway appears in the **AI Gateways** list with status **Active**. 
The gateway detail page shows the virtual host URL, which is the base URL for all LLM provider invoke URLs routed through this gateway. + +--- + +## Notes + +- A **Default AI Gateway** is pre-provisioned in new organizations. +- Each gateway can serve multiple environments (e.g., `Default`, `Production`). +- The registration token (generated via **Reconfigure**) is environment-specific and must be kept secret. +- Marking a gateway as **Critical production gateway** helps signal its importance for operational monitoring. \ No newline at end of file diff --git a/documentation/docs/tutorials/register-llm-service-provider.mdx b/documentation/docs/tutorials/register-llm-service-provider.mdx new file mode 100644 index 00000000..dc0c4ad9 --- /dev/null +++ b/documentation/docs/tutorials/register-llm-service-provider.mdx @@ -0,0 +1,152 @@ +--- +sidebar_position: 5 +--- + +# Register an LLM Service Provider + +LLM Service Providers are organization-level resources that represent connections to upstream LLM APIs (e.g., OpenAI, Anthropic, AWS Bedrock). Once registered, they are exposed through an AI Gateway and can be attached to agents across any project in the organization. + +## Prerequisites + +- Admin access to the WSO2 Agent Manager Console +- At least one AI Gateway registered and active (see [Register an AI Gateway](./register-ai-gateway.mdx)) +- API credentials for the target LLM provider (e.g., an OpenAI API key) + +--- + +## Step 1: Navigate to LLM Service Providers + +1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`). +2. Go to the Organization level by closing the projects section from the top navigation. +3. In the left sidebar, click **LLM Service Providers** under the **RESOURCES** section. + + > The LLM Service Providers page lists all registered providers with their Name, Template, and Last Updated time. + +--- + +## Step 2: Add a New Provider + +1. Click the **+ Add Service Provider** button. +2. 
Fill in the **Basic Details**:

   | Field | Description | Example |
   |---|---|---|
   | **Name** *(required)* | A descriptive name for this provider configuration | `Production OpenAI Provider` |
   | **Version** *(required)* | Version identifier for this provider configuration | `v1.0` |
   | **Short description** | Optional description of the provider's purpose | `Primary LLM provider for production` |
   | **Context path** | The API path prefix for this provider (must start with `/`, no trailing slash) | `/my-provider` |

3. Under **Provider Template**, select one of the pre-built provider templates:

   | Template | Description |
   |---|---|
   | **Anthropic** | Claude models via Anthropic API |
   | **AWS Bedrock** | AWS-hosted foundation models |
   | **Azure AI Foundry** | Azure AI model deployments |
   | **Azure OpenAI** | OpenAI models hosted on Azure |
   | **Gemini** | Google Gemini models |
   | **Mistral** | Mistral AI models |
   | **OpenAI** | OpenAI models (GPT-4, etc.) |

   Selecting a template auto-populates the upstream URL, authentication type, and API specification.

4. Provide the credentials for the selected template (see the provider's official documentation for obtaining an API key or credential).

5. Click **Add provider**.

---

## Step 3: Configure Provider Settings

After creation, the provider detail page appears with six configuration tabs.

### Overview Tab

Displays a summary of the provider:

| Field | Description |
|---|---|
| **Context** | The context path (e.g., `/test`) |
| **Upstream URL** | The backend LLM API endpoint (e.g., `https://api.openai.com/v1`) |
| **Auth Type** | Authentication method (e.g., `api-key`) |
| **Access Control** | Current access policy (e.g., `allow_all`) |

The **Invoke URL & API Key** section shows:

- **Gateway**: Select which AI Gateway exposes this provider.
- **Invoke URL**: The full URL agents use to call this provider through the gateway (auto-generated).
+- **Generate API Key**: Generate a client API key for agents to authenticate against this provider. + +--- + +### Connection Tab + +Configure the upstream connection to the LLM Provider API: + +| Field | Description | Example | +|---|---|---| +| **Provider Endpoint** | The base URL of the upstream LLM API | `https://api.openai.com/v1` | +| **Authentication** | Auth method for the upstream call | `API Key` | +| **Authentication Header** | HTTP header used to pass the credential | `Authorization` | +| **Credentials** | The API key or secret for the upstream LLM provider | `sk-...` | + +Click **Save** to persist changes. + +--- + +### Access Control Tab + +Control which API resources are accessible through this provider: + +- **Mode**: Choose `Allow all` (default – all resources permitted) or `Deny all` (whitelist only). +- **Allowed Resources**: List of API operations permitted (e.g., `GET /assistants`, `POST /chat/completions`). +- **Denied Resources**: List of API operations explicitly blocked. + +Use the arrow buttons to move resources between the Allowed and Denied lists. You can also **Import from specification** to populate the resource list from an OpenAPI spec. + +--- + +### Security Tab + +Configure how to authenticate to this provider via the gateway: + +| Field | Description | Example | +|---|---|---| +| **Authentication** | Auth scheme for inbound calls | `apiKey` | +| **Header Key** | HTTP header name carrying the API key | `X-API-Key` | +| **Key Location** | Where the key is passed | `header` | + +--- + +### Rate Limiting Tab + +Set backend rate limits to protect the upstream LLM API: + +- **Mode**: `Provider-wide` (single limit for all resources) or `Per Resource` (limits per endpoint). +- **Request Counts**: Configure request-per-window thresholds. +- **Token Count**: Configure token-per-window thresholds. +- **Cost**: *(Coming soon)* Cost-based limits. 
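
When a configured limit is exceeded, the gateway rejects the call rather than forwarding it upstream; HTTP 429 is the conventional status for this, though the exact code should be confirmed against the gateway's documentation. Agent code should treat such rejections as retryable. A minimal backoff sketch (the error type and stub call below are illustrative, not part of the platform):

```python
import time

class RateLimitedError(Exception):
    """Raised when the gateway rejects a call with HTTP 429."""

def call_with_retries(send, max_attempts=4, base_delay=0.5):
    """Call `send`, backing off exponentially while the gateway rate-limits."""
    for attempt in range(max_attempts):
        try:
            return send()
        except RateLimitedError:
            if attempt == max_attempts - 1:
                raise  # limits persisted through every attempt
            time.sleep(base_delay * (2 ** attempt))

# Stub standing in for a real chat-completions call through the gateway:
# rate-limited twice, then successful.
attempts = {"n": 0}
def fake_send():
    attempts["n"] += 1
    if attempts["n"] <= 2:
        raise RateLimitedError()
    return "ok"

result = call_with_retries(fake_send, base_delay=0.01)
```

Swap `fake_send` for the real gateway call; the delay doubles each attempt (0.5 s, 1 s, 2 s with the defaults).
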
+ +--- + +### Guardrails Tab + +Attach content safety policies to this provider: + +- **Global Guardrails**: Apply to all API resources under this provider. Click **+ Add Guardrail** to attach one. +- **Resource-wise Guardrails**: Per-operation guardrails for individual API endpoints (e.g., `POST /chat/completions`). + +--- + +## Verifying the Provider + +The registered provider appears in the **LLM Service Providers** list showing its name and the template used (e.g., `OpenAI`). From the Overview tab, select your active AI Gateway to see the **Invoke URL** — this is the endpoint agents use to call the LLM through the gateway. + +--- + +## Notes + +- The **context path** must be unique per organization. It forms part of the invoke URL: ``. +- Credentials entered in the Connection tab are stored securely and never exposed in the UI. +- A provider must be associated with at least one AI Gateway to be callable by agents. +- Multiple providers can share the same gateway but must have distinct context paths. 
diff --git a/documentation/sidebars.ts b/documentation/sidebars.ts index a2a972ef..2fb5d56c 100644 --- a/documentation/sidebars.ts +++ b/documentation/sidebars.ts @@ -63,7 +63,10 @@ const sidebars: SidebarsConfig = { items: [ 'tutorials/observe-first-agent', 'tutorials/evaluation-monitors', - 'tutorials/custom-evaluators' + 'tutorials/custom-evaluators', + 'tutorials/register-ai-gateway', + 'tutorials/register-llm-service-provider', + 'tutorials/configure-agent-llm-configuration' ], }, { diff --git a/documentation/versioned_docs/version-v0.9.x/tutorials/configure-agent-llm-configuration.mdx b/documentation/versioned_docs/version-v0.9.x/tutorials/configure-agent-llm-configuration.mdx new file mode 100644 index 00000000..1cb248b5 --- /dev/null +++ b/documentation/versioned_docs/version-v0.9.x/tutorials/configure-agent-llm-configuration.mdx @@ -0,0 +1,249 @@ +--- +sidebar_position: 6 +--- + +# Configure LLM Providers for an Agent + +Agents can be configured to use one or more LLM Service Providers registered at the organization level. The configuration process differs slightly between **Platform-hosted** and **External** agents, +but both follow the same pattern: attach an org-level provider to the agent with an optional name, description, and guardrails. + +## Prerequisites + +- At least one LLM Service Provider registered at the org level (see [Register an LLM Service Provider](./register-llm-service-provider.mdx)) +- An agent created in a project (Platform-hosted or External) + +--- + +## Overview: Agent Types + +| Type | Description | +|---|---| +| **Platform** | Agent code is built and deployed by the platform from a GitHub repository. The platform injects LLM credentials as environment variables. | +| **External** | Agent is deployed and managed externally. The platform registers it and provides the invoke URL + API key for the LLM provider. | + +--- + +## Configuring LLM for a Platform-Hosted Agent + +### Step 1: Open the Agent + +1. 
Navigate to your project (**Projects** → select project → **Agents**).
2. Click on a **Platform**-tagged agent.
3. In the left sidebar, click **Configure**.

### Step 2: Add an LLM Provider

The **Configure** page displays the **LLM Providers** section listing all LLM providers currently attached to this agent.

1. Click **+ Add Provider**.
2. Fill in the **Basic Details**:

   | Field | Description | Example |
   |---|---|---|
   | **Name** | A logical name for this LLM binding within the agent | `OpenAI GPT5` |
   | **Description** | Optional description | `Primary reasoning model` |

3. Under **LLM Service Provider**, click **Select a Provider**.
   - A side panel opens listing all org-level LLM Service Providers with their template, rate limiting status, and guardrails.
   - Select the desired provider and close the panel.

4. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach guardrails specific to this agent's use of the provider.

5. Click **Save**.

### Step 3: Use the Provider in Agent Code

After saving, the platform generates **environment variables** that are automatically injected into the agent's deployment runtime. You can view these on the LLM provider detail page under **Environment Variables References**:

| Variable Name | Description |
|---|---|
| `<NAME>_API_KEY` | API Key for authenticating with the LLM provider |
| `<NAME>_BASE_URL` | Base URL of the LLM Provider API endpoint |

Where `<NAME>` is derived from the provider name (uppercased, with spaces replaced by underscores; e.g., `OPENAI_GPT5` for a provider named `OpenAI GPT5`).

If your agent is already configured to read a different environment variable name, update the system-provided variable name and click **Save**.

**Python code snippet** (shown in the UI):

```python
import os
from openai import OpenAI

apikey = os.environ.get('OPENAI_GPT5_API_KEY')
url = os.environ.get('OPENAI_GPT5_BASE_URL')

client = OpenAI(
    base_url=url,
    api_key="",
    default_headers={"API-Key": apikey, "Authorization": ""}
)
```

> **Note**: The platform also provides an **AI Prompt** snippet — a ready-made prompt you can paste into an AI coding assistant to automatically update your code to use the injected environment variables.

### Step 4: Build and Deploy

1. After configuring the LLM provider, click **Build** in the sidebar.
2. Click **Trigger a Build** to build the agent from its GitHub source.
3. Once the build completes, click **Deploy** to deploy to the target environment.
4. The deployed agent URL appears on the **Overview** page (e.g., `http://default-default.localhost:19080/agent-name`).

---

## Configuring LLM for an External Agent

### Step 1: Create and Register the Agent

1. Navigate to your project (**Projects** → select project → **Agents**).
2. Click **+ Add Agent**.
3. On the **Add a New Agent** screen, select **Externally-Hosted Agent**.
   > This option is for connecting an existing agent running outside the platform to enable observability and governance.
4. Fill in the **Agent Details**:

   | Field | Description | Example |
   |---|---|---|
   | **Name** | A unique identifier for the agent | `my-external-agent` |
   | **Description** *(optional)* | Short description of what this agent does | `Customer support bot` |

5. Click **Register**.

After registration, the agent is created with status **Registered** and the **Setup Agent** panel opens automatically.

---

### Step 2: Instrument the Agent (Setup Agent)

The **Setup Agent** panel provides a **Zero-code Instrumentation Guide** to connect your agent to the platform for observability (traces). Select your language from the **Language** dropdown (Python or Ballerina).

#### Python

1. **Install the AMP instrumentation package**:
   ```bash
   pip install amp-instrumentation
   ```
   Provides the ability to instrument your agent and export traces.

2. **Generate API Key** — choose a **Token Duration** (default: 1 year) and click **Generate**. Copy the token immediately — it will not be shown again.

3. **Set environment variables**:
   ```bash
   export AMP_OTEL_ENDPOINT="http://localhost:22893/otel"
   export AMP_AGENT_API_KEY=""
   ```
   Sets the agent endpoint and agent-specific API key so traces can be exported securely.


#### Ballerina

1. **Import the Amp module** in your Ballerina program:
   ```ballerina
   import ballerinax/amp as _;
   ```

2. **Add the following to `Ballerina.toml`**:
   ```toml
   [build-options]
   observabilityIncluded = true
   ```

3. **Update `Config.toml`**:
   ```toml
   [ballerina.observe]
   tracingEnabled = true
   tracingProvider = "amp"
   ```

4. **Generate API Key** — choose a **Token Duration** and click **Generate**. Copy the token immediately.

5. **Set environment variables**:
   ```bash
   export BAL_CONFIG_VAR_BALLERINAX_AMP_OTELENDPOINT="http://localhost:22893/otel"
   export BAL_CONFIG_VAR_BALLERINAX_AMP_APIKEY=""
   ```

You can reopen the Setup Agent panel at any time from the agent **Overview** page by clicking **Setup Agent**.

---

### Step 3: Add an LLM Provider

1. In the left sidebar, click **Configure**.
2. The **Configure Agent** page shows the **LLM Providers** section (empty for a new agent).
3. Click **+ Add Provider**.
4. Fill in the **Basic Details**:

   | Field | Description | Example |
   |---|---|---|
   | **Name** | A logical name for this LLM binding | `openai-provider` |
   | **Description** | Optional description | `Main model for customer queries` |

5. Under **LLM Service Provider**, click **Select a Provider**.
   - A side panel opens listing all org-level LLM Service Providers, showing the template (e.g., OpenAI), deployment time, rate limiting status, and guardrails.
   - Select the desired provider.

6. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach content safety policies.

7. Click **Save**.

---

### Step 4: Connect Your Agent Code to the LLM

Immediately after saving, the provider detail page is shown with a **Connect to your LLM Provider** section containing everything needed to call the LLM from your agent code:

| Field | Description |
|---|---|
| **Endpoint URL** | The gateway URL for this provider — use this as the base URL in your LLM client |
| **Header Name** | The HTTP header to pass the API key (`API-Key`) |
| **API Key** | The generated client key — **copy it now**, it will not be shown again |
| **Example cURL** | A ready-to-use cURL command showing the Endpoint URL, Header Name, and API Key together |

Example cURL:

```bash
curl -X POST <ENDPOINT_URL> \
  --header "API-Key: <API_KEY>" \
  -d '{"your": "data"}'
```

Configure your agent's LLM client using the Endpoint URL as the base URL and pass the API Key in the `API-Key` header on every request.

Below the connection details, the page also shows:

- **LLM Service Provider**: the linked org-level provider (name, template, rate limiting and guardrails status)
- **Guardrails**: agent-level guardrails attached to this LLM binding

### Step 5: Run the Agent

Run your agent as usual. For example, to start a Python agent with instrumentation:

```bash
amp-instrument python main.py
```

---

## Managing Attached LLM Providers

From the **Configure Agent** page, the LLM Providers table shows all attached providers with:

- **Name**: The logical name given to this LLM binding.
- **Description**: Optional description.
- **Created**: When the binding was created.
- **Actions**: Delete icon to remove the provider from the agent.
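
Agent code can hold one configuration per attached binding; for a platform-hosted agent this might look like the following sketch (the binding names and injected variable names are hypothetical):

```python
import os

# Hypothetical bindings: "OpenAI GPT5" for reasoning, "Mistral Fast" for
# summarization. Each binding injects its own *_API_KEY / *_BASE_URL pair.
PROVIDERS = {
    "reasoning": {
        "base_url": os.environ.get("OPENAI_GPT5_BASE_URL"),
        "api_key": os.environ.get("OPENAI_GPT5_API_KEY"),
    },
    "summarization": {
        "base_url": os.environ.get("MISTRAL_FAST_BASE_URL"),
        "api_key": os.environ.get("MISTRAL_FAST_API_KEY"),
    },
}

def provider_for(task: str) -> dict:
    """Return the provider binding configured for the given task."""
    return PROVIDERS[task]
```

An external agent would hold a gateway endpoint URL and API key per binding instead of reading injected variables.
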

Multiple providers can be attached to a single agent, allowing the agent code to use different LLMs for different tasks by referencing their respective environment variable names (platform agents) or endpoint URLs and API keys (external agents).

---

## Notes

- LLM provider credentials are **never exposed** to agent code directly — only the injected environment variables are available at runtime.
- For platform agents, environment variables are re-injected on each deployment; no manual secret management is required.
- For external agents, the Endpoint URL routes traffic through the AI Gateway, enabling centralized rate limiting, access control, and guardrails configured at the org level.
- The external agent API Key shown after saving is a **one-time display** — it cannot be retrieved again. If lost, delete the LLM provider binding and re-add it to generate a new key.
- The **Setup Agent** instrumentation step is for observability (traces) only and is independent of LLM configuration.
- Guardrails added at the agent-LLM binding level are applied **in addition to** any guardrails configured on the provider itself.

diff --git a/documentation/versioned_docs/version-v0.9.x/tutorials/register-ai-gateway.mdx b/documentation/versioned_docs/version-v0.9.x/tutorials/register-ai-gateway.mdx
new file mode 100644
index 00000000..f9c4f2de
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.9.x/tutorials/register-ai-gateway.mdx
@@ -0,0 +1,136 @@

---
sidebar_position: 4
---

# Register an AI Gateway

AI Gateways are organization-level infrastructure components that route LLM traffic through a controlled proxy. You can register multiple gateways (e.g., for different environments or teams), and each LLM Service Provider is exposed through a gateway's invoke URL.

Agent Manager currently supports the [WSO2 AI Gateway](https://github.com/wso2/api-platform/tree/gateway/v0.9.0/docs/ai-gateway).

## Prerequisites

Before registering a gateway, ensure you have:

- Admin access to the WSO2 Agent Manager Console
- One of the following available, depending on your chosen deployment method:
  - **Quick Start / Docker**: cURL, unzip, Docker installed and running
  - **Virtual Machine**: cURL, unzip, and a Docker-compatible container runtime (Docker Desktop, Rancher Desktop, Colima, or Docker Engine + Compose plugin)
  - **Kubernetes**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+

---

## Step 1: Navigate to AI Gateways

1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
2. Go to the organization level by closing the projects section in the top navigation.
3. In the left sidebar, click **AI Gateways** under the **INFRASTRUCTURE** section.

   > The AI Gateways page lists all registered gateways with their Name, Status, and Last Updated time.

> Agent Manager comes with a pre-configured AI Gateway that is ready to use out of the box.

---

## Step 2: Add a New AI Gateway

1. Click the **+ Add AI Gateway** button (top right).
2. Fill in the **Gateway Details** form:

   | Field | Description | Example |
   |---|---|---|
   | **Name** | A descriptive name for the gateway | `Production AI Gateway` |
   | **Virtual Host** | The FQDN or IP address where the gateway will be reachable | `api.production.example.com` |
   | **Critical production gateway** | Toggle to mark this gateway as critical for production deployments | Enabled / Disabled |

3. Click **Create AI Gateway**.

---

## Step 3: Configure and Start the Gateway

After creating the gateway, you are taken to the gateway detail page. It shows:

- **Virtual Host**: The internal cluster URL for the gateway runtime.
- **Environments**: The environments (e.g., `Default`) this gateway serves.

The **Get Started** section provides instructions to deploy the gateway process using one of the methods below.

### Quick Start (Docker)

**Prerequisites**: cURL, unzip, Docker installed and running.

**Step 1 – Download the Gateway**

```bash
curl -sLO https://github.com/wso2/api-platform/releases/download/ai-gateway/v0.9.0/ai-gateway-v0.9.0.zip && \
unzip ai-gateway-v0.9.0.zip
```

**Step 2 – Configure the Gateway**

Generate a registration token by clicking **Reconfigure** on the gateway detail page. This produces a `configs/keys.env` file with the token and connection details.

**Step 3 – Start the Gateway**

```bash
cd ai-gateway-v0.9.0
docker compose --env-file configs/keys.env up
```

---

### Virtual Machine

**Prerequisites**: cURL, unzip, and a Docker-compatible container runtime:

- Docker Desktop (Windows / macOS)
- Rancher Desktop (Windows / macOS)
- Colima (macOS)
- Docker Engine + Compose plugin (Linux)

Verify the runtime is available:

```bash
docker --version
docker compose version
```

Then follow the same **Download → Configure → Start** steps as Quick Start above.

---

### Kubernetes

**Prerequisites**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+.

**Configure**: Click **Reconfigure** to generate a gateway registration token.

**Install the Helm chart**:

```bash
helm install gateway oci://ghcr.io/wso2/api-platform/helm-charts/gateway --version 0.9.0 \
  --set gateway.controller.controlPlane.host="<control-plane-host>" \
  --set gateway.controller.controlPlane.port=443 \
  --set gateway.controller.controlPlane.token.value="your-gateway-token" \
  --set gateway.config.analytics.enabled=true
```

Replace `<control-plane-host>` with the hostname of your control plane and `your-gateway-token` with the token generated in the Reconfigure step.

---

## Verifying the Gateway

Once running, the gateway appears in the **AI Gateways** list with status **Active**.
The gateway detail page shows the virtual host URL, which is the base URL for all LLM provider invoke URLs routed through this gateway.

---

## Notes

- A **Default AI Gateway** is pre-provisioned in new organizations.
- Each gateway can serve multiple environments (e.g., `Default`, `Production`).
- The registration token (generated via **Reconfigure**) is environment-specific and must be kept secret.
- Marking a gateway as **Critical production gateway** helps signal its importance for operational monitoring.

diff --git a/documentation/versioned_docs/version-v0.9.x/tutorials/register-llm-service-provider.mdx b/documentation/versioned_docs/version-v0.9.x/tutorials/register-llm-service-provider.mdx
new file mode 100644
index 00000000..763be9d0
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.9.x/tutorials/register-llm-service-provider.mdx
@@ -0,0 +1,152 @@

---
sidebar_position: 5
---

# Register an LLM Service Provider

LLM Service Providers are organization-level resources that represent connections to upstream LLM APIs (e.g., OpenAI, Anthropic, AWS Bedrock). Once registered, they are exposed through an AI Gateway and can be attached to agents across any project in the organization.

## Prerequisites

- Admin access to the WSO2 Agent Manager Console
- At least one AI Gateway registered and active (see [Register an AI Gateway](./register-ai-gateway.mdx))
- API credentials for the target LLM provider (e.g., an OpenAI API key)

---

## Step 1: Navigate to LLM Service Providers

1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
2. Go to the organization level by closing the projects section in the top navigation.
3. In the left sidebar, click **LLM Providers** under the **RESOURCES** section.

   > The LLM Service Providers page lists all registered providers with their Name, Template, and Last Updated time.

---

## Step 2: Add a New Provider

1.
Click the **+ Add Provider** button.
2. Fill in the **Basic Details**:

   | Field | Description | Example |
   |---|---|---|
   | **Name** *(required)* | A descriptive name for this provider configuration | `Production OpenAI Provider` |
   | **Version** *(required)* | Version identifier for this provider configuration | `v1.0` |
   | **Short description** | Optional description of the provider's purpose | `Primary LLM provider for production` |
   | **Context path** | The API path prefix for this provider (must start with `/`, no trailing slash) | `/my-provider` |

3. Under **Provider Template**, select one of the pre-built provider templates:

   | Template | Description |
   |---|---|
   | **Anthropic** | Claude models via the Anthropic API |
   | **AWS Bedrock** | AWS-hosted foundation models |
   | **Azure AI Foundry** | Azure AI model deployments |
   | **Azure OpenAI** | OpenAI models hosted on Azure |
   | **Gemini** | Google Gemini models |
   | **Mistral** | Mistral AI models |
   | **OpenAI** | OpenAI models (GPT-4, etc.) |

   Selecting a template auto-populates the upstream URL, authentication type, and API specification.

4. Provide the credentials for the selected template (refer to the provider's official documentation to obtain an API key or credential).

5. Click **Add provider**.

---

## Step 3: Configure Provider Settings

After creation, the provider detail page appears with six configuration tabs.

### Overview Tab

Displays a summary of the provider:

| Field | Description |
|---|---|
| **Context** | The context path (e.g., `/test`) |
| **Upstream URL** | The backend LLM API endpoint (e.g., `https://api.openai.com/v1`) |
| **Auth Type** | Authentication method (e.g., `api-key`) |
| **Access Control** | Current access policy (e.g., `allow_all`) |

The **Invoke URL & API Key** section shows:

- **Gateway**: Select which AI Gateway exposes this provider.

- **Invoke URL**: The full URL agents use to call this provider through the gateway (auto-generated).
- **Generate API Key**: Generate a client API key that agents use to authenticate against this provider.

---

### Connection Tab

Configure the upstream connection to the LLM provider API:

| Field | Description | Example |
|---|---|---|
| **Provider Endpoint** | The base URL of the upstream LLM API | `https://api.openai.com/v1` |
| **Authentication** | Auth method for the upstream call | `API Key` |
| **Authentication Header** | HTTP header used to pass the credential | `Authorization` |
| **Credentials** | The API key or secret for the upstream LLM provider | `sk-...` |

Click **Save** to persist changes.

---

### Access Control Tab

Control which API resources are accessible through this provider:

- **Mode**: Choose `Allow all` (default – all resources permitted) or `Deny all` (only resources on the Allowed list are permitted).
- **Allowed Resources**: API operations that are permitted (e.g., `GET /assistants`, `POST /chat/completions`).
- **Denied Resources**: API operations that are explicitly blocked.

Use the arrow buttons to move resources between the Allowed and Denied lists. You can also **Import from specification** to populate the resource list from an OpenAPI spec.

---

### Security Tab

Configure how clients authenticate to this provider through the gateway:

| Field | Description | Example |
|---|---|---|
| **Authentication** | Auth scheme for inbound calls | `apiKey` |
| **Header Key** | HTTP header name carrying the API key | `X-API-Key` |
| **Key Location** | Where the key is passed | `header` |

---

### Rate Limiting Tab

Set backend rate limits to protect the upstream LLM API:

- **Mode**: `Provider-wide` (a single limit for all resources) or `Per Resource` (limits per endpoint).
- **Request Counts**: Configure requests-per-window thresholds.
- **Token Count**: Configure tokens-per-window thresholds.

- **Cost**: *(Coming soon)* Cost-based limits.

---

### Guardrails Tab

Attach content safety policies to this provider:

- **Global Guardrails**: Apply to all API resources under this provider. Click **+ Add Guardrail** to attach one.
- **Resource-wise Guardrails**: Per-operation guardrails for individual API endpoints (e.g., `POST /chat/completions`).

---

## Verifying the Provider

The registered provider appears in the **LLM Service Providers** list showing its name and the template used (e.g., `OpenAI`). From the Overview tab, select your active AI Gateway to see the **Invoke URL** — this is the endpoint agents use to call the LLM through the gateway.

---

## Notes

- The **context path** must be unique per organization. It forms part of the invoke URL: `<gateway-virtual-host><context-path>`.
- Credentials entered in the Connection tab are stored securely and never exposed in the UI.
- A provider must be associated with at least one AI Gateway to be callable by agents.
- Multiple providers can share the same gateway but must have distinct context paths.

diff --git a/documentation/versioned_sidebars/version-v0.9.x-sidebars.json b/documentation/versioned_sidebars/version-v0.9.x-sidebars.json
index 46ecb07f..b6e7c866 100644
--- a/documentation/versioned_sidebars/version-v0.9.x-sidebars.json
+++ b/documentation/versioned_sidebars/version-v0.9.x-sidebars.json
@@ -49,7 +49,10 @@
       "items": [
         "tutorials/observe-first-agent",
         "tutorials/evaluation-monitors",
-        "tutorials/custom-evaluators"
+        "tutorials/custom-evaluators",
+        "tutorials/register-ai-gateway",
+        "tutorials/register-llm-service-provider",
+        "tutorials/configure-agent-llm-configuration"
       ]
     },
     {