Binary file added docs/assets/integrations/ai/chatgpt/apps.png
Binary file added docs/assets/integrations/ai/chatgpt/create.png
Binary file added docs/assets/integrations/ai/chatgpt/details.png
Binary file added docs/assets/integrations/ai/chatgpt/profile.png
159 changes: 106 additions & 53 deletions docs/settings/integrations/ai/agentic.md
@@ -31,73 +31,68 @@ Before using the Agentic API, you must configure your LLM provider credentials.

### Supported Providers

The Agentic API supports 22+ LLM providers. For the full list, see [Supported LLM Providers](./mcp-quickstart.md#supported-llm-providers) in the Agent Q Quickstart guide.

Common providers include:

- **OpenAI** — GPT-4o, GPT-4, o1, o3
- **Anthropic** — Claude Sonnet, Claude Opus, Claude Haiku
- **Google Gemini** — Gemini 2.0 Flash, Gemini 2.5 Pro
- **Amazon Bedrock** — Claude, Titan, and other models via AWS
- **Google Vertex AI** — Gemini models via GCP
- **Groq** — Llama, Mixtral (low-latency inference)
- **Mistral** — Mistral Large, Codestral
- **DeepSeek** — DeepSeek-V3, DeepSeek-R1
- **Ollama** — Self-hosted open-source models (requires custom base URL)

!!! tip
    Use the `GET /api/agent/supported-models` endpoint to dynamically retrieve the current list of supported providers and their available models.

### Managing LLM Configuration

LLM configuration is managed through the Qualytics UI, just like any other integration:

1. Navigate to **Settings** > **Integrations** in your Qualytics instance
2. Click **Connect** next to **LLM Configuration**
3. Select your **Provider**, **Model**, and enter your **API Key**
4. Optionally provide a **Base URL** if required by your provider
5. Click **Save** to complete the configuration

For detailed setup instructions with screenshots, see [Agent Q Quickstart — LLM Setup](./mcp-quickstart.md#before-you-start-llm-setup-required).

## Capabilities

### Chat with Agent

The chat endpoint provides a streaming conversational interface for exploring and managing your data quality infrastructure. This is the most flexible endpoint, allowing free-form natural language interactions with real-time streaming responses.

```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/chat" \
  -H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What tables are in our sales_db datastore and what quality checks do we have on them?"}
    ]
  }'
```

The response is delivered as a Server-Sent Events (SSE) stream following the Vercel AI Data Stream Protocol. Each event contains either text content, tool execution progress, or error information.
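To consume that stream programmatically, a minimal client-side sketch in Python is shown below. It assumes events arrive as `data: <json>` lines terminated by a `[DONE]` sentinel, and the `type`/`delta` payload shape is illustrative only; consult the Vercel AI Data Stream Protocol for the exact event format your server version emits.

```python
import json

def parse_sse_events(raw_stream: str):
    """Split a raw SSE response body into parsed JSON events.

    Assumes each event is a `data: <json>` line separated by blank
    lines; the payload fields used below are assumptions, not the
    documented protocol.
    """
    events = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                events.append(json.loads(payload))
    return events

# Accumulate assistant text from hypothetical text-delta events
sample = (
    'data: {"type": "text-delta", "delta": "There are 12 tables"}\n\n'
    'data: {"type": "text-delta", "delta": " in sales_db."}\n\n'
    'data: [DONE]\n'
)
text = "".join(e["delta"] for e in parse_sse_events(sample)
               if e["type"] == "text-delta")
```

In a real client you would feed the decoded response body into the parser chunk by chunk rather than as one string.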

**Multi-turn Conversations:**




Include previous messages to maintain conversation context. Pass the `session_id` returned in the `X-Chat-Session-Id` response header to continue an existing conversation:

```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/chat?session_id=42" \
  -H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What tables are in our sales_db datastore?"},
      {"role": "assistant", "content": "I found 12 tables in the sales_db datastore..."},
      {"role": "user", "content": "Set up quality checks on the orders table"}
    ]
  }'
```
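The bookkeeping this implies on the client side, appending each exchange to the message list and carrying the `session_id` from the `X-Chat-Session-Id` header into the next request, can be sketched as follows (the `ChatSession` helper is illustrative, not part of any SDK):

```python
import json

BASE_URL = "https://your-qualytics.qualytics.io"  # placeholder host from the docs

class ChatSession:
    """Minimal client-side state for a multi-turn conversation."""

    def __init__(self):
        self.messages = []
        self.session_id = None

    def build_request(self, user_content: str):
        # Append the new user turn and build URL + JSON body
        self.messages.append({"role": "user", "content": user_content})
        url = f"{BASE_URL}/api/agent/chat"
        if self.session_id is not None:
            url += f"?session_id={self.session_id}"
        return url, json.dumps({"messages": self.messages})

    def record_reply(self, assistant_text: str, session_id_header: str):
        # Store the assistant turn and the X-Chat-Session-Id header value
        self.messages.append({"role": "assistant", "content": assistant_text})
        self.session_id = session_id_header

# First turn: no session_id yet
s = ChatSession()
url1, _ = s.build_request("What tables are in our sales_db datastore?")
s.record_reply("I found 12 tables in the sales_db datastore...", "42")
# Second turn: session_id carried in the query string
url2, body2 = s.build_request("Set up quality checks on the orders table")
```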

@@ -141,9 +136,10 @@ Create computed assets—tables, files, or cross-datastore joins—through natural language:
```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/transform-dataset" \
  -H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
  -G \
  --data-urlencode "asset_name=daily_revenue_by_category" \
  --data-urlencode "source_description=transactions table in sales_db" \
  --data-urlencode "transformation_criteria=Aggregate daily revenue by product category, including only completed orders from the last 90 days"
```
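Because this endpoint takes its inputs as URL query parameters (curl's `-G` with `--data-urlencode`), the long free-text criteria must be percent-encoded. A sketch of assembling the same request in Python, using the parameter names from the curl example above:

```python
from urllib.parse import urlencode

# Same request as the curl example, assembled as a query string.
params = {
    "asset_name": "daily_revenue_by_category",
    "source_description": "transactions table in sales_db",
    "transformation_criteria": (
        "Aggregate daily revenue by product category, including only "
        "completed orders from the last 90 days"
    ),
}
query = urlencode(params)  # encodes spaces and punctuation safely
url = "https://your-qualytics.qualytics.io/api/agent/transform-dataset?" + query
```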

**Use Cases:**
@@ -165,10 +161,10 @@ Create data quality checks by describing the business rule or validation requirements:
```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/generate-quality-check" \
  -H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
  -G \
  --data-urlencode "datastore_name=sales_db" \
  --data-urlencode "container_name=customers" \
  --data-urlencode "expectation=Ensure the email field is never null and matches a valid email format"
```

**Use Cases:**
@@ -192,10 +188,8 @@ Get detailed, contextual explanations of data quality issues:
```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/investigate-anomaly" \
  -H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
  -G \
  --data-urlencode "anomaly_identifier=12345"
```

**Use Cases:**
@@ -213,6 +207,65 @@
- Potential business impact
- Suggested investigation or remediation steps

### Analyze Trends

Analyze data quality trends over time for a specific data asset:

```bash
curl -X POST "https://your-qualytics.qualytics.io/api/agent/analyze-trends" \
-H "Authorization: Bearer YOUR_QUALYTICS_TOKEN" \
-G \
--data-urlencode "datastore_name=sales_db" \
--data-urlencode "container_name=orders" \
--data-urlencode "timeframe=month"
```

| Parameter | Required | Description |
|-----------|----------|-------------|
| `datastore_name` | Yes | The name of the datastore to analyze |
| `container_name` | No | Specific table or container (omit for datastore-level trends) |
| `field_name` | No | Specific field to focus on |
| `timeframe` | No | Time period to analyze: `week`, `month` (default), `quarter`, or `year` |
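The rules in the table above (one required parameter, optional parameters omitted rather than sent empty, a constrained `timeframe` value) can be sketched as a small request builder; the helper function here is illustrative, not part of any SDK:

```python
from urllib.parse import urlencode

VALID_TIMEFRAMES = {"week", "month", "quarter", "year"}

def analyze_trends_query(datastore_name, container_name=None,
                         field_name=None, timeframe="month"):
    """Assemble the query string for /api/agent/analyze-trends.

    Only datastore_name is required; optional parameters are omitted
    from the query string entirely when not supplied.
    """
    if timeframe not in VALID_TIMEFRAMES:
        raise ValueError(f"timeframe must be one of {sorted(VALID_TIMEFRAMES)}")
    params = {"datastore_name": datastore_name}
    if container_name:
        params["container_name"] = container_name
    if field_name:
        params["field_name"] = field_name
    params["timeframe"] = timeframe
    return urlencode(params)

# Datastore + container, default monthly timeframe
q = analyze_trends_query("sales_db", container_name="orders")
```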

**Use Cases:**

- **Quality Reporting**: Generate trend reports for stakeholders and management
- **Improvement Tracking**: Measure the impact of quality initiatives over time
- **Regression Detection**: Identify when quality metrics started declining

### Get Suggestions

Retrieve AI-generated contextual suggestions for the chat interface:

```bash
curl -X GET "https://your-qualytics.qualytics.io/api/agent/suggestions" \
-H "Authorization: Bearer YOUR_QUALYTICS_TOKEN"
```

Returns a list of suggested prompts based on the available tools and data sources. Useful for building guided user experiences.

### List Supported Models

Retrieve the list of supported LLM providers and their available models:

```bash
curl -X GET "https://your-qualytics.qualytics.io/api/agent/supported-models" \
-H "Authorization: Bearer YOUR_QUALYTICS_TOKEN"
```

Returns provider metadata including display names, available models, whether the provider accepts arbitrary model names, and whether it requires a custom base URL. Use this endpoint to dynamically build provider selection UIs.
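A sketch of using that metadata in a provider-selection UI, for example to flag providers that need a custom Base URL before the user saves a configuration. The response shape below is a hypothetical illustration; inspect the live endpoint for the real field names:

```python
# Hypothetical /api/agent/supported-models response shape (an assumption,
# not the documented schema).
sample_response = {
    "providers": [
        {"name": "openai", "display_name": "OpenAI",
         "models": ["gpt-4o", "o1"], "requires_base_url": False},
        {"name": "ollama", "display_name": "Ollama",
         "models": [], "requires_base_url": True},
    ]
}

# Flag providers (e.g. self-hosted Ollama) that need a Base URL field
# shown in the configuration form.
needs_base_url = [p["name"] for p in sample_response["providers"]
                  if p["requires_base_url"]]
```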

### Check LLM Configuration Status

Check whether an LLM provider is configured without retrieving the full configuration:

```bash
curl -X GET "https://your-qualytics.qualytics.io/api/agent/llm-config/status" \
-H "Authorization: Bearer YOUR_QUALYTICS_TOKEN"
```

Returns `is_configured` (boolean), `model_name` (if configured), and `web_search_enabled` status. This lightweight endpoint is ideal for conditionally rendering UI elements.
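As a sketch of that conditional rendering, assuming the three fields named above and a hypothetical payload shape:

```python
# Hypothetical status payload built from the fields described above.
status = {"is_configured": True,
          "model_name": "claude-sonnet-4-20250514",
          "web_search_enabled": False}

def chat_banner(status: dict) -> str:
    """Decide what a UI should show before opening the chat panel."""
    if not status["is_configured"]:
        return ("Configure an LLM provider under Settings > Integrations "
                "to enable Agent Q.")
    return f"Agent Q ready ({status['model_name']})"

banner = chat_banner(status)
```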

## Integration Patterns

### Automated Quality Check Setup
41 changes: 33 additions & 8 deletions docs/settings/integrations/ai/mcp-quickstart.md
@@ -67,7 +67,7 @@ Agent Q supports the following LLM providers:
| 22 | xAI |

!!! note
    You must provide your own API key for the selected provider. Some providers (such as Ollama and LiteLLM) require a custom Base URL to be configured. The platform dynamically retrieves the latest supported providers and models—use the configuration modal to see available options.

## How to Use Agent Q

@@ -95,23 +95,48 @@ Agent Q will:
- Execute the required steps using MCP tools
- Show real-time progress indicators
- Display detailed results
- Allow you to expand inputs and outputs for transparency

You can continue the conversation to refine results or ask follow-up questions.

## What Agent Q Can Help You With

Agent Q can assist with:

- **Exploring connected datastores** — Browse tables, schemas, and field definitions across all your data sources
- **Global search** — Find tables, fields, and quality checks across your entire data landscape
- **Validating SQL queries** — Check query syntax and permissions before running against production
- **Creating computed tables, files, and joins** — Build derived datasets through natural language, including cross-datastore joins
- **Building and managing quality checks** — Create, update, and list quality checks by describing business rules
- **Investigating anomalies** — Get AI-generated context, impact analysis, and remediation suggestions
- **Analyzing quality scores and trends** — Review quality metrics over time and identify patterns
- **Managing tags** — Organize data assets by adding, removing, or replacing tags
- **Triggering operations** — Run profiling and scanning operations conversationally
- **Sending notifications and creating tickets** — Connect quality events to your alerting and ticketing integrations

Each response shows the actions taken, so you can clearly see how the result was generated.

![response](../../../assets/integrations/mcp-quickstart/response.png)

## Managing Chat Sessions

Agent Q automatically saves your conversation history so you can return to previous sessions:

- **Chat History Sidebar**: Access your past conversations from the sidebar. Click any session to resume where you left off.
- **Search Sessions**: Use the search bar in the sidebar to find specific conversations by keyword.
- **Rename Sessions**: Click the settings menu on any session to rename it for easier reference.
- **New Chat**: Click the **New Chat** button to start a fresh conversation at any time.

Long conversations are automatically summarized to maintain context while keeping performance optimal.

## Additional Features

### Web Search

When supported by your configured LLM provider, Agent Q can search the Qualytics documentation to find answers to platform-related questions. This is automatically enabled when your provider supports it—no additional configuration is required.

### Guided Workflows

Agent Q includes built-in workflow guides for common tasks. When you ask for help with complex operations like trend analysis or anomaly investigation, the assistant follows a structured step-by-step approach to ensure thorough results.

Agent Q helps you perform data quality tasks faster by combining natural language interaction with guided workflows.