v0.1.0 - ALPHA - API and structure may change without notice
Authors: @jchristn @rawingate
Tablix is a database discovery and query platform that connects your databases to AI agents and humans through REST and MCP interfaces.
Tablix sits between your databases and your tools. It crawls database schemas — discovering tables, columns, primary keys, foreign keys, and indexes — and exposes that metadata alongside query execution through a REST API and an MCP server. A built-in dashboard provides a browser-based UI for the same operations.
Supported databases: SQLite, PostgreSQL, MySQL, SQL Server.
- Give AI agents database access. Connect Tablix via MCP to Claude Code, Cursor, Codex, or Gemini. Your agent can discover what databases are available, understand their schemas, and run queries to answer your questions — without you writing SQL.
- Centralize database discovery. Configure all your database connections in one place with user-supplied context that describes what each database contains and how its tables relate to one another. AI agents use this context to figure out what queries to run.
- Control what's allowed. Each database entry specifies which SQL statement types are permitted (`SELECT`, `INSERT`, `UPDATE`, `DELETE`, etc.). Tablix validates every query before execution.
- Inspect schemas visually. The dashboard shows crawled table geometry — columns, types, primary keys, foreign keys, and indexes — in a clean, browsable interface with light and dark modes.
- Configure one or more database connections in `tablix.json`
- Tablix crawls each database on startup, caching schema geometry
- AI agents connect via MCP to discover databases and execute queries
- Humans use the dashboard or REST API for the same operations
- Query validation enforces the `AllowedQueries` list per database
The quickest way to get Tablix running is with Docker Compose. The setup includes a Tablix server, a dashboard UI, and a sample SQLite database.
```shell
git clone https://github.com/jchristn/Tablix.git
cd Tablix/docker
docker compose up -d
```

Once running, the following services are available:
| Service | URL |
|---|---|
| REST API | http://localhost:9100 |
| Swagger UI | http://localhost:9100/swagger |
| Dashboard | http://localhost:9101 |
| MCP | http://localhost:9102/rpc |
Default API key: `tablixadmin`
The sample SQLite database includes users, orders, and line_items tables so you can explore schema discovery and query execution immediately.
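As a local illustration, the sketch below recreates tables with the same names in an in-memory SQLite database and runs the kind of join an agent might issue. The column layout and data are invented for this example, not taken from the actual sample database.

```python
import sqlite3

# Hypothetical stand-in for the sample database: table names match the
# docs (users, orders, line_items), but columns and rows are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id));
    CREATE TABLE line_items (id INTEGER PRIMARY KEY,
                             order_id INTEGER REFERENCES orders(id),
                             qty INTEGER);
    INSERT INTO users VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1);
    INSERT INTO line_items VALUES (100, 10, 3);
""")

# The kind of join an agent might run after discovering the schema:
row = con.execute("""
    SELECT u.name, SUM(li.qty)
    FROM users u
    JOIN orders o ON o.user_id = u.id
    JOIN line_items li ON li.order_id = o.id
    GROUP BY u.name
""").fetchone()
print(row)  # ('Ada', 3)
```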
The `docker/compose.yaml` starts two containers:

- tablix-server (`jchristn77/tablix-server`) — the REST API and MCP server. Ports 9100 (REST) and 9102 (MCP) are exposed. The `tablix.json` configuration, `database.db` SQLite file, and `logs/` directory are bind-mounted from the `docker/` directory so data persists across restarts.
- tablix-ui (`jchristn77/tablix-ui`) — the dashboard, served via nginx on port 9101. It connects to the server using the `TABLIX_SERVER_URL` environment variable.
To run the server standalone with `docker run`:

```shell
docker run -d \
  -p 9100:9100 \
  -p 9102:9102 \
  -v $(pwd)/tablix.json:/app/tablix.json \
  -v $(pwd)/database.db:/app/database.db \
  -v $(pwd)/logs:/app/logs \
  jchristn77/tablix-server:v0.1.0
```

To run the dashboard standalone:
```shell
docker run -d \
  -p 9101:9101 \
  -e TABLIX_SERVER_URL=http://host.docker.internal:9100 \
  jchristn77/tablix-ui:v0.1.0
```

To restore the Docker environment to its default state (resets tablix.json and database.db to their original contents):
```shell
cd docker/factory
./reset.sh   # Linux/Mac
reset.bat    # Windows
```

To build and push Docker images from source:

```shell
build-server.bat v0.1.0
build-dashboard.bat v0.1.0
```

To build and run the server from source:

```shell
cd src
dotnet build
dotnet run --project Tablix.Server
```

The server creates a default `tablix.json` on first run with a sample SQLite database entry. Swagger UI is available at http://localhost:9100/swagger.
To run the dashboard in development mode:

```shell
cd dashboard
npm install
npm run dev
```

To run the test suite:

```shell
cd src
dotnet test
```

Tablix can automatically install its MCP configuration into supported AI clients:

```shell
dotnet run --project src/Tablix.Server -- --install-mcp
```

This detects and patches configuration for:
| Client | Config File |
|---|---|
| Claude Code | ~/.claude.json |
| Cursor | ~/.cursor/mcp.json |
| Codex | ~/.codex/config.json |
| Gemini | ~/.gemini/settings.json |
After installing or updating MCP configuration, restart your AI agent or client to pick up the changes.
To configure manually, add to your client's MCP settings:
```json
{
  "mcpServers": {
    "tablix": {
      "type": "http",
      "url": "http://localhost:9102/rpc"
    }
  }
}
```

Tablix exposes six MCP tools. The recommended discovery flow for AI agents is:

1. `tablix_discover_databases` — List all configured databases to identify which ones are available
2. `tablix_list_tables` — List the tables in a specific database
3. `tablix_discover_table` — Get full geometry (columns, types, PKs, FKs, indexes) for a specific table
4. `tablix_execute_query` — Execute a SQL query once you understand the schema
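Under the hood, each step of this flow is an MCP tool invocation. Below is a minimal sketch of the JSON-RPC request bodies a client would POST to http://localhost:9102/rpc, assuming the standard MCP `tools/call` envelope — in practice your MCP client constructs these for you, and the argument values are examples only:

```python
import json

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 body for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# The four-step discovery flow as request bodies:
step1 = mcp_tool_call("tablix_discover_databases", {"maxResults": 100})
step2 = mcp_tool_call("tablix_list_tables",
                      {"databaseId": "db_sample_sqlite"}, 2)
step3 = mcp_tool_call("tablix_discover_table",
                      {"databaseId": "db_sample_sqlite",
                       "tableName": "users"}, 3)
step4 = mcp_tool_call("tablix_execute_query",
                      {"databaseId": "db_sample_sqlite",
                       "query": "SELECT COUNT(*) FROM users"}, 4)
print(json.dumps(step1, indent=2))
```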
`tablix_discover_databases` — List all configured databases with their connection details and user-supplied context.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `maxResults` | integer | No | Maximum results to return (1–1000, default 100) |
| `skip` | integer | No | Number of records to skip (default 0) |
| `filter` | string | No | Filter by database ID or name |
Get full schema geometry for every table in a database at once. This can produce large responses — prefer `tablix_list_tables` + `tablix_discover_table` for targeted discovery.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `databaseId` | string | Yes | Database entry ID |
`tablix_list_tables` — List all tables in a database with their schema names and column counts.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `databaseId` | string | Yes | Database entry ID |
Returns each table's `SchemaName`, `TableName`, and `Columns` (column count), along with the database `Context` and `IsCrawled` status.
`tablix_discover_table` — Get full geometry for a single table — columns (names, data types, primary keys, nullability, defaults), foreign keys, and indexes.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `databaseId` | string | Yes | Database entry ID |
| `tableName` | string | Yes | Table name to retrieve geometry for |
`tablix_execute_query` — Execute a SQL query against a database. The query must be a single statement (no semicolons) and the statement type must be in the database's `AllowedQueries` list.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `databaseId` | string | Yes | Database entry ID |
| `query` | string | Yes | SQL query to execute |
Returns `Success`, `RowsReturned`, `TotalMs`, and a `Data` object containing `Columns` (name and type) and `Rows`.
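Since rows arrive separately from the column metadata, a client typically zips them back together. The sketch below assumes `Name`/`Type` keys inside `Columns` and a list-of-lists layout for `Rows` — both are illustrative guesses at the shape, and the sample values are invented:

```python
# Illustrative result in the described shape; the key names inside
# "Columns" (Name/Type) and all sample values are assumptions.
sample = {
    "Success": True,
    "RowsReturned": 2,
    "TotalMs": 3.2,
    "Data": {
        "Columns": [{"Name": "id", "Type": "INTEGER"},
                    {"Name": "email", "Type": "TEXT"}],
        "Rows": [[1, "a@example.com"], [2, "b@example.com"]],
    },
}

def rows_as_dicts(result):
    """Re-key each row by column name for easier downstream use."""
    names = [c["Name"] for c in result["Data"]["Columns"]]
    return [dict(zip(names, row)) for row in result["Data"]["Rows"]]

print(rows_as_dicts(sample))
```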
Update the user-supplied context description for a database. The context helps AI agents understand what the database contains, how its tables relate, and what queries are useful.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `databaseId` | string | Yes | Database entry ID |
| `context` | string | Yes | New context description |
Tablix is configured via `tablix.json`:
```json
{
  "Rest": {
    "Hostname": "*",
    "Port": 9100,
    "Ssl": false,
    "McpPort": 9102
  },
  "Logging": {
    "Servers": [],
    "ConsoleLogging": true,
    "FileLogging": true,
    "LogDirectory": "./logs/",
    "LogFilename": "tablix.log",
    "MinimumSeverity": 0,
    "EnableColors": true
  },
  "Databases": [
    {
      "Id": "db_sample_sqlite",
      "Type": "Sqlite",
      "Filename": "./database.db",
      "AllowedQueries": ["SELECT", "INSERT", "UPDATE", "DELETE"],
      "Context": "Description of the database for AI agents..."
    }
  ],
  "ApiKeys": ["tablixadmin"]
}
```

SQLite:
```json
{
  "Id": "db_my_sqlite",
  "Name": "Local App Database",
  "Type": "Sqlite",
  "Filename": "./myapp.db",
  "AllowedQueries": ["SELECT"],
  "Context": "Local SQLite database for the myapp application."
}
```

PostgreSQL:
```json
{
  "Id": "db_my_postgres",
  "Name": "Staging Orders",
  "Type": "Postgresql",
  "Hostname": "pg.example.com",
  "Port": 5432,
  "User": "readonly",
  "Password": "secret",
  "DatabaseName": "orders",
  "Schema": "public",
  "AllowedQueries": ["SELECT"],
  "Context": "Staging PostgreSQL database for the orders service."
}
```

MySQL:
```json
{
  "Id": "db_my_mysql",
  "Name": "Production Users",
  "Type": "Mysql",
  "Hostname": "mysql.example.com",
  "Port": 3306,
  "User": "readonly",
  "Password": "secret",
  "DatabaseName": "users",
  "AllowedQueries": ["SELECT"],
  "Context": "Production MySQL database for user accounts."
}
```

SQL Server:
```json
{
  "Id": "db_my_sqlserver",
  "Name": "Analytics Warehouse",
  "Type": "SqlServer",
  "Hostname": "sql.example.com",
  "Port": 1433,
  "User": "readonly",
  "Password": "secret",
  "DatabaseName": "analytics",
  "Schema": "dbo",
  "AllowedQueries": ["SELECT"],
  "Context": "SQL Server analytics warehouse for reporting."
}
```

| Field | Description |
|---|---|
| `Id` | Unique identifier (e.g. `db_myapp`) |
| `Type` | `Sqlite`, `Postgresql`, `Mysql`, or `SqlServer` |
| `Hostname` | Database host (network databases) |
| `Port` | Database port (network databases) |
| `User` | Database username |
| `Password` | Database password |
| `DatabaseName` | Database name |
| `Schema` | Schema name (default `public`) |
| `Filename` | File path (SQLite) |
| `AllowedQueries` | Permitted SQL statement types |
| `Context` | Free-form description for AI agents |
| Field | Description |
|---|---|
| `Servers` | Array of syslog server objects (optional, default empty) |
| `ConsoleLogging` | Enable console output (default `true`) |
| `FileLogging` | Enable file logging (default `true`) |
| `LogDirectory` | Directory for log files (default `./logs/`) |
| `LogFilename` | Log filename (default `tablix.log`) |
| `MinimumSeverity` | 0 = debug, 1 = info, 2 = warn, 3 = error, 4 = alert, 5 = critical, 6 = emergency (default 0) |
| `EnableColors` | Colored console output (default `true`) |
Each syslog server entry has `Hostname` and `Port`:
```json
{
  "Logging": {
    "Servers": [
      { "Hostname": "127.0.0.1", "Port": 514 }
    ],
    "ConsoleLogging": true,
    "FileLogging": true,
    "LogDirectory": "./logs/",
    "LogFilename": "tablix.log",
    "MinimumSeverity": 0,
    "EnableColors": true
  }
}
```

When syslog servers are configured, log messages are forwarded to each server in addition to console and file output.
All endpoints except health checks require `Authorization: Bearer <api-key>`. See REST_API.md for full request/response details. A Postman collection is included in the repository. Swagger UI is available at `/swagger` when the server is running.
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | `/` | No | Health check with version, uptime |
| HEAD | `/` | No | Lightweight health check (200 OK) |
| GET | `/v1/database` | Yes | List databases (paginated) |
| GET | `/v1/database/{id}` | Yes | Get database details and schema geometry |
| POST | `/v1/database` | Yes | Add a database entry |
| PUT | `/v1/database/{id}` | Yes | Update a database entry |
| DELETE | `/v1/database/{id}` | Yes | Delete a database entry |
| POST | `/v1/database/{id}/crawl` | Yes | Re-crawl database schema |
| POST | `/v1/database/{id}/query` | Yes | Execute a SQL query |
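As an example of calling the query endpoint from Python, the sketch below builds (but does not send) an authenticated request using the default API key from the sample configuration. The `{"Query": ...}` body field is a hypothetical placeholder — consult REST_API.md for the actual request schema.

```python
import json
import urllib.request

api_key = "tablixadmin"  # default key from the sample configuration
url = "http://localhost:9100/v1/database/db_sample_sqlite/query"

# NOTE: the "Query" field name is a guess for illustration only;
# see REST_API.md for the real request body schema.
body = json.dumps({"Query": "SELECT * FROM users"}).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would send it to a running server.
print(req.get_method(), req.full_url)
```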
- Only statement types listed in `AllowedQueries` are permitted
- Multi-statement queries (containing `;`) are rejected
- Leading SQL comments are stripped before validation
- This is a heuristic safeguard, not a security boundary — always use database-level permissions for production safety
- Passwords in `tablix.json` are stored in cleartext — protect the file with OS-level permissions
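The checks above can be modeled in a few lines of Python. This is an illustrative sketch of the described heuristic, not Tablix's actual implementation:

```python
ALLOWED = {"SELECT"}  # e.g. a read-only AllowedQueries list

def validate(query, allowed=ALLOWED):
    """Illustrative model of the described checks: strip leading SQL
    comments, reject multi-statement input, require an allowed type."""
    q = query.strip()
    # Strip leading line (--) and block (/* ... */) comments.
    while True:
        if q.startswith("--"):
            q = q.split("\n", 1)[1] if "\n" in q else ""
        elif q.startswith("/*") and "*/" in q:
            q = q.split("*/", 1)[1]
        else:
            break
        q = q.strip()
    if ";" in q:
        return False  # multi-statement queries rejected outright
    tokens = q.split()
    return bool(tokens) and tokens[0].upper() in allowed

print(validate("-- who signed up?\nSELECT * FROM users"))  # True
print(validate("SELECT 1; DROP TABLE users"))              # False
print(validate("DELETE FROM users"))                       # False (not allowed)
```

Note that, as the docs say, this kind of keyword check is a safeguard rather than a security boundary — database-level permissions remain the real control.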
If a database crawl fails on startup (unreachable host, bad credentials, missing file):
- The server continues to start — crawl failures are non-fatal
- The affected database reports `IsCrawled: false` with a `CrawlError` message
- Re-crawl at any time via `POST /v1/database/{id}/crawl` or the dashboard
- Query execution may still work even when the crawl has not completed
- Report bugs and request features at https://github.com/jchristn/Tablix/issues
- Start or join discussions at https://github.com/jchristn/Tablix/discussions
MIT License - Copyright (c) 2026 Joel Christner and Adam Wingate
