A Go-based Model Context Protocol (MCP) Host service with AI agent capabilities. This service acts as an intelligent agent that connects to multiple MCP servers, integrates with Ollama (OpenAI-compatible LLM), and provides an agentic AI experience to users.
- **MCP Protocol Support**: Full implementation of the Model Context Protocol
  - Stdio and HTTP transports
  - Tools, Resources, and Prompts primitives
  - Dynamic capability negotiation
  - Notification handling (`list_changed` events)
- **AI Agent Orchestration**: Intelligent reasoning loop
  - Context gathering from MCP resources
  - Tool discovery and execution
  - Multi-iteration planning and execution
  - Error recovery
- **LLM Integration**: OpenAI-compatible API
  - Ollama support out of the box
  - Streaming responses
  - Function calling for tool execution
- **Conversation Management**: Full persistence
  - PostgreSQL-backed conversation history
  - Multi-user support with authentication
  - Session lifecycle management
- **Production Ready**
  - RESTful API with WebSocket streaming
  - JWT authentication support
  - Kubernetes deployment via Helm
  - Prometheus metrics
  - Structured logging
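The orchestration loop described above can be sketched in a few lines of Go. This is illustrative only (the type and function names here are invented for the sketch; the real implementation lives in `pkg/agent`): the LLM is called with the conversation so far, any requested tool calls are executed and their results fed back, and the loop ends when the model answers without tools or an iteration cap is reached.

```go
package main

import "fmt"

// toolCall and llmReply are illustrative stand-ins for the orchestrator's
// real message types.
type toolCall struct{ Name, Args string }

type llmReply struct {
	Content   string
	ToolCalls []toolCall
}

// runAgent loops until the LLM stops requesting tools or maxIter is hit.
func runAgent(llm func(history []string) llmReply, callTool func(toolCall) string, maxIter int) string {
	history := []string{"user: What's the weather in New York?"}
	for i := 0; i < maxIter; i++ {
		reply := llm(history)
		if len(reply.ToolCalls) == 0 {
			return reply.Content // final answer, no more tools requested
		}
		// Execute each requested tool and feed the result back to the model.
		for _, tc := range reply.ToolCalls {
			history = append(history, "tool "+tc.Name+": "+callTool(tc))
		}
	}
	return "max iterations reached"
}

func main() {
	// Stub LLM: request the weather tool once, then answer.
	llm := func(h []string) llmReply {
		if len(h) == 1 {
			return llmReply{ToolCalls: []toolCall{{Name: "get_forecast", Args: `{"city":"NYC"}`}}}
		}
		return llmReply{Content: "It is sunny in New York."}
	}
	callTool := func(tc toolCall) string { return "sunny, 22°C" }
	fmt.Println(runAgent(llm, callTool, 5))
}
```

The iteration cap is what makes the loop safe against a model that keeps requesting tools indefinitely; the real orchestrator adds error recovery around each tool call as well.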
Deploy go-mcp-host as a microservice in your infrastructure.

Prerequisites:

- Ollama running locally or remotely
- PostgreSQL database
- (Optional) Kubernetes cluster for production deployment
```bash
# Clone the repository
git clone https://github.com/d4l-data4life/go-mcp-host.git
cd go-mcp-host

# Copy and configure
cp config.example.yaml config.yaml
# Edit config.yaml to add your MCP servers

# Start PostgreSQL
make docker-database

# Run the service
make run
```

The service will be available at http://localhost:8080.
```bash
# See detailed deployment guide
cd deploy
cat README.md

# Quick deploy
helm install go-mcp-host ./helm-chart \
  -f examples/local/values.yaml \
  --namespace mcp-host
```

See deploy/README.md for full deployment documentation.
Embed MCP Host functionality into your own Go application.
```bash
go get github.com/d4l-data4life/go-mcp-host
```

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/d4l-data4life/go-mcp-host/pkg/config"
	"github.com/d4l-data4life/go-mcp-host/pkg/mcphost"
	"github.com/google/uuid"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func main() {
	// Set up the database connection
	db, err := gorm.Open(postgres.Open("host=localhost port=5432 user=mcphost dbname=mcphost password=postgres sslmode=disable"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Create the MCP Host with one stdio-based MCP server
	host, err := mcphost.NewHost(context.Background(), mcphost.Config{
		MCPServers: []config.MCPServerConfig{
			{
				Name:        "weather",
				Type:        "stdio",
				Command:     "npx",
				Args:        []string{"-y", "@h1deya/mcp-server-weather"},
				Enabled:     true,
				Description: "Weather information server",
			},
		},
		OpenAIBaseURL: "http://localhost:11434",
		DB:            db,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Chat with the agent
	response, err := host.Chat(context.Background(), mcphost.ChatRequest{
		ConversationID: uuid.New(),
		UserID:         uuid.New(),
		UserMessage:    "What's the weather in New York?",
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(response.Message.Content)
}
```

Run examples:
```bash
# Simple library usage
go run examples/simple_library/simple_library.go

# Web server integration
go run examples/embed_in_webserver/embed_in_webserver.go

# Agent package (mid-level)
go run examples/agent_chat/agent_chat.go

# Low-level MCP usage
go run examples/ollama_with_mcp/ollama_with_mcp.go
```

See examples/README.md for detailed documentation.
```
┌────────────────────────────────────────────────────┐
│                Frontend (React/API)                │
└────────────────────────┬───────────────────────────┘
                         │ HTTP/WebSocket
┌────────────────────────▼───────────────────────────┐
│                    go-mcp-host                     │
│ ┌───────────────────────────────────────────────┐  │
│ │              Agent Orchestrator               │  │
│ │ - Context gathering                           │  │
│ │ - Tool execution loop                         │  │
│ │ - Response generation                         │  │
│ └───────────┬───────────────────────┬───────────┘  │
│             │                       │              │
│ ┌───────────▼──────────┐  ┌─────────▼──────────┐   │
│ │     MCP Manager      │  │     LLM Client     │   │
│ │ - Session mgmt       │  │ - Ollama API       │   │
│ │ - Client pooling     │  │ - Function calls   │   │
│ └───────────┬──────────┘  └────────────────────┘   │
│             │                                      │
└─────────────┼──────────────────────────────────────┘
              │
    ┌─────────┼─────────┐
    │         │         │
┌───▼───┐ ┌───▼───┐ ┌───▼───┐
│  MCP  │ │  MCP  │ │  MCP  │
│Server1│ │Server2│ │Server3│
│(stdio)│ │(HTTP) │ │(stdio)│
└───────┘ └───────┘ └───────┘
```
Configure MCP servers in `config.yaml`:

```yaml
mcp_servers:
  # Stdio server example
  - name: weather
    type: stdio
    command: npx
    args:
      - "-y"
      - "@h1deya/mcp-server-weather"
    enabled: true
    description: "Weather information server"

  # HTTP server example
  - name: my-api
    type: http
    url: "https://api.example.com/mcp"
    mode: batch # HTTP mode: batch (JSON) or stream (SSE)
    headers:
      X-API-Key: "your-api-key"
    forwardBearer: true # Forward the user's bearer token
    enabled: true
    description: "My custom MCP server"
```

go-mcp-host speaks the OpenAI Chat Completions API natively. Configure the following environment variables (or matching config.yaml keys):
- `OPENAI_API_KEY`
- `OPENAI_BASE_URL` (defaults to `https://api.openai.com/v1`)
- `OPENAI_DEFAULT_MODEL` (defaults to `gpt-4o-mini`)
- `OPENAI_TEMPERATURE`, `OPENAI_MAX_TOKENS`, `OPENAI_TOP_P`, `OPENAI_REQUEST_TIMEOUT`
Using Ollama: run `ollama serve` locally, then set `OPENAI_BASE_URL="http://localhost:11434"` (the Go SDK automatically appends `/v1`) and leave `OPENAI_API_KEY` empty. Any OpenAI-compatible provider can be used the same way.
Library users can bypass the built-in client by supplying a custom `llm.Client` through `mcphost.Config.LLMClient`.
Key environment variables:
- `PORT` - HTTP port (default: 8080)
- `CORS_HOSTS` - Allowed CORS origins
- `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASS` - Database connection
- `REMOTE_KEYS_URL` - JWT validation endpoint (optional)
- `DEBUG` - Enable debug logging
- `OPENAI_API_KEY`, `OPENAI_BASE_URL`, `OPENAI_DEFAULT_MODEL` - LLM configuration
See config.example.yaml for all options.
- `POST /api/v1/auth/register` - Register new user
- `POST /api/v1/auth/login` - Login
- `GET /api/v1/conversations` - List conversations
- `POST /api/v1/conversations` - Create conversation
- `DELETE /api/v1/conversations/:id` - Delete conversation
- `POST /api/v1/messages` - Send message
- `WS /api/v1/messages/stream` - Stream responses
- `GET /api/v1/mcp/servers` - List MCP servers
- `GET /api/v1/mcp/tools` - List available tools
See swagger/api.yml for the full API specification.
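As a sketch of how a client might call these endpoints from Go (the request body field names here are assumptions; `swagger/api.yml` has the authoritative schema):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newMessageRequest builds an authenticated request for POST /api/v1/messages.
// The body fields ("conversation_id", "content") are illustrative guesses,
// not the documented schema.
func newMessageRequest(baseURL, token, conversationID, text string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{
		"conversation_id": conversationID,
		"content":         text,
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/api/v1/messages", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	// JWT obtained from POST /api/v1/auth/login
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}

func main() {
	req, err := newMessageRequest("http://localhost:8080", "your-jwt", "123e4567-e89b-12d3-a456-426614174000", "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Sending the request with `http.DefaultClient.Do(req)` requires a running go-mcp-host instance, so it is left out of the sketch.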
- Go 1.24+
- Docker & Docker Compose
- PostgreSQL
- Ollama
- golangci-lint (for linting)
```bash
# Build binary
make build

# Build Docker image
make docker-build

# Run tests
make test

# Run linter
make lint
```

```
go-mcp-host/
├── cmd/
│   └── api/          # Main application entry point
├── pkg/
│   ├── agent/        # Agent orchestration logic
│   ├── mcp/          # MCP integration helpers
│   │   ├── helpers/  # Shared utilities (schema conversions, etc.)
│   │   └── manager/  # Session management built on modelcontextprotocol/go-sdk
│   ├── llm/          # LLM integration (Ollama)
│   ├── handlers/     # HTTP handlers
│   ├── models/       # Database models
│   ├── config/       # Configuration
│   ├── auth/         # Authentication
│   ├── server/       # HTTP server setup
│   └── mcphost/      # Public API for library usage
├── deploy/
│   ├── helm-chart/   # Kubernetes Helm chart
│   └── examples/     # Example configurations
├── docs/             # Additional documentation
├── examples/         # Usage examples
├── sql/              # Database migrations
└── swagger/          # API specification
```
Note: The low-level MCP protocol, client, and transport implementations are provided by the official github.com/modelcontextprotocol/go-sdk. This repository focuses on orchestration, caching, and product-specific integrations on top of that client.
```bash
# Run all tests
make test

# Run with coverage
make test-coverage

# Run integration tests (requires a running database)
make docker-database
make test-integration
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

Code style:

- Follow standard Go conventions
- Run `gofmt` and `golangci-lint`
- Write tests for new features
- Update documentation
See .cursorrules for detailed coding standards.
Apache License 2.0 - see LICENSE for details.
- Built with go-svc framework
- Powered by the official OpenAI Go SDK and compatible endpoints such as Ollama
- Implements Model Context Protocol
- GitHub Issues: Report bugs or request features
- Documentation: see the docs/ directory for full documentation
Made with ❤️ by Data4Life