go-mcp-host

A Go-based Model Context Protocol (MCP) Host service with AI agent capabilities. The service acts as an intelligent agent that connects to multiple MCP servers, talks to any OpenAI-compatible LLM (with Ollama supported out of the box), and provides an agentic AI experience to users.

Features

  • MCP Protocol Support: Full implementation of the Model Context Protocol

    • Stdio and HTTP transports
    • Tools, Resources, and Prompts primitives
    • Dynamic capability negotiation
    • Notification handling (list_changed events)
  • AI Agent Orchestration: Intelligent reasoning loop

    • Context gathering from MCP resources
    • Tool discovery and execution
    • Multi-iteration planning and execution
    • Error recovery
  • LLM Integration: OpenAI-compatible API

    • Ollama support out of the box
    • Streaming responses
    • Function calling for tool execution
  • Conversation Management: Full persistence

    • PostgreSQL-backed conversation history
    • Multi-user support with authentication
    • Session lifecycle management
  • Production Ready

    • RESTful API with WebSocket streaming
    • JWT authentication support
    • Kubernetes deployment via Helm
    • Prometheus metrics
    • Structured logging

Quick Start

Option 1: Use as a Standalone Service

Deploy go-mcp-host as a microservice in your infrastructure.

Prerequisites

  • Ollama running locally or remotely
  • PostgreSQL database
  • (Optional) Kubernetes cluster for production deployment

Run Locally

# Clone the repository
git clone https://github.com/d4l-data4life/go-mcp-host.git
cd go-mcp-host

# Copy and configure
cp config.example.yaml config.yaml
# Edit config.yaml to add your MCP servers

# Start PostgreSQL
make docker-database

# Run the service
make run

The service will be available at http://localhost:8080.

Deploy to Kubernetes

# See detailed deployment guide
cd deploy
cat README.md

# Quick deploy
helm install go-mcp-host ./helm-chart \
  -f examples/local/values.yaml \
  --namespace mcp-host

See deploy/README.md for full deployment documentation.

Option 2: Use as a Go Library

Embed MCP Host functionality into your own Go application.

Installation

go get github.com/d4l-data4life/go-mcp-host

Basic Usage

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/d4l-data4life/go-mcp-host/pkg/config"
    "github.com/d4l-data4life/go-mcp-host/pkg/mcphost"
    "github.com/google/uuid"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

func main() {
    // Setup database
    db, err := gorm.Open(postgres.Open("host=localhost port=5432 user=mcphost dbname=mcphost password=postgres sslmode=disable"), &gorm.Config{})
    if err != nil {
        log.Fatal(err)
    }

    // Create MCP Host
    host, err := mcphost.NewHost(context.Background(), mcphost.Config{
        MCPServers: []config.MCPServerConfig{
            {
                Name:        "weather",
                Type:        "stdio",
                Command:     "npx",
                Args:        []string{"-y", "@h1deya/mcp-server-weather"},
                Enabled:     true,
                Description: "Weather information server",
            },
        },
        OpenAIBaseURL: "http://localhost:11434",
        DB:            db,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Chat with the agent
    response, err := host.Chat(context.Background(), mcphost.ChatRequest{
        ConversationID: uuid.New(),
        UserID:         uuid.New(),
        UserMessage:    "What's the weather in New York?",
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(response.Message.Content)
}

Run examples:

# Simple library usage
go run examples/simple_library/simple_library.go

# Web server integration
go run examples/embed_in_webserver/embed_in_webserver.go

# Agent package (mid-level)
go run examples/agent_chat/agent_chat.go

# Low-level MCP usage
go run examples/ollama_with_mcp/ollama_with_mcp.go

See examples/README.md for detailed documentation.

Architecture

┌────────────────────────────────────────────────────┐
│               Frontend (React/API)                 │
└────────────────────────┬───────────────────────────┘
                         │ HTTP/WebSocket
┌────────────────────────▼───────────────────────────┐
│                   go-mcp-host                      │
│  ┌───────────────────────────────────────────────┐ │
│  │  Agent Orchestrator                           │ │
│  │  - Context gathering                          │ │
│  │  - Tool execution loop                        │ │
│  │  - Response generation                        │ │
│  └───────────┬───────────────────────┬───────────┘ │
│              │                       │             │
│  ┌───────────▼──────────┐  ┌─────────▼──────────┐  │
│  │  MCP Manager         │  │  LLM Client        │  │
│  │  - Session mgmt      │  │  - Ollama API      │  │
│  │  - Client pooling    │  │  - Function calls  │  │
│  └───────────┬──────────┘  └────────────────────┘  │
│              │                                     │
└──────────────┼─────────────────────────────────────┘
               │
    ┌──────────┼──────────┐
    │          │          │
┌───▼───┐  ┌───▼───┐  ┌───▼───┐
│ MCP   │  │ MCP   │  │ MCP   │
│Server1│  │Server2│  │Server3│
│(stdio)│  │(HTTP) │  │(stdio)│
└───────┘  └───────┘  └───────┘
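Conceptually, the Agent Orchestrator's multi-iteration loop works like this. The sketch below is self-contained and uses stub stand-ins for the LLM and an MCP tool — the real types live in pkg/agent and pkg/mcp and differ from these simplified ones:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real agent types.
type toolCall struct{ name, args string }

type llmReply struct {
	content   string
	toolCalls []toolCall
}

// stubLLM simulates the LLM: it requests a tool on the first turn and
// produces a final answer once the tool result appears in the transcript.
func stubLLM(transcript []string) llmReply {
	for _, m := range transcript {
		if m == "tool:get_weather -> sunny" {
			return llmReply{content: "It is sunny in Berlin."}
		}
	}
	return llmReply{toolCalls: []toolCall{{name: "get_weather", args: `{"city":"Berlin"}`}}}
}

// stubTool simulates invoking a tool on an MCP server.
func stubTool(name, args string) string { return "sunny" }

// runAgent mirrors the orchestration loop: call the LLM, execute any
// requested tools, feed results back, and stop when a final answer arrives.
func runAgent(userMsg string, maxIters int) string {
	transcript := []string{"user:" + userMsg}
	for i := 0; i < maxIters; i++ {
		reply := stubLLM(transcript)
		if len(reply.toolCalls) == 0 {
			return reply.content
		}
		for _, tc := range reply.toolCalls {
			result := stubTool(tc.name, tc.args)
			transcript = append(transcript, "tool:"+tc.name+" -> "+result)
		}
	}
	return "iteration limit reached"
}

func main() {
	fmt.Println(runAgent("What's the weather in Berlin?", 5))
}
```

The iteration cap is what keeps a misbehaving model from looping forever, which is why the feature list above mentions multi-iteration planning and error recovery.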

Configuration

MCP Servers

Configure MCP servers in config.yaml:

mcp_servers:
  # Stdio server example
  - name: weather
    type: stdio
    command: npx
    args:
      - "-y"
      - "@h1deya/mcp-server-weather"
    enabled: true
    description: "Weather information server"
  
  # HTTP server example
  - name: my-api
    type: http
    url: "https://api.example.com/mcp"
    mode: batch  # HTTP mode: batch (JSON) or stream (SSE)
    headers:
      X-API-Key: "your-api-key"
    forwardBearer: true  # Forward user's bearer token
    enabled: true
    description: "My custom MCP server"

LLM Configuration

go-mcp-host speaks the OpenAI Chat Completions API natively. Configure the following environment variables (or matching config.yaml keys):

  • OPENAI_API_KEY
  • OPENAI_BASE_URL (defaults to https://api.openai.com/v1)
  • OPENAI_DEFAULT_MODEL (defaults to gpt-4o-mini)
  • OPENAI_TEMPERATURE, OPENAI_MAX_TOKENS, OPENAI_TOP_P, OPENAI_REQUEST_TIMEOUT

Using Ollama: run ollama serve locally, then set OPENAI_BASE_URL="http://localhost:11434" (the Go SDK automatically appends /v1) and leave OPENAI_API_KEY empty. Any OpenAI-compatible provider can be used in the same way.
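For example, a local Ollama setup might be configured like this (the model name is an assumption — use any model you have pulled):

```shell
export OPENAI_BASE_URL="http://localhost:11434"   # the Go SDK appends /v1
export OPENAI_DEFAULT_MODEL="llama3.1"            # hypothetical; any pulled Ollama model works
# leave OPENAI_API_KEY unset; Ollama does not require a key
```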

Library users can bypass the built-in client by supplying a custom llm.Client through mcphost.Config.LLMClient.

Environment Variables

Key environment variables:

  • PORT - HTTP port (default: 8080)
  • CORS_HOSTS - Allowed CORS origins
  • DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASS - Database connection
  • REMOTE_KEYS_URL - JWT validation endpoint (optional)
  • DEBUG - Enable debug logging
  • OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_DEFAULT_MODEL - LLM configuration

See config.example.yaml for all options.

API Documentation

REST Endpoints

  • POST /api/v1/auth/register - Register new user
  • POST /api/v1/auth/login - Login
  • GET /api/v1/conversations - List conversations
  • POST /api/v1/conversations - Create conversation
  • DELETE /api/v1/conversations/:id - Delete conversation
  • POST /api/v1/messages - Send message
  • WS /api/v1/messages/stream - Stream responses
  • GET /api/v1/mcp/servers - List MCP servers
  • GET /api/v1/mcp/tools - List available tools

See swagger/api.yml for the full API specification.

Development

Prerequisites

  • Go 1.24+
  • Docker & Docker Compose
  • PostgreSQL
  • Ollama
  • golangci-lint (for linting)

Building

# Build binary
make build

# Build Docker image
make docker-build

# Run tests
make test

# Run linter
make lint

Project Structure

go-mcp-host/
├── cmd/
│   └── api/              # Main application entry point
├── pkg/
│   ├── agent/            # Agent orchestration logic
│   ├── mcp/              # MCP integration helpers
│   │   ├── helpers/      # Shared utilities (schema conversions, etc.)
│   │   └── manager/      # Session management built on modelcontextprotocol/go-sdk
│   ├── llm/              # LLM integration (Ollama)
│   ├── handlers/         # HTTP handlers
│   ├── models/           # Database models
│   ├── config/           # Configuration
│   ├── auth/             # Authentication
│   ├── server/           # HTTP server setup
│   └── mcphost/          # Public API for library usage
├── deploy/
│   ├── helm-chart/       # Kubernetes Helm chart
│   └── examples/         # Example configurations
├── docs/                 # Additional documentation
├── examples/             # Usage examples
├── sql/                  # Database migrations
└── swagger/              # API specification

Note: The low-level MCP protocol, client, and transport implementations are provided by the official github.com/modelcontextprotocol/go-sdk. This repository focuses on orchestration, caching, and product-specific integrations on top of that client.

Testing

# Run all tests
make test

# Run with coverage
make test-coverage

# Run integration tests (requires running database)
make docker-database
make test-integration

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Code Standards

  • Follow standard Go conventions
  • Run gofmt and golangci-lint
  • Write tests for new features
  • Update documentation

See .cursorrules for detailed coding standards.

Documentation

Additional documentation lives in the docs/ directory. See also deploy/README.md for deployment and examples/README.md for library usage examples.
License

Apache License 2.0 - see LICENSE for details.


Made with ❤️ by Data4Life
