🤖 Copilot OpenAI API (Go Edition)

Go 1.21+ Docker License

A high-performance Go proxy server that turns GitHub Copilot's chat completion/embeddings capabilities into an OpenAI-compatible API service, with Anthropic compatibility and robust security.

I wouldn't call this a "fork", but it is heavily inspired by copilot-openai-api. If you want to use Python instead of Go, check out that project.

✨ Key Features

🚀 Advanced Integration

  • Seamless GitHub Copilot chat completion API proxy (/v1/chat/completions)
  • Embeddings API proxy (/v1/embeddings)
  • Anthropic API compatibility (/v1/messages, experimental)
  • Real-time streaming response support
  • High-performance, concurrent request handling

πŸ” Security & Reliability

  • Secure authentication middleware (Bearer token)
  • Automatic Copilot token management and refresh
  • Built-in CORS support for web applications
  • Clear error handling (401, 403, etc.)

💻 Universal Compatibility

  • Cross-platform config auto-detection (Windows, Unix, macOS)
  • Docker containerization ready
  • Flexible deployment and configuration

🚀 Prerequisites

  • Go 1.21+ (to build from source)
  • An active GitHub Copilot subscription with a valid Copilot configuration (see "Copilot OAuth Token Auto-Detection" below)
  • Docker (optional, for containerized deployment)

📦 Installation

  1. Clone the repository:

git clone https://github.com/teamcoltra/go-copilot-api.git
cd go-copilot-api

  2. Build the binary:

go build -o bin/go-copilot-api ./cmd/go-copilot-api
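
The feature list calls the project Docker-ready. Assuming a Dockerfile is present at the repository root, a containerized build and run might look like this (the image tag and the environment values are illustrative):

# Build the image from the repository root (assumes a Dockerfile is present)
docker build -t go-copilot-api .
# Run it, publishing the default port and passing tokens via environment variables
docker run --rm -p 9191:9191 \
  -e COPILOT_TOKEN=your_access_token_here \
  -e COPILOT_OAUTH_TOKEN=your_copilot_oauth_token \
  go-copilot-api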

βš™οΈ Configuration

Set up environment variables (or use a .env file):

Variable              | Description                                       | Default
COPILOT_TOKEN         | Required. API access token for authentication.    | Randomly generated
COPILOT_OAUTH_TOKEN   | Copilot OAuth token (auto-detected if not set)    | (auto)
COPILOT_SERVER_PORT   | Port to listen on (e.g. 8080 for :8080)           | 9191
CORS_ALLOWED_ORIGINS  | Comma-separated list of allowed CORS origins      | *
DEBUG                 | Enable debug logging                              | false
DEFAULT_MODEL         | Default model to use if not specified in request  | (none)

Copilot OAuth Token Auto-Detection:

  • If COPILOT_OAUTH_TOKEN is not set, the app will look for your Copilot config:
    • Unix/macOS: ~/.config/github-copilot/apps.json
    • Windows: %LOCALAPPDATA%/github-copilot/apps.json
  • The first available oauth_token will be used.

How to get a valid Copilot configuration?

  • Install any official GitHub Copilot plugin (VS Code, JetBrains, Vim, etc.), sign in, and the config files will be created automatically.
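
To confirm that a usable token was written, you can check the config file for the oauth_token field mentioned above (path shown for Unix/macOS; a non-zero count means at least one token is present):

# Quick sanity check for a Copilot OAuth token in the local config
grep -c oauth_token ~/.config/github-copilot/apps.json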

🖥️ Local Run

Start the server:

go run ./cmd/go-copilot-api

or

bin/go-copilot-api
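
Environment variables from the configuration table can also be supplied inline for a single run, for example to change the port and enable debug logging:

# Listen on :8080 with debug logging, for this invocation only
COPILOT_SERVER_PORT=8080 DEBUG=true go run ./cmd/go-copilot-api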

🔄 Making API Requests

Chat Completions

curl -X POST http://localhost:9191/v1/chat/completions \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello, Copilot!"}]
  }'
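
Streaming is advertised as a core feature; since the endpoint is OpenAI-compatible, a streamed request is assumed to use the standard "stream": true field, shown here with curl -N so chunks are printed as they arrive:

# -N disables curl's output buffering so streamed chunks appear immediately
curl -N -X POST http://localhost:9191/v1/chat/completions \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "stream": true,
    "messages": [{"role": "user", "content": "Hello, Copilot!"}]
  }'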

Embeddings

curl -X POST http://localhost:9191/v1/embeddings \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "copilot-text-embedding-ada-002",
    "input": ["The quick brown fox", "Jumped over the lazy dog"]
  }'

Anthropic API Compatibility (Experimental)

curl -X POST http://localhost:9191/v1/messages \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{ ...Anthropic API message format... }'
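
For reference, a minimal body in Anthropic's Messages format looks like the sketch below; the model and max_tokens values are illustrative, and field support by this proxy is untested (see the note under POST /v1/messages):

# Anthropic-style request body; exact field handling is experimental
curl -X POST http://localhost:9191/v1/messages \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Copilot!"}]
  }'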

List Available Models

Fetch the list of available models and their capabilities:

curl -X GET http://localhost:9191/v1/models

  • This endpoint does not require authentication.
  • The models list is fetched from GitHub's model catalog API at server startup and periodically refreshed (every 6 hours).
  • The response is a JSON array of model objects, including id, name, summary, and more.
  • Use the "id" field (e.g., "gpt-5-mini", "gpt-4o-mini-2024-07-18") as the "model" value in your requests.

Example: Find the correct model name for GPT-5 Mini

curl -s http://localhost:9191/v1/models | jq '.[] | select(.id | test("5-mini"))'

Default Model

You can set a default model for all requests by adding to your .env or environment:

DEFAULT_MODEL=gpt-5-mini

If a client request does not specify a "model" field, this value will be used automatically for /v1/chat/completions, /v1/embeddings, and /v1/messages.

  • If DEFAULT_MODEL is not set, and the client omits "model", no model is sent to Copilot (Copilot will auto-select).
  • If the client provides a "model", that value is always used as-is.
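
For example, a request that carries its own "model" always wins over DEFAULT_MODEL (the id below is one of the examples returned by /v1/models):

# Explicit model in the request body takes precedence over DEFAULT_MODEL
curl -X POST http://localhost:9191/v1/chat/completions \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini-2024-07-18",
    "messages": [{"role": "user", "content": "Hello, Copilot!"}]
  }'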

Example .env:

COPILOT_TOKEN=your_token_here
DEFAULT_MODEL=gpt-5-mini
COPILOT_SERVER_PORT=9191

🔌 API Reference

POST /v1/chat/completions

  • Proxies requests to GitHub Copilot's Completions API.
  • Headers: Authorization: Bearer <your_access_token>, Content-Type: application/json
  • Body: Must include "messages". You may include "model" (see /v1/models for valid values). If omitted and DEFAULT_MODEL is set, it will be injected.
  • Response: Proxied directly from Copilot; both streaming and non-streaming responses are supported.

POST /v1/embeddings

  • Proxies requests to Copilot's Embeddings API.
  • Headers: Authorization: Bearer <your_access_token>, Content-Type: application/json
  • Body: Must include "input". You may include "model" (see /v1/models). If omitted and DEFAULT_MODEL is set, it will be injected.
  • Response: JSON from Copilot's embeddings API.

POST /v1/messages

  • Converts Anthropic API format to Copilot chat completion format.
  • Headers: Authorization: Bearer <your_access_token>, Content-Type: application/json
  • Body: Anthropic-compatible. You may include "model" (see /v1/models). If omitted and DEFAULT_MODEL is set, it will be injected.
  • Response: Anthropic API-compatible response.

Note: Claude Code/Anthropic compatibility is currently untested. If you use Claude Code or Anthropic clients and encounter issues, we would appreciate any PRs or feedback to help improve support!

GET /v1/models

  • Returns a list of available models and their capabilities.
  • No authentication required.
  • Response: JSON array of models as provided by GitHub's model catalog API.
  • Tip: Use the "id" field as the "model" value in your requests.

🔒 Authentication

  • Set COPILOT_TOKEN in your environment.
  • Include in request headers:
    Authorization: Bearer your_access_token_here
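
If COPILOT_TOKEN is left unset, the server falls back to a randomly generated token (see the configuration table). To pin your own strong token, one option is:

# Generate a 64-character hex token and export it for the server and your clients
export COPILOT_TOKEN="$(openssl rand -hex 32)"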
    

⚠️ Error Handling

  • 401: Missing/invalid authorization header
  • 403: Invalid access token
  • Other errors are propagated from GitHub Copilot API

πŸ›‘οΈ Security Best Practices

  • Configure CORS for your specific domains rather than the default * (example below)
  • Safeguard your COPILOT_TOKEN and GitHub OAuth token
  • Built-in token management with concurrent access protection
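
A minimal sketch restricting browser access to specific origins (the domains are placeholders):

# Only these origins will be accepted by the CORS middleware
CORS_ALLOWED_ORIGINS=https://app.example.com,https://admin.example.com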

🧪 Experimental Features

  • Anthropic API compatibility (/v1/messages)

πŸ—‚οΈ Project Structure

go-copilot-api/
├── cmd/
│   └── go-copilot-api/
│       └── main.go         # Application entrypoint
├── internal/
│   ├── api/                # HTTP handlers and routing
├── pkg/
│   └── config/             # Configuration loading
├── test/                   # Test files
├── LICENSE                 # YO LICENSE
├── go.mod                  # Go module definition
├── go.sum                  # Go module checksums
└── README.md               # This file

🧪 Testing

Run all tests:

go test ./...
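
Given the concurrent token handling noted above, running the suite under Go's race detector is a useful extra check:

# Re-run the tests with the data race detector enabled
go test -race ./...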

📄 License

See LICENSE: YO LICENSE, Version 1.0, February 2025. Copyright (C) 2025 Travis Peacock.
