🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23


๐Ÿ” Observability ๐Ÿ•ธ๏ธ Agent Tracing ๐Ÿš‚ LLM Routing
๐Ÿ’ฐ Cost & Latency Tracking ๐Ÿ“š Datasets & Fine-tuning ๐ŸŽ›๏ธ Automatic Fallbacks


Docs • Changelog • Bug reports • See Helicone in Action! (Free)

Helicone is an AI Gateway & LLM Observability Platform for AI Engineers

  • ๐ŸŒ AI Gateway: Access 100+ AI models with 1 API key through the OpenAI API with intelligent routing and automatic fallbacks. Get started in 2 minutes.
  • ๐Ÿ”Œ Quick integration: One-line of code to log all your requests from OpenAI, Anthropic, LangChain, Gemini, Vercel AI SDK, and more.
  • ๐Ÿ“Š Observe: Inspect and debug traces & sessions for agents, chatbots, document processing pipelines, and more
  • ๐Ÿ“ˆ Analyze: Track metrics like cost, latency, quality, and more. Export to PostHog in one-line for custom dashboards
  • ๐ŸŽฎ Playground: Rapidly test and iterate on prompts, sessions and traces in our UI.
  • ๐Ÿง  Prompt Management: Version prompts using production data. Deploy prompts through the AI Gateway without code changes. Your prompts remain under your control, always accessible.
  • ๐ŸŽ›๏ธ Fine-tune: Fine-tune with one of our fine-tuning partners: OpenPipe or Autonomi (more coming soon)
  • ๐Ÿ›ก๏ธ Enterprise Ready: SOC 2 and GDPR compliant

๐ŸŽ Generous monthly free tier (10k requests/month) - No credit card required!

Open Source LLM Observability & AI Gateway Platform

Quick Start ⚡️

  1. Get your API key by signing up here and add credits at helicone.ai/credits

  2. Update the baseURL in your code and add your API key.

    import OpenAI from "openai";
    
    const client = new OpenAI({
      baseURL: "https://ai-gateway.helicone.ai",
      apiKey: process.env.HELICONE_API_KEY,
    });
    
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",  // claude-sonnet-4, gemini-2.0-flash or any model from https://www.helicone.ai/models
      messages: [{ role: "user", content: "Hello!" }]
    });
  3. 🎉 You're all set! View your logs at Helicone and access 100+ models through one API.
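Because the gateway speaks the OpenAI wire format, the request the SDK sends in step 2 can also be built by hand. The sketch below shows that request's URL, headers, and JSON body; the `/chat/completions` path and the `buildChatRequest` helper are illustrative assumptions based on the OpenAI format, not taken from the Helicone docs.

```typescript
// Sketch: the HTTP request behind the quick-start snippet above.
// The exact endpoint path is an assumption (OpenAI wire format).
const GATEWAY_URL = "https://ai-gateway.helicone.ai";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(apiKey: string, model: string, userMessage: string) {
  const url = `${GATEWAY_URL}/chat/completions`;
  const headers = {
    Authorization: `Bearer ${apiKey}`, // your Helicone API key
    "Content-Type": "application/json",
  };
  const messages: ChatMessage[] = [{ role: "user", content: userMessage }];
  const body = JSON.stringify({ model, messages });
  return { url, headers, body };
}

// Same shape the SDK produces for the quick-start example:
const req = buildChatRequest("sk-helicone-...", "gpt-4o-mini", "Hello!");
console.log(req.url);
```

This is only a way to see what travels over the wire; in practice, pointing the OpenAI SDK's `baseURL` at the gateway (as in step 2) is all you need.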

Self-Hosting Open Source LLM Observability

Docker

Helicone is simple to self-host and update. To get started locally, just use our docker-compose file.

# Clone the repository
git clone https://github.com/Helicone/helicone.git
cd helicone/docker
cp .env.example .env

# Start the services
./helicone-compose.sh helicone up

Helm

For Enterprise workloads, we also have a production-ready Helm chart available. To access, contact us at enterprise@helicone.ai.

Manual (Not Recommended)

Manual deployment is not recommended. Please use Docker or Helm. If you must, follow the instructions here.

Architecture

Helicone is composed of six services:

  • Web: Frontend platform (Next.js)
  • Worker: Proxy logging (Cloudflare Workers)
  • Jawn: Dedicated server for collecting and serving logs (Express + tsoa)
  • Supabase: Application database and auth
  • ClickHouse: Analytics database
  • Minio: Object storage for logs
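In Compose terms, the self-hosted stack wires these services together roughly as sketched below. This is illustrative only: the service names, build paths, images, and ports here are assumptions, not the actual configuration, which lives in the `docker/` directory of the repo.

```yaml
# Illustrative sketch only -- see docker/ in the repo for the real file.
services:
  web:                 # Next.js frontend
    build: ./web
    ports: ["3000:3000"]
  jawn:                # Express + tsoa server for collecting/serving logs
    build: ./jawn      # path is a guess
    depends_on: [clickhouse, minio]
  clickhouse:          # analytics database
    image: clickhouse/clickhouse-server
    ports: ["8123:8123"]
  minio:               # object storage for logs
    image: minio/minio
    command: server /data
    ports: ["9000:9000"]
```

The Worker runs on Cloudflare's edge rather than in Compose, and Supabase is typically run from its own stack, which is why neither appears above.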

Integrations 🔌

Inference Providers

| Integration | Supports | Description |
| --- | --- | --- |
| AI Gateway | JS/TS, Python, cURL | Unified API for 100+ providers with intelligent routing, automatic fallbacks, and unified observability |
| Async Logging (OpenLLMetry) | JS/TS, Python | Asynchronous logging for multiple LLM platforms |
| OpenAI | JS/TS, Python | Inference provider |
| Azure OpenAI | JS/TS, Python | Inference provider |
| Anthropic | JS/TS, Python | Inference provider |
| Ollama | JS/TS | Run and use large language models locally |
| AWS Bedrock | JS/TS | Inference provider |
| Gemini API | JS/TS | Inference provider |
| Gemini Vertex AI | JS/TS | Gemini models on Google Cloud's Vertex AI |
| Vercel AI | JS/TS | AI SDK for building AI-powered applications |
| Anyscale | JS/TS, Python | Inference provider |
| TogetherAI | JS/TS, Python | Inference provider |
| Hyperbolic | JS/TS, Python | Inference provider |
| Groq | JS/TS, Python | High-performance models |
| DeepInfra | JS/TS, Python | Serverless AI inference for various models |
| Fireworks AI | JS/TS, Python | Fast inference API for open-source LLMs |

Frameworks

| Framework | Supports | Description |
| --- | --- | --- |
| LangChain | JS/TS, Python | Use AI Gateway with LangChain for unified provider access |
| LlamaIndex | Python | Framework for building LLM-powered data applications |
| LangGraph | Python | Build stateful, multi-actor applications with LLMs |
| Vercel AI SDK | JS/TS | AI SDK for building AI-powered applications |
| Semantic Kernel | C#, Python | Microsoft's AI orchestration framework |
| CrewAI | Python | Framework for orchestrating role-playing AI agents |
| ModelFusion | JS/TS | Abstraction layer for integrating AI models into JavaScript and TypeScript applications |
| PostHog | JS/TS, Python, cURL | Product analytics platform. Build custom dashboards. |
| RAGAS | Python | Evaluation framework for retrieval-augmented generation |
| Open WebUI | JS/TS | Web interface for interacting with local LLMs |
| MetaGPT | YAML | Multi-agent framework |
| Open Devin | Docker | AI software engineer |
| Mem0 EmbedChain | Python | Framework for building RAG applications |
| Dify | No code required | LLMOps platform for AI-native application development |

This list may be out of date. Don't see your provider or framework? Check out the latest integrations in our docs. If not found there, request a new integration by contacting help@helicone.ai.

Contributing

We โค๏ธ our contributors! We warmly welcome contributions for documentation, integrations, costs, and feature requests.

If you have an idea for how Helicone can be better, create a GitHub issue.

License

Helicone is licensed under the Apache v2.0 License.

Additional Resources

For more information, visit our documentation.
