A lightweight Bun + Express template that connects to the Testune AI API and streams chat responses in real time using Server-Sent Events (SSE).
Use this project as a starting point for building AI apps, proxies, or backend services that need live, token-by-token responses from an AI model.
- ⚡ Built with Bun + Express
- Connects to the Testune AI API for LLM interactions
- Supports SSE streaming (just like OpenAI's streaming responses)
- Example `/chat` endpoint you can call from your frontend (see the sketch below)
- Easy to extend with your own routes, auth, or business logic
```
.
├── src/
│   └── index.ts      # Express server with streaming proxy
├── package.json
├── tsconfig.json
└── README.md
```
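
Below is a minimal sketch of what `src/index.ts` could look like. The Testune endpoint URL (`https://api.testune.ai/v1/chat/completions`), the `TESTUNE_API_KEY` environment variable, and the request payload shape are assumptions for illustration, not the real API contract; adjust them to match the actual Testune docs.

```ts
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical upstream endpoint; replace with the real Testune URL.
const TESTUNE_URL = "https://api.testune.ai/v1/chat/completions";

app.post("/chat", async (req, res) => {
  // Standard SSE headers so the client keeps the connection open.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  try {
    const upstream = await fetch(TESTUNE_URL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Assumes a bearer-token auth scheme.
        Authorization: `Bearer ${process.env.TESTUNE_API_KEY}`,
      },
      // Payload shape is an assumption modeled on OpenAI-style chat APIs.
      body: JSON.stringify({ messages: req.body.messages, stream: true }),
    });

    if (!upstream.ok || !upstream.body) {
      res.write(`event: error\ndata: upstream returned ${upstream.status}\n\n`);
      return; // the finally block closes the response
    }

    // Relay the upstream byte stream to the client chunk by chunk.
    const reader = upstream.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      res.write(decoder.decode(value, { stream: true }));
    }
  } catch (err) {
    res.write(`event: error\ndata: ${(err as Error).message}\n\n`);
  } finally {
    res.end();
  }
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```

Because `/chat` is a POST route, the browser's `EventSource` (which only issues GET requests) won't work directly; one way to consume the stream from a frontend is `fetch` with a stream reader, as in this sketch (assumes a module context with top-level await):

```ts
// Hypothetical browser-side consumer: POST to /chat and read the
// response incrementally as it streams in.
const res = await fetch("http://localhost:3000/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello!" }] }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Each chunk may contain one or more "data: ..." SSE lines.
  console.log(decoder.decode(value, { stream: true }));
}
```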