Live implementation: multi-service L402 AI marketplace — use case feedback #20

@cnghockey

Description

We've been running sats4ai.com as a live L402 service provider since early 2026 — 15+ AI services (image generation, video, audio, SMS, phone calls, document analysis, email, and more) all gated behind L402 endpoints, with an MCP server on top for agent discovery.

A few implementation notes that may be useful to the spec or community:

Single-use tokens by design
For per-generation billing, each token is intentionally single-use (tied to a specific charge ID embedded in the macaroon identifier). The spec's "cache and reuse" model works well for subscription-style access, but per-generation services need a new invoice per call. Worth a note in the spec that single-use vs. reusable is a server-side policy choice — not a protocol violation.
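To make the policy concrete, here is a minimal sketch of single-use redemption, with an HMAC-signed token standing in for the macaroon and an in-memory set standing in for the charge store. All names and the signing construction are illustrative assumptions, not the sats4ai implementation:

```python
import hmac
import hashlib
import secrets

ROOT_KEY = secrets.token_bytes(32)  # server-side signing key (assumption)
_redeemed = set()                   # charge IDs already spent (assumption: in-memory)

def mint_token(charge_id: str) -> str:
    """Bind a token to one specific charge ID (the macaroon-identifier role)."""
    sig = hmac.new(ROOT_KEY, charge_id.encode(), hashlib.sha256).hexdigest()
    return f"{charge_id}.{sig}"

def redeem_token(token: str) -> bool:
    """Accept a token exactly once; a replay of the same charge ID fails."""
    charge_id, _, sig = token.partition(".")
    expected = hmac.new(ROOT_KEY, charge_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or corrupted token
    if charge_id in _redeemed:
        return False  # single-use policy: already spent
    _redeemed.add(charge_id)
    return True

tok = mint_token("chg_123")
print(redeem_token(tok))  # True: first use succeeds
print(redeem_token(tok))  # False: replay rejected
```

The point is that the rejection happens in server-side state, not in the token format itself, which is why single-use vs. reusable can be a policy choice rather than a protocol change.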

token= + macaroon= dual-field migration
After the recent bLIP update renaming macaroon= to token=, we now emit both fields in our WWW-Authenticate header for backwards compatibility with existing clients. Recommend the spec explicitly document this as the migration path for existing implementors.
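A rough sketch of what that dual-field migration looks like on the wire, assuming the post-rename `token=` field alongside the legacy `macaroon=`; the exact field set and quoting here are illustrative:

```python
def build_challenge(token_b64: str, invoice: str) -> str:
    """Emit both token= and macaroon= so pre-rename clients keep working."""
    return (
        f'L402 token="{token_b64}", macaroon="{token_b64}", '
        f'invoice="{invoice}"'
    )

def parse_challenge(header: str) -> dict:
    """Client side: prefer token=, fall back to macaroon= for older servers."""
    scheme, _, params = header.partition(" ")
    fields = {}
    for part in params.split(", "):
        key, _, value = part.partition("=")
        fields[key] = value.strip('"')
    fields.setdefault("token", fields.get("macaroon", ""))
    return fields
```

Emitting both fields costs a few bytes per challenge and lets servers migrate without coordinating a client flag day; clients written against either field name parse the same header.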

MCP + L402 combination
We expose services via an MCP server (Model Context Protocol) alongside the L402 endpoints. Agents can discover available services via MCP and pay via L402 in a single workflow — no human setup required. The combination feels like the natural next step for agent-native commerce. Happy to share implementation details if useful.
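The discover-then-pay workflow can be modeled roughly as below. All three helpers are stand-ins (not a real MCP client, HTTP stack, or Lightning node); only the shape of the flow — discover, hit a 402, pay, retry with credentials — reflects the description above:

```python
def mcp_list_tools():
    """Stand-in for an MCP tools/list call returning service endpoints."""
    return [{"name": "image_gen", "endpoint": "/v1/image"}]

def call_service(endpoint, auth=None):
    """Stand-in HTTP call: unauthenticated requests get an L402 challenge."""
    if auth is None:
        return {"status": 402,
                "challenge": 'L402 token="tok1", invoice="lnbc1xyz"'}
    return {"status": 200, "body": "result"}

def pay_invoice(invoice):
    """Stand-in Lightning payment returning the preimage."""
    return "preimage1"

def agent_call(tool_name):
    # 1. Discover the endpoint via MCP, no human setup.
    endpoint = next(t["endpoint"] for t in mcp_list_tools()
                    if t["name"] == tool_name)
    # 2. First request comes back 402 with the payment challenge.
    resp = call_service(endpoint)
    if resp["status"] == 402:
        # 3. Parse token and invoice out of the challenge (simplified).
        fields = dict(p.split("=", 1) for p in
                      resp["challenge"].removeprefix("L402 ").split(", "))
        token = fields["token"].strip('"')
        preimage = pay_invoice(fields["invoice"].strip('"'))
        # 4. Retry with L402 credentials.
        resp = call_service(endpoint, auth=f"L402 {token}:{preimage}")
    return resp
```

Because every step is machine-readable (MCP for discovery, the 402 challenge for pricing, Lightning for settlement), the loop runs end to end with no account creation or API-key exchange.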

We're following the active development here and on the bLIP PR closely. Great to see the protocol getting the formalization it deserves.
