Description
Type: Documentation
Area: Remote backend and model integration
Difficulty: Beginner
Maintainer Guidance Needed: Medium
Problem
The repository supports a remote LLM backend and even has tests covering remote embedding/query flow, but the README only exposes environment variables. There is no quickstart showing how to point KINDX at Ollama, LM Studio, or another OpenAI-compatible endpoint.
Why It Matters
Without a quickstart, users cannot easily discover or trust a major fallback path for machines where local model execution is not desirable or not possible.
Scope
Add a short remote-backend guide to the README that covers backend selection, the required environment variables, optional API keys, and how rerank fallback behaves when /v1/rerank is unavailable.
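A quickstart along these lines could anchor the README section. This is only a sketch: `KINDX_LLM_BACKEND=remote` comes from this issue, but the endpoint and API-key variable names below are placeholders that must be checked against the actual defaults in `engine/remote-llm.ts` before landing in the docs.

```shell
# Sketch: point KINDX at a local OpenAI-compatible server (e.g. Ollama).
# Variable names other than KINDX_LLM_BACKEND are illustrative placeholders.
export KINDX_LLM_BACKEND=remote
export KINDX_LLM_ENDPOINT=http://localhost:11434/v1   # OpenAI-compatible base URL
# export KINDX_LLM_API_KEY=<key>                      # only if the endpoint requires one
```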
Acceptance Criteria
- The README shows a minimal remote-backend setup example.
- The docs mention `KINDX_LLM_BACKEND=remote` and the required endpoint variables.
- The docs explain the current rerank fallback behavior for backends without `/v1/rerank`.
- The examples are consistent with the existing remote-backend tests.
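To make the fallback behavior concrete for the docs, here is a hypothetical sketch of rerank-with-fallback. The function name, response shape, and injected `post` helper are illustrative assumptions, not the actual `engine/remote-llm.ts` API: when `/v1/rerank` is unavailable, the candidates keep their original retrieval order instead of failing.

```typescript
// Hypothetical sketch; names and shapes are illustrative, not the repo's API.
type Doc = { id: string; text: string };
type RerankResponse = { ok: boolean; scores?: number[] };

async function rerank(
  endpoint: string,
  query: string,
  docs: Doc[],
  // Injected transport so the sketch is self-contained and testable.
  post: (url: string, body: unknown) => Promise<RerankResponse>
): Promise<Doc[]> {
  const res = await post(`${endpoint}/v1/rerank`, {
    query,
    documents: docs.map((d) => d.text),
  });
  if (!res.ok || !res.scores) {
    // Fallback: backend has no /v1/rerank, so keep the retrieval order unchanged.
    return docs;
  }
  const scores = res.scores;
  return docs
    .map((d, i) => ({ d, score: scores[i] }))
    .sort((a, b) => b.score - a.score)
    .map((x) => x.d);
}

// Usage with a stub simulating a backend that lacks /v1/rerank:
const noRerank = async (): Promise<RerankResponse> => ({ ok: false });
rerank(
  "http://localhost:11434",
  "q",
  [{ id: "a", text: "A" }, { id: "b", text: "B" }],
  noRerank
).then((out) => console.log(out.map((d) => d.id).join(",")));
```

The injected `post` parameter is what keeps the example aligned with the existing remote-backend tests: the same function can be exercised against a stub without a live server.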
Relevant Files
- `README.md`
- `engine/remote-llm.ts`
- `specs/command-line.test.ts`
Testing
- Cross-check the docs against `engine/remote-llm.ts` defaults and behavior.
- Run `npm run build`.
Non-goals
- Adding new remote-backend features
- Implementing provider-specific adapters
- Writing a full hosted deployment guide
Difficulty Rationale
This is still documentation work, but it requires reading the remote backend implementation and tests closely enough to avoid inaccurate examples.
Checklist
- I searched open and recently closed issues before drafting this task.
- The issue is scoped to one focused PR.
- The likely files and verification steps are listed.