From dd1d2dfe737ec01b5e18152865fe2a9b2c69cc18 Mon Sep 17 00:00:00 2001
From: Daniel van Strien
Date: Tue, 28 Oct 2025 15:03:45 +0000
Subject: [PATCH 1/2] Add Hugging Face Inference Providers documentation

Adds documentation for using Hugging Face Inference Providers with Aider.

Key additions:
- Setup instructions using the OpenAI-compatible client
- Model discovery guidance (Hub browsing, model cards, API docs)
- Provider selection for users with existing credits/preferences
- Links to the Inference Providers documentation

Benefits for Aider users:
- Access to multiple providers (Cerebras, Groq, Together, etc.) via one API
- Zero-markup pricing
- Automatic routing with optional provider selection
- OpenAI-compatible integration
---
 aider/website/docs/llms/huggingface.md | 70 ++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
 create mode 100644 aider/website/docs/llms/huggingface.md

diff --git a/aider/website/docs/llms/huggingface.md b/aider/website/docs/llms/huggingface.md
new file mode 100644
index 00000000000..13218dbeefd
--- /dev/null
+++ b/aider/website/docs/llms/huggingface.md
@@ -0,0 +1,70 @@
+---
+parent: Connecting to LLMs
+nav_order: 450
+---
+
+# Hugging Face Inference Providers
+
+Aider can connect to [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/) using the OpenAI-compatible client. Inference Providers gives you access to multiple AI providers through a single OpenAI-compatible API.
+
+You'll need a [Hugging Face account](https://huggingface.co/join) and an [access token](https://huggingface.co/settings/tokens).
+
+First, install aider:
+
+{% include install.md %}
+
+Then configure your API key and base URL:
+
+```
+# Mac/Linux:
+export OPENAI_API_BASE=https://router.huggingface.co/v1
+export OPENAI_API_KEY=<your-access-token>
+
+# Windows:
+setx OPENAI_API_BASE https://router.huggingface.co/v1
+setx OPENAI_API_KEY <your-access-token>
+# ... restart shell after setx commands
+```
+
+Start working with aider on your codebase:
+
+```bash
+# Change directory into your codebase
+cd /to/your/project
+
+# Use any model from Inference Providers
+# Prefix the model name with openai/
+aider --model openai/<model-name>
+
+# Using MiniMaxAI/MiniMax-M2:
+aider --model openai/MiniMaxAI/MiniMax-M2
+```
+
+## Selecting a specific provider
+
+By default, Inference Providers automatically routes requests to the best available provider. If you want to use a specific provider, you can append the provider name to the model:
+
+```bash
+aider --model openai/<model-name>:<provider>
+
+# Using GLM-4.6 via the zai-org provider:
+aider --model openai/zai-org/GLM-4.6:zai-org
+```
+
+You can find the provider-specific syntax in any model card's "Use this model" → "Inference Providers" section.
+
+## Finding models
+
+You can discover models available via Inference Providers in several ways:
+
+1. **Browse models on Hugging Face**: Visit [models with Inference Providers](https://huggingface.co/models?pipeline_tag=text-generation&inference_provider=all&sort=trending) to see all models with at least one Inference Provider hosting them.
+
+2. **From model cards**: On any model card page (like [zai-org/GLM-4.6](https://huggingface.co/zai-org/GLM-4.6)), click "Use this model" → "Inference Providers" to see a code snippet with the exact model name.
+
+3. **See the docs**: Check the [Inference Providers Hub API docs](https://huggingface.co/docs/inference-providers/hub-api) for programmatic model discovery.
+
+## More information
+
+- [Inference Providers documentation](https://huggingface.co/docs/inference-providers/)
+- [Pricing details](https://huggingface.co/docs/inference-providers/pricing)
+- [Supported providers](https://huggingface.co/docs/inference-providers/index#partners)

From 3b3266239fbdef4e88936a4b589a4d632113c2ed Mon Sep 17 00:00:00 2001
From: Daniel van Strien
Date: Tue, 4 Nov 2025 20:18:12 +0000
Subject: [PATCH 2/2] Use HF-branded variables and model prefix for Inference Providers

Update documentation to use HUGGINGFACE_API_KEY and the huggingface/
model prefix instead of the OPENAI_* equivalents for better clarity.
OPENAI_API_BASE is still required (a litellm limitation) and is now
explained in a note.

Also documents the new :cheapest and :fastest provider selection
suffixes.

Changes:
- Use HUGGINGFACE_API_KEY (or HF_TOKEN) for API authentication
- Change the model prefix from openai/ to huggingface/ in all examples
- Add an explanatory note about the OPENAI_API_BASE requirement
- Document the :cheapest and :fastest provider selection options

All changes tested and confirmed working.
---
 aider/website/docs/llms/huggingface.md | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/aider/website/docs/llms/huggingface.md b/aider/website/docs/llms/huggingface.md
index 13218dbeefd..9db268053f5 100644
--- a/aider/website/docs/llms/huggingface.md
+++ b/aider/website/docs/llms/huggingface.md
@@ -18,14 +18,16 @@ Then configure your API key and base URL:
 ```
 # Mac/Linux:
 export OPENAI_API_BASE=https://router.huggingface.co/v1
-export OPENAI_API_KEY=<your-access-token>
+export HUGGINGFACE_API_KEY=<your-access-token>
 
 # Windows:
 setx OPENAI_API_BASE https://router.huggingface.co/v1
-setx OPENAI_API_KEY <your-access-token>
+setx HUGGINGFACE_API_KEY <your-access-token>
 # ... restart shell after setx commands
 ```
 
+**Note:** `OPENAI_API_BASE` is required because Hugging Face Inference Providers uses an OpenAI-compatible endpoint. You can also use `HF_TOKEN` instead of `HUGGINGFACE_API_KEY`.
+
 Start working with aider on your codebase:
 
 ```bash
@@ -33,11 +35,11 @@ Start working with aider on your codebase:
 cd /to/your/project
 
 # Use any model from Inference Providers
-# Prefix the model name with openai/
-aider --model openai/<model-name>
+# Prefix the model name with huggingface/
+aider --model huggingface/<model-name>
 
 # Using MiniMaxAI/MiniMax-M2:
-aider --model openai/MiniMaxAI/MiniMax-M2
+aider --model huggingface/MiniMaxAI/MiniMax-M2
 ```
 
 ## Selecting a specific provider
@@ -45,10 +47,20 @@ aider --model openai/MiniMaxAI/MiniMax-M2
 By default, Inference Providers automatically routes requests to the best available provider. If you want to use a specific provider, you can append the provider name to the model:
 
 ```bash
-aider --model openai/<model-name>:<provider>
+aider --model huggingface/<model-name>:<provider>
 
 # Using GLM-4.6 via the zai-org provider:
-aider --model openai/zai-org/GLM-4.6:zai-org
+aider --model huggingface/zai-org/GLM-4.6:zai-org
 ```
 
+You can also use `:cheapest` or `:fastest` to automatically select a provider based on cost or throughput:
+
+```bash
+# Use the cheapest available provider:
+aider --model huggingface/MiniMaxAI/MiniMax-M2:cheapest
+
+# Use the fastest available provider:
+aider --model huggingface/MiniMaxAI/MiniMax-M2:fastest
+```
+
 You can find the provider-specific syntax in any model card's "Use this model" → "Inference Providers" section.
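
Because the router endpoint configured in these patches is OpenAI-compatible, the token and base URL can be sanity-checked outside of aider. A minimal sketch with curl, assuming `HF_TOKEN` holds the access token from the setup step; the model name is the MiniMax-M2 example used in the docs:

```bash
# Call the router's OpenAI-compatible chat completions endpoint directly.
# Assumes HF_TOKEN is set to your Hugging Face access token.
curl https://router.huggingface.co/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A successful response with a completion confirms the token and base URL before pointing aider at the same endpoint.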
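
For the programmatic discovery route mentioned under "Finding models", the Hub's model-listing API accepts filters similar to the browse URL above. A sketch, assuming the `inference_provider` query parameter is supported on `/api/models` as on the web listing (see the linked Hub API docs for the authoritative parameters):

```bash
# List some text-generation models served by at least one inference provider.
# Query parameters mirror the browse URL under "Finding models"; see the
# linked Hub API docs for the full parameter list.
curl -s "https://huggingface.co/api/models?pipeline_tag=text-generation&inference_provider=all&limit=10" \
  | python3 -c 'import json, sys; print("\n".join(m["id"] for m in json.load(sys.stdin)))'
```

Each returned `id` is exactly the name to use after the `huggingface/` prefix in `aider --model`.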