Vercel AI Provider for running LLMs using SambaNova's models through the SambaNova Cloud API.
- Requirements
- Installation
- Setup Environment
- Provider Instance
- Models
- Examples
- Intercepting Fetch requests
## Requirements

An API key is required. It can be obtained from the SambaNova Cloud Platform.
## Installation

The SambaNova provider is available in the `sambanova-ai-provider` module. You can install it with npm:

```bash
npm install sambanova-ai-provider
```

With yarn:

```bash
yarn add sambanova-ai-provider
```

Or with pnpm:

```bash
pnpm add sambanova-ai-provider
```
## Setup Environment

You will need to set up a `SAMBANOVA_API_KEY` environment variable. You can get your API key on the SambaNova Cloud Portal.
## Provider Instance

You can import the default provider instance `sambanova` from `sambanova-ai-provider`:

```ts
import { sambanova } from 'sambanova-ai-provider';
```

If you need a customized setup, you can import `createSambaNova` from `sambanova-ai-provider` and create a provider instance with your settings:

```ts
import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  apiKey: 'YOUR_API_KEY',
  // Optional settings
});
```
You can use the following optional settings to customize the SambaNova provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.sambanova.ai/v1`.

- **apiKey** _string_

  API key that is being sent using the `Authorization` header. It defaults to the `SAMBANOVA_API_KEY` environment variable\*.

- **headers** _Record<string, string>_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise<Response>_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.

\* If you set the environment variable in a `.env` file, you will need to use a loader like `dotenv` in order for the script to read it.
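As a rough sketch of how these options fit together (the proxy URL and custom header below are placeholders, and `dotenv` is only needed if the key lives in a `.env` file):

```ts
import 'dotenv/config'; // loads SAMBANOVA_API_KEY from a .env file
import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  // Placeholder proxy URL; omit baseURL to use https://api.sambanova.ai/v1.
  baseURL: 'https://my-proxy.example.com/v1',
  apiKey: process.env.SAMBANOVA_API_KEY,
  // Example custom header, purely illustrative.
  headers: { 'X-Request-Source': 'my-app' },
});
```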
## Models

You can use SambaNova models on the provider instance. The first argument is the model ID, e.g. `Meta-Llama-3.3-70B-Instruct`:

```ts
const model = sambanova('Meta-Llama-3.3-70B-Instruct');
```
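As a minimal sketch, the resulting model can be passed to the AI SDK's `generateText()` (the prompt is just illustrative):

```ts
import { generateText } from 'ai';
import { sambanova } from 'sambanova-ai-provider';

const { text } = await generateText({
  model: sambanova('Meta-Llama-3.3-70B-Instruct'),
  prompt: 'Write a one-sentence summary of retrieval-augmented generation.',
});

console.log(text);
```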
This provider can generate and stream text, interpret image inputs, run tool calls, and produce embeddings. It has at least been tested with the following features (a streaming sketch follows the table below):
| Chat completion | Image input | Tool calling | Embeddings |
| --------------- | ----------- | ------------ | ---------- |
| ✅              | ✅          | ✅           | ✅         |
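For streaming, a sketch using the AI SDK's `streamText()` (the prompt is illustrative):

```ts
import { streamText } from 'ai';
import { sambanova } from 'sambanova-ai-provider';

const result = await streamText({
  model: sambanova('Meta-Llama-3.3-70B-Instruct'),
  prompt: 'List three use cases for fast LLM inference.',
});

// Print tokens as they arrive.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```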
You need to use any of the following models for visual understanding:

- `Llama-4-Maverick-17B-128E-Instruct`
- `Llama-4-Scout-17B-16E-Instruct`

SambaNova vision models support up to five (5) images per request. They do not support image URLs, so images must be passed as raw data.
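A hedged example of passing an image as file data with `generateText()` (the file path and prompt are placeholders):

```ts
import { readFileSync } from 'node:fs';
import { generateText } from 'ai';
import { sambanova } from 'sambanova-ai-provider';

const { text } = await generateText({
  model: sambanova('Llama-4-Maverick-17B-128E-Instruct'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in one sentence.' },
        // URLs are not supported, so the image is sent as raw bytes.
        { type: 'image', image: readFileSync('./image.jpg') },
      ],
    },
  ],
});

console.log(text);
```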
You can use any of the models that support function calling for tool calling.
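A sketch of tool calling with the AI SDK's `tool()` helper and a Zod schema; the weather tool and its return value are made up for illustration, and exact option names may differ slightly between `ai` versions:

```ts
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { sambanova } from 'sambanova-ai-provider';

const { text } = await generateText({
  model: sambanova('Meta-Llama-3.3-70B-Instruct'),
  tools: {
    // Hypothetical tool; replace with a real data source.
    getWeather: tool({
      description: 'Get the current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 22 }),
    }),
  },
  maxSteps: 2, // allow the model to use the tool result in its answer
  prompt: 'What is the weather like in Lisbon?',
});

console.log(text);
```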
You can use the `E5-Mistral-7B-Instruct` model with the embeddings feature of the SambaNova provider.
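A minimal sketch with the AI SDK's `embed()`, assuming the provider exposes the standard `textEmbeddingModel()` factory:

```ts
import { embed } from 'ai';
import { sambanova } from 'sambanova-ai-provider';

const { embedding } = await embed({
  // Assumes the standard textEmbeddingModel() factory is available.
  model: sambanova.textEmbeddingModel('E5-Mistral-7B-Instruct'),
  value: 'sunny day at the beach',
});

console.log(embedding.length);
```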
## Examples

In the `examples` folder you will find Markdown files containing simple code snippets for some of the features of the SambaNova provider.
## Intercepting Fetch Requests

This provider supports Intercepting Fetch Requests.
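A minimal sketch of a logging interceptor passed through the `fetch` option of `createSambaNova`:

```ts
import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  apiKey: process.env.SAMBANOVA_API_KEY,
  // Wrap the global fetch to log each request before forwarding it.
  fetch: async (input, init) => {
    console.log('Intercepted request:', input, init?.body);
    return fetch(input, init);
  },
});
```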