Use local LLMs in your Python apps, with GPU acceleration and zero dependencies. This package is designed to patch OpenAI and Anthropic clients for running inference locally, using predictors hosted on Function.
Tip
We offer a similar package for use in the browser and Node.js. Check out fxn-llm-js.
Important
This package is still a work in progress, so the API may change drastically between releases.
Function LLM is distributed on PyPI. To install, open a terminal and run the following command:
# Install Function LLM
$ pip install --upgrade fxn-llm
Note
Function LLM requires Python 3.10+
Important
Make sure to create an access key by signing in to Function. You'll need it to fetch the predictor at runtime.
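One common way to supply an access key is through an environment variable, so it stays out of source code. A minimal sketch — the variable name `FXN_ACCESS_KEY` and the `fxn_...` key format are assumptions here, not confirmed API details; check the Function docs for the exact mechanism:

```python
import os

# Assumption: the Function SDK reads the access key from an environment
# variable named FXN_ACCESS_KEY. Replace the placeholder with your own key.
os.environ["FXN_ACCESS_KEY"] = "fxn_..."
```

In production, prefer setting the variable in your shell or deployment environment rather than in code.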
To run text generation and embedding models locally using the OpenAI client, patch your OpenAI client instance with the locally function:
from openai import OpenAI
from fxn_llm import locally
# 💥 Create your OpenAI client
openai = OpenAI()
# 🔥 Make it local
openai = locally(openai)
# 🚀 Generate embeddings
embeddings = openai.embeddings.create(
    model="@nomic/nomic-embed-text-v1.5-quant",
    input="search_query: Hello world!"
)
Warning
Currently, only openai.embeddings.create is supported. Text generation is coming soon!
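Assuming the patched client preserves the standard OpenAI response shape, each vector is available at `embeddings.data[i].embedding`, and a typical next step is ranking documents against a query with cosine similarity. A minimal, dependency-free sketch (the toy vectors stand in for real model output; the `search_query:` prefix in the example above follows the task-prefix convention of the nomic-embed models):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for `embeddings.data[i].embedding` values
query = [0.1, 0.3, 0.5]
doc_a = [0.1, 0.3, 0.5]   # same direction as the query
doc_b = [0.5, -0.2, 0.1]

# doc_a should score higher than doc_b against the query
assert cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b)
```

With real output, you would embed documents with a `search_document:` prefix and queries with `search_query:`, then rank by this score.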
- Discover predictors to use in your apps.
- Join our Discord community.
- Check out our docs.
- Learn more about us on our blog.
- Reach out to us at hi@fxn.ai.
Function is a product of NatML Inc.