diff --git a/docs/blog/posts/chain-of-density.md b/docs/blog/posts/chain-of-density.md index a6f2980f6..1835cf82e 100644 --- a/docs/blog/posts/chain-of-density.md +++ b/docs/blog/posts/chain-of-density.md @@ -204,16 +204,15 @@ class RewrittenSummary(BaseModel):
         ...,
         description="This is a new, denser summary of identical length which covers every entity and detail from the previous summary plus the Missing Entities. It should have the same length ( ~ 80 words ) as the previous summary and should be easily understood without the Article",
     )
     absent: List[str] = Field(
-        ...,
         default_factory=list,
-        description="this is a list of Entities found absent from the new summary that were present in the previous summary",
+        description="This is a list of Entities found absent from the new summary that were present in the previous summary",
     )
     missing: List[str] = Field(
         default_factory=list,
         description="This is a list of 1-3 informative Entities from the Article that are missing from the new summary which should be included in the next generated summary.",
     )
 ```
 
 !!! tip "Using Pydantic Validators with Instructor"
@@ -394,18 +393,18 @@ In order to prevent any contamination of data during testing, we randomly sample
 ```py hl_lines="2 9 11 13-21 40 43"
 from typing import List
-from chain_of_density import summarize_article #(1)!
+from chain_of_density import summarize_article  # (1)!
 import csv
 import logging
 import instructor
 from pydantic import BaseModel
 from openai import OpenAI
 
-client = instructor.from_openai(OpenAI()) # (2)!
+client = instructor.from_openai(OpenAI())  # (2)!
 
-logging.basicConfig(level=logging.INFO) #(3)!
+logging.basicConfig(level=logging.INFO)  # (3)!
 
-instructions = instructor.Instructions( #(4)!
+instructions = instructor.Instructions(  # (4)!
     name="Chain Of Density",
     finetune_format="messages",
     # log handler is used to save the data to a file
@@ -432,10 +431,10 @@ class GeneratedSummary(BaseModel):
         description="This represents the final summary generated that captures the meaning of the original article which is as concise as possible. ",
     )
 
-@instructions.distil #(4)!
+@instructions.distil  # (4)!
 def distil_summarization(text: str) -> GeneratedSummary:
     summary_chain: List[str] = summarize_article(text)
-    return GeneratedSummary(summary=summary_chain[-1]) #(5)!
+    return GeneratedSummary(summary=summary_chain[-1])  # (5)!
 
 with open("train.csv", "r") as file:
     reader = csv.reader(file)
@@ -443,7 +442,7 @@ with open("train.csv", "r") as file:
     for article, summary in reader:
-        # Run Distillisation to generate the values
+        # Run distillation to generate the values
         distil_summarization(article)
 ```
 
 1. In this example, we're using the summarize_article that we defined up above. We saved it in a local file called `chain_of_density.py`, hence the import
@@ -547,4 +546,4 @@ Interestingly, the model finetuned with the least examples seems to outperform t
 
 Finetuning this iterative method was 20-40x faster while improving overall performance, resulting in massive efficiency gains by finetuning and distilling capabilities into specialized models.
 
-We've seen how `Instructor` can make your life easier, from data modeling to distillation and finetuning. If you enjoy the content or want to try out `instructor` check out the [github](https://github.com/jxnl/instructor) and don't forget to give us a star!
\ No newline at end of file
+We've seen how `Instructor` can make your life easier, from data modeling to distillation and finetuning. If you enjoy the content or want to try out `instructor`, check out the [GitHub repo](https://github.com/jxnl/instructor) and don't forget to give us a star!
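
The validator tip referenced in the hunk above is what actually enforces the "~ 80 words" target: a Pydantic validator raises on a bad summary, and `instructor` feeds the error back to the model for a retry. A minimal sketch of that pattern, with an illustrative 60-word floor rather than a value taken from the post:

```python
from pydantic import BaseModel, Field, field_validator


class RewrittenSummary(BaseModel):
    summary: str = Field(
        ...,
        description="A new, denser summary of identical length (~80 words).",
    )

    @field_validator("summary")
    @classmethod
    def summary_is_long_enough(cls, v: str) -> str:
        # When this raises, instructor appends the error message to the
        # conversation and retries (requires max_retries > 0 on the call).
        num_words = len(v.split())
        if num_words < 60:
            raise ValueError(
                f"Summary has {num_words} words; it should be closer to 80."
            )
        return v
```

Because the constraint lives in the model, every retry sees the specific failure rather than a generic instruction.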
diff --git a/docs/blog/posts/introducing-structured-outputs.md b/docs/blog/posts/introducing-structured-outputs.md index 8e0f47ca8..fdd65d9f5 100644 --- a/docs/blog/posts/introducing-structured-outputs.md +++ b/docs/blog/posts/introducing-structured-outputs.md @@ -117,38 +117,16 @@ with client.beta.chat.completions.stream(
     for event in stream:
         if event.type == "content.delta":
             print(event.snapshot, flush=True, end="\n")
-            #>
-            #> {"
-            #> {"name
-            #> {"name":"
-            #> {"name":"Jason
-            #> {"name":"Jason","
-            #> {"name":"Jason","age
-            #> {"name":"Jason","age":
-            #> {"name":"Jason","age":25
-            #> {"name":"Jason","age":25}
-            #>
-            #> {"
-            #> {"name
-            #> {"name":"
-            #> {"name":"Jason
-            #> {"name":"Jason","
-            #> {"name":"Jason","age
-            #> {"name":"Jason","age":
-            #> {"name":"Jason","age":25
-            #> {"name":"Jason","age":25}
+            # >
+            # > {"
+            # > {"name
+            # > {"name":"
+            # > {"name":"Jason
+            # > {"name":"Jason","
+            # > {"name":"Jason","age
+            # > {"name":"Jason","age":
+            # > {"name":"Jason","age":25
+            # > {"name":"Jason","age":25}
 ```
 
-### Unpredictable Latency Spikes
-
-In order to benchmark the two modes, we made 200 identical requests to OpenAI and noted the time taken for each request to complete. The results are summarized in the following table:
-
-| mode               | mean  | min   | max    | std_dev | variance |
-| ------------------ | ----- | ----- | ------ | ------- | -------- |
-| Tool Calling       | 6.84  | 6.21  | 12.84  | 0.69    | 0.47     |
-| Structured Outputs | 28.20 | 14.91 | 136.90 | 9.27    | 86.01    |
-
-Structured Outputs suffers from unpredictable latency spikes while Tool Calling maintains consistent performance. This could cause users to occasionally experience significant delays in response times, potentially impacting the overall user satisfication and retention rates.
-
 ## Why use `instructor`
 
diff --git a/docs/blog/posts/multimodal-gemini.md b/docs/blog/posts/multimodal-gemini.md index 3a1ddb8ba..252c01918 100644 --- a/docs/blog/posts/multimodal-gemini.md +++ b/docs/blog/posts/multimodal-gemini.md @@ -87,74 +87,93 @@ print(resp)
 ```python
 Recomendations(
-    chain_of_thought='The video recommends visiting Takayama city, in the Hida Region, Gifu Prefecture. The
-    video suggests visiting the Miyagawa Morning Market, to try the Sarubobo good luck charms, and to enjoy the
-    cookie cup espresso, made by Koma Coffee. Then, the video suggests visiting a traditional Japanese Cafe,
-    called Kissako Katsure, and try their matcha and sweets. Afterwards, the video suggests to visit the Sanmachi
-    Historic District, where you can find local crafts and delicious foods. The video recommends trying Hida Wagyu
-    beef, at the Kin no Kotte Ushi shop, or to have a sit-down meal at the Kitchen Hida. Finally, the video
-    recommends visiting Shirakawa-go, a World Heritage Site in Gifu Prefecture.',
-    description='This video recommends a number of places to visit in Takayama city, in the Hida Region, Gifu
-    Prefecture. It shows some of the local street food and highlights some of the unique shops and restaurants in
-    the area.',
+    chain_of_thought=" ".join([
+        "The video recommends visiting Takayama city, in the Hida Region, Gifu Prefecture.",
+        "The video suggests visiting the Miyagawa Morning Market, to try the Sarubobo good luck charms,",
+        "and to enjoy the cookie cup espresso, made by Koma Coffee. 
Then, the video suggests visiting", + "a traditional Japanese Cafe, called Kissako Katsure, and try their matcha and sweets.", + "Afterwards, the video suggests to visit the Sanmachi Historic District, where you can find", + "local crafts and delicious foods. The video recommends trying Hida Wagyu beef, at the Kin", + "no Kotte Ushi shop, or to have a sit-down meal at the Kitchen Hida. Finally, the video", + "recommends visiting Shirakawa-go, a World Heritage Site in Gifu Prefecture." + ]), + description=" ".join([ + "This video recommends a number of places to visit in Takayama city, in the Hida Region, Gifu", + "Prefecture. It shows some of the local street food and highlights some of the unique shops and restaurants in", + "the area." + ]), destinations=[ TouristDestination( - name='Takayama', - description='Takayama is a city at the base of the Japan Alps, located in the Hida Region of - Gifu.', - location='Hida Region, Gifu Prefecture' + name="Takayama", + description=( + "Takayama is a city at the base of the Japan Alps, located in the Hida Region of " + "Gifu." + ), + location="Hida Region, Gifu Prefecture" ), TouristDestination( - name='Miyagawa Morning Market', - description="The Miyagawa Morning Market, or the Miyagawa Asai-chi in Japanese, is a market that - has existed officially since the Edo Period, more than 100 years ago. It's open every single day, rain or - shine, from 7am to noon.", - location='Hida Takayama' + name="Miyagawa Morning Market", + description=( + "The Miyagawa Morning Market, or the Miyagawa Asai-chi in Japanese, is a market that " + "has existed officially since the Edo Period, more than 100 years ago. It's open every " + "single day, rain or shine, from 7am to noon." + ), + location="Hida Takayama" ), TouristDestination( - name='Nakaya - Handmade Hida Sarubobo', - description='The Nakaya shop sells handcrafted Sarubobo good luck charms.', - location='Hida Takayama' + name="Nakaya - Handmade Hida Sarubobo", + description="The Nakaya shop sells handcrafted Sarubobo good luck charms.", + location="Hida Takayama" ), TouristDestination( - name='Koma Coffee', - description="Koma Coffee is a shop that has been in business for about 50 or 60 years, and they - serve coffee in a cookie cup. They've been serving coffee for about 10 years.", - location='Hida Takayama' + name="Koma Coffee", + description=( + "Koma Coffee is a shop that has been in business for about 50 or 60 years, and they " + "serve coffee in a cookie cup. They've been serving coffee for about 10 years." + ), + location="Hida Takayama" ), TouristDestination( - name='Kissako Katsure', - description='Kissako Katsure is a traditional Japanese style cafe, called Kissako, and the name - means would you like to have some tea. They have a variety of teas and sweets.', - location='Hida Takayama' + name="Kissako Katsure", + description=( + "Kissako Katsure is a traditional Japanese style cafe, called Kissako, and the name " + "means would you like to have some tea. They have a variety of teas and sweets." + ), + location="Hida Takayama" ), TouristDestination( - name='Sanmachi Historic District', - description='Sanmachi Dori is a Historic Merchant District in Takayama, all of the buildings here - have been preserved to look as they did in the Edo Period.', - location='Hida Takayama' + name="Sanmachi Historic District", + description=( + "Sanmachi Dori is a Historic Merchant District in Takayama, all of the buildings here " + "have been preserved to look as they did in the Edo Period." 
+        ),
+        location="Hida Takayama"
     ),
     TouristDestination(
-        name='Suwa Orchard',
-        description='The Suwa Orchard has been in business for more than 50 years.',
-        location='Hida Takayama'
+        name="Suwa Orchard",
+        description="The Suwa Orchard has been in business for more than 50 years.",
+        location="Hida Takayama"
     ),
     TouristDestination(
-        name='Kitchen HIDA',
-        description='Kitchen HIDA is a restaurant with a 50 year history, known for their Hida Beef dishes
-        and for using a lot of local ingredients.',
-        location='Hida Takayama'
+        name="Kitchen HIDA",
+        description=(
+            "Kitchen HIDA is a restaurant with a 50 year history, known for their Hida Beef dishes "
+            "and for using a lot of local ingredients."
+        ),
+        location="Hida Takayama"
     ),
     TouristDestination(
-        name='Kin no Kotte Ushi',
-        description='Kin no Kotte Ushi is a shop known for selling Beef Sushi, especially Hida Wagyu Beef
-        Sushi. Their sushi is medium rare.',
-        location='Hida Takayama'
+        name="Kin no Kotte Ushi",
+        description=(
+            "Kin no Kotte Ushi is a shop known for selling Beef Sushi, especially Hida Wagyu Beef "
+            "Sushi. Their sushi is medium rare."
+        ),
+        location="Hida Takayama"
     ),
     TouristDestination(
-        name='Shirakawa-go',
-        description='Shirakawa-go is a World Heritage Site in Gifu Prefecture.',
-        location='Gifu Prefecture'
+        name="Shirakawa-go",
+        description="Shirakawa-go is a World Heritage Site in Gifu Prefecture.",
+        location="Gifu Prefecture"
     )
     ]
 )
diff --git a/docs/blog/posts/version-1.md b/docs/blog/posts/version-1.md index d27a54cd2..c2db47de6 100644 --- a/docs/blog/posts/version-1.md +++ b/docs/blog/posts/version-1.md @@ -77,10 +77,10 @@ client = instructor.from_litellm(litellm.completion)
 
 # all of these will route to the same underlying create function
 # allow you to add instructor to try it out, while easily removing it
-client.create(model="gpt-4", response_model=type[T]) -> T
-client.chat.completions.create(model="gpt-4", response_model=type[T]) -> T
-client.messages.create(model="gpt-4", response_model=type[T]) -> T
+# client.create(model="gpt-4", response_model=type[T]) -> T
+# client.chat.completions.create(model="gpt-4", response_model=type[T]) -> T
+# client.messages.create(model="gpt-4", response_model=type[T]) -> T
 ```
 
-## Type are infered correctly
+## Types are inferred correctly
 
@@ -204,25 +204,25 @@ user_stream = client.chat.completions.create_partial(
 
 for user in user_stream:
     print(user)
-    #> name=None age=None
-    #> name=None age=None
-    #> name=None age=None
-    #> name=None age=25
-    #> name=None age=25
-    #> name=None age=25
-    #> name='' age=25
-    #> name='John' age=25
-    #> name='John Smith' age=25
-    #> name='John Smith' age=25
-    # name=None age=None
-    # name='' age=None
-    # name='John' age=None
-    # name='John Doe' age=None
-    # name='John Doe' age=30
+    # > name=None age=None
+    # > name=None age=None
+    # > name=None age=None
+    # > name=None age=25
+    # > name=None age=25
+    # > name=None age=25
+    # > name='' age=25
+    # > name='John' age=25
+    # > name='John Smith' age=25
+    # > name='John Smith' age=25
+    # > name=None age=None
+    # > name='' age=None
+    # > name='John' age=None
+    # > name='John Doe' age=None
+    # > name='John Doe' age=30
+
+# Note: The return type is Generator[User, None, None]
 ```
-
-Notice now that the type infered is `Generator[User, None]`
-
 ![generator](./img/generator.png)
 
 ### Streaming Iterables: `create_iterable`
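
The trailing context above ends at the `create_iterable` heading; for orientation, a minimal sketch of that API (the prompt and model id are illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_openai(OpenAI())

# Each extracted entity is yielded as its own validated User
users = client.chat.completions.create_iterable(
    model="gpt-4",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 and Sarah is 30."}],
)

for user in users:
    print(user)
    #> name='Jason' age=25
    #> name='Sarah' age=30
```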
diff --git a/docs/integrations/anthropic.md b/docs/integrations/anthropic.md index 5e97308b5..120af60e1 100644 --- a/docs/integrations/anthropic.md +++ b/docs/integrations/anthropic.md @@ -7,69 +7,79 @@ description: Learn how to combine Anthropic and Instructor clients to create use
 
-Now that we have a [Anthropic](https://www.anthropic.com/) client, we can use it with the `instructor` client to make requests.
+Now that we have an [Anthropic](https://www.anthropic.com/) client, we can use it with the `instructor` client to make requests.
 
-Let's first install the instructor client with anthropic support
+Let's first install the instructor client with anthropic support:
 
-```
-pip install "instructor[anthropic]"
-```
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[anthropic]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[anthropic]"
+    ```
 
 Once we've done so, getting started is as simple as using our `from_anthropic` method to patch the client up.
 
 ```python
 from pydantic import BaseModel
 from typing import List
 import anthropic
 import instructor
 
 # Patching the Anthropics client with the instructor for enhanced capabilities
 client = instructor.from_anthropic(
     anthropic.Anthropic(),
 )
 
 
 class Properties(BaseModel):
     name: str
     value: str
 
 
 class User(BaseModel):
     name: str
     age: int
     properties: List[Properties]
 
 
 # client.messages.create will also work due to the instructor client
 user_response = client.chat.completions.create(
     model="claude-3-haiku-20240307",
     max_tokens=1024,
     max_retries=0,
     messages=[
         {
             "role": "user",
             "content": "Create a user for a model with a name, age, and properties.",
         }
     ],
     response_model=User,
 )  # type: ignore
 
 print(user_response.model_dump_json(indent=2))
 """
 {
   "name": "John Doe",
   "age": 35,
   "properties": [
     {
       "name": "City",
       "value": "New York"
     },
     {
       "name": "Occupation",
       "value": "Software Engineer"
     }
   ]
 }
 """
 ```
 
 ## Streaming Support
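
The hunk above ends at the Streaming Support section; the patched client also streams partially-validated objects. A minimal sketch using `create_partial` (model id and prompt are illustrative):

```python
import anthropic
import instructor
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_anthropic(anthropic.Anthropic())

# Each iteration yields a User with the fields parsed so far
partial_stream = client.chat.completions.create_partial(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)

for partial_user in partial_stream:
    print(partial_user)
    #> name=None age=None
    #> name='Jason' age=None
    #> name='Jason' age=25
```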
+=== "UV (Recommended)" + ```bash + # Install UV if you haven't already + curl -LsSf https://astral.sh/uv/install.sh | sh -First, install the required dependencies: + # Install instructor + uv pip install "instructor[azure]" + ``` -```bash -pip install instructor -``` - -Next, make sure that you've enabled Azure OpenAI in your Azure account and have a deployment for the model you'd like to use. [Here is a guide to get started](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal) +=== "pip" + ```bash + pip install "instructor[azure]" + ``` -Once you've done so, you'll have an endpoint and a API key to be used to configure the client. -```bash -instructor.exceptions.InstructorRetryException: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'} -``` -If you see an error like the one above, make sure you've set the correct endpoint and API key in the client. ## Authentication diff --git a/docs/integrations/cerebras.md b/docs/integrations/cerebras.md index bfb326fcb..2ed005667 100644 --- a/docs/integrations/cerebras.md +++ b/docs/integrations/cerebras.md @@ -12,8 +12,31 @@ Cerebras provides hardware-accelerated AI models optimized for high-performance Install Instructor with Cerebras support: ```bash -pip install "instructor[cerebras_cloud_sdk]" -``` +=== "UV (Recommended)" + ```bash +=== "UV (Recommended)" + ```bash +Let's first install the instructor client with cerebras support: + +=== "UV (Recommended)" + ```bash + # Install UV if you haven't already + curl -LsSf https://astral.sh/uv/install.sh | sh + + # Install instructor + uv pip install "instructor[cerebras]" + ``` + +=== "pip" + ```bash + pip install "instructor[cerebras]" + ``` + + + + + + ## Simple User Example (Sync) diff --git a/docs/integrations/cohere.md b/docs/integrations/cohere.md index d3824651a..f4ae32d10 100644 --- a/docs/integrations/cohere.md +++ b/docs/integrations/cohere.md @@ -13,10 +13,15 @@ You'll need a cohere API key which can be obtained by signing up [here](https:// ## Setup -``` -pip install "instructor[cohere]" - -``` +=== "UV (Recommended)" + ```bash + uv pip install "instructor[cohere]" + ``` + +=== "pip" + ```bash + pip install "instructor[cohere]" + ``` Export your key: diff --git a/docs/integrations/deepseek.md b/docs/integrations/deepseek.md index 64a3f8649..ec0956a31 100644 --- a/docs/integrations/deepseek.md +++ b/docs/integrations/deepseek.md @@ -14,8 +14,28 @@ This guide covers everything you need to know about using DeepSeek with Instruct Instructor comes with support for the OpenAI Client out of the box, so you don't need to install anything extra. ```bash -pip install "instructor" -``` +=== "UV (Recommended)" + ```bash +Let's first install the instructor client with deepseek support: + +=== "UV (Recommended)" + ```bash + # Install UV if you haven't already + curl -LsSf https://astral.sh/uv/install.sh | sh + + # Install instructor + uv pip install "instructor[deepseek]" + ``` + +=== "pip" + ```bash + pip install "instructor[deepseek]" + ``` + + + + + ⚠️ **Important**: You must set your DeepSeek API key before using the client. 
diff --git a/docs/integrations/fireworks.md b/docs/integrations/fireworks.md index a12edab1d..282963680 100644 --- a/docs/integrations/fireworks.md +++ b/docs/integrations/fireworks.md @@ -12,8 +12,17 @@ Fireworks provides efficient and cost-effective AI models with enterprise-grade
 Install Instructor with Fireworks support:
 
-```bash
-pip install "instructor[fireworks-ai]"
-```
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[fireworks-ai]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[fireworks-ai]"
+    ```
 
 ## Simple User Example (Sync)
diff --git a/docs/integrations/google.md b/docs/integrations/google.md index fcc280f0e..4cf4c6975 100644 --- a/docs/integrations/google.md +++ b/docs/integrations/google.md @@ -11,9 +11,19 @@ This guide will show you how to use Instructor with the Google.GenerativeAI libr
 Google's Gemini models provide powerful AI capabilities with multimodal support. This guide shows you how to use Instructor with Google's Gemini models for type-safe, validated responses.
 
-```bash
-pip install "instructor[google-generativeai]
-```
+Let's first install the instructor client with Google/Gemini support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[google-generativeai]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[google-generativeai]"
+    ```
 
 ## Simple User Example (Sync)
diff --git a/docs/integrations/groq.md b/docs/integrations/groq.md index 81fd8068c..c41e4fe00 100644 --- a/docs/integrations/groq.md +++ b/docs/integrations/groq.md @@ -11,9 +11,22 @@ you'll need to sign up for an account and get an API key. You can do that [here]
 
 ```bash
 export GROQ_API_KEY=
-pip install "instructor[groq]"
 ```
 
+Let's first install the instructor client with groq support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[groq]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[groq]"
+    ```
+
 ## Groq AI
 
 Groq supports structured outputs with their new `llama-3-groq-70b-8192-tool-use-preview` model.
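
Following the install block above, usage mirrors the other integrations: patch the Groq client and pass a `response_model`. A minimal sketch, assuming `GROQ_API_KEY` is exported:

```python
import instructor
from groq import Groq
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# Patch the Groq client so create() accepts response_model
client = instructor.from_groq(Groq())

user = client.chat.completions.create(
    model="llama-3-groq-70b-8192-tool-use-preview",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)
print(user)
#> name='Jason' age=25
```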
diff --git a/docs/integrations/litellm.md b/docs/integrations/litellm.md index 671f5fd25..53b0a6bcd 100644 --- a/docs/integrations/litellm.md +++ b/docs/integrations/litellm.md @@ -12,8 +12,17 @@ LiteLLM provides a unified interface for multiple LLM providers, making it easy
 Install Instructor with LiteLLM support:
 
-```bash
-pip install "instructor[litellm]"
-```
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[litellm]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[litellm]"
+    ```
 
 ## Simple User Example (Sync)
diff --git a/docs/integrations/llama-cpp-python.md b/docs/integrations/llama-cpp-python.md index de267c39b..8abca9ebe 100644 --- a/docs/integrations/llama-cpp-python.md +++ b/docs/integrations/llama-cpp-python.md @@ -16,6 +16,22 @@ This guide demonstrates how to use llama-cpp-python with Instructor to generate
 Open-source LLMS are gaining popularity, and llama-cpp-python has made the `llama-cpp` model available to obtain structured outputs using JSON schema via a mixture of [constrained sampling](https://llama-cpp-python.readthedocs.io/en/latest/#json-schema-mode) and [speculative decoding](https://llama-cpp-python.readthedocs.io/en/latest/#speculative-decoding).
 
+Let's first install instructor alongside llama-cpp-python:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor with llama-cpp-python
+    uv pip install instructor llama-cpp-python
+    ```
+
+=== "pip"
+    ```bash
+    pip install instructor llama-cpp-python
+    ```
+
-They also support a [OpenAI compatible client](https://llama-cpp-python.readthedocs.io/en/latest/#openai-compatible-web-server), which can be used to obtain structured output as a in process mechanism to avoid any network dependency.
+They also support an [OpenAI compatible client](https://llama-cpp-python.readthedocs.io/en/latest/#openai-compatible-web-server), which can be used to obtain structured output as an in-process mechanism to avoid any network dependency.
diff --git a/docs/integrations/mistral.md b/docs/integrations/mistral.md index 37f2b9d04..af737b016 100644 --- a/docs/integrations/mistral.md +++ b/docs/integrations/mistral.md @@ -18,6 +18,22 @@ Mistral Large is the flagship model from Mistral AI, supporting 32k context wind
 By the end of this blog post, you will learn how to effectively utilize Instructor with Mistral Large.
 
+Let's first install the instructor client with Mistral support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[mistral]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[mistral]"
+    ```
+
 ```python
 import os
 from pydantic import BaseModel
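
The Ollama guide that follows patches the stock OpenAI client against Ollama's local OpenAI-compatible endpoint in JSON mode. A minimal sketch of that setup, assuming Ollama is running locally with a pulled `llama3` model:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# Ollama serves an OpenAI-compatible API on localhost:11434
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)

user = client.chat.completions.create(
    model="llama3",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)
print(user)
#> name='Jason' age=25
```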
diff --git a/docs/integrations/ollama.md b/docs/integrations/ollama.md index f7254bf2e..4848fa835 100644 --- a/docs/integrations/ollama.md +++ b/docs/integrations/ollama.md @@ -15,6 +15,22 @@ authors:
 This guide demonstrates how to use Ollama with Instructor to generate structured outputs. You'll learn how to use JSON schema mode with local LLMs to create type-safe responses.
 
+Let's first install the instructor client with Ollama support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor with Ollama support
+    uv pip install "instructor[ollama]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[ollama]"
+    ```
+
-Open-source LLMS are gaining popularity, and the release of Ollama's OpenAI compatibility later it has made it possible to obtain structured outputs using JSON schema. By the end of this blog post, you will learn how to effectively utilize instructor with ollama. But before we proceed, let's first explore the concept of patching.
+Open-source LLMs are gaining popularity, and the release of Ollama's OpenAI compatibility layer has made it possible to obtain structured outputs using JSON schema. By the end of this blog post, you will learn how to effectively utilize instructor with ollama. But before we proceed, let's first explore the concept of patching.
diff --git a/docs/integrations/openai.md b/docs/integrations/openai.md index d8dd58ad1..d17898fc5 100644 --- a/docs/integrations/openai.md +++ b/docs/integrations/openai.md @@ -11,17 +11,27 @@ OpenAI is the primary integration for Instructor, offering robust support for st
 Instructor comes with support for OpenAI out of the box, so you don't need to install anything extra.
 
-```bash
-pip install "instructor"
-```
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor"
+    ```
 
 ⚠️ **Important**: You must set your OpenAI API key before using the client. You can do this in two ways:
 
 1. Set the environment variable:
 
 ```bash
 export OPENAI_API_KEY='your-api-key-here'
 ```
 
 2. Or provide it directly to the client:
diff --git a/docs/integrations/together.md b/docs/integrations/together.md index 273c9c00d..48c007d66 100644 --- a/docs/integrations/together.md +++ b/docs/integrations/together.md @@ -15,6 +15,22 @@ authors:
 This guide demonstrates how to use Together AI with Instructor to generate structured outputs. You'll learn how to use function calling with Together's models to create type-safe responses.
 
+Let's first install the instructor client with Together AI support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor with Together AI support
+    uv pip install "instructor[together]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[together]"
+    ```
+
-Open-source LLMS are gaining popularity, and with the release of Together's Function calling models, its been easier than ever to get structured outputs. By the end of this blog post, you will learn how to effectively utilize instructor with Together AI. But before we proceed, let's first explore the concept of patching.
+Open-source LLMs are gaining popularity, and with the release of Together's function calling models, it's been easier than ever to get structured outputs. By the end of this blog post, you will learn how to effectively utilize instructor with Together AI. But before we proceed, let's first explore the concept of patching.
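
The Together hunk above covers installation; requests then route through Together's OpenAI-compatible endpoint. A minimal sketch, assuming `TOGETHER_API_KEY` is set and a function-calling-capable model (the model id here is illustrative):

```python
import os

import instructor
import openai
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# Together exposes an OpenAI-compatible API
client = instructor.from_openai(
    openai.OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key=os.environ["TOGETHER_API_KEY"],
    ),
    mode=instructor.Mode.TOOLS,
)

user = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)
print(user)
```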
diff --git a/docs/integrations/vertex.md b/docs/integrations/vertex.md index 79ead60c2..5c4c1b58c 100644 --- a/docs/integrations/vertex.md +++ b/docs/integrations/vertex.md @@ -9,11 +9,19 @@ Google Cloud's Vertex AI provides enterprise-grade AI capabilities with robust s
 ## Quick Start
 
-Install Instructor with Vertex AI support. You can do so by running the command below.
-
-```bash
-pip install "instructor[vertexai]"
-```
+Let's first install the instructor client with Vertex AI support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[vertexai]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[vertexai]"
+    ```
 
 ## Simple User Example (Sync)
diff --git a/docs/integrations/writer.md b/docs/integrations/writer.md index 4c5c9674f..e6a3a956f 100644 --- a/docs/integrations/writer.md +++ b/docs/integrations/writer.md @@ -11,8 +11,24 @@ You'll need to sign up for an account and get an API key. You can do that [here]
 
 ```bash
 export WRITER_API_KEY=
-pip install "instructor[writer]"
 ```
 
+Let's first install the instructor client with writer support:
+
+=== "UV (Recommended)"
+    ```bash
+    # Install UV if you haven't already
+    curl -LsSf https://astral.sh/uv/install.sh | sh
+
+    # Install instructor
+    uv pip install "instructor[writer]"
+    ```
+
+=== "pip"
+    ```bash
+    pip install "instructor[writer]"
+    ```
+
 ## Palmyra-X-004
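
To round out the Writer section, a minimal usage sketch for Palmyra-X-004. This assumes `WRITER_API_KEY` is exported as shown above and that your installed instructor version ships the `from_writer` patch; treat the exact call surface as an assumption and check the integration page for your version:

```python
import instructor
from pydantic import BaseModel
from writerai import Writer


class User(BaseModel):
    name: str
    age: int


# Patch the Writer client; the SDK reads WRITER_API_KEY from the environment
client = instructor.from_writer(Writer())

user = client.chat.completions.create(
    model="palmyra-x-004",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)
print(user)
```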