diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/404.html b/404.html new file mode 100644 index 000000000..bb914a1fc --- /dev/null +++ b/404.html @@ -0,0 +1,2642 @@ + + + +
+We're going to write documentation for Marvin together.
+First, here's a style guide.
+A style guide for AI documentation authors, to keep contributions aligned with Marvin's documentation standards. Remember, you are an expert technical writer with an extensive background in educating and explaining open-source software. You are not a marketer, a salesperson, or a product manager. Marvin's documentation should resemble renowned technical documentation like Stripe's.
+You must follow the guide below. Do not deviate from it.
+- Avoid `code` in headers or titles (e.g. prefer "Overview" to "Overview of `extract()`"). If you must, use `code` in headers or titles sparingly.
+- Use fully qualified names like `marvin.classify()`, not just `classify()`.
generate_speech
+
+
+ async
+
+
+¶Generates speech based on a provided prompt template.
+This function uses the OpenAI Audio API to generate speech based on a provided +prompt template. The function supports additional arguments for the prompt +and the model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ prompt_template
+ |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
+ prompt_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +prompt. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the audio cannot be saved or played until it has all been generated. If True, |
+
+ True
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Audio |
+ Audio
+ |
+
+
+
+ The response from the OpenAI Audio API, which includes the +generated speech. + |
+
speak
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
+ voice
+ |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the audio cannot be saved or played until it has all been generated. If True, |
+
+ True
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Audio |
+ Audio
+ |
+
+
+
+ The generated audio. + |
+
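Example (a minimal sketch, not taken from the API reference itself): calling `marvin.speak` and saving the result. The text, voice, and output path are illustrative, and saving assumes the returned `Audio` object exposes a `save` method.

```python
import marvin

# Generate speech from text; returns an Audio object (see Returns above).
audio = marvin.speak("Welcome to Marvin!", voice="alloy")

# Assumption: the Audio object can be written to disk with `save`.
audio.save("welcome.mp3")
```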
speak_async
+
+
+ async
+
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
+ voice
+ |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the audio cannot be saved or played until it has all been generated. If True, |
+
+ True
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Audio |
+ Audio
+ |
+
+
+
+ The generated audio. + |
+
speech
+
+¶Function decorator that generates audio from the wrapped function's return +value. The voice used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ fn
+ |
+
+ Callable
+ |
+
+
+
+ The function to wrap. Defaults to None. + |
+
+ None
+ |
+
+ voice
+ |
+
+ str
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the audio cannot be saved or played until it has all been generated. If True, |
+
+ True
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The wrapped function. + |
+
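Example (an illustrative sketch; the function name and text are made up): wrapping a plain function with `marvin.speech` so its return value is rendered as audio.

```python
import marvin

@marvin.speech
def greet(name: str) -> str:
    # The string returned here becomes the text that is spoken.
    return f"Hello, {name}! It is a pleasure to meet you."

# Calling the wrapped function returns generated audio instead of the string.
audio = greet("Marvin")
```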
transcribe
+
+¶Transcribes audio from a file.
+This function converts audio from a file to text.
+ +
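Example (an illustrative sketch, assuming a plain file path is accepted as input; the filename is a placeholder):

```python
import marvin

# Convert a local audio file to text.
text = marvin.transcribe("meeting_recording.mp3")
print(text)
```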
transcribe_async
+
+
+ async
+
+
+¶Transcribes audio from a file.
+This function converts audio from a file to text.
+ +
generate_image
+
+
+ async
+
+
+¶Generates an image based on a provided prompt template.
+This function uses the DALL-E API to generate an image based on a provided +prompt template. The function supports additional arguments for the prompt +and the model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ prompt_template
+ |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
+ prompt_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +prompt. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse |
+ ImagesResponse
+ |
+
+
+
+ The response from the DALL-E API, which includes the +generated image. + |
+
image
+
+¶A decorator that transforms a function's output into an image.
+This decorator takes a function that returns a string, and uses that string +as instructions to generate an image. The generated image is then returned.
+The decorator can be used with or without parentheses. If used without
+parentheses, the decorated function's output is used as the instructions
+for the image. If used with parentheses, an optional literal
argument can
+be provided. If literal
is set to True
, the function's output is used
+as the literal instructions for the image, without any modifications.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ fn
+ |
+
+ callable
+ |
+
+
+
+ The function to decorate. If |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to use the function's output as the
+literal instructions for the image. Defaults to |
+
+ False
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
callable | + | +
+
+
+ The decorated function. + |
+
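Example (an illustrative sketch; the function and theme are made up): the decorated function returns a string, and that string is used as the instructions for image generation.

```python
import marvin

@marvin.image
def birthday_card(theme: str) -> str:
    # The returned string becomes the image-generation instructions.
    return f"A hand-drawn birthday card illustration in a {theme} style."

card = birthday_card("watercolor")
```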
paint
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ instructions
+ |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
+ context
+ |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
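Example (an illustrative sketch; the prompts are placeholders):

```python
import marvin

# Let the API elaborate on the instructions (default behavior).
response = marvin.paint("A cozy reading nook on a rainy afternoon")

# Pass literal=True to send the instructions to the API unmodified.
exact = marvin.paint("A single red circle on a white background", literal=True)
```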
paint_async
+
+
+ async
+
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ instructions
+ |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
+ context
+ |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
Core LLM tools for working with text and structured data.
+ + + + + + + + +
Model
+
+
+¶A Pydantic model that can be instantiated from a natural language string, in +addition to keyword arguments.
+ + + + + + + + + +
from_text_async
+
+
+ async
+ classmethod
+
+
+¶Class method to create an instance of the model from a natural language string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The natural language string to convert into an instance of the model. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Model |
+ Model
+ |
+
+
+
+ An instance of the model. + |
+
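Example (an illustrative sketch): subclassing `Model` so instances can be created from natural language. The import location is an assumption here; adjust it to wherever `Model` is exposed in your installed version.

```python
import marvin

class Location(marvin.Model):
    city: str
    state: str

# Instantiate from a natural language string instead of keyword arguments.
loc = Location("The Windy City")
```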
caption
+
+¶Generates a caption for an image using a language model synchronously.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ Union[Image, List[Image]]
+ |
+
+
+
+ The image or images to caption. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the caption generation. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ Generated caption. + |
+
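Example (an illustrative sketch): captioning a local image. This assumes `marvin.Image` can be constructed directly from a file path; check the Image schema for the exact constructor in your version.

```python
import marvin

photo = marvin.Image("portrait.jpg")  # assumed constructor; the path is a placeholder
print(marvin.caption(photo, instructions="Keep it under ten words."))
```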
caption_async
+
+
+ async
+
+
+¶Generates a caption for a set of images using a language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ Union[Image, List[Image]]
+ |
+
+
+
+ The image or images to caption. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the caption generation. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ Generated caption. + |
+
cast
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a +specified type. The conversion process can be guided by specific +instructions. The function also supports additional arguments for the +language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type to convert the data into. If none is provided
+but instructions are provided, |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI +function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
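Example (an illustrative sketch; the inputs are placeholders):

```python
import marvin

# Convert free-form text into a structured Python type.
marvin.cast("the temperature was around seventy-two degrees", target=float)

# With instructions but no target, the result defaults to a string.
marvin.cast("NYC", instructions="Spell out the full city name")
```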
cast_async
+
+
+ async
+
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a +specified type. The conversion process can be guided by specific +instructions. The function also supports additional arguments for the +language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type to convert the data into. If none is provided
+but instructions are provided, |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI +function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
classifier
+
+¶Class decorator that modifies the behavior of an Enum class to classify a string.
+This decorator modifies the call method of the Enum class to use the
+marvin.classify
function instead of the default Enum behavior. This allows
+the Enum class to classify a string based on its members.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ cls
+ |
+
+ Enum
+ |
+
+
+
+ The Enum class to be decorated. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the AI on +how to perform the classification. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword +arguments to pass to the model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Enum | + | +
+
+
+ The decorated Enum class with modified call method. + |
+
Raises:
+Type | +Description | +
---|---|
+ AssertionError
+ |
+
+
+
+ If the decorated class is not a subclass of Enum. + |
+
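Example (an illustrative sketch): decorating an Enum so that calling it classifies a string into one of its members.

```python
import marvin
from enum import Enum

@marvin.classifier
class Sentiment(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

# Calling the decorated Enum routes through marvin.classify.
Sentiment("This is the best documentation I've ever read!")
```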
classify
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ labels
+ |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
+ return_index
+ |
+
+ bool
+ |
+
+
+
+ Whether to return the index of the label instead of the label itself. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[T, int]
+ |
+
+
+
+ Union[T, int]: The label or index that the data was classified into. + |
+
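Example (an illustrative sketch; the text and labels are placeholders):

```python
import marvin

# Returns one of the provided labels.
marvin.classify("The new update broke everything.", labels=["positive", "negative"])

# return_index=True returns the position of the chosen label instead.
marvin.classify(
    "The new update broke everything.",
    labels=["positive", "negative"],
    return_index=True,
)
```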
classify_async
+
+
+ async
+
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ labels
+ |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[T, int]
+ |
+
+
+
+ Union[T, int]: The label or index that the data was classified into. + |
+
extract
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
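Example (an illustrative sketch; the inputs are placeholders):

```python
import marvin

# Guide the extraction with instructions.
marvin.extract(
    "I drove from New York to Chicago in 12 hours",
    target=str,
    instructions="US city names",
)

# Or extract by type alone: pull out every number mentioned.
marvin.extract("There were 12 cars and 3 trucks", target=int)
```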
extract_async
+
+
+ async
+
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
fn
+
+¶Converts a Python function into an AI function using a decorator.
+This decorator allows a Python function to be converted into an AI function. +The AI function uses a language model to generate its output.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ func
+ |
+
+ Callable
+ |
+
+
+
+ The function to be converted. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The converted AI function. + |
+
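Example (an illustrative sketch; the function is made up): the decorated function's body is never executed, and the language model produces a value matching the return annotation.

```python
import marvin

@marvin.fn
def sentiment(text: str) -> float:
    """Returns a sentiment score for `text` between -1 (negative) and 1 (positive)."""

sentiment("I can't wait to use this library!")
```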
generate
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
+ n
+ |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
+ use_cache
+ |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
+ temperature
+ |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
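Example (an illustrative sketch; the instructions are placeholders):

```python
import marvin

# Generate a list of strings from instructions alone.
marvin.generate(n=3, instructions="names of fictional coffee shops")

# Or generate instances of a specific type.
marvin.generate(target=float, n=5, instructions="plausible prices for a cup of coffee")
```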
generate_async
+
+
+ async
+
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
+ n
+ |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
+ use_cache
+ |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
+ temperature
+ |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
generate_llm_response
+
+
+ async
+
+
+¶Generates a language model response based on a provided prompt template.
+This function uses a language model to generate a response based on a provided prompt template. +The function supports additional arguments for the prompt and the language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ prompt_template
+ |
+
+ str
+ |
+
+
+
+ The template for the prompt. + |
+ + required + | +
+ prompt_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the prompt. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ChatResponse |
+ ChatResponse
+ |
+
+
+
+ The generated response from the language model. + |
+
model
+
+¶Class decorator for instantiating a Pydantic model from a string.
+This decorator allows a Pydantic model to be instantiated from a string. It's +equivalent to subclassing the Model class.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ type_
+ |
+
+ Union[Type[M], None]
+ |
+
+
+
+ The type of the Pydantic model. +Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[Type[M], Callable[[Type[M]], Type[M]]]
+ |
+
+
+
+ Union[Type[M], Callable[[Type[M]], Type[M]]]: The decorated Pydantic model. + |
+
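Example (an illustrative sketch): decorating a Pydantic model so it can be instantiated from a natural language string.

```python
import marvin
from pydantic import BaseModel

@marvin.model
class Location(BaseModel):
    city: str
    state: str

# Equivalent to subclassing Model: instantiate from natural language.
Location("The Big Apple")
```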
prepare_data
+
+¶Prepares the input data for the AI function.
+This function prepares the input data for the AI function by converting it +into a list of strings. If the input data is a list of strings or images, the +function converts the images into strings.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ Union[str, Image, list[Union[str, Image]]]
+ |
+
+
+
+ The input data to be prepared. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ list[str]
+ |
+
+
+
+ list[str]: The prepared input data. + |
+
Tip
+All async methods that have an `_async` suffix have sync equivalents that can be called without the suffix, e.g. `run()` instead of `await run_async()`.
Application
+
+
+¶Tools for Applications have a special property: if any parameter is
+annotated as Application
, then the tool will be called with the
+Application instance as the value for that parameter. This allows tools to
+access the Application's state and other properties.
Tip
+All async methods that have an `_async` suffix have sync equivalents that can be called without the suffix, e.g. `run()` instead of `await run_async()`.
Assistant
+
+
+¶The Assistant class represents an AI assistant that can be created, deleted, +loaded, and interacted with.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
id |
+
+ str
+ |
+
+
+
+ The unique identifier of the assistant. None if the assistant + hasn't been created yet. + |
+
name |
+
+ str
+ |
+
+
+
+ The name of the assistant. + |
+
description |
+
+ str
+ |
+
+
+
+ A description of the assistant. + |
+
instructions |
+
+ str
+ |
+
+
+
+ Instructions for the assistant. + |
+
model |
+
+ str
+ |
+
+
+
+ The model used by the assistant. + |
+
tools |
+
+ list
+ |
+
+
+
+ List of tools used by the assistant. + |
+
tool_resources |
+
+ dict
+ |
+
+
+
+ dict of tool resources associated with the assistant. + |
+
metadata |
+
+ dict
+ |
+
+
+
+ Additional data about the assistant. + |
+
download_temp_file
+
+
+ cached
+
+
+¶Downloads a file from OpenAI's servers and saves it to a temporary file.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ file_id
+ |
+
+ str
+ |
+
+
+
+ The ID of the file to be downloaded. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ | +
+
+
+ The file path of the downloaded temporary file. + |
+
format_run
+
+¶Formats a run, which is an object that has both .messages
and .steps
+attributes, each of which is a list of Messages and RunSteps.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ run
+ |
+ + | +
+
+
+ A Run object + |
+ + required + | +
+ include_messages
+ |
+
+ bool
+ |
+
+
+
+ Whether to include messages in the formatted output + |
+
+ True
+ |
+
+ include_steps
+ |
+
+ bool
+ |
+
+
+
+ Whether to include steps in the formatted output + |
+
+ True
+ |
+
format_timestamp
+
+¶Outputs the timestamp as a string in 12-hour format. Hours are left-padded with a space instead of a zero.
+ +
pprint_message
+
+¶Pretty-prints a single message using the rich library, highlighting the +speaker's role, the message text, any available images, and the message +timestamp in a panel format.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ message
+ |
+
+ Message
+ |
+
+
+
+ A message object + |
+ + required + | +
pprint_messages
+
+¶Iterates over a list of messages and pretty-prints each one.
+Messages are pretty-printed using the rich library, highlighting the +speaker's role, the message text, any available images, and the message +timestamp in a panel format.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ messages
+ |
+
+ list[Message]
+ |
+
+
+
+ A list of Message objects to be +printed. + |
+ + required + | +
pprint_run
+
+¶Pretty-prints a run, which is an object that has both .messages
and
+.steps
attributes, each of which is a list of Messages and RunSteps.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ run
+ |
+ + | +
+
+
+ A Run object + |
+ + required + | +
pprint_step
+
+¶Formats and prints a run step with status information.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ run_step
+ |
+ + | +
+
+
+ A RunStep object containing the details of the run step. + |
+ + required + | +
pprint_steps
+
+¶Iterates over a list of run steps and pretty-prints each one.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ steps
+ |
+
+ list[RunStep]
+ |
+
+
+
+ A list of RunStep objects to be printed. + |
+ + required + | +
Tip
+All async methods that have an `_async` suffix have sync equivalents that can be called without the suffix, e.g. `run()` instead of `await run_async()`.
Run
+
+
+¶The Run class represents a single execution of an assistant.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
thread |
+
+ Thread
+ |
+
+
+
+ The thread in which the run is executed. + |
+
assistant |
+
+ Assistant
+ |
+
+
+
+ The assistant that is being run. + |
+
model |
+
+ str
+ |
+
+
+
+ The model used by the assistant. + |
+
instructions |
+
+ str
+ |
+
+
+
+ Replacement instructions for the run. + |
+
additional_instructions |
+
+ str
+ |
+
+
+
+ Additional instructions to append + to the assistant's instructions. + |
+
tools |
+
+ list[Union[AssistantTool, Callable]]
+ |
+
+
+
+ Replacement tools + for the run. + |
+
additional_tools |
+
+ list[AssistantTool]
+ |
+
+
+
+ Additional tools to append + to the assistant's tools. + |
+
tool_choice |
+
+ Union[Literal['auto', 'none', 'required'], AssistantTool]
+ |
+
+
+
+
|
+
run |
+
+ Run
+ |
+
+
+
+ The OpenAI run object. + |
+
data |
+
+ Any
+ |
+
+
+
+ Any additional data associated with the run. + |
+
Tip
+All async methods that have an `_async` suffix have sync equivalents that can be called without the suffix, e.g. `run()` instead of `await run_async()`.
Thread
+
+
+¶The Thread class represents a conversation thread with an assistant.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
id |
+
+ Optional[str]
+ |
+
+
+
+ The unique identifier of the thread. None if the thread + hasn't been created yet. + |
+
metadata |
+
+ dict
+ |
+
+
+
+ Additional data about the thread. + |
+
add_async
+
+
+ async
+
+
+¶Add a user message to the thread.
+ +
create_async
+
+
+ async
+
+
+¶Creates a thread.
+ +
get_messages_async
+
+
+ async
+
+
+¶Asynchronously retrieves messages from the thread.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ limit
+ |
+
+ int
+ |
+
+
+
+ The maximum number of messages to return. + |
+
+ None
+ |
+
+ before_message
+ |
+
+ str
+ |
+
+
+
+ The ID of the message to start the +list from, retrieving messages sent before this one. + |
+
+ None
+ |
+
+ after_message
+ |
+
+ str
+ |
+
+
+
+ The ID of the message to start the +list from, retrieving messages sent after this one. + |
+
+ None
+ |
+
Returns: + list[Union[Message, dict]]: A list of messages from the thread
+ +
run_async
+
+
+ async
+
+
+¶Creates and returns a Run
of this thread with the provided assistant.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ assistant
+ |
+
+ Assistant
+ |
+
+
+
+ The assistant to run the thread with. + |
+ + required + | +
+ run_kwargs
+ |
+ + | +
+
+
+ Additional keyword arguments to pass to the Run constructor. + |
+
+ {}
+ |
+
Image
+
+
+¶
render_for_transcript
+
+¶Renders a JSON representation of the image for use in a Transcript +object, including IMAGE and TEXT tags.
+ +
Model
+
+
+¶A Pydantic model that can be instantiated from a natural language string, in +addition to keyword arguments.
+ + + + + + + + + +
from_text_async
+
+
+ async
+ classmethod
+
+
+¶Class method to create an instance of the model from a natural language string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The natural language string to convert into an instance of the model. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Model |
+ Model
+ |
+
+
+
+ An instance of the model. + |
+
caption
+
+¶Generates a caption for an image using a language model synchronously.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ Union[Image, List[Image]]
+ |
+
+
+
+ The image or images to caption. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the caption generation. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ Generated caption. + |
+
caption_async
+
+
+ async
+
+
+¶Generates a caption for a set of images using a language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ Union[Image, List[Image]]
+ |
+
+
+
+ The image or images to caption. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the caption generation. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional arguments for the language model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ Generated caption. + |
+
cast
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a +specified type. The conversion process can be guided by specific +instructions. The function also supports additional arguments for the +language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type to convert the data into. If none is provided
+but instructions are provided, |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI +function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
cast_async
+
+
+ async
+
+
+¶Converts the input data into the specified type.
+This function uses a language model to convert the input data into a +specified type. The conversion process can be guided by specific +instructions. The function also supports additional arguments for the +language model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type to convert the data into. If none is provided
+but instructions are provided, |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI +function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
T |
+ T
+ |
+
+
+
+ The converted data of the specified type. + |
+
classifier
+
+¶Class decorator that modifies the behavior of an Enum class to classify a string.
+This decorator modifies the call method of the Enum class to use the
+marvin.classify
function instead of the default Enum behavior. This allows
+the Enum class to classify a string based on its members.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ cls
+ |
+
+ Enum
+ |
+
+
+
+ The Enum class to be decorated. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the AI on +how to perform the classification. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword +arguments to pass to the model. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Enum | + | +
+
+
+ The decorated Enum class with modified call method. + |
+
Raises:
+Type | +Description | +
---|---|
+ AssertionError
+ |
+
+
+
+ If the decorated class is not a subclass of Enum. + |
+
classify
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ labels
+ |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
+ return_index
+ |
+
+ bool
+ |
+
+
+
+ Whether to return the index of the label instead of the label itself. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[T, int]
+ |
+
+
+
+ Union[T, int]: The label or index that the data was classified into. + |
+
classify_async
+
+
+ async
+
+
+¶Classifies the provided data based on the provided labels.
+This function uses a language model with a logit bias to classify the input +data. The logit bias constrains the language model's response to a single +token, making this function highly efficient for classification tasks. The +function will always return one of the provided labels.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ labels
+ |
+
+ Union[Enum, list[T], type]
+ |
+
+
+
+ The labels to classify the data into. + |
+ + required + | +
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the +classification. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[T, int]
+ |
+
+
+
+ Union[T, int]: The label or index that the data was classified into. + |
+
extract
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
extract_async
+
+
+ async
+
+
+¶Extracts entities of a specific type from the provided data.
+This function uses a language model to identify and extract entities of the +specified type from the input data. The extracted entities are returned as a +list.
+Note that either a target type or instructions must be provided (or both). +If only instructions are provided, the target type is assumed to be a +string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ data
+ |
+
+ FN_INPUT_TYPES
+ |
+
+
+
+ Union[str, Image, list[Union[str, Image]]]: the data to which +the function will be applied. + |
+ + required + | +
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of entities to extract. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the extraction. +Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ MarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of extracted entities of the specified type. + |
+
fn
+
+¶Converts a Python function into an AI function using a decorator.
+This decorator allows a Python function to be converted into an AI function. +The AI function uses a language model to generate its output.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ func
+ |
+
+ Callable
+ |
+
+
+
+ The function to be converted. Defaults to None. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The converted AI function. + |
+
generate
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
+ n
+ |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
+ use_cache
+ |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
+ temperature
+ |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
generate_async
+
+
+ async
+
+
+¶Generates a list of 'n' items of the provided type or based on instructions.
+Either a type or instructions must be provided. If instructions are provided +without a type, the type is assumed to be a string. The function generates at +least 'n' items.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ target
+ |
+
+ type
+ |
+
+
+
+ The type of items to generate. Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Instructions for the generation. Defaults to None. + |
+
+ None
+ |
+
+ n
+ |
+
+ int
+ |
+
+
+
+ The number of items to generate. Defaults to 1. + |
+
+ 1
+ |
+
+ use_cache
+ |
+
+ bool
+ |
+
+
+
+ If True, the function will cache the last +100 responses for each (target, instructions, and temperature) and use +those to avoid repetition on subsequent calls. Defaults to True. + |
+
+ True
+ |
+
+ temperature
+ |
+
+ float
+ |
+
+
+
+ The temperature for the generation. Defaults to 1. + |
+
+ 1
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
+ client
+ |
+
+ AsyncMarvinClient
+ |
+
+
+
+ The client to use for the AI function. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list[T]
+ |
+
+
+
+ A list of generated items. + |
+
image
+
+¶A decorator that transforms a function's output into an image.
+This decorator takes a function that returns a string, and uses that string +as instructions to generate an image. The generated image is then returned.
+The decorator can be used with or without parentheses. If used without
+parentheses, the decorated function's output is used as the instructions
+for the image. If used with parentheses, an optional literal
argument can
+be provided. If literal
is set to True
, the function's output is used
+as the literal instructions for the image, without any modifications.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ fn
+ |
+
+ callable
+ |
+
+
+
+ The function to decorate. If |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to use the function's output as the
+literal instructions for the image. Defaults to |
+
+ False
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
callable | + | +
+
+
+ The decorated function. + |
+
model
+
+¶Class decorator for instantiating a Pydantic model from a string.
+This decorator allows a Pydantic model to be instantiated from a string. It's +equivalent to subclassing the Model class.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ type_
+ |
+
+ Union[Type[M], None]
+ |
+
+
+
+ The type of the Pydantic model. +Defaults to None. + |
+
+ None
+ |
+
+ instructions
+ |
+
+ str
+ |
+
+
+
+ Specific instructions for the conversion. + |
+
+ None
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Union[Type[M], Callable[[Type[M]], Type[M]]]
+ |
+
+
+
+ Union[Type[M], Callable[[Type[M]], Type[M]]]: The decorated Pydantic model. + |
+
paint
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ instructions
+ |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
+ context
+ |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
paint_async
+
+
+ async
+
+
+¶Generates an image based on the provided instructions and context.
+This function uses the DALLE-3 API to generate an image based on the provided
+instructions and context. By default, the API modifies prompts to add detail
+and style. This behavior can be disabled by setting literal=True
.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ instructions
+ |
+
+ str
+ |
+
+
+
+ The instructions for the image generation. +Defaults to None. + |
+
+ None
+ |
+
+ context
+ |
+
+ dict
+ |
+
+
+
+ The context for the image generation. Defaults to None. + |
+
+ None
+ |
+
+ literal
+ |
+
+ bool
+ |
+
+
+
+ Whether to disable the API's default behavior of +modifying prompts. Defaults to False. + |
+
+ False
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
ImagesResponse | + | +
+
+
+ The response from the DALLE-3 API, which includes the +generated image. + |
+
speak
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
+ voice
+ |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the
+audio can not be saved or played until it has all been generated. If
+True, |
+
+ True
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Audio |
+ Audio
+ |
+
+
+
+ The generated audio. + |
+
speak_async
+
+
+ async
+
+
+¶Generates audio from text using an AI.
+This function uses an AI to generate audio from the provided text. The voice +used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to generate audio from. + |
+ + required + | +
+ voice
+ |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the
+audio can not be saved or played until it has all been generated. If
+True, |
+
+ True
+ |
+
+ model_kwargs
+ |
+
+ dict
+ |
+
+
+
+ Additional keyword arguments for the +language model. Defaults to None. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Audio |
+ Audio
+ |
+
+
+
+ The generated audio. + |
+
speech
+
+¶Function decorator that generates audio from the wrapped function's return +value. The voice used for the audio can be specified.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ fn
+ |
+
+ Callable
+ |
+
+
+
+ The function to wrap. Defaults to None. + |
+
+ None
+ |
+
+ voice
+ |
+
+ str
+ |
+
+
+
+ The voice to use for the audio. Defaults to None. + |
+
+ None
+ |
+
+ stream
+ |
+
+ bool
+ |
+
+
+
+ Whether to stream the audio. If False, the
+audio can not be saved or played until it has all been generated. If
+True, |
+
+ True
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
Callable |
+ Callable
+ |
+
+
+
+ The wrapped function. + |
+
transcribe
+
+¶Transcribes audio from a file.
+This function converts audio from a file to text.
+ +
transcribe_async
+
+
+ async
+
+
+¶Transcribes audio from a file.
+This function converts audio from a file to text.
+ +Settings for configuring marvin
.
AssistantSettings
+
+
+¶Settings for the assistant API.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default assistant model to use + |
+
AudioSettings
+
+
+¶Settings for the audio API.
+ + + + + + + + + +
ImageSettings
+
+
+¶Settings for OpenAI's image API.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default image model to use, defaults to |
+
size |
+
+ Literal['1024x1024', '1792x1024', '1024x1792']
+ |
+
+
+
+ The default image size to use, defaults to |
+
response_format |
+
+ Literal['url', 'b64_json']
+ |
+
+
+
+ The default response format to use, defaults to |
+
style |
+
+ Literal['vivid', 'natural']
+ |
+
+
+
+ The default style to use, defaults to |
+
OpenAISettings
+
+
+¶Settings for the OpenAI API.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
api_key |
+
+ Optional[SecretStr]
+ |
+
+
+
+ Your OpenAI API key. + |
+
organization |
+
+ Optional[str]
+ |
+
+
+
+ Your OpenAI organization ID. + |
+
llms |
+
+ Optional[str]
+ |
+
+
+
+ Settings for the chat API. + |
+
images |
+
+ ImageSettings
+ |
+
+
+
+ Settings for the images API. + |
+
audio |
+
+ AudioSettings
+ |
+
+
+
+ Settings for the audio API. + |
+
assistants |
+
+ AssistantSettings
+ |
+
+
+
+ Settings for the assistants API. + |
+
Set the OpenAI API key: +
+
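A minimal sketch (the key value is a placeholder; setting the `MARVIN_OPENAI_API_KEY` environment variable is assumed to work equivalently):

```python
import marvin

marvin.settings.openai.api_key = "sk-..."
```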
Settings
+
+
+¶Settings for marvin
.
This is the main settings object for marvin
.
Attributes:
+Name | +Type | +Description | +
---|---|---|
openai |
+
+ OpenAISettings
+ |
+
+
+
+ Settings for the OpenAI API. + |
+
log_level |
+
+ str
+ |
+
+
+
+ The log level to use, defaults to |
+
Set the log level to INFO
:
+
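A minimal sketch (direct assignment is assumed to be allowed; `temporary_settings` below shows a scoped alternative):

```python
import marvin

marvin.settings.log_level = "INFO"
```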
SpeechSettings
+
+
+¶Settings for OpenAI's speech API.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
model |
+
+ str
+ |
+
+
+
+ The default speech model to use, defaults to |
+
voice |
+
+ Literal['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
+ |
+
+
+
+ The default voice to use, defaults to |
+
response_format |
+
+ Literal['mp3', 'opus', 'aac', 'flac']
+ |
+
+
+
+ The default response format to use, defaults to |
+
speed |
+
+ float
+ |
+
+
+
+ The default speed to use, defaults to |
+
temporary_settings
+
+¶Temporarily override Marvin setting values, including nested settings objects.
+To override nested settings, use __
to separate nested attribute names.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ **kwargs
+ |
+
+ Any
+ |
+
+
+
+ The settings to override, including nested settings. + |
+
+ {}
+ |
+
Temporarily override log level and OpenAI API key: +
import marvin
+from marvin.settings import temporary_settings
+
+# Override top-level settings
+with temporary_settings(log_level="INFO"):
+ assert marvin.settings.log_level == "INFO"
+assert marvin.settings.log_level == "DEBUG"
+
+# Override nested settings
+with temporary_settings(openai__api_key="new-api-key"):
+ assert marvin.settings.openai.api_key.get_secret_value() == "new-api-key"
+assert marvin.settings.openai.api_key.get_secret_value().startswith("sk-")
+
Audio
+
+
+¶
BaseMessage
+
+
+¶Base schema for messages
+ + + + + + + + + +
Image
+
+
+¶
render_for_transcript
+
+¶Renders a JSON representation of the image for use in a Transcript +object, including IMAGE and TEXT tags.
+ +
ImageFileContentBlock
+
+
+¶Schema for messages containing images
+ + + + + + + + + +
TextContentBlock
+
+
+¶Schema for messages containing text
+ + + + + + + + + +Utilities for working with asyncio.
+ + + + + + + + +
ExposeSyncMethodsMixin
+
+
+¶A mixin that can take functions decorated with expose_sync_method
+and automatically create synchronous versions.
create_task
+
+¶Creates async background tasks in a way that is safe from garbage +collection.
+See +https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/
+Example:
+async def my_coro(x: int) -> int:
+    return x + 1
+
+create_task(my_coro(1))
+ +
expose_sync_method
+
+¶Decorator that automatically exposes synchronous versions of async methods. +Note it doesn't work with classmethods.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ name
+ |
+
+ str
+ |
+
+
+
+ The name of the synchronous method. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ Callable[..., Any]
+ |
+
+
+
+ The decorated function. + |
+
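Example (a rough sketch): pairing `expose_sync_method` with `ExposeSyncMethodsMixin` so a synchronous wrapper is generated automatically. The import path and the exact generation mechanics are assumptions; verify against the module source.

```python
from marvin.utilities.asyncio import ExposeSyncMethodsMixin, expose_sync_method

class Greeter(ExposeSyncMethodsMixin):
    @expose_sync_method("greet")
    async def greet_async(self, name: str) -> str:
        return f"Hello, {name}!"

# The mixin exposes a synchronous `greet` alongside `greet_async`.
Greeter().greet("world")
```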
make_sync
+
+¶Creates a synchronous function from an asynchronous function.
+ +
run_async
+
+
+ async
+
+
+¶Runs a synchronous function in an asynchronous manner.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ fn
+ |
+
+ Callable[..., T]
+ |
+
+
+
+ The function to run. + |
+ + required + | +
+ *args
+ |
+
+ Any
+ |
+
+
+
+ Positional arguments to pass to the function. + |
+
+ ()
+ |
+
+ **kwargs
+ |
+
+ Any
+ |
+
+
+
+ Keyword arguments to pass to the function. + |
+
+ {}
+ |
+
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The return value of the function. + |
+
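A minimal sketch, assuming `run_async` is importable from `marvin.utilities.asyncio`:

```python
import asyncio

from marvin.utilities.asyncio import run_async  # assumed module path


def slow_add(x: int, y: int) -> int:
    return x + y


async def main() -> None:
    # The synchronous function is awaited without blocking the event loop.
    assert await run_async(slow_add, 1, y=2) == 3


asyncio.run(main())
```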
run_sync
+
+¶Runs a coroutine from a synchronous context. A thread will be spawned +to run the event loop if necessary, which allows coroutines to run in +environments like Jupyter notebooks where the event loop runs on the main +thread.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ coroutine
+ |
+
+ Coroutine[Any, Any, T]
+ |
+
+
+
+ The coroutine to run. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The return value of the coroutine. + |
+
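A minimal sketch (same assumed module path):

```python
from marvin.utilities.asyncio import run_sync


async def get_answer() -> int:
    return 42


# Safe to call from plain synchronous code, and in environments like Jupyter
# where an event loop already runs on the main thread.
assert run_sync(get_answer()) == 42
```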
run_sync_if_awaitable
+
+¶If the object is awaitable, run it synchronously. Otherwise, return the +object.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ obj
+ |
+
+ Any
+ |
+
+
+
+ The object to run. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ Any
+ |
+
+
+
+ The return value of the object if it is awaitable, otherwise the object + |
+
+ Any
+ |
+
+
+
+ itself. + |
+
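A short sketch contrasting the two cases (same assumed module path):

```python
from marvin.utilities.asyncio import run_sync_if_awaitable


async def coro() -> str:
    return "ran"


assert run_sync_if_awaitable(coro()) == "ran"     # awaitable: executed
assert run_sync_if_awaitable("plain") == "plain"  # anything else: passed through
```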
Module for defining context utilities.
+ + + + + + + + +
ScopedContext
+
+
+¶ScopedContext
provides a context management mechanism using contextvars
.
This class allows setting and retrieving key-value pairs in a scoped context, +which is preserved across asynchronous tasks and threads within the same context.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
_context_storage |
+
+ ContextVar
+ |
+
+
+
+ A context variable to store the context data. + |
+
Basic Usage of ScopedContext +
+
base64_to_image
+
+¶Converts a base64 string to a local image file.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ base64_str
+ |
+
+ str
+ |
+
+
+
+ The base64 string representation of the image. + |
+ + required + | +
+ output_path
+ |
+
+ Union[str, Path]
+ |
+
+
+
+ The path to the output image file. This can be a +string or a Path object. + |
+ + required + | +
Returns:
+Type | +Description | +
---|---|
+ None
+ |
+
+
+
+ None + |
+
image_to_base64
+
+¶Converts a local image file to a base64 string.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ image_path
+ |
+
+ Union[str, Path]
+ |
+
+
+
+ The path to the image file. This can be a +string or a Path object. + |
+ + required + | +
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ The base64 representation of the image. + |
+
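A round-trip sketch combining both helpers; the `marvin.utilities.images` module path is an assumption, and the stand-in bytes below exist only to keep the example self-contained:

```python
from pathlib import Path

from marvin.utilities.images import base64_to_image, image_to_base64  # assumed path

# Stand-in file; in practice this would be a real image on disk.
source = Path("example.png")
source.write_bytes(b"\x89PNG\r\n\x1a\n")

b64_str = image_to_base64(source)      # file -> base64 string
base64_to_image(b64_str, "copy.png")   # base64 string -> file

assert Path("copy.png").read_bytes() == source.read_bytes()
```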
Module for Jinja utilities.
+ + + + + + + + +
BaseEnvironment
+
+
+¶BaseEnvironment provides a configurable environment for rendering Jinja templates.
+This class encapsulates a Jinja environment with customizable global functions and +template settings, allowing for flexible template rendering.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
environment |
+
+ Environment
+ |
+
+
+
+ The Jinja environment for template rendering. + |
+
globals |
+
+ dict[str, Any]
+ |
+
+
+
+ A dictionary of global functions and variables available in templates. + |
+
Basic Usage of BaseEnvironment +
+
render
+
+¶Renders a given template str
or BaseTemplate
with provided context.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ template
+ |
+
+ Union[str, Template]
+ |
+
+
+
+ The template to be rendered. + |
+ + required + | +
+ **kwargs
+ |
+
+ Any
+ |
+
+
+
+ Context variables to be passed to the template. + |
+
+ {}
+ |
+
Returns:
+Type | +Description | +
---|---|
+ str
+ |
+
+
+
+ The rendered template as a string. + |
+
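A minimal sketch, assuming `render` is an instance method of `BaseEnvironment` and that the class constructs with sensible defaults:

```python
from marvin.utilities.jinja import BaseEnvironment

env = BaseEnvironment()
greeting = env.render("Hello, {{ name }}!", name="Marvin")
assert greeting == "Hello, Marvin!"
```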
Transcript
+
+
+¶A Transcript is a model that represents a conversation involving multiple
+roles as a single string. It can be parsed into discrete JSON messages.
+
+Transcripts contain special tokens that indicate how to split the transcript
+into discrete messages.
+
+The first special token type indicates the message `role`. Default roles are
+`|SYSTEM|`, `|HUMAN|`, `|USER|`, and `|ASSISTANT|`. When these tokens appear
+at the start of a new line, all text following the token until the next
+newline or token is considered part of the message with the given role.
+
+The second special token type indicates the message `type`. By default, messages all have the `text` type. By supplying a token like `|IMAGE|`, you can indicate that a portion of the message is an image. Use `|TEXT|` to end the image portion and return to text.
+
+Attributes:
+ content: The content of the transcript.
+ roles: The roles involved in the transcript.
+ environment: The jinja environment to use for rendering the transcript.
+
+Example:
+ Basic Usage of Transcript:
+ ```python
+ from marvin.utilities.jinja import Transcript
+
+ transcript = Transcript(
+        content="|SYSTEM| Hello, there!
+
+        |USER| Hello, yourself!",
+        roles={"|SYSTEM|": "system", "|USER|": "user"},
+    )
+    print(transcript.render_to_messages())
+    # [
+    #   BaseMessage(content='system: Hello, there!', role='system'),
+    #   BaseMessage(content='Hello, yourself!', role='user')
+    # ]
+    ```
+ + + + + + + + + +
split_text_by_tokens
+
+¶Splits a given text by a list of tokens.
+ + +Parameters:
Name | Type | Description | Default
---|---|---|---
`text` | `str` | The text to be split. | required
`split_tokens` |  | The tokens to split the text by. | required
`only_on_newline` |  | If True, only match tokens that are either… | required
Returns:
+Type | +Description | +
---|---|
+ list[tuple[str, str]]
+ |
+
+
+
+ A list of tuples containing the token and the text following it. + |
+
Basic Usage of split_text_by_tokens
```python
from marvin.utilities.jinja import split_text_by_tokens

text = "Hello, World!"
split_tokens = ["Hello", "World"]
pairs = split_text_by_tokens(text, split_tokens)
print(pairs)
# Output: [("Hello", ", "), ("World", "!")]
```
+Module for logging utilities.
+ + + + + + + + +
get_logger
+
+
+ cached
+
+
+¶Retrieves a logger with the given name, or the root logger if no name is given.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ name
+ |
+
+ Optional[str]
+ |
+
+
+
+ The name of the logger to retrieve. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Logger
+ |
+
+
+
+ The logger with the given name, or the root logger if no name is given. + |
+
Utilities for working with OpenAI.
+ + + + + + + + +
get_openai_client
+
+¶Retrieves an OpenAI client (sync or async) based on the current configuration.
+ + +Returns:
+Type | +Description | +
---|---|
+ Union[AsyncClient, Client, AzureOpenAI, AsyncAzureOpenAI]
+ |
+
+
+
+ The OpenAI client + |
+
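A short sketch; the `marvin.utilities.openai` module path and the zero-argument call are assumptions:

```python
from marvin.utilities.openai import get_openai_client

# Returns whichever client type matches the current configuration
# (sync or async, OpenAI or Azure OpenAI).
client = get_openai_client()
print(type(client).__name__)
```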
Module for Pydantic utilities.
+ + + + + + + + +
cast_to_model
+
+¶Casts a type or callable to a Pydantic model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ function_or_type
+ |
+
+ Union[type, type[BaseModel], GenericAlias, Callable[..., Any]]
+ |
+
+
+
+ The type or callable to cast to a Pydantic model. + |
+ + required + | +
+ name
+ |
+
+ Optional[str]
+ |
+
+
+
+ The name of the model to create. + |
+
+ None
+ |
+
+ description
+ |
+
+ Optional[str]
+ |
+
+
+
+ The description of the model to create. + |
+
+ None
+ |
+
+ field_name
+ |
+
+ Optional[str]
+ |
+
+
+
+ The name of the field to create. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ type[BaseModel]
+ |
+
+
+
+ The Pydantic model created from the given type or callable. + |
+
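A minimal sketch, assuming `cast_to_model` lives in `marvin.utilities.pydantic` alongside `parse_as` below:

```python
from pydantic import BaseModel

from marvin.utilities.pydantic import cast_to_model  # assumed module path


def book_flight(destination: str, passengers: int = 1) -> str:
    """Book a flight for one or more passengers."""


Model = cast_to_model(book_flight, name="BookFlight")

# Typically, the callable's parameters become the model's fields.
assert issubclass(Model, BaseModel)
print(Model.model_json_schema())
```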
parse_as
+
+¶Parse a given data structure as a Pydantic model via TypeAdapter
.
Read more about TypeAdapter
here.
Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ type_
+ |
+
+ type[T]
+ |
+
+
+
+ The type to parse the data as. + |
+ + required + | +
+ data
+ |
+
+ Any
+ |
+
+
+
+ The data to be parsed. + |
+ + required + | +
+ mode
+ |
+
+ Literal['python', 'json', 'strings']
+ |
+
+
+
+ The mode to use for parsing, either `'python'`, `'json'`, or `'strings'`. |
+
+ 'python'
+ |
+
Returns:
+Type | +Description | +
---|---|
+ T
+ |
+
+
+
+ The parsed |
+
Basic Usage of parse_as
+
from marvin.utilities.pydantic import parse_as
+from pydantic import BaseModel
+
+class ExampleModel(BaseModel):
+ name: str
+
+# parsing python objects
+parsed = parse_as(ExampleModel, {"name": "Marvin"})
+assert isinstance(parsed, ExampleModel)
+assert parsed.name == "Marvin"
+
+# parsing json strings
+parsed = parse_as(
+ list[ExampleModel],
+ '[{"name": "Marvin"}, {"name": "Arthur"}]',
+ mode="json"
+)
+assert all(isinstance(item, ExampleModel) for item in parsed)
+assert parsed[0].name == "Marvin"
+assert parsed[1].name == "Arthur"
+
+# parsing raw strings
+parsed = parse_as(int, '123', mode="strings")
+assert isinstance(parsed, int)
+assert parsed == 123
+
PythonFunction
+
+
+¶A Pydantic model representing a Python function.
+ + +Attributes:
+Name | +Type | +Description | +
---|---|---|
function |
+
+ Callable
+ |
+
+
+
+ The original function object. + |
+
signature |
+
+ Signature
+ |
+
+
+
+ The signature object of the function. + |
+
name |
+
+ str
+ |
+
+
+
+ The name of the function. + |
+
docstring |
+
+ Optional[str]
+ |
+
+
+
+ The docstring of the function. + |
+
parameters |
+
+ List[ParameterModel]
+ |
+
+
+
+ The parameters of the function. + |
+
return_annotation |
+
+ Optional[Any]
+ |
+
+
+
+ The return annotation of the function. + |
+
source_code |
+
+ str
+ |
+
+
+
+ The source code of the function. + |
+
bound_parameters |
+
+ dict[str, Any]
+ |
+
+
+
+ The parameters of the function bound with values. + |
+
return_value |
+
+ Optional[Any]
+ |
+
+
+
+ The return value of the function call. + |
+
from_function
+
+
+ classmethod
+
+
+¶Create a PythonFunction instance from a function.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ func
+ |
+
+ Callable
+ |
+
+
+
+ The function to create a PythonFunction instance from. + |
+ + required + | +
+ **kwargs
+ |
+ + | +
+
+
+ Additional keyword arguments to set as attributes on the PythonFunction instance. + |
+
+ {}
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
PythonFunction |
+ PythonFunction
+ |
+
+
+
+ The created PythonFunction instance. + |
+
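A minimal sketch, assuming `PythonFunction` is importable from `marvin.utilities.pydantic`:

```python
from marvin.utilities.pydantic import PythonFunction  # assumed module path


def add(x: int, y: int) -> int:
    """Add two integers."""
    return x + y


pf = PythonFunction.from_function(add)

assert pf.name == "add"
assert pf.docstring == "Add two integers."
print(pf.source_code)  # full source of `add`, per the attributes above
```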
from_function_call
+
+
+ classmethod
+
+
+¶Create a PythonFunction instance from a function call.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ func
+ |
+
+ Callable
+ |
+
+
+
+ The function to call. + |
+ + required + | +
+ *args
+ |
+ + | +
+
+
+ Positional arguments to pass to the function call. + |
+
+ ()
+ |
+
+ **kwargs
+ |
+ + | +
+
+
+ Keyword arguments to pass to the function call. + |
+
+ {}
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
PythonFunction |
+ PythonFunction
+ |
+
+
+
+ The created PythonFunction instance, with the return value of the function call set as an attribute. + |
+
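And the call-based constructor, which additionally records the bound arguments and the return value (same assumed module path):

```python
from marvin.utilities.pydantic import PythonFunction  # assumed module path


def add(x: int, y: int) -> int:
    """Add two integers."""
    return x + y


pf = PythonFunction.from_function_call(add, 1, y=2)

assert pf.bound_parameters == {"x": 1, "y": 2}
assert pf.return_value == 3
```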
Module for Slack-related utilities.
+ + + + + + + + +
edit_slack_message
+
+
+ async
+
+
+¶Edit an existing Slack message by appending new text or replacing it.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ channel
+ |
+
+ str
+ |
+
+
+
+ The Slack channel ID. + |
+ + required + | +
+ ts
+ |
+
+ str
+ |
+
+
+
+ The timestamp of the message to edit. + |
+ + required + | +
+ new_text
+ |
+
+ str
+ |
+
+
+
+ The new text to append or replace in the message. + |
+ + required + | +
+ mode
+ |
+
+ str
+ |
+
+
+
+ The mode of text editing, 'append' (default) or 'replace'. + |
+
+ 'append'
+ |
+
Returns:
+Type | +Description | +
---|---|
+ Response
+ |
+
+
+
+ httpx.Response: The response from the Slack API. + |
+
fetch_current_message_text
+
+
+ async
+
+
+¶Fetch the current text of a specific Slack message using its timestamp.
+ +
get_thread_messages
+
+
+ async
+
+
+¶Get all messages from a slack thread.
+ +
get_token
+
+
+ async
+
+
+¶Get the Slack bot token from the environment.
+ +
search_slack_messages
+
+
+ async
+
+
+¶Search for messages in Slack workspace based on a query.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ query
+ |
+
+ str
+ |
+
+
+
+ The search query. + |
+ + required + | +
+ max_messages
+ |
+
+ int
+ |
+
+
+
+ The maximum number of messages to retrieve. + |
+
+ 3
+ |
+
+ channel
+ |
+
+ str
+ |
+
+
+
+ The specific channel to search in. Defaults to None, +which searches all channels. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
list |
+ list
+ |
+
+
+
+ A list of message contents and permalinks matching the query. + |
+
Module for string utilities.
+ + + + + + + + +
count_tokens
+
+¶Counts the number of tokens in the given text using the specified model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to count tokens in. + |
+ + required + | +
+ model
+ |
+
+ str
+ |
+
+
+
+ The model to use for token counting. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
int |
+ int
+ |
+
+
+
+ The number of tokens in the text. + |
+
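A minimal sketch, assuming the helper lives in `marvin.utilities.strings`:

```python
from marvin.utilities.strings import count_tokens  # assumed module path

n = count_tokens("Marvin is a robot.")
assert n > 0
```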
detokenize
+
+¶Detokenizes the given tokens using the specified model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ tokens
+ |
+
+ list[int]
+ |
+
+
+
+ The tokens to detokenize. + |
+ + required + | +
+ model
+ |
+
+ str
+ |
+
+
+
+ The model to use for detokenization. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ The detokenized text. + |
+
slice_tokens
+
+¶Slices the given text to the specified number of tokens.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to slice. + |
+ + required + | +
+ n_tokens
+ |
+
+ int
+ |
+
+
+
+ The number of tokens to slice the text to. + |
+ + required + | +
+ model
+ |
+
+ str
+ |
+
+
+
+ The model to use for token counting. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Name | Type | +Description | +
---|---|---|
str |
+ str
+ |
+
+
+
+ The sliced text. + |
+
tokenize
+
+¶Tokenizes the given text using the specified model.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ text
+ |
+
+ str
+ |
+
+
+
+ The text to tokenize. + |
+ + required + | +
+ model
+ |
+
+ str
+ |
+
+
+
+ The model to use for tokenization. If not provided, + the default model is used. + |
+
+ None
+ |
+
Returns:
+Type | +Description | +
---|---|
+ list[int]
+ |
+
+
+
+ list[int]: The tokenized text as a list of integers. + |
+
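A round-trip sketch tying the three helpers above together (same assumed module path):

```python
from marvin.utilities.strings import detokenize, slice_tokens, tokenize

text = "Marvin is a paranoid android."

tokens = tokenize(text)            # list[int]
assert detokenize(tokens) == text  # lossless round trip

short = slice_tokens(text, 3)      # text truncated to its first 3 tokens
assert len(tokenize(short)) <= 3
```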
Utilities for running unit tests.
+ + + + + + + + +
assert_equal
+
+¶Asserts whether the LLM output meets the expected output.
+This function uses an LLM to assess whether the provided output (llm_output) meets some expectation. It lets us make semantic claims like "the output is a list of first names" and assert them against stochastic LLM outputs.
+ + +Parameters:
+Name | +Type | +Description | +Default | +
---|---|---|---|
+ llm_output
+ |
+
+ Any
+ |
+
+
+
+ The output from the LLM. + |
+ + required + | +
+ expected
+ |
+
+ Any
+ |
+
+
+
+ The expected output. + |
+ + required + | +
Returns:
+Name | Type | +Description | +
---|---|---|
bool |
+ bool
+ |
+
+
+
+ True if the LLM output meets the expectation, False otherwise. + |
+
Raises:
+Type | +Description | +
---|---|
+ AssertionError
+ |
+
+
+
+ If the LLM output does not meet the expectation. + |
+
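A minimal sketch, assuming the helper lives in `marvin.utilities.testing`; it calls an LLM, so an OpenAI API key must be configured:

```python
from marvin.utilities.testing import assert_equal  # assumed module path

# Passes if the LLM judges the output to satisfy the claim,
# raises AssertionError otherwise.
assert_equal(
    llm_output=["Ford", "Arthur", "Trillian"],
    expected="a list of first names",
)
```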
assert_locations_equal
+
+¶Helpful LLM assert for comparing two locations (e.g. New York, New York City)
+ +Module for LLM tool utilities.
+ + + + + + + + +
call_function_tool
+
+¶Helper function for calling a function tool from a list of tools, using the arguments +provided by an LLM as a JSON string. This function handles many common errors.
+ +
custom_partial
+
+¶Returns a new function with partial application of the given keyword arguments. +The new function has the same name and docstring as the original, and its +signature excludes the provided kwargs.
+ +
output_to_string
+
+¶Function outputs must be provided as strings
+ +
tool_from_function
+
+¶Creates an OpenAI-compatible tool from a Python function.
+If any kwargs are provided, they will be stored and provided at runtime. +Provided kwargs will be removed from the tool's parameter schema.
+ +
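A minimal sketch, assuming these helpers live in `marvin.utilities.tools`:

```python
import random

from marvin.utilities.tools import tool_from_function  # assumed module path


def roll_dice(n_dice: int) -> list[int]:
    """Roll `n_dice` six-sided dice."""
    return [random.randint(1, 6) for _ in range(n_dice)]


tool = tool_from_function(roll_dice)
# An OpenAI-compatible tool definition wrapping `roll_dice`.
print(tool)
```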
tool_from_model
+
+¶Creates an OpenAI-compatible tool from a Pydantic model class.
+ +
tool_from_type
+
+¶Creates an OpenAI-compatible tool from a Python type.
+ +