Warning
This SDK has breaking changes in versions >= 0.6.0.
All methods now return Pydantic models.
Previously, you had to use the [] syntax to access response values, which required a little more code for every property access.
chat_response = humanloop.chat(
    # parameters
)
print(chat_response.body["project_id"])
With Pydantic-based response values, you can use the . syntax instead. This is slightly less verbose and more Pythonic.
chat_response = humanloop.chat(
    # parameters
)
print(chat_response.project_id)
To reuse existing implementations from versions < 0.6.0, use the .raw namespace as described in the Raw HTTP Response section.
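For example, a minimal sketch of that migration path, reusing the chat call above: the .raw namespace keeps the dictionary-style access from older versions (see the Raw HTTP Response section for the full parameter list).

```python
# Sketch: pre-0.6.0-style dictionary access via the .raw namespace
raw_response = humanloop.chats.raw.create(
    # parameters
)
print(raw_response.body["project_id"])  # .raw responses still expose a dict body
```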
- Requirements
- Installation
- Getting Started
- Async
- Raw HTTP Response
- Streaming
- Reference
humanloop.chat
humanloop.chat_deployed
humanloop.chat_model_config
humanloop.complete
humanloop.complete_deployed
humanloop.complete_model_configuration
humanloop.datapoints.delete
humanloop.datapoints.get
humanloop.datapoints.update
humanloop.datasets.create
humanloop.datasets.create_datapoint
humanloop.datasets.delete
humanloop.datasets.get
humanloop.datasets.list
humanloop.datasets.list_all_for_project
humanloop.datasets.list_datapoints
humanloop.datasets.update
humanloop.evaluations.add_evaluators
humanloop.evaluations.create
humanloop.evaluations.get
humanloop.evaluations.list
humanloop.evaluations.list_all_for_project
humanloop.evaluations.list_datapoints
humanloop.evaluations.log
humanloop.evaluations.result
humanloop.evaluations.update_status
humanloop.evaluators.create
humanloop.evaluators.delete
humanloop.evaluators.get
humanloop.evaluators.list
humanloop.evaluators.update
humanloop.feedback
humanloop.logs.delete
humanloop.logs.get
humanloop.logs.list
humanloop.log
humanloop.logs.update
humanloop.logs.update_by_ref
humanloop.model_configs.deserialize
humanloop.model_configs.export
humanloop.model_configs.get
humanloop.model_configs.register
humanloop.model_configs.serialize
humanloop.projects.create
humanloop.projects.create_feedback_type
humanloop.projects.deactivate_config
humanloop.projects.delete
humanloop.projects.delete_deployed_config
humanloop.projects.deploy_config
humanloop.projects.export
humanloop.projects.get
humanloop.projects.get_active_config
humanloop.projects.list
humanloop.projects.list_configs
humanloop.projects.list_deployed_configs
humanloop.projects.update
humanloop.projects.update_feedback_types
humanloop.sessions.create
humanloop.sessions.get
humanloop.sessions.list
Python >=3.7
pip install humanloop==0.7.35
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

try:
    # Chat
    chat_response = humanloop.chat(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
        stream=False,
    )
    print(chat_response)
except ApiException as e:
    print("Exception when calling .chat: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Complete
    complete_response = humanloop.complete(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
        stream=False,
    )
    print(complete_response)
except ApiException as e:
    print("Exception when calling .complete: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Feedback
    feedback_response = humanloop.feedback(
        type="rating",
        value="good",
        data_id="data_[...]",
        user="user@example.com",
    )
    print(feedback_response)
except ApiException as e:
    print("Exception when calling .feedback: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Log
    log_response = humanloop.log(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        output="Llamas can be friendly and curious if they are trained to be around people, but if they are treated too much like pets when they are young, they can become difficult to handle when they grow up. This means they might spit, kick, and wrestle with their necks.",
        source="sdk",
        config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            "type": "model",
        },
    )
    print(log_response)
except ApiException as e:
    print("Exception when calling .log: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)
Async support is available by prepending an a to any method; for example, humanloop.complete becomes humanloop.acomplete.
import asyncio
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

async def main():
    try:
        complete_response = await humanloop.acomplete(
            project="sdk-example",
            inputs={
                "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
            },
            model_config={
                "model": "gpt-3.5-turbo",
                "max_tokens": -1,
                "temperature": 0.7,
                "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            },
            stream=False,
        )
        print(complete_response)
    except ApiException as e:
        print("Exception when calling .complete: %s\n" % e)
        pprint(e.body)
        if e.status == 422:
            pprint(e.body["detail"])
        pprint(e.headers)
        pprint(e.status)
        pprint(e.reason)
        pprint(e.round_trip_time)

asyncio.run(main())
To access raw HTTP response values, use the .raw namespace.
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    openai_api_key="OPENAI_API_KEY",
    openai_azure_api_key="OPENAI_AZURE_API_KEY",
    openai_azure_endpoint_api_key="OPENAI_AZURE_ENDPOINT_API_KEY",
    anthropic_api_key="ANTHROPIC_API_KEY",
    cohere_api_key="COHERE_API_KEY",
    api_key="YOUR_API_KEY",
)

try:
    # Chat
    create_response = humanloop.chats.raw.create(
        messages=[
            {
                "role": "user",
            }
        ],
        model_config={
            "provider": "openai",
            "model": "model_example",
            "max_tokens": -1,
            "temperature": 1,
            "top_p": 1,
            "presence_penalty": 0,
            "frequency_penalty": 0,
            "endpoint": "complete",
        },
        project="string_example",
        project_id="string_example",
        session_id="string_example",
        session_reference_id="string_example",
        parent_id="string_example",
        parent_reference_id="string_example",
        inputs={},
        source="string_example",
        metadata={},
        save=True,
        source_datapoint_id="string_example",
        provider_api_keys={},
        num_samples=1,
        stream=False,
        user="string_example",
        seed=1,
        return_inputs=True,
        tool_choice="string_example",
        tool_call="string_example",
        response_format={
            "type": "string_example",
        },
    )
    pprint(create_response.body)
    pprint(create_response.body["data"])
    pprint(create_response.body["provider_responses"])
    pprint(create_response.body["project_id"])
    pprint(create_response.body["num_samples"])
    pprint(create_response.body["logprobs"])
    pprint(create_response.body["suffix"])
    pprint(create_response.body["user"])
    pprint(create_response.body["usage"])
    pprint(create_response.body["metadata"])
    pprint(create_response.body["provider_request"])
    pprint(create_response.body["session_id"])
    pprint(create_response.body["tool_choice"])
    pprint(create_response.headers)
    pprint(create_response.status)
    pprint(create_response.round_trip_time)
except ApiException as e:
    print("Exception when calling ChatsApi.create: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)
Streaming support is available by suffixing a chat or complete method with _stream.
import asyncio
from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

async def main():
    response = await humanloop.chat_stream(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
    )
    async for token in response.content:
        print(token)

asyncio.run(main())
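The completion flavour follows the same pattern. Below is a hedged sketch, assuming humanloop.complete_stream mirrors the parameters of humanloop.complete and continues the setup above:

```python
# Sketch: streaming a completion (assumes complete_stream mirrors complete's signature)
async def stream_summary():
    response = await humanloop.complete_stream(
        project="sdk-example",
        inputs={"text": "Llamas are very friendly."},
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
    )
    async for token in response.content:
        print(token)

asyncio.run(stream_summary())
```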
Get a chat response by providing details of the model configuration in the request.
create_response = humanloop.chat(
messages=[
{
"role": "user",
}
],
model_config={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
},
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "string_example",
},
)
The messages passed to the provider chat endpoint.
model_config: ModelConfigChatRequest
The model configuration used to create a chat response.
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str, Dict[str, str]]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
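To illustrate the two fields above, here is a hedged sketch: the tool name get_current_weather is hypothetical (a tool of that name would need to be defined in the model config), and it assumes tool_choice also accepts the dict form documented above rather than only a string.

```python
# Sketch: force a named (hypothetical) tool and request JSON output
chat_response = humanloop.chat(
    project="sdk-example",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    model_config={
        "model": "gpt-3.5-turbo",
        "max_tokens": -1,
        "temperature": 0.7,
        # in practice a 'get_current_weather' tool would be defined here
    },
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
    response_format={"type": "json_object"},
)
```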
/chat
post
🔝 Back to Table of Contents
Get a chat response using the project's active deployment.
The active deployment can be a specific model configuration.
create_deployed_response = humanloop.chat_deployed(
messages=[
{
"role": "user",
}
],
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "string_example",
},
environment="string_example",
)
The messages passed to the provider chat endpoint.
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str, Dict[str, str]]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
The environment name used to create a chat response. If not specified, the default environment will be used.
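For instance, a minimal sketch of targeting a named environment (assuming a config has already been deployed to an environment called 'production'):

```python
# Sketch: call whatever config is deployed to the 'production' environment
chat_response = humanloop.chat_deployed(
    project="sdk-example",
    messages=[{"role": "user", "content": "Hello!"}],
    environment="production",  # if omitted, the default environment is used
)
```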
/chat-deployed
post
🔝 Back to Table of Contents
Get chat response for a specific model configuration.
create_model_config_response = humanloop.chat_model_config(
messages=[
{
"role": "user",
}
],
model_config_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "string_example",
},
)
The messages passed to the provider chat endpoint.
Identifies the model configuration used to create a chat response.
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str, Dict[str, str]]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
/chat-model-config
post
🔝 Back to Table of Contents
Create a completion by providing details of the model configuration in the request.
create_response = humanloop.complete(
model_config={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
"prompt_template": "{{question}}",
},
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
)
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Include the log probabilities of the top n tokens in the provider_response
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
/completion
post
🔝 Back to Table of Contents
Create a completion using the project's active deployment.
The active deployment can be a specific model configuration.
create_deployed_response = humanloop.complete_deployed(
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
environment="string_example",
)
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Include the log probabilities of the top n tokens in the provider_response
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
The environment name used to create a completion response. If not specified, the default environment will be used.
/completion-deployed
post
🔝 Back to Table of Contents
Create a completion for a specific model configuration.
create_model_config_response = humanloop.complete_model_configuration(
model_config_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
)
Identifies the model configuration used to create a completion response.
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
The number of generations.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
End-user ID passed through to provider call.
Deprecated field: the seed is instead set as part of the request.config object.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Include the log probabilities of the top n tokens in the provider_response
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
/completion-model-config
post
🔝 Back to Table of Contents
Delete a list of datapoints by their IDs.
WARNING: This endpoint has been decommissioned and no longer works. Please use the v5 datasets API instead.
humanloop.datapoints.delete()
/datapoints
delete
🔝 Back to Table of Contents
Get a datapoint by ID.
get_response = humanloop.datapoints.get(
id="id_example",
)
String ID of datapoint.
/datapoints/{id}
get
🔝 Back to Table of Contents
Edit the input, messages and criteria fields of a datapoint.
WARNING: This endpoint has been decommissioned and no longer works. Please use the v5 datasets API instead.
update_response = humanloop.datapoints.update(
id="id_example",
)
String ID of datapoint.
/datapoints/{id}
patch
🔝 Back to Table of Contents
Create a new dataset for a project.
create_response = humanloop.datasets.create(
description="string_example",
name="string_example",
project_id="project_id_example",
)
The description of the dataset.
The name of the dataset.
/projects/{project_id}/datasets
post
🔝 Back to Table of Contents
Create a new datapoint for a dataset.
Here in the v4 API, this has the following behaviour:
- Retrieve the current latest version of the dataset.
- Construct a new version of the dataset with the new testcases added.
- Store that latest version as a committed version with an autogenerated commit message and return the new datapoints.
create_datapoint_response = humanloop.datasets.create_datapoint(
body={
"log_ids": ["log_ids_example"],
},
dataset_id="dataset_id_example",
log_ids=["string_example"],
inputs={
"key": "string_example",
},
messages=[
{
"role": "user",
}
],
target={
"key": "string_example",
},
)
String ID of dataset. Starts with evts_.
requestBody: DatasetsCreateDatapointRequest
DatasetsCreateDatapointResponse
/datasets/{dataset_id}/datapoints
post
🔝 Back to Table of Contents
Delete a dataset by ID.
delete_response = humanloop.datasets.delete(
id="id_example",
)
String ID of dataset. Starts with evts_.
/datasets/{id}
delete
🔝 Back to Table of Contents
Get a single dataset by ID.
get_response = humanloop.datasets.get(
id="id_example",
)
String ID of dataset. Starts with evts_.
/datasets/{id}
get
🔝 Back to Table of Contents
Get all Datasets for an organization.
list_response = humanloop.datasets.list()
/datasets
get
🔝 Back to Table of Contents
Get all datasets for a project.
list_all_for_project_response = humanloop.datasets.list_all_for_project(
project_id="project_id_example",
)
DatasetsListAllForProjectResponse
/projects/{project_id}/datasets
get
🔝 Back to Table of Contents
Get datapoints for a dataset.
list_datapoints_response = humanloop.datasets.list_datapoints(
dataset_id="dataset_id_example",
page=0,
size=50,
)
String ID of dataset. Starts with evts_.
PaginatedDataDatapointResponse
/datasets/{dataset_id}/datapoints
get
🔝 Back to Table of Contents
Update a testset by ID.
update_response = humanloop.datasets.update(
id="id_example",
description="string_example",
name="string_example",
)
String ID of testset. Starts with evts_.
The description of the dataset.
The name of the dataset.
/datasets/{id}
patch
🔝 Back to Table of Contents
Add evaluators to an existing evaluation run.
add_evaluators_response = humanloop.evaluations.add_evaluators(
id="id_example",
evaluator_ids=["string_example"],
evaluator_version_ids=["string_example"],
)
String ID of evaluation run. Starts with ev_.
evaluator_ids: AddEvaluatorsRequestEvaluatorIds
evaluator_version_ids: AddEvaluatorsRequestEvaluatorVersionIds
/evaluations/{id}/evaluators
patch
🔝 Back to Table of Contents
Create an evaluation.
create_response = humanloop.evaluations.create(
config_id="string_example",
evaluator_ids=["string_example"],
dataset_id="string_example",
project_id="project_id_example",
provider_api_keys={},
hl_generated=True,
name="string_example",
)
ID of the config to evaluate. Starts with config_.
evaluator_ids: CreateEvaluationRequestEvaluatorIds
ID of the dataset to use in this evaluation. Starts with evts_.
String ID of project. Starts with pr_.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization. Ensure you provide an API key for the provider for the model config you are evaluating, or have one saved to your organization.
Whether the log generations for this evaluation should be performed by Humanloop. If False, the log generations should be submitted by the user via the API.
Name of the Evaluation to help identify it.
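To make the hl_generated=False flow above concrete, here is a hedged sketch of submitting your own generations to a run. All IDs are placeholders, it assumes the create response exposes an .id attribute, and the full evaluations.log signature is documented further below.

```python
# Sketch: evaluation run where your own system supplies the generations
evaluation = humanloop.evaluations.create(
    project_id="pr_...",          # placeholder project ID
    config_id="config_...",       # placeholder config ID
    evaluator_ids=["evfn_..."],   # placeholder evaluator ID
    dataset_id="evts_...",        # placeholder dataset ID
    hl_generated=False,           # generations will be submitted via the API
)
# Later, log a generation for each datapoint in the dataset:
humanloop.evaluations.log(
    evaluation_id=evaluation.id,          # assumes the Pydantic response has .id
    datapoint_id="datapoint_id_example",  # placeholder datapoint ID
    log={"output": "..."},
)
```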
/projects/{project_id}/evaluations
post
🔝 Back to Table of Contents
Get evaluation by ID.
get_response = humanloop.evaluations.get(
id="id_example",
evaluator_aggregates=True,
evaluatee_id="string_example",
)
String ID of evaluation run. Starts with ev_.
Whether to include evaluator aggregates in the response.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
/evaluations/{id}
get
🔝 Back to Table of Contents
Get the evaluations associated with a project.
Sorting and filtering are supported through query params for categorical columns and the created_at timestamp.
Sorting is supported for the dataset, config, status and evaluator-{evaluator_id} columns. Specify sorting with the sort query param, with values {column}.{ordering}. E.g. ?sort=dataset.asc&sort=status.desc will yield a multi-column sort. First by dataset, then by status.
Filtering is supported for the id, dataset, config and status columns. Specify filtering with the id_filter, dataset_filter, config_filter and status_filter query params. E.g. ?dataset_filter=my_dataset&dataset_filter=my_other_dataset&status_filter=running will only show rows where the dataset is "my_dataset" or "my_other_dataset", and where the status is "running".
An additional date range filter is supported for the created_at column. Use the start_date and end_date query parameters to configure this.
list_response = humanloop.evaluations.list(
project_id="project_id_example",
id=["string_example"],
start_date="1970-01-01",
end_date="1970-01-01",
size=50,
page=0,
evaluatee_id="string_example",
)
String ID of project. Starts with pr_.
A list of evaluation run ids to filter on. Starts with ev_.
Only return evaluations created after this date.
Only return evaluations created before this date.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
PaginatedDataEvaluationResponse
/evaluations
get
🔝 Back to Table of Contents
Get all the evaluations associated with your project.
Deprecated: This is a legacy unpaginated endpoint. Use /evaluations instead, with appropriate sorting, filtering and pagination options.
list_all_for_project_response = humanloop.evaluations.list_all_for_project(
project_id="project_id_example",
evaluatee_id="string_example",
evaluator_aggregates=True,
)
String ID of project. Starts with pr_.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
Whether to include evaluator aggregates in the response.
EvaluationsGetForProjectResponse
/projects/{project_id}/evaluations
get
🔝 Back to Table of Contents
Get testcases by evaluation ID.
list_datapoints_response = humanloop.evaluations.list_datapoints(
id="id_example",
page=1,
size=10,
evaluatee_id="string_example",
)
String ID of evaluation. Starts with ev_.
Page to fetch. Starts from 1.
Number of evaluation results to retrieve.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
PaginatedDataEvaluationDatapointSnapshotResponse
/evaluations/{id}/datapoints
get
🔝 Back to Table of Contents
Log an external generation to an evaluation run for a datapoint.
The run must have status 'running'.
log_response = humanloop.evaluations.log(
datapoint_id="string_example",
log={
"save": True,
},
evaluation_id="evaluation_id_example",
evaluatee_id="string_example",
)
The datapoint for which a log was generated. Must be one of the datapoints in the dataset being evaluated.
log: LogRequest
The log generated for the datapoint.
ID of the evaluation run. Starts with evrun_.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
/evaluations/{evaluation_id}/log
post
🔝 Back to Table of Contents
Log an evaluation result to an evaluation run.
The run must have status 'running'. One of result or error must be provided.
result_response = humanloop.evaluations.result(
log_id="string_example",
evaluator_id="string_example",
evaluation_id="evaluation_id_example",
result=True,
error="string_example",
evaluatee_id="string_example",
)
The log that was evaluated. Must have as its source_datapoint_id one of the datapoints in the dataset being evaluated.
ID of the evaluator that evaluated the log. Starts with evfn_. Must be one of the evaluator IDs associated with the evaluation run being logged to.
ID of the evaluation run. Starts with evrun_.
The result value of the evaluation.
An error that occurred during evaluation.
String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.
CreateEvaluationResultLogRequest
/evaluations/{evaluation_id}/result
post
🔝 Back to Table of Contents
Update the status of an evaluation run.
Can only be used to update the status of an evaluation run that uses external or human evaluators. The evaluation must currently have status 'running' if switching to 'completed', or it must have status 'completed' if switching back to 'running'.
update_status_response = humanloop.evaluations.update_status(
status="pending",
id="id_example",
)
status: EvaluationStatus
The new status of the evaluation.
String ID of evaluation run. Starts with ev_.
/evaluations/{id}/status
patch
🔝 Back to Table of Contents
Create an evaluator within your organization.
create_response = humanloop.evaluators.create(
description="string_example",
name="a",
arguments_type="target_free",
return_type="boolean",
type="python",
code="string_example",
model_config={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
"prompt_template": "{{question}}",
},
)
The description of the evaluator.
The name of the evaluator.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
type: EvaluatorType
The type of the evaluator.
The code for the evaluator. This code will be executed in a sandboxed environment.
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
/evaluators
post
🔝 Back to Table of Contents
Delete an evaluator within your organization.
humanloop.evaluators.delete(
id="id_example",
)
/evaluators/{id}
delete
🔝 Back to Table of Contents
Get an evaluator within your organization.
get_response = humanloop.evaluators.get(
id="id_example",
)
/evaluators/{id}
get
🔝 Back to Table of Contents
Get all evaluators within your organization.
list_response = humanloop.evaluators.list()
/evaluators
get
🔝 Back to Table of Contents
Update an evaluator within your organization.
update_response = humanloop.evaluators.update(
id="id_example",
description="string_example",
name="string_example",
arguments_type="target_free",
return_type="boolean",
code="string_example",
model_config={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
"prompt_template": "{{question}}",
},
)
The description of the evaluator.
The name of the evaluator.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
The code for the evaluator. This code will be executed in a sandboxed environment.
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
/evaluators/{id}
patch
🔝 Back to Table of Contents
Submit an array of feedback for existing data_ids.
feedback_response = humanloop.feedback(
body=[
{
"type": "string_example",
}
],
type="string_example",
value=True,
data_id="string_example",
user="string_example",
created_at="1970-01-01T00:00:00.00Z",
unset=True,
)
type: Union[FeedbackType, str]
The type of feedback. The default feedback types available are 'rating', 'action', 'issue', 'correction', and 'comment'.
The feedback value to be set. This field should be left blank when unsetting 'rating', 'correction' or 'comment', but is required otherwise.
ID to associate the feedback to a previously logged datapoint.
A unique identifier to who provided the feedback.
User defined timestamp for when the feedback was created.
If true, the value for this feedback is unset.
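As a small illustration of the fields above, a sketch of unsetting a previously submitted rating (the data ID is a placeholder; per the value docs above, value is left blank when unsetting):

```python
# Sketch: remove an earlier 'rating' feedback from a datapoint
humanloop.feedback(
    type="rating",
    data_id="data_...",  # placeholder ID of the previously logged datapoint
    unset=True,          # value is intentionally omitted when unsetting
)
```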
/feedback
post
🔝 Back to Table of Contents
Delete logs by ID.
humanloop.logs.delete(
id=["string_example"],
)
/logs
delete
🔝 Back to Table of Contents
Retrieve a log by log id.
get_response = humanloop.logs.get(
id="id_example",
)
String ID of log to return. Starts with data_.
/logs/{id}
get
🔝 Back to Table of Contents
Retrieve paginated logs from the server.
Sorting and filtering are supported through query params.
Sorting is supported for the source, model, timestamp, and feedback-{output_name} columns. Specify sorting with the sort query param, with values {column}.{ordering}. E.g. ?sort=source.asc&sort=model.desc will yield a multi-column sort. First by source, then by model.
Filtering is supported for the source, model, feedback-{output_name}, and evaluator-{evaluator_external_id} columns. Specify filtering with the source_filter, model_filter, feedback-{output.name}_filter and evaluator-{evaluator_external_id}_filter query params. E.g. ?source_filter=AI&source_filter=user_1234&feedback-explicit_filter=good will only show rows where the source is "AI" or "user_1234", and where the latest feedback for the "explicit" output group is "good".
An additional date range filter is supported for the Timestamp column (i.e. Log.created_at). These are supported through the start_date and end_date query parameters. The date format can be either a date, YYYY-MM-DD (e.g. 2024-01-01), or a datetime, YYYY-MM-DD[T]HH:MM[:SS[.ffffff]][Z or [±]HH[:]MM] (e.g. 2024-01-01T00:00:00Z).
Searching is supported for the model inputs and output. Specify a search term with the search query param. E.g. ?search=hello%20there will cause a case-insensitive search across model inputs and output.
list_response = humanloop.logs.list(
project_id="project_id_example",
search="string_example",
metadata_search="string_example",
version_status="uncommitted",
start_date="1970-01-01",
end_date="1970-01-01",
size=50,
page=0,
)
version_status: VersionStatus
/logs
get
🔝 Back to Table of Contents
Log a datapoint or array of datapoints to your Humanloop project.
log_response = humanloop.log(
body=[
{
"save": True,
}
],
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
reference_id="string_example",
messages=[
{
"role": "user",
}
],
output="string_example",
judgment=True,
config_id="string_example",
config={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
"type": "ModelConfigRequest",
},
environment="string_example",
feedback={
"type": "string_example",
"value": True,
},
created_at="1970-01-01T00:00:00.00Z",
error="string_example",
stdout="string_example",
duration=3.14,
output_message={
"role": "user",
},
prompt_tokens=1,
output_tokens=1,
prompt_cost=3.14,
output_cost=3.14,
provider_request={},
provider_response={},
)
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate to the log. Either this or project must be provided.
ID of the session to associate the datapoint.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
A unique string to reference the datapoint. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a subsequent log request.
The messages passed to the provider chat endpoint.
Generated output from your model for the provided inputs. Can be None if logging an error, or if logging a parent datapoint with the intention to populate it later.
Unique ID of a config to associate to the log.
The model config used for this generation. Required unless config_id is provided.
The environment name used to create the log.
Optional parameter to provide feedback with your logged datapoint.
User defined timestamp for when the log was created.
Error message if the log is an error.
Captured log and debug statements.
Duration of the logged event in seconds.
output_message: ChatMessageWithToolCall
The message returned by the provider.
Number of tokens in the prompt used to generate the output.
Number of tokens in the output generated by the model.
Cost in dollars associated to the tokens in the prompt.
Cost in dollars associated to the tokens in the output.
Raw request sent to provider.
Raw response received from the provider.
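As an illustration of the session fields documented above, here is a minimal sketch that chains two logs into one session using your own session identifier (IDs are placeholders, and it assumes the log response exposes an .id attribute):

```python
# Sketch: group two logs into a session via session_reference_id
parent = humanloop.log(
    project="sdk-example",
    session_reference_id="my-session-42",  # your internal session ID
    inputs={"text": "first step"},
    output="...",
)
humanloop.log(
    project="sdk-example",
    session_reference_id="my-session-42",  # same ID joins the same session
    parent_id=parent.id,                   # assumes the response has .id
    inputs={"text": "second step"},
    output="...",
)
```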
/logs
post
🔝 Back to Table of Contents
Update a logged datapoint in your Humanloop project.
update_response = humanloop.logs.update(
id="id_example",
output="string_example",
error="string_example",
duration=3.14,
)
String ID of logged datapoint to return. Starts with data_.
Generated output from your model for the provided inputs.
Error message if the log is an error.
Duration of the logged event in seconds.
/logs/{id}
patch
🔝 Back to Table of Contents
Update a logged datapoint by its reference ID.
The reference_id query parameter must be provided, and refers to the reference_id of a previously-logged datapoint.
update_by_ref_response = humanloop.logs.update_by_ref(
reference_id="reference_id_example",
output="string_example",
error="string_example",
duration=3.14,
)
A unique string to reference the datapoint. Identifies the logged datapoint created with the same reference_id.
Generated output from your model for the provided inputs.
Error message if the log is an error.
Duration of the logged event in seconds.
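A hedged sketch of the intended pairing with humanloop.log (the reference ID is a placeholder; reference_id on the log call is documented in the humanloop.log section above):

```python
# Sketch: create a log under your own reference ID, then patch it later
humanloop.log(
    project="sdk-example",
    inputs={"text": "..."},
    reference_id="my-internal-id-123",  # placeholder internal ID
)
# Once your system has produced the output, update the same log:
humanloop.logs.update_by_ref(
    reference_id="my-internal-id-123",
    output="...",
    duration=1.5,
)
```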
/logs
patch
🔝 Back to Table of Contents
Deserialize a model config from a .prompt file format.
deserialize_response = humanloop.model_configs.deserialize(
config="string_example",
)
/model-configs/deserialize
post
🔝 Back to Table of Contents
Export a model config to a .prompt file by ID.
export_response = humanloop.model_configs.export(
id="id_example",
)
String ID of the model config. Starts with config_.
/model-configs/{id}/export
post
🔝 Back to Table of Contents
Get a specific model config by ID.
get_response = humanloop.model_configs.get(
id="id_example",
)
String ID of the model config. Starts with config_.
/model-configs/{id}
get
🔝 Back to Table of Contents
Register a model config to a project.
If the project name provided does not exist, a new project will be created automatically.
If the model config is the first to be associated to the project, it will be set as the active model config.
register_response = humanloop.model_configs.register(
model="string_example",
description="string_example",
name="string_example",
provider="openai",
max_tokens=-1,
temperature=1,
top_p=1,
stop="string_example",
presence_penalty=0,
frequency_penalty=0,
other={},
seed=1,
response_format={
"type": "string_example",
},
project="string_example",
project_id="string_example",
prompt_template="string_example",
chat_template=[
{
"role": "user",
}
],
endpoint="complete",
tools=[
{
"id": "id_example",
"source": "organization",
}
],
)
The model instance used. E.g. text-davinci-002.
A description of the model config.
A friendly display name for the model config. If not provided, a name will be generated.
provider: ModelProviders
The company providing the underlying model service.
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
Other parameter values to be passed to the provider call.
If specified, model will make a best effort to sample deterministically, but it is not guaranteed.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
Unique project name. If it does not exist, a new project will be created.
Unique project ID
Prompt template that will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
endpoint: ModelEndpoints
Which of the provider's model endpoints to use. For example Complete or Edit.
/model-configs
post
🔝 Back to Table of Contents
Serialize a model config to a .prompt file format.
serialize_response = humanloop.model_configs.serialize(
body={
"provider": "openai",
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"endpoint": "complete",
},
description="string_example",
name="string_example",
provider="openai",
model="string_example",
max_tokens=-1,
temperature=1,
top_p=1,
stop="string_example",
presence_penalty=0,
frequency_penalty=0,
other={},
seed=1,
response_format={
"type": "string_example",
},
endpoint="complete",
chat_template=[
{
"role": "user",
}
],
tools=[
{
"id": "id_example",
"source": "organization",
}
],
prompt_template="{{question}}",
)
A description of the model config.
A friendly display name for the model config. If not provided, a name will be generated.
provider: ModelProviders
The company providing the underlying model service.
The model instance used. E.g. text-davinci-002.
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
Other parameter values to be passed to the provider call.
If specified, model will make a best effort to sample deterministically, but it is not guaranteed.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
endpoint: ModelEndpoints
The provider model endpoint used.
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. Input variables within the template should be specified with syntax: {{INPUT_NAME}}.
tools: ModelConfigChatRequestTools
Prompt template that will take your specified inputs to form your final request to the model. Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
/model-configs/serialize
post
🔝 Back to Table of Contents
Create a new project.
create_response = humanloop.projects.create(
name="string_example",
directory_id="string_example",
)
Unique project name.
ID of directory to assign project to. Starts with dir_. If not provided, the project will be created in the root directory.
/projects
post
🔝 Back to Table of Contents
Create Feedback Type
create_feedback_type_response = humanloop.projects.create_feedback_type(
type="string_example",
_class="select",
id="id_example",
values=[
{
"value": "value_example",
"sentiment": "positive",
}
],
)
The type of feedback to update.
_class: FeedbackClass
The data type associated to this feedback type; whether it is a 'text'/'select'/'multi_select'.
String ID of project. Starts with pr_.
The feedback values to be available. This field should only be populated when updating a 'select' or 'multi_select' feedback class.
/projects/{id}/feedback-types
post
🔝 Back to Table of Contents
Remove the project's active config, if set.
This has no effect if the project does not have an active model config set.
deactivate_config_response = humanloop.projects.deactivate_config(
id="id_example",
environment="string_example",
)
String ID of project. Starts with pr_.
Name for the environment. E.g. 'production'. If not provided, will delete the active config for the default environment.
/projects/{id}/active-config
delete
🔝 Back to Table of Contents
Delete a specific file.
humanloop.projects.delete(
id="id_example",
)
String ID of project. Starts with pr_.
/projects/{id}
delete
🔝 Back to Table of Contents
Remove the version deployed to the environment.
This has no effect if the project does not have an active version set.
delete_deployed_config_response = humanloop.projects.delete_deployed_config(
project_id="project_id_example",
environment_id="environment_id_example",
)
/projects/{project_id}/deployed-config/{environment_id}
delete
🔝 Back to Table of Contents
Deploy a model config to an environment.
If the environment already has a model config deployed, it will be replaced.
deploy_config_response = humanloop.projects.deploy_config(
config_id="string_example",
project_id="project_id_example",
environments=[
{
"id": "id_example",
}
],
)
Model config unique identifier generated by Humanloop.
List of environments to associate with the model config.
EnvironmentProjectConfigRequest
ProjectsDeployConfigToEnvironmentsResponse
/projects/{project_id}/deploy-config
patch
🔝 Back to Table of Contents
Export all logged datapoints associated to your project.
Results are paginated and the datapoints are sorted by created_at in descending order.
export_response = humanloop.projects.export(
id="id_example",
page=0,
size=10,
)
String ID of project. Starts with pr_.
Page offset for pagination.
Page size for pagination. Number of logs to export.
/projects/{id}/export
post
🔝 Back to Table of Contents
Get a specific project.
get_response = humanloop.projects.get(
id="id_example",
)
String ID of project. Starts with pr_.
/projects/{id}
get
🔝 Back to Table of Contents
Retrieves a config to use to execute your model.
A config will be selected based on the project's active config settings.
get_active_config_response = humanloop.projects.get_active_config(
id="id_example",
environment="string_example",
)
String ID of project. Starts with pr_.
Name for the environment. E.g. 'production'. If not provided, will return the active config for the default environment.
/projects/{id}/active-config
get
🔝 Back to Table of Contents
Get a paginated list of files.
list_response = humanloop.projects.list(
page=0,
size=10,
filter="string_example",
user_filter="string_example",
sort_by="created_at",
order="asc",
)
Page offset for pagination.
Page size for pagination. Number of projects to fetch.
Case-insensitive filter for project name.
Case-insensitive filter for users in the project. This filter matches against both email address and name of users.
sort_by: ProjectSortBy
Field to sort projects by
order: SortOrder
Direction to sort by.
/projects
get
🔝 Back to Table of Contents
Get an array of versions associated to your file.
list_configs_response = humanloop.projects.list_configs(
id="id_example",
evaluation_aggregates=True,
)
String ID of project. Starts with pr_.
/projects/{id}/configs
get
🔝 Back to Table of Contents
Get an array of environments with the deployed configs associated to your project.
list_deployed_configs_response = humanloop.projects.list_deployed_configs(
id="id_example",
)
String ID of project. Starts with pr_.
ProjectsGetDeployedConfigsResponse
/projects/{id}/deployed-configs
get
🔝 Back to Table of Contents
Update a specific project.
Set the project's active model config by passing active_config_id. This will be set for the Default environment unless a list of environments is also passed in, specifically detailing which environments to assign the active config to.
update_response = humanloop.projects.update(
id="id_example",
name="string_example",
active_config_id="string_example",
directory_id="string_example",
)
String ID of project. Starts with pr_.
The new unique project name. Caution, if you are using the project name as the unique identifier in your API calls, changing the name will break the calls.
ID for a config to set as the project's active deployment. Starts with 'config_'.
ID of directory to assign project to. Starts with dir_.
/projects/{id}
patch
🔝 Back to Table of Contents
Update feedback types.
WARNING: This endpoint has been decommissioned and no longer works. Please use the v5 Human Evaluators API instead.
update_feedback_types_response = humanloop.projects.update_feedback_types(
id="id_example",
)
String ID of project. Starts with pr_.
/projects/{id}/feedback-types
patch
🔝 Back to Table of Contents
Create a new session.
Returns a session ID that can be used to log datapoints to the session.
create_response = humanloop.sessions.create()
/sessions
post
🔝 Back to Table of Contents
Get a session by ID.
get_response = humanloop.sessions.get(
id="id_example",
)
String ID of session to return. Starts with sesh_.
/sessions/{id}
get
🔝 Back to Table of Contents
Get a page of sessions.
list_response = humanloop.sessions.list(
project_id="project_id_example",
page=1,
size=10,
)
String ID of project to return sessions for. Sessions that contain any datapoints associated to this project will be returned. Starts with pr_.
Page to fetch. Starts from 1.
Number of sessions to retrieve.
/sessions
get
🔝 Back to Table of Contents
This Python package is automatically generated by Konfig