langsmith: clarifications to messages format #655
base: main
Conversation
I think the table of contents could be better (there's too much under "Tracing a Model with a Custom Input/Output Format"). Input/output format should be at a higher level, and there should probably be more explicit sections for tool calls and content blocks.
```yaml
---
title: Log custom LLM traces
sidebarTitle: Log custom LLM traces
title: Guidelines for logging LLM calls
```
Maybe just "Logging LLM calls"
* Each message must contain the keys `"role"` and `"content"`.
  * `"role"`: `"system" | "user" | "assistant" | "tool"`
  * `"content"`: string
This isn't true, right?
The alternative being `type`?
* A dict/object containing a `"messages"` key with a list of messages in the above format.
* LangSmith may use additional parameters in this input dict that match OpenAI's [chat completion endpoint](https://platform.openai.com/docs/guides/text?api-mode=chat) for rendering in the trace view, such as a list of available `tools` for the model to call.

A Python dictionary or TypeScript object containing a `"messages"` key with a list of messages in [LangChain](https://python.langchain.com/docs/concepts/messages), [OpenAI](https://platform.openai.com/docs/api-reference/messages) (chat completions) or [Anthropic](https://docs.anthropic.com/en/api/messages) format. The `messages` key must be at the top level of the input field.
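For illustration, a minimal sketch of such an input dict in the OpenAI chat-completions shape, with a `messages` key at the top level and an optional `tools` list (the tool name `get_weather` and its schema are hypothetical):

```python
# Hypothetical LLM-trace input payload in OpenAI chat-completions format.
inputs = {
    # "messages" must sit at the top level of the input dict.
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    # Optional extras matching OpenAI chat-completion parameters, such as
    # the tools available to the model (hypothetical example tool):
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
```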
I actually think we should somewhat document these here.
Especially the LangChain format.
* Each message must contain the keys `"role"` and `"content"`.
  * `"role"`: `"system" | "user" | "assistant" | "tool"`
  * `"content"`: string
* Messages with the `"assistant"` role may contain `"tool_calls"`. These `"tool_calls"` may be in [OpenAI](https://platform.openai.com/docs/guides/function-calling?api-mode=chat) format or [LangChain's format](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall).
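As a sketch, an assistant message carrying a `tool_calls` entry in the OpenAI format, followed by the matching `"tool"`-role result message (the id `call_abc123` and tool name are hypothetical):

```python
# Hypothetical assistant message with a tool call, OpenAI format:
# each call has an id, a type, and a function with JSON-string arguments.
assistant_message = {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical call id
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"city": "Paris"}',  # JSON-encoded string
            },
        }
    ],
}

# The tool's result is logged as a follow-up message with role "tool",
# linked back to the call via tool_call_id.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_c": 18}',
}
```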
Same, we should consider documenting these here.
LangSmith provides special rendering and processing for LLM traces, including token counting (assuming token counts are not available from the model provider) and token-based cost calculation. In order to make the most of this feature, you must log your LLM traces in a specific format.

<Tip>In order to make the most of LangSmith's LLM trace processing, **we recommend logging your LLM traces in one of the specified formats**.
If you don't log your LLM traces in the suggested formats, you will still be able to log the data to LangSmith, but it may not be processed or rendered in expected ways.</Tip>
A little repetitive; how about "view your traces in LangSmith"?
If you're using a custom input or output format, you can convert it to a LangSmith compatible format using `process_inputs` and `process_outputs` functions on the [`@traceable` decorator](https://docs.smith.langchain.com/reference/python/run_helpers/langsmith.run_helpers.traceable). Note that these parameters are only available in the Python SDK.

`process_inputs` and `process_outputs` accept functions that allow you to transform the inputs and outputs of a specific trace before they are logged to LangSmith. They have access to the trace's inputs and outputs, and can return a new dictionary with the processed data.
nit: add a JS section with camel-cased variants: `processInputs` and `processOutputs`
```python
class LangSmithOutputs(BaseModel):
    """The output format LangSmith expects."""


def process_inputs(inputs: dict) -> dict:
```
Maybe add a real example?
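As a minimal sketch of what such an example could look like, assuming a hypothetical custom format where inputs carry a `"prompt"` string and outputs a `"text"` string (with the SDK installed, these functions would be passed to `@traceable(run_type="llm", process_inputs=..., process_outputs=...)`):

```python
# Hypothetical transforms from a custom format to the messages shape
# LangSmith expects; the "prompt"/"text" keys are illustrative only.

def process_inputs(inputs: dict) -> dict:
    """Convert {"prompt": ...} into a chat-style messages dict."""
    return {"messages": [{"role": "user", "content": inputs.get("prompt", "")}]}


def process_outputs(outputs: dict) -> dict:
    """Wrap a plain-text completion as an assistant message."""
    return {"messages": [{"role": "assistant", "content": outputs.get("text", "")}]}


print(process_inputs({"prompt": "Hi"}))
# {'messages': [{'role': 'user', 'content': 'Hi'}]}
```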
Preview deployment: https://langchain-5e9cc07a-preview-llmcal-1759195522-41a72bb.mintlify.app/langsmith/log-llm-trace