
Conversation

tanushree-sharma
Contributor

@tanushree-sharma tanushree-sharma commented Sep 26, 2025

Goals

  • be explicit about the message formats we render well
  • show how to use process_inputs and process_outputs to convert messages before they are logged

Preview deployment: https://langchain-5e9cc07a-preview-llmcal-1759195522-41a72bb.mintlify.app/langsmith/log-llm-trace

@github-actions github-actions bot added the langsmith For docs changes to LangSmith label Sep 26, 2025

Preview ID generated: preview-llmcal-1758848807-b95b6b2


Preview ID generated: preview-llmcal-1759194510-092ba06

@tanushree-sharma tanushree-sharma changed the title from "clarifications to llm call formats" to "langsmith: clarifications to messages format" Sep 30, 2025
@tanushree-sharma tanushree-sharma marked this pull request as ready for review September 30, 2025 01:14

Preview ID generated: preview-llmcal-1759195340-f7db8a9


Preview ID generated: preview-llmcal-1759195522-41a72bb

Contributor

@hwchase17 hwchase17 left a comment


i think the table of contents can be better (too much under "Tracing a Model with a Custom Input/Output Format"). input/output format should be at a higher level. there should probably be more explicit sections for tool calls and content blocks.

---
- title: Log custom LLM traces
- sidebarTitle: Log custom LLM traces
+ title: Guidelines for logging LLM calls
Contributor


Maybe just "Logging LLM calls"


* Each message must contain the key `"role"` and `"content"`.
* `"role"`: `"system" | "user" | "assistant" | "tool"`
* `"content"`: string
Contributor


this isn't true, right?

Contributor


The alternative being `type`?


* A dict/object containing a `"messages"` key with a list of messages in the above format.
* LangSmith may use additional parameters in this input dict that match OpenAI's [chat completion endpoint](https://platform.openai.com/docs/guides/text?api-mode=chat) for rendering in the trace view, such as a list of available `tools` for the model to call.
A Python dictionary or TypeScript object containing a `"messages"` key with a list of messages in [LangChain](https://python.langchain.com/docs/concepts/messages), [OpenAI](https://platform.openai.com/docs/api-reference/messages) (chat completions), or [Anthropic](https://docs.anthropic.com/en/api/messages) format. The `"messages"` key must be at the top level of the input field.
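
As an illustrative sketch, an input dict in this shape, with a `tools` list following OpenAI's chat-completions style (the tool name and schema here are made up), might look like:

```python
inputs = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
```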
Contributor


i actually think we should somewhat document these here

Contributor


esp LangChain format

* Each message must contain the key `"role"` and `"content"`.
* `"role"`: `"system" | "user" | "assistant" | "tool"`
* `"content"`: string
* Messages with the `"assistant"` role may contain `"tool_calls"`. These `"tool_calls"` may be in [OpenAI](https://platform.openai.com/docs/guides/function-calling?api-mode=chat) format or [LangChain's format](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall).
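
For example, an assistant message carrying a tool call in OpenAI's format might look like this (a sketch; the `id` and `arguments` values are made up):

```python
{
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "id": "call_abc123",  # made-up call id
            "type": "function",
            "function": {
                "name": "get_weather",
                # arguments are a JSON-encoded string, per OpenAI's format
                "arguments": '{"city": "Paris"}',
            },
        }
    ],
}
```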
Contributor


same, we should consider documenting these here


LangSmith provides special rendering and processing for LLM traces, including token counting (when token counts are not available from the model provider) and token-based cost calculation. In order to make the most of this feature, you must log your LLM traces in a specific format.
<Tip>To make the most of LangSmith's LLM trace processing, **we recommend logging your LLM traces in one of the specified formats**.
If you don't use one of the suggested formats, you can still send the data to LangSmith, but it may not be processed or rendered as expected.</Tip>
Contributor


A little repetitive, how about "view your traces in LangSmith"?


If you're using a custom input or output format, you can convert it to a LangSmith compatible format using `process_inputs` and `process_outputs` functions on the [`@traceable` decorator](https://docs.smith.langchain.com/reference/python/run_helpers/langsmith.run_helpers.traceable). Note that these parameters are only available in the Python SDK.

`process_inputs` and `process_outputs` accept functions that allow you to transform the inputs and outputs of a specific trace before they are logged to LangSmith. They have access to the trace's inputs and outputs, and can return a new dictionary with the processed data.
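
A minimal sketch of how this might look, assuming a custom payload shape (the `payload`, `history`, and `text` names below are hypothetical, not part of the SDK):

```python
from langsmith import traceable

def convert_inputs(inputs: dict) -> dict:
    # Map our custom payload to the {"messages": [...]} shape LangSmith renders.
    return {"messages": inputs["payload"]["history"]}

def convert_outputs(outputs: dict) -> dict:
    # Wrap the raw model text in a chat-style assistant message.
    return {"message": {"role": "assistant", "content": outputs["text"]}}

@traceable(run_type="llm", process_inputs=convert_inputs, process_outputs=convert_outputs)
def call_model(payload: dict) -> dict:
    ...  # call your model here and return e.g. {"text": "..."}
```

Here `convert_outputs` receives the traced function's return value, so it can reshape whatever `call_model` returns into a message LangSmith can render.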
Contributor


nit: add a JS section with camelCased variants: `processInputs` and `processOutputs`

from pydantic import BaseModel

class LangSmithOutputs(BaseModel):
    """The output format LangSmith expects."""

def process_inputs(inputs: dict) -> dict:
    ...
Contributor


Maybe add a real example?
