feat: comparison docs #661

Merged
merged 25 commits on Feb 4, 2025
Commits (25)
6a262dc
docs: single experiment view
madams0013 Feb 4, 2025
2c38705
update
madams0013 Feb 4, 2025
1891f78
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
836798e
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
21af3c1
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
e61991d
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
19ea250
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
9c6bc7a
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
983f790
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
99e24b0
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
bf60d08
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
cda2c53
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
ce192f4
Update docs/evaluation/how_to_guides/analyze_single_experiment.mdx
madams0013 Feb 4, 2025
de40293
updated pics
madams0013 Feb 4, 2025
4be544f
links
madams0013 Feb 4, 2025
d124177
comparison updates
madams0013 Feb 4, 2025
7f597d3
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
c8ec604
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
d27e759
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
0e264b0
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
2ab6106
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
fe8fe73
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
393e240
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
cb3cc92
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
7747a84
Update docs/evaluation/how_to_guides/compare_experiment_results.mdx
madams0013 Feb 4, 2025
67 changes: 67 additions & 0 deletions docs/evaluation/how_to_guides/analyze_single_experiment.mdx
@@ -0,0 +1,67 @@
---
sidebar_position: 1
---

# Analyze a single experiment
After running an experiment, you can use LangSmith's experiment view to analyze the results and draw insights about how your experiment performed.

This guide walks you through viewing the results of an experiment and highlights the features available in the experiment view.
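
For context, here is a minimal sketch of producing an experiment you could then analyze in this view, assuming a recent LangSmith Python SDK. The dataset name, the `question`/`output` keys, the target function, and the evaluator are hypothetical placeholders; substitute your own application and dataset.

```python
from langsmith import Client

client = Client()

def target(inputs: dict) -> dict:
    # Placeholder application; replace with your chain, agent, or model call.
    return {"output": inputs["question"].upper()}

def correctness(run, example) -> dict:
    # Toy evaluator: exact match against the reference output.
    # The returned "key" becomes a feedback column in the experiment view.
    return {
        "key": "correctness",
        "score": int(run.outputs["output"] == example.outputs["output"]),
    }

results = client.evaluate(
    target,
    data="example-dataset",            # hypothetical dataset name
    evaluators=[correctness],
    experiment_prefix="my-experiment",
)
```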

## Open the experiment view
To open the experiment view, select the relevant Dataset from the Dataset & Experiments page and then select the experiment you want to view.

![Open experiment view](./static/select_experiment.png)

## View experiment results
The results table displays your experiment results, including the input, output, and reference output for each [example](/evaluation/concepts#examples) in the dataset. It also shows each configured feedback key in a separate column alongside its corresponding feedback score.

Out-of-the-box metrics (latency, status, cost, and token count) are also displayed in individual columns.

In the columns dropdown, you can choose which columns to hide and which to show.

![Experiment view](./static/experiment_view.png)

## Heatmap view
The experiment view defaults to a heatmap view, where feedback scores for each run are highlighted in color.
Red indicates a lower score, while green indicates a higher score.
The heatmap visualization makes it easy to identify patterns, spot outliers, and understand score distributions across your dataset at a glance.

![Heatmap view](./static/heatmap.png)

## Sort and filter
To sort or filter feedback scores, you can use the actions in the column headers.

![Sort and filter](./static/sort_filter.png)

## Table views
Depending on which view is most useful for your analysis, you can change the formatting of the table by toggling between a compact view, a full view, and a diff view.
- The `Compact` view shows each run as a one-line row, making it easy to compare scores at a glance.
- The `Full` view shows the full output for each run, so you can dig into the details of individual runs.
- The `Diff` view shows the text difference between the reference output and the output for each run.

![Diff view](./static/diff_mode.png)

## View the traces
Hover over any of the output cells, and click on the trace icon to view the trace for that run. This will open up a trace in the side panel.

To view the entire tracing project, click on the "View Project" button in the top right of the header.

![View trace](./static/view_trace.png)

## View evaluator runs
For evaluator scores, you can view the source run by hovering over the evaluator score cell and clicking on the arrow icon. This opens the evaluator's trace in the side panel. If you're running an LLM-as-a-judge evaluator, you can view the prompt used by the evaluator in this run.
If your experiment has [repetitions](/evaluation/concepts#repetitions), you can click on the aggregate average score to find links to all of the individual runs.

![View evaluator runs](./static/evaluator_run.png)
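
If you define a custom LLM-as-a-judge evaluator, the prompt shown in the evaluator's run is whatever your evaluator sends to the model. Below is a minimal sketch, assuming the `openai` SDK and the experiment setup from the example at the top of this guide; the model name and grading prompt are illustrative.

```python
from openai import OpenAI

openai_client = OpenAI()

def concision(run, example) -> dict:
    # LLM-as-a-judge: this prompt is what you'd see when opening the evaluator's run.
    prompt = (
        "Answer Y if the following response is concise, otherwise answer N.\n\n"
        f"Response: {run.outputs['output']}"
    )
    judgment = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    verdict = judgment.choices[0].message.content.strip()
    return {"key": "concision", "score": int(verdict.startswith("Y"))}
```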

## Repetitions
If you've run your experiment with [repetitions](/evaluation/concepts#repetitions), the output column will show arrows that let you page through the outputs in the table. To view each run from a repetition, hover over the output cell and click to open the expanded view.

When you run an experiment with repetitions, LangSmith displays the average for each feedback score in the table. Click on the feedback score to view the feedback scores from individual runs, or to view the standard deviation across repetitions.
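
Repetitions are configured when the experiment is created. As a sketch, reusing the target and evaluator from the example at the top of this guide, the Python SDK's `evaluate` accepts a `num_repetitions` parameter (shown here as an assumption about your SDK version):

```python
results = client.evaluate(
    target,
    data="example-dataset",                     # hypothetical dataset name
    evaluators=[correctness],
    num_repetitions=3,                          # run each example 3 times
    experiment_prefix="my-experiment-with-reps",
)
# The experiment view then shows the average feedback score per example,
# with links to the individual runs behind each aggregate.
```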

![Repetitions](./static/repetitions.png)

## Compare to another experiment
In the top right of the experiment view, you can select another experiment to compare to. This will open up a comparison view, where you can see how the two experiments compare.
To learn more about the comparison view, see [how to compare experiment results](./compare_experiment_results).

![Compare](./static/compare_to_another.png)
51 changes: 21 additions & 30 deletions docs/evaluation/how_to_guides/compare_experiment_results.mdx
@@ -8,21 +8,23 @@ Oftentimes, when you are iterating on your LLM application (such as changing the

LangSmith supports a powerful comparison view that lets you hone in on key differences, regressions, and improvements between different experiments.
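
A common way to end up with comparable experiments is to run the same dataset and evaluators against two variants of your application, such as two model configurations. Here is a minimal sketch with the LangSmith Python SDK; the dataset name, model names, and target function are hypothetical placeholders.

```python
from langsmith import Client

client = Client()

def make_target(model_name: str):
    def target(inputs: dict) -> dict:
        # Placeholder application, parameterized by model; replace with your real chain.
        return {"output": f"[{model_name}] {inputs['question']}"}
    return target

# Two experiments over the same dataset; select both in the Experiments tab to compare.
for model_name in ["gpt-4o-mini", "gpt-4o"]:
    client.evaluate(
        make_target(model_name),
        data="example-dataset",              # hypothetical dataset name
        evaluators=[],                       # add your evaluators here
        experiment_prefix=f"compare-{model_name}",
        metadata={"model": model_name},      # metadata can also label summary charts
    )
```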

![](./static/regression_test.gif)
![](./static/compare.gif)

## Open the comparison view

To open the comparison view, select two or more experiments from the "Experiments" tab from a given dataset page. Then, click on the "Compare" button at the bottom of the page.
To open the experiment comparison view, navigate to the **Dataset & Experiments** page, select the relevant Dataset, select two or more experiments on the Experiments tab, and click **Compare**.

![](./static/open_comparison_view.png)
![](./static/compare_select.png)

## Toggle different views
## Adjust the table display

You can toggle between different views by clicking on the "Display" dropdown at the top right of the page. You can toggle different views to be displayed.
You can toggle between different views by clicking "Full" or "Compact" at the top of the page.

Toggling Full Text will show the full text of the input, output, and reference output for each run. If the reference output is too long to display in the table, you can click expand to view the full content.

![](./static/toggle_views.png)
You can also show or hide individual feedback keys or metrics in the display settings dropdown to isolate the information you want to see.

![](./static/toggle_views.gif)

## View regressions and improvements

@@ -37,50 +39,39 @@ Click on the regressions or improvements buttons on the top of each column to fi

![Regressions Filter](./static/filter_to_regressions.png)

## Update baseline experiment

In order to track regressions, you need a baseline experiment against which to compare. This will be automatically assigned as the first experiment in your comparison, but you can
change it from the dropdown at the top of the page.
## Update baseline experiment and metric

![Baseline](./static/select_baseline.png)
In order to track regressions, you need to:
1. Select a baseline experiment against which to compare and a metric to measure. By default, the newest experiment is selected as the baseline.
2. Select the feedback key (evaluation metric) you want to focus on. One will be assigned by default, but you can adjust it as needed.
3. Configure whether a higher score is better for the selected feedback key. This preference will be stored.

## Select feedback key

You will also want to select the feedback key (evaluation metric) on which you would like focus on. This can be selected via another dropdown at the top. Again, one will be assigned by
default, but you can adjust as needed.

![Feedback](./static/select_feedback.png)
![Baseline](./static/select_baseline.png)

## Open a trace

If tracing is enabled for the evaluation run, you can click on the trace icon in the hover state of any experiment cell to open the trace view for that run. This will open up a trace in the side panel.
If the example you're evaluating is from an ingested [run](/observability/concepts#runs), you can hover over the output cell and click on the trace icon to open the trace for that run in the side panel.

![](./static/open_trace_comparison.png)
![](./static/open_source_trace.png)

## Expand detailed view

Hover over any cell and click on the expand icon to open a detailed view of all experiment results for that particular example input, along with their feedback keys and scores.

![](./static/expanded_view.png)

## Update display settings
## View summary charts

You can adjust the display settings for comparison view by clicking on "Display" in the top right corner.
You can also view summary charts by clicking on the "Charts" tab at the top of the page.

Here, you'll be able to toggle feedback, metrics, summary charts, and expand full text.

![](./static/update_display.png)
![](./static/charts_tab.png)

## Use experiment metadata as chart labels

With the summary charts enabled, you can configure the x-axis labels based on [experiment metadata](./filter_experiments_ui#background-add-metadata-to-your-experiments). First, click the three dots in the top right of the charts (note that you will only see them if your experiments have metadata attached).

![](./static/three_dots_charts.png)

Next, select a metadata key - note that this key must contain string values in order to render in the charts.

![](./static/select_metadata_key.png)
You can configure the x-axis labels for the charts based on [experiment metadata](./filter_experiments_ui#background-add-metadata-to-your-experiments).

You will now see your metadata in the x-axis of the charts:
Select a metadata key to change the x-axis labels of the charts.
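
Metadata is attached when the experiment is created. As a sketch, reusing the setup from the example earlier in this guide, a string-valued metadata key such as `model` can then be selected as the x-axis label:

```python
client.evaluate(
    make_target("gpt-4o"),
    data="example-dataset",          # hypothetical dataset name
    evaluators=[],
    experiment_prefix="compare-gpt-4o",
    metadata={"model": "gpt-4o"},    # string values can render as x-axis labels in the charts
)
```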

![](./static/metadata_in_charts.png)
1 change: 1 addition & 0 deletions docs/evaluation/how_to_guides/index.md
@@ -71,6 +71,7 @@ Set up evaluators that automatically run for all experiments against a dataset.

Use the UI & API to understand your experiment results.

- [Analyze a single experiment](./how_to_guides/analyze_single_experiment)
- [Compare experiments with the comparison view](./how_to_guides/compare_experiment_results)
- [Filter experiments](./how_to_guides/filter_experiments_ui)
- [View pairwise experiments](./how_to_guides/evaluate_pairwise#view-pairwise-experiments)
Binary file added docs/evaluation/how_to_guides/static/compare.gif
Binary file modified docs/evaluation/how_to_guides/static/expanded_view.png
Binary file added docs/evaluation/how_to_guides/static/heatmap.png
Binary file modified docs/evaluation/how_to_guides/static/metadata_in_charts.png
Binary file not shown.
Binary file not shown.
Binary file modified docs/evaluation/how_to_guides/static/regression_view.png
Binary file modified docs/evaluation/how_to_guides/static/select_baseline.png
Binary file not shown.