From 253025776bd5eab897789f411f593f8da05e3e78 Mon Sep 17 00:00:00 2001
From: Matt Linville
Date: Mon, 9 Feb 2026 15:26:36 -0800
Subject: [PATCH 1/3] Fix redundant links to DSPy Optimizers page

---
 weave/cookbooks/dspy_prompt_optimization.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/weave/cookbooks/dspy_prompt_optimization.mdx b/weave/cookbooks/dspy_prompt_optimization.mdx
index cad2ae69a8..cd115ccb3f 100644
--- a/weave/cookbooks/dspy_prompt_optimization.mdx
+++ b/weave/cookbooks/dspy_prompt_optimization.mdx
@@ -220,7 +220,7 @@ Running the evaluation causal reasoning dataset will cost approximately $0.24 in
 
 ## Optimizing our DSPy Program
 
-Now, that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using a [DSPy teleprompter](https://dspy.ailearn/optimization/optimizers/) that can tune the parameters of a DSPy program to maximize the specified metrics. In this tutorial, we use the [BootstrapFewShot](https://dspy.aiapi/optimizers/BootstrapFewShot/) teleprompter.
+Now that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using the [BootstrapFewShot](https://dspy.ai/api/optimizers/BootstrapFewShot/) teleprompter, which can tune the parameters of a DSPy program to maximize the specified metrics.
 
 ```python lines
 from dspy.teleprompt import BootstrapFewShot

From 07de0bca3e216ffa0ef8614f255bc8d5c10ca767 Mon Sep 17 00:00:00 2001
From: Matt Linville
Date: Mon, 9 Feb 2026 15:30:10 -0800
Subject: [PATCH 2/3] Fix broken link from bad search/replace

---
 weave/cookbooks/dspy_prompt_optimization.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/weave/cookbooks/dspy_prompt_optimization.mdx b/weave/cookbooks/dspy_prompt_optimization.mdx
index cd115ccb3f..8dfdf1871d 100644
--- a/weave/cookbooks/dspy_prompt_optimization.mdx
+++ b/weave/cookbooks/dspy_prompt_optimization.mdx
@@ -117,7 +117,7 @@ dspy_train_examples, dspy_val_examples = get_dataset(metadata)
 
 [DSPy](https://dspy.ai) is a framework that pushes building new LM pipelines away from manipulating free-form strings and closer to programming (composing modular operators to build text transformation graphs) where a compiler automatically generates optimized LM invocation strategies and prompts from a program.
 
-We will use the [`dspy.OpenAI`](https://dspy.ailearn/programming/language_models/#__tabbed_1_1) abstraction to make LLM calls to [GPT3.5 Turbo](https://platform.openai.com/docs/models/gpt-3.5-turbo).
+We will use the [`dspy.OpenAI`](https://dspy.ai/learn/programming/language_models/#__tabbed_1_1) abstraction to make LLM calls to [GPT-3.5 Turbo](https://platform.openai.com/docs/models/gpt-3.5-turbo).
 
 ```python lines
 system_prompt = """
@@ -131,7 +131,7 @@ dspy.settings.configure(lm=llm)
 
 ### Writing the Causal Reasoning Signature
 
-A [signature](https://dspy.ailearn/programming/signatures) is a declarative specification of input/output behavior of a [DSPy module](https://dspy.ailearn/programming/modules) which are task-adaptive components—akin to neural network layers—that abstract any particular text transformation.
+A [signature](https://dspy.ai/learn/programming/signatures) is a declarative specification of input/output behavior of [DSPy modules](https://dspy.ai/learn/programming/modules), which are task-adaptive components—akin to neural network layers—that abstract any particular text transformation.
 
 ```python lines
 from pydantic import BaseModel, Field

From 9e1870621b82e481b08ad38e81d8977d02c34412 Mon Sep 17 00:00:00 2001
From: Matt Linville
Date: Mon, 9 Feb 2026 15:38:18 -0800
Subject: [PATCH 3/3] Fix Weave link

---
 weave/cookbooks/dspy_prompt_optimization.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/weave/cookbooks/dspy_prompt_optimization.mdx b/weave/cookbooks/dspy_prompt_optimization.mdx
index cd115ccb3f..893812a471 100644
--- a/weave/cookbooks/dspy_prompt_optimization.mdx
+++ b/weave/cookbooks/dspy_prompt_optimization.mdx
@@ -20,7 +20,7 @@ This tutorial demonstrates how we can improve the performance of our LLM workflo
 
 We need the following libraries for this tutorial:
 
 - [DSPy](https://dspy.ai) for building the LLM workflow and optimizing it.
-- [Weave](/weave/quickstart) to track our LLM workflow and evaluate our prompting strategies.
+- [Weave](/weave) to track our LLM workflow and evaluate our prompting strategies.
 - [datasets](https://huggingface.co/docs/datasets/index) to access the Big-Bench Hard dataset from HuggingFace Hub.
 
 ```python lines
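
For reference, the broken URLs repaired in the hunks above all share one shape: a search/replace dropped the slash after the `dspy.ai` domain, producing `https://dspy.ailearn/...` and `https://dspy.aiapi/...`. A minimal sketch of a script that would catch any remaining occurrences of this pattern across the docs (the regex and helper name are illustrative, not part of these patches):

```python
import re

# Match "https://dspy.ai" only when the slash before a known path
# segment ("learn/" or "api/") is missing, i.e. the broken form
# "https://dspy.ailearn/..." or "https://dspy.aiapi/...".
BROKEN_DSPY_LINK = re.compile(r"https://dspy\.ai(?=(?:learn|api)/)")

def fix_dspy_links(text: str) -> str:
    """Reinsert the slash that the bad search/replace removed."""
    return BROKEN_DSPY_LINK.sub("https://dspy.ai/", text)

print(fix_dspy_links("[BootstrapFewShot](https://dspy.aiapi/optimizers/BootstrapFewShot/)"))
# -> [BootstrapFewShot](https://dspy.ai/api/optimizers/BootstrapFewShot/)
```

The lookahead keeps the fix idempotent: already-correct links such as `https://dspy.ai/learn/...` and the bare `https://dspy.ai` homepage link are left untouched.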