8 changes: 4 additions & 4 deletions weave/cookbooks/dspy_prompt_optimization.mdx
@@ -20,7 +20,7 @@ This tutorial demonstrates how we can improve the performance of our LLM workflo
We need the following libraries for this tutorial:

- [DSPy](https://dspy.ai) for building the LLM workflow and optimizing it.
- - [Weave](/weave/quickstart) to track our LLM workflow and evaluate our prompting strategies.
+ - [Weave](/weave) to track our LLM workflow and evaluate our prompting strategies.
- [datasets](https://huggingface.co/docs/datasets/index) to access the Big-Bench Hard dataset from HuggingFace Hub.

```python lines
@@ -117,7 +117,7 @@ dspy_train_examples, dspy_val_examples = get_dataset(metadata)

[DSPy](https://dspy.ai) is a framework that pushes building new LM pipelines away from manipulating free-form strings and closer to programming (composing modular operators to build text transformation graphs) where a compiler automatically generates optimized LM invocation strategies and prompts from a program.
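
As a toy illustration of this idea (not DSPy's actual API), composing small operators into a text transformation pipeline might look like:

```python
# Toy sketch of "programming, not prompting": small operators composed
# into a text transformation pipeline. In DSPy, a compiler would generate
# the prompt behind each step; here both steps are simple stand-ins.

def retrieve(question: str) -> str:
    # Stand-in for a retrieval operator
    return f"context for: {question}"

def generate(question: str, context: str) -> str:
    # Stand-in for an LM call whose prompt a compiler could optimize
    return f"answer({question!r} | {context!r})"

def pipeline(question: str) -> str:
    # The "graph": retrieve, then generate conditioned on the context
    return generate(question, retrieve(question))

print(pipeline("why did the vase break?"))
```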

- We will use the [`dspy.OpenAI`](https://dspy.ailearn/programming/language_models/#__tabbed_1_1) abstraction to make LLM calls to [GPT3.5 Turbo](https://platform.openai.com/docs/models/gpt-3.5-turbo).
+ We will use the [`dspy.OpenAI`](https://dspy.ai/learn/programming/language_models/#__tabbed_1_1) abstraction to make LLM calls to [GPT-3.5 Turbo](https://platform.openai.com/docs/models/gpt-3.5-turbo).

```python lines
system_prompt = """
@@ -131,7 +131,7 @@ dspy.settings.configure(lm=llm)

### Writing the Causal Reasoning Signature

- A [signature](https://dspy.ailearn/programming/signatures) is a declarative specification of input/output behavior of a [DSPy module](https://dspy.ailearn/programming/modules) which are task-adaptive components—akin to neural network layers—that abstract any particular text transformation.
+ A [signature](https://dspy.ai/learn/programming/signatures) is a declarative specification of the input/output behavior of a [DSPy module](https://dspy.ai/learn/programming/modules). Modules are task-adaptive components, akin to neural network layers, that abstract a particular text transformation.
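
Before the pydantic-based implementation in this file, the core idea can be sketched with a plain dataclass (illustration only; real DSPy signatures subclass `dspy.Signature` with `dspy.InputField`/`dspy.OutputField`):

```python
# Minimal sketch of a signature: a declarative input/output spec that
# says *what* the module transforms, not *how* to prompt for it.
# (Illustration only; this is not the dspy.Signature API.)
from dataclasses import dataclass, fields

@dataclass
class CausalQA:
    """Given a question about cause and effect, produce a short answer."""
    question: str  # input field
    answer: str    # output field, to be produced by the module

print([f.name for f in fields(CausalQA)])
```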

```python lines
from pydantic import BaseModel, Field
@@ -220,7 +220,7 @@ Running the evaluation causal reasoning dataset will cost approximately $0.24 in

## Optimizing our DSPy Program

- Now, that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using a [DSPy teleprompter](https://dspy.ailearn/optimization/optimizers/) that can tune the parameters of a DSPy program to maximize the specified metrics. In this tutorial, we use the [BootstrapFewShot](https://dspy.aiapi/optimizers/BootstrapFewShot/) teleprompter.
+ Now that we have a baseline DSPy program, let us try to improve its performance for causal reasoning using the [BootstrapFewShot](https://dspy.ai/api/optimizers/BootstrapFewShot/) teleprompter, which can tune the parameters of a DSPy program to maximize the specified metrics.

```python lines
from dspy.teleprompt import BootstrapFewShot
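
For intuition, the loop behind a few-shot bootstrapping optimizer can be sketched in plain Python (a conceptual illustration, not the real `BootstrapFewShot` implementation, which also handles tracing, demo selection, and compilation):

```python
# Conceptual sketch of few-shot bootstrapping: run the current program
# over training examples and keep only the traces that pass the metric,
# to be reused later as few-shot demonstrations.
# (Illustration only; dspy.teleprompt.BootstrapFewShot does the real work.)

def bootstrap_few_shot(program, trainset, metric, max_demos=4):
    demos = []
    for example in trainset:
        prediction = program(example["input"])
        if metric(example, prediction):  # keep only passing traces as demos
            demos.append((example["input"], prediction))
        if len(demos) >= max_demos:
            break
    return demos

# Toy usage with a stand-in "program" and an exact-match metric:
toy_program = lambda text: text.upper()
trainset = [{"input": "yes", "label": "YES"}, {"input": "no", "label": "no"}]
metric = lambda example, pred: pred == example["label"]
print(bootstrap_few_shot(toy_program, trainset, metric))
```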