Commit ac2960f

Merge branch 'master' into add_structured_support
2 parents a3b4467 + 0c782ee commit ac2960f

10 files changed: +618 −80 lines

.github/workflows/scheduled_test.yml

Lines changed: 2 additions & 1 deletion

```diff
@@ -14,7 +14,7 @@ on:
 
 env:
   POETRY_VERSION: "1.8.4"
-  DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
+  DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/deepseek", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
 
 jobs:
   compute-matrix:
@@ -117,6 +117,7 @@ jobs:
       AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
       AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
       AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
+      DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
       FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
       GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
       HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
```
Lines changed: 354 additions & 0 deletions (new file)

```json
{
 "cells": [
  {
   "cell_type": "raw",
   "id": "afaf8039",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: Goodfire\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e49f1e0d",
   "metadata": {},
   "source": [
    "# ChatGoodfire\n",
    "\n",
    "This will help you get started with Goodfire [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoodfire features and configurations, head to the [PyPI project page](https://pypi.org/project/langchain-goodfire/), or go directly to the [Goodfire SDK docs](https://docs.goodfire.ai/sdk-reference/example). All of the Goodfire-specific functionality (e.g. SAE features, variants, etc.) is available via the main `goodfire` package. This integration is a wrapper around the Goodfire SDK.\n",
    "\n",
    "## Overview\n",
    "### Integration details\n",
    "\n",
    "| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
    "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
    "| [ChatGoodfire](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html) | [langchain-goodfire](https://python.langchain.com/api_reference/goodfire/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-goodfire?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-goodfire?style=flat-square&label=%20) |\n",
    "\n",
    "### Model features\n",
    "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
    "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
    "| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
    "\n",
    "## Setup\n",
    "\n",
    "To access Goodfire models you'll need to create a Goodfire account, get an API key, and install the `langchain-goodfire` integration package.\n",
    "\n",
    "### Credentials\n",
    "\n",
    "Head to [Goodfire Settings](https://platform.goodfire.ai/organization/settings/api-keys) to sign up for Goodfire and generate an API key. Once you've done this, set the `GOODFIRE_API_KEY` environment variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "if not os.getenv(\"GOODFIRE_API_KEY\"):\n",
    "    os.environ[\"GOODFIRE_API_KEY\"] = getpass.getpass(\"Enter your Goodfire API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
   "metadata": {},
   "source": [
    "If you want automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
   "metadata": {},
   "outputs": [],
   "source": [
    "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
    "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
   "metadata": {},
   "source": [
    "### Installation\n",
    "\n",
    "The LangChain Goodfire integration lives in the `langchain-goodfire` package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install -qU langchain-goodfire"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a38cde65-254d-4219-a441-068766c0d4b5",
   "metadata": {},
   "source": [
    "## Instantiation\n",
    "\n",
    "Now we can instantiate our model object and generate chat completions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n"
     ]
    }
   ],
   "source": [
    "import goodfire\n",
    "from langchain_goodfire import ChatGoodfire\n",
    "\n",
    "base_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
    "\n",
    "llm = ChatGoodfire(\n",
    "    model=base_variant,\n",
    "    temperature=0,\n",
    "    max_completion_tokens=1000,\n",
    "    seed=42,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b4f3e15",
   "metadata": {},
   "source": [
    "## Invocation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "62e0dbc3",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={}, id='run-8d43cf35-bce8-4827-8935-c64f8fb78cd0-0', usage_metadata={'input_tokens': 51, 'output_tokens': 39, 'total_tokens': 90})"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = [\n",
    "    (\n",
    "        \"system\",\n",
    "        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
    "    ),\n",
    "    (\"human\", \"I love programming.\"),\n",
    "]\n",
    "ai_msg = await llm.ainvoke(messages)\n",
    "ai_msg"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "J'adore la programmation.\n"
     ]
    }
   ],
   "source": [
    "print(ai_msg.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
   "metadata": {},
   "source": [
    "## Chaining\n",
    "\n",
    "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='Ich liebe das Programmieren. How can I help you with programming today?', additional_kwargs={}, response_metadata={}, id='run-03d1a585-8234-46f1-a8df-bf9143fe3309-0', usage_metadata={'input_tokens': 46, 'output_tokens': 46, 'total_tokens': 92})"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "prompt = ChatPromptTemplate(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
    "        ),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "chain = prompt | llm\n",
    "await chain.ainvoke(\n",
    "    {\n",
    "        \"input_language\": \"English\",\n",
    "        \"output_language\": \"German\",\n",
    "        \"input\": \"I love programming.\",\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
   "metadata": {},
   "source": [
    "## Goodfire-specific functionality\n",
    "\n",
    "To use Goodfire-specific functionality such as SAE features and variants, you can use the `goodfire` package directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "3aef9e0a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "FeatureGroup([\n",
       "   0: \"The assistant should adopt the persona of a pirate\",\n",
       "   1: \"The assistant should roleplay as a pirate\",\n",
       "   2: \"The assistant should engage with pirate-themed content or roleplay as a pirate\",\n",
       "   3: \"The assistant should roleplay as a character\",\n",
       "   4: \"The assistant should roleplay as a specific character\",\n",
       "   5: \"The assistant should roleplay as a game character or NPC\",\n",
       "   6: \"The assistant should roleplay as a human character\",\n",
       "   7: \"Requests for the assistant to roleplay or pretend to be something else\",\n",
       "   8: \"Requests for the assistant to roleplay or pretend to be something\",\n",
       "   9: \"The assistant is being assigned a role or persona to roleplay\"\n",
       "])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "client = goodfire.Client(api_key=os.environ[\"GOODFIRE_API_KEY\"])\n",
    "\n",
    "pirate_features = client.features.search(\n",
    "    \"assistant should roleplay as a pirate\", base_variant\n",
    ")\n",
    "pirate_features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "52f03a00",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field! Arrr! Hope that made ye laugh, matey!', additional_kwargs={}, response_metadata={}, id='run-7d8bd30f-7f80-41cb-bdb6-25c29c22a7ce-0', usage_metadata={'input_tokens': 35, 'output_tokens': 60, 'total_tokens': 95})"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pirate_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
    "\n",
    "pirate_variant.set(pirate_features[0], 0.4)\n",
    "pirate_variant.set(pirate_features[1], 0.3)\n",
    "\n",
    "await llm.ainvoke(\"Tell me a joke\", model=pirate_variant)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
   "metadata": {},
   "source": [
    "## API reference\n",
    "\n",
    "For detailed documentation of all ChatGoodfire features and configurations, head to the [API reference](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
```
Lines changed: 14 additions & 0 deletions (new file)

# Goodfire

[Goodfire](https://www.goodfire.ai/) is a research lab focused on AI safety and interpretability.

## Installation and Setup

```bash
pip install langchain-goodfire
```

## Chat models

See details on available chat models [here](/docs/integrations/chat/goodfire).
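The flow the linked page walks through can be condensed into a short script. This is a minimal sketch, assuming `langchain-goodfire` is installed and the `GOODFIRE_API_KEY` environment variable is set; the model name and constructor parameters mirror the integration notebook, and `build_messages` is a hypothetical helper added here for illustration.

```python
# Minimal end-to-end sketch of the Goodfire chat integration.
# Assumes `langchain-goodfire` is installed and GOODFIRE_API_KEY is set.
import asyncio


def build_messages(text: str) -> list:
    """Hypothetical helper: LangChain-style (role, content) pairs."""
    return [
        (
            "system",
            "You are a helpful assistant that translates English to French.",
        ),
        ("human", text),
    ]


async def main() -> None:
    # Imported inside main() so the pure helper above works without the SDK.
    import goodfire
    from langchain_goodfire import ChatGoodfire

    # A variant wraps a base model; SAE feature edits can be applied to it.
    base_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
    llm = ChatGoodfire(model=base_variant, temperature=0, max_completion_tokens=1000)

    reply = await llm.ainvoke(build_messages("I love programming."))
    print(reply.content)
```

Run it with `asyncio.run(main())`. Note that the integration is async-native (per the notebook's feature table), which is why `ainvoke` is used rather than `invoke`.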