
Commit fe723ea

Merge pull request #204 from raspawar/raspawar/nvidia_notebooks
NVIDIA Marketing Strategy Example Notebook
2 parents 9295194 + 436b0dd commit fe723ea

5 files changed: +163, -5 lines changed

nvidia_models/intro/README.md

Lines changed: 2 additions & 1 deletion
@@ -4,10 +4,11 @@
  This is a simple example using the CrewAI framework with an NVIDIA endpoint and langchain-nvidia-ai-endpoints integration.

  ## Running the Script
- This example uses the Azure OpenAI API to call a model.
+ This example showcases the NVIDIA NIM endpoint integration with CrewAI.

  - **Configure Environment**: Set NVIDIA_API_KEY to the appropriate API key.
    Set MODEL to select the appropriate model.
+   Set NVIDIA_API_URL to select the endpoint (API catalog or a local endpoint).
  - **Install Dependencies**: Run `make install`.
  - **Execute the Script**: Run `python main.py` to see a list of recommended changes to this document.

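The configuration step above boils down to three environment variables. Below is a minimal sketch of setting them from Python before launching the example; the key value and the choice between the hosted catalog and a local NIM are illustrative placeholders, not values taken from this diff:

```python
import os
import subprocess

# Illustrative values -- substitute your own key from build.nvidia.com.
os.environ["NVIDIA_API_KEY"] = "nvapi-..."                            # placeholder
os.environ["MODEL"] = "meta/llama-3.1-8b-instruct"                    # default used by main.py
os.environ["NVIDIA_API_URL"] = "https://integrate.api.nvidia.com/v1"  # or e.g. http://localhost:8000/v1 for a local NIM

# Run the intro example with the environment configured above (inherited by the child process).
subprocess.run(["python", "main.py"], check=True)
```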
nvidia_models/intro/main.py

Lines changed: 2 additions & 1 deletion
@@ -116,7 +116,8 @@ def set_callbacks(self, callbacks: List[Any]):


  model = os.environ.get("MODEL", "meta/llama-3.1-8b-instruct")
- llm = ChatNVIDIA(model=model)
+ api_base = os.environ.get("NVIDIA_API_URL", "https://integrate.api.nvidia.com/v1")
+ llm = ChatNVIDIA(model=model, base_url=api_base)
  default_llm = nvllm(model_str="nvidia_nim/" + model, llm=llm)

  os.environ["NVIDIA_NIM_API_KEY"] = os.environ.get("NVIDIA_API_KEY")

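Outside of CrewAI, the same two environment variables can be smoke-tested directly with `ChatNVIDIA`. A hedged sketch, assuming `langchain-nvidia-ai-endpoints` is installed and `NVIDIA_API_KEY` is already exported:

```python
import os
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Mirror the env-driven configuration from the diff above.
model = os.environ.get("MODEL", "meta/llama-3.1-8b-instruct")
api_base = os.environ.get("NVIDIA_API_URL", "https://integrate.api.nvidia.com/v1")

llm = ChatNVIDIA(model=model, base_url=api_base)
print(llm.invoke("Reply with one short sentence.").content)  # confirms the model and endpoint resolve
```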
nvidia_models/marketing_strategy/README.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ It uses meta/llama-3.1-8b-instruct by default so you should have access to that

  ***Disclaimer:** This will use gpt-4o unless you change it to use a different model, and by doing so it may incur different costs.*

- - **Configure Environment**: Copy `.env.example` and set up the environment variables for [OpenAI](https://platform.openai.com/api-keys) and other tools as needed, like [Serper](serper.dev).
+ - **Configure Environment**: Copy `.env.example` and set up the environment variables for [NVIDIA](https://build.nvidia.com) and other tools as needed, like [Serper](serper.dev).
  - **Install Dependencies**: Run `make install`.
  - **Customize**: Modify `src/marketing_posts/main.py` to add custom inputs for your agents and tasks.
  - **Customize Further**: Check `src/marketing_posts/config/agents.yaml` to update your agents and `src/marketing_posts/config/tasks.yaml` to update your tasks.
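Before kicking off the crew, it can help to confirm the copied `.env` actually provides what the Configure Environment step above expects. A small sketch assuming python-dotenv (already used by the project) and `SERPER_API_KEY` as the conventional variable name for Serper-based tools; check `.env.example` for the names it actually lists:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env copied from .env.example

# The NVIDIA key is required; Serper is only needed if your tools use it.
assert os.getenv("NVIDIA_API_KEY", "").startswith("nvapi-"), "Set NVIDIA_API_KEY in .env"
if not os.getenv("SERPER_API_KEY"):  # assumed variable name, not confirmed by this diff
    print("SERPER_API_KEY not set -- search tools will be unavailable.")
```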
Lines changed: 155 additions & 0 deletions
@@ -0,0 +1,155 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# NVIDIA NIMs\n",
    "\n",
    "The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on \n",
    "the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking, \n",
    "from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
    "accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt container that deploys anywhere with a single \n",
    "command on NVIDIA accelerated infrastructure.\n",
    "\n",
    "NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
    "NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
    "giving enterprises ownership and full control of their IP and AI application.\n",
    "\n",
    "This example goes over how to use LangChain to interact with NVIDIA-supported models via the `ChatNVIDIA` class to implement a Marketing Post CrewAI agent.\n",
    "\n",
    "For more information on accessing the chat models through this API, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet marketing_posts"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Import our dependencies and set up our NVIDIA API key from the API catalog, https://build.nvidia.com, for the model we'll use hosted on the catalog.\n",
    "\n",
    "**To get started:**\n",
    "\n",
    "1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
    "\n",
    "2. Click on your model of choice.\n",
    "\n",
    "3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.\n",
    "\n",
    "4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
    "if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
    "    print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
    "else:\n",
    "    nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
    "    assert nvapi_key.startswith(\n",
    "        \"nvapi-\"\n",
    "    ), f\"{nvapi_key[:5]}... is not a valid key\"\n",
    "    os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# set API endpoint\n",
    "# to call a local model, set NVIDIA_API_URL to the local NIM endpoint\n",
    "os.environ[\"NVIDIA_API_URL\"] = \"http://localhost:8000/v1\"  # for a local NIM container\n",
    "# os.environ[\"NVIDIA_API_URL\"] = \"https://integrate.api.nvidia.com/v1\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set up the model using the environment variable MODEL as below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# set model\n",
    "os.environ[\"MODEL\"] = \"meta/llama-2-7b-chat\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Import the run function and kick off the marketing CrewAI agent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from marketing_posts.main import run"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "run()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
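The notebook's local-endpoint cell points `NVIDIA_API_URL` at `http://localhost:8000/v1`. A hedged sketch of probing that NIM container's OpenAI-compatible `/v1/models` route before running the notebook, assuming the container is already up on the default port:

```python
import requests

base_url = "http://localhost:8000/v1"  # local NIM endpoint assumed by the notebook cell

resp = requests.get(f"{base_url}/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])  # model IDs the container serves
```

If this lists the model you set in `MODEL`, the notebook's `run()` call should reach the local endpoint rather than the hosted catalog.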

nvidia_models/marketing_strategy/src/marketing_posts/crew.py

Lines changed: 3 additions & 2 deletions
@@ -16,10 +16,11 @@
  load_dotenv()

  model = os.getenv("MODEL", "meta/llama-3.1-8b-instruct")
- llm = ChatNVIDIA(model=model)
+ api_base = os.environ.get("NVIDIA_API_URL", "https://integrate.api.nvidia.com/v1")
+ llm = ChatNVIDIA(model=model, base_url=api_base)
  default_llm = nvllm(model_str="nvidia_nim/" + model, llm=llm)

- os.environ["NVIDIA_NIM_API_KEY"] = os.getenv("NVIDIA_API_KEY")
+ os.environ["NVIDIA_API_KEY"] = os.getenv("NVIDIA_API_KEY")


  class MarketStrategy(BaseModel):
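For orientation, a hedged sketch of how the `default_llm` constructed above is typically attached to a CrewAI agent; the `nvllm` wrapper is defined elsewhere in this repository (its import sits outside the hunk shown), and the role, goal, and backstory strings here are placeholders:

```python
from crewai import Agent

# default_llm is the nvllm-wrapped ChatNVIDIA instance built in crew.py above.
lead_market_analyst = Agent(
    role="Lead Market Analyst",                       # placeholder
    goal="Analyse the product and its competitors",   # placeholder
    backstory="Seasoned analyst at a digital marketing agency.",  # placeholder
    llm=default_llm,  # routes this agent's calls through the NVIDIA NIM endpoint
    verbose=True,
)
```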
