.github/steps/2-to-azure.md (12 additions & 9 deletions)
@@ -20,24 +20,27 @@ In this step, you will learn how to deploy your AI model to Azure AI Foundry aft
- You have already completed the previous steps in this project and have opened the **model playground on GitHub Models.**
- ## 🧠 GitHub Models to Azure AI Foundry
+ > [!IMPORTANT]
+ > If you have done the previous quest, ensure you pull your changes from GitHub using `git pull` before continuing with this project to update the project README.

- ‼️ The following steps on GitHub Models should be in A SEPARATE BROWSER TAB. Keep this tab open for reference.
+ ## 🧠 GitHub Models to Azure AI Foundry
+ > [!Note]
+ > The following steps on GitHub Models should be in A SEPARATE BROWSER TAB. Keep this tab open for reference.
1. On the separate tab on the GitHub models playground, click on **Use this model** and select **Language: JavaScript** and **SDK: Azure AI Inference SDK**.
3. The model you selected will be pre-populated in the **Deployment name** field. You can optionally click on **Customize** to change the default configuration on deployment type, model version, tokens per minute (TPM) rate limit etc.
- 
+ 
## 🧰 AI Foundry VS Code Extension
@@ -47,15 +50,15 @@ To continue working with your deployed model in VS Code, you will need to instal
2. Once installed, click on the **AI Foundry** icon in the left sidebar and click on **Set Default Project**. Select your project and expand the **Models** section. You should see your deployed model(s) listed there.
3. Click on the model name to open the model details view, where you can see the model's metadata, including the model version, deployment status, and TPM rate limit.
- 
+ 
4. Right click on your model and select **Open in Playground**. This will open a tab in VS Code with a chat Playground, where you can test your deployed model.
5. You can also use the **Compare** feature to compare the performance of your deployed model with other models for manual evaluation. Once you are happy with the performance of your deployed model, right click on the model and select **Open Code File**, then:
- Select **SDK**: Azure AI Inference SDK/ Azure OpenAI SDK
@@ -89,7 +92,7 @@ To continue working with your deployed model in VS Code, you will need to instal
8. Finally, run `node ai-foundry.js` and observe the output in the terminal. You should see the response from your deployed model.
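For reference, here is a minimal sketch of what `ai-foundry.js` can look like when generated with the Azure AI Inference SDK option. The environment variable names and the prompt are placeholders, not values from the repo; substitute the endpoint, key, and model from your own Azure AI Foundry deployment.

```js
// Minimal ai-foundry.js sketch (ES modules, Node 18+).
// AZURE_INFERENCE_SDK_ENDPOINT and AZURE_INFERENCE_SDK_KEY are placeholder
// names: use the endpoint and key from your deployed model.
import ModelClient, { isUnexpected } from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ModelClient(
  process.env.AZURE_INFERENCE_SDK_ENDPOINT,
  new AzureKeyCredential(process.env.AZURE_INFERENCE_SDK_KEY)
);

// Send a simple chat completion request to the deployed model.
const response = await client.path("/chat/completions").post({
  body: {
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What can I build with Azure AI Foundry?" },
    ],
    max_tokens: 256,
  },
});

if (isUnexpected(response)) {
  throw new Error(response.body.error?.message ?? "The request failed");
}

console.log(response.body.choices[0].message.content);
```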
.github/steps/3-add-chat-ui.md (6 additions & 3 deletions)
@@ -20,6 +20,9 @@ In this step, you will learn how to add a simple chat interface to your AI appli
- The `ai-foundry.js` file being referenced in this step is a script created in the previous step, _moving AI prototype to Azure_. However, if you have not completed the previous step, this shouldn't block you from completing this quest.
+ > [!IMPORTANT]
+ > If you have done the previous quest, ensure you pull your changes from GitHub using `git pull` before continuing with this project to update the project README.
## Step 1️⃣: Initialize a new Vite project
### Introduction to Azure Developer CLI (azd)
@@ -75,7 +78,7 @@ npm run dev
Navigate to `http://localhost:5173` in your browser to see the chat interface.
## Step 2️⃣: Add your AI model to the chat interface
@@ -215,7 +218,7 @@ Rename the `_mockAiCall` function to `_apiCall` and update the `sendMessage` met
With the server running, navigate to `http://localhost:5173` in your browser. You should be able to send messages to the AI model and receive responses.
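For context, a hedged sketch of what the renamed `_apiCall` method might look like in the chat component. The `/chat` route and the `{ message }` / `{ reply }` field names are assumptions for illustration, so match them to the backend you built in this step.

```js
// Hypothetical _apiCall sketch: POST the user's message to the backend and
// return the model's reply. Route and field names are illustrative only.
async _apiCall(message) {
  const res = await fetch("/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) {
    throw new Error(`Chat API returned ${res.status}`);
  }
  const data = await res.json();
  return data.reply;
}
```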
- 
+ 
.github/steps/4-add-rag.md (7 additions & 5 deletions)
@@ -20,10 +20,12 @@ In this step, you will learn how to add RAG (**R**etrieval-**A**ugmented **G**en
- You have completed the previous steps and have a working AI app that can answer questions. If you haven't, please click the **Reset Progress** button above to select the _Add a simple chat interface_ quest.
+ > [!IMPORTANT]
+ > If you have done the previous quest, ensure you pull your changes from GitHub using `git pull` before continuing with this project to update the project README.
To complete this step, you will need to get a sample dataset in any format (e.g., PDF, CSV, JSON) to work with.
- An an example, will use a [sample Contoso Electronics Employee Handbook PDF](https://github.com/Azure-Samples/JS-Journey-to-AI-Foundry/blob/assets/jsai-buildathon-assets/employee_handbook.pdf) file. **You can bring any file of your choice**, but make sure it contains relevant information that you want your AI app to use for RAG. The code provided here will work with any text-based file.
+ As an example, we will use a [sample Contoso Electronics Employee Handbook PDF](https://github.com/Azure-Samples/JS-AI-Build-a-thon/blob/assets/jsai-buildathon-assets/employee_handbook.pdf) file. **You can bring any file of your choice**, but make sure it contains relevant information that you want your AI app to use for RAG. The code provided here will work with any text-based file.
- Create a new folder `data` in the root of your project and move the file into it. To search and read your PDF, you will need to extract the text from it. You can use any PDF parser library of your choice, but for this example, we will use the `pdf-parse` library.
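A minimal sketch of that extraction step with `pdf-parse`; the file name and the naive paragraph chunking are assumptions for illustration rather than the project's exact code.

```js
// Read the PDF from the data folder and extract its raw text with pdf-parse.
import fs from "node:fs/promises";
import pdfParse from "pdf-parse";

const buffer = await fs.readFile("./data/employee_handbook.pdf");
const { text } = await pdfParse(buffer);

// Naive chunking on blank lines; a real implementation may use a
// fixed-size splitter with overlap before embedding or keyword search.
const chunks = text.split(/\n\s*\n/).filter((chunk) => chunk.trim().length > 0);
console.log(`Extracted ${chunks.length} chunks from the handbook.`);
```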
@@ -364,24 +366,24 @@ Open your browser to use the app, usually at `http://localhost:5123`.
2. Ask a question related to the employee handbook, such as _"What is our company's mission statement?"_
- The expected outcome is that the AI will respond with an answer based on the content of the employee handbook PDF, and the relevant excerpts will be displayed below the response.
3. Now ask a question not covered in the employee handbook, such as _"What's the company's stock price?"_
- The expected outcome is that the AI will respond saying it doesn't have the information, and no excerpts will be displayed.
- 
+ 
### Test with RAG OFF 🔴
1. **Clear chat and uncheck the "Use Employee Handbook" checkbox**.
2. Ask a question related to the employee handbook, such as _"What is our company's mission statement?"_
- The expected outcome is that the AI will respond with a generic answer, likely asking for more context, and no excerpts will be displayed.
- 
+ 
3. Now ask any general question, such as _"What is the capital of Morocco?"_
- The expected outcome is that the AI will respond with the correct answer, and no excerpts will be displayed.
- 
+ 
Notice how, with RAG enabled, the AI is strictly limited to the handbook and refuses to answer unrelated questions. With RAG disabled, the AI is more flexible and answers any question to the best of its ability.
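That strict behaviour usually comes from how the prompt is assembled when RAG is on; here is a hypothetical sketch (the function and flag names are illustrative, not taken from the repo).

```js
// Hypothetical sketch of why RAG mode is strict: retrieved handbook chunks
// are injected into the system prompt and the model is told to answer only
// from them. Names (buildMessages, useRag) are illustrative only.
function buildMessages(question, chunks, useRag) {
  const systemPrompt = useRag
    ? "Answer ONLY using the sources below. If the answer is not in the sources, say you don't know.\n\nSources:\n" +
      chunks.join("\n---\n")
    : "You are a helpful assistant.";
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: question },
  ];
}

// With useRag = true the model only sees the handbook excerpts;
// with useRag = false it falls back to its general knowledge.
```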
.github/steps/5-frameworks.md (6 additions & 3 deletions)
@@ -20,6 +20,9 @@ In this step, you will learn you can simplify integrating AI features into your
This step assumes you have already completed the previous steps in this project and have a working web application that uses Azure's LLM endpoints. If you haven't done so, please click the **Reset Progress** button above to select the _Add a simple chat interface_ quest.
+ > [!IMPORTANT]
+ > If you have done the previous quest, ensure you pull your changes from GitHub using `git pull` before continuing with this project to update the project README.
## Step 1️⃣: Add LangChain.js to your project
We'll first install LangChain.js in our project to ensure our backend can communicate with Azure's LLM endpoints using LangChain's abstractions.
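As a rough illustration of that abstraction, here is a minimal sketch assuming the `AzureChatOpenAI` model from the `@langchain/azure-openai` package named in the checklist below, with endpoint, key, and deployment supplied through environment variables (the exact variable names depend on your LangChain.js version).

```js
// Minimal LangChain.js sketch: one chat completion through the Azure model.
import { AzureChatOpenAI } from "@langchain/azure-openai";

// Endpoint, key, and deployment are picked up from environment variables;
// check your LangChain.js version's docs for the exact names.
const model = new AzureChatOpenAI({ temperature: 0.7 });

const reply = await model.invoke("Say hello in one short sentence.");
console.log(reply.content);
```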
@@ -96,7 +99,7 @@ Currently, the chat model does not remember previous messages. For example, if y
Then ask the model _"Quiz time. What's my name?"_. The model will not remember your name because your name is not passed to the model in the prompt.
To add memory, you will use LangChain's built-in memory modules, `ChatMessageHistory` and `ConversationSummaryMemory`. Conversation memory allows the AI to reference previous exchanges in a session, enabling more context-aware and coherent responses, and LangChain.js's built-in modules make this easy to implement. With LangChain, you can build stateful AI app experiences without manually managing chat logs, and you can easily switch between in-memory, Redis, or other storage options.
To test this, open the chat UI in your browser and send a message like _"Hey, you can call me Terry. What should I call you?"_ and then ask _"Quiz time. What's my name?"_. The model should remember your name.
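A minimal sketch of that behaviour with an in-memory `ChatMessageHistory`; the model setup and import paths can differ slightly across LangChain.js versions, so treat this as a sketch rather than the project's exact code.

```js
// Per-session chat memory sketch: keep every turn in a ChatMessageHistory
// and send the whole history with each request so the model can recall
// earlier messages (for example, the user's name).
import { AzureChatOpenAI } from "@langchain/azure-openai";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = new AzureChatOpenAI({ temperature: 0.7 }); // config from env vars
const history = new ChatMessageHistory();

async function chat(userText) {
  await history.addMessage(new HumanMessage(userText));
  const reply = await model.invoke(await history.getMessages());
  await history.addMessage(new AIMessage(reply.content));
  return reply.content;
}

console.log(await chat("Hey, you can call me Terry. What should I call you?"));
console.log(await chat("Quiz time. What's my name?")); // should include "Terry"
```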
- 
+ 
## ✅ Activity: Push your updated code to the repository
@@ -211,7 +214,7 @@ To complete this quest and **AUTOMATICALLY UPDATE** your progress, you MUST push
**Checklist**
- -[ ] Have a `@langchain/azure-oenai` dependency in your package.json in the webapi directory
+ -[ ] Have a `@langchain/azure-openai` dependency in your package.json in the webapi directory
1. In the terminal, run the following commands to add, commit, and push your changes to the repository: