`.github/steps/2-to-azure.md`
In this step, you will learn how to deploy your AI model to Azure AI Foundry after testing it in the GitHub Models playground.
1. In the separate tab showing the GitHub Models playground, click **Use this model** and select **Language: JavaScript** and **SDK: Azure AI Inference SDK**.
3. The model you selected will be pre-populated in the **Deployment name** field. You can optionally click **Customize** to change the default deployment configuration: deployment type, model version, tokens-per-minute (TPM) rate limit, etc.
## 🧰 AI Foundry VS Code Extension
To continue working with your deployed model in VS Code, you will need to install the AI Foundry VS Code extension.
2. Once installed, select the **AI Foundry** icon in the left sidebar and click **Set Default Project**. Select your project and expand the **Models** section; you should see your deployed model(s) listed there.
3. Click on the model name to open the model details view, where you can see the model's metadata, including the model version, deployment status, and TPM rate limit.
4. Right-click your model and select **Open in Playground**. This opens a tab in VS Code with a chat playground where you can test your deployed model.
5. You can also use the **Compare** feature to evaluate your deployed model against other models manually. Once you are happy with its performance, right-click the model, select **Open Code File**, then:
- Select **SDK**: Azure AI Inference SDK / Azure OpenAI SDK
8. Finally, run `node ai-foundry.js` and observe the output in the terminal. You should see the response from your deployed model.
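For orientation, here is a minimal sketch of what `ai-foundry.js` might contain. It calls the deployment's chat-completions REST endpoint directly with Node 18+'s built-in `fetch`; the environment variable names and the endpoint/URL shape are assumptions, so match them to the values in the code file generated for your deployment.

```javascript
// Sketch of ai-foundry.js: call the deployed model's chat-completions endpoint.
// AZURE_INFERENCE_ENDPOINT / AZURE_INFERENCE_KEY are assumed variable names;
// use the endpoint and key from your own Azure AI Foundry deployment instead.
const endpoint = process.env.AZURE_INFERENCE_ENDPOINT;
const apiKey = process.env.AZURE_INFERENCE_KEY;

const payload = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize what Azure AI Foundry is in one sentence." },
  ],
  max_tokens: 256,
};

async function main() {
  if (!endpoint || !apiKey) {
    console.log("Set AZURE_INFERENCE_ENDPOINT and AZURE_INFERENCE_KEY first.");
    return;
  }
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify(payload),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

main();
```

If the generated file uses the Azure AI Inference SDK instead of raw `fetch`, prefer that version; the request body shape (a `messages` array of role/content pairs) is the same either way.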
## Step 2️⃣: Add your AI model to the chat interface
Rename the `_mockAiCall` function to `_apiCall` and update the `sendMessage` method accordingly.
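As a rough sketch of that change, the renamed method forwards the user's message to your backend instead of returning a canned reply. The class name, route (`/api/chat`), and response field (`reply`) below are illustrative assumptions; match them to your own project's code.

```javascript
// Hypothetical shape of the chat interface after the rename; the route and
// response fields are assumptions to adapt to your backend.
class ChatInterface {
  constructor(apiUrl = "/api/chat") {
    this.apiUrl = apiUrl;
    this.messages = [];
  }

  // Previously _mockAiCall; now posts the message to the backend for a real reply.
  async _apiCall(userMessage) {
    const res = await fetch(this.apiUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: userMessage }),
    });
    const data = await res.json();
    return data.reply;
  }

  async sendMessage(text) {
    this.messages.push({ role: "user", content: text });
    const reply = await this._apiCall(text); // was: this._mockAiCall(text)
    this.messages.push({ role: "assistant", content: reply });
    return reply;
  }
}
```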
With the server running, navigate to `http://localhost:5173` in your browser. You should be able to send messages to the AI model and receive responses.
`.github/steps/4-add-rag.md`
In this step, you will learn how to add RAG (**R**etrieval-**A**ugmented **G**eneration) to your AI app.
To complete this step, you will need to get a sample dataset in any format (e.g., PDF, CSV, JSON) to work with.
As an example, we will use a [sample Contoso Electronics Employee Handbook PDF](https://github.com/Azure-Samples/JS-Journey-to-AI-Foundry/blob/assets/jsai-buildathon-assets/employee_handbook.pdf) file. **You can bring any file of your choice**, but make sure it contains relevant information that you want your AI app to use for RAG. The code provided here will work with any text-based file.
- Create a new folder `data` in the root of your project and move the file into it. To search and read your PDF, you will need to extract its text. You can use any PDF parser library of your choice, but for this example we will use the `pdf-parse` library.
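A minimal sketch of extraction plus chunking is shown below: `pdf-parse` returns the document text, which is then split into overlapping chunks for retrieval. The file path, chunk size, and overlap are illustrative choices, not values required by the project.

```javascript
// Split extracted text into overlapping chunks for retrieval.
// chunkSize/overlap are illustrative defaults, not required values.
function chunkText(text, chunkSize = 800, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Requires `npm install pdf-parse`. The path assumes the layout described above.
async function loadHandbook(path = "./data/employee_handbook.pdf") {
  const { default: pdfParse } = await import("pdf-parse");
  const { readFile } = await import("node:fs/promises");
  const data = await pdfParse(await readFile(path)); // data.text holds the extracted text
  return chunkText(data.text);
}
```

Overlap between chunks helps a query match text that would otherwise be cut in half at a chunk boundary.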
Open your browser to use the app, usually at `http://localhost:5123`.
2. Ask a question related to the employee handbook, such as _"What is our company's mission statement?"_
- The expected outcome is that the AI will respond with an answer based on the content of the employee handbook PDF, and the relevant excerpts will be displayed below the response.
3. Now ask a question not covered in the employee handbook, such as _"What's the company's stock price?"_
- The expected outcome is that the AI will respond saying it doesn't have the information, and no excerpts will be displayed.
### Test with RAG OFF 🔴
1. **Clear chat and uncheck the "Use Employee Handbook" checkbox**.
2. Ask a question related to the employee handbook, such as _"What is our company's mission statement?"_
- The expected outcome is that the AI will respond with a generic answer (likely asking for more context), and no excerpts will be displayed.
3. Now ask any general question, such as _"What is the capital of Morocco?"_
- The expected outcome is that the AI will respond with the correct answer, and no excerpts will be displayed.
Notice how, with RAG enabled, the AI is strictly limited to the handbook and refuses to answer unrelated questions. With RAG disabled, the AI is more flexible and answers any question to the best of its ability.
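That strictness usually comes from the system prompt sent alongside the retrieved excerpts when RAG is on. The wording below is an illustrative sketch, not the exact prompt used in this project:

```javascript
// Illustrative grounding prompt: instructs the model to answer only from the
// retrieved excerpts. The exact wording in the project may differ.
function buildRagSystemPrompt(excerpts) {
  return [
    "You are a helpful assistant for Contoso Electronics employees.",
    "Answer ONLY using the handbook excerpts below.",
    "If the answer is not in the excerpts, say you don't have that information.",
    "",
    "Excerpts:",
    ...excerpts.map((e, i) => `[${i + 1}] ${e}`),
  ].join("\n");
}
```

With RAG off, the app simply omits this constraint (and the excerpts), which is why the model falls back to answering from its general training.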
To add memory, you will use LangChain's built-in memory modules: `ChatMessageHistory` and `ConversationSummaryMemory`. Conversation memory allows the AI to reference previous exchanges in a session, enabling more context-aware and coherent responses. With LangChain.js, you can build stateful AI app experiences without manually managing chat logs, and you can easily switch between in-memory, Redis, or other storage options.
To test this, open the chat UI in your browser and send a message like _"Hey, you can call me Terry. What should I call you?"_ and then ask _"Quiz time. What's my name?"_. The model should remember your name.
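Conceptually, `ChatMessageHistory` wraps a pattern you could write by hand: a session-scoped message log that is replayed into every new model request. The class below is a plain-JavaScript illustration of that idea, not LangChain's actual implementation:

```javascript
// Conceptual stand-in for ChatMessageHistory: a session-scoped message log.
// Replaying it with each request is what lets the model "remember" your name.
class InMemoryChatHistory {
  constructor() {
    this.messages = [];
  }
  addUserMessage(content) {
    this.messages.push({ role: "user", content });
  }
  addAIMessage(content) {
    this.messages.push({ role: "assistant", content });
  }
  getMessages() {
    return [...this.messages];
  }
}

const history = new InMemoryChatHistory();
history.addUserMessage("Hey, you can call me Terry. What should I call you?");
history.addAIMessage("Nice to meet you, Terry! You can call me Aria.");
history.addUserMessage("Quiz time. What's my name?");
// All three messages accompany the next request, so the model can answer "Terry".
```

`ConversationSummaryMemory` goes one step further: instead of replaying the full log, it keeps a running summary, which keeps long sessions within the model's context window.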
## ✅ Activity: Push your updated code to the repository