fix: according to Dina's feedback
glaucia86 committed Apr 16, 2024
1 parent 59d1526 commit 1e27cd6
Showing 3 changed files with 21 additions and 23 deletions.
12 changes: 7 additions & 5 deletions docs/tutorial/01-introduction.md
@@ -10,7 +10,9 @@ Here's an example of the application in action:

![ChatGPT with RAG](../../docs/images/demo.gif)

The goal of the tutorial is to provide you with a hands-on experience building a serverless application using Azure Services and LangChain.js. You'll be guided through each step of the process from setting up the environment to deploying the application.
This tutorial will teach you how to build a serverless application using Azure Functions and LangChain.js.

LangChain.js is important because it makes it easy to integrate language models into applications and to develop AI-driven chatbots. You'll learn how to set up the environment and deploy the application.

The frontend of the application is provided so that you can focus on the backend code and technologies.

@@ -44,7 +46,7 @@ If you choose to use a local environment, you will need to install:
## Project Overview

Building AI applications can be complex and time-consuming. By using LangChain.js and Azure serverless technologies, you can greatly simplify the process. This application is a chatbot that uses a set of enterprise documents to generate AI responses to user queries.
Building AI applications can be complex and time-consuming. By using LangChain.js and Azure serverless technologies such as Azure Functions, you can greatly simplify the process. These tools streamline development by managing infrastructure concerns and scaling automatically, allowing you to focus on building the chatbot functionality rather than the underlying system architecture. This application is a chatbot that uses a set of enterprise documents to generate AI responses to user queries.

The code sample includes sample data to make trying the application quick and easy, but feel free to replace it with your own. You'll use a fictitious company called Contoso Real Estate, and the experience allows its customers to ask support questions about the usage of the company's products. The sample data includes a set of documents that describes the company's terms of service, privacy policy, and support guide.

@@ -65,7 +67,7 @@ To understand the architecture of the project, let's break it down into its individual components:

- When a user submits a query through the web app, it is sent via HTTP to an API built using Azure Functions.
- The API uses LangChain.js to process the query.
- The API handles the logic of ingesting enterprise documents and generating responses to the chat queries.
- The API handles the logic of ingesting corporate documents and generating answers to chat queries.
- The code for this functionality will be shown later in the tutorial and is located in the `packages/api` folder.

3. **Database:**
@@ -86,7 +88,7 @@ Let's examine the application flow based on the architecture diagram:
- A user interacts with the chat interface in the web app
- The web app sends the user's query to the Serverless API via HTTP calls
- The Serverless API interacts with Azure OpenAI Service to generate a response, using the data from Azure Cosmos DB for MongoDB vCore.
- If there's a need to reference the original documents, Azure Blob Storage is used to retrieve the PDF documents.
- If there's a need to reference the documents, Azure Blob Storage is used to retrieve the PDF documents.
- The generated response is then sent back to the web app and displayed to the user.

The architecture is based on the RAG (Retrieval-Augmented Generation) architecture. This architecture combines the ability to retrieve information from a database with the ability to generate text from a language model. You'll learn more about RAG later in the tutorial.
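To make the retrieve-then-generate idea behind RAG concrete, here is a minimal, self-contained sketch of the two steps. Everything below (the `Doc` type, the in-memory store, the helper names) is hypothetical illustration, not the project's actual code; the real application delegates retrieval to Azure Cosmos DB for MongoDB vCore and generation to Azure OpenAI via LangChain.js.

```typescript
// Hypothetical, in-memory sketch of the two RAG steps: retrieval and generation.
type Doc = { id: string; text: string; embedding: number[] };

// Cosine similarity, used to rank stored documents against a query embedding.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Retrieval step: pick the k documents most similar to the query.
function retrieve(queryEmbedding: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort(
      (d1, d2) =>
        cosine(queryEmbedding, d2.embedding) - cosine(queryEmbedding, d1.embedding)
    )
    .slice(0, k);
}

// Generation step: in the real app the augmented prompt is sent to the
// language model; here we only assemble it.
function buildPrompt(query: string, context: Doc[]): string {
  const sources = context.map((d) => `- ${d.text}`).join("\n");
  return `Answer using only this context:\n${sources}\n\nQuestion: ${query}`;
}
```

With documents embedded ahead of time (the ingestion step), answering a query amounts to embedding the query, calling `retrieve`, and sending the result of `buildPrompt` to the model.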
@@ -108,7 +110,7 @@ npm install
2. To run only the frontend of the project, execute the following command:

```bash
start:webapp
npm run start:webapp
```

> At this point, don't worry about the other scripts in the `package.json` file at the root of the project. They will be used throughout the tutorial.
28 changes: 12 additions & 16 deletions docs/tutorial/02-setting-up-azure-functions.md
@@ -31,26 +31,22 @@ Azure Functions v4 is the latest version of the Node.js programming model for Azure Functions.

After following the steps above and forking the repository in the `starter` branch, you should do the following steps:

1. Open the `packages` folder and create a new folder called `api`
| Step | Description |
| ---- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | Open the `packages` folder and create a new folder called `api`. |
| 2 | To use the Azure Functions extension in Visual Studio Code, press `Ctrl + Shift + P` (for Windows users) or `Cmd + Shift + P` (for Mac users) and type `Azure Functions: Create New Project...` |
| 3 | Select the `api` folder we just created and click `Select`. |
| 4 | Select the programming language `TypeScript`. |
| 5 | Select a TypeScript programming model `Model V4`. |
| 6 | Select a template for your project's first function `HTTP trigger`. |
| 7 | Enter a name for the function `chat-post`. |

After creating the project, you will see a similar folder and file structure:

2. To use the Azure Functions extension in Visual Studio Code, press `Ctrl + Shift + P` (for Windows users) or `Cmd + Shift + P` (for Mac users) and type `Azure Functions: Create New Project...`

3. Select the `api` folder we just created and click `Select`

4. Select the programming language `TypeScript`

5. Select a TypeScript programming model `Model V4`

6. Select a template for your project's first function `HTTP trigger`

7. Enter a name for the function `chat-post`
![Azure Functions Project Structure](./images/azure-functions-project-structure.png)

> **Note:** When you create a new project, it may take some time to install all the dependencies and scaffold the project structure. Wait until the process is complete.
8. After creating the project, you will see a similar folder and file structure:

![Azure Functions Project Structure](./images/azure-functions-project-structure.png)
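The scaffolded `chat-post` function follows the V4 model's code-first style: a plain async handler that receives the request and returns a response object. The sketch below mirrors that shape with local stand-in types so it is self-contained; the real file imports `app`, `HttpRequest`, and `HttpResponseInit` from the `@azure/functions` package, and the exact template contents may differ.

```typescript
// Stand-in types mirroring (in simplified form) the shapes the real
// scaffold imports from "@azure/functions".
type HttpRequest = { query: Map<string, string> };
type HttpResponseInit = { status?: number; body?: string };

// V4-style handler: an async function from request to response object.
export async function chatPost(request: HttpRequest): Promise<HttpResponseInit> {
  const name = request.query.get("name") ?? "world";
  return { status: 200, body: `Hello, ${name}!` };
}

// In the real project the handler is registered declaratively, roughly:
//   app.http("chat-post", { methods: ["POST"], authLevel: "anonymous", handler: chatPost });
```

Unlike the 3.x model, there is no `function.json` per function; the registration call in code is what binds the handler to an HTTP trigger.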

Note that the project structure differs significantly from the 3.x version of the programming model. Let's take a moment to understand the structure of this new programming model version:

- **src folder:** has all the project logic, including functions and configuration files.
4 changes: 2 additions & 2 deletions docs/tutorial/04-preparing-understanding-language-models.md
@@ -1,12 +1,12 @@
# Preparing and Understanding Language Models: Configuring Azure OpenAI Service and Local Installation of Ollama with Mistral 7B
# Preparing and Understanding Language Models: Configuring Azure OpenAI Service and Installing Ollama with Mistral 7B

In this section, we will cover the language models used in the project. Throughout the tutorial, we will also learn how to generate the environment variables needed to use the Azure services, including the **[Azure OpenAI Service](https://learn.microsoft.com/azure/ai-services/openai/overview)**.

We will also teach you how to use **[Ollama](https://ollama.com/)** with **[Mistral 7B](https://mistral.ai/)**, an open-source language model, if you want to run it locally.

## Models to be used in the project

We will teach you how to uise two different language models: GPT-3.5 Turbo integrated with Azure OpenAI Service and Ollama with Mistral 7B. Let's take a look at eac of them.
We will teach you how to use two different language models: GPT-3.5 Turbo integrated with _Azure OpenAI Service_ (on Azure) and _Ollama with Mistral 7B_ (if you decide to run a model locally). Let's take a look at each of them.

### GPT-3.5 Turbo Integrated with Azure OpenAI Service

