
Commit

Merge pull request #637 from RizaFarheen/main
llm chat complete doc
RizaFarheen authored Mar 25, 2024
2 parents 3eed307 + c9c2942 commit 0495df9
Showing 12 changed files with 115 additions and 9 deletions.
106 changes: 106 additions & 0 deletions docs/reference-docs/ai-tasks/llm-chat-complete.md
@@ -0,0 +1,106 @@
---
sidebar_position: 10
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# LLM Chat Complete

A system task to complete a chat query. It is designed to direct the model’s behavior accurately, preventing any deviation from the objective.

## Definitions

```json
{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template",
    "messages": [
      {
        "role": "user",
        "message": "${workflow.input.text}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 4,
    "stopWords": "and"
  },
  "type": "LLM_CHAT_COMPLETE"
}
```
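To see how this task definition slots into a larger workflow definition, here is a minimal sketch in Python. The workflow name `chat_complete_demo` is hypothetical; registration details depend on your cluster and SDK.

```python
import json

# The LLM Chat Complete task definition from above, as a Python dict.
llm_chat_complete_task = {
    "name": "llm_chat_complete",
    "taskReferenceName": "llm_chat_complete_ref",
    "inputParameters": {
        "llmProvider": "openai",
        "model": "gpt-4",
        "instructions": "your-prompt-template",
        "messages": [{"role": "user", "message": "${workflow.input.text}"}],
        "temperature": 0.1,
        "topP": 0.2,
        "maxTokens": 4,
        "stopWords": "and",
    },
    "type": "LLM_CHAT_COMPLETE",
}

# A hypothetical workflow wrapping the task; "text" feeds the user message
# via the ${workflow.input.text} expression.
workflow_def = {
    "name": "chat_complete_demo",
    "version": 1,
    "inputParameters": ["text"],
    "tasks": [llm_chat_complete_task],
}

print(json.dumps(workflow_def, indent=2))
```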

## Input Parameters

| Parameter | Description |
| --------- | ----------- |
| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.<br/><br/>**Note:** If you haven’t configured your AI/LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](https://orkes.io/content/category/integrations/ai-llm). |
| model | Choose from the available language models for the chosen LLM provider. You can only choose models to which you have access.<br/><br/>For example, if your LLM provider is Azure Open AI and you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
| instructions | Set the ground rules/instructions for the chat so the model responds only to specific queries and does not deviate from the objective.<br/><br/>Under this field, choose the AI prompt you created. You can only use prompts to which you have access.<br/><br/>**Note:** If you haven’t created an AI prompt for your language model, refer to this documentation on [how to create AI Prompts in Orkes Conductor and provide access to required groups](https://orkes.io/content/reference-docs/ai-tasks/prompt-template). |
| messages | Choose the role and messages to complete the chat query.<p align="center"><img src="/content/img/llm-chat-complete-messages.png" alt="Role and messages in LLM Chat complete task" width="50%" height="auto"></img></p><ul><li>Under “Role”, choose the required role for the chat completion. It can take values such as *user*, *assistant*, *system*, or *human*.<ul><li>The roles “user” and “human” represent the user asking questions or initiating the conversation.</li><li>The roles “assistant” and “system” refer to the model responding to user queries.</li></ul></li><li>Under “Message”, choose the corresponding input to be provided. It can also be [passed as variables](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor).</li></ul> |
| temperature | A parameter to control the randomness of the model’s output. A higher temperature, such as 1.0, makes the output more random and creative, whereas a lower value makes the output more deterministic and focused.<br/><br/>Example: If you're using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content like emails or blogs, it's advisable to use a higher temperature setting. |
| stopWords | Provide the stop words to be omitted during the text generation process.<br/><br/>In LLMs, stop words may be filtered out or given less importance during the text generation process to ensure that the generated text is coherent and contextually relevant. |
| topP | Another parameter to control the randomness of the model’s output. This parameter defines a probability threshold and then chooses tokens whose cumulative probability exceeds this threshold.<br/><br/>For example: Imagine you want to complete the sentence: “She walked into the room and saw a ______.” Now, the top 4 words the LLM model would consider based on the highest probabilities would be:<ul><li>Cat - 35%</li><li>Dog - 25% </li><li>Book - 15% </li><li>Chair - 10%</li></ul>If you set the top-p parameter to 0.70, the AI will consider tokens until their cumulative probability reaches or exceeds 70%. Here's how it works:<ul><li>Adding "Cat" (35%) to the cumulative probability.</li><li>Adding "Dog" (25%) to the cumulative probability, totaling 60%.</li><li>Adding "Book" (15%) to the cumulative probability, now at 75%.</li></ul>At this point, the cumulative probability is 75%, exceeding the set top-p value of 70%. Therefore, the AI will randomly select one of the tokens from the list of "Cat," "Dog," and "Book" to complete the sentence because these tokens collectively account for approximately 75% of the likelihood. |
| maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately four characters. |
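The top-p walkthrough above can be sketched in Python. The candidate tokens and probabilities are the illustrative values from the example (not real model output), and the filter loop mirrors the cumulative-probability rule described in the table:

```python
import random

# Illustrative next-token candidates, sorted by descending probability.
candidates = [("Cat", 0.35), ("Dog", 0.25), ("Book", 0.15), ("Chair", 0.10)]

def top_p_filter(tokens, top_p):
    """Keep tokens until their cumulative probability reaches or exceeds top_p."""
    kept, cumulative = [], 0.0
    for token, prob in tokens:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept

kept = top_p_filter(candidates, 0.70)
print([token for token, _ in kept])  # ['Cat', 'Dog', 'Book'] — cumulative 75%

# The model then samples among the kept tokens, with their probabilities
# renormalized over the ~75% of probability mass they cover.
total = sum(prob for _, prob in kept)
choice = random.choices(
    [token for token, _ in kept],
    weights=[prob / total for _, prob in kept],
)[0]
```

With `top_p` set to 0.70, "Chair" is never considered, matching the example: the cumulative probability first reaches the threshold at "Book" (75%).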

## Output Parameters

The task output displays the chat completion generated by the LLM.
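A downstream task can consume this output through a Conductor output-reference expression. The sketch below assumes the completed text is returned under a `result` key; verify the actual output schema on your cluster, and note that the downstream task name is hypothetical:

```python
# Hypothetical downstream task consuming the LLM Chat Complete output.
# The "result" key is an assumption — check the task output on your cluster.
downstream_task = {
    "name": "process_chat_result",
    "taskReferenceName": "process_chat_result_ref",
    "inputParameters": {
        # Conductor expression referencing the LLM task's output.
        "completedText": "${llm_chat_complete_ref.output.result}",
    },
    "type": "SIMPLE",
}
```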

## Examples

<Tabs>
<TabItem value="UI" label="UI" className="paddedContent">

<div className="row">
<div className="col col--4">

<br/>
<br/>

1. Add task type **LLM Chat Complete**.
2. Choose the LLM provider, model & prompt template.
3. Provide the input parameters.

</div>
<div className="col">
<div className="embed-loom-video">

<p><img src="/content/img/llm-chat-complete-ui-method.png" alt="LLM Chat Complete Task" width="500" height="auto"/></p>

</div>
</div>
</div>



</TabItem>
<TabItem value="JSON" label="JSON Example">

```json
{
  "name": "llm_chat_complete",
  "taskReferenceName": "llm_chat_complete_ref",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "gpt-4",
    "instructions": "your-prompt-template",
    "messages": [
      {
        "role": "user",
        "message": "${workflow.input.text}"
      }
    ],
    "temperature": 0.1,
    "topP": 0.2,
    "maxTokens": 4,
    "stopWords": "and"
  },
  "type": "LLM_CHAT_COMPLETE"
}
```
</TabItem>
</Tabs>
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-generate-embeddings.md
@@ -29,7 +29,7 @@ A system task to generate embeddings from the input data provided. Embeddings ar

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.<br/><br/>**Note**:If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](/content/category/integrations/ai-llm).|
| model | Choose from the available language model for the chosen LLM provider. You can only choose models for which you have access.<br/><br/>For example, If your LLM provider is Azure Open AI & you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-get-document.md
@@ -24,7 +24,7 @@ A system task to retrieve the content of the document provided and use it for fu

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| url | Provide the URL of the document to be retrieved.<br/><br/>Check out our documentation on [how to pass parameters to tasks](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor). |
| mediaType | Select the media type of the file to be retrieved. Currently, supported media types include:<ul><li>application/pdf</li><li>text/html</li><li>text/plain</li><li>json</li></ul> |
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-get-embeddings.md
@@ -26,7 +26,7 @@ A system task to get the numerical vector representations of words, phrases, sen

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.<br/><br/>**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.<br/><br/>Namespaces are separate isolated environments within the database to manage and organize vector data effectively.<br/><br/>**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-index-document.md
@@ -31,7 +31,7 @@ A system task to index the provided document into a vector database that can be

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.<br/><br/>**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.<br/><br/>Namespaces are separate isolated environments within the database to manage and organize vector data effectively.<br/><br/>**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-index-text.md
@@ -29,7 +29,7 @@ A system task to index the provided text into a vector space that can be efficie

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.<br/><br/>**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.<br/><br/>Namespaces are separate isolated environments within the database to manage and organize vector data effectively.<br/><br/>**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-search-index.md
@@ -30,7 +30,7 @@ For example, in a recommendation system, a user might issue a query to find prod

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.<br/><br/>**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.<br/><br/>Namespaces are separate isolated environments within the database to manage and organize vector data effectively.<br/><br/>**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-store-embeddings.md
@@ -28,7 +28,7 @@ A system task responsible for storing the generated embeddings produced by the [

## Input Parameters

| Attribute | Decsription |
| Parameter | Description |
| ---------- | ----------- |
| vectorDB | Choose the vector database to which the data is to be stored. <br/><br/>**Note**: If you haven’t configured the vector database on your Orkes console, navigate to the **_Integrations_** tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](https://orkes.io/content/category/integrations/vector-databases). |
| index | Choose the index in your vector database where the text or data is to be stored.<br/><br/>**Note**: For Weaviate integration, this field refers to the class name, while in Pinecone integration, it denotes the index name itself. |
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/llm-text-complete.md
@@ -37,7 +37,7 @@ A system task to predict or generate the next phrase or words in a given text ba

## Input Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.<br/><br/>**Note**:If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](/content/category/integrations/ai-llm).|
| model | Choose from the available language model for the chosen LLM provider. You can only choose models for which you have access.<br/><br/>For example, If your LLM provider is Azure Open AI & you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
2 changes: 1 addition & 1 deletion docs/reference-docs/ai-tasks/prompt-template.md
@@ -10,7 +10,7 @@ The AI prompts can be created in the Orkes Conductor cluster and can be used in

## Parameters

| Attribute | Description |
| Parameter | Description |
| --------- | ----------- |
| Prompt Name | A name for the prompt. |
| Description | A description for the prompt. |
Binary file added static/img/llm-chat-complete-messages.png
Binary file added static/img/llm-chat-complete-ui-method.png
