Merge pull request #711 from RizaFarheen/main
Doc Updates
RizaFarheen authored Jun 25, 2024
2 parents 0e56ff2 + 0f508e0 commit e2ea872
Showing 9 changed files with 24 additions and 6 deletions.
23 changes: 17 additions & 6 deletions docs/reference-docs/ai-tasks/llm-chat-complete.md
@@ -11,7 +11,7 @@ A system task to complete the chat query. It can be used to instruct the model's
## Definitions

```json
{
"name": "llm_chat_complete",
"taskReferenceName": "llm_chat_complete_ref",
"inputParameters": {
@@ -27,7 +27,11 @@ A system task to complete the chat query. It can be used to instruct the model's
"temperature": 0.1,
"topP": 0.2,
"maxTokens": 4,
"stopWords": "and"
"stopWords": "spam",
"promptVariables": {
"text": "${workflow.input.text}",
"language": "${workflow.input.language}"
}
},
"type": "LLM_CHAT_COMPLETE"
}
@@ -40,11 +44,14 @@ A system task to complete the chat query. It can be used to instruct the model's
| llmProvider | Choose the required LLM provider. You can only choose providers for which you have access to at least one model.<br/><br/>**Note:** If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](https://orkes.io/content/category/integrations/ai-llm). |
| model | Choose from the available language models for the chosen LLM provider. You can only choose models to which you have access.<br/><br/>For example, if your LLM provider is Azure OpenAI and you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
| instructions | Set the ground rules/instructions for the chat so the model responds only to specific queries and does not deviate from the objective.<br/><br/>Under this field, choose the AI prompt created. You can only use prompts to which you have access.<br/><br/>**Note:** If you haven’t created an AI prompt for your language model, refer to this documentation on [how to create AI Prompts in Orkes Conductor and provide access to required groups](https://orkes.io/content/reference-docs/ai-tasks/prompt-template). |
| promptVariables | The instructions/prompts can include **_promptVariables_**, allowing for dynamic input. These variables support multiple data types, including string, number, boolean, null, and object/array. |
| messages | Choose the role and messages to complete the chat query.<p align="center"><img src="/content/img/llm-chat-complete-messages.png" alt="Role and messages in LLM Chat complete task" width="50%" height="auto"></img></p><ul><li>Under ‘Role,’ choose the required role for the chat completion. It can take values such as *user*, *assistant*, *system*, or *human*.<ul><li>The roles “user” and “human” represent the user asking questions or initiating the conversation.</li><li>The roles “assistant” and “system” refer to the model responding to the user queries.</li></ul></li><li>Under “Message”, choose the corresponding input to be provided. It can also be [passed as variables](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor). </li></ul> |
| temperature | A parameter to control the randomness of the model’s output. Higher temperatures, such as 1.0, make the output more random and creative, whereas a lower value makes the output more deterministic and focused.<br/><br/>Example: If you're using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you're providing text inputs and intend to generate content like emails or blogs, it's advisable to use a higher temperature setting. |
| stopWords | Provide the stop words to be omitted during the text generation process.<br/><br/>In LLM, stop words may be filtered out or given less importance during the text generation process to ensure that the generated text is coherent and contextually relevant. |
| stopWords | Provide the stop words to be omitted during the text generation process. It can be a string or an object/array.<br/><br/>In LLM, stop words may be filtered out or given less importance during the text generation process to ensure that the generated text is coherent and contextually relevant. |
| topP | Another parameter to control the randomness of the model’s output. This parameter defines a probability threshold and then chooses tokens whose cumulative probability exceeds this threshold.<br/><br/>For example: Imagine you want to complete the sentence: “She walked into the room and saw a ______.” Now, the top 4 words the LLM model would consider based on the highest probabilities would be:<ul><li>Cat - 35%</li><li>Dog - 25%</li><li>Book - 15%</li><li>Chair - 10%</li></ul>If you set the top-p parameter to 0.70, the AI will consider tokens until their cumulative probability reaches or exceeds 70%. Here's how it works:<ul><li>Adding "Cat" (35%) to the cumulative probability.</li><li>Adding "Dog" (25%) to the cumulative probability, totaling 60%.</li><li>Adding "Book" (15%) to the cumulative probability, now at 75%.</li></ul>At this point, the cumulative probability is 75%, exceeding the set top-p value of 70%. Therefore, the AI will randomly select one of the tokens from the list of "Cat," "Dog," and "Book" to complete the sentence because these tokens collectively account for approximately 75% of the likelihood.<br/><br/>A minimal code sketch of this selection process follows this table. |
| maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token should be approximately 4 characters. |
| maxTokens<br/><br/>(Referred to as **_Token limit_** in the UI) | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately 4 characters. |
| cacheConfig | Enabling this option allows saving the cache output of the task. On enabling, you can provide the following parameters:<ul><li>ttlInSecond - Provide the time to live in seconds. You can also [pass this parameter as a variable](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor).</li><li>key - Provide the cache key, which is a string with parameter substitution based on the task input. You can also [pass this parameter as a variable](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor).</li></ul> |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |
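The top-p walkthrough in the table above can be made concrete with a short sketch. The snippet below is illustrative Python, not Conductor task code; the token names and probabilities are the hypothetical values from the example, and the cutoff logic mirrors how nucleus (top-p) sampling is commonly described rather than any specific provider's implementation.

```python
import random

def top_p_candidates(token_probs, top_p):
    """Return the highest-probability tokens whose cumulative probability
    reaches or exceeds the top_p threshold."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    candidates, cumulative = [], 0.0
    for token, prob in ranked:
        candidates.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break
    return candidates

# Hypothetical next-token probabilities from the walkthrough above.
probs = {"Cat": 0.35, "Dog": 0.25, "Book": 0.15, "Chair": 0.10}

nucleus = top_p_candidates(probs, top_p=0.70)
tokens = [t for t, _ in nucleus]
weights = [p for _, p in nucleus]
chosen = random.choices(tokens, weights=weights, k=1)[0]

print(tokens)  # ['Cat', 'Dog', 'Book'] -- "Chair" falls outside the 70% threshold
print(chosen)  # one of the three surviving tokens, weighted by probability
```

With `topP` set to 0.70, only “Cat”, “Dog”, and “Book” survive the cutoff, and the completion is drawn from those three in proportion to their probabilities.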

## Output Parameters

@@ -81,7 +88,7 @@ The task output displays the completed chat by the LLM.
<TabItem value="JSON" label="JSON">

```json
{
"name": "llm_chat_complete",
"taskReferenceName": "llm_chat_complete_ref",
"inputParameters": {
Expand All @@ -97,7 +104,11 @@ The task output displays the completed chat by the LLM.
"temperature": 0.1,
"topP": 0.2,
"maxTokens": 4,
"stopWords": "and"
"stopWords": "spam",
"promptVariables": {
"text": "${workflow.input.text}",
"language": "${workflow.input.language}"
}
},
"type": "LLM_CHAT_COMPLETE"
}
1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-generate-embeddings.md
@@ -34,6 +34,7 @@ A system task to generate embeddings from the input data provided. Embeddings ar
| llmProvider | Choose the required LLM provider. You can only choose providers for which you have access to at least one model.<br/><br/>**Note**: If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](/content/category/integrations/ai-llm). |
| model | Choose from the available language models for the chosen LLM provider. You can only choose models to which you have access.<br/><br/>For example, if your LLM provider is Azure OpenAI and you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
| text | Provide the text to be converted and stored as a vector. The text can also be [passed as parameters to the workflow](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor).|
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Output Parameters

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-get-embeddings.md
@@ -32,6 +32,7 @@ A system task to get the numerical vector representations of words, phrases, sen
| namespace | Choose from the available namespaces configured within the chosen vector database.<br/><br/>Namespaces are separate, isolated environments within the database to manage and organize vector data effectively.<br/><br/>**Note**: The namespace field is applicable only to Pinecone integration and is not applicable to Weaviate integration. |
| index | Choose the index in your vector database where the indexed text or data was stored.<br/><br/>**Note:** For Weaviate integration, this field refers to the class name, while in Pinecone integration, it denotes the index name itself. |
| embeddings | Choose the embeddings from which the stored data is to be retrieved. It needs to be from the same embedding model that was used to create the other embeddings that are stored in the same index. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Output Parameters

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-index-document.md
@@ -42,6 +42,7 @@ A system task to index the provided document into a vector database that can be
| mediaType | Select the media type of the file to be indexed. Currently, supported media types include:<ul><li>application/pdf</li><li>text/html</li><li>text/plain</li><li>json</li></ul> |
| chunkSize | Specifies how long each segment of the input text should be when it’s divided for processing by the LLM.<br/><br/>For example, if your article contains 2000 words and you specify a chunk size of 500, then the document would be divided into four chunks for processing. |
| chunkOverlap | Specifies the overlap quantity between adjacent chunks.<br/><br/>For example, if the chunk overlap is specified as 100, then the first 100 words of each chunk would overlap with the last 100 words of the previous chunk.<br/><br/>A minimal code sketch illustrating chunk size and overlap follows this table. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |
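To illustrate how chunkSize and chunkOverlap interact, here is a minimal Python sketch, not Conductor code, that splits a word list the way the 2000-word example in the table above describes. The actual chunking performed by the task may differ (for example, it may operate on tokens or characters rather than words).

```python
def chunk_words(words, chunk_size, chunk_overlap):
    """Split a list of words into chunks of `chunk_size` words, where each
    chunk repeats the last `chunk_overlap` words of the previous chunk."""
    assert 0 <= chunk_overlap < chunk_size, "overlap must be smaller than the chunk size"
    chunks, start, step = [], 0, chunk_size - chunk_overlap
    while start < len(words):
        chunks.append(words[start:start + chunk_size])
        start += step
    return chunks

document = ["word"] * 2000  # stand-in for a 2000-word article

print(len(chunk_words(document, chunk_size=500, chunk_overlap=0)))    # 4 chunks, as in the example above
print(len(chunk_words(document, chunk_size=500, chunk_overlap=100)))  # 5 chunks once each chunk re-reads 100 words
```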

## Examples

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-index-text.md
@@ -38,6 +38,7 @@ A system task to index the provided text into a vector space that can be efficie
| embeddingModel | Choose from the available language models for the chosen LLM provider. |
| text | Provide the text to be indexed. |
| docId | Provide the ID of the document where you need to store the indexed text. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Examples

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-search-index.md
@@ -38,6 +38,7 @@ For example, in a recommendation system, a user might issue a query to find prod
| llmProvider | Choose the required LLM provider configured.<br/><br/>**Note:** If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console](/content/category/integrations/ai-llm). |
| model | Choose from the available language models configured for the chosen LLM provider.<br/><br/>For example, if your LLM provider is Azure OpenAI and you’ve configured _text-davinci-003_ as the language model, you can choose it under this field. |
| query | Provide your search query. A query typically refers to a question, statement, or request made in natural language that is used to search, retrieve, or manipulate data stored in a database. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Output Parameters

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-store-embeddings.md
@@ -36,6 +36,7 @@ A system task responsible for storing the generated embeddings produced by the [
| embeddingModelProvider | Choose the required LLM provider for embedding.<br/><br/>**Note**: If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **_Integrations_** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console](https://orkes.io/content/category/integrations/ai-llm). |
| embeddingModel | Choose from the available language models for the chosen LLM provider. |
| Id | Optional field to provide the vector ID. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Examples

1 change: 1 addition & 0 deletions docs/reference-docs/ai-tasks/llm-text-complete.md
@@ -47,6 +47,7 @@ A system task to predict or generate the next phrase or words in a given text ba
| stopWords | Provide the stop words to be omitted during the text generation process.<br/><br/>In LLM, stop words may be filtered out or given less importance during the text generation process to ensure that the generated text is coherent and contextually relevant. |
| topP | Another parameter to control the randomness of the model’s output. This parameter defines a probability threshold and then chooses tokens whose cumulative probability exceeds this threshold.<br/><br/>For example: Imagine you want to complete the sentence: “She walked into the room and saw a ______.” Now, the top 4 words the LLM model would consider based on the highest probabilities would be:<ul><li>Cat - 35%</li><li>Dog - 25%</li><li>Book - 15%</li><li>Chair - 10%</li></ul>If you set the top-p parameter to 0.70, the AI will consider tokens until their cumulative probability reaches or exceeds 70%. Here's how it works:<ul><li>Adding "Cat" (35%) to the cumulative probability.</li><li>Adding "Dog" (25%) to the cumulative probability, totaling 60%.</li><li>Adding "Book" (15%) to the cumulative probability, now at 75%.</li></ul>At this point, the cumulative probability is 75%, exceeding the set top-p value of 70%. Therefore, the AI will randomly select one of the tokens from the list of "Cat," "Dog," and "Book" to complete the sentence because these tokens collectively account for approximately 75% of the likelihood.|
| maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token should be approximately 4 characters. |
| optional | Enabling this option renders the task optional. The workflow continues unaffected by the task's outcome, whether it fails or remains incomplete. |

## Output Parameters

Binary file modified static/img/llm-chat-complete-ui-method.png
