
Commit

documentation updated
Sara Fatih authored and Sara Fatih committed Mar 3, 2024
1 parent 6b4305c commit ffecc6a
Showing 32 changed files with 550 additions and 584 deletions.
1,057 changes: 515 additions & 542 deletions package-lock.json

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion package.json
@@ -16,7 +16,8 @@
]
},
"devDependencies": {
"lerna": "^7.4.2"
"lerna": "^7.4.2",
"vite-plugin-dts": "^3.7.3"
},
"volta": {
"node": "18.18.2"
2 changes: 1 addition & 1 deletion websites/docs/pages/concepts/features.md
@@ -15,6 +15,6 @@ Prompt Studio is a fully-managed LLM backend that allows you to deploy your AI r

### Extra Notes

LLMs excel in tasks such as information extraction, text classification, summarization, text generation, etc. The possibilities are endless, and the quality of the results you get from LLMs will be determined by the quality of your instructions. That's why we're soon adding what we call our "Promptly", which is a sort of Grammarly for prompts to help you craft better instructions to the LLM.
LLMs excel in tasks such as information extraction, text classification, summarization, text generation, etc. The possibilities are endless, and the quality of the results you get from LLMs will be determined by the quality of your instructions. That's why we're soon adding prompt-enhancement features to help you craft better instructions for the LLM.

With Prompt Studio, we are fully decoupling prompt engineering and AI development from software development, and we are adding new features quite often to support you in your AI development journey. If you have any feature requests or questions, you can contact us at `support@prompt.studio` or on our [discord](https://discord.gg/3RxwUEk8fW).
6 changes: 3 additions & 3 deletions websites/docs/pages/concepts/file.md
@@ -13,15 +13,15 @@ When you add a file to your instruction, the content of the file is added **in-l
Let's go back to the same instruction that we used earlier to extract concerns from a file containing interviews:

```
Based on the following: /interviews, list the concerns expressed by the interviewees.
Based on the following interviews: /interviews, list the concerns expressed by the interviewees.
```

Once you upload your file, the string `/interviews` will be replaced with the actual content of the file. This will make the context available to the LLM in the instruction.
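To illustrate the in-line substitution described above, here is a minimal Python sketch. The `expand_references` helper and the sample file content are hypothetical, written only to show the idea; they are not part of Prompt Studio.

```python
# Hypothetical sketch of how a /name reference could be expanded in-line.
# expand_references and the sample data are illustrative, not Prompt Studio's
# actual implementation.
def expand_references(instruction: str, files: dict[str, str]) -> str:
    """Replace each /name placeholder with the uploaded file's content."""
    for name, content in files.items():
        instruction = instruction.replace(f"/{name}", content)
    return instruction

prompt = expand_references(
    "Based on the following interviews: /interviews, list the concerns expressed by the interviewees.",
    {"interviews": "Interviewee A: The couch is too small."},
)
print(prompt)
```

The placeholder disappears from the prompt, and the file content takes its place before anything is sent to the LLM.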


::: warning Should I follow a specific format when adding file content to an instruction
::: warning Should I follow a specific format when adding file content to an instruction?

Adding context to a prompt is something that we are planning to assist you in with our upcoming "Promptly", which a sort of "Grammarly" for prompts.
Adding context to a prompt is something that we are planning to assist you in with our upcoming prompt enhancement features.

:::

Binary file removed websites/docs/pages/concepts/images/ai_behavior.png
Binary file modified websites/docs/pages/concepts/images/click_on_preview.png
Binary file removed websites/docs/pages/concepts/images/controls.png
Binary file modified websites/docs/pages/concepts/images/preview_inputs.png
Binary file modified websites/docs/pages/concepts/images/preview_outputs.png
Binary file modified websites/docs/pages/concepts/images/preview_page.png
Binary file modified websites/docs/pages/concepts/images/run_preview.png
Binary file modified websites/docs/pages/concepts/images/select_ai_behavior.png
Binary file modified websites/docs/pages/concepts/images/select_output_type.png
Binary file modified websites/docs/pages/concepts/images/select_table_output.png
18 changes: 6 additions & 12 deletions websites/docs/pages/concepts/instructions.md
@@ -21,25 +21,19 @@ In the first instruction, we're passing a [File](file.md) containing the intervi

```
Based on the following /interviews, list the concerns expressed by the interviewees
Based on the following interviews: /interviews, list the concerns expressed by the interviewees
```

In the second instruction, we want the LLM to suggest solutions based on the concerns extracted from the first instruction. For that, we're referring to the output of the first instruction with `/concerns`. The second instruction looks like this:

```
Based on the following: /concerns, suggest a solution that would make the interviewee happy
Based on the following concerns: /concerns, suggest a solution that would make the interviewee happy
```

This is the flow when you run the second instruction:
1. The first instruction runs, we get the output called `/concerns`. Let's say the output is `the couch is too small`
1. The first instruction runs, we get the output called `/concerns`. Let's say the content of the output is `the couch is too small`
2. The second instruction will run and replace `/concerns` with `the couch is too small`

This is what the second instruction **actually** looks like when it's sent to the LLM:
```
Based on the following: "the couch is too small", suggest a solution that would make the interviewee happy
```
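The chained flow above can be sketched in Python. Here `call_llm` is a stand-in stub that returns canned answers for the demo, not a real Prompt Studio API; the point is that the second instruction only runs after the first, with `/concerns` replaced by the first output.

```python
# Illustrative sketch of chaining two instructions; call_llm is a stub,
# not a real Prompt Studio API.
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns canned answers for the demo.
    return "the couch is too small" if "interviews" in prompt else "offer a larger couch"

# Step 1: the first instruction runs and produces the /concerns output.
concerns = call_llm("Based on the following interviews: <interview text>, list the concerns expressed by the interviewees")

# Step 2: the second instruction replaces /concerns with that output before running.
second = "Based on the following concerns: /concerns, suggest a solution that would make the interviewee happy"
prompt_sent = second.replace("/concerns", concerns)
print(prompt_sent)
```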


## AI behavior of an instruction
Setting a specific AI behavior would tailor the model's output and interaction style to your specific needs or preferences. This customization can affect how the model generates responses, the tone and style of those responses, and the model's focus on certain types of information or modes of interaction.
@@ -64,10 +58,10 @@ You can set the output format of an instruction to either **text** or **table**
<img src="./images/select_output_type.png" style="width: 48%;"/>


### Text output format
### Text format
Text output is ideal for narrative responses, explanations, creative writing, or any scenario where a flowing, continuous form of information is preferred.

### Table output format
### Table format
Table output is invaluable for organizing data, comparisons, statistical information, or any content where structure and quick reference are key. Tables make it easier to digest complex information at a glance, facilitating analysis and decision-making. More on how to set up the table to guide the LLM [here](#how-to-return-output-in-a-table-format).

### Example of text and table formats
@@ -81,7 +75,7 @@ If we go back to the previous instruction where we extracted the concerns expres
#### Step 1: Select table output format
<img src="./images/select_table_output.png" style="width: 48%;"/>

### Step 2: Fill the table in the instruction field
#### Step 2: Fill the table in the instruction field
In the video below, I write the two columns or fields that I want the LLM to fill out. In this case, I want it to extract the interviewees and to write the concerns of every interviewee.

<div style="position: relative; padding-bottom: 53.22916666666667%; height: 0;"><iframe src="https://www.loom.com/embed/db4338f9a2d14b0fabb1efec3b2e206d?sid=2ad6a959-4a2c-4243-af6f-3503cb9b1d2f" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>
6 changes: 3 additions & 3 deletions websites/docs/pages/concepts/preview.md
@@ -10,17 +10,17 @@ To begin the preview, look for the **Preview** button situated on the top right

## Step 2: Open the Preview Page

Click the **Preview** button. This action will open a new browser tab, taking you to the preview page where you can interact with your recipe. Back to the [Recipe](instructions#chained-instructions-in-a-recipe) that we had built earlier, we have two instructions. In this case, I want to test my recipe by uploading different interview files and seeing what concerns it extracts (1st instruction) and what solutions it suggests(2nd instruction). This will help me refine my prompt better.
Click the **Preview** button. This action will open a new browser tab, taking you to the preview page where you can interact with your recipe. Going back to the [Recipe](instructions#chained-instructions-in-a-recipe) that we built earlier, we have two instructions. In this case, I want to test my recipe by uploading an interview file and seeing what concerns it extracts (1st instruction) and what solutions it suggests (2nd instruction). This will help me refine my prompt.

![Preview Page](./images/preview_page.png)

## Step 3: Interact with Your Recipe

On the preview page, you will find the input fields to the left. These fields are dynamic and will reflect the [file/text](file.md) references that you wrote in your instructions.
On the preview page, you will find the input fields to the left. These fields are dynamic and will reflect the [file](file.md) references that you wrote in your instructions.

![Preview Inputs](./images/preview_inputs.png)

To the right, the outputs of the instructions inside of your recipe will be displayed. In the case of this [recipe](instructions#chained-instructions-in-a-recipe), it's the concerns instruction and the solutions instruction.
To the right, the outputs of the instructions inside of your recipe will be displayed. In the case of this [recipe](instructions#chained-instructions-in-a-recipe), it's the outputs of the concerns instruction and the solutions instruction.

![Results Section Placeholder](./images/preview_outputs.png)

Expand Down
8 changes: 1 addition & 7 deletions websites/docs/pages/concepts/prompts.md
@@ -2,16 +2,10 @@

A **prompt** is a text input that you provide to a language model to achieve a specific result. With a prompt, your task is to help the LLM understand what you're trying to achieve so that it can generate coherent and correct output. A good prompt should be structured to provide context and guidance to the model, allowing it to generate a meaningful response. Prompts are constrained by the context length of the large language model used. You can find more about LLMs [here](https://www.techtarget.com/whatis/definition/large-language-model-LLM).

In Prompt Studio, **prompts** are used by [instructions](/concepts/instructions).
In Prompt Studio, prompts are used by [instructions](/concepts/instructions).

Writing good prompts has become a domain of its own, and there are many resources out there on how to write better prompts; a good place to start is [the prompt engineering guide](https://www.promptingguide.ai/).

## Meta Prompts

A **meta prompt** is a prompt used to generate other prompts that perform well at specific tasks. Meta prompts usually provide some framework or format that the language model will follow when creating prompts given some user instructions.

::: warning Prompt Injections

Be aware that inserting user generated text in your prompts might throw off the results, either intentionally: the user tries to override the original instructions you created, or unintentionally due to text passages that are reminiscent of instructions.

:::
4 changes: 2 additions & 2 deletions websites/docs/pages/concepts/recipe.md
@@ -1,8 +1,8 @@
# Recipes

A recipe is a combination of AI [instructions](instructions.md). It can be described as code written in natural language that you can use as automation, part of an automation or as part of a customer facing application. A recipe is your way of defining a blueprint of AI [**instructions**](concepts/instructions.md) that the LLM follows to solve your specific problem.
A recipe is a combination of AI [instructions](instructions.md). It can be described as code written in natural language that you can use as automation, part of an automation or as part of a customer facing application. A recipe is your way of defining a blueprint of AI instructions that the LLM follows to solve your specific problem.

In a recipe, instructions can be combined together to create more advanced functionality. Once created, a recipe can be deployed through an API. You can use our [SDK](/sdk/js) or our [Rest API](/api/getting-started) to integrate the tool into your codebase. You can also share a preview of the tool using our [shareable ui](../tools/preview.md) feature.
In a recipe, instructions can be combined together to create more advanced functionality. Once created, a recipe can be deployed through an API. You can use our [SDK](/sdk/js) or our [Rest API](/api/getting-started) to integrate the tool into your codebase. You can also share a preview of the recipe using our [shareable ui](../concepts/preview.md) feature.

A recipe can contain [**chained**](instructions.md#chained-instructions-in-a-recipe) instructions. But unless you chain the instructions by referring to the outputs of other instructions, the default behavior is that they run independently.

8 changes: 3 additions & 5 deletions websites/docs/pages/index.md
@@ -1,9 +1,7 @@
# Welcome to the Prompt Studio docs!

Prompt Studio is your integrated AI development environment (IDE) where you can write LLM-powered business logic in natural language.
# Welcome to the Prompt Studio documentation!

## What can I use Prompt Studio for?
Think of those tasks that you have to do everyday at work, they're a vital part of your job, yet so repetitive and time-consuming. Maybe you even tried ChatGPT to automate parts of it and you're starting to notice just how handy AI can be. If so, you're in the right place! With Prompt Studio, you can craft your unique AI-driven solution, all in natural language and in just text! You meticulously design your ["Recipe"](../pages/concepts/recipe.md) once, embedding all the nuances of your business use case by combining [instructions](../pages/concepts/instructions.md) and Voilà! You have your own "app"! Do you want to share it with your colleagues? Click on [preview](../pages/concepts/preview.md) and generate a shareable UI which is a web application based on the logic you build in your recipe! This isn't just any application; it's your personalized tool, born from your specific needs and understanding of your daily challenges. Sharing the preview with your team is as simple as sending a link.
Think of those tasks that you have to do every day at work: they're a vital part of your job, yet so repetitive and time-consuming. Maybe you even tried ChatGPT to automate parts of them, and you're starting to notice just how handy AI can be. If so, you're in the right place! With Prompt Studio, you can craft your unique AI-driven solution, all in natural language and in just text! You meticulously design your [Recipe](../pages/concepts/recipe.md) once, embedding all the nuances of your business use case by combining [instructions](../pages/concepts/instructions.md), and voilà: you have your own "app"! Do you want to share it with your colleagues? Click on [preview](../pages/concepts/preview.md) and generate a shareable UI, a web application based on the logic you built in your recipe! This isn't just any application; it's your personalized tool, born from your specific needs and understanding of your daily challenges. Sharing the preview with your team is as simple as sending a link.
It gets even better! What if your team wants this AI solution to be part of a customer-facing application at your company? Just click on "API" and let us do the work for you: within seconds, your AI solution is available through our API for the engineers on your team to integrate with your company's codebase!


@@ -12,7 +10,7 @@ It gets even better! What if your team wants this AI solution to be part of a cu

In Prompt Studio, you can start by creating your first [**recipe**](concepts/recipe.md) which would be a collection of AI instructions.

- Does your AI solution involve multiple steps? Chain your AI [**instructions**](concepts/instructions.md#chained-instructions-in-a-recipe) by simply referring with `/ to the output of the previous instruction.
- Does your AI solution involve multiple steps? Chain your AI [**instructions**](concepts/instructions.md#chained-instructions-in-a-recipe) by simply referring with `/` to the output of the previous instruction.

- Do you need to enrich the AI instruction with contextual data from [**files**](concepts/file.md)? Type "/", define your context name and upload your file. This will include the content of the file within the instruction and give more context to guide the LLM for better output.

12 changes: 9 additions & 3 deletions websites/docs/pages/recipes/deploy.md
@@ -4,7 +4,7 @@ Deploying your recipe makes it available through the Prompt Studio API so that y
What this means is that if you call the API endpoint that you expose by deploying the recipe, you can pass inputs to the endpoint, and you will get the output of the request through the API.

## Example use case
We will be using the example recipe that we built earlier [here](instructions#chained-instructions-in-a-recipe). The recipe has two instructions. the first instruction extracts concerns expressed by interviewees from a file containing interview scripts. The second instruction suggests solutions to the concerns returned by the first instruction. Let's say that I want to get the output of both the two instructions when I call the endpoint that I deploy.
We will be using the example recipe that we built earlier [here](../concepts/instructions#chained-instructions-in-a-recipe). The recipe has two instructions. The first instruction extracts concerns expressed by interviewees from a file containing interview scripts. The second instruction suggests solutions to the concerns returned by the first instruction. Let's say that I want to get the output of both instructions when I call the endpoint that I deploy.

## Step 1: Locate the API Button

@@ -20,13 +20,13 @@ Click the **API** button. This action will open a side bar where you can deploy

## Step 3: Define request fields

The request fields are the values that you want to pass to the endpoint from an external source (an external web page input for example). In the case of our previous [recipe](instructions#chained-instructions-in-a-recipe), I want to pass the file containing the interviews to the request. See below how you can do that:
The request fields are the values that you want to pass to the endpoint from an external source (an input on an external web page, for example). For this [use case](#example-use-case), I want to pass the file containing the interviews to the request. See below how you can do that:

<div style="position: relative; padding-bottom: 53.22916666666667%; height: 0;"><iframe src="https://www.loom.com/embed/77c29d0683004158a09eb7bff3da0562?sid=675c6f68-82a9-4255-a61e-79d51d89743e" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>

## Step 4: Define response fields

The response fields are the values that you want the endpoint to return to you after the recipe runs via API. As mentioned [above](#example-use-case), we want the endpoint to return the two outputs `concerns` and `solutions` from the two instructions in the recipe. See below how you can do that:
The response fields are the values that you want the endpoint to return after the recipe runs via the API. As mentioned above for this [use case](#example-use-case), we want the endpoint to return the two outputs `concerns` and `solutions` from the two instructions in the recipe. See below how you can do that:

<div style="position: relative; padding-bottom: 53.22916666666667%; height: 0;"><iframe src="https://www.loom.com/embed/85491177d4a5416c978aade57bf70e3d?sid=3a9bc550-f090-484d-98c0-b05ba26afc98" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>

@@ -50,3 +50,9 @@ curl --location --request POST 'https://api.prompt.studio/api/v1/instructions/{d
}
}'
```
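If you prefer calling the deployed endpoint from code rather than curl, the request can be sketched in Python as below. The deployment id, the `inputs` field name, and the sample payload are placeholders assumed for illustration, not documented values; check the API sidebar for the exact request shape.

```python
# Hedged sketch of calling a deployed recipe from Python; the deployment id
# and payload field names below are placeholders, not real documented values.
import json
import urllib.request

deployment_id = "YOUR_DEPLOYMENT_ID"  # placeholder
url = f"https://api.prompt.studio/api/v1/instructions/{deployment_id}"
payload = {"inputs": {"interviews": "Interviewee A: The couch is too small."}}  # assumed shape

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; the response body would
# carry the response fields (here, `concerns` and `solutions`).
print(req.full_url)
```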

::: warning Prompt Injections

Be aware that inserting user-generated text in your prompts might throw off the results, either intentionally (the user tries to override the original instructions you created) or unintentionally (text passages that read like instructions).

:::
Binary file modified websites/docs/pages/recipes/images/click_deploy.png
Binary file modified websites/docs/pages/recipes/images/click_on_api.png
Binary file modified websites/docs/pages/recipes/images/click_on_preview.png
Binary file modified websites/docs/pages/recipes/images/consume_deploy.png
Binary file modified websites/docs/pages/recipes/images/preview_inputs.png
Binary file modified websites/docs/pages/recipes/images/preview_outputs.png
Binary file modified websites/docs/pages/recipes/images/preview_page.png
Binary file modified websites/docs/pages/recipes/images/run_instruction.png
Binary file modified websites/docs/pages/recipes/images/run_preview.png