45 changes: 42 additions & 3 deletions runtime/prompty/README.md
@@ -1,3 +1,4 @@
# Prompty

Prompty is an asset class and format for LLM prompts designed to enhance observability, understandability, and portability for developers. The primary goal is to accelerate the developer inner loop of prompt engineering and prompt source management in a cross-language and cross-platform implementation.

@@ -6,9 +7,11 @@ The file format has a supporting toolchain with a VS Code extension and runtimes
The tooling comes together in three ways: the *prompty file asset*, the *VS Code extension tool*, and *runtimes* in multiple programming languages.

## The Prompty File Format

Prompty is a language agnostic prompt asset for creating prompts and engineering the responses. Learn more about the format [here](https://prompty.ai/docs/prompty-file-spec).

Example prompty file:

```markdown
---
name: Basic Prompt
# ...rest of the front matter elided...
---
user:
{{question}}
```


## The Prompty VS Code Extension

Run Prompty files directly in VS Code. This Visual Studio Code extension offers an intuitive prompt playground within VS Code to streamline the prompt engineering process. You can find the Prompty extension in the Visual Studio Code Marketplace.

Download the [VS Code extension here](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty).


## Using this Prompty Runtime

The Python runtime is a simple way to run your prompts in Python. The runtime is available as a Python package and can be installed using pip. Depending on the type of prompt you are running, you may need to install additional dependencies. The runtime is designed to be extensible and can be customized to fit your needs.

```bash
pip install prompty
```

```python
import prompty

# execute the prompt
response = prompty.execute("path/to/prompty/file")
print(response)
```

## Configuration Options

### Disabling Image Parsing

By default, the Prompty chat parser automatically processes markdown images (`![alt](image.png)`) by converting them to base64 data URIs for LLM consumption. You can disable this behavior when you want to preserve image references as plain text:

```yaml
---
name: Markdown Converter
model:
  api: chat
  configuration:
    azure_deployment: gpt-35-turbo
template:
  format: jinja2
  parser: prompty
  options:
    disable_image_parsing: true
---
system:
Convert the following markdown to HTML, preserving image references as-is:

user:
{{content}}
```

This is useful when:

- Converting markdown to other formats where you want to preserve image references
- Processing documentation where images aren't needed for LLM processing
- Avoiding file access issues when images aren't available in the execution environment
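To make the toggle concrete, here is a stdlib-only sketch of the behavior being switched. This is a simplified stand-in for the runtime's parser, not its actual implementation; only the regex shape mirrors the real code:

```python
import re

# simplified stand-in for the runtime's markdown-image handling (illustrative only)
IMAGE = r"(?P<alt>!\[[^\]]*\])\((?P<filename>.*?)(?=\"|\))\)"

def parse_content(content: str, disable_image_parsing: bool = False):
    if disable_image_parsing:
        # leave markdown image references untouched as plain text
        return content
    # otherwise split text from image references; the real runtime would
    # base64-encode local files into data URIs at this point
    parts, last = [], 0
    for m in re.finditer(IMAGE, content):
        if content[last:m.start()].strip():
            parts.append({"type": "text", "text": content[last:m.start()].strip()})
        parts.append({"type": "image_url", "image_url": {"url": m.group("filename")}})
        last = m.end()
    if content[last:].strip():
        parts.append({"type": "text", "text": content[last:].strip()})
    return parts

text = "before ![alt](http://example.com/a.png) after"
```

With the flag set, `text` comes back unchanged; without it, the content is split into structured text and `image_url` parts.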

## Available Invokers

The Prompty runtime comes with a set of built-in invokers that can be used to execute prompts. These include:

- `azure`: Invokes the Azure OpenAI API
- `openai`: Invokes the OpenAI API
- `serverless`: Invokes serverless models (such as those hosted on GitHub) using the [Azure AI Inference client library](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-inference-readme?view=azure-python-preview). Currently only key-based authentication is supported, with managed identity support coming soon.
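The invoker names above come from the prompt's model configuration. As a toy sketch of that name-based dispatch (an illustrative registry, not prompty's actual `InvokerFactory`):

```python
# toy name-based registry illustrating how an invoker is selected;
# the real runtime wires these up from the prompty file's model configuration
registry = {}

def register(name):
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@register("azure")
def azure_invoker(messages):
    # placeholder: a real invoker would call the Azure OpenAI API
    return {"invoker": "azure", "count": len(messages)}

@register("openai")
def openai_invoker(messages):
    # placeholder: a real invoker would call the OpenAI API
    return {"invoker": "openai", "count": len(messages)}

def execute(name, messages):
    if name not in registry:
        raise ValueError(f"no invoker registered for {name!r}")
    return registry[name](messages)
```

The design point is that prompts stay portable: switching providers changes only the configuration name, not the prompt itself.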


## Using Tracing in Prompty

Prompty supports tracing to help you understand the execution of your prompts. This functionality is customizable and can be used to trace the execution of your prompts in a way that makes sense to you. Prompty has two default traces built in: `console_tracer` and `PromptyTracer`. The `console_tracer` writes the trace to the console, and the `PromptyTracer` writes the trace to a JSON file. You can also create your own tracer by creating your own hook.

```python
import prompty
from prompty.tracer import trace, Tracer, console_tracer, PromptyTracer

# write traces to the console
Tracer.add("console", console_tracer)

# write traces to a JSON (.tracy) file
json_tracer = PromptyTracer(output_dir="path/to/output")
Tracer.add("PromptyTracer", json_tracer.tracer)

@trace
def get_customer(customerId):
    ...

@trace
def get_response(customerId, prompt):
    customer = get_customer(customerId)
    ...
```
In this case, whenever this code is executed, a `.tracy` file will be created in the `path/to/output` directory. This file will contain the trace of the execution of the `get_response` function, the execution of the `get_customer` function, and the prompty internals that generated the response.
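A custom hook is simply a callable that takes a span name and returns a context manager yielding a key/value setter (a sketch of that contract follows; the names here are illustrative, not part of the API — check `prompty.tracer` for the exact signature):

```python
import contextlib

collected = []  # in-memory sink instead of console or JSON output

@contextlib.contextmanager
def list_tracer(name: str):
    # each traced call opens a span; values are attached via the yielded setter
    record = {"name": name, "data": {}}
    collected.append(record)
    yield lambda key, value: record["data"].__setitem__(key, value)

# registration would then look like: Tracer.add("list", list_tracer)

with list_tracer("get_response") as add:
    add("inputs", {"customerId": 42})
```

Anything you can express as such a context manager — a logger, a metrics client, a database writer — can act as a trace sink.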

## OpenTelemetry Tracing

You can add OpenTelemetry tracing to your application using the same hook mechanism. In your application, you might create something like `trace_span` to trace the execution of your prompts:

```python
import contextlib
import json

from opentelemetry import trace

from prompty.tracer import Tracer

tracer = trace.get_tracer(__name__)

@contextlib.contextmanager
def trace_span(name: str):
    with tracer.start_as_current_span(name) as span:
        # yield a setter so prompty can attach traced values as span attributes
        yield lambda key, value: span.set_attribute(key, json.dumps(value))

Tracer.add("OpenTelemetry", trace_span)
```
This will produce spans during the execution of the prompt that can be sent to an OpenTelemetry collector for further analysis.

## CLI

The Prompty runtime also comes with a CLI tool that allows you to run prompts from the command line. The CLI tool is installed with the Python package.

```bash
prompty -s path/to/prompty/file -e .env
```
This will execute the prompt and print the response to the console. If there are any environment variables the CLI should take into account, you can pass those in via the `-e` flag. It also has default tracing enabled.
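For example, a `.env` might hold the model connection values referenced by the prompt's configuration (the variable names below are illustrative — use whatever names your prompty file's configuration references):

```shell
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com
AZURE_OPENAI_API_KEY=<key>
```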

## Contributing

We welcome contributions to the Prompty project! This community-led project is open to all contributors. The project can be found on [GitHub](https://github.com/Microsoft/prompty).
12 changes: 7 additions & 5 deletions runtime/prompty/prompty/__init__.py
@@ -272,13 +272,15 @@ def _validate_inputs(prompt: Prompty, inputs: dict[str, typing.Any], merge_sampl
                    f"Type mismatch for input property {input.name}: input type ({inputs[input.name].type}) != sample type ({input.type})"
                )
            clean_inputs[input.name] = inputs[input.name]
+        elif input.default is not None:
+            clean_inputs[input.name] = input.default
+        elif getattr(input, "required", False):
+            raise ValueError(f"Missing required input property {input.name}")
        else:
-            if input.default is not None:
-                clean_inputs[input.name] = input.default
-            else:
-                raise ValueError(f"Missing input property {input.name}")
+            # optional input with no default - continue
+            continue

-    # check stra inputs
+    # check stray inputs
    invalid: list[str] = []
    for k, v in inputs.items():
        if prompt.get_input(k) is None:
4 changes: 4 additions & 0 deletions runtime/prompty/prompty/core.py
@@ -113,6 +113,10 @@ class TemplateProperty:
        Nonce is automatically generated for each run
    content : str
        Template content used for rendering
    strict : bool
        Whether the template is strict or not
    options : dict
        The options of the template (optional)
    """

    format: str = field(default="mustache")
4 changes: 4 additions & 0 deletions runtime/prompty/prompty/parsers.py
@@ -108,6 +108,10 @@ def parse_content(self, content: str):
        any
            The parsed content
        """
        # Check if image parsing is disabled in template options
        if self.prompty.template.options.get("disable_image_parsing", False):
            return content

        # regular expression to parse markdown images
        image = r"(?P<alt>!\[[^\]]*\])\((?P<filename>.*?)(?=\"|\))\)"
        matches = re.findall(image, content, flags=re.MULTILINE)
83 changes: 82 additions & 1 deletion runtime/prompty/tests/test_parser.py
@@ -1,5 +1,6 @@
import prompty
from prompty.core import Prompty
from pathlib import Path
from prompty.core import Prompty, TemplateProperty
from prompty.parsers import PromptyChatParser

roles = ["assistant", "function", "system", "user"]
@@ -39,3 +40,83 @@ def test_thread_parse():
    assert content[0]["role"] == "system"
    assert content[1]["role"] == "thread"
    assert content[2]["role"] == "system"


def test_disable_image_parsing():
    """Test that image parsing can be disabled via template options"""

    # Create a prompty with image parsing disabled
    template = TemplateProperty(format="jinja2", parser="prompty", options={"disable_image_parsing": True})

    prompty_obj = Prompty(template=template, file=Path("/tmp/test.prompty"))
    parser = PromptyChatParser(prompty_obj)

    # Test content with markdown images
    content_with_images = """Here's some text with an image:
![Test Image](test_image.png)
And some more text."""

    # Parse the content - should return as-is without image processing
    result = parser.parse_content(content_with_images)

    # Verify that the content is returned unchanged (no image processing)
    assert result == content_with_images


def test_normal_image_parsing():
    """Test that image parsing works normally when not disabled"""

    # Create a prompty with default settings (image parsing enabled)
    template = TemplateProperty(
        format="jinja2",
        parser="prompty",
        options={},  # No disable_image_parsing option
    )

    prompty_obj = Prompty(template=template, file=Path("/tmp/test.prompty"))
    parser = PromptyChatParser(prompty_obj)

    # Test content with markdown images (using a URL to avoid file access issues)
    content_with_images = """Here's some text with an image:
![Test Image](http://example.com/test.png)
And some more text."""

    # Parse the content - should process images normally
    result = parser.parse_content(content_with_images)

    # Verify that image processing was attempted (result should be structured)
    assert isinstance(result, list)
    assert len(result) == 3  # text, image, text
    assert result[0]["type"] == "text"
    assert result[0]["text"] == "Here's some text with an image:"
    assert result[1]["type"] == "image_url"
    assert result[1]["image_url"]["url"] == "http://example.com/test.png"
    assert result[2]["type"] == "text"
    assert result[2]["text"] == "And some more text."


def test_image_parsing_with_only_text():
    """Test that content without images is handled correctly regardless of the setting"""

    # Test with image parsing disabled
    template_disabled = TemplateProperty(format="jinja2", parser="prompty", options={"disable_image_parsing": True})

    prompty_disabled = Prompty(template=template_disabled, file=Path("/tmp/test.prompty"))
    parser_disabled = PromptyChatParser(prompty_disabled)

    # Test with image parsing enabled
    template_enabled = TemplateProperty(format="jinja2", parser="prompty", options={})

    prompty_enabled = Prompty(template=template_enabled, file=Path("/tmp/test.prompty"))
    parser_enabled = PromptyChatParser(prompty_enabled)

    # Content without any images
    text_only_content = "This is just plain text without any images."

    # Both should return the same result (plain text)
    result_disabled = parser_disabled.parse_content(text_only_content)
    result_enabled = parser_enabled.parse_content(text_only_content)

    assert result_disabled == text_only_content
    assert result_enabled == text_only_content
    assert result_disabled == result_enabled