PAL (Prompt Assembly Language) is a framework for managing LLM prompts as versioned, composable software artifacts. It treats prompt engineering with the same rigor as software engineering, focusing on modularity, versioning, and testability.
This is the Node.js port of the original Python implementation of PAL.
- Modular Components: Break prompts into reusable, versioned components
- Template System: Powerful Nunjucks-based templating with variable injection
- Dependency Management: Import and compose components from local files or URLs
- LLM Integration: Built-in support for OpenAI, Anthropic, and custom providers
- Evaluation Framework: Comprehensive testing system for prompt validation
- Rich CLI: Beautiful command-line interface with syntax highlighting
- Flexible Extensions: Use `.pal`/`.pal.lib` or `.yml`/`.lib.yml` extensions
- Type Safety: Full TypeScript support with Zod validation for all schemas
- Observability: Structured logging and execution tracking
```shell
# Install with npm
npm install -g pal-framework

# Or with yarn
yarn global add pal-framework

# Or with pnpm
pnpm add -g pal-framework
```
```
my_pal_project/
├── prompts/
│   ├── classify_intent.pal          # or .yml for better IDE support
│   └── code_review.pal
├── libraries/
│   ├── behavioral_traits.pal.lib    # or .lib.yml
│   ├── reasoning_strategies.pal.lib
│   └── output_formats.pal.lib
└── evaluation/
    └── classify_intent.eval.yaml
```
A more detailed guide is available in the documentation.
```yaml
# libraries/traits.pal.lib
pal_version: '1.0'
library_id: 'com.example.traits'
version: '1.0.0'
description: 'Behavioral traits for AI agents'
type: 'trait'

components:
  - name: 'helpful_assistant'
    description: 'A helpful and polite assistant'
    content: |
      You are a helpful, harmless, and honest AI assistant. You provide
      accurate information while being respectful and considerate.
```
Note: The `content` field uses YAML multi-line strings with the `|` operator to preserve line breaks. You can also use multi-line strings in the `composition` field:
```yaml
composition:
  - '{{ traits.helpful_assistant }}'
  - |
    ## Instructions
    Please follow these guidelines when responding:
    1. Be concise and clear
    2. Provide accurate information
    3. Ask clarifying questions when needed
  - 'Additional context: {{ user_context }}'
```
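Conceptually, compilation renders each composition entry with the supplied variables and joins the results into one prompt string. A minimal, self-contained sketch of that idea (illustrative only, not PAL's actual implementation; `renderPart` and `compose` are made-up names, and real PAL uses Nunjucks rather than this toy substitution):

```typescript
type Vars = Record<string, string>;

// Replace {{ name }} placeholders with matching variables,
// leaving unknown placeholders untouched.
function renderPart(part: string, vars: Vars): string {
  return part.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

// Render every composition entry, then join them with blank lines.
function compose(parts: string[], vars: Vars): string {
  return parts.map((p) => renderPart(p, vars)).join("\n\n");
}

const prompt = compose(
  ["You are a helpful assistant.", "Additional context: {{ user_context }}"],
  { user_context: "first-time user" }
);
console.log(prompt);
// → You are a helpful assistant.
//
//    Additional context: first-time user
```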
A more detailed guide is available in the documentation.
```yaml
# prompts/classify_intent.pal
pal_version: '1.0'
id: 'classify-user-intent'
version: '1.0.0'
description: 'Classifies user queries into intent categories'

imports:
  traits: './libraries/traits.pal.lib'

variables:
  - name: 'user_query'
    type: 'string'
    description: "The user's input query"
  - name: 'available_intents'
    type: 'list'
    description: 'List of available intent categories'

composition:
  - '{{ traits.helpful_assistant }}'
  - |
    ## Task
    Classify this user query into one of the available intents:

    **Available Intents:**
    {% for intent in available_intents %}
    - {{ intent.name }}: {{ intent.description }}
    {% endfor %}

    **User Query:** {{ user_query }}
```
```shell
# Compile a prompt
pal compile prompts/classify_intent.pal --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Execute with an LLM
pal execute prompts/classify_intent.pal --model gpt-4 --provider openai --vars '{"user_query": "Take me to google.com", "available_intents": [{"name": "navigate", "description": "Go to URL"}]}'

# Validate PAL files
pal validate prompts/ --recursive

# Run evaluation tests
pal evaluate evaluation/classify_intent.eval.yaml
```
```typescript
import { PromptCompiler, PromptExecutor, MockLLMClient } from 'pal-framework';

async function main() {
  // Set up components
  const compiler = new PromptCompiler();
  const llmClient = new MockLLMClient('Mock response');
  const executor = new PromptExecutor(llmClient);

  // Compile prompt
  const variables = {
    user_query: "What's the weather?",
    available_intents: [{ name: 'search', description: 'Search for info' }],
  };
  const compiledPrompt = await compiler.compileFromFile(
    'prompts/classify_intent.pal',
    variables
  );
  console.log('Compiled Prompt:', compiledPrompt);
}

main().catch(console.error);
```
Create test suites to validate your prompts:
```yaml
# evaluation/classify_intent.eval.yaml
pal_version: '1.0'
prompt_id: 'classify-user-intent'
target_version: '1.0.0'

test_cases:
  - name: 'navigation_test'
    variables:
      user_query: 'Go to google.com'
      available_intents: [{ 'name': 'navigate', 'description': 'Visit URL' }]
    assertions:
      - type: 'json_valid'
      - type: 'contains'
        config:
          text: 'navigate'
```
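Each assertion is effectively a predicate over the model's response. A rough sketch of how the two assertion types used above could be checked (illustrative only, not PAL's actual evaluator; `checkAssertion` is a made-up name):

```typescript
// The two assertion shapes used in the example eval file above.
type Assertion =
  | { type: "json_valid" }
  | { type: "contains"; config: { text: string } };

function checkAssertion(response: string, assertion: Assertion): boolean {
  switch (assertion.type) {
    case "json_valid":
      // Valid iff the whole response parses as JSON.
      try {
        JSON.parse(response);
        return true;
      } catch {
        return false;
      }
    case "contains":
      // Simple substring match.
      return response.includes(assertion.config.text);
  }
}

const response = '{"intent": "navigate"}';
console.log(checkAssertion(response, { type: "json_valid" })); // true
console.log(checkAssertion(response, { type: "contains", config: { text: "navigate" } })); // true
```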
PAL follows modern software engineering principles:
- Schema Validation: All files are validated against strict Zod schemas
- Dependency Resolution: Automatic import resolution with circular dependency detection
- Template Engine: Nunjucks for powerful variable interpolation and logic
- Observability: Structured logging with execution metrics and cost tracking
- Type Safety: Full TypeScript support with runtime validation
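Circular-import detection of the kind mentioned above is commonly a depth-first walk over the import graph that flags any file revisited while still on the current path. A small illustrative sketch (an assumed approach, not PAL's actual resolver; `findCycle` is a made-up name):

```typescript
// Detect a cycle in an import graph mapping each file to the files it imports.
// Returns the cyclic path if one exists, otherwise null.
function findCycle(graph: Record<string, string[]>): string[] | null {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored, known cycle-free

  function visit(node: string, path: string[]): string[] | null {
    if (visiting.has(node)) return [...path, node]; // back-edge: cycle closed
    if (done.has(node)) return null;
    visiting.add(node);
    for (const dep of graph[node] ?? []) {
      const cycle = visit(dep, [...path, node]);
      if (cycle) return cycle;
    }
    visiting.delete(node);
    done.add(node);
    return null;
  }

  for (const node of Object.keys(graph)) {
    const cycle = visit(node, []);
    if (cycle) return cycle;
  }
  return null;
}

console.log(findCycle({ "a.pal": ["b.lib"], "b.lib": ["a.pal"] }));
// → ["a.pal", "b.lib", "a.pal"]
console.log(findCycle({ "a.pal": ["b.lib"], "b.lib": [] })); // → null
```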
| Command | Description |
|---|---|
| `pal compile` | Compile a PAL file into a prompt string |
| `pal execute` | Compile and execute a prompt with an LLM |
| `pal validate` | Validate PAL files for syntax and semantic errors |
| `pal evaluate` | Run evaluation tests against prompts |
| `pal info` | Show detailed information about PAL files |
PAL supports different types of reusable components:
- `persona`: AI personality and role definitions
- `task`: Specific instructions or objectives
- `context`: Background information and knowledge
- `rules`: Constraints and guidelines
- `examples`: Few-shot learning examples
- `output_schema`: Output format specifications
- `reasoning`: Thinking strategies and methodologies
- `trait`: Behavioral characteristics
- `note`: Documentation and comments
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation
- Issues
- Discussions
- PAL Registry: Centralized repository for sharing components
- Visual Builder: Drag-and-drop prompt composition interface
- IDE Extensions: VS Code and other editor integrations