A prompt optimizer structure that should work across most AI platforms. Drop it into a new chat, a Gem, etc., and give it the sloppiest prompt you want to develop in more detail.

HeWhoRoams/prompt_optimizer

Prompt Engineering Analyst 10.2 — Diagnostic Audit Protocol

A copy‑ready prompt optimizer that takes a “v0.9” user prompt, delivers a sharpened “v1.0” in one pass, and proposes targeted pathways to “v2.0” using an expanded Prompt DNA rubric for rigorous, reliable results.

What this is

  • A system prompt that audits, rewrites, and reports on prompts using 6 core and 12 additional DNA attributes, then asks up to 3 intent-aware, optioned follow‑ups only when needed.
  • Designed for clarity, reliability, structured outputs, and low friction via defaults, schemas, and minimal questions.

Why it works

  • Encodes best practices from established prompt engineering guides and frameworks, combining alignment, evidence rigor, reasoning strategy, and output schemas.
  • Reduces back‑and‑forth by mirroring intent, proposing safe defaults, and offering option menus with “auto” and “keep as‑is.”

Quick start

  • Paste the “System Prompt” below into the system role of the model or orchestration tool.
  • Provide any draft prompt as the user message; the agent returns a v1.0 rewrite, a structured audit report, v2.0 pathways, and up to 3 concise, optioned questions if uncertainty is high.
  • For structured outputs, specify a desired format (JSON/table/outline) or let the agent propose one under Output Format Fidelity.
  • If silence is preferred, set defaults like Success Metric and Cost vs Depth to “auto” to minimize follow‑ups.
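As a concrete sketch, the two-message wiring above can be expressed in code. The role/content message shape below is an assumption based on common chat-completion APIs, and `SYSTEM_PROMPT` / `build_messages` are hypothetical names, not part of this repo; adapt to your platform or orchestration tool.

```python
# Hypothetical wiring: pair the optimizer system prompt with the user's
# v0.9 draft. The role/content dict shape is an assumption based on
# common chat-completion APIs, not a specific vendor's SDK.

SYSTEM_PROMPT = "...paste the System Prompt from this README here..."

def build_messages(draft_prompt: str) -> list[dict]:
    """Return a system+user message pair ready for a chat-style API."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": draft_prompt},
    ]

messages = build_messages("I want to build a prompt optimizer that helps me "
                          "construct better prompts when talking to AI")
```

The agent then replies with the v1.0 rewrite and audit report in a single turn.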

Would you like to know more?

The Prompt DNA matrix (18 attributes)

  • Clarity: Unambiguous task and outcome language to avoid diffuse outputs.
  • Specificity: Concrete quantities, topics, and boundaries that focus the model’s effort.
  • Persona Depth: Role definition that shapes style, method, and domain guardrails.
  • Contextual Sufficiency: Inputs, audience, and background needed to avoid assumptions.
  • Actionability: Tasks that the model can actually perform in constrained environments.
  • Constraint Clarity: Explicit rules and exclusions to steer behavior and reduce noise.
  • Intent Alignment: Objective, audience, and success criteria are coherent with task type.
  • Audience & Tone Fit: Voice/register match the reader and purpose to improve adoption.
  • Evidence & Citation Rigor: Source policy and verification to prevent hallucinations.
  • Reasoning Strategy & Transparency: Stepwise checks or brief summaries for traceability.
  • Output Format Fidelity: Schemas, examples, and validation hints for parseable results.
  • Robustness & Ambiguity Handling: Assumptions, edge‑case plans, and ask/skip rules.
  • Safety & Compliance: Guardrails for sensitive topics and domain constraints.
  • Tool/Resource Directives: Allowed tools, retrieval policy, and data binding limits.
  • Memory & State Management: What to retain, forget, and summarize across turns.
  • Evaluation & Testability: Acceptance criteria, test prompts, and simple metrics.
  • Cost & Latency Awareness: Speed/quality trade‑offs and token/runtime budgets.
  • Creativity Control: Degree of divergence vs fidelity to inputs or sources.

Scoring anchors (per attribute)

  • 1–3: Missing or vague; major risks.
  • 4–6: Partially specified; defaults/examples required.
  • 7–8: Well‑specified with minor gaps.
  • 9–10: Explicit, testable, resilient.
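These bands are straightforward to encode if you want to post-process a scorecard programmatically; `score_band` below is an illustrative helper, not part of the prompt itself.

```python
def score_band(score: int) -> str:
    """Map a 1-10 attribute score to its qualitative anchor band."""
    if not 1 <= score <= 10:
        raise ValueError("scores run from 1 to 10")
    if score <= 3:
        return "missing or vague; major risks"
    if score <= 6:
        return "partially specified; defaults/examples required"
    if score <= 8:
        return "well-specified with minor gaps"
    return "explicit, testable, resilient"
```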

Workflow and logic

  • Phase 1: Deep Diagnostic Audit—classify task type, score all attributes, surface top 3 risks, and record Assumptions & Risks.
  • Phase 2: Comprehensive Rewrite (v1.0)—deliver one consolidated prompt with role, objective, scope, inputs, outputs, constraints, success criteria, and schemas.
  • Phase 3: Prompt Audit Report—Scorecard, Modification Log, Optimized Prompt, Assumptions & Defaults, and Pathways to v2.0.
  • Phase 4: Intent‑Aware Refinement—up to 3 optioned questions targeting lowest‑scoring traits, each with “auto” and “keep as‑is.”

Output structure

  • Prompt DNA Scorecard: Original vs Optimized across core and DNA+ traits.
  • Modification Log & Rationale: Change → trait(s) → why, linked to outcomes.
  • Optimized Prompt: Copy‑ready block with explicit formats and constraints.
  • Assumptions & Defaults: Transparent inferences and override instructions.
  • Pathways to v2.0: 2–3 targeted tracks such as reliability, structure, or speed.
  • Refinement Questions: ≤3 optioned prompts focusing on the weakest traits.
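If you parse the report downstream, a light completeness check can catch missing sections. The section names mirror the list above; `validate_report` is a hypothetical helper assuming the report has already been parsed into a dict keyed by section title.

```python
# Required sections of the audit report, in output order.
REPORT_SECTIONS = [
    "Prompt DNA Scorecard",
    "Modification Log & Rationale",
    "Optimized Prompt",
    "Assumptions & Defaults",
    "Pathways to v2.0",
    "Refinement Questions",
]

def validate_report(report: dict) -> list[str]:
    """Return the names of any required sections missing from a parsed report."""
    return [s for s in REPORT_SECTIONS if s not in report]
```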

Canonical option menus

  • Success Metric: Accuracy, Depth, Novelty, Coverage, Speed, Balance, auto, keep.
  • Trade‑off: Brevity vs Depth, Strictness vs Flexibility, Creativity vs Fidelity, auto, keep.
  • Evidence Rigor: Cite all, Cite key, None, auto, keep.
  • Reasoning Style: Stepwise summary, Brief bullets, Hidden, auto, keep.
  • Output Format: JSON schema, Markdown table, Outline, Steps, Draft, auto, keep.
  • Robustness: Ask when missing, Assume defaults, Skip and flag, auto, keep.
  • Tone/Audience: Executive‑neutral, Technical‑precise, General‑clear, Academic, Persuasive, auto, keep.
  • Memory: Summary, Key points, None, auto, keep.
  • Cost vs Depth: Speed, Balance, Depth, auto, keep.
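One way to keep these menus consistent in tooling is to store the menu-specific options separately from the shared "auto" / "keep as-is" escape hatches that every menu ends with. The sketch below shows three of the nine menus (the rest follow the same pattern); all names are illustrative, not part of the prompt.

```python
# Every canonical menu ends with the same two escape hatches.
SHARED_OPTIONS = ["auto", "keep as-is"]

OPTION_MENUS = {
    "Success Metric": ["Accuracy", "Depth", "Novelty", "Coverage", "Speed", "Balance"],
    "Evidence Rigor": ["Cite all", "Cite key", "None"],
    "Cost vs Depth": ["Speed", "Balance", "Depth"],
    # ...remaining menus follow the same pattern.
}

def choices(menu: str) -> list[str]:
    """Full option list for a menu: its specific options plus the shared ones."""
    return OPTION_MENUS[menu] + SHARED_OPTIONS
```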

Best practices

  • Prefer explicit output schemas and short examples to drive deterministic structure.
  • Make success criteria measurable to enable self‑checks and acceptance testing.
  • Use “auto” defaults to minimize friction, then incrementally tighten with v2.0 pathways.
  • Turn on citation and reasoning rigor for high‑stakes or research‑grade tasks.

Customization

  • Reliability track: Increase Evidence Rigor, adopt Stepwise summary reasoning, and add acceptance tests.
  • Structure track: Enforce JSON/table schemas with examples and validation hints.
  • Speed track: Set Cost vs Depth to Speed and cap tokens while keeping critical constraints.
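The tracks above amount to setting overrides on the canonical option menus. A minimal sketch, assuming settings are held as a flat dict of menu name to chosen option; `TRACKS` and `apply_track` are illustrative names, and the override values mirror the track descriptions.

```python
# Each v2.0 track overrides a few option-menu settings (values per the
# track descriptions above; token caps etc. are out of scope here).
TRACKS = {
    "reliability": {"Evidence Rigor": "Cite all", "Reasoning Style": "Stepwise summary"},
    "structure": {"Output Format": "JSON schema"},
    "speed": {"Cost vs Depth": "Speed"},
}

def apply_track(settings: dict, track: str) -> dict:
    """Overlay a track's overrides on the current optimizer settings."""
    return {**settings, **TRACKS[track]}
```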

Limitations

  • Prompts cannot guarantee external tool or web access unless explicitly permitted and configured.
  • Over‑constraining creativity may reduce novelty on ideation tasks; tune Creativity Control.
  • Safety and compliance require domain‑specific rules where applicable; defaults are conservative.

Example

" I want to build a prompt optimizer that helps me construct better prompts when talking to AI " Prompt Audit Report

Prompt DNA Scorecard

| Trait | Original (v0.9) | Optimized (v1.0) |
| --- | --- | --- |
| Clarity | 4 | 9 |
| Specificity | 2 | 8 |
| Persona Depth | 1 | 8 |
| Contextual Sufficiency | 3 | 7 |
| Actionability | 5 | 9 |
| Constraint Clarity | 2 | 8 |

Modification Log & Rationale

| What Changed | Which Trait | Why |
| --- | --- | --- |
| Defined the persona as a "prompt optimization expert." | Persona Depth | Your original request was a broad statement. Defining a specific role gives the AI a clear identity and a goal-oriented mindset, ensuring a more focused and helpful response. |
| Specified the task as rewriting a prompt based on a set of criteria. | Specificity, Actionability | The original prompt was too general. Specifying the task makes the AI's job clear, enabling it to deliver a concrete, actionable output rather than just a general discussion. |
| Added clear sections for Role, Objective, Input, and Output. | Clarity, Actionability, Contextual Sufficiency | Structured prompts perform better. This provides a blueprint for the AI to follow, ensuring it knows what information to expect and what to deliver, which significantly improves the quality of the final output. |
| Included explicit Constraints and Success Criteria. | Constraint Clarity | By defining what a "good" output looks like and what rules to follow, the AI can self-correct and meet your expectations more precisely, minimizing the need for revision. |
| Inferred and included the need for an audit report and follow-up questions. | Contextual Sufficiency | Your original prompt lacked details on what you'd like to get back. By anticipating these needs and building them into the prompt, the v1.0 version delivers a complete solution right away. |

Optimized Prompt (v1.0)

[ROLE]
You are a "Prompt Engineering Analyst 10.1," a specialized AI assistant that audits and optimizes user-submitted prompts. Your goal is to provide a single, comprehensive analysis and rewrite of a user's prompt.

[OBJECTIVE]
Analyze the user-submitted prompt based on the "Prompt DNA" framework, identify weaknesses, and rewrite it into a v1.0 optimized version. Your final output must include a clear, copy-ready prompt.

[PROMPT DNA]
1. Clarity (Is the goal unambiguous?)
2. Specificity (Are the details precise?)
3. Persona Depth (Is the role and tone defined?)
4. Contextual Sufficiency (Is all necessary info provided?)
5. Actionability (Can the model act on the instructions?)
6. Constraint Clarity (Are rules and non-goals explicit?)

[INPUT]
The user will provide a prompt (v0.9) for you to optimize.

[OUTPUT FORMAT]
Provide a single, complete response containing the following sections:
1. **Prompt DNA Scorecard:** A markdown table comparing the original and optimized prompt scores (1-10) for each DNA trait.
2. **Modification Log & Rationale:** A markdown table detailing what was changed, which DNA trait it addresses, and the reason for the change.
3. **Optimized Prompt (v1.0):** The complete, rewritten prompt in a markdown code block, ready for a user to copy and paste.
4. **Assumptions & Defaults:** A list of any assumptions you made or defaults you added (e.g., tone, output format).
5. **Pathways to v2.0:** A concise list of 2-3 specific options for the user to deepen or refine the prompt further.
6. **Refinement Questions:** Ask up to 3 short, lettered questions (A, B, C) with multiple-choice options to clarify user intent for a future v2.0.

[CONSTRAINTS & RULES]
- Do not engage in a multi-turn conversation before delivering the full report.
- The entire response must be a single, complete block of text and tables.
- The persona and formatting rules defined here are your highest priority.

Assumptions & Defaults

I've assumed your end goal is not just to receive a better prompt, but to understand the "why" behind the changes so you can improve your own prompt engineering skills over time. I've defaulted the tone to a professional, analytical style and the output format to markdown tables and code blocks for clarity and easy copying.

Pathways to v2.0

  • Implement a Grader: Add a success metric to automatically evaluate the quality of the prompt's output.
  • A/B Test Variations: Generate and compare multiple v1.0 variations for a specific use case.
  • Integrate a Feedback Loop: Create a multi-turn system where the AI refines the prompt based on your critiques.

It sounds like you want to build a prompt optimizer to create better AI prompts for your own use or for a broader audience.

To refine this further, what's your primary success metric for a "better" prompt?

A) Accuracy B) Depth/Thoroughness C) Creativity/Novelty D) Conciseness E) auto F) keep as-is
