Prompt Engineering
Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. It enhances the effectiveness and efficiency of AI systems and is critical for developing advanced AI capabilities.
Based on the presentation "The Status Quo of Prompt Engineering" (Greg DeCarlo, 2024), here are the key prompt engineering techniques:
- In-Context Learning (keywords: emergent abilities, temporary learning, meta-learning)
  - Utilize the model's ability to learn temporarily from examples and instructions given in the prompt itself
  - Adapt prompts based on the specific context of your project
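To make the idea concrete, here is a minimal sketch of an in-context prompt; the `call_llm` helper is a hypothetical stand-in for whatever model API you use, not something defined in the presentation.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to your model API and return its reply."""
    raise NotImplementedError("Wire this up to your own model client.")

# In-context learning: the model infers the task from demonstrations inside the
# prompt itself. Nothing is fine-tuned; the "learning" lasts only for this request.
demonstrations = (
    "Review: 'Great battery life.' -> Sentiment: positive\n"
    "Review: 'Broke after a week.' -> Sentiment: negative\n"
)
prompt = demonstrations + "Review: 'Does exactly what it promises.' -> Sentiment:"
# answer = call_llm(prompt)  # a fresh call without the demonstrations would not follow this format
```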
- Zero-Shot and Few-Shot Prompting (keywords: task description, example-based learning)
  - Zero-shot: provide a clear task description without examples
  - Few-shot: include 2-3 relevant examples before the main task
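A small, self-contained sketch of how the two prompt styles differ; the task, labels, and example reviews are made up for illustration.

```python
TASK = "Classify the sentiment of the review as positive or negative."

def zero_shot_prompt(review: str) -> str:
    # Zero-shot: a clear task description and the input, with no examples.
    return f"{TASK}\n\nReview: {review}\nSentiment:"

def few_shot_prompt(review: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: 2-3 labeled examples placed before the main input.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{TASK}\n\n{shots}\n\nReview: {review}\nSentiment:"

examples = [("Great battery life.", "positive"), ("Broke after a week.", "negative")]
print(zero_shot_prompt("Does exactly what it promises."))
print(few_shot_prompt("Does exactly what it promises.", examples))
```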
- Chain-of-Thought (CoT) Prompting (keywords: step-by-step reasoning, intermediate steps, logical thinking)
  - Guide the model to break complex problems down into smaller steps
  - Useful for tasks requiring multi-step solutions or logical reasoning
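One way to elicit step-by-step reasoning is simply to ask for the intermediate steps in the prompt; a minimal sketch follows, with wording that is illustrative rather than a fixed template.

```python
def cot_prompt(question: str) -> str:
    # Ask explicitly for intermediate steps before the final answer.
    return (
        f"{question}\n"
        "Work through this step by step: list each intermediate step, "
        "then give the final answer on its own line starting with 'Answer:'."
    )

print(cot_prompt("A train leaves at 09:40 and arrives at 13:05. How long is the trip?"))
```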
- Chain-of-Symbol Prompting (keywords: spatial reasoning, symbol interpretation, text formatting)
  - Use arbitrary symbols to help the model with spatial reasoning expressed in text
  - Helpful for tasks involving layout or structure interpretation
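A sketch of how symbols can stand in for spatial structure in a prompt; the shelf example and the "/" and "_" conventions are assumptions made for illustration.

```python
# Encode a left-to-right layout with compact symbols instead of prose, so the
# positions and gaps are visible directly in the text.
shelf = ["mug", "", "", "plant", "book"]  # empty strings are empty slots
symbolic = " / ".join(item if item else "_" for item in shelf)

prompt = (
    "The shelf below is written left to right; '_' marks an empty slot and '/' separates slots.\n"
    f"Shelf: {symbolic}\n"
    "How many empty slots are there between the mug and the plant?"
)
print(prompt)
```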
- Self-Consistency (keywords: multiple rollouts, consistency check, reliability)
  - Generate multiple CoT rollouts and select the most common conclusion
  - Improves reliability for complex reasoning tasks
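A sketch of self-consistency as a majority vote over sampled rollouts; `call_llm` and its `temperature` parameter are placeholders for your own model API, and the "Answer:" convention is assumed only to make parsing easy.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder for your model API; sampling (temperature > 0) makes rollouts differ."""
    raise NotImplementedError

def self_consistent_answer(question: str, n_rollouts: int = 5) -> str:
    # Sample several independent chain-of-thought rollouts, pull out each final
    # answer, and return the answer that occurs most often.
    prompt = f"{question}\nThink step by step, then end with 'Answer: <result>'."
    answers = [call_llm(prompt, temperature=0.7).rsplit("Answer:", 1)[-1].strip()
               for _ in range(n_rollouts)]
    return Counter(answers).most_common(1)[0][0]
```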
- Generated Knowledge Prompting (keywords: fact generation, contextual information, commonsense reasoning)
  - First prompt the model to generate relevant facts, then use those facts in the main prompt
  - Enhances performance on tasks requiring specific knowledge or context
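A two-step sketch of generated knowledge prompting, again assuming a generic `call_llm` placeholder rather than any particular API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def answer_with_generated_knowledge(question: str) -> str:
    # Step 1: ask the model for background facts relevant to the question.
    facts = call_llm(f"List three concise facts relevant to answering: {question}")
    # Step 2: feed those facts back in as context for the actual question.
    return call_llm(f"Facts:\n{facts}\n\nUsing the facts above, answer: {question}")
```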
- Prompt Chaining (keywords: sequential prompts, task breakdown, structured responses)
  - Combine multiple prompts in sequence to guide the model through complex tasks
  - Useful for breaking projects down into manageable steps
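A sketch of a three-link chain in which each prompt's output feeds the next; the meeting-notes scenario and the `call_llm` helper are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def chained_report(raw_notes: str) -> str:
    # Each prompt handles one sub-task; its output becomes the next prompt's input.
    summary = call_llm(f"Summarize these meeting notes in five bullet points:\n{raw_notes}")
    actions = call_llm(f"Extract a list of action items from this summary:\n{summary}")
    return call_llm(f"Draft a short status email covering:\n{summary}\n\nAction items:\n{actions}")
```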
- Tree-of-Thought (ToT) Prompting (keywords: multiple paths, evaluation, breadth-first search, beam search)
  - Generate and evaluate multiple possible next steps, searching over them for complex problem-solving
  - Effective for tasks requiring exploration of different solution paths
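A compact beam-search sketch of tree-of-thought: propose candidate next steps, score the resulting partial solutions, and keep only the best few. The `call_llm`, `propose_steps`, and `score_state` helpers and the 0-10 scoring scale are assumptions for illustration.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def propose_steps(problem: str, partial: str, k: int = 3) -> list[str]:
    # Ask the model for k candidate next steps given the work so far.
    reply = call_llm(
        f"Problem: {problem}\nWork so far:\n{partial or '(none)'}\n"
        f"Propose {k} different possible next steps, one per line."
    )
    return [line.strip() for line in reply.splitlines() if line.strip()][:k]

def score_state(problem: str, partial: str) -> float:
    # Ask the model to rate how promising a partial solution looks (0-10).
    reply = call_llm(
        f"Problem: {problem}\nPartial solution:\n{partial}\n"
        "Rate how promising this is from 0 to 10. Reply with a number only."
    )
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0

def tree_of_thought(problem: str, depth: int = 3, beam: int = 2) -> str:
    # Beam search over partial solutions: expand each one, keep the `beam` best.
    frontier = [""]
    for _ in range(depth):
        candidates = [p + step + "\n" for p in frontier for step in propose_steps(problem, p)]
        frontier = sorted(candidates, key=lambda c: score_state(problem, c), reverse=True)[:beam]
    return frontier[0]
```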
- Maieutic Prompting (keywords: recursive explanations, logical consistency, self-questioning)
  - Prompt the model to recursively explain parts of its own explanation
  - Improves performance on tasks requiring high logical consistency
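One way to sketch the recursive-explanation idea: explain a claim, then ask the model to explain the key assumption behind its own explanation, and repeat. The tree structure and the `call_llm` helper are assumptions, not the presentation's exact procedure.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def explain_recursively(claim: str, depth: int = 2) -> dict:
    # Build a small tree of explanations; branches that contradict each other
    # can then be inspected and discarded.
    explanation = call_llm(f"Explain concisely why the following is true or false: {claim}")
    children = []
    if depth > 1:
        assumption = call_llm(f"State the key assumption behind this explanation:\n{explanation}")
        children.append(explain_recursively(assumption, depth - 1))
    return {"claim": claim, "explanation": explanation, "children": children}
```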
- Least-to-Most Prompting (keywords: sub-problem listing, sequential solving, task decomposition)
  - Guide the model to list sub-problems first and then solve them in sequence
  - Ideal for breaking complex challenges into smaller, manageable tasks
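A sketch of the decompose-then-solve loop; the exact prompt wording and the `call_llm` helper are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def least_to_most(problem: str) -> str:
    # Step 1: have the model list the sub-problems, simplest first.
    plan = call_llm(f"List, from simplest to hardest, the sub-problems needed to solve:\n{problem}")
    sub_problems = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]
    # Step 2: solve them in order, carrying earlier answers forward as context.
    solved = ""
    for sub in sub_problems:
        answer = call_llm(
            f"Problem: {problem}\nSolved so far:\n{solved or '(nothing yet)'}\nNow solve: {sub}"
        )
        solved += f"{sub}\n{answer}\n"
    return call_llm(f"Problem: {problem}\nUsing this work:\n{solved}\nGive the final answer.")
```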
- Complexity-Based Prompting (keywords: multiple rollouts, longest chains, thoughtful reasoning)
  - Generate multiple CoT rollouts and select the consensus answer among the longest (most complex) chains
  - Useful for tasks requiring deep, multi-step reasoning
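A sketch that combines sampling with a preference for the longest reasoning chains; `call_llm`, the `keep` parameter, and the line-count proxy for chain complexity are assumptions for illustration.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def complexity_based_answer(question: str, n_rollouts: int = 8, keep: int = 3) -> str:
    # Sample several CoT rollouts, keep those with the most reasoning steps
    # (the longest chains), and take the majority answer among them.
    prompt = f"{question}\nReason step by step, one step per line, then end with 'Answer: <result>'."
    rollouts = [call_llm(prompt, temperature=0.7) for _ in range(n_rollouts)]
    rollouts.sort(key=lambda r: len(r.splitlines()), reverse=True)  # longest chains first
    answers = [r.rsplit("Answer:", 1)[-1].strip() for r in rollouts[:keep]]
    return Counter(answers).most_common(1)[0][0]
```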
- Self-Refinement (keywords: iterative improvement, self-critique, feedback loop)
  - Prompt the model to solve a problem, critique its own solution, and refine it
  - Valuable for iteratively improving outcomes over the course of a project
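A sketch of the solve-critique-refine loop, assuming the same hypothetical `call_llm` placeholder as above.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your model API."""
    raise NotImplementedError

def self_refine(task: str, rounds: int = 2) -> str:
    # Solve, critique, refine: the model's own feedback drives each revision.
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = call_llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\nList the main weaknesses of this draft.")
        draft = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\nRewrite the draft to fix these weaknesses."
        )
    return draft
```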
- Recommendations
  - Experiment with combining these techniques based on your specific project needs
  - Use CoT and ToT for complex problem-solving and reasoning tasks
  - Leverage few-shot prompting when you have clear examples of the desired output
  - Apply Self-Consistency and Complexity-Based Prompting for increased reliability in critical components
  - Utilize Self-Refinement to iteratively improve your project's outputs