πŸš€ LLMPromptOptimizer: a simple tool for optimizing prompts! πŸ€–

LLMPromptOptimizer helps you optimize your prompts to get the best possible responses from AI models available through LangChain, such as OpenAI's chat models via ChatOpenAI! πŸ’»

πŸŽ‰ How does it work? To use the tool out of the box, simply configure your desired inputs and settings in the config.js file. Through the LangChain library, you can choose which AI model to use and its settings, which input files to fetch, and how to print the results.

Before you start, set the environment variable OPENAI_API_KEY to a valid OpenAI API key (for example with `export OPENAI_API_KEY=<your key>` in your shell); it will be picked up from the environment automatically.
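For reference, this is how LangChain's ChatOpenAI typically picks up the key (the import path below assumes a recent @langchain/openai package; older LangChain versions export ChatOpenAI from a different path):

    import { ChatOpenAI } from "@langchain/openai";

    // ChatOpenAI reads OPENAI_API_KEY from process.env by default...
    const aiModel = new ChatOpenAI();

    // ...or you can pass the key explicitly:
    const aiModelExplicit = new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
    });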

Here is how to use the tool when you have configured the config.js file:

  1. First, run node createOutputs.js to generate the outputs for each prompt pair (see the sketch after these steps for roughly what happens under the hood).

  2. Then, run node resultAnalyser.js to review each result from the prompts in turn. Answer y or n to each result and see the score for each prompt pair at the end! πŸ“Š

    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ (index) β”‚ prompt β”‚ files                   β”‚ yes β”‚ no β”‚ percentageYes β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ 0       β”‚ 1      β”‚ 'system1.txt,user1.txt' β”‚ 3   β”‚ 0  β”‚ '100.00%'     β”‚
    β”‚ 1       β”‚ 2      β”‚ 'system2.txt,user2.txt' β”‚ 1   β”‚ 2  β”‚ '33.33%'      β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
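Under the hood, createOutputs.js sends each configured system/user prompt pair to its model and saves the responses for review. Here is a minimal sketch of one such generation step, assuming current LangChain JS packages (the file names and the shape of this code are illustrative, not the project's actual source):

    import { readFileSync, writeFileSync } from "node:fs";
    import { ChatOpenAI } from "@langchain/openai";
    import { SystemMessage, HumanMessage } from "@langchain/core/messages";

    const model = new ChatOpenAI(); // reads OPENAI_API_KEY from the environment

    // Load one prompt pair and ask the model for a completion.
    const response = await model.invoke([
      new SystemMessage(readFileSync("system1.txt", "utf8")),
      new HumanMessage(readFileSync("user1.txt", "utf8")),
    ]);

    // Persist the output so resultAnalyser.js can present it for a y/n verdict.
    writeFileSync("output1.txt", response.content);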

🀝 Some key features of LLMPromptOptimizer include:

  1. Multiple prompt pairs: Configure multiple prompt pairs in a single file, each with its own system and user message files and its own prompt template inputs.

    //config.js
    export const promptPairs = [
      ['system1.txt', 'user1.txt', promptInput1, aiModel],
      ['system2.txt', 'user2.txt', promptInput2, aiModel],
    ];
  2. Customizable AI model: Choose from a variety of AI models available through LangChain, including OpenAI's chat models via ChatOpenAI, and customize the configuration for each prompt pair. You can test the same prompt with two different models and compare the results.

    //config.js
    const aiModel = new ChatOpenAI({
      // modelName: "gpt-4",
      // temperature: 0,
    });
  3. Result printing: LLMPromptOptimizer lets you configure how to print the results of the prompt pairs.

    ➀  πŸ”₯ Please evaluate the following result:
    
    
    ➀  "NASA is a space agency that sends people and robots into space. They explore planets, study stars, and learn about the universe. It's like a big adventure in space!"
    
    
    β™₯  Do you approve (y/n)?:
    
  4. Adaptive structure: LLMPromptOptimizer allows you to adapt the result structure to suit your needs. In config.js you can extract individual items from array outputs if you want to evaluate them individually (a hypothetical example follows this list).
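Putting these options together, a config.js might look roughly like the sketch below. The promptInput field names, the model choices, and the extractItems hook are illustrative assumptions about how such a configuration could be wired up, not the project's documented API:

    //config.js — a hypothetical end-to-end sketch, not the project's actual file
    import { ChatOpenAI } from "@langchain/openai";

    // Values substituted into the prompt templates loaded from the .txt files
    // (the field names here are made up for illustration).
    const promptInput = { topic: "NASA", audience: "children" };

    // Two model configurations, so the same prompt can be tested on both.
    const gpt4 = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });
    const gpt35 = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

    export const promptPairs = [
      ['system1.txt', 'user1.txt', promptInput, gpt4],
      ['system1.txt', 'user1.txt', promptInput, gpt35],
    ];

    // Hypothetical hook for feature 4: split an array-shaped model output into
    // individual items so each one can be approved separately.
    export const extractItems = (output) => JSON.parse(output);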

πŸ’» And of course, if you need to adapt the tool even more, you can go beyond config.js and edit the rest of the code to suit your needs.
