LLMPromptOptimizer helps you optimize your prompts to get the best possible responses from AI models like ChatOpenAI!
How does it work? To use the tool out of the box, simply configure your desired input and settings values in the `config.js` file. Using the Langchain library, you can choose which AI model to use and its settings, which input files to fetch, and how to print the results.
To get started, set the `OPENAI_API_KEY` environment variable to your OpenAI API key. Once it is set in your shell environment, it will be picked up automatically.
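If you want to double-check that the key is visible to Node before running the scripts, a quick standalone check like the one below works. The snippet is purely illustrative and is not part of the tool.

```js
// checkKey.js (illustrative): fail fast if the API key is missing from the environment.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set. Export it before running the scripts.");
}
console.log("OPENAI_API_KEY found.");
```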
Here is how to use the tool when you have configured the `config.js` file:
- First, run `node createOutputs.js` to generate the outputs for each prompt pair.
- Then, run `node resultAnalyser.js` to see each result from the prompts in turn. You can say y/n to each result and see the score for each prompt pair at the end!

```
┌─────────┬────────┬──────────────────────────┬─────┬────┬───────────────┐
│ (index) │ prompt │ files                    │ yes │ no │ percentageYes │
├─────────┼────────┼──────────────────────────┼─────┼────┼───────────────┤
│ 0       │ 1      │ 'system1.txt,user1.txt'  │ 3   │ 0  │ '100.00%'     │
│ 1       │ 2      │ 'system2.txt,user2.txt'  │ 1   │ 2  │ '33.33%'      │
└─────────┴────────┴──────────────────────────┴─────┴────┴───────────────┘
```
Some key features of LLMPromptOptimizer include:
- Multiple prompt pairs: Configure multiple prompt pairs in a single file, each with its own set of system and user messages and prompt template inputs.

```js
// config.js
export const promptPairs = [
  ['system1.txt', 'user1.txt', promptInput1, aiModel],
  ['system2.txt', 'user2.txt', promptInput2, aiModel],
];
```
- Customizable AI model: Choose from a variety of AI models available through Langchain, including ChatOpenAI, and customize the configuration for each prompt pair. You can test the same prompt with two different models and evaluate the results.

```js
// config.js
import { ChatOpenAI } from "langchain/chat_models/openai"; // import path may vary with your Langchain version

const aiModel = new ChatOpenAI({
  // modelName: "gpt-4",
  // temperature: 0,
});
```
- Result printing: LLMPromptOptimizer lets you configure how to print the results of the prompt pairs.

```
Please evaluate the following result:
"NASA is a space agency that sends people and robots into space. They explore planets, study stars, and learn about the universe. It's like a big adventure in space!"
Do you approve (y/n)?:
```
- Adaptive structure: LLMPromptOptimizer allows you to adapt the result structure to suit your needs. In `config.js` you can extract individual items from array outputs if you want to evaluate them individually.
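For example, a transform along these lines could split an array-shaped output into separate results. This is a minimal sketch of the idea only; the `splitArrayOutput` name and the exact hook that `config.js` expects are assumptions, so check the file for the tool's actual option names.

```js
// config.js (illustrative sketch, not the tool's actual option names):
// turn an array-shaped output such as '["fact one", "fact two"]' into separate
// results so each item can be approved or rejected on its own.
export const splitArrayOutput = (rawOutput) => {
  const items = JSON.parse(rawOutput); // assumes the model returned a JSON array
  return items.map((item) => String(item).trim()); // one entry per result to evaluate
};
```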
And of course, if you need to adapt the tool even more, you can go beyond the `config.js` file and edit the other code to suit your needs.