Preserving model parameters and prompts while testing models [playground] #140537
Replies: 3 comments
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
These are all coming soon, in particular stored prompts/params.
-
Preserving model parameters and prompts when switching models would save a lot of time, especially during iterative testing in RAG apps. I've faced the same issue of needing to re-enter settings every time, and fixing it would definitely streamline the workflow. As for metrics like token usage, latency, and performance evaluation, having those built in would be incredibly helpful: it would allow for more effective model comparison and faster optimization. Hopefully the team can implement both of these features in future updates. Anyone else feel this would improve their workflow?
-
Select Topic Area
Product Feedback
Body
Hi, I am prototyping a RAG app, and to test different models I have to set the model parameters (e.g. temperature) and copy the system prompt and the user prompt into each model. It is really time-consuming. It would be nice to have a way of preserving those model parameters and the prompts while switching models. Going further, building on a previous suggestion (https://github.com/orgs/community/discussions/139295#discussion-7214680), it would also be nice to see tokens used, latency, and other metrics to evaluate model performance.
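In the meantime, one workaround is to drive the models from a script so the parameters and prompts live in code rather than being re-entered in the playground form. Below is a minimal sketch, assuming the OpenAI-compatible GitHub Models endpoint (https://models.inference.ai.azure.com) and a GITHUB_TOKEN with models access; the model IDs, prompts, and parameter values are illustrative placeholders, not a definitive setup.

```python
import os
import time
from openai import OpenAI

# Assumption: GitHub Models exposes an OpenAI-compatible endpoint that accepts
# a GITHUB_TOKEN as the API key. Adjust if your setup differs.
client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

# Shared settings, defined once instead of re-entered for every model.
params = {"temperature": 0.7, "max_tokens": 256}
system_prompt = "You answer questions using the retrieved context."
user_prompt = "Summarize the retrieved passages about vector indexes."

for model in ["gpt-4o-mini", "Phi-3.5-mini-instruct"]:  # hypothetical model IDs
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        **params,
    )
    latency = time.perf_counter() - start
    usage = response.usage  # prompt/completion/total token counts
    print(f"{model}: {latency:.2f}s, {usage.total_tokens} tokens")
```

Because every model sees the identical params dict and prompts, the printed per-call latency and token counts give a rough, consistent basis for the comparison metrics requested above.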