This was answered earlier in your question about the future of MaSh, but for reference: we will be applying NLP and ML techniques here. Meaningful tokens (a prompt together with the output it generated) will be stored as pairs, and MaSh's algorithm will be designed to consult those stored tokens before generating any new response, saving resources during the OpenAI API integration.

After this is implemented, we expect a significant improvement in MaSh's response speed and efficiency per prompt. Let's hope for the best :)
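The reuse idea described above can be sketched roughly as a prompt cache: store each prompt with the output it produced, and look the store up before calling the API again. This is only an illustrative sketch, not MaSh's actual algorithm; the `PromptCache` class, the token normalization, and the `generate` callback are all assumptions for the example.

```python
import hashlib

class PromptCache:
    """Hypothetical sketch: keep (prompt, output) pairs together and
    consult them before generating a new response."""

    def __init__(self):
        self._store = {}  # normalized prompt key -> cached response

    def _key(self, prompt: str) -> str:
        # Normalize casing and whitespace so trivially different
        # phrasings of the same prompt hit the same cache entry.
        tokens = prompt.lower().split()
        return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

    def get_or_generate(self, prompt, generate):
        """Return the stored response if one exists; otherwise call
        `generate` (e.g. an OpenAI API request) and store the result."""
        key = self._key(prompt)
        if key not in self._store:
            self._store[key] = generate(prompt)
        return self._store[key]
```

For example, two whitespace-variant copies of the same question would trigger only one backend call, with the second served from the store.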

Answer selected by BOSS294