Is there a recommended way to integrate this project with some form of GPT, be it ChatGPT or a locally hosted GPT-J or LLaMA? I could not find a single mention of these technologies anywhere in this GitHub repo, even though there is a lot about them on the Twitter and Mastodon feeds. I am only passing by this project and have not yet decided whether I want to use it, so I have not tried to figure it out by myself.

Replies: 1 comment
Hi @plantroon, there are a few options, and some of them have been discussed in connection with Wolfram Alpha support (unfortunately that discussion was in German). My favorite option is to build a simple custom smart-service that reroutes every question to the LLM (I'm assuming a call to a cloud server or to something running locally on a fat machine). I actually started on this a while ago, in the first days of ChatGPT, until I realized there was no official API for it yet (only website hacks). Since there is one now, I could maybe share the code for testing. The other option would be to integrate it directly into the client app using the custom widget feature and bypass the SEPIA server NLU entirely. Both ways would work something like this: "Hey SEPIA I want to talk to [my-LLM]" -> SEPIA confirms and switches the mode. Let me know if you have more questions or suggestions 🙂
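To give a rough idea of what the rerouting part could look like, here is a minimal sketch, assuming an OpenAI-style chat completions endpoint (many local GPT-J/LLaMA servers expose a compatible one). The route name, environment variables and port are placeholders for illustration, and the actual SEPIA smart-service glue that triggers on "Hey SEPIA I want to talk to [my-LLM]" and switches the mode is not shown here:

```python
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder configuration: point LLM_URL at api.openai.com or at a locally
# hosted, OpenAI-compatible server (e.g. a GPT-J/LLaMA setup that exposes one).
LLM_URL = os.environ.get("LLM_URL", "https://api.openai.com/v1/chat/completions")
LLM_KEY = os.environ.get("LLM_API_KEY", "")
LLM_MODEL = os.environ.get("LLM_MODEL", "gpt-3.5-turbo")


@app.route("/ask", methods=["POST"])
def ask():
    """Take the user's utterance and reroute it to the LLM."""
    question = (request.get_json(silent=True) or {}).get("text", "")
    if not question:
        return jsonify({"error": "missing 'text'"}), 400

    resp = requests.post(
        LLM_URL,
        headers={"Authorization": f"Bearer {LLM_KEY}"},
        json={
            "model": LLM_MODEL,
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]

    # The smart-service (or custom widget) would read this answer and speak/display it.
    return jsonify({"answer": answer})


if __name__ == "__main__":
    app.run(port=8080)  # arbitrary port, pick whatever fits your setup
```

A smart-service or custom widget would then POST the user's text to `/ask` while the "talk to the LLM" mode is active and read back the `answer` field.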