-
Hi @samvanity You may need to run the diagnostics so that I can understand your Python environment. I assume your machine is on a recent version/update of Text-generation-webui? The following may resolve it for you: in the Text-generation-webui folder, run the command, launch ooba once and close it, then run the command once again.
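For reference, here is a minimal sketch of the sort of environment check a diagnostics step might perform, so the working and the failing machines can be compared side by side. The package list and output format are assumptions for illustration, not AllTalk's actual diagnostics script:

```python
# Minimal environment check; run it inside the Text-generation-webui Python
# environment on both the working and the failing machine and compare output.
# The package list below is an assumption for illustration only.
import platform
import sys
from importlib.metadata import version, PackageNotFoundError

print(f"Python      : {sys.version.split()[0]} ({platform.system()} {platform.machine()})")

for pkg in ("torch", "numpy", "numba", "llvmlite", "transformers"):
    try:
        print(f"{pkg:12s}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg:12s}: not installed")

try:
    import torch
    print(f"CUDA available: {torch.cuda.is_available()}")
except ImportError:
    print("torch could not be imported")
```

A version mismatch between the two installs is usually the quickest thing such a listing reveals.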
Thanks
-
Hi @samvanity Have you looked in the built-in documentation at http://127.0.0.1:7851/ at the LowVRAM mode? This is specifically designed for a situation on your local machine where your LLM has filled up your VRAM. It moves the TTS model in and out of VRAM/RAM as required, temporarily displacing a couple of layers of the LLM when it does this. Depending on the performance of your PCI bus, this usually adds at most 1 second to TTS generation (but could be 2 seconds if you have slower PCI and memory). The layers of the LLM move back in when requested and are booted out again when TTS is generated. The built-in documentation explains this process further, and you can also set it as the default start-up setting at the top of the page.

Outside of that, the Text-generation-webui interface is looking specifically for a wav file on the local disk, not across a network, which is why it's not working in a local/remote setup. It would be possible to write a local/remote Text-generation-webui interface, but it doesn't currently support that. Would the LowVRAM mode be a suitable solution for you?

As for SillyTavern, the AllTalk extension/add-in is currently in their staging/development area and hasn't been pushed to the live build (https://github.com/SillyTavern/SillyTavern/tree/staging), though they did push the documentation live, which obviously leads to a confusing situation. I don't know when they will push it live. The current files required for the extension are in that staging branch.

Thanks
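For intuition, here is a rough sketch of the VRAM/RAM shuffling described above, assuming a PyTorch-style model object. This is not AllTalk's actual implementation; the function name and `tts_model.synthesize` are placeholders:

```python
# Illustrative sketch of the LowVRAM idea: keep the TTS model in system RAM
# and only move it into VRAM for the duration of a generation, then hand the
# memory back so the displaced LLM layers can reload. Placeholder names only.
import torch

def generate_speech(tts_model, text: str, low_vram: bool = True):
    use_cuda = low_vram and torch.cuda.is_available()

    if use_cuda:
        tts_model.to("cuda")          # move TTS weights into VRAM (displaces some LLM layers)
    try:
        with torch.no_grad():
            audio = tts_model.synthesize(text)   # placeholder for the real TTS call
    finally:
        if use_cuda:
            tts_model.to("cpu")       # give the VRAM back to the LLM
            torch.cuda.empty_cache()  # release cached blocks immediately
    return audio
```

The two `.to()` transfers are what produce the roughly 1-2 second overhead mentioned above, since they are bounded by PCI bus and memory bandwidth.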
-
@samvanity Glad you've got it sorted. As for the text-gen setup, with a rework I could make a remote text-gen interface to communicate with a remote AllTalk server. It already stores everything with a unique ID number, so that aspect wouldn't be a challenge. If it's something you really think you need, let me know. I'm going to close this for now, but you can still reply on here if needed. Thanks
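To make the idea concrete, here is a hypothetical sketch of what such a remote interface could look like: the local side posts the text plus a unique ID to the remote AllTalk server and pulls the generated wav back over HTTP instead of reading it from the local disk. The endpoint path, parameter names, and response fields are assumptions for illustration, not AllTalk's documented API:

```python
# Hypothetical remote client sketch. The endpoint path, parameter names and
# response fields are ASSUMPTIONS for illustration; check AllTalk's built-in
# API documentation for the real interface.
import requests

ALLTALK_HOST = "http://192.168.1.50:7851"   # example address of a remote AllTalk server

def remote_tts(text: str, request_id: str) -> bytes:
    """Ask the remote server to generate speech, then fetch the resulting wav."""
    resp = requests.post(
        f"{ALLTALK_HOST}/api/tts-generate",             # assumed endpoint name
        data={"text_input": text, "output_file_name": request_id},
        timeout=60,
    )
    resp.raise_for_status()
    wav_url = resp.json()["output_file_url"]            # assumed response field
    audio = requests.get(f"{ALLTALK_HOST}{wav_url}", timeout=60)
    audio.raise_for_status()
    return audio.content                                 # wav bytes, playable locally
```

The unique ID mentioned above would slot in naturally as the output file name, so the interface could match the returned audio to the request that produced it.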
-
Hi,
I tried installing it as part of TextGenWebUi and when I tried to start the extension, I ran into this error message:
LLVM ERROR: Symbol not found: __svml_cosf8_ha
I've done the installation on another computer with TextGenWebUi and everything works there. How do I resolve this issue?