First of all, a huge thank you for your work. I stumbled upon it by accident after running into issues with the original Python version. I also found a Swift fork that seems to be based on your work initially. I'm new to the AI "game," but I already see fragmentation across OS and hardware. This implementation is pure CPU; for a faster version I would need a GPU build, and in general that means CUDA, or CoreML in the Apple world. Likewise, if we use CoreML, I understand the models need to be converted into a specific format. Since Swift is available on Windows and Linux, I was almost wondering whether it wouldn't be simpler to implement CoreML on those platforms and use the Swift version, ...
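(For context, the conversion I have in mind looks roughly like the sketch below. It uses coremltools on a stand-in PyTorch model, so the layer, input names, and file name are only illustrative, not this project's actual export path.)

```python
# Minimal sketch: trace a tiny stand-in PyTorch model and convert it to Core ML.
import torch
import coremltools as ct

# Hypothetical placeholder model; a real export would trace the actual encoder/decoder.
model = torch.nn.Linear(80, 80).eval()
example_input = torch.rand(1, 80)
traced = torch.jit.trace(model, example_input)

# Convert the traced model to an ML Program and save it as an .mlpackage.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("model.mlpackage")
```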
Is implementing a GPU version complicated and time-consuming, and if someone does it, will it work on an AMD card, for example?
Will the GPU version be much faster on an Intel Mac with a card like a 5700 XT, for example?
Sorry for all these questions.
This discussion was converted from issue #713 on April 14, 2023 16:31.