-
Hey @devbrones, sorry for the late response, just noticed this. We don't have plans to offer a local version, but the package in this repository provides a fully featured API for running these simulations on our remote servers, which will be faster than your local GPUs. We can also handle parallel processing of simulations submitted in batches. I think you'll find it a pretty convenient and seamless experience, even if it's not 100% like running locally. If you're interested in giving that a try with a free trial, feel free to sign up here. Our docs are a good place to learn more about the API, and they include many examples.
-
Hello, I am developing an application that makes EM simulations more accessible to hobbyists, amateurs, and people interested in learning the basics of RF. The general idea is that the user would run this software on a sufficiently powerful computer with an Nvidia graphics card; the software would offload computational workloads to the GPU and run simulations in parallel to cut both simulation and analysis time.
I have found a handful of libraries that can run FDTD simulations on CUDA cores, but these are not as feature-rich as your toolkit.
Being able to run simulations locally on a CUDA-compatible GPU would be an absolute deal-maker!
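For context on what I'd want to offload, here is a minimal sketch of the kind of kernel involved: a 1D FDTD (Yee scheme) field update in plain NumPy, in normalized units with Courant number 1. The grid size, source position, and pulse parameters are illustrative choices of mine, not taken from any particular library; the point is that the two update loops are exactly the data-parallel stencils a GPU backend would accelerate.

```python
import numpy as np

def fdtd_1d(steps=300, nx=200):
    """Minimal 1D FDTD time-stepping loop (normalized units, S = 1)."""
    ez = np.zeros(nx)  # electric field on the Yee grid
    hy = np.zeros(nx)  # magnetic field, staggered half a cell

    for t in range(steps):
        # Update H from the spatial difference (curl) of E
        hy[:-1] += ez[1:] - ez[:-1]
        # Update E from the spatial difference (curl) of H
        ez[1:] += hy[1:] - hy[:-1]
        # Soft source: inject a Gaussian pulse at cell 50
        ez[50] += np.exp(-((t - 30) / 10.0) ** 2)

    return ez

fields = fdtd_1d()
print(f"peak |Ez| after propagation: {np.max(np.abs(fields)):.3f}")
```

Each update line is an element-wise stencil over the whole grid, which is why this maps so naturally onto CUDA cores in 2D/3D.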