How to choose execution provider? #18
mykolamelnykml started this conversation in General
Replies: 1 comment 2 replies
-
Hey @kolia1985 👋, I have released a new version (https://github.com/felixdittrich92/OnnxTR/releases/tag/v0.3.0). For example, to force GPU usage:
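The original code sample did not survive extraction, so here is a hedged sketch instead. OnnxTR runs on ONNX Runtime, which picks its backend from an ordered list of execution providers; the helper and the model path below are illustrative, not the exact OnnxTR v0.3.0 API (see the linked release notes for that):

```python
def provider_list(device: str) -> list:
    """Return an ONNX Runtime execution-provider list for 'cpu' or 'gpu'.

    Provider names are standard ONNX Runtime identifiers; ordering matters,
    since the runtime tries providers front to back.
    """
    if device == "gpu":
        # Prefer CUDA, fall back to CPU if no GPU is available at runtime.
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]


# Passing the list to an ONNX Runtime session (model path is a placeholder):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=provider_list("gpu"))
```

Because the CPU provider is appended last in the GPU case, inference still works on machines without CUDA; it just silently falls back to CPU.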
The individual predictors are now completely configurable: for example, you can run the detection model on CPU and the recognition model on GPU. This also makes it possible to try other execution providers such as DirectML or OpenVINO (but keep in mind it's not tested with these providers yet). Best regards 🤗
-
How to choose execution provider?
How to force run only on CPU or GPU?