feat: Add XPU device support for Intel GPUs #2809
Conversation
✅ DCO Check Passed: Thanks @jspast, all your commits are properly signed off. 🎉

Merge Protections: Your pull request matches the following merge protections and will not be merged until they are valid.
🟢 Enforce conventional commit: Wonderful, this rule succeeded. Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: f26e8b8
I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: a4a2bf9
I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: a2d5dac
Signed-off-by: jspast <140563347+jspast@users.noreply.github.com>
Codecov Report: ❌ Patch coverage is
dolfim-ibm left a comment:
lgtm
@jspast Do you know if adding the supported device will choose it automatically on all Intel embedded GPUs? Meaning, will this trigger the bad performance you observed as a default behavior?
Users need to install a PyTorch build with XPU support (not even listed on the usual Get Started page), and set up Intel Level Zero packages. I would say you really have to know what you are doing to get to this state.
If everything is set up properly, the XPU backend should take precedence over the CPU backend. I’ve also noticed that XPU can sometimes be slower than CPU in other PyTorch-based projects, so your experience isn’t unusual. PyTorch’s XPU documentation is available on a dedicated page, but you’ll likely need to install the appropriate Intel GPU drivers for your operating system before anything else. As @jspast mentioned, you really need to know what you’re doing (especially when it comes to Intel’s driver stack), but once that’s in place, PyTorch’s XPU support is fairly straightforward. According to the guide, you can install XPU-enabled PyTorch directly from the wheel releases:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu

Hopefully this clears things up.
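The backend precedence described above can be sketched roughly as follows. This is a hypothetical `decide_device` helper, not Docling’s actual implementation; a stub stands in for the real `torch` module so the snippet runs even without PyTorch installed:

```python
from types import SimpleNamespace

def decide_device(torch_module) -> str:
    """Pick an accelerator in rough order of preference: CUDA, then XPU,
    then CPU. getattr guards keep older builds without torch.xpu from
    raising AttributeError."""
    if getattr(torch_module, "cuda", None) and torch_module.cuda.is_available():
        return "cuda"
    if getattr(torch_module, "xpu", None) and torch_module.xpu.is_available():
        return "xpu"
    return "cpu"

# Stub simulating a machine with a working XPU but no CUDA device
stub = SimpleNamespace(
    cuda=SimpleNamespace(is_available=lambda: False),
    xpu=SimpleNamespace(is_available=lambda: True),
)
print(decide_device(stub))  # xpu
```

With a properly installed XPU wheel and drivers, the XPU branch wins over the CPU fallback, which is the precedence behavior discussed above.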
Hi, thanks for adding XPU support! However, this change introduces a breaking compatibility issue with PyTorch < 2.5.0.

Problem: "AttributeError: module 'torch' has no attribute 'xpu'"

Impact: users (like me) may still be on older releases such as PyTorch 2.2.2 (for example on a Mac).

Suggested fix: would you consider adding a hasattr() check to maintain backward compatibility?
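A minimal sketch of the suggested hasattr() guard. Stubs stand in for the real `torch` module so the example runs without PyTorch installed, and the helper name `xpu_is_available` is hypothetical, not Docling’s API:

```python
from types import SimpleNamespace

def xpu_is_available(torch_module) -> bool:
    # hasattr guard: PyTorch < 2.5 has no torch.xpu submodule at all,
    # so touching torch_module.xpu directly would raise AttributeError there.
    return hasattr(torch_module, "xpu") and torch_module.xpu.is_available()

torch_new = SimpleNamespace(xpu=SimpleNamespace(is_available=lambda: True))
torch_old = SimpleNamespace()  # simulates PyTorch 2.2.2: no torch.xpu

print(xpu_is_available(torch_new))  # True
print(xpu_is_available(torch_old))  # False
```

The short-circuiting `and` means `is_available()` is only called when the attribute actually exists, so old builds simply fall through to False instead of crashing.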
Added XPU to AcceleratorDevice and updated decide_device() accordingly. Also added it as a supported device on models that should work with the XPU backend.

I tested the layout model with success, but it unfortunately runs terribly on my 48 EU Intel UHD Graphics. It took 8 minutes to process a 5-page PDF while the CPU does it in 10 seconds. That is why I decided to open this as a draft. If anyone is able to properly test this on a newer Intel ARC GPU, please leave feedback.
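Sketched out, the enum change described above might look like the following. The member names besides XPU are assumptions based on the PR description, not the actual Docling source:

```python
from enum import Enum

class AcceleratorDevice(str, Enum):
    """Illustrative device enum; only XPU is the addition discussed here."""
    AUTO = "auto"
    CPU = "cpu"
    CUDA = "cuda"
    MPS = "mps"
    XPU = "xpu"  # added by this PR

print(AcceleratorDevice.XPU.value)  # xpu
```

Subclassing `str` keeps the members directly usable wherever a plain device string (e.g. for `torch.device`) is expected.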
I should also point out that I went through some notable known issues with Intel drivers/PyTorch support. First, torch.xpu.memory.mem_get_info() is currently not supported by Meteor Lake and older GPUs and is used inside the transformers library. The simple workaround for now is to hardcode the device memory in the code, as described in the upstream issue. Another issue I faced is related to unsupported type conversions. It seems Intel PyTorch developers can't convince the driver team to fix it, but it does work with the suggested flags.

However, things should be better with newer or dedicated GPUs. Installation is fairly simple, even though the documentation is not clear about it. This guide on PyTorch Discuss is a great reference. I run Fedora 43 and there was no need to use external repositories.
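The hardcoded-memory workaround mentioned above could be sketched like this. The 8 GiB figure and the wrapper name are illustrative assumptions, and a stub replaces `torch` so the snippet runs without PyTorch:

```python
from types import SimpleNamespace

# Assumed memory budget for an iGPU that cannot report it (illustrative value)
HARDCODED_MEM_BYTES = 8 * 1024**3

def safe_mem_get_info(torch_module):
    """Return (free, total) device memory in bytes, falling back to a
    hardcoded figure when torch.xpu.memory.mem_get_info() is unsupported
    (e.g. on Meteor Lake and older GPUs)."""
    try:
        return torch_module.xpu.memory.mem_get_info()
    except (AttributeError, RuntimeError):
        return (HARDCODED_MEM_BYTES, HARDCODED_MEM_BYTES)

stub = SimpleNamespace()  # no xpu attribute: takes the fallback path
free, total = safe_mem_get_info(stub)
print(free, total)
```

Catching RuntimeError alongside AttributeError covers both the missing-API case and a driver that rejects the query at runtime.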
For the models where I added XPU to the supported_devices list, it was based on the information I found: EasyOCR and Whisper have pending pull requests for XPU support, while Transformers and vLLM already support XPU. I also made small updates to the docs and a test.

Issue resolved by this Pull Request:
Resolves #2783
Checklist: