Conversation

@jspast
Contributor

@jspast jspast commented Dec 19, 2025

Added XPU to AcceleratorDevice and updated decide_device() accordingly. Also added it as a supported device on models that should work with the XPU backend.
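As a rough illustration of the kind of priority chain described here, a device-selection helper with XPU added might look like the following. This is a hypothetical sketch, not the actual `decide_device()` in docling/utils/accelerator_utils.py; the `available` dict stands in for the per-backend availability probes (e.g. `torch.cuda.is_available()`).

```python
def decide_device(requested: str, available: dict) -> str:
    """Pick an accelerator device, preferring GPUs over CPU in auto mode.

    Hypothetical sketch: `available` maps backend names to booleans,
    standing in for torch's per-backend availability checks.
    """
    if requested != "auto":
        # An explicit request is honored as-is.
        return requested
    # Prefer accelerators over CPU, in a fixed order, with XPU included.
    for candidate in ("cuda", "mps", "xpu"):
        if available.get(candidate, False):
            return candidate
    return "cpu"
```

For example, `decide_device("auto", {"cuda": False, "xpu": True})` would select `"xpu"`, while an empty availability map falls back to `"cpu"`.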

I tested the layout model with success, but it unfortunately runs terribly on my 48-EU Intel UHD Graphics: it took 8 minutes to process a 5-page PDF, while the CPU does it in 10 seconds. That is why I decided to open this as a draft. If anyone is able to properly test this on a newer Intel Arc GPU, please leave feedback.

I should also point out that I ran into some notable known issues with Intel drivers and PyTorch support. First, torch.xpu.memory.mem_get_info() is currently not supported on Meteor Lake and older GPUs, and it is used inside the transformers library. The simple workaround for now is to hardcode the device memory in the code, as described in the upstream issue. Another issue I faced is related to unsupported type conversions; it seems the Intel PyTorch developers have not been able to get the driver team to fix it, but it does work with the suggested flags.
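The hardcoded-memory workaround mentioned above could be sketched roughly like this. All names here are hypothetical illustrations (not code from this PR or from transformers): `query_free_total` stands in for torch.xpu.memory.mem_get_info, and the hardcoded byte count is an arbitrary placeholder.

```python
# Illustrative fallback value; the real workaround hardcodes the actual
# device memory of the GPU in question.
HARDCODED_TOTAL_BYTES = 8 * 1024**3

def device_mem_get_info(query_free_total):
    """Return (free, total) memory, falling back to a hardcoded figure.

    `query_free_total` stands in for torch.xpu.memory.mem_get_info, which
    raises on Meteor Lake and older Intel GPUs.
    """
    try:
        return query_free_total()
    except (RuntimeError, NotImplementedError):
        # Unsupported on this GPU: report the hardcoded total as both
        # free and total memory, as the upstream workaround suggests.
        return (HARDCODED_TOTAL_BYTES, HARDCODED_TOTAL_BYTES)
```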

However, things should be better with newer or dedicated GPUs. Installation is fairly simple, even though the documentation is not clear about it; this guide on PyTorch Discuss is a great reference. I run Fedora 43, and there was no need to use external repositories.

The models to which I added XPU in the supported_devices list were chosen based on the information I found: EasyOCR and Whisper have pending pull requests for XPU support, while Transformers and vLLM already support XPU. I also made small updates to the docs and a test.

Issue resolved by this Pull Request:
Resolves #2783

Checklist:

  • Documentation has been updated, if necessary.
  • Examples have been added, if necessary.
  • Tests have been added, if necessary.

@github-actions
Contributor

github-actions bot commented Dec 19, 2025

DCO Check Passed

Thanks @jspast, all your commits are properly signed off. 🎉

@mergify

mergify bot commented Dec 19, 2025

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\(.+\))?(!)?:


I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: f26e8b8
I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: a4a2bf9
I, jspast <140563347+jspast@users.noreply.github.com>, hereby add my Signed-off-by to this commit: a2d5dac

Signed-off-by: jspast <140563347+jspast@users.noreply.github.com>
@codecov

codecov bot commented Dec 23, 2025

Codecov Report

❌ Patch coverage is 63.63636% with 4 lines in your changes missing coverage. Please review.

Files with missing lines               | Patch % | Lines
docling/utils/accelerator_utils.py     | 60.00%  | 4 Missing ⚠️


dolfim-ibm previously approved these changes Jan 5, 2026
Member

@dolfim-ibm dolfim-ibm left a comment

lgtm

@dolfim-ibm dolfim-ibm marked this pull request as ready for review January 5, 2026 07:42
@dosubot

dosubot bot commented Jan 5, 2026

Related Documentation

Checked 10 published document(s) in 0 knowledge base(s). No updates required.


@dolfim-ibm
Member

@jspast Do you know if adding the supported device will choose it automatically on all Intel embedded GPUs? Meaning, will this trigger the bad performance you observed as a default behavior?

@dolfim-ibm dolfim-ibm dismissed their stale review January 5, 2026 07:44

Waiting for comment's reply

@jspast
Contributor Author

jspast commented Jan 5, 2026

> @jspast Do you know if adding the supported device will choose it automatically on all Intel embedded GPUs? Meaning, will this trigger the bad performance you observed as a default behavior?

Users need to install a PyTorch build with XPU support (not even listed on the usual Get Started page) and set up the Intel Level Zero packages. I would say you really have to know what you are doing to get to this state.

@dolfim-ibm dolfim-ibm merged commit 2b83fdd into docling-project:main Jan 5, 2026
25 of 26 checks passed
@MatteoFasulo

> @jspast Do you know if adding the supported device will choose it automatically on all Intel embedded GPUs? Meaning, will this trigger the bad performance you observed as a default behavior?

> Users need to install a PyTorch build with XPU support (not even listed on the usual Get Started page) and set up the Intel Level Zero packages. I would say you really have to know what you are doing to get to this state.

If everything is set up properly, the XPU backend should take precedence over the CPU backend. I’ve also noticed that XPU can sometimes be slower than CPU in other PyTorch‑based projects, so your experience isn’t unusual.

PyTorch’s XPU documentation is available on a dedicated page, but you’ll likely need to install the appropriate Intel GPU drivers for your operating system before anything else. As @jspast mentioned, you really need to know what you’re doing (especially when it comes to Intel’s driver stack) but once that’s in place, PyTorch’s XPU support is fairly straightforward.

According to the guide, you can install XPU‑enabled PyTorch directly from the wheel releases:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu

Hopefully this clears things up.

@zhixiangxue

Hi,

Thanks for adding XPU support! However, this change introduces a breaking compatibility issue with PyTorch < 2.5.0.

Problem: "AttributeError: module 'torch' has no attribute 'xpu'"

  • Users (like me) may still be running an older PyTorch, e.g. 2.2.2 on a Mac.
  • Calling torch.xpu.is_available() raises AttributeError on older PyTorch versions.

Impact:
Docling ≥ 2.67.0 fails immediately on systems with PyTorch < 2.5.0, even on machines without Intel GPU hardware.

Suggested fix:
Add a compatibility check before accessing torch.xpu.

Would you consider adding a hasattr() check to maintain backward compatibility?

@dolfim-ibm dolfim-ibm mentioned this pull request Jan 20, 2026


Development

Successfully merging this pull request may close these issues.

Add Intel GPU support as an AcceleratorDevice

4 participants