[HIP] Implement workaround for hipMemset2D #1395
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #1395      +/-  ##
==========================================
- Coverage   14.82%   12.51%    -2.32%
==========================================
  Files         250      239       -11
  Lines       36220    35949      -271
  Branches     4094     4076       -18
==========================================
- Hits         5369     4498      -871
- Misses      30800    31447      +647
+ Partials       51        4       -47
```
Generally looks good to me. Would just prefer the lambda be a free function.
Force-pushed from e400d8f to 7a05c32
LGTM!
Force-pushed from e6dfc62 to f277422
This PR changes the `queue.fill()` implementation to use the native functions of each backend. It also unifies that implementation with the one for memset, since memset is simply the 1-byte-pattern special case of fill. In the CUDA backend, both memset and fill now call `urEnqueueUSMFill`, which, depending on the size of the fill pattern, dispatches to `cuMemsetD8Async`, `cuMemsetD16Async`, `cuMemsetD32Async`, or `commonMemSetLargePattern`. Before this patch, memset already went through this path but always set the pattern size to 1 byte, so it always resolved to `cuMemsetD8Async`. The other backends behave analogously. Previously, the fill method just launched a `parallel_for` to write the pattern into memory, which made the operation quite slow.

This PR depends on:
- oneapi-src/unified-runtime#1395
- oneapi-src/unified-runtime#1412
There is an issue with `hipMemset2D` in ROCm versions prior to 6.0.0, and this PR adds a workaround for it in `commonMemSetLargePattern`. The issue appears only when using a pointer to host pinned memory obtained from `hipHostMalloc`. I believe that case hadn't been exercised until the refactoring of the USM fill (intel/llvm#12702).

Testing in intel/llvm: intel/llvm#12898
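The PR text does not spell out the workaround itself; one way to avoid the broken `hipMemset2D` call entirely is to write the pattern once and then repeatedly copy the already-filled prefix, doubling it until the buffer is full. The sketch below shows that technique on host memory; the function name is illustrative, `std::memcpy` stands in for the async copies a real adapter would issue, and this is a plausible shape for such a fallback, not the actual unified-runtime code.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <cstddef>

// Hypothetical fallback for commonMemSetLargePattern that never issues a
// strided 2D memset: write the pattern once at the start of the buffer,
// then keep copying the filled prefix after itself, doubling its length
// each iteration, until the whole buffer is covered.
void fillByDoubling(uint8_t *dst, size_t totalBytes, const uint8_t *pattern,
                    size_t patternSize) {
  size_t filled = std::min(patternSize, totalBytes);
  std::memcpy(dst, pattern, filled); // seed with one copy of the pattern
  while (filled < totalBytes) {
    size_t chunk = std::min(filled, totalBytes - filled);
    std::memcpy(dst + filled, dst, chunk); // replicate the filled prefix
    filled += chunk;
  }
}
```

The doubling loop needs only O(log(totalBytes / patternSize)) copy calls, so it stays cheap even for large buffers.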