
Commit 4091fa4

Victor Bourgin authored and facebook-github-bot committed
Ignore pin_memory if cuda is not available
Summary: Ignore `pin_memory` in `get_pytorch_dataloader` if CUDA is not available. This replicates the behavior of the PyTorch dataloader (see `torch.utils.data.dataloader._BaseDataLoaderIter`) and avoids job failures when CUDA is not available.

Reviewed By: moto-meta

Differential Revision: D68357863
1 parent ca1389a · commit 4091fa4

File tree

1 file changed (+9, -1)


src/spdl/dataloader/_pytorch_dataloader.py

Lines changed: 9 additions & 1 deletion
@@ -323,7 +323,15 @@ def get_pytorch_dataloader(
 
     from torch.utils.data._utils.pin_memory import pin_memory as pin_memory_fn
 
-    transfer_fn = pin_memory_fn if pin_memory else None
+    if pin_memory and not torch.cuda.is_available():
+        _LG.warning(
+            "'pin_memory' argument is set as true but no accelerator is found, "
+            "then device pinned memory won't be used."
+        )
+
+    transfer_fn = (
+        pin_memory_fn if pin_memory and torch.accelerator.is_available() else None
+    )
 
     mp_ctx = (
         multiprocessing_context
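
The summary says this mirrors the stock PyTorch dataloader. As a point of reference, here is a minimal sketch (not part of this commit; the toy list dataset is purely illustrative) of that upstream behavior on a CPU-only machine:

import torch
from torch.utils.data import DataLoader

# With no accelerator present, pin_memory=True does not fail the job:
# torch.utils.data.dataloader._BaseDataLoaderIter emits a warning and
# simply skips pinning.
loader = DataLoader(list(range(8)), batch_size=4, pin_memory=True)

for batch in loader:
    # Batches come back as ordinary CPU tensors; is_pinned() stays False.
    print(batch, batch.is_pinned())

After this change, `get_pytorch_dataloader` does the same: with `pin_memory=True` on a host without CUDA it logs a warning via `_LG` and leaves `transfer_fn` as `None` rather than failing.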
