
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one - Help request: gradient propagation error in torch.nn.parallel.DistributedDataParallel #8

Open
zhth53 opened this issue Oct 22, 2024 · 1 comment

zhth53 commented Oct 22, 2024

Traceback (most recent call last):
  File "/public/home/******/E2E-MFD-main/./tools/train.py", line 204, in <module>
    main()
  File "/public/home/******/E2E-MFD-main/./tools/train.py", line 191, in main
    train_detector(
  File "/public/home/******/E2E-MFD-main/mmrotate/apis/train.py", line 144, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/public/home/******/miniconda3/envs/E2E-MFD_v1/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/public/home/******/miniconda3/envs/E2E-MFD_v1/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 53, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/public/home/******/miniconda3/envs/E2E-MFD_v1/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 31, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/public/home/******/miniconda3/envs/E2E-MFD_v1/lib/python3.9/site-packages/mmcv/parallel/distributed.py", line 46, in train_step
    and self.reducer._rebuild_buckets()):
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 358 359 360
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error.
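
As the message suggests, the usual workaround is to enable unused-parameter detection. A minimal sketch of two routes, assuming the mmrotate/mmdetection convention where apis/train.py forwards cfg.get('find_unused_parameters', False) to the DDP wrapper (the wrap_model helper below is hypothetical, for illustration only):

```python
# Route 1 (mmrotate/mmdetection convention): add one line to the training
# config file; apis/train.py typically forwards it to the DDP wrapper via
# cfg.get('find_unused_parameters', False):
#
#     find_unused_parameters = True
#
# Route 2: the equivalent when building vanilla DDP by hand.
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model(model: nn.Module, local_rank: int) -> DDP:
    # find_unused_parameters=True lets the reducer tolerate parameters
    # that receive no gradient in a given iteration (e.g. the indices
    # 358-360 reported above), at some extra traversal cost per step.
    return DDP(
        model.cuda(local_rank),
        device_ids=[local_rank],
        find_unused_parameters=True,
    )
```

Note that find_unused_parameters=True only masks the symptom: if the parameters at indices 358-360 are supposed to be trained, the real fix is to make sure their outputs actually participate in the loss.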

@kaka-Cao
Collaborator

Could you debug the code and provide more detailed error information?
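
One way to get that detail without stepping through the code: as the error message itself notes, TORCH_DISTRIBUTED_DEBUG makes PyTorch report which parameters missed gradients. A minimal sketch (must take effect before DDP is constructed, e.g. in the launch environment or at the top of tools/train.py):

```python
import os

# With DETAIL, PyTorch reports parameter *names* (not just indices
# like 358-360) that did not receive gradients when this
# RuntimeError is raised; INFO is a lighter-weight alternative.
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"  # or "INFO"
```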
