Conversation

pjin-nvidia (Contributor) commented Oct 7, 2025

What does this PR do?

The policy.logprob_chunk_size config value was not being passed to loss functions that directly call from_parallel_logits_to_logprobs or get_logprobs_from_vocab_parallel_logits, causing out-of-memory (OOM) errors in some training run configurations. This PR threads the configured chunk size through to those call sites.
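For context, a minimal sketch of the chunking idea, assuming plain (non-vocab-parallel) logits; the actual from_parallel_logits_to_logprobs and get_logprobs_from_vocab_parallel_logits paths additionally handle tensor-parallel vocab shards, so this is illustrative only:

import torch
import torch.nn.functional as F

def chunked_token_logprobs(logits: torch.Tensor,
                           targets: torch.Tensor,
                           chunk_size: int) -> torch.Tensor:
    # Compute per-token log-probabilities one sequence chunk at a time so the
    # [batch, chunk, vocab] log-softmax temporary stays small instead of
    # materializing a full [batch, seq, vocab] tensor at once.
    pieces = []
    for start in range(0, logits.size(1), chunk_size):
        chunk = logits[:, start:start + chunk_size].float()
        logprobs = F.log_softmax(chunk, dim=-1)
        idx = targets[:, start:start + chunk_size].unsqueeze(-1)
        pieces.append(logprobs.gather(dim=-1, index=idx).squeeze(-1))
    return torch.cat(pieces, dim=1)

Without chunking, the equivalent computation materializes the full log-softmax at once, which is what leads to the OOMs described above when the chunk size is not honored.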

Issues

List issues that this PR closes:

Usage

  • A hedged usage sketch is shown below.
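A minimal sketch, assuming a dict-style policy config; only the logprob_chunk_size key comes from this PR, the surrounding layout, value, and call shape are assumptions for illustration:

# Hypothetical policy config fragment: logprob_chunk_size is the key discussed
# in this PR; the surrounding layout and the value 1024 are assumptions.
policy_config = {
    "logprob_chunk_size": 1024,  # compute logprobs over 1024-token chunks to bound peak memory
}

# Loss functions that compute token logprobs from logits should receive this
# value so chunking is applied there as well (hypothetical call shape):
# loss_fn(..., logprob_chunk_size=policy_config["logprob_chunk_size"])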

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...


Signed-off-by: Peter Jin <pjin@nvidia.com>

github-actions bot commented Oct 7, 2025

ℹ️ File Consistency Check

Check based on commit: d22624e (PR #1295 from pjin/chunk-size)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/dtensor_policy_worker.py
  • nemo_rl/models/policy/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

Signed-off-by: Peter Jin <pjin@nvidia.com>

github-actions bot commented Oct 7, 2025

ℹ️ File Consistency Check

Check based on commit: b962760 (PR #1295 from pjin/chunk-size)

✅ DTensor Policy Worker Synchronization Check (same notice as above for this commit).

Signed-off-by: Peter Jin <pjin@nvidia.com>

github-actions bot commented Oct 8, 2025

ℹ️ File Consistency Check

Check based on commit: 2469dec (PR #1295 from pjin/chunk-size)

✅ DTensor Policy Worker Synchronization Check (same notice as above for this commit).
