fix: Add helpful error message if prepare_for_*() not called #1333
base: main
Conversation
📝 Walkthrough
Adds a per-instance readiness flag to MegatronPolicyWorker. The methods train, get_logprobs, and generate now raise a RuntimeError if called before preparation. The flag is set during prepare_for_training and prepare_for_lp_inference, enforcing an explicit preparation step before GPU-bound execution.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Caller
    participant W as MegatronPolicyWorker
    Note over W: is_prepared defaults to False
    C->>W: prepare_for_training() / prepare_for_lp_inference()
    activate W
    W->>W: set is_prepared = True
    deactivate W
    alt Prepared
        C->>W: train()/get_logprobs()/generate()
        W-->>C: proceed with GPU-bound execution
    else Not prepared
        C->>W: train()/get_logprobs()/generate()
        W-->>C: RuntimeError("Model must be prepared before execution")
    end
    Note over W: Guards enforce explicit preparation before use
```
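In code terms, the guard described by the walkthrough and diagram amounts to a boolean flag checked at the top of each GPU-bound method. The sketch below is illustrative rather than the PR's exact diff: the method bodies are elided, the _check_prepared helper is hypothetical, and the error wording is paraphrased.

```python
# Minimal sketch of the readiness-guard pattern; the real MegatronPolicyWorker
# has many more responsibilities and a different error message.
class MegatronPolicyWorker:
    def __init__(self):
        # Readiness flag defaults to False until a prepare_for_*() call.
        self.is_prepared = False

    def prepare_for_training(self):
        # ... move model and optimizer state onto the GPU ...
        self.is_prepared = True

    def prepare_for_lp_inference(self):
        # ... move the model onto the GPU for logprob inference ...
        self.is_prepared = True

    def _check_prepared(self, method_name: str) -> None:
        # Hypothetical helper, introduced here for brevity.
        if not self.is_prepared:
            raise RuntimeError(
                f"{method_name}() called before prepare_for_training()/"
                "prepare_for_lp_inference(); call a prepare_for_*() method first."
            )

    def train(self, *args, **kwargs):
        self._check_prepared("train")
        # ... GPU-bound training step ...

    def get_logprobs(self, *args, **kwargs):
        self._check_prepared("get_logprobs")
        # ... GPU-bound logprob computation ...

    def generate(self, *args, **kwargs):
        self._check_prepared("generate")
        # ... GPU-bound generation ...
```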
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
✨ Finishing touches
Actionable comments posted: 3
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
nemo_rl/models/policy/megatron_policy_worker.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts
Files:
nemo_rl/models/policy/megatron_policy_worker.py
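As a small illustration of several of the naming and docstring conventions above, here is a hedged sketch; the function is hypothetical and not part of NeMo-RL.

```python
# Hypothetical example illustrating the conventions above; not NeMo-RL code.
def compute_latency_percentile(samples: list[float]) -> float:
    """Return the 99th-percentile latency from a list of samples.

    Args:
        samples: Non-empty list of latency measurements in seconds.

    Returns:
        The 99th-percentile latency.

    Raises:
        ValueError: If ``samples`` is empty.
    """
    if not samples:
        raise ValueError("samples must be non-empty")
    ordered = sorted(samples)
    # A local name that would otherwise start with a digit gets the k_ prefix.
    k_99th_index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[k_99th_index]
```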
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)
Files:
nemo_rl/models/policy/megatron_policy_worker.py
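As an aside on the config guidance above, the NotRequired pattern looks roughly like this; the key names below are hypothetical and not taken from NeMo-RL's actual config schema.

```python
# Hypothetical TypedDict illustrating NotRequired for optional config keys;
# required keys are read directly with no hidden defaults (YAML supplies them).
from typing import NotRequired, TypedDict


class PolicyConfig(TypedDict):
    precision: str  # required: accessed as policy_cfg["precision"]
    activation_checkpointing: NotRequired[bool]  # optional: may be absent from YAML
```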
🧬 Code graph analysis (1)
nemo_rl/models/policy/megatron_policy_worker.py (4)
nemo_rl/models/policy/interfaces.py (2)
  offload_before_refit (149-150)
  prepare_for_training (125-126)
nemo_rl/models/policy/lm_policy.py (2)
  offload_before_refit (735-738)
  prepare_for_training (633-636)
nemo_rl/models/policy/dtensor_policy_worker.py (2)
  offload_before_refit (1856-1866)
  prepare_for_training (1831-1852)
nemo_rl/models/policy/dtensor_policy_worker_v2.py (2)
  offload_before_refit (1817-1827)
  prepare_for_training (1792-1813)
🪛 Ruff (0.13.3)
nemo_rl/models/policy/megatron_policy_worker.py
887-890: Avoid specifying long messages outside the exception class
(TRY003)
1156-1159: Avoid specifying long messages outside the exception class
(TRY003)
1451-1454: Avoid specifying long messages outside the exception class
(TRY003)
1787-1787: Unused method argument: args
(ARG002)
1787-1787: Unused method argument: kwargs
(ARG002)
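For context, the TRY003 findings flag long messages passed at the raise site; one common way to satisfy the rule is to move the message into a dedicated exception class. A sketch, assuming a hypothetical NotPreparedError that is not part of this PR:

```python
# Hypothetical refactor satisfying Ruff's TRY003: the long message lives in
# the exception class rather than at each raise site.
class NotPreparedError(RuntimeError):
    def __init__(self, method_name: str):
        super().__init__(
            f"{method_name}() was called before prepare_for_training()/"
            "prepare_for_lp_inference(); prepare the worker first."
        )


# At the call site, the raise then stays short:
#     raise NotPreparedError("train")
```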
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Post automodel integration comment / Comment on PR
- GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (2)
nemo_rl/models/policy/megatron_policy_worker.py (2)
456-456: LGTM! Clean initialization of the readiness flag. The is_prepared flag is appropriately initialized in __init__ and serves its purpose well.
1782-1782: LGTM! Flag is set at the appropriate locations. The is_prepared flag is correctly set to True in both preparation methods, ensuring the guard checks pass after proper initialization. Note: the flag remains True even after offload_after_refit moves the model to CPU. Verify this is intentional - the flag seems to track whether initial preparation was done (not the current GPU state). If the model is used after being offloaded, device mismatch errors will occur naturally.
Also applies to: 1789-1789
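If stricter semantics were wanted, the flag could also be cleared when the model is offloaded, so a stale worker fails the guard instead of hitting a device-mismatch error later. A hypothetical variant, not part of this PR:

```python
# Hypothetical variant (not in this PR): reset the readiness flag on offload.
class MegatronPolicyWorker:  # excerpt-style sketch, not the real class
    def offload_after_refit(self):
        # ... move model weights and optimizer state to CPU ...
        self.is_prepared = False  # would force another prepare_for_*() call
```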
hi @Aniketsy. thanks for the contribution. this approach looks reasonable to me. would you mind adding this in the other policy_workers so we have parity?
Also, a unit test would be appreciated.
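A sketch of the kind of unit test requested here, assuming some lightweight way to construct (or monkeypatch) a worker without a full Megatron/Ray setup; the fixture body and the guarded methods' real argument lists are placeholders.

```python
# Sketch of a unit test for the guard; the fixture is a placeholder and the
# guarded methods' required arguments are elided.
import pytest


@pytest.fixture
def unprepared_worker():
    # Placeholder: build or monkeypatch a MegatronPolicyWorker whose
    # prepare_for_*() methods have not been called.
    raise NotImplementedError("construct a minimal MegatronPolicyWorker here")


def test_methods_raise_before_preparation(unprepared_worker):
    # Each GPU-bound method should fail fast with a clear RuntimeError.
    with pytest.raises(RuntimeError, match="prepared"):
        unprepared_worker.train()  # real signature may require data args
    with pytest.raises(RuntimeError, match="prepared"):
        unprepared_worker.get_logprobs()
    with pytest.raises(RuntimeError, match="prepared"):
        unprepared_worker.generate()
```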
47b0f07 to 95784e1
@terrykong I've updated the changes as per your suggestions; please let me know if this needs improvement.
#1141
Adds checks to train, get_logprobs, and generate in MegatronPolicyWorker to raise a clear error if the model is not prepared for GPU execution.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
Thank you!
Summary by CodeRabbit
New Features
Bug Fixes