Fix FusedAdam DTensor compatibility issue #2425
base: main
Conversation
Use zeros_like(param)/empty_like(param) to support DTensor. Signed-off-by: jianbinc <shjwudp@gmail.com>
Force-pushed from 0b1d2db to 629c786.
Greptile Overview
Greptile Summary: Fixed the DTensor compatibility issue in the FusedAdam optimizer by replacing torch.zeros(param.shape)/torch.empty(param.shape) with torch.zeros_like(param)/torch.empty_like(param) in optimizer state initialization.
Note: Line 388 still uses torch.zeros(param.shape) (the store_param_remainders path, shown in the diff excerpt below).
Confidence Score: 5/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant Optimizer as FusedAdam
    participant Param as Parameter/DTensor
    participant State as Optimizer State
    Note over Optimizer,State: State Initialization Flow
    Optimizer->>Optimizer: initialize_state(param)
    Optimizer->>Optimizer: _initialize_state(param, "exp_avg")
    alt Before this PR (broken for DTensor)
        Note over Optimizer,Param: torch.zeros(param.shape)<br/>returns global shape tensor
        Optimizer->>Param: param.shape
        Param-->>Optimizer: global_shape (e.g., [1024, 512])
        Optimizer->>State: torch.zeros(global_shape)
        Note over State: Creates tensor with global shape<br/>instead of local DTensor shape
    end
    alt After this PR (DTensor compatible)
        Note over Optimizer,Param: torch.zeros_like(param)<br/>preserves DTensor structure
        Optimizer->>Param: torch.zeros_like(param)
        Param-->>Optimizer: DTensor with correct local shape
        Optimizer->>State: Store DTensor state
        Note over State: Correctly creates DTensor state<br/>matching parameter structure
    end
    Note over Optimizer,State: State is now compatible with<br/>distributed tensor parallelism
```
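The behavior in the diagram can be reproduced directly. A minimal sketch, not code from the PR, assuming a PyTorch release recent enough to expose the public torch.distributed.tensor API and two GPUs launched via torchrun:

```python
# Minimal repro sketch (illustrative, not from the PR).
# Run with: torchrun --nproc_per_node=2 dtensor_state_demo.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (2,))
# Parameter sharded along dim 0: global shape [1024, 512], local [512, 512]
param = distribute_tensor(torch.randn(1024, 512), mesh, [Shard(0)])

# Before this PR: a plain tensor with the *global* shape on every rank
state_old = torch.zeros(param.shape)
print(type(state_old).__name__, tuple(state_old.shape))              # Tensor (1024, 512)

# After this PR: a DTensor whose placement and local shape match the parameter
state_new = torch.zeros_like(param)
print(type(state_new).__name__, tuple(state_new.to_local().shape))   # DTensor (512, 512)
```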
1 file reviewed, no comments
| """ | ||
| dtype = self.name_to_dtype_map[state_name] | ||
| if store_param_remainders: | ||
| data = torch.zeros(param.shape, dtype=torch.int16, device=param.device) |
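Per the Greptile note above, this store_param_remainders branch still allocates by global shape. If it ever needs the same treatment, the same *_like pattern should apply; a hedged sketch, not part of this PR:

```python
# Hypothetical follow-up (not in this PR): preserve DTensor placement for the
# int16 parameter-remainder state instead of allocating by global shape.
data = torch.zeros_like(param, dtype=torch.int16)
```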
Could we also change run_fsdp2_model.py to use the TE FusedAdam optimizer instead of torch Adam, so we don't break this again in the future?
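A possible shape for that change, sketched under the assumption that the test currently builds torch.optim.Adam and that TE exposes FusedAdam under transformer_engine.pytorch.optimizers (the helper name here is illustrative):

```python
# Sketch for run_fsdp2_model.py (illustrative, not the actual file): exercise
# TE's FusedAdam under FSDP2/DTensor so a regression like #2424 is caught.
import torch
from transformer_engine.pytorch.optimizers import FusedAdam

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Before: torch.optim.Adam(model.parameters(), lr=1e-4)
    return FusedAdam(model.parameters(), lr=1e-4)
```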
Description
Recent modifications to FusedAdam made it incompatible with DTensor: in the optimizer state initialization path, state tensors are now allocated as plain tensors with the DTensor's global shape instead of as DTensors matching the parameter's sharding and local shape.
To maintain compatibility with DTensor, the state tensors should be initialized with zeros_like(param) or empty_like(param) instead of zeros(param.shape) or empty(param.shape).
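Concretely, the fix amounts to swapping the allocation call inside state initialization. A minimal before/after sketch, assuming the dtype/device handling shown in the excerpt above:

```python
# Before (breaks DTensor): plain tensor allocated with the global shape
data = torch.zeros(param.shape, dtype=dtype, device=param.device)

# After (DTensor-compatible): preserves the parameter's placement and local shape
data = torch.zeros_like(param, dtype=dtype)
```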
Fixes #2424
Type of change
Bug fix
Changes
Please list the changes introduced in this PR:
- Initialize FusedAdam optimizer state with torch.zeros_like(param)/torch.empty_like(param) instead of torch.zeros(param.shape)/torch.empty(param.shape), so DTensor parameters receive DTensor state matching their sharding.
Checklist: