
[tensor wrapper subclass] Add trace transform for tensor subclasses #1584

Draft
wants to merge 3 commits into base: tensor_subclass_1

Conversation

@crcrpar (Collaborator) commented Dec 23, 2024

What does this PR do?

Implement:

  • prims for `my_subclass.__tensor_flatten__` and `MySubclass.__tensor_unflatten__`
  • a trace transform to unroll `MySubclass.__torch_dispatch__`.
    • This transform uses torch.fx to understand the extended behavior. The mapping from core ATen ops to thunder.torch ops is currently quite optimistic: it simply looks up each ATen op's name in the thunder.torch namespace.
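The flatten/unflatten protocol the new prims trace through can be sketched as follows. This is a hypothetical, self-contained illustration: a real implementation subclasses `torch.Tensor`, while here a plain `FakeTensor` object stands in for the inner tensor, and the method signatures mirror what PyTorch expects from wrapper subclasses.

```python
class FakeTensor:
    """Stand-in for torch.Tensor (hypothetical, keeps the sketch torch-free)."""
    def __init__(self, data):
        self.data = data


class MySubclass:
    """Wrapper subclass holding one inner tensor plus some metadata."""
    def __init__(self, inner, scale=1.0):
        self._inner = inner
        self._scale = scale

    def __tensor_flatten__(self):
        # Returns the names of the inner-tensor attributes and any extra
        # (non-tensor) metadata needed to reconstruct the subclass.
        return ["_inner"], {"scale": self._scale}

    @staticmethod
    def __tensor_unflatten__(inner_tensors, meta, outer_size=None, outer_stride=None):
        # Rebuilds the subclass instance from the flattened pieces.
        return MySubclass(inner_tensors["_inner"], scale=meta["scale"])


t = MySubclass(FakeTensor([1.0, 2.0]), scale=0.5)
attrs, meta = t.__tensor_flatten__()
rebuilt = MySubclass.__tensor_unflatten__(
    {name: getattr(t, name) for name in attrs}, meta
)
assert rebuilt._scale == 0.5 and rebuilt._inner.data == [1.0, 2.0]
```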

Since `__torch_dispatch__` extends behavior that is implemented at the C++ level, we'd need to apply the transform to the split forward and backward traces separately.
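The "optimistic" op mapping described above can be sketched as a plain name lookup. The dict below is a stand-in for the real `thunder.torch` module namespace (names hypothetical); the point is that the resolution keys only on the ATen op's base name and makes no attempt to verify matching semantics.

```python
# Stand-in for the thunder.torch namespace (hypothetical entries).
thunder_torch_namespace = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}


def aten_to_thunder(aten_qualname: str):
    # e.g. "aten.add.Tensor" -> "add": strip the "aten." prefix and the
    # overload suffix, then look the base name up directly. Fails loudly
    # when no thunder.torch op of that name exists.
    base_name = aten_qualname.split(".")[1]
    try:
        return thunder_torch_namespace[base_name]
    except KeyError:
        raise NotImplementedError(f"no thunder.torch op for {aten_qualname}")


assert aten_to_thunder("aten.add.Tensor")(2, 3) == 5
assert aten_to_thunder("aten.mul.Tensor")(2, 3) == 6
```

A production mapping would also need to reconcile differing argument conventions (overloads, out-variants, defaults) rather than trusting that same name implies same behavior.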

@github-actions github-actions bot added the documentation Improvements or additions to documentation label Dec 23, 2024
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
to support `__torch_dispatch__`.
Since it extends behavior that is implemented at the C++ level,
we'd need to apply the transform to the split forward and backward
traces separately.

Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>