[Tensor Subclasses] Trace transform to interpret `__torch_dispatch__` (#1394)
Conversation
Trace transform to interpret `__torch_dispatch__` and get the correct output type. Depends on #1393.
To my mind, we would get better error handling (e.g. stack traces) if we resolved the …
There's a lot going on with this PR, and it's pretty complicated. Maybe we should schedule an online sync, @crcrpar and @IvanYashchuk, to see if we can make it more incremental?
I would still prefer to do the flattening during interpretation for the benefit of getting error messages with backtraces.
We don't have a nice time slot that works for all of us. EDIT: #1583, #1584, and #1585 are a sequence of PRs that cover this one and #1415.
I'm embarrassingly not familiar with the interpretation implementation at all. Could you give me some pointers on adding flattening so that it happens during interpretation? EDIT: …
OK; we can try to work asynchronously. For a first incremental PR, would you create a PR adding support for aten operators? In particular, if someone were to call something like …
So far in my implementation there's an optimistic mapping from core aten ops to ltorch ops: see thunder/transforms/tensor_wrapper_subclass.py, lines 339 to 348 (at 515d425).
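For context, a minimal sketch of what such an optimistic table could look like (the entries below are illustrative, not the actual contents of tensor_wrapper_subclass.py; they assume each listed aten overload has a 1:1 ltorch counterpart):

```python
import torch
import thunder.torch as ltorch

# Illustrative only: an "optimistic" table keyed by core aten overloads,
# valued by the thunder.torch (ltorch) symbols assumed to match them 1:1.
_ATEN_TO_LTORCH = {
    torch.ops.aten.add.Tensor: ltorch.add,
    torch.ops.aten.mul.Tensor: ltorch.mul,
    torch.ops.aten.neg.default: ltorch.neg,
    torch.ops.aten.reshape.default: ltorch.reshape,
}

def lookup_ltorch(aten_op):
    # Optimistic lookup: fail loudly when an aten overload has no mapping yet.
    try:
        return _ATEN_TO_LTORCH[aten_op]
    except KeyError as exc:
        raise NotImplementedError(f"no ltorch mapping for {aten_op}") from exc
```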
By the way, if we're to cover core aten ops, then I'd say it'd be worth thinking of using thunder as a custom backend after AOTAutograd.
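A hedged sketch of that direction, using PyTorch's `aot_autograd` backend helper so the compiler callback sees a core-aten-level FX graph; `thunder_compile_fx` is a hypothetical stand-in here, not an existing thunder entry point:

```python
import torch
from torch._dynamo.backends.common import aot_autograd

def thunder_compile_fx(gm: torch.fx.GraphModule, example_inputs):
    # After AOTAutograd, gm contains (decomposed) core aten ops; a real
    # integration would translate this graph into a thunder trace here.
    gm.print_readable()
    return gm.forward  # fall back to running the traced graph as-is

my_backend = aot_autograd(fw_compiler=thunder_compile_fx)

@torch.compile(backend=my_backend)
def f(x, y):
    return torch.nn.functional.gelu(x) + y

f(torch.randn(4, requires_grad=True), torch.randn(4))
```

The appeal would be that AOTAutograd has already normalized the program to core aten ops and split out the backward, so covering those ops becomes the whole interface.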
Currently, thunder/torch/__init__.py already has registrations along these lines (lines 172 to 173, line 4700, and line 4308 at 9d79b8d), so extending thunder/torch/__init__.py would be fair.
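To make the ask concrete, a hypothetical sketch of registering an aten overload as its own symbol, in the style of the existing `torchsymbol` registrations in thunder/torch/__init__.py (whether `torchsymbol` accepts an `OpOverload`, and the `id` spelling, are assumptions):

```python
from numbers import Number

import torch
import thunder.clang as clang
from thunder.torch import TensorLike, torchsymbol

# Hypothetical registration; the real PR may spell this differently.
@torchsymbol(torch.ops.aten.add.Tensor, id="aten.add.Tensor")
def aten_add(a: TensorLike, b: TensorLike, *, alpha: Number = 1) -> TensorLike:
    # aten.add.Tensor carries an alpha multiplier, unlike a plain a + b
    if alpha != 1:
        b = clang.mul(b, alpha)
    return clang.add(a, b)
```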
OK; expanding thunder/torch/__init__.py sounds good for now. Let's not "optimistically" try to map ATen operations to torch operations for the moment, but just treat them like different operations. Would you submit a PR adding `torch.ops.aten.add` to the torch operations? EDITED BELOW. As a follow-up PR to that, what about working with a program like this:
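A minimal sketch of the kind of program in question, using a hypothetical `ScaleTensor` wrapper subclass (the class name and its scaling semantics are illustrative only, and whether `thunder.jit` accepts such inputs is exactly what this PR targets):

```python
import torch
import thunder

class ScaleTensor(torch.Tensor):
    """Hypothetical wrapper subclass: stores a plain tensor plus a scale."""

    @staticmethod
    def __new__(cls, data: torch.Tensor, scale: float):
        return torch.Tensor._make_wrapper_subclass(
            cls, data.shape, dtype=data.dtype, device=data.device,
            requires_grad=data.requires_grad,
        )

    def __init__(self, data: torch.Tensor, scale: float):
        self._data = data
        self._scale = scale

    def __repr__(self):
        return f"ScaleTensor(scale={self._scale}, data={self._data})"

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap to plain tensors (materializing the scale), run the aten op,
        # and return the plain result.
        def unwrap(t):
            return t._data * t._scale if isinstance(t, ScaleTensor) else t
        return func(*[unwrap(a) for a in args],
                    **{k: unwrap(v) for k, v in kwargs.items()})

def f(x):
    return x  # identity: the trace should only flatten/validate the subclass

jitted = thunder.jit(f)
t = ScaleTensor(torch.randn(2, 2), 2.0)
print(jitted(t))
```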
Where the initial trace shows the tensor subclass and its flattened information, and the prologue validates the subclass and its flattening. Then I'd be curious to see addition with that tensor, like this:
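Continuing with the hypothetical `ScaleTensor` from the sketch above (same imports and class definition assumed), the addition case would look roughly like:

```python
def g(x, y):
    return x + y  # torch.add should route through ScaleTensor.__torch_dispatch__

jitted_add = thunder.jit(g)
a = ScaleTensor(torch.randn(2, 2), 2.0)
b = torch.randn(2, 2)
print(jitted_add(a, b))
```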
This can be translated for execution by PyTorch, but I think working through this will be interesting. Then the follow-up question is what the grad transform for it looks like, and how this operation should be translated for execution by nvFuser.
IMHO, it'd sound more natural to me to register core aten ops to …. Then registration comes after the aforementioned PR, before #1584 and #1585, followed by some refinement of the prologue and of how traces with tensor subclasses look, accompanied by #1584.
With the experience of #1585, I do think we'd have to let the trace get split into forward and backward before interpreting …
For reference, the approach of this PR: tensor subclasses that call `torch.Tensor._make_wrapper_subclass` in their `__new__` and define their own `__torch_dispatch__` are handled by interpreting the `BoundSymbol`s of a trace one by one, so that we can make a trace as free from actual tensor subclass objects as possible and write out the actual behavior that is defined by `__torch_dispatch__` in a trace.
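A purely conceptual model of that "interpret the BoundSymbols one by one" idea, using toy types rather than thunder's real `BoundSymbol`/trace classes:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class ToyBoundSymbol:
    name: str
    args: Tuple
    # Hypothetical hook standing in for what __torch_dispatch__ would emit
    # for this op when a wrapper subclass is among the inputs.
    dispatch: Optional[Callable[..., List["ToyBoundSymbol"]]] = None

def flatten_subclass_ops(trace: List[ToyBoundSymbol], is_subclass) -> List[ToyBoundSymbol]:
    """Walk the symbols one by one; expand those touching a subclass into the
    plain-tensor ops their __torch_dispatch__ defines, keep the rest as-is."""
    out: List[ToyBoundSymbol] = []
    for bsym in trace:
        if bsym.dispatch is not None and any(is_subclass(a) for a in bsym.args):
            out.extend(bsym.dispatch(*bsym.args))
        else:
            out.append(bsym)
    return out
```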