TensorProxy.shape should be unpacked automatically #1253
Comments
I'm a bit skeptical about the alternative here, and my gut feeling is that the main solution (to unpack the shape "close" to where it is used) is preferable. The tricky thing with re-unpacking could be that I'm not sure we are good at having one name assigned multiple times, so we may need new names every time we do this.
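The naming concern above can be sketched in plain Python. This is a toy illustration only (the names `fresh_name`, `ToyShapeUnpacker`, and the trace format are hypothetical, not thunder's API): if the trace format assumes each name is assigned once, re-unpacking a shape close to each use site has to mint fresh names every time.

```python
import itertools

# Toy illustration (not thunder's implementation): a global counter hands
# out fresh proxy names so that re-unpacking a shape never reassigns a name.
_counter = itertools.count()

def fresh_name(prefix="i"):
    return f"{prefix}{next(_counter)}"

class ToyShapeUnpacker:
    """Records an unpack instruction each time a shape is queried."""
    def __init__(self):
        self.trace = []  # list of (output_names, instruction_text) pairs

    def unpack_shape(self, tensor_name, rank):
        # Unpacking close to the use site: mint new names on every query,
        # since the trace assumes single assignment per name.
        names = [fresh_name() for _ in range(rank)]
        self.trace.append((names, f"{', '.join(names)} = shape({tensor_name})"))
        return names

u = ToyShapeUnpacker()
first = u.unpack_shape("a", 2)
second = u.unpack_shape("a", 2)  # same shape queried again, fresh names
assert first != second           # no name is ever assigned twice
```

The cost of this scheme is exactly the downside raised above: the same shape query produces a growing set of aliased names, one batch per use site.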
Sorry, I missed this email earlier (gmail access is limited at the moment, depending on how reliable the vpn is).
I'm already hitting this one. In #1260, every shape query results in an unpack in the trace, e.g. with the following program:
We have a trace:
I'm seeing a couple issues here:
🐛 Bug
`TensorProxy.shape` remains an attribute, hence accessing it won't leave an unpack in the trace. This causes issues when we have a `NumberProxy` in `TensorProxy.shape`.

In #1201 commit 26f883e, I have to rely on this hack. Otherwise, the grad transform would see an invalid trace, e.g. in a trivial slice:
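The original trace snippet did not survive extraction, so here is a toy validity check in plain Python (illustrative only; `is_valid` and the `(outputs, inputs)` trace encoding are hypothetical, not thunder's code) showing why the trace is invalid without the explicit unpack: a subsymbol reads a shape proxy that nothing in the trace produced.

```python
# Toy trace checker: a trace is valid if every name a (sub)symbol reads
# was produced earlier (or is a trace input).
def is_valid(trace, inputs=("a",)):
    defined = set(inputs)
    for outputs, reads in trace:
        if not set(reads) <= defined:
            return False  # reads a name with no producer -> invalid trace
        defined.update(outputs)
    return True

# Without an explicit unpack, getitem's subsymbols read i0 (a.shape[0])
# even though nothing in the trace produced it.
without_unpack = [(["t1"], ["a", "i0"])]
assert not is_valid(without_unpack)

# With the explicit unpack of a.shape, i0 has a producer and the trace
# is well formed.
with_unpack = [(["i0", "i1"], ["a"]),   # i0, i1 = a.shape
               (["t1"], ["a", "i0"])]   # t1 = getitem(a, ...)
assert is_valid(with_unpack)
```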
Without the explicit unpack of `a.shape`, the subsymbols in `ltorch.getitem` would access `i0`, which is implicitly carried by `a.shape` but not explicitly present in the trace.

Alternative
This problem could also be properly resolved in the prologue trace, i.e. here `i1` is unpacked in the prologue because it is consumed by the top-level symbol `ltorch.getitem`. Unfortunately, the uses of subsymbols are not considered as consumed by the computation trace today (see code), so `i0` isn't getting unpacked in the prologue yet.

So for an input TensorProxy, I think prologue unpacking is the right choice here. For intermediate tensors, it might be a mixed solution, which goes back to the conversation we had in #1133.
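The gap described above can be sketched with a small toy (illustrative only; `Sym`, `consumed_top_level`, and `consumed_recursive` are hypothetical names, not thunder's code): if consumption is computed from top-level symbol arguments only, a proxy read solely inside a subsymbol never looks consumed, so the prologue never unpacks it.

```python
# Toy sketch of prologue-unpacking consumer analysis.
class Sym:
    def __init__(self, args, subsymbols=()):
        self.args = args                  # names this symbol reads
        self.subsymbols = list(subsymbols)

def consumed_top_level(trace):
    # Behaviour described in the issue: only top-level args count.
    return {a for s in trace for a in s.args}

def consumed_recursive(trace):
    # Alternative: also walk subsymbols so their reads count as consumed.
    out = set()
    for s in trace:
        out |= set(s.args) | consumed_recursive(s.subsymbols)
    return out

# getitem consumes i1 at the top level, while its subsymbol reads i0.
getitem = Sym(args=["a", "i1"],
              subsymbols=[Sym(args=["a", "i0"])])
trace = [getitem]

assert "i1" in consumed_top_level(trace)   # i1 gets a prologue unpack
assert "i0" not in consumed_top_level(trace)  # i0 is missed today
assert "i0" in consumed_recursive(trace)   # recursive walk would catch it
```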