Repro function saved from FX graph is segmented again when passed back to torch.compile
#1521
Comments
Great issue, @kiya00! What a pain that it graph breaks on the code it generated!
Yeah, I didn't realize that before. When I tested the repro function of HF models on … @mruberry, do you think it's OK to use …?
I think so, although @IvanYashchuk or @kshitij12345 may have some ideas. What differences do you think there would be? We can certainly use …
I was wondering if using torch.compiler.allow_in_graph would help. Ref: https://pytorch.org/docs/stable/generated/torch.compiler.allow_in_graph.html
The graph still breaks with torch.compiler.allow_in_graph.
In that case, …
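For context, torch.compiler.allow_in_graph (linked above) tells Dynamo to keep a given callable in the captured graph instead of breaking on it. A minimal sketch of the approach discussed above, assuming the private autocast helpers are the calls being allowed:

```python
import torch
from torch.amp import autocast_mode

# Allow Dynamo to keep the private autocast helpers in the captured graph
# instead of falling back (graph-breaking) when it encounters them.
torch.compiler.allow_in_graph(autocast_mode._enter_autocast)
torch.compiler.allow_in_graph(autocast_mode._exit_autocast)
```

As reported above, the graph still broke in this case, so this alone does not resolve the issue.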
Referenced commit: …ript to avoid segmentation by Dynamo (#1521)
Note: If you have a model or program that is not supported yet but should be, please use the program coverage template.
🐛 Bug
Since the saved reproducer function has already been processed by Dynamo, it may contain Dynamo-inserted operators such as torch.amp.autocast_mode._enter_autocast. If we pass this repro function to torch.compile again, it graph-breaks on this kind of operator, which can result in slower performance for the torch.compile case.
To Reproduce
Running the above script produces a repro function; we modified it to print out the graph-break information:
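As an illustration (a minimal hypothetical sketch, not the actual generated script): a reproducer saved from an FX graph calls the private autocast helpers directly instead of using a `with torch.autocast(...)` block, and `torch._dynamo.explain` is one way to print the resulting graph-break information (running with `TORCH_LOGS="graph_breaks"` is another):

```python
import torch
import torch._dynamo
from torch.amp import autocast_mode

# Hypothetical stand-in for a Dynamo-saved reproducer: the original
# `with torch.autocast(...)` block has been lowered to explicit
# _enter_autocast/_exit_autocast calls in the saved code.
def repro_fn(x):
    ctx = autocast_mode._enter_autocast("cpu", torch.bfloat16, True, True)
    y = torch.matmul(x, x) + 1.0
    autocast_mode._exit_autocast(ctx)
    return y

# Compiling the reproducer again graph-breaks on the helper calls; explain()
# reports the number of captured graphs, graph breaks, and the break reasons.
explanation = torch._dynamo.explain(repro_fn)(torch.randn(4, 4))
print(explanation)
```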
Possible solution
Instead of using torch.compile(), we can try passing the repro function directly to Inductor.
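A hedged sketch of that direction, assuming the reproducer is available as an FX GraphModule (torch._inductor.compile is an internal API and may change between releases):

```python
import torch
import torch.fx
import torch._inductor

# Hypothetical stand-in for the saved reproducer; in practice this would be the
# FX GraphModule that the repro script reconstructs, not a hand-written module.
class Repro(torch.nn.Module):
    def forward(self, x):
        return torch.matmul(x, x) + 1.0

gm = torch.fx.symbolic_trace(Repro())
example_inputs = [torch.randn(4, 4)]

# Hand the graph module straight to Inductor. Because Dynamo is bypassed, it
# has no opportunity to re-segment the graph with new graph breaks.
compiled_fn = torch._inductor.compile(gm, example_inputs)
print(compiled_fn(*example_inputs))
```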
cc: @kshitij12345 @mruberry @IvanYashchuk
cc @crcrpar @apaz-cli