PyTorch users are accustomed to enabling mixed precision training with torch.autocast. I think Thunder should support it as the way of invoking the autocast transform for Thunder functions.
It's confusing for thunder.torch.linear and thunder.prims.linear to behave differently when both have an autocast rule registered. And if a custom executor registers an autocast rule for its own op, that rule is now out of reach: the same PyTorch code that worked without the custom executor no longer picks it up.
🐛 Bug
Before #705, Thunder applied the autocast transform to all functions invoked within a torch.autocast region. #705 changed the logic to apply autocast dispatch only to "torchsymbols", introducing a regression for all other symbols.
Here's a reproducer, run both before and after the above PR was merged:
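The original reproducer did not survive extraction. As a stand-in, here is a minimal sketch (plain eager PyTorch, with hypothetical names and shapes) of the behavior the autocast transform is expected to match: under torch.autocast, linear runs in the lower precision.

```python
import torch

# Hypothetical stand-in for the elided reproducer (not the original code).
def fn(x, w):
    return torch.nn.functional.linear(x, w)

x = torch.randn(4, 4)
w = torch.randn(4, 4)

# In eager mode, torch.autocast downcasts linear's inputs, so the output
# is bfloat16. Thunder's autocast transform should reproduce this for every
# symbol with a registered autocast rule, not just for torchsymbols.
with torch.autocast("cpu", dtype=torch.bfloat16):
    out = fn(x, w)

assert out.dtype == torch.bfloat16
```

A thunder.jit-compiled version of `fn` called inside the same autocast region is what regressed: before c816506 its trace showed the downcast, after it does not for non-torchsymbols.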
trace before c816506:
trace after c816506:
cc @crcrpar @apaz-cli