torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(set) __contains__ [UserDefinedObjectVariable()] {} #664
Comments
Looks like we're eventually asking dynamo to do something that it cannot, due to our autograd. Triage: is there something we can do to avoid tripping dynamo, or do we need to report this upstream?
Asking dynamo to do something that it cannot, due to our generated backward trace and its use of `set.__contains__` on user-defined objects; setting fullgraph=False might fix this problem.
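For context, here is a minimal, self-contained sketch of the kind of pattern dynamo is objecting to: a membership test (`__contains__`) on a set of user-defined objects inside a compiled region. The `Token` class and the set are hypothetical stand-ins, and whether this exact snippet raises `Unsupported` or compiles cleanly depends on the PyTorch version:

```python
import torch

class Token:
    """An arbitrary user-defined object (hypothetical stand-in)."""

SPECIAL_TOKENS = {Token()}  # a set of user-defined objects

def fn(x, tok):
    # A membership test on a set of user-defined objects is the kind of
    # construct dynamo can fail to trace; with fullgraph=True it surfaces
    # as torch._dynamo.exc.Unsupported instead of a silent graph break.
    if tok in SPECIAL_TOKENS:
        return x + 1
    return x - 1

compiled = torch.compile(fn, fullgraph=True)
compiled(torch.randn(2), Token())
```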
@wprazuch can I ask you to do a one-off that tests this with fullgraph=False? (I don't know that this is the long-term solution, but it will allow us to have a more reasoned discussion about the long-term solution.)
We can confirm that after the modification in torch_compile.py, the error no longer occurs.
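The exact diff is not shown in this thread, but assuming the modification is passing `fullgraph=False` to the `torch.compile` call inside torch_compile.py, a minimal sketch of the behavior difference looks like this:

```python
import torch

def fn(x):
    return torch.sin(x) + 1

# With fullgraph=False (the default), dynamo inserts a graph break and
# falls back to eager execution at constructs it cannot trace, instead
# of raising torch._dynamo.exc.Unsupported as fullgraph=True does.
compiled = torch.compile(fn, fullgraph=False)
print(compiled(torch.randn(4)))
```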
Thanks Martyna, Wojciech!
Triage review: is this still happening, or can we close it?
We don't see it anymore in our logs, so I think it is resolved 👍
🐛 Bug
An unsupported-operation error occurs when running models with the Thunder inductor executor under FSDP ZeRO-2/ZeRO-3 (the traceback is under Additional context below).
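For reference, a minimal sketch of enabling Thunder's torch.compile/inductor executor, which is the configuration this report concerns. The import path and the `executors` argument are assumptions based on thunder/executors/torch_compile.py and may differ across Thunder versions:

```python
import torch
import thunder

# Hypothetical import path; the executor lives in
# thunder/executors/torch_compile.py, but its public name may
# differ across Thunder versions.
from thunder.executors.torch_compile import torch_compile_ex

model = torch.nn.Linear(8, 8)

# thunder.jit with the torch.compile executor enabled; the generated
# backward trace is then handed to dynamo/inductor, which is where the
# Unsupported error in the title is raised.
jitted = thunder.jit(model, executors=[torch_compile_ex])
out = jitted(torch.randn(2, 8))
```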
To Reproduce
Steps to reproduce the behavior:
Run in the container:
Expected behavior
The model should run, or we should get an OOM error.
Environment
As in the Docker image
Additional context
We reproduced this with FSDP (1 and 2 nodes, 8 GPUs), ZeRO-2/ZeRO-3.
The traceback is below: