Has anyone successfully used anomalib on a TPU instance on Google Colab? #2251
WestwardWinds started this conversation in General
I haven't seen much in the docs or on GitHub about XLA support in Anomalib. In theory it should work, because I can start the training process on a TPU instance, but right as training starts I run into various strange errors. So while I would love to troubleshoot my own code, first I'd like to know whether anyone has been using TPU instances to train models with Anomalib. Below I'll share an example of Lightning working correctly with a TPU instance, and then a few of the errors I've encountered trying to make Anomalib do the same.
Anomalib version: 1.2.0.dev0
I can access the TPUs directly using Lightning, for example:
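Roughly along these lines, a minimal sketch (the toy model and random dataset are just placeholders to keep it self-contained, not my real project code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
    # Request the TPU accelerator and let Lightning pick up all available cores.
    trainer = pl.Trainer(accelerator="tpu", devices="auto", max_epochs=1)
    trainer.fit(ToyModel(), DataLoader(dataset, batch_size=32))
```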
Output: training runs and all 8 TPU cores are used.
But anything I try to do using Anomalib encounters a lot of errors. For example, even a barebones training run fails:
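A minimal sketch of the kind of thing I'm running (MVTec and Padim are just stand-ins, and I'm assuming here that Engine forwards the accelerator/devices kwargs to the underlying Lightning Trainer):

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Padim

if __name__ == "__main__":
    datamodule = MVTec(category="bottle")  # downloads MVTec AD if not present
    model = Padim()
    # Assumption: extra kwargs are passed through to the Lightning Trainer.
    engine = Engine(accelerator="tpu", devices="auto")
    engine.fit(model=model, datamodule=datamodule)
```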
Result: the run errors out, and it only picks up 1 TPU core instead of the 8 I get using Lightning directly.
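For reference, a quick check of what the runtime itself exposes (assuming torch_xla is installed, as it is on a Colab TPU runtime):

```python
import torch_xla.core.xla_model as xm

# List the XLA devices the runtime can see; on a Colab v2-8/v3-8 TPU I'd
# expect 8 cores here (exact device strings depend on the torch_xla version).
print(xm.get_xla_supported_devices())
```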
This is not the only error I've gotten; here I just wanted to see whether I could get a barebones Anomalib example working at all. In my own projects I've also run into access errors such as this:
And some others related to CUDA-to-XLA conversion, which I think were related to the dataset creation, but I don't remember exactly.
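My guess (not confirmed) is that those come from something in the data pipeline being pinned to CUDA, which doesn't exist on a TPU runtime; on TPU the target would be the XLA device instead, e.g.:

```python
import torch
import torch_xla.core.xla_model as xm

# On a TPU runtime there is no CUDA device, so .cuda() / torch.device("cuda")
# fails; tensors have to go to the XLA device instead.
device = xm.xla_device()
x = torch.randn(4, 3, 256, 256).to(device)
print(x.device)  # e.g. xla:0
```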