-
Hi, I’ve been trying to implement something based on the low-level Pyro/GPyTorch integration tutorial. I’m working with RNA sequencing data and fitting an individual model for each gene. When I iterate over multiple genes, the first model trains correctly, but every subsequent model completely fails to learn and stays stuck at its initial loss. I’ve also been working on folding the Pyro pieces into a batched model, which is almost certainly the better approach anyway, but I’m really curious what’s causing this to fail. My first thought was that some state was being carried over between iterations where it shouldn’t be, but I’ve tried deleting the model and clearing the cache at the end of each loop and nothing changed. I’ve also tried different optimizers and parameter settings, but that hasn’t solved it either. I feel like I’m missing something obvious, but I haven’t been able to figure it out, so hopefully someone else knows what I’m doing wrong. I’ve included some code that reproduces the problem below.
-
My bet is that it's because you are reusing the same variable names with Pyro. Call
`pyro.clear_param_store()`
after training each model.