Thank you for your admirable and meaningful work!
My question is: why is the downsampling operator implemented as a model rather than an algorithm in the meta-test step? Also, I only found the models for bicubic x2, direct x2, direct x4, and multi-scale, and I would like to know how they are implemented. Could you please upload the code? Thank you very much!
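For context, "direct" downsampling usually means plain s-fold subsampling of the high-resolution image (sometimes after a blur to avoid aliasing), while "bicubic" uses bicubic interpolation. Here is a minimal NumPy sketch of direct x2 and a blur-then-subsample variant; the function names are mine for illustration and are not from this repository:

```python
import numpy as np

def direct_downsample(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Direct downsampling: keep every `scale`-th pixel along each spatial axis."""
    return img[::scale, ::scale]

def blur_then_downsample(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Anti-aliased variant: average each scale x scale block, then subsample."""
    h, w = img.shape[:2]
    h2, w2 = h - h % scale, w - w % scale   # crop so dimensions divide evenly
    img = img[:h2, :w2]
    blocks = img.reshape(h2 // scale, scale, w2 // scale, scale, *img.shape[2:])
    return blocks.mean(axis=(1, 3))

hr = np.arange(16, dtype=np.float64).reshape(4, 4)
lr_direct = direct_downsample(hr, 2)        # shape (2, 2): rows/cols 0 and 2
lr_blurred = blur_then_downsample(hr, 2)    # shape (2, 2): 2x2 block means
```

In the paper's setting these operators can instead be learned from data, which is presumably why they ship as models rather than fixed algorithms; the sketch above only shows the fixed algorithmic counterparts.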
Dear Sir,
Amazing work! Congratulations!
I have a question: could you kindly tell me the full checkpoint path I should use for the large-scale trained model, so that I can load it as a pre-trained model for meta-transfer training?
I'm waiting for your reply.
Thanks in advance.
@wuguolil, I'm facing a problem when I load the pretrained model, specifically when it reads the checkpoint.
This is the error; how did you solve it, please?
NotFoundError (see above for traceback): Key MODEL/conv7/kernel/Adam_3 not found in checkpoint
[[Node: save/RestoreV2_69 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_69/tensor_names, save/RestoreV2_69/shape_and_slices)]]
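This `NotFoundError` typically means the checkpoint was saved without the Adam optimizer's slot variables (the `.../Adam_*` keys), so a `tf.train.Saver` built over all graph variables cannot find them. A common workaround is to restore only the variables whose names actually exist in the checkpoint and let the optimizer slots be re-initialized. Stripped of the TensorFlow calls, the filtering is just a key intersection; a minimal sketch with plain dicts standing in for the graph variables and checkpoint contents (all names below are illustrative, mirroring the error message, not taken from this repo):

```python
def restorable_variables(graph_vars, checkpoint_keys):
    """Keep only graph variables whose names are present in the checkpoint.

    graph_vars:      dict of variable name -> variable (stand-in for the
                     list returned by tf.global_variables() in TF1)
    checkpoint_keys: set of tensor names stored in the checkpoint (what
                     tf.train.list_variables(ckpt_path) would report)
    """
    return {name: var for name, var in graph_vars.items()
            if name in checkpoint_keys}

# Hypothetical example: model weight saved, Adam slot variables missing.
graph_vars = {
    "MODEL/conv7/kernel": "w7",
    "MODEL/conv7/kernel/Adam": "m7",
    "MODEL/conv7/kernel/Adam_1": "v7",
}
checkpoint_keys = {"MODEL/conv7/kernel"}

to_restore = restorable_variables(graph_vars, checkpoint_keys)
# Only the plain weight survives; the missing Adam slots are skipped,
# so a Saver built from `to_restore` no longer raises NotFoundError.
```

In TF1 this corresponds to listing the checkpoint's keys with `tf.train.list_variables(ckpt_path)`, passing the filtered dict as `var_list` to `tf.train.Saver`, and calling `restore`; the skipped optimizer slots then get fresh initial values. The exact variable names depend on this repository's graph, so this is a sketch of the general pattern, not the authors' fix.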