Using fine-tuned CNN model from example without hyper-engine #4
Hello! First of all, thank you for your work, it is really helpful! I am experimenting with the code from 1_3_saving_best_models_mnist.py. Could you tell me, please, how I can use the fine-tuned models directly from TensorFlow? I am asking because I don't see any TensorFlow variable declarations in this code. How can I do that easily?

Yeah, this case isn't covered by the current examples. What model do you use? And what TF API? By the way, you don't see any variables because they're hidden in …

Thanks, I understand. So, how can I extract these hidden variables into the graph and use the model with session.run()? I use your model from the example 1_3_saving_best_models_mnist.py, and I obtained good hyperparameters (better than in my own models). Now I only want to load the saved model and feed it some data (and get a prediction, of course). The standard methods of doing this require declaring all variables in the graph (maybe I just don't know any other methods). I use plain TensorFlow on GPU (the tensorflow-gpu package).

Unfortunately, it doesn't work that easily. You can load a model whose graph matches the defined graph exactly. For example, if you use the TF slim API, you can load a pre-trained Inception network that had been trained with … Of course, it's possible to load a model without defining any graph. But I'm not sure what hyper-params you wish to tune then...
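For the first case described above (restoring a checkpoint into a graph you re-define so that it matches the saved one exactly), a minimal TF 1.x sketch might look like the following. The architecture, tensor names, and checkpoint directory are assumptions for illustration, not the actual ones produced by 1_3_saving_best_models_mnist.py:

```python
import numpy as np
import tensorflow as tf

# Re-declare a graph with the same structure (and variable names) as the saved one.
# Architecture, names and paths below are illustrative assumptions.
x = tf.placeholder(tf.float32, [None, 28, 28, 1], name='input')
conv = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)
flat = tf.layers.flatten(pool)
logits = tf.layers.dense(flat, units=10)
predictions = tf.argmax(logits, axis=1, name='predictions')

saver = tf.train.Saver()
with tf.Session() as sess:
    # Restore the variable values from the latest checkpoint in the save directory.
    saver.restore(sess, tf.train.latest_checkpoint('./saved-model'))
    images = np.zeros([1, 28, 28, 1], dtype=np.float32)  # stand-in for real data
    print(sess.run(predictions, feed_dict={x: images}))
```

This only works when the re-defined graph matches the checkpoint variable for variable, which is exactly the constraint mentioned in the comment above.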
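For the second case (loading the saved model without re-defining any graph), a sketch using tf.train.import_meta_graph could look like this. Again, the tensor names and paths are assumed, not taken from the example:

```python
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph structure from the .meta file, then restore the weights.
    saver = tf.train.import_meta_graph('./saved-model/model.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./saved-model'))

    graph = tf.get_default_graph()
    # The tensor names are assumptions; list the real ones with
    # [op.name for op in graph.get_operations()].
    x = graph.get_tensor_by_name('input:0')
    predictions = graph.get_tensor_by_name('predictions:0')

    images = np.zeros([1, 28, 28, 1], dtype=np.float32)  # stand-in for real data
    print(sess.run(predictions, feed_dict={x: images}))
```

Once the graph is imported, any placeholder or output can be looked up by name and driven with session.run(), which is what the question above is asking for; no explicit variable declarations are needed in your own code.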