Thank you for sharing your code! I noticed that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from the values in the paper. Which ones should I use?
Hi, thanks for your interest. The hyperparameters depend on the tradeoff between concealing quality and recovery quality. In the first stage, all hyperparameters can be set to 1 until the network converges. Then you can finetune the network with different lambdas according to the performance you want. For example, if you prefer higher reconstruction quality, set lamda_reconstruction higher.
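To make the two-stage weighting concrete, here is a minimal sketch of a weighted loss in PyTorch. The lambda names mirror the config keys discussed in this issue; the loss-term arguments (reconstruction_loss, low_frequency_loss) are hypothetical placeholders, not the repo's actual variables.

```python
import torch

# Stage 1: train to convergence with all weights set to 1.
# Stage 2: finetune, e.g. raise lamda_reconstruction if you prefer
# higher reconstruction quality for the recovered image.
lamda_reconstruction = 1.0
lamda_low_frequency = 1.0

def total_loss(reconstruction_loss: torch.Tensor,
               low_frequency_loss: torch.Tensor) -> torch.Tensor:
    # Weighted sum of the individual loss terms.
    return (lamda_reconstruction * reconstruction_loss
            + lamda_low_frequency * low_frequency_loss)
```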
Thank you for sharing your code. I am trying it and I do run into the loss-explosion problem. Do you know the underlying reason for it? Is there a better solution than manually restarting training with a lower learning rate each time?
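One way to automate the manual restart described above, offered only as a hedged sketch rather than this repo's method: watch the loss for sudden spikes, and on a spike roll back to the last good weights and halve the learning rate. All names here (train_step, spike_factor) are hypothetical. Gradient clipping inside the training step (torch.nn.utils.clip_grad_norm_) is another common mitigation for exploding losses.

```python
import copy

def guarded_training(model, optimizer, train_step, steps, spike_factor=10.0):
    """Roll back and halve the LR whenever the loss suddenly explodes."""
    good_state = copy.deepcopy(model.state_dict())
    prev_loss = None
    for _ in range(steps):
        loss = train_step(model, optimizer)  # runs one optimization step
        if prev_loss is not None and loss > spike_factor * prev_loss:
            # Loss exploded: restore the last good weights, reduce the LR.
            model.load_state_dict(good_state)
            for group in optimizer.param_groups:
                group["lr"] *= 0.5
        else:
            good_state = copy.deepcopy(model.state_dict())
            prev_loss = loss
```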