Commit loss is negative #43
Try lowering the diversity_gamma from 1.0 to somewhere in the 0.1-0.3 range. It is part of the commit loss calculation in LFQ.
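To make the suggestion concrete, here is a minimal sketch of how an LFQ-style entropy auxiliary loss can go negative. It is an illustrative reimplementation in numpy, not the library's actual code: the loss rewards confident per-sample code assignments (low per-sample entropy) while rewarding diverse codebook usage across the batch (high average entropy), and the diversity term is *subtracted*, scaled by diversity_gamma, so a large gamma can pull the whole value below zero.

```python
import numpy as np

def lfq_entropy_aux_loss(logits, diversity_gamma=1.0, inv_temperature=100.0):
    """Illustrative LFQ-style entropy auxiliary loss.

    logits: (batch, codebook_size) affinities of each sample to each code.
    Returns per_sample_entropy - diversity_gamma * codebook_entropy, so the
    result is negative whenever the (scaled) diversity term dominates.
    """
    # sharpened softmax over codes, numerically stabilized
    z = logits * inv_temperature
    z = z - z.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

    # per-sample entropy: low when each sample commits to one code
    per_sample_entropy = (-probs * np.log(probs + 1e-9)).sum(axis=-1).mean()

    # entropy of the batch-averaged code distribution: high when codebook
    # usage is diverse across the batch
    avg_probs = probs.mean(axis=0)
    codebook_entropy = (-avg_probs * np.log(avg_probs + 1e-9)).sum()

    return per_sample_entropy - diversity_gamma * codebook_entropy
```

Under this reading, a negative "commit" loss is not a bug by itself; lowering diversity_gamma simply shrinks the subtracted term.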
Hi @MarcusLoppe Happy New Year! 🎉🎉🎉 I attempted to train an autoencoder using 20 different chairs as training samples and encountered the same issue where the commit loss was negative. This is the commit loss curve during my training process. I will reduce the diversity_gamma from 1.0 to between 0.1 and 0.3 to see what changes occur in the commit loss. Best regards,
Experiment a little, since I only discovered this yesterday and haven't tested it out fully :) Please let me know what you find out.
I tried reducing the diversity_gamma from 1.0 to 0.2 and retrained the data. The current commit loss curve is shown in the following image.
Thank you @MarcusLoppe. From my perspective, maybe we can use a 'decaying' gamma, since we want it to explore at the beginning but converge at the end. I also notice that in @qixuema's experiment with gamma = 0.2, the commit loss does drop, but there are some extreme commit loss values. Does that mean the model overfits to certain types of shapes or codes and can't handle rare cases well?
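The 'decaying' gamma idea above could be sketched as a simple schedule. The function name and the linear annealing choice here are illustrative assumptions, not part of the library; an exponential or cosine decay would work just as well:

```python
def decayed_gamma(step, total_steps, gamma_start=1.0, gamma_end=0.1):
    """Linearly anneal diversity_gamma from gamma_start down to gamma_end.

    High gamma early encourages codebook exploration; low gamma late lets
    the commit/entropy loss converge instead of staying strongly negative.
    """
    t = min(step / max(total_steps, 1), 1.0)  # progress clipped to [0, 1]
    return gamma_start + t * (gamma_end - gamma_start)
```

In practice you would recompute this each step and pass the result wherever diversity_gamma is consumed.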
Also @qixuema, how's your recon loss going? I find that although my recon loss is going down (~0.32), I still can't reconstruct the training data with the autoencoder when trained on multiple objects.
Hi @ZekaiGalaxy, the following are my recon_loss and total_loss curves.
Hi all, this issue resolves itself when training on a large dataset. Using 300x50 augmentations, the commit loss was at 3-14 at the start and then settled, matching the recon loss at around 0.6.
Hey, isn't this still a problem with the commit loss implementation itself? Any plans on fixing it? |
When I trained on several objects for several epochs, the commit loss started to become negative. The overall loss keeps going down, but neither the recon loss nor the reconstruction quality improves.
I wonder whether a negative commit loss is normal, and what it implies.
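One way to diagnose the situation described above is to log the loss components separately, so a falling total can be attributed to the (possibly negative) commit term rather than to real reconstruction progress. This is a hypothetical helper, not part of the project's API; the names and the simple weighted sum are assumptions:

```python
def split_total_loss(recon_loss, commit_loss, commit_weight=1.0):
    """Combine the loss terms and return all pieces for logging.

    If `total` falls while `recon` is flat, the decrease is coming
    entirely from the commit/entropy term, which can go negative.
    """
    total = recon_loss + commit_weight * commit_loss
    return {"total": total, "recon": recon_loss, "commit": commit_loss}
```

Tracking the dictionary per step makes it obvious when a shrinking total loss is masking a stalled recon loss.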