Loss not decreasing when training outside of script #59
Comments
For clarification, I see that the average loss is decreasing, but is there any reason that a minimum working example like this would not yield a changing loss? [MWE] [Results] [Unedited model] Returning the tensor immediately after the self.decoder_norm() step as the model output yields an updating loss. Are there any special considerations to be made when training this model outside of the training script?
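One common way to get exactly this symptom (loss updates when you return the post-norm tensor, but not the final mask output) is a gradient break somewhere between the norm and the output, e.g. a stray `detach()` or a `no_grad` block in the forward path. A minimal sketch of the effect, with purely illustrative module and attribute names (this is not the Segmenter code):

```python
# Hypothetical repro of a broken-gradient path; TinyDecoder and its
# fields are illustrative, not taken from the Segmenter codebase.
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    def __init__(self, detach_masks: bool):
        super().__init__()
        self.norm = nn.LayerNorm(4)
        self.cls_emb = nn.Parameter(torch.randn(4, 3))
        self.detach_masks = detach_masks

    def forward(self, x):
        x = self.norm(x)
        if self.detach_masks:
            # Anything like this after the norm cuts the graph, so the
            # norm's parameters (and everything upstream) never update.
            x = x.detach()
        return x @ self.cls_emb  # per-class "masks"

def grad_flows(detach_masks: bool) -> bool:
    """Check whether the pre-detach parameters receive a gradient."""
    torch.manual_seed(0)
    model = TinyDecoder(detach_masks)
    out = model(torch.randn(2, 4))
    out.sum().backward()
    g = model.norm.weight.grad
    return g is not None and bool(g.abs().sum() > 0)
```

With the detach in place the optimizer still steps, but only `cls_emb` moves, so the loss plateaus; without it, gradients reach the norm and the loss can decrease. Worth also checking that the optimizer was constructed over `model.parameters()` after all submodules were attached.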
How did you fix the code?
I put my own dataset into the ade20k folder and ran `python -m segm.train --log-dir seg_tiny_mask --dataset ade20k`. How is the .yml file in data set up? My dataset has only one class of objects plus background.
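For a single foreground class plus background, the dataset configuration reduces to two labels. I don't have the repo's actual config schema in front of me, so every key name below is hypothetical; compare it against the existing ade20k entry and adapt:

```yaml
# Hypothetical dataset entry -- key names are illustrative only,
# not the repo's actual schema.
my_dataset:
  n_cls: 2            # background + one object class
  ignore_label: 255   # pixels excluded from the loss, if supported
```

The important part is that the number of classes the decoder predicts matches the label ids actually present in your masks (e.g. {0, 1}), otherwise the cross-entropy loss can behave strangely or not decrease.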
@plehman2000 did you try with the linear head?
I ran into the same problem: #68
After running

```sh
python -m segm.scripts.prepare_ade20k $DATASET
python -m segm.train --log-dir seg_tiny_mask --dataset ade20k \
  --backbone vit_tiny_patch16_384 --decoder mask_transformer
```

the training script runs without error, but the loss is not decreasing:
Upon further examination, when I isolated your segmenter class, I found that the parameters after the decoder's transformer step do not seem to be updating.
See my note in the comment below for clarification. Any help is appreciated! I've been stuck on this for quite a while.
(from decoder.py, MaskTransformer())
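To pin down which parameters are actually updating, a quick diagnostic is to list every parameter that received no gradient on the last backward pass. This is generic PyTorch (not Segmenter-specific); the stand-in `nn.Sequential` model below is only there to make the snippet self-contained:

```python
import torch
import torch.nn as nn

def report_stale_params(model: nn.Module) -> list:
    """Return names of trainable parameters that got no gradient
    on the most recent backward pass."""
    return [
        name
        for name, p in model.named_parameters()
        if p.requires_grad and (p.grad is None or p.grad.abs().sum() == 0)
    ]

# Usage: call after loss.backward(), before optimizer.step().
# Stand-in model for illustration; substitute your Segmenter instance.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 4), nn.Tanh(), nn.Linear(4, 2))
loss = model(torch.randn(8, 4)).sum()
loss.backward()
stale = report_stale_params(model)  # [] when gradients reach everything
```

Any decoder parameter that shows up in `stale` on every iteration is disconnected from the loss, which narrows the search to the forward path between that parameter and the model output.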