I can't achieve the benchmark accuracy, could somebody help? #38
Comments
@GranMin what backbone do you use?
@Androsimus I used ResNet50. I will try that and reply as soon as possible. Thanks for your advice.
@GranMin Maybe 30 epochs is too many and you have some overfitting?
@Androsimus I have the same feeling that it may be too many epochs. But the loss at epochs 10-15 is about 20, and the accuracy on LFW is about 98.
@GranMin This is very strange. Maybe you changed some other parameters? Maybe the ArcFace parameters: margin, scale?
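For reference, the margin and scale being discussed enter the ArcFace loss as an additive angular margin on the ground-truth class. A minimal NumPy sketch (the function name is made up here, and the defaults m=0.5, s=64 follow the ArcFace paper, not necessarily this repo's config):

```python
import numpy as np

def arcface_logits(cos_theta, labels, margin=0.5, scale=64.0):
    """Apply the ArcFace additive angular margin to cosine similarities.

    cos_theta: (batch, num_classes) cosines between embeddings and class weights.
    labels:    (batch,) integer ground-truth class indices.
    margin/scale default to the paper's m=0.5, s=64.
    """
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    target = np.zeros_like(cos_theta, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    # add the margin only on the ground-truth class, then rescale everything
    adjusted = np.where(target, np.cos(theta + margin), cos_theta)
    return scale * adjusted
```

A larger margin or scale makes the target-class logit harder to satisfy, which is why changing either can noticeably shift the loss values seen during training.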
@Androsimus I didn't change any other parameters. I also tried NormHead for one epoch and then used ArcHead; amazingly, after just one epoch with ArcHead the loss came down to about 11. But then the same phenomenon took place: the loss increases a little at the beginning of the epoch, then decreases, but very slowly. Like this:
@GranMin I'm not sure how NormHead is supposed to be used, maybe as a warmup. To sum up: I used strictly ArcHead, and I suppose other people did too. So I propose trying only ArcHead.
@Androsimus In fact, I tried training the model twice recently.
@GranMin
@GranMin this is some mystery )
@GranMin could you post your config file *.yaml?
@GranMin There is one idea. For correct inference the model must be used as
@Androsimus Sorry for the long wait. Haha... I just took a vacation to the Jiuzhai Gou nature reserve last week.
@GranMin glad you got nice results :) |
How do you change the head (NormHead to ArcHead)? I tried it, but I get this error:
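One common cause of an error when switching heads is trying to load a full checkpoint (backbone + NormHead) into a model whose new head has a different output shape. A framework-free sketch of the usual workaround, copying only the entries whose names and shapes match (`transfer_matching_weights` is a hypothetical helper, not part of this repo; the dicts stand in for `model.get_weights()`-style state):

```python
import numpy as np

def transfer_matching_weights(src, dst):
    """Copy weights from src into dst where names and shapes match.

    src/dst: {layer_name: weight_array} dicts. Returns the updated dst
    and a list of layer names that were skipped (e.g. the old head).
    """
    skipped = []
    for name, w in src.items():
        if name in dst and dst[name].shape == w.shape:
            dst[name] = w
        else:
            skipped.append(name)
    return dst, skipped
```

In Keras terms this corresponds to loading backbone weights only (or loading with name/shape matching) and letting the new head start from fresh initialization.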
I use the same train dataset and test dataset as you proposed, but the best result I've got so far is what the picture shows.
I used the SGD optimizer with lr = 0.1, 0.05, 0.01, 0.0001, 0.00001, one epoch per learning rate. When I found the loss increasing rather than decreasing, I stopped training. The test result for loss 19.42 is in the upper picture.
Also, this is the test result when the train loss is 21.15, shown in the lower picture.
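The per-epoch learning-rate schedule described above can be sketched as a simple lookup (the rate values are the ones quoted in the comment; holding the last rate for any later epochs is an assumption):

```python
def lr_for_epoch(epoch, schedule=(0.1, 0.05, 0.01, 0.0001, 0.00001)):
    """Return the learning rate for a 0-based epoch: one rate per epoch,
    with the last rate held for any epochs beyond the schedule."""
    return schedule[min(epoch, len(schedule) - 1)]
```

In Keras this kind of function is typically wired in via a `LearningRateScheduler` callback. Note the large jump from 0.01 to 0.0001 (skipping 0.001); a gentler decay is one thing worth experimenting with when the loss plateaus.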