This repository has been archived by the owner on Jan 5, 2024. It is now read-only.
I have encountered an error while testing a base_models.py implementation. Here is the log and the line where it breaks. There is a comment there that may be describing the issue, but I'm not sure how to interpret it.
It looks like this traces back to the behavioral benchmark, specifically here. Is this assuming the model has a layer named 'logits'? The model I am testing does not have this.
Any advice on how to debug this would be greatly appreciated.
Thank you,
Cory
Thanks for pointing out this issue. The check is indeed assuming that the model has a layer named 'logits', leading the check to fail.
For submitting your model and having it tested on all the brain benchmarks, you can safely ignore this; the submission should still work. It just won't run through on the "engineering"/ML benchmarks.
(background: we generally test models on an ImageNet benchmark as well to get some sense of their ground-truth performance. As a shortcut, and because a majority of models were trained on ImageNet, we just query for a logits layer and hope that it knows the ImageNet classes. We are working on automatically training a readout from the penultimate layer so we can also test models that aren't trained on ImageNet. Either way, this should have no impact on any of the brain benchmarks.)
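To illustrate, here is a minimal sketch of the kind of lookup described above and how it could be made defensive. The function and layer names are assumptions for illustration only, not the actual Brain-Score code:

```python
# Hypothetical sketch (not the actual Brain-Score implementation):
# the ImageNet check looks for a layer named 'logits' and fails if
# the model does not have one. A defensive variant returns None so
# the engineering/ML benchmarks can be skipped while the brain
# benchmarks proceed normally.

def find_imagenet_readout(model_layers, logits_name="logits"):
    """Return the logits layer name if present, else None."""
    if logits_name in model_layers:
        return logits_name
    # No ImageNet-trained readout found: signal that the
    # engineering benchmarks should be skipped for this model.
    return None

# A model without a 'logits' layer (like the one reported above)
# no longer raises; it simply yields None.
layers = ["conv1", "conv2", "fc"]
print(find_imagenet_readout(layers))              # None
print(find_imagenet_readout(layers + ["logits"]))  # logits
```

This mirrors the behavior described in the reply: the brain benchmarks do not depend on the logits layer at all, so the absence of one only affects the ImageNet/engineering check.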
Please let us know of any further issues with the submission!
Martin