This repository has been archived by the owner on Jan 5, 2024. It is now read-only.

Error encountered while testing a base model. #57

efirdc opened this issue Oct 27, 2021 · 1 comment

efirdc commented Oct 27, 2021

Hello,

I have encountered an error while testing a base_models.py implementation. Here is the log and the line where it breaks. There is a comment there that may be describing the issue, but I'm not sure how to interpret it.

It looks like this traces back to the behavioral benchmark, specifically here. Is this assuming the model has a layer named 'logits'? The model I am testing does not have this.

Any advice on how to debug this would be very appreciated.

Thank you,
Cory

mschrimpf (Member) commented

Hi Cory,

Thanks for pointing out this issue. The check does indeed assume that the model has a layer named 'logits', which is why it fails for your model.

If you are submitting your model to be tested on all the brain benchmarks, you can safely ignore this; the submission should still work. It just won't run through on the "engineering"/ML benchmarks.

(Background: we generally test models on an ImageNet benchmark as well to get some sense of their ground-truth performance. As a shortcut, and because the majority of models were trained on ImageNet, we simply query for a 'logits' layer and assume it knows the ImageNet classes. We are working on automatically training a readout from the penultimate layer so that models not trained on ImageNet can also be tested. Either way, this has no impact on any of the brain benchmarks.)
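A minimal sketch of the shortcut described above (this is not the actual brain-score code; the function and layer-list representation are hypothetical, only the 'logits' layer name comes from the check in question):

```python
# Hypothetical illustration of the engineering-benchmark shortcut:
# the ImageNet check queries the model for a layer named 'logits'
# and assumes that layer outputs ImageNet class scores.
def has_imagenet_readout(model_layers):
    """Return True if the model exposes a 'logits' layer the
    ImageNet benchmark can read out from."""
    if 'logits' not in model_layers:
        # Models without an ImageNet-trained head (like Cory's) fail
        # here; the brain benchmarks do not depend on this check.
        return False
    return True

print(has_imagenet_readout(['conv1', 'fc', 'logits']))  # True
print(has_imagenet_readout(['conv1', 'fc']))            # False
```

A model without a 'logits' layer simply skips the ImageNet ground-truth scoring; it is still scored on all brain benchmarks.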

Please let us know of any further issues with the submission!

Martin
