Difference between code and paper #4

Open
FALLANGELZOU opened this issue Oct 17, 2024 · 0 comments
@FALLANGELZOU

I noticed that in the code implementation, the expert network INRNet is just an FC layer with a positional embedding, and no separate layers are added for each expert sub-network. The paper, however, states: "To downsize the whole MoE layer, we share the positional embedding and the first 4 layers among all expert networks. Then we append two independent layers for each expert. We note this design can make two experts share the early-stage features and adjust their coherence."

How can this difference be explained? Code-wise, it is hardly an MoE; it looks more like an MLP with sparse coding.
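For reference, here is a minimal PyTorch sketch of the design the quoted passage seems to describe, not the repo's actual INRNet code. All names (`PaperStyleMoE`, `hidden_dim`, `num_experts`, the `nn.Linear` stand-in for the positional embedding, the ReLU activations) are hypothetical assumptions:

```python
import torch
import torch.nn as nn

class PaperStyleMoE(nn.Module):
    """Hypothetical sketch of the quoted design: a shared positional
    embedding, a shared 4-layer trunk, and two independent layers per
    expert. Not the repo's implementation."""

    def __init__(self, in_dim=2, embed_dim=64, hidden_dim=256,
                 out_dim=3, num_experts=4):
        super().__init__()
        # Shared positional embedding (a plain Linear stand-in here;
        # the real model presumably uses a sinusoidal embedding).
        self.pos_embed = nn.Linear(in_dim, embed_dim)
        # "the first 4 layers" shared among all expert networks.
        shared = [nn.Linear(embed_dim, hidden_dim), nn.ReLU()]
        for _ in range(3):
            shared += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        self.shared = nn.Sequential(*shared)
        # "two independent layers for each expert."
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x, gate_weights):
        # gate_weights: (batch, num_experts) mixing coefficients from
        # whatever gating mechanism the model uses.
        h = self.shared(self.pos_embed(x))
        outs = torch.stack([e(h) for e in self.experts], dim=1)  # (B, E, out)
        return (gate_weights.unsqueeze(-1) * outs).sum(dim=1)
```

Under this reading, the sharing happens in `self.shared`, and expert independence lives only in the two-layer heads, which appears to be what the quoted sentence about early-stage features and coherence means.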
