Development of an Open-Source EEG Foundation Model #37
NorhanAbdelhafez started this conversation in Show and tell
Hello!
I've recently read the paper Neuro-GPT: Developing a Foundation Model for EEG, which covers building a foundation model for EEG data in BCI tasks. I familiarized myself with their approach to EEG data processing and model building, and with the suggestions they propose for future improvement. Their main approach is an EEG encoder that extracts spatio-temporal features from raw EEG data, paired with a GPT model trained with self-supervision to predict masked chunks of the signal. They pretrained the Encoder+GPT (i.e., Neuro-GPT) model on the TUH EEG dataset, and for fine-tuning they used BCI Competition IV Dataset 2a, provided by Graz University of Technology. They used three fine-tuning strategies:
- fine-tune the pre-trained EEG encoder only;
- fine-tune the full Encoder+GPT model;
- fine-tune only the linear head (3 linear layers).
The main takeaway of their fine-tuning experiments is that the Encoder-only strategy achieved the best performance, while Encoder+GPT performed worse. They attribute this to the fact that "the GPT model only serves as an auxiliary component to assist the EEG encoder in encoding meaningful features from raw EEG data. The GPT model learns features related to inferring the next token, which may be challenging to transfer to classification tasks".
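To make the pretraining objective concrete, here is a minimal numpy sketch of the masked-chunk idea: the raw multichannel signal is split into fixed-length chunks (the "tokens"), and some chunks are hidden so the model must predict them from the visible context. This is my own illustration, not the paper's actual pipeline — the function names, chunk length, and zero-masking scheme are assumptions for clarity (Neuro-GPT's real preprocessing and masking details differ).

```python
import numpy as np

def chunk_eeg(signal, chunk_len):
    """Split a (channels, time) EEG array into non-overlapping chunks.

    Returns an array of shape (n_chunks, channels, chunk_len); any
    trailing samples that do not fill a whole chunk are dropped.
    """
    n_channels, n_samples = signal.shape
    n_chunks = n_samples // chunk_len
    trimmed = signal[:, : n_chunks * chunk_len]
    return trimmed.reshape(n_channels, n_chunks, chunk_len).transpose(1, 0, 2)

def mask_chunks(chunks, n_masked):
    """Zero out the last n_masked chunks (hypothetical masking scheme).

    The model's self-supervised task is to reconstruct `targets` from
    the preceding visible chunks, analogous to next-token prediction.
    """
    visible = chunks.copy()
    targets = chunks[-n_masked:].copy()
    visible[-n_masked:] = 0.0
    return visible, targets

# Example: 22 channels (as in BCI Competition IV 2a) and 1000 samples.
eeg = np.random.randn(22, 1000)
chunks = chunk_eeg(eeg, chunk_len=250)        # shape (4, 22, 250)
visible, targets = mask_chunks(chunks, n_masked=1)
```

In the full model, each chunk would first pass through the EEG encoder to produce an embedding, and the GPT operates on the resulting embedding sequence rather than on raw samples.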
They also provide several suggestions for further improvement.
I'm currently working on implementing their approach, and also searching for other literature that addresses the same problem.
Other than that, I would like to ask whether you have any feedback or comments, or whether there is a potential task for this project I could work on to demonstrate eligibility.
Thanks so much for your assistance and understanding,
Norhan.
email: norhan.abdelhafez3@gmail.com