Alexis edited this page Sep 14, 2021 · 2 revisions

Serenity

Introduction

Serenity is mind-reading software, developed by three computer science students from Epitech, that recognizes the letter a person is thinking about. The main objective was to explore the amazing world of electroencephalography and brainwave analysis, a subject mainly worked on by full-time researchers and other professionals. We will walk through everything we encountered, from the first use of the headset to our last little tweaks.

Our equipment and how to set it up

During our project we used the EPOC Flex Gel Sensor Kit from EMOTIV. We were lucky to have a fellow student named Paul who had decided to go bald. We asked him if he wanted to contribute to our project and he said yes. If you can, we advise finding someone bald willing to be your test subject, because it is much easier to set up the headset on them. During setup, don't be afraid to use lots of gel: it is better to have too much gel than not enough and end up with bad contacts. You can also mix the gel with a bit of alcohol to make it softer and improve contact quality. Keep in mind to let the alcohol dry afterwards.

Here is a quick video on how we did it: https://youtu.be/lh9mBzmb1R0

How does it work?

Our project is divided into 3 scripts:

preprocess.py

This script takes data in CSV format and filters it, because the headset software doesn't let us choose which fields we want. The input is then processed and saved in NumPy binary format to improve performance. The output file name will contain the current date.

Script arguments are:

  • -f -> specify the CSV source file
  • -o -> specify the output file (the current date is appended to the given name)
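As an illustration, here is a minimal sketch of what preprocess.py could look like. The column names and function names are our assumptions for this example, not the project's actual code:

```python
import argparse
import csv
from datetime import date

import numpy as np

# Hypothetical list of the EEG channel columns we want to keep; the raw
# recording also contains metadata fields that this step discards.
KEEP_COLUMNS = ["EEG.Fz", "EEG.Cz", "EEG.Pz"]


def preprocess(csv_path: str, out_prefix: str) -> str:
    """Filter the wanted columns from the CSV and save them as a .npy file."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = [[float(row[c]) for c in KEEP_COLUMNS] for row in reader]
    data = np.asarray(rows, dtype=np.float32).T  # shape: (channels, samples)
    # The current date is appended to the given name, as described above.
    out_path = f"{out_prefix}_{date.today().isoformat()}.npy"
    np.save(out_path, data)
    return out_path


def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("-f", dest="source", required=True, help="CSV source file")
    parser.add_argument("-o", dest="out", required=True,
                        help="output file (current date is appended)")
    args = parser.parse_args(argv)
    print(preprocess(args.source, args.out))
```

Saving in NumPy's binary format means later scripts can load a whole recording with a single np.load call instead of re-parsing the CSV every time.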

We would have added more steps to the data preprocessing, but we didn't have the time given the added complexity. We used eggLib, but in hindsight this was a bad choice because it was an end-of-studies project from other students. The library was neither complete nor finished and didn't let us do what we wanted.

train.py

This script is used to train our network.

Its output depends on the folder and file structure. In our script, the variable LABELS, which defaults to ['A', 'B', 'C'], makes it search for the NumPy data file 'train' in the folders A, B and C. The data is then used to train our model with inputs of shape 32 * 500: 32 because our headset has 32 electrodes, and 500 because it corresponds to a 5-second recording. Once training is complete, the model is saved and the model settings (number of labels, accuracy, ...) are added to the folder name.
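The loading step described above could be sketched like this. The folder layout and file names follow our description, but treat the exact code as an assumption rather than the project's real implementation:

```python
import os

import numpy as np

LABELS = ["A", "B", "C"]   # one sub-folder per letter
CHANNELS, SAMPLES = 32, 500  # 32 electrodes, 500 samples per 5-second record


def load_dataset(root: str):
    """Load train.npy from each label folder and build (X, y) arrays."""
    xs, ys = [], []
    for index, label in enumerate(LABELS):
        # Expected shape per file: (n_records, CHANNELS, SAMPLES)
        recordings = np.load(os.path.join(root, label, "train.npy"))
        xs.append(recordings)
        # The label is simply the folder's position in LABELS.
        ys.append(np.full(len(recordings), index))
    return np.concatenate(xs), np.concatenate(ys)
```

Keeping one folder per label makes adding a new letter as simple as creating a folder and appending its name to LABELS.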

For our model we chose EEGNet. It is a widely used model that has proven its worth in brain-computer interfaces, notably for its powerful feature extraction. It is a convolutional network with a lot of dropout to avoid overfitting.

In hindsight, for the precise processing we needed, maybe a better network would have helped, but it is hard to tell without enough data.

real_time.py

This script uses a Tkinter window to display, in real time, the letter detected by the AI. To do this we used EmotivPRO LSL (Lab Streaming Layer) connected to our AI. Once the headset is ready and the network model has been generated, you just need to enable LSL and everything should work automatically. To get an output, think about a letter for 5 seconds, then click the button: the program will take the last 5 recorded seconds and the letter will appear in the window.
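The "take the last 5 recorded seconds" step can be sketched with a simple ring buffer. In our setup the samples arrive from the EmotivPRO LSL stream, but here we only show the buffering logic; the class and method names are ours, invented for this example:

```python
from collections import deque

CHANNELS = 32
WINDOW = 500  # samples kept, i.e. the last 5 seconds of signal


class SignalBuffer:
    """Keeps only the most recent WINDOW samples pushed from the stream."""

    def __init__(self):
        # deque with maxlen silently drops the oldest sample on overflow,
        # so the buffer always holds at most the last 5 seconds.
        self._buf = deque(maxlen=WINDOW)

    def push(self, sample):
        """Append one reading per electrode (a sequence of length CHANNELS)."""
        self._buf.append(sample)

    def last_window(self):
        """Return the last 5 seconds as a (CHANNELS, WINDOW) list of lists."""
        if len(self._buf) < WINDOW:
            raise ValueError("not enough data recorded yet")
        # Transpose from (WINDOW, CHANNELS) to (CHANNELS, WINDOW),
        # matching the 32 * 500 input shape the model was trained on.
        return [list(channel) for channel in zip(*self._buf)]
```

When the button is clicked, the window returned by last_window() can be fed straight to the model, since it matches the training shape.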

Our journey

Due to the delayed delivery of our headset, we first started by researching what had already been done with EEG and AI. We stumbled upon SeizureNet and, to gain some experience, we tried to reproduce it. We were very lucky to be granted access to the TUH EEG datasets. Since two of us didn't have any experience with AI, we struggled quite a bit.

We finally received our headset and started brainstorming about a project that seemed doable to us. Our final idea was to create an AI capable of detecting which letter we were thinking about. It gave rise to existential questions and exposed our different opinions on the subject. This was a very constructive process for all of us, and we are really proud to have gained a deeper understanding of ourselves. A quick tour of what we debated:

  • Will the headset detect what we want it to?
  • How are we supposed to think during the test?
  • Is a visual representation of a letter the best way to think?
  • How are we supposed to record data?
  • If we tell our test subject to tell us when he is thinking about a letter, is he really thinking about it?
  • Do we think the same way day to day? Hour to hour?
  • Does our mood impact the way we think?
  • ...

We really advise asking yourself the same questions and formulating your own answers; it would take too long to put our full reasoning here. During these discussions we were by ourselves and sometimes felt a bit lost, thinking we were getting nowhere. If you can get some external help, do it: it will greatly help and guide you.

Sadly, we didn't know MNE existed and never came across it, so we didn't get a chance to try it and don't know how useful it could have been. Try it if you can.

Data was a big problem because we didn't have enough, so try to start recording some as soon as you can.

Our thoughts and conclusion

This wasn't a simple project, and it shows in our lack of clear results. We are still very pleased to have had the opportunity to work on it, and we had lots of fun doing it. We loved that we weren't really constrained in what we should do with the headset, even if that raised questions about the feasibility of the project. This project really opened our minds about the way we think. We really liked having existential debates about consciousness, perception, the limitations of language to describe thoughts, ... A bit of external help would have been appreciated, because sometimes it felt like we were heading nowhere and getting a bit lost. We hope our work will be helpful for future students working on the subject. And if you have any questions, do not hesitate to contact us.