This BCI performs binary classification of emotions along the dimensions of valence, arousal, and dominance
- Have two computers (call them A and B)
- This could all be done on one computer if yours is beefy enough
- Install PsychoPy from the PsychoPy website if you haven't already
- Connect the OpenBCI headset to computer A, following this OpenBCI Tutorial
- Connect these electrodes:
- GND (black)
- REF (white)
- CH1 (gray) - C3
- CH2 (purple) - Cz
- CH3 (blue) - C4
- CH4 (green) - P3
- CH5 (yellow) - Pz
- CH6 (orange) - P4
- CH7 (red) - O1
- CH8 (brown) - O2
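The electrode montage above can be sketched as a small Python mapping, which downstream analysis code could use to label channels. The dict and function names here are illustrative assumptions, not part of any OpenBCI API:

```python
# Assumed montage table: OpenBCI channel number -> (10-20 position, wire color),
# matching the electrode list above.
CHANNEL_MONTAGE = {
    1: ("C3", "gray"),
    2: ("Cz", "purple"),
    3: ("C4", "blue"),
    4: ("P3", "green"),
    5: ("Pz", "yellow"),
    6: ("P4", "orange"),
    7: ("O1", "red"),
    8: ("O2", "brown"),
}

def position_for_channel(ch: int) -> str:
    """Return the 10-20 electrode position for an OpenBCI channel number (1-8)."""
    return CHANNEL_MONTAGE[ch][0]
```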
- Open the Emotions (Core) Google form on computer B
- On computer B, download the core videos from the core videos Google folder and the personalized videos for the particular participant from the videos Google folder
- If you haven't already, clone the ntx_project Github repo on computer B
- It's recommended to download and use the Github Desktop app for ease of committing/syncing/branching/cloning
- To clone using Github Desktop, follow this YouTube tutorial
- Place all the videos in a folder called "Videos" in ntx_project/data_collection
- The videos need to be labeled as follows (replace # with the PID number):
- #_amusement.mp4
- #_anger.mp4
- #_contentment.mp4
- #_disgust.mp4
- #_fear.mp4
- #_sadness.mp4
- amusement_core_1.mp4
- amusement_core_2.mp4
- anger_core_1.mp4
- anger_core_2.mp4
- contentment_core_1.mp4
- contentment_core_2.mp4
- disgust_core_1.mp4
- disgust_core_2.mp4
- fear_core_1.mp4
- fear_core_2.mp4
- neutral_core_1.mp4
- neutral_core_2.mp4
- neutral_core_3.mp4
- neutral_core_4.mp4
- neutral_core_5.mp4
- sadness_core_1.mp4
- sadness_core_2.mp4
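To catch naming mistakes before a session, the file list above can be checked with a short Python sketch. The function names and the `Videos` folder default are assumptions based on the layout described here:

```python
from pathlib import Path

# The six non-neutral emotions used in both the personalized and core videos.
EMOTIONS = ["amusement", "anger", "contentment", "disgust", "fear", "sadness"]

def expected_video_files(pid):
    """Build the full set of filenames the Videos folder should contain
    for a given participant PID, per the naming scheme above."""
    names = {f"{pid}_{e}.mp4" for e in EMOTIONS}                      # personalized
    names |= {f"{e}_core_{i}.mp4" for e in EMOTIONS for i in (1, 2)}  # core pairs
    names |= {f"neutral_core_{i}.mp4" for i in range(1, 6)}           # neutral 1-5
    return names

def missing_videos(pid, videos_dir="Videos"):
    """Return a sorted list of expected files not present in videos_dir."""
    present = {p.name for p in Path(videos_dir).glob("*.mp4")}
    return sorted(expected_video_files(pid) - present)
```

Running `missing_videos(<PID>)` from `ntx_project/data_collection` before starting should print an empty list if everything is in place.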
- Have the participant put on the headset, then inject conductive gel into each electrode
- Start recording EEG data in OpenBCI GUI on computer A
- Open and run GUI_code.py in the ntx_project/data_collection folder on computer B using PsychoPy (specifically the Coder editor)
- The participant should now be watching the videos on computer B while EEG data is recorded on computer A. Between videos, have them fill out the Google form question for that particular video on computer B
- When the participant has gone through all the videos, stop the recording in the OpenBCI GUI and upload the data (saved to User/Documents/OpenBCI_GUI/Recordings by default) to the EEG Data Google folder
- GUI_code.py should also have created a video_timestamps_PID_... file; commit and push it to the GitHub repo
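To double-check that the timestamps file was actually written before committing, a quick filter over the folder contents works. The `video_timestamps_*` pattern is an assumption based on the filename prefix mentioned above; adjust it if GUI_code.py uses a different scheme:

```python
import fnmatch

def find_timestamp_files(filenames):
    """Return the entries in a directory listing that look like
    timestamp files written by GUI_code.py (assumed prefix)."""
    return [f for f in filenames if fnmatch.fnmatch(f, "video_timestamps_*")]
```

For example, calling it on `os.listdir("ntx_project/data_collection")` after a session should return at least one file; an empty result means the timestamps were not saved and the session data may be unusable.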