Information Bottleneck in Neuroimaging
Girish-Anadv-07 edited this page Feb 26, 2024
Information-theoretic measures can provide insight into how deep neural networks learn on different data sets. One possible application in data science is measuring the mutual information between hidden layers and the input and output spaces, and examining how this differs across auto-encoders trained on different groups of data. For neuroimaging, we may be interested in visualizing the trajectories of an auto-encoder trained on people diagnosed with schizophrenia vs. those who are not.
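As a concrete illustration of the measurement described above, the binning estimator popularized by Shwartz-Ziv and Tishby can be sketched as follows. This is a minimal sketch, assuming a deterministic network (so that I(X;T) reduces to the entropy of the binned hidden activations) and a discrete group label Y; the function names and bin count are illustrative, not part of the project specification:

```python
import numpy as np

def entropy_bits(labels):
    """Plug-in entropy (in bits) of a discrete array; rows are treated as symbols."""
    _, counts = np.unique(labels, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def layer_mi(T, Y, n_bins=30):
    """Estimate (I(X;T), I(T;Y)) for hidden activations T and group labels Y.

    T is discretized into equal-width bins; for a deterministic network,
    I(X;T) = H(T_binned) and I(T;Y) = H(T_binned) - H(T_binned | Y).
    """
    edges = np.linspace(T.min(), T.max(), n_bins + 1)
    Tb = np.digitize(T, edges[1:-1])          # (n_samples, n_units) bin indices
    h_t = entropy_bits(Tb)                    # I(X;T) under determinism
    h_t_given_y = sum((Y == y).mean() * entropy_bits(Tb[Y == y])
                      for y in np.unique(Y))  # H(T_binned | Y)
    return h_t, h_t - h_t_given_y
```

Repeating this estimate for each hidden layer at each training epoch yields one point per layer in the information plane; note that the choice of bin count materially affects the estimates, a caveat discussed at length by Saxe et al.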
- NeuroNeural
- Data Science/ML Theory/Neuroimaging
- Graduate student or a sufficiently motivated undergraduate. Strong math skills, or a willingness to learn difficult concepts, are recommended. Experience with signal processing, linear algebra, and machine learning is also recommended, as is basic familiarity with statistical tests.
- Brad Baker (bbaker43@gsu.edu)
- Sergey Plis (s.m.plis@gmail.com)
- Catalyst: https://github.com/catalyst-team/catalyst
- Catalyst Neuro: https://github.com/catalyst-team/neuro
- Shwartz-Ziv, Ravid, and Naftali Tishby. "Opening the black box of deep neural networks via information." arXiv preprint arXiv:1703.00810 (2017).
- Saxe, Andrew M., et al. "On the information bottleneck theory of deep learning." Journal of Statistical Mechanics: Theory and Experiment 2019.12 (2019): 124020.
- Cheng, Hao, et al. "Utilizing information bottleneck to evaluate the capability of deep neural networks for image classification." Entropy 21.5 (2019): 456.
- Semester
- A set of experiments demonstrating the trajectories in the information plane for auto-encoders (and/or VAEs) trained on different groups in a neuroimaging data set.
- A 2-4 page report summarizing the primary methodology and results.
- Eventual submission to a neuroimaging conference (e.g., OHBM)
- Eventual submission to a neuroimaging journal
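A sketch of what the first deliverable could look like: a toy NumPy autoencoder trained on synthetic two-group data, with one information-plane point recorded per epoch. Everything here is illustrative — the network size, learning rate, synthetic data, and the binning-based mutual-information estimator are assumptions, not a prescribed method:

```python
import numpy as np

def entropy_bits(labels):
    # Plug-in entropy (bits) of a discrete array; rows are treated as symbols.
    _, counts = np.unique(labels, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def info_plane_point(T, Y, n_bins=20):
    # Binning estimator: I(X;T) = H(Tb) for a deterministic net,
    # I(T;Y) = H(Tb) - H(Tb | Y), with Y a discrete group label.
    edges = np.linspace(T.min(), T.max(), n_bins + 1)
    Tb = np.digitize(T, edges[1:-1])
    h_t = entropy_bits(Tb)
    h_t_y = sum((Y == y).mean() * entropy_bits(Tb[Y == y]) for y in np.unique(Y))
    return h_t, h_t - h_t_y

# Synthetic two-group data standing in for neuroimaging features.
rng = np.random.default_rng(0)
n, d, h = 200, 8, 3
Y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + Y[:, None] * 0.5

# One-hidden-layer tanh autoencoder trained by gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 0.05
trajectory = []
for epoch in range(20):
    T = np.tanh(X @ W1)                      # hidden code
    Xhat = T @ W2                            # reconstruction
    dXhat = 2.0 * (Xhat - X) / n             # gradient of mean squared error
    dW2 = T.T @ dXhat
    dT = (dXhat @ W2.T) * (1.0 - T ** 2)     # backprop through tanh
    dW1 = X.T @ dT
    W1 -= lr * dW1
    W2 -= lr * dW2
    trajectory.append(info_plane_point(np.tanh(X @ W1), Y))
```

Training one such model per diagnostic group and overlaying the resulting `trajectory` curves in the (I(X;T), I(T;Y)) plane gives the kind of group comparison the project proposes.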