This repository contains code required to reproduce the experimental results published as a preprint on arXiv.
These analyses were run on a workstation with
- Processor: 13th Gen Intel(R) Core(TM) i9-13900K
- RAM: 128 GB
- GPU: NVIDIA GeForce RTX 4090 (24 GB VRAM)
- Storage: 2 TB NVMe SSD
If you plan to reproduce these results, we recommend a machine with
- >= 64 GB RAM
- a GPU with >= 8 GB VRAM
- >= 1.5 TB of free disk space
> [!CAUTION]
> RAM usage peaks the first time the datasets are prepared: we concatenate 40 sessions of fMRI data (!) for each subject in the Natural Scenes Dataset, which uses almost 64 GB of memory.
> [!TIP]
> Throughout the code, there are `batch_size` parameters that control the amount of GPU memory used. You may want to reduce these, especially if you run permutation tests or bootstrap resampling when computing covariance spectra.
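If you want to size `batch_size` programmatically, here is a minimal sketch; it assumes PyTorch is installed, and the thresholds are purely illustrative rather than part of this repository's API:

```python
import torch

# Illustrative heuristic: use a smaller batch size when less VRAM is
# available. The values that actually fit depend on the analysis being run.
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    batch_size = 256 if vram_gb >= 24 else 64
else:
    batch_size = 32  # conservative CPU fallback

print(f"using batch_size={batch_size}")
```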
The code has been tested on RHEL 9.3. Any standard Linux distribution should work.
We use Python 3.12.4 for all analyses; the other required Python dependencies are listed in `requirements.txt`. Do not attempt to install them directly; follow the installation guide below.
1. Clone this repository:

   ```bash
   git clone https://github.com/BonnerLab/scale-free-visual-cortex.git
   ```
2. Edit `.env` (a sample file is sketched after these steps):

   - `PROJECT_HOME` should be the path of the cloned repository (e.g. `/home/$USER/scale-free-visual-cortex`)
   - `AWS_SHARED_CREDENTIALS_FILE` should be the path to an AWS credentials file that gives you access to the Natural Scenes Dataset
   - The other environment variables can be left unset: simply delete those lines. Cache directories will be created at `~/.cache/bonner-*` by default.
3. Set up the Python environment:

   - Option 1: Use `conda`
     - Install the environment (`conda env create -f $PROJECT_HOME/environment.yml`)
     - Activate the environment (`conda activate scale-free-visual-cortex`)
   - Option 2: Use your favorite package manager (a combined example is sketched below)
     - Install Python 3.12.4
     - Create a virtual environment (`python -m venv <path-to-venv>`)
     - Activate your virtual environment (e.g. `source <path-to-venv>/bin/activate` if you're using `bash`)
     - Install the required packages (`pip install -r requirements.txt`)
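For reference, a minimal `.env` might look like the following; both paths are illustrative and should be adapted to your machine:

```bash
PROJECT_HOME=/home/<user>/scale-free-visual-cortex
AWS_SHARED_CREDENTIALS_FILE=/home/<user>/.aws/credentials
```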
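And if you take Option 2 under `bash`, the end-to-end commands might look like this (assuming a `python3.12` binary on your `PATH`; the venv location is arbitrary):

```bash
cd scale-free-visual-cortex
python3.12 -m venv ~/.venvs/scale-free-visual-cortex
source ~/.venvs/scale-free-visual-cortex/bin/activate
pip install -r requirements.txt
```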
We provide a simple high-level overview of the analysis in `demo.ipynb` using a small subset of the data. After installing this package, simply run the notebook: it will automatically download ~300 MB of data and run the within- and between-subject analyses for one pair of subjects.
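For example, assuming Jupyter is available in your environment:

```bash
jupyter notebook demo.ipynb
```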
> [!IMPORTANT]
> You will need access to the Natural Scenes Dataset to reproduce the analyses in the paper. Specifically, you will need to obtain an AWS credentials file and set the `AWS_SHARED_CREDENTIALS_FILE` environment variable (see Step 2 of Installation).
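The credentials file follows the standard AWS shared-credentials format; the key names below are the standard ones, and the values are placeholders:

```ini
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```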
`manuscript/notebooks` contains Jupyter notebooks that generate the figures in the paper:

- `schematic.ipynb` shows the comparison between different spectral estimators (Figure 1)
- `spectra.ipynb` computes all the power-law covariance spectra (Figures 2, 4, S1, S3, and S4)
- `singular_vectors.ipynb` generates brain maps of some example singular vectors (Figure S2)
- `cross_correlations.ipynb` compares functional and anatomical alignment (Figures 3, S5, and S6)
- `rsa.ipynb` demonstrates the insensitivity of RSA to high-dimensional structure (Figure S7)
> [!WARNING]
> Running a notebook for the first time will likely take a long time, since the datasets it uses must first be downloaded, processed, and cached. Subsequent runs will be much faster.