This repository contains Jupyter notebooks with Python and R code for preprocessing and analyzing data for Elsayed et al. (2025), "Synergistic geniculate and cortical dynamics facilitate a decorrelated spatial frequency code in the early visual system". The data files needed for these analysis notebooks can be found at: https://doi.org/10.6084/m9.figshare.30090211
Usage Instructions:
IMPORTANT NOTE: Running these notebooks requires a working Python and R kernel and several dependencies.
Python dependencies include: pandas, Matplotlib, seaborn, SciPy, scikit-learn
R dependencies include: tidyverse, lme4, emmeans
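Before running anything, it can help to confirm the Python dependencies are importable. This is an optional sketch (not part of the repository); note that some import names differ from the package's display name, e.g. scikit-learn is imported as `sklearn`:

```python
import importlib.util

# Import names for the Python dependencies listed above.
required = ["pandas", "matplotlib", "seaborn", "scipy", "sklearn"]

# Collect any packages that are not installed in the current environment.
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]
print("missing packages:", missing)
```

If `missing` is non-empty, install those packages (e.g. with pip or conda) before launching the notebooks.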
The only data files strictly needed to run all of the preprocessing and analysis notebooks are:
- lgn_raw_trial_averaged.pkl
- v1_raw_trial_averaged.pkl
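As a quick sanity check after downloading, the `.pkl` files can be loaded with pandas. The snippet below is only an illustration using a stand-in file; the actual structure of `lgn_raw_trial_averaged.pkl` and `v1_raw_trial_averaged.pkl` may differ:

```python
import pandas as pd

# Hypothetical stand-in for the real data files -- the column names here
# are invented for illustration and are not the repository's actual schema.
df = pd.DataFrame({"unit_id": [0, 1], "mean_rate": [4.2, 7.9]})
df.to_pickle("example_trial_averaged.pkl")

# Loading works the same way for the repository's .pkl files:
loaded = pd.read_pickle("example_trial_averaged.pkl")
print(loaded.shape)
```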
These files contain the electrophysiology data compiled from the Intan output files for ease of handling. Running all of the Jupyter notebooks in the "preprocessing" folder (in the order they are numbered) generates the rest of the files necessary for the main analysis. However, all of the files generated by the preprocessing notebooks are also included in the figshare link to reduce runtime. The files generated by the preprocessing notebooks that are necessary for the main analysis are listed below. You must copy these files into the analysis folder to run the main analysis notebooks:
- lgn_data_sfresp_at_all_ori_phase_full.pkl -> fully preprocessed dLGN data.
- v1_data_sfresp_at_all_ori_phase_full.pkl -> fully preprocessed V1 data.
- lgn_ori_phase_condition_pcascores.pkl -> PCA transformed dLGN data.
- v1_ori_phase_condition_pcascores.pkl -> PCA transformed V1 data.
- lgn_pca_relational_properties_ori_phase_combos.pkl -> SF-response vector correlations + other metrics for dLGN.
- v1_pca_relational_properties_ori_phase_combos.pkl -> SF-response vector correlations + other metrics for V1.
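Copying the generated files into the analysis folder can be done by hand or scripted. The sketch below uses stand-in folder names (`preprocessing_demo`, `analysis_demo`) so it runs anywhere; substitute the repository's actual "preprocessing" and "analysis" folders:

```python
import shutil
from pathlib import Path

# Stand-in folders -- replace with the repository's real folder paths.
preprocessing = Path("preprocessing_demo")
analysis = Path("analysis_demo")
preprocessing.mkdir(exist_ok=True)
analysis.mkdir(exist_ok=True)

# Create a placeholder for one of the generated files, then copy it across.
# In practice you would copy each of the six files listed above.
src = preprocessing / "lgn_data_sfresp_at_all_ori_phase_full.pkl"
src.write_bytes(b"placeholder")
shutil.copy2(src, analysis / src.name)
```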
In addition to these files, the preprocessing notebooks will generate two files that are necessary for the revision analysis notebooks (9-9.2). These are listed below:
- lgn_pre_baseline_sub_trial_averaged.pkl -> processed dLGN data before baseline subtraction.
- v1_pre_baseline_sub_trial_averaged.pkl -> processed V1 data before baseline subtraction.
Finally, there are three optional data files included to make running the revision analysis notebooks easier. The revision analysis Python notebook has two time-consuming curve fitting sections. We have included the results from the curve fitting in the following files to reduce the runtime:
- ringach_model_v1_fits.pkl -> Ringach model fits to the V1 data.
- ringach_model_v1_fits_params.pkl -> parameters of the Ringach model fits to the V1 data.
- sf_tuning_cruve_dog_fits.pkl -> Gaussian bandwidth selectivity estimates.
Once all of the data files are in the corresponding folders, you can simply run all of the Jupyter notebooks in the order they are numbered. Some related notebooks run Python and R code separately but are named in a manner that reflects they belong together. For example:
- 5.1_peak_shift_analysis.ipynb
- 5.2_RStats_peakshift_xsq.ipynb
- 5.3_RStats_peaksf_pred.ipynb
These are all related and should be run in the order: 5.1 > 5.2 > 5.3.
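When scripting the run order, note that a plain lexicographic sort can misorder numbered filenames (e.g. "10.x" before "9.x"), so it is safer to sort by the parsed leading number. A small sketch, using filenames from this repository:

```python
# Sort notebook filenames by their leading version-style number (e.g. "5.1").
notebooks = [
    "5.3_RStats_peaksf_pred.ipynb",
    "5.1_peak_shift_analysis.ipynb",
    "5.2_RStats_peakshift_xsq.ipynb",
]

def numeric_key(name: str) -> tuple:
    # "5.2_RStats_peakshift_xsq.ipynb" -> (5, 2)
    prefix = name.split("_", 1)[0]
    return tuple(int(part) for part in prefix.split("."))

ordered = sorted(notebooks, key=numeric_key)
print(ordered[0])  # 5.1_peak_shift_analysis.ipynb
```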
If you have any questions about the data, analysis methods, or results, please email the primary author at: ahmad.elsayed94@gmail.com.