Spike-GAN

Tensorflow implementation of Spike-GAN, which allows generating realistic patterns of neural activity whose statistics approximate those of a given training dataset (Molano-Mazon et al., ICLR 2018).


Prerequisites

  • Python ≥ 3.5 (tested with 3.8)
  • Tensorflow 2.x (tested with 2.5)
  • Numpy ≥ 1.9
  • SciPy
  • Matplotlib
  • Seaborn

Installing

Just download the repository to your computer and add the Spike-GAN folder to your Python path.
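For example, to make the code importable from a Python session (the path below is a placeholder for wherever you downloaded the repository):

import sys
sys.path.append("/path/to/Spike-GAN")  # placeholder: use your actual download location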

Data

The folder named data contains the retinal data used for Fig. 3 in the ICLR paper. The whole data set can be found in Marre et al. 2014. The folder also contains the data generated by the k-pairwise and the Dichotomized-Gaussian models for that same figure and for Fig. S6.

Disabling the GPU

This code uses the TensorFlow 1 API under the hood (through TensorFlow 2's compatibility layer), so the easiest way to force TensorFlow to ignore your GPUs, if you so wish, is to set the CUDA_VISIBLE_DEVICES environment variable to an empty string when calling the main_conv.py script, like so:

CUDA_VISIBLE_DEVICES="" python main_conv.py ...

Alternatively, you can add/uncomment the line os.environ["CUDA_VISIBLE_DEVICES"] = "" at the top of the main_conv.py file.
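For reference, those lines look like this; note that the variable must be set before TensorFlow is imported, or it will have no effect:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs from TensorFlow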

Examples

Retinal data

Spike-GAN can be run using this data with the command:

python main_conv.py --architecture='conv' --dataset='retina' --num_bins=32 --iteration='test' --num_neurons=50 --is_train --num_layers=2 --num_features=128 --kernel_width=5 --data_instance='1'

Simulated correlated firing

The example below will train Spike-GAN with the semi-convolutional architecture on a simulated dataset containing the activity of two correlated neurons whose firing rate follows a uniform distribution across 12 ms. See main_conv.py for more options on the type of simulated activity (refractory period, firing rate...).

python main_conv.py --is_train --architecture='conv' --dataset='uniform' --num_bins=12 --num_neurons=2

Maxent data

In this example we train a fully-connected architecture on the toy maxent dataset, which contains 12 neurons. The fully-connected network has two hidden layers with 256 units each. The flag --num_samples sets the number of synthetic samples generated every time diagnostics are computed (when training is paused to monitor its progress, and at the end of training); here it is 1000. Training runs for 100000 iterations, controlled by --num_iter. The arbitrary name training_run_name, passed via --iteration, is appended to the data folder name for this training run, which is useful if you plan to train the network from scratch several times with the same settings. Typically the --iteration flag is not needed and can be safely left out.

python main_conv.py --is_train --architecture='fc' --dataset='maxent' --num_bins=1 --num_neurons=12 --num_layers=2 --num_units=256 --num_samples=1000 --num_iter=100000 --iteration='training_run_name'

Acknowledgments

This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 699829 (ETIC).
