
Transforming spiking networks into traditional networks


The current multi-spiking neural network solution is... not that efficient. Training an XOR network takes dozens of seconds, and that time is variable rather than a fixed startup cost. We need to start looking into more efficient simulation methods if we want any chance of training our network in a reasonable time span.

One potential way to do this is to exploit the parallels between spiking and traditional neural networks, along with the enormous amount of optimization work that machine learning researchers have already put into traditional networks.

How can we convert a spiking neural network into a traditional one? The sole difference is the time sequencing, and since (at least in the model of Ghosh-Dastidar and Adeli (2009)) there are only a small number of discrete time steps, we can give each of these time steps its own neuron.

To illustrate this, see the next two diagrams. The gray diagram is a spiking neural network; each of the neurons is spiking, and the network runs over three time steps. In the colored diagram, each of these time steps gets its own neuron. The new network is fully connected, except that future neurons cannot influence past neurons.
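As a rough sketch, here is how the unrolled, masked network might look in PyTorch. Everything here is illustrative: the layer sizes, the single nn.Linear over all (neuron, time step) pairs, and the causal mask are assumptions about the construction, not project code.

```python
import torch
import torch.nn as nn

# Unroll N spiking neurons over T time steps into T*N traditional
# neurons, then mask out every connection that would let a future
# time step influence a past one. Sizes are arbitrary for the sketch.
N, T = 4, 3

layer = nn.Linear(T * N, T * N, bias=False)

# Time step of each unrolled neuron: [0,0,0,0, 1,1,1,1, 2,2,2,2].
time_index = torch.arange(T).repeat_interleave(N)

# Keep weight (out, in) only when the input's time step is not
# later than the output's, so the past never depends on the future.
mask = (time_index.view(1, -1) <= time_index.view(-1, 1)).float()

with torch.no_grad():
    layer.weight *= mask

x = torch.rand(1, T * N)   # stand-in for the unrolled spike inputs
out = layer(x)
```

Note that during training the mask would have to be re-applied after every optimizer step (or the masked gradients zeroed), since updates would otherwise re-introduce the forbidden future-to-past connections.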

The connections from past to future neurons (modeling refractoriness) are the most problematic part of this model, and will require the most consideration. In the Ghosh-Dastidar and Adeli (2009) model, the last spike time alone influences the refractoriness. Modeling this faithfully would require modifying neuron connections on the fly. While PyTorch, the software package we are using, can handle dynamic graphs like this, doing it wrong has the potential to wipe out our performance gains from vectorization.
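To make the difficulty concrete, here is a minimal sketch of last-spike-only refractoriness. The potential update, threshold, and decay constant are all illustrative assumptions, not the paper's actual equations; the point is that last_spike is updated by a data-dependent select at every step, which forces a sequential loop over time.

```python
import torch

T, N = 4, 8          # time steps, neurons (arbitrary for the sketch)
threshold = 1.0      # assumed firing threshold

potentials = torch.zeros(T, N)
last_spike = torch.full((N,), -1)   # -1 means "has never spiked"

for t in range(T):
    drive = torch.randn(N)          # stand-in for synaptic input
    # Refractory term depends only on the *most recent* spike:
    # zero if the neuron never spiked, otherwise an exponentially
    # decaying penalty (the decay constant of 2.0 is an assumption).
    dt = (t - last_spike).float()
    refractory = torch.where(
        last_spike >= 0,
        -torch.exp(-dt / 2.0),
        torch.zeros(N),
    )
    potentials[t] = drive + refractory
    spiked = potentials[t] > threshold
    last_spike = torch.where(
        spiked, torch.full_like(last_spike, t), last_spike
    )
```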

Alternatively, we could modify the model that Ghosh-Dastidar and Adeli (2009) proposed: instead of taking into account only the most recent spike, take into account all previous spikes with an exponential decay. This is biologically plausible, since a neuron is indeed influenced by all of its previous states.
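This variant fits the unrolled network much better, because every past spike contributes through a fixed exponential kernel. Below is a sketch under assumed parameters (the decay constant tau and the sign of the refractory term are illustrative): the whole refractoriness computation collapses into one matrix multiply by a constant, strictly-causal kernel, so no on-the-fly connection changes are needed.

```python
import torch

T, N = 4, 8   # time steps, neurons (arbitrary for the sketch)
tau = 2.0     # assumed decay constant

# kernel[t, s] = exp(-(t - s) / tau) for s strictly before t, else 0.
steps = torch.arange(T)
decay = torch.exp(-(steps.view(T, 1) - steps.view(1, T)).float() / tau)
kernel = decay * torch.tril(torch.ones(T, T), diagonal=-1)

spikes = (torch.rand(T, N) > 0.5).float()   # stand-in spike trains
# Refractory term at each step sums over *all* past spikes at once.
refractory = -(kernel @ spikes)             # shape (T, N)
```

Since the kernel is constant, these past-to-future influences become ordinary weights in the unrolled network, which keeps everything vectorized.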