
Article Notes: Ghosh-Dastidar & Adeli (2009)

  • Multiple spiking neural network
  • Neurons are fully connected, and each connection involves multiple synapses
  • Transmission along each synapse involves multiple spikes, unlike in a single-spiking SNN, where each connection transmits only one spike.
  • Variable meanings (used in the internal-state sketch after this list):
    • x = internal state
    • j = postsynaptic neuron (receiving transmission)
    • t = time (the authors found the best computational efficiency with a time step of 1 ms)
    • N = neuron
    • l = layer
    • i = presynaptic neuron (sending transmission)
    • K = number of synapses, constant for any two neurons
    • k = synapse number
    • G = number of spikes
    • g = spike number
    • w = synapse weight
    • ε = spike response function
    • d = delay
    • τ = time decay constant (determines the spread/shape of the spike response function; the authors found that the encoding interval + 1 ms worked well)
    • θ = neuron threshold
    • η = learning rate (the authors found the best computational efficiency in the range 0.001–0.01)
  • Neurons in layer l are postsynaptic to all presynaptic neurons in layer l+1 (layers are numbered backward, with the output layer as number 1)
  • Modeling of synapses is identical for all neurons, and the kth synapse between any two neurons has the same delay.
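
To tie the variables together, here is a minimal sketch of how the internal state of a postsynaptic neuron could be computed in a multi-spiking network, assuming the standard spike-response-model formulation: each presynaptic spike g, arriving through synapse k with weight w and delay d, contributes ε(t − t_g − d) to x, and the neuron fires when x reaches θ. All numeric values, function names, and data layouts below are hypothetical illustrations, not taken from the paper.

```python
import math

# Minimal sketch (hypothetical values) of the internal-state computation for one
# postsynaptic neuron j, assuming a standard spike-response-model formulation.
# Symbols follow the variable list above: w = weight, d = delay, tau = decay
# constant, theta = threshold, epsilon = spike response function.

TAU = 4.0     # tau: time decay constant (notes: encoding interval + 1 ms works well)
THETA = 1.0   # theta: neuron threshold (hypothetical value)

def epsilon(s):
    """Spike response function: shape of one postsynaptic potential."""
    return (s / TAU) * math.exp(1.0 - s / TAU) if s > 0 else 0.0

def internal_state(t, spike_times, weights, delays):
    """
    Internal state x of postsynaptic neuron j at time t.

    spike_times: spike_times[i] = list of spike times t_i^g of presynaptic neuron i
    weights:     weights[i][k]  = w for the kth synapse from neuron i to j
    delays:      delays[k]      = d for the kth synapse (same for every neuron pair)
    """
    x = 0.0
    for i, spikes_i in enumerate(spike_times):
        for k, d in enumerate(delays):
            for t_g in spikes_i:                  # g-th spike of presynaptic neuron i
                x += weights[i][k] * epsilon(t - t_g - d)
    return x

# Example: 2 presynaptic neurons, K = 3 synapses per connection (all values hypothetical).
spike_times = [[1.0, 6.0], [3.0]]                 # spike times (ms) for i = 0, 1
weights = [[0.4, 0.2, 0.1], [0.5, 0.3, 0.1]]      # w[i][k]
delays = [1.0, 2.0, 3.0]                          # d[k], identical across neuron pairs

for t in range(0, 15):                            # 1 ms time step, as the notes suggest
    x = internal_state(float(t), spike_times, weights, delays)
    if x >= THETA:
        print(f"neuron j fires at t = {t} ms (x = {x:.3f})")
        break
```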