-
This is something we also want to see in Lava: flexible plasticity rule definition on, e.g., CPU that emulates chip behavior. We did some work in this direction (see the brian2loihi emulator), where we implemented the pre- and postsynaptic traces, allowing emulation of SNNs with learning. Due to stochastic rounding, we could not reproduce Loihi 1 behavior exactly, but only in distribution. Note that reward traces and tags are not yet implemented. You can install the package here.
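To make the "exact only in distribution" point concrete, here is a minimal sketch of an exponentially decaying synaptic trace with stochastic rounding. The function names, decay factor, and impulse value are illustrative assumptions, not brian2loihi's actual API or Loihi register definitions:

```python
import math
import random

def stochastic_round(x, rng=random):
    """Round x down or up at random, with the up-probability equal to the
    fractional part (e.g. 2.3 -> 3 with probability 0.3, else 2).
    This kind of rounding makes trace dynamics reproducible only in
    distribution, not bit-exactly."""
    f = math.floor(x)
    return f + (rng.random() < (x - f))

def decay_trace(trace, spike, decay=0.9, impulse=16, rng=random):
    """One timestep of a pre-/postsynaptic trace: exponentially decay the
    integer trace state with stochastic rounding, then add a fixed
    impulse on a spike. Parameter names and values are illustrative."""
    return stochastic_round(trace * decay, rng) + (impulse if spike else 0)

# A trace kicked by one spike decays in expectation like trace * decay,
# but two runs with different RNG states diverge in their exact values.
t = 0
t = decay_trace(t, spike=True)   # t == 16 (the impulse)
for _ in range(10):
    t = decay_trace(t, spike=False)
```

Averaged over many runs, the trace follows the ideal exponential; any single run deviates by the accumulated rounding noise, which is exactly why distribution-level agreement is the realistic target.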
-
Thanks for the feedback. We are definitely planning to provide bit-accurate simulation capability for Loihi in Lava. In our last release, we provided a first bit-accurate example of a LIF process. The rough plan is to...:
Somewhere along the way, we are also planning to build a dedicated NeuroCoreProcess which mimics a Loihi neuro core in its entirety, not just a reduced set of special-purpose features like those conveniently exposed in LIF, Dense, etc. We are aware that our NxCore documentation for Loihi 1 and NxSDK was never really that great. We will document these upcoming Lava processes appropriately, so that you have a chance to understand how they work.

I believe that any training method should be robust to, or tolerant of, effects like stochastic rounding, because you would face similar disturbances anyway when going from simulation to the real system. So modeling it in distribution should be good enough.

When using higher-level processes like LIF or Dense that do not exactly correspond to a Loihi core with fixed size constraints, there will very likely be some deviation between simulation and hardware execution whenever LFSRs are involved. This is because LFSRs are generally shared in certain ways per core on Loihi, but which neurons end up on a core is hard to predict and only gets determined during compilation to Loihi. Therefore the exact sequence of random numbers a neuron, synapse, or trace receives may vary, while the distribution remains stable. It is therefore advisable not to make applications too dependent on exact random number sequences, only on their distribution.

It would be conceivable to first compile a model to Loihi to capture how neurons get mapped to cores, so that the same random number sequence per neuron, synapse, and trace can be reproduced in simulation. But I hope this can be avoided in general, because such simulations will likely be slower to execute.
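The per-core LFSR sharing described above can be illustrated with a toy generator: two placements (modeled here as different sampling phases of one shared LFSR) see different exact random sequences even though both draw from the same underlying generator, so only the distribution is placement-independent. The 16-bit Galois LFSR polynomial and the phase-offset model are assumptions for illustration, not Loihi's actual hardware configuration:

```python
def lfsr16(state):
    """One step of a 16-bit Galois LFSR (feedback mask 0xB400).
    Illustrative only -- not the polynomial Loihi uses."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def sequence(seed, skip, n):
    """The n values a unit observes if it samples every `skip`-th step of
    a shared LFSR -- a toy model of several neurons on one core taking
    turns drawing from the same generator."""
    s, out = seed, []
    for _ in range(skip):          # advance to this unit's phase
        s = lfsr16(s)
    for _ in range(n):
        out.append(s)
        for _ in range(skip):      # other units consume the in-between draws
            s = lfsr16(s)
    return out

# Two placements (phase offsets) give different exact sequences, but both
# are subsamples of the same full-period stream, so their long-run
# distribution over [0, 2**16) is identical.
a = sequence(0xACE1, skip=1, n=8)
b = sequence(0xACE1, skip=2, n=8)
```

This is why recompiling a model (which remaps neurons to cores, i.e. changes each neuron's "phase" in the shared stream) changes exact spike-level results while leaving statistics intact.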
-
When learning/training plastic SNN (pSNN) policies offline (using gradient-based or gradient-free methods) with the intent to deploy to Loihi hardware, there is a need to accurately simulate Loihi's plasticity engine offline. The current Lava ecosystem does not have the ability to accurately simulate plastic SNNs for Loihi 1 or Loihi 2.
In our experience with Loihi 1, the following made it difficult to produce an accurate simulation of a pSNN:
Here is a simple task that I'd like to be able to do: through some offline means, train a pSNN-based policy that balances a MuJoCo-simulated inverted pendulum; once trained, deploy the pSNN to Loihi and balance the MuJoCo-simulated pendulum using the trained policy in Loihi hardware. This seemingly straightforward task is quite difficult given the inability to accurately simulate the Loihi plasticity engine.
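For context on what such a plasticity-engine simulation has to reproduce: Loihi expresses learning rules as sums of products of trace and spike variables, evaluated in integer arithmetic. A minimal sketch of a pair-based STDP rule in that form (the x*/y* naming mirrors Loihi's pre/post convention, but the specific rule and shift-based scaling are illustrative assumptions, not actual Loihi microcode):

```python
def stdp_dw(x1, y1, x0, y0, shift=4):
    """One-timestep weight change for a pair-based STDP rule written as a
    sum of products, the general form Loihi-style learning engines use:

        dw = y0 * (x1 >> shift) - x0 * (y1 >> shift)

    x0, y0: pre/post spike indicators this timestep (0 or 1)
    x1, y1: integer pre/post synaptic traces
    shift:  learning rate as a right-shift (illustrative assumption)

    A post spike (y0=1) potentiates by the decayed pre trace; a pre
    spike (x0=1) depresses by the decayed post trace.
    """
    return y0 * (x1 >> shift) - x0 * (y1 >> shift)

# Post spike while the pre trace is high -> potentiation:
dw_pot = stdp_dw(x1=64, y1=32, x0=0, y0=1)   # +4
# Pre spike while the post trace is high -> depression:
dw_dep = stdp_dw(x1=64, y1=32, x0=1, y0=0)   # -2
```

An offline simulator that matches hardware would need to evaluate exactly these integer product terms, with the same trace decay and rounding behavior, which is what makes the sim-to-hardware gap for pSNNs harder than for static networks.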