Using GPU Resources for Lava Simulations & Drastic Slow-Down when Probing #262
Unanswered · avitaleSon asked this question in Q&A
Hello,
first off: thanks a lot for open-sourcing the whole Lava framework! It is a powerful set of tools that will surely help advance the frontier of neuromorphic research!
I have two questions regarding the runtime performance of small workloads simulated with Lava:
1. **Running Lava simulations on GPU.** I am running a small workload on a GPU-enabled workstation, yet when I use the `@requires(GPU)` decorator, the runtime stays exactly the same as with `@requires(CPU)`. Is there something I am missing?

2. **Drastic slow-down when probing.** To monitor the spiking activity of a population `lif_1`, I connect its output port to the input port of an `io.sink.RingBuffer`. When I include the RingBuffer probing object, the workload runtime increases by a factor of ~10 (without probing: 34 s; with probing: 320 s). Is there a way to make this more efficient, for example by probing only every N-th timestep? What about implementing the probing in a Lava-independent way, e.g. by saving the spike timestamps and spike neuron IDs in a Python list? Would this bring a speed-up?
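For what it's worth, the Lava-independent probe I have in mind could look something like the sketch below. The class name, the `record_every` decimation parameter, and the overall design are my own assumptions for illustration, not part of the Lava API; the idea is just to append `(timestep, neuron_id)` pairs to plain Python lists and optionally skip timesteps:

```python
import numpy as np

class SparseSpikeRecorder:
    """Hypothetical Lava-independent probe: stores spikes as
    (timestep, neuron_id) pairs in plain Python lists, optionally
    sampling only every n-th timestep to reduce overhead."""

    def __init__(self, record_every: int = 1):
        self.record_every = record_every
        self.times = []       # timesteps at which spikes occurred
        self.neuron_ids = []  # IDs of the neurons that spiked

    def record(self, t: int, spike_vector: np.ndarray) -> None:
        # Skip timesteps that fall between sampling points.
        if t % self.record_every != 0:
            return
        ids = np.flatnonzero(spike_vector)  # indices of active neurons
        self.times.extend([t] * len(ids))
        self.neuron_ids.extend(ids.tolist())

# Toy usage: record a random 5-neuron spike train, sampling every 2nd step.
rec = SparseSpikeRecorder(record_every=2)
rng = np.random.default_rng(0)
for t in range(10):
    rec.record(t, rng.random(5) < 0.3)
```

Compared to a dense ring buffer, this stores only the spikes that actually occur, so for sparse activity the memory and bookkeeping cost should be much lower; whether it beats the RingBuffer inside a Lava run would of course still need to be measured.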
Thanks for your help!
Antonio