Clarification needed on spike generation and GPU execution time in CARLsim

arnab roy

Feb 28, 2017, 11:27:32 AM2/28/17
to CARLsim: A GPU-accelerated Spiking Neural Network (SNN) Simulator
Hi ,

I need clarifications regarding spike generation in CARLsim and GPU execution time while running the SNN.

Spike generation in CARLsim:

1) Can we provide customized spike trains instead of PoissonRate for the input-layer neurons? For example, if I have the input spike trains in a text file, can I use them for the input neuron groups?
2) I believe that by setting spike rates with the setRates() method, we are setting the input spike rates. However, when I look at the spike rate of the input layer via sim.setSpikeMonitor(gin,"DEFAULT") (the input layer is named gin), the spike rates don't seem to have an impact on the number of input neurons spiking at each time step. For example, if I set the number of input neurons (gin) to 2048 and the spike rate to 75, I would expect around 1500 neurons spiking per time step, yet in the SpikeMonitor I see only 150-200 neurons spiking. Is my understanding of the spike rate correct?

GPU execution time:

My objective is to capture the GPU execution time only while the SNN is running, i.e., while the runNetwork() method is being called.

I have used the CUDA timer in runNetwork() (in the snn_cpu.cpp file) around the individual GPU methods in the following way:

CUDA_RESET_TIMER(timer);
CUDA_START_TIMER(timer);
<method to GPU>
CUDA_STOP_TIMER(timer);

and the methods are:

a) doGPUSim()
b) updateWeights_GPU()
c) updateFiringTable_GPU()
d) CopyFiringTable_GPU()
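
For reference, the same measurement could also be written with raw CUDA events (I assume the CARLsim timer macros wrap something similar); the important detail is the synchronize before reading the elapsed time, since kernel launches are asynchronous. This sketch is meant to live inside runNetwork() in snn_cpu.cpp, where doGPUSim() is available:

    // requires <cuda_runtime.h>, which snn_cpu.cpp already pulls in
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    doGPUSim();                      // the CARLsim-internal method being measured
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);      // wait until the GPU has actually reached 'stop'

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds

    cudaEventDestroy(start);
    cudaEventDestroy(stop);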

Now, I have simulated different SNNs with varying numbers of layers, neurons per layer, and spike rates.
For example:
Network 1: 4 layers, INPUT: 784, HIDDEN1: 1200, HIDDEN2: 1200, OUTPUT: 10, spike rate ~4% (all neurons are excitatory)
Network 2: 5 layers, INPUT: 3072, HIDDEN1: 2000, HIDDEN2: 2000, HIDDEN3: 500, OUTPUT: 10, spike rate ~47% (all neurons are excitatory)
Network 3: 4 layers, INPUT: 2048, HIDDEN1: 500, HIDDEN2: 500, OUTPUT: 5, spike rate ~75% (all neurons are excitatory)

Also:
a) I am using the COBA method:
          sim.setConductances(true);
b) I have used the basic parameters in the setup methods, e.g.,
         sim.setNeuronParameters(gout, 0.02f, 0.2f, -65.0f, 8.0f);
         sim.connect(gin, gout, "full", RangeWeight(0.05), 1.0f, RangeDelay(1)); // all adjacent layers are connected the same way, using the respective group names
         sim.setIntegrationMethod(FORWARD_EULER, 2);

c) I am running each network for 35 milliseconds:
    for (int i=0; i<35; i++) {
        sim.runNetwork(0,1);  //1ms each
    }

When I measured the GPU execution time of the four methods above, the execution times of the first three did not change significantly across the different network configurations; only the last method's (CopyFiringTable_GPU()) execution time changed.
That is something I cannot figure out: I was expecting differences in the other methods as well, since I am changing the network configuration.
1) So, is my way of evaluation correct?
2) What does this method CopyFiringTable_GPU() actually do?

PS: I am new to this simulator, so I apologize if my questions are naive.

Regards,
Arnab

Michael Beyeler

Feb 28, 2017, 11:54:34 AM2/28/17
to CARLsim: A GPU-accelerated Spiking Neural Network (SNN) Simulator
Hi Arnab,

These are all great questions! Thanks for giving our simulator a test drive. Let's get right to it.

1) It's possible to define your own spike times per neuron by subclassing SpikeGenerator, explained here. Right before I left the lab, we were talking about having some useful SpikeGenerators under "tools/spike_generators" that would allow you to schedule spikes from a vector or a file. It seems to me though that the current code is not very useful (e.g., the one on vectors only works for a single vector of spike times, which will be applied to every neuron in the group - not what you want). Thus I'm afraid your current best bet would be to implement your own SpikeGenerator that does exactly what you want. Of course, we would also be grateful if you wanted to contribute your own version of a SpikeGenerator by creating a pull request on GitHub.
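
In the meantime, a custom SpikeGenerator could look roughly like the sketch below, which replays per-neuron spike times you have already parsed from your text file. Note this is only a sketch: the exact signature of nextSpikeTime() differs between CARLsim versions, so check callback.h in your installation and adjust accordingly.

    #include <carlsim.h>
    #include <vector>

    // Hypothetical replay generator: spikeTimes[nid] holds the sorted spike
    // times (in ms) for neuron nid, e.g., parsed from a text file.
    // NOTE: the nextSpikeTime() signature varies across CARLsim versions.
    class FileSpikeGenerator : public SpikeGenerator {
    public:
        FileSpikeGenerator(const std::vector< std::vector<int> >& spikeTimes)
            : spikeTimes_(spikeTimes), nextIdx_(spikeTimes.size(), 0) {}

        int nextSpikeTime(CARLsim* sim, int grpId, int nid, int currentTime,
                int lastScheduledSpikeTime, int endOfTimeSlice) {
            // skip spike times that have already been scheduled
            while (nextIdx_[nid] < spikeTimes_[nid].size()
                    && spikeTimes_[nid][nextIdx_[nid]] <= lastScheduledSpikeTime)
                nextIdx_[nid]++;
            // return the next pending spike time; anything past endOfTimeSlice
            // simply means "no more spikes in this slice"
            if (nextIdx_[nid] < spikeTimes_[nid].size())
                return spikeTimes_[nid][nextIdx_[nid]];
            return endOfTimeSlice + 1;
        }

    private:
        std::vector< std::vector<int> > spikeTimes_;
        std::vector<size_t> nextIdx_;
    };

You would then attach it to a group created with createSpikeGeneratorGroup via sim.setSpikeGenerator(gin, &gen).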

2) You are correct: the spike rate is set in Hz. As you change the rate, the number of spikes in the input layer (as observed with a SpikeMonitor) should definitely change! Of course, since spike counts follow a Poisson distribution with rate λ, the variance in the spike count will also be λ; but the numbers should definitely change. With 2,048 neurons spiking at 75 Hz, you should expect on the order of 2,048 × 75 = 153,600 spikes per second, and thus ~154 spikes per millisecond (i.e., per time step). That is right on target with what your SpikeMonitor reports.

3)/4) Your way of evaluating is correct, with one minor suggestion. The bottleneck with GPU computation is actually the copying of data from/to the GPU. This is what CopyFiringTable_GPU does: It copies the produced/elicited spikes of the network back from GPU to host. There is an equivalent memory transfer for getting scheduled spikes (from SpikeGenerator) on the GPU.
The amount of data that is copied in one go is defined by the simulation duration in runNetwork: So, if you run the network for 1ms at a time, I suspect that most of the time will be spent copying, not computing.
Long story short, you should see better numbers if you change your simulation loop from:
for (int i=0; i<35; i++) {
    sim.runNetwork(0,1);  // 1 ms each
}

to a simple:

sim.runNetwork(0, 35);

We try to buffer spikes for as long as possible; the current cut-off is 1 second. That means if you run the network for less than 1 s, the memory transfer happens once at the end of runNetwork; if you run for longer than 1 s, memory transfers happen every 1 s.
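
To illustrate with the two-argument form runNetwork(seconds, milliseconds):

    sim.runNetwork(0, 35);   // 35 ms in one go: a single copy at the end
    sim.runNetwork(2, 500);  // 2.5 s in one go: spikes are copied back every 1 s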

Let me know if you have further questions.

Best,
Michael

arnab roy

Mar 1, 2017, 4:31:33 AM3/1/17
to CARLsim: A GPU-accelerated Spiking Neural Network (SNN) Simulator
Hi Michael,

Thanks a lot for your inputs.
I have run the simulation for 35 milliseconds as a whole, as you suggested. However, I still have some doubts about the GPU execution time.
I believe the methods below are the ones that update the state (e.g., neuron potentials, weights) of the neuron groups:


a) doGPUSim()
b) updateWeights_GPU()
c) updateFiringTable_GPU()

So when I change the network configuration, the amount of work done by the GPU for these update operations should change, and with it the GPU computation time. However, for all the network setups (mentioned in my last post) the GPU computation time remains mostly the same. The GPU execution time doesn't vary significantly for different grid-size and block-size combinations either.
Is there a reason for that, or is there some parameter I can change so that the differences in GPU execution time are reflected across the different network setups?

Michael Beyeler

Mar 4, 2017, 1:37:19 PM3/4/17
to CARLsim: A GPU-accelerated Spiking Neural Network (SNN) Simulator
Hi Arnab,

This might be a silly question, but have you checked whether your neurons (other than SpikeGenerators) are actually spiking? Otherwise there's not much computation going on, except for maybe dv/dt.
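
A quick way to check is to attach a SpikeMonitor to one of the hidden/output groups and look at its mean rate; a minimal sketch, assuming the CARLsim 3 SpikeMonitor interface:

    // sanity check: is the non-input group firing at all?
    SpikeMonitor* smOut = sim.setSpikeMonitor(gout, "DEFAULT");
    smOut->startRecording();
    sim.runNetwork(0, 35);
    smOut->stopRecording();
    printf("mean rate of gout: %.2f Hz\n", smOut->getPopMeanFiringRate());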

Also, updateWeights_GPU() should only come into effect if you have STDP enabled.
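
For reference, enabling plasticity would look roughly like the sketch below (placeholder ExpCurve parameters; note the connection itself must be marked SYN_PLASTIC, otherwise there are no weights to update):

    // the connection must be plastic for weights to change at all;
    // RangeWeight(min, init, max) gives the weights room to move
    sim.connect(gin, gout, "full", RangeWeight(0.0f, 0.05f, 0.1f), 1.0f,
                RangeDelay(1), RadiusRF(-1), SYN_PLASTIC);

    // enable E-STDP on the postsynaptic group (placeholder curve parameters)
    sim.setESTDP(gout, true, STANDARD, ExpCurve(0.001f, 20.0f, -0.0012f, 20.0f));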

We have benchmarked the Vogels & Abbott random net with STDP here: http://www.socsci.uci.edu/~jkrichma/BeyelerCarlsonChouDuttKrichmar-CARLsim3-IJCNN2015.pdf.
Have a look at Figure 6. We see clear effects for number of neurons and number of synapses in the network. However, these might only be apparent if you make your networks larger: on the order of 100,000 neurons. That's where the GPU will really start to make a difference.

Best,
Michael