ReSuMe Learning on Brian (Inspired from STDP)


io...@york.ac.uk

Jul 28, 2015, 10:39:52 AM
to Brian
Hi,
I would like to re-implement a couple of basic experiments, including the XOR implementation, from this paper: "Supervised Learning of Logical Operations in Layered Spiking Neural Networks with Spike Train Encoding" (http://arxiv.org/pdf/1112.0213.pdf).
As its learning algorithm it uses the supervised ReSuMe rule by Filip Ponulak: http://d1.cie.put.poznan.pl/~fp/research/pr.htm

I have started by implementing the input and output spike trains for the True and False logic values, and I think I have got all of them as shown in the graph. After these steps, I am now constructing the SNN. The challenging part for me is how to add supervised learning, i.e. a teacher signal, to Brian's STDP implementation. I have looked at a number of STDP examples but still haven't made progress.

[Attachment: PoissonSpikeGeneration.png]
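For the spike-train generation step, here is a minimal Brian-independent NumPy sketch of one way to encode True and False as a high-rate and a low-rate Poisson train; the rates, duration, and timestep are illustrative assumptions, not values from the paper:

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_ms, dt_ms=1.0, rng=None):
    """Return spike times (in ms) of a Poisson process, by drawing a
    Bernoulli trial with P(spike) = rate * dt in every timestep."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_steps = int(duration_ms / dt_ms)
    fired = rng.random(n_steps) < rate_hz * dt_ms * 1e-3
    return np.nonzero(fired)[0] * dt_ms

# Hypothetical encoding: logical True = high-rate train, False = low-rate train
rng = np.random.default_rng(42)
true_train = poisson_spike_train(40.0, 500.0, rng=rng)
false_train = poisson_spike_train(5.0, 500.0, rng=rng)
```

In Brian, pre-generated spike times like these could then be fed into the network through a SpikeGeneratorGroup.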

I do have a rough idea along these lines:
- 2 input x 1 output neurons, without a hidden layer.
- A single generated input spike train.
- A generated desired output spike train.
- Train the network to reproduce the input 1 pattern on the output.

IN 1
           ---connect----> OUT
IN 2


Finally, how can I start implementing a really basic example of supervised STDP in Brian?
Many thanks in advance for your comments.

Ibrahim Ozturk

Marcel Stimberg

Aug 7, 2015, 1:25:58 PM
to brians...@googlegroups.com
Hi,

from a cursory look at the first paper you linked, this seems to be a very specific and not very biological use of the STDP rule. You'd need a SpikeGeneratorGroup to create the target spike train and link a synapse with STDP from the source neuron to it, as well as a synapse between the source and the target neuron that uses "anti-STDP". But then the actual weight used for propagation has to be the sum of the weights of the two different types of synapses, which is something you cannot do with the standard Brian mechanisms; you'd have to use a network_operation (which allows you to run arbitrary code at every timestep). All in all, this is a complex task that I would only recommend to someone with significant experience with Brian.
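To make the weight-summing idea concrete, here is a pure-Python mock of the bookkeeping (not actual Brian code): each connection keeps two weight stores, one per plasticity rule, and the weight actually used for propagation is their sum, recomputed every timestep; in Brian that recombination is the job a network_operation would do. All names and values here are illustrative:

```python
import numpy as np

# Two weight stores, one per plasticity rule (illustrative: 2 inputs, 1 output)
w_stdp = np.array([0.3, 0.1])    # updated by STDP against the teacher train
w_anti = np.array([-0.1, 0.2])   # updated by anti-STDP against the output

def effective_weights(w_stdp, w_anti):
    # The weight actually used for propagation is the sum of both stores;
    # in Brian this recombination would run inside a @network_operation
    # at every timestep of the simulation.
    return w_stdp + w_anti

w_eff = effective_weights(w_stdp, w_anti)
```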
From my superficial reading of their methods section, it seems they are actually not implementing STDP in any online sense at all. Instead they use a very machine-learning-style approach: run a bit of simulation, calculate errors on the results afterwards, and apply the weight updates in batches after a couple of simulations. If you want to use a similar approach, i.e. something quite far from learning in a biological network, you might be better off writing your own custom code instead of using Brian, I guess...
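The offline variant could look something like this Brian-independent sketch: after a run, compute a pair-based STDP weight change directly from two recorded spike trains with an exponential window. The amplitudes and time constant are placeholders, not values from the paper:

```python
import numpy as np

def stdp_delta_w(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                 tau_ms=20.0):
    """Pair-based STDP weight change computed offline from two recorded
    spike trains (times in ms): potentiate when pre precedes post,
    depress when post precedes pre, with an exponential window."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau_ms)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau_ms)
    return dw
```

Calling this with the input train and the target train gives the potentiating half of the scheme, and calling it with the input train and the recorded output train gives the anti-STDP half to subtract.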

Best,
  Marcel
--
http://www.facebook.com/briansimulator
https://twitter.com/briansimulator
 
New paper about Brian 2: Stimberg M, Goodman DFM, Benichoux V, Brette R (2014). Equation-oriented specification of neural models for simulations. Frontiers in Neuroinformatics, doi: 10.3389/fninf.2014.00006.
---
You received this message because you are subscribed to the Google Groups "Brian" group.
To unsubscribe from this group and stop receiving emails from it, send an email to briansupport...@googlegroups.com.
To post to this group, send email to brians...@googlegroups.com.
Visit this group at http://groups.google.com/group/briansupport.
For more options, visit https://groups.google.com/d/optout.

Ibrahim Ozturk

Aug 11, 2015, 12:02:52 PM
to Brian
Hi Marcel,
First of all, many thanks for your great feedback on the issue.
Although I know that ReSuMe is not as biologically plausible as STDP learning, I would like to implement it and compare it with other learning rules. On the other hand, I am a bit disappointed after reading your comments, because you seem to be advising against it.

But then, the actual weight used for the propagation has to be the sum of the weights of the two different types of synapses which is something that you cannot do with the standard Brian mechanisms, you'd have to use a network_operation (which allows you to run arbitrary code at every timestep). 

When you said "cannot do with standard Brian", did you mean that it cannot be implemented with the Brian package I have, or that using network_operation is outside the standard usage of the Brian package?

As you said, ReSuMe uses STDP and anti-STDP at the same synapse to update the weight. At the very least, I would like to implement it and reproduce the paper's results.
What I have done so far:
- The logic True/False spike trains for inputs and outputs are generated.
- The network structure, with all layers, synapses, and neuron equations, has been initialized.
- The synapses are configured with the STDP learning mechanism instead of ReSuMe (just for initial trials).
- Multi-synaptic connections, initial weights, and initial parameters are implemented.

The problems:
- The input-to-output synapses are not yet correctly configured for ReSuMe, so those synapses (currently plain STDP synapses) need to be converted into ReSuMe synapses.
- I am not sure how to add the teacher signal to my simulation.

For instance: I have randomly selected the two logic inputs 0 and 0, and the operation for the current network is AND. After running the network I get some output spike train, but I am not sure how to generate the teacher signal or the teacher weight update. Basically, I am not sure how to implement a simple supervised learning technique here.
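To make the teacher-signal question concrete, here is a Brian-independent sketch loosely following the ReSuMe idea: spikes of the desired (teacher) train strengthen the synapse, spikes of the actual output train weaken it, and each contribution is scaled by how recently the input fired. All parameter values are placeholders, not taken from the paper:

```python
import numpy as np

def resume_delta_w(in_times, out_times, desired_times,
                   a=0.005, a_corr=0.01, tau_ms=20.0):
    """Sketch of a ReSuMe-style update: desired (teacher) spikes
    potentiate, actual output spikes depress, each weighted by the
    recency of presynaptic firing; `a` is the non-Hebbian term."""
    def hebb(post_times, sign):
        dw = 0.0
        for t_post in post_times:
            dw += sign * a  # non-associative contribution per spike
            for t_in in in_times:
                s = t_post - t_in
                if s > 0:  # only inputs that fired before this spike count
                    dw += sign * a_corr * np.exp(-s / tau_ms)
        return dw
    # Teacher spikes strengthen the synapse, actual output spikes weaken it.
    return hebb(desired_times, +1.0) + hebb(out_times, -1.0)
```

When the output train exactly matches the desired train, the two contributions cancel and the weight stops changing, which is the fixed point of the rule.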

Many thanks 

Marcel Stimberg

Aug 12, 2015, 11:13:25 AM
to brians...@googlegroups.com
Hi Ibrahim,

it is certainly possible to do something like this; it is just beyond the standard scope of Brian, so it is not straightforward. You will end up with quite a bit of Brian-independent custom code, which is of course fine, but please understand that we cannot help you very much with that.

But then, the actual weight used for the propagation has to be the sum of the weights of the two different types of synapses which is something that you cannot do with the standard Brian mechanisms, you'd have to use a network_operation (which allows you to run arbitrary code at every timestep). 

When you said "cannot do with standard Brian", did you mean that it cannot be implemented with the Brian package I have, or that using network_operation is outside the standard usage of the Brian package?

You can use a network operation to run arbitrary Python code at every timestep of a simulation, e.g. to update the weights in any way you want during the simulation (have a look at the documentation for more details). This is what you'd need to implement their learning rule in an online way, i.e. if you want to update the weights during a simulation run.
On the other hand, if you follow their approach of doing everything offline, then there is not much that you actually need Brian for. You'd have a simple network with static synapses, simulate it with some inputs, and record the output spike trains (with a SpikeMonitor); this would be the Brian part.

Then you'd take the recorded spike trains, compare them to the target spike trains, and calculate the weight changes according to the STDP equations. To be clear, this means that you would not have STDP synapses in your network; instead you would calculate the STDP changes explicitly, based on the spike trains. If I understood correctly, this is what they did in the paper. You then apply the weight changes to the weights of your network and run it for some more time.
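The offline loop just described could be skeletonised as follows; `simulate` and `delta_w` are made-up placeholder names standing in for the Brian simulation and the spike-train comparison respectively:

```python
import numpy as np

def train(simulate, delta_w, weights, input_trains, target_train,
          epochs=100, lr=1.0):
    """Offline training loop: simulate with static weights, record the
    output train, compare it to the target *after* the run, and apply
    the weight changes in a batch before the next run.
    `simulate(weights, input_trains)` is the Brian part (network with
    static synapses plus a SpikeMonitor); `delta_w` is the custom,
    Brian-independent part computing STDP-style updates from the
    recorded trains. Both are to be filled in by the user."""
    for _ in range(epochs):
        out_train = simulate(weights, input_trains)              # Brian part
        change = delta_w(input_trains, out_train, target_train)  # custom part
        weights = weights + lr * np.asarray(change)              # batch update
    return weights
```

Most of the work here lives in the two callbacks, which matches the point above: the Brian code is a small part of the whole program.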

Again, all this is doable with Brian, but most of the code will not be actual Brian code but code for setting the inputs, comparing the outputs to the target values, calculating the weight changes, etc.

Best,
  Marcel