Re: [Brian] STDP + Shared Weight = impossible?


Marcel Stimberg

Jan 21, 2015, 10:03:34 AM1/21/15
to brians...@googlegroups.com, brian-de...@googlegroups.com
Hi Guillaume,

[we try to keep Brian 2 discussions on the brian-de...@googlegroups.com mailing list (CC'ed), please reply to that list.]

There are two problems with this code:
1. Our code generation mechanism generates code with all writes to shared variables first, followed by the code that updates vector-valued variables -- abstract code like the one provided for the "pre" pathway has to follow the same structure (this is what the error message is about).
2. It's not permitted to write to a scalar variable in a pre statement (similarly for a reset, for example). The reason is that this operation is not well defined: in general, all statements are implicitly vectorized, so for example "Apre += dApre" updates a vector of values, whereas "w = clip(w + Apost, 0, gmax)" has a scalar value on the left-hand side and a vector value on the right-hand side.

How to get around these restrictions depends a bit on the specific model you have in mind. Are all synapses supposed to share their weight, even if they target/arise from different neurons? If you want to share variables among sources/targets, you can store the values in the source/target group.
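As a plain-NumPy illustration (not Brian code; the numbers are made up) of why a scalar write in a vectorized pre statement is ill-defined, consider several synapses spiking at once -- each one proposes its own new value for the single shared w:

```python
import numpy as np

# Hypothetical illustration of the ambiguity: with a shared (scalar) w,
# a vectorized statement like "w = clip(w + Apost, 0, gmax)" produces one
# candidate value per spiking synapse -- which one should become the new w?
gmax = 0.01
w = 0.005                                  # one shared weight
Apost = np.array([0.002, -0.004, 0.001])   # one trace value per spiking synapse

candidates = np.clip(w + Apost, 0, gmax)   # three different results for one w
print(candidates)                          # [0.007 0.001 0.006]
```

This is exactly the scalar-left-hand-side / vector-right-hand-side mismatch the error message complains about.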

Best,
  Marcel


On 20/01/15 19:22, Guillaume Dumas wrote:
Hi!

I am trying to set up an STDP example in Brian2, with the synapses sharing a weight.

My Synapses are declared like this:

S1 = Synapses(input, neurons1,
              model='''w : 1 (shared)
                       dApre/dt = -Apre / taupre : 1 (event-driven)
                       dApost/dt = -Apost / taupost : 1 (event-driven)''',
              pre='''ge += w
                     Apre += dApre
                     w = clip(w + Apost, 0, gmax)''',
              post='''Apost += dApost
                      w = clip(w + Apre, 0, gmax)''',
              connect='j==((i+1)%375)',
              )
S1.w = '0.5*gmax'


But when I run the simulation, I get this error:

ValueError: Error generating code for code object synapses_1_post_codeobject* from 5 lines of abstract code, first line is: "Apre = Apre*exp(-(t - lastupdate)/taupre)"
All writes to scalar variables in a code block have to be made before writes to vector variables. Illegal write to w.


I searched for an example of shared weights with the Synapses class but could not find any, even without STDP.
Is this actually impossible?

Cheers!

Guillaume


Guillaume Dumas

Feb 2, 2015, 4:16:07 AM2/2/15
to brian-de...@googlegroups.com, brians...@googlegroups.com
Dear Marcel, Dear All,

thanks for your answer, and sorry for missing the Brian Development list.

I did not fully understand your comment on code generation but, to clarify what I am trying to do: this is a feature map of a convolutional network.
The original code was taken from the synapse_STDP.py example of the Brian2 documentation.

All the neurons of the NeuronGroup neurons1 have to share the same synaptic weights with respect to the subgroups of neurons of the input SpikeGeneratorGroup.
I guess the best solution would then be to use the target NeuronGroup to store those weights, but I could not find a good example online and did not manage to do it on my own. Any hint?

Cheers!

Guillaume

Marcel Stimberg

Feb 2, 2015, 10:19:17 AM2/2/15
to brian-de...@googlegroups.com
Hi Guillaume,

thanks, I now get a somewhat clearer idea of what you want to do. Unfortunately, neither the shared variable approach that you tried earlier (that would give you *one* weight for all synapses), nor storing it in the post-synaptic NeuronGroup (that would give you the same weight for all incoming spikes at a particular neuron) will work here. What you'd need is linked variables in the Synapses object, but this is something we don't allow currently. We will eventually; it's not in there yet because linking variables assumes a constant mapping between two groups, whereas the size of a Synapses object can change with the addition of synapses.

This all said, I think you *can* do what you want, but you have to use an internal Brian mechanism that bypasses most checks, i.e. you can shoot yourself in the foot if you use it incorrectly. Here's the idea (untested...):
In addition to your group neurons1, you create a NeuronGroup (that is not really about neurons) as a container for your shared weights:
    shared_weights = NeuronGroup(n_shared, 'w : 1')
In your Synapses object, you remove the 'w : 1' from the equations, but add an indexing state variable 'w_index : integer'. After you've created the Synapses object, you reference the variable w in the shared_weights group and tell Brian to use w_index as an index for it (this is the part of Brian that a user normally wouldn't use directly):
    S1.variables.add_reference(shared_weights, 'w', index='w_index')
Now, whenever Brian executes code such as ge += w for a synapse, it will use the w of the shared_weights group, indexed with the value of w_index of that synapse. Expressed in a pseudo-Pythonic way, if synapse number "syn" receives a spike for target neuron number "target", it will translate 'ge += w' into:
    neurons1.ge[target] += shared_weights.w[w_index[syn]]
Now, all you have to do is set w_index for every synapse so that it points to the correct shared weight. If your connection pattern is like the one in the link you sent (under "shared weights"), this is actually quite easy: for the connections (source -- target) 0 -- 0, 1 -- 1, 2 -- 2 you use shared weight 0, for 1 -- 0, 2 -- 1, 3 -- 2 you use shared weight 1, etc., i.e. you can simply write
    S1.w_index = 'i - j'  # i is the source index, j the target index
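Expressed as a self-contained NumPy sketch (made-up weights and a small connection pattern, not actual Brian code), the indirection above looks like this:

```python
import numpy as np

# Hypothetical sketch of the mechanism: each synapse stores an integer
# w_index, and reading "w" really reads shared_weights.w[w_index[syn]].
shared_w = np.array([0.2, 0.5, 0.8])   # the shared-weight container group

# connections 0--0, 1--1, 2--2 share weight 0; 1--0, 2--1, 3--2 share weight 1
i = np.array([0, 1, 2, 1, 2, 3])       # source index per synapse
j = np.array([0, 1, 2, 0, 1, 2])       # target index per synapse
w_index = i - j                        # as in S1.w_index = 'i - j'

ge = np.zeros(3)                       # conductance of the 3 target neurons
for syn in [0, 3]:                     # synapses 0 and 3 receive a spike
    ge[j[syn]] += shared_w[w_index[syn]]   # 'ge += w' with the indirection
print(ge)                              # target 0 gets 0.2 + 0.5 = 0.7
```

Both spiking synapses target neuron 0 but point at different shared weights, so only that neuron's conductance changes.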

As I said, I did not actually test this, but it should work. Let us know how it goes! Best
  Marcel
--
http://www.facebook.com/briansimulator
https://twitter.com/briansimulator
 
New paper about Brian 2: Stimberg M, Goodman DFM, Benichoux V, Brette R (2014).Equation-oriented specification of neural models for simulations. Frontiers Neuroinf, doi: 10.3389/fninf.2014.00006.
---
You received this message because you are subscribed to the Google Groups "Brian" group.
To unsubscribe from this group and stop receiving emails from it, send an email to briansupport...@googlegroups.com.
To post to this group, send email to brians...@googlegroups.com.
Visit this group at http://groups.google.com/group/briansupport.
For more options, visit https://groups.google.com/d/optout.

Guillaume Dumas

Feb 2, 2015, 5:43:58 PM2/2/15
to brian-de...@googlegroups.com
Dear Marcel,

thanks a lot for your help.
So I tried your solution but got stopped along the way when adding the reference index to my Synapses.
Here is the error I get:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-12-c9b4433f7a6f> in <module>()
     36              connect='j==((i+1)%375)',
     37              )
---> 38 S1.variables.add_reference(shared_weights, 'w', index='w_index')
     39 S1.w_index = 'i - j'  # i is the source index, j the target index
     40 

/Users/kwisatz/anaconda/lib/python2.7/site-packages/Brian2-2.0beta-py2.7-macosx-10.5-x86_64.egg/brian2/core/variables.pyc in add_reference(self, name, group, varname, index)
   1673         if self.owner is not None and index in self.owner.variables:
   1674             if (not self.owner.variables[index].read_only and
-> 1675                     group.variables.indices[varname] != group.variables.default_index):
   1676                 raise TypeError(('Cannot link variable %s to %s in group %s -- '
   1677                                  'need to precalculate direct indices but '

AttributeError: 'str' object has no attribute 'variables'

I updated my Brian2 installation, but it did not seem to change anything.
Did I make a mistake in the declaration?

Cheers,

Guillaume

Marcel Stimberg

Feb 3, 2015, 6:00:28 AM2/3/15
to brian-de...@googlegroups.com
Hi Guillaume,

sorry, apparently I switched the order of the arguments around; the add_reference line should read:
    S1.variables.add_reference('w', shared_weights, index='w_index')

Best,
  Marcel
Guillaume Dumas

Feb 10, 2015, 5:07:00 PM2/10/15
to brian-de...@googlegroups.com
Dear Marcel,

the references seem to work, but the indexing appears too complicated since my receptive fields overlap.
Is it possible to use a direct mapping from i to j for S1.w_index and the S1 connections, instead of formula strings?

Best,

Guillaume


Marcel Stimberg

Feb 12, 2015, 5:02:48 AM2/12/15
to brian-de...@googlegroups.com
Hi Guillaume,

you can specify everything manually without the use of strings. First create arrays of source and target indices, e.g.:
    sources = array([0, 1, 2, 1, 2, 3, 2, 3, 4])
    targets = array([0, 0, 0, 1, 1, 1, 2, 2, 2])
(this assumes that the two layers of your network are separate NeuronGroups -- for performance it is actually better to put everything into a single NeuronGroup and just use the indices to implement the layer structure. You could define an additional "layer : integer" state variable in the NeuronGroup to make the layer information more explicit)

Then you can use these two arrays to create the connections:
    S1.connect(sources, targets)

The usual Brian syntax for setting synaptic state variables uses two-dimensional indexing based on the source and the target, but in your case this is not very practical, I think. You can always set the state variable as a flat, one-dimensional array in the same order in which you created the connections, e.g.:
    S1.w_index[:] = array([0, 1, 2, 0, 1, 2, 0, 1, 2])

Alternatively, if you have a matrix M_indices of size N_sources x N_targets, you could also write (this would work no matter in what order you created the connections):
    S1.w_index[:] = M_indices[S1.i, S1.j]
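A minimal NumPy sketch of that last line (with the made-up connection arrays from above; -1 marks pairs without a synapse) shows how the fancy indexing recovers one shared-weight index per synapse, regardless of creation order:

```python
import numpy as np

# Hypothetical example: explicit connection arrays plus a source-x-target
# matrix of shared-weight indices, as described above.
sources = np.array([0, 1, 2, 1, 2, 3, 2, 3, 4])
targets = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# M_indices[src, tgt]: shared-weight index of that connection (-1 = no synapse)
M_indices = np.array([[ 0, -1, -1],
                      [ 1,  0, -1],
                      [ 2,  1,  0],
                      [-1,  2,  1],
                      [-1, -1,  2]])

# one entry per synapse, in the order the connections were created
w_index = M_indices[sources, targets]
print(w_index)   # [0 1 2 0 1 2 0 1 2]
```

Because the matrix is indexed with the actual (source, target) pairs, shuffling the order of the connection arrays would shuffle w_index consistently.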

Hope that helps, best
  Marcel

Guillaume Dumas

Feb 18, 2015, 9:53:43 AM2/18/15
to brian-de...@googlegroups.com
Hi Marcel,

this was tricky, but now it's working perfectly!
Thanks a lot for your help.

Best,

Guillaume