ConvolutionConnector


mazdak fatahi

Nov 28, 2022, 5:44:53 PM
to SpiNNaker Users Group
Hello,

Thanks to your support, we now have the ConvolutionConnector.
Following the sample (https://spinnakermanchester.github.io/development/2d_convolutions), I did the following to see how close the output is to the output of the same kernel in PyTorch:

import numpy
import pyNN.spiNNaker as sim
from pyNN.space import Grid2D

sim.setup(1.0)
# sim.set_number_of_neurons_per_core(sim.IF_curr_exp, (x, y))

# x, y: the 2D input size being tested (e.g. 28, 28)
# 3x3 identity kernel, so the output should simply follow the input
kernel_1 = [[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]]
conn_1 = sim.ConvolutionConnector(kernel_1)

in_shape = (x, y)
out_shape = conn_1.get_post_shape(in_shape)
n_input = int(numpy.prod(in_shape))
print(f'n_input={n_input}')
n_output = int(numpy.prod(out_shape))

# user-defined conversion of the input image to spike times (details elided)
time, spike_times = convert_to_spike(...)

src = sim.Population(n_input, sim.SpikeSourceArray, {'spike_times': spike_times},
                     label='input spikes', structure=Grid2D(in_shape[0] / in_shape[1]))

conv_1 = sim.Population(n_output, sim.IF_curr_exp(), label="conv_1",
                        structure=Grid2D(out_shape[0] / out_shape[1]))
sim.Projection(src, conv_1, conn_1, sim.Convolution())

sim.run(20)
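
(For the actual comparison with PyTorch, something along these lines could be used. This is only a sketch: it repeats the end of the script with recording enabled, "input_image" is a placeholder for whatever 2D frame convert_to_spike encodes, and the row-major mapping of neuron indices onto the output grid is an assumption.)

import torch
import torch.nn.functional as F

conv_1.record("spikes")     # must be called before sim.run()
sim.run(20)
spiketrains = conv_1.get_data("spikes").segments[0].spiketrains
spike_counts = numpy.array([len(st) for st in spiketrains]).reshape(out_shape)

# PyTorch reference: the same 3x3 kernel applied to the original image
image = torch.as_tensor(input_image, dtype=torch.float32).reshape(1, 1, *in_shape)
kernel = torch.as_tensor(kernel_1, dtype=torch.float32).reshape(1, 1, 3, 3)
reference = F.conv2d(image, kernel)  # compare qualitatively with spike_counts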

============
I tested different values of (x, y) as the input size. It works up to a maximum shape of (28, 28) without setting the number of neurons per core, but for bigger inputs (e.g. (29, 29)) I got the following error:

SpiNNManCoresNotInStateException: waiting for cores ['0, 0, 5 (ph: 5)', '0, 0, 6 (ph: 6)'] to reach one of [<CPUState.PAUSED: 10>, <CPUState.READY: 5>]

(I replaced sim.run(t) with "globals_variables.get_simulator().run_until_complete()", but it is still waiting :) )

By setting the number of neurons per core as a rectangle it works, but I'm not sure how big the rectangle should be. Is there any relation between the input size and the size of the reserved space? Is it exactly the same size and shape as the input? And what is the maximum number of neurons per core? I set it to (28, 28), which is 784 neurons per core! (If I am right, the practical maximum is normally 256.) What is the maximum size here?


Another question is about connecting the output of the conv layer to a non-square population. For example, if I want to connect it OneToOne, how will they be connected in terms of pre-post weights?

I appreciate your help in advance.

Best regards,
Mazdak FATAHI

Andrew Rowley

Nov 29, 2022, 2:57:14 AM
to mazdak fatahi, SpiNNaker Users Group

Hi,

The “number of neurons per core” can be any rectangle you like; as you noticed, this can allow many more neurons per core than in “normal” neural networks.  This is because the convolution processing happens differently from normal synapse processing, with everything done in local memory rather than using SDRAM.  The neuron processing itself is the same, though, so there will still be a limit, which is probably what you are seeing when you change the size of the overall Population relative to the neurons per core.
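
(As a concrete illustration, and only a sketch: the (16, 16) rectangle below is an arbitrary choice, not a recommendation; it simply fills in the commented-out call from the script above.)

# Any rectangle can be requested; (16, 16) is an arbitrary example,
# smaller than the (28, 28) that overflowed, and would be tuned by trial and error.
sim.set_number_of_neurons_per_core(sim.IF_curr_exp, (16, 16))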

One of the aims of the 2D work was also to support a reduction in “useless” incoming spikes to these Populations.  These are spikes received from a source core of a Population because some, but not all, of the neurons on that core are useful to the target.  In a traditional splitting of neurons, which is not 2D-aware, this can be worse because the neurons tend to be “raster scanned”, so e.g. the first split of neurons might give you a core sending a few lines from the top of the image.  With the 2D splitting this is reduced because we can make sure that only rectangles that are around the target actually reach it; note, however, that the *whole* rectangle will still reach the target, so the more neurons per core, the more useless spikes will be received!

In summary, this is likely to be a bit of trial and error.  Hopefully you can find a good balance with a reasonable number of neurons per core.

Regarding the output, we have been experimenting with going from 2D to 1D space and to “normal” neuron populations after a convolution.  The only connector we have that can achieve this so far is the PoolDenseConnector, which correspondingly uses the PoolDense synapse type.  This connector performs the equivalent of “all-to-all” connectivity, though the weights can be different for each connection, the aim being that this could be the final step in classification.  As this also works in local memory, the number of weights is restricted; to help with this, the connector can also perform a pooling operation at the same time, so the number of weights specified doesn’t have to be input_size x output_size but rather pooled_input_size x output_size.  For example, an input of 640 x 480 with a pool size of (10, 10) and a target population of size 10 would require 64 x 48 x 10 weights.  An example of this is:

import pyNN.spiNNaker as p

width = 640
height = 480
n_categories = 10
pool_shape = (10, 10)

pop_2d = p.Population(width * height, p.IF_curr_exp(),
                      structure=p.Grid2D(width / height))
pop_1d = p.Population(n_categories, p.IF_curr_exp())

pooled_width, pooled_height = p.PoolDenseConnector.get_post_pool_shape(
    (width, height), pool_shape)
weights = get_weights(pooled_width, pooled_height, n_categories)  # user-defined (see below)

p.Projection(pop_2d, pop_1d, p.PoolDenseConnector(weights, pool_shape), p.PoolDense())

I believe that the output of pop_1d could then go on to target “normal” populations, i.e. using normal PyNN networks.

I hope that helps,

Andrew :)


mazdak fatahi

Nov 29, 2022, 4:03:58 AM
to SpiNNaker Users Group
Hi Andrew,

Thank you so much for the detailed explanation.
May I ask about the "get_weights" function in the line "weights = get_weights(pooled_width, pooled_height, n_categories)"?
Is it a built-in function, or should I implement it myself? Is it something like the list used for a "FromListConnector"?
Thank you once more.
Best regards,

Andrew Rowley

Nov 29, 2022, 4:08:13 AM
to mazdak fatahi, SpiNNaker Users Group

Hi,

You would need to implement get_weights yourself.  It determines the weights between each source pixel and each target output, and it is just a list of values, one for each connection.  You can also put zero values in this list for non-connected items.
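
(A minimal sketch of what such a function could look like, assuming PoolDenseConnector accepts an array with one weight per (pooled pixel, output) pair; the shape and ordering, the random initialisation, and the use of numpy are assumptions, not the library's documented interface.)

import numpy

def get_weights(pooled_width, pooled_height, n_categories):
    # One weight per (pooled pixel, category) connection; zeros can be used
    # for connections that should effectively not exist.  Random values
    # stand in here for trained classification weights.
    rng = numpy.random.default_rng()
    return rng.uniform(0.0, 0.5, size=(pooled_width * pooled_height, n_categories))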

Andrew :)

mazdak fatahi

Nov 29, 2022, 4:32:39 AM
to SpiNNaker Users Group
Thank you,
If so, and as I understand it, the pooled shape in this example will be (64, 48), as is also described in the source code of pool_dense_connector:
"""
Where the pre- and post-synaptic populations are considered as a 2D\
array. Connect every post(row, col) neuron to many pre(row, col, kernel)\
through a (kernel) set of weights and/or delays.
"""

Then, if I want to connect a 2D population to a 1D population, I should consider them as (x, y) ---> (1, n_Classes), and every square (with pool_shape dimensions) will be mapped to one neuron in the post-population.
Am I right?

Andrew Rowley

Nov 29, 2022, 4:46:09 AM
to mazdak fatahi, SpiNNaker Users Group

Hi,

Yes, that is the connector.  If you go from 2D to 1D, every square maps to *every* neuron in the post-population, not just one.  If you only want one, that can be achieved by setting the weights to 0 for all but one square.
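
(A sketch of that masking idea, reusing the hypothetical weight layout from the get_weights sketch above, i.e. one row per pooled square and one column per post neuron; the pairing of square k with post neuron k is arbitrary and assumes n_categories <= pooled_width * pooled_height.)

import numpy

# Post neuron k receives input only from pooled square k; every other weight
# is zero, so each of those squares effectively drives a single neuron.
weights = numpy.zeros((pooled_width * pooled_height, n_categories))
for k in range(n_categories):
    weights[k, k] = 1.0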

Andrew :)

mazdak fatahi

Nov 29, 2022, 4:51:42 AM
to SpiNNaker Users Group
Great. 
Thank you very much for the quick reply :)


