Hi,
The “number of neurons per core” can be any rectangle you like; as you noticed, this allows many more neurons per core than in “normal” neural networks. This is because the convolution processing works differently from normal synapse processing: all of it is done in local memory rather than via SDRAM. The neuron processing itself is unchanged, though, so there is still a limit, which is probably what you are seeing when you change the size of the overall Population relative to the neurons per core.
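To make the splitting concrete, here is a small illustrative calculation of how many cores a 2D population occupies for a given per-core rectangle. This is plain-Python arithmetic, not the sPyNNaker API; the function name is mine.

```python
from math import ceil

def cores_needed(pop_shape, per_core_shape):
    """Number of cores a 2D population splits into for a given
    per-core rectangle (illustrative arithmetic only)."""
    (W, H), (w, h) = pop_shape, per_core_shape
    return ceil(W / w) * ceil(H / h)

# A 640 x 480 population with a 16 x 16 rectangle of neurons per core:
print(cores_needed((640, 480), (16, 16)))  # 40 * 30 = 1200 cores
```

Larger per-core rectangles mean fewer cores, but (as below) also more “useless” spikes delivered to each target.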
One of the aims of the 2D work was also to reduce the number of “useless” incoming spikes to these Populations. These are spikes received from a source core because *some* of that core's neurons are useful to the target, but not all of them. A traditional splitting of neurons that is not 2D-aware can make this worse, because the neurons tend to be “raster scanned”: the first slice of neurons might be a core holding a few lines from the top of the image. The 2D splitting reduces this, because we can make sure that only rectangles around the target actually reach it. Note, though, that the *whole* rectangle still reaches the target, so the more neurons per core, the more useless spikes will be received!
In summary, this is likely to be a bit of trial-and-error. Hopefully you can find a good balance with a reasonable number of neurons-per-core.
Regarding the output, we have been experimenting with going from 2D space to the 1D “normal” neuron populations that occur after a convolution. The only connector we have that can achieve this so far is the PoolDenseConnector, which correspondingly uses the PoolDense synapse type. This connector performs the equivalent of “all-to-all” connectivity, though the weights can differ for each connection; the aim is that this could be the final step in classification. As this also works in local memory, the number of weights is restricted, so the connector can also perform a pooling operation simultaneously: the number of weights specified doesn’t have to be input_size x output_size but rather pooled_input_size x output_size. For example, an input of 640 x 480 with a pool size of (10, 10) and a target population size of 10 would require 64 x 48 x 10 weights. An example of this is:
import pyNN.spiNNaker as p

width = 640
height = 480
n_categories = 10
pool_shape = (10, 10)
# Grid2D takes the aspect ratio of the 2D population (width / height)
pop_2d = p.Population(width * height, p.IF_curr_exp(), structure=p.Grid2D(width / height))
pop_1d = p.Population(n_categories, p.IF_curr_exp())
# Shape of the input after pooling; here (64, 48)
pooled_width, pooled_height = p.PoolDenseConnector.get_post_pool_shape((width, height), pool_shape)
# get_weights is user-supplied: one weight per (pooled pixel, output) connection
weights = get_weights(pooled_width, pooled_height, n_categories)
p.Projection(pop_2d, pop_1d, p.PoolDenseConnector(weights, pool_shape), p.PoolDense())
I believe that the output of pop_1d could then go on to target “normal” populations i.e. using normal PyNN networks.
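As a quick sanity check of the weight-count arithmetic above, this plain-Python snippet reproduces the 640 x 480, pool (10, 10), 10-category example; it assumes the pooled shape is simply the input shape divided by the pool size, which is what get_post_pool_shape is described as computing.

```python
from math import ceil

width, height, n_categories = 640, 480, 10
pool_shape = (10, 10)

# Assumed pooling rule: input shape divided by pool size, rounded up
pooled_w = ceil(width / pool_shape[0])
pooled_h = ceil(height / pool_shape[1])
n_weights = pooled_w * pooled_h * n_categories
print(pooled_w, pooled_h, n_weights)  # 64 48 30720
```

So the pooled connector needs 30,720 weights instead of the 640 x 480 x 10 = 3,072,000 a full dense connection would require.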
I hope that helps,
Andrew :)
--
You received this message because you are subscribed to the Google Groups "SpiNNaker Users Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
spinnakeruser...@googlegroups.com.
To view this discussion on the web, visit
https://groups.google.com/d/msgid/spinnakerusers/11dcc4ff-e88f-4455-b53c-bb480aaa6b95n%40googlegroups.com.
Hi,
You would need to implement get_weights yourself. It determines the weight between each source pixel and each target output: it is just a list of values, one for each connection. You can also put zeros in this list for non-connected items.
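As a minimal sketch of what such a get_weights might look like: one weight per (pooled pixel, output) pair, with zeros marking unconnected items. The random initialisation and the (pooled pixels x outputs) layout are my assumptions for illustration, not a prescribed format.

```python
import numpy as np

def get_weights(pooled_width, pooled_height, n_categories, seed=1):
    """Illustrative stand-in for the user-supplied get_weights:
    one weight per connection between each pooled pixel and each
    output neuron; zeros mean 'not connected'."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 0.5, size=(pooled_width * pooled_height, n_categories))
    # Example: disconnect the first pooled pixel from every output
    w[0, :] = 0.0
    return w

weights = get_weights(64, 48, 10)
print(weights.shape)  # (3072, 10)
```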
Andrew :)
Hi,
Yes, that is the connector. If you consider 2D to 1D, every square maps to *every* neuron in the post-population, not just one. If you only want one, you can achieve that by setting the weights to 0 for all but one square.
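A small sketch of that zeroing trick: build a full (squares x outputs) weight matrix, but leave it zero everywhere except the one output each square should drive. The function name and mapping argument are mine, purely for illustration.

```python
import numpy as np

def one_to_one_square_weights(n_squares, n_out, target_of, weight=1.0):
    """Weight matrix in which each pooled square drives exactly one
    output neuron (target_of[square]); all other weights stay zero."""
    w = np.zeros((n_squares, n_out))
    for square, out in enumerate(target_of):
        w[square, out] = weight
    return w

# 4 squares, 2 outputs: squares 0,1 -> output 0; squares 2,3 -> output 1
w = one_to_one_square_weights(4, 2, [0, 0, 1, 1])
print(w)
```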
Andrew :)