Hi,
We are exploring new SNN-based memory networks implemented on SpiNNaker with Python. For this we need a very simple reward-based STDP learning mechanism: the weight and timing rules should modify a synaptic weight only when a third (dopamine) neuron has fired within a given time window, and leave the weight unchanged otherwise.
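Roughly, the gating we have in mind looks like the following (a minimal sketch only; gated_weight_change, DA_WINDOW and the window value are illustrative names of ours, not existing sPyNNaker identifiers):

#include <stdint.h>

#define DA_WINDOW 50  /* eligibility window in timesteps (illustrative value) */

/* Apply the normal STDP weight change only if the dopamine neuron fired
 * within DA_WINDOW timesteps of the spike pair being evaluated. */
static inline int32_t gated_weight_change(
        int32_t dw_stdp, uint32_t t_pair, uint32_t t_dopamine) {
    if ((t_pair - t_dopamine) <= DA_WINDOW) {
        return dw_stdp;   /* dopamine present: learn/forget as usual */
    }
    return 0;             /* no dopamine: leave the weight unchanged */
}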
Looking through the models in the current sPyNNaker library, we have not found an STDP implementation with this behaviour, so we have tried to develop it ourselves. Our idea was as follows:
- A new LIF neuron model (Dopamine) that, whenever it emits a spike, updates a global variable with the current timestamp.
- A new STDP model (Dopamine STDP) that checks that shared global variable: if the spike pair being evaluated falls within the specified time window of the last dopamine spike, the learning/forgetting is applied; otherwise the weight is left untouched (see the sketch after this list).
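To make the idea concrete, this is roughly what we tried (heavily simplified; the variable and function names below are our own and not part of the sPyNNaker C API):

#include <stdint.h>
#include <stdbool.h>

/* The variable we intended to be shared between the two models. */
uint32_t last_dopamine_spike_time = 0;

/* In the Dopamine neuron model: called whenever the neuron emits a spike. */
static inline void dopamine_on_spike(uint32_t time) {
    last_dopamine_spike_time = time;
}

/* In the Dopamine STDP timing rule: check whether the last dopamine spike
 * is recent enough for the weight update to be applied. */
static inline bool dopamine_gate_open(uint32_t time, uint32_t window) {
    return (time - last_dopamine_spike_time) <= window;
}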
After implementing and testing this on SpiNNaker, we realised that the Dopamine neuron and the synapses using our new STDP model do not share the same memory space, so each ends up with its own independent copy of the "global" variable, which invalidates our idea. When the dopamine neuron fires it updates the variable correctly in its own memory space, but when we read it from the new STDP code (checked through the logs) the variable still holds its initial value.
Is there a way, through C code or compilation directives, to make such a global variable shared between the neuron model and the STDP model? If that is not possible, could you please give us an idea of how this STDP model could be implemented in SpiNNaker with the available tools?
Thank you very much in advance.
Best regards,
Daniel