hi,
I'd like to try porting an inference workflow we have to SpiNNaker 2. It involves (a) massively parallel simulations and (b) some deep neural network training (PyTorch based). Having read the SpiNNaker 2 paper, I suspect the individual simulations would each fit on a single chip (they don't benefit from multi-core parallelism on regular CPUs), and that the deep net could be ported with snn_toolbox. Assuming my naive assumptions hold, a few questions remain:
(1) For the massively parallel simulations, my intuition on a GPU would be to write the simulations in batched/vectorized form in, e.g., TensorFlow. Would doing this and then converting with snn_toolbox be workable, or terrible?
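To make (1) concrete, here is the kind of batched/vectorized form I mean: many independent simulations advanced by a single vectorized update per time step, rather than one Python loop per simulation. This toy example uses independent leaky integrators and NumPy purely for brevity; the real version would be TensorFlow, and all names here are illustrative, not from any existing codebase.

```python
import numpy as np

def run_batched_sim(n_sims=1024, n_steps=100, dt=0.1, tau=1.0, seed=0):
    """Step n_sims independent leaky integrators at once.

    Each simulation is dx/dt = (-x + I) / tau with its own constant
    input I; one vectorized update advances the whole batch per step.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_sims)                # state, one scalar per simulation
    I = rng.uniform(0.5, 1.5, n_sims)   # per-simulation input
    for _ in range(n_steps):
        x += dt * (-x + I) / tau        # one update for all simulations
    return x, I

states, inputs = run_batched_sim()
# With constant input, each state converges toward its own I.
```

The point is that the per-simulation work is expressed as array operations over a batch axis, which is what would map naturally onto a GPU and, I hope, onto whatever snn_toolbox produces.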
(2) It seems that snn_toolbox doesn't handle training, but if I supplied the gradients myself, could an iterative optimizer still be implemented?
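For (2), what I have in mind is a plain host-side optimization loop: evaluate the loss (in the real setup, via the converted network), compute the gradient by hand, and apply iterative updates. A minimal sketch, assuming hand-coded analytic gradients and using NumPy on a toy quadratic as a stand-in for the real loss; every name here is hypothetical:

```python
import numpy as np

def fit_quadratic(target, lr=0.1, n_iters=200):
    """Minimise f(w) = 0.5 * ||w - target||^2 with a hand-coded gradient.

    Stand-in for: loss evaluated via the converted network, gradient
    derived by hand on the host, parameters updated iteratively.
    """
    w = np.zeros_like(target)
    for _ in range(n_iters):
        grad = w - target          # analytic gradient of f, supplied by hand
        w -= lr * grad             # plain gradient-descent update
    return w

w_opt = fit_quadratic(np.array([1.0, -2.0, 3.0]))
```

The question is really whether this kind of evaluate-gradient-update loop is feasible when the forward pass lives on the hardware, not whether the host-side arithmetic is hard.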
thanks in advance,
Marmaduke