You could swap them out, but you have to do it through the client API (there is no automatic solution like there is for dynamic_rnn). That is, using persistent tensors you can move things between GPU and CPU with a combination of GetSessionHandle/GetSessionTensor/DeleteSessionTensor.

You could also split the graph into multiple parts. You backpropagate the gradients into the next section of the graph by feeding in the backprop results from the previous session.run call. For instance, see the example in function_test: it wraps the backprop computation for a computation in a single TensorFlow function using Defun and _symbolic_gradient. You can then feed this function the backprops from the upstream graph and it will produce new backprops.
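A minimal sketch of the persistent-tensor route (TensorFlow 1.x Python API; the tensors and values are made up for illustration). tf.get_session_handle stores a tensor in the session's state and hands back a string handle, tf.get_session_tensor re-imports the stored value into a later computation, and tf.delete_session_tensor frees it. Pinning the handle op with tf.device is how you would control where the value is kept, but that placement detail is an assumption on my part:

    import tensorflow as tf

    sess = tf.Session()

    a = tf.constant(1.0)
    b = tf.constant(2.0)
    c = a * b

    # Store c in the session state and get back a string handle instead of the value.
    # Wrapping this op in `with tf.device('/cpu:0'):` should keep the stored value in
    # host memory (assumption: the handle op's placement decides the storage device).
    h_op = tf.get_session_handle(c)
    h = sess.run(h_op)  # h.handle is the string key

    # Later: re-import the stored tensor into another part of the graph.
    p, c_restored = tf.get_session_tensor(h.handle, tf.float32)
    d = c_restored * 10.0
    print(sess.run(d, feed_dict={p: h.handle}))  # 20.0

    # Free the stored tensor once it is no longer needed.
    del_p, deleter = tf.delete_session_tensor(h.handle)
    sess.run(deleter, feed_dict={del_p: h.handle})

And a rough sketch of the graph-splitting route, again TF 1.x and with made-up names (block1/block2, the placeholder shapes, etc.). The cut point is a placeholder; block2's gradient with respect to that placeholder is fetched in one session.run call and fed into tf.gradients(..., grad_ys=...) for block1 in the next call:

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 4])
    labels = tf.placeholder(tf.float32, [None, 1])

    with tf.variable_scope('block1'):
        h1 = tf.layers.dense(x, 16, activation=tf.nn.relu)

    # Cut point: block2 only sees block1's output through this placeholder.
    h1_in = tf.placeholder(tf.float32, [None, 16])

    with tf.variable_scope('block2'):
        out = tf.layers.dense(h1_in, 1)
    loss = tf.reduce_mean(tf.square(out - labels))

    block1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='block1')
    block2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='block2')

    # Gradients inside block2, plus the gradient w.r.t. its input (to pass upstream).
    grads2 = tf.gradients(loss, block2_vars + [h1_in])
    d_h1 = grads2[-1]

    # Gradients inside block1, seeded with the backprop fed in from the previous run.
    d_h1_in = tf.placeholder(tf.float32, [None, 16])
    grads1 = tf.gradients(h1, block1_vars, grad_ys=d_h1_in)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        x_val = np.random.rand(8, 4).astype(np.float32)
        y_val = np.random.rand(8, 1).astype(np.float32)

        # Forward pass of block1 (its output could be swapped out or recomputed later).
        h1_val = sess.run(h1, {x: x_val})

        # Forward + backward of block2; also fetch the gradient w.r.t. its input.
        g2_vals, d_h1_val = sess.run([grads2[:-1], d_h1],
                                     {h1_in: h1_val, labels: y_val})

        # Backward through block1, seeded with the backprop coming from block2.
        g1_vals = sess.run(grads1, {x: x_val, d_h1_in: d_h1_val})

The variable gradients g1_vals/g2_vals could then be applied with an optimizer's apply_gradients. The Defun/_symbolic_gradient example in function_test packages the same per-block backprop into a single TensorFlow function instead of spelling it out with tf.gradients.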
On Tue, Jan 31, 2017 at 1:51 AM, Paul Voigtlaender <p.voigt...@gmail.com> wrote:
Hi,
I noticed that the RNNs in TensorFlow have an option to swap out memory from the GPU to the CPU.
Is it also possible to do this for a feed-forward network, e.g. swapping out the memory of the lower conv layers?
Or alternatively, maybe I can split the graph into multiple parts and evaluate part by part? But how can I then backpropagate the gradients from one part into the next one?
(I'm talking about an approach like the one in https://arxiv.org/pdf/1611.08323.pdf:
"we partition the computation graph into several subsequent blocks by manually placing cut points in the graph. We then compute the derivatives for each block individually. To this end, we perform one (partial) forward pass per block and only store the feature maps for the block whose derivatives are currently being computed." This is done in Theano.)
I'd prefer the first option, since with it the intermediate results don't need to be recomputed.