You don't have to specify the batch size during graph construction. The only change needed is to update the placeholders, replacing the batch dimension with None, e.g.:
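With the TF 1.x API that means e.g. `tf.placeholder(tf.float32, [None, num_features])` instead of a fixed first dimension. A framework-free NumPy sketch of why this works (names are illustrative): the op only ever depends on the feature dimension, so the same "graph" handles any batch size.

```python
import numpy as np

num_features, num_units = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(num_features, num_units))  # fixed weights, no batch dim

def dense_layer(x):
    # x has shape (batch, num_features); the batch size is never fixed here.
    return x @ W

small = dense_layer(np.ones((2, num_features)))   # shape (2, 3)
large = dense_layer(np.ones((64, num_features)))  # shape (64, 3)
```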
--
You received this message because you are subscribed to the Google Groups "Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss+u...@tensorflow.org.
To post to this group, send email to dis...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/discuss/c4e9511b-a933-4fe8-8b79-ca9e952d782d%40tensorflow.org.
On Jan 21, 2016 6:21 PM, "마피아" <jazzsa...@gmail.com> wrote:
>
> Passing the sequence lengths for the minibatch and terminating early sounds like a good idea... I never thought of it that way. Thank you very much!
>
I said this because the rnn code will do the early termination for you (but only if you pass in the lengths).
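A sketch (NumPy only, illustrative) of how those per-example lengths can be recovered from a zero-padded batch, assuming padded timesteps are all zeros:

```python
import numpy as np

# Zero-padded batch of shape (batch, time, depth); padding timesteps are all zeros.
batch = np.zeros((3, 5, 2), dtype=np.float32)
batch[0, :2] = 1.0  # sequence of length 2
batch[1, :5] = 1.0  # sequence of length 5
batch[2, :3] = 1.0  # sequence of length 3

# A timestep is "real" if any feature at that step is non-zero.
used = np.sign(np.max(np.abs(batch), axis=2))  # (batch, time) of 0/1
lengths = used.sum(axis=1).astype(np.int32)    # (batch,)
print(lengths)  # [2 5 3]
```

These lengths are what you would pass so the RNN can stop early for each example.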
> By the way, when you say "if you go through all your data starting from the shortest sequences it may also improve convergence", do you mean I should gather short sequences and make them a minibatch?
Usually it is easier to learn from short sequences (depending on the application). Yes: gather the short sequences into minibatches first and run through those, then move on to longer sequences, and so on.
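That short-to-long ordering can be sketched in plain Python (names are illustrative):

```python
# Hypothetical corpus of variable-length sequences.
sequences = [[1] * n for n in (7, 2, 9, 3, 5, 4)]

# Sort short-to-long, then cut into minibatches of size 2.
batch_size = 2
by_length = sorted(sequences, key=len)
minibatches = [by_length[i:i + batch_size]
               for i in range(0, len(by_length), batch_size)]
# Minibatch lengths: [2, 3], [4, 5], [7, 9]
```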
Mike
Hi Mike,

Does doing this break the randomness of the data distribution and worsen the convergence rate? In another study on linear learning, we found that random shuffling (between epochs) is necessary to obtain a good convergence rate.
I found that for SGD and its variants, even if the data is initially randomized, shuffling between epochs is still necessary for good convergence.
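A minimal sketch of per-epoch shuffling (NumPy, illustrative names) — a fresh permutation is drawn every epoch rather than reusing one fixed order:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)
batch_size = 5

orders = []
for epoch in range(3):
    order = rng.permutation(len(data))  # fresh shuffle every epoch
    orders.append(order)
    for i in range(0, len(data), batch_size):
        minibatch = data[order[i:i + batch_size]]
        # ... run one training step on minibatch ...
```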
Use tf.nn.dynamic_rnn. Then for each minibatch you only feed a 3-D tensor of shape (batch, time, depth), with time padded to the maximum sequence length in that minibatch.
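A sketch of that per-minibatch padding step, assuming zero-padding and NumPy inputs (the `pad_minibatch` helper is hypothetical):

```python
import numpy as np

def pad_minibatch(seqs):
    """Zero-pad variable-length (time, depth) sequences to the longest
    length in *this* minibatch, returning a (batch, time, depth) tensor."""
    depth = seqs[0].shape[1]
    max_t = max(s.shape[0] for s in seqs)
    out = np.zeros((len(seqs), max_t, depth), dtype=seqs[0].dtype)
    for i, s in enumerate(seqs):
        out[i, :s.shape[0]] = s
    return out

# Three sequences of lengths 3, 6, and 2, each with depth 4.
seqs = [np.ones((t, 4), dtype=np.float32) for t in (3, 6, 2)]
padded = pad_minibatch(seqs)
print(padded.shape)  # (3, 6, 4) -- time padded only to this minibatch's max
```

Because padding is per minibatch (not per dataset), batches of short sequences waste no computation on a global maximum length.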