<claudiac...@gmail.com> wrote:
>
> Jiri, thank you very much, the information you gave was very helpful!
>
> After analyzing model.cc, I concluded that the autotuning changes multiple parameters, including the buffer size, correct? What exactly does the interleave method store in this buffer: the elements that still need to be processed, or the interleaved output? I'm still a bit confused about how it all works...
>
> On Wednesday, July 15, 2020 at 7:28:11 PM UTC+1, Jiri Simsa wrote:
>>
>> The bulk of the autotuning implementation can be found here:
>> - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/model.h
>> - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/model.cc
>>
>> A high-level summary: tf.data collects runtime information about which transformations the input pipeline performs and how much time is spent in each of them. This information is used to periodically perform a background computation that applies an analytical model of the input pipeline's performance, together with a hill-climbing technique, to decide how to divide the available CPU across parallel transformations; the result of this computation is then propagated to the actual running pipeline.
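>> To make the hill-climbing part concrete, here is a deliberately simplified, self-contained sketch of the idea. The cost model (`processing_time / parallelism`), the `budget` parameter, and the function names are all invented for illustration; the real implementation in model.cc uses a much richer analytical model of each node type.

```python
# Toy illustration of the hill-climbing idea behind tf.data autotuning.
# NOT the real model.cc implementation: the cost model below is a made-up
# stand-in, and `budget` plays the role of the available CPU cores.

def estimated_output_time(processing_times, parallelism):
    """Analytical stand-in: each stage's latency shrinks with its parallelism."""
    return sum(t / p for t, p in zip(processing_times, parallelism))

def hill_climb(processing_times, budget):
    """Greedily give one extra unit of parallelism to whichever stage
    reduces the modeled output time the most, until the budget is spent."""
    parallelism = [1] * len(processing_times)
    while sum(parallelism) < budget:
        best_i = None
        best_time = estimated_output_time(processing_times, parallelism)
        for i in range(len(parallelism)):
            parallelism[i] += 1  # tentatively bump stage i
            t = estimated_output_time(processing_times, parallelism)
            parallelism[i] -= 1  # undo the tentative bump
            if t < best_time:
                best_i, best_time = i, t
        if best_i is None:  # no step improves the model; stop early
            break
        parallelism[best_i] += 1
    return parallelism
```

>> In this toy version, a stage that dominates the modeled processing time attracts most of the extra parallelism, which is the same qualitative behavior you see when autotuning raises num_parallel_calls on a slow interleave or map.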
>>
>> Best,
>>
>> Jiri
>>
>> On Wed, Jul 15, 2020 at 11:13 AM cláudia correia <claudiac...@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> When using the interleave method, it is possible to set num_parallel_calls equal to AUTOTUNE. When analyzing the parallel_interleave_dataset op (parallel_interleave_dataset_op.cc), I understood that during training the autotuning may increase or decrease the num_parallel_calls argument; however, I didn't find the code responsible for the autotuning.
>>> Given this, I would like to know how the autotuning works, more precisely in which situations it changes the value of num_parallel_calls. I would also like to know where the code responsible for the autotuning lives.
>>>
>>> Thank you in advance for your help!
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups "TensorFlow Developers" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an email to devel...@tensorflow.org.
>>> To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/developers/57cd81aa-b642-499d-bdc0-ed47c27b141do%40tensorflow.org.