to TensorFlow Runtime
If I understand it correctly, native TF only uses one CUDA stream for computation per device. Does TFRT use, or will it use, multiple CUDA streams?
Thanks,
Haibin
Idan Mintz
Sep 15, 2020, 12:23:31 AM
Hi Haibin,
TFRT uses a single stream when executing in eager (op-by-op) mode. In graph execution mode, TFRT exposes kernels that enable multi-stream computation, but it is up to the model creator and/or their compiler to perform the multi-stream assignment. The TF MLIR compiler will be capable of doing stream assignment.
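To make the "stream assignment" idea concrete, here is a toy sketch of one possible assignment strategy: ops in a dependency graph that sit at the same topological depth have no dependency between them, so they can be placed on distinct streams to overlap on the GPU. This is purely illustrative, not TFRT's or the TF MLIR compiler's actual algorithm; all names are hypothetical.

```python
from collections import defaultdict


def assign_streams(deps):
    """Toy level-based stream assignment (illustrative only).

    deps maps each op name to the list of ops it depends on.
    Ops at the same dependency depth are independent of each
    other, so each gets a distinct stream id; ops at different
    depths may reuse stream ids, since ordering between levels
    is enforced by dependencies (events/synchronization in a
    real runtime).
    """
    memo = {}

    def depth(op):
        # Longest path from any source op to `op`.
        if op not in memo:
            preds = deps.get(op, [])
            memo[op] = 0 if not preds else 1 + max(depth(p) for p in preds)
        return memo[op]

    levels = defaultdict(list)
    for op in deps:
        levels[depth(op)].append(op)

    assignment = {}
    for _, ops in levels.items():
        for stream_id, op in enumerate(sorted(ops)):
            assignment[op] = stream_id
    return assignment


# Diamond graph: b and c both depend only on a, so they can
# run concurrently and are assigned different streams.
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
streams = assign_streams(deps)
```

In this example `b` and `c` land on different streams while `a` and `d` (which serialize the graph anyway) reuse stream 0. A real compiler pass would additionally weigh kernel cost, memory reuse, and cross-stream synchronization overhead before splitting work across streams.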