Is there a way or a plan to leverage the new bf16 data format on the 3rd-gen tensor cores of the NVIDIA Ampere architecture?

Chan Yu

Nov 12, 2020, 4:33:06 AM
to TensorFlow Developers
Since we now have bf16 support on the A100 and NVIDIA's 3000-series products, I think using bf16 should be highly recommended, instead of fp16, which can lead to NaNs in training from time to time.

Is there a way, or a plan, to leverage the new bf16 data format on the 3rd-gen tensor cores of the NVIDIA Ampere architecture?
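
For reference, here is a minimal sketch of what I have in mind, using the Keras mixed precision API. I'm assuming the existing 'mixed_bfloat16' policy (currently aimed at TPUs) would route matmuls and convolutions to the Ampere tensor cores on GPU; that routing is exactly what doesn't seem to exist yet:

    import tensorflow as tf

    # Compute in bfloat16 but keep variables in float32, analogous to the
    # existing 'mixed_float16' policy. Whether the GPU kernels pick this
    # up and use the Ampere tensor cores is the open question.
    tf.keras.mixed_precision.experimental.set_policy('mixed_bfloat16')

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4096, activation='relu', input_shape=(4096,)),
        # Keep the final layer in float32 so the loss is computed in full
        # precision, as the mixed precision guide recommends.
        tf.keras.layers.Dense(10, dtype='float32'),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

One appealing property: bf16 has the same exponent range as fp32, so unlike fp16 there should be no need for loss scaling at all.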

Sanjoy Das

Nov 12, 2020, 11:41:06 PM
to Chan Yu, Nathan Luehr, Reed Wanderman-Milne, TensorFlow Developers
On Thu, Nov 12, 2020 at 1:33 AM Chan Yu <ael...@gmail.com> wrote:
Since we now have bf16 support on the A100 and NVIDIA's 3000-series products, I think using bf16 should be highly recommended, instead of fp16, which can lead to NaNs in training from time to time.

cuDNN, cuBLAS, etc. don't support bf16 today. Do you mind opening a GitHub issue? That will help us gauge demand for this in the external community.


-- Sanjoy


Farhan Abdul

Nov 17, 2020, 7:44:18 PM
to TensorFlow Developers, Sanjoy Das, ael...@gmail.com, Nathan Luehr, ree...@google.com
I think someone else mentioned this here before, but cuBLAS and cuFFT do support bf16 now.
I think there are sizable requests for it.
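
For what it's worth, here is a quick sketch to probe whether a given TF build will even place a bfloat16 matmul on the GPU (this only checks op placement; it says nothing about whether the tensor cores are actually used):

    import tensorflow as tf

    tf.debugging.set_log_device_placement(True)  # log which device runs each op

    a = tf.cast(tf.random.uniform((1024, 1024)), tf.bfloat16)
    b = tf.cast(tf.random.uniform((1024, 1024)), tf.bfloat16)
    c = tf.matmul(a, b)  # falls back to the CPU if no GPU bf16 kernel exists
    print(c.dtype, c.device)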