Now that we have BF16 support on the A100 and the 3000-series NVIDIA GPUs, I think it would be advisable to use bf16 instead of fp16, which can lead to NaNs in training from time to time.
Is there a way, or a plan, to leverage the new bf16 data format on the 3rd-generation Tensor Cores of NVIDIA's Ampere architecture?
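To make the fp16-vs-bf16 point concrete, here is a small stdlib-only sketch (not TensorFlow code; bf16 is emulated by truncating a float32 bit pattern, which is exactly what the format is): bf16 keeps float32's 8-bit exponent, so a value like 1e5 stays finite, while fp16's maximum finite value is 65504, so the same value overflows when packed as half precision — the classic source of inf/NaN in fp16 training.

```python
import struct

def to_bf16(x: float) -> float:
    """Round x to bfloat16 precision: bf16 is the top 16 bits of a
    float32, i.e. same sign/exponent, mantissa truncated to 7 bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# 1e5 is representable in bf16 (coarsely: it rounds to 99840.0) ...
print(to_bf16(1e5))

# ... but overflows fp16, whose largest finite value is 65504.
try:
    struct.pack(">e", 1e5)  # ">e" is IEEE half precision (fp16)
    print("fp16: ok")
except OverflowError:
    print("fp16: overflow")
```

In TensorFlow terms (assuming the Keras mixed-precision API), the ask would amount to setting the global policy to `mixed_bfloat16` rather than `mixed_float16` and having Ampere's Tensor Cores pick up the bf16 matmuls.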
--
You received this message because you are subscribed to the Google Groups "TensorFlow Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to developers+...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/developers/5cb2c863-13b3-44f3-b505-a1f8d37dbb92n%40tensorflow.org.