Documentation on adding support for new accelerator devices (FPGA) in TFRT?


Yaxiong Zhao

Jun 21, 2020, 10:34:10 PM
to TensorFlow Runtime
A new accelerator is being built with custom instructions and other optimizations for a certain type of deep learning model used in e-commerce applications.

We are looking to integrate it with TensorFlow Serving (i.e., model inference).

We could not find detailed instructions on how to do that with the new TFRT.
Or are we misunderstanding TFRT, and is it perhaps not the right technology for this purpose?

Bairen YI

Jun 21, 2020, 10:43:22 PM
to Yaxiong Zhao, TensorFlow Runtime
Not on the team, but the discussions in this thread might be of help:


Best,
Bairen

On 22 Jun 2020, at 10:34, Yaxiong Zhao <justi...@gmail.com> wrote:



Yaxiong Zhao

Jun 21, 2020, 11:58:36 PM
to Bairen YI, TensorFlow Runtime
Thanks Bairen! Will be following the development.

My reading has been as follows (to save a click for casual readers of this thread in the future):

#1 For TF 1.15-2.2 (and the upcoming 2.3), and for some time into the near future, there are no defined, modular APIs for adding a new device.

#2 The suggested approach is to directly modify a fork and release one's own TensorFlow package with the device support (see the rough sketch after this list).

#3 Modular APIs built on MLIR are being actively worked on. At the time of this writing, there is some progress around defining the underlying APIs (StreamExecutor; I am not really sure about its relationship to device support, though).
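
For future readers, here is a very rough sketch of what #2 amounts to inside a TF 1.x/2.x fork, modeled on the in-tree device factories under tensorflow/core/common_runtime/. The FpgaDevice and FpgaDeviceFactory names are hypothetical, the memory limit and allocator are placeholders, and exact signatures shift between TF versions, so treat this as orientation rather than a working plugin:

// Hypothetical FPGA device registration for a TensorFlow fork (TF 1.x/2.x era).
// Modeled on the in-tree device factories; signatures vary between versions.
#include <memory>
#include <vector>

#include "tensorflow/core/common_runtime/device_factory.h"
#include "tensorflow/core/common_runtime/local_device.h"
#include "tensorflow/core/framework/allocator.h"

namespace tensorflow {

class FpgaDevice : public LocalDevice {
 public:
  FpgaDevice(const SessionOptions& options, const DeviceAttributes& attrs)
      : LocalDevice(options, attrs) {}

  // The runtime asks the device for an allocator when placing tensors.
  // A real plugin would return an allocator backed by device memory;
  // the CPU allocator here is only a placeholder.
  Allocator* GetAllocator(AllocatorAttributes attr) override {
    return cpu_allocator();
  }

  // Block until all outstanding work on the accelerator has finished.
  Status Sync() override { return Status::OK(); }
};

class FpgaDeviceFactory : public DeviceFactory {
 public:
  Status ListPhysicalDevices(std::vector<string>* devices) override {
    devices->push_back("/physical_device:FPGA:0");
    return Status::OK();
  }

  Status CreateDevices(const SessionOptions& options,
                       const string& name_prefix,
                       std::vector<std::unique_ptr<Device>>* devices) override {
    DeviceAttributes attrs = Device::BuildDeviceAttributes(
        name_prefix + "/device:FPGA:0", DeviceType("FPGA"),
        Bytes(1 << 30) /* placeholder memory limit */, DeviceLocality(),
        "hypothetical FPGA accelerator");
    devices->push_back(std::make_unique<FpgaDevice>(options, attrs));
    return Status::OK();
  }
};

// Makes the device visible to the runtime and to device listing APIs.
REGISTER_LOCAL_DEVICE_FACTORY("FPGA", FpgaDeviceFactory);

}  // namespace tensorflow

On top of the device itself, every op that should run on the accelerator still needs an OpKernel registered for the "FPGA" device type via REGISTER_KERNEL_BUILDER, which is where the bulk of the forking work actually lives.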