Block Floating Point Support


Andrew Stanford-Jason

Jun 18, 2021, 3:49:06 AM
to SIG Micro
Hello all, I am interested in the possibility of supporting a block floating point (BFP) representation of data/dynamic tensors. To be clear, BFP is a quantisation representation where a tensor has a single floating point exponent shared by all elements. This is in contrast to floating point tensors, where each element has its own exponent, and fixed point, where all elements have a static exponent (typically common to all elements).
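To illustrate the representation described above, here is a minimal sketch of BFP quantisation and reconstruction. The function names (`to_bfp`, `from_bfp`) and the 16-bit mantissa width are illustrative assumptions, not part of any existing API:

```python
import math
import numpy as np

def to_bfp(x, mant_bits=16):
    """Quantise a float array to integer mantissas plus ONE shared exponent.

    The exponent is chosen from the largest-magnitude element, so that
    element's mantissa uses (almost) the full mant_bits range; smaller
    elements lose precision proportionally. Mantissas are kept in int32
    here to sidestep the rounding-to-2**(mant_bits-1) edge case.
    """
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros(np.shape(x), dtype=np.int32), 0
    # math.frexp: max_abs = m * 2**e with 0.5 <= m < 1
    _, e = math.frexp(max_abs)
    exp = e - (mant_bits - 1)                 # shared tensor exponent
    mant = np.round(x * 2.0 ** -exp).astype(np.int32)
    return mant, exp

def from_bfp(mant, exp):
    """Reconstruct floats from integer mantissas and the shared exponent."""
    return mant.astype(np.float64) * 2.0 ** exp
```

The round-trip error of any element is bounded by half the weight of one mantissa bit, i.e. `0.5 * 2**exp`, which is what makes the representation attractive when the whole tensor has similar dynamic range.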

My interests lie in supporting more traditional DSP through tflm. I would like to express, for example, an audio frontend, which can have enormous dynamic range, in TensorFlow. Using BFP allows much more efficient computation. Acoustic echo cancellation is a good example, where the signal of interest can be 60 dB beneath the input signal (~10 bits down).
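For anyone unfamiliar with the dB-to-bits conversion above, the arithmetic is just:

```python
import math

# One bit of amplitude headroom is worth 20*log10(2) ~= 6.02 dB,
# so a signal 60 dB below the input sits roughly 10 bits down.
DB_PER_BIT = 20.0 * math.log10(2.0)   # ~= 6.02 dB per bit
bits_down = 60.0 / DB_PER_BIT         # ~= 10 bits
```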

I have plenty of experience implementing BFP DSP functions. At a high level, BFP implementations of kernels are fixed point kernels with redundant sign bits and exponent management, i.e. they are implementable with integer arithmetic.
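The "integer arithmetic plus exponent management" point can be sketched with a BFP dot product: mantissas multiply and accumulate as plain integers, and the exponents are simply added. The function name and signature here are hypothetical, chosen only for illustration:

```python
import numpy as np

def bfp_dot(a_mant, a_exp, b_mant, b_exp):
    """Dot product of two BFP vectors using only integer arithmetic.

    Mantissa products accumulate in int64 (wide enough for 16-bit
    mantissas over short vectors); exponent management is just the
    sum a_exp + b_exp. No per-element floating point math occurs.
    """
    acc = int(np.sum(a_mant.astype(np.int64) * b_mant.astype(np.int64)))
    return acc, a_exp + b_exp
```

A production kernel would additionally renormalise the accumulator (shifting out redundant sign bits and adjusting the exponent) to keep precision for subsequent stages, which is exactly the exponent-management bookkeeping mentioned above.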

So my questions are:
  • is this possible?
  • is anyone else interested in this?
  • is this a roadmap feature?
  • if not, then what's the process? An RFC?
Thank you

Advait Jain

Jun 18, 2021, 1:03:03 PM
to Andrew Stanford-Jason, SIG Micro
Adding the tflite mailing list to this conversation.
