Hello all, I am interested in the possibility of supporting a block floating point (BFP) representation for data/dynamic tensors. To be clear, BFP is a quantisation scheme in which all elements of a tensor share a single floating point exponent. This is in contrast to floating point, where each element carries its own exponent, and fixed point, where the exponent is static (and typically common to all elements).
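For concreteness, here is a minimal sketch of what I mean by the representation. The `BfpVector` struct and names are purely illustrative, not a proposed API:

```cpp
#include <cmath>
#include <cstdint>

// Illustrative BFP container: integer mantissas plus one shared exponent.
struct BfpVector {
  int16_t mantissa[256];  // signed mantissa per element
  int exponent;           // single exponent shared by the whole block
  int length;             // number of valid elements
};

// The real value of element i is mantissa[i] * 2^exponent.
inline float BfpElementToFloat(const BfpVector& v, int i) {
  return std::ldexp(static_cast<float>(v.mantissa[i]), v.exponent);
}
```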
My interest lies in supporting more traditional DSP through tflm. For example, I would like to express an audio frontend, which can have enormous dynamic range, in TensorFlow; BFP allows much more efficient computation. Acoustic echo cancellation is a good example: the signal of interest can sit 60 dB beneath the input signal, which at roughly 6 dB per bit is about 10 bits down.
I have plenty of experience implementing BFP DSP functions. At a high level, BFP kernels are fixed point kernels with redundant sign bits and exponent management, i.e. they are implementable entirely with integer arithmetic. The sketch below illustrates what I mean.
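To illustrate the "fixed point plus exponent management" point, here is a hedged sketch of an element-wise BFP multiply using only integer arithmetic, reusing the hypothetical `BfpVector` above. It is not a proposed tflm kernel; a production version would normalise inputs first to preserve precision and would fuse the two passes:

```cpp
#include <algorithm>
#include <cstdint>

// Element-wise BFP multiply built purely from integer operations.
void BfpMul(const BfpVector& a, const BfpVector& b, BfpVector* out) {
  // Pass 1: find the largest product magnitude so we know how far the
  // 32-bit products must be shifted back into int16 range.
  int32_t max_mag = 0;
  for (int i = 0; i < a.length; ++i) {
    int32_t p = static_cast<int32_t>(a.mantissa[i]) * b.mantissa[i];
    max_mag = std::max(max_mag, p < 0 ? -p : p);
  }
  int shift = 0;
  while ((max_mag >> (15 + shift)) != 0) ++shift;  // keep |result| < 2^15

  // Pass 2: apply the shift to every product.
  for (int i = 0; i < a.length; ++i) {
    int32_t p = static_cast<int32_t>(a.mantissa[i]) * b.mantissa[i];
    out->mantissa[i] = static_cast<int16_t>(p >> shift);
  }
  // Exponent management: exponents add under multiplication, and the
  // shift applied to the mantissas is absorbed into the shared exponent.
  out->exponent = a.exponent + b.exponent + shift;
  out->length = a.length;
}
```

The same pattern, an integer kernel plus a scalar exponent fix-up, covers adds, filters, FFT butterflies and so on.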
So my questions are:
- is this possible?
- is anyone else interested in this?
- is this a roadmap feature?
- if not, then what's the process? An RFC?