Post training quantization of AveragePool2D layer

Lukáš Sztefek

Mar 7, 2023, 5:01:48 AM
to TensorFlow Lite
Hello everyone,

I am struggling to understand why the output quantization of the AveragePool2D layer in my TFLite model is not calibrated.

I have the following AveragePool2D layer in my quantized model:

[Image: avg_pool_quant]

As you can see, the layer is quantized to int8 and the quantization parameters are the same for both the input and output tensors. I am able to measure that values coming from the Add/Relu layer are within the range [-128, 125], which indicates that quantization of that layer works well. On the other hand, values produced by AveragePool2D are always in the range [-128, -83], so less than a quarter of the available int8 range ([-128, 127]) is utilized. I would expect different quantization parameters on the output tensor, so that the full int8 range is used.
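For context, this is roughly how I inspect the per-tensor quantization parameters (a minimal sketch; the path "model_quant.tflite" is a placeholder for my model file):

import tensorflow as tf

# Load the quantized model (placeholder path).
interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
interpreter.allocate_tensors()

# Print scale and zero-point for every tensor, so the AveragePool2D
# input and output parameters can be compared side by side.
for detail in interpreter.get_tensor_details():
    scale, zero_point = detail["quantization"]
    print(detail["name"], detail["dtype"],
          "scale:", scale, "zero_point:", zero_point)

# With scale s and zero-point z, an int8 value q represents the real
# value s * (q - z). The observed output range [-128, -83] covers only
# 46 of the 256 available codes, i.e. roughly 18% of the int8 range.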

Could you please give me hints on where the gaps in my thought process/understanding are? I would also be grateful for links to additional sources or code that could give me more insight.

Thank you and have a nice day,
Lukas