Symbols follow statistics that are stored in a histogram (called population in the code).
That histogram has a theoretical entropy, but it does not take into account how the symbols will actually be stored: they will not be coded at their ideal entropy (which is what VP8LBitsEntropyUnrefined computes), but through a Huffman tree, which gives a slightly different cost.
That is where those functions are useful: they try to get a better estimate of the final cost, knowing that the symbols will be stored through a Huffman tree.
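
As a concrete illustration, here is a minimal sketch (not the libwebp implementation) of that ideal cost: the Shannon entropy of a histogram, in bits. A Huffman code can only approximate it, because Huffman code lengths must be whole numbers of bits.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Ideal cost of coding a histogram: sum_i count_i * log2(total / count_i).
   This is the "perfect entropy" that a Huffman code can only approach. */
static double ShannonBits(const uint32_t* population, int n) {
  double total = 0.0, bits = 0.0;
  int i;
  for (i = 0; i < n; ++i) total += population[i];
  if (total == 0.0) return 0.0;
  for (i = 0; i < n; ++i) {
    if (population[i] > 0) bits += population[i] * log2(total / population[i]);
  }
  return bits;
}

int main(void) {
  const uint32_t population[4] = {40, 30, 20, 10};
  /* Ideal cost: ~184.6 bits. A Huffman code assigns lengths {1, 2, 3, 3}
     here, costing 40*1 + 30*2 + 20*3 + 10*3 = 190 bits: close to, but not
     equal to, the theoretical entropy. */
  printf("ideal cost: %.2f bits\n", ShannonBits(population, 4));
  return 0;
}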
BitsEntropyRefine compares the entropy against a best-case scenario (all symbols having the same probability) that we know a Huffman code cannot beat. The "mixing" with the entropy is empirical.
FinalHuffmanCost takes into account the empirical observation that trees with certain properties (e.g. several streaks of 0s) are easier to encode than others.
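
A rough sketch of that observation (the per-token costs below are placeholders, not the tuned constants of FinalHuffmanCost): the tree is transmitted as a stream of code lengths, and a long streak of zeros in that stream can be run-length encoded into a single cheap token, so clustered zeros make the tree cheaper to store than scattered ones.

#include <stdint.h>
#include <stdio.h>

static double TreeStorageCost(const uint8_t* code_lengths, int n) {
  double cost = 0.0;
  int i = 0;
  while (i < n) {
    int j = i;
    while (j < n && code_lengths[j] == code_lengths[i]) ++j;  /* find streak */
    if (code_lengths[i] == 0 && j - i >= 4) {
      cost += 3.0;            /* long zero streak: one cheap RLE token */
    } else {
      cost += 4.0 * (j - i);  /* otherwise, one code-length code each */
    }
    i = j;
  }
  return cost;
}

int main(void) {
  /* Same code lengths and the same number of zeros in both trees, but
     the clustered zeros are much cheaper to transmit. */
  const uint8_t clustered[8] = {3, 3, 2, 1, 0, 0, 0, 0};
  const uint8_t scattered[8] = {3, 0, 3, 0, 2, 0, 1, 0};
  printf("clustered: %.1f bits\n", TreeStorageCost(clustered, 8));  /* 19.0 */
  printf("scattered: %.1f bits\n", TreeStorageCost(scattered, 8));  /* 32.0 */
  return 0;
}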