Yaowu Xu wrote:
> On Fri, May 21, 2010 at 1:45 AM, maikm...@googlemail.com
> <maikm...@googlemail.com> wrote:
>> (that is very possible), but is it just me who thinks that the
>> definition of segments isn't actually that big of an overhead and
>> hopefully rather well-compressible by the entropy coding, given that
>
> I agree with your assessment on the encoding cost of segment map
I don't. You can either update the segmentation map for every coded
block in the frame, or for none of them. There's nothing in between. If
you do update it, each flag is coded entirely independently. Deltas are
only used w.r.t. the parameters for a segment at the beginning of the
frame, not the actual segment index associated with each MB, which is
the bulk of the data. There's no prediction, either temporal or spatial.
For any real usage of the segmentation map for adaptive quantization,
this is going to run you a cost pretty close to 2 bits per coded MB.
Entropy coding buys you almost nothing here: you only get a benefit if
some segments are used much more frequently than others, and this is the
opposite of what you want when choosing the set of quantizers to use for
the frame (because you only get 4). This basically makes the
segmentation map useless at low rates.
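To put a number on it, here is a minimal sketch of the per-MB cost under the ideal-entropy assumption (i.e., assuming each independently coded segment flag costs exactly its self-information; the function name is just for illustration):

```python
import math

def segment_map_cost(probs):
    """Per-MB cost in bits of signaling one of len(probs) segment IDs,
    assuming each ID is coded independently at its ideal entropy."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform use of all 4 segments: the worst case for entropy coding,
# but exactly what you want for adaptive quantization.
print(segment_map_cost([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits/MB

# Heavily skewed usage: entropy coding helps, but then most MBs share
# one quantizer and adaptive quantization buys you little anyway.
print(segment_map_cost([0.85, 0.05, 0.05, 0.05]))  # ~0.85 bits/MB
```

The skewed case is cheaper to code precisely because the map carries less information, which is the tension described above.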
You can still get roughly 50% of the benefit of adaptive quantization
just by varying lambda during RDO, though.
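As a hedged sketch of that idea (the scaling rule and names here are hypothetical, not anything specified by the bitstream): the encoder can scale lambda per MB from a local activity measure, e.g.

```python
def adaptive_lambda(base_lambda, activity, mean_activity, strength=0.5):
    """Hypothetical per-MB lambda scaling for RDO: raise lambda in
    high-activity blocks (tolerate more distortion where it is masked)
    and lower it in flat blocks. Nothing is signaled in the bitstream;
    lambda only biases the encoder's mode/quantization decisions."""
    return base_lambda * (activity / mean_activity) ** strength

# A block with 4x the average activity gets double the base lambda.
print(adaptive_lambda(100.0, 16.0, 4.0))  # 200.0
```

The appeal is that, unlike the segmentation map, this costs zero bits of side information: the decoder never needs to know lambda, so the full rate budget goes to the coefficients themselves.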