I'm somewhat disappointed by WebP lossless on grayscale images


Frédéric Kayser

Aug 4, 2012, 7:14:28 PM
to webp-d...@webmproject.org
Hello,
I've just started running some WebP vs. PNG comparisons, with the files sorted by their color properties (color palette, grayscale, RGB, RGB+alpha).
I can understand that WebP lossless gains on paletted images are rather modest compared to optimized PNGs, but it looks like WebP has a problem on grayscale images.

On these 3 files, a default conversion done by cwebp 0.1.99 resulted in bigger WebP files:
sample_G_01.png  23829 bytes
sample_G_01.webp 26838 bytes (+12%)

sample_G_05.png 2800 bytes
sample_G_05.webp 2826 bytes (+1%)

sample_G_06.png 2800 bytes
sample_G_06.webp 3744 bytes (+33%)

pngcheck sample_G_01.png sample_G_06.png sample_G_05.png
OK: sample_G_01.png (240x160, 8-bit grayscale, non-interlaced, 37.9%).
OK: sample_G_06.png (2200x71, 8-bit grayscale, non-interlaced, 98.2%).
OK: sample_G_05.png (100x100, 8-bit grayscale, non-interlaced, 72.0%).

The 3 files are enclosed in the grey_samples.zip archive.

grey_samples.zip

Jyrki Alakuijala

Aug 4, 2012, 8:44:43 PM
to webp-d...@webmproject.org
I did not analyze these images in detail; I am just writing from
experience with other, similar images.

For images like sample_G_06.png, the 2D locality mapping for distances
in WebP is inefficient, and the way PNG's backward-reference distance
encoding happens to work is a better match for the image.
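To illustrate the idea of 2D distance locality with a sketch: WebP lossless reserves short distance codes for pixels that are close in two dimensions, falling back to plain linear distances otherwise. The neighbor table below is a toy subset for illustration; the actual spec uses a fixed 120-entry table of (dx, dy) offsets.

```python
# Illustrative sketch of WebP lossless "2D locality" distance mapping.
# The real codec uses a fixed table of 120 (dx, dy) neighbor offsets
# (see the lossless bitstream spec); this toy table holds only a few
# of the closest offsets, to show the principle.
NEIGHBORS = [(0, 1), (1, 0), (1, 1), (-1, 1), (2, 0), (0, 2)]  # toy subset

def distance_to_code(linear_dist, xsize):
    """Map a linear backward distance to a short code when the
    referenced pixel is a close 2D neighbor, else fall back to the
    linear distance plus an offset."""
    for code, (dx, dy) in enumerate(NEIGHBORS, start=1):
        if linear_dist == dy * xsize + dx:
            return code  # short code: cheap to entropy-code
    return linear_dist + len(NEIGHBORS)  # far pixels: plain distance

# The pixel directly above, in a 2200-pixel-wide image like sample_G_06:
print(distance_to_code(2200, 2200))  # -> 1 (dy=1, dx=0)
print(distance_to_code(5, 2200))     # 5 pixels left: not in the toy table
```

When a reference pattern does not line up with the 2D neighborhood (as can happen with very wide, short images), the mapping adds overhead instead of saving it.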

The difference on sample_G_01.png is likely because the current
implementation of WebP does not do a shortest-path optimization after
the entropy codes have been chosen (the shortest-path optimization is
done using a single entropy code), and possibly because of the
differences between the Paeth predictor (PNG) and the Select predictor
(WebP).
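For reference, here are the two predictors paraphrased for a single gray channel. The spec's Select predictor operates per ARGB channel and sums the per-channel differences; for grayscale the one-channel reduction below is equivalent. Tie-breaking is my reading of the spec's pseudocode; check the spec for the exact rule.

```python
def paeth(left, top, topleft):
    """PNG's Paeth predictor (PNG spec, filter type 4)."""
    p = left + top - topleft  # gradient estimate
    pa, pb, pc = abs(p - left), abs(p - top), abs(p - topleft)
    if pa <= pb and pa <= pc:
        return left
    return top if pb <= pc else topleft

def select(left, top, topleft):
    """WebP lossless 'Select' predictor, reduced to one channel.
    Unlike Paeth, it only ever picks left or top, never top-left."""
    p = left + top - topleft  # same gradient estimate
    return left if abs(p - left) < abs(p - top) else top

# The two can disagree: Paeth may pick the top-left pixel,
# which Select never does.
print(paeth(3, 8, 5))   # -> 5 (topleft)
print(select(3, 8, 5))  # -> 8 (top)
```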

In general, WebP lossless does not have many technical improvements
for grayscale images in comparison to PNG, and one often sees similar
performance between the two.

Pascal Massimino

Aug 5, 2012, 2:41:44 PM
to webp-d...@webmproject.org
Hi,

On Sat, Aug 4, 2012 at 5:44 PM, Jyrki Alakuijala <jy...@google.com> wrote:
I did not analyze these images in detail, but I am just writing about
the experience with similar other images.

I tried sample_G_01.png with https://gerrit.chromium.org/gerrit/29257
and got 22744 bytes output (default setting).

The idea of this patch is that when there are *exactly* 256 colors, it's counter-productive
to use the palette mode: you'd just use up space to store an already-full palette.
I didn't try the other samples yet.

Frédéric Kayser

Aug 6, 2012, 4:02:17 AM
to webp-d...@webmproject.org
Hello,
since these images are gray (R=G=B), why bother with a palette?
I thought that the Subtract Green transform would kick in first on such images, possibly followed by a Predictor transform.
Isn't the Subtract Green transform the obvious way to handle grayscale images?
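For reference, the Subtract Green transform as described in the lossless bitstream spec is tiny, and on R=G=B pixels it zeroes out two of the three channels:

```python
def subtract_green(r, g, b):
    """WebP lossless Subtract-Green transform: red and blue are
    replaced by their difference from green (mod 256); green is kept."""
    return (r - g) & 0xFF, g, (b - g) & 0xFF

# On a grayscale pixel (R = G = B), only the green channel survives:
print(subtract_green(130, 130, 130))  # -> (0, 130, 0)
```

After this, a grayscale image is effectively a single-channel signal plus two all-zero channels, which cost next to nothing to entropy-code.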

Frédéric Kayser

Aug 6, 2012, 4:34:09 AM
to webp-d...@webmproject.org
Shouldn't this be evaluated at run time?
I mean, the Color indexing image is compressed more or less efficiently based on the color order (benefiting from deltas) and individual component frequencies (which affect entropy coding).
I don't know how hard WebP tries to compress the Color indexing image; it could be inefficient on some wacky palettes but really helpful on sepia or duotone images, for instance.
Did you try head-to-head comparisons of Color cache alone vs. Color indexing alone vs. neither, on images with 256 colors or fewer?

Jyrki Alakuijala

Aug 6, 2012, 4:37:13 AM
to webp-d...@webmproject.org
On Mon, Aug 6, 2012 at 10:34 AM, Frédéric Kayser <cry...@free.fr> wrote:
> Shouldn't this be evaluated at run time?
> I mean, the Color indexing image is compressed more or less efficiently
> based on the color order (benefit from delta) and individual components
> frequency (affects entropy coding).
> I don't know how hard WebP is trying to compress the Color indexing image,
> it could be inefficient on some wacky palettes but really helpful on sepia
> or duo toned images for instance.
> Did you try head-to-head Color cache alone vs Color indexing alone vs None of
> those comparisons on images with 256 colors or less?

Yes. I experimented a lot with this. 256-color indexing beats the
color cache -- as a dense mapping it gives fewer collisions (i.e., none)
and allows a more efficient representation of the entropy maps.
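A sketch of why the dense mapping wins: the color cache addresses a slot by hashing the packed pixel (the multiplier below is the one given in the lossless bitstream spec), so distinct colors can land in the same slot, whereas a 256-entry palette maps 256 colors one-to-one with no collisions at all.

```python
def cache_index(argb, cache_bits):
    """WebP lossless color-cache slot for a packed 0xAARRGGBB pixel
    (hash function from the lossless bitstream spec)."""
    return (0x1E35A7BD * argb & 0xFFFFFFFF) >> (32 - cache_bits)

# Hash all 256 opaque gray levels into a deliberately small cache
# to make the collisions easy to see.
bits = 4  # 16 slots
slots = {}
for gray in range(256):
    argb = 0xFF000000 | gray << 16 | gray << 8 | gray
    slots.setdefault(cache_index(argb, bits), []).append(gray)
collisions = sum(len(v) - 1 for v in slots.values())
print(collisions)  # 256 grays into at most 16 slots: >= 240 collisions
```

With a real cache (up to 11 bits, 2048 slots) collisions are rarer, but a full 256-color palette still avoids them entirely.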


Frédéric Kayser

Aug 6, 2012, 5:07:26 AM
to webp-d...@webmproject.org
But when is the Color cache useful in this case? As a short-cut to address single pixels when out of 2D locality range?

Jyrki Alakuijala

Aug 6, 2012, 5:16:25 AM
to webp-d...@webmproject.org
On Mon, Aug 6, 2012 at 11:07 AM, Frédéric Kayser <cry...@free.fr> wrote:
> But when is the Color cache useful in this case? As a short-cut to address
> single pixels when out of 2D locality range?

Yes. The color cache can be considered as a content-addressable
backward reference with a length of 1.

The color cache is useless for the 256-color cases. It is extremely
useful when there are 2000 colors and 60 of them are used heavily, or
when there is locality in their use. It is at its best on the Waterloo
"Frymire" image, I think, but it helps on typical graphical images.

Also, many computer-generated images easily have 256+ colors due to
antialiasing, and there one can avoid the cost of palettization with
the color cache -- without losing much compression density, and
without the quality surprises associated with palettization.

Pascal Massimino

Aug 14, 2012, 2:50:47 AM
to webp-d...@webmproject.org
Hi,

a follow-up. I made some further experiments, hard-coding the coding tools used for 
compressing your examples.


Btw: I get a smaller file on G_05 by using only the subtract-green transform:

File:      sample_G_05.png
Dimension: 100 x 100
Output:    2724 bytes
Lossless-ARGB compressed size: 2724 bytes
  * Lossless features used: SUBTRACT-GREEN
  * Precision Bits: histogram=3 transform=4 cache=0
  * Palette size:   11
 
The last one (G_06) is still surprisingly hard to compress.
Prediction and palette give a good combination, but we're still
far from the original PNG size of 2800 bytes; not quite sure why.

File:      /Users/skal/work/grey/sample_G_06.png
Dimension: 2200 x 71
Output:    3174 bytes
Lossless-ARGB compressed size: 3174 bytes
  * Lossless features used: PREDICTION PALETTE
  * Precision Bits: histogram=3 transform=4 cache=0
  * Palette size:   77

Frédéric Kayser

Aug 14, 2012, 8:12:58 PM
to webp-d...@webmproject.org
From what I have seen on the PNG side: applying filter 1 (Sub, the same as prediction mode 1, Left) to the entire sample_G_06.png image with a single Huffman table gives a 3186-byte file. Using the same filter but several Huffman blocks (4) brings it down to 2863 bytes (pngout -force -f1 -n4 followed by defluff). An entropy image is probably the best way to mimic the Huffman-blocks behavior, even if it has a bigger overhead; could you try to add it?
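A back-of-the-envelope way to see why several Huffman blocks help: compare the ideal (Shannon) code length of the residuals under one global distribution versus per-segment distributions. This is an illustrative entropy calculation with synthetic data, not pngout's actual algorithm.

```python
import math
from collections import Counter

def shannon_bits(symbols):
    """Ideal total code length, in bits, for coding the symbols
    with a single static entropy code."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum(-c * math.log2(c / total) for c in counts.values())

# Synthetic residual stream whose statistics change halfway through,
# like an image whose two halves have different textures.
left = [0, 1, 0, 2] * 250   # smooth residuals
right = [7, 9, 8, 9] * 250  # busier residuals
row = left + right

one_table = shannon_bits(row)                        # one global code
two_tables = shannon_bits(left) + shannon_bits(right)  # one code per half
print(one_table > two_tables)  # -> True: per-segment codes are cheaper
```

The saving comes from each segment's code being tuned to its local symbol distribution; the real trade-off is the overhead of signaling the extra tables (or the entropy image), which is why few large blocks beat many tiny ones.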



Frédéric Kayser

Aug 18, 2012, 8:24:55 AM
to webp-d...@webmproject.org
Two other files; each webp file is the smallest one out of 700 trials (q ranging from 1 to 100 and m from 0 to 6).

pngcheck sample_G_03.png sample_G_04.png
OK: sample_G_03.png (128x128, 8-bit grayscale, non-interlaced, 49.0%).
OK: sample_G_04.png (80x80, 8-bit grayscale, non-interlaced, 32.7%).

8350 sample_G_03.png
9082 sample_G_03_020.webp-bestll (+8%)

4308 sample_G_04.png
5026 sample_G_04_020.webp-bestll (+16%)

By the way, I have noticed that cwebp 0.2.0 (release) does not always produce the same files as 0.1.99: roughly 1 file out of 10 is bigger, 3 files out of 10 are smaller, and the others have the same size. If the files for which cwebp 0.2.0 is a regression are of interest to you, I could pack some of them together.
new_grey_samples.zip

Frédéric Kayser

Aug 18, 2012, 9:28:58 AM
to webp-d...@webmproject.org
I forgot to ask: would compression improvements on grayscale images that are in fact the alpha channel of an RGBA image translate into gains for WebP lossy+alpha?
For instance, if I take this image:

[image]

Its alpha channel looks like this:

[image]
If I'm right, this alpha mask will be losslessly compressed alongside the RGB components, which use the lossy encoder, in a lossy+alpha WebP file.
The alpha mask is shifted into the green channel of the WebP ARGB image; is this operation exactly the same as the SUBTRACT-GREEN transform? Would that mean that if SUBTRACT-GREEN is the only transform applied to a grayscale image, it compresses exactly like the alpha part of a lossy+alpha WebP file?

Frédéric Kayser

Aug 18, 2012, 9:30:29 AM
to webp-d...@webmproject.org

