
Studying Lossy Image Compression Efficiency


Josh Aas

Oct 17, 2013, 10:48:16 AM
This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.

Josh Aas

Oct 17, 2013, 10:50:49 AM

Blog post is here:

https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/

Study is here:

http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/

pgas...@gmail.com

Oct 17, 2013, 1:27:43 PM
On Thursday, 17 October 2013 10:48:16 UTC-4, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.

It would be interesting if you could post your conclusions from these tests.

cry...@free.fr

Oct 17, 2013, 1:50:12 PM
Thank you for publishing this study, here are my first questions:
- Why didn't you include JPEG 2000?

- Correct me if I'm wrong, but JPEG-XR's native color space is not Y'CbCr, which means this format had to perform an extra (possibly lossy) color space conversion.

- I suppose that the final lossless step used for JPEGs was the usual Huffman encoding and not arithmetic coding, have you considered testing the latter one independently?

- The image set is somewhat biased toward outdoor photographic images and highly contrasted artificial black-and-white ones; what about fractal renderings, operating system and 2D/3D game screenshots, or blurry, out-of-frame or night shots?

- I've found only two cats and not a single human face in the Tecnick image set, and no fancy à la Instagram filters; this can't seriously be representative of web images. A larger image corpus would be welcome.

Leman Bennett (Omega X)

Oct 17, 2013, 2:20:46 PM
On 10/17/2013 9:48 AM, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.
>


HEVC-MSP did really well. It's unfortunate that Mozilla could not use it
in any capacity, since it's tied to the encumbered MPEG HEVC standard.

Also, I didn't know that someone was working on a JPEG-XR FOSS encoder.
I wonder how it compares to the Microsoft reference encoder.
--
==================================
~Omega X
MozillaZine Nightly Tester

Josh Aas

Oct 17, 2013, 5:07:20 PM
On Thursday, October 17, 2013 12:50:12 PM UTC-5, cry...@free.fr wrote:
> Thank you for publishing this study, here are my first questions:
>
> - Why didn't you include JPEG 2000?

We couldn't test everything, so we picked a small set of the formats that we hear the most about and that seem interesting. We're not opposed to including JPEG 2000 in future testing, particularly if we see more evidence that it's competitive.

> - The image set is somewhat biased toward outdoor photographic images and highly contrasted artificial black-and-white ones; what about fractal renderings, operating system and 2D/3D game screenshots, or blurry, out-of-frame or night shots?
>
> - I've found only two cats and not a single human face in the Tecnick image set, and no fancy à la Instagram filters; this can't seriously be representative of web images. A larger image corpus would be welcome.

We considered improving the image sets in some of the ways you suggest; we just didn't get to it this time. Trying to be thorough and accurate with these kinds of studies is more work than it seems like it will be; we couldn't do everything. We'll try to do better with image sets in future work. I still think this set produces meaningful results.

Thanks for the feedback. Maybe Tim, Gregory, or Jeff can respond to some of your other questions.

cry...@free.fr

Oct 17, 2013, 6:08:06 PM
HDR-VDP-2 is a relatively recent metric that produces predictions for difference visibility and quality degradation.
http://sourceforge.net/apps/mediawiki/hdrvdp/index.php?title=Main_Page
It could be interesting to add this metric in future studies.

RafaΕ‚ Mantiuk (the guy behind HDR-VDP-2) also worked on this paper: "New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts" http://www.mpi-inf.mpg.de/resources/hdr/iqm-evaluation/

Which leads me to think that doing a blinded experiment (real people evaluating the images) to compare compressed images still has some value. It could be fun to conduct such an experiment, presenting 2 or 3 versions of the same image compressed with different methods and asking a wide panel (it could be open to anyone on the web) to pick their favorite one.

Yoav Weiss

Oct 18, 2013, 4:57:11 AM
On Thursday, October 17, 2013 4:48:16 PM UTC+2, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.

Thank you for publishing this research!

While I like the methodology used a lot, I find the image sample used to be extremely small for accurately representing images on today's Web (or tomorrow's Web, for that matter).

I understand that one of the reasons you used artificial benchmarks instead of real-life Web images is to avoid the bias of images that already went through JPEG compression.

Would you consider a large sample of lossless Web images (real-life images served as PNG24, even though it'd be wiser to serve them as JPEGs) to be unbiased enough to run this research against? I believe such a sample would better represent Web images.

battle...@gmail.com

Oct 18, 2013, 5:31:09 AM
Very interesting study. I'm shocked to see WebP and JPEG-XR perform so poorly on so many of the tests. Do they really perform *that* much *worse* than JPEG? It seems hard to imagine. I've done my own tests on JPEG, WebP and JPEG-XR by blindly comparing files of the same size and deciding subjectively which one I thought looked closest to the uncompressed version. The conclusions I came to were, I thought, very close to the RGB-SSIM results, which showed WebP best, JPEG-XR much better than JPEG but significantly behind WebP, and JPEG much worse than all. This seemed consistent to me at all encoding qualities and with many kinds of images, just as the RGB-SSIM tests show. It seems very curious that Y-SSIM, IW-SSIM and PSNR-HVS-M all show JPEG-XR and WebP both dipping below JPEG quality at the same file sizes. I'd be very interested in seeing the images for which those comparisons determine that JPEG-XR and WebP are doing a worse job than JPEG.

I think the most important kind of comparison to do is a subjective blind test with real people. This of course produces less accurate results, but more meaningful ones. It doesn't really matter if a certain algorithm determines that a certain codec produces less lossy images than another codec if actual humans looking at the compressed images don't tend to feel the same way. All that matters in the end is whether a codec does a good job of keeping the details that the human compressing and viewing the image thinks are important, not what various algorithms testing image quality think are important.

Although it's outside the scope of this study, I wonder what interest Mozilla is taking in supporting image formats with more features on the Web. Lossy + transparency seems like a particularly desirable one for games and certainly for web developers in general. RGB565 colour format support sounds like it could be useful for optimized WebGL applications.

ch...@improbable.org

Oct 18, 2013, 10:16:59 AM
On Thursday, October 17, 2013 1:50:12 PM UTC-4, cry...@free.fr wrote:
> Thank you for publishing this study, here are my first questions:
>
> - Why didn't you include JPEG 2000?

You might find https://bugzilla.mozilla.org/show_bug.cgi?id=36351#c120 interesting: it discusses what it would take to get a new format into Firefox: a high-quality open-source library, some level of security vetting and a solid rights statement.

I think JP2 support could potentially be very interesting because it would make responsive images almost trivial without requiring separate files (i.e. srcset could simply specify a byte-range for each size image) but the toolchain support needs some serious attention.

Chris

jmar...@google.com

Oct 18, 2013, 11:03:53 AM
The blog post indicates "we’re primarily interested in the impact that smaller file sizes would have on page load times". I looked at the study, http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/, but couldn't find any reference to PLT impact from smaller file sizes.

Moreover, the web performance community is moving away from PLT as a proxy for user experience in favor of Speed Index (https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index) and other metrics. See http://velocityconf.com/velocityny2013/public/schedule/detail/32820 from last week's Velocity conference in NYC.

To deliver a sub-second mobile web experience we have a very tight time budget. Ideally we'd like to deliver any important above-the-fold images in the first CWND. Ilya Grigorik (see "High Performance Browser Networking" on Amazon) gives a guideline of 40 KB of network traffic with no blocking external resources to have a fighting chance. Images can be inlined if they are important to the initial user experience.

Full disclosure: I work on mod_pagespeed and ngx_pagespeed, which do those transformations automatically, as well as transcoding to webp for compatible browsers.


In addition to the quality graphs, I'd like to see web page videos (www.webpagetest.org) and waterfalls to see the impact of making images smaller. This can be done in the context of Firefox, which doesn't have support for the new image formats: you could simulate them by recompressing a JPEG at a lower quality until you get the size you'd get using WebP, relying on your quality metrics to determine the target byte size for acceptable quality.
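
To make that concrete, here's a rough sketch of that simulation in Python with Pillow (my own illustration, not pagespeed code; the file name and the 24 KB target are made up):

from io import BytesIO
from PIL import Image

def jpeg_at_target_size(src_path, target_bytes):
    """Binary-search the JPEG quality setting so the encoded file is as
    large as possible without exceeding target_bytes."""
    img = Image.open(src_path).convert("RGB")
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_bytes:
            best = (q, buf.getvalue())   # fits; try a higher quality
            lo = q + 1
        else:
            hi = q - 1                   # too big; lower the quality
    return best

# e.g. shrink a hero image to the byte size its WebP version would have had
quality, jpeg_bytes = jpeg_at_target_size("hero.png", target_bytes=24 * 1024)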

lept...@gmail.com

Oct 18, 2013, 11:34:45 AM
I think you are attacking this from the wrong angle. Being responsible for quite a few sites in an enterprise, most of the issues I have are cases where all current formats fail miserably. To make the point, see the two following images, where I have to live with huge PNG-24 files, due to a) alpha transparency being necessary or b) other formats failing from a visual perspective.
https://www.netzclub.net/css/default/start_flatman_videos.png
and
https://www.fonic.de/assets/images/bg.png
So I am not so much in need of a compressor that can shave off a couple of bytes on standard photo-like images, but in desperate need of an efficient encoder for 256+ colour palette-driven colour-gradient graphics, photos with alpha transparency, or, even more so, a mixture of both.

Ralph Giles

Oct 18, 2013, 7:12:14 PM
to dev-pl...@lists.mozilla.org
On 2013-10-18 1:57 AM, Yoav Weiss wrote:

> Would you consider a large sample of lossless Web images (real-life images served as PNG24, even though it'd be wiser to serve them as JPEGs) to be unbiased enough to run this research against? I believe such a sample would better represent Web images.

Do you have such a sample?

Note that the scripts to run the image comparisons are on github, so you
can verify the results and try them on other image sets.

-r

battle...@gmail.com

Oct 19, 2013, 6:15:30 AM
On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> Do you have such a sample?

For what it's worth, here's an image I made quite a while ago showing the results of my own blind subjective comparison between codecs: http://www.filedropper.com/lossy

The image shows the original lossless image alongside a JPEG, JPEG-XR, JPEG 2000 and WebP version of the image, all of which have been compressed to 7.5 KB. I used the LEADTOOLS compression suite for all images except the WebP one, where I used Google's libwebp. I'll be *very* clear here that I don't consider this image very good proof of how good each codec is; clearly the JPEG-compressed image could be optimized more. The lossy compressed images are ordered as JPEG, JPEG-XR, JPEG 2000, WebP with respect to the results I personally came to about their performance, WebP being the best and JPEG being the worst. I did this comparison at every quality level and using many different image sources and found the subjective results were the same. The difference between WebP, JPEG 2000 and JPEG-XR can at times be hard to call, as it felt like I was deciding which compression artifacts bothered me most personally rather than which image felt closest to the original. What was consistent, however, was that all the modern codecs seemed clearly superior to JPEG, or at best appeared the same as JPEG at higher compression qualities, but certainly never worse. What I'm saying is that, based on my own experience, I'd be shocked if anyone could go through a subjective blind test like this and feel that JPEG was performing better at any quality level or with any images.

I'd also agree with the points brought up by lept...@gmail.com. I think the actual features supported by the current range of web image formats are quite lacking. It's common on the web for web and game developers to compress photographic images as PNGs because they need transparency. Animated GIFs are also popular for compressing short live-action video clips, something the format is terribly inadequate at. Both JPEG-XR and WebP include transparency + alpha support. Only WebP supports animation, though I believe animation could be added to JPEG-XR easily: http://ajxr.codeplex.com/. The extra color formats supported in JPEG-XR could one day be useful on the web too.

Although better compression performance in web image formats would have obvious speed benefits, I think the consequences of having such a limited feature set in the current range of supported image formats on the web are holding web developers back far more than file size issues.

Jeff Muizelaar

Oct 19, 2013, 7:30:15 AM
to battle...@gmail.com, dev-pl...@lists.mozilla.org


----- Original Message -----
> On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> > On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> > Do you have such a sample?
>
> For what it's worth, here's an image I made quite a while ago showing the
> results of my own blind subjective comparison between codecs:
> http://www.filedropper.com/lossy

I agree that in this comparison JPEG is clearly the worst. However, the bitrate that you are using here is well below the target for which JPEG is designed to be used and the quality of all of the image formats is lower than would be acceptable for nearly all purposes. This makes these results much less interesting than at quality levels typically used on the web.

-Jeff

battle...@gmail.com

Oct 19, 2013, 7:55:34 AM
I completely agree. This is why I don't want this image to be considered good proof of which codec is superior. I had another image exactly like this where I had comparisons around 35 KB, which was a much more realistic quality level for all the codecs while the differences in visual quality loss were still noticeable, but I seem to have lost it. My own subjective findings were that the quality of each codec was in roughly the same order, with JPEG always identifiable as the worst until the file size rose to the point where it was impossible to easily tell the difference between any of the lossy compressed images. I think if this test image were compressed for a typical web page it would typically be an 80 KB JPEG or so, and at this level of quality it's difficult for a human to differentiate between a WebP or JPEG-XR image of the same size, or even the original lossless image for that matter in some cases. In this respect subjective blind testing by humans fails, but my point is that I never observed anything like JPEG outperforming JPEG-XR or WebP, and I'm very surprised to see that some algorithms report JPEG outperforming WebP and JPEG-XR by a wide margin at some quality settings, in stark contrast to what other algorithms (RGB-SSIM) report and to what my own, and I believe other, subjective blind tests would report.

stephan...@gmail.com

Oct 19, 2013, 12:14:40 PM
I'll just talk about the quality evaluation aspects of this study, as it is a field I know quite well (I did my PhD on the topic, albeit in video specifically).

> I think the most important kind of comparison to do is a subjective blind test with real people. This of course produces less accurate results, but more meaningful ones.

I don't get how more meaningful results can be less accurate... Running subjective quality tests is not as trivial as it sounds, at least if you want to get meaningful results, as you say. Of course, you can throw a bunch of images at some naive observers with a nice web interface, but what about differences in their screens? What about differences in their lighting conditions? How do you validate people for the test (visual acuity, color blindness)? I've run more than 600 test sessions with around 200 different observers. Each one of them was tested before the session, and a normalized (ITU-R BT.500) room was dedicated to the process. I don't want to brag, I just mean it's a complicated matter, and not as sexy as it sounds :-)

In this study, you used several objective quality criteria (Y-SSIM, RGB-SSIM, IW-SSIM, PSNR-HVS-M). You say yourself: "It's unclear which algorithm is best in terms of human visual perception, so we tested with four of the most respected algorithms." Still, the ultimate goal of your test is to compare different degrading systems (image coders here) at equivalent *perceived* quality. As your graphs show, they don't produce very consistent results (especially RGB-SSIM). SSIM-based metrics are structural, which means they evaluate how the structure of the image differs from one version to the other. They are therefore very dependent on the content of the picture. Y-SSIM and IW-SSIM are only applied to the luma channel, which is not optimal in your case, as image coders tend to blend colors. Still, IW-SSIM is the best performer in [1] (but it was the subject of that study), so why not. Your results with RGB-SSIM are very different from the others, which disqualifies it for me. Plus, averaging SSIM over the R, G and B channels makes no sense for the human visual system. PSNR-HVS-M has the advantage of taking a CSF into account to weight its PSNR, but it was designed on artificial artefacts, so you don't know how it performs on compression artefacts. None of these metrics has the human visual system at its heart. At best, they apply some HVS filter to PSNR or SSIM. For a more HVS-related metric, which tends to perform well (over 0.92 in correlation), look at [2] (from the lab I worked in). The code is a bit old now, though, but an R package seems to be available.
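
To make the difference concrete, here is a tiny sketch of a luma-only SSIM next to the "average SSIM over R, G and B" construction I criticize above (my own illustration with scikit-image; the file names are placeholders, this is not the study's tooling):

import numpy as np
from skimage import io, color
from skimage.metrics import structural_similarity as ssim

ref = io.imread("original.png").astype(np.float64)
deg = io.imread("decoded.png").astype(np.float64)

# Luma-only SSIM: compare only the Y planes after an RGB -> Y'CbCr conversion,
# so any chroma blending introduced by the coder is invisible to the metric.
y_ref = color.rgb2ycbcr(ref)[..., 0]
y_deg = color.rgb2ycbcr(deg)[..., 0]
y_ssim = ssim(y_ref, y_deg, data_range=255.0)

# "RGB-SSIM": a plain average over the three channels, which has no
# perceptual grounding; shown here only for contrast.
rgb_ssim = np.mean([ssim(ref[..., c], deg[..., c], data_range=255.0)
                    for c in range(3)])

print("Y-SSIM   =", round(y_ssim, 4))
print("RGB-SSIM =", round(rgb_ssim, 4))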

You cite [1], in which they compare 5 algorithms (PSNR, IW-PSNR, SSIM, MS-SSIM, and IW-SSIM) over 6 subject-rated independent image databases (the LIVE database, Cornell A57 database, IVC database, Toyama database, TID2008 database, and CSIQ database). These databases contain images and subjective quality evaluations obtained in normalized (i.e. repeatable) conditions. Most of them use JPEG and JPEG 2000 compression, but not the other formats you want to test. The LIVE database is known not to be spread out enough, resulting in high correlations in most studies (which is the reason other databases emerged). If you want to take this study further, consider using some of these data to start with.

Finally, be careful when you compute averages of values: did you check their distribution first?

StΓ©phane PΓ©chard

[1] https://ece.uwaterloo.ca/~z70wang/research/iwssim/
[2] http://www.irccyn.ec-nantes.fr/~autrusse/Komparator/index.html

Yoav Weiss

Oct 20, 2013, 4:34:56 PM
On Saturday, October 19, 2013 1:12:14 AM UTC+2, Ralph Giles wrote:
> On 2013-10-18 1:57 AM, Yoav Weiss wrote:
>
> > Would you consider a large sample of lossless Web images (real-life images served as PNG24, even though it'd be wiser to serve them as JPEGs) to be unbiased enough to run this research against? I believe such a sample would better represent Web images.
>
> Do you have such a sample?

Assuming Mozilla would consider such a sample valid, I can get such a sample using data from httparchive.org.

danbr...@gmail.com

Oct 20, 2013, 8:24:25 PM
I have a couple of fundamental issues with how you're calculating 3 of the 4 metrics (all but RGB-SSIM, which I didn't think too much about)

First, am I correct in my reading of your methodology that for all metrics, you encode a color image (4:2:0) and use that encoded filesize? If so, then all the results from greyscale metrics are invalid, as the filesize would include chroma, but the metric only measures luma. An encoder could spend 0 bits on chroma and get a better score than an encoder that spent more bits on chroma than luma.

Second, for Y-SSIM and IW-SSIM only, it appears you encode as color, then afterwards convert both the original image and encoded image to greyscale and calculate SSIM between those two images. This is fundamentally wrong - the original converted to greyscale was not the image the codec encoded, so you're not measuring the distortion of the codec. It looks like PSNR-HVS-M is calculated from the YUV fed into the encoder, which is how Y-SSIM and IW-SSIM should be calculated as well.

Fortunately, the solution is easy - for greyscale metrics, simply convert to greyscale before encoding, not after. Or, if that's what you're already doing, make it clear.
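
Concretely, the order I mean is something like this (a quick sketch with Pillow and scikit-image, not your scripts; file names and the quality setting are just illustrative):

from PIL import Image
from skimage import io
from skimage.metrics import structural_similarity as ssim

# 1. Strip chroma *before* the codec sees the image, so the encoded file
#    size contains no chroma bits that the greyscale metric never looks at.
Image.open("original.png").convert("L").save("original_gray.png")

# 2. Encode the greyscale image (plain JPEG here as the example codec).
Image.open("original_gray.png").save("encoded.jpg", quality=75)

# 3. Decode and score: the metric now sees exactly what the codec encoded.
Image.open("encoded.jpg").convert("L").save("decoded_gray.png")
score = ssim(io.imread("original_gray.png"),
             io.imread("decoded_gray.png"), data_range=255)
print(score)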

stephan...@gmail.com

Oct 21, 2013, 6:29:45 AM
> I have a couple of fundamental issues with how you're calculating 3 of the 4 metrics (all but RGB-SSIM, which I didn't think too much about)

You are right about it; the methodology is not clear on this point.

> First, am I correct in my reading of your methodology that for all metrics, you encode a color image (4:2:0) and use that encoded filesize?

All metrics are computed on *decoded* data, be it RGB or YUV. Maybe the authors could publish a flowchart of their methodology; that would ease the discussion.

> Fortunately, the solution is easy - for greyscale metrics, simply convert to greyscale before encoding, not after. Or, if that's what you're already doing, make it clear.

As I already said, it's not really a good idea to use luma-only metrics to assess color images, as coders tend to blend colors as the quality index decreases.

Henri Sivonen

Oct 21, 2013, 8:35:24 AM
to cry...@free.fr, dev-platform
On Fri, Oct 18, 2013 at 1:08 AM, <cry...@free.fr> wrote:
> Which leads me to think that doing a blinded experiment (real people evaluating the images) to compare compressed images still has some value.

I think it would be worthwhile to do two experiments with real people
evaluating the images:
1) For a given file size with artifacts visible, which format
produces the least terrible artifacts?
2) Which format gives the smallest file size with a level of
artifacts that is so mild that people don't notice the artifacts?

My limited experience suggests that the ranking of the formats could
be different for those two different questions. Also, my understanding
is that the quality metric algorithms are foremost about answering
question #1 while question #2 is often more important for Web
designers.

--
Henri Sivonen
hsiv...@hsivonen.fi
http://hsivonen.fi/

Henri Sivonen

Oct 21, 2013, 8:43:02 AM
to ch...@improbable.org, dev-platform
On Fri, Oct 18, 2013 at 5:16 PM, <ch...@improbable.org> wrote:
> I think JP2 support could potentially be very interesting because it would make responsive images almost trivial without requiring separate files (i.e. srcset could simply specify a byte-range for each size image) but the toolchain support needs some serious attention.

Are there now JPEG 2000 encoders that make images such that if you
want to decode an image at a quarter of the full size in terms of number
of pixels (both dimensions halved), it is sufficient to use the first
quarter of the file length?

Last I tried, which was years ago, in order to decode a quarter-sized
image in terms of number of pixels with quality comparable to the
full-size image in terms of visible artifacts, it was necessary to
consume half of the file length. That is, in order to use the image
with both dimensions halved, it was necessary to load twice as many
bytes as would have been necessary if there was a separate pre-scaled
file available. Having to transfer twice as much data does not seem
like a good trade-off in order to avoid creating separate files for
responsive images.

Chris Adams

Oct 21, 2013, 10:21:49 AM
to Henri Sivonen, dev-platform
It's not as simple as reading n% of the bit-stream – the image needs
to be encoded using tiles so a tile-aware decoder can simply read only
the necessary levels. This is very popular in the library community
because it allows a site like e.g. http://chroniclingamerica.loc.gov/
to serve tiles for a deep-zoom viewer without having to decode a full
600 DPI scan. This is in common usage, but not with open-source
software, because the venerable libjasper doesn't support it (and is
excruciatingly slow); the newer OpenJPEG added support for it, though,
so it's now possible without relying on a licensed codec.

As far as transfer efficiency goes, it's slightly more overhead with
the tile wrappers but not enough to come anywhere close to cancelling
out the compression win from using JP2 instead of JPEG. For those of us
running servers, it's also frequently a win for cache efficiency
versus separate images – particularly if a CDN miss means you have to
go back to the origin and your stack allows streaming the cached
initial portion of the image while doing byte-range requests for the
other half.

Chris

tric...@accusoft.com

Oct 21, 2013, 10:54:24 AM
There are probably a couple of issues here:

> - Why didn't you include JPEG 2000?

This is the first one. However, I would also include various settings of the codecs involved. There is quite a bit one can do. For example, the overlap settings for XR or visual weighting for JPEG 2000, or subsampling for JPEG.

> - Correct me if I'm wrong, but JPEG-XR's native color space is not Y'CbCr, which means this format had to perform an extra (possibly lossy) color space conversion.

The question is whether PSNR was measured in YCbCr space or RGB space. The JPEG committee measures in RGB, the MPEG committee in YUV.

> - I suppose that the final lossless step used for JPEGs was the usual Huffman encoding and not arithmetic coding, have you considered testing the latter one independently?

Uninteresting, since nobody uses it: except for a couple of compression gurus, the arithmetic-coding option is pretty much unused in the field.

> - The image set is somewhat biased toward outdoor photographic images and highly contrasted artificial black-and-white ones; what about fractal renderings, operating system and 2D/3D game screenshots, or blurry, out-of-frame or night shots?

That depends very much on the use case you have. For artificial images, I would suggest not using JPEG & friends in the first place, since they depend on natural scene statistics.

Anyhow: Here is the JPEG online test which lets you select (many) parameters and measure (many) curves, as much as you want:


http://jpegonline.rus.uni-stuttgart.de/index.py

This is a cut-down version of the JPEG-internal tests, though using essentially the same tools.

Greetings,

Thomas

tric...@accusoft.com

Oct 21, 2013, 11:00:38 AM

> Are there now JPEG 2000 encoders that make images such that if you
> want to decode an image at a quarter of the full size in terms of number
> of pixels (both dimensions halved), it is sufficient to use the first
> quarter of the file length?

Yes, certainly. Just a matter of the progression mode. Set resolution to the "slowest progression variable", and off you go.
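
For instance, through Pillow's OpenJPEG-backed JPEG 2000 plugin it looks roughly like this (a sketch only; parameter availability depends on your toolchain, and the numbers are purely illustrative):

from PIL import Image

img = Image.open("photo.png").convert("RGB")
img.save(
    "photo.jp2",
    irreversible=True,     # lossy 9/7 wavelet
    num_resolutions=6,     # six resolution levels in the codestream
    progression="RPCL",    # resolution varies slowest, so low-res data comes first
    quality_layers=[40],   # a single ~40:1 layer, just as an example
)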


tric...@accusoft.com

Oct 21, 2013, 11:05:36 AM

> I think it would be worthwhile to do two experiments with real people
> evaluating the images:
> 1) For a given file size with artifacts visible, which format
> produces the least terrible artifacts?
> 2) Which format gives the smallest file size with a level of
> artifacts that is so mild that people don't notice the artifacts?

Such studies are called "subjective tests", and they have been performed by many people (not by me, though, since I don't have a vision lab, i.e. a well-calibrated environment). Yes, the outcome of such tests is of course task-dependent, and dependent on the method you choose for the test.

There is probably a good study by the EPFL from, IIRC, 2011, published at the SPIE, Applications of Digital Image Processing, and many many others.

Outcome is more or less that JPEG 2000 and JPEG XR are on par for a given set of options (which I don't remember off the top of my head) when evaluating quality by MOS scores.

This specific test did not attempt to measure the "detectability" of defects (which I would call a "near-threshold" test), but rather a "scoring" or "badness" of defects (thus, "above threshold").

battle...@gmail.com

Oct 21, 2013, 5:35:44 PM
On Monday, October 21, 2013 4:05:36 PM UTC+1, tric...@accusoft.com wrote:
> There is probably a good study by the EPFL from, IIRC, 2011, published at the SPIE, Applications of Digital Image Processing, and many many others.
>
> Outcome is more or less that JPEG 2000 and JPEG XR are on par for a given set of options (which I don't remember off the top of my head) when evaluating quality by MOS scores.

Any idea where we might be able to find the published results of these tests? I for one would be very interested in seeing them.

Yoav Weiss

Oct 22, 2013, 3:12:08 AM
I have a couple of points which IMO are missing from the discussion.

# JPEG's missing features & alpha channel capabilities in particular

Arguably, one of the biggest gains from WebP/JPEG-XR support is the ability to send real life photos with an alpha channel.

Last time I checked, about 60% of all PNG image traffic (so about ~9% of all Web traffic, according to HTTPArchive.org) is PNGs of color type 6, so 24 bit lossless images with an alpha channel. A large part of these PNGs are real-life images that would've been better represented with a lossy format, but since they require an alpha channel, authors have no choice but to use lossless PNG. (I have no data on *how many* of these PNGs are real life photos, but I hope I'll have some soon).

This is a part of Web traffic that would make enormous gains from an alpha-channel capable format, such as WebP or JPEG-XR (Don't know if HEVC-MSP has an alpha channel ATM), yet this is completely left out of the research. I think this point should be addressed.
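
(For anyone who wants to reproduce that color-type count on their own corpus, checking a PNG's IHDR is trivial; the sketch below is not my actual HTTPArchive processing, and the file name is a placeholder.)

import struct

def png_color_type(path):
    """Return the color type from a PNG's IHDR chunk (6 = truecolor + alpha)."""
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        f.read(8)                            # skip IHDR chunk length + type fields
        _w, _h, _depth, color_type = struct.unpack(">IIBB", f.read(10))
        return color_type

print(png_color_type("some_image.png"))      # prints 6 for RGBA PNGs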

# Implementability in open-source

HEVC seemed to be the "winner" of the study, with the best scores across most measurements. Yet I'm not sure an HEVC-based format is something Mozilla can implement, since it's most likely to be patent-encumbered.

If this is not the case, it should be stated loud and clear, and if it is, HEVC should probably be in the research as a point of reference (which other formats should aspire to beat), rather than as a contender.


porn...@gmail.com

Oct 22, 2013, 5:15:08 AM
On Tuesday, 22 October 2013 08:12:08 UTC+1, Yoav Weiss wrote:

> This is a part of Web traffic that would make enormous gains from an alpha-channel capable format, such as WebP or JPEG-XR (Don't know if HEVC-MSP has an alpha channel ATM), yet this is completely left out of the research. I think this point should be addressed.

If this is researched I'd love to see how it compares to lossy PNG from pngquant2 http://pngquant.org and "blurizer" tool https://github.com/pornel/mediancut-posterizer/tree/blurizer

IMHO these tools already take PNG from "too big to use" to "good enough" level.
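
(For anyone trying it, the invocation is roughly this; flags as documented in pngquant's CLI help, and the paths are made up:)

import subprocess

# Produce a lossy, palette-quantized PNG; quality range and file names are examples.
subprocess.run(
    ["pngquant", "--quality=65-80", "--speed", "1",
     "--force", "--output", "photo.lossy.png", "photo.png"],
    check=True,
)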

Marcos Caceres

Oct 22, 2013, 7:01:18 AM
to dev-pl...@lists.mozilla.org, porn...@gmail.com



On Tuesday, October 22, 2013 at 10:15 AM, porn...@gmail.com wrote:

> On Tuesday, 22 October 2013 08:12:08 UTC+1, Yoav Weiss wrote:
>
> > This is a part of Web traffic that would make enormous gains from an alpha-channel capable format, such as WebP or JPEG-XR (Don't know if HEVC-MSP has an alpha channel ATM), yet this is completely left out of the research. I think this point should be addressed.

I strongly agree with this. This is the killer feature that makes people want these new formats (apart from the byte savings), and it's kinda weird that it was not part of the study.

> If this is researched I'd love to see how it compares to lossy PNG from pngquant2 http://pngquant.org and "blurizer" tool https://github.com/pornel/mediancut-posterizer/tree/blurizer
That would be great.
--
Marcos Caceres



new...@gmail.com

Oct 25, 2013, 4:17:35 PM
On Tuesday, October 22, 2013 11:12:08 AM UTC+4, Yoav Weiss wrote:
>
> Last time I checked, about 60% of all PNG image traffic (so about ~9% of all Web traffic, according to HTTPArchive.org) is PNGs of color type 6, so 24 bit lossless images with an alpha channel. A large part of these PNGs are real-life images that would've been better represented with a lossy format, but since they require an alpha channel, authors have no choice but to use lossless PNG. (I have no data on *how many* of these PNGs are real life photos, but I hope I'll have some soon).

Searching for transparent PNG images, I see mostly logos and graphs, not real-life photos:

https://www.google.com/search?as_st=y&tbm=isch&hl=en&as_q=&as_epq=&as_oq=the+OR+be+OR+to+OR+of&as_eq=&cr=&as_sitesearch=&safe=images&tbs=ic:trans,ift:png

Maybe that's because it's easy to export transparent PNG logos from a vector editor, while drawing a clean alpha channel for real-life photos takes time and effort.

geeta....@gmail.com

Oct 26, 2013, 2:41:33 AM
On Thursday, October 17, 2013 7:48:16 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.

A few queries regarding the study's methodology:

1.) The compression_test.py code converts the input PNG image to YUV data via the following command (for the Lenna image, for instance):
convert png:Lenna.png -sampling-factor 4:2:0 -depth 8 Lenna.png.yuv

I'm not sure what the default colorspace used by ImageMagick's convert command is. As per ImageMagick's documentation, it seems this will produce YUV data (luma range [0..255]) and not digital YCbCr (luma range [16..235]), unless '-colorspace Rec601Luma' is specified. If the intermediate YUV data produced above is not YCbCr, I'm not sure that YUV data is valid input for a WebP-like encoder. Can we verify that the correct YCbCr data is generated by the convert command above?

2.) What would the quality scores (Y-SSIM etc.) look like if we skip generating this intermediate (YUV) data and instead encode the source (RGB colorspace) PNG image directly to JPEG and the other codecs (WebP etc.) at some lossy quality via ImageMagick's convert command, convert these images (JPEG, WebP etc.) back to PNG format (without the intermediate YUV step), and evaluate the quality scores (Y-SSIM, RGB-SSIM etc.) on the source and re-converted PNG files?

3.) JPEG compression being equal to or better than HEVC (on the Y-SSIM quality score) at higher qualities for the Tecnick image set looks a little suspicious to me.
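
For point 1, one quick sanity check I can think of (just a sketch with numpy; it assumes ImageMagick writes the planar Y plane first, the dimensions are for the 512x512 Lenna image, and the command is the one from compression_test.py quoted above):

import subprocess
import numpy as np

W, H = 512, 512
subprocess.run(["convert", "png:Lenna.png", "-sampling-factor", "4:2:0",
                "-depth", "8", "Lenna.png.yuv"], check=True)

yuv = np.fromfile("Lenna.png.yuv", dtype=np.uint8)
y_plane = yuv[:W * H]   # assumed planar layout: Y first, then subsampled Cb, Cr
print("Y range:", int(y_plane.min()), "-", int(y_plane.max()))
# Luma spanning roughly 0..255 on a high-contrast image suggests full-range
# YUV, while staying within about 16..235 suggests digital (studio-range) Y'CbCr.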

pals...@gmail.com

Oct 27, 2013, 1:45:24 PM
About the methodology of using identical colorspace conversion for all formats, the study asserts
> and manual visual spot checking did not suggest the conversion
> had a large effect on perceptual quality

I think this claim should be examined more carefully.

Take this image, for example: https://i.imgur.com/3pgvjFl.png
WebP quality 100, decoded to PNG: https://i.imgur.com/O6KKOZy.png
JPEG q99, 4:2:0 subsampling: https://i.imgur.com/jqdMv0d.jpg

WebP (in lossy mode) can't code it without visible banding. Even if you set the quality to 100, the 256 -> 220 range crush destroys enough information that even if nothing was lost in the later stages it'd still show significant problems.

How visible this is depends on the particular screen or viewing environment. I notice it immediately on the three devices I have at hand, but I guess on some screens it might not be so obvious. Here are the same 3 files with their contrast enhanced for easy differentiation:

Original: https://i.imgur.com/zXQ4Z5D.png
WebP: https://i.imgur.com/NBm9abp.png
JPEG: https://i.imgur.com/ASU94A7.png

Then there's the issue of only supporting 4:2:0, which is terrible for synthetic images (screen captures, renders, diagrams) and shows in natural images too. 4:4:4 (no chroma subsampling) is used a lot in practice; for example, Photoshop uses it automatically for the upper half of its quality scale when saving JPEGs. At high qualities it's often better to turn chroma subsampling off, even at the expense of slightly higher quantization.

I suspect these two issues are the reason WebP hits a quality wall in some JPEG/JXR/J2K comparisons done in RGB space.
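
For reference, the subsampling choice is a one-line switch in most encoders; with Pillow, for example (file names and quality made up, subsampling codes per Pillow's JPEG plugin):

from PIL import Image

img = Image.open("screenshot.png").convert("RGB")
img.save("q90_420.jpg", quality=90, subsampling=2)  # 4:2:0, as forced in the study
img.save("q90_444.jpg", quality=90, subsampling=0)  # 4:4:4, chroma kept at full resolution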

evac...@gmail.com

Feb 23, 2014, 5:17:52 PM
On Monday, October 21, 2013 8:54:24 AM UTC-6, tric...@accusoft.com wrote:
> > - I suppose that the final lossless step used for JPEGs was the usual Huffman encoding and not arithmetic coding, have you considered testing the latter one independently?
>
> Uninteresting, since nobody uses it: except for a couple of compression gurus, the arithmetic-coding option is pretty much unused in the field.

Nobody uses it because there's no browser support, but that doesn't change the fact that it's overwhelmingly better. And if you're going to compare JPEG to a bunch of codecs with horrible support in the real world, it seems pretty unfair to hold JPEG only to features that are broadly supported. Also, last I looked, the Firefox team refused to add support for JPEGs with arithmetic coding, even though the relevant patents have long since expired and it's already supported by libjpeg.

IMO, it's silly not to let JPEG use optimal settings for a test like this, because promulgating an entirely new standard (as opposed to improving an existing one) is much more difficult.

I would also like to see the raw libjpeg settings used; were you using float? Were the files optimized?

jbho...@gmail.com

Mar 7, 2014, 10:04:01 AM
Why did you choose JPEG quality as your independent variable? Wouldn't it make more sense to use the similarity value? When trying to match other formats to the JPEG's value, you can get close but can't match it exactly. This creates an inherent bias.

So for one thing, the data should have included the similarity values for all images, and not just the JPEG (when the JPEG value was even included). Then we could see the range of values and figure out how important the bias is. But beyond that, trying to match all formats to the same fixed value would at least give them all the same chance of bias.

When searching for the matching quality values for other formats, how precise did you make them? Was it only integers, or did you use decimal values for formats that support them? It would have been nice to see the quality values for the other formats in the data too.

Jeff Muizelaar

Mar 7, 2014, 10:21:52 AM
to evac...@gmail.com, dev-pl...@lists.mozilla.org
Perhaps it's easier. However, the point was to see whether new image formats were sufficiently better than what we have now to be worth adding support for, not to compare image formats to see which one is best.

> I would also like to see the raw libjpeg settings used; were you using float? Were the files optimized?

These are easy questions for you to answer by reading the source yourself.

-Jeff

e.blac...@gmail.com

May 9, 2014, 12:23:44 PM
On Saturday, October 19, 2013 12:14:40 PM UTC-4, stephan...@gmail.com wrote:

> Of course, you can throw a bunch of images at some naive observers with a nice web interface, but what about differences in their screens? What about differences in their lighting conditions? How do you validate people for the test (visual acuity, color blindness)?

Is the goal to find the best results for the actual audience and conditions of usage of the web, with its naive observers of varying visual acuity, varying light conditions and equipment, or to find the best results for some rarefied laboratory setup? If a format were to prove superior in the lab test but not significantly different in the messy real-world test, then I think we could conclude it wasn't worth implementing. And vice versa.

mikethed...@gmail.com

Dec 25, 2014, 7:38:43 PM
> color blindness
I know this is a common way to refer to color vision deficiency, but it's the wrong term. So-called "color blindness" really means you see colors *differently* than other people; sometimes it means you cannot see some shades that others do, but it never means you don't see colors.

Please use "color vision deficiency" for clarity and out of respect.

Also, I hope that you do not just exclude people with color vision deficiency as biased. Compression should cater to everyone. Some shades appear to have more differences than they do for typical people, some less. It is not accurate to think of people with color vision deficiency as seeing the same as typical people but with fewer colors or less vibrancy. Like I said, some color differences are *more* pronounced when you have color vision deficiency (compared to what they are for typical people), some less (the only case people seem to know about).

In summary, people see colors differently, not just less or more.

Philip Chee

Dec 25, 2014, 10:38:22 PM
What about tetrachromats?

Phil

--
Philip Chee <phi...@aleytys.pc.my>, <phili...@gmail.com>
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.

audiosc...@gmail.com

Feb 24, 2018, 12:51:07 PM
On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
> Blog post is here:
>
> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>
> Study is here:
>
> http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/

Hi,
The link to the study is broken.
Is HEVC/BPG another abandoned project?

mhoye

Feb 24, 2018, 2:00:52 PM
to dev-pl...@lists.mozilla.org


On 2018-02-24 12:51 PM, audiosc...@gmail.com wrote:
> On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
>> Blog post is here:
>>
>> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>>
>> Study is here:
>>
>> http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/
> Hi,
> The link to the study is broken.

Josh works over at Let's Encrypt now, and people.mozilla.org got
decommissioned some time ago. You can still see that page over at the
Wayback Machine.

It looks like the repo it links to -
https://github.com/bdaehlie/lossy-compression-test - hasn't seen any
motion in some time.

- mhoye


Daniel Holbert

Feb 26, 2018, 12:26:40 PM
to audiosc...@gmail.com, dev-pl...@lists.mozilla.org, jos...@gmail.com
The people.mozilla.org site was a general-purpose webserver for Mozilla
folks, and it was decommissioned entirely over the past few years. So,
that's why the study link there is broken.

You'd have to ask Josh (CC'd) if he has reposted (or could repost) the
study docs somewhere else.

~Daniel

Daniel Holbert

Feb 26, 2018, 12:30:19 PM
to audiosc...@gmail.com, dev-pl...@lists.mozilla.org, jos...@gmail.com
Ah, I missed that Mike had replied -- it sounds like archive.org's
Wayback Machine is the easier way to get at the study, as compared to
bothering Josh. :)

On 2/26/18 9:26 AM, Daniel Holbert wrote:
> The people.mozilla.org site was a general-purpose webserver for Mozilla
> folks, and it was decommissioned entirely over the past few years. So,
> that's why the study link there is broken.
>
> You'd have to ask Josh (CC'd) if he has reposted (or could repost) the
> study docs somewhere else.
>
> ~Daniel
>
> On 2/24/18 9:51 AM, audiosc...@gmail.com wrote: