|Studying Lossy Image Compression Efficiency||Josh Aas||10/17/13 7:48 AM|
This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.
|Re: Studying Lossy Image Compression Efficiency||Josh Aas||10/17/13 7:50 AM|
|Re: Studying Lossy Image Compression Efficiency||pgas...@gmail.com||10/17/13 10:27 AM|
On Thursday, 17 October 2013 10:48:16 UTC-4, Josh Aas wrote: It would be interesting if you could post your conclusions from these tests.
|Re: Studying Lossy Image Compression Efficiency||cry...@free.fr||10/17/13 10:50 AM|
Thank you for publishing this study, here are my first questions:
- Why didn't you include JPEG 2000?
- Correct me if I'm wrong, but JPEG-XR's native color space is not Y'CbCr, which means this format had to perform an extra (possibly lossy) color space conversion.
- I suppose that the final lossless step used for JPEG was the usual Huffman coding and not arithmetic coding; have you considered testing the latter independently?
- The image set is somewhat biased toward outdoor photographic images and high-contrast artificial black-and-white ones. What about fractal renderings, operating-system and 2D/3D game screenshots, or blurry, out-of-frame or night shots?
- I've found only two cats and not a single human face in the Tecnick image set, and no fancy à la Instagram filters; this can't seriously be representative of web images. A larger image corpus would be welcome.
|Re: Studying Lossy Image Compression Efficiency||Leman Bennett (Omega X)||10/17/13 11:20 AM|
On 10/17/2013 9:48 AM, Josh Aas wrote: HEVC-MSP did really well. It's unfortunate that Mozilla could not use it in any capacity, since it's tied to the encumbered MPEG HEVC standard.
Also, I didn't know that someone was working on a JPEG-XR FOSS encoder.
I wonder how it compares to the Microsoft reference encoder.
MozillaZine Nightly Tester
|Re: Studying Lossy Image Compression Efficiency||Josh Aas||10/17/13 2:07 PM|
On Thursday, October 17, 2013 12:50:12 PM UTC-5, cry...@free.fr wrote: We couldn't test everything; we picked a small set of the formats that we hear the most about and that seem interesting. We're not opposed to including JPEG 2000 in future testing, particularly if we see more evidence that it's competitive.
We considered improving the image sets in some of the ways you suggest; we just didn't get to it this time. Trying to be thorough and accurate with these kinds of studies is more work than it seems, and we couldn't do everything. We'll try to do better with image sets in future work. I still think this set produces meaningful results.
Thanks for the feedback. Maybe Tim, Gregory, or Jeff can respond to some of your other questions.
|Re: Studying Lossy Image Compression Efficiency||cry...@free.fr||10/17/13 3:08 PM|
HDR-VDP-2 is a relatively recent metric that produces predictions for difference visibility and quality degradation.
It could be interesting to add this metric in future studies.
Rafał Mantiuk (the guy behind HDR-VDP-2) also worked on this paper: "New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts" http://www.mpi-inf.mpg.de/resources/hdr/iqm-evaluation/
This suggests that a blinded experiment (real people evaluating the images) to compare compressed images still has value. It could be fun to conduct such an experiment: present 2 or 3 versions of the same image compressed with different methods and ask a wide panel (it could be open to anyone on the web) to pick their favorite.
|Re: Studying Lossy Image Compression Efficiency||Yoav Weiss||10/18/13 1:57 AM|
On Thursday, October 17, 2013 4:48:16 PM UTC+2, Josh Aas wrote: Thank you for publishing this research!
While I like the methodology used a lot, I find the image sample used extremely small to accurately represent images on today's Web (or tomorrow's Web for that matter).
I understand that one of the reasons you used artificial benchmarks instead of real-life Web images is to avoid the bias of images that already went through JPEG compression.
Would you consider a large sample of lossless Web images (real-life images served as PNG24, even though it'd be wiser to serve them as JPEGs) to be unbiased enough to run this research against? I believe such a sample would better represent Web images.
|Re: Studying Lossy Image Compression Efficiency||battle...@gmail.com||10/18/13 2:31 AM|
Very interesting study. I'm shocked to see WebP and JPEG-XR perform so poorly on so many of the tests. Do they really perform *that* much *worse* than JPEG? It seems hard to imagine. I've done my own tests on JPEG, WebP and JPEG-XR by blindly comparing files of the same size and deciding subjectively which one I thought looked closest to the uncompressed version. The conclusions I came to were, I thought, very close to the RGB-SSIM tests, which showed WebP best, JPEG-XR much better than JPEG but significantly behind WebP, and JPEG much worse than all. This seemed consistent to me at all encoding qualities with many kinds of images, just as the RGB-SSIM tests show. It seems very curious that Y-SSIM, IW-SSIM and PSNR-HVS-M all show JPEG-XR and WebP dipping below JPEG quality at the same file sizes. I'd be very interested in seeing the images for which those comparisons determine that JPEG-XR and WebP are doing a worse job than JPEG.
I think the most important kind of comparison to do is a subjective blind test with real people. This of course produces less accurate results, but more meaningful ones. It doesn't really matter if a certain algorithm determines that a certain codec produces less lossy images than another codec if actual humans looking at the compressed images don't tend to feel the same way. All that matters in the end is whether a codec does a good job of keeping the details that the human compressing and viewing the image thinks are important, not what various image-quality algorithms think are important.
Although it's outside the scope of this study, I wonder what interest Mozilla is taking in image formats with more features being supported on the Web? Lossy + transparency seems like a particularly desirable one for games, and certainly for web developers in general. RGB565 colour format support sounds like it could be useful for optimized WebGL applications.
|Re: Studying Lossy Image Compression Efficiency||ch...@improbable.org||10/18/13 7:16 AM|
On Thursday, October 17, 2013 1:50:12 PM UTC-4, cry...@free.fr wrote: You might find https://bugzilla.mozilla.org/show_bug.cgi?id=36351#c120 interesting: it discusses what it would take to get a new format into Firefox: a high-quality open-source library, some level of security vetting and a solid rights statement.
I think JP2 support could potentially be very interesting because it would make responsive images almost trivial without requiring separate files (i.e. srcset could simply specify a byte-range for each size image) but the toolchain support needs some serious attention.
|Re: Studying Lossy Image Compression Efficiency||jmar...@google.com||10/18/13 8:03 AM|
The blog post indicates "we’re primarily interested in the impact that smaller file sizes would have on page load times". I looked at the study, http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/, but couldn't find any reference to PLT impact from smaller file sizes.
Moreover, the web performance community is moving away from PLT as a proxy for user experience in favor of Speed Index (https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index) and other metrics. See http://velocityconf.com/velocityny2013/public/schedule/detail/32820 from last week's Velocity conference in NYC.
To deliver a sub-second mobile web experience we have a very tight time budget. Ideally we'd like to deliver any important above-the-fold images in the first CWND. Ilya Grigorik (see "High Performance Browser Networking" on Amazon) gives a guideline of 40k bytes of network traffic with no blocking external resources to have a fighting chance. Images can be inlined if they are important to the initial user experience.
Full disclosure: I work on mod_pagespeed and ngx_pagespeed, which do those transformations automatically, as well as transcoding to webp for compatible browsers.
In addition to the quality graphs, I'd like to see web page videos (www.webpagetest.org) and waterfalls to see the impact of making images smaller. This can be done in the context of Firefox, which doesn't support the new image formats: you could simulate them by recompressing JPEG at a lower quality until you get the size you'd get using WebP, relying on your quality metrics to determine the target byte size for acceptable quality.
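The "recompress until the sizes match" step described above is just a bisection over the encoder's quality knob. A minimal sketch, assuming a hypothetical `encode(q)` callable that wraps whatever real encoder you use (ImageMagick, mozjpeg, cwebp) and whose output size grows roughly monotonically with quality:

```python
def quality_for_size(encode, target_bytes, lo=1, hi=100):
    """Find the highest integer quality whose encoded size fits target_bytes.

    `encode(q)` is a hypothetical stand-in for a real encoder call
    (e.g. ImageMagick's `convert -quality q`); it must return the
    compressed bytes and be roughly monotone in size.
    """
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if len(encode(mid)) <= target_bytes:
            best = mid       # fits: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1     # too big: try a lower quality
    return best

# Toy monotone "encoder": size grows 100 bytes per quality step.
fake_encode = lambda q: b"\x00" * (q * 100)
print(quality_for_size(fake_encode, 4000))  # -> 40
```

With a real encoder the size curve has small non-monotonic bumps, so in practice you'd verify the final size rather than trust the bisection blindly.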
|Re: Studying Lossy Image Compression Efficiency||lept...@gmail.com||10/18/13 8:34 AM|
I think you are attacking this from the wrong angle. Being responsible in an enterprise for quite a few sites, most issues I have are where all current formats fail miserably. To make the point, see the two following images, where I have to live with huge PNG-24 files because either (a) alpha transparency is necessary, or (b) other formats fail from a visual perspective.
So I am not so much in need of a compressor that can shave a couple of bytes off standard photo-like images, but in desperate need of an efficient encoder for 256+ colour palette-driven colour-gradient graphs, photos with alpha transparency, or, even more so, a mixture of both.
|Re: Studying Lossy Image Compression Efficiency||Ralph Giles||10/18/13 4:12 PM|
On 2013-10-18 1:57 AM, Yoav Weiss wrote: Do you have such a sample?
Note that the scripts to run the image comparisons are on github, so you
can verify the results and try them on other image sets.
|Re: Studying Lossy Image Compression Efficiency||battle...@gmail.com||10/19/13 3:15 AM|
On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> Do you have such a sample?
For what it's worth, here's an image I made quite a while ago showing the results of my own blind subjective comparison between codecs: http://www.filedropper.com/lossy
The image shows the original lossless image alongside a JPEG, JPEG-XR, JPEG2000 and WebP version, all compressed to 7.5kb. I used the LEADTOOLS compression suite for all images except the WebP one, where I used Google's libwebp. I'll be *very* clear here that I don't consider this image very good proof of how good each codec is; clearly the JPEG-compressed image could be optimized more. The lossy compressed images are ordered as JPEG, JPEG-XR, JPEG2000, WebP with respect to the results I personally came to about their performance, WebP being the best and JPEG the worst. I did this comparison at every quality level and with many different source images and found the subjective results were the same. The difference between WebP, JPEG2000 and JPEG-XR can at times be hard to call, as it felt like I was deciding which compression artifacts bothered me most personally rather than which image felt closest to the original. What was consistent, however, was that all the modern codecs seemed clearly superior to JPEG, or at best appeared the same as JPEG at higher compression qualities, but certainly never worse. What I'm saying is, based on my own experience I'd be shocked if anyone could go through a subjective blind test like this and feel that JPEG was performing better at any quality level or with any images.
I'd also agree with the points brought up by lept...@gmail.com. I think the actual features supported by the current range of web image formats are quite lacking. It's common on the web for web and game developers to compress photographic images as PNGs because they need transparency. Animated GIFs are also popular for compressing short live-action video clips, something the format is terribly inadequate for. Both JPEG-XR and WebP include alpha transparency support. Only WebP supports animation, though I believe animation could be added to JPEG-XR easily: http://ajxr.codeplex.com/. The extra color formats supported in JPEG-XR could one day be useful on the web too.
Although better compression performance in web image formats would have obvious speed benefits, I think the limited feature set of the currently supported formats is holding web developers back far more than file-size issues.
|Re: Studying Lossy Image Compression Efficiency||Jeff Muizelaar||10/19/13 4:30 AM|
I agree that in this comparison JPEG is clearly the worst. However, the bitrate you are using here is well below the target JPEG is designed for, and the quality of all of the image formats is lower than would be acceptable for nearly all purposes. This makes these results much less interesting than results at the quality levels typically used on the web.
|Re: Studying Lossy Image Compression Efficiency||battle...@gmail.com||10/19/13 4:55 AM|
I completely agree. This is why I don't want this image to be considered good proof of which codec is superior. I had another image exactly like this with comparisons around 35kb, a much more realistic quality level for all the codecs at which the differences in visual quality loss were still noticeable, but I seem to have lost it. My own subjective finding was that the codecs ranked in roughly the same order, with JPEG always identifiably the worst, until the file size rose to the point where it was impossible to easily tell the difference between any of the lossy compressed images. I think if this test image were compressed for a typical web page it would be an 80kb JPEG or so, and at this level of quality it's difficult for a human to differentiate between a WebP or JPEG-XR image of the same size, or in some cases even the original lossless image. In this respect subjective blind testing by humans fails, but my point is that I never observed JPEG-XR or WebP performing worse than JPEG, and I'm very surprised to see that some algorithms report JPEG outperforming WebP and JPEG-XR by a wide margin at some quality settings, in stark contrast to what other algorithms report (RGB-SSIM) and what my own, and I believe other, subjective blind tests would report.
|Re: Studying Lossy Image Compression Efficiency||stephan...@gmail.com||10/19/13 9:14 AM|
I'll just talk about the quality-evaluation aspects of this study, as it is a field I know quite well (I did my PhD on the topic, albeit on video specifically).
I don't get how more meaningful results can be less accurate... Running subjective quality tests is not as trivial as it sounds, at least if you want meaningful results, as you say. Of course, you can throw a bunch of images at some naive observers with a nice web interface, but what about differences in their screens? In their lighting conditions? How do you screen participants for the test (visual acuity, color blindness)? I've run more than 600 test sessions with around 200 different observers. Each one of them was tested before the session, and a normalized (ITU-R BT.500) room was dedicated to the process. I don't want to brag, I just mean it's a complicated matter, and not as sexy as it sounds :-)
In this study, you used several objective quality criteria (Y-SSIM, RGB-SSIM, IW-SSIM, PSNR-HVS-M). You say yourself: "It's unclear which algorithm is best in terms of human visual perception, so we tested with four of the most respected algorithms." Still, the ultimate goal of your test is to compare different degrading systems (image coders here) at equivalent *perceived* quality. As your graphs show, they don't produce very consistent results (especially RGB-SSIM). SSIM-based metrics are structural, meaning they evaluate how the structure of the image differs from one version to the other; they are therefore very dependent on the content of the picture. Y-SSIM and IW-SSIM are only applied to the luma channel, which is not optimal in your case, as image coders tend to blend colors. Still, IW-SSIM is the best performer in  (but it was the subject of the study), so why not. Your results with RGB-SSIM are very different from the others, which disqualifies it for me. Besides, averaging SSIM over the R, G and B channels makes no sense for the human visual system. PSNR-HVS-M has the advantage of using a CSF to weight its PSNR, but it was designed on artificial artifacts, so you don't know how it performs on compression artifacts. None of these metrics puts the human visual system at its heart; at best, they apply some HVS filter to PSNR or SSIM. For a more HVS-related metric that tends to perform well (over 0.92 in correlation), look at  (from the lab I worked in). The code is a bit old now, but an R package seems to be available.
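The objection to averaging a metric over R, G and B can be seen with plain per-channel error numbers. A toy sketch using MSE as a stand-in for SSIM (the pixel values are invented), showing that a channel-averaged score and a Rec. 601 luma score weight the same chroma-only distortion very differently:

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def luma(rgb):
    # Rec. 601 luma weights: the eye is far more sensitive to G than to B.
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb]

orig = [(100, 100, 100)] * 4
dist = [(100, 100, 140)] * 4   # distort only the blue channel by 40 levels

per_channel = [mse([p[c] for p in orig], [p[c] for p in dist]) for c in range(3)]
rgb_avg = sum(per_channel) / 3          # treats the B error like any other
luma_err = mse(luma(orig), luma(dist))  # weights the B error by 0.114**2

print(rgb_avg, luma_err)  # ~533.3 vs ~20.8: a 25x difference for one distortion
```

The same blue-channel error dominates the channel-averaged score but barely registers in luma, which is one concrete way the RGB-averaged and luma-only metrics in the study can disagree.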
You cite , in which they compare 5 algorithms (PSNR, IW-PSNR, SSIM, MS-SSIM and IW-SSIM) over 6 subject-rated independent image databases (the LIVE, Cornell A57, IVC, Toyama, TID2008 and CSIQ databases). These databases contain images and subjective quality evaluations obtained in normalized (i.e. repeatable) conditions. Most of them use JPEG and JPEG2000 compression, but not the other formats you want to test. The LIVE database is known not to be spread widely enough, resulting in high correlations in most studies (which is why the other databases emerged). If you want to take your study further, consider starting from some of these data.
Finally, be careful when you compute averages: did you check the distribution of the values first?
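That closing caveat is easy to act on: inspect the spread before quoting a mean. A minimal sketch with Python's statistics module (the per-image scores are invented for illustration):

```python
import statistics

# Hypothetical per-image quality scores: one outlier image drags the mean.
scores = [0.95, 0.94, 0.96, 0.93, 0.95, 0.60]

mean = statistics.mean(scores)
median = statistics.median(scores)
stdev = statistics.stdev(scores)

# A mean far from the median (relative to the spread) hints at skew;
# in that case report the median or the full distribution, not the mean alone.
print(f"mean={mean:.3f} median={median:.3f} stdev={stdev:.3f}")
```

Here the mean (~0.888) sits well below the median (0.945) because of a single bad image, so an averaged quality curve would misrepresent how the codec behaves on typical content.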
|Re: Studying Lossy Image Compression Efficiency||Yoav Weiss||10/20/13 1:34 PM|
On Saturday, October 19, 2013 1:12:14 AM UTC+2, Ralph Giles wrote: Assuming Mozilla would consider such a sample valid, I can get such a sample using data from httparchive.org.
|Re: Studying Lossy Image Compression Efficiency||danbr...@gmail.com||10/20/13 5:24 PM|
I have a couple of fundamental issues with how you're calculating 3 of the 4 metrics (all but RGB-SSIM, which I didn't think too much about).
First, am I correct in my reading of your methodology that for all metrics, you encode a color image (4:2:0) and use that encoded filesize? If so, then all the results from greyscale metrics are invalid, as the filesize would include chroma, but the metric only measures luma. An encoder could spend 0 bits on chroma and get a better score than an encoder that spent more bits on chroma than luma.
Second, for Y-SSIM and IW-SSIM only, it appears you encode as color, then afterwards convert both the original image and encoded image to greyscale and calculate SSIM between those two images. This is fundamentally wrong - the original converted to greyscale was not the image the codec encoded, so you're not measuring the distortion of the codec. It looks like PSNR-HVS-M is calculated from the YUV fed into the encoder, which is how Y-SSIM and IW-SSIM should be calculated as well.
Fortunately, the solution is easy - for greyscale metrics, simply convert to greyscale before encoding, not after. Or, if that's what you're already doing, make it clear.
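The scoring flaw described above (a luma-only metric paired with a color-encoded file size) can be made concrete with a toy model; the "codec" and its byte costs below are invented, not real encoders:

```python
def file_size(y_bits_per_px, c_bits_per_px, n_px=1000):
    # File size in bytes for a hypothetical codec that budgets bits
    # separately for luma (Y) and chroma (C).
    return n_px * (y_bits_per_px + c_bits_per_px) / 8

# Encoder A keeps some chroma; encoder B spends those bits on luma instead.
size_a = file_size(y_bits_per_px=1.0, c_bits_per_px=0.5)
size_b = file_size(y_bits_per_px=1.5, c_bits_per_px=0.0)

# Same total size, but a luma-only metric sees only the Y budget, so B
# scores strictly better while producing what is effectively a greyscale
# image. That is the unfair comparison the post is pointing at.
print(size_a, size_b)
```

Converting to greyscale *before* encoding, as suggested, removes the loophole: both encoders would then be scored on exactly the bits the metric measures.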
|Re: Studying Lossy Image Compression Efficiency||stephan...@gmail.com||10/21/13 3:29 AM|
> I have a couple of fundamental issues with how you're calculating 3 of the 4 metrics (all but RGB-SSIM, which I didn't think too much about)
You are right about this; the methodology is not clear on this point.
All metrics are computed on *decoded* data, whether RGB or YUV. Maybe the authors could publish a flow chart of their methodology; that would ease the discussion.
As I already said, it's not really a good idea to use luma-only metrics to assess color images, as coders tend to blend colors as the quality index decreases.
|Re: Studying Lossy Image Compression Efficiency||Henri Sivonen||10/21/13 5:35 AM|
On Fri, Oct 18, 2013 at 1:08 AM, <cry...@free.fr> wrote: I think it would be worthwhile to do two experiments with real people
evaluating the images:
1) For a given file size with artifacts visible, which format
produces the least terrible artifacts?
2) Which format gives the smallest file size with a level of
artifacts that is so mild that people don't notice the artifacts?
My limited experience suggests that the ranking of the formats could
be different for those two different questions. Also, my understanding
is that the quality metric algorithms are foremost about answering
question #1 while question #2 is often more important for Web
|Re: Studying Lossy Image Compression Efficiency||Henri Sivonen||10/21/13 5:43 AM|
On Fri, Oct 18, 2013 at 5:16 PM, <ch...@improbable.org> wrote: Are there now JPEG 2000 encoders that make images such that if you
want to decode an image in quarter of the full-size in terms of number
of pixels (both dimensions halved), it is sufficient to use the first
quarter of the file length?
Last I tried, which was years ago, in order to decode a quarter-sized
image in terms of number of pixels with quality comparable to the
full-size image in terms of visible artifacts, it was necessary to
consume half of the file length. That is, in order to use the image
with both dimensions halved, it was necessary to load twice as many
bytes as would have been necessary if there was a separate pre-scaled
file available. Having to transfer twice as much data does not seem
like a good trade-off in order to avoid creating separate files for
|Re: Studying Lossy Image Compression Efficiency||Chris Adams||10/21/13 7:21 AM|
It's not as simple as reading n% of the bit-stream – the image needs
to be encoded using tiles so a tile-aware decoder can simply read only
the necessary levels. This is very popular in the library community
because it allows a site like e.g. http://chroniclingamerica.loc.gov/
to serve tiles for a deep-zoom viewer without having to decode a full
600 DPI scan. This is in common use, but not with open-source software, because the venerable libjasper doesn't support it (and is excruciatingly slow); the newer OpenJPEG added support for it, though, so it's now possible without relying on a licensed codec.
As far as transfer efficiency goes, it's slightly more overhead with
the tile wrappers but not enough to come anywhere close to cancelling
out compression win from using JP2 instead of JPEG. For those of us
running servers, it's also frequently a win for cache efficiency
versus separate images – particularly if a CDN miss means you have to
go back to the origin and your stack allows streaming the cached
initial portion of the image while doing byte-range requests for the
|Re: Studying Lossy Image Compression Efficiency||tric...@accusoft.com||10/21/13 7:54 AM|
There are probably a couple of issues here:
This is the first one. However, I would also include various settings of the codecs involved. There is quite a bit one can do. For example, the overlap settings for XR or visual weighting for JPEG 2000, or subsampling for JPEG.
The question is whether PSNR was measured in YCbCr space or RGB space. The JPEG measures in RGB, the MPEG in YUV.
Uninteresting, since nobody uses it: except for a couple of compression gurus, the AC coding option is pretty much unused in the field.
That depends very much on your use case. For artificial images, I would suggest not using JPEG & friends in the first place, since they depend on natural-scene statistics.
Anyhow: Here is the JPEG online test which lets you select (many) parameters and measure (many) curves, as much as you want:
This is a cut-down version of the JPEG-internal tests, though using essentially the same tools.
|Re: Studying Lossy Image Compression Efficiency||tric...@accusoft.com||10/21/13 8:00 AM|
Yes, certainly. Just a matter of the progression mode. Set resolution to the "slowest progression variable", and off you go.
|Re: Studying Lossy Image Compression Efficiency||tric...@accusoft.com||10/21/13 8:05 AM|
> I think it would be worthwhile to do two experiments with real people
> evaluating the images:
> 1) For a given file size with artifacts visible, which format
> produces the least terrible artifacts?
> 2) Which format gives the smallest file size with a level of
> artifacts that is so mild that people don't notice the artifacts?
Such studies are called "subjective tests", and they have been performed by many people (not by me, though, since I don't have a vision lab, i.e. a well-calibrated environment). Yes, the outcome of such tests is of course task-dependent, and dependent on the method you choose for the test.
There is a good study by EPFL from, IIRC, 2011, published at SPIE Applications of Digital Image Processing, and many, many others.
The outcome is more or less that JPEG 2000 and JPEG XR are on par for a given set of options (which I don't remember off the top of my head) when evaluating quality by MOS scores.
This specific test did not attempt to measure the "detectability" of defects (which I would call a "near-threshold" test), but rather a "scoring" or "badness" of defects (thus, "above threshold").
|Re: Studying Lossy Image Compression Efficiency||battle...@gmail.com||10/21/13 2:35 PM|
On Monday, October 21, 2013 4:05:36 PM UTC+1, tric...@accusoft.com wrote: Any idea where we might be able to find the published results of these tests? I for one would be very interested in seeing them.
|Re: Studying Lossy Image Compression Efficiency||Yoav Weiss||10/22/13 12:12 AM|
I have a couple of points which IMO are missing from the discussion.
# JPEG's missing features & alpha channel capabilities in particular
Arguably, one of the biggest gains from WebP/JPEG-XR support is the ability to send real life photos with an alpha channel.
Last time I checked, about 60% of all PNG image traffic (so about ~9% of all Web traffic, according to HTTPArchive.org) is PNGs of color type 6, so 24 bit lossless images with an alpha channel. A large part of these PNGs are real-life images that would've been better represented with a lossy format, but since they require an alpha channel, authors have no choice but to use lossless PNG. (I have no data on *how many* of these PNGs are real life photos, but I hope I'll have some soon).
This is a part of Web traffic that would make enormous gains from an alpha-channel capable format, such as WebP or JPEG-XR (Don't know if HEVC-MSP has an alpha channel ATM), yet this is completely left out of the research. I think this point should be addressed.
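The "color type 6" figure above refers to the color-type byte in the PNG IHDR chunk, which is easy to tally over a corpus; a minimal stdlib sketch (no CRC or later-chunk validation, and the hand-built header at the bottom is just for illustration):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_color_type(data: bytes) -> int:
    """Return the color type from a PNG's IHDR chunk.

    Color type 6 is truecolor with alpha, i.e. the 24-bit + alpha
    PNGs discussed above.
    """
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG")
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR" or length != 13:
        raise ValueError("missing IHDR")
    # IHDR payload: width(4) height(4) bit depth(1) color type(1) ...
    _w, _h, _depth, color = struct.unpack(">IIBB", data[16:26])
    return color

# Minimal hand-built 1x1 RGBA header (no IDAT; enough for the parser):
hdr = PNG_SIG + struct.pack(">I", 13) + b"IHDR" + struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0)
print(png_color_type(hdr))  # -> 6
```

Run over an HTTPArchive-style corpus, counting `png_color_type(open(f, "rb").read()) == 6` per file would reproduce the kind of breakdown quoted above.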
# Implementability in open-source
HEVC seemed to be the "winner" of the study, with the best scores across most measurements. Yet I'm not sure an HEVC-based format is something Mozilla can implement, since it's most likely patent-encumbered.
If this is not the case, it should be stated loud and clear; and if it is, HEVC should probably be in the research as a point of reference (which other formats should aspire to beat) rather than as a contender.
|Re: Studying Lossy Image Compression Efficiency||porn...@gmail.com||10/22/13 2:15 AM|
On Tuesday, 22 October 2013 08:12:08 UTC+1, Yoav Weiss wrote: If this is researched, I'd love to see how it compares to lossy PNG from pngquant2 http://pngquant.org and the "blurizer" tool https://github.com/pornel/mediancut-posterizer/tree/blurizer
IMHO these tools already take PNG from "too big to use" to "good enough" level.
|Re: Studying Lossy Image Compression Efficiency||Marcos Caceres||10/22/13 4:01 AM|
I strongly agree with this. This is the killer feature that makes people want these new formats (apart from the byte savings), and it's kind of weird that it was not part of the study.
That would be great.
|Re: Studying Lossy Image Compression Efficiency||new...@gmail.com||10/25/13 1:17 PM|
On Tuesday, October 22, 2013 11:12:08 AM UTC+4, Yoav Weiss wrote: Searching for transparent PNG images turns up mostly logos and graphs, not real-life photos:
Maybe because it's easy to export transparent PNG logos from a vector editor, but drawing a clean alpha channel for real-life photos takes time and effort.
|Re: Studying Lossy Image Compression Efficiency||geeta....@gmail.com||10/25/13 11:41 PM|
On Thursday, October 17, 2013 7:48:16 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled "Studying Lossy Image Compression Efficiency", and the related study.
A few queries regarding the study's methodology:
1.) The compression_test.py code converts the input PNG image to YUV data via following command (for Lenna image for instance):
convert png:Lenna.png -sampling-factor 4:2:0 -depth 8 Lenna.png.yuv
I'm not sure what default colorspace ImageMagick's convert command uses. As per ImageMagick's documentation, this seems to produce YUV data (luma range [0..255]) and not digital YCbCr (luma range [16..235]) unless '-colorspace Rec601Luma' is specified. If the intermediate YUV data produced above is not YCbCr, I'm not sure it is valid input for an encoder like WebP. Can we verify that correct YCbCr data is generated by the convert command above?
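The distinction this question hinges on (full-range "YUV" luma vs. digital YCbCr) is just two affine maps from the same RGB; a sketch with Rec. 601 coefficients (whether Rec. 601 is what the study's pipeline intends is an assumption):

```python
def luma_full_range(r, g, b):
    # Full-range "YUV": Y spans 0..255.
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_studio_range(r, g, b):
    # Digital YCbCr per Rec. 601: Y spans 16..235 ("studio swing").
    return 16 + 219 * luma_full_range(r, g, b) / 255

# White lands on 255 in one convention and 235 in the other, so an
# encoder expecting YCbCr that is fed full-range data sees out-of-range
# (effectively clipped) input at the extremes.
print(luma_full_range(255, 255, 255), luma_studio_range(255, 255, 255))
```

Comparing a few pixel values of the `.yuv` file against both formulas would settle which convention convert actually produced.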
2.) How would the quality scores (Y-SSIM etc.) look if we skipped generating this intermediate YUV data and instead encoded the source (RGB colorspace) PNG image directly (at some lossy quality) to JPEG and the other codecs (WebP etc.) via ImageMagick's convert command, converted these images back to PNG (without the intermediate YUV step), and evaluated the quality scores (Y-SSIM, RGB-SSIM etc.) on the source and re-converted PNG files?
3.) JPEG compression being equal to or better than HEVC (on the Y-SSIM quality score) at higher qualities for the Tecnick image set looks a little suspicious to me.
|Re: Studying Lossy Image Compression Efficiency||pals...@gmail.com||10/27/13 10:45 AM|
About the methodology of using identical colorspace conversion for all formats, the study asserts
> and manual visual spot checking did not suggest the conversion
> had a large effect on perceptual quality
I think this claim should be examined more carefully.
Take this image, for example: https://i.imgur.com/3pgvjFl.png
WebP quality 100, decoded to PNG: https://i.imgur.com/O6KKOZy.png
JPEG q99, 4:2:0 subsampling: https://i.imgur.com/jqdMv0d.jpg
WebP (in lossy mode) can't code it without visible banding. Even if you set the quality to 100, the 256 -> 220 range crush destroys enough information that, even if nothing were lost in the later stages, it would still show significant problems.
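The "256 -> 220 range crush" can be demonstrated directly: mapping full-range 8-bit luma into the [16, 235] studio range leaves only 220 representable levels, so at least 36 distinct inputs collapse together before any transform coding happens. A quick sketch:

```python
def to_studio(y_full):
    # Full-range [0, 255] luma to studio-range [16, 235], as 8-bit integers.
    return 16 + round(y_full * 219 / 255)

levels = {to_studio(v) for v in range(256)}
collisions = 256 - len(levels)

print(len(levels), collisions)  # 220 distinct levels; 36 inputs merged
```

On a smooth gradient those merged levels are exactly what shows up as banding, since the loss happens before (and regardless of) quantization.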
How visible this is depends on the particular screen or viewing environment. I notice it immediately on the three devices I have at hand, but I guess on some screens it might not be so obvious. Here are the same 3 files with their contrast enhanced for easy differentiation:
Then there's the issue of only supporting 4:2:0 chroma subsampling, which is terrible for synthetic images (screen captures, renders, diagrams) and shows in natural images too. 4:4:4 (i.e. no chroma subsampling) is used a lot in practice; for example, Photoshop uses it automatically for the upper half of its quality scale when saving JPEGs. At high qualities it's often better to turn chroma subsampling off, even at the expense of slightly higher quantization.
I suspect these two issues are the reason WebP hits a quality wall in some JPEG/JXR/J2K comparisons done in RGB space.
|Re: Studying Lossy Image Compression Efficiency||evac...@gmail.com||2/23/14 2:17 PM|
On Monday, October 21, 2013 8:54:24 AM UTC-6, tric...@accusoft.com wrote:
> > - I suppose that the final lossless step used for JPEGs was the usual Huffman encoding and not arithmetic coding, have you considered testing the later one independently?
> Uninteresting since nobody uses it - except a couple of compression gurus, the AC coding option is pretty much unused in the field.
Nobody uses it because there's no browser support, but that doesn't change the fact that it's overwhelmingly better. And if you're going to compare JPEG to a bunch of codecs with horrible support in the real world, it seems pretty unfair to hold only JPEG to features that are broadly supported. Also, last I looked, the FF team refused to add support for JPEGs with arithmetic coding, even though the relevant patents have long since expired and it's already supported by libjpeg.
IMO, it's silly not to let JPEG use optimal settings for a test like this, because promulgating an entirely new standard (as opposed to improving an existing one) is much more difficult.
I would also like to see the raw libjpeg settings used; were you using float? Were the files optimized?
|Re: Studying Lossy Image Compression Efficiency||jbho...@gmail.com||3/7/14 7:04 AM|
Why did you choose JPEG quality as your independent variable? Wouldn't it make more sense to use the similarity value? When trying to match other formats to the JPEG's value, you can get close but can't match it exactly. This creates an inherent bias.
So for one thing, the data should have included the similarity values for all images, not just the JPEG (when the JPEG value was even included). Then we could see the range of values and figure out how important the bias is. Beyond that, trying to match all formats to the same fixed value would at least give them all the same exposure to the bias.
When searching for the matching quality values for other formats, how precise were they? Integers only, or decimal values for formats that support them? It would have been nice to see the quality values for the other formats in the data too.
|Re: Studying Lossy Image Compression Efficiency||Jeff Muizelaar||3/7/14 7:21 AM|
Perhaps it's easier. However, the point was to see whether new image formats were sufficiently better than what we have now to be worth adding support for, not to compare image formats to see which one is best.
These are easy questions for you to answer by reading the source yourself.
|Re: Studying Lossy Image Compression Efficiency||e.blac...@gmail.com||5/9/14 9:23 AM|
On Saturday, October 19, 2013 12:14:40 PM UTC-4, stephan...@gmail.com wrote:
> Of course, you can throw a bunch of images to some naive observers with a nice web interface, but what about their screens differences? what about their light conditions differences? how do you validate people for the test (vision acuity, color blindness)?
Is the goal to find the best results for the actual audience and conditions of use of the web, with its naive observers of varying visual acuity, varying lighting conditions and equipment, or to find the best results for some rarefied laboratory setup? If a format were to prove superior in the lab test but not significantly different in the messy real-world test, then I think we could conclude it wasn't worth implementing. And vice versa.
|Re: Studying Lossy Image Compression Efficiency||mikethed...@gmail.com||12/25/14 4:38 PM|
> color blindness
I know this is a common way to refer to color vision deficiency, but it's the wrong term. So-called "color blindness" really means you see colors *differently* than other people; sometimes it means you cannot see some shades that others do, but it never means you don't see colors.
Please use "color vision deficiency" for clarity and out of respect.
Also, I hope you do not simply exclude people with color vision deficiency as biased. Compression should cater to everyone. Some shade differences appear more pronounced to them than to typical people, some less. It is not accurate to think of people with color vision deficiency as seeing the same as typical people but with fewer colors or less vibrancy. Like I said, some color differences are *more* pronounced when you have color vision deficiency (compared to what they are for typical people), some less (the only case people seem to know about).
In summary, people see colors differently, not just fewer or more.
|Re: Studying Lossy Image Compression Efficiency||Philip Chee||12/25/14 7:38 PM|