
Alternative to trellis


matthia...@yahoo.com

Feb 19, 2015, 2:03:29 AM
to mozilla-d...@lists.mozilla.org
I've been comparing mozjpeg 3.0 to some other jpeg encoders, and I've discovered that the following technique produces results almost identical to trellis quantization, but without the added complexity or performance hit:

Given a quantized DCT coefficient computed with floating-point precision: if the coefficient is between -0.75 and 0.75, encode a zero; otherwise encode it as normal.
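A minimal sketch of the rule, assuming per-coefficient quantization against a quantizer step; the function name and signature here are illustrative, not from mozjpeg or libjpeg-turbo:

```python
def quantize_coefficient(coeff, q, threshold=0.75):
    """Quantize one DCT coefficient against quantizer step q,
    zeroing any result whose magnitude falls below the threshold."""
    scaled = coeff / q                    # floating-point quantized value
    if -threshold < scaled < threshold:
        return 0                          # small coefficient: encode a zero
    return round(scaled)                  # otherwise encode as normal

# With q = 20: a coefficient of 14.0 scales to 0.7, inside the
# (-0.75, 0.75) band, so it is encoded as 0; 16.0 scales to 0.8,
# outside the band, and rounds to 1.
```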

I've tried this with both baseline sequential and progressive jpegs, using the standard jpeg specification quantization tables as well as the new mozjpeg default quantization tables. I haven't tried arithmetic coding.

I'm using a couple of different quality measures, MSSIM and my own proprietary measure, and I get the same results with both: if you plot measured quality vs. file size at various jpeg quality levels, using either trellis or the above technique, all the points lie on pretty much the same curve.

Since I've done this using some proprietary technology, and since quality measures vary, would anyone care to try to reproduce my results? It looks as though making the necessary changes to mozjpeg or libjpeg-turbo would be trivial for the floating-point FDCT algorithm, and not too hard for the others.

matthia...@yahoo.com

Feb 19, 2015, 2:03:29 AM
to mozilla-d...@lists.mozilla.org
I've been testing a couple of different jpeg encoders, and I've discovered that the following method produces a rate/quality improvement almost identical to the default trellis quantization implemented in mozjpeg 3.0, but without the complexity or performance hit:

Calculate quantized DCT coefficients with floating-point precision. If a coefficient falls between -0.75 and +0.75, encode it as zero; otherwise encode it in the usual way.

I have tried this with baseline sequential and progressive jpegs, using both the standard jpeg specification quantization tables and the mozjpeg 3.0 default quantization tables. I haven't tried arithmetic encoding.

I am measuring quality using MSSIM and also with my own proprietary full reference quality measure. Results are the same.

My test technique is to take a high-quality original image (all quantization table entries set to 1) and encode it at different quality levels from 50 to 95. I then compare each encoded image to the original using one of the above quality measurement tools, which lets me plot a graph of image quality vs. file size. Enabling trellis in mozjpeg shifts the graph slightly down (i.e. smaller files for a given measured quality), and rounding DCT coefficients below 0.75 in magnitude to zero produces almost exactly the same result.
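The sweep described above can be sketched roughly as follows; `encode` and `measure` are stand-ins for the real JPEG encoder and the MSSIM / proprietary quality measure, which aren't specified here:

```python
def rate_quality_sweep(image, encode, measure, qualities=range(50, 100, 5)):
    """Encode `image` at each quality level and collect
    (file_size, measured_quality) points for a rate/quality curve."""
    points = []
    for q in qualities:
        compressed = encode(image, q)          # compressed bytes at quality q
        score = measure(image, compressed)     # full-reference quality score
        points.append((len(compressed), score))
    return points
```

Running the sweep once with trellis enabled and once with the coefficient-rounding rule would give the two curves being compared.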

I should point out that I'm using a fair bit of home-grown software in my testing (so far I've only implemented the coefficient-rounding technique in my own jpeg encoder). Also, I'm using a limited set of test images, and quality measurement seems like a bit of a black art anyway.

I could work on this some more, but it would be much more convincing if someone else were able to reproduce the same results. It looks as though modifying mozjpeg to round down coefficients below 0.75 is trivial if you use the floating-point FDCT method. Anyone care to have a go?