matthia...@yahoo.com
Feb 19, 2015, 2:03:29 AM
to mozilla-d...@lists.mozilla.org
I've been comparing mozjpeg 3.0 to some other jpeg encoders and have discovered that the following technique produces results almost identical to trellis quantization but without the added complexity or performance hit:
Given a quantized DCT coefficient computed with floating-point precision: if the coefficient is between -0.75 and 0.75, encode a zero; otherwise encode it as normal.
I've tried this with baseline sequential as well as progressive JPEGs, and with both the standard JPEG specification quantization tables and the new mozjpeg default quantization tables. I haven't tried it with arithmetic coding.
I'm using a couple of different quality measures: MSSIM and my own proprietary measure. I get the same results with both: if you plot measured quality vs. file size at various JPEG quality levels, using either trellis quantization or the technique above, all the points lie on essentially the same curve.
Since I've done this using some proprietary technology, and since quality measures vary, would anyone care to try to reproduce my results? It looks as though making the necessary changes to mozjpeg or libjpeg-turbo would be trivial for the floating-point FDCT algorithm and not too hard for the others.