'JPEGmini works with standard JPEGs. The input is a standard JPEG and the output is a standard JPEG. We recompress that standard JPEG photo by up to 80%, and the resolution remains the same and the perceptual quality of the image remains the same. When we talk about 'perceptual image quality' we mean that if you took this photo and viewed it on your screen at Actual Pixels, or 100% magnification, and compared it to the original, you wouldn't be able to determine which was the original and which was the optimized version. That's what we call 'perceptually identical' to the original.'
'Most of our customers are professional photographers, and they have realised that the photos that they get out of JPEGmini are as good as the originals and that they can use them in the same situations and for the same uses. Of course, the JPEG process introduces artefacts that you don't find in the RAW file, so any JPEG produced by Photoshop or Lightroom will have artefacts, but our claim is that our processed image will look the same as the original JPEG and the compression will not introduce further artefacts. Any JPEG compression introduces artefacts, but the question is, are these artefacts visible to humans or not? We have developed a quality measure that gives us that answer with very high accuracy. This quality measure has much better correlation with human results than other scientific quality measures.'
The software works by analyzing the content of each image and determining how much compression can be applied to each individual area. Images are broken down into tiles of a set number of pixels, and the acceptable degree of compression is assessed according to the level of information recorded in each tile. Gill wouldn't say how the tiles interact with each other, but we worked on the presumption that the tiles are about 150 pixels square.
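JPEGmini's actual analysis is proprietary and Gill gave no details, but the general idea described above - measuring how much detail a tile contains and compressing low-detail tiles harder - can be sketched roughly. In this hypothetical sketch, "detail" is simply pixel variance and the tile size, quality levels and threshold are all invented for illustration:

```python
# Hypothetical sketch only: JPEGmini's real per-tile analysis is not public.
# "Detail" is approximated here as grayscale pixel variance per tile;
# flat tiles (e.g. out-of-focus backgrounds) get heavier compression.

def tile_variance(pixels, x0, y0, size):
    """Variance of grayscale values (0-255) in one size x size tile."""
    values = [pixels[y][x]
              for y in range(y0, min(y0 + size, len(pixels)))
              for x in range(x0, min(x0 + size, len(pixels[0])))]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def quality_map(pixels, tile=150, lo=60, hi=92, threshold=500.0):
    """Assign a JPEG-style quality number to each tile: detailed tiles
    keep a high quality setting, flat tiles get a low one. The numbers
    are invented, not JPEGmini's."""
    h, w = len(pixels), len(pixels[0])
    qualities = {}
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            var = tile_variance(pixels, x0, y0, tile)
            qualities[(x0, y0)] = hi if var > threshold else lo
    return qualities
```

On a real image this map would then drive the encoder's quantisation per region; the sketch stops at the analysis step, which is the part the article describes.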
Gill says camera manufacturers don't like to use a lot of compression because too many reviewers and customers judge image quality by file size, associating smaller files with less picture information. Camera brands, he says, don't want to produce files smaller than their competitors', as some reviewers will immediately mark them down for it without studying the comparison images.
Gill says that out-of-focus backgrounds can be compressed more than focused areas, as the software analysis works by detecting the amount of detail and information present. This brings up the question of whether a poor lens will be made to look worse by the compression compared to the same area captured by a sharp lens, but Gill maintains that the difference wouldn't show. Tests, I suppose, will give us the sure answer to that.
If you view the optimized images at 800% Gill admits that you would see the differences, but at normal viewing and for normal use you won't. 'These optimised files are designed to be viewed at 100% and to be printed. In print it is even harder to see the differences than on screen.'
Gill's father is Aaron Gill, who was one of the chief scientists who worked on the original JPEG standard in the 1980s. I ask how he feels about his son tampering with the way JPEGs are created. 'At first he was sceptical and asked me what I was doing getting mixed up with this company that wants to reduce file sizes, but after he tried it I think he was proud of me.'
JPEGmini supports JPEG files up to 28MP, while its JPEGmini Pro and JPEGmini Server siblings support images up to 60MP. To give an idea of what JPEGmini does, I ran a 25.45MB Raw file through Lightroom and exported a 'best quality' JPEG of 10.12MB. After being exported again via the JPEGmini plug-in the file was compressed to 2.66MB, and it still measured the same 4608x3456 pixels (16MP) it did originally - so the JPEGmini file is about a quarter of the size of the normal JPEG.
The software still makes considerable savings even if you don't usually convert your images using the best quality settings. For comparison, that Raw file exported as a JPEG at 80% quality in Lightroom (not using JPEGmini) resulted in a 4.8MB file. The 2.66MB JPEGmini file is just over half that size.
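The ratios quoted above work out as follows; this is just a trivial check of the figures in the two paragraphs:

```python
# File sizes from the article's test, in MB.
original_raw = 25.45     # Raw file out of the camera
lightroom_best = 10.12   # Lightroom 'best quality' JPEG export
jpegmini = 2.66          # the same JPEG after the JPEGmini plug-in
lightroom_80 = 4.8       # Lightroom export at 80% quality

print(round(jpegmini / lightroom_best, 2))  # 0.26 - about a quarter
print(round(jpegmini / lightroom_80, 2))    # 0.55 - just over half
```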
Although photographers might like the idea of saving space most are not interested in doing so at the cost of quality, and frankly I think most of us struggle to believe that such a dramatic file size reduction can be achieved without any detrimental effect on the content of the picture.
In my very brief tests I have been able to see slight differences in micro contrast and in the amount of very fine texture resolved when the images were viewed at 100% on screen. More tests will be required to see exactly what is lost and what is at stake, and I'm compelled to make those tests by the carrot of saving a massive amount of storage space and the prospect of a website with large images that loads quickly. At this stage I can say that in the image I tested the plug-in with, tiny differences could be seen when the images were compared at 100%, but at print size (33%) the differences were certainly not apparent.
If you can't wait for the results of my testing you can download the standard standalone version of JPEGmini ($19.99) for a free trial. JPEGmini Pro costs $149 but can work with images of up to 60MP, is up to 8x quicker and comes with the Lightroom plug-in option as well as the standalone application. At the moment, however, JPEGmini only accepts JPEG files. That means that even using the Lightroom plug-in, a Raw file must first be converted to JPEG before it can be re-saved as a smaller JPEG by the application.
I would like to see how JPEGmini performs not only on high quality cameras, but on cheaper consumer-level cameras such as the Sony DSC-HX30V, where the JPEGs are already not so great. I would like to see 8 x 12 prints from said camera, or equivalent.
I noticed another artifact in JPGmini files: it seems to emphasize sharpening halos (dark and light lines around edges). In one image area it turned a mushy, very small brown + gray detail area into a more pronounced beige + black, so maybe it even applies its own kind of sharpening? You need to look closely to see it, though.
High frequency detail is blurred away. While this results in a loss of the finest details, it also seems to smooth JPG artifacts, especially around edges, and even more so with tonal gradations running towards edges. I suspect that this is not done via a final noise filter, but by applying different matrix coefficients in the discrete cosine transform (DCT). In fact I suspect that all JPGmini does is transcode the original JPG file by changing its DCT matrix coefficients. This would speed the process up compared to having to fully recompress each file, and would also explain why only YCbCr JPG files can be used as source by JPGmini.
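If that transcoding guess is right, the core operation would be requantising the stored DCT coefficients against a coarser quantisation table, with no inverse DCT and no recompression from pixels. A rough sketch of that idea - the coefficient and table values here are invented, not JPEGmini's:

```python
def requantize_block(quantized, old_table, new_table):
    """Requantise one block of JPEG DCT coefficients.

    `quantized` holds coefficients as stored in the file (already divided
    by `old_table`). Transcoding to a coarser `new_table` needs only a
    multiply, divide and round per coefficient - which is why this route
    would be much faster than decoding to pixels and re-encoding.
    """
    out = []
    for q, old_q, new_q in zip(quantized, old_table, new_table):
        dequantized = q * old_q           # recover the raw DCT value
        out.append(round(dequantized / new_q))
    return out
```

With a coarser table, small high-frequency coefficients collapse to zero - e.g. `requantize_block([50, -3, 1, 0], [2, 2, 4, 4], [4, 8, 16, 16])` gives `[25, -1, 0, 0]` - which matches the observed loss of the finest detail.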
HEVC still uses the DCT as its transform, which is a floating-point transform. Since most codec implementations of the DCT are written with integer maths (fixed point) for speed, it means that with each pass through the codec information is still lost, even if you skip the quantisation step used for lossy compression. It's not what maths geeks call "reversible".
Whereas JPEG XR uses a completely different transform that is integer from the start, so you can turn off the lossy compression steps, and the image you get back after decompression will be pixel for pixel identical to the one you fed in.
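The reversibility point is easy to demonstrate with a toy transform. A naive float DCT-II/DCT-III pair round-trips only to floating-point precision, and as soon as the coefficients are rounded to integers - as a fixed-point codec effectively does - the round trip is no longer exact. This is a generic one-dimensional illustration, not HEVC's or JPEG XR's actual transform:

```python
import math

def dct2(x):
    """Type-II DCT of a 1-D signal (unnormalised)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of dct2 (a scaled type-III DCT)."""
    N = len(X)
    return [(2.0 / N) * (X[0] / 2 +
            sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                for k in range(1, N)))
            for n in range(N)]

signal = [52, 55, 61, 66, 70, 61, 64, 73]

# Float round trip: the error is only floating-point noise.
float_error = max(abs(a - b)
                  for a, b in zip(signal, idct2(dct2(signal))))

# Rounding the coefficients to integers - with no quantisation table
# at all - already makes the transform non-reversible.
int_error = max(abs(a - b)
                for a, b in zip(signal,
                                idct2([round(c) for c in dct2(signal)])))
```

Here `float_error` is vanishingly small while `int_error` is not, which is the "not reversible" behaviour described above; a transform designed to be integer from the start avoids the second error entirely.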
This is not a new format. This is a new encoder for the same format. You can open these files in any JPEG viewer because they are JPEG. All they need to do is convince companies that embed encoders into their devices or software to use their algorithm. Obviously "Forensic" style encoders should be using something other than JPEG altogether. This seems ideal for phones and tablets to encode before publishing on services like Flickr or Instagram, etc.
It's not a valid comparison because the authors of this software are not suggesting there is NO difference. They are saying there is a difference, but that at normal viewing these differences are not perceived. That huyzer had to flip back and forth proves that point.
What huyzer has done is "pixel peeping" - which is pathetic. To "isolate the differences" all one would have to do is make a difference layer in Photoshop or some such software, which is a lot less hassle, quicker, and tells you exactly how different the images are (which is not a lot - just a few percent in a few areas).
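The difference-layer check doesn't need Photoshop at all; the same idea can be sketched in a few lines. This hypothetical version takes two equal-sized grayscale images as nested lists and reports how many pixels differ beyond an arbitrary threshold:

```python
def difference_stats(a, b, threshold=8):
    """Per-pixel absolute difference of two equal-sized grayscale
    images (nested lists of 0-255 values), like Photoshop's
    Difference blend mode. Returns the difference image and the
    fraction of pixels whose difference exceeds `threshold`
    (the threshold value is arbitrary)."""
    diff = [[abs(pa - pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]
    total = sum(len(row) for row in diff)
    changed = sum(1 for row in diff for v in row if v > threshold)
    return diff, changed / total
```

For real JPEGs you would first decode both files to pixel arrays (with an imaging library of your choice) and feed those in; the arithmetic itself is exactly what a Difference layer does.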
Flipping between the two images is a perfectly valid way to see where any differences may be.
If you evaluate by just looking at one image and it's JPEGmini version, yes, you may not see a difference. But that, in part, is because humans have a very poor memory for image content when doing that kind of comparison. I would argue that pretty much any JPEG compression except for the most intense will not result in an image that is perceptually different than the original if you just evaluate by looking at the full original image and then looking at the compressed version. So, there's no special value to adding an extra step, and cost, to the workflow.
@BadScience
What is pathetic is your pathetic attitude.
You're trying to show how smart you are with the difference layer in Photoshop. Great, I know how to do that too. Let's open the program, copy the first image, paste it, copy the second image, paste it, then set the layer to difference. That's a lot of unnecessary steps compared with opening two images in separate tabs. Get the point?
My point is that I'm pixel peeping to see if it's worth the quality loss. You have to test/pixel peep to do that. And later changes, effects and heavy-handed editing will suffer more from the compression.
From a business standpoint, Google rankings are affected by how long a page takes to load, especially on mobile. We carved about 3MB off the front page with no loss of quality. It was an easy improvement. Over time, this increases business.
If you have an image-heavy website, JPEGmini works great. I put it through its paces before buying it, trying some of the other plugins like RIOT, but it was by far the easiest to use and its claims held up.