
Lossless compression of existing JPEG files


Darryl Lovato

Jan 7, 2005, 2:36:23 PM
Yesterday, Allume Systems, a division of IMSI (and creators of the popular
"StuffIt" compression technology), announced a new technology which allows
users and developers to losslessly recompress JPEG files to an average of 30%
smaller than the original JPEG file (as well as other compressed data
types/files), WITHOUT additional data loss.

While the "Compression" of existing compressed files has thus far been
viewed as "impossible", the company has acquired and further developed, and
submitted patents, on a technology which allows for Jpeg to be further
compressed. The method is applicable to other compressed data types (Zip,
MPEG, MP3 and others) to be losslessly re-compressed.

This technology results in a smaller file than the original compressed data
with no data loss.

Working Pre-release test tools have been sent to (and verified by)
independent compression test sites, including:

<http://www.maximumcompression.com/>
<http://www.compression.ca/>

The new technology does NOT break any laws of information theory, and will
ship later this quarter in commercial products as well as being available
for licensing. The new technology does NOT compress "random files", but
rather previously "compressed files" and "compressed parts" of files. The
technology is NOT recursive.

The company has filed patents on the new technologies.

The press releases regarding the technology can be found here:

<http://www.allume.com/company/pressroom/releases/stuffit/010605stuffit9.html>
<http://www.allume.com/company/pressroom/releases/stuffit/010605jpeg.html>

Additionally, a white paper has been posted which details the company's
expansion into image compression from its traditional lossless
archiving/text compression focus, along with results of the technology.

http://www.stuffit.com/imagecompression/

These technologies will be included in future versions of the StuffIt
product line as well as new products and services, and technology licenses
available from Allume and IMSI.

The core technology will also be licensed to companies in the medical,
camera, camera phone, image management, internet acceleration, and many
other product areas.

- Darryl

Jeff Gilchrist

Jan 7, 2005, 4:03:40 PM
Unlike others claiming to be able to compress already compressed data,
Allume actually sent me working code so I could verify for myself that
what they claim is true.

The test program they sent me is a beta version and not the final
product so things could change for the commercial release. The test
program they sent was a single Windows executable (.exe) that is
118,784 bytes in size and performs both compression and uncompression.
The executable itself is not compressed and UPX would bring it down to
about 53KB.

I will describe the method of testing I used to show that no tricks
were being performed and nothing could "fool" me into thinking the
algorithm was working if it was not. I placed the compressor onto a
floppy disk and copied the .exe file to Machine A and Machine B. The
two machines have no way to communicate with each other and are not
connected to any wired or wireless network. I used my Nikon Coolpix
3MP digital camera to generate JPEG files for use in the test. The
three image files were copied to Machine A. They were not copied to
and did not previously exist on Machine B. On Machine A, the SHA-1
hashes of the 3 JPEG files were taken and the digests written down. The
Allume software was used to individually compress the 3 JPEG files and
sure enough they all got around 25% compression. Only the compressed
files were copied to a floppy disk and then walked over to Machine B.
They were then copied to Machine B and, using the Allume executable
already installed there, the 3 JPEG files were decompressed. The file
sizes on Machine B matched the originals on Machine A, which is a good
start. The images were viewable in a JPEG reader as expected. Finally,
the SHA-1 hashes of the decompressed files on Machine B were taken and
confirmed to match the SHA-1 hashes of the original JPEG files on
Machine A, which means they are bit-for-bit identical. This is not a
hoax; the algorithm actually works.
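
A minimal sketch of the digest check used above, for anyone who wants to
repeat it (Python; the compress/decompress steps are done with whatever
tool is being tested, so only the verification part is shown):

import hashlib
import sys

def sha1_of(path):
    # Hex SHA-1 digest of a file, read in 1 MB chunks.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    original, roundtripped = sys.argv[1], sys.argv[2]
    a, b = sha1_of(original), sha1_of(roundtripped)
    print(original, a)
    print(roundtripped, b)
    print("bit-for-bit identical" if a == b else "MISMATCH: not lossless")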

Here are some details from my testing and how their compression
algorithm compares to some popular ones. Even if you don't want to
believe me, this algorithm should be available in the next release of
their compression product so you can verify it for yourself.


Test JPEGs:

DSCN3974.jpg (National Art Gallery, Ottawa, Canada)
File size : 1114198 bytes
Resolution : 2048 x 1536
Jpeg process: Baseline (Fine Compression)
SHA-1 Hash : f6b3b306f213d3f7568696ed6d94d36e58b4ce1b

DSCN4465.jpg (Golden Pagoda, Kyoto, Japan)
File size : 694895 bytes
Resolution : 2048 x 1536
Jpeg process: Baseline (Normal Compression)
SHA-1 Hash : 5f3d92f558d7cc2d850aa546ae287fa7b61f890d

DSCN5081.jpg (AI Building, MIT, USA)
File size : 516726 bytes
Resolution : 2048 x 1536
Jpeg process: Baseline (Fine Compression)
SHA-1 Hash : 3dcf29223076c4acae5108f6d2fa04cd1ddc5e70

Test Machine: P4 1.8GHz, 512MB RAM, Win2000


Results
=======

DSCN3974.jpg (1,114,198 bytes)

Program             Comp Time   Uncomp Time   Compressed Size   % Smaller
-------             ---------   -----------   ---------------   ---------
Allume JPEG         7.9 sec     8.4 sec         835,033 bytes       25.0%
bzip2 1.02          1.6 sec     0.5 sec       1,101,627 bytes        1.1%
7-Zip 3.13 (PPMd)   4.3 sec     3.9 sec       1,102,032 bytes        1.1%
zip 2.3 -9j         0.2 sec     0.1 sec       1,104,866 bytes        0.8%
rar 3.42 -m5        1.7 sec     0.1 sec       1,107,336 bytes        0.6%
7-Zip 3.13 (LZMA)   2.6 sec     0.4 sec       1,113,492 bytes        0.1%

DSCN4465.jpg (694,895 bytes)

Program             Comp Time   Uncomp Time   Compressed Size   % Smaller
-------             ---------   -----------   ---------------   ---------
Allume JPEG         5.8 sec     6.1 sec         526,215 bytes       24.3%
bzip2 1.02          1.0 sec     0.3 sec         683,344 bytes        1.7%
zip 2.3 -9j         0.1 sec     0.1 sec         683,462 bytes        1.6%
rar 3.42 -m5        1.2 sec     0.1 sec         685,283 bytes        1.4%
7-Zip 3.13 (PPMd)   2.5 sec     2.4 sec         687,425 bytes        1.1%
7-Zip 3.13 (LZMA)   1.6 sec     0.3 sec         689,264 bytes        0.8%

DSCN5081.jpg (516,726 bytes)

Program             Comp Time   Uncomp Time   Compressed Size   % Smaller
-------             ---------   -----------   ---------------   ---------
Allume JPEG         5.8 sec     6.0 sec         374,501 bytes       27.5%
7-Zip 3.13 (PPMd)   2.0 sec     1.8 sec         504,718 bytes        2.3%
rar 3.42 -m5        0.8 sec     0.1 sec         505,296 bytes        2.2%
zip 2.3 -9j         0.1 sec     0.1 sec         505,334 bytes        2.2%
bzip2 1.02          0.7 sec     0.2 sec         506,714 bytes        1.9%
7-Zip 3.13 (LZMA)   1.2 sec     0.2 sec         508,449 bytes        1.6%


A couple of sample JPGs sent by Allume showed even better compression
performance. One JPG (1610 x 3055) with a file size of 315,085 bytes
compressed by 54.8% to 142,281 bytes. A second sample JPG (1863 x
2987) with a file size of 40,367 bytes compressed by 90.9% to 3,656
bytes. Your mileage will vary.

Regards,
Jeff Gilchrist
(www.compression.ca)

Fulcrum

Jan 7, 2005, 4:50:00 PM
My testing method was similar to Jeff's, but instead of a SHA-1 hash I
did a binary diff between the original and the
compressed-then-decompressed JPEG. The average compression rate I got was a
bit less than 30% (around 24%). Almost all files I tested scored
compression ratios over 20%.
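
The same check can be done without hashes; a minimal sketch of a
byte-for-byte comparison (file names here are placeholders):

import filecmp

# shallow=False forces an actual byte-by-byte content comparison.
same = filecmp.cmp("original.jpg", "roundtripped.jpg", shallow=False)
print("identical" if same else "files differ")
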
---
Regards,
Werner Bergmans
(www.maximumcompression.com)

Phil Carmody

Jan 7, 2005, 8:12:54 PM
"Jeff Gilchrist" <jsgil...@hotmail.com> writes:
> Program Comp Time Uncomp Time Compressed Size % Smaller
> ------- --------- ----------- --------------- ---------
> Allume JPEG 7.9 sec 8.4 sec 835,033 bytes 25.0%
> bzip2 1.02 1.6 sec 0.5 sec 1,101,627 bytes 1.1%
> 7-Zip 3.13 (PPMd) 4.3 sec 3.9 sec 1,102,032 bytes 1.1%
> zip 2.3 -9j 0.2 sec 0.1 sec 1,104,866 bytes 0.8%
> rar 3.42 -m5 1.7 sec 0.1 sec 1,107,336 bytes 0.6%
> 7-Zip 3.13 (LZMA) 2.6 sec 0.4 sec 1,113,492 bytes 0.1%

The comparison of a specific JPEG compressor (i.e. a compressor
of JPEGs) against a general purpose compressor does not seem
particularly informative. It's not comparing like with like.
What happens when you run Allume JPEG on the Calgary corpus?

It's long been known that JPEG is suboptimal in many places.
However, it does appear that Allume have demonstrated by how
much JPEG can be improved, much more than I expected, and so
all kudos to them for that remarkable feat. Great work guys!

Phil


--
The gun is good. The penis is evil... Go forth and kill.

Phil Carmody

Jan 7, 2005, 8:15:44 PM
Darryl Lovato <dlo...@allume.com> writes:
> The new technology does NOT compress "random files", but rather
> previously "compressed files" and "compressed parts" of files.

You're not cranks - but I'd change the wording of that sentence a bit,
as it says too much, and could be deliberately misinterpreted such
that it actually is demonstrably false.

Darryl Lovato

Jan 7, 2005, 8:24:33 PM
On 1/7/05 5:12 PM, in article 87u0psa...@nonospaz.fatphil.org, "Phil
Carmody" <thefatphi...@yahoo.co.uk> wrote:

> "Jeff Gilchrist" <jsgil...@hotmail.com> writes:
>> Program Comp Time Uncomp Time Compressed Size % Smaller
>> ------- --------- ----------- --------------- ---------
>> Allume JPEG 7.9 sec 8.4 sec 835,033 bytes 25.0%
>> bzip2 1.02 1.6 sec 0.5 sec 1,101,627 bytes 1.1%
>> 7-Zip 3.13 (PPMd) 4.3 sec 3.9 sec 1,102,032 bytes 1.1%
>> zip 2.3 -9j 0.2 sec 0.1 sec 1,104,866 bytes 0.8%
>> rar 3.42 -m5 1.7 sec 0.1 sec 1,107,336 bytes 0.6%
>> 7-Zip 3.13 (LZMA) 2.6 sec 0.4 sec 1,113,492 bytes 0.1%
>
> The comparison of a specific JPEG compressor (i.e compressor
> of JPEGs) against a general purpose compressor does not seem
> particularly informative. It's not comparing like with like.
> What happens when you run Allume JPEG on the calgary corpus?

Depends on how you look at it - this technology will be shipped in StuffIt
(as well as other things), so you can replace the "Allume JPEG" title of
the test tool with "StuffIt" - a general purpose compression product, just
like the others.

> It's long been known that JPEG is suboptimal in many places.
> However, it does appear that Allume have demonstrated by how
> much JPEG can be improved, much more than I expected, and so
> all kudos to them for that remarkable feat. Great work guys!

Thank you.

- Darryl

> Phil

Darryl Lovato

Jan 7, 2005, 8:50:50 PM
On 1/7/05 5:15 PM, in article 87pt0ga...@nonospaz.fatphil.org, "Phil
Carmody" <thefatphi...@yahoo.co.uk> wrote:

> Darryl Lovato <dlo...@allume.com> writes:
>> The new technology does NOT compress "random files", but rather
>> previously "compressed files" and "compressed parts" of files.
>
> You're not cranks - I'd change the wording of that sentence a bit,
> as it says too much, and could be deliberately misinterpreted such
> that it actaully is demonstrably false.

Phil,

It doesn't say "ALL compressed files" or even "ALL compressed parts of
files", but what is stated above is factual - it does compress previously
compressed files, and/or, previously compressed parts of files - it just
depends on what those "compressed files, and compressed parts of files are".

I DO understand what you are saying, however - and believe me, we want to
distance ourselves from what has gone on here (comp.compression) in the past
regarding hoaxes.

So, for now, I'll refine the statement to compressed jpeg files, and parts
of files that include jpegs (the technology covers more than JPEG, but since
that's all we submitted for independent benchmarking/testing/verification so
far, I'm OK with limiting the statement to that at this time).

Thanks for pointing this out.

- Darryl

> Phil

Phil Carmody

Jan 7, 2005, 10:50:30 PM
Darryl Lovato <dlo...@allume.com> writes:
> I DO understand what you are saying, however - and believe me, we want to
> distance ourselves from what has gone on here (comp.compression) in the past
> regarding hoaxes.

Absolutely. That's why that particular sentence seemed to stand out
just a little.


Side note - is there a maintainer for the comp.compression FAQ?
The "some compressions schemes leave room for further compression"
and "recursive compression" concepts probably need to have a great
fat wedge driven between them, lest loons or innocents confuse the
two.


Cheerio,

Alexis Gallet

Jan 8, 2005, 3:54:47 AM
Hi,

Very interesting results indeed! I'm curious, and I'd like to ask you a few
questions:

* How does the lossless recompression rate vary with the JPEG's quality
factor? I would expect that the lower the quality factor of the original
JPEG, the better the recompression works...?

* About the JPEG files used for the tests: were they baseline or baseline
optimized JPEGs? Would one get similar recompression ratios with JPEGs
generated by the famous IJG library?

Anyway, I'm impressed by the figures! It would be very interesting to run a
JPEG+StuffIt versus JPEG2000 comparison on your test files, e.g. by plotting
an (x=bitrate, y=PSNR) graph...

Regards,
Alexis Gallet


Severian

Jan 8, 2005, 4:20:59 AM
On 7 Jan 2005 13:03:40 -0800, "Jeff Gilchrist"
<jsgil...@hotmail.com> wrote:

You don't mention the processor types and speeds involved, but unless
the time required for this extra compression is reduced considerably
in the released product, it will have very limited usage until
processors are considerably faster.

Disk space and bandwidth are both relatively cheap; making users wait
7-10 seconds to save or view an extra-compressed image (vs. my
estimate of 1-3 seconds for the original JPEG) is simply annoying.

>A couple of sample JPGs sent by Allume showed even better compression
>performance. One JPG (1610 x 3055) with a file size of 315,085 bytes
>compressed by 54.8% to 142,281 bytes.

That's likely a low-quality JPEG to begin with. Why care that the
recompression is lossless? The image most likely looks like shit
already.

>A second sample JPG (1863 x
>2987) with a file size of 40,367 bytes compressed by 90.9% to 3,656
>bytes. Your mileage will vary.

At that compression, the original JPEG is useless noise, or at least a
useless image.

I'm not surprised that JPEG-compressed data can be compressed further;
but the time required for the extra compression will make it initially
useful only in fringe applications (archival storage, etc.), until it
is reasonably quick.

By that time, JPEG2000 or newer methods should be ubiquitous, and
provide better results at least as quickly. Also, they handle the
higher color depths necessary for decent digital photography.

--
Sev

Jeff Gilchrist

Jan 8, 2005, 6:48:35 AM
Hi Severian,

Actually I do mention processor types and speeds. If you re-read my
post you will find:

"Test Machine: P4 1.8GHz, 512MB RAM, Win2000"

The sample files are special cases. I saw around 25% compression with
my own files. They only claim 30% on average. I was just pointing out
what the algorithm can do. The sample images do not look like "shit"
already, but even if they did not look that great, you would not want
to lose any more quality.

From what the company has said, they will be including the algorithm in
their Stuffit archiving software so it will be used for what you
suggest (archival storage, etc...).

Regards,
Jeff.

Jeff Gilchrist

Jan 8, 2005, 6:52:56 AM
Phil,

Their JPEG algorithm will be part of a general purpose compressor, so it
seems like a good comparison to me. Also, many people are not
"experts" in compression and do not even realize that programs such as
ZIP and RAR get almost no compression from image data like JPEG. I was
also posting the details so people could see how the algorithm compares
in speed.

Regards,
Jeff.

Darryl Lovato

Jan 8, 2005, 10:58:14 AM
On 1/8/05 12:54 AM, in article 41df9fda$0$31792$626a...@news.free.fr,
"Alexis Gallet" <alexis.galle...@free.fr> wrote:

> Hi,
>
> very interesting results indeed ! I'm curious, I'd like to ask you a few
> questions :
>
> * how does the lossless recompression rate varies with the JPEG's quality
> factor ? I would expect that the lower the quality factor of the original
> JPEG, the better the recompression works...?

The more initial JPEG image loss, the more additional lossless compression
we get with our technology. I.e., if someone is really concerned about file
size, they can go as far as possible with JPEG without seeing visual
artifacts, then use our technology on the result to get it even smaller
with no additional loss.

Here are results for the Kodak image set (24 files) with various JPEG
quality settings:

Uncompressed   JPG Level         JPG   StuffIt/jpg   StuffIt/jpg vs JPG
  28,325,140          10     385,260       209,233                45.7%
  28,325,140          20     599,579       389,672                35.0%
  28,325,140          30     778,323       539,459                30.7%
  28,325,140          40     926,698       664,426                28.3%
  28,325,140          50   1,068,196       783,370                26.7%
  28,325,140          60   1,222,806       912,331                25.4%
  28,325,140          70   1,461,298     1,111,188                24.0%
  28,325,140          80   1,852,334     1,432,695                22.7%
  28,325,140          90   2,767,835     2,172,781                21.5%
  28,325,140         100   8,005,968     6,462,394                19.3%
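
For anyone who wants to reproduce this kind of quality sweep, here is a
rough sketch of the plain-JPEG side of it (Python Imaging Library; the
file name is a placeholder for one of the 24 Kodak images, and the StuffIt
recompression step is proprietary, so only the JPEG sizes are produced):

import io
from PIL import Image

src = Image.open("kodim01.png")    # placeholder: one of the 24 Kodak images
for quality in range(10, 101, 10):
    buf = io.BytesIO()
    src.save(buf, "JPEG", quality=quality)
    print(f"quality {quality:3d}: {buf.tell():>9,} bytes")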

> * About the JPEG files used for the tests : were they baseline or baseline
> optimized JPEGs ? would one get similar recompression ratios with JPEGs
> generated by the famous IJG library ?

It works for any JPEG - baseline, grayscale, progressive. I would expect
that we'd get similar ratios on any valid JPG file no matter what it was
created with, but haven't specifically tested output from the IJG lib.

> Anyway, I'm impressed by the figures !

Thanks.

>It would be very interesting to run a
> JPEG+stuffIt versus JPEG2000 comparison on your test files, eg by plotting
> (x=bitrate, y=PSNR) graph...

You can get an idea of what it would be by taking an existing comparison
test that someone has done and reducing the bitrate of JPEG by about 20-40%.

It'll be released to the public in a couple of months, so feel free to run
the comparison - I'd be interested in the results you get.

> Regards,
> Alexis Gallet
>
>

Matt Mahoney

Jan 8, 2005, 7:39:50 PM
"Darryl Lovato" <dlo...@allume.com> wrote in message
news:BE0424B5.15860%dlo...@allume.com...

> Yesterday, Allume Systems, a division of IMSI (and creators of the popular
> "StuffIt" compression technology) announced a new technology which allows
> users and developers to losslessly recompress JPEG files an average of 30%
> smaller than the original JPEG file (as well as other Compressed data
> types/files), WITHOUT additional data loss.
>
> While the "Compression" of existing compressed files has thus far been
> viewed as "impossible", the company has acquired and further developed, and
> submitted patents, on a technology which allows for Jpeg to be further
> compressed. The method is applicable to other compressed data types (Zip,
> MPEG, MP3 and others) to be losslessly re-compressed.

Interesting. I guess the idea might be to uncompress the data, then
compress it with a better model. This is probably trickier for lossy
formats like JPEG than lossless ones like Zip.

> Working Pre-release test tools have been sent to (and verified by)
> independent compression test sites, including:
>
> <http://www.maximumcompression.com/>

The benchmark has one JPEG file, so far not yet updated. The best
compression currently is 3.5% (WinRK 2.0). (There is also a .bmp file that
was converted from a JPEG, so it has JPEG-like artifacts that make it
easier to compress losslessly.)

> <http://www.compression.ca/>

This benchmark hasn't been updated since 2002 and doesn't have any JPEG
files. However I'm not aware of any other benchmarks that do.

-- Matt Mahoney


Severian

Jan 8, 2005, 7:51:39 PM
On 8 Jan 2005 03:48:35 -0800, "Jeff Gilchrist"
<jsgil...@hotmail.com> wrote:

>Hi Severian,
>
>Actually I do mention processor types and speeds. If you re-read my
>post you will find:
>
>"Test Machine: P4 1.8GHz, 512MB RAM, Win2000"

Sorry, I missed it, even though I looked for it.

>The sample files are special cases. I saw around 25% compression with
>my own files. They only claim 30% on average. I was just pointing out
>what the algorithm can do. The sample images do not look like "shit"
>already, but even if they did not look that great, you would not want
>to lose any more quality.

I haven't seen them, but I find it hard to believe the second example
is useful; perhaps it's an inappropriate file for JPEG compression in
the first place.

As far as losing more quality, that is true.

>From what the company has said, they will be including the algorithm in
>their Stuffit archiving software so it will be used for what you
>suggest (archival storage, etc...).

Yes, but it also makes their announcement a bit less amazing than they
seem to want everyone to believe!

Anyone seriously dealing with images would archive the originals
(losslessly compressed). I'm not sure I understand the market for this
new compression. Porn collectors? Google.com and archive.org?

It just doesn't seem to be as big a deal as they make it out to be.

--
Sev

Darryl Lovato

Jan 8, 2005, 8:59:45 PM
On 1/8/05 4:51 PM, in article dlv0u09m1jv8t7r2f...@4ax.com,
"Severian" <seve...@chlamydia-is-not-a-flower.com> wrote:

> On 8 Jan 2005 03:48:35 -0800, "Jeff Gilchrist"
> <jsgil...@hotmail.com> wrote:
>
>> Hi Severian,
>>
>> Actually I do mention processor types and speeds. If you re-read my
>> post you will find:
>>
>> "Test Machine: P4 1.8GHz, 512MB RAM, Win2000"
>
> Sorry, I missed it, even though I looked for it.
>
>> The sample files are special cases. I saw around 25% compression with
>> my own files. They only claim 30% on average. I was just pointing out
>> what the algorithm can do. The sample images do not look like "shit"
>> already, but even if they did not look that great, you would not want
>> to lose any more quality.
>
> I haven't seen them, but I find it hard to believe the second example
> is useful; perhaps it's an inappropriate file for JPEG compression in
> the first place.

The two sample files I sent Jeff were:

A portrait of a pretty young lady. (fully clothed) :-)
A NASA/Space Image of a star cluster.

The JPEG compression was not high enough to make noticeable artifacts in
either image. As stated elsewhere, the more a JPEG is compressed, the
greater the percentage reduction we can get, so higher JPEG compression
settings do make a difference, but the nature of the original pre-JPEG
image also makes a difference.

> As far as losing more quality, that is true.
>
>>> From what the company has said, they will be including the algorithm in
>> their Stuffit archiving software so it will be used for what you
>> suggest (archival storage, etc...).
>
> Yes, but it also makes their announcement a bit less amazing than they
> seem to want everyone to believe!

Tons of users send and store JPEGs. Being able to compress them, even 20%
(about the least we get), without creating additional image loss adds up.

Many times, the user no longer has the original image (cameras storing
directly to JPEG on the compact flash card, etc). You can always take an
image and recompress it with a higher JPEG compression setting, but that
makes the original JPEG loss permanent AND adds more loss. Our
technology allows you to reduce the size without affecting the quality at
all.

> Anyone seriously dealing with images would archive the originals
> (losslessly compressed). I'm not sure I understand the market for this
> new compression. Porn collectors? Google.com and archive.org?

I'm sure it will be used for all the above :-)

> It just doesn't seem to be as big a deal as they make it out to be.

I suppose it depends on the user. Going from 1-3% compression of JPEGs to
20-40% (in some fringe cases much more) is a pretty significant advancement
in the field of lossless compression IMHO. Especially given the typically
large size and the sheer popularity of these files.

You have no idea how many people, in the past, have put a JPEG (or many JPEG
files) into a StuffIt archive, then complained to us that we barely
compressed it at all. Even "reviewers", who should know better, do this more
often than you would think. The technology we announced will be applied to
other compressed file types as well.

- Darryl

> --
> Sev

code_wrong

Jan 8, 2005, 9:24:58 PM

"Darryl Lovato" <dlo...@allume.com> wrote in message
news:BE05CF2A.158FB%dlo...@allume.com...

It's brilliant .. when can we see the algorithm ;-)
or the patents, even?


Darryl Lovato

Jan 8, 2005, 9:53:25 PM
On 1/8/05 6:24 PM, in article 110523747...@demeter.uk.clara.net,
"code_wrong" <t...@tac.ouch.co.uk> wrote:

<Snipped out a bunch of stuff not relevant to this reply>

>>> It just doesn't seem to be as big a deal as they make it out to be.
>>
>> I suppose it depends on the user. Going from 1-3% compression of JPEGs to
>> 20-40% (in some fringe cases much more) is a pretty significant
>> advancement
>> to the field of lossless compression IMHO. Especially given the average
>> "large size" and "large popularity" of these files.
>>
>> You have no idea how many people, in the past, put a JPEG (or many jpeg
>> files) into a StuffIt archive, then complain to us that we barely
>> compressed
>> it at all. Even "reviewers", that should know better, do this more often
>> than you would think. The technology we announced will be applied to
>> other
>> compressed file types as well.
>
> It brilliant .. when can we see the algorithm ;-)
> or the patents even

Our patent lawyer has advised us not to give out details of the inventions
at this time. There are actually 2 patents.

I assume you will all be able to see the patents when the patent office
posts them to its site - they describe the processes involved. I'm not
really sure when they will be posted; the full utility patent application
was submitted prior to our announcement (and a provisional application a
few months ago as well).

The first consumer versions of software (StuffIt, etc.) that include the new
technologies/inventions will ship late this quarter. Licensed versions
will be available shortly thereafter.

You'll have to take our word for it (and, more importantly, Werner Bergmans'
and Jeff Gilchrist's independent verification) for now.

It works.

- Darryl

Darryl Lovato

Jan 8, 2005, 11:11:02 PM
On 1/8/05 4:39 PM, in article
q3%Dd.1638$KJ2...@newsread3.news.atl.earthlink.net, "Matt Mahoney"
<matma...@yahoo.com> wrote:

<snip>

>> Working Pre-release test tools have been sent to (and verified by)
>> independent compression test sites, including:
>>
>> <http://www.maximumcompression.com/>
>
> The benchmark has one JPEG file, so far not yet updated. The best
> compression currently is 3.5% (WinRK 2.0). (There is also a .bmp file that
> was converted from a JPEG so it has JPEG like artifacts that make it easier
> to compress losslessly.)
>
>> <http://www.compression.ca/>
>
> This benchmark hasn't been updated since 2002 and doesn't have any JPEG
> files.

They posted "verification" as a reply directly to the newsgroup as a
follow-up to my original message - I'm not sure when their web sites will be
updated.

> However I'm not aware of any other benchmarks that do.

Agreed - it is the only test/benchmarking site that includes a JPEG as a
test file for lossless compression (www.maximumcompression.com). It appears
to have done so in order to test worst-case performance of programs,
because this has previously been viewed as "impossible" - nobody had pulled
it off.

It's an understandable, but unfortunate, conflation of lossless compression
of "compressed data" with lossless compression of "random data".

Part of my job is to look into "compression claims" - no matter how crazy
they might seem (I've talked to a lot of people in the past because of this
- I won't mention names, but we all know who they are) - on the odd chance
that someone did have something.

It's my responsibility to make sure we don't "miss something" important.
Anyway, I understand the questions and the pessimism, which is why we felt
it was important to send working test tools to Jeff and Werner for
independent verification. Having an open mind is a good thing - without it,
I would have passed on this invention, which, as it turns out, works! :-)

- Darryl

> -- Matt Mahoney
>
>

Uwe Herklotz

Jan 9, 2005, 3:52:00 AM
"Darryl Lovato" <dlo...@allume.com> wrote:

> The more initial JPEG image loss, the more %additional lossless compression
> with our technology. I.e If someone is really concerned about file size,
> they can go as far as possible with JPEG without seeing visual artifacts,
> then use our technology on the result to get it even smaller w/no additional
> loss.

Very interesting results!
Does your new technology also work for JPEGs created in lossless mode?
If yes, what additional lossless compression is possible in this case?
What happens if random data is enclosed in a lossless JPEG picture and
this file is then re-compressed with your technology? Surely you will
not be able to get a result smaller than the size of the original random
data.

Regards
Uwe


Uwe Herklotz

Jan 9, 2005, 3:52:20 AM
"Matt Mahoney" <matma...@yahoo.com> wrote:

> Interesting. I guess the idea might be to uncompress the data, then
> compress it with a better model. This is probably tricker for lossy formats
> like JPEG than lossless ones like Zip.

I also thought about such an idea. But even for Zip files it seems
to be very difficult. Uncompressing the data and recompressing with
a better model is easy. But how do you ensure that this can be reversed,
i.e. that the original Zip file can be recovered? Without
knowing the parameters of the original Zip compression this is very
difficult, maybe impossible.

Regards
Uwe


Darryl Lovato

Jan 9, 2005, 5:10:01 AM
On 1/9/05 12:52 AM, in article 34c9m9F...@individual.net, "Uwe
Herklotz" <_no_s...@yahoo.com> wrote:

> "Darryl Lovato" <dlo...@allume.com> wrote:
>
>> The more initial JPEG image loss, the more %additional lossless compression
>> with our technology. I.e If someone is really concerned about file size,
>> they can go as far as possible with JPEG without seeing visual artifacts,
>> then use our technology on the result to get it even smaller w/no additional
>> loss.
>
> Very interesting results!
> Does your new technology work also for JPEGs created in lossless mode?
> If yes, what %additional lossless compression is possible in this case?

The tool I created the JPEGs with (as the previously posted test results
show) allows a quality of 100, which is the near-lossless mode - and we
still get about 20% additional compression. Again, this technology is
applicable to many compressed types other than regular lossy JPEG files.
Initially we picked JPEG as the first compressed file type to ship support
for - mostly due to the sheer number of JPEG files out there - but we will
roll out support for other file types over the coming year as we have time
to tune and test the technology with other compressed file types.

> What happens if random data is enclosed in a lossless JPEG picture and
> this file is then re-compressed with your technology? Surely you will
> not be able to get a result smaller than size of original random data.

As I've stated before, we don't claim to compress random data, only some
types of compressed data.

I haven't specifically tried to compress a JPEG of a picture that is
essentially "random" pixels to start with, but I doubt we would be able to
do as well - if anything - in that case. This is an educated guess based on
the fact that the more compressible the original picture is via JPEG, the
more we get, so the opposite should also be true. This really doesn't
matter much in practice, though, since I don't expect many users have
JPEG files that contain pixels of random noise :-)

If you have such a JPEG, send it to me and I'll tell you what happens.

- Darryl

> Regards
> Uwe
>
>

Alexis Gallet

Jan 9, 2005, 6:10:42 AM
"Uwe Herklotz" <_no_s...@yahoo.com> wrote:

> I also thought about such an idea. But even for Zip files it seems
> to be very difficult. Uncompressing the data and recompressing with
> a better model is easy. But how to ensure that this can be reversed
> i.e. it must be possible to recover the original Zip file. Without
> knowing the parameters of the original Zip compression this is very
> difficult, maybe impossible at all.

Indeed! Even with given parameters (e.g. the size of the sliding window), I
think there are (in general) several valid zip files that correspond to the
same original file.

On the other hand, the JPEG case looks much easier to me, because I don't
think this non-uniqueness issue exists in the JPEG format.

On the compression side, you would have to:
1) entropy decode the JPEG file (i.e., Huffman decode all the coeffs, then
DPCM decode the DC coeffs and RLE decode the AC coeffs) - but don't
dequantize the coeffs or perform an inverse DCT
2) re-encode the coeffs using a context-adaptive arithmetic coder, with
contexts tuned for 8x8 DCT data. Prior to that, one could further
decorrelate the DC coeffs by applying a reversible wavelet filter (e.g. the
5/3 integer filter) on them once or twice. But the major compression gain
would imo be obtained by using the correlation between the AC coeffs of
adjacent blocks (same frequency & adjacent spatial locations), which is
completely ignored in JPEG (a toy illustration of this context gain appears
after the decompression steps below).
3) don't forget to copy into the archive the Huffman tables from the header
of the original JPEG file. Although they aren't needed to losslessly recover
the image itself, they are needed to recover a JPEG file which is
bit-identical to the original JPEG.

On the decompression side:
1) entropy decode the archive
2) re-encode it with DPCM/RLE followed by Huffman, using the original JPEG
file's tables
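
As a toy illustration of the context-modelling gain in step 2 (this is not
Allume's method; the data below is a synthetic random walk standing in for
same-frequency coefficients of adjacent blocks), conditioning each symbol
on its neighbour cuts the per-symbol entropy well below the unconditional
figure:

import math
import random
from collections import Counter, defaultdict

def entropy(symbols):
    # Zero-order (memoryless) entropy in bits per symbol.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Synthetic stand-in for "same-frequency coefficients of adjacent blocks":
# a bounded random walk, so each value stays close to its left neighbour.
random.seed(0)
values = [0]
for _ in range(50_000):
    values.append(max(-8, min(8, values[-1] + random.choice((-1, 0, 0, 1)))))

pairs = list(zip(values, values[1:]))              # (left neighbour, current)
h_plain = entropy([cur for _, cur in pairs])

# Conditional entropy H(current | neighbour): entropy within each context,
# weighted by how often that context occurs.
contexts = defaultdict(list)
for left, cur in pairs:
    contexts[left].append(cur)
h_context = sum(len(v) * entropy(v) for v in contexts.values()) / len(pairs)

print(f"bits/symbol without context : {h_plain:.2f}")
print(f"bits/symbol with context    : {h_context:.2f}")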

Still, I'm not sure this scheme would yield a 20% improvement on the
high-quality JPEGs (quality factor >= 80)... I think that's the part where
Allume's results are the most impressive. And the slowness of their
algorithm (according to Jeff Gilchrist's figures) suggests that they are
doing something more sophisticated than what I'm guessing... Anyone got an
idea ?

Regards,
Alexis Gallet

Phil Carmody

Jan 9, 2005, 8:51:03 AM
Severian <seve...@chlamydia-is-not-a-flower.com> writes:
> Anyone seriously dealing with images would archive the originals
> (losslessly compressed). I'm not sure I understand the market for this
> new compression. Porn collectors? Google.com and archive.org?
>
> It just doesn't seem to be as big a deal as they make it out to be.

I think it's more a geek thing. One-upmanship.

I remember people who used LHA, or ARJ, or LZH, or whatever it was that
was better than PKZip back in the late 80s or early 90s, waltzing around
the place pretending to be oh-so-superior to the lamos that used the
utterly mediocre PKZip.

Never underestimate the gadget-affinity of geeks.

Anton Kratz

Jan 9, 2005, 9:01:43 AM

"Jeff Gilchrist" <jsgil...@hotmail.com> wrote:

> Unlike others claiming to be able to compress already compressed data [...]

Why do so many people think it is not possible to compress
already compressed data? It's easy: take alice29.txt, for example,
from the Canterbury corpus, and compress it with an RLE compressor.
Result: a slightly compressed file, because of the very few long runs of
spaces in the formatting. The file is compressed, okay? Now compress it
with Huffman or whatever. Now you see it is compressed further.

It is not possible to compress already compressed data with the *same*
compression algo you used in the first pass. Maybe that is what you mean?!
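
A toy version of that example (the RLE here is a naive escape-based coder I
made up for illustration, zlib stands in for "Huffman or whatever", and the
path is a placeholder for alice29.txt or any other text file):

import zlib

ESC = 0x00   # rarely occurs in plain text

def rle_encode(data):
    # Runs of 4 or more equal bytes (and any literal ESC byte) are stored
    # as the triple (ESC, byte, run length); everything else is copied.
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        run = 1
        while i + run < len(data) and data[i + run] == b and run < 255:
            run += 1
        if run >= 4 or b == ESC:
            out += bytes([ESC, b, run])
        else:
            out += bytes([b]) * run
        i += run
    return bytes(out)

text = open("alice29.txt", "rb").read()    # placeholder: any text file
stage1 = rle_encode(text)                  # "compressed" once (barely, on prose)
stage2 = zlib.compress(stage1, 9)          # compressed again, much smaller
print(len(text), len(stage1), len(stage2))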

Anton


Fulcrum

Jan 9, 2005, 10:22:24 AM
> > Working Pre-release test tools have been sent to (and verified by)
> > independent compression test sites, including:
> >
> > <http://www.maximumcompression.com/>
>
> The benchmark has one JPEG file, so far not yet updated. The best
> compression currently is 3.5% (WinRK 2.0). (There is also a .bmp file that
> was converted from a JPEG so it has JPEG like artifacts that make it easier
> to compress losslessly.)
>
> -- Matt Mahoney

I will update my website when the new StuffIt with this technology is
shipped. The application I tested is just an experimental testbed to
compress JPEGs.

Results for my a10.jpg file:
On my AMD Athlon 1800+, a10.jpg: 842,468 -> 643,403 bytes (76.37% of the
original size) in about 4 sec
---
Regards,
Werner Bergmans

Fulcrum

Jan 9, 2005, 10:38:44 AM
> Agreed, the only test/benchmarking site that includes a JPEG as
> a test file for lossless compression (www.maximumcompression).
> It did so (it appears) in order to test worst case performance
> of programs, because this has previously been viewed as
> "impossible" - nobody has pulled it off.
> - Darryl

No, I didn't add it only to test worst-case behaviour (I would have
tested the 1 million random digit file instead). But I agree it's a
nice side effect to see some compressors expand the file by 23% after
compressing :)

A long time before I started my site I had already noticed that some simple
compressors like szip and arj were able to (slightly) compress
jpg-files, while other, more advanced compressors like ACE had much more
difficulty doing so. That made it an interesting test case to add to
the site.


Regards,
Werner Bergmans

code_wrong

Jan 9, 2005, 11:21:56 AM

"Phil Carmody" <thefatphi...@yahoo.co.uk> wrote in message
news:878y729...@nonospaz.fatphil.org...

Eh? What are you talking about?
This is a million-dollar breakthrough.


Matt Mahoney

Jan 9, 2005, 1:01:35 PM
"Uwe Herklotz" <_no_s...@yahoo.com> wrote in message
news:34c9mhF...@individual.net...

I agree it would be difficult. Perhaps from the zip file you could tell
which program and version and options produced it, then regenerate the zip
file using the exact same algorithm. In theory you could do this by
unzipping and rezipping for each case until you find a match.

The real problem occurs when you come across an unknown format. Given a
string like

..ABC...ABC...ABC

LZ77 allows the third ABC to be coded as a pointer to the first or second
copy, or as literals, or as a mix of literals and pointers. All of these
would unzip correctly. The general solution would be to record which choice
was made, but this would negate most of the compression savings.
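
To make the point concrete, here is a tiny decoder and two different token
sequences for that string, both of which reconstruct it exactly (the token
format is a simplified LZ77 made up for illustration, not actual
zip/deflate):

def lz77_decode(tokens):
    # Tokens are ("lit", bytes) or ("copy", distance, length).
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out += tok[1]
        else:
            _, dist, length = tok
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)

target = b"..ABC...ABC...ABC"

# Parsing 1: third ABC copied from the first occurrence (distance 12).
p1 = [("lit", b".."), ("lit", b"ABC"), ("lit", b"..."),
      ("copy", 6, 3), ("lit", b"..."), ("copy", 12, 3)]
# Parsing 2: third ABC copied from the second occurrence (distance 6).
p2 = [("lit", b".."), ("lit", b"ABC"), ("lit", b"..."),
      ("copy", 6, 3), ("lit", b"..."), ("copy", 6, 3)]

assert lz77_decode(p1) == lz77_decode(p2) == target
print("both parsings decode to:", lz77_decode(p1))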

I recall a paper which proposed steganographic zip files using the choice of
coding to hide information. I doubt that such files could be compressed
further using this method.

-- Matt Mahoney


Konsta Karsisto

Jan 9, 2005, 6:59:20 PM
Matt Mahoney wrote:
> ..ABC...ABC...ABC
>
> LZ77 allows the third ABC to be coded as a pointer to the first or second
> copy, or as literals, or as a mix of literals and pointers. ...

> I recall a paper which proposed steganographic zip files using the choice of
> coding to hide information.

There was a paper in DCC in 2003 or earlier where they used
this redundancy for, I think, error correction. Or, it could
have been something else, too. ;-) Unfortunately, I couldn't
find the reference.


--
KKK

Malcolm Taylor

Jan 10, 2005, 6:59:22 AM
Hi Darryl,

Firstly, congrats! It seems that you guys have created something rather
unique. There are many of us who have thought of such things before, but
none of us has managed to make the ideas actually work... :). My own
attempts in WinRK do not come close to the 20+% you reportedly achieve.

Anyway, I do have a question for you. How well does your technology cope
with corrupt JPEG files? For example, if I were to take a JPEG and blat
a few random numbers into it with a hex editor, would your tech still
compress it well?

Malcolm

Phil Carmody

Jan 10, 2005, 8:16:33 AM
"code_wrong" <t...@tac.ouch.co.uk> writes:

Yes, you're right, no gadget has ever made a company anything like
a million dollars. Thank you so much for pointing out this indisputable
fact to me and the rest of comp.compression.

Phil Carmody

Jan 10, 2005, 8:31:38 AM

Even that's not true. Gzip can repeatedly positively compress at least 3 times.

I would guess that David Scott's BICOM (or at least the algorithm he described
here a few days back) might be able to positively compress 4 or 5 times.
(assuming all zeroes becomes all zeroes.)

I've designed on paper an algorithm which I believe should positively
compress its own output many dozens of times if seeded with something like
plain text or HTML. I don't expect its steady state to be any better than
Huffman, though, so it's only been done as a flight of fancy to
_deliberately_ counter the inaccurate statement that you and others
propagate.

All that can be said is:
It's not possible to compress data without redundancies.
Compression programs are designed to remove, to various extents, redundancies.

Design an algorithm to remove the smallest possible amount of redundancy,
and you can be iterating a very large number of times.
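
An easy way to check this sort of claim for a given input is simply to
iterate a real compressor and watch the sizes (zlib, the deflate algorithm
inside gzip, stands in for gzip here; how many rounds still shrink the data
depends entirely on the input):

import zlib

data = b"\x00" * 10_000_000     # a very redundant starting point
for round_no in range(1, 6):
    data = zlib.compress(data, 9)
    print(f"after round {round_no}: {len(data):>10,} bytes")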

Darryl Lovato

Jan 10, 2005, 10:32:52 AM
On 1/10/05 3:59 AM, in article 41e26d5e$1...@127.0.0.1, "Malcolm Taylor"
<m...@me.com> wrote:

> Hi Darryl,
>
> Firstly, congrats! It seems that you guys have created something rather
> unique. There are many of us who have thought of such things before, but
> none of us has managed to make the ideas actually work... :). My own
> attempts in WinRK do not come close to the 20+% you reportedly achieve.

Thanks. These technologies are definitely not "simple" to develop, even
after you know they are possible.

> Anyway, I do have a question for you. How well does your technology cope
> with corrupt JPEG files. For example, if I were to take a JPEG and blat
> a few random numbers into it with a hex editor, would your tech still
> compress it well?

I haven't tried it.

- darryl

> Malcolm

Darryl Lovato

Jan 10, 2005, 11:02:10 AM
On 1/10/05 5:16 AM, in article 87hdlp5...@nonospaz.fatphil.org, "Phil
Carmody" <thefatphi...@yahoo.co.uk> wrote:

I personally don't mind if what we invented and filed patents for is called
a "gadget". I'm sure there were people who called the first light bulb a
gadget, the first jet engine a gadget, the first telephone a gadget, the
first personal computer a gadget, etc.

We have something that compresses possibly "billions" of files that were
previously "uncompressible" (1-3% isn't significant compression).

The files in question are generally larger than the average file; they are
commonly stored on hard drives and flash cards, backed up, sent via
e-mail, and downloaded/viewed from the web. Now they can be compressed
20-40% on average, and in some cases up to 90%.

This is a very useful, and valuable, "gadget". :-)

And yes, a driving force is "one-upmanship" - it's called commercial
competition. There is nothing wrong with trying to make your product better
than the competition. It benefits users, and has happened from the
beginning of time - since commerce was first invented.

- Darryl

> Phil

Anton Kratz

Jan 10, 2005, 11:43:28 AM

Hi Phil,

I thought about what you wrote and you are right.

I was wrong indeed when I wrote that you can't compress
already compressed data with the same algo again, because
indeed you can.

But it is still wrong when someone writes that you can't
compress already compressed data (whether with the same algo or
another one), because in fact you can.


Anton


"Phil Carmody" <thefatphi...@yahoo.co.uk> schrieb im Newsbeitrag news:87d5wd5...@nonospaz.fatphil.org...

Phil Carmody

Jan 10, 2005, 8:12:23 PM
Darryl Lovato <dlo...@allume.com> writes:
> On 1/10/05 5:16 AM, in article 87hdlp5...@nonospaz.fatphil.org, "Phil
> Carmody" <thefatphi...@yahoo.co.uk> wrote:

My message id should be in your references header; there's no need to have
it in the body text too.

> > "code_wrong" <t...@tac.ouch.co.uk> writes:
> >> "Phil Carmody" <thefatphi...@yahoo.co.uk> wrote in message
> >> news:878y729...@nonospaz.fatphil.org...

> >>> Never underestimate the gadget-affinity of geeks.
> >>
> >> eh? What are you talking about?
> >> This is a million dollar breakthrough
> >
> > Yes, you're right, no gadget has ever made a company anything like
> > a million dollars. Thank you so much for pointing out this undisputable
> > fact to me and the rest of comp.compression.

I forgot possibly the biggest gadget fad of them all in recent years -
cameras on mobile phones.

> I personally don't mind if what we invented and filed patents for, is called
> a "gadget". I'm sure there were people that called the first light bulb a
> gadget, the first jet engine a gadget, the first telephone a gadget, the
> first personal computer, etc.

Indeed.



> This is a very useful, and valuable, "gadget". :-)
>
> And yes, a driving force is "one-upmanship" it's called commercial
> competition. There is nothing wrong with trying to make your product better
> than the competition. It benefits users, and has happened from the
> beginning of time - since "Commerce" was first invented.

Absolutely. Geek pockets can be fairly deep, and there are certainly large
numbers of them. However, they can be exceptionally fickle too, so even
the best gadget can fail in the market. That's what VC was invented for.

news...@comcast.net

Jan 12, 2005, 10:25:21 AM
Phil Carmody <thefatphi...@yahoo.co.uk> wrote:
> Darryl Lovato <dlo...@allume.com> writes:
>> On 1/10/05 5:16 AM, in article 87hdlp5...@nonospaz.fatphil.org, "Phil
>> Carmody" <thefatphi...@yahoo.co.uk> wrote:
>
> My message id should be in your references header, there's no need to have it
> in the body text too.
>
>> > "code_wrong" <t...@tac.ouch.co.uk> writes:
>> >> "Phil Carmody" <thefatphi...@yahoo.co.uk> wrote in message
>> >> news:878y729...@nonospaz.fatphil.org...
>> >>> Never underestimate the gadget-affinity of geeks.
>> >>
>> >> eh? What are you talking about?
>> >> This is a million dollar breakthrough
>> >
>> > Yes, you're right, no gadget has ever made a company anything like
>> > a million dollars. Thank you so much for pointing out this undisputable
>> > fact to me and the rest of comp.compression.
>
> I forgot possibly the biggest gadget fad of them all in recent years -
> cameras on mobile phones.

Unfortunately, taking over 5 seconds on a desktop processor puts it
out of the reasonable range for embedded processors like in phones.

If they can get the time down by an order of magnitude, then things
might start getting more interesting. Under half a second on a
desktop system would mean that you could browse compressed images on a
desktop system fairly painlessly (imagine -- burning 30% more images
on a CD -- great for people who take a lot of digital pictures). But
I'm not going to deal with a lag in image rendering just to fit a
little more on a CD. If you could get it under a second or two on an
embedded processor, then maybe it could be used to increase capacity
directly in things like cameras and phones -- but processor-intensive
algorithms take too long and burn too much power/battery on embedded
devices.

Anyway, I think this is a great development. It's just that they have
too much hype in their original press release (impossible to compress
already compressed data? Pshaw), and they need to improve efficiency
a bit....

--

That's News To Me!
news...@comcast.net

Aleks Jakulin

Jan 12, 2005, 11:42:39 AM
Darryl Lovato:
> It works for any jpeg - baseline, grayscale, progressive. I would expect
> that we'd get similar ratio's on any valid JPG file no matter what it was
> created with, but haven't specifically tested output from the IJG lib.

Let's rehash some history. JPEG does have an arithmetic coding mode,
which is better than the Huffman+RLE combination. However, arithmetic
coding is not used because it was encumbered by patents. Now you're
proposing another solution which is encumbered by patents.

What kind of improvements are you getting as compared with the
arithmetic coding mode of JPEG?

--
mag. Aleks Jakulin
http://www.ailab.si/aleks/
Artificial Intelligence Laboratory,
Faculty of Computer and Information Science,
University of Ljubljana, Slovenia.


John Reiser

Jan 12, 2005, 1:05:10 PM
Matt Mahoney wrote:
> [snip] Given a string like

>
> ..ABC...ABC...ABC
>
> LZ77 allows the third ABC to be coded as a pointer to the first or second
> copy, or as literals, or as a mix of literals and pointers. All of these
> would unzip correctly. The general solution would be to record which choice
> was made, but this would negate most of the compression savings.
>
> I recall a paper which proposed steganographic zip files using the choice of
> coding to hide information. [snip]

Please provide more info about that paper.

For a fixed encoder, even if you require that the coded output for the whole
string be the shortest possible, then there still are choices. The choices
form a lattice, the paths through the lattice can be enumerated, and choosing
a specific path conveys log2(total_paths) additional bits of information.
In practice the amount can be 0.1% to a few percent. But it's hardly hidden,
because there are a few obvious canonical paths: always choose the smallest
offset, favor a literal over a match of the same cost (or vice versa), etc.
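
A sketch of that enumeration on the toy string from earlier in the thread,
under a deliberately simplified model (literals, or matches of length >= 3
at any earlier offset, no window limit and no shortest-output constraint):

import math

def count_parsings(data):
    # ways[i] = number of distinct token sequences (under this toy model)
    # that encode the suffix of `data` starting at position i.
    n = len(data)
    ways = [0] * (n + 1)
    ways[n] = 1
    for i in range(n - 1, -1, -1):
        total = ways[i + 1]                      # option 1: emit a literal
        for dist in range(1, i + 1):             # option 2: a back-reference
            length = 0
            while (i + length < n and
                   data[i + length] == data[i + length - dist]):
                length += 1
                if length >= 3:                  # each (dist, length) is a choice
                    total += ways[i + length]
        ways[i] = total
    return ways[0]

s = b"..ABC...ABC...ABC"
paths = count_parsings(s)
print(f"{paths} parsings ~ {math.log2(paths):.1f} bits of side channel")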

--

matma...@yahoo.com

Jan 12, 2005, 4:19:26 PM

John Reiser wrote:
> Matt Mahoney wrote:
> > [snip] Given a string like
> >
> > ..ABC...ABC...ABC
> >
> > LZ77 allows the third ABC to be coded as a pointer to the first or second
> > copy, or as literals, or as a mix of literals and pointers. All of these
> > would unzip correctly. The general solution would be to record which choice
> > was made, but this would negate most of the compression savings.
> >
> > I recall a paper which proposed steganographic zip files using the choice of
> > coding to hide information. [snip]
>
> Please provide more info about that paper.

Here is a program that does it. There is some cost in compression.
http://www.mirrors.wiretapped.net/security/steganography/gzip-steg/gzip-steg-README.txt

Guido Vollbeding

Jan 13, 2005, 3:40:07 AM
Aleks Jakulin wrote:
>
> Let's rehash some history. JPEG does have the arithmetic coding mode,
> which is better than Huffman+RLE combination. However, arithmetic
> coding is not used because it was encumbered by patents. Now you're
> proposing another solution which is encumbered by patents.

Well, but at least the JPEG arithmetic coding algorithm is published
in detail in the JPEG standard and JPEG book, so anybody who wishes
can reproduce it. And I have done an open and portable implementation
for use with the IJG software for evaluation purposes.

But this "offer" we see here is a pure commercial offer, and no
further information is given about how the algorithm works.
It is a shame that this newsgroup, which was an open-minded
years ago, is now more and more encumbered by commercial interests,
and that many people don't mind and only few people mind.
We see similar behaviour from the commercial JPEG-2000 proponents
here (and it's a fact for me that JPEG-2000 is technically inferior
and thus obsolete, contrary to the false propaganda of its proponents).

I must say that I absolutely don't care about "black-box offers"
which don't provide information about how the algorithm works,
and I think that this is the WRONG newsgroup for such offers
(he should go to some advertising group).

> What kind of improvements are you getting as compared with the
> arithmetic coding mode of JPEG?

At least the JPEG arithmetic coding mode can be used freely in a
few years when the patents expire.

Regards
Guido

Thomas Richter

Jan 13, 2005, 4:27:40 AM
Once again, Guido,

> But this "offer" we see here is a pure commercial offer, and no
> further information is given about how the algorithm works.
> It is a shame that this newsgroup, which was an open-minded
> years ago, is now more and more encumbered by commercial interests,
> and that many people don't mind and only few people mind.

I hope you include yourself here in those that "don't mind".

> We see similar behaviour from the commercial JPEG-2000 proponents
> here (and it's a fact for me that JPEG-2000 is technically inferior
> and thus obsolete, contrary to the false propaganda of its proponents).

We observe similar behaviour from Guido here, who has still failed to
back up his claim by providing published measurements. (-; BTW, I'm
working for the TU Berlin. I don't need to sell
anything, I'm getting paid either way.

> I must say that I absolutely don't care about "black-box offers"
> which don't provide information about how the algorithm works,
> and I think that this is the WRONG newsgroup for such offers
> (he should go to some advertising group).

I absolutely don't care about people that don't follow scientific
standards. This includes the proposed JPEG compressor unless I'm able
to verify its claims - though they don't look too far off to be correct
- and it also includes your claims about "propaganda" that you're so good
at generating yourself. I agree that possibly similar improvements
(at least, in the same range) can be obtained from classical JPEG,
but I'm not the one who bans their compression codec in the first place
for that. All I'm saying is that I'm not especially excited about it
(thus, no post from my side up to now).

Propaganda: the kind of news that is spread without giving data
to back up its claim.

Interesting that you, of all people, are using this word.

BTW, my data is (still) here:

http://www.math.tu-berlin.de/~thor/imco/Downloads/jpg/

PSNR measurements are here:

http://www.math.tu-berlin.de/~thor/imco/Downloads/jpg/psnr.txt

These are updated measurements with an "arithmetic encoding enabled"
JPEG, waiting there to be verified by other parties (like you) for
quite a while. Source and compressed images are also available at this
URL. Unlike your claim, we have PSNR(j2k) >= PSNR(jpg + arithcoder).
Images with various compression methods and options are found on this
site as well because PSNR is not a very good measurement. The test
image is a pretty "tough" one (lots of edges), I don't need to cheat.
It is pretty large - this is what I always claimed to be necessary.
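
For reference, the PSNR figure being argued about is just
10*log10(255^2 / MSE) between the original and the decoded image; a
minimal way to compute it (NumPy + PIL, file names are placeholders, and
both images must have identical dimensions):

import numpy as np
from PIL import Image

def psnr(original_path, decoded_path):
    # 10*log10(MAX^2 / MSE) for 8-bit images of identical dimensions.
    a = np.asarray(Image.open(original_path), dtype=np.float64)
    b = np.asarray(Image.open(decoded_path), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

print(f"{psnr('original.png', 'decoded.png'):.2f} dB")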

Now where are your measurements, Guido? Or was this just propaganda?
I'm curious. Still haven't done your homework? Oh well, it's so much
easier to insult people...

Greetings,
Thomas

P.S.: And note that I'm not the one who claims that JPEG2000 is
always the better choice. It is not. But the word "inferior" is also
very wrong. It has other applications. (Large images, low-contrast
images, all posted here.)

Guido Vollbeding

Jan 13, 2005, 5:44:14 AM
Thomas Richter wrote:
>
> > But this "offer" we see here is a pure commercial offer, and no
> > further information is given about how the algorithm works.
> > It is a shame that this newsgroup, which was an open-minded
> > years ago, is now more and more encumbered by commercial interests,
> > and that many people don't mind and only few people mind.
>
> I hope you include yourself here in those that "don't mind".

I don't like the commercial direction in this newsgroup,
and I myself have no commercial interests or products here.

> We observe similar behavour of Guido here who has still failed to
> backup his claim by providing published measurements of his
> claim. (-; BTW, I'm working for the TU Berlin. I don't need to sell
> anything, I'm getting paid either way.

I have provided working code, and anybody who wishes can perform
their own evaluations. I'm not here to convince other people -
I have done my own tests and drawn my own conclusions.
You have at least done and mentioned a commercial implementation
of your own here.

> BTW, my data is (still) here:
>
> http://www.math.tu-berlin.de/~thor/imco/Downloads/jpg/
>
> PSNR measurements are here:
>
> http://www.math.tu-berlin.de/~thor/imco/Downloads/jpg/psnr.txt
>
> These are updated measurements with a "arithmetic encoding enabled"
> JPEG, waiting there to be verified by other parties (like you) for
> quite a while. Source and compressed images are also available at this
> URL. Unlike your claim, we have PSNR(j2k) >= PSNR(jpg + arithcoder).
> Images with various compression methods and options are found on this
> side as well because PSNR is not a very good measurement. The test
> image is a pretty "tough" one (lots of edges), I don't need to cheat.
> It is pretty large - this is what I always claimed to be necessary.
>
> Now where are your measurements, Guido? Or was this just propaganda?
> I'm curious. Still haven't done your homework? Oh well, it's so much
> easier to insult people...

The sample image you used apparently originates from a digital camera
with a mosaic sensor.
I have already pointed out to you earlier that you CAN'T use images
from mosaic-sensor cameras for such an evaluation!
Mosaic cameras produce *artificial*, unnatural images (they miss two thirds
of the image information, which must be artificially calculated afterwards),
and thus can't be used for evaluation with compression technologies
which were developed for *natural* images!
The blurry image results of mosaic cameras play to the blurry properties
of wavelet compression. They are both artificial.

Furthermore, you check only the low-quality domain (level 75 and lower).
The default quantization parameters in standard JPEG were NOT
optimized for the low-quality domain, therefore your results are
one-sided and, again, in favor of your proposed method, which
was deliberately optimized for low quality.
(JPEG quality level 75 or lower is *much* less than what quality-oriented
people in the digital photography domain normally use in practice!)

You see from my criticisms that all kinds of specific tests are
questionable and only good for propaganda purposes.
That's why I urge people to perform their OWN tests with their
OWN material in their OWN configurations with their OWN criteria
and draw their OWN conclusions. Anything else is propaganda.

Regards
Guido

Thomas Richter

unread,
Jan 13, 2005, 8:46:28 AM1/13/05
to
Now, Guido, once again,

> I have provided working code, and anybody who wishes can perform
> their own evaluations.

The data on the mentioned page is generated with that code. Now what?

> I'm not here to convince other people -
> I have done my own tests and drawn my own conclusions.
> You have at least done and mentioned a commercial implementation
> of your own here.

Correct. A tiny nice side income; I'm neither selling the code, nor
does my income depend on the sales or whatever. I've a full position
at the TU that pays my bills if that is what you mean. I'm not getting
a cent more if more licences are sold.

> The sample image you used apparently originates from a digital camera
> with a mosaic sensor.
> I have already pointed out to you earlier that you CAN'T use images
> from mosaic-sensor cameras for such an evaluation!

Well, then please provide some images of your own. Minimum size
1280x1024, BW or color, whatever. Or measure yourself if you don't
trust me. "lena" is a bit too small for my purpose. (-;

On the other hand, if we assume for a moment that these cameras are
used, and if we assume further that people use them for making images,
and further want to compress those images, and further that these images
contain some redundancy due to the process by which they were created, how
come that JPEG2000 is able to detect and make use of this redundancy,
but JPEG isn't? How come JPEG misses this target even though this
hardware is so popular? (-;

You see, I can also turn this argument around. An apparent image redundancy
in a popular technology is not made use of. (-; So, even then there
is apparently an interesting market? (-;

> Mosaic cameras produce *artificial*, unnatural images (they are missing
> two thirds of the image information, which must be artificially calculated
> afterwards), and thus can't be used for evaluating compression technologies
> which were developed for *natural* images!

Send me an image that you consider natural. Or even better, test yourself,
no problem. The mentioned image is actually pretty tough to compress which
is why I picked it - one of my "stress tests".

> The blurry output of mosaic cameras favors the blurry properties
> of wavelet compression. They are both artificial.

Most blur is rather a result of the limited quality of the optics you
find nowadays in the "claimed to be 4MPixel sector", but then...

> Furthermore, you check only the low-quality domain (level 75 and lower).

This is one of the regions where JPEG2000 becomes interesting. I've
always said so: i) large images, ii) high compression ratios, iii) low contrast.

The image is not specifically tuned for anything; I didn't pick it to
favour either side, but rather to have something with straight high-
contrast edges, which are usually a problem for wavelets.

Anyhow, provide your own if you like. Just make sure they're large
enough. An 800x600 image won't compress very well - I'm not denying it.
The larger the image gets, the more JPEG2000 outperforms the
traditional technology. And the reason is of course that correlation
lengths in larger images grow beyond the maximum decorrelation length
of the 8x8 blocks of JPEG.

> The default quantization parameters in standard JPEG were NOT
> optimized for the low-quality domain, therefore your results are
> one-sided and, again, in favor of your proposed method, which
> was deliberately optimized for low quality.

"Optimized" is not quite the right word, it just works well in this
domain, though not on purpose. Anyhow, feel free to optimize your
quantizer for low compression. If I'm allowed to make a prediction:
You're going to face the same problems: The DCT won't be able to catch
all correlations.

> (JPEG quality level 75 or lower is *much* less than what quality-oriented
> people in the digital photography domain normally use in practice!)

> You see from my criticisms that all kinds of specific tests are
> questionable and only good for propaganda purposes.

Guido, post your results. DO IT. Talk is cheap. I never made a secret
of JPEG2000's abilities, its natural target domain, and its weakness
in small images. Now that I'm compressing there, and you don't like
the results, you're declaring them as irrelevant:

"JPEG is better except in cases where it isn't and which I like to ignore."

Now *THAT* is a result. You're not a cent better than the marketing
folks. Unlike you, I'm *telling* people which technology is good for
what.

And then, as a second point, why don't you just make some tests of
your own and make the results public?

> That's why I urge people to perform their OWN tests with their
> OWN material in their OWN configurations with their OWN criteria
> and draw their OWN conclusions. Anything else is propaganda.

Now, very fine. Then *STOP TALKING* and *START ACTING*. I want your
material online, reproducible. You still haven't done your homework;
I'm waiting. Stop claiming things you have no data to back up.
Should that ever happen, we'll see exactly what I'm claiming all the
time: "In a certain compression domain, JPEG2000 outperforms JPEG."

Whether this domain is important or not is then a matter for the
customer; there's no discussion about it. Just as a hint: it seems
it is important for medical images. Until then, stop your talk about
"inferior" because it could easily strike back at you. Currently, all
your arguing is quite inferior. Really.

So long,
Thomas

Phil Carmody

unread,
Jan 13, 2005, 9:12:39 AM1/13/05
to
Guido Vollbeding <gu...@jpegclub.org> writes:
> But this "offer" we see here is a pure commercial offer, and no
> further information is given about how the algorithm works.

I assume that the patent-pending status implies that the information
describing the techniques (though possibly in as confusing a manner as
possible) has been made publicly available.

> It is a shame that this newsgroup, which was open-minded
> years ago, is now more and more encumbered by commercial interests,

It is impossible for a commercial entity, merely by telling a group of
enthusiasts about a new product, to encumber those enthusiasts.

Yes, we've not been given something that we can immediately use,
but we've been given something that we can immediately think
about, debate about the inner workings of, and even treat as a
challenge to try to reproduce the levels of compression using
our own possibly new ideas.

If new ideas, non-patent-encumbered, are so important - then
start coming up with some!

Phil
--
The answer to life's mystery is simple and direct:
Sex and death. -- Ian 'Lemmy' Kilminster.

Darryl Lovato

unread,
Jan 13, 2005, 9:39:00 AM1/13/05
to
On 1/13/05 12:40 AM, in article 41E633E7...@jpegclub.org, "Guido
Vollbeding" <gu...@jpegclub.org> wrote:

> Aleks Jakulin wrote:
>>
>> Let's rehash some history. JPEG does have the arithmetic coding mode,
>> which is better than Huffman+RLE combination. However, arithmetic
>> coding is not used because it was encumbered by patents. Now you're
>> proposing another solution which is encumbered by patents.
>
> Well, but at least the JPEG arithmetic coding algorithm is published
> in detail in the JPEG standard and JPEG book, so anybody who wishes
> can reproduce it. And I have done an open and portable implementation
> for use with the IJG software for evaluation purposes.
>
> But this "offer" we see here is a pure commercial offer, and no
> further information is given about how the algorithm works.

I'm assuming you are talking about my "original post". Yes, we are a
commercial company, and we DO intend to make money off what we came up with.

I'm not hiding the fact that we are a commercial interest. We ARE trying to
one-up our competition (by providing significant benefits to users, that our
competition does not), but it really isn't any different than professor A
trying to outdo professor B in the academic realm.

Your argument basically says "unless you are giving away your ideas, which
may have cost significant time and money to develop - don't post here".

What we did is significant. Not just the JPEG case we have thus far
announced and done the leg-work to have independently verified, but also the
demonstration that compression of compressed data != compression of random
data, two things that have long been conflated on this newsgroup. And that our
"proof to the contrary" - that it IS possible - is a significant contribution to
this newsgroup.

The patents give the details, and they will be readable by everyone when the
USPTO posts them to their site. Until then, I was advised by our patent
lawyer to not give out details.

> It is a shame that this newsgroup, which was open-minded
> years ago, is now more and more encumbered by commercial interests,
> and that many people don't mind and only a few do.

So... You are saying that this newsgroup should be open-minded, and at the
same time saying that only non-commercial discoveries should be posted?
Isn't that a contradiction?

As far as I can tell, the only reason someone didn't come up with what we
did before we did is that many people in this field were NOT open-minded
- "you can't compress already compressed data" - and didn't bother to try to
solve the problem.

> We see similar behaviour from the commercial JPEG-2000 proponents
> here (and it's a fact for me that JPEG-2000 is technically inferior
> and thus obsolete, contrary to the false propaganda of its proponents).

Whatever.

> I must say that I absolutely don't care about "black-box offers"
> which don't provide information about how the algorithm works,
> and I think that this is the WRONG newsgroup for such offers
> (he should go to some advertising group).

This is clearly the right group for an announcement, and subsequent
submission of "proof" from independent and "trusted" individuals, for
something that has thus far not been done (as well as "deemed" impossible)
in the data compression space. That is what we did.

>> What kind of improvements are you getting as compared with the
>> arithmetic coding mode of JPEG?
>
> At least the JPEG arithmetic coding mode can be used freely in a
> few years when the patents expire.

As will ours, when our patents expire :-) Until then, we will be reasonable
in our licensing, but we must at least recoup our time, money, and effort
that was expended in making this possible. Commercial companies have to
make money to pay the people (like me) that work on this stuff.

It really isn't much different than universities - the prestige a university
gets by professors publishing "research" attracts students (which means
income), which pays the professors salaries. Plus, a lot of the research
universities do IS patented, and licensed to commercial interests.

- Darryl

> Regards
> Guido

Guido Vollbeding

unread,
Jan 13, 2005, 10:21:37 AM1/13/05
to
Thomas Richter wrote:
>
> > I have provided working code, and anybody who wishes can perform
> > their own evaluations.
>
> The data on the mentioned page is generated with that code. Now what?

Well, I suspected that (because I'm not aware of other available
implementations), and that is good :-). So I had no problem downloading
and opening one of your arithmetic-coded sample jpgs
in my Jpegcrop program...

> Well, then please provide some images of your own. Minimum size
> 1280x1024, BW or color, whatever. Or measure yourself if you don't
> trust me. "lena" is a bit too small for my purpose. (-;

I have lots of high-quality JPEG images available (from scans or
full-color digital cameras), but of course for this purpose you
should take an *uncompressed* source image. Either you have a
high-definition scanner and images yourself, or you can find some
raw or uncompressed images from a full-color-sensor camera.
I think at the Sigma-Photo sites you will find some sample tiffs
or raws (x3f) from the Foveon-equipped SD-9 or SD-10 cameras.
These would be good source material for a natural digital image
compression test (to use the x3f raw files you must use the
Photo-Pro software to convert them to tiff - this software is also
available there).

> On the other hand, if we assume for a moment that these cameras are
> used, and if we assume further that people use them for making images,
> and further want to compress those images, and further that these images
> contain some redundancy due to the process by which they were created, how
> come that JPEG2000 is able to detect and make use of this redundancy,
> but JPEG isn't? How come JPEG misses this target even though this
> hardware is so popular? (-;
>
> You see, I can also turn this argument around. An apparent image redundancy
> in a popular technology is not made use of. (-; So, even then there
> is apparently an interesting market? (-;

Thomas, here we apparently have quite different points of view.
My priority is reason and quality; I follow my own quality standards
and NOT those of the masses. Don't you know what all the sages
in history have always preached? "The masses *always* follow the mistake,
so don't follow the masses if you are hunting for the truth!"
I cannot agree enough with this statement from my own experience.
Now set market = masses and "popular technology" = "obsolete technology"
and you are done...

> Send me an image that you consider natural. Or even better, test yourself,
> no problem. The mentioned image is actually pretty tough to compress which
> is why I picked it - one of my "stress tests".

See above. I have tested some pics with available j2k implementations,
and the results weren't convincing. I know your opinion that the openly
available j2k implementations are inferior compared with the commercial
ones, but that doesn't help (me) either. I currently have no incentive
to perform my own j2k tests - perhaps someday when I duly present my ideas
for better use of DCT JPEG compression to outperform J2K eventually also
in the low definition domain ;-)...

> Most blur is rather a result of the limited quality of the optics you
> find nowadays in the "claimed to be 4MPixel sector", but then...

*Any* claim of *any* "megapixels" in the current digital camera market
is a hoax! The mosaic sensors produce only *fake* megapixel images,
because each of their "pixels" captures only one of the three color
components, not the full color! The missing components (two thirds of
the data) are then artificially created! Talk about "natural"!

> The image is not specifically tuned for anything; I didn't pick it to
> favour either side, but rather to have something with straight high-
> contrast edges, which are usually a problem for wavelets.

Again, you should definitely take more care in picking your test image!
You fall for hoaxes easily nowadays.

> Anyhow, provide your own if you like. Just make sure they're large
> enough. An 800x600 image won't compress very well - I'm not denying it.
> The larger the image gets, the more JPEG2000 outperforms the
> traditional technology. And the reason is of course that correlation
> lengths in larger images grow beyond the maximum decorrelation length
> of the 8x8 blocks of JPEG.

Well, later, perhaps, if I can show you that you can use larger
decorrelation lengths with JPEG (I can use up to 16x16 in my
implementation) and thus the "traditional" technology perhaps
outperforms the other...

> "Optimized" is not quite the right word, it just works well in this
> domain, though not on purpose. Anyhow, feel free to optimize your
> quantizer for low compression. If I'm allowed to make a prediction:
> You're going to face the same problems: The DCT won't be able to catch
> all correlations.

Again, I have up to 16x16 DCT easily available now, and that should
help a lot and extend the traditional JPEG efficiency far into the
low definition area. (See also the "NIMA METHOD 4" approach.)

> Guido, post your results. DO IT. Talk is cheap. I never made a secret
> of JPEG2000's abilities, its natural target domain, and its weakness
> in small images. Now that I'm compressing there, and you don't like
> the results, you're declaring them as irrelevant:
>
> "JPEG is better except in cases where it isn't and which I like to ignore."
>
> Now *THAT* is a result. You're not a cent better than the marketing
> folks. Unlike you, I'm *telling* people which technology is good for
> what.
>
> And then, as a second point, why don't you just make some tests of
> your own and make the results public?

Thomas, unlike perhaps you, I have other things to do than publishing
my results. I have given some hints, I have some results and some
ideas for further investigation. But I don't have the time to do
all this and prepare publication *now*.

> Now, very fine. Then *STOP TALKING* and *START ACTING*. I want your
> material online, reproducible. You still haven't done your homework;
> I'm waiting. Stop claiming things you have no data to back up.
> Should that ever happen, we'll see exactly what I'm claiming all the
> time: "In a certain compression domain, JPEG2000 outperforms JPEG."
>
> Whether this domain is important or not is then a matter for the
> customer; there's no discussion about it. Just as a hint: it seems
> it is important for medical images. Until then, stop your talk about
> "inferior" because it could easily strike back at you. Currently, all
> your arguing is quite inferior. Really.

Thomas, please understand, I simply can't express all my arguments
and ideas now; it's just too much. I have made some important
discoveries about JPEG and DCT in particular, and I will publish
them, but not now, sorry - be patient...
I have just prepared an article which explains in detail my earlier
introduction of the lossless transformations in the JPEG DCT domain
(jpegtran rotation etc.). This article will presumably appear at the end
of January or the start of February in a popular German cOMPUtER
magazine (I will announce it on my site then and put it online
after the magazine is out).
If this goes well and I can convince the responsible editor, I will
try to start a series of further publications in this magazine over the
year about my more recent JPEG discoveries...
Note that this is not an academic journal but a more popular one, so
I will present the results in a very understandable form for a
large audience.

Regards
Guido

news...@comcast.net

unread,
Jan 13, 2005, 10:54:11 AM1/13/05
to
Darryl Lovato <dlo...@allume.com> wrote:

> What we did is significant. Not just the JPEG case we have thus far
> announced and done the leg-work to have independently verified, but also the
> demonstration that compression of compressed data != compression of random
> data, two things that have long been conflated on this newsgroup. And that our
> "proof to the contrary" - that it IS possible - is a significant contribution to
> this newsgroup.

This is (at least) the second time you've said this, and it's
blatantly not true. No one who knows compression would get even
slightly confused over the differences between "compression of
compressed data" and "compression of random data." It's in fact
nonsensical to even talk about this like it's a big deal, and is one
of the reasons your original announcement (touting this as if it were a
big deal) rubbed some people the wrong way and made you look like
something of a crank (although I don't believe you are).

Guido Vollbeding

unread,
Jan 13, 2005, 10:54:35 AM1/13/05
to
Darryl Lovato wrote:
>
> I'm not hiding the fact that we are a commercial interest. We ARE trying to
> one-up our competition (by providing significant benefits to users, that our
> competition does not), but it really isn't any different than professor A
> trying to outdo professor B in the academic realm.

And it isn't any better...

> What we did is significant. Not just the JPEG case we have thus far
> announced and done the leg-work to have independently verified, but also the
> demonstration that compression of compressed data != compression of random
> data, two things that have long been conflated on this newsgroup. And that our
> "proof to the contrary" - that it IS possible - is a significant contribution to
> this newsgroup.

I don't know what you did, because you don't give any information.
I don't see any significant contribution unless you provide some
information about what you are doing.

> The patents give the details, and they will be readable by everyone when the
> USPTO posts them to their site. Until then, I was advised by our patent
> lawyer to not give out details.

But then you should have waited until then to post in this newsgroup.
For now you are only advertising, and that is also called spam.

> So... You are saying that this newsgroup should be open-minded, and at the
> same time saying that only non-commercial discoveries should be posted?
> Isn't that a contradiction?

But you did NOT post any commercial or whatever discovery at all.
I can't see what your "discovery" is.

> As far as I can tell, the only reason someone didn't come up with what we
> did before we did is that many people in this field were NOT open-minded
> - "you can't compress already compressed data" - and didn't bother to try to
> solve the problem.

Until now I don't see that you solved any problem.

> It really isn't much different than universities - the prestige a university
> gets by professors publishing "research" attracts students (which means
> income), which pays the professors salaries. Plus, a lot of the research
> universities do IS patented, and licensed to commercial interests.

Yes, and that's why academic "research" nowadays is just as misguided
as commercial research.

Regards
Guido

Thomas Richter

unread,
Jan 13, 2005, 11:21:20 AM1/13/05
to
Hi Guido, again.

> I have lots of high-quality JPEG images available (from scans or
> full-color digital cameras), but of course for this purpose you
> should take an *uncompressed* source image. Either you have a
> high-definition scanner and images yourself, or you can find some
> raw or uncompressed images from a full-color-sensor camera.

Post URLs. Send me an image. Talk is cheap.

> Thomas, here we apparently have quite different points of view.
> My priority is reason and quality,

Quality is relative. If you want optimal quality, compress losslessly.
Quality of a compressor: average compression factor for a given
acceptance level on a given set of input images.

> See above. I have tested some pics with available j2k implementations,
> and the results weren't convincing.

Results? Where are the images? How have they been compressed? What is
the PSNR?

> I know your opinion that the openly
> available j2k implementations are inferior compared with the commercial
> ones, but that doesn't help (me) either. I currently have no incentive
> to perform my own j2k tests - perhaps someday when I duly present my ideas
> for better use of DCT JPEG compression to outperform J2K eventually also
> in the low definition domain ;-)...

Then *DO* it or keep quiet.

> Again, you should definitely take more care in picking your test image!

If you don't accept my choice, send me yours. DO. Don't talk. DO IT.

> Well, later, perhaps, if I can show you that you can use larger
> decorrelation lengths with JPEG (I can use up to 16x16 in my
> implementation) and thus the "traditional" technology perhaps
> outperforms the other...

Currently, we're talking about existing standardized codecs. The
world keeps turning; new methods are being thought about. The future
lies elsewhere, neither in the DCT nor in the DWT.

> Again, I have up to 16x16 DCT easily available now, and that should
> help a lot and extend the traditional JPEG efficiency far into the
> low definition area. (See also the "NIMA METHOD 4" approach.)

Is this an available codec? ISO certified? You're talking about
another category. Anyhow, the same problem applies again (to any
block-based codec!), just at a different scale. Blocking images
before transforming them is not exactly a bright idea.

> Thomas, unlike perhaps you, I have other things to do than publishing
> my results.

But apparently, you do have enough time to insult people, right?

PUBLISH or PERISH. Those are your two choices. Up to now, you've spread
a lot of hot air. The only way to prevent that is to show some results
and allow others to reproduce them. That's how the thing works.

Otherwise, we could keep exchanging arguments about whether the world is
flat or not... I'll tell you a secret: without *looking* at it (i.e. making
experiments) you will never find out.

> I have given some hints, I have some results and some
> ideas for further investigation. But I don't have the time to do
> all this and prepare publication *now*.

I don't care about ideas right now. I care about published,
reproducible data that backs up your claim of
"inferiority". That. Nothing less, nothing more.

> Thomas, please understand, I simply can't express all my arguments
> and ideas now; it's just too much. I have made some important
> discoveries about JPEG and DCT in particular, and I will publish
> them, but not now, sorry - be patient...

Do a couple of measurements. *This* is your homework. If you want to
prove your claim that JPEG is better, you need to give some data. You
continue to repeat yourself, over and over again, and time after time
you still haven't given reasons why this is true. I *did* my
homework. Don't like the image? OK, send me data you'd like me to
measure. (My goodness, I'm even offering to do that for you - how
stubborn can one be?)

> I have just prepared an article which explains in detail my earlier
> introduction of the lossless transformations in the JPEG DCT domain
> (jpegtran rotation etc.). This article will presumably appear at the end
> of January or the start of February in a popular German cOMPUtER
> magazine (I will announce it on my site then and put it online
> after the magazine is out).

Your arguments are fading away. How does image rotation in the DCT
domain (which is just an application of linear algebra, BTW) have anything
to do with image compression quality? How does that back up your claim?

Right: not at all. You're just trying to draw attention away.

BTW, c't is not exactly a technical magazine with a high scientific(!)
reputation. If you want to be heard by the really important folks in
this business, you should publish elsewhere.

> Note that this is not an academic journal but a more popular one, so
> I will present the results in a very understandable form for a
> large audience.

And you *still* failed to do your homework.

Once and for all: if you dare to insult people, you had better give
reasons. You haven't until now, and you've failed again. This
is so sad, this is so insane. In fact, you're the one doing
propaganda, and you don't even notice...

Thomas

Guido Vollbeding

unread,
Jan 13, 2005, 11:39:10 AM1/13/05
to
Thomas Richter wrote:
>
> Post URLs. Send me an image. Talk is cheap.

I have given the hints. You can choose a picture.
I don't have the time now, and if you don't want to, that's OK for now.

> Then *DO* it or keep quiet.

I'll keep quiet *for now*!

> If you don't accept my choice, send me yours. DO. Don't talk. DO IT.

See above.

> Currently, we're talking about existing standardized codecs.

My proposed method can be used *within* existing standard codecs,
but it can also (slightly) extend the standard for more features.

> > low definition area. (See also the "NIMA METHOD 4" approach.)
>
> Is this an available codec? ISO certified? You're talking about
> another category. Anyhow, the same problem applies again (to any
> block-based codec!), just at a different scale. Blocking images
> before transforming them is not exactly a bright idea.

It's a useful idea, and hardly outperformed by anything else.
For "NIMA METHOD 4" see:

http://ismc.nga.mil/ntb/baseline/docs/n010697/bwcguide25aug98.pdf

> But apparently, you do have enough time to insult people, right?

Enough time to correct mistakes.

> PUBLISH or PERISH. Those are your two choices. Up to now, you've spread
> a lot of hot air. The only way to prevent that is to show some results
> and allow others to reproduce them. That's how the thing works.

FOR NOW I PERISH.

> Your arguments are fading away. How does image rotation in the DCT
> domain (which is just an application of linear algebra, BTW) have anything
> to do with image compression quality? How does that back up your claim?
>
> Right: not at all. You're just trying to draw attention away.

No, I am trying to explain that this is a *start* to the publication
of my results.

> Once and for all: if you dare to insult people, you had better give
> reasons. You haven't until now, and you've failed again. This
> is so sad, this is so insane. In fact, you're the one doing
> propaganda, and you don't even notice...

I WILL present reasons, but it's up to me to decide when the time
is right. You may ask, and I will explain what I can now, but *I*
decide when to publish, sorry.

Regards
Guido

Darryl Lovato

unread,
Jan 13, 2005, 11:59:54 AM1/13/05
to
On 1/13/05 7:54 AM, in article K5idnc8hvf4...@comcast.com,
"news...@comcast.net" <news...@comcast.net> wrote:

> Darryl Lovato <dlo...@allume.com> wrote:
>
>> What we did is significant. Not just the JPEG case we have thus far
>> announced and done the leg-work to have independently verified, but also the
>> demonstration that compression of compressed data != compression of random
>> data, two things that have long been conflated on this newsgroup. And that our
>> "proof to the contrary" - that it IS possible - is a significant contribution to
>> this newsgroup.
>
> This is (at least) the second time you've said this, and it's
> blatantly not true.

> No one who knows compression would get even
> slightly confused over the differences between "compression of
> compressed data" and "compression of random data."

The two things are indeed different, but they have been treated as the same
thing by some people: product reviewers, some (not all) people on this newsgroup,
etc.

> It's in fact
> nonsensical to even talk about this like it's a big deal, and is one
> of the reasons your original announcement (touting this as if it were a
> big deal) rubbed some people the wrong way and made you look like
> something of a crank (although I don't believe you are).

Thanks for the compliment :-)

No I'm not a crank (as is evidenced by independent verification), and I'm
sorry if anyone was rubbed the wrong way.

This is a big deal though (IMHO). There are a lot of compressed files
out there (JPEG and others, stand-alone as well as embedded in other formats
- a JPEG in a PDF, etc.), and users do add them to archives, try to compress
them in other ways, or would simply like to make them smaller. So far, this
hasn't been addressed - until now. There are a LOT of these files - so
being able to do something with them is a big gain for users.

- Darryl

Darryl Lovato

unread,
Jan 13, 2005, 12:24:07 PM1/13/05
to
On 1/13/05 7:54 AM, in article 41E699BB...@jpegclub.org, "Guido
Vollbeding" <gu...@jpegclub.org> wrote:

> Darryl Lovato wrote:
>>
>> I'm not hiding the fact that we are a commercial interest. We ARE trying to
>> one-up our competition (by providing significant benefits to users, that our
>> competition does not), but it really isn't any different than professor A
>> trying to outdo professor B in the academic realm.
>
> And it isn't any better...

But it's a fact of life. Striving to do better than the next guy,
commercial, academic, or otherwise is what makes technology advance.

>> What we did is significant. Not just the JPEG case we have thus far
>> announced and done the leg-work to have independently verified, but also the
>> demonstration that compression of compressed data != compression of random
>> data, two things that have long been conflated on this newsgroup. And that our
>> "proof to the contrary" - that it IS possible - is a significant contribution to
>> this newsgroup.
>
> I don't know what you did, because you don't give any information.
> I don't see any significant contribution unless you provide some
> information about what you are doing.

You will, when the patents are posted.

>> The patents give the details, and they will be readable by everyone when the
>> USPTO posts them to their site. Until then, I was advised by our patent
>> lawyer to not give out details.
>
> But then you should have waited until then to post in this newsgroup.
> For now you are only advertising, and that is also called spam.

Just letting people know it is possible (and verified as such) is an
advancement in the art of data compression. I'm sure a few of you will
figure it out before the details are available publicly, but we are
protected via the patent applications. Having the benefit of knowing it is
possible is something we didn't have when we went into this. You all now
have this knowledge.

>> So... You are saying that this newsgroup should be open-minded, and at the
>> same time saying that only non-commercial discoveries should be posted?
>> Isn't that a contradiction?
>
> But you did NOT post any commercial or whatever discovery at all.
> I can't see what your "discovery" is.

Exactly why we patented it. I'm normally not a huge fan of patents, but
this was a case where, once it was known to be possible, everyone and their
brother would "try to do it". We spent a lot of money on this, and we had
to protect our investment. It's not obvious - otherwise the JPEG test file
on www.Maximumcompression.com would have had some archivers get more than a
few percent compression. Archivers have been around a long time; JPEG (and
other compressed formats) have been around for a long time as well.

Archivers have thus far not "dealt with the issue" of compressing previously
compressed formats. This is a fact that will allow our patent to be
granted.

>> As far as I can tell, the only reason someone didn't come up with what we
>> did before we did is that many people in this field were NOT open-minded
>> - "you can't compress already compressed data" - and didn't bother to try to
>> solve the problem.
>
> Until now I don't see that you solved any problem.

Hmm. We did some surveys - I'll let you do the same. Just guess the number
of JPEG (and other) compressed files that are on people's machines (desktops,
cameras, phones, servers, etc.), and think about the number of users of these
"machines" out there.... Then think about how many of these files are
backed up, sent, etc. Now think about how much compression existing tools
(before us) got on these files. Further, think about how many
compressed files are embedded in other files (PDF, Word, etc.).

A clear problem... And now solved.

>> It really isn't much different than universities - the prestige a university
>> gets by professors publishing "research" attracts students (which means
>> income), which pays the professors salaries. Plus, a lot of the research
>> universities do IS patented, and licensed to commercial interests.
>
> Yes, and that's why academic "research" nowadays is just as misguided
> as commercial research.

Whatever.

- Darryl

> Regards
> Guido

Aleks Jakulin

unread,
Jan 13, 2005, 1:31:21 PM1/13/05
to
Darryl Lovato:

> This is clearly the right group for an announcement, and subsequent
> submission of "proof" from independent and "trusted" individuals, for
> something that has thus far not been done (as well as "deemed"
> impossible) in the data compression space. That is what we did.

Interestingly, none of your "trusted" individuals included JPEG with
arithmetic coding in the comparison, and neither did they include JPEG2000
at perceptually identical quality levels. Furthermore,
neither of the sites you have listed has the actual images online so
that someone else could independently verify your claims. One of them
does not even include your technology in the list with the actual
compressed file sizes.

Furthermore you have avoided my direct question: what is the
performance gain as compared to JPEG with arithmetic coding on the
same benchmarks you have published?

For anyone who's interested, here is the link to Guido's JPEG
implementation with arithmetic coding, that comes with the source:
http://sylvana.net/jpeg-ari/

Aleks Jakulin

unread,
Jan 13, 2005, 1:42:22 PM1/13/05
to
Darryl Lovato:

> What we did is significant. Not just the JPEG case we have thus far
> announced and done the leg-work to have independently verified, but also the
> demonstration that compression of compressed data != compression of random
> data, two things that have long been conflated on this newsgroup. And that our
> "proof to the contrary" - that it IS possible - is a significant contribution
> to this newsgroup.

Judge the novelty of your "significant contribution" in the context of
this snippet from Guido Vollbeding's documentation to jpeg-ari
(28-Mar-98):
===
Transcode given JPEG files simply with a command like

jpegtran -arithmetic [-progressive] < orig.jpg > arit.jpg

into an arithmetic coded version LOSSLESSLY! Since there are
practically no applications in existence which can handle such
files, you can only transform it back with the same tool

jpegtran [-optimize] [-progressive] < arit.jpg > orig2.jpg

to verify correct operation.

Thus, you can easily verify the enhanced compression performance
of the arithmetic coding version compared to the Huffman (with
fixed or custom tables) version.

The claim to evaluate was that arithmetic coding gives an average
5-10% compression improvement against Huffman.
Early tests with this implementation support this claim, and you
can perform tests with own material.
===
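
To run that comparison over a batch of your own JPEG files, a small driver
along these lines should do (a sketch only - it assumes the jpegtran binary
built from the jpeg-ari package above is on your PATH, and uses the flags
shown in the quoted documentation):

import os
import subprocess
import sys

def transcoded_size(flags, src):
    # Losslessly transcode one file with the given jpegtran flags; return the new size.
    dst = src + "." + "-".join(f.lstrip("-") for f in flags) + ".jpg"
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        subprocess.run(["jpegtran", *flags], stdin=fin, stdout=fout, check=True)
    return os.path.getsize(dst)

for src in sys.argv[1:]:
    base = os.path.getsize(src)
    for flags in (["-optimize"], ["-progressive"],
                  ["-arithmetic"], ["-arithmetic", "-progressive"]):
        size = transcoded_size(flags, src)
        print("%-24s %-26s %9d bytes  %5.1f%% smaller"
              % (os.path.basename(src), " ".join(flags), size,
                 100.0 * (base - size) / base))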

SuperFly

unread,
Jan 13, 2005, 2:43:35 PM1/13/05
to
On Thu, 13 Jan 2005 17:24:07 GMT, Darryl Lovato <dlo...@allume.com>
wrote:

[snip]

>> But you did NOT post any commercial or whatever discovery at all.
>> I can't see what your "discovery" is.
>
>Exactly why we patented it. I'm normally not a huge fan of patents, but
>this was a case where, once it was known to be possible, everyone and their
>brother would "try to do it". We spent a lot of money on this, and we had
>to protect our investment. It's not obvious - otherwise the JPEG test file
>on www.Maximumcompression.com would have had some archivers get more than a
>few percent compression. Archivers have been around a long time; JPEG (and
>other compressed formats) have been around for a long time as well.
>
>Archivers have thus far not "dealt with the issue" of compressing previously
>compressed formats. This is a fact that will allow our patent to be
>granted.

The million dollar question here is: can you compress a jpeg file with
the header chopped off, or any segment of a jpeg file for that matter.
This would prove that you can actually model&compress a jpeg
compressed file.

But i think you mean uncompress the jpeg file, re-model it, and
recompress it with a state of the art compression scheme. And do the
reverse to build back the jpeg file. Which is something completely
different, and has nothing to do with compressing already compressed
data.

It's like decompressing a huffman compressed file, and compressing it
with an arithmetic compressor which will probably give you an extra
20%+ compression.

Just my 2 eurocents ..

-SF-


Michael Collins

unread,
Jan 13, 2005, 5:30:39 PM1/13/05
to
Malcolm Taylor wrote:

> For example, if I were to take a JPEG and blat
> a few random numbers into it with a hex editor, would your tech still
> compress it well?

Do you mean "Will it compress random data I'm pretending is a JPEG?" :-)

Regards,
Mike...
--


Darryl Lovato

unread,
Jan 13, 2005, 5:32:41 PM1/13/05
to
Aleks Jakulin,

Sorry, nice try though...

But as I stated previously, I can't give details until the patents are made
public, per our lawyer's advice.

If you are as smart as you think you are - you might (maybe) be able to
figure it out - especially since you have been given a head start by now by
knowing it is possible - this is something (critical knowledge) we didn't
have when we attempted to solve the problem in the first place.

Good luck.

- Darryl

Malcolm Taylor

unread,
Jan 13, 2005, 7:34:14 PM1/13/05
to

No, I mean will it crash and burn if the format is not exactly as expected.
When considering recoding formats for better compression while also
maintaining the lossless requirement (always reproducing exactly what came
in), you must also consider cases where the format doesn't exactly
match your expectations. This is a big problem (and one I've dealt with
a lot in WinRK) and has often been the determining factor when I have
tried to develop techniques such as the one being discussed here.
In a situation where a file has been corrupted or is just badly
formatted, you do not want your archiver to crash or produce invalid
output (data corruption).


Anyway, on another note, this discussion has inspired me to have a go at
JPEG compression and see what ideas I can come up with. Unfortunately,
due to the software patents mentioned, I will not be able to release
anything until they are published, in case I happen upon something
similar to the patents. Still, it might be interesting to see if we can
discover independently what redundancy he has made use of.

I'll report back in a week or so if I make any progress :).

Malcolm

Aleks Jakulin

unread,
Jan 13, 2005, 7:38:21 PM1/13/05
to
You're trying to frame me as some sort of a wannabe competitor. This
is not the case. After the basic specification and implementation work
on JPEG-LS and PNG, I've no longer been active in this area.

I've read your white paper. There it says:

Allume Systems

StuffIt® Image Compression White Paper
StuffIt Deluxe® 9.0 release

Lossless Compression of JPEG images

Kodim01.jpg 62KB 62KB 47KB 25%
Kodim02.jpg 34KB 34KB 24KB 30%

etc.

Could you provide links to these JPG images, so that we (potential
customers, potential journalists, or those of us who may be asked to
express an opinion about your offer) can verify your claims? Given the
possibility that you overfitted to these images, could you provide the
results for the publicly available benchmarks, which includes larger
images than these? You're making it needlessly hard to verify the
results.

Besides, it's quite impossible to guess your method from a limited set
of the original JPEG images and the file sizes. I'm definitely not
smart enough to do this: rather I believe I can prove that this would
be outright impossible.

After you've done the press release, it's too late to talk to your
lawyer. You have to bite the bullet and defend your claims like
everyone else. Else, you should have waited a bit longer. At any rate,
I will no longer engage in a debate on this topic. I've provided the
information that might be useful, and this is the third time I've
asked for evidence. Now I give up.

--
mag. Aleks Jakulin
http://www.ailab.si/aleks/
Artificial Intelligence Laboratory,
Faculty of Computer and Information Science,
University of Ljubljana, Slovenia.


Darryl Lovato:


> Aleks Jakulin,
>
> Sorry, nice try though...
>
> But as I stated previously, I can't give details until the patents
> are made public, per our lawyer's advice.
>
> If you are as smart as you think you are - you might (maybe) be able to
> figure it out - especially since you have been given a head start by

Darryl Lovato

unread,
Jan 13, 2005, 10:04:29 PM1/13/05
to
On 1/13/05 4:38 PM, in article cs74a8$fi8$1...@planja.arnes.si, "Aleks Jakulin"
<a_jakulin@@hotmail.com> wrote:


<snip>


>
> Could you provide links to these JPG images, so that we (potential
> customers, potential journalists, or those of us who may be asked to
> express an opinion about your offer) can verify your claims? Given the
> possibility that you overfitted to these images, could you provide the
> results for the publicly available benchmarks, which includes larger
> images than these? You're making it needlessly hard to verify the
> results.


These are the well known "kodak images", compressed with jpeg quality 50.

There has already been independent confirmation that this works though. Two
of the best known authors of compression test sites have already posted to
this thread that they were able to test what we have. Jeff recently posted
results on a few of his own files.

<http://www.compression.ca/act/act-jpeg.html>

> Besides, it's quite impossible to guess your method from a limited set
> of the original JPEG images and the file sizes. I'm definitely not
> smart enough to do this: rather I believe I can prove that this would
> be outright impossible.
>
> After you've done the press release, it's too late to talk to your
> lawyer. You have to bite the bullet and defend your claims like
> everyone else.

I believe I have done more than enough to defend the claims - independent
confirmation, etc.

> Else, you should have waited a bit longer. At any rate,
> I will no longer engage in a debate on this topic.

Well, that's good :-)

> I've provided the
> information that might be useful, and this is the third time I've
> asked for evidence. Now I give up.

Whatever,

- Darryl

Errol Smith

unread,
Jan 13, 2005, 11:48:45 PM1/13/05
to
On Fri, 14 Jan 2005 13:34:14 +1300, Malcolm Taylor wrote:
>No, I mean will it crash and burn if the format is not exactly as expected.
>When considering recoding formats for better compression while also
>maintaining the lossless requirement (always reproducing exactly what came
>in), you must also consider cases where the format doesn't exactly
>match your expectations. This is a big problem (and one I've dealt with
>a lot in WinRK) and has often been the determining factor when I have
>tried to develop techniques such as the one being discussed here.
>In a situation where a file has been corrupted or is just badly
>formatted, you do not want your archiver to crash or produce invalid
>output (data corruption).

I can only assume they have an "escape clause" in the encoder, so that
if it cannot cope with a jpeg file, the file is then just stored (or an
attempt is made to compress it with a conventional method, and if that
yields no improvement, then it is stored).
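
In pseudo-Python, the kind of wrapper I have in mind would look roughly like
this (a guess at the structure only, not their actual code;
recompress_jpeg/restore_jpeg are stubs standing in for whatever the real
codec does):

import zlib

STORED, CONVENTIONAL, JPEG_RECODED = 0, 1, 2      # method tags written to the archive

def recompress_jpeg(data):
    # Placeholder for the format-aware recoder; a real one would re-entropy-code
    # the JPEG scan data.  This stub just signals "can't handle it".
    raise ValueError("not implemented in this sketch")

def restore_jpeg(blob):
    raise ValueError("not implemented in this sketch")

def pack_member(data):
    # 1. Try the JPEG-aware recoder; bail out on anything it can't parse.
    try:
        recoded = recompress_jpeg(data)
        if len(recoded) < len(data) and restore_jpeg(recoded) == data:
            return JPEG_RECODED, recoded          # verified bit-for-bit round trip
    except ValueError:
        pass                                      # corrupt or odd JPEG: fall through
    # 2. Fall back to a conventional coder, and finally to plain storing.
    conventional = zlib.compress(data, 9)
    if len(conventional) < len(data):
        return CONVENTIONAL, conventional
    return STORED, data

The round-trip check in step 1 is what keeps a badly formatted file from
ever producing corrupt output: if the recoder can't reproduce the input
exactly, the file simply takes the conventional path.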

>Anyway, on another note, this discussion has inspired me to have a go at
>JPEG compression and see what ideas I can come up with. Unfortunately,
>due to the software patents mentioned, I will not be able to release
>anything until they are published, in case I happen upon something
>similar to the patents. Still, it might be interesting to see if we can
>discover independently what redundancy he has made use of.

No, you can release anything you want any time. If you just happen to
come up with the same method as theirs then that's fine and you can
publish it all you want and even release a free compressor based on
your work.
The "BUT" is that they have a patent pending, so they OWN the idea
(that ownership is in the patent application), even if you come up
with it completely independently. That means they can stop you (or
anyone else) from using the idea and/or claiming it was yours.
(unless of course, prior art exists for their method...)

>I'll report back in a week or so if I make any progress :).

Please do, and don't be afraid to publish what you find :)

Personally, I think what they do is first take the data back to
the stage before it is RLE and Huffman encoded. I doubt they go back
further than this because you then have to allow for non-standard
quantization tables, but I could be wrong.
Then they re-encode the data with a modelling arithmetic (or range) coder
(no, I don't know what model :).
The data outside the RLE/Huffman compressed data (other segment types
like comments, etc.) is probably just stored or compressed with a
conventional coder.
They would have to do something like this because it needs to be 100%
reversible. They can't completely decode the JPEG and use a different
coder (wavelet or something) unless they stored every single bit
(literally) of information needed to restore it back completely, which
would probably outweigh the gains, so I doubt they do that.
There may be some kind of modelling you could apply to the auxiliary
segments to improve compression there (like allowing for common ones
like Photoshop data, thumbnails etc). For example, you could generate the
thumbnail from the full image data (because you have that anyway) and
therefore not need to encode the thumbnail...
Other options include making the file (reversibly) progressive, which
almost always results in compression.

(of course, I could be completely wrong :-)
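
Just to make the above concrete, the first step of any such scheme is simply
splitting the file into its marker segments and the entropy-coded scan.
A deliberately rough sketch (my guess at the obvious approach, not their
code; baseline JPEG, single scan, no restart markers):

import struct

def split_segments(jpeg_bytes):
    # Returns (list of (marker, payload) header segments, entropy-coded scan data).
    segments, i = [], 2                       # skip the initial SOI marker (FF D8)
    while i < len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:                    # EOI: no scan found
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segments.append((marker, jpeg_bytes[i + 4:i + 2 + length]))
        i += 2 + length
        if marker == 0xDA:                    # SOS: the rest (up to EOI) is scan data
            return segments, jpeg_bytes[i:jpeg_bytes.rfind(b"\xff\xd9")]
    return segments, b""

Everything in 'segments' (quantization tables, Huffman tables, comments,
thumbnails) can be stored or recompressed conventionally; the real work -
and presumably the patented part - is whatever is done to the scan data.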

I will be quite interested to see how it copes with well-optimised
files (those with any auxiliary segments removed, and optimised Huffman
tables).

Errol Smith
errol <at> ros (dot) com [period] au

Phil Carmody

unread,
Jan 14, 2005, 2:53:05 AM1/14/05
to
news...@comcast.net writes:

I suspect that if you trawl the archives for n00bie posts such as
"How do I compress a 680MB .mpg file to put it onto a 650MB CDR?"
you'll see many responses that say something like the zeroth order approximation
"You can't compress a .mpg file, it's already compressed."

It's a case of having to lie to keep things simple.

Actually, Darryl - if you ask me, your biggest killing will come not
from trying to satisfy the 1MB JPEG market, but from the 1GB MPEG (QT/
avi/whatever) market.

Phil Carmody

unread,
Jan 14, 2005, 3:07:15 AM1/14/05
to
SuperFly <n...@mail.com> writes:
> On Thu, 13 Jan 2005 17:24:07 GMT, Darryl Lovato <dlo...@allume.com>
> wrote:
[...]

> The million dollar question here is: can you compress a jpeg file with
> the header chopped off, or any segment of a jpeg file for that matter.
> This would prove that you can actually model&compress a jpeg
> compressed file.

That is the free question with the packet of cornflakes, alas.

Your question translates to "can you model something as well without the
correct context". To expect any answer apart from "no" is absurd.

> But i think you mean uncompress the jpeg file, re-model it, and
> recompress it with a state of the art compression scheme. And do the
> reverse to build back the jpeg file. Which is something completely
> different, and has nothing to do with compressing already compressed
> data.
>
> It's like decompressing a huffman compressed file, and compressing it
> with an arithmetic compressor which will probably give you an extra
> 20%+ compression.

Compressors are functions, and if s and j are compressors, then
(s o j^-1) is also a compressor which has as its domain the range
of j.

Do you really think that a compressor should deliberately ignore
information it knows or can deduce about its input? Why?
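
Concretely (my own toy example in Python, nothing to do with Allume's code):
take s = lzma and j = zlib, and the composition is trivial to write down -
the genuinely hard, format-specific part is making the inverse direction
reproduce the original bytes exactly:

import lzma
import zlib

def recompress(zlib_stream):
    # s o j^-1 : defined exactly on the range of j (valid zlib streams).
    return lzma.compress(zlib.decompress(zlib_stream))

def restore(lzma_stream, level=6):
    # j o s^-1 : only reproduces the original bytes if we re-run j with the
    # same settings the original encoder used - which is precisely what a
    # "lossless recompressor" for JPEG has to get right for Huffman tables,
    # marker segments, padding bits and so on.
    return zlib.compress(lzma.decompress(lzma_stream), level)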

Thomas Richter

unread,
Jan 14, 2005, 3:52:06 AM1/14/05
to
Hi Darryl,

> These are the well known "kodak images", compressed with jpeg quality 50.

But you know as well as I do, of course, that this is not nearly enough
to reproduce the result, right? (-;

It would be rather helpful (indeed, not only for me, but for your customers
as well) to provide the original, uncompressed image, and to state which
jpg implementation you've been using to get these results. As you know, of
course, results may differ dramatically depending on the settings, and
"quality 50" says almost nothing. JPEG allows you to customize quantizer
tables, for example, and at least the classic freely available IJG jpeg is
not exactly high quality either. One can make quite a bit of improvement
there without leaving the jpeg specifications.

> There has already been independent confirmation that this works though. Two
> of the best known authors of compression test sites have already posted to
> this thread that they were able to test what we have. Jeff recently posted
> results on a few of his own files.

> <http://www.compression.ca/act/act-jpeg.html>

I currently have no doubt that this works for "typical jpeg
files". In principle, the way to handle this issue is more or less
"decompress and recompress with a better codec". The idea I would
currently come up with would be to leave the data quantized as it is
(without redoing the DCT), then use a smarter (possibly inter-block)
context model and an arithmetic coder afterwards.
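
As a toy illustration of where an inter-block context model gets its gain
(my sketch, not Allume's method): assume 'blocks' is a row-major grid of
64-entry quantized-coefficient lists pulled out of some decoder, and measure
how much the empirical entropy of one coefficient drops when it is
conditioned on a crude context taken from the block to its left:

import math
from collections import Counter

def entropy(counts):
    # Empirical entropy in bits/symbol of a Counter of occurrence counts.
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def context_gain(blocks, k):
    plain, joint, ctx_only = Counter(), Counter(), Counter()
    for row in blocks:
        for x in range(1, len(row)):
            coef, left = row[x][k], row[x - 1][k]
            ctx = 0 if left == 0 else (1 if abs(left) < 3 else 2)   # crude 3-way context
            plain[coef] += 1
            joint[(ctx, coef)] += 1
            ctx_only[ctx] += 1
    # H(C | ctx) = H(ctx, C) - H(ctx); the drop versus H(C) is the modelling gain.
    return entropy(plain) - (entropy(joint) - entropy(ctx_only))

Any gain this reports is redundancy the block-by-block Huffman stage leaves
on the table, and exactly what an arithmetic coder with such contexts can
collect.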

This would make it hard, though, to recompress images that have been
jpg-compressed with anything but the standard options. Not that this
wouldn't be useful for the millions of jpeg images out there. The question
is rather whether this is a "high invention", as you put it. (-;

Thus, "compressing jpgs" sounds - sorry to say - more like a marketing
gimmick to me. Instead, you're competing in the market of new image
compression schemes, and you should try to compete with something more
modern instead of trying to beat traditional jpeg, which isn't too
hard to begin with - at least when making claims about compression
performance. That your move is a wise one marketing-wise I do not
doubt, though. (-; I do believe there is indeed a market for what
you're doing. Just be a bit careful when posting here.

So long,
Thomas

Guido Vollbeding

unread,
Jan 14, 2005, 3:52:35 AM1/14/05
to
Darryl Lovato wrote:
>
> These are the well known "kodak images", compressed with jpeg quality 50.

If this is the IJG quality scale, then your results are not very
surprising. It is well known that a similar compression gain can
be achieved with lossless transcoding from Huffman to arithmetic
JPEG mode, for example.
Why do you think this possible compression improvement isn't
used in practice? I can tell you: because of the patent encumbrance!
Once those patents expire (only 5 years or so from now), people will
be able to LOSSLESSLY transcode all their existing JPEG files to
arithmetic coding and gain some space, so your advantage,
if it exists at all, will shrink.

Regards
Guido

Guido Vollbeding

unread,
Jan 14, 2005, 4:00:32 AM1/14/05
to
Aleks Jakulin wrote:
>
> Judge the novelty of your "significant contribution" in the context of
> this snippet from Guido Vollbeding's documentation to jpeg-ari
> (28-Mar-98):

Yes, and the examples I used for testing were rather low-compression,
high-noise images. You should expect more gain at higher
compression and in smoother image areas. (BTW, you can often get
yet more compression by using the progressive coding mode, and
yet more by using a custom progressive configuration.)

Regards
Guido

Aleks Jakulin

unread,
Jan 14, 2005, 4:35:53 AM1/14/05
to
Darryl Lovato:

> These are the well known "kodak images", compressed with jpeg
> quality 50.

Can you post a link?

> There has already been independent confirmation that this works though. Two
> of the best known authors of compression test sites have already posted to
> this thread that they were able to test what we have. Jeff recently
> posted results on a few of his own files.
>
> <http://www.compression.ca/act/act-jpeg.html>

Yes, the links to the images don't work. Fishy.

> I believe I have done more than enough to defend the claims -
> independent confirmation, etc.

Looks like there is a bunch of skeptical people over here...

Aleks

SuperFly

unread,
Jan 14, 2005, 4:46:06 AM1/14/05
to
On 14 Jan 2005 10:07:15 +0200, Phil Carmody
<thefatphi...@yahoo.co.uk> wrote:

[snip]

>> It's like decompressing a huffman compressed file, and compressing it
>> with an arithmetic compressor which will probably give you an extra
>> 20%+ compression.
>
>Compressors are functions, and if s and j are compressors, then
>(s o j^-1) is also a compressor which has as its domain the range
>of j.
>
>Do you really think that a compressor should deliberately ignore
>information it knows or can deduce about its input? Why?

No, I don't. But my point was that the original claim should have been
phrased differently. If the new codec doesn't directly compress the
jpeg stream, we're basically dealing with a new image codec that can
outperform at least one jpeg flavour by 20% and ships with transcoding
capabilities. Which should have been announced and tested as such.

And the fact that it can outperform most regular compressors isn't
that spectacular, and is somewhat beside the point imho. I think the new
image codec should be tested against other image codecs under similar
circumstances.

And if we use the maximumcompression.com example, the question should be:
if we decompress a10.jpg back to raw data, is there any image
codec out there that can losslessly compress it to +/- 643,403 bytes
and decompress it back to raw data? If there isn't, we're dealing with
something special; otherwise not.

And from what I've seen so far, my bet would be that we're not. But I
admit I'm not an image expert.

Again, just my 2 eurocents ..

-SF-

Matt Mahoney

unread,
Jan 14, 2005, 9:24:54 AM1/14/05
to
SuperFly wrote:
> And if we use the maximum compression example the question should be:
> if we decompress the a10.jpg back to raw data, is there any image
> codec out there that can losslessly compress it to +/- 643.403 bytes
> and decompress it back to raw data. If there isn't we're dealing with
> something special, otherwise not.

I think there is more to it than that. The decompressor also has to
produce a bit for bit identical jpeg file. Just recompressing the
image with jpeg isn't guaranteed to do that. Also, I don't think that
switching from Huffman codes to arithmetic would improve the
compression by 24% unless there were a lot of 1 bit codes.

I'm not an expert on jpeg, but the way I understand it is this: 8x8 DCT
-> quantization (lossy) -> Huffman coding. Correct me if I'm wrong,
but jpeg doesn't seem to exploit any redundancy between adjacent 8x8
blocks. I think this is where the big gains can be made.
-- Matt Mahoney

Guido Vollbeding

unread,
Jan 14, 2005, 9:53:31 AM1/14/05
to
Matt Mahoney wrote:
>
> I'm not an expert on jpeg, but the way I understand it is this: 8x8 DCT
> -> quantization (lossy) -> Huffman coding. Correct me if I'm wrong,
> but jpeg doesn't seem to exploit any redundancy between adjacent 8x8
> blocks. I think this is where the big gains can be made.

The arithmetic JPEG coder does this via context conditioning.
Also, in the progressive JPEG mode the AC coding can use EOB (End-Of-Block)
runs over multiple blocks. Especially with the successive approximation
feature in the progressive mode you can thus gain noticeable compression
over sequential coding. This is all possible via lossless transcoding
(jpegtran).

Regards
Guido

news...@comcast.net

unread,
Jan 14, 2005, 12:06:39 PM1/14/05
to

You simultaneously overestimate and underestimate what you've done
(assuming your results are valid, etc., etc.).

Here's why you overestimate what you claim:

"Knowing it is possible" isn't new knowledge in the least. What
you've done is functionally equivalent to replacing the lossless codec
of JPEG (done after transform and quantization) with a better codec.
Lots of people have worked on doing this, which they certainly didn't
do because they thought it was impossible. People have gotten mild
improvements -- just switching from Huffman to an arithmetic coder
gets you maybe 5% savings on average.

Here's why you underestimate what you claim:

The fact that people have known improvements are possible, and that
lots of people have worked on it and haven't gotten close to the 30%
improvement you claim, means that you have found something that lots
of other very smart people, who have considered the problem for the
past several decades, haven't been able to do.

Anyway, lots of people understand JPEG, and lots of people have worked
on lossless codecs following the lossy transform and quantization
phase. I am familiar with a lot of this (I was an outside expert
hired by one of the big electronics companies in the JPEG patent
ugliness about a year and a half ago, which dealt specifically with
the standard codec), so you can count me as (a) unimpressed by your
claims about "knowing it can be done" and distinguishing from
compressing random data, (b) impressed by the claimed results, and (c)
disappointed in the quoted efficiency numbers (which makes me curious
as to what you're doing -- very few lossless schemes are that slow).

Matt Mahoney

unread,
Jan 14, 2005, 12:55:43 PM1/14/05
to
news...@comcast.net wrote:
> "Knowing it is possible" isn't new knowledge in the least.

Well, I have to disagree. People will not try to solve a problem
unless they believe they can succeed. The jpeg benchmark at
maximumcompression.com has been posted for some time, but nobody ever
got very far with it because we all "know" that compressed data can't
be compressed again, so nobody bothered to try. But now that we know
it is possible I think it won't be long before others implement the
"obvious" solution. Now the only question is can you do better than
643,403 bytes?

-- Matt Mahoney

Fabio Buffoni

unread,
Jan 14, 2005, 1:05:24 PM1/14/05
to
> Results for my a10.jpg file:
> On my AMD Athlon 1800+ a10.jpg 842468 -> 643403 76.37% in about 4 sec

A couple of tests recompressing a10.jpg using jpegcrop:

856024 (huffman default)
824441 (huffman optimized)
758340 (order-0 arithmetic)

780872 (progressive - huffman default)
780872 (progressive - huffman optimized)
737555 (progressive - arithmetic)

643403 seems to be quite good work.

I'm wondering if the algorithm is completely lossless or if it preserves
only pixel colors. How does it work if the image has steganographic
information hidden in it?

FB

Jeff Gilchrist

unread,
Jan 14, 2005, 2:02:17 PM1/14/05
to
Hi Aleks,

If you look at the bold writing at the top of the page just above the
images, you will see that Slashdot linked to my server which caused
230000+ people to hit the site in one day. I removed the full-size
test JPEG images to save bandwidth and my server. Now that the /.
effect is over, I have put the images back on again. They have been
there since the beginning, only offline for two days.

Jeff Gilchrist

unread,
Jan 14, 2005, 2:08:44 PM1/14/05
to
Aleks Jakulin wrote:

> Interestingly, none of your "trusted" individuals included JPEG with
> arithmetic coding into comparison, and neither included JPEG2000 into
> comparison at the perceptually identical quality levels. Furthermore,
> neither of the sites you have listed has the actual images online so
> that someone else would independently verify your claims. One of them
> does not even include your technology in the list with the actual
> compressed file sizes.

Some people are busy and don't have 24/7 to test every piece of
compression software out there. ;-)

I have since updated the website to include results of JPEG with
arithmetic compression and also lossless JPEG "optimization". I don't
currently have access to software that does JPEG2000 and I'm not quite
sure how to take a JPEG file and convert it to a JPEG2000 format at
perceptually identical quality levels. I'm not even sure if that is
possible/fair starting out with a lossy JPEG.

If you want to see the latest results and grab a copy of the test files
to do your own analysis, you can find it here:
http://compression.ca/act/act-jpeg.html

Regards,
Jeff Gilchrist

Aleks Jakulin

unread,
Jan 14, 2005, 3:28:21 PM1/14/05
to
Jeff:

Many thanks! The new results are most informative and very helpful.
Allume yields a 10%-20% jump over arithmetic coding, which is fine,
but not great considering an order-of-magnitude increase in
compression *and* decompression time. It's about the same level of
improvement over arithmetic coding as arithmetic coding is over
RLE+Huffman. Now, for a 12% jump there would have been less of the
needless media frenzy we have seen over the past few days.

I think Thomas would be well qualified and hopefully motivated to
handle the JPEG2k part for a losslessly encoded reference image ;-)
Two pairings would be nice: one for RMSE and one for an approximately
perceptually identical quality level. Or are there any links?

As for JPEG2k software:
http://datacompression.info/JPEG2000.shtml

There are the following implementations:
Jasper: http://www.ece.uvic.ca/~mdadams/jasper/ (nice, comes with the
source)
Kakadu: http://www.kakadusoftware.com/ (fine, no source)
jj2000: http://jj2000.epfl.ch/ (OK, Java, with the source)

--
mag. Aleks Jakulin
http://www.ailab.si/aleks/
Artificial Intelligence Laboratory,
Faculty of Computer and Information Science,
University of Ljubljana, Slovenia.


Jeff Gilchrist:


> I have since updated the website to include results of JPEG with
> arithmetic compression and also lossless JPEG "optimization". I don't
> currently have access to software that does JPEG2000 [...]

Malcolm Taylor

unread,
Jan 14, 2005, 3:48:06 PM1/14/05
to

Guido Vollbeding wrote:
> The arithmetic JPEG coder does this via context conditioning.
> Also, in the progressive JPEG mode the AC coding can use EOB (End-Of-Block)
> runs over multiple blocks. Especially with the successive approximation
> feature in the progressive mode you can thus gain noticeable compression
> over sequential coding. This is all possible via lossless transcoding
> (jpegtran).

I feel like throwing a spanner in the works. A lot of people here are
saying 'this is not new, my xx brand jpeg codec can do well too by
transcoding losslessly'. Unfortunately you are taking a loose definition
of lossless.
Can you reverse your transcode and get the identical original file out?

I must say that there does seem to be something at least slightly novel
here. It remains to be seen how novel, but time will tell, and we may
learn something interesting by trying to replicate their work in the
meantime!

Malcolm

Errol Smith

unread,
Jan 14, 2005, 9:19:41 PM1/14/05
to
On 14 Jan 2005 11:08:44 -0800, Jeff Gilchrist wrote:
>Some people are busy and don't have 24/7 to test every piece of
>compression software out there. ;-)
>
>I have since updated the website to include results of JPEG with
>arithmetic compression and also lossless JPEG "optimization". I don't
>currently have access to software that does JPEG2000 and I'm not quite
>sure how to take a JPEG file and convert it to a JPEG2000 format at
>perceptually identical quality levels. I'm not even sure if that is
>possible/fair starting out with a lossy JPEG.
>
>If you want to see the latest results and grab a copy of the test files
>to do your own analysis, you can find it here:
>http://compression.ca/act/act-jpeg.html

Jeff,

I ran my lossless optimiser 'webpack' on your test files and got the
following results (number in brackets is original filesize).

DSCN3974.jpg 1,030,805 (1,114,198)
DSCN4465.jpg 652,585 (694,895)
DSCN5081.jpg 481,546 (516,726)
A10.jpg 780,872 (842,468)
(A10.jpg is the jpg test file from maximumcompression.com)

Webpack uses jpegtran but produces files that are progressive (or
not, whichever is smaller - as long as you use brute mode), which is
smaller than your tests with only jpegtran -optimise, yet pixel
identical. If you use "jpegtran -optimize -progressive" you should get
identical results. I found that progressive mode is smaller (sometimes
significantly) for most images, unless they are particularly small.
I would be quite curious whether you could run the progressive+optimized
jpegtran versions through Allume and see what difference it makes.

My util also strips comments & other unneeded chunks ("-copy none")
but I modified it to not do that for this test and it made no
difference, so the source files must not contain any such data.
(webpack is at http://www.kludgesoft.com/nix/webpack.html if you are
curious).

Errol

Matt Mahoney

unread,
Jan 14, 2005, 11:30:05 PM1/14/05
to
Fabio Buffoni wrote:
> > Results for my a10.jpg file:
> > On my AMD Athlon 1800+ a10.jpg 842468 -> 643403 76.37% in about 4 sec
>
> A couple of tests recompressing a10.jpg using jpegcrop:
>
> 856024 (huffman default)
> 824441 (huffman optimized)
> 758340 (order-0 arithmetic)
>
> 780872 (progressive - huffman default)
> 780872 (progressive - huffman optimized)
> 737555 (progressive - arithmetic)

But can you reverse it and get an identical file?

> 643403 seem to be a quite good work.

Especially since the image was apparently compressed with a high quality
setting, which makes it harder to compress losslessly because not much
noise was removed. I looked at the quantization tables and the values
are all in the range 1-3 (DC = 1). Also the chroma is not downsampled.
(It also seems to be nonstandard, if I read the JPEG standard right -
the 4 Huffman and 2 quantization tables are concatenated without
separate headers for each table.)

Still, you could improve over the JPEG version of arithmetic coding.
It uses a multiplication-free algorithm, essentially rounding the
binary decision probability of the least probable symbol to the nearest
power of 1/2. That's got to cost a few percent.

> I'm wondering if the algorithm is completely lossless or if it preserves
> only pixel colors. How does it work if the image has steganographic
> information hidden in it?
>
> FB

They claim it's lossless. Steganography would be a good test of this.

SuperFly

unread,
Jan 15, 2005, 5:59:19 AM1/15/05
to
On 14 Jan 2005 20:30:05 -0800, "Matt Mahoney" <matma...@yahoo.com>
wrote:

>> 856024 (huffman default)
>> 824441 (huffman optimized)
>> 758340 (order-0 arithmetic)
>>
>> 780872 (progressive - huffman default)
>> 780872 (progressive - huffman optimized)
>> 737555 (progressive - arithmetic)
>
>But can you reverse it and get an identical file?
>
>> 643403 seem to be a quite good work.
>
>Especially since the image was apparently compressed with high quality
>setting, which makes it harder to compress losslessly because not much
>noise was removed. I looked at the quantization tables and the values
>are all in the range 1-3 (DC = 1). Also the chroma is not downsampled.
>(Also it seems to be nonstandard if I read the JPEG standard right -
> the 4 Huffman and 2 quantization tables are concatenated without
>separate headers for each table).

I tested a raw bmp version of the a10.jpg file using several image
suites and image formats (including jpeg2000) and noticed that nothing
I used could get it smaller than +/- 720,000 bytes losslessly.

But I did notice that jpeg2000 could get it down to +/- 600,000 bytes
when using a 98% quality setting, and even much smaller with just a few
extra percent of quality loss.

I also noticed jpeg2000 produced far better visual results than
regular jpeg. I think the jpeg2000 wavelet transform just preserves
more data than a jpeg DCT given the same input data. And I couldn't
distinguish the 98% jpeg2000-compressed image from the original with
the naked eye. So I wouldn't be surprised if the original jpg could
theoretically be rebuilt from a 95%/98% jpeg2000 file. But to be sure
I think one would need to know how much actual data is
stored/preserved in a jpeg/jpeg2000 file at a certain quality
level. Which is a question for the (jpeg) image experts ..

However, if a 98% quality-level jpeg2000 file hasn't preserved enough
data to rebuild the ??% quality-level jpeg a10.jpg file, I agree
they have outperformed everything that's out there by +/- 10%. Which is
pretty amazing, especially if you consider they claim they can do the
same with audio and video formats. And I'd certainly like to know how
they did that, and what else is possible with their method.

-SF-

Jeff Gilchrist

unread,
Jan 16, 2005, 11:40:21 AM1/16/05
to
Matt Mahoney wrote:

> They claim it's lossless. Steganography would be a good test of
> this.

Good idea. To test this out, I grabbed OutGuess 2.0
(http://www.outguess.org/) and steganographically hid a message within
one of the test JPG files:

./outguess -k "willthiswork" -d input.txt DSCN3974.jpg stegged.jpg

The original DSCN3974.jpg test image is 1114198 bytes. The stegged
image stegged.jpg became much smaller at 329258 bytes.

Compressing the stegged image with the Allume JPEG algorithm brought
that size down to 229783 bytes (30% smaller). As in previous tests,
when I uncompressed the JPEG I got back an identical file (SHA-1
confirmed).

I used outguess again to try and retrieve the encoded message and I was
successful:

./outguess -k "willthiswork" -r stegged.jpg output.txt

The decoded message matched the encoded one, bit for bit (SHA-1).

e3b723bc1571d1a432978c705d8ec7a38e868faa *input.txt
e3b723bc1571d1a432978c705d8ec7a38e868faa *output.txt
So it looks like their algorithm truly is lossless.

Jeff.

Matt Mahoney

unread,
Jan 16, 2005, 4:11:16 PM1/16/05
to

Hmm, according to their paper, the hidden message is in the low order
bits of the DCT coefficients. I would expect this to work because
otherwise the decompressed image would not be pixel for pixel
identical. But I can think of some other places to hide data that
would cause trouble for a JPEG compressor, although a message would be
easily detected if you knew to look.

1. Byte stuffing. In the JPEG standard, all markers start with an FF
(hex) byte followed by a code byte. For example, a JFIF file starts
with FF D8 FF E0, which are the codes for SOI (start of image) and APP0
(application data). The standard allows any number of FF fill bytes to
be equivalent to one, so for example FF FF FF D8 FF FF E0 would be
the start of a legal (though unusual) JPEG file. A more likely place
to stuff FF bytes would be in the entropy coded data. If the Huffman
coded data results in an FF byte, then this must be "escaped" by a
following 00 byte. You could replace the sequence FF 00 with FF FF 00
(or some other number of repeats) without modifying the displayed
image. Would the decompressor preserve these stuffed bytes? (A small
unstuffing sketch follows this list.)

2. Redundant Huffman codes. The JPEG Huffman tables allow more than
one code to represent the same symbol. Although unusual, you could
construct such a table and use the choice of codes to hide data. Would
the JPEG decompressor work on this?
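
As a minimal Python sketch of the unstuffing step referred to in point 1
(this is only an illustration of baseline JPEG byte stuffing, not
Allume's code; marker parsing and fill bytes are ignored here):

    def unstuff(entropy_coded: bytes) -> bytes:
        # In the entropy-coded segment, an 0xFF data byte is escaped by a
        # stuffed 0x00.  A decoder simply drops that 0x00, so by the time
        # the data has been decoded there is no record left of byte-level
        # oddities -- which is why bit-exact reconstruction needs care.
        out = bytearray()
        i = 0
        while i < len(entropy_coded):
            out.append(entropy_coded[i])
            if (entropy_coded[i] == 0xFF and i + 1 < len(entropy_coded)
                    and entropy_coded[i + 1] == 0x00):
                i += 2  # skip the stuffed zero byte
            else:
                i += 1
        return bytes(out)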

I could probably come up with other anomalies that might uncover
problems, like out-of-order or misplaced RSTx markers, hiding data in
APPx tags and comments, unused quantization tables, Huffman tables
defining codes for invalid symbols, overflowing the Huffman tables
(e.g. five 2-bit codes), nested thumbnails, etc. It would be interesting
to see if the program compresses these, rejects them as invalid JPEGs and
falls back to standard methods, or crashes.

BTW I was looking at the maximumcompression.com benchmarks, and there
are actually 4 files that could benefit from JPEG compression: a10.jpg,
mso97.dll, flashmx.pdf and ohs.doc. These are all baseline DCT images
except for one progressive DCT image embedded in ohs.doc. Not all of
them have APP0 tags. I also scanned my own computer and there are
embedded JPEGs all over. I would say it is about 95% baseline DCT and
5% progressive, but none of the other 11 formats like lossless,
arithmetic coded, hierarchical, differential, etc.

-- Matt Mahoney

Malcolm Taylor

unread,
Jan 16, 2005, 7:05:50 PM1/16/05
to
> BTW I was looking at the maximumcompression.com benchmarks, and there
> are actually 4 files that could benefit from JPEG compression: a10.jpg,
> mso97.dll, flashmx.pdf and ohs.doc. These are all baseline DCT images
> except for one progressive DCT image embedded in ohs.doc. Not all of
> them have APP0 tags.

Now, that is an interesting idea, and one I may well implement in a
future WinRK version. I do not currently look for embedded JPEG images;
however, it would be a rather easy task to perform, and wouldn't break
backwards compatibility.

Malcolm

Guido Vollbeding

unread,
Jan 17, 2005, 4:12:47 AM1/17/05
to
Malcolm Taylor wrote:
>
> I feel like throwing a spanner in the works. A lot of people here are
> saying 'this is not new, my xx brand jpeg code can do well to by
> transcoding losslessly'. Unfortunately you are taking a loose definition
> of lossless.
> Can you reverse your transcode and get the identical original file out?

In image compression I don't really need to.
Lossless transcoding on the basis of quantized DCT-coefficients is enough
to get the identical *image* out.

> I must say that there does seem to be something at least slightly novel
> here. It remains to be seen how novel, but time will tell, and we may
> learn something interesting by trying to replicate their work in the
> mean time!

At the moment there is nothing reproducible here from the poster.
But I have seen lots of "experts" in the past who thought they invented
something novel, and then it turned out that they didn't even know all
the JPEG modes, and the progressive JPEG features in particular. So they
had better spend their time learning more about JPEG first, before
claiming an invention.

Regards
Guido

Thomas Richter

unread,
Jan 17, 2005, 4:32:36 AM1/17/05
to
Hi Aleks,

> I have since updated the website to include results of JPEG with
> arithmetic compression and also lossless JPEG "optimization". I don't
> currently have access to software that does JPEG2000 and I'm not quite
> sure how to take a JPEG file and convert it to a JPEG2000 format at
> perceptually identical quality levels. I'm not even sure if that is
> possible/fair starting out with a lossy JPEG.

There are of course several ways to test that part. (-;

The easiest one would be to send me the images, so I can handle it.
Might take a couple of days depending on my workload.

The second easiest is to get the really quite nice JJ2000 reference
software to do the compression. It is not really "ultra-high-quality",
but it serves its purpose. Requires a Java installation:

http://jj2000.epfl.ch/

Then there is a reference implementation in C called "Jasper". I don't post
its link here because it's a pretty bad implementation - it manages to
be slower than the Java software. /-:

> If you want to see the latest results and grab a copy of the test files
> to do your own analysis, you can find it here:
> http://compression.ca/act/act-jpeg.html

Now, this is at least something. (-; OK, just another question: obviously,
as for all "lossy" compression, we need some means of defining the image
quality in order to compare the compression results (as in: compress to the
same quality and see which file is shorter). What would that be for you? The
easiest I can manage is PSNR. (Arguably not a very good one, but it would
take a while to get another image quality measure implemented as a program
here...)

Is that acceptable?

So long,
Thomas

Guido Vollbeding

unread,
Jan 17, 2005, 4:38:11 AM1/17/05
to
Errol Smith wrote:
>
> If you use "jpegtran -optimize -progressive" you should get
> identical results. I found that progressive mode is smaller (sometimes
> significantly) with most images, unless they are particularly small.
> I would be quite curious if you could run the progressive+optimized
> jpegtran versions through Allume and see what differences it makes.

"-progressive" always includes "-optimize" with jpegtran, so there is
no need to test an extra "-optimize -progressive" (identical to
"-progressive").
"-optimize" means building custom Huffman tables by an extra pass.
This is always done by jpegtran (and other IJG-based programs such
as Jpegcrop) when progressive mode is selected, since the statistics
of progressive scans are different from sequential mode, and thus it
is not advisable to use the default tables (and progressive coding
needs multpile passes anyway, so there's no noticeable performance
penalty).

Regards
Guido

Guido Vollbeding

unread,
Jan 17, 2005, 5:10:48 AM1/17/05
to
Fabio Buffoni wrote:
>
> > Results for my a10.jpg file:
> > On my AMD Athlon 1800+ a10.jpg 842468 -> 643403 76.37% in about 4 sec
>
> A couple of tests recompressing a10.jpg using jpegcrop:

Yes, that's easy to use and test :).

> 856024 (huffman default)
> 824441 (huffman optimized)
> 758340 (order-0 arithmetic)
>
> 780872 (progressive - huffman default)
> 780872 (progressive - huffman optimized)

Progressive Huffman always includes "optimized", so they must be the same.

> 737555 (progressive - arithmetic)
>
> 643403 seem to be a quite good work.

That's about 13% less than the "state of the art" (progressive arithmetic).
Nice, but that's about the same amount as the advantage of arithmetic
over Huffman. Now look at the history: *despite* that advantage of
arithmetic over Huffman coding, the arithmetic coding variant is
to this day virtually unused - due to the patent encumbrance!
Again: you *could* reduce your JPEG image space requirements by about
15% *immediately* by using arithmetic instead of Huffman coding, but
*nobody* is doing so.
So why should one assume that people will use yet another method under
these circumstances?
It would be another story if a new *unencumbered* method were presented,
but as it is, it's not very impressive.
(BTW, the JPEG arithmetic coding patents are to expire in about 5 or so
years - we will see what this changes in usage...)

Regards
Guido

Aslan Kral

unread,
Jan 17, 2005, 5:36:02 AM1/17/05
to

"Darryl Lovato" <dlo...@allume.com>, haber iletisinde sunlari
yazdi:BE0424B5.15860%dlo...@allume.com...
> The new technology does NOT break any Information Theory Laws, and will be
> shipped later this qtr in commercial products as well as be available for
> licensing. The new technology does NOT compress "random files", but
rather
> previously "compressed files" and "compressed parts" of files. The
> technology IS NOT recursive.
>

Guys, I am not a JPEG expert, but may I ask a simple question?

I guess this is a special transformation that creates some redundant data
out of some special data (lossy compressed data), which you would then
further compress. But what kind of transformation could create redundant
data out of already lossy-compressed data?

My guess is they do not decode it but reorganize the data in a special way.


Michel Bardiaux

unread,
Jan 17, 2005, 7:44:59 AM1/17/05
to

IIRC both the arithmetic coder *and* decoder are patented. If the touted
new recoder is patented the same way, it is likely to go the same way as
GIF-LZ, RSA, and the flip-top toothpaste tube: you can do without at an
acceptable cost, so you do without.

The developers of this new codec should license the decoder for free,
and allow use of the encoder for development and demonstration purposes,
so there would be an incentive to include the decoder in many software
products.

> (BTW, the JPEG arithmetic coding patents are to expire in about 5 or so
> years - we will see what this changes in usage...)

If my take on arithmetic coding is right, probably not, since every
software piece dealing with JPEG would have to be upgraded to include
the arithmetic decoder.

>
> Regards
> Guido


--
Michel Bardiaux
Peaktime Belgium S.A. Bd. du Souverain, 191 B-1160 Bruxelles
Tel : +32 2 790.29.41

Guido Vollbeding

unread,
Jan 17, 2005, 8:02:58 AM1/17/05
to
Michel Bardiaux wrote:
>
> > (BTW, the JPEG arithmetic coding patents are to expire in about 5 or so
> > years - we will see what this changes in usage...)
>
> If my take on arithmetic coding is right, probably not, since every
> software piece dealing with JPEG would have to be upgraded to include
> the arithmetic decoder.

Which isn't difficult at all - almost all applications use the IJG codec
as a "black box" anyway, so you just grab a new version with arithmetic
coding support enabled and you are done - no other adaptation necessary!
(It's available today in my Jpegcrop program and enhanced library source.)

Regards
Guido

Thomas Richter

unread,
Jan 17, 2005, 8:37:37 AM1/17/05
to
Hi Guido,

> Which isn't difficult at all - almost all applications use the IJG codec
> as a "black-box" anyway,

Where do you get your numbers from? Clearly, IJG is *popular*, and it
might be used for a lot of low-cost end-user applications, but it is
not that heavily used in the B-B market segment, for various
reasons. Companies like Kodak or Siemens don't use IJG, for
example. Besides, JPEG is also part of a lot of other standards
(PDF and DICOM, to name two), and these standards may demand various
restrictions on the layout of the JPEG stream, or may require updating
the corresponding "host software". In other words, it is not at all as
easy as you may wish.

The market works a bit differently: if people can "compress" their JPEGs
using a custom compressor like "Zip" or "StuffIt", things might turn
out easier for them - even though from a scientific p.o.v. this usage
pattern is pretty pointless, and it feels quite "wrong" to me
as well. However, such is life.

So long,
Thomas

Guido Vollbeding

unread,
Jan 17, 2005, 9:21:24 AM1/17/05
to
Thomas Richter wrote:
>
> > Which isn't difficult at all - almost all applications use the IJG codec
> > as a "black-box" anyway,
>
> Where do you take your numbers from? Clearly, IJG is *popular*, and it
> might be used for a lot of low-cost end-user applications, but it is
> not that heavily used in the B-B market segment for various
> reasons. Companies like Kodak or Siemens don't use IJG, for
> example. Besides, JPEG is also part of a lot of other standards
> (PDF,DICOM to name two) and these standards might also demand various
> restrictions on the layout of the JPEG stream, or may require updating
> the corresponding "host software". In other words, it is not at all as
> easy as you may wish.

OK, I don't speak for the B-B market, I rather speak for the free and
open source segment.
But, contrary to your position (and perhaps other than in the J2K area),
I think that IJG is a *superior* JPEG implementation compared to many
commercial ones. If you ask Tom Lane (the IJG organizer) or me, we could
tell you dozens of stories about erroneous proprietary JPEG implementations
which we had to deal with for interoperability reasons.
The IJG implementation is, in my eyes, only inferior to my
enhanced version, and given the new features it brings and its future
potential, people will be well advised to prefer such an implementation
over a proprietary one.
It was, it is, and it will be the open source IJG JPEG implementation
which brings the most advanced JPEG features to the user. Why do you
think that people today are able to rotate or crop their digicam JPEGs
losslessly? This is only due to the IJG jpegtran features introduced
in 1998! I'm not aware of any commercial entity which ever presented
such features independent from or before IJG. And more features are
to come...
I tell you this: The commercial market is not particularly interested
in practical JPEG improvements, because THEY CAN'T MAKE MONEY FROM IT!
So they rather pursue other "technologies" such as Wavelets and JPEG2000
in particular, which are unsuited and inferior, but which have a much
greater potential TO MAKE MONEY FROM.
If you are on this latter train, beware! You won't be able, now or
in the foreseeable future, to compete with widely used JPEG, and IJG
features in particular. Your only chance will be to convince naive
people in other obsolete businesses ("digital cinema" as I've seen on
the commercial JPEG [JPEG2000] site, or established technical "medicine"
as you mentioned earlier, which is undoubtedly the greatest [and
obsolete] business on earth).

Regards
Guido

Matt Mahoney

unread,
Jan 17, 2005, 10:02:41 AM1/17/05
to

You can't compress data by creating redundant data. What you can do is
reorganize data to make existing redundancies easier to find by the
compressor. The Burrows-Wheeler transform is an example of this. It
reorganizes data, sorting the bytes by context. It makes data with
high-order redundancies compressible by an order-0 model.
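
As a toy illustration of that kind of reorganization (a naive BWT sketch
in Python; real implementations use suffix arrays, and this is of course
not Allume's method):

    def bwt(data: bytes) -> bytes:
        # Sort all rotations of the input and keep the last column.
        # A 0x00 sentinel is appended, so the input is assumed not to
        # contain that byte.
        data = data + b"\x00"
        rotations = sorted(data[i:] + data[:i] for i in range(len(data)))
        return bytes(rot[-1] for rot in rotations)

    print(bwt(b"banana"))  # b'annb\x00aa' - like characters end up adjacent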

In the case of JPEG, the image is split into 8 by 8 pixel blocks and
transformed into a set of 64 discrete cosine transform (DCT)
coefficients. The transform does not compress, it just reorganizes the
data to make it more compressible. Compression comes from quantizing
the coefficients (lossy) and Huffman coding the result (lossless).
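
For concreteness, a minimal numpy sketch of that per-block pipeline (the
flat quantization table and random block here are made up for
illustration; real JPEG uses the standard tables scaled by the quality
setting):

    import numpy as np

    def dct2(block):
        # Orthonormal 8x8 DCT-II built from the DCT basis matrix.
        N = 8
        k = np.arange(N)
        C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
        C[0, :] = np.sqrt(1.0 / N)
        return C @ block @ C.T

    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
    quant = np.full((8, 8), 16.0)            # toy quantization table
    coeffs = dct2(block)                     # reorganizes, no compression yet
    quantized = np.round(coeffs / quant)     # the lossy step
    # Baseline JPEG then Huffman-codes `quantized`; that final, lossless
    # stage is the part a recompressor is free to redo with a better model.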

One obvious solution would be to uncompress the JPEG and compress it
with a better algorithm, but this would not work because the
decompressor would have to compress the image with JPEG, which is
lossy, so the output would be different. You could solve this problem
by backing up only to the DCT coefficients and compressing these with
something better than a Huffman code. You still need to be careful to
decompress to a file that is bit for bit identical, not just pixel for
pixel identical. In another post I pointed out some pitfalls, such as
0xFF byte stuffing and redundant Huffman codes. These are probably
rare, but you still need to check for them.

You can do better than the JPEG Huffman codes, at a cost in speed. The
JPEG Huffman code is essentially an order-0 model, except that the DC
coefficient is coded as a difference from the block to the left, and
runs of zeros are coded along a zigzag path through the 63 AC
coefficients. There are actually 5 dimensions in which you could model
the coefficients: 2 for adjacent blocks, 2 for adjacent coefficients in
the block, and 1 for color.
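
A small Python sketch of the baseline symbol structure just described
(illustration only - it leaves out the EOB/ZRL symbols and the size
categories that real JPEG uses):

    # Standard zigzag scan order for an 8x8 block, as (row, col) pairs.
    ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                    key=lambda rc: (rc[0] + rc[1],
                                    rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def block_symbols(q_block, prev_dc):
        # q_block: 8x8 nested list of quantized coefficients.
        # The DC value is coded as a difference from the previous block;
        # the 63 AC values become (zero_run, value) pairs along the zigzag.
        dc = q_block[0][0]
        ac = [q_block[r][c] for r, c in ZIGZAG[1:]]
        runs, zeros = [], 0
        for v in ac:
            if v == 0:
                zeros += 1
            else:
                runs.append((zeros, v))
                zeros = 0
        return dc - prev_dc, runs, dc   # dc is the next block's predictor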

You could also improve compression by using arithmetic coding rather
than Huffman. I don't mean the arithmetic mode in JPEG either. It
uses a speedup trick at a significant cost in compression: rounding the
LPS probability to a power of 1/2 to avoid multiplication (which is
cheap on modern processors anyway).

-- Matt Mahoney

Aslan Kral

unread,
Jan 17, 2005, 11:12:24 AM1/17/05
to

"Matt Mahoney" <matma...@yahoo.com>, haber iletisinde sunlari
yazdi:1105974161.0...@f14g2000cwb.googlegroups.com...

> One obvious solution would be to uncompress the JPEG and compress it
> with a better algorithm, but this would not work because the ...

That is what I was trying to say. If you uncompress it, you get a bitmap
image. Can you get the same JPEG data if you compress it again?
(I don't know the answer.)
However, their algorithm gives you back the same JPEG data. My guess is
they do not uncompress the JPEG data but do something else (reorganize
the data?). But you also say you can't reorganize it to create redundant
data. So there is an answer, even though we do not know it yet.


Thomas Richter

unread,
Jan 17, 2005, 11:36:43 AM1/17/05
to
Hi,

> But, contrary to your position (and perhaps other than in the J2K area),
> I think that IJG is a *superior* JPEG implementation compared to many
> commercial ones. If you ask Tom Lane (organizer IJG) or me, we could
> tell you dozens of stories of erroneous proprietary JPEG implementations
> which we had to deal with for interoperability issues.

Well, how come that "war stories" include bugs in the IJG
implementation? (12bpp? Hint, hint?) I don't know how the state is
now, but IJG had for quite a while not the state of stability good(!)
commercial implementations do - I'm not telling about the "Startup
Enterprise" editions you find here and there, but serious stable ones.
This is one of the top reasons why it is not used for applications
where it really matters (medical, for example, and there it really
matters).

You've some "wishful thinking" here; people at Siemens for example
did specifically *not* use IJG because it had too many problems - they
tried, and they decided against it. I state this as a fact since I
talked to these guys. Have you ever talked to industrial partners?
Do you know what matters for them?

> I tell you this: The commercial market is not particularly interested
> in practical JPEG improvements, because THEY CAN'T MAKE MONEY FROM IT!
> So they rather pursue other "technologies" such as Wavelets and JPEG2000
> in particular, which are unsuited and inferior, but which have a much
> greater potential TO MAKE MONEY FROM.

Guido, you have absolutely no clue. Really. The JPEG2000 market is
absolutely tiny compared to what's earned on traditional JPEG. You
should've worked a couple of years in the industry before making
nonsense statements like this one. (-;

One example: Do you know what it requires to get a piece of software
into a medical device?

Clearly, IJG played a major role in establishing the standard as it is
today, nothing said against that. It still plays a very major
role - but everything in its place. JPEG still sells well -
despite, or because of, the IJG implementation, its quality and its
shortcomings (no ranting implied). And that's not only due to "code"
issues - you need to sell a lot more than "code".

Well, you're under some kind of "locked-in syndrome" here. Sure, it's
"your baby" to some degree, but you should really look at what's going
on elsewhere and what else is available. Sure, everyone likes "his" code
best, but that doesn't mean it's "superior" because of that; leave
this decision to others - the market decides.

> If you are on this latter train, beware! You won't be able actually
> and in the forseeable future to compete with widely used JPEG and IJG
> features in particular. Your only chance will be to convince naive
> people in other obsolete businesses ("digital cinema" as I've seen on
> the commercial JPEG [JPEG2000] site,

There is no commercial JPEG2000 site; it is an ISO standard.
The people there are volunteers who earn nothing for working on it.
Rather, a company has to pay money to contribute to it. (-;

> or established technical "medicine" as you mentioned earlier, which
> is undoubtedly the greatest [and obsolete] business on earth).

Guido, you're really naive. Sorry, but this is almost funny. You've
absolutely no experience in the market, don't tell me what's selling
how and why.

Or maybe, what about a career in marketing? You'd make a good marketing
guy. You're very convinced of yourself. Bad for an engineer, though.

So long,
Thomas

Matt Mahoney

unread,
Jan 17, 2005, 11:57:54 AM1/17/05
to
Aslan Kral wrote:
> "Matt Mahoney" <matma...@yahoo.com>, haber iletisinde sunlari
> yazdi:1105974161.0...@f14g2000cwb.googlegroups.com...
> > One obvious solution would be to uncompress the JPEG and compress
it
> > with a better algorithm, but this would not work because the ...
>
> I was trying to say it. If you uncompress it, you get a bitmap image. Can
> you have the same JPEG data if you compress it again?
> (I don't know the answer.)

No, and that is what makes it hard. JPEG is lossy. If you compress it
again, you lose more data.

> However their algorithm gives you the same JPEG data. My guess is they do
> not uncompress the JPEG data but something else (reorganize the data?).
> But you also say you can't reorganize it to create some redundant data.
> But there is an answer though we do not know it yet.

I mean that you can reorganize the data to discover redundant data that
is already there, not "create" it. For example, encrypted text looks
like random data so you can't compress it. But if you decrypt it, then
you can compress it. Decryption doesn't change the size of the
message. It just reorganizes the data. The encrypted text still had
redundant data. You just couldn't find it without the key.

Anyway they haven't said how it's done so we are speculating. My guess
is they partially decompress the JPEG back to the DCT coefficients and
then recode it more efficiently. This part is lossless, so it
preserves every pixel. But it is not guaranteed to preserve every bit
unless you take extra steps. In other words, this might not work:

To compress:
1. Huffman decode the DCT coefficients (transform)
2. Compress with better model

To decompress:
1. Decompress with better model
2. Huffman code the coefficients (inverse transform)

It could fail because the Huffman decoding discards information about
the location of stuffed bytes, restart markers, and redundant codes
(and maybe other things I didn't think of) that make the transform
many-to-one. I think the safest way to fix this is to verify that
decompression will produce a bitwise identical copy at compression
time:

1. Huffman decode the DCT coefficients (transform)
2. Huffman code (inverse transform) to temporary copy
3. Compare original input and temporary copy
4. If identical then compress with better model
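
A minimal Python sketch of that guard; decode, encode and pack below are
stand-ins for a Huffman decoder, a Huffman encoder and the "better
model" (none of them shown here, and none of them Allume's actual routines):

    from typing import Callable

    def recompress(original: bytes,
                   decode: Callable[[bytes], object],
                   encode: Callable[[object], bytes],
                   pack: Callable[[object], bytes]) -> bytes:
        # Only use the stronger model when re-encoding the decoded
        # coefficients reproduces the input bit for bit; otherwise
        # store the original file untouched.
        coeffs = decode(original)
        if encode(coeffs) == original:
            return b"\x01" + pack(coeffs)
        return b"\x00" + original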

-- Matt Mahoney

Guido Vollbeding

unread,
Jan 17, 2005, 12:04:44 PM1/17/05
to
Thomas Richter wrote:
>
> You've some "wishful thinking" here; people at Siemens for example
> did specifically *not* use IJG because it had too many problems - they
> tried, and they decided against it. I state this as a fact since I
> talked to these guys. Have you ever talked to industrial partners?
> Do you know what matters for them?

If they see problems and appear to have better solutions, why don't
they contribute to the development?
I've never heard from Siemens regarding IJG use, but I've heard from
lots of other companies which do use the code, have no problems and
appreciate it.

> Guido, you have absolutely no clue. Really. The JPEG2000 market is
> absolutely tiny compared to what's earned on traditional JPEG. You
> should've worked a couple of years in the industry before you'd make
> nonsense statements like this one. (-;

OK, nice to hear that you know that.
But of course some marketers say otherwise.

> One example: Do you know what it requires to get a piece of software
> into a medical device?

I have worked several years for a medical research company which also
tried to market medical devices (you see, I said "tried" because it wasn't
very successful) ;-).

> Well, you're here under some kind of "locked in syndrom". Sure it's
> "your baby" to some degree, but you should really look out what's going
> on otherwise and what's else available. Sure everyone likes "his" code
> best; but that doesn't mean that it's "superiour" due to that, leave
> this decision to others; the market decides.

As I said, I have gained some insight into the market, especially the
medical one. Modern medicine is actually the most evil business! It
damages people more than it helps them, all for profit. I know what I
am talking about because I have actually seen it!
Go to http://www.bfgev.de and order the book to learn how modern
medicine damages people and how you can get real health.

> There is no commercial JPEG2000 side; it is a standard of the ISO.
> The peoples in here are volunteers that earn nothing to work at it.
> Rather, a company has to pay money to contribute to it. (-;

So what is http://www.jpeg.org ? Why do they advertise JPEG2000 and
what are the mentioned "sponsors" doing? I must tell you that I have
met some people from those "sponsors" in real life, and thus I know
what they are interested in - they are certainly NOT interested in
real JPEG improvements, that's my experience, and they really know
less about JPEG than the people who made that standard until 1992!

Regards
Guido

Fabio Buffoni

unread,
Jan 17, 2005, 1:30:44 PM1/17/05
to

> But can you reverse it and get an identical file?

Of course I can't.

Jeff said the images were all bit for bit identical. This means that the
compressor must also be robust to redundant-Huffman-code steganography.
Their work really impresses me.

I'd like to test the program using invalid jpegs, and see how the
compression varies when 1..n bits/bytes are randomly changed in the data
stream.

A second test could also be done by removing the Huffman coder from a jpeg
compression program and trying to pack the result with PAQ or WinRK.
This has the problem that you can't reverse it, but you can get an idea of
how good their work is.
And I think it's very good, both technically and commercially.

FB
