
basic image compression problem


ALEX NG

Feb 27, 2003, 3:51:47 AM
Hi,
In general, you cannot have one image coder that compresses well for ALL
images. Some coders are good for some images, and others are good for
other images.

Why does this happen?




Thomas Richter

Feb 27, 2003, 4:46:23 AM
Hi,

> In general, you cannot have one image coder that compresses well for ALL
> images. Some coders are good for some images, and others are good for
> other images.

> Why does this happen?

Because each encoding process more or less depends on a statistical
model that is implied in how the compression algorithm is set up. How
well this statistical model applies depends on the image.

This does not only hold for images, of course. Each compression
technique needs to do some kind of modelling. Hence, roughly speaking,
the information you compress moves from the data into the model. The
better the model fits the data, the more information can be moved from
the target data into the model, hence allowing compression.
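A quick sketch in Python makes this concrete (zlib stands in for the
fixed coder; the two inputs are made-up extremes):

  # One coder, two inputs: the achievable ratio depends on how well the
  # data fits the coder's implicit statistical model, not on the coder alone.
  import os
  import zlib

  n = 1 << 16
  smooth = bytes((i // 256) & 0xFF for i in range(n))   # long runs, very redundant
  noise = os.urandom(n)                                 # nothing for the model to exploit

  for name, data in (("smooth", smooth), ("noise", noise)):
      ratio = len(data) / len(zlib.compress(data, 9))
      print(name, round(ratio, 2))   # smooth compresses heavily, noise hardly at all

The coder did not change between the two runs; only the fit between the
data and its model did.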

So long,
Thomas

ALEX NG

Feb 27, 2003, 9:32:42 AM
Hi,
Hmm... for example, why can't the wavelet, e.g. the 5/3 lossless mode,
achieve better performance than JPEG-LS?

Thomas Richter

Feb 27, 2003, 10:09:08 AM
Hi,

> Hmm... for example, why can't the wavelet, e.g. the 5/3 lossless mode,
> achieve better performance than JPEG-LS?

This statement is of course not correct in general. "It pretty much
depends", as for all other compressors. The target application for
JPEG2000 was, however, the lossy compression route, and the lossless mode
was "nice to have". The way it is done fits nicely with all the remaining
compression schemes in JPEG2000; it is just a minor modification of
the quantizer, the rate allocator and the wavelet. As such, it offers all
the features an embedded wavelet codec could possibly offer (-; Ok, well,
"most", but still much more than JPEG-LS. JPEG-LS, on the other
hand, was designed with exactly this kind of compression in mind from
scratch, and did not target features or lossy compression. Hence, as
JPEG2000 has a broader target, it was designed for a wider range of
compression applications, and therefore has a less specialized model,
so to say.

To speak in the language I used above: JPEG2000 implies a statistical
model that is good for lossy image compression; it also provides more
features. Specifically, the bitplane encoder part costs rate for lossless
compression, especially for the lowest bitplanes, which could be neglected
in lossy compression; if one had wanted to sacrifice the embeddedness, a
multi-level arithmetic encoder instead of the EBCOT/MQ-coder would have
performed better in terms of raw compression.

JPEG-LS, on the other hand, uses a model without any kind of "bitplane"
paradigm involved.
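To see why the lowest bitplanes cost so much rate, here is a rough sketch
(assuming NumPy; the Laplacian samples are only a stand-in for real subband
coefficients, which have a similarly peaked distribution):

  # Per-bitplane entropy of synthetic "wavelet coefficients": the high
  # planes are mostly zero and cheap, while the lowest planes approach
  # one bit per sample, i.e. essentially incompressible noise.
  import numpy as np

  rng = np.random.default_rng(0)
  mag = np.abs(rng.laplace(scale=8.0, size=100000).astype(int))

  for k in range(7, -1, -1):
      p = float(((mag >> k) & 1).mean())
      h = 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)
      print("bitplane", k, ": %.3f bits/sample" % h)

A lossy coder simply stops before those expensive planes; a lossless coder
must pay for all of them.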

Greetings,
Thomas

ALEX NG

Feb 27, 2003, 11:29:19 PM
So for lossless mode, the prediction scheme is always better than
wavelet lossless compression. The predictor can predict well because it
predicts from local pixels (as a result of a better model); can I say that?
On the other hand, the wavelet is a general method that does not concern
itself with the image type, and we need the wavelet because it can provide
other features like region-of-interest, zoom, etc.
Am I right to say that?

"Thomas Richter" <th...@cleopatra.math.tu-berlin.de> 撰寫於郵件新聞
:b3l9mk$22g$2...@mamenchi.zrz.TU-Berlin.DE...

Thomas Richter

Feb 28, 2003, 4:34:47 AM
Hi Alex,

> So for lossless mode, the prediction scheme is always better than
> wavelet lossless compression.

Never say "always" in the compression business. For typical images, you may
get better prediction.

> The predictor can predict well because it predicts from local pixels (as
> a result of a better model); can I say that? On the other hand, the
> wavelet is a general method that does not concern itself with the image
> type, and we need the wavelet because it can provide other features like
> region-of-interest, zoom, etc.
> Am I right to say that?

A wavelet filter is also a predictor. Specifically, the 5/3 wavelet
predicts the pixel between two pixels as the mean of these two. (See,
for example, "Building Your Own Wavelets at Home" by Wim Sweldens and Peter
Schröder.) Hence, I wouldn't say that the wavelet is the part where
JPEG2000 has its losses, though this wavelet predictor looks different
from the JPEG-LS predictor. The IMHO problematic part is the bitplane encoder
behind it. Now, for lossless coding, it is quite obvious that we need to
encode *all* bitplanes, and it is also easy to guess what the data of the
least-significant bitplanes looks like: It consists more or less of
quantization noise that is rather hard to encode. On the other hand, it is
just this bitplane encoder that guarantees the embedded stream of JPEG2000,
and for lossy encoding, you won't encode this noise anyhow but will just
throw it away.
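To make the predict step concrete, a toy sketch in Python (boundaries
ignored; the scanline values are made up):

  # 5/3 predict step: each odd sample is guessed as the floored mean of
  # its two even neighbours; only the residual goes into the highpass.
  x = [100, 102, 104, 104, 103, 101]   # made-up scanline values
  detail = [x[2*i + 1] - ((x[2*i] + x[2*i + 2]) >> 1)
            for i in range(len(x) // 2 - 1)]
  print(detail)   # near zero where the signal is locally linear

Where the image is locally smooth, the residuals are tiny; a sharp edge
produces one large residual in the highpass.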

As for the statistical model behind this bitplane encoding: This is of
course rather hard to describe. Let me try: JPEG2000 predicts that the
most significant bits of two neighbouring pixels look similar. (Very
roughly speaking.) The wavelet highpass filter of such a pixel pair will
have non-zero data only in the lower bitplanes, which will be encoded
last, and for lossy compression, possibly not at all. Now, I would say
that this "bitplane oriented" paradigm is not quite natural and hence
the cause of the loss.

Greetings,
Thomas

ALEX NG

Feb 28, 2003, 10:15:34 AM
Hmm... prediction in a wavelet is a global one; it does not concern the
details in the image. All the pixels undergo the mean-value prediction (5/3)
as you mentioned. However, JPEG-LS or CALIC uses adaptive prediction, which
considers the local changes, and so I believe such predictors can outperform
the wavelet.

I agree that compression should depend on the image type/details. It is
hard to have a general scheme that performs well over all images.
"Thomas Richter" <th...@cleopatra.math.tu-berlin.de>
???????:b3nafn$jja$1...@mamenchi.zrz.TU-Berlin.DE...

Thomas Richter

Feb 28, 2003, 11:07:07 AM
Hi Alex,

> Hmm... prediction in a wavelet is a global one; it does not concern the
> details in the image.

I'm sorry, but I don't agree here. The first high-pass collects details at
a scale of one pixel; I would call this a very "local" detail level indeed.
The next high-pass covers pixel differences at a scale of two, and so on.

> All the pixels undergo the mean-value prediction (5/3) as you mentioned.
> However, JPEG-LS or CALIC uses adaptive prediction, which considers the
> local changes, and so I believe such predictors can outperform the
> wavelet.

Sorry, but I wouldn't accept this as an argument. The complete filter that
is required to get the deepest low-pass of a wavelet decomposition tree from
the original image has finite support and hence covers a domain that is
similar to that of a JPEG-LS predictor. The only things that differ are the
value to which pixels are predicted, and how the difference between
prediction and real value is encoded. While one can certainly argue about
whether wavelet-based prediction outperforms JPEG-LS prediction or not, I do
not see *that much* of a difference in the working principle from an abstract
point of view. Both are predictors of some kind.
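For comparison, the JPEG-LS predictor is equally compact. A sketch (a, b
and c are the left, above and above-left neighbours of the current pixel):

  # Median edge detector (MED) from LOCO-I/JPEG-LS: pick the horizontal
  # or vertical neighbour at an edge, else the planar guess a + b - c.
  def med_predict(a, b, c):
      if c >= max(a, b):
          return min(a, b)
      if c <= min(a, b):
          return max(a, b)
      return a + b - c

  print(med_predict(100, 120, 100))   # edge detected: predicts 120 (above)

So both schemes guess a pixel from a small neighbourhood and encode the
residual; they differ in the guess itself, not in the principle.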

Greetings,
Thomas

ALEX NG

Mar 1, 2003, 8:15:39 PM
OK, let's discuss some real figures.
Last night, I did lossless compression using JPEG2000 and JPEG-LS.

I got some results (in terms of compression ratios):

        JPEG2000   JPEG-LS
Barb1     1.73       1.70
Barb2     1.67       1.70
Boats     1.96       2.03
cats      3.16       3.11
water     2.30       2.30

For Barb1 and cats, JPEG2000 did well, while for the others JPEG-LS
performed better.

Can you explain the difference in the results?

Thanks

Thomas Richter

Mar 2, 2003, 8:23:11 AM
Hi,

> I got some results (in terms of compression ratios):

>         JPEG2000   JPEG-LS
> Barb1     1.73       1.70
> Barb2     1.67       1.70
> Boats     1.96       2.03
> cats      3.16       3.11
> water     2.30       2.30

> For Barb1 and cats, JPEG2000 did well, while for the others JPEG-LS
> performed better.

> Can you explain the difference in the results?

The differences all sound pretty marginal.
Well, without looking at the images, hardly. (-;

So long,
Thomas

ALEX NG

Mar 2, 2003, 12:31:44 PM
You mean the content will affect the results?

"Thomas Richter" <th...@cleopatra.math.tu-berlin.de> 撰寫於郵件新聞
:b3t0jv$l3r$1...@mamenchi.zrz.TU-Berlin.DE...



Raymond Wan

Mar 2, 2003, 4:34:34 PM

Hi,

	Oh, of course! That's why for compression you need a standard
set of files (Lena, for example), but doing well on all of these files is
(unfortunately) still not proof that it's a great compression system and
all research in compression can stop. :)

	As for the results you posted, they're actually not that
significant. The difference is in the second decimal place, but more
importantly, neither is consistently better... For some files, JPEG2000
is better; for others, it's JPEG-LS. In many compression results, people
compare algorithms with, let's say, 10 files, and the conclusion is
usually "for 7 of the files, our system was better". And that's a fact.
But if you did well on all 10 files and concluded "our system is the
best" -- well, few people ever do that because that's too strong a claim.

Ray

On Mon, 3 Mar 2003, ALEX NG wrote:

> You mean the content will affect the results?
>

> "Thomas Richter" <th...@cleopatra.math.tu-berlin.de> wrote in message news:...

ALEX NG

Mar 2, 2003, 7:59:15 PM
Yes, the results are marginal. But what makes the differences?
"Raymond Wan" <rw...@cs.mu.oz.au> wrote in message
news:Pine.LNX.3.96.103030...@vike.cs.mu.OZ.AU...


Raymond Wan

Mar 2, 2003, 10:56:41 PM

Hi,

	Sorry, but I guess I wasn't precise enough. In compression, you
cannot answer the question "what makes the difference" if (a) the
results are only marginally different and, more importantly, (b) neither
consistently outperforms the other.

	It sounds like you want a definite answer... but you're not going
to get one, because the "proof" given by your results is basically
inconclusive for the above two reasons.

ALEX NG

Mar 2, 2003, 11:43:57 PM
Alright. If the results showed, let's say, a big difference, what would
be the main factors affecting them?

Raymond Wan

Mar 3, 2003, 1:22:30 AM

On Mon, 3 Mar 2003, ALEX NG wrote:
> Alright. If the results showed, let's say, a big difference, what would
> be the main factors affecting them?

	Hmmmm... well, compression algorithms generally apply a model to
the data to arrive at some probability estimates. One algorithm may be
better than another based on how good its model is. In text (and yes, I
realise you're talking about images), PPM is known to perform better than
LZ77 techniques since PPM uses a better model. To be precise, PPM generates
probability estimates using far more of the data than LZ77 does; LZ77 uses
a fixed window size. For Deflate, that's 32 kB, while PPM can be set to use
MBs of data.

	How about using different data? I suppose if you found data that
never needed more than the previous 32 kB to compress the current position
in the message, then the extra "power" of PPM would not be required, and in
this case Deflate might do better. So even saying that PPM is great has
a small exception attached to it.
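	That window limit is easy to demonstrate; here is a sketch with
Python's zlib (the 20 kB and 40 kB sizes are arbitrary, chosen to straddle
the 32 kB window):

  # Deflate can only point back 32 kB. A repeated block within the
  # window is replaced by a match; the same block 40 kB back is not.
  import os
  import zlib

  block = os.urandom(20000)                  # incompressible filler
  near = block + block                       # repeat inside the window
  far = block + os.urandom(40000) + block    # repeat outside the window

  print(len(zlib.compress(near, 9)))   # about one block's worth
  print(len(zlib.compress(far, 9)))    # about three blocks' worth

A PPM-style model with a large enough memory would typically still exploit
the distant repeat that Deflate misses.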

	But conclusions such as these can be drawn either by knowing the
algorithms well or by running experiments and noticing a pattern.
Unfortunately, from the few trials you ran, no conclusion can be drawn. If
you're still curious, I suggest you either understand both algorithms well
and/or run more experiments and start grouping files together... for
images, I'm not sure what you would use... maybe the number of objects,
variations in colour, etc. For text, one would group by the structure of
the data (English text, HTML, etc.), the type of data, or even the
language of the data. It is known that the results you see for English
text would be worse for, let's say, Chinese text. When you go into
two-byte or four-byte character sets, I think you'll end up with a
compression ratio of about 50% at best (don't quote me on this, though).

Sorry, I know I was talking about text compression, but some of
what I said is still relevant...just scatter the word "lossy" around my
message... ;-)

Ray


Dmitry Shkarin

Mar 2, 2003, 6:33:39 PM
Hi, ALEX!

> For Barb1 and cats, JPEG2000 did well, while for the others JPEG-LS
> performed better.
>
> Can you explain the difference in the results?
Both Cats and Barb have periodic patterns (the meshes for Cats and the
tablecloth for Barb); that is a rare case in real photos.

More results (bpp):

          LOCO(HP)   JasPer
BALOON     2.904     3.032
BARB       4.691     4.601
BARB2      4.686     4.788
BOARD      3.675     3.771
BOATS      3.932     4.063
GIRL       3.925     4.063
GOLD       4.477     4.601
HOTEL      4.382     4.585
LENA       4.237     4.303
ZELDA      3.888     3.878
Average    4.080     4.168


ALEX NG

Mar 3, 2003, 3:37:22 AM
Hi,
Thanks for your results. I guess that because the images contain lots of
variation, it is hard to perform the prediction well, and thus the results
are not good.

Your results show that JasPer did better than LOCO for most of the images
except BARB and ZELDA (though there it is only marginally better).

I just have a question: why was the 5/3 integer wavelet chosen for the
lossless mode of JPEG2000?
Is it the simplest to perform?

"Dmitry Shkarin" <dmitry....@mtu-net.ru> 撰寫於郵件新聞
:b3v35m$on$1...@gavrilo.mtu.ru...

ALEX NG

Mar 3, 2003, 3:41:04 AM
Sorry, I made a mistake: you said the results are in bpp (I assumed they
were compression ratios). So JPEG-LS certainly performed well.
"ALEX NG" <al...@gorex.com.hk> wrote in message
news:b3v47o$t2...@imsp212.netvigator.com...

Thomas Richter

Mar 3, 2003, 4:35:24 AM
Hi,

> Thanks for your results. I guess that because the images contain lots of
> variation, it is hard to perform the prediction well, and thus the results
> are not good.

The "variations" are not the problem per se. The question is how well the
variations fit into the model of the apriopriate encoder. Typically, wavelet
encoding works well if you have areas with a very flat statistics and
smooth edges (sky, water, clouds,...) and performs worse for contrast edges
that generate high peaks in the wavelet bands. The effective length of the
wavelet "predictor" depends on the number of decomposition levels, but
I guess it has been set to five here (that's the default for Jasper). In
this setting, the predictor is longer than that of JPEG-LS and hence adapts
better to long-range interactions within the image.

> I just have a question: why was the 5/3 integer wavelet chosen for the
> lossless mode of JPEG2000?
> Is it the simplest to perform?

The 5/3 wavelet has an integer lifting that requires only powers of two
in the denominators. Hence, the implementation requires only integer
arithmetic (addition, subtraction and right-shift) and is therefore very easy
to implement. Due to its integer nature, it can be implemented losslessly. The
9/7 filter of the lossy path of JPEG2000, however, has a lifting that contains
irrational numbers and hence can only be implemented approximately on a
finite machine (such as your PC, for example ;-). IIRC, the 13/7 wavelet would
be the next candidate for lossless, but it was considered too complex.
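For the curious, the whole reversible transform fits in a few lines. A
sketch of one decomposition level in Python (whole-sample symmetric
extension at the borders; even-length input of at least four samples
assumed):

  # Reversible 5/3 lifting: only adds, subtracts and shifts. Because the
  # inverse removes exactly the floored terms the forward step added,
  # the round trip is bit-exact, which is what "lossless" requires.
  def mirror(j, n):                      # symmetric border extension
      return -j if j < 0 else (2*(n-1) - j if j > n-1 else j)

  def fwd53(x):
      n, m = len(x), len(x) // 2
      d = [x[2*i+1] - ((x[mirror(2*i, n)] + x[mirror(2*i+2, n)]) >> 1)
           for i in range(m)]            # predict: odd minus mean of evens
      s = [x[2*i] + ((d[mirror(i-1, m)] + d[i] + 2) >> 2)
           for i in range(m)]            # update: lowpass tracks the local average
      return s, d

  def inv53(s, d):
      m = len(s); n = 2 * m
      x = [0] * n
      for i in range(m):                 # undo the update first
          x[2*i] = s[i] - ((d[mirror(i-1, m)] + d[i] + 2) >> 2)
      for i in range(m):                 # then undo the predict
          x[2*i+1] = d[i] + ((x[mirror(2*i, n)] + x[mirror(2*i+2, n)]) >> 1)
      return x

  row = [10, 12, 14, 16, 90, 91, 92, 93]
  assert inv53(*fwd53(row)) == row       # exact reconstruction

Run it on any integer row and the assert holds; replace the shifts with the
9/7's irrational lifting factors and that exactness is gone.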

Greetings,
Thomas

Thomas Richter

Mar 3, 2003, 4:37:12 AM
Hi Alex,

> You mean the content will affect the results?

Yes, definitely. The better the image fits the model of the compressor,
the better the results will be. That's one of the important lessons of
the compression business. There is no such thing as "the best image
compression". It pretty much depends on the image.

Greetings,
Thomas
