
[Discussion] Sign Prediction for DCT coefficients


James Colin

Nov 22, 2006, 12:22:49 AM
Hi all,

I hope some of you might have read Stuffit's patent. It claims that the sign
of the DCT coefficients can be predicted using an Up predictor and a Left
predictor. It seems that the Up predictor works from data in the DCT
block above, and the Left predictor from data in the DCT block to the left.

+------+------+
| | Up |
+------+------+
|Left |Curr |
+------+------+


+------+
| | = 1 DCT block 8x8
+------+

+------+
| Curr | is the current block for which the signs of the DCT
+------+ coefficients are predicted.

Does anyone have any idea about these predictors?

James Colin

Guido Vollbeding

Nov 22, 2006, 7:35:47 AM
James Colin wrote:
>
> I hope some of you might have read Stuffit's patent. It claims that the sign
> of the DCT coefficients can be predicted using an Up predictor and a Left
> predictor.

This is not new.
The standard JPEG lossless mode with arithmetic coding uses a similar
technique. See section H.1.2.3 and Figure H.2 in the JPEG standard,
or section 12.5 and Figure 12-8 in the Pennebaker/Mitchell JPEG book.
A 2-D statistical model is used as an extension to the normal DC
difference coding.

Note that in the course of our JPEG enhancement efforts at ITU and IJG
we also consider similar extensions to the usual DCT mode of operation,
particularly 2-D DC coding. It seems that Stuffit is getting more
benefit by also extending the AC coding in this direction.

But beware: This kind of processing requires more buffer memory than
the usual sequential JPEG coding. The Stuffit patent mentions this:
You need to buffer two full rows of DCT blocks.
In our JPEG enhancement discussions at ITU/IJG we therefore propose
a special "semi-progressive" mode to account for this different
requirement (it will be a special configuration within the
progressive mode which can be detected by smart decoders,
thus leaving the sequential requirements unchanged).
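
To illustrate the idea, here is a minimal sketch of such an up/left
sign predictor (the names and the simple two-neighbour vote are only
illustrative, not taken from the Stuffit patent or from Annex H):

    #include <stddef.h>

    /* Guess the sign of coefficient k of the current block from the
       corresponding coefficients of the dequantized neighbour blocks.
       up/left each point at 64 coefficients; either may be NULL at
       the image border.  Returns +1, -1, or 0 (no prediction).  A
       real codec would keep per-frequency statistics; this just
       takes a two-neighbour vote. */
    static int predict_sign(const short *up, const short *left, int k)
    {
        int vote = 0;
        if (up   && up[k])   vote += (up[k]   > 0) ? 1 : -1;
        if (left && left[k]) vote += (left[k] > 0) ? 1 : -1;
        return (vote > 0) - (vote < 0);
    }

    /* The buffering cost mentioned above: the row of blocks above
       must stay around while the current row is decoded - two full
       rows of DCT blocks, which plain sequential JPEG never needs. */
    typedef struct {
        short *prev;         /* row above, blocks_per_row * 64 coeffs */
        short *curr;         /* row being decoded */
        int    blocks_per_row;
    } block_rows;

    static void advance_row(block_rows *r)
    {
        short *t = r->prev;  /* finished row becomes the new "up" row */
        r->prev  = r->curr;
        r->curr  = t;
    }

The entropy coder would then code only whether the guess was right,
which is cheap whenever the guess is usually correct.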

The Stuffit results are truly remarkable. I confirmed by my own
testing of their program that it achieves an additional 25% average
compression of JPEG images *in real application conditions*
(as opposed to the unsubstantiated claims of JPEG-2000, for example).
So the results given in their white paper:

http://www.stuffit.com/imagecompression/wp_stuffit_imgcomp.pdf

are really justified and remarkable.
It confirms my firm finding that the core DCT is the proper base
for image coding and future advancements, and all other attempts
deviating from the core DCT are a waste of time and will prove to be
a mistake and fail eventually.
Anyone still seriously pursuing other techniques is a fool.

Regards
Guido Vollbeding
Organizer Independent JPEG Group

CBS

Nov 22, 2006, 8:21:19 AM
to Guido Vollbeding
Guido Vollbeding wrote:
> James Colin wrote:
[...]
> Anyone still seriously pursuing other techniques is a fool.


~500 years ago: Anyone having doubts about the geocentric view of the world is a fool.

cr88192

Nov 22, 2006, 9:04:27 AM

"Guido Vollbeding" <gu...@jpegclub.org> wrote in message
news:45644423...@jpegclub.org...

> James Colin wrote:
>>
>> I hope some of you might have read Stuffit's patent. It claims that the sign
>> of the DCT coefficients can be predicted using an Up predictor and a Left
>> predictor.
>
> This is not new.
> The standard JPEG lossless mode with arithmetic coding uses a similar
> technique. See section H.1.2.3 and Figure H.2 in the JPEG standard,
> or section 12.5 and Figure 12-8 in the Pennebaker/Mitchell JPEG book.
> A 2-D statistical model is used as extension to the normal DC
> difference coding.
>
> Note that in the course of our JPEG enhancement efforts at ITU and IJG
> we also consider similar extensions to the usual DCT mode of operation,
> particularly 2-D DC coding. It seems that Stuffit is getting more
> benefit with also extending the AC coding in this direction.
>
<snip>

> It confirms my firm finding that the core DCT is the proper base
> for image coding and future advancements, and all other attempts
> deviating from the core DCT are a waste of time and will prove to be
> a mistake and fail eventually.
> Anyone still seriously pursuing other techniques is a fool.
>

yes, albeit I will argue there are cases where alternatives make more sense
(lossless compression, simplicity, speed, ...). however, my main tool of
choice in these cases has been linear prediction.
as for wavelets, personally I don't understand them.


in my case, recently I have been using plain jpegs for image storage, but
as mentioned above I have (among other things) a few major complaints
about jpeg.

my complaints are:
I don't much like the specifics of the image structure;
the way they have done components and downsampling doesn't make much sense
imo;
the way they have done interlacing is imo annoying;
...


so, assuming I were just to do my own jpeg-style format, I would likely make
a few changes:
fewer headers (SOF and SOS, and some info from JFIF, merged into a single
header);
components are managed via some number of fixed modes;
downsampling is either fixed with the mode, or some power-of-2 based rule;
likely I would change how escaping is handled slightly (mostly making it a
little less expensive);
potentially changing how coded values and rle are handled;
differently predicted dc coefficients;
...

or, in short, clearing up a few places that have given me trouble, or just
raised my annoyance, when implementing jpeg encoding/decoding.

some of them though, I have to wonder why they were done that way originally
(in particular, the handling of components, subsampling, and interlacing).

ie:
mode 0: Y, Y
mode 1: YCbCr (interlaced, 1:1:1), YCbCr
mode 2: YCbCr (interlaced, 2x2:1:1), YYYYCbCr
mode 3: YCbCr (non-interlaced, 1:1:1), Y* Cb* Cr*
mode 4: YCbCr (non-interlaced, 2x2:1:1), Y* Cb* Cr*
mode 5: YCbCrA (interlaced, 1:1:1), YCbCrA
mode 6: YCbCrA (interlaced, 2x2:1:1), YYYYCbCrA
mode 7: YCbCrA (non-interlaced, 1:1:1), Y* Cb* Cr* A*
mode 8: YCbCrA (non-interlaced, 2x2:1:1), Y* Cb* Cr* A*

or less...


then again, one can doubt if there is much point.
much like zip and deflate, jpeg is de-facto to the point of being seemingly
more or less set in stone.

cr88192

Nov 22, 2006, 10:34:11 AM

"CBS" <cbs.s...@web.de> wrote in message news:45644EC...@web.de...

[me drifting well into OT land here]

yes, this is a danger of mixing beliefs, philosophy, and religion...

so, I guess one needs to avoid the risk of being closed minded, but also
needs to be discerning.


a particularly popular example of this right now, is the ongoing battle
between people who believe in evolution, and those who believe in
creationism.


so, the former camp will argue that evidence points their way, and that
everything just going poof out of nowhere is 'impossible' anyways.

the latter camp will largely argue that there are strong reasons against
evolution as well (in particular, 'irreducible complexity' and similar).


so, the reasoning of the former is this:
things can't suddenly occur without some cause, and the existence of a
creator is assumed not to be the case (in particular, one who is uninhibited
by the laws of conservation).

however, just the same, many religious types will deny the possibility of
evolution outright, and may think of it solely as a conspiracy intended to
demean their orthodoxy.

who knows really, this is a difficult question, probably also with a
difficult answer.


an example I can imagine is this:
we have a person using a computer;
the user can just the same code stuff up or interact with stuff, but from
within the world of software and data, such a user would be an impossibility
(what is this user? where is his code? where are the files defining his
existence? where is his installation path? ...).

user exists, computer exists, but the user is not a part of the computer,
but at the same time exerts influences far greater than any piece of code or
data.


so, one can debate about whether or not things were created, and by what
means, and what is the correct view of said creator, ...

and further on this extends, what branch is true, what is correct doctrine,
what are the rules and practices that need to be adhered to, ...


in the end, what does it matter?
if people think this is the point of religion, likely they are missing the
point.

it is like, if one looks into it enough, the answers start to become just as
odd as the questions.


then again, until not too long ago (measurable in months), I could hardly be
called religious either. still a long hard grind though...

some will disagree here, but these are my beliefs:

and the most central and emphasised points seem those most often forgotten.

things seem to become a mass of rituals and doctrine, thinking the point is
certain rituals, beliefs, or even good moral standing or works (paying
tithes, giving money to those in need, ...).

so, the central premise is to love God, love others (eg: not being greedy or
self-centered), and to be willing to pursue the truth (even if it happens to
go right against one's own beliefs or doctrine).

and, in a way, all the practices, occurrences, beliefs, and doctrine, are
largely a means by which to emphasise this point (and also to pay respect),
yet so many end up thinking that they are the point in themselves (rituals
and/or moral living being the key to good standing).

as odd as it would seem, religion almost seems to rule out its own purpose
for existence (or obligation for its adherence), yet in a way, its purpose
is very well evident...


and on the opposite side, there is a simple statement, so commonly adhered
to by most people that people think of it as nothing ('do what you will').

and so, they practice religion, for means unknown, thinking maybe that by
their practices or profession of beliefs they will be saved from destruction
(aka: eternal pain and torment), but who knows?...

and just as confusing as the answer to the central point is the question of
who is doomed to this fate, and who is free from it.


then again, I guess any real explanation is doomed to crashing and burning
here...


or something...

Guido Vollbeding

Nov 22, 2006, 11:33:29 AM
CBS wrote:
>
> > Anyone still seriously pursuing other techniques is a fool.
>
> ~500 years ago: Anyone having doubts about the geocentric view of
> the world is a fool.

You forget one thing:
JPEG HAS PROVEN to be the image format of choice for the vast
majority of applications! Nothing else comes even close to
the wide application support of JPEG. There are reasons for this.
If you don't see that, you are simply ignoring the reality.

And what I say is that there ARE possible advances. But these
advances are NOT to be found by deviating from the DCT, as many
people mistakenly believe, but by digging deeper in the essence
of the DCT! Until today, nobody else has come close to understanding the
essence of the DCT - it is largely misunderstood. That is the
reason for some missing features and for the mistaken research
direction.

You can go on to ignore the reality and to misunderstand the
basics - that is what many people still do today, and I don't
assume this to change soon. But I'm not following the crowd
going the wrong direction and wasting my time - I'm going to
realize the true findings against all the mistakes...

Pete Fraser

Nov 22, 2006, 11:58:57 AM
"cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
news:e5fab$45646dff$ca83a8d6$24...@saipan.com...

[Lengthy OT ramble on evolution / creationism]

Reminds me of a cartoon:

http://www.doonesbury.com/strip/dailydose/index.html?uc_full_date=20051218


Thomas Richter

Nov 22, 2006, 12:11:39 PM
James Colin wrote:
> Hi all,
>
> I hope some of you might have read Stuffit's patent. It claims that the sign
> of the DCT coefficients can be predicted using an Up predictor and a Left
> predictor. It seems that the Up predictor works from data in the DCT
> block above, and the Left predictor from data in the DCT block to the left.

> Does anyone have any idea about these predictors?

Sure, the idea is that there is some correlation between the DCT blocks
that is otherwise not exploited. The same type of idea is used in many
other compression schemes, e.g. JBIG or JPEG-LS, where the "coefficients"
are pixels rather than DCT coefficients, or in EBCOT, where they are
DWT coefficients rather than DCT coefficients.

The important point here is, as already stated, "correlation". Clearly,
one would expect the correlation to depend on the pixel distance,
i.e. to fall off as the distance grows.

In that sense, DCT blocks are not optimal because they put a lower bound
on the distance at which *this* specific type of prediction can be
exploited - it would be more interesting to have a transform that is
"local enough" in both the frequency and the spatial domain to exploit
the correlation between neighbouring pixels. Or, if you want to put it
that way, find a decorrelation function that is *not* linear like the
DCT or DWT (because that is what this is: a nonlinear decorrelator as a
patch-up of a linear DCT decorrelator).

Well, it's a good "patch" on top of an existing technology that works
fine. It's not the best way to go if you're seeking new compression
technology, because it - the initial DCT step - constructs rather
artificial barriers that limit the usefulness of the predictor due to
the implied pixel distance. Specifically, why do *only* a linear
transformation within the block, and *only* prediction outside? (I
think that rather puts it best.) As for StuffIt depending on JPEG-1,
there's not really a choice, of course.

Besides, of course, exploring all types of decorrelation you can find
is always a good idea. (-:

So long,
Thomas


Guido Vollbeding

Nov 22, 2006, 12:09:45 PM
cr88192 wrote:
>
> yes, this is a danger of mixing beliefs, philosophy, and religion...
>
> so, I guess one needs to avoid the risk of being closed minded, but also
> needs to be discerning.

Your problem, and the problem of most other people dealing with the
subject, is that you are confused. You don't know what is right,
you don't understand the basics, you don't have a firm knowledge,
you don't have a solid position in your point of view.

And that is exactly the difference to my position:
I DO know what is right, I DO understand the DCT basics, I DO have
a firm knowledge and experience about the subject, and I DO have
a solid position with the approach. I am NOT confused, I do NOT
have a weak position, I can NOT fall for hoaxes and misleading
marketing hypes.

The reason for this is that I have a different source of knowledge.
Your, and most other people's, source of knowledge is your intellect
- and that is weak, unstable, and prone to error.

You must understand one thing: The human intellect has no root in
reality. If you are identified with your intellect alone, you will
never understand anything about reality, you will be virtually dead.
You will endlessly dream and speculate and calculate, but as long as
you don't see the reality, you will get nowhere. As long as the
intellect is the base of your competence, you will be lost.
You must EXCEED your intellect, and get intuition. Intellect plus
intuition is intelligence. So I say you must be intelligent, and
that is not the same as intellectual. Intelligence is the sum of
intellect and intuition.

The problem is that intuition is in a completely different sphere of
reality, it is a different dimension. It is the land of mystery.
From an intellectual perspective intuition is a land of mystery,
because the intellect has no access to this dimension, the intellect
cannot explain the contents of this sphere.
Intuition is the higher dimension, intellect is the lower dimension.
The higher dimension can enter the lower dimension, but the lower
dimension cannot enter the higher dimension. So an individual with
access to the higher dimension can bring some fragments into the
lower dimension, but not the other way. If you want a deeper
understanding, you must discard identification with your intellect,
and open the door so that intuition can enter. You must climb
to the land of intuition, and the method is to discard identification
with your intellect.
Intuition is the mountain crest. You cannot bring the crest to the
valley, you can only climb the crest.

The problem is that most people today are completely identified
with their intellect and corresponding ego, so there is no way
for intuition to enter. Not the intellect itself is the problem,
but the identification with it. The intellect can be a strong tool
in the hands of an intelligent individual, but it is only a tool.
If you identify yourself with the tool, the tool gets power over
you. The tool dominates you, and not you dominate the tool - you
lose control. That is the reason for the chaos we see today
everywhere in the world - the uncontrolled intellect has taken over
the power.

The human intellect is only a biocomputer. It has no understanding
of good and bad, luck and pain, peace and war, fun and sadness.
The computer is a machine and can be used for anything - bad or good,
war or peace, destructive or constructive. So you must add another
component - intuition, understanding.

I may sound like a priest for you. But this is intentional, because
I must reach you below your intellectual surface to communicate the
things to you, I must find a way deeper into your soul, so that you
can FEEL the truth in my words beyond any proof or argument.
Because proofs and arguments are means of the logical intellect and
thus not capable to touch the truth. Feelings, or emotions, are
in the midway between the intellect and intuition, so are a bridge
to the mystery. The words do not come from myself, they come through
me from "God", from the source of life. I am nothing, God is all.
I am a medium for God's voice. You are also God's medium. The
only difference between you and me is that you forgot this and
I remembered it.

What I have expressed so far is, in my own words and from my own
experiences, the common consensus of all "enlightened" people after
meditation.

So if one clearly understands the situation, then the direction
to go becomes clear. And that is what I outline. And that
is the fun - to follow a clear path with total engagement.

You are in a similar position as the great British philosopher
David Hume (1711-1776) - he has not found a solid position inside
himself, and so he disbelieved everything. This philosophical
direction is also called "scepticism". And that is why you are
confused.

As I have said, the solution to the problem is to discard
identification with your intellect, to EXCEED your intellect,
and then use it as a tool in your hands on a solid position.

Guido Vollbeding

Nov 22, 2006, 12:48:23 PM
cr88192 wrote:
>
> yes, albeit I will argue there are cases where alternatives make
> more sense (lossless compression, simplicity, speed, ...). however,
> my main tool of choice in these cases has been linear prediction.
> as for wavelets, personally I don't understand them.
>
> in my case, recently I have been using plain jpegs for image storage, but
> as mentioned above I have (among other things) a few major complaints about jpeg.

Your complaints are due to the fact that people have not understood
the fundamental DCT properties, and so were not able to exploit its
full potential in the original JPEG standard.
However, the original JPEG standard is a good base for common
application, and we only need to extend it here and there to
leverage its full potential and thus improve its application
features.
See

http://www.itu.int/ITU-T/studygroups/com16/jpeg1x/index.html

and

http://jpegclub.org/temp/ITU-T-JPEG-Plus-Proposal_R3.doc

for current activities in the JPEG enhancement process.

> some of them though, I have to wonder why they were done that way originally
> (in particular, the handling of components, subsampling, and interlacing).
>
> ie:
> mode 0: Y, Y
> mode 1: YCbCr (interlaced, 1:1:1), YCbCr
> mode 2: YCbCr (interlaced, 2x2:1:1), YYYYCbCr
> mode 3: YCbCr (non-interlaced, 1:1:1), Y* Cb* Cr*
> mode 4: YCbCr (non-interlaced, 2x2:1:1), Y* Cb* Cr*
> mode 5: YCbCrA (interlaced, 1:1:1), YCbCrA
> mode 6: YCbCrA (interlaced, 2x2:1:1), YYYYCbCrA
> mode 7: YCbCrA (non-interlaced, 1:1:1), Y* Cb* Cr* A*
> mode 8: YCbCrA (non-interlaced, 2x2:1:1), Y* Cb* Cr* A*

Oh, it is nice to see that you are not in a position to define an
imaging standard. This kind of proposal is exactly what we do NOT
need - the JPEG manner of handling components and subsampling is
much cleaner and more flexible than this weird example.
Apparently you are a good candidate for Microsoft's odd "Windows
Media Photo" campaign - they have done something like that in their
weird format.
By the way, Microsoft has just renamed their format - they call it
now "HD Photo" instead of Windows Media Photo (I guess it should
stand for "High Definition" Photo).

But the WMP or HD Photo is just another hoax and marketing hype,
similar to JPEG-2000.
The reason is that they use only a crippled kind of DCT, not a
true DCT, and so they cannot achieve the full potential of the
true DCT as necessary for reasonable image coding application.

> then again, one can doubt if there is much point.
> much like zip and deflate, jpeg is de-facto to the point of
> being seemingly more or less set in stone.

Yes, and in the case of JPEG this is good and reasonable.
Anything else would be a mistake.

cr88192

Nov 22, 2006, 6:17:05 PM

"Guido Vollbeding" <gu...@jpegclub.org> wrote in message
news:45648D67...@jpegclub.org...

> cr88192 wrote:
>>
>> yes, albeit I will argue there are cases where alternatives make
>> more sense (lossless compression, simplicity, speed, ...). however,
>> my main tool of choice in these cases has been linear prediction.
>> as for wavelets, personally I don't understand them.
>>
>> in my case, recently I have been using plain jpegs for image storage, but
>> as mentioned above I have (among other things) a few major complaints
>> about jpeg.
>
> Your complaints are due to the fact that people have not understood
> the fundamental DCT properties, and so were not able to exploit its
> full potential in the original JPEG standard.
> However, the original JPEG standard is a good base for common
> application, and we only need to extend it here and there to
> leverage its full potential and thus improve its application
> features.
> See
>
> http://www.itu.int/ITU-T/studygroups/com16/jpeg1x/index.html
>
> and
>
> http://jpegclub.org/temp/ITU-T-JPEG-Plus-Proposal_R3.doc
>
> for current activities in the JPEG enhancement process.
>

yes.


>> some of them though, I have to wonder why they were done that way
>> originally
>> (in particular, the handling of components, subsampling, and
>> interlacing).
>>
>> ie:
>> mode 0: Y, Y
>> mode 1: YCbCr (interlaced, 1:1:1), YCbCr
>> mode 2: YCbCr (interlaced, 2x2:1:1), YYYYCbCr
>> mode 3: YCbCr (non-interlaced, 1:1:1), Y* Cb* Cr*
>> mode 4: YCbCr (non-interlaced, 2x2:1:1), Y* Cb* Cr*
>> mode 5: YCbCrA (interlaced, 1:1:1), YCbCrA
>> mode 6: YCbCrA (interlaced, 2x2:1:1), YYYYCbCrA
>> mode 7: YCbCrA (non-interlaced, 1:1:1), Y* Cb* Cr* A*
>> mode 8: YCbCrA (non-interlaced, 2x2:1:1), Y* Cb* Cr* A*
>
> Oh, it is nice to see that you are not in a position to define an
> imaging standard. This kind of proposal is exactly what we do NOT
> need - the JPEG manner of handling components and subsampling is
> much cleaner and more flexible than this weird example.

yes, however, it is more awkward and less efficient to code.

ok, yes, maybe they could have been split into several fields.
color_mode: 0=mono, 1=YCbCr
interlace: 0=non-interlace, 1=interlace
subsampling: 0=1:1:1, 1=2x2:1:1
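
or, as a throwaway sketch, pack those fields into a single mode byte
(the names and bit layout here are made up, purely illustrative):

    /* hypothetical packed mode byte:
       bit 0: color_mode  (0=mono, 1=YCbCr)
       bit 1: interlace   (0=non-interlaced, 1=interlaced)
       bit 2: subsampling (0=1:1:1, 1=2x2:1:1)
       bit 3: alpha       (0=absent, 1=present)          */
    #define IM_COLOR   0x01
    #define IM_ILACE   0x02
    #define IM_SUBSAMP 0x04
    #define IM_ALPHA   0x08

    typedef unsigned char img_mode;

    /* eg: "mode 2" from the earlier table (YCbCr, interlaced, 2x2:1:1): */
    static const img_mode mode2 = IM_COLOR | IM_ILACE | IM_SUBSAMP;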

my comment is based on what I feel is most convenient for an implementor,
leveraged by what we can expect from graphics files.

in short, it is my opinion that jpeg's design is overkill...

we would need at least animation, layers, alpha blending, ... to justify
this kind of design imo.


instead, we get a design that is a major hassle to work with, and has almost
nothing to give us for it. one may as well almost just hard-code the decoder
to accept only a fixed set of input types, eg:

jpeg-restricted:
1 or 3 components;
1:1:1 or 2x2:1:1 subsampling only;
only 1 huffman or quantizer table per marker;
...

and this will accept the vast majority of input images.

and, the decoder looks at the image, and if it is not what it expects, it
can then refuse to process the image any further.


and on the other hand, jpeg is flexible enough to let me stuff an alpha
channel in the image, and what then?...

either it will be ignored or rejected by existing software, so there is no
real point (and so for images where I need an alpha channel, I am left with
some other format, for example, png).


> Apparently you are a good candidate for Microsoft's odd "Windows
> Media Photo" campaign - they have done something like that in their
> weird format.
> By the way, Microsoft has just renamed their format - they call it
> now "HD Photo" instead of Windows Media Photo (I guess it should
> stand for "High Definition" Photo).
>
> But the WMP or HD Photo is just another hoax and marketing hype,
> similar to JPEG-2000.
> The reason is that they use only a crippled kind of DCT, not a
> true DCT, and so they cannot achieve the full potential of the
> true DCT as necessary for reasonable image coding application.
>

maybe.
sometimes MS designs make sense, sometimes they don't.

one has to consider what can be offered, what is expected, and what is
useful.


>> then again, one can doubt if there is much point.
>> much like zip and deflate, jpeg is de-facto to the point of
>> being seemingly more or less set in stone.
>
> Yes, and in the case of JPEG this is good and reasonable.
> Anything else would be a mistake.
>

yes, now that I think of it, a decoder that barfs whenever it sees an image
varying from its pre-set rules would be a much more pragmatic approach.
then again, in my case, why should I bother? I already wrote a decoder that
can deal with jpeg, as defined.

I guess the encoder+decoder I do have is nearly 2 kloc, and by restricting
the input further I could write a smaller and simpler decoder.

however, the damage has already been done...

cr88192

Nov 22, 2006, 6:57:19 PM

"Guido Vollbeding" <gu...@jpegclub.org> wrote in message
news:45648459...@jpegclub.org...

> cr88192 wrote:
>>
>> yes, this is a danger of mixing beliefs, philosophy, and religion...
>>
>> so, I guess one needs to avoid the risk of being closed minded, but also
>> needs to be discerning.
>
> Your problem, and the problem of most other people dealing with the
> subject, is that you are confused. You don't know what is right,
> you don't understand the basics, you don't have a firm knowledge,
> you don't have a solid position in your point of view.
>
> And that is exactly the difference to my position:
> I DO know what is right, I DO understand the DCT basics, I DO have
> a firm knowledge and experience about the subject, and I DO have
> a solid position with the approach. I am NOT confused, I do NOT
> have a weak position, I can NOT fall for hoaxes and misleading
> marketing hypes.
>
> The reason for this is that I have a different source of knowledge.
> Your, and most other people's, source of knowledge is your intellect
> - and that is weak, unstable, and prone to error.
>

<snip>


>
> As I have said, the solution to the problem is to discard
> identification with your intellect, to EXCEED your intellect,
> and then use it as a tool in your hands on a solid position.
>

however, I will argue on a different front:
intuition is good for doing things which are subjectively definable as
better or worse (how is this design? how usable is this interface? ...).

however, I have doubts as to the reliance on intuition for matters of truth.

note here:
on MBTI tests I have taken in the past, I fall under the category of INTP,
however, my N/S bias is fairly close to center, so depending on my mood it
seems I can also come out as ISTP as well...


and so, we have arguments which can't be easily resolved:
creation vs evolution;
monoverse vs multiverse;
deterministic vs non-deterministic time progression.


the view that would be more readily held from a doctrinal position would be
that of a created deterministic monoverse;
others will interpret evidence to point more towards evolution and a
non-deterministic multiverse.

and I will argue it is not the point of religion for people to get all
fussy about such things; they should be willing to go where the evidence
takes them, while also not forgetting or denying the doctrinal position.


and, on the topic of religion, there is another controversy:
exactly how much flexibility is allowed?
to whom does it apply?

one extreme is that only a very elite group can gain salvation or
enlightenment;
the other extreme is that everyone gains it by default.

and, others will argue about the means, faith vs good works/moral living.

but, in the end, who can be sure?
so, one can wait until death, and maybe then they will have their answer, as
pleasant or horrible as they may find it to be, or maybe not (one faced with
the risk of a slip into non-existence).


so, given my experience, I am inclined to believe that the border is
probably (mostly) christians (probably for people in other religions, an
alternative is provided to belief in the death and resurrection), that it is
by faith, and that a pleasant or horrible experience is based on this.

then again, good works and moral behavior are definitely encouraged, so if
it turns out that is the path that is required, one is not totally left out
in case they were wrong. I guess one just has to avoid the position
of thinking that their standing is based on how often they attend church or
how much they pay in tithing...

and, one can also avoid the other extreme, that they can do whatever they
want and still gain forgiveness (living a life of heavy drinking, orgies,
and trying to impress everyone with their money and/or supposed good
religious standing...).

cr88192

Nov 22, 2006, 7:23:09 PM

"Pete Fraser" <pfr...@covad.net> wrote in message
news:a63a6$456481d2$44a4bd5b$18...@msgid.meganewsservers.com...

yes, ok.

also amusing: where I live, TB is particularly common, and as such one
needs a screening about every 6 months or so (eg, for attending classes,
...).


as for creation vs evolution, I can't say for sure which is true, or maybe
some combination thereof.

so, yeah, in the middle we have the intelligent design camp, which the
secular types lump in with creationists, for believing there were deliberate
divine acts in the process, and the creationists lump in with evolutionists,
for believing that the earth is really old and that evolutionary processes
were taking place in the first place...

one can't be sure though...

clearly enough things mutate and change, but there is also a lot that is
difficult to account for.

how was it at the beginning? I can't say for certain, but eventually likely
the truth will be known (either religious types turning out to be dogmatic,
or evolutionary types being wrong, or both, or neither).


but, yes, I am inclined to believe at least that quite possibly some kind of
divine text-editing was going down, but it is hard to say imo.

...

Pete Fraser

Nov 22, 2006, 8:21:53 PM
"cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
news:ad4c$4564da79$ca83a8d6$51...@saipan.com...

>
> instead, we get a design that is a major hassle to work with, and has
> almost nothing to give us for it. one may as well almost just hard-code
> the decoder to accept only a fixed set of input types, eg:
>
> jpeg-restricted:
> 1 or 3 components;
> 1:1:1 or 2x2:1:1 subsampling only;

Not true. There are many files that are 4:2:2. It's as common as 4:2:0
(I hope you'll forgive me using video nomenclature).


stefa...@yahoo.com

Nov 22, 2006, 8:34:53 PM

Guido Vollbeding wrote:

> And what I say is that there ARE possible advances. But these
> advances are NOT to be found by deviating from the DCT, as many
> people mistakenly believe, but by digging deeper in the essence
> of the DCT! Until today, nobody else has come close to understanding the
> essence of the DCT - it is largely misunderstood. That is the
> reason for some missing features and for the mistaken research
> direction.

Good for you, take advantage of this; why spend so much energy
crying that people are fools? Fools have to be punished and the smart have
to be rewarded, so do not disturb the natural flow.

Regards,
Stefan

cr88192

Nov 22, 2006, 11:13:24 PM

"Pete Fraser" <pfr...@covad.net> wrote in message
news:47b1a$4564f7b3$44a4bd5b$22...@msgid.meganewsservers.com...

not sure, but (wrt decoding and interlacing) shouldn't 4:2:2 be equivalent
to 2x2:1x1:1x1 (2:1:1)? thinking now, I am not completely sure I understand
jpeg's interlace magic.

4:2:0, not sure what this would be.

now that I think of it, forcing a fixed subsampling mode allows one to hard
code this aspect of the decoder, and thus greatly simplify this.

of course, this does mean that the decoder will be like "ok, I don't know
what this is, barf...".


originally I had felt it should have used linear, rather than geometric
interlacing (that or DC prediction not relying on encoding order), but oh
well. just it is a hassle to take this into account in some cases, forcing
either it to be taken into account when decoding the blocks (as done in my
decoder), or as a separate pass which quantizes the blocks in the correct
order, as done in my encoder.

ideally I would have preferred non-interlaced blocks (simplest encoder and
decoder, but effectively breaks if the whole image is not present, and
reasonably assumes that there is enough free memory to buffer everything).
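
for context, jpeg's DC prediction is just a running difference per
component, updated in decode order, which is exactly why the block
order matters (a sketch; the names are mine):

    /* one running DC predictor per component; every DC is coded as a
       difference from the previous block of the same component */
    static short dc_pred[4];

    short decode_dc(int comp, short diff) /* diff = decoded DC difference */
    {
        dc_pred[comp] = (short)(dc_pred[comp] + diff);
        return dc_pred[comp];
    }

    static void restart_marker(void)      /* RSTn resets all predictors */
    {
        dc_pred[0] = dc_pred[1] = dc_pred[2] = dc_pred[3] = 0;
    }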


it turns out that some assumptions I had made in my decoder already (eg:
that the colorspace is Y or YCbCr, ...), are not actually a core part of
jpeg, but actually part of JFIF.

actually only a small part of the decoder actually notices or cares (since I
took the "doing it by the spec" route), but oh well.

theoretically, my decoder should verify that the JFIF header is present
before making such assumptions, and my encoder should output said header,
but oh well...

the encoder doesn't really care, since it took the hard-coded route (all it
really bothers to care about is if it is outputting color or greyscale
images...).

or such...



Nishu

Nov 23, 2006, 12:16:16 AM

cr88192 wrote:

> not sure, but (wrt decoding and interlacing) shouldn't 4:2:2 be equivalent
> to 2x2:1x1:1x1 (2:1:1)? thinking now, I am not completely sure I understand
> jpeg's interlace magic.
>
> 4:2:0, not sure what this would be.

Interlacing is different from the file formats. You have YUV (420, 411,
422H, 422V, 444) and other RGB formats in either planar or interlaced
layouts. 422 is equivalent to 1x2:1x1:1x1 or 2x1:1x1:1x1 depending on
whether it is 422H or 422V. Then again, it depends on whether the MCUs
come in interlaced or planar format. IMO JPEG (or JFIF; I assume
both are the same) is quite good and easy to implement.

-N

Nishu

Nov 23, 2006, 12:25:18 AM

Guido Vollbeding wrote:

> It confirms my firm finding that the core DCT is the proper base
> for image coding and future advancements, and all other attempts
> deviating from the core DCT are a waste of time and will prove to be
> a mistake and fail eventually.
> Anyone still seriously pursuing other techniques is a fool.

I wonder whether it is the DCT, or the quantization and VLE, which does
more of the compression. Pursuing other VLE techniques had good returns in
terms of forming better video compression standards. Then again, it is
not the pure DCT but a variant of the DCT (an integer transform) which gained
much popularity in recent video coding standards. Isn't it that the DCT seems
crippled in terms of compression next to advanced prediction and VLE
techniques?

-N

Pete Fraser

Nov 23, 2006, 1:00:10 AM

"cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
news:c13a0$45651fea$ca83a8d6$26...@saipan.com...

>>
> not sure, but (wrt decoding and interlacing) shouldn't 4:2:2 be equivalent
> to 2x2:1x1:1x1 (2:1:1)? thinking now, I am not completely sure I
> understand jpeg's interlace magic.
>
> 4:2:0, not sure what this would be.

I don't think interlacing changes anything.

4:2:2 = 2x1:1x1:1x1
4:2:0 = 2x2:1x1:1x1
4:1:1 = 4x1:1x1:1x1

Both 4:2:2 and 4:2:0 are common JPEG formats.
I have not seen 4:1:1 in JPEG.

I don't see that dealing with these is a big deal though.
If you want to do a good job you use a half-band
filter for the up-converter.

You make it skew symmetric about fs/4, and that makes
your central coefficient 0.5, and half of the rest are zero.
You can also exploit symmetry in the spatial domain to
halve the number of multiplies.
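
For example, with the classic half-band kernel (-1, 0, 9, 16, 9, 0, -1)/32
a 2x horizontal up-converter reduces to a plain copy for the even outputs
and one multiply plus a few adds for each odd output (a sketch, with
simple edge replication; names are mine):

    static int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* 2x upsampling with the half-band filter (-1,0,9,16,9,0,-1)/32.
       Even outputs copy the input (centre tap 0.5 at 2x gain); odd
       outputs use the symmetric taps: (9*(a+b) - (c+d) + 8) >> 4. */
    void upsample2x(const unsigned char *in, int n, unsigned char *out)
    {
        int i;
        for (i = 0; i < n; i++) {
            int im1 = in[i > 0     ? i - 1 : 0];     /* edge-replicate */
            int ip1 = in[i < n - 1 ? i + 1 : n - 1];
            int ip2 = in[i < n - 2 ? i + 2 : n - 1];

            out[2*i]     = in[i];
            out[2*i + 1] = (unsigned char)clamp255(
                (9 * (in[i] + ip1) - (im1 + ip2) + 8) >> 4);
        }
    }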


Guido Vollbeding

Nov 23, 2006, 2:04:32 PM
Nishu wrote:
>
> Then again, it is
> not the pure DCT but a variant of the DCT (an integer transform) which gained
> much popularity in recent video coding standards. Isn't it that the DCT seems
> crippled in terms of compression next to advanced prediction and VLE
> techniques?

Yes, but this is a mistaken development.
First they cripple the DCT and thus damage its scalability property
(because they don't understand this correlation), and then they
introduce some weird "SVC" (Scalable Video Coding) extension mode.
That is a fantastic strategy to keep yourself busy, but does it
make any sense? No, it makes no sense - can you see how stupid
this is?

When I was at the ITU-T JPEG meeting in April in Geneva, I also
attended the MPEG meeting there. I was kindly invited by the
chairs/editors of the MPEG-4 AVC/SVC to present my proposal in
this group. Those people were relatively smart to recognize
the importance of my findings. But still I think that the
majority of participants there did not fully understand my
message.

So I see two possible further developments in this area:
Either they keep going the wrong way of crippling the DCT and thus
sacrificing features and image quality, or they come back to the
real DCT. There are some proponents for the latter way.

But you must realize that there are today serious handicaps in
this image and video coding research which prevent finding the proper way:
They adhere to this odd behaviour of judging resulting image quality
mainly on the basis of artificial measurement indicators. In this
way they will never find appropriate results, and they do not see
that image quality gets ever worse with their techniques. They
are basically blind, so the current research results are mainly
produced by blind people, thus having no real substance.
So they claim ever higher compression ratios with their "advanced"
techniques, not being aware of the fact that image quality is
sacrificed in a similar proportion.

This danger is also present in current image compression approaches.
Especially the Microsoft WMP/HD Photo development suffers from this
fatality, and JPEG-2000 could only be established on this fatal
basis.

A good example for this fatality is the popular digital image
capture industry, which is an important application for JPEG
image coding techniques.
You have to realize that the first digital camera with *reasonable*
direct JPEG image capture is just now entering the market.
I speak of the Sigma SD14 model. All other prior attempts
were basically rubbish, the mainstream "digital images" produced
were artificial, not natural. Your only choice for reasonable true
image capture devices in recent years were the Sigma SD9 and Sigma
SD10 models, and those had no direct JPEG output.
The Sigma SD14 will be the first digital still camera on the market
with direct true image JPEG capture capability, so this will be the
first appropriate JPEG application in digital image capture.

And note that nobody in the established research community was
aware of the inferiority of prior digital camera application.
They couldn't notice it, because they did their research with
closed eyes and stupid intellect only.

This example alone shows that you must be very careful with image
quality claims if it is based on unknown compression techniques
with artificial measurements. This is a major cause for mistake.

cr88192

Nov 23, 2006, 6:09:29 PM

"Nishu" <naresh...@gmail.com> wrote in message
news:1164258976....@m7g2000cwm.googlegroups.com...

ok.

I was just thinking of what most 'typical' jpegs, as generated and
understood by 'most apps' look like.

2x2:1x1:1x1 seems to be the de-facto rule here, so could be assumed (albeit
also verified) by a decoder.

but, it doesn't matter really, I wrote my decoder for the general case, so
no strong need to change it to a more restricted version (I am not that
tight on LOC...).


> -N
>


cr88192

Nov 23, 2006, 6:36:30 PM

"Pete Fraser" <pfr...@covad.net> wrote in message
news:c515a$456538eb$44a4bd5b$25...@msgid.meganewsservers.com...

>
> "cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
> news:c13a0$45651fea$ca83a8d6$26...@saipan.com...
>>>
>> not sure, but (wrt decoding and interlacing) shouldn't 4:2:2 be
>> equivalent to 2x2:1x1:1x1 (2:1:1)? thinking now, I am not completely
>> sure I understand jpeg's interlace magic.
>>
>> 4:2:0, not sure what this would be.
>
> I don't think interlacing changes anything.
>
> 4:2:2 = 2x1:1x1:1x1
> 4:2:0 = 2x2:1x1:1x1
> 4:1:1 = 4x1:1x1:1x1
>
> Both 4:2:2 and 4:2:0 are common JPEG formats.
> I have not seen 4:1:1 in JPEG.
>

ok, my notation is a bit confused as I have only really recently started
using it.

by '4:1:1' (or 2x2:1:1), I had meant 2x2:1x1:1x1, or your 4:2:0 above.

the encodings I was assuming were:
2x2:1x1:1x1 and 1x1:1x1:1x1.

which, I had figured were the only 'common' ones.
however, if others are common, I guess it makes sense to keep a more generic
decoder.


> I don't see that dealing with these is a big deal though.
> If you want to do a good job you use a half-band
> filter for the up-converter.
>

my complaint here is not solely technical (or practical...).


well, the big deal is not so much anything of this sort, but more that the
decoder has to deal with some annoying pieces of logic, eg:
/* number of MCUs: image size rounded up to whole MCUs
   (pdjpg_chm/pdjpg_chn are the max sampling factors) */
i=(pdjpg_xs+pdjpg_chm*8-1)/(pdjpg_chm*8);
j=(pdjpg_ys+pdjpg_chn*8-1)/(pdjpg_chn*8);
n=i*j;

/* clear the per-MCU coefficient buffer (up to 4 blocks of 64) */
for(i=0; i<(4*64); i++)dbuf[i]=0;

for(i=0; i<n; i++)                  /* for each MCU */
{
    for(j=0; j<ns; j++)             /* for each component in the scan */
        for(k=0; k<cv[j]; k++)      /* for each block row of the MCU */
            for(l=0; l<ch[j]; l++)  /* for each block column */
            {
                /* map MCU index plus block offset to the block's
                   position in this component's plane */
                i1=(i/(wi[j]/ch[j]))*ch[j];
                j1=(i%(wi[j]/ch[j]))*ch[j];
                k1=((i1+k)*wi[j])+(j1+l);

                PDJHUFF_DecodeBlock(dbuf+j*64,
                    cdt[j]*2+0, cat[j]*2+1, i, n);
                PDJHUFF_DequantBlock(dbuf+j*64,
                    pdjpg_scbuf[j]+k1*64, qid[j]);
                PDJPG_TransIDCT(pdjpg_scbuf[j]+k1*64,
                    pdjpg_sibuf[j]+k1*64);
            }
}

which could otherwise be hard-coded into a much less ugly loop, or allow me
to spread it into 3 different passes...

but, as it would seem, not everyone agrees as to how the components should be
scaled, which screws up the whole idea.


> You make it skew symmetric about fs/4, and that makes
> your central coefficient 0.5, and half of the rest are zero.
> You can also exploit symmetry in the spatial domain to
> halve the number of multiplies.

well, I guess my complaint isn't much of anything all that important, just I
can't really think how it is justified to allow this much flexibility in a
format that 'simply' compresses and stores compressed images.

with the level of complexity, one would almost expect it to have some other
new and amazing feature, that people actually use...


but, from the perspective of using the format, it has actually less useful
functionality than even tga or png (in that it lacks an alpha channel, ...).

from a developer perspective, about the only really compelling reasons to
use it are:
it is super common;
it has very small file sizes.

but afaik its complexity (and terribly inconvenient spec) forces most people
that intend to use it to use either libjpeg, or some other presumably
available jpeg library (rather than rolling their own, as is typically done
with formats like pcx, bmp, or targa...).

then again, the dct is a terrible beast as well, so a format like this can
hardly be 'simple', but oh well...


but oh well, I guess it is a good enough format.



Pete Fraser

Nov 23, 2006, 7:02:33 PM

"cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
news:ddc51$45663080$ca83a8d6$40...@saipan.com...

>
> ok, my notation is a bit confused as I have only really recently started
> using it.
>
> by '4:1:1' (or 2x2:1:1), I had meant 2x2:1x1:1x1, or your 4:2:0 above.

Correct. The notation I use is confused also.
MPEG hijacked a confusing EBU notation and made it worse.

>
> the encodings I was assuming were:
> 2x2:1x1:1x1 and 1x1:1x1:1x1.
>
> which, I had figured were the only 'common' ones.
> however, if others are common, I guess it makes sense to keep a more
> generic decoder.

Others are common.


>
> well, I guess my complaint isn't much of anything all that important, just
> I can't really think how it is justified to allow this much flexibility in
> a format that 'simply' compresses and stores compressed images.

It's not a huge amount of complexity.
Compare it with some other specs.

>
> with the level of complexity, one would almost expect it to have some
> other new and amazing feature, that people actually use...
>
>
> but, from the perspective of using the format, it has actually less useful
> functionality than even tga or png (in that it lacks an alpha channel,
> ...).
>
> from a developer perspective, about the only really compelling reasons to
> use it are:
> it is super common;
> it has very small file sizes.

These are pretty compelling reasons.

>
> but afaik its complexity (and terribly inconvenient spec) forces most
> people that intend to use it to use either libjpeg, or some other
> presumably available jpeg library (rather than rolling their own, as is
> typically done with formats like pcx, bmp, or targa...).
>
> then again, the dct is a terrible beast as well,

Now now. Enough of the Guido baiting.


cr88192

Nov 23, 2006, 7:34:28 PM

"Pete Fraser" <pfr...@covad.net> wrote in message
news:ee268$45663695$44a4bd5b$87...@msgid.meganewsservers.com...

>
> "cr88192" <cr8...@NOSPAM.hotmail.com> wrote in message
> news:ddc51$45663080$ca83a8d6$40...@saipan.com...
>
>>
>> ok, my notation is a bit confused as I have only really recently started
>> using it.
>>
>> by '4:1:1' (or 2x2:1:1), I had meant 2x2:1x1:1x1, or your 4:2:0 above.
>
> Correct. The notation I use is confused also.
> The MPEG hijacke a confusing EBU notation, and made it worse.
>

ok.


>>
>> the encodings I was assuming were:
>> 2x2:1x1:1x1 and 1x1:1x1:1x1.
>>
>> which, I had figured were the only 'common' ones.
>> however, if others are common, I guess it makes sense to keep a more
>> generic decoder.
>
> Others are common.
>

yeah.


>
>>
>> well, I guess my complaint isn't much of anything all that important,
>> just I can't really think how it is justified to allow this much
>> flexibility in a format that 'simply' compresses and stores compressed
>> images.
>
> It's not a huge amount of complexity.
> Compare it with some other specs.
>

yes, I guess my main complaint is more likely that T.81 is nearly 200 pages.
if it were a much shorter spec, for example, like the deflate spec, then
maybe things would be better.

if the spec were one that a person could skim through and get a good idea how
everything is put together, maybe that would be better still...


>>
>> with the level of complexity, one would almost expect it to have some
>> other new and amazing feature, that people actually use...
>>
>>
>> but, from the perspective of using the format, it has actually less
>> useful functionality than even tga or png (in that it lacks an alpha
>> channel, ...).
>>
>> from a developer perspective, about the only really compelling reasons to
>> use it are:
>> it is super common;
>> it has very small file sizes.
>
> These are pretty compelling reasons.
>

yes, and that is why I use it.
albeit, I still have my complaints about the internals.


it is odd though, in that at the same time I am considering coming up with
a kind of 'generalized' compressor spec which would, just the same, mimic
but modify some of jpeg's internals.


>>
>> but afaik its complexity (and terribly inconvenient spec) forces most
>> people that intend to use it to use either libjpeg, or some other
>> presumably available jpeg library (rather than rolling their own, as is
>> typically done with formats like pcx, bmp, or targa...).
>>
>> then again, the dct is a terrible beast as well,
>
> Now now. Enough of the Guido baiting.
>

ok.

actually, I meant more that it is a complex thing to implement,
understand, and get working right.

initially, one spends a good number of hours messing with the outputs of the
transform, trying to get the thing to work, and eventually then it does...

pcx, bmp, and tga more simply use RLE, but this doesn't compress well.
png uses linear filtering and deflate, which is a lot more complex.

something falling somewhere between tga and png would probably be a format
using linear prediction, rle, and rice codes or similar.
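
something like this, say (a sketch: plain left prediction plus rice
codes; putbit() is an assumed bit-output routine, and k would be
picked or adapted per row):

    extern void putbit(int bit);  /* assumed bit-level output routine */

    /* fold signed residuals to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4 */
    static unsigned fold(int v)
    {
        return v >= 0 ? (unsigned)v << 1 : ((unsigned)(-v) << 1) - 1;
    }

    /* rice code: unary quotient, then k raw remainder bits */
    static void rice_put(unsigned v, int k)
    {
        unsigned q = v >> k;
        while (q--) putbit(1);
        putbit(0);
        while (k--) putbit((v >> k) & 1);
    }

    /* encode one row of samples with a plain left predictor
       (predictor starts at 0, so the first residual is the sample) */
    void encode_row(const unsigned char *row, int n, int k)
    {
        int i, prev = 0;
        for (i = 0; i < n; i++) {
            rice_put(fold((int)row[i] - prev), k);
            prev = row[i];
        }
    }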


the main formats I use are tga, png, and jpeg.
tga: is just useful sometimes;
png: good for saving space over tga and has alpha channels;
jpeg: very small, but lossy and lacks alpha channels.


a lot of what I end up doing with graphics depends on alpha channels though,
which is why they are such a big deal.

Guido Vollbeding

Nov 23, 2006, 7:57:12 PM
cr88192 wrote:
>
> however, I will argue on an different front:
> intuition is good for doing things which are subjectively definable as
> better or worse (how is this design? how usable is this interface? ...).
>
> however, I have doubts as to the reliance on intuition for matters of
> truth.

You argue...
That is exactly the reason that you don't get the message and stay
in confusion.
Intuition is the *only* base for truth!
But this is what you can not recognize, because you can't stop
arguing. The truth can only be seen if you stop arguing and see
the reality. Your conditioned intellect is like a dark cloud which
prevents the sunlight from reaching you.

> one extreme is that only a very elite group can gain salvation or
> enlightenment;
> the other extreme is that everyone gains it by default.

The truth is that everyone can gain salvation and enlightenment,
because your pure existence is a proof that God loves you.
Otherwise you would not exist. God has put all treasures for
your happiness inside yourself - but you never look inside.
You only look outside - and there you cannot find firm satisfaction.

But God leaves the choice to you - he gives you freedom - either
to suffer, or to be satisfied. Being identified with your intellect
and corresponding ego means suffering, climbing the mountain of
intuition means happiness and satisfaction. It's your choice.

The problem is that all established "religions" have told you to
suffer. First, on earth, you must suffer, and then, after your
death, in heaven, you can gain satisfaction. And you just believed
that - life is suffering, death is salvation. That is why most
people choose to suffer - they just were told to and believed that.
But the truth is that it is your own choice. If you WANT to suffer,
then you WILL suffer - if you WANT salvation, then you WILL be saved.
God just loves you so much that he instantly satisfies your desires
- even if you want to suffer.

But in order to be saved instantly, you need not just a weak desire,
you need an existential demand. It is only one step away to be
saved - a jump into another dimension. And it is exactly your
conditioned intellect which keeps you in-plane with the suffering
world, and prevents you from reaching the paradise.

All people have a natural capacity for intuition, but many times
social conditioning and formal education work against it. People
are taught to ignore their instincts rather than to understand and
use them as a foundation for individual growth and development, and
in the process they undermine the very roots of the innate wisdom
that is meant to flower into intuition.

Intelligence is the inborn capacity to see, to perceive.
Every child is born intelligent, then made stupid by the society.
We educate him in stupidity.
Sooner or later he graduates in stupidity.

What do you think that I'm doing here with you?
I'd like to help you to overcome your stupidity.
If I call people's behaviour stupid and foolish, then it is not
to denounce them - it is rather to help them recognize it.
How could they overcome it if they didn't recognize it?

The religious claims mentioned above are just misunderstood.
You don't have to die *physically* in order to overcome the
suffering - what has to die is your intellect and corresponding
ego! In fact, that is what happened to me: I already died -
that is, my ego, constructed by the intellect, has died. That
is the real salvation, because then you can not die anymore.
How can you still die if you already died? That is not
possible. If you have died in the described way, then you are
reborn to eternal life. That is the real resurrection.
Yes, your physical body can still die, of course, but that is
not YOU - you are no longer identified with your physical body.
All the fear vanishes - and when your body dies, you know that
you will just watch it from a transcended perspective. It is
just like disrobing your clothes.

Many people get annoyed when I call somebody or something stupid,
corrupt, or ignorant. They think it is an arrogant denouncement.
Yes, I know it is inconvenient. But what else can I do? If I
want to help you getting saved, I must tell you honestly what I
think. That is the only chance for you to recognize it and
overcome it. The society has taught you always to be pleasant
to anyone, but you cannot be saved in this way. Somebody must
hit on your head to make you awake.

You have a perverted point of view, which was educated into your
mind. You think that intuition has only subjective relevance,
and intellect is objective. However, the opposite is true, or
at least this is only part of the truth. The truth is such:
There is an outside world science, and an inside world science.
The established scientific methods today are only related to
the outside world. The inside world consideration is left to
"religion", and is not considered scientific, but rather based
on "belief". However, the outside science alone is incomplete,
and therefore deficient. Ordinary scientists observe the outside
world, but they don't observe themselves, the observer. That is
a major defect, and so all their findings are built on an obscure
fundament. They don't know anything about themselves, the
observer. The established religion on the other hand is also
incomplete, because it lacks reproducible understanding.
So what is the reasonable conclusion? The reasonable conclusion
is to *extend* somehow the scientific approach to the inside
world to make it complete. But the methods you need to explore
the inside world are somehow different than the usual scientific
methods - you need to learn them like other skills, but
nevertheless they are scientific and thus objective in a higher
sense. The problem is that no established school or university
teaches this subject - this is the major fault of our society.

The scepticism of a David Hume was just a reaction to the
dogmatic religious beliefs. As such it is a necessary step
in development. But you should not stay on this level.
This is just the starting point for the real exploration for
fundaments of knowledge.

I can give you some hints here how this method of exploration of
your inside world works and thus how to find higher perception
and develop intuition.
Please read this carefully, and try to withdraw your analyzing
and arguing intellect a bit, so that I can reach your deeper
character.

The matter is to *understand* what I say, not to learn what I
say. If you listen to me here, don't start to collect knowledge.
Don't start to collect and to hoard as you are used to.
Listening to me here should be an exercise in understanding.
You should listen with full intensity, totally, and with as much
awareness as is possible for you. In this awareness you will see
it, and this seeing means transformation. Not that you had to do
something afterwards - the seeing itself causes the metamorphosis.

If you argue pro or con, it only shows that you have missed
it.
While you are listening to me, simply follow me.
While you are listening to me, follow me into the rooms to which
I lead you, and understand what I say.
Do not argue - don't say Yes, don't say No; don't agree with me,
don't disagree with me.
Just follow me in this moment - and suddenly the insight is there.
If you read carefully... And by careful awareness I do not mean
concentration. Relax! By careful awareness I just mean that you
read with total awareness, not with a dull mind; that you read
with intelligence, with vibrancy, with openness. You read this
here, now - that is what I mean by awareness. You are not
anywhere else. You do not compare what I say with your old
ideas. You don't compare anything, you don't judge. You are not
sitting there evaluating in your mind whether what I say is
right or not, or how much of it is right.

If you see a rose, do you agree with it or not?
If you see a sunrise, do you agree with it or not?
If you see the moon in the night, then you just see it!
Either you see it or you don't see it, but it is not a question
of agreement or disagreement.

I don't try to convince you of something.
I don't try to convert you to some theory, to a philosophy, to
a dogma, to a belief, no.
I just communicate to you what happens to a mystic, and in this
communication the same thing can happen to you, if you are really
present.
It is infectious.
Insight transforms.

If I say that knowledge, cognition, is a barrier, you can agree
or disagree - and you have missed it! Just listen, just observe
it, just go into the process which constitutes knowledge.
Then you can see how your knowledge creates distance, how your
knowledge becomes a barrier. How knowledge always stands between,
how knowledge ever grows and the distance ever grows.
How innocence gets lost through knowledge, how amazement
gets destroyed, impaired, and mortified by knowledge, how life
becomes a boring and dreary business through your knowledge.
The mystery gets lost. The mystery vanishes, because you start
to believe that you know. How can a mystery still exist if
you know? The mystery is only possible if you don't know.

And notice: man knows virtually nothing! Everything
we have collected is just scrap. The ultimate truth remains
withdrawn from our access. What we have collected are only
facts; the truth stays untouched by our efforts.
And that is not only the experience of Jesus or Buddha, it is
also the experience of Newton and Albert Einstein. It is the
experience of poets, painters, musicians, and dancers.
All the great intelligent people of this world agree entirely
on one thing: the more we know, the more we understand that
life is an absolute mystery. Our knowledge can never solve
this mystery.

It is only stupid people who think that there is no more
mystery in life because they have collected some knowledge.
It is only the ordinary intellect which relies too much on
knowledge; the intelligent intellect stays above knowledge.
It uses it, of course it uses it - it is useful, it is well
applicable - but it knows that all that truth means
is hidden and stays hidden. We can know further and further,
but the mystery stays inexhaustible.

Listen with insight, with awareness, totally. If you are
present in this way, you will see something. And this seeing
transforms you - you no longer ask why. Insight negates
knowledge. And when something is negated and nothing else
is postulated, then something was destroyed and nothing else
was set in its place. Then silence rules, because space
exists. Silence rules, because the old was destroyed and the
new has not yet appeared. This silence is emptiness, is
Nothing. And only this Nothing functions in the world of
truth.
Ideas cannot function there. Ideas function only in the
world of things, because even ideas are things - very subtle,
but nevertheless material. That's why ideas can be recorded,
that's why they can be communicated and transmitted.
I can throw an idea at you, and you can catch it, you can
have it. It can be taken and it can be given; it is
transferable, because it is a thing. It is a material
phenomenon.

Emptiness cannot be given; emptiness cannot be thrown at you.
You can take part in the emptiness, you can enter into
emptiness, but nobody can give it to you. It is not
transferable. And only this emptiness works in the world of
truth.

The truth is only recognized when the intellect is not active.
In order to recognize the truth, the intellect must stop
working. It must be quiet, still, motionless.

Ideas do not work in the world of truth, but the truth can be
expressed through ideas.
The truth cannot be recognized by the mind, but once one has
recognized it, one can harness the mind to express it.
That is what I do; that is what all mystics did.
What I say is an idea, but behind that idea is emptiness.
This emptiness was not generated by ideas; this emptiness
is beyond ideas. Ideas cannot touch it, ideas cannot even
see it. The ideas must disappear so that the emptiness can
appear; the two can never meet. But once the
emptiness has appeared, it can use all possible means to
express itself.

Insight is a condition without ideas. Whenever you understand
something, you see it in a moment when no ideas exist.
Here too, while you are listening to me and reading this,
it sometimes happens that you *see* - but these moments are
gaps, interspaces. One idea has gone, another has not yet
arrived, and in between is a gap; and within this gap
something can happen, something starts to vibrate.
It is as if somebody plays on a drum - the drum is empty
inside; that's why one can play on it. This emptiness
vibrates. The wonderful sound which thus arises comes
from the emptiness.
If you *are*, without an idea, then something becomes
possible, immediately possible. Then you can see what I say.
Then it will not only be a word that you read; it will
become an intuition, an insight, a vision. You have looked
at it, you have shared it with me.

Insight is a condition of no-mind, without ideas.
It is a gap, an interspace in the process of mind,
and in this gap the perception emerges, the truth.

cr88192

unread,
Nov 23, 2006, 10:08:56 PM11/23/06
to

"Guido Vollbeding" <gu...@jpegclub.org> wrote in message
news:45664368...@jpegclub.org...

> cr88192 wrote:
>>
>> however, I will argue on a different front:
>> intuition is good for doing things which are subjectively definable as
>> better or worse (how is this design? how usable is this interface? ...).
>>
>> however, I have doubts as to the reliance on intuition for matters of
>> truth.
>
> You argue...
> That is exactly the reason that you don't get the message and stay
> in confusion.
> Intuition is the *only* base for truth!
> But this is what you can not recognize, because you can't stop
> arguing. The truth can only be seen if you stop arguing and see
> the reality. Your conditioned intellect is like a dark cloud
> which prevents the sunlight from reaching you.
>

maybe, one needs to be cautious though, especially in religious matters.
I know my past...


>> one extreme is that only a very elite group can gain salvation or
>> enlightenment;
>> the other extreme is that everyone gains it by default.
>
> The truth is that everyone can gain salvation and enlightenment,
> because your pure existence is a proof that God loves you.
> Otherwise you would not exist. God has put all treasures for
> your happiness inside yourself - but you never look inside.
> You only look outside - and there you cannot find firm satisfaction.
>
> But God leaves the choice to you - he gives you freedom - either
> to suffer, or to be satisfied. Being identified with your intellect
> and corresponding ego means suffering, climbing the mountain of
> intuition means happiness and satisfaction. It's your choice.
>

<large snip>


yes, however, I will still argue.
a lot depends on one's perspective, I guess.

from what I can tell, you seem to be taking more of a Buddhist or
similar perspective (placing an emphasis on introspection,
meditation, inner-peace, ...).


I am attempting to take a perspective matching my understanding of
Christianity (albeit, attempting to generalize it a little further than is
usually advised).

it is the usual assumption that salvation is gained through faith
in the crucifixion and resurrection, and the deity of Jesus (being
one and the same as God).

now, an important point comes with the crucifixion and
resurrection, and the mystery of why it was done this way. the
orthodox perspective is that this act in itself accomplished the
task of making forgiveness and salvation possible for all who have
faith, and that this is the only way possible.

another possible interpretation is that this was more symbolic,
done so that people will remember it and have faith (but that
alternatives may also exist for those who don't yet know about or
believe in the resurrection).

as of yet, I am not sure which of these is the truth.


could be that my interpretations are not correct, but oh well.


it does take a lot of effort, though, trying to figure out what
the correct perspective is.

and, yes, my weakness may in fact be my analytical nature.


note that in the past, my perspectives were different.
once, I was an atheist.

then I ended up in a more bizarre situation, basically conversing
with spirits and working with energy (if anyone believes this). at
the time I was unsure; technically I was a lot closer to following
a mix of New-Age and Wiccan beliefs (albeit I didn't believe in
their deities).

so, then I converted to Christianity, and have since been ridding myself of
what remains of this aspect of my past (spirits, energy, subworld, ...).

it's a long, hard grind working out all the theology though, and a
lot seems to lead in confusing directions.

one can't say for certain whether or not they are headed in the right
direction though, I guess this itself is an act of faith.

or such...

Nishu

unread,
Nov 24, 2006, 12:44:50 AM11/24/06
to

cr88192 wrote:

> > Both 4:2:2 and 4:2:0 are common JPEG formats.
> > I have not seen 4:1:1 in JPEG.
> >
>
> ok, my notation is a bit confused as I have only really recently started
> using it.
>
> by '4:1:1' (or 2x2:1:1), I had meant 2x2:1x1:1x1, or your 4:2:0 above.
>

4:1:1 and 4:2:0 are different. 4:1:1 subsamples chroma in 1-D
only: each chroma sample covers a 4x1 run of luma pixels, whereas
in 4:2:0 it covers a 2x2 block. for unknown reasons, 4:1:1 is not
commonly used.

> the encodings I was assuming were:
> 2x2:1x1:1x1 and 1x1:1x1:1x1.
>
> which, I had figured were the only 'common' ones.
> however, if others are common, I guess it makes sense to keep a more generic
> decoder.

well, here again, apart from 4:2:0 it is 4:2:2 which is most
commonly used. YUV444 (or in your notation 1x1:1x1:1x1 - what a
tedious job to write it) is not as common as YUV422 interleaved
(also known as YUYV, or its variants). The reason is obvious: we
don't need high resolution for U (Cb) or V (Cr).
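
To make the notation concrete, here is a minimal C sketch
(illustrative only - the helper name and image size are made up)
mapping the common schemes to chroma plane dimensions for a given
luma plane:

#include <stdio.h>

/* hsub/vsub: horizontal/vertical subsampling factors for Cb/Cr */
static void chroma_size(const char *scheme, int hsub, int vsub,
                        int w, int h)
{
    printf("%s: luma %dx%d, chroma %dx%d\n",
           scheme, w, h, w / hsub, h / vsub);
}

int main(void)
{
    int w = 640, h = 480;
    chroma_size("4:4:4", 1, 1, w, h);  /* full-resolution chroma      */
    chroma_size("4:2:2", 2, 1, w, h);  /* half horizontal, full vert. */
    chroma_size("4:2:0", 2, 2, w, h);  /* half in both directions     */
    chroma_size("4:1:1", 4, 1, w, h);  /* quarter horizontal, 1-D     */
    return 0;
}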

>
>
> > I don't see that dealing with these is a big deal though.
> > If you want to do a good job you use a half-band
> > filter for the up-converter.
> >
>
> my complaint here is not solely technical (or practical...).
>
>

> then again, the dct is a terrible beast as well, so a format like this can
> hardly be 'simple', but oh well...

Well, I was going through the different algorithms for the DCT.
There are quite a few decent and fast ones. And unquestionably,
doing processing in the DCT domain is far superior to doing it in
the spatial domain.

-N

Nishu

unread,
Nov 24, 2006, 2:11:43 AM11/24/06
to

Guido Vollbeding wrote:
> Nishu wrote:
> >
> > Then again, it is
> > not the pure DCT but a variant of the DCT (an integer transform)
> > which gained much popularity in recent video coding standards.
> > Isn't it that the DCT seems crippled in terms of compression
> > next to advanced prediction and VLE techniques?
>
> Yes, but this is a mistaken development.
> First they cripple the DCT and thus damage its scalability property
> (because they don't understand this correlation), and then they
> introduce some weird "SVC" (Scalable Video Coding) extension mode.
> That is a fantastic strategy to keep yourself busy, but does it
> make any sense? No, it makes no sense - can you see how stupid
> this is?
>

Your recommendation "ITU-T-JPEG-Plus-Proposal_R3.doc" claims that...
"---Annex B contains a short description of the underlying
"fundamental DCT property for image representation". This property
was found by the author during implementation of the new DCT scaling
features and is after his belief one of the most important discoveries
in digital image coding after releasing the JPEG standard in 1992---."
... which I found quite erroneous (and, really, a bit offensive
too), since the DCT scaling property is well known to people and
its implementation in the industry goes well beyond whatever the
date was in the Annex B snippet.

I believe it is a wrong notion that the industry doesn't know
about the DCT and its properties. The image processing industry
has been exploiting the linearity and scalability properties of
the DCT - but yes, maybe stealthily, to get an edge over
competitors, especially those dealing in embedded software
solutions.

I did like the newly proposed zigzag scanning, but I wonder how
one would come to know about the end of the 8x8 block during VLD,
so that this new scanning can exploit the scalability property.
A kind of end-of-block marker would need to be included in the
standard, which, though, is again an added overhead.
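
For reference, the conventional sequential-JPEG scan can be
written as a lookup table from scan position to natural
(row-major) coefficient index; the proposal's modified scan
differs and is not reproduced here. A minimal sketch:

/* Conventional JPEG zigzag scan order for an 8x8 block:
 * zigzag[k] is the row-major index of the k-th scanned
 * coefficient. */
static const int zigzag[64] = {
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63
};

/* Undo the scan: place coefficients received in scan order
 * back into their natural block positions. */
static void dezigzag(const int scanned[64], int block[64])
{
    for (int k = 0; k < 64; k++)
        block[zigzag[k]] = scanned[k];
}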

I didn't read the whole paper yet... but it is quite interesting.

Regards,
Nishu

Thomas Richter

unread,
Nov 24, 2006, 4:02:19 AM11/24/06
to
Nishu wrote:
> Guido Vollbeding wrote:
>
>
>>It confirms my firm finding that the core DCT is the proper base
>>for image coding and future advancements, and all other attempts
>>deviating from the core DCT are a waste of time, will prove as a
>>mistake and fail eventually.
>>Anyone still seriously pursuing other techniques is a fool.
>
>
> I wonder whether it is the DCT, or the quantization and VLE,
> which does more of the compression.

DCT doesn't do *any* compression. It's a linear, lossless transformation
(lossless, in principle: Leaving numerical errors aside). It is used as
a decorrelation transformation.
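
That is easy to check numerically: an orthonormal DCT-II followed
by its inverse returns the input up to floating-point error, so
the transform by itself discards nothing. A minimal sketch written
straight from the textbook definition (the sample data is
arbitrary):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

/* Orthonormal 1-D DCT-II: X[k] = sqrt(2/N)*c_k * sum_n x[n]*cos(...) */
static void dct(const double *x, double *X)
{
    for (int k = 0; k < N; k++) {
        double s = 0.0;
        for (int n = 0; n < N; n++)
            s += x[n] * cos(M_PI * (n + 0.5) * k / N);
        X[k] = s * sqrt(2.0 / N) * (k == 0 ? 1.0 / sqrt(2.0) : 1.0);
    }
}

/* Inverse (DCT-III), same normalization. */
static void idct(const double *X, double *x)
{
    for (int n = 0; n < N; n++) {
        double s = X[0] / sqrt(2.0);
        for (int k = 1; k < N; k++)
            s += X[k] * cos(M_PI * (n + 0.5) * k / N);
        x[n] = s * sqrt(2.0 / N);
    }
}

int main(void)
{
    double x[N] = {52, 55, 61, 66, 70, 61, 64, 73}, X[N], y[N];
    dct(x, X);
    idct(X, y);
    for (int n = 0; n < N; n++)
        printf("%g -> %g\n", x[n], y[n]);  /* equal up to rounding */
    return 0;
}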

> Pursuing other VLE techniques had good returns in
> terms of forming better video compression standards. Then again,
> it is not the pure DCT but a variant of the DCT (an integer
> transform) which gained much popularity in recent video coding
> standards. Isn't it that the DCT seems crippled in terms of
> compression next to advanced prediction and VLE techniques?

What do you mean by that?

So long,
Thomas

Nishu

unread,
Nov 24, 2006, 5:56:59 AM11/24/06
to

Thomas Richter wrote:
> Nishu wrote:
> > Guido Vollbeding wrote:
> >
> >
> >>It confirms my firm finding that the core DCT is the proper base
> >>for image coding and future advancements, and all other attempts
> >>deviating from the core DCT are a waste of time, will prove as a
> >>mistake and fail eventually.
> >>Anyone still seriously pursuing other techniques is a fool.
> >
> >
> > I wonder whether it is the DCT, or the quantization and VLE,
> > which does more of the compression.
>
> DCT doesn't do *any* compression. It's a linear, lossless transformation
> (lossless, in principle: Leaving numerical errors aside). It is used as
> a decorrelation transformation.
>

I agree with you. The DCT is not a compression tool at all; it is
an aid to compression.

> > Pursuing other VLE techniques had good returns in
> > terms of forming better video compression standards. Then again,
> > it is not the pure DCT but a variant of the DCT (an integer
> > transform) which gained much popularity in recent video coding
> > standards. Isn't it that the DCT seems crippled in terms of
> > compression next to advanced prediction and VLE techniques?
>
> What do you mean by that?
>

What I meant was that it isn't the DCT alone that achieves the
compression, but the advanced VLE and prediction techniques.
Leave out the DCT and use another transform, and you will still
get high compression - compared to what you lose by leaving out
the advanced prediction or VLE techniques.
It is said that the computationally expensive and cumbersome DCT
is not required in video coding, and that an integer transform
suffices.

-N

Thomas Richter

unread,
Nov 24, 2006, 9:22:09 AM11/24/06
to
Nishu wrote:

>>>Pursuing other VLE techniques had good returns in
>>>terms of forming better video compression standards. Then again,
>>>it is not the pure DCT but a variant of the DCT (an integer
>>>transform) which gained much popularity in recent video coding
>>>standards. Isn't it that the DCT seems crippled in terms of
>>>compression next to advanced prediction and VLE techniques?
>>
>>What do you mean by that?
>>
>
>
> What I meant was that it isnt DCT alone that is helping in achieving
> compression but advanced VLE and prediction techniques.

Sure. The compression is achieved by removing data in the quantizer.
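
In isolation, that lossy step is just uniform scalar quantization
of the transform coefficients. A minimal sketch, with arbitrary
example values (the coefficient and step size are made up):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double coeff = -415.38;  /* example DCT coefficient */
    int    q     = 16;       /* quantizer step size     */

    int    level = (int)lround(coeff / q);  /* stored as a small integer  */
    double recon = (double)level * q;       /* what the decoder gets back */

    printf("coeff %.2f -> level %d -> recon %.2f (error %.2f)\n",
           coeff, level, recon, coeff - recon);
    return 0;
}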

> Leave the DCT,
> use another transform, you will get high compression compartive to
> leaving the advanced prediction or vle techniques.

I don't quite see how this is an argument for or against the DCT.
It is one transform among many.

> It is said that computationally extensive and cumbersome DCT is not
> required in video coding techinques, and integer transform would
> suffice.

Well, the video world simply has different problems. First of all,
you typically work with dedicated hardware or DSP chips there -
and integer transformations are just simpler and cheaper in terms
of required chip functions. Second, you also need to consider that
video does motion prediction and motion estimation, and this step
must fit well with the decorrelation transformation.

To give you an example: One of the typical problems you get with
block-based motion prediction is that the image resulting from
motion compensation contains lots of blocks that partially overlap
and fall across the boundaries of the DCT blocks. This is bad
because the block edges are equivalent to high frequencies in the
DCT domain. On the other hand, DCT blocks are useful *because*
they can also be used as the basis for the motion estimation.

Thus, the basis transformation picked for video must fit well with
the motion part, but must also be able to handle the block
boundaries the motion part introduces, which is part of the
problem. Well, I do not know enough about the integer transform
the newer H.26x standards use, but it is likely that it was picked
exactly for this purpose.
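
For what it's worth, the 4x4 core transform of H.264/AVC is the
well-known example of such an integer approximation of the DCT,
with the normalization folded into the quantizer stage. A minimal
sketch (the input block and function name are just for
illustration):

#include <stdio.h>

/* The well-known H.264/AVC 4x4 core transform matrix C;
 * the scaling factors are absorbed into quantization. */
static const int C[4][4] = {
    { 1,  1,  1,  1 },
    { 2,  1, -1, -2 },
    { 1, -1, -1,  1 },
    { 1, -2,  2, -1 }
};

/* Forward core transform: Y = C * X * C^T, integer-only. */
static void core_transform(const int X[4][4], int Y[4][4])
{
    int T[4][4];

    /* T = C * X */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            T[i][j] = 0;
            for (int k = 0; k < 4; k++)
                T[i][j] += C[i][k] * X[k][j];
        }

    /* Y = T * C^T */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            Y[i][j] = 0;
            for (int k = 0; k < 4; k++)
                Y[i][j] += T[i][k] * C[j][k];
        }
}

int main(void)
{
    int X[4][4] = { { 5, 11,  8, 10}, { 9,  8,  4, 12},
                    { 1, 10, 11,  4}, {19,  6, 15,  7} };
    int Y[4][4];
    core_transform(X, Y);
    for (int i = 0; i < 4; i++)
        printf("%6d %6d %6d %6d\n", Y[i][0], Y[i][1], Y[i][2], Y[i][3]);
    return 0;
}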

So long,
Thomas
