
Jules Gilbert's Compression Technology


Jules Gilbert
Aug 17, 1996

Well, I'm going to give the system a shot. I'm filing.

After numerous off-line discussions with various members of this community, I
decided to file my patents and executed the initial process.

One application relates to MR1, a lossless compressor/decompressor system for
conveying a list of random-appearing data values.

A second application relates to MR2, a lossless compressor/decompressor system
which conveys signals in such a manner as to avoid the 'Shannon constraints'
imposed on conventional signalling methods.

I am interested in hearing from others who have experience with compression
patents.

Jules Gilbert

U137
Aug 18, 1996

One other question. When compressed data reaches an entropic state, the
distribution of bits evens out to about 50/50, meaning statistical
analysis must fail, as every byte has a 1/256 chance of occurring,
indicating that the entropy of the message is the message. This is similar to the
question I posed earlier on, "What is the theoretical compression limit
for 2 bytes generated at random?" The answer is 16 bits. (That's not
mine.)
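
For reference, the 16-bit figure is just the entropy of two independent, uniformly distributed bytes; no lossless code can average fewer bits than the source entropy:

$$ H = -\sum_{i=0}^{255} \tfrac{1}{256}\log_2\tfrac{1}{256} = 8 \text{ bits per byte}, \qquad 2 \times 8 = 16 \text{ bits}. $$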

Will your compression program compress evenly distributed data streams,
where the entropy is at its absolute highest?

U137
Aug 18, 1996

We have just been issued a patent (U.S. #5,533,051) and have several more
pending on a new method for data compression. It will compress all types of
data, including "random", and data containing a uniform distribution of
"0's" and "1's".

In a few days I will be posting an article discussing one of our methods.
It will be entitled "Hyper Space(R): a method for data compression". This
particular method is patent pending, but I am releasing the source code
and the core encoding and decoding algorithms.

The code is written in BASIC and is about 5K.

Tom Lane
Aug 19, 1996

u1...@aol.com (U137) writes:
> We have just been issued a patent (U.S. #5,533,051) and have several more
> pending on a new method for data compression. It will compress all types of
> data, including "random", and data containing a uniform distribution of
> "0's" and "1's".

It's not bad enough we have Jules Gilbert flooding the newsgroup with
mathematically-impossible lies, we have to have more crap from you too?

Go away.

regards, tom lane
organizer, Independent JPEG Group

Memo to USPTO: your policy for perpetual motion machine patents
("give us a working model first") ought to be applied to universal
compression claims too.

U137
Aug 19, 1996

Thanks for your reply.

Let me know when you have figured the algorithm out. Of course, it may be
too difficult for you. 28 lines of BASIC code is really complex.


Steve Tate
Aug 19, 1996

TroyL4857 (troy...@aol.com) wrote:

> One of the founding principles of Chaos theory is that seemingly random
> series often (if not always) have an underlying (and often simple) order.
> This "order" means that seemingly random data can be expressed as a
> formula (that's one reason why it's random APPEARING data instead of just
> random data). Therefore, in order to compress any long series of numbers
> one need only find the underlying formula.

Some series that seem random do indeed have an underlying structure.
That simply makes them "complex", it does not make them "random".
There *are* truly random sequences, where randomness can be described
by Kolmogorov complexity. Those sequences cannot be compressed in any
way, shape, or form. Period.
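
The counting behind that statement is short enough to state here: there are $2^n$ strings of length $n$ but strictly fewer descriptions (programs) shorter than $n$ bits, so for every length some strings cannot be compressed at all:

$$ \#\{\text{programs shorter than } n \text{ bits}\} \le \sum_{k=0}^{n-1} 2^k = 2^n - 1 < 2^n = \#\{\text{strings of length } n\}. $$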

> Of course, if I could tell you how to find the underlying formula I'd be
> in Scandinavia accepting the Nobel Prize instead of posting to a
> newsgroup. Still, while we currently are unable to do RAD compression it
> is hardly "mathematically-impossible".

A complex and non-random series may certainly be compressible if we
knew of a way to efficiently come up with a description of that
sequence. But Jules included truly random data (not just "complex"
data) in his definition of "RAD". Truly random data simply cannot be
compressed. If you don't understand the argument behind this, then I
suggest you either learn the required mathematics first or stop making
irresponsible claims.

--
Steve Tate --- s...@cs.unt.edu | "As often as a study is cultivated by narrow
Dept. of Computer Sciences | minds, they will draw from it narrow
University of North Texas | conclusions." -- John Stuart Mill, 1865.
Denton, TX 76201 |

Charles Bloom
Aug 19, 1996

>It's not bad enough we have Jules Gilbert flooding the newsgroup with
>mathematically-impossible lies, we have to have more crap from you too?

It's really getting ridiculous. Perhaps we should all agree to move
to c.c.research to take advantage of the moderation...

--------------------------------------------
Charles Bloom cbl...@mail.utexas.edu
http://wwwvms.utexas.edu/~cbloom/index.html
opinions expressed may not be mine


Andy McFadden
Aug 19, 1996

In article <4vahf4$b...@geraldo.cc.utexas.edu>,
Charles Bloom <cbl...@mail.utexas.edu> wrote:
>Your message symbolizes the fundamental misconception that many people
>have.
[...]
>It IS mathematically impossible to compress all data. No matter what
>kind of transforms you do on the data to try to find "underlying order"
>you will not be able to pack all data. Why? Because you will
>have to write more flags to indicate the transforms you have done
>than you will gain from doing those transforms.


This is also why certain compression claims (e.g. the WEB 16:1 compressor)
have always stated that it won't work beyond a certain point. If you
claim that you can pack 16 bits into 15, it's obvious to anyone with a
pencil and spare time that you can't do it. At 16K or 100K though it
starts to sound like there's enough bits floating around to make it work.

The mathematics involved tell us that it doesn't matter if you have 2 bits,
16 bits, or 2GB. There's a fundamental problem with trying to store N bits
of information in N-1 bits. Any algorithm that shortens some inputs must store
some other set of inputs in N+1 or more bits.


Another way of looking at it is the code snippet Mr. Bloom posted recently,
which compressed an 8-bit byte into an average of 6 bits. Too bad you'd
have to store the 3-bit length as well. You might think, "hey, if I
compress more than 8 bits with the same technique, I'll eventually reach a
break-even point where the savings is larger than the number of bits needed
to encode the length". It reduces to the same problem, though, where
you're trying to guarantee that you can take N bits and store them in fewer
than N... and anyone who could do that would be able to compress an arbitrarily
large amount of data down to almost nothing.
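
To make the counting concrete, here is a minimal C sketch that simply tallies the pigeonhole argument for 3-bit inputs; the numbers, not the program, are the point, and the same tally works for any N:

#include <stdio.h>

/* Pigeonhole tally: a lossless code that maps every 3-bit input to at
 * most 2 bits has fewer codewords than inputs, so two different inputs
 * must share a codeword and unique decoding becomes impossible. */
int main(void)
{
    int inputs = 1 << 3;                         /* 2^3 = 8 possible 3-bit inputs */
    int codes  = (1 << 0) + (1 << 1) + (1 << 2); /* strings of 0, 1 or 2 bits: 7  */

    printf("3-bit inputs:                %d\n", inputs);
    printf("codewords of at most 2 bits: %d\n", codes);
    if (codes < inputs)
        printf("=> at least two inputs must collide\n");
    return 0;
}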

--
fad...@netcom.com (Andy McFadden) [These are strictly my opinions.] PGP

Anyone can do any amount of work provided it isn't the work he is
supposed to be doing at the moment. -- Robert Benchley

U137
Aug 19, 1996

Lighten up, Charles. Free speech and all that. You're the scientist. This is
how we move forward: proof. Don't run; either it works or it doesn't. If
his algorithm works, the marketplace will reward him. If he doesn't want
to show the algorithm, so what. Even when you do show the algorithm and
a description of how it works, and have a patent, people still think it's
worthless.

Probably good that Stac decided to go for a patent. I think they ended up
with 100 million good reasons. Don't trash someone else's work until you
are convinced; trying to do something different is never easy on a good
day.

U137
Aug 19, 1996

How about coming up with a method that uses implied reasoning to determine the
size? Now that's productive.

TroyL4857
Aug 19, 1996

t...@netcom.com (Tom Lane) writes:

>It's not bad enough we have Jules Gilbert flooding the newsgroup with
>mathematically-impossible lies, we have to have more crap from you too?

I've been reading this newsgroup for quite a while, and while I think Jules
Gilbert clearly does not have the groundbreaking product he claims to
have, I'm equally frustrated with those of you who insist that RAD
compression is "mathematically-impossible". While I concede that such
compression is currently unachievable, it's hardly "impossible".

One possible theoretical way to perform RAD compression is to translate
data into a formula which, when given a successive series of whole numbers
produces the desired output.

First, let's take a non-RAD example:

the data: 1234567891011121314151617181920

can be expressed as "1X20", where 1 is the starting value, X is the formula
to be applied, and 20 is the number of iterations. By reducing the
series to a formula we can store a series of almost any length while adding
only a single digit per order of magnitude.
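
Taken literally, the scheme above is only a tiny decoder. As an illustrative sketch (the descriptor format and the expand() helper are invented here, and the follow-ups below explain why this cannot cover arbitrary data):

#include <stdio.h>

/* Illustrative decoder for the "1X20" idea: regenerate the decimal
 * numbers start..count concatenated together.  It covers only this one
 * family of sequences, which is exactly the objection raised below. */
static void expand(int start, int count)
{
    int i;
    for (i = start; i <= count; i++)
        printf("%d", i);
    putchar('\n');
}

int main(void)
{
    expand(1, 20);   /* prints 1234567891011121314151617181920 */
    return 0;
}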

One of the founding principles of Chaos theory is that seemingly random
series often (if not always) have an underlying (and often simple) order.
This "order" means that seemingly random data can be expressed as a
formula (that's one reason why it's random APPEARING data instead of just
random data). Therefore, in order to compress any long series of numbers
one need only find the underlying formula.

Of course, if I could tell you how to find the underlying formula I'd be
in Scandinavia accepting the Nobel Prize instead of posting to a
newsgroup. Still, while we currently are unable to do RAD compression it
is hardly "mathematically-impossible".


Troy Lowry
V.P. of Research and Development
Polaris Systems
Malvern PA
(610)754-8703

Jules Gilbert
Aug 19, 1996

In article <4v9uqo$p...@newsbf02.news.aol.com>,

TroyL4857 <troy...@aol.com> wrote:
>t...@netcom.com (Tom Lane) writes:
>
>>It's not bad enough we have Jules Gilbert flooding the newsgroup with
>>mathematically-impossible lies, we have to have more crap from you too?
>
>I've been reading this newsgroup for quite a while, and while I think Jules
>Gilbert clearly does not have the groundbreaking product he claims to
>have, I'm equally frustrated with those of you who insist that RAD
>compression is "mathematically-impossible". While I concede that such
>compression is currently unachievable, it's hardly "impossible".
>
---------------------------------------------------------------------------

65,63,248,89,52,63,166,127,235,192,172,12,125,64,100,55,214,
65,20,52,245,192,176,145,136,64,187,219,41,193,152,206,124,193,
210,103,59,65,43,168,110,66,117,218,19,65,43,115,206,194,48,
138,100,193,165,9,119,65,139,190,67,0,2,16,0,5,4,2,6,3,5,3,
3,3,3,3,4,5,5,5,4,4,3,5,3,5,5,4,3,3,4,2,3,1,6,4,4,3,
4,3,4,3,4,5,3,3,3,4,3,5,3,3,2,6,3,3,3,3,4,5,4,2,4,1,
6,2,3,3,3,2,3,3,2,3,3,3,3,2,3,2,4,2,3,2,3,2,3,2,3,3,
- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -
about 200 lines cut
- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -
4,4,3,3,3,4,5,3,3,2,4,1,17,2,4,2,3,3,3,3,2,4,2,4,2,3,
3,2,3,3,3,3,4,2,3,3,2,3,3,2,3,3,3,2,3,2,4,2,4,2,3,3,
2,3,3,3,3,2,3,3,2,3,3,2,3,2,3,3,1,0

The above fragment is an ASCII representation of the output bytes. This is the result of
compressing a 64k buffer. This required perhaps 15 seconds of CPU time on a
P130 machine.

So mind your manners!

Charles Bloom
Aug 19, 1996

>Of course, if I could tell you how to find the underlying formula I'd be
>in Scandinavia accepting the Nobel Prize instead of posting to a
>newsgroup. Still, while we currently are unable to do RAD compression it
>is hardly "mathematically-impossible".

Your message symbolizes the fundamental misconception that many people
have.

It is not a question of processor power or difficult algorithms or
anything of the sort.

It IS mathematically impossible to compress all data. No matter what
kind of transforms you do on the data to try to find "underlying order"
you will not be able to pack all data. Why? Because you will
have to write more flags to indicate the transforms you have done
than you will gain from doing those transforms.

--------------------------------------------

Heinrich Opgenoorth
Aug 20, 1996

In <4va0pf$g...@hermes.acs.unt.edu> s...@zaphod.csci.unt.edu (Steve Tate) writes:

>TroyL4857 (troy...@aol.com) wrote:
>> Of course, if I could tell you how to find the underlying formula I'd be
>> in Scandinavia accepting the Nobel Prize instead of posting to a
>> newsgroup. Still, while we currently are unable to do RAD compression it
>> is hardly "mathematically-impossible".

>A complex and non-random series may certainly be compressible if we


>knew of a way to efficiently come up with a description of that
>sequence.

I think even this simpler problem (finding the algorithm to generate a
given complex series) is at least NP-complete (if not undecidable).
Volunteers to do the proof, step forward.

>But Jules included truly random data (not just "complex"
>data) in his definition of "RAD". Truly random data simply cannot be
>compressed.

Yup. And I noticed that in this discussion, everybody seems to use his
own definition of "RAD", since no exact definition was ever given ---
goes to show what comes of using wishy-washy terms introduced by
someone like Jules Gilbert.

So if "RAD" means "looks random to the human eye, but is in fact not
random" then, OK, compression is possible (e.g., the output of gzip
still has some redundancies left).

But if we take the posting of J.G., where he included truly random data
in his definition, then "RAD" means just "random" and we should not
use the misleading term "RAD" at all.

Heinrich

(Boy, this Jules Gilbert thing really makes me tired. Been following
this silently for quite a while now. OTOH, in some way it is amusing
to see Gilbert argue: "Of course I know 2 + 2 = 4. Nobody can get
around that. But I can anyway.")

--
Heinrich Opgenoorth opgen...@gmd.de (+49 22 41) 14 22 77
GMD - German National Research Center for Information Technology

Jules Gilbert
Aug 20, 1996

I have not been saying "mathematically impossible lies" and to say that about
me is simple libel.

I am moving to comp.compression.research, and if I had known more about the
dynamics of this community that is where I would have gone first.

As to my remarks, (for the very last time):

I have a class of methods that make it possible to do two things:

1) Compress random-appearing data, and

2) Convey information across a channel without 'information theory'
constraints, as perhaps first/best described by Shannon. (For the
signal conveyed, normal constraints apply.)

These are not two separate developments, but the result of a single apparatus.

The 'channel' described above can be part of an actual communication channel or
storage media.

Jules Gilbert
<cof...@ici.net>

Steve Rencontre
Aug 20, 1996

In message <4vb8cb$e...@soran.ici.net>, cof...@soran.ici.net (Jules Gilbert) said:

> 65,63,248,89,52,63,166,127,235,192,172,12,125,64,100,55,214,
> 65,20,52,245,192,176,145,136,64,187,219,41,193,152,206,124,193,
> 210,103,59,65,43,168,110,66,117,218,19,65,43,115,206,194,48,
> 138,100,193,165,9,119,65,139,190,67,0,2,16,0,5,4,2,6,3,5,3,
> 3,3,3,3,4,5,5,5,4,4,3,5,3,5,5,4,3,3,4,2,3,1,6,4,4,3,
> 4,3,4,3,4,5,3,3,3,4,3,5,3,3,2,6,3,3,3,3,4,5,4,2,4,1,
> 6,2,3,3,3,2,3,3,2,3,3,3,3,2,3,2,4,2,3,2,3,2,3,2,3,3,
> - -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -
> about 200 lines cut
> - -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -
> 4,4,3,3,3,4,5,3,3,2,4,1,17,2,4,2,3,3,3,3,2,4,2,4,2,3,
> 3,2,3,3,3,3,4,2,3,3,2,3,3,2,3,3,3,2,3,2,4,2,4,2,3,3,
> 2,3,3,3,3,2,3,3,2,3,3,2,3,2,3,3,1,0
>
> The above fragment is an ASCII representation of the output bytes. This is the result of
> compressing a 64k buffer. This required perhaps 15 seconds of CPU time on a
> P130 machine.
>
> So mind your manners!

Jules, if you're worried about manners, don't you think it's time you stopped
posting such meaningless junk?

Can't you understand that *EVEN IF YOU COULD* do what we say you can't, the
above is utterly, totally, wholly, unequivocally, indubitably, unquestionably,
literally, definitely, positively, absolutely content-free?

The ability to write a list of partially sorted integers does not demonstrate
*ANYTHING*.

-----------------------------------------------------------------------
Steve Rencontre | st...@dstrip.demon.co.uk (business)
If it works, it's obsolete. | stev...@cix.compulink.co.uk (private)
-----------------------------------------------------------------------

Joseph H Allen
Aug 20, 1996

In article <4vc9h3$j...@soran.ici.net>,
Jules Gilbert <cof...@soran.ici.net> wrote:

>I am moving to comp.compression.research, and if I had known more about the
>dynamics of this community that is where I would have gone first.

I feel legal pressure starting to build on the comp.compression.research
moderator....

--
/* jha...@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Mark Nelson
Aug 20, 1996

TroyL4857 wrote:
>
> t...@netcom.com (Tom Lane) writes:
>
> I've been reading this newsgroup for quite a while, and while I think Jules
> Gilbert clearly does not have the groundbreaking product he claims to
> have, I'm equally frustrated with those of you who insist that RAD
> compression is "mathematically-impossible". While I concede that such
> compression is currently unachievable, it's hardly "impossible".
>
> One possible theoretical way to perform RAD compression is to translate
> data into a formula which, when given a successive series of whole numbers
> produces the desired output.

Just last week, with great hubris I proposed that this concept could perhaps
be labeled "Nelson's Fallacy."

Yes, you can use very simple formulae to generate nice long strings of
numbers. Perhaps even random-appearing if you like.

But you will drown in the overwhelming flood of sequences that can't be modeled
using any simple algorithm. There are just too many of them.

For example, use a nice iterative formula that produces a chaotic stream of
numbers based on a one byte seed. How many unique sequences can this
generator create? No more than 256. Now, how many unique four byte sequences
exist? Quite a bit more than 256. Imagine how easy it would be to find
a generator to exactly match the contents of, for example, a disk sector.
It just isn't going to happen!
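
Spelling out that comparison (and assuming a 512-byte disk sector): a one-byte seed can select at most 256 distinct outputs, a vanishing fraction of the possible data blocks:

$$ 2^8 = 256 \ll 2^{32} = 4{,}294{,}967{,}296 \ll 2^{8 \cdot 512} = 2^{4096}. $$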


-----------------------------
Mark Nelson
ma...@tiny.com
http://web2.airmail.net/markn

Jeppe Øland
Aug 20, 1996

Charles Bloom wrote:
> It IS mathematically impossible to compress all data. No matter what
> kind of transforms you do on the data to try to find "underlying order"
> you will not be able to pack all data. Why? Because you will
> have to write more flags to indicate the transforms you have done
> than you will gain from doing those transforms.

So why not just stop talking about it ... sooner or later he's gonna
publish his methods (and either be ridiculed or famous) :-)

Graeme Gill
Aug 21, 1996

Jules Gilbert (cof...@soran.ici.net) wrote:
| As to my remarks, (for the very last time):
| I have a class of methods that make it possible to do two things:

| 1) Compress random-appearing data, and
| 2) Convey information across a channel without 'information theory'
| constraints, as perhaps first/best described by Shannon. (For the
| signal conveyed, normal constraints apply.)

| These are not two separate developments, but the result of a single apparatus.
| The 'channel' described above can be part of an actual communication channel or
| storage media.

Sounds a lot like the Burrows-Wheeler Transform, as described in Dr. Dobb's,
September 1996, page 46.

Graeme Gill.

c: *.*
Aug 21, 1996

Anyone ever figure out just how much compression is needed to fit a Human
Being, or a Redwood Tree, into a strand of DNA? Now, I've heard that if you
unravel a DNA molecule, it will be over 3 feet long, and there is more
than one of them (23 chromosome pairs, mind you), but you gotta figure
how many billions of complex chemical reactions are contained in an
organic body. Enzymes, catalysts, etc. Also, whatever we call
intelligence has also got to be encoded. And this compression must be
lossless. I gotta figure it has to be more than 3 or 4 to 1.

Dave.


Jules Gilbert
Aug 21, 1996

>
> In article <TGL.96Au...@netcom17.netcom.com>,

> Tom Lane <t...@netcom.com> wrote:
> >u1...@aol.com (U137) writes:
> >>We have just been issued a patent (U.S. #5,533,051) and have several more
> >>pending on a new method for data compression. It will compress all types of
> >>data, including "random", and data containing a uniform distribution of
> >>"0's" and "1's".
> >
>It's not bad enough we have Jules Gilbert flooding the newsgroup with
>mathematically-impossible lies, we have to have more crap from you too?
>
>Go away.
>
> regards, tom lane
> organizer, Independent JPEG Group
>
>Memo to USPTO: your policy for perpetual motion machine patents
>("give us a working model first") ought to be applied to universal
>compression claims too.


Tom:

Your remarks about me are ill-founded, libelous, and plainly malicious.

More than that, you seem intent on claiming that I must be lying because I
won't disclose my method. Now it is true that I have no intention of telling
you (and indeed, I doubt if I would do business with any company that knowingly
allowed its consultants/employees to behave as you do).

But, for the record, I have been telling you (and others) the truth when I say
that I have the process I've been describing. After reviewing the various
comp.compression remarks with my attorney, I think the worst I've done is to
merge some of the specific characteristics of MR1 and MR2, but I have exactly
what I've been saying I have, a process for compressing random-appearing
data.[1]

Tom, this is not the first time you've made these kinds of remarks about me.
Frankly, I'm at the point where I'm searching for redress through the criminal
courts. Can I do this? Ask me in two weeks or so.

Your behavior is beginning to remind me of that fellow in England who has no
concern for the truth of things, he just wants to be a member of a mob.

I have no idea whether patent (US. #5,533,051) has merit or not. But neither
that inventor or I should be treated so unfairly by you.

Maybe you are judgement-proof today, Tom, but a judgement is good essentially
forever, and since you've intentionally provoked me more than once, do not
expect me to treat you kindly.

If you expect to prevail in court because I could not have what I claim, you
will experience what modern society calls 'a wake-up call'. Not only do I have
what I claim, but I can prove it to the satisfaction of a court. And you won't
learn anything, as the court very likely will appoint a 'Master' to do the
review, and he will simply report YES or NO.

(I've been told that it would be very unlikely that my trade-secrets would leak
when I proceed against you or someone else for the kind of remarks you have
made.)

[1] My process is the only compressor which avoids the 'counting' problem,
which limits all other compressor technologies to single compressive
instances. With my technology, it is possible to compress large files as
much as 100 times, and especially large files perhaps as much as 1000:1.
(This is not done with a single pass over the data, but with several
iterative passes.)

How is this possible? (Or 'where do I put the missing bits?') Of course,
that's what this imbroglio is about.

U137
Aug 21, 1996

Jules,

I have some plain advice for you. Take two computers, compress on one and
decompress on the other. Start with a file size greater than a 1.44 floppy
can store, and then compress your data. I am more than willing to generate
a file for you. How about 2MB of random data.

Having spent the last few days answering a flood of e-mail, some good,
some nasty and some respectful, if you really have what you say, then the
riches will be yours. No more debate is needed. I put up our patent, our
source code and a description of how it works. It was reviewed, and of the
people who spent the time to work through it, nothing earth-shattering
appeared.

So be it. Maybe one day the algorithms will be of use; until then I will
move on. My partner and I spent a total of 19 man-years on this project.
Do I regret it? Not one bit. I learned more than I ever dreamed. I met
interesting people and got to argue with the best.

However, there comes a time to be gracious in defeat, and to respectfully
thank those people who reviewed the document.

You only have to do one thing. Demo the product. Nothing more. Go for it.
And then be prepared to either be welcomed like a hero or, you know the
rest.

I sincerely wish you luck, but don't waste any more time, if you have it.
Demo it.

Gord Cormack
Aug 21, 1996
to cof...@soran.ici.net

>
> From: cof...@soran.ici.net (Jules Gilbert)

> As to my remarks, (for the very last time):

^^^^^^^^^^^^^^

So have you given up on the Calgary Corpus? You said you could compress
it to 50K, and you also said that you could decompress it with a program
that gcc compiles to 20K. Do it and make available the following:

- the original file
- the compacted file
- the decompression program (PGP encrypted if you like)
- the expanded file (which should be identical to the original)
- a statement by you that you created the expanded file from the
compacted file and the decompression program and nothing else

I do not believe you can truthfully post such a result.

--
Gordon V. Cormack CS Dept, University of Waterloo, Canada N2L 3G1
gvco...@uwaterloo.ca http://cormack.uwaterloo.ca/cormack


Mark Nelson
Aug 21, 1996

This doesn't seem to be the case. The human genome seems to have
a fair amount of static, plus some possible redundancy. Without
knowing much about it, I think that the low relative cost of
bandwidth has prevented the creation of compression. However,
there is potentially a *very* high cost associated with transcription
errors, so coding has evolved to attack that problem.

Tom Lane
Aug 22, 1996

I'm honored. Gilbert has now decided that open, in-front-of-all-the-world
legal intimidation is the way to resolve his differences with his critics,
and I seem to be it. Behind-the-scenes threats (as I presume happened
with Charles Bloom) apparently won't do the job any more.

Gilbert, I've said it before and I'll say it again: if you want us to
believe you, then post the *full* details of a working algorithm.
Nothing less will convince me that you can do what you say. If you're
not prepared to do that, then go away; stop wasting bandwidth in this
newsgroup, because you're not accomplishing anything except wasting our
time and yours.

If you really can do what you claim, there is no reason for you to give
a damn what the readership of comp.compression thinks --- you'll be able
to get quite decently rich without us. The fact that you're still here,
trying to convince us of an unbelievable claim without giving any real
evidence, does less than nothing for your credibility.

regards, tom lane

Christopher Burke
Aug 22, 1996


Maybe if a deaf ear is turned toward Mr Gilbert, he will get tired of
posting and just go away.

Jules, why don't you take your nifty idea and go make a million
dollars, then come back and make us all feel stupid for doubting your
incredible new technology. Or better yet, get it published in a
respectable journal. Until then, would you please stop polluting this
newsgroup?

I'm petitioning for a newsgroup to be created named
alt.compression.IknowitsimpossibleButIcandoitButImnotgonnashowyouhowAndIf
YoutryanddebunkmeIllgetmylawyertosendyouareallynastylettersothere for
any further posts to this thread.

CB

Carsten Wiethoff
Aug 22, 1996

Jules Gilbert wrote:

> [1] My process is the only compressor which avoids the 'counting' problem,
> which limits all other compressor technologies to single compressive
> instances. With my technology, it is possible to compress large files as
> much as 100 times, and especially large files perhaps as much as 1000:1.
> (This is not done with a single pass over the data, but with several
> iterative passes.)
>
>
> How is this possible? (Or 'where do I put the missing bits?') Of course,
> that's what this imbroglio is about.

Hey, I found an algorithm to accomplish exactly what Jules Gilbert is
claiming. And it's so simple that I will put it in the public domain :-)

The algorithm is even more powerful: with careful application you can
reduce ANY file to ZERO bits. I will prove it.

The algorithm is as follows: one repetition is guaranteed not to increase
the number of bits and will eventually decrease it. So after enough
repetitions the size of the file will be ZERO bits. All you have to
remember is the number of cycles to apply to reconstruct the file.

As an algorithm for the step, I suggest viewing the file as a big binary
number (with leading zeroes) and subtracting one until you reach a file
with only zeroes, but the same length as the original. These operations
do not reduce the length (they only make the file more compressible :-).
When you have reached a file with only zeroes, you reduce the length by
one and set all bits to one. This reduces the length of the file. You
then repeat the process until you have reached the desired size of the
file.

Unfortunately this rough implementation has two drawbacks. For longer
files the algorithm will take a LONG time; I hope to improve this by
clever assembler hacking. And of course you have to remember the number
of times that you applied the step, because you need this number to
reconstruct the file. Maybe Jules Gilbert knows a way to avoid this.
Unfortunately the binary representation of this number will usually be
larger than the original file...
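
In the same spirit, a toy C sketch of the step just described; purely illustrative, workable only for a few bytes, and the buffer contents and helper names are made up here:

#include <stdio.h>
#include <string.h>

/* Treat the buffer as one big-endian binary number, subtract 1 until it
 * is all zeroes, then drop a byte and set the remaining bits to one.
 * The step count needed to undo this is about as large as the number
 * the original file encodes -- which is the whole joke. */

static int all_zero(const unsigned char *b, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (b[i]) return 0;
    return 1;
}

static void decrement(unsigned char *b, size_t n)   /* big-endian minus 1 */
{
    size_t i;
    for (i = n; i-- > 0; ) {
        if (b[i]-- != 0) return;  /* no borrow needed, done            */
        /* b[i] was 0 and wrapped to 0xFF: keep borrowing leftwards    */
    }
}

int main(void)
{
    unsigned char buf[2] = { 0x01, 0x02 };   /* the "file": two bytes */
    size_t len = sizeof buf;
    unsigned long long steps = 0;

    while (len > 1) {                 /* "compress" down to one byte  */
        while (!all_zero(buf, len)) { /* count down to all zeroes     */
            decrement(buf, len);
            steps++;
        }
        len--;                        /* drop a byte ...              */
        memset(buf, 0xFF, len);       /* ... and set all bits to one  */
        steps++;
    }
    printf("compressed to %lu byte(s) after %llu steps\n",
           (unsigned long)len, steps);
    return 0;
}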

--
Carsten Wiethoff Siemens AG AUT 33 E1
email: wiethoff%sag...@germany.eu.net Tel.: +49/30/386-24679

TroyL4857
Aug 22, 1996

cof...@soran.ici.net (Jules Gilbert) writes:

>Tom, this is not the first time you've made these kinds of remarks about me.
>Frankly, I'm at the point where I'm searching for redress through the criminal
>courts. Can I do this? Ask me in two weeks or so.

To win a libel case you need to prove that the libel caused damage to your
person. While you have an outside chance of winning such a case (love
that jury system!) it's unlikely because:

1) Those who have disbelieved you have stated well established
mathematical theory to be their reason for disbelief.
2) You have offered no REAL proof that your product works.
3) When asked for proof you give us a lot of chatter and meaningless data.

The fact is that even if your product works as well as you claim, industry
experts claiming your product can't work due to mathematical laws is
unlikely to be seen as libel. People aren't saying your product doesn't
work as part of a malicious attempt to ensure your potential customers
don't buy your product; they're saying it doesn't work because they believe
that, mathematically, it CAN'T work.

Tom Lane
Aug 22, 1996

> c: *.* wrote:
>> Anyone ever figure out just how much compression is needed to fit a Human
>> Being, or a Redwood Tree into a strand of DNA?

Actually, I believe that DNA is a pretty darn inefficient coding.
According to current biological theory, the useful info content is only
two bits per base (since DNA is made of just four different bases, and
the sequencing of bases conveys all the information that cells use).
Now there are several dozen atoms in each base --- I forget how many,
exactly, but no one dreaming of quantum computing is going to be
impressed with that kind of information density ;-).

And as Mark pointed out, the encoding of real DNA is mighty wasteful
even if you take the bit sequence at face value --- every species' DNA
that's been examined seems to have large chunks that do nothing, and
probably are just leftover historical artifacts.

I'm no expert though. Those who would know about this hang out in
the bionet.molbio.* newsgroups ... go ask there if you really wanna
know.

regards, tom lane

Andy McFadden
Aug 22, 1996

In article <4vgfdg$j...@newsbf02.news.aol.com>, U137 <u1...@aol.com> wrote:
>I have some plain advice for you. Take two computers, compress on one and
>decompress on the other. Start with a file size greater than a 1.44 floppy
>can store, and then compress your data. I am more than willing to generate
>a file for you. How about 2MB of random data.


Looking through the recently-posted comp.compression FAQ, which has a
greatly expanded section on impossible claims, I found something that needs
to be featured prominently in the Jules Gilbert's Greatest Hits
compilation.

From a posting by him, on May 14th, regarding a demo that never happened:

I will compress the CALGARY-CORPUS for transfer from the src-CPU onto 3.5"
disks and transfer it (by sneaker-net) to the other machine for decompression
and produce a perfect copy of the CORPUS file on the 'dst-CPU'.

[...]

I claim that I can perform this process several times, and each iteration
will reduce the overall file by at least 50%, ie., a ratio of 2:1. An
'iteration' will constitute copying, using compression, from the src-CPU to
the dst-CPU, and then reversing the direction to achieve another iteration.


Read the last line again, it's kinda neat. After the first iteration you
can have a completely expanded copy of the file on both machines. Using
this "technology" you could compress the file down to any size you want,
and expand it on either machine, since both machines have a full copy of
the original.

You could even tuck the "compressed" data into the decompressor executable,
if you copy the decompressor and the data at the same time. With suitable
sleight-of-hand, you could have a fully working demo of something that's
impossible.

It's nice to know that there's still room for magic in a field like data
compression. :-)


Somebody recently posted the correct order of operations, which is:

1. copy the decompressor to the target machine.
2. copy the file to be compressed onto the source machine.
3. compress the file onto a floppy, then copy it onto the
destination machine from the floppy.
4. expand the file with the executable from step 1.

Any deviation from the above makes the demonstration invalid.

--
fad...@netcom.com (Andy McFadden) [These are strictly my opinions.] PGP

Anyone can do any amount of work provided it isn't the work he is

supposed to be doing at the moment. -- Robert Benchley

Steve Rencontre
Aug 22, 1996

In message <4vfvr2$5...@soran.ici.net>, cof...@soran.ici.net (Jules Gilbert) said:

[regarding Tom Lane]

> Tom, this is not the first time you've made these kinds of remarks about me.
> Frankly, I'm at the point where I'm searching for redress through the criminal
> courts. Can I do this? Ask me in two weeks or so.
>

> Your behavior is beginning to remind me of that fellow in England who has no
> concern for the truth of things, he just wants to be a member of a mob.

Hi again from "that fellow in England".

Jules, you're so full of crap. I don't normally bother with killfiles, but
I'm starting to think I should make an exception. You have not said anything
new in a long time, and it is still rubbish. You went very quiet when I gave
you some reasons to think that suing me might not be in your best interest,
and now you are threatening someone else.

JULES GILBERT: YOU ARE A LIAR AND A FRAUD, OR AT BEST FOOLISHLY DELUDED.

NOW SUE ME!

OTHERWISE WILL YOU SHUT THE FUCK UP UNTIL YOU HAVE SOMETHING WORTH SAYING?!!!

> [snip]


> [1] My process is the only compressor which avoids the 'counting' problem,

> [snip]

There is no such process. You are lying or hallucinating. Do I make myself
clear?

Sampo Syreeni
Aug 22, 1996

Tom Lane wrote:

> Actually, I believe that DNA is a pretty darn inefficient coding.
> According to current biological theory, the useful info content is only
> two bits per base (since DNA is made of just four different bases, and
> the sequencing of bases conveys all the information that cells use).
> Now there are several dozen atoms in each base --- I forget how many,
> exactly, but no one dreaming of quantum computing is going to be
> impressed with that kind of information density ;-).

And there are also other inefficiencies in the coding. Some of the base
triplets that encode the different amino acid configurations are
redundant. The ending markers, for example, can take three different
forms of the 64 available. However, the molecular structure of DNA is very
well suited to its purpose, I would say. That's because it must be
possible to copy, transfer and store the molecule safely and
efficiently. That wouldn't be possible with a simpler molecule.

Regards,
Sampo Syreeni

Charles Bloom
Aug 22, 1996

>If his algorithm works, the marketplace will reward him. If he doesn't want
>to show the algorithm, so what.

You should replace "his" with "my" and "him" with "me" or "I".

Steve Rencontre
Aug 22, 1996

In message <4vhk8o$a...@sjx-ixn6.ix.netcom.com>, cbu...@ix.netcom.com(Christopher Burke ) said:

> Maybe if a deaf ear is turned toward Mr Gilbert, he will get tired of
> posting and just go away.

I was more than half thinking of suggesting we all put Mr G in our
killfiles and pretend he doesn't exist any more!

Sampo Syreeni
Aug 22, 1996

Mark Nelson wrote:

> > Anyone ever figure out just how much compression is needed to fit a Human

> > Being, or a Redwood Tree, into a strand of DNA? Now, I've heard that if you
> > unravel a DNA molecule, it will be over 3 feet long, and there is more
> > than one of them (23 chromosome pairs, mind you), but you gotta figure
> > how many billions of complex chemical reactions are contained in an
> > organic body. Enzymes, catalysts, etc. Also, whatever we call
> > intelligence has also got to be encoded. And this compression must be
> > lossless. I gotta figure it has to be more than 3 or 4 to 1.
>
> This doesn't seem to be the case. The human genome seems to have
> a fair amount of static, plus some possible redundancy. Without
> knowing much about it, I think that the low relative cost of
> bandwidth has prevented the creation of compression. However,
> there is potentially a *very* high cost associated with transcription
> errors, so coding has evolved to attack that problem.

And the bandwidth is really huge: the human genome contains some 125,000
genes, most of which are extremely complex. So there are some gigabytes
of information contained in the human genome. And human chemistry,
by its very nature, is very redundant: the chemistry is based on
proteins and the like. So this limits the amount of information to be
coded considerably. And there are auxiliary channels to transmit more
info: the ribosome and mitochondrial DNA and certain hormonal paths. So
there is not necessarily such a great amount of information involved
anyway.

And the basic human structure is rather redundant itself: the whole body
is made of variations of the same exact cell structure. The whole is
something inherently present in the structural details that develop
under quite restrictive conditions: the induction mechanism by which the
individual is shaped is just a strict series of hormonal interactions
that produce different variations of the basic cell structure during the
human embryo's development. So the actual amount of information is much
less than one would think. So I agree: the compression view isn't very
realistic. But I'd say the whole genome could be viewed much like the
'compression algorithms' presented all over this newsgroup lately: the
natural laws and conditions under which the human embryo grows form a
rather restrictive system that the genome is used to key. Rather like
compressing a huge pseudorandom number by finding the key to the
generator...

About the error resistance of the human genome: it isn't very good in
its own right. The only protective measures (without being an expert
myself) I know of are some enzymes that guard the DNA strand from
oxidation and breakage. But as all the mutations (and there are
thousands of them happening all the time) show, the effects are not that
serious after all. Nor should they be: they merely make evolution
possible...

Regards,
Sampo Syreeni


John H Meyers
Aug 22, 1996

In article <faddenDw...@netcom.com>,
fad...@netcom.com (Andy McFadden) writes:

> From a posting by [JG], on May 14th, regarding a demo that never happened:
>
> I will compress the CALGARY-CORPUS ...


> and produce a perfect copy of the CORPUS file on the 'dst-CPU'.

> from the src-CPU to the dst-CPU, and then reversing the direction ...

> Read the last line again, it's kinda neat. After the first iteration you

> can have a completely expanded copy of the file on both machines...


> you could compress the file down to any size you want, and expand it on
> either machine, since both machines have a full copy of the original.

Since the Calgary Corpus is a constant, it should not be difficult to have
the original already on hand everywhere; the "two machine" protocol is even
more susceptible to problems when the original input is known in advance.

Do you suppose JG has been packaging "Wavelet" technology all along?

-----------------------------------------------------------
With best wishes from: John H Meyers ( jhme...@mum.edu )

Steve Rencontre
Aug 22, 1996

In message <TGL.96Au...@netcom23.netcom.com>, t...@netcom.com (Tom Lane) said:

> > c: *.* wrote:
> >> Anyone ever figure out just how much compression is needed to fit a Human
> >> Being, or a Redwood Tree into a strand of DNA?
>
> Actually, I believe that DNA is a pretty darn inefficient coding.
> According to current biological theory, the useful info content is only
> two bits per base (since DNA is made of just four different bases, and
> the sequencing of bases conveys all the information that cells use).

It's worse than that. Quite apart from the fact that a proportion of the
DNA may be completely valueless (although so-called 'noncoding' regions
are proving to have uses after all), the actual code is seriously
redundant.

Each sequence of three bases (called a 'codon') determines an amino acid.
There are 4^3=64 possible codons, but only about 20 (I forget the exact
number) amino acids (plus a 'stop' signal). Many codons are therefore
duplicates.
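
In round numbers (about 20 amino acids plus a stop signal), the redundancy works out to roughly:

$$ \log_2 4^3 = 6 \text{ bits per codon}, \qquad \log_2 21 \approx 4.4 \text{ bits actually required}, $$

so roughly a quarter of each codon's raw capacity goes to duplicate encodings.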

Brian Damon
Aug 23, 1996

Jules Gilbert wrote:

> Tom:
>
> Your remarks about me are ill-founded, libelous, and plainly malicious.
>
> More than that, you seem intent on claiming that I must be lying because I
> won't disclose my method. Now it is true that I have no intention of telling
> you (and indeed, I doubt if I would do business with any company that knowingly
> allowed its consultants/employees to behave as you do).
>
> But, for the record, I have been telling you (and others) the truth when I say
> that I have the process I've been describing. After reviewing the various
> comp.compression remarks with my attorney, I think the worst I've done is to
> merge some of the specific characteristics of MR1 and MR2, but I have exactly
> what I've been saying I have, a process for compressing random-appearing
> data.[1]
>

> Tom, this is not the first time you've made these kinds of remarks about me.
> Frankly, I'm at the point where I'm searching for redress through the criminal
> courts. Can I do this? Ask me in two weeks or so.
>
> Your behavior is beginning to remind me of that fellow in England who has no
> concern for the truth of things, he just wants to be a member of a mob.
>

> I have no idea whether patent (US. #5,533,051) has merit or not. But neither
> that inventor or I should be treated so unfairly by you.
>
> Maybe you are judgement-proof today, Tom, But a judgement is good essentially
> forever, and since you've intentionally provoked me so more than once, do not
> expect me to treat you kindly.
>
> If you expect to prevail in court because I could not have what I claim, you
> will experience what modern society calls 'a wake-up call'. Not only do I have
> what I claim, but I can prove it to the satisfaction of a court. And you won't
> learn anything, as the court very likely will appoint a 'Master' to do the
> review, and he will simply report YES or NO.
>
> (I've been told that it would be very unlikely that my trade-secrets would leak
> when I proceed against you or someone else for the kind of remarks you have
> made.)
>

> [1] My process is the only compressor which avoids the 'counting' problem,

> which limits all other compressor technologies to single compressive
> instances. With my technology, it is possible to compress large files as
> much as 100 times, and especially large files perhaps as much as 1000:1.
> (This is not done with a single pass over the data, but with several
> iterative passes.)
>
>
> How is this possible? (Or 'where do I put the missing bits?') Of course,
> that's what this imbroglio is about.

Do you think that threatening legal action against anyone who doesn't
believe your claims will make your claims more believable? I
am very tired of this approach. You must not have any clue about
the internet. You could move your operation to Cuba, China, or
North Korea. Once you get the government there on your side, you
can silence all critics. I personally would never deal with
anyone whose sweeping public claims are never verifiable in the
public domain. A web site with purported performance numbers
does not constitute anything. Your threats of legal action also
engender skepticism. Someone on this newsgroup offered to sign
your precious nondisclosure agreement if you would send him the
decompressor and the compressed calgary corpus or some part
thereof. His only disclosure would be if your compression passed
the test. Why don't you put up or SHUT UP!!!!! If you cannot do
this, we won't have to listen to your advertisements. If you can
do this, you will probably have many more customers.

I view this whole episode to be too similar to the "cold fusion"
controversy a few years back. If your results are not publicly
verifiable by independent reviewers, no one should believe you.

Brian

--
Disclaimer: Unless noted in the correspondence, all opinions
expressed are my personal opinions and should not be construed
as representing my employer, Lexmark International, Inc.

Gord Cormack
Aug 23, 1996
to cof...@soran.ici.net

>
> From: cof...@soran.ici.net (Jules Gilbert)

>
> But, for the record, I have been telling you (and others) the truth when I say
> that I have the process I've been describing. After reviewing the various
> comp.compression remarks with my attorney, I think the worst I've done is to
> merge some of the specific characteristics of MR1 and MR2, but I have exactly
> what I've been saying I have, a process for compressing random-appearing
> data.[1]

Jules, it is completely appropriate to compare your invention to
a perpetual motion machine, a time machine, visitors from outer
space and any number of paranormal things. That is because your
claims, as stated in this forum and privately, are just that
preposterous.

It annoys me greatly that you repeat your impossible claims over
and over and then threaten those who call your bluff. It may be
that you can use the U.S. courts to silence your critics, or even
to make a living in nuisance settlements, but that does not make
your claims true.

> ... After reviewing the various


> comp.compression remarks with my attorney, I think the worst I've done is to
> merge some of the specific characteristics of MR1 and MR2, but I have exactly
> what I've been saying I have, a process for compressing random-appearing
> data.[1]

Perhaps your lawyer told you that you erred by posting the following
two statements (but never in the same message):

1. you can compress RAD
2. RAD includes uniformly distributed random data

When you said you were planning to lay off, Jules, I decided to let
things be. But now you repeat not only your unsupported claims
but your distasteful threats.

Of course you've reneged on posting your results to the Calgary
Corpus. Perhaps your lawyer said that wouldn't be a good idea?

Paul Shirley
Aug 23, 1996

In article <TGL.96Au...@netcom23.netcom.com>, Tom Lane
<t...@netcom.com> writes

>I'm honored. Gilbert has now decided that open, in-front-of-all-the-world
>legal intimidation is the way to resolve his differences with his critics,
>and I seem to be it. Behind-the-scenes threats (as I presume happened
>with Charles Bloom) apparently won't do the job any more.

I believe he openly threatened legal action in this group before the
'behind the scenes' work. And of course the US legal system means that
the truth is no defence.

Maybe he planned on showing people the endorsement Charles Bloom was
obviously forced to make, without any of the other posts.

>If you really can do what you claim, there is no reason for you to give
>a damn what the readership of comp.compression thinks --- you'll be able
>to get quite decently rich without us. The fact that you're still here,
>trying to convince us of an unbelievable claim without giving any real
>evidence, does less than nothing for your credibility.

What everyone has forgotten is that JG is attempting to sell this. Being
able to walk into an office and legitimately claim that experts on
comp.compression have been discussing his technique for months will impress
the gullible. How many managers will actually have read the group?
He can even claim to be a persecuted lone genius if they have read it.

If anything we have helped him to refine his claims to remove obvious
errors even the uninformed could spot.

My guess about his technique: write any old pattern compressor, compress
DOS, Win3.1, Win95, NT and perhaps Linux with it. Use the result as the
initial dictionary then try to convince people that program source is
random... compress the host OS as an example.

--
Paul Shirley

Roalt R. Aalmoes
Aug 23, 1996

cof...@soran.ici.net (Jules Gilbert) writes:

>
> Well, I'm going to give the system a shot. I'm filing.
>
> After numerous off-line discussions with various members of this community, I
> decided to file my patents and executed the initial process.

Okay, your choice. But once you've filed it, there's no reason for you
to keep your algorithms a secret anymore, as the patent is freely
available. It's even possible to get money from people if they use
your algorithm after you've filed your patent (and before the patent
is actually granted).

As the USA uses the file first, study later principle, you (probably)
only know if your patent is worth anything after some kind of lawsuit
where someone used your algorithm without getting permission (license)
from you.

I hope your next post will be the algorithm.

Roalt

-------------------------------------------------------------------
Roalt Aalmoes http://wwwspa.cs.utwente.nl/~aalmoes/e_index.html
@ Work : e-mail aal...@pegasus.esprit.ec.org
Phone +31 53 489 4658
@ Home : e-mail r.aa...@student.utwente.nl
Phone +31 53 489 5026
-------------------------------------------------------------------


Colin Broughton
Aug 23, 1996

In article <TGL.96Au...@netcom23.netcom.com>, t...@netcom.com (Tom
Lane) wrote:

> > c: *.* wrote:
> >> Anyone ever figure out just how much compression is needed to fit a Human
> >> Being, or a Redwood Tree into a strand of DNA?
>
> Actually, I believe that DNA is a pretty darn inefficient coding.
> According to current biological theory, the useful info content is only
> two bits per base (since DNA is made of just four different bases, and
> the sequencing of bases conveys all the information that cells use).

> Now there are several dozen atoms in each base --- I forget how many,
> exactly, but no one dreaming of quantum computing is going to be
> impressed with that kind of information density ;-).
>

> And as Mark pointed out, the encoding of real DNA is mighty wasteful
> even if you take the bit sequence at face value --- every species' DNA
> that's been examined seems to have large chunks that do nothing, and
> probably are just leftover historical artifacts.
>
> I'm no expert though. Those who would know about this hang out in
> the bionet.molbio.* newsgroups ... go ask there if you really wanna
> know.
>
> regards, tom lane

There are some excellent data compression techniques employed in
human DNA. One of the best examples is the way all 10,000,000 IgG
antibodies are encoded in the space of a few hundred genes, organized
in cassettes. AFAIK, DNA compression works best where there is a
common framework, with variable regions, but we sure as hell don't
know everything, do we.

BTW, the material and energy cost of supporting the genetic framework
for each cell, is so high that I expect most DNA has a purpose and
even the apparent redundancy is there for a damn good reason.

Another BTW, the compressor described at http://datacompression.com
compresses its internal genetic representation, as well as the
data stream passing through. You might find it entertaining.

It is great to see people in this group trying to learn from Nature.
More power to you.

- Colin Broughton

Tim Iverson
Aug 24, 1996

In article <4vfvr2$5...@soran.ici.net>,

Jules Gilbert <cof...@soran.ici.net> wrote:
|>It's not bad enough we have Jules Gilbert flooding the newsgroup with
|>mathematically-impossible lies, we have to have more crap from you too?
|>
|> regards, tom lane
|> organizer, Independent JPEG Group
|>
|>Memo to USPTO: your policy for perpetual motion machine patents
|>("give us a working model first") ought to be applied to universal
|>compression claims too.
|
|Tom, this is not the first time you've made these kinds of remarks about me.
|Frankly, I'm at the point where I'm searching for redress through the criminal
|courts. Can I do this? Ask me in two weeks or so.
...

|I have no idea whether patent (US. #5,533,051) has merit or not. But neither
|that inventor or I should be treated so unfairly by you.

Every community has its beloved town fool, but this is the first time I've
seen that worthy threatening suit for well-earned laughter.

Jules, when someone brags of incredible accomplishment, but declines to
fulfill the boast, I find ridicule to be a most appropriate and fair
response. Until and unless you come clean in this forum you will remain a
laughingstock, and there is no one to blame for this but yourself.


- Tim Iverson
ive...@lionheart.com

Jerome MUFFAT-MERIDOL
Aug 24, 1996

To everyone fed up with this Jules Gilbert, Son of a God, I suggest
the following:

Send a file you'd like him to compress to his email address
(cof...@soran.ici.net); this won't bother him too much, since it won't
take up too much room in his mailbox (his compression, I heard, can
achieve 1:1000). I'm personally thinking of emailing a 640MB data
block of already compressed data; perhaps he can help me sell my game
on floppies instead of CD-ROMs ;-)

Big emails usually teach bandwidth respect to people.


Sverre H. Huseby
Aug 24, 1996

[Jerome MUFFAT-MERIDOL]

| Big emails usually teach bandwidth respect to people.

You haven't understood anything, have you? Mailbombing won't harm only
the targeted victim, but everyone else sharing his mail directory
too. In addition, his sysadmins get a hell of a job cleaning up. Not
to mention the terrible waste of bandwidth that occurs if you manage to
convince a couple of hundred people around the net that they should
mail megabytes of junk through every wire available.

Please don't.

Sverre.

--
sver...@ifi.uio.no # http://www.ifi.uio.no/~sverrehu/ # citizen since '89

Zach Heilig

unread,
Aug 24, 1996, 3:00:00 AM8/24/96
to

jmu...@planete.net (Jerome MUFFAT-MERIDOL) writes:

> occupy his mail box too much room (his compression, I heard, can
> achieve 1:1000). I'm personnaly thinking of emailing a 640Mb data

1:1000 compression ratio! No wonder everybody thinks he's a lunatic.
Anybody can compress 1:1000. Here's the source:

#include <stdio.h>
int main(void) {
    int ch, i;                                   /* 'ch', not 'c', so it compiles */
    while ((ch = getchar()) != EOF)
        for (i = 0; i < 1000; ++i) putchar(ch);  /* emit each input byte 1000 times */
    return 0; }

and there you have it, 1:1000 compression every time, and the bonus is
that it works for multiple passes!

> Big emails usually teach bandwidth respect to people.

This usually isn't very useful. The only case where it works as
intended is when the recipient is also the system administrator at
the other end, and is also the only person on that other network.

--
Zach Heilig (za...@blizzard.gaffaneys.com) | ALL unsolicited commercial email
Support bacteria -- it's the | is unwelcome. I avoid dealing
only culture some people have! | with companies that email ads.

Charles M. Hannum

unread,
Aug 25, 1996, 3:00:00 AM8/25/96
to

cof...@soran.ici.net (Jules Gilbert) writes:

>
> 2) Convey information across a channel without 'information theory' con-
> straints, as perhaps first/best described by Shannon. (For the signal
> conveyed, normal constraints apply.)

I think you're missing a fundamental point here.

Consider what you refer to above as `information' to be the signal
you're trying to transmit. Consider your compressor and decompressor
to be part of the communications channel over which you're sending the
data (just like the compressor in a modern modem is actually part of
the communications channel). Now you have a signal and a
communications channel. The axioms of information theory, until and
unless they are proven incorrect, apply just as much to this signal
and this channel as they do to the compressed data and the channel
between the compressor and decompressor.[*]

Now, your claims imply that you can exceed the bandwidth of this
larger channel. (I'm using `larger' to refer to scope, not
necessarily bandwidth.) If this is true, then you are effectively
claiming to have violated the axioms of information theory, just as
much as if you had said that you were exceeding the bandwidth of the
smaller channel.

It seems to me that either your claims aren't true, or you do not
understand information theory well enough to explain your claims in
those terms. (I'm not going to state any assumptions about which is
true, because there is just the faintest possibility that if I did I
might have to eat my words later. B-))

[*] Of course it is possible to send fewer bits through this smaller
channel than the signal's information content would suggest.
However, to the best of my knowledge
(and that of many other contributors), the only way to do this is to
encode some extra knowledge of the data set within the decompressor,
or to communicate it through another channel. This doesn't violate
any of our axioms, and doesn't pretend to, but is nevertheless an
effective way to increase bandwidth in some circumstances.
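
For anyone who wants the standard counting argument behind those
axioms spelled out, here is a minimal, illustrative sketch (just
arithmetic, not anyone's product): there are 2^n bit strings of
exactly n bits, but only 2^n - 1 bit strings shorter than n bits, so
no lossless scheme can map every n-bit input to a strictly shorter
output without two inputs colliding.

#include <stdio.h>

int main(void)
{
    int n;
    for (n = 1; n <= 16; n++) {
        unsigned long inputs  = 1UL << n;        /* bit strings of exactly n bits */
        unsigned long shorter = (1UL << n) - 1;  /* bit strings of length 0..n-1  */
        printf("n=%2d  n-bit inputs=%6lu  shorter outputs available=%6lu\n",
               n, inputs, shorter);
    }
    /* 'shorter' is always exactly one less than 'inputs', so by the
       pigeonhole principle some n-bit input cannot be mapped to a
       shorter output without colliding with another input. */
    return 0;
}

The same count applies however you draw the channel boundary, which
is the point of treating the compressor and decompressor as part of
the channel.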


Maynard Handley

unread,
Aug 26, 1996, 3:00:00 AM8/26/96
to

In article <cbrghtn-2308...@remote62.edm.incentre.net>,
cbr...@supernet.ab.ca (Colin Broughton) wrote:

> Another BTW, the compressor described at http://datacompression.com


> compresses its internal genetic representation, as well as the
> data stream passing through. You might find it entertaining.

> It is great to see people in this group trying to learn from Nature.
> More power to you.

This is a very misleading claim. DNA DOES NOT function like what we
would consider a standard compression scheme. You cannot think of a
chromosome as something like a zip file.

DNA is more like a computer program, with all the side-effects that
implies. In particular, like a program it expects to execute in a
certain environment (a particular CPU/OS) on particular input data,
and the output is not specified by the DNA alone but also by the
environment in which the DNA operates, i.e. the host cell in which
the DNA acts to create proteins. To try to measure the information
content of DNA without regarding the vast amount of implicit
information in the host is therefore very misleading.

Secondly, like a program, the DNA can only follow certain paths. Not
everything can be encoded, so the "output space" is substantially
smaller than simple-minded calculations would lead one to believe.
An obvious example is the work on embryo morphology showing that
certain kinds of organisms one might imagine simply cannot be
created, because of the way the embryo develops; for instance,
certain camouflage patterns that might work well in the real world
simply cannot "grow" out of the way the embryo grows.

Third, like JPEG or CELP, DNA "compression" is lossy, in the sense
that one is not striving for perfect replicas but simply for
something good enough to keep going until it can reproduce. So, as
with JPEG or CELP, measures like entropy that are valid for lossless
encoding are pretty much meaningless here. And, as with JPEG, some
"bits" in the stream matter more than others: a wrong bit here may
have no visible effect, while a wrong bit there may render further
parsing of the stream impossible.

I'm sure cross-pollination across scientific fields is always useful,
but please, when we want to learn from biology, let's investigate
biology as it really works, not the simplified toy model of biology
we learn in 7th grade.

Maynard Handley

--
My opinion only

Lee Daniel Crocker

unread,
Aug 26, 1996, 3:00:00 AM8/26/96
to

troy...@aol.com (TroyL4857) writes:
>To win a libel case you need to prove that the libel caused damage to your
>person. While you have an outside chance of winning such a case (love
>that jury system!) it's unlikely because:

Such a suit would have very little chance of success since the Uri
Geller/Amazing Randi case. Geller lost bigtime, but of course he
continues to peddle his trickery as genuine psychic power, and still
has followers and fans (even among academics). At least in Geller's
case, his lies are entertaining in themselves. Too bad Mr. Gilbert
couldn't make money selling tickets to a rigged compression demo.

--
Lee Daniel Crocker <l...@piclab.com>

Jules Gilbert

unread,
Aug 26, 1996, 3:00:00 AM8/26/96
to

In article <lcrocker....@web1.calweb.com>,
============================================================================

No one who knows me would think I'd rig a demo.

Remember: My business is selling off my technology in a single (or a few)
large transactions. That's it. And any qualified prospect gets the C source
code for review prior to closing any business transaction.

I quote you, "Too bad Mr. Gilbert couldn't make money selling tickets to a
rigged compression demo.".

Lee, I am quite angry at the false and malicious statements that you and others
have made about me. Last week I got together with my lawyer and began focusing
on suit targets. Don't become one of them.

Certainly, it is alright for you not to believe me. (I acknowledge that I have
not been exactly forthcoming! with helpful hints! about the technical details
of my process); But when you assume that my claims are fraudulent, you err.

Tom Lane

unread,
Aug 27, 1996, 3:00:00 AM8/27/96
to

cof...@soran.ici.net (Jules Gilbert) writes:
> No one who knows me would think I'd rig a demo.

Perhaps the problem is that few people on this newsgroup know you...
because from what I've been reading, that's *exactly* what everybody
here thinks.

> Lee, I am quite angry at the false and malicious statements that you and
> others have made about me. Last week I got together with my lawyer and
> began focusing on suit targets. Don't become one of them.

I can see it now, J. Gilbert v. the entire readership of comp.compression.
Sort of a reverse class action suit? (BTW, Jules, I believe that
groundless threats of legal action are themselves actionable.
I suggest you ask your lawyer, if he really exists, about the
penalties for barratry.)

> Certainly, it is alright for you not to believe me. (I acknowledge
> that I have not been exactly forthcoming!

No, you have not, and that's exactly the problem. You continue to
waste the time of the readership of this newsgroup, when you have made
it plain that you will not provide any indisputable proof of your
assertions. Considering that your claims appear to violate all
the known mathematical laws of data compression, is it any wonder
that most of us think you do not provide that proof because you CANNOT?

I continue to wonder why you keep trying to convince us of the
impossible-to-believe without any hard evidence.

regards, tom lane

Steve Rencontre

unread,
Aug 27, 1996, 3:00:00 AM8/27/96
to

In message <lcrocker....@web1.calweb.com>, lcro...@web1.calweb.com (Lee Daniel Crocker)
said:

> troy...@aol.com (TroyL4857) writes:
> >To win a libel case you need to prove that the libel caused damage to your
> >person. While you have an outside chance of winning such a case (love
> >that jury system!) it's unlikely because:
>
> Such a suit would have very little chance of success since the Uri
> Geller/Amazing Randi case. Geller lost bigtime, but of course he
> continues to peddle his trickery as genuine psychic power, and still
> has followers and fans (even among academics).

Yeah, I've noticed the parallels with Uri Geller too. Just about every
professional magician agrees that Geller is a highly accomplished
conjurer, and would welcome him into the fold with open arms if only
he'd come clean.

> At least in Geller's
> case, his lies are entertaining in themselves. Too bad Mr. Gilbert
> couldn't make money selling tickets to a rigged compression demo.

:-)

Brian Damon

unread,
Aug 27, 1996, 3:00:00 AM8/27/96
to

Jules Gilbert wrote:
> No one who knows me would think I'd rig a demo.
>
> Remember: My business is selling off my technology in a single (or a few)
> large transactions. That's it. And any qualified prospect gets the C source
> code for review prior to closing any business transaction.
>
> I quote you, "Too bad Mr. Gilbert couldn't make money selling tickets to a
> rigged compression demo.".

>
> Lee, I am quite angry at the false and malicious statements that you and others
> have made about me. Last week I got together with my lawyer and began focusing
> on suit targets. Don't become one of them.
>
> Certainly, it is alright for you not to believe me. (I acknowledge that I have
> not been exactly forthcoming! with helpful hints! about the technical details
> of my process); But when you assume that my claims are fraudulent, you err.

Your business seems to be to threaten anyone who doesn't
believe you. Why do you have to make the threats public
in this newsgroup? Who are you trying to intimidate? Do
you think that you can use this newsgroup for your sole
profit? I do wish you would go away until an independent
source has verified your claims. I grow very tired of
reading your daily threats and your non-answers to
legitimate questions. Your style of threats and evasion
is enough to make any prudent investor back away; however,
there are probably many whose greed could cause them to
ignore any warning signs. Is this the audience you are
playing to?

Jules Gilbert

unread,
Aug 27, 1996, 3:00:00 AM8/27/96
to

Tom Lane <t...@netcom.com> wrote:

>cof...@soran.ici.net (Jules Gilbert) wrote:
>
>> Lee, I am quite angry at the false and malicious statements that you and
>> others have made about me. Last week I got together with my lawyer and
>> began focusing on suit targets. Don't become one of them.
>
>No, you have not, and that's exactly the problem. You continue to
>waste the time of the readership of this newsgroup, when you have made
>it plain that you will not provide any indisputable proof of your
>assertions. Considering that your claims appear to violate all
>the known mathematical laws of data compression, is it any wonder
>that most of us think you do not provide that proof because you CANNOT?
>
> regards, tom lane

-------------------------------------------------------------------------

Tom, since on more than two occasions you have been very foul in your public
statements about me, and without cause! and without following even the loosest
journalistic ethics, I don't think you are in a position to criticize me.

About what I can and cannot do: I have no obligation to prove anything to you
in this forum. I do have an obligation (which I have met) to be honest. And
until you (and some others) so unfairly made me your target, I also had the
responsibility to be friendly and courteous.

With these events, I have the right to defend myself. My criticisms of you
will have to await an appropriate forum.

Jules Gilbert

David Knapp

unread,
Aug 27, 1996, 3:00:00 AM8/27/96
to

Steve Rencontre wrote:
> lcro...@web1.calweb.com (Lee Daniel Crocker) said:
> > Such a suit would have very little chance of success since the Uri
> > Geller/Amazing Randi case. Geller lost bigtime, but of course he
> > continues to peddle his trickery as genuine psychic power, and still
> > has followers and fans (even among academics).
>
> Yeah, I've noticed the parallels with Uri Geller too. Just about every
> professional magician agrees that Geller is a highly accomplished
> conjurer, and would welcome him into the fold with open arms if only
> he'd come clean.

As the resident magician, I disagree. Uri Geller is a mediocre
conjurer; he'd be welcome, but certainly no superstar. His act is
effective precisely _because_ he claims it is real. Other mentalists,
like Richard Weber and Lee Earle, do the same kind of act without the
explicit claim that it is real; they are better magicians, but get
less press.

At any rate, my opinion of Jules Gilbert has been reinforced by
recent events; his behavior has all the earmarks of a perpetual
motion machine pitch.

If you study the history of perpetual motion machines, you will
discover an interesting phenomenon: many of the inventors who come
up with these things actually _believe_ in them! They are not,
strictly speaking, "frauds," because fraud requires intent. That may
well be the case here; Jules may actually believe that he can do what
he claims, but that "it isn't quite finished" or "it would be stolen."
Excuses like these have often been used by perpetual motion machine
inventors to avoid rigorous examination.

-- Dave
--
+-----------------+----------------------------------------------------+
| Dr. David Knapp | http://www-phys.llnl.gov/N_Div/people/davek.html |
+-----------------+----------------------------------------------------+
| d...@usa.net | If you're going to be a perfectionist, |
| (510) 422-1023 | do it right or don't do it at all. |
+-----------------+----------------------------------------------------+

Jeff Almeida

unread,
Aug 29, 1996, 3:00:00 AM8/29/96
to

John H Meyers (jhme...@miu.edu) wrote:

: Do you suppose JG has been packaging "Wavelet" technology all along?

I resent that remark, and so do my 200:1 WAVELET-compressed bitmap images...

(you don't suppose JG has discovered the elusive REgressive-jpeg???)

--
Jeff Almeida alm...@infinop.com
VP, Technology voice: (817) 891-1538
Infinop, Inc. pager: (817) 450-0752
3401 East University #104 fax: (817) 484-0586
Denton, TX 76208 http://www.infinop.com/jeff

mischa_...@mindlink.bc.ca

unread,
Aug 30, 1996, 3:00:00 AM8/30/96
to

Could we change the subject line on this thread? If you are posting
in response to an article in this thread, could you alter the subject
line? Modern newsreaders collect follow-ups by their message IDs, not
by the contents of the subject line, anyway.

--
Engineers believe that the equations approximate reality.
Physicists believe that reality approximates the equations.
Mathematicians never make the connection.


Steve Rencontre

unread,
Sep 2, 1996, 3:00:00 AM9/2/96
to

In message <322320...@usa.net>, David Knapp <d...@usa.net> said:

> As the resident magician, I disagree. Uri Geller is a mediocre
> conjurer; he'd be welcome, but certainly no superstar.

I stand corrected, then. It looks impressive enough to me, but I'm
no expert.
