
Introducing LEL Compression


Lynn Lantz

Jan 27, 1999
As I promised a few of you, now that I have the patent filed I'm coming back
to let this group know.
I'm pleased to introduce LEL Compression (Patent Pending).

The main features are:
1. Recursive - it can be done over and over any number of times down to a
set size or set number of iterations. It will work on files already
compressed by any other compression program. It will work on files that
have been encrypted. (Spare me the arguments about the Counting Number
proof until you've read my arguments below).
2. Adaptive - it does not care about the type of file and works the same
way on text, image, video, audio, etc. It works at the lowest level (the
binary level) so it is not concerned with what the file contains. Also, for
every recursive pass it adapts to what would be the best technique for that
recursive pass.
3. Lossless - even if it has gone through 5, 10, or N number of passes, it
will still de-compress to an exact copy of the original.

Imagine being able to compress a 1meg file to 20 bytes, or a 1gig file to 50
bytes, or a whole hard disk onto a floppy, or a patient's whole medical
history onto a mag strip on a card, etc., etc. I know you are thinking that
it sounds impossible...but it does work! It won't be as fast as other
compression programs because it goes through more than one pass of the data,
but it could be adapted to real-time data by doing it dynamically in
"chunks" of the data file.

If you are interested in learning more about this new technique or in
licensing this invention then please email me at lel...@bignet.net. In
particular, I am interested in talking to companies that would be interested
in licensing this technology. Don't ask for info if you aren't willing to
sign a confidential non-disclosure agreement.

Thanks in advance for your interest,
Lynn (FYI - I'm a man even though my parents saw fit to name me
Lynn).

For those who are interested in the Counting Number argument:
1. Why is it that everyone accepts the fact that any file can be compressed
once into a smaller number of bits, but as soon as you say you can do it
twice or more then they go nuts with this argument?
2. The argument is true that you can't compress all 256 occurrences of 8
bits into 7. But...
a) it doesn't take into account that a set number of bits may go through
different numbers of recursive passes to get to the same end. For example:
file A may go from 100 bytes to 70 bytes during one pass. File B may go
from 100 bytes to 85 bytes during one pass and then from 85 bytes to 70
bytes during the 2nd pass. They both ended up at 70 bytes but the number of
passes is different.
b) it doesn't take into account that some of the possible combinations may
"grow" on one pass to be able to incur better compression on the next pass.
For example: file A may go from 100 to 101 bytes on the first pass but then
go from 101 bytes to 60 bytes on the next pass.
c) it doesn't take into account that the results can be different lengths.
For example: file A may end at 22 bytes, file B may end at 21 bytes, file C
may end at 23 bytes, etc.
d) it doesn't take into account that one bit may be controlling/leveraging
thousands of bits. While a bit may only have 2 choices, a single choice
could influence/control/leverage what thousands of bits will do during the
compression pass or the de-compression pass.

Ian S. Nelson

Jan 27, 1999
Lynn Lantz wrote:

Has this been implemented or is it still 'in the design stages.'

These algorithms tend to have a similarity in that they are all announced and
then 'have trouble' being implemented. I think 'matrix problems' is a common
excuse..

Good luck doing the impossible.

Ian Nelson


Tim Iverson

Jan 27, 1999

In article <3IPr2.103$w64...@news5.ispnews.com>,

Lynn Lantz <lel...@bignet.net> wrote:
|I'm pleased to introduce LEL Compression (Patent Pending).
|
|The main features are:
|1. Recursive - it can be done over and over any number of times down to a
|set size or set number of iterations. It will work on files already
|compressed by any other compression program. It will work on files that
|have been encrypted. (Spare me the arguments about the Counting Number
|proof until you've read my arguments below).

[yada yada yada ya]

We've heard it before. Post some results; e.g. on the Calgary
corpus. Better still, post results and codec source, though your
lawyer may advise you to wait on source until the patent has been
granted -- filing a patent doesn't mean much legally, unless you're
certain it will be granted. ;)

BTW, as long as you don't claim to be able to compress ALL files or
ANY file, no one is going to try to pigeon hole you. Recursive
compression arguments usually end up in variants of the halting
problem, so "recursive" is a hot-button for this group, too.

|Imagine being able to compress a 1meg file to 20 bytes, or a 1gig file to 50
|bytes, or a whole hard disk onto a floppy, or a patients whole medical
|history onto a mag strip on a card, etc., etc. I know you are thinking that

This kind of claim coined the phrase, "suspension of disbelief".
Though, I think we might need an entire mint here. Post your
results on the corpus so we don't strain our imaginations.


- Tim Iverson
iverson@cisco-com


Mark Nelson

Jan 27, 1999
Lynn Lantz wrote in message <3IPr2.103$w64...@news5.ispnews.com>...

>If you are interested in learning more about this new technique or in
>licensing this invention then please email me at lel...@bignet.net. In
>particular, I am interested in talking to companies that would be interested
>in licensing this technology. Don't ask for info if you aren't willing to
>sign a confidential non-disclosure agreement.


Hi Lynn,

If you have filed for a patent, you might not need that super-duper NDA.
After all, once your patent is granted the whole works are open to the
public anyway.

If you really want to show up all those naysayers, I recommend a carefully
designed demonstration. I would be glad to MC, just let me know and we can
work out the details! I have no doubt that we can create a demonstration
that protects your interests while proving that you aren't just some
crackpot!

Tom Lane

Jan 28, 1999
"Mark Nelson" <ma...@ieee.org> writes:
> I have no doubt that we can create a demonstration
> that protects your interests while proving that you aren't just some
> crackpot!

... or, more likely, proves that Lynn *is* a crackpot. If that post
wasn't crackpot-bait then I've never seen any.

Lynn: your claims are just about as non-credible as a perpetual motion
machine. You may be aware that the USPTO won't consider patent
applications for perpetual motion machines (unless the applicant can
supply a working model). If the USPTO were halfway competent in
software, which unfortunately they generally are not, they would have
a similar requirement for recursive compression claims.

Your armwaving about how using variable numbers of compression passes
defeats the counting argument is just armwaving, because you are
ignoring the fact that it takes extra out-of-band bits to store the
number of passes used. Account for that honestly, and you'll find
there is no savings.

Now, go away and don't bother us. Please.

regards, tom lane
organizer, Independent JPEG Group

Janne Johansson

Jan 28, 1999
"Lynn Lantz" <lel...@bignet.net> writes:

> Thanks in advance for your interest,

> For those who are interested in the Counting Number argument:
> 1. Why is it that everyone accepts the fact that any file can be compressed
> once into a smaller number of bits,

Hold it. Any file can be compressed by a particular compressor, but
no single compressor can compress ALL files.
Big difference.

> but as soon as you say you can do it
> twice or more then they go nuts with this argument?

And of course, then it doesn't matter how many times you do it in a row.
And if it can't be done once, it won't help doing it five times either.

> 2. The argument is true that you can't compress all 256 occurences of 8
> bits into 7. But...

> d) it doesn't take into account that one bit may be controlling/leveraging
> thousands of bits. While a bit may only have 2 choices, a single choice
> could influence/control/leverage what thousands of bits will do during the
> compression pass or the de-compression pass.

It does. A compressed file with a size of 8 bits (1 byte) can only be
decompressed to 256 unique files. It doesn't state anything about
their final size. This leveraging stuff is about the final size,
whereas the Counting Argument *isn't*.

You can easily make a compression scheme that only knows
how to pack 256 different 386Mb files, and store the resulting file as
a single byte, but that single byte still can't expand to more than 256
different outputs.

Making the figures larger doesn't alter the facts. Trust me.

People tend to get swayed by large numbers, claiming that if their
recursive compressor gets any file down to 1k or some similar figure,
this will cover <INSERT LARGE NUMBER> of possible outputs, and that
this somehow will be "enough". The problem is that all files with a
size larger than 1k up to practical limits, like terabytes or something,
(a likely target for an any_size_to_1k_recursive-compressor) will have
<INSERT ENORMOUS LARGE NUMBER> combinations.

A (compressed) file with only LARGE NUMBER different values can't hold
ALL of the ENORMOUS NUMBER possibilities.

Therefore, either the compressor only compresses some files and not
all, or there exists some files it can never compress.

Claiming that a 1k file can contain all possible files up to and
exceeding 1k in size is bound to fail. A proof based on a single bit,
8 bits, or 40 bits does scale and is still valid even if the numbers
grow, just like one can prove that if:
1 + 1 = 2, then
100 + 100 = 200, and
100000 + 100000 = 200000 and so on.

There is no point at which
1<with many zeros> + 1<the same number of zeros> wouldn't equal
2<the same number of zeros> just because the numbers get big.

The same goes for the counting argument. If it's valid for 1 bit, or 8
bits, it's still valid for 1024*8 bits or one million bits.

--
"Backwards compatible" means: "if it isn't backwards, it's not compatible."

Http://www.it.kth.se/~jj
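
To put the scaling point in numbers, here is a tiny Python sketch (my own
illustration, nothing to do with LEL) that simply counts how few n-bit files
can possibly lose even a single byte under *any* lossless scheme, no matter
how many passes it makes:

# There are 2**n files of exactly n bits, but only 2**(n-7) - 1 bit
# strings of length n-8 or less, so at most about 1 file in 128 can
# shrink by a whole byte -- and the ratio is the same at every size.
for n in (16, 64, 1024):
    files = 2 ** n
    short_enough = 2 ** (n - 7) - 1        # strings of length 0 .. n-8
    print(f"n={n}: at most 1 file in {files // short_enough} can lose a byte")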

graham...@hotmail.com

Jan 28, 1999
In article <3IPr2.103$w64...@news5.ispnews.com>,
"Lynn Lantz" <lel...@bignet.net> wrote:
> As I promised a few of you once I got the patent filed that I would come
> back to let this group know.
> I'm pleased to introduce LEL Compression (Patent Pending).

[...]

> For those who are interested in the Counting Number argument:
> 1. Why is it that everyone accepts the fact that any file can be compressed
> once into a smaller number of bits, but as soon as you say you can do it
> twice or more then they go nuts with this argument?
> 2. The argument is true that you can't compress all 256 occurences of 8
> bits into 7. But...

> a) it doesn't take into account that a set number of bits may go through
> different numbers of recursive passes to get to the same end. For example:
> file A may go from 100 bytes to 70 bytes during one pass. File B may go
> from 100 bytes to 85 bytes during one pass and then from 85 bytes to 70
> bytes during the 2nd pass. They both ended up at 70 bytes but the number of
> passes is different.
> b) it doesn't take into account that some of the possible combinations may
> "grow" on one pass to be able to incur better compression on the next pass.
> For example: file A may go from 100 to 101 bytes on the first pass but then
> go from 101 bytes to 60 bytes on the next pass.
> c) it doesn't take into account that the results can be different lengths.
> For example: file A may end at 22 bytes, file B may end at 21 bytes, file C
> may end at 23 bytes, etc.

> d) it doesn't take into account that one bit may be controlling/leveraging
> thousands of bits. While a bit may only have 2 choices, a single choice
> could influence/control/leverage what thousands of bits will do during the
> compression pass or the de-compression pass.


This sort of thing pops up every once in a while. It irks me a bit, and
here's why:

The Counting Argument takes EVERYTHING into account!

The Counting Argument is a highly generalized proof that makes NO ASSUMPTIONS
about the inner operations of the encoder, and therefore applies to EVERY
POSSIBLE ENCODER.

If you really feel that you must work out the math for some specific case to
make sure that the counting argument still applies (which it HAS to), then
feel free. You will find that it does indeed apply.

Saying "I have an encoder that evades the Counting Argument" is analogous to
saying "I have a triangle whose inner angles don't add up to 180 degrees". It
really is.

- GLYPH

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

tomst...@my-dejanews.com

Jan 28, 1999

> "Mark Nelson" <ma...@ieee.org> writes:
> > I have no doubt that we can create a demonstration
> > that protects your interests while proving that you aren't just some
> > crackpot!
>
> ... or, more likely, proves that Lynn *is* a crackpot. If that post
> wasn't crackpot-bait then I've never seen any.

First off, that is not very nice.


Second, if your algorithm is so great and it removes all possible info,
why does it need more than one pass? DEFLATE, a very popular algorithm,
requires no more than one pass over the data to compress.

Maybe (like others have posted) you could provide an algorithm, or even source
code to back up your wild claims.

Methinks you are a liar. Let's stop the posting of LEL or any other stupid
algorithm that can compress anything, because quite frankly about one person
every 3 months makes a post like yours. Some people have even started companies
and sites based on their claims, and have never EVER backed them up.

Antaeus Feldspar

Jan 28, 1999
Lynn Lantz <lel...@bignet.net> wrote:

[snip]


> For those who are interested in the Counting Number argument:
> 1. Why is it that everyone accepts the fact that any file can be compressed
> once into a smaller number of bits, but as soon as you say you can do it
> twice or more then they go nuts with this argument?

Because the statement "any file can be compressed once into a
smaller number of bits" is true, but the statement becomes UNTRUE if you
try to add the clause "by the same compressor."
For any given compressor, there is a set of files L that, when
used as inputs, will produce smaller outputs; however, there is also a
set of files G that will produce *larger* outputs. G is ALMOST always a
larger set than L; and the number of bits gained by files in G is ALWAYS
larger than the number of bits lost by files in L.

> 2. The argument is true that you can't compress all 256 occurences of 8
> bits into 7. But...
> a) it doesn't take into account that a set number of bits may go through
> different numbers of recursive passes to get to the same end. For example:
> file A may go from 100 bytes to 70 bytes during one pass. File B may go
> from 100 bytes to 85 bytes during one pass and then from 85 bytes to 70
> bytes during the 2nd pass. They both ended up at 70 bytes but the number of
> passes is different.

Yes. And the *number of passes* it takes to get to the smaller
output is information necessary for decompression -- therefore, it must
be counted as part of the output.

> b) it doesn't take into account that some of the possible combinations may
> "grow" on one pass to be able to incur better compression on the next pass.
> For example: file A may go from 100 to 101 bytes on the first pass but then
> go from 101 bytes to 60 bytes on the next pass.
> c) it doesn't take into account that the results can be different lengths.
> For example: file A may end at 22 bytes, file B may end at 21 bytes, file C
> may end at 23 bytes, etc.
> d) it doesn't take into account that one bit may be controlling/leveraging
> thousands of bits. While a bit may only have 2 choices, a single choice
> could influence/control/leverage what thousands of bits will do during the
> compression pass or the de-compression pass.

It's not really accurate to say the counting argument doesn't
"take into account" this matter and that matter... those matters don't
MATTER to the counting argument. The action of one compression
algorithm, applied once, is a compressor. The action of two compression
algorithms, applied in succession, is a compressor. The action of one
compression algorithm, applied a constant number of times, is a
compressor. The action of one compression algorithm, applied a variable
number of times according to the characteristics of the data, and
accompanied by the data necessary to reconstruct the variable, is a
compressor. An infinite list that randomly assigns one unique output
file to each possible input file is a compressor (as long as at least
one output is smaller than its input.)
And the counting argument DOESN'T CARE which of the above
possibilities a compressor happens to be. If it is a compressor, the
counting argument applies to it.

-jc

--
* -jc IS *NOW* feld...@cryogen.com
* Home page: http://members.tripod.com/~afeldspar/index.html
* The home of >>Failed Pilots Playhouse<<
* "Better you hold me close than understand..." Thomas Dolby

Lynn Lantz

Jan 28, 1999
Tim Iverson wrote in message <78ont8$l4s$1...@smbmail1.cisco.com>...

>BTW, as long as you don't claim to be able to compress ALL files or
>ANY file, no one is going to try to pigeon hole you. Recursive
>compression arguments usually end up in variants of the halting
>problem, so "recursive" is a hot-button for this group, too.


Tim, I've been in systems and programming for 20 years, but not in
compression. I admit that I am "outside the fold" but think that may
have been an asset in developing this, by not having prior beliefs.
Therefore, I'm not completely familiar with "variants of the halting
problem". Can you explain that to me a little further?

>This kind of claim coined the phrase, "suspension of disbelief".
>Though, I think we might need an entire mint here. Post your
>results on the corpus so we don't strain our imaginations.

I can appreciate your disbelief. I really appreciate you not just calling
me a bunch of names and at least taking time to respond. This is not a
scam, or hidden bits, etc. I truly believe that it works.
Thanks.

Lynn Lantz

Jan 28, 1999

Ian S. Nelson wrote in message <36AFE6A6...@ibm.net>...

>These algorithms tend to have a similarity in that they are all announced and
>then 'have trouble' being implemented. I think 'matrix problems' is a common
>excuse..
>Good luck doing the impossible.


It does not use matrices. It is all done at the binary level. I've read
all the FAQs and the patents on the previous attempts that did not work.
I'm not trying to perpetuate a hoax; I am sincere in my attempt.
Thanks.


Lynn Lantz

Jan 28, 1999

Tom Lane wrote in message ...

>
>... or, more likely, proves that Lynn *is* a crackpot. If that post
>wasn't crackpot-bait then I've never seen any.
><snip>

>Now, go away and don't bother us. Please.
>
> regards, tom lane
> organizer, Independent JPEG Group

Tom,
Thanks for the kind words and thanks for your support.

Lynn Lantz

Jan 28, 1999
Janne Johansson wrote in message ...

>
>It does. A compressed file at the size of 8 bits (1byte), can only be
>decompressed to 256 unique files. It doesn't state anything about
>their final size. This leveraging stuff is about the final size,
>whereas the Counting Argument *isn't*.
>
>You can easily make a compression scheme that only knows
>how to pack 256 different 386Mb files, and store the resulting file as
>a single byte, but that single byte still can't expand to more than 256
>different outputs.
>
>Making the figures larger doesn't alter the facts. Trust me.
><snip>

>The same goes for the counting argument. If it's valid for 1 bit, or 8
>bits, it's still valid for 1024*8 bits or one million bits.
>
Let me try to explain it this way:
8 bits can represent 256 options.
7 bits can represent 128 options.
6 bits can represent 64 options.
5 bits can represent 32 options.
4 bits can represent 16 options.
3 bits can represent 8 options.
2 bits can represent 4 options.
1 bit can represent 2 options.
Over the span of 8 bits it is then possible to have 510 different options
represented. The problem has always been how to know where one set stops
and the next set starts if all the answers can be variable lengths. One of
the "keys" to LEL compression is the ability to know this and to know how
many bits you need before you are done (even though the length changes over
and over all throughout the file).

Kelsey Bjarnason

Jan 28, 1999
Lynn Lantz wrote in message <3IPr2.103$w64...@news5.ispnews.com>...
>As I promised a few of you once I got the patent filed that I would come
>back to let this group know.
>I'm pleased to introduce LEL Compression (Patent Pending).
>
[snips]

>For those who are interested in the Counting Number argument:
>1. Why is it that everyone accepts the fact that any file can be compressed
>once into a smaller number of bits, but as soon as you say you can do it
>twice or more then they go nuts with this argument?

They don't. First, "anyone accepts that any file can be compressed once"
is fallacious. For any given compressor, there will be some class of files
it cannot compress.

>2. The argument is true that you can't compress all 256 occurences of 8
>bits into 7. But...

>a) it doesn't take into account that a set number of bits may go through
>different numbers of recursive passes to get to the same end. For example:
>file A may go from 100 bytes to 70 bytes during one pass. File B may go
>from 100 bytes to 85 bytes during one pass and then from 85 bytes to 70
>bytes during the 2nd pass. They both ended up at 70 bytes but the number of
>passes is different.

And it makes no difference at all. As a simple example, using this reasoning,
I can, with WhizzBang Compressor Z, recursively compress N files to 3 bits.
Let's see what happens:

File X compresses to 101
File Y compresses to 101

Great! Now I take my "101" and I decompress it. Umm, wait a sec. If this is
a lossless algorithm, it has to be able to decompress both X *and* Y from this
same data - but with only 3 bits, there is absolutely no way to tell which file
it should decompress! *poof* goes the data.

Before you jump in with something such as "But you forgot the compression
count!" keep in mind that if you're requiring the transmission or storage of
*additional* information *beyond* those three bits, this information needs to
be included in the total resultant file size before you can state that the
system is, in fact, lossless. Or indeed, reversible.

[other irrelevant points deleted]
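
A quick back-of-the-envelope check of the "compression count" point, in
Python (my own numbers, not anything claimed for LEL): even with a pass
counter stapled on, 3 payload bits plus k counter bits can only name
2**(3+k) distinct archives, which is nowhere near the number of distinct
100-byte files.

files_100_bytes = 2 ** 800                  # distinct 100-byte originals
for k in (4, 8, 16, 64):
    archives = 2 ** (3 + k)                 # 3 payload bits + k-bit pass counter
    print(f"{k}-bit counter: {archives} possible archives -- "
          f"covers all 100-byte files: {archives >= files_100_bytes}")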

Peter Schepers

Jan 28, 1999
Lynn Lantz <lel...@bignet.net> wrote:
>Let me try to explain it this way:
>8 bits can represent 256 options.
>7 bits can represent 128 options.
>6 bits can represent 64 options.
>5 bits can represent 32 options.
>4 bits can represent 16 options.
>3 bits can represent 8 options.
>2 bits can represent 4 options.
>1 bit can represent 2 options.
>Over the span of 8 bits it is then possible to have 510 different options
>represented. The problem has always been how to know where one set stops
>and the next set starts if all the answers can be variable lengths. One of
>the "keys" to LEL compression is the ability to know this and to know how
>many bits you need before you are done (even though the length changes over
>and over all throughout the file).

Lynn, Lynn, Lynn... it may take a while to see, but the counting argument
will make sense soon. But to the matter at hand, your little list above of
bits vs byte expansion is correct, but not your final total. Out of 8 bits
you can only have 256 possibles. When you strip down to 7 bits, yes you
only have 128 possibles, but those 7 bits are a sub-set of the previous 8
bits, and cannot be added later to increase your total past the original
256.

-----------------------------------------------------------------------------
Peter Schepers, | Author of : 64COPY, The C64 EMU converter
Info Services and Technology | get from: http://www.fairlight.to
University of Waterloo, | My opinion is not likely that of the
Waterloo, Ontario, Canada. | University, its employees, or anybody
1-519-888-4567 ext 2456 | on this planet. Too bad!

Lynn Lantz

Jan 28, 1999

Peter Schepers wrote in message <78q65u$h58$1...@watserv3.uwaterloo.ca>...

>Lynn, Lynn, Lynn... it may take a while to see, but the counting argument
>will make sense soon. But to the matter at hand, your little list above of
>bits vs byte expansion is correct, but not your final total. Out of 8 bits
>you can only have 256 possibles. When you strip down to 7 bits, yes you
>only have 128 possibles, but those 7 bits are a sub-set of the previous 8
>bits, and cannot be added later to increase your total past the original
>256.


Peter, your answer above is not true. Consider this very simple example:
0=1
1=2
00=3
01=4
10=5
11=6

0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
of the 2 bit answers. The difference is in knowing where one stops and the
next starts.


Matt Kennel

Jan 28, 1999
On Thu, 28 Jan 1999 06:53:16 GMT, Tom Lane <t...@netcom.com> wrote:
:"Mark Nelson" <ma...@ieee.org> writes:
:> I have no doubt that we can create a demonstration
:> that protects your interests while proving that you aren't just some
:> crackpot!
:
:... or, more likely, proves that Lynn *is* a crackpot. If that post

:wasn't crackpot-bait then I've never seen any.
:
:Lynn: your claims are just about as non-credible as a perpetual motion
:machine.

No, they're less credible. A 'perpetual motion machine' depends on
the second law of thermodynamics, which is a macroscopic approximation about
facts regarding the typical chaotic dynamics of lots of micro particles.

You might conceivably come up with some very odd system such that, looking at
it in some strange basis, you might think that something is going the "wrong
way". (And I've heard that energy conservation isn't globally true in
General Relativity.)

But true mathematical arguments are true and false ones are false.

--
* Matthew B. Kennel/Institute for Nonlinear Science, UCSD
*
* "do || !do; try: Command not found"
* /sbin/yoda --help

Lynn Lantz

Jan 28, 1999
Tim Iverson wrote in message <78ont8$l4s$1...@smbmail1.cisco.com>...
>
>BTW, as long as you don't claim to be able to compress ALL files or
>ANY file, no one is going to try to pigeon hole you. Recursive
>compression arguments usually end up in variants of the halting
>problem, so "recursive" is a hot-button for this group, too.


Tim, I posted this earlier today but don't see it in the newsgroup so am not
sure if it went through. Anyways, could you please explain more to me about
"variants of the halting problem". I've worked in systems and programming
for 20 years but not in compression. I think that was a help in being
"outside the fold" and not having pre-conceived ideas. But I would like to
know more about the halting problem.
Thanks.


Lynn Lantz

Jan 28, 1999
Tim Iverson wrote in message <78ont8$l4s$1...@smbmail1.cisco.com>...
>BTW, as long as you don't claim to be able to compress ALL files or
>ANY file, no one is going to try to pigeon hole you. Recursive
>compression arguments usually end up in variants of the halting
>problem, so "recursive" is a hot-button for this group, too.


Let me try to clarify something. I did not say that ALL files would
compress into a smaller size on the first iterative pass. My research shows
that over 90% of all bit distributions would compress into a smaller size
during a single pass using my technique. However, about 10% will grow by
one byte. This byte tells how the file should be re-ordered to achieve the
best compression during the next pass. So as I said in a previous post a
file may go from 100 to 101 on the first pass, and then 101 to 80 on the
second pass. It will eventually compress all files but not in every pass,
not in the same number of passes, not down to same size, etc.
Thanks.


Ian S. Nelson

Jan 28, 1999
Lynn Lantz wrote:

I didn't want to sound like I was attacking you. I believe you are sincere.
I also believe that it has been mathematically proven that this is
impossible; it's really fairly simple discrete math.

BTW, saying it works 'on the binary level' doesn't help your cause.

Ian Nelson


tomst...@my-dejanews.com

Jan 29, 1999

> Let me try to explain it this way:
> 8 bits can represent 256 options.
> 7 bits can represent 128 options.
> 6 bits can represent 64 options.
> 5 bits can represent 32 options.
> 4 bits can represent 16 options.
> 3 bits can represent 8 options.
> 2 bits can represent 4 options.
> 1 bit can represent 2 options.
> Over the span of 8 bits it is then possible to have 510 different options
> represented. The problem has always been how to know where one set stops
> and the next set starts if all the answers can be variable lengths. One of
> the "keys" to LEL compression is the ability to know this and to know how
> many bits you need before you are done (even though the length changes over
> and over all throughout the file).


In theory that would work, but the problem is you spend more time saying how
many bits there are.

For example, 510 (512 with rounding) is 9 bits. You could use 8 code bits and
3 bits for length, coming to 11 bits: 2 bits bigger than 9.

BTW, what you are trying is similar to LZSS, except LZSS uses distance/length
pairs not code/length pairs.

tom

tomst...@my-dejanews.com

Jan 29, 1999

> Peter, your answer above is not true. Consider this very simple example:
> 0=1
> 1=2
> 00=3
> 01=4
> 10=5
> 11=6
>
> 0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
> of the 2 bit answers. The difference is in knowing where one stops and the
> next starts.

You're right... How do you represent the length of the code then?

Tom

Marek Wierzbicki

Jan 29, 1999
Lynn Lantz wrote:

> Peter, your answer above is not true. Consider this very simple example:
> 0=1
> 1=2
> 00=3
> 01=4
> 10=5
> 11=6
>
> 0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
> of the 2 bit answers. The difference is in knowing where one stops and the
> next starts.

How will you know that 00 = 3 and not 1,1 (two single-bit 0 codes)???

Marek

Earl Pottinger

Jan 29, 1999
Lynn Lantz (lel...@bignet.net) wrote:

: Peter Schepers wrote in message <78q65u$h58$1...@watserv3.uwaterloo.ca>...


: >Lynn, Lynn, Lynn... it may take a while to see, but the counting argument
: >will make sense soon. But to the matter at hand, your little list above of
: >bits vs byte expansion is correct, but not your final total. Out of 8 bits
: >you can only have 256 possibles. When you strip down to 7 bits, yes you
: >only have 128 possibles, but those 7 bits are a sub-set of the previous 8
: >bits, and cannot be added later to increase your total past the original
: >256.


: Peter, your answer above is not true. Consider this very simple example:


: 0=1
: 1=2
: 00=3
: 01=4
: 10=5
: 11=6

: 0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
: of the 2 bit answers. The difference is in knowing where one stops and the
: next starts.

Based on the above data, two different files containing the bytes

File #1 = 1313131313
File #2 = 3131313131

when compressed give the following files in bits.

File #1 compressed = 000000000000
File #2 compressed = 000000000000

How do I uncompress File #1 and File #2?????????

Earl Colby Pottinger

----------------------------------------------------------------
: Stop on by the Internet TeleCafe! telnet://telecafe.com:9000 :
----------------------------------------------------------------
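
Earl's collision is easy to check mechanically (his bit counts are a little
off, but the point stands). A minimal Python sketch, using the table from
Lynn's post above; everything else here is my own illustration:

code = {"1": "0", "2": "1", "3": "00", "4": "01", "5": "10", "6": "11"}

def encode(digits):
    # concatenate the variable-length codes with no delimiters, as proposed
    return "".join(code[d] for d in digits)

file1 = "1313131313"
file2 = "3131313131"
print(encode(file1))                     # a run of fifteen 0s
print(encode(file2))                     # the same run of fifteen 0s
print(encode(file1) == encode(file2))    # True: one archive, two different originals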

Dave Mullen

Jan 29, 1999
Well here we are again. I love these kinds of things; it makes
comp.compression a fun place to be.

Lynn, why didn't you reply to my EMail ?

It seems that you have an answer for the impossible ( it has been referred
to as perpetual motion etc. ), but are unwilling to prove it.

C'mon stop playing with us - tell us your secret !!

Dave


graham...@hotmail.com

Jan 29, 1999
In article <UU2s2.249$Ja7...@news5.ispnews.com>,

"Lynn Lantz" <lel...@bignet.net> wrote:
>
> Peter Schepers wrote in message <78q65u$h58$1...@watserv3.uwaterloo.ca>...
> >Lynn, Lynn, Lynn... it may take a while to see, but the counting argument
> >will make sense soon. But to the matter at hand, your little list above of
> >bits vs byte expansion is correct, but not your final total. Out of 8 bits
> >you can only have 256 possibles. When you strip down to 7 bits, yes you
> >only have 128 possibles, but those 7 bits are a sub-set of the previous 8
> >bits, and cannot be added later to increase your total past the original
> >256.
>
> Peter, your answer above is not true. Consider this very simple example:
> 0=1
> 1=2
> 00=3
> 01=4
> 10=5
> 11=6
>
> 0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
> of the 2 bit answers. The difference is in knowing where one stops and the
> next starts.

So tell me, using this marvelous scheme, what does "001010010101100" decode
to?

Sorry for being slightly condescending, but I'm trying to nail the point.

- GLYPH

Janne Johansson

Jan 29, 1999
"Lynn Lantz" <lel...@bignet.net> writes:

> Let me try to explain it this way:
> 8 bits can represent 256 options.
> 7 bits can represent 128 options.
> 6 bits can represent 64 options.
> 5 bits can represent 32 options.
> 4 bits can represent 16 options.
> 3 bits can represent 8 options.
> 2 bits can represent 4 options.
> 1 bit can represent 2 options.
> Over the span of 8 bits it is then possible to have 510 different options
> represented. The problem has always been how to know where one set stops
> and the next set starts if all the answers can be variable lengths. One of
> the "keys" to LEL compression is the ability to know this and to know how
> many bits you need before you are done (even though the length changes over
> and over all throughout the file).

Yes, every "token" must have its own predetermined length, OR some
sort of stopbit or table-of-lengths. When you do the maths on it, you
see that accounting for the different lengths of each token will cost you
exactly the same amount that you gained.
That is overall, for all possible inputs of course.
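
The same bookkeeping cost shows up in the Kraft inequality: a uniquely
decodable code must have sum(2**-length) <= 1 over its codewords. A short
Python check (mine, not part of the post above) shows that the pool of 510
codewords of lengths 1 to 8 overshoots that bound by a factor of 8, so there
is no way to use them all without paying for delimiters somewhere else:

kraft_sum = 0.0
codewords = 0
for length in range(1, 9):
    n = 2 ** length                  # every bit string of this length
    codewords += n
    kraft_sum += n * 2.0 ** -length  # each codeword contributes 2**-length
print(codewords)                     # 510
print(kraft_sum)                     # 8.0 -- a decodable code needs <= 1.0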

Janne Johansson

Jan 29, 1999
"Lynn Lantz" <lel...@bignet.net> writes:

> Let me try to clarify something. I did not say that ALL files would
> compress into a smaller size on the first iterative pass. My research shows
> that over 90% of all bit distributions would compress into a smaller size
> during a single pass using my technique. However, about 10% will grow by
> one byte. This byte tells how the file should be re-ordered to achieve the
> best compression during the next pass. So as I said in a previous post a
> file may go from 100 to 101 on the first pass, and then 101 to 80 on the
> second pass. It will eventually compress all files but not in every pass,
> not in the same number of passes, not down to same size, etc.
> Thanks.

Ok, based on your text above, the 100 byte file could be:
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAA..." for instance, that compresses down
to: "BBBBBBBBBBBBBBB..." going from 100 A's to 80 B's. For instance.
Then you have the "hard" file:
"XXXXXXXXXXXXXXXXXXXXXXXXXXXX..." that has 100 bytes that really doesn't
want to compress, so the first round, it actually adds a byte, making
it something like this:
"XXXXXXXXXXXXXXX...XXXXXXXXXXZ" which would have the "magic"
informative Z-byte at the end, that can make it go from "XXXX...XXZ"
to "CCCCCCCCCCCCCCCCC" that's only 80 bytes long on the second stage.

Apart from the fact that my examples use simple strings, is this a correct
description of your theory?

Because if it is, the problem of decompressing shows already.
Imagine someone packs the file:
"XXXXXXXXXXXXXXXXXXXXXXXXXXX.." <- X*100
and then packs another file that looks like this:
"XXXXXXXXXXXXXXXXXXXXXXXXX...XZ" <- X*100 + 1*Z

Then you will end up with the 100X file becoming the second file
during stage2. Imagine we pack these two, using your 2-stage
compressor.
The (second) 100X+Z will compress to some 80 bytes, lets say:
"CCCCCCCCCCCCCCCCCCCC" <- C*80. Fine. Its smaller.

The first file though, it will go from being X*100, to X*100+Z, and as
shown above, the X*100+Z will become C*80. So you end up with two
files that now are C*80, but they came from different sources.

If you send this C*80 file to your friend, how will he know which of
the outputs it will be?

This is the heart of the counting argument. You cannot compress all
possible inputs and have an overall gain, since that would exclude
some possible outputs, and if you exclude some possible outputs, you
don't have a working lossless compressor. People will get angry if they
compress my_lovely_wife.jpeg and, when someone unpacks it, it becomes
my_boss_in_womens_underwear.jpg just because the decompressor isn't
lossless. =)

Steven Pigeon

Jan 29, 1999


> File #1 compressed = 000000000000
> File #2 compressed = 000000000000
>
> How do I uncompress File #1 and File #2?????????
>
> Earl Colby Pottinger
>
> ----------------------------------------------------------------
> : Stop on by the Internet TeleCafe! telnet://telecafe.com:9000 :
> ----------------------------------------------------------------


Every 2 or 3 months some twit posts a wonderful, recursive,
universal, 16:1 compressor... I think we should ALWAYS ignore this
kind of thread. Hey, there's some real compression thinking
to do!!!


Best,

S.


----------------------------------------------------------
Steven Pigeon Ph. D. Student.
University of Montreal.
pig...@iro.umontreal.ca Topics: data compression,
pig...@jsp.umontreal.ca signal processing,
ste...@research.att.com non stationnary signals
and wavelets.
----------------------------------------------------------
http://www.iro.umontreal.ca/~pigeon

Antaeus Feldspar

Jan 29, 1999
Lynn Lantz <lel...@bignet.net> wrote:

> Tim Iverson wrote in message <78ont8$l4s$1...@smbmail1.cisco.com>...
> >BTW, as long as you don't claim to be able to compress ALL files or
> >ANY file, no one is going to try to pigeon hole you. Recursive
> >compression arguments usually end up in variants of the halting
> >problem, so "recursive" is a hot-button for this group, too.
>
>

> Let me try to clarify something. I did not say that ALL files would
> compress into a smaller size on the first iterative pass.

OK, now let me try to clarify something: compressor +
compressor = compressor. That is, the practice of applying first one
compressor, and then applying the same or a different compressor, IS
ITSELF a single compressor.
Suppose I was doing simple RLE compression with a very stupid
algorithm: on each pass, it compared the current character or
character-plus-run-length to the next character or
character-plus-run-length. If the characters matched, the algorithm
replaces both with a character-plus-run-length where the run-length is
the total of the two components.
Obviously, the algorithm will not achieve anywhere near its full
potential unless it is applied in several successive steps, and in each
of these steps (after the first) it can only make the output smaller.
Plus, the actual number of iterative steps that's needed to reach output
of minimum size is not predetermined, but dependent upon the data.
But the action of that compressor, iterated however many times
the data determines, is identical to the SINGLE-ITERATION action of A
SINGLE COMPRESSOR: run-length compression with a better algorithm. And
like all compressors (including your iterated-as-many-times-as-need-be
LEL) it is subject to the counting argument.


> My research shows
> that over 90% of all bit distributions would compress into a smaller size
> during a single pass using my technique.

This is where you should start looking for the errors, then.

[claims based on erroneous premises snipped.]
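
Antaeus's example is easy to simulate. A small Python sketch (my reading of
the "stupid algorithm" above; the starting representation of one run per
character is an assumption) shows that iterating the pairwise merge until
nothing changes gives exactly the same runs as a one-shot run-length encoder,
i.e. the whole iterated process is just one compressor:

from itertools import groupby

def merge_once(runs):
    # one pass: compare each run to the next; if the characters match,
    # replace the pair by a single combined run and skip past both
    out, i = [], 0
    while i < len(runs):
        if i + 1 < len(runs) and runs[i][0] == runs[i + 1][0]:
            out.append((runs[i][0], runs[i][1] + runs[i + 1][1]))
            i += 2
        else:
            out.append(runs[i])
            i += 1
    return out

def iterate_until_stable(runs):
    while True:
        merged = merge_once(runs)
        if merged == runs:
            return runs
        runs = merged

s = "aaaabbbccccccaad"
naive_start = [(c, 1) for c in s]                       # every char is a run of 1
one_shot = [(c, len(list(g))) for c, g in groupby(s)]   # ordinary RLE in one pass
print(iterate_until_stable(naive_start) == one_shot)    # True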

Lynn Lantz

Jan 29, 1999
Dave, I responded to your email today.
Thanks.

Dave Mullen wrote in message <78rcr4$joo$1...@newton.pacific.net.sg>...

Lynn Lantz

Jan 29, 1999
tomst...@my-dejanews.com wrote in message
<78r276$aq3$1...@nnrp1.dejanews.com>...

>
>In theory that would work, but the problem is you spend more time saying how
>many bits there is.
>
>For example 510 (512 with rounding) is 9 bits. You could use 8 code bits, and
>3 bits for length, coming to 11bits. 2 bits bigger then 9.
>
>BTW, what you are trying is similar to LZSS, except LZSS uses distance/length
>pairs not code/length pairs.
>
>tom
Tom,
I'm not familiar with what LZSS stands for. I assume the LZ is Lempel Ziv
but what is the SS?
You are correct in your example above. The difference is that I don't need
the 3 bits added to every single byte to determine the length. The one byte
key I use at the front of the file tells me how to interpret the lengths all
throughout the file.


Lynn Lantz

Jan 29, 1999

Janne Johansson wrote in message ...
>Ok, based on your text above, the 100 byte file could be:
>"AAAAAAAAAAAAAAAAAAAAAAAAAAAAA..." for instance, that compresses down
>to: "BBBBBBBBBBBBBBB..." going from 100 A's to 80 B's. For instance.
>Then you have the "hard" file:
>"XXXXXXXXXXXXXXXXXXXXXXXXXXXX..." that has 100 bytes that really doesn't
>want to compress, so the first round, it actually adds a byte, making
>it something like this:
>"XXXXXXXXXXXXXXX...XXXXXXXXXXZ" which would have the "magic"
>informative Z-byte at the end, that can make it go from "XXXX...XXZ"
>to "CCCCCCCCCCCCCCCCC" that's only 80 bytes long on the second stage.
>
>Apart from that my examples use simple strings, is it a correct
>description of your thoery?

Not exactly. Here is what would happen:
1. All cases get a "key" added to the front (not back) of the file.
2. An iterative counter is also kept.
So, the first file would end up as:
1ZCCCCCCC.... out to 80 c's
The second file would end up as:
2ZCCCCCCC.... out to 80 c's (herein is one of the "tricks". this file had
to go through 2 passes to get compressed and it must also go through 2
passes to get un-compressed. After the first pass is done the very first C
becomes the Z (or key) for use during the 2nd pass).

Therefore the 2 are separate and would not switch the images as you describe
below:

Steve Tate

Jan 29, 1999
Lynn Lantz (lel...@bignet.net) wrote:

> Let me try to explain it this way:
> 8 bits can represent 256 options.
> 7 bits can represent 128 options.
> 6 bits can represent 64 options.
> 5 bits can represent 32 options.
> 4 bits can represent 16 options.
> 3 bits can represent 8 options.
> 2 bits can represent 4 options.
> 1 bit can represent 2 options.

> Over the span of 8 bits it is then possible to have 510 different options
> represented.

It's amazing, but with great regularity someone notices that and
thinks it's important. Several years ago someone had a similar
notion, and claimed to have actually patented it (it was called
"Hyperspace"). They sent it to me for review, and I pointed out why
it couldn't work. I'm not sure they ever believed me, but you simply
won't ever get something like that to work on all files (one pass, two
passes, 100 passes, it doesn't matter).

> The problem has always been how to know where one set stops
> and the next set starts if all the answers can be variable lengths.

Yes, that's exactly the problem. And it's also exactly what the
counting argument shows is *impossible* to do. ANY way of including
where one stops and the next begins will take at LEAST as much space
as the amount you save from your technique. The net result will
always, ALWAYS be that you do not get any net compression on most
files.

Again, it doesn't matter how many passes you do. That's totally
irrelevant. It doesn't matter how you represent stop/start or code
lengths or whatever. There are *NO* assumptions here except binary
code in and binary code out. And it's simply a fact that no single
compressor (including multi-pass compressors) can compress all files
(or even *most* files).

--
Steve Tate --- srt[At]cs.unt.edu | Gratuitously stolen quote:
Dept. of Computer Sciences | "The box said 'Requires Windows 95, NT,
University of North Texas | or better,' so I installed Linux."
Denton, TX 76201 |

Sean Lambert

Jan 29, 1999
On Fri, 29 Jan 1999, Steven Pigeon wrote:

> Every 2 or 3 month some twit posts a wonderful, recursive,
> universal, 16:1 compressor... I think we should ALWAYS ignore this
> kind of thread. Hey, there's some real compression thinking
> to do!!!

I prefer the term "misguided but well meaning individual" to "twit."
And I also believe that these discussions are good for us. It is
important to be able to explain as well as understand the basic
principles of compression, and this sort of thing is good practice.
In fact, I first joined the group during one of these discussions,
and it really helped me understand. I got to read about things from
many points of view.

Besides, we don't want anyone wasting lots of money trying to
develop an impossibility. It could ruin him. If things like this
make it to market then it will eventually be found to not work, and
that could affect the whole field. (Cold fusion anyone?)

Sometimes it is difficult to accept being wrong. One town, when
told that the laws of gravity prevented their construction project
from working, passed an ordinance stating that the laws of gravity
did not apply in their jurisdiction...

Lynn: Please let someone help you! We are not out to steal your
ideas, and we have no interests to protect. If you are certain your
method works, then you ought to be able to come up with some method
of proving it.

Best of luck,
--- Sean

Sean Lambert sum...@mindless.com http://www.cs.montana.edu/~sum1els/

Steven Pigeon

Jan 29, 1999

Sean Lambert wrote:

> On Fri, 29 Jan 1999, Steven Pigeon wrote:
>
> > Every 2 or 3 month some twit posts a wonderful, recursive,
> > universal, 16:1 compressor... I think we should ALWAYS ignore this
> > kind of thread. Hey, there's some real compression thinking
> > to do!!!
>
> I prefer the term "misguided but well meaning individual" to "twit."

I stand politically corrected.

> And I also believe that these discussions are good for us. It is
> important to be able to explain as well as understand the basic
> principles of compression, and this sort of thing is good practice.
> In fact, I first joined the group during one of these discussions,
> and it really help me understand. I got to read about things from
> many points of view.

The first few were good, agreed. But now, it's a recurring plague. That's not fun
anymore. And it's always presented the same way:
Miracle! New universal compressor! Recursive! Crushes anything into
one bit! Won't tell how it works, but I can sell it to you for a ludicrous
amount of money! Order now! (or buy shares, or... )


> Besides, we don't want anyone wasting lots of money trying to
> develop an impossibility. It could ruin him.

Does survival of the fittest ring a bell? That's mean, but that's the game;
you can't improvise yourself a computer scientist if you're not one.


> If things like this
> make it to market then it will eventually be found to not work, and
> that could affect the whole field. (Cold fusion anyone?)

That's right. But I guess you won't find anyone to try any iridium-rod-based
cold fusion experiments again.


> Sometimes it is difficult to accept being wrong. One town, when
> told that the laws of gravity prevented their construction project
> from working, passed an ordinance stating that the laws of gravity
> did not apply in their jurisdiction...

> Best of luck,
> --- Sean
>
> Sean Lambert sum...@mindless.com http://www.cs.montana.edu/~sum1els/


Best,

S.

--

Kelsey Bjarnason

Jan 29, 1999
Steven Pigeon wrote in message <36B1FC3F...@iro.umontreal.ca>...

>
>
>Sean Lambert wrote:
>
>> On Fri, 29 Jan 1999, Steven Pigeon wrote:
>>
>> > Every 2 or 3 month some twit posts a wonderful, recursive,
>> > universal, 16:1 compressor... I think we should ALWAYS ignore this
>> > kind of thread. Hey, there's some real compression thinking
>> > to do!!!
>>
>> I prefer the term "misguided but well meaning individual" to "twit."
>
>I stand politically corrected.
>
>> And I also believe that these discussions are good for us. It is
>> important to be able to explain as well as understand the basic
>> principles of compression, and this sort of thing is good practice.
>> In fact, I first joined the group during one of these discussions,
>> and it really help me understand. I got to read about things from
>> many points of view.
>
>The first few were good, agreed. But now, it's a recurring plague. That's not fun
>anymore. And it's always presented the same way:
>Miracle! New universal compressor! Recursive! Crushes anything into
>one bit! Won't tell how it works, but I can sell it to you for a ludicrous
>amount of money! Order now! (or buy shares, or... )


Hmm... actually... no problem at all. I'll happily send you a
compressor and decompressor, lossless, which will compress any file
you have down to a mere 64 bytes. The only drawback is the size of
the decompressor... which varies... :)


tomst...@my-dejanews.com

Jan 29, 1999

> Tom,
> I'm not familiar with what LZSS stands for. I assume the LZ is Lempel Ziv
> but what is the SS?
> You are correct in your example above. The difference is that I don't need
> the 3 bits added to every single byte to determine the length. The one byte
> key I use at the front of the file tells me how to interpret the lengths all
> throughout the file.
>

LZSS is a form of coding plaintext into distance/length pairs. You make a
sliding window (of the previously compressed plaintext) then as you encode
more plaintext you add to a string. You try to find this string in the
previous X amount of encoded plaintext (usually 4kb) and if you find it, you
encode the distance (from where you are, to the matching string) and the
length of the matching string. This works ok on most text, and some binary
files too.

Tom
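
For anyone curious what the distance/length idea looks like in code, here is
a bare-bones Python sketch of an LZSS-style coder (greedy longest-match
search, no bit packing, arbitrary parameters; an illustration of the general
scheme Tom describes, not anybody's production implementation):

def lz_encode(data, window=4096, min_match=3, max_match=18):
    # emit either a literal byte or a ("ref", distance, length) back-reference
    i, tokens = 0, []
    while i < len(data):
        best_len = best_dist = 0
        for j in range(max(0, i - window), i):           # naive window search
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= min_match:
            tokens.append(("ref", best_dist, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz_decode(tokens):
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, dist, length = tok
            for _ in range(length):                      # copies may overlap
                out.append(out[-dist])
    return bytes(out)

text = b"how much wood would a woodchuck chuck if a woodchuck could chuck wood"
assert lz_decode(lz_encode(text)) == text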

graham...@hotmail.com

Jan 29, 1999
In article <G0ls2.26$2k5...@news5.ispnews.com>,

"Lynn Lantz" <lel...@bignet.net> wrote:
> tomst...@my-dejanews.com wrote in message
> <78r276$aq3$1...@nnrp1.dejanews.com>...
> >
> >In theory that would work, but the problem is you spend more time saying
> how
> >many bits there is.
> >
> >For example 510 (512 with rounding) is 9 bits. You could use 8 code bits,
> and
> >3 bits for length, coming to 11bits. 2 bits bigger then 9.
> >
> >BTW, what you are trying is similar to LZSS, except LZSS uses
> distance/length
> >pairs not code/length pairs.
> >
> >tom
> Tom,
> I'm not familiar with what LZSS stands for. I assume the LZ is Lempel Ziv
> but what is the SS?
> You are correct in your example above. The difference is that I don't need
> the 3 bits added to every single byte to determine the length. The one byte
> key I use at the front of the file tells me how to interpret the lengths all
> throughout the file.

I believe the "SS" stands for "Stacker Sucks". :)

Actually, last I heard the "SS" was something to do with Stac Electronics.
It does suck, however, compared to other faster, better LZ implementations.
It sometimes seems like making a compressor that beats LZSS hands down is an
exercise that compression enthusiasts like to do in their spare time.

- GLYPH

Binder Edgar

Jan 29, 1999

Steven Pigeon wrote:

> Every 2 or 3 month some twit posts a wonderful, recursive,
> universal, 16:1 compressor... I think we should ALWAYS ignore this
> kind of thread. Hey, there's some real compression thinking
> to do!!!

Let me describe a wonderful, non-recursive, universal compressor
that reduces any file down to a few bytes...
- to compress the file, we store the space-time coordinates of the
computer (a few bytes. Probably not more than 20-30)
- to decompress, we read the space-time location from the file. Then we
use our real-time simulation of the universe to find the computer at
that location. Next we copy from the computer in the simulation the
original file, and this way ... we have a magical decompressor :-)

This algorithm is as unusable as any "wonderful" recursive algorithm,
but there is a difference: if a simulation of the universe could be
implemented, the algorithm might actually work, and the counting argument
doesn't apply to it... :-)

Edgar


Steven Pigeon

Jan 29, 1999

Binder Edgar wrote:

That's SO stupid... fortunately, it's a joke.

Gwenolé Beauchesne

Jan 30, 1999
<graham...@hotmail.com> wrote:

> I believe the "SS" stands for "Stacker Sucks". :)

That's Storer-Szymanski (sorry for the spelling)

A++
--
Gwenolé Beauchesne

Woody

Jan 30, 1999
I think we all wanna know what you wrote ;)

-Woody^dsd

http://www.come.to/cib

Jon

Jan 30, 1999
Just adding my 2 cents since everyone else is. Keep in mind I'm no great
mind, and not even that good at math.
It would be great to see you do this, IMHO. Of course I don't see how; then
again, if it were easy it would have been done already.
But my problem is this:
0=0
1=1
11=2
10=3
01=4
00=5
and of course one bit to tell if it's 00 or 0.
log2(6)=2.585, and it seems to me that bit has a 50/50 chance of
encoding 1 bit or 2 bits,
so it costs you .585 of a bit half the time, and you save .415 the other
half.
If you reverse it, the odds are against you, worse than 2 to 1.
Anyway, good luck.


Dave Mullen

Jan 30, 1999
For the benefit of Woody (and anyone else), today Lynn sent me a
confidentiality agreement to sign - very sensible practice. I have signed it
and mailed it to him. Let's see what happens next.

Just to add my 2 pesos to the argument...

It is possible to recursively compress the same data with only a minimum
overhead, let's say 4 bits for up to 15 recursions. The key fact is that this
will only work for a very small set of files; the majority of files will
always expand by at least those 4 bits, and probably a lot more. And to
imagine that you could compress any 1mb file down to 20 bits is obviously
crazy (however you might just swing it for a 1mb file containing all 0's or
all 1's)

Each pass will perform some kind of transform / compression on the data. For
each transform / compress, 1 is added to the recursion count. Here are two
contrived examples, which also illustrate that two compressed files of
000000000000 can decompress to different strings - NOTE, only if the
recursion count is different i.e. an extra 4 bits is always required.

Example 1

Source Data = 00010001000100010001000100010001 (32 bits)

Transform 1= (if next 2 bits are 00, code as 0, else code as 1+the 2 bits)

Thus = 01010101010101010101010101010101 (32 bits)

Transform 2 = (if next bit is different from previous, code as 0, else code
as 1)

Thus = 00000000000000000000000000000000 (32 bits)

Transform 3 = (if next 2 bits are 00, code as 0, else code as 1+the 2 bits)

Thus = 0000000000000000 (16 bits)

At this point, we can write our 16 bits + 4 header bits (value 3), and be
done

Now, you can see that if the value of header is 3, the file will decompress
to the original 00010001000100010001000100010001, but if the value of header
is 2, the file will decompress to the string
01010101010101010101010101010101.

Okay, it's specially designed so that the source data "fits" the compression
used, but the principle can be demonstrated. Both files will compress to
"0000000000000000" + a header of 4 bits (both files compressed to 20 bits,
i.e. 62.5%).
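
(For what it's worth, here is a throwaway sketch of those transforms in Python --
made-up helper names, and it only reproduces the contrived Example 1 above:)

def pair_code(bits):                 # Transforms 1 and 3: "00" -> "0", else "1" + the two bits
    out, i = [], 0
    while i < len(bits):
        if bits[i:i+2] == "00":
            out.append("0")
        else:
            out.append("1" + bits[i:i+2])
        i += 2
    return "".join(out)

def pair_decode(bits):
    out, i = [], 0
    while i < len(bits):
        if bits[i] == "0":
            out.append("00"); i += 1
        else:
            out.append(bits[i+1:i+3]); i += 3
    return "".join(out)

def diff_code(bits, prev="1"):       # Transform 2: "0" if the bit differs from the previous one
    out = []
    for b in bits:
        out.append("0" if b != prev else "1")
        prev = b
    return "".join(out)

def diff_decode(bits, prev="1"):
    out = []
    for b in bits:
        cur = ("0" if prev == "1" else "1") if b == "0" else prev
        out.append(cur)
        prev = cur
    return "".join(out)

src = "0001" * 8                                  # 32 bits
step1 = pair_code(src)                            # 0101...01 (32 bits)
step2 = diff_code(step1)                          # 0000...00 (32 bits)
step3 = pair_code(step2)                          # 0000000000000000 (16 bits)
assert step3 == "0" * 16
assert pair_decode(diff_decode(pair_decode(step3))) == src   # undo passes 3, 2, 1

The 16-bit result plus the 4-bit pass count is exactly the 20 bits mentioned
above; the pass count is the overhead the counting argument insists on.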

Let's say that if your header was 16 bits, you could run up to 65535
different techniques on a file - but you are confined to using the same
techniques in the same order for any file; your only choice to make is at
what point to stop. This is a difficult decision to make - consider that
transform x expands the data - do you stop at transform x-1, or try
transform x+1 to see if you do better ??

The important thing to note is that I do not (and would not) claim it to
work for all files - obviously it doesn't !!!

Dave


Earl Pottinger

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
I would like to point out that my text compressor BlackHole does compress
text 16:1; however, it only works on sorted USENET messages when there is a
flame war going on and the same messages get quoted over and over :)

Earl Colby Pottinger


Steven Pigeon (pig...@iro.umontreal.ca) wrote:

: > File #1 compressed = 000000000000
: > File #2 compressed = 000000000000
: >
: > How do I uncompress File #1 and File #2?????????
: >
: > Earl Colby Pottinger
: >
: > ----------------------------------------------------------------
: > : Stop on by the Internet TeleCafe! telnet://telecafe.com:9000 :
: > ----------------------------------------------------------------

: Every 2 or 3 month some twit posts a wonderful, recursive,
: universal, 16:1 compressor... I think we should ALWAYS ignore this
: kind of thread. Hey, there's some real compression thinking
: to do!!!

: Best,
: S.

: ----------------------------------------------------------
: Steven Pigeon Ph. D. Student.
: University of Montreal.
: pig...@iro.umontreal.ca Topics: data compression,
: pig...@jsp.umontreal.ca signal processing,
: ste...@research.att.com non stationnary signals
: and wavelets.
: ----------------------------------------------------------
: http://www.iro.umontreal.ca/~pigeon


---------------------------------------------------------------------
: Internet Direct (416)233-2999 1000 lines SLIP, 9600 - 33,600 bps :
---------------------------------------------------------------------

tomst...@my-dejanews.com

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
Define crazy. The algorithm he described would work!!! Not on a lot of
files, but it would work. I compressed/decompressed data the way he
described and it would work. (I did it on paper).

I can't imagine it working fast and/or well though. Let's see what happens


(Lynn, if your algorithm works, why not let FSF use it (non-profit), well me I
am gonna stick to LZSS and LZP though)

tom

Lynn Lantz

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
tomst...@my-dejanews.com wrote in message
<78uvsv$lbh$1...@nnrp1.dejanews.com>...

>Define crazy. The algorithm he described would work!!! Not on a lot of
>files, but it would work. I compressed/decompressed data the way he
>described and it would work. (I did it on paper).


Tom, I never described my algorithm on here (just the concepts of it), so I am
not sure what you were working with. If you were looking at the previous
post, that was written by Dave Mullen and contained his ideas (not mine).

>
>I can't imagine it working fast and/or well though. Let's see what happens
>
>
>(Lynn, if your algorithm works, why not let FSF use it (non-profit), well me I
>am gonna stick to LZSS and LZP though)

Tom, it costs several thousand dollars just to file a patent alone, not
counting everything else to get a product to market. Why in the world would
I want to just give it away to FSF?

Dave Mullen

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
Oops, sorry Lynn, did I steal your thunder ?

Tom, I was using a very extreme and purpose-designed case to prove that both
sides of the argument are correct, but only up to a certain point. If you
can find a practical use for those very simple algs, you are welcome to use
them (patent them even, lol).

Dave

tomst...@my-dejanews.com

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to

> Tom, it costs several thousand dollars just to file a patent alone not
> counting everything else to get a product to market. Why in the world would
> I want to just give it away to FSF?

Because I, like many other people, wouldn't pay a cent for an algorithm.
Personally I think patenting an algorithm is silly. You can copyright an
implementation, but patents just hinder the world.

(Can we say LZW? However, deflate is free and way better than LZW...)

See, patents are what keep people from working on compression algorithms. Like
if I independently make an algorithm, oops, someone posted a patent 3 months
ago, so I can't use something I INVENTED ???

I am not targeting you specifically, just all patent holders on algorithms
(that goes for encryption too).

What does it hurt to copyright your implementation? If people are really
worried about people stealing their algorithms, make them confidential.

Lynn Lantz

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
Dave Mullen wrote in message <78v6je$g9i$1...@newton3.pacific.net.sg>...

>Oops, sorry Lynn, did I steal your thunder ?

Dave, I'm extremely disappointed with this post of yours. Where did you get
this idea?? All I was doing in the previous post was telling Tom that the
unworkable algorithm in your posting was from you and was your idea, NOT
mine.

>Tom, I was using a very extreme and purpose designed case to prove both
>sides of the argument are correct, but only up to a certain point. If you
>can find a practical use for those very simple algs, you are welcome to use
>them (patent them even lol).

Dave, you asked to see my algorithm. I can't ask for respect since you
don't know me from Adam and have not seen my ideas, but if you want to work
with me then I at least expect civility and an open mind. If you want to LOL
and ridicule me then don't bother sending the request.


Lynn Lantz

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
tomst...@my-dejanews.com wrote in message
<78v9l9$smg$1...@nnrp1.dejanews.com>...

>What does it hurt to copyright your implementation? If people are really
>worried about people stealing their algorithms, make them confidential.

Tom, could you expand on this further? There is some protection from
copyrighting an implementation, but as I understand it, as long as someone
else's implementation is "just enough" different from yours then they can
get away with it.
Secondly, what do you mean by making an algorithm confidential? Do you mean
by releasing an executable only without source code? My understanding is
that any executable could be reverse engineered.

Lynn Lantz

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
Lynn Lantz wrote in message ...

>me then I at least expect civility and an open mind. If you want to LOL and
>ridicule me then don't bother sending the request.

Dave, if your LOL was about your algorithm and not mine, then I apologize for
misreading the post. My skin just got a little thin after reading 100 posts
telling me I'm an idiot.


Steven Pigeon

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to

Lynn Lantz wrote:

> tomst...@my-dejanews.com wrote in message
> <78uvsv$lbh$1...@nnrp1.dejanews.com>...
> >Define crazy. The algorithm he described would work!!! Not on a lot of
> >files, but it would work. I compressed/decompressed data the way he
> >described and it would work. (I did it on paper).
>
> Tom, I never described my algorithm on here (just the concepts of it), so I am
> not sure what you were working with. If you were looking at the previous
> post, that was written by Dave Mullen and contained his ideas (not mine).
>
> >I can't imagine it working fast and/or well though. Let's see what happens
> >
> >(Lynn, if your algorithm works, why not let FSF use it (non-profit), well me I
> >am gonna stick to LZSS and LZP though)
>
> Tom, it costs several thousand dollars just to file a patent alone, not
> counting everything else to get a product to market. Why in the world would
> I want to just give it away to FSF?

Having a few patents (some pending), I might answer this question. It costs a lot
(I mean a lot... probably O(10,000$) if you deal with a decent patent attorney
office) to pass through the process, without certainty about the results. You may
have to go through more than once because something or other in your claims
'hurts' somebody else's patent(s). Then, after that, you have to evaluate how
much $$$ you're going to make with your algorithm, and that's not clear. First,
you'll have to convince somebody big to use your algorithm to get royalties, or
have yourself a mighty nice product to sell it yourself.

If you're planning to sell, say, your compression algorithm as an archiver, it's
already a competitive environment. You could also sell it as firmware for
Nintendo or Sony, or something, so that they'll use it in new appliances. But
this is certainly not an easy market either.

On the other hand, handing your algorithm to gnu/fsf, you get all the benefits
for free: visibility, and your name getting known. That might be much more
profitable in the long term than the few (thousand) bucks you could get
otherwise, because some big company may offer you a really interesting job.

(And, of course, my patents are corporate patents... I'm not paying any of
the fees, so... the situation is different)


Best,

S.


tomst...@my-dejanews.com

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to

> Secondly, what do you mean by making an algorithm confidential? Do you mean
> by releasing an executable only without source code? My understanding is
> that any executable could be reverse engineered.


What I mean is, if you are going to make it private, why bring it up in the
first place?

Tom

Antaeus Feldspar

unread,
Jan 30, 1999, 3:00:00 AM1/30/99
to
Lynn Lantz <lel...@bignet.net> wrote:

Perhaps not an idiot.
But misguided, yes. Definitely, misguided. Anyone who says and
believes "I have a lossless compressor to which the counting argument
doesn't apply" is misguided.
As someone else eloquently pointed out, the counting argument is
actually more verifiable than any of the laws of thermodynamics.
If you think that there's anyone in the world who could come into this
group and say "Guess what, fellas! I just came up with a lossless
compressor that defies all the laws of mathematics but works anyway!"
and *not* be greeted with skepticism, you're wrong.

-jc

--
* -jc IS *NOW* feld...@cryogen.com
* Home page: http://members.tripod.com/~afeldspar/index.html
* The home of >>Failed Pilots Playhouse<<
* "Better you hold me close than understand..." Thomas Dolby

Dave Mullen

unread,
Jan 31, 1999, 3:00:00 AM1/31/99
to
Of course I was talking about my algorithm !!

graham...@hotmail.com

unread,
Jan 31, 1999, 3:00:00 AM1/31/99
to
In article <1dmf744.13i...@dyn-1-1-028.rns.dialup.oleane.fr>,

gbea...@ens.insa-rennes.fr (=?ISO-8859-1?Q?Gwenol=E9_Beauchesne?=) wrote:
> <graham...@hotmail.com> wrote:
>
> > I believe the "SS" stands for "Stacker Sucks". :)
>
> That's Storer-Skimanski (sorry for the spelling)

Hmm... Maybe I'm thinking of something else.

- GLYPH

Mark Nelson

unread,
Jan 31, 1999, 3:00:00 AM1/31/99
to
graham...@hotmail.com wrote in message
<792og8$kpf$1...@nnrp1.dejanews.com>...

>In article <1dmf744.13i...@dyn-1-1-028.rns.dialup.oleane.fr>,
> gbea...@ens.insa-rennes.fr (=?ISO-8859-1?Q?Gwenol=E9_Beauchesne?=)
wrote:
>> <graham...@hotmail.com> wrote:
>>
>> > I believe the "SS" stands for "Stacker Sucks". :)
>>
>> That's Storer-Skimanski (sorry for the spelling)
>
>Hmm... Maybe I'm thinking of something else.


Stac referred to their implementation of LZSS as LZS. That may be what you had
in mind.

Janne Johansson

unread,
Feb 1, 1999, 3:00:00 AM2/1/99
to
"Lynn Lantz" <lel...@bignet.net> writes:

> >Ok, based on your text above, the 100 byte file could be:
> >"AAAAAAAAAAAAAAAAAAAAAAAAAAAAA..." for instance, that compresses down
> >to: "BBBBBBBBBBBBBBB..." going from 100 A's to 80 B's. For instance.
> >Then you have the "hard" file:
> >"XXXXXXXXXXXXXXXXXXXXXXXXXXXX..." that has 100 bytes that really doesn't
> >want to compress, so the first round, it actually adds a byte, making
> >it something like this:
> >"XXXXXXXXXXXXXXX...XXXXXXXXXXZ" which would have the "magic"
> >informative Z-byte at the end, that can make it go from "XXXX...XXZ"
> >to "CCCCCCCCCCCCCCCCC" that's only 80 bytes long on the second stage.
> >
> >Apart from that my examples use simple strings, is it a correct
> >description of your theory?
>
> Not exactly. Here is what would happen:
> 1. All cases get a "key" added to the front (not back) of the file.

Well, that doesn't matter.

> 2. An iterative counter is also kept.

How big can this be? I mean, it would be "unfair" if it were possible
not to count that size too.

> So, the first file would end up as:
> 1ZCCCCCCC.... out to 80 c's

And, if the "1Z" part can be of any size, there will be a case when
the value of "1Z" actually is "CC". There you go again.

> The second file would end up as:
> 2ZCCCCCCC.... out to 80 c's (herein is one of the "tricks". this file had
> to go through 2 passes to get compressed and it must also go through 2
> passes to get un-compressed. After the first pass is done the very first C
> becomes the Z (or key) for use during the 2nd pass).
>
> Therefore the 2 are separate and would not switch the images as you describe
> below:

No, you just moved the problem one step ahead of you. How would you
stop uncompressing a file that is supposed to be "1ZCCCCC..." in the
end? This is "the halting problem".

If I compress the file "XXXX...." which in turn becomes "1ZXXXX..." and in
the next step becomes "ABCDE", and then compress the file "1ZXXX..." it
will also become "ABCDE", meaning that if we uncompress an
"ABCDE"-file, we can never get the "1ZXXXX..." out of it, but only the
"XXXX..." since your compressor will think it needs to do another round.

--
"Backwards compatible" means: "if it isn't backwards, it's not compatible."

Http://www.it.kth.se/~jj

Graham Shaw

unread,
Feb 1, 1999, 3:00:00 AM2/1/99
to
In article <G0ls2.26$2k5...@news5.ispnews.com>,
Lynn Lantz <lel...@bignet.net> wrote:
> I'm not familiar with what LZSS stands for. I assume the LZ
> is Lempel Ziv but what is the SS?

(James A) Storer and (Thomas G) Szymanski

See "Text Compression via Textual Substitution"
(JACM, vol 29 no 4, October 1982, pp928-951)

LZSS is a variant of LZ77. It differs in that each codeword
represents either a copied string or a literal character, as
opposed to a copied string followed by a literal character.
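
(A throwaway sketch of that codeword shape -- a flag, then either a literal or an
(offset, length) copy. Toy parameters and a naive match search, purely to
illustrate the idea, not Storer and Szymanski's actual scheme:)

def lzss_encode(data, window=255, min_match=3):
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):    # naive longest-match search
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append((1, best_off, best_len))   # flag 1: copy (offset, length)
            i += best_len
        else:
            out.append((0, data[i]))              # flag 0: literal character
            i += 1
    return out

def lzss_decode(tokens):
    out = []
    for t in tokens:
        if t[0] == 0:
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])             # copies may overlap themselves
    return "".join(out)

s = "abcabcabcabcX"
assert lzss_decode(lzss_encode(s)) == s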

> You are correct in your example above. The difference is that
> I don't need the 3 bits added to every single byte to determine
> the length. The one byte key I use at the front of the file
> tells me how to interpret the lengths all throughout the file.

Try applying the counting argument to the whole of the input and
the whole of the (final) output. That way, it doesn't matter how
many passes you use - and neglecting an 'out-of-band' signal may
cause a small error, but the errors won't accumulate.

If (as in one of your examples) the output consists of 20 bytes
(160 bits) then it can represent 2**160 possibilities. Add in
all outputs shorter than 160 bits and you reach 2**161. Allow
(say) up to 2**32 iterations and you reach 2**193. Even if we
ignore the fact that the length and iteration count are now being
transmitted out-of-band, this is still a long way short of the
2**8388608 possibilities needed to represent any 1 megabyte file.
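
(The same figures as a two-line check:)

from math import log2

outputs    = 2**161          # roughly every bit string of 160 bits or fewer
iterations = 2**32           # the generous allowance for the recursion count
print(log2(outputs * iterations))   # 193.0
print(8 * 1024 * 1024)              # 8388608 -- the exponent actually needed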

--
Graham Shaw

David Marshall

unread,
Feb 2, 1999, 3:00:00 AM2/2/99
to
Lynn Lantz wrote:
> 1. Why is it that everyone accepts the fact that any file can be compressed
> once into a smaller number of bits,

Whoa there! The vast majority of possible files *cannot* be compressed
into fewer bits - the trick is finding an algorithm that compresses the
vast majority of *useful* files into fewer bits.

> but as soon as you say you can do it
> twice or more then they go nuts with this argument?

I'm guessing that the fundamental flaw (and yes, there is a fundamental
flaw) in your algorithm is that you're not including a record of what
was done at each recursion of the algorithm in the size of the resulting
data. But then, how many times have we seen that?

Dave
--

David Marshall

unread,
Feb 2, 1999, 3:00:00 AM2/2/99
to
Lynn Lantz wrote:
> 0 is a different answer from 00 or 01. The answer for 0 is not a "sub-set"
> of the 2 bit answers. The difference is in knowing where one stops and the
> next starts.

Which needs more bits...

Dave
--

David Marshall

unread,
Feb 2, 1999, 3:00:00 AM2/2/99
to
Lynn Lantz wrote:
> My skin just got a little thin after reading 100 posts telling me I'm
> an idiot.

101. You're an idiot.

Dave
--

Lars Gregersen

unread,
Feb 3, 1999, 3:00:00 AM2/3/99
to
On Wed, 27 Jan 1999 21:13:27 -0500, "Lynn Lantz" <lel...@bignet.net>
wrote:

>As I promised a few of you once I got the patent filed that I would come
>back to let this group know.
>I'm pleased to introduce LEL Compression (Patent Pending).
>
>The main features are:
>1. Recursive - it can be done over and over any number of times down to a
>set size or set number of iterations. It will work on files already
>compressed by any other compression program. It will work on files that
>have been encrypted. (Spare me the arguments about the Counting Number
>proof until you've read my arguments below).

The nice thing about mathematical proofs is that they are always
right. Saying that a proof, which is believed to be perfectly correct
by almost all people reading this group, is incorrect for your
algorithm is not something that gets you a lot of support or funding.

As I see it, you can do two things. Either you accept the counting number
theorem and the other arguments that have been written in this group,
or you find where the proof is incorrect. Mathematical proofs aren't
necessarily correct; it is human to err. It is just that the counting
number argument only fills a single page (with figures) and there is
little in the proof to be confused about.

Note that it is very much possible to make very good compressors
without having to "break" one or the other mathematical proof. A lot
of companies are making perfectly good money with their compression
algorithms; you could too. Although everybody in this group believes
that it is not possible to make one single algorithm that can compress
any file, it just might be that your algorithm works on most _real_
problems with a good result, and hopefully it is also sufficiently
fast/reliable when doing compression/decompression. In order to see
how your algorithm works you have to give more details on where and how
you store information, whether you allow your algorithm to change (and
where you store that information) and finally, just to make things
clear to all of us: how you account for all bits before and after
compression.

Lars Gregersen, M.Sc. Chem. Engng.
Department of Chemical Engineering, DTU, Denmark
E-mail: l...@kt.dtu.dk
Homepage: http://www.gbar.dtu.dk/~matlg/

Andy Fraser

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
SPAMBGONEmbke...@yahoo.com (Matt Kennel) writes:

> On Thu, 28 Jan 1999 06:53:16 GMT, Tom Lane <t...@netcom.com> wrote:
> :"Mark Nelson" <ma...@ieee.org> writes:
> :> I have no doubt that we can create a demonstration
> :> that protects your interests while proving that you aren't just some
> :> crackpot!
> :
> :... or, more likely, proves that Lynn *is* a crackpot. If that post
> :wasn't crackpot-bait then I've never seen any.
> :
> :Lynn: your claims are just about as non-credible as a perpetual motion
> :machine.
>
> No they're less credible. A 'perpetual motion machine' depends on
> the second law of thermodynamics, which is a macroscopic approximation about
> facts regarding the typical chaotic dynamics of lots of micro particles.

I believe that if you can find a way to beat the counting argument and
compress any file, you can build a Maxwell demon and beat the second
law. IMO the most persuasive argument against Maxwell demons is
C. Bennett's, which goes:

When the demon measures a system and determines that it is in some
subset of the energy surface, he must set some number of bits in a
memory to record the measurement. Then, he puts down some kind of
constraint to hold the system in the measured subset. Next he
isothermally relaxes the constraint thereby extracting work from a
heat source. At this point, it seems that the second law has been
violated, but to go back to the initial state, the bits in memory must
be reset. That is necessarily a dissipative process and it wastes
everything you've gained and closes the cycle.

But if you could represent each of 2**N equiprobable subsets of the
energy surface with fewer than N bits, then you would be able to close
the cycle and extract work from a heat source without the need to dump
heat to a lower-temperature sink, i.e., you could beat the second law.
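
(Putting rough numbers on that -- my gloss, not part of the quoted argument.
With k the Boltzmann constant and T the bath temperature:)

  work gained by locating the system:   W_gain  = k T ln(2**N) = N k T ln 2
  Landauer cost of resetting N bits:    W_erase >= N k T ln 2
  net work per cycle:                   W_gain - W_erase <= 0

If the same measurement could be recorded in only M < N bits, the net work
would be at least (N - M) k T ln 2 > 0 per cycle: work extracted from a single
heat bath, i.e. exactly the second-law violation described above.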

Andy Fraser
