--
"What worries me is not the violence of the few, but the indifference of
the many"
Eric
--
Arithmetic Coding at a glance
http://ac.bodden.de
> What worries me is the indifference of the many
Oh well, better get used to it.
-- Mat.
But he still needs to work on the signature ;-)
> I can compress any 100 bytes down to 7 bytes lossless
> takes 30 seconds max to compress it and about 5 seconds to decompress it.
> Or 40 bytes down to 7 bytes in one second.
> I am not looking for money because I have all that I need, so I'm not
> looking to bilk anyone out of money on such a "crazy idea".
> I merely wish to demonstrate the fact that the application does indeed
> work to someone in the industry so they take it seriously; that's it.
There are several well-established "challenges" on this newsgroup if
you should choose to try to prove your system. Mark Nelson has a file
of random digits that you would need to compress, but you need to add
in the size of the decompression program when counting how much
compression you achieved. There's another challenge along those lines
with a cash prize. Here's my own challenge, which has been listed in
the comp.compression FAQ for many years, and yet oddly enough no one
has won:
1) You send me the decompression program (I don't need the
compression program).
2) I send you the data to compress. I'll send you 100 bytes because
of your description above.
3) You send me the compressed version -- 7 bytes from your
description.
4) I will run your decompression program with this compressed
data, and test to see if the original 100 bytes are restored.
If you manage to do this, I send you $100. Fair enough?
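For what it's worth, the size accounting behind challenges like these can be sketched in a few lines of Python. The function name, file paths, and the `decompressor_size` parameter below are illustrative assumptions, not part of any actual challenge harness:

```python
import os

def claim_holds(original_path, compressed_path, decompressor_size=0):
    """Check the arithmetic behind the claim: for a 100-byte original,
    the compressed version must be at most 7 bytes.  For challenges
    like Mark Nelson's, the size of the decompression program is
    counted too (pass it as decompressor_size)."""
    original = os.path.getsize(original_path)
    compressed = os.path.getsize(compressed_path) + decompressor_size
    return original == 100 and compressed <= 7
```

Note that once the decompressor's own size is included, hiding the data inside the program no longer helps the claimant.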
--
Steve Tate - srt[At]cs.unt.edu | "A computer lets you make more mistakes faster
Dept. of Computer Sciences | than any invention in human history with the
University of North Texas | possible exceptions of handguns and tequila."
Denton, TX 76201 | -- Mitch Ratliffe, April 1992
This is like the cold virus. Every new
season brings us a mutant strain of the
bug...
Best,
S.
--
Steven Pigeon, Ph. D.
pig...@iro.umontreal.ca
"Eric Bodden" <e...@ukc.ac.uk> wrote in message
news:b82073$eip$1...@athena.ukc.ac.uk...
<s...@nospam.unt.edu> wrote in message news:b829c0$bg$1...@hermes.acs.unt.edu...
This is not the kind of thing I would get anyone else involved with,
but if you want to send me a plane ticket to Toronto.....
> You're at the University of North Texas, I think, so how about at least
> contacting some other university in Toronto with the same department and
> getting them at least to look at it for you and relay the facts that it
> works to you.
> By the way, I don't accept checks. Get a money order ready.
> I hope you do see that I am willing to verify my claim and am not making
> excuses, because I am trying to find a solution.
I think we're stuck with the long-distance thing. As I've offered to
others, I'd happily sign a non-disclosure agreement as far as your
decompression code goes. I'd store it safely, run it when you gave me
the compressed data, delete it, and never look at it. That much you
can guarantee legally. Of course, the results of the test will be
announced to all...
<s...@nospam.unt.edu> wrote in message news:b82c0c$fl$1...@hermes.acs.unt.edu...
You're welcome! Why should it just be us who are laughing?
Eric
> Well, I'm not about to pay for your airfare unless I know you have lots
> of good contacts in the compression industry.
> Get back to me if you ever find anyone in / near Toronto who wants to see
> it, preferably if they are beneficial to me in spreading the word.
I do have "lots of good contacts in the compression industry", for
what it's worth. In fact, I know quite a few people up at the
University of Toronto too. What you're missing is that I wouldn't
involve any of them with this because it's really a waste of time
unless you're in it for the entertainment value, like me.
Your technique doesn't work, it can't work, and that could be
demonstrated in a matter of minutes if you'd just try....
<s...@nospam.unt.edu> wrote in message
news:b83dl8$2no$1...@hermes.acs.unt.edu...
> I can compress any 100 bytes down to 7 bytes lossless
No, you can't, as anyone with the ability to master complex subjects such
as counting can figure out.
Oh, come now.
One needs exponentiation, too, even if only for base 2.
Perhaps we should suffer fools a bit more gladly? ... Nah.
RRS
>
> Kelsey Bjarnason wrote:
>> On Mon, 21 Apr 2003 23:48:16 +0000, Tim Bernard wrote:
>>
>>
>>> I can compress any 100 bytes down to 7 bytes lossless
>>
>>
>> No, you can't, as anyone with the ability to master
>> complex subjects such as counting can figure out.
>
>
> Oh, come now.
>
> One needs exponentiation, too, even if only for base 2.
Not even:
00
01
10
11
I count 4. :)
My hat is off to you, sir!
But wait a minute... How do I know you didn't use a computer to do that?
Surely no mere human mind could pull that off?
RRS
On Tue, 22 Apr 2003 20:15:28 +0000, Randall R Schulz wrote:
>>>One needs exponentiation, too, even if only for base 2.
>>
>>
>> Not even:
>>
>> 00
>> 01
>> 10
>> 11
>>
>> I count 4. :)
>
>
> My hat is off to you, sir!
>
> But wait a minute... How do I know you didn't use a computer to do that?
A Galactic Stellar Cluster 197, to be exact. 193 trillion gigaflops, 512
petabytes core RAM, 120 exabytes storage. The first computer designed to
be sufficiently powerful as to be able to track the U.S. national deficit
in real time - to the nearest megabuck.
But it counted them things. No exponentiation needed. I'd show you the
source code to prove it, but Earth isn't advanced enough to cope with
quintelinear algorithms yet.
> Surely no mere human mind could pull that off?
Human? Who said anything about... oh, crap, I blew my cover, didn't I?
Actually, they don't have to understand exponentiation completely. All they
need to understand is that:
k^m > k^n if m > n and k > 1
in this case we have k = 256, m = 100, and n = 7
256^100 > 256^7
so there cannot be a one-to-one function from the set of 100-byte inputs to
the set of 7-byte outputs.
--
Dale King
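Dale King's inequality is easy to check directly; Python's exact big-integer arithmetic makes the counting argument a short sketch:

```python
# Number of distinct 100-byte inputs vs. distinct 7-byte outputs.
inputs = 256 ** 100   # every possible 100-byte sequence
outputs = 256 ** 7    # every possible 7-byte sequence

assert inputs > outputs

# On average, each 7-byte output would have to stand in for this many
# distinct 100-byte inputs, so the mapping cannot be one-to-one:
inputs_per_output = inputs // outputs   # exactly 256**93
```

Since each 7-byte output would have to represent 256^93 different inputs on average, decompression could not possibly pick the right one.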
No, you do not have to include entropy in it at all. All you have to do is
look at a compressor as a function (more properly a binary relation, since
the question is whether it actually fits the definition of a function). You
put an input in and you get an output out. The decompressor is the reverse
process. For it to be lossless, it cannot be that for 2 different inputs I
get the same output (i.e. it must be one-to-one not many-to-one). If the
number of inputs is greater than the number of outputs then you cannot
generate unique outputs for each input.
--
Dale King
Dale King wrote:
>
> No, you do not have to include entropy in it at all. All you have to do is
> look at a compressor as a function (more properly a binary relation, since
> the question is whether it actually fits the definition of a function). You
> put an input in and you get an output out. The decompressor is the reverse
> process. For it to be lossless, it cannot be that for 2 different inputs I
> get the same output (i.e. it must be one-to-one not many-to-one). If the
> number of inputs is greater than the number of outputs then you cannot
> generate unique outputs for each input.
I agree entropy doesn't enter into it, but an "input" is not so well
defined. The input can be a message composed of symbols, in which case
the order of the symbols can be important, and different symbols can
produce the same output as when using adaptive tables. Or the input
can be a set of messages, and again different messages can produce the
same output depending on their order, e.g. the output 1,1,1,1 for a
series of four messages can mean each is the expected one within the
current context. Ultimately it seems to me that a lossless compressor
is a reversible mapping from a set of possible concatenated messages
to a set of concatenated outputs. Each output indeed must uniquely
decompress, but when context is allowed the input set is not simply
the set of all possible binary messages. I am reminded of the SF story
in which superintelligent beings sent a message back to Earth
containing the theory of everything, which said something like
"concatenate the prime factors of 2^233985-42 and read the resulting
file as ASCII" :).
In the case in question it was very well defined: all sequences of exactly
100 bytes.
> The input can be a message composed of symbols, in which case
> the order of the symbols can be important, and different symbols can
> produce the same output as when using adaptive tables. Or the input
> can be a set of messages, and again different messages can produce the
> same output depending on their order, e.g. the output 1,1,1,1 for a
> series of four messages can mean each is the expected one within the
> current context. Ultimately it seems to me that a lossless compressor
> is a reversible mapping from a set of possible concatenated messages
> to a set of concatenated outputs. Each output indeed must uniquely
> decompress, but when context is allowed the input set is not simply
> the set of all possible binary messages. I am reminded of the SF story
> in which superintelligent beings sent a message back to Earth
> containing the theory of everything, which said something like
> "concatenate the prime factors of 2^233985-42 and read the resulting
> file as ASCII" :).
The definition of the input has nothing to do with what I was saying.
All you have to do is be able to count how many inputs you have and how many
outputs you have. If you have more inputs to the compressor than outputs the
compressor cannot be lossless. I agree that when the number of inputs and
the number of outputs are both infinite, or when it is difficult to
determine an exact count, it takes a little more to show you can't compress
everything. But in a finite case like the one we have here, all you have to
do is count.
--
Dale King
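The finite counting argument can even be demonstrated exhaustively at a smaller scale: any "compressor" mapping all 2-byte inputs to 1-byte outputs must collide somewhere. The XOR-based function below is a deliberately trivial stand-in for any such candidate, chosen only to make the sketch concrete:

```python
from itertools import product

def toy_compress(data: bytes) -> bytes:
    """A stand-in lossless-compressor candidate: map 2 bytes to 1 byte.
    Any function with this signature is doomed by counting alone."""
    return bytes([data[0] ^ data[1]])  # some deterministic 1-byte output

seen = {}           # output -> first input that produced it
collision = None
for a, b in product(range(256), repeat=2):   # all 65536 two-byte inputs
    inp = bytes([a, b])
    out = toy_compress(inp)
    if out in seen and seen[out] != inp:
        collision = (seen[out], inp)         # two inputs, same output
        break
    seen.setdefault(out, inp)

# 65536 inputs cannot map one-to-one into only 256 outputs, so a
# collision is guaranteed no matter how toy_compress is defined.
assert collision is not None
```

Swap in any other 2-byte-to-1-byte function and the loop still finds a collision; that is the pigeonhole principle in action, and it is exactly what rules out "any 100 bytes down to 7 bytes".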