Old news "DVD to 30Mb":
http://www.cdfreaks.com/news/2990
Old news "picture and CEBIT 2002 demonstration":
http://www.mobilemag.com/content/100/102/C1102/
It looks like Dutch inventor Jan Sloot was no longer using a binary
scheme. Sloot first built and programmed his own hardware and OS;
later he ran the system in combination with a Windows laptop. Sloot
told a good friend that he used multiples of 4, 8, 16, 32, 64, 128,
256, etc. to store and replay movies with sound from a 64 kB or
128 kB chip card.
Jan Sloot's invention supposedly reduced DVD video to a 1 kB key;
many people saw it working in documented demonstrations in 1999:
- Philips (Roel Pieper, ex-CEO of Tandem Computers and ex-senior vice
president of Compaq, later the main investor in Jan Sloot's
invention)
- Oracle The Netherlands
- Computer Associates (CA), New York (Charles Wang)
- Venture capitalist Kleiner Perkins Caufield & Byers (KPCB), Silicon
Valley
See also the pictures of the trip to New York (Jan Sloot in the
yellow or white tie):
http://www.debroncode.nl/?section=fotos&lang=NL
On 13 July 1999 Jan Sloot died of a heart attack while working in his
garden, just days before he was to register the source code of his
invention and become a millionaire. Detectives and technicians never
succeeded in finding the source code or reverse engineering the
working device.
This part is especially interesting:
"The machine language - which is the heart of this invention -
comprises amongst others: a whole of codes and instructions which
realise the controlling, distributions and storage of electric pulses
in hardware and/or hardware components. Here, all clusters or almost
all clusters of a hard disk and/or other digital memory types are used,
so that e.g. a better compression is enabled and smaller files are
achieved."
This sounds as if Defossé Guillaume invented a holistic way to store
information in memory or hard disk devices.
> On 13 July 1999 Jan Sloot died of a heart attack while working in his
> garden, just days before he was to register the source code of his
> invention and become a millionaire. Detectives and technicians never
> succeeded in finding the source code or reverse engineering the
> working device.
And the simple answer to that was that there never was any.
Earl Colby Pottinger
--
I make public email sent to me! Hydrogen Peroxide Rockets, OpenBeos,
SerialTransfer 3.0, RAMDISK, BoatBuilding, DIY TabletPC. What happened to
the time? http://webhome.idirect.com/~earlcp
At least one witness (an investor) saw him writing down the
ten-thousand-line source code. The inventor's working equipment (no
bigger than a cigar box) was even demonstrated (by and to investors)
and secretly opened (as the inventor discovered later) in the USA
while the inventor was in The Netherlands. In another demonstration,
a live recording from a camera was recorded through the inventor's
device onto the 128 kB chip card and immediately afterwards played
back from that chip card on another, similar device, with no
connection between the two devices, to prove that only the key code
and the playback device with the algorithm were needed to play back
the recording made minutes before. On a 24-inch monitor the device
was able to show 16 movies at the same time and to jump to any
fragment in a movie without waiting; a movie could also be played
back full screen. Remember, this was all done in 1999 and earlier,
and there was definitely no hard disk (this was checked) in the
little device.
The following was written by Jan Sloot in his organizer two months
before his death, retrieved by the detective agency Control Risks
Group (http://www.crg.com), published in the Dutch book "De Broncode"
written by Smit (http://www.debroncode.nl), scanned by OCR, posted on
a Dutch message board
(http://gathering.tweakers.net/forum/list_messages/1050357/9), and
translated to English with online translation:
Appendix 1
Text retrieved by Control Risks Group, originating from Jan Sloot's
organizer
Jan Sloot
Principle of operation of the Sloot Digital Coding System
10 May 1999
For a long time there has been a search for compression methods to
store data as compactly as possible on a medium. With current
techniques it is, as far as is known, not possible to obtain a
compression below 50 per cent. Since I also do not believe that a
compression method is possible which can, for example, store a video
film in less than 100 kB - one second of movie frames costs at least
1 MB, so 90 minutes costs 5400 MB - I went in search of another
method. After many years of experimenting I have succeeded, with a
completely new technique and without the use of compression methods,
in storing all types of data on a medium of at most 128 kB and then
playing them back without loss of quality or speed (each bit is read
out exactly as it was read in). Each form of data (movie frame,
sound, text) is encoded into a key of n kB. By offering this key to a
program which can decode it at enormously high speed back into
exactly the same information that was encoded earlier, I have
developed a new technique which proves that compression is
meaningless and that only this new technique (SDCS) has a future.

Each item of data, movie frame or sound is converted into a number
code. In a memory there is a fixed value for each type: for data, x
number of character codes; for sound, x number of tone codes; and for
movie frames, x number of pixel codes. By storing a number of
calculations in a fixed program on a chip, a key is generated which
occupies only n kilobytes on an external storage medium. The inventor
has worked for more than 12 years to devise the calculation needed to
make a key that contains data, music or a movie; the insight arose
because the inventor needed to store enormous quantities of
information on a relatively small medium. The principle assumed is
that each piece of basic information is stored only n times. For
example: n times the characters A, a, up to and including Z, z; n
times the colour information for the pixels red, green, blue, etc.
From this reference a digital code is made, and with this
calculation, by means of a unique algorithm, a unique key code is
created. The eventually generated key code is stored on a chip card
and later played back by a program with its own algorithm, which is
stored in processors as fixed storage. In this way a key of at most n
kilobytes arises, whereas the storage of the program with its several
(5) algorithms takes some megabytes - in the inventor's experiments
at most 12 megabytes per algorithm. A temporary storage for the
algorithm's intermediate calculations has been taken into account.
This program can of course be stored in several processors.

Because of this it is possible to store large quantities of data,
pieces of music and movies on a chip card of, for example, 128 kB,
and to play them back on any playback equipment that is provided with
the chips containing the calculation program. Because each programme,
regardless of its length, is in fact only one key code requiring at
most 1 kilobyte of capacity, more than 100 programmes can be stored
on the chip card: 100 movies of approximately 2 hours, or 100 TV
programmes of an evening's length of some hours or dozens of hours;
it makes no difference. Each key code, whatever type of programme it
addresses, requires a storage of only 1 kilobyte. It is only
necessary, or desirable, to enlarge the 5 memories in which the
algorithm must put its temporary data. At this moment the inventor is
working on an application with a processor of more than 550 megahertz
and 5 memories of 74 megabytes each.

What this means for the future production of (new) audiovisual
apparatus, computers, camcorders, etc., will be clear to you as a
reader. Current storage using tape, diskette, hard disk, CD, etc., is
in one stroke no longer necessary. A video recorder becomes possible
in pocket format, fed by batteries with a small power consumption.
Video films can be put on a chip card by vending machines (or via the
Internet) in a few seconds (against payment) and then watched at home
on a player. This of course also applies to music and data. In this
way there are countless application possibilities. It will be clear
that the core consists of the formula in which the algorithm takes
place: both the formula for making a key and the formula for the
algorithm that reads the information back from the key. Of course no
explanation of this will be given in this letter; these calculations
have been patented by the inventor and will never be published by
him!!

In this principle we assume that each piece of information is stored
only n times. Therefore each primary datum is stored n times in a
fixed memory. If we take a book as an example, the characters a up to
and including z, as well as the punctuation marks, occur many
thousands of times. Stored per book, this would amount to many
megabytes. If we store only the primary data in a memory, each with a
reference number, we need only a few kilobytes of storage in the
memory. Now, to get the characters of a book into the correct places,
we create a key code by an algorithm which contains a number of
calculation formulas, whose development also resides in a memory;
this memory we call the Data Key Decoder. This decoder calculates
which data must be placed where, and on which page of the book, so
that later the data end up exactly where they belong in the book.

We could make a calculation in which every different word, regardless
of its number of characters, receives a unique number value. This
leads to a difficult calculation method with no end in sight, so I
eventually chose another method: we have already stored every
character as a unique reference, so why would we store words? This
method is therefore skipped. Then we come to the lines. I think the
chance that the same line occurs twice in a book is small, but if we
add up all the encoded figures of the words in a line, we still do
not get a truly unique number; several line sums in the complete book
will have an equal outcome, so this method too is useless. It was
therefore chosen to encode each page in its entirety. The inventor
devised an algorithm which first encodes all characters of a page
into a unique page code. With a reference code for each page, a first
storage of code keys is made, in a number equal to the number of
pages in the book.

For this system a program was developed whose functioning is as
follows. The text is read into the Data Key Decoder (DKD). In the
Program Key (PK) all program algorithms for the calculations (encode
and decode) are stored. When the calculation of the first page has
been carried out, its value is stored temporarily in the Character
Key Decoder (CKD). Then the coding of the second page follows, and so
on. As soon as all pages have been encoded and stored in the DKD, all
the collected page codes are processed by the PK into one total
program code. Since the storage of the whole book in this way
consists of only a number of codes, the total storage takes only 1
kilobyte. For film the same technique is used, only then we are
talking about pixels instead of characters.
It doesn't matter what he "demonstrated" or what he wrote; the simple
counting theorem disproves this. As James Randi and Adam's Platform
:-) have demonstrated, it is possible to fool anybody.
I have no doubt that he came up with an algorithm to turn gigabytes of
data into a 128KB key. I also have no doubt that he never realized
before his death that there is no possible decompression method that
exists to turn that key back into the gigabytes of video+audio data it
came from (unless a ludicrously physics-defying dictionary were
accessed by the key -- assuming a 64-bit length and a 64-bit offset,
the size of the dictionary would be, what, an Exabyte squared?)
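The counting argument is easy to sketch in a few lines of code (my
own illustration, not from the thread): at any scale, there are
strictly more distinct inputs of length n than there are bit strings
shorter than n, so no lossless scheme can shrink every input.

```python
# Tiny-scale illustration of the counting (pigeonhole) argument:
# there are more 3-bit messages than there are bit strings shorter
# than 3 bits, so no lossless scheme can shrink *every* 3-bit message.
from itertools import product

n = 3
messages = [''.join(bits) for bits in product('01', repeat=n)]
shorter = [''.join(bits) for k in range(n) for bits in product('01', repeat=k)]

print(len(messages))  # 8 distinct inputs
print(len(shorter))   # only 1 + 2 + 4 = 7 possible shorter outputs
assert len(messages) > len(shorter)  # pigeonhole: two inputs must collide

# At Sloot's claimed scale the gap is astronomical: a 128 kB key has
# 2**(128*1024*8) possible values, while a 4.7 GB DVD image has
# 2**(4.7e9*8) possible contents -- vastly more movies than keys.
```

The same arithmetic rules out any fixed-size "key" scheme: the number
of possible keys is finite, while the number of possible inputs it
must distinguish is astronomically larger.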
> Earl Colby Pottinger wrote:
> > And the simple answer to that was that there never was any.
>
> At least one witness (an investor) saw him writing down the
> ten-thousand-line source code.
So what? Unless it was independently compiled and tested, the fact that
code existed says nothing about whether it worked or whether it was just
a con job.
> The inventor's working equipment (no bigger than a
> cigar box) was even demonstrated (by and to investors) and secretly
> opened (as the inventor discovered later) in the USA while the
> inventor was in The Netherlands.
But never independently tested, I see. In other words, a con job.
> In another demonstration, a live recording from a camera was recorded
> through the inventor's device onto the 128 kB chip card and immediately
> afterwards played back from that chip card on another, similar device,
> with no connection between the two devices, to prove that only the key
> code and the playback device with the algorithm were needed to play
> back the recording made minutes before. On a 24-inch monitor the device
> was able to show 16 movies at the same time and to jump to any fragment
> in a movie without waiting; a movie could also be played back full
> screen. Remember, this was all done in 1999 and earlier, and there was
> definitely no hard disk (this was checked) in the little device.
And this means nothing. Read up on some of the other compression cons
done in the past; the number of times there is a hidden cable somewhere
is amazing.
> The following was written by Jan Sloot in his organizer two months
> before his death, retrieved by the detective agency Control Risks
> Group (http://www.crg.com), published in the Dutch book "De Broncode"
> written by Smit (http://www.debroncode.nl), scanned by OCR, posted on
> a Dutch message board
> (http://gathering.tweakers.net/forum/list_messages/1050357/9), and
> translated to English with online translation:
I don't care one hoot about what he wrote - I want code!!!!
The fact that he died and now there is no proof of his claims to live on
after him is just another proof of the stupidity of so-called secret
super compressions. As I have already pointed out in this newsgroup,
people who reveal their code (BWT algorithm) or at least mass-release
working code (RAR program) get known and make money off their ideas. The
only time people make money/succeed on secret compression that only they
know is when they are running a con job on investors.
cheery o.
For me the question is not why inventors commit fraud, but why Shannon's
entropy theory does not cover these inventions. Scientists must have
overlooked something.
Everybody can understand that 2 bits give 4 possibilities and 4 bits
give 16 possibilities, etc. But when the number of bits and
possibilities rises, I can imagine there is room to use some of those
bits to add some intelligence to encode the rest more efficiently.
Adam Clark didn't release much about his source code, but Jan Sloot
wrote two patents and released a little more information, though not his
algorithms.
If I read the two Jan Sloot patents, it looks like everything is done to
make the data unique, avoid all duplicates, and add references to the
unique data that are smaller than the original data. He first converted
all binary data to numbers and used only numbers to calculate the key
that he finally stored in binary on a chip card.
If he converted everything to numbers like 4, 8, 16, 32, 64, 128, 256,
etc., then he can add the numbers and still recover the sequence as long
as there are no duplicate numbers. In that case he also needed a way to
find out at what position the numbers were before. He talks about there
being three dimensions (like a cube matrix?) and about sieving (like the
game Connect Four?). What kinds of tricks are possible with unique
numbers or unique number sequences?
I don't know the details of Sloot's demonstrations, but Adam's Platform
demonstrations were *proven* to be faked. Upon examination of the hard
drives used in the infamous computer->modem->computer demo, hidden
copies of the video files used were found (and they were not streamed
cache, because they played beyond the length of the playback used in
the demonstration). In another test -- one that actually worked over a
LAN connection -- it was found that he was using On2 Technologies' VP3
codec, which has since been donated to the open-source community and is
in use in Ogg Theora. VP3 is one of the most advanced vector
quantization family codecs out there, but it is not capable -- nor is
anything -- of sending DVD-quality media down a phone line (unless
buffering for three days is acceptable :-)
> The investors can only blame themselves for not giving Jan
> Sloot his money at an early stage in exchange for the source code,
> and for not protecting the inventor 24 hours a day against
> security risks.
Well... if you dial up the conspiracy juice by a factor of 5, you can
make a case that Sloot faked his own death so he could retire under an
assumed name and keep the money. :-)
> For me the question is not why inventors commit fraud, but why
> Shannon's entropy theory does not cover these inventions. Scientists
> must have overlooked something.
No, the "inventors" overlooked something. Adam is a pathological liar
and a con man, so he didn't invent anything. Sloot seemed genuinely
dedicated to finding a new form of compression, but all of the
information he provided was incomplete and/or relied on "magic
functions" to come up with "unique" hashes for a set of data, when the
counting theorem easily proves it is impossible to come up with a unique
hash for a set of data that is smaller than the original set and only
maps back to that set (for *all* sets of data).
> Everybody can understand that 2 bits give 4 possibilities and 4 bits
> give 16 possibilities, etc. But when the number of bits and
> possibilities rises, I can imagine there is room to use some of those
> bits to add some intelligence to encode the rest more efficiently.
That's not off the mark, but there are limits (see the comp.compression
FAQ for "counting theorem") that cannot be broken. Let me try to
explain data compression so that you can understand. Here is my
layman's view: data compression is the act of taking a set of data,
determining the amount of redundant information in that data, and
writing out a new set with less redundancy (i.e. smaller) that can be
processed to recreate the original set. This is usually done using one
or more of three basic methods:
1. Statistical re-ordering (Huffman/arithmetic/Golomb/etc. codes), to
represent an input symbol with an output symbol that has less
redundant/unnecessary information in it. For example, the number "5"
is stored in a computer or disk file as a byte, which is 8 bits that
look like this: "00000101". As you can see, only three bits are
actually used to represent the number. Huffman (and other) codes are
ways to represent a data set using fewer bits than the original; the
most common input symbols get the smallest available codes to
(hopefully) reduce the size of the file. I say "hopefully" because, in
a file where each symbol has an equal chance of appearing, there is no
way to "rank" symbols based on frequency (they all have the same
"rank") and therefore no compression is possible.
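As a rough sketch of how such a statistical code is built (my own
illustration, not part of the original post), here is a minimal
Huffman coder: frequent symbols get short codes, rare symbols get long
ones.

```python
# Minimal Huffman-coding sketch: build codes from symbol frequencies.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return {symbol: bitstring} built from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): '0'}
    # Heap of (frequency, tiebreak, {symbol: code-so-far}) entries.
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees prepends a bit to every code inside them.
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' occurs 5 times, so its code is shorter than the rare symbols'.
assert len(codes['a']) < len(codes['d'])
encoded = ''.join(codes[s] for s in "abracadabra")
print(len(encoded), "bits vs", 8 * len("abracadabra"), "raw")
```

Note that if every symbol were equally frequent, the resulting codes
would all be the same length and nothing would be gained, which is
exactly the "no compression possible" case described above.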
2. Dictionary encoding (LZ77/LZ78/RLE/etc.), which is the technique of
recognizing patterns in the source data and representing them with data
that is smaller than the pattern it represents. In the case of LZ77
and LZ78, codes are built that reference a "dictionary". In the case
of LZ77 specifically, the dictionary is the output data itself, with
each "pointer" signifying both a position into the dictionary and a
length; since the "length-offset" pointer is smaller than the data it
references, compression is achieved. In the case of RLE, a single
repeating symbol is coded as one instance of the symbol plus the length
of the repetition.
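Run-length encoding, the simplest of these dictionary-style methods,
fits in a few lines (again my own sketch, not from the post): a
repeated symbol becomes one (symbol, count) pair.

```python
# Tiny run-length-encoding (RLE) sketch: runs become (symbol, count).
def rle_encode(data):
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))  # one symbol plus its run length
        i = j
    return out

def rle_decode(pairs):
    return ''.join(sym * count for sym, count in pairs)

packed = rle_encode("AAAAABBBCCCCCCCCD")
print(packed)  # [('A', 5), ('B', 3), ('C', 8), ('D', 1)]
assert rle_decode(packed) == "AAAAABBBCCCCCCCCD"
```

The same idea scales up to LZ77-style pointers, where the "count"
becomes a length-offset pair into previously seen output.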
3. Context modeling, where algorithms monitor the input stream and try
to predict what the next input symbol will be. The *output* of that
prediction is in a form that is more easily compressed via the above
two methods already listed. This can be a bit hard to understand, so
let's try an example: Let's assume you have a data stream that looks
like this:
02 04 06 08 10 12 14 16 18 20 24 26 28 30 (Note that "22" is omitted)
Let's assume we know that the data will, most of the time, follow this
pattern (next number = this number + 2). That pattern will be our
predictor, and the output of this whole process will be *difference* of
the predicted number and the actual number. Running our predictor
against the above data, we get:
Original: 02 04 06 08 10 12 14 16 18 20 24 26 28 30
PredDiff: 00 00 00 00 00 00 00 00 00 00 02 00 00 00
("PredDiff" is the difference between the prediction and the actual
number). As you can see, the output of the process is much easier to
compress than the source data, so after a prediction pass is made, the
OUTPUT is what is actually compressed. To reconstruct the original
data, the process just happens in reverse. The most advanced
compression methods today (PPM/PAQAR/etc.) use highly complex forms of
context modeling in an attempt to discover redundancy so that their
prediction is as accurate as possible. (This takes a LOT of CPU time
and large amounts of memory, btw. -- they are not fast.)
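The predictor walkthrough above can be turned into running code
directly (my own sketch of the post's example): the "model" predicts
next = previous + 2, and only the difference between prediction and
reality is kept.

```python
# Delta-predictor sketch: store residuals of "next = previous + step".
def predict_diff(values, step=2, seed=0):
    """Residuals of the 'next = previous + step' predictor."""
    diffs, prev = [], seed
    for v in values:
        diffs.append(v - (prev + step))  # actual minus predicted
        prev = v
    return diffs

def undo_diff(diffs, step=2, seed=0):
    """Exact inverse: rebuild the original values from the residuals."""
    values, prev = [], seed
    for d in diffs:
        v = prev + step + d
        values.append(v)
        prev = v
    return values

original = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 24, 26, 28, 30]
residuals = predict_diff(original)
print(residuals)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0]
assert undo_diff(residuals) == original  # fully reversible
```

The residual stream is almost all zeros, so the statistical and
dictionary methods above compress it far better than the raw values.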
NOW -- having said all that -- how does it help you to understand that
Sloot's claims were bogus? To confirm/deny Sloot's claims, you have to
look at the data he was claiming to compress. Was it very very high in
redundancy? No, moving video+audio only has so much redundancy. Sure,
you can compare frames of video and only encode what changes between
frames, but that still leaves a massive amount of information to encode
(raw video frames are at least 20MB each), and the only way modern MPEG
codecs get it any smaller is through a set of transformations that
throw away information (and Sloot claimed 100% reproduction of the
source data).
I'm sorry, but no matter what he demonstrated or what book(s) were
written about him or by him, his claims simply aren't valid.
> Jan Sloot
> wrote two patents and released a little more info, but not his
> algorithms.
How can the patent not contain the algorithm and still be granted?
Without the algorithm, it would be unenforceable...
> If I read the two Jan Sloot patents, it looks like everything is done
> to make the data unique, avoid all duplicates, and add references to
> the unique data that are smaller than the original data. He first
> converted all binary data to numbers and used only numbers to
> calculate the key that he finally stored in binary on a chip card.
Binary data *is* numbers.
> If he converted everything to numbers like 4, 8, 16, 32, 64, 128, 256
> etc., then he can add the numbers and still recover the sequence as
> long as there are no duplicate numbers. In that case he also needed a
> way to find out at what position the numbers were before.
Again, this can't exceed the argument laid out in the counting theorem.
The sequence data, plus the data used to determine at what position
they were at before, cannot be smaller than the original set of data
*for all data sets* (which is what he was claiming -- all video files,
all audio files, etc.)
I have read two test reports in which Adam's Platform (AP) was tested.
Small excerpts from the December 2000 report:
Two independent test consultants:
Rofin Australia Pty Ltd, represented by systems analyst Sofiaan Fraval
Centre for Telecommunications and Information Engineering (CTIE), of
the Department of Electrical & Computer Systems Engineering, Monash
University, represented by senior research fellow Terry Cornall
They did the floppy test:
The one minute of video data from a previously unknown source measured
1.985 GByte and was compressed using AP software onto a floppy disk of
1.0 MByte size.
Conclusion:
The compressed video file was played back in real time across a full
screen and captured the full richness of the original material without
loss of color quality or the introduction of other distortions or
artifacts. The quality of the playback was most favorable when viewed
against the quality of the original tape material.
They did two 2400-baud modem tests (1,000:1 and 1,000,000:1 video
compression), conclusion:
The overall video compression ratio observed in the test, from the
first compression through to the compression of transmitted video, can
be estimated. By factoring the transmission time against the capacity
and/or limit of a 2400-baud modem, the compression from the original
one minute of unknown video data to the transmitted compressed video
data was estimated to be in excess of 945,405:1.
Overall conclusion:
Essentially, full screen, high quality video data was successfully
compressed and transmitted in real time, without quality loss or the
introduction of distortions and artifacts.
The estimated overall test compression ratio matches very closely the
proposition that AP software, implemented in two stages, results in a
compression ratio of 1,000,000:1. This is more than significantly in
advance of any existing video compression technology.
Source audio/video material unknown to AP:
In order to ensure that the material to be compressed during the test
had never been seen by AP before, Terry Cornall (TC) provided a test
sequence from CTIE. This was held by Media World (MW) during the
preliminary phases of the test setup. Special note must be taken of the
fact that the digitized video was not released by MW to AP until all
setup, including installation of the AP software, had been completed on
the server and client machines.
Warning:
The above report excerpts may not be disclosed to any person,
reproduced in any form, or stored in any way in an information
retrieval system without Media World Pty Ltd's written consent.
A small excerpt from the famous Tolly Group test report of
September/October 2003:
Modem tests conclusions:
Tests reveal that AP technology can deliver high-quality, full-screen
video streamed over 56Kbps network connections. Furthermore, engineers
observed good quality, full-screen video streamed over 14.4 Kbps
network connections.
The radical advancements claimed by AP technology bring with them a
heavy burden of proof. Thus, for this test, extremely strict measures
were implemented to guarantee the integrity of the test environment and
the testing process.
Film clips used for this test were selected by The Tolly Group, who
oversaw the conversion process and retained possession of both the tape
and the digital data until the AP software that was to be used for the
test was physically secured.
The CDs containing the software used for the test were placed in a bank
vault before the digitized video was provided to the AP team for
pre-processing.
New, unformatted hard-drives were provided by the test team for this
test. Test engineers physically inspected these machines prior to
installation to verify that no additional hardware components were
present in the machines.
Engineers verified that the machines were loaded only with commercial
OS and utility programs and that the CDs used for the AP code came from
the bank vault. Furthermore, at the end of each session, the
hard-drives were uninstalled and stored in the portable safe to which
only the test engineer had the access code and override keys.
Even with these small excerpts from the test reports, it is clear that
in both tests everything was done to rule out fraud, and both tests
prove well that Adam Clark's invention is real and works as he claimed.
What happened later I only know from the media, and it is very strange:
it was just before going to the stock market, and after everything had
already passed a due-diligence review. For me it can never wipe out the
two early reports and the many successful demonstrations done before.
>
> > The investors can only blame themselves for not giving Jan
> > Sloot his money at an early stage in exchange for the source code,
> > and for not protecting the inventor 24 hours a day against
> > security risks.
>
> Well... if you dial up the conspiracy juice by a factor of 5, you can
> make a case that Sloot faked his own death so he could retire under an
> assumed name and keep the money. :-)
Those two statements are from the investors, not from me. Whether Jan
Sloot died because he had a weak heart, or was helped a little, is
another story.
> No, the "inventors" overlooked something. Adam is a pathological liar
> and a con man, so he didn't invent anything. Sloot seemed genuinely
> dedicated to finding a new form of compression, but all of the
> information he provided was incomplete and/or relied on "magic
> functions" to come up with "unique" hashes for a set of data, when the
> counting theorem easily proves it is impossible to come up with a
> unique hash for a set of data that is smaller than the original set
> and only maps back to that set (for *all* sets of data).
Adam Clark looks to me like a man who likes to be in the spotlight, but
that does not yet make him a liar. Jan Sloot was more the invisible and
shy, typical inventor type. Both had a practical problem related to
their daily job in which they needed to compress data. Both found a
different solution, both finished it around 1997, and both protected
their inventions with paranoid behavior.
> NOW -- having said all that -- how does it help you to understand that
> Sloot's claims were bogus? To confirm/deny Sloot's claims, you have to
> look at the data he was claiming to compress. Was it very very high in
> redundancy? No, moving video+audio only has so much redundancy. Sure,
> you can compare frames of video and only encode what changes between
> frames, but that still leaves a massive amount of information to encode
> (raw video frames are at least 20MB each), and the only way modern MPEG
> codecs get it any smaller is through a set of transformations that
> throw away information (and Sloot claimed 100% reproduction of the
> source data).
>
> I'm sorry, but no matter what he demonstrated or what book(s) were
> written about him or by him, his claims simply aren't valid.
Thanks for the examples; they tell me about the compression used
nowadays.
As Jan Sloot said more than once and wrote more than once, he didn't
use compression: he encoded the input data into a little key, no
longer using zeros and ones in the coding process, and used a
different way to store data. He also said it was not difficult, so if
somebody knows the principle he can reproduce it easily.
This reminds me of a heated discussion with a teacher in the past. I
didn't agree with a formula about how much data it was possible to
transmit over an analog phone line. Some years later I woke up my
girlfriend to tell her my newly found idea of how to change a modem so
that more data could be transmitted over a phone line without
compression. Many years later I met another girl who showed me, in a
book, the name of a modulation that did the same as my idea from many
years before.
Jan Sloot dedicated many years, every night without sleep, to his
invention. One night he woke up his wife and children and showed them
the result: for the first time he could play back the video from the
chip card without the stuttering or waiting it had shown before,
according to his son.
> How can the patent not contain the algorithm and still be granted?
> Without the algorithm, it would be unenforceable...
If you invent this system, do you publish the algorithm before you have
received any benefits?
> Binary data *is* numbers.
Yes, I know, but I wrote it this way specially to show that Jan Sloot
thought not in binary but in numbers, to force us to think outside the
binary box too.
> Again, this can't exceed the argument laid out in the counting theorem.
> The sequence data, plus the data used to determine at what position
> they were at before, cannot be smaller than the original set of data
> *for all data sets* (which is what he was claiming -- all video files,
> all audio files, etc.)
Specialists can ignore these inventions and spend the next 10 years
making small improvements in compression, or spend 6 months trying to
find a way to code data differently, as Jan Sloot did.
I expect a formula where binary data of any size bigger than x MB can
be recoded to y KB with method z (and who knows how many methods there
are).
When I went to school, I only carried the books I needed that day, not
all the books I needed for the whole year. That was possible because I
knew in advance which books I needed on which day. Why do we need to
predict if we already know the total data in advance? Is it not
possible to design a smart diagram that contains a fixed number of
combinations and generates keys smaller than the original, because you
don't use all the combinations all the time?
> well it would be nice if even little snippets of code ideas were
> brought forth. Me, the phantom data compression troll, am
> currently involved in setting up my own home site via dyndns.org,
> and hope to have something available soon.
> ...
Gidday mate,
Do you know this phantom?
Extract from a blog article, "Maximum Data Compression":
"I created my prototype and tested with the commercial archivers. And this
is the result I got for the Calgary Corpus:
An average bps of 2.05 is not bad at all for a simple Java implementation.
Lot of possibilities for improvement.
Getting excited, I benchmarked my cutie with some commercial compressors.
And here is the result:
This is too good for a test program in Java. Moreover there is no cheating
involved, no arithmetic encoders. Only entropy encoders. More on this
later."
Hmm!
j.
You can cite anything you want from the Tolly Group report but they
have publicly admitted that the test was compromised and have retracted
their results (and the report itself; it is absent from their website).
It is false information. Unfortunately, they have never cited *what*
was compromised, which I am very curious to know. I suspect that
someone was bribed.
> Thanks for the examples; it tells me about the compression used
> nowadays.
Glad I could help. I tried to use standard terminology; you'll notice
that I abstracted the input/output information as "symbols" instead of
bytes, characters, etc. since compression is, at its core, information
theory.
> As Jan Sloot said more than once and wrote more than once, he didn't
> use compression; he coded the input data into a little key, no longer
> using zeros and ones in the coding process, and he used a different
> way to store the data.
Changing the representation of the data does not change basic
information theory. Let me give you a simple example of why Sloot's
(and others) claims cannot be true, and hopefully this will help you to
understand. I'll give you a set of data to compress:
00
01
10
11
The above is four numbers, 0 through 3, using two bits each to
represent. Now, according to Sloot, his method could achieve
compression on *any* set of data. To do that to the above data, you'd
have to somehow shorten the dataset by at least one bit. However,
since the data itself is represented by 2-bit symbols, that means that
at least one of the above values would need to be represented using
only one bit. Since one bit can only represent two states -- 0 or 1 --
how do you get it to map to one of *four* possible states? The answer:
You can't. You can use variable-length codes to encode smaller
numbers with smaller bits, but the "housekeeping" information needed to
separate codes from one another ends up eating the space you saved, and
you're back to where you started.
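To make the counting argument concrete, here is a minimal sketch (plain Python, nothing Sloot-specific) of the pigeonhole step: there are simply fewer short codewords than there are 2-bit inputs.

```python
from itertools import product

# The four 2-bit inputs from the example above.
inputs = ["".join(p) for p in product("01", repeat=2)]

# Every strictly shorter bit string: the empty string, "0" and "1".
shorter = [""] + ["".join(p) for p in product("01", repeat=1)]

# A lossless code must give distinct inputs distinct codewords,
# otherwise decoding is ambiguous. Pigeonhole: 4 inputs cannot map
# injectively into only 3 shorter codewords.
print(len(inputs), "inputs vs", len(shorter), "shorter codewords")
assert len(shorter) < len(inputs)
```

The same count works at every length: there are 2^n strings of n bits but only 2^n - 1 strings shorter than n bits.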
Please do read the comp.compression FAQ Section 8 (I think),
specifically the part that mentions the Counting Theorem, which
discusses "recursive compression" and why it cannot possibly work for
all sets of data.
Even if you don't want to read the FAQ, or don't understand the
counting theorem, here is another way to think about it: The inventors
of these kinds of schemes claim to reduce any set of data by at least
one bit. If that were true, you could run the compression routine
again on the last set of output data and reduce it by one more bit...
and again, by one more bit, and so on. What happens if you keep doing
this? You would eventually be left with a single bit. It should be
obvious that you cannot reconstruct megabytes of data using compressed
data that consists of a single bit, yet that is what most of these
"inventors" claim.
I don't think most of them are con men; I think they honestly thought
they had invented something. The human brain is a very persistent
pattern recognition engine and can make you perceive that you have
witnessed a pattern that doesn't exist. (I myself used to believe that
I could see patterns in truly random data, like rain droplets on a
windshield, but I later proved to myself that what I was "seeing" was
an artifact of the human visual system/cortex.) Sloot seemed like a
very nice man who wanted very much for his idea to work. Adam Clark,
however, was clearly a con man; I heard him speak in 1997 when he was
just announcing his technology and it was clear to me that he was
lying. The only reason Adam Clark got away with it for as long as he
did was because his claim was made slightly more believable in that he
never claimed lossless/exact decompression, but rather lossy "DVD
quality" results.
> > How can the patent not contain the algorithm and still be granted?
> > Without the algorithm, it would be unenforceable...
> If you had invented this system, would you publish the algorithm before
> you had received any benefits?
If you don't, how do you enforce the patent when there is a dispute?
> I expect a formula where binary data of any size bigger than x MB can
> be recoded to y KB with method z (and who knows how many methods there
> are).
No, that would require changing the laws of physics. The fact that
this cannot be true is what I wrote about earlier. In fact, if you
still believe this, then what is your view on the "reduce data by one
bit" section I wrote? Do you believe it is possible to represent
megabytes of data with a single bit?
I am anticipating your answer will be "But that is different than
encoding data of size x MB down to y KB", to which I answer, no, it is
no different. The sizes (MB to KB) may be different but the claim is
still the same ("able to take *any* set of data and reduce it"). Let's
go ahead and change the sizes anyway: Let's say someone has claimed to
compress any 1MB set of data down to 1KB. If that were true, then we
could do this recursively: We could take 1000 sets of 1MB, compress
them down to 1000 1KB encoded sets -- and then string all the encoded
sets together into a 1MB set of data and compress *that* down. You
could do this endlessly until the entire world's data is in a single
1KB set of data. Does the idea still sound plausible?
The idea of recursive compression is very appealing, so I understand
why people pursue it. The problem is, you can only "ignore all others
and work on your own" if what you are trying to do has not yet been
DISproven.
> Is it not possible to
> design a smart diagram that contains a fixed number of combinations and
> generates keys smaller than the original, because you don't use
> all the combinations all the time?
You can do that, but then you need an "index" or similar to indicate
what sets of original data the keys refer to. And you'll find, if you
try to implement something like this, that the key *plus the size of
the index* (because it is necessary to reconstruct the data) can never
be smaller than the size of the original data for *all* sets of data.
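A quick sketch of why the key-plus-index bookkeeping eats the savings: if the index must cover every possible original, the key is already as long as the original.

```python
import math
from itertools import product

# "Smart diagram" sketch: list every possible n-bit original in a
# table, and hand out each entry's position as its key.
n = 3
table = ["".join(p) for p in product("01", repeat=n)]

# Addressing any of the 2**n entries takes ceil(log2(2**n)) bits --
# which is exactly n bits again: the key is as big as the data.
key_bits = math.ceil(math.log2(len(table)))
print(len(table), "entries need", key_bits, "-bit keys")
assert key_bits == n

# Keys only get shorter if the table omits some combinations, but
# then the omitted originals cannot be stored at all.
```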
And now, having written all of this, it just dawned on me that the idea
of recursive compression is very similar to the idea of a perpetual
motion engine, which also claims to break the laws of physics
(specifically, conservation of energy). Both are wonderful ideas to
dream about, but can never be realized.
> ...
> What kinds of tricks are possible with
> unique numbers or unique number sequences?
All sorts of tricky things actually. Unique number sets are my speciality.
If you can call marketing promotions software a specialty ;-)
Within physical constraints, I would posit that there is no such thing as a
"random" set of two or more unique numbers. While such a set of unique
numbers, no matter how large, may appear on many occasions to be "random",
it is not. I.e.
S = (1, 2)
How many ways can S be uniquely arranged? Answer: 2 ways. 1,2 and 2,1
How much space is needed to store these two unique arrangements? Answer: 1
space.
Simply put the above set is compressible and thus is non-random by
definition - Chaitin et al. More elaborately, this set, and any other
unique number set for that matter, can be represented by something else, a
seed if you will, something that occupies less space than the set itself.
IMO it is because of this aspect, people like Sloot and co. pursued and
continue to pursue this particular avenue.
IME compressing unique number sets, if you can call it compressing, is the
easy part of this particular trick. The not so easy part is encoding
non-unique number sets into unique number sets that occupy less space -
which is another thing altogether.
j.
Answer: 2 spaces. One for the set (either 1,2 or 2,1) and one for the
information that tells you which arrangement of the set you are
representing.
Thank you for perfectly illustrating the counting theorem :-)
This suggests that a movie could be compressed (lossy) to about 1 KB.
A successful compression test would mean that if you watched the
original movie, then with 50% probability watched either the same movie
again or a compressed and reconstructed copy, you could not tell the
difference better than random guessing.
As a simple example of how far we are from what is possible, consider
compressing speech by converting it to text and compressing it to 1 bit
per character as Shannon estimated in 1950. This is about 10 bits per
second. The rate is higher because interactive speech uses short term
memory, which has a higher rate. Short term memory is not used in the
video compression test.
I don't believe that Sloot solved the problem. It requires technology
far in advance of what has been achieved today. For speech it requires
solving the AI problem just to achieve Shannon's estimate of 1 bpc.
Realistic speech synthesis with tone of voice is also an unsolved AI
problem.
Video compression would be much harder. It would involve representing
the movie as a script, then regenerating the movie in a realistic way,
which would depend in many ways on the life experiences of the viewer,
which would not be known to the decompressor even if it had AI
capabilities.
-- Matt Mahoney
This reminds me of the following stories:
The Human Genome Project reveals genetic complexity.
Prior to the completion of the Human Genome Project it was assumed that
the human genome contained about 100,000 genes. Nobody really knew, but
that figure was based on the number of known proteins. When the Human
Genome Project ended in 2003, scientists discovered there were only
20,000-25,000 genes. Their long-held theory that there was one gene for
one protein was wrong. So, how do 20,000 genes generate 100,000
proteins?
http://laboratorian.advanceweb.com/common/editorialsearch/Aviewer.aspx?AN=MT_05jun20_mtp22.html&AD=06-20-2005
The Biological Chip in our Cells;
http://www.fosar-bludorf.com/archiv/biochip_eng.htm
The DNA-wave Biocomputer:
http://www.rialian.com/rnboyd/dna-wave.doc
Is Shannon's information theory based only on the simple system where
every bit doubles the place value, so with four bits 8, 4, 2, 1 and
with eight bits 128, 64, 32, 16, 8, 4, 2, 1, etc., or does it also
cover other (multi-dimensional) value and reading assignments? Because
Jan Sloot tweaked those assignments.
About Adam Clark I only know that he did not use frames and that the
output was variable bit rate (VBR). Adam's Platform compression also
involved two stages: compression of the file, and then some sort of
transport layer compression. Adam's bit- and byte-repeating compression
method often cut only 1 bit out of 32 bits, or at most 5% compression
per cycle, and cost a lot of CPU power. In 1996 Adam used a 132 MHz
PowerPC for his lossless compression. Another hint is Adam's patent
application 7, "Data encoding using multi-dimensional redundancies".
Adam Clark even heard about Jan Sloot during a meeting at a US telco.
Adam also demonstrated a live version over GSM to Time Warner.
SAIC http://www.saic.com approached Adam via John Sculley (former Apple
CEO) because they thought it had military applications. John Sculley
offered Adam Clark 50 million for his invention, but Adam refused; he
thought he could get more money. And look who is nowadays on the Qbit
Board of Managers http://www.qbit.com/management.html.
No, Shannon defined entropy as a measure of information and proved this
to be the lower bound of any lossless coding method. Shannon also
estimated the entropy of written English (about 1 bit per character) by
how well humans could predict characters in running text. Such a
coding (compression) has not been achieved by any algorithm so far.
Achieving it implies passing the Turing test for AI, because knowing the
probability distribution of strings in English (necessary for optimal
coding) implies knowing the probability distribution of answers given
by a human to questions posed by an interrogator playing Turing's
imitation game, which Turing proposed in 1950 as the (now broadly
accepted) definition of AI.
> About Adam Clark I only know that he did not use frames and that the
> output was variable bit rate (VBR). Adam's Platform compression also
> involved two stages: compression of the file, and then some sort of
> transport layer compression. Adam's bit- and byte-repeating compression
> method often cut only 1 bit out of 32 bits, or at most 5% compression
> per cycle, and cost a lot of CPU power. In 1996 Adam used a 132 MHz
> PowerPC for his lossless compression. Another hint is Adam's patent
> application 7, "Data encoding using multi-dimensional redundancies".
Adam's platform was a scam. For details see google.
-- Matt Mahoney
Sorry, but that doesn't prove or disprove anything. Any test involving
"human" perception cannot be raised to a scientific standard IMHO
(unless of course we're talking psychology or a human-related
science)... and unless you exactly know how the human process involved
works (with "exactly" meaning mathematically, where applicable).
An example of a digital industry standard that came from mere "answers"
posed through a test is the CIE XYZ color space.
If I remember well, people were asked how bright they perceived a given
color compared with others placed beside it. (Who's the brightest?! :D)
It resulted, and became "industry standard", that green was the brightest
component, with red scoring 2nd place and blue a bronze 3rd... all by
gathering answers... very scientific! :))
Best,
E.
Well, it is a fact that the eye is more sensitive to green light than
to red, than to blue, and this fact is very useful in designing color
monitors, image formats like JPEG, and so on. It was measurements like
this that allowed the 1953 NTSC standard to compress a color TV channel
into the space reserved for black and white, because it was observed
that the human eye is insensitive to high frequency variations in
red-green and blue-yellow compared to black-white, so the chroma signal
could be transmitted with low bandwidth. Do you have a better way to
measure this sensitivity than with human subjects?
My point is that there is a great deal of room for improvement in lossy
image and speech compression, and to a smaller extent in lossless text
compression. We know this from experiments in psychology done decades
ago. The solutions are obviously hard or they would have been solved
by now.
I don't see how you can avoid psychology. Lossy compression quality
must be measured by human judges, or else your definition of quality
would be completely arbitrary.
-- Matt Mahoney
>> j. wrote
>> ... Answer: 1 space.
> Answer: 2 spaces. One for the set (either 1,2 or 2,1) and one for
> the information that tells you which arrangement of the set you are
> representing.
> Thank you for perfectly illustrating the counting theorem :-)
Cheers ;-)
program for 1st answer:
if input = 1 then
S = (1, 2)
else
S = (2, 1)
program for 2nd answer:
?
j.
My point wasn't that CIE XYZ isn't useful, but rather that it is
probably off by some extent as a space, since it's based on "answers".
One day, when you can insert some needles into the brain and exactly
measure the perceived RGB values of a single sane person, then you can
call that scientific... not with an "oh yeah, it's brighter!". ;)
Also, I don't know how relevant psychology would be in text
compression... probably for the very same reason. It could be of some
help in this particular field, but only once you're able to "reverse
engineer" the "algorithm" used by the brain... by *direct* observation,
rather than human feedback.
You can't really expect to do the double work of filtering the feedback
with N tests, accounting for influencing environment variables that
could have changed the feedback... or creating an emotional scale to
introduce feedback biasing, and so on.
Build an interface to a brain, get a rat, study its primitive memory
system... that would be a step, IMHO.
Or, get there by a different road... who said the brain can't
completely fail at lossy compression?
Imagine a brain failure like 10 or 20 16x16 MPEG blocks left completely
black in a movie... that'd be unacceptable, even though the rest is
compressed at better ratios. :))
Best,
E.
Text compression is equivalent to AI. Understanding how the brain
processes natural language can lead to better models. For example, the
best models, which are used in speech recognition research, use latent
semantic analysis, essentially a 3 layer neural network to relate the
meanings of words inferred by their proximity in running text. There
is a good correlation between word perplexity (equivalent to
compression ratio) and word error rate.
-- Matt Mahoney
What are the bit compression limits or methods nowadays?
For example, how many bits (5=00000, 6=000000, 7=0000000, etc.) with a
preferred value are minimally needed to compress by one bit or more?
With what method? What percentage of all possibilities (....000,
....001, ....010, etc.) in 32 bits can be compressed with that method?
With what bit length and method can you compress the most
possibilities?
Is it possible to write a computer model that tests all the known
possibilities for compressing a fixed bit length (increased by one bit
every round) to find an optimal minimum bit length to compress, or even
better, finds new methods by itself?
It has nothing to do with number of bits. Data is compressible if the
probability distribution is nonuniform. Natural language has a
nonuniform distribution because it is constrained by vocabulary,
semantics, syntax, and all sorts of knowledge from life experiences.
If we could model this knowledge, we could compress text to 1 bit per
character.
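That compressibility depends on a nonuniform distribution, and not on the number of bits, is easy to see empirically; a small sketch with Python's zlib:

```python
import random
import zlib

random.seed(0)  # deterministic sketch

# Uniform bytes: every value equally likely, nothing to exploit.
uniform = bytes(random.randrange(256) for _ in range(100_000))
# Nonuniform bytes: one value dominates, like letters in real text.
skewed = bytes(random.choices(range(256),
                              weights=[1000] + [1] * 255, k=100_000))

u_len = len(zlib.compress(uniform, 9))
s_len = len(zlib.compress(skewed, 9))
print(u_len, s_len)  # the skewed stream compresses far better
assert s_len < u_len
```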
-- Matt Mahoney
Yes, but the 1 bit per character text compression limit brought me to
the questions in the first subparagraph, and AI to the questions in the
second subparagraph.
I'm also curious whether there is a difference in name between
compressing by eliminating redundancy and compressing by using a
different coding system.
You mean, at best, with a revolutionary AI knowledge compressor... the
best would be an 8:1 compression ratio for any English text?
Is that worth it? :)
Best,
E.
I think you are right; on the Ketnet/Canvas site it says:
Reportage by Fons de Poel and Heleen Minderaa for Netwerk (KRO)
http://195.0.110.55/canvas_master/programmas/terzake/terzake_vandaag/index.html
KRO is a Dutch TV station that broadcast the first stories about Jan
Sloot on 02/02/2001:
http://www.debroncode.nl/files/debroncode.nl-quicktime-high.mp4
In this first documentary, almost two years after Jan Sloot died, small
investor Leo Mierop showed the chip card on which Jan Sloot stored his
key code, and big investor Roel Pieper said he still hopes that the
source code will be found one day and that it can still change the
IT/telecom industry. Roel Pieper also showed the internal email Philips
sent to him as Senior Vice President of Strategy at Philips, in which
they (Nat. Lab.) showed no interest in the invention Jan Sloot had
shown at Philips. Roel Pieper left Philips shortly after this email to
set up a company around Jan Sloot's invention, which he estimated to
have a market value of 100 billion. Because Roel Pieper left Philips,
he lost the option to become the next Philips CEO, which was the reason
Philips had brought him into the company in the first place. Jan
Sloot's invention is also shown briefly: a white box little bigger than
a chip card reader. Jan de Jong (former landlord of Roel Pieper and
partner in an investment fund with Roel Pieper) had, some years ago,
that device or a similar device from Jan Sloot at his home in
Aerdenhout, The Netherlands.
A few days ago came the news that Colin Powell
http://www.state.gov/r/pa/ei/biog/1349.htm joined Kleiner Perkins
Caufield & Byers (KPCB)
http://www.internetnews.com/bus-news/article.php/3519946 the same
investment firm that prepared the biggest IPO ever (a 1000 million seed
fund) around The Fifth Force, the company of Jan Sloot, Leo Mierop,
Marcel Boekhoorn, Roel Pieper and other investors. But Jan Sloot died a
few days too early, so no source code, no IPO. Roel Pieper received his
doctorate in computer science and mathematics and is a part-time
professor of computer science and business administration
http://www.evca-specials.com/symposium05/cv_s/cv_pieper.php?width=655&height=530
he also has a good relationship with KPCB and owned Google stock before
the Google IPO.
And later a new, longer version again, on two different days:
09/10/2004:
http://www.netwerk.tv/templates/videoasx.jsp?f=131455
09/12/2004:
http://www.netwerk.tv/templates/videoasx.jsp?f=131598
If your goal is to save disk space, then no. If your goal is to solve
the man-machine interface by giving computers the ability to converse
in natural language, then yes. Compression is just a useful way to
evaluate your language model.
-- Matt Mahoney
You might want to read a book on information theory. Here is one
online.
http://www.inference.phy.cam.ac.uk/mackay/itila/
Compression means coding to eliminate redundancy. If a message has
probability p, then you want to choose a code with length close to
log_2 1/p bits. Shannon proved in 1949 that that is the best you can
do.
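The rule of thumb "length close to log_2 1/p bits" is easy to check on a toy distribution; a short sketch:

```python
import math

# Code length for a message of probability p: about log2(1/p) bits.
for p in (0.5, 0.25, 0.125, 0.01):
    print(f"p={p}: {math.log2(1 / p):.2f} bits")

# Averaging p * log2(1/p) over a whole source gives its entropy,
# the lower bound Shannon proved for any lossless code.
probs = (0.5, 0.25, 0.125, 0.125)
H = sum(p * math.log2(1 / p) for p in probs)
print("entropy:", H, "bits per message")  # -> 1.75
```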
We don't know the exact probability distribution of written English.
We know it close enough to compress it to about 1.3 bits per character
(bpc) with the best language models. In 1950 Shannon did an experiment
in which humans guessed successive characters in text to show that the
entropy is about 1 bpc.
In 1950 Turing proposed a definition for artificial intelligence which
is now widely accepted: you ask questions to a human and a machine
without knowing which is which. If you can't tell them apart based on
their answers, then the machine has AI. So far no machine has passed
this test (e.g. the Loebner prize).
If you knew the distribution of English, i.e. you knew p(x) for any
string x, then you could pass the Turing test. Here is how. If you
know p(x) for all x, then you certainly know for any question Q and any
answer A the probabilities p(Q) and p(QA), so you choose answers from
the distribution p(A|Q) = p(QA)/p(Q). You can generalize this to all
dialogs by letting Q be the entire question-answer sequence up to the
most recent question.
Humans know p(x), but don't know how to put that knowledge into a
machine. If we did, we could easily code any x in at most log_2 1/p(x)
+ 1 bits using arithmetic coding.
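The construction of p(A|Q) from p(x) can be sketched on a toy distribution (the strings and probabilities below are made up purely for illustration):

```python
# A toy distribution p(x) over complete question+answer strings.
p = {
    "Q: 2+2? A: 4": 0.08,
    "Q: 2+2? A: 5": 0.02,
    "Q: hi A: hello": 0.90,
}

def prob_prefix(prefix):
    """p(Q): total probability of every string starting with Q."""
    return sum(v for k, v in p.items() if k.startswith(prefix))

# p(A|Q) = p(QA) / p(Q), exactly as in the argument above.
q = "Q: 2+2? "
for qa, pqa in p.items():
    if qa.startswith(q):
        print(qa, "->", pqa / prob_prefix(q))
```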
-- Matt Mahoney
umm? you supply all answers to all questions, and then you don't add to
the distribution with intelligence?
Are you trying to prove my point, or disprove it?
Let me test this community with one question. After 24 hours I will
give the correct answer and tell you why I asked this question. Don't
post your answer here; you can email the answer to me, so I can also
publish statistics about the different answers. I won't publish your
name.
There is a bridge that is so weak that only 2 people can cross it at
the same time.
There are four people, two pretty fit and two not so fit:
1 person can cross the bridge in 1 minute.
1 person can cross the bridge in 2 minutes.
1 person can cross the bridge in 5 minutes.
1 person can cross the bridge in 10 minutes.
Because it's dark and the bridge is very dangerous, the bridge can only
be crossed with a flashlight.
The question is:
How many minutes are needed for all four people to cross the bridge in
the shortest time?
So we start with 4 people on one side of the bridge and end with 4
people on the other side of the bridge.
It is not allowed for one or more (max 2 at the same time) people to
cross the bridge without the flashlight, and the flashlight can only be
brought back by one or more (max 2 at the same time) people to the
other side (the starting point).
It's not allowed to carry somebody on your back or any trick of that
kind; the minutes above are fixed for each person, but a quicker person
is allowed to adjust their speed to a slower person.
Solve this question with your brain, without help from anything else.
And again, don't post answers; only email me the answer.
If there is only one flashlight, then only one person can cross, and
the problem cannot be solved.
What does this have to do with compression?
In my previous answer, I didn't read the entire post quickly enough and
gave a quick answer that was incorrect. My apologies. However, my
question still stands: What does this have to do with compression?
More importantly, I'd like you to clear something up for me: Is your
belief in recursive compression the result of being un*able* to
understand our explanations, or simply being un*willing* to? If the
former, that's fine; if the latter, that's inexcusable.
Recursive (what you call "endless") compression has nice entertainment
value (for example, see the movie "Sneakers") but is mathematically
impossible. There is no conspiracy; there is not some "hidden" or
"previously undiscovered" theory to discover; people are not
withholding information from you. It is mathematically impossible to
compress *all* sets of data by a single bit. If anyone claims that,
like Sloot, they are automatically incorrect.
Maybe you're missing the distinction between "any" and "all". To
summarize:
It is impossible to compress any set of data = FALSE
It is impossible to compress all sets of data = TRUE
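The any/all distinction is easy to demonstrate with an ordinary compressor; a short zlib sketch:

```python
import os
import zlib

text = b"the quick brown fox " * 500  # redundant data: compressible
noise = os.urandom(len(text))         # random data: not compressible

print(len(text), len(zlib.compress(text, 9)), len(zlib.compress(noise, 9)))
# Some inputs shrink a lot; random inputs come back slightly *larger*.
assert len(zlib.compress(text, 9)) < len(text)
assert len(zlib.compress(noise, 9)) > len(noise)
```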
I forgot about this bit on your website:
*** The On2 codec company website www.on2.com started January 25, 1999,
years after Adam Clark showed his invention!
That is the start of the website, not the company. The company existed
as The Duck Corporation long before Adam Clark and his con. See
http://www.duck.com/ for more info.
I haven't yet been able to work out what your point is, but I do know what
your answer is. So I will break down your answer into its component parts
so that I am clear what it is you may be on about. From there I hope to
work out your point and any proof that may flow from this.
Part 1) One for the set (either 1,2 or 2,1)
Part 2) and one for the information that tells you which arrangement of the
set you are representing.
1 + 1 = Answer: 2 spaces.
Succinct and concise, an easy to follow requirement. I'm still struggling
to see your point though, so let's implement your requirement.
There is a set (1, 2); let's call it Set1.
Apparently there is another set (2, 1), according to the requirement. It
appears to have the same values as Set1, only with the elements arranged
differently. But be that as it may, the requirement says it's different
from Set1, so let's call it Set2.
Part 1 of the requirement says a space in which to store the one-for-the-set
identifier is required. Let's call this the Set space. A bit space will do,
so assign the bit value 0 to Set1 as its Set identifier, and bit value 1 as
the Set identifier for Set2.
OK, so that's 1 space taken and Part 1 implemented.
Part 2 requires another space in which to store the information that tells
us the arrangement of the set we are representing. So how many arrangements
are there? Set1: 2 arrangements; Set2: 2 arrangements. Now the
arrangements will only apply to the set indicated by the value in the Set
space at any given time. So yes, we can handle this in a bit space. Let's
call this second bit the Arrangement space. A bit value of 0 will indicate
that the current set is in ascending order, a bit value of 1 will indicate
descending order.
So that's Part 2 done. Let's walk through and have a look at the results
for the 2-space store:
0,0 = Set1, Ascending order = (1, 2)
0,1 = Set1, Descending order = (2, 1)
1,0 = Set2, Ascending order = (1, 2)
1,1 = Set2, Descending order = (2, 1)
2 sets and 2 arrangements for each set = 2 * 2 = 4 = 2 storage spaces
Yep, the requirement is met. Presents an interesting picture though.
I wonder if the same outcomes could be achieved with 1 space, given that
Set1 and Set2 have the same elements. Time to bring out the ol' counting
theorem. <count, count, count, bing>
1 set only and 2 arrangements for the set = 1 * 2 = 2 = 1 storage space
0 = Set, Ascending order = (1, 2)
1 = Set, Descending order = (2, 1)
Now where have I seen something like that before.
The results of the 2 space answer speak for themselves. As do those for the
1 space answer. The point you are trying to make still eludes me.
j.
The right answer is 17 minutes.
I received 4 emails in total; thanks!
2 x 17 minutes (50%, very good!)
2 x 19 minutes (50%)
One person with the right answer wrote "This puzzle looks similar to
the river game". I didn't know that puzzle, and I don't know whether
that person solved it without knowledge of the right solution.
Until now, everyone I have asked this question answered 19 minutes
(myself included), and everyone was almost sure it couldn't be done
faster (me included). This is also the reason why I asked this
question: to show that our mind can be pretty sure about something when
it doesn't see any other solution. We believe in it until somebody
explains the surprising solution; then we make it logical again.
I won't explain the solution, so people who didn't find the right
answer can still figure out how to do it in 17 minutes. This can show
that the brain does more work when it believes a better solution is
possible.
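For anyone who wants to check the 17 without being handed the crossing order: the puzzle is small enough to brute-force. A sketch (the helper name `fastest` is mine, just for illustration):

```python
from itertools import combinations

def fastest(times):
    """Brute-force the bridge puzzle: minimum total minutes to move
    everyone across, at most 2 per trip, and every trip (forward or
    back) must carry the single flashlight."""
    everyone = frozenset(times)
    seen = {}  # (people on start side, light on start side) -> best time
    best = [float("inf")]

    def search(near, light_near, elapsed):
        state = (near, light_near)
        if elapsed >= seen.get(state, float("inf")):
            return  # reached this state as fast or faster before
        seen[state] = elapsed
        if not near and not light_near:
            best[0] = min(best[0], elapsed)
            return
        movers = near if light_near else everyone - near
        for k in (1, 2):  # one or two people walk, at the slower pace
            for group in combinations(movers, k):
                if light_near:
                    search(near - set(group), False, elapsed + max(group))
                else:
                    search(near | set(group), True, elapsed + max(group))

    search(everyone, True, 0)
    return best[0]

print(fastest([1, 2, 5, 10]))  # -> 17
```

The memo on (people, flashlight side) prunes any path that revisits a state more slowly, so the search terminates even though return trips are allowed.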
it is impossible to know all coding schemes at any point in time :)
any sequence can be moved out of step depending on a bit value all of
the time.
detection of the restepping of the stream is where the key lies. This
places stringent but not impossible conditions on the sequence, and
the restepping that can be applied.
ok?
Exactly -- your mind is pretty sure that recursive compression exists,
probably because you can't understand the proof that it can't exist.
> We believe in it, till somebody explains the surprising
> solution than we make it logical again.
Which I and several others have already done.
I agree, as I explained already with the schoolbook example. I want to
add that sometimes you must accept some losses in the short term to
gain in the long term. Also, a cycle (everything moving) can be needed
to gain in the next round.
>As promised here the answer:
>
>The right answer is 17 minutes.
That's what you think... but I have found a way that makes it possible
in only 5 minutes. I will not disclose how (of course), because I'm
afraid someone will steal my idea and make a lot of money... are you
seeing the parallel between me and Jan Sloot yet?
> This is also the reason why I asked this question: to show that
>our mind can be pretty sure about something when it doesn't see any other
>solution. We believe in it until somebody explains the surprising
>solution; then we make it logical again.
You ignore the fact that it is possible to show that it is impossible
to be faster than 17 minutes (unless you use my special technique of
course). Just like it is impossible to compress any DVD to 1 kB.
Most inventors are not scientists or highly educated; they don't speak
the language of science. Because of this, the chance that one of their
friends is a scientist or speaks the language of science is also
smaller. Inventors often face a practical problem and solve it;
sometimes they only discover later what the impact of the invention can
be.
Let's take Jan Sloot as an example. He ran a shop that also repaired
electronic equipment such as televisions. Jan Sloot was such an
experienced TV repair specialist that when an employee named the
problem, TV brand and type, he could say without seeing the TV: check
the voltage at that transistor point, and if it's lower than x volts,
check the voltage over that resistor; if it's higher than x volts,
that capacitor must be replaced. One day Jan Sloot got the idea to
store all information about how to repair all known problems of all
known TV brands and models in one computer database, including
diagrams of the electronics and step-by-step explanations of where to
measure and which components to replace. Computers at that time had so
little storage that he couldn't store all that information, so he
started to search for a solution. Finally he found a way to store all
that data in a database.
In 1995 a client (Jos van Rossem) entered his shop with a broken
remote control and started chatting with Jan Sloot. While chatting he
heard about the database and saw the possibilities of the invention.
They made a plan with three steps: expand the repair shop, sell the
repair database, and develop the Sloot Digital Coding System (SDCS),
and Jos van Rossem started financing that plan. The repair shop and
repair database didn't become a success, because Jan Sloot spent all
his time developing SDCS to compress video. Jan Sloot filed his first
patent application in April 1997, which became active in November
1998. In this patent he used only two 4 Mb flash memories, one 2 Kb
memory, two CPUs and one 128 Kb external storage for the key code. At
the end of 1997 the investor stopped, because he had too little
capital left and there were too few earnings. Jos van Rossem invested
around 2 million guilders (around 1 million dollars) in total; most of
the money was spent on the expensive sales team for the repair
database, which Jan Sloot never completely finished because he
preferred to be busy with his SDCS system.
Around the beginning of 1999 Jan Sloot's SDCS invention was working
well enough for demonstrations. He found new investors and built two
new devices, one with 5 times 12 Mb of memory, 5 CPUs, and an external
chip card reader where he stored a 1 Kb key for every movie. Before he
died he was busy with a similar version with 5 times 74 Mb instead of
5 times 12 Mb.
What happens when somebody hears about the SDCS system?
They don't believe it.
What does a company do when Jan Sloot comes to demonstrate his
invention?
They send their best specialists and highest management to see this
incredible invention.
What happens when they see the working device?
They see it working but can't believe it, so they check and
double-check whether there is a trick and ask Jan Sloot questions.
If you were a cheater, would you choose a subject where you must start
by proving you don't cheat, because people expect you to be cheating?
Almost never.
If you are a cheater, how big is the chance you get money when you say
you have a good idea without a proof of concept?
Almost zero.
If you are a cheater, can you earn money by only demonstrating your
cheating device?
Almost never.
If you are a cheater, how big is the chance you get your money before
you show the investors exactly how the device works and how they must
rebuild it, and before they test whether the rebuilt device works?
Almost zero.
What percentage of the real world are cheaters?
A very small percentage.
How big is the chance that 100% of the incredible lossless compression
inventors are cheaters?
Almost zero.
What if only one inventor really did it?
Then he faces the problem that he can't prove it without releasing
his method.
What if the inventor describes the principle as closely as possible
without releasing the exact method?
The first problem is that he must have the right education and speak
the right language, and it must be theoretically possible, otherwise
specialists won't trust it beforehand. The second problem is that the
story can be logical for the inventor, because he knows how the system
works and knows what is missing from the stripped-down description,
but for the reader it can still be a big mystery. Because the inventor
has no benefit when other people can rebuild the invention before he
has received his money, he will do everything to hide that part.
So what to do?
Assume everything is possible, but check everything yourself.
First make personal contact with the inventor and win trust by saying
he did an incredible job and that you would like to see the invention.
Make an appointment at a place where the inventor feels comfortable,
check everything and ask critical questions, but never say that what
he is doing is impossible. The result can only be: I can, or cannot,
find proof that it works. When in your opinion it is not working,
because you see something strange, ask the inventor what kind of test
he could do to rule that problem out. This must be within the range of
what's reasonable; you can't expect the inventor to show the source
code or anything else that makes copying possible. Stay decent and
nice; then the inventor is most helpful.
And proof of theory?
You can't expect from an inventor a theory that science accepts as
proof. You can try to find a theory yourself, or wait until somebody
else finds it. In the meantime you can only accept that the inventor
demonstrates that his device works while there is no theory that
supports it.
This is a good question.
First of all, I like the truth; otherwise you get a labyrinth of lies
or half-lies that one day is going to collapse.
I think my interest started already as a little kid, when I wanted to
explain everything, asked questions, and found solutions that were
quite new for my surroundings. At the age of five I discovered how
most people accept common things as business as usual while they
couldn't answer my skeptical questions. Like most people, I faced
phenomena that couldn't be explained by science, and I set out to
prove they were too good to be true. Surprisingly, I couldn't prove
they weren't true; I even had to accept that these phenomena exist
without scientific explanation. On one occasion I got such strong
proof that something exists, while it's not possible according to
science, that I researched the subject for a long time. My conclusion
was that almost everything you can imagine is possible, and is reality
or has been done before. I felt as if I was living in the Middle Ages
compared with what's possible, and started to research why things are
not available. I'm still busy with that research and don't know the
answer.
When I read the story about Jan Sloot at the beginning of 2001 in the
Dutch magazine Quote (something similar to Forbes), it got my
attention, because this story is similar to most inventors I had
studied before. I researched the subject and found no proof that his
invention didn't work. In the media most attention went to Roel
Pieper: first because they say he cheated Philips by starting to set
up the company around Jan Sloot while he was still working at Philips,
and second because of the question how one of the biggest IT icons
could believe in something that cannot be possible and involve his
powerful friends in the project. I think Roel Pieper saw, at the first
demonstration of Jan Sloot's invention, the Holy Grail set-top box,
exactly the subject he was busy with, though Philips didn't want to
follow his set-top box direction. He probably saw the difficulties the
invention would face at Philips, where belief and impact are the two
biggest problems. By starting a new company around the invention and
letting all competitors of the invention join with seed capital, he
avoided the impact problem. Roel Pieper talked a lot with the inventor
and joined most of the demonstrations, and must have been very sure
about the working of the device before he showed it to his powerful
friends.
Jan Sloot's family and investors face comments that the invention is
not possible only because one law of information theory says it's
impossible, while they all saw the impossible working many times. For
an inventor, an invention is like a child to a mother. Jan Sloot
protected his invention by hiding the sensitive information in a
safe-deposit box; they found one safe-deposit box with old, useless
information, and there the story ends.
If Jan Sloot did not have a working device, then he made a fake one.
How is it technically possible to make, in 1997, a cigar-box-sized
device that plays 16 movies simultaneously, lets you cue them backward
and forward (also jump from beginning to end and back) almost
instantly, full screen and with good quality, without a hard disk and
without using his super coding? How could a very shy man like Jan
Sloot survive all the (international) demonstrations and requested
tests in front of specialists, even when they tested and opened his
device without him? Adam Clark did even more (international)
demonstrations; don't forget that demonstrations cost money (airplane
tickets, hotels) and are very tiring to do. Jan Sloot and Adam Clark
both ran their own companies and had only to lose.
That may have been the case up to 100 years ago, but not today and most
definitely not in the realm of computer science. You don't just go
"stumbling" onto a new invention in the realm of computer science or
information theory just because you "had an idea to do it a different
way". We're not talking about making a recumbant bicycle or a light
bulb, things you can just screw around with until you stumble onto a
better design -- these "inventions" claim to break the laws of physics
and mathematics.
There is no such thing as free energy, nor is there such a thing as a
method to compress all bitstreams by a single bit. I'm sorry you don't
want to accept this, but it's the simple truth.
Not being versed in "science" or scientific terms does not excuse
ignoring facts -- but it does help to prove or disprove your theories.
I'm reminded of the concise summary by Brian Raiter regarding this:
"One of the hallmarks of a mathematical crank is that they invent their
own terminology, thus obscuring (to themselves, as well as other
people) that their new method is either poorly thought-out or is
isomorphic to something much, much simpler."
> Inventors often
> face a practical problem and solve it; sometimes they only discover
> later what the impact of the invention can be.
That is a nice, romantic idea, but is just that: A romantic idea.
And now, I'd like to call on your beliefs in the impossible to explain
the following:
> Jan Sloot found new investors and built two new devices,
> one with 5 times 12 Mb memory, 5 CPUs, and an external chip card
> reader where he stored a 1 Kb key for every movie. Before he died he
> was busy with a similar version with 5 times 74 Mb instead of 5 times
> 12 Mb.
Why was he busy with a similar version that used more RAM? If his
first invention actually worked, why would he need to improve it?
> What happens when they see the working device?
Where *is* this working device? Does it still "work" now that he is
dead?
> If you are a cheater, can you earn money by only demonstrating your
> cheating device?
Of course, this is what all con men do. They earn money by getting
investments on the promise that it will work, and even show
"demonstrations" of it working. It's snake oil -- you're left with
nothing when they depart.
> If you are a cheater, how big is the chance you get your money before
> you show the investors exactly how the device works and how they must
> rebuild it, and before they test whether the rebuilt device works?
Very high. The "con" in "con man" stands for "confidence". In Britian
I routinely heard people like this referred to as "confidence
tricksters", which I believe is the root of the slang term. A con man
builds your confidence in whatever he is selling, purely to get your
money.
> The first problem is that he must have the right education and speak
> the right language, and it must be theoretically possible, otherwise
> specialists won't trust it beforehand.
This isn't a problem, it's a requirement. Assuming the invention
wasn't completely bogus, you'll need proper terminology to have it
analyzed so that it can be fabricated for mass production. See
previous point. Even if you lack the motivation or means to become
educated, you should at least explicitly define the terminology you use
so that it can be mapped to more widely-understood terminology.
> In the meantime you can only accept that the inventor
> demonstrates that his device works while there is no theory that
> supports it.
Sorry, science != faith. And I definitely won't open up *that* can of
worms in this forum.
Yes, and some people can't accept this, hence religion was born.
> I researched
> this subject and found no proof that his invention didn't work.
It's clear you didn't research enough :-) so that is why we are
referring you to the counting theorem. Please read the
comp.compression FAQ, section 9. For further research, please read a
book on information theory, as Matt Mahoney suggested.
> Jan Sloot's family and investors face comments that the invention is
> not possible only because one law of information theory says it's
> impossible, while they all saw the impossible working many times.
They saw *something*. Information theory says what the inventor
claimed was impossible, so what they actually saw cannot be determined.
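For scale, that objection can be put in numbers. This is a back-of-envelope sketch; reading "1 Kb" as 1024 bytes and assuming a 5 Mbit/s video bitrate are illustrative choices, since the posts state neither:

```python
# A 1 KB key holds 1024 bytes = 8192 bits, so it can name at most
# 2**8192 distinct movies.  A 2-hour stream at an assumed 5 Mbit/s
# carries 36 billion bits, so there are about 2**36_000_000_000
# possible streams -- no fixed assignment of 8192-bit keys can
# distinguish them all.
key_bits = 1024 * 8
stream_bits = 2 * 3600 * 5_000_000  # 2 hours at 5 Mbit/s (assumption)
print(key_bits, stream_bits)        # 8192 36000000000
```

Whatever bitrate one assumes, the exponent gap stays astronomical, which is the whole point.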
> If Jan Sloot did not have a working device, then he made a fake one.
> How is it technically possible to make, in 1997, a cigar-box-sized
> device that plays 16 movies simultaneously, lets you cue them backward
> and forward (also jump from beginning to end and back) almost
> instantly, full screen and with good quality, without a hard disk and
> without using his super
I can easily think of a few ways. 802.11 (wireless), for one.
You say that he owned a television repair shop and could easily fix
televisions and remote controls -- did it ever occur to you that his
"demonstration" box was merely a remote-control to some other device?
Or that he could have hidden the extra video information inside the
television set that was playing it?
> demonstrations and requested tests in front of specialists, even when
> they tested and opened his device without him?
If that is the case, then where is this working device, and does it
still work?
> Adam Clark did even more (international) demonstrations; don't forget
> that demonstrations cost money (airplane tickets, hotels) and are very
> tiring to do. Jan Sloot and Adam Clark both ran their own companies
> and had only to lose.
Adam Clark lost, that's for sure. Running a con as long as he did is a
house of cards just waiting for a gust of wind. While I am not
advocating the practice of conning people out of their money, he should
have taken the investment money and disappeared while he had the
chance. At least then he would be remembered as a successful con man,
not a pathological liar.
Jim, I have to agree with 90% of what you said, but I have a few 'small' disagreements.
> Sportman wrote:
> > Most inventors are not scientists or highly educated; they don't
> > speak the language of science.
> That may have been the case up to 100 years ago, but not today and most
> definitely not in the realm of computer science. You don't just go
> "stumbling" onto a new invention in the realm of computer science or
> information theory just because you "had an idea to do it a different
> way".
In fact, in the 1980s and early 1990s a lot of the work was done by people
'just fooling around' who did not have degrees in information science or
computer science. However, you are still right, because if you sat down
and counted up all the hours these people spent, all the books they
studied, all the time they spent researching their subject, you would see
they spent as much or more effort and/or money than the people with real
degrees.
Sportman, it is not that you must have a formal education; it is that you
must have an education. Most of the stuff you need to learn you can get
at the local library, or better yet the local college/university, using
their library to teach yourself. That is how I learnt my digital logic
skills, with no teacher to force me to learn this and only this fact. But
you still need to learn the facts before you can do real work.
> We're not talking about making a recumbent bicycle or a light
> bulb, things you can just screw around with until you stumble onto a
> better design -- these "inventions" claim to break the laws of physics
> and mathematics.
That is where Sportman's lack of education really sells him short. There
is only a small list of things we know to be absolute; the logic and math
behind them are fixed in the nature of the universe itself, and if you
changed those rules, the universe we observe around us could not exist.
The ratio Pi represents, for example, or the fact that you can't square a
circle with a compass and a straightedge: like these, the counting
argument is fixed in nature; it cannot be broken in this universe.
> There is no such thing as free energy, nor is there such a thing as a
> method to compress all bitstreams by a single bit. I'm sorry you don't
> want to accept this, but it's the simple truth.
The problem, I think, is that he will not sit down and develop/follow the
logic step by step for himself; he just reads the words and rejects them.
To think carefully about the counting argument is to see there are no
loopholes.
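Anyone who wants to follow the logic empirically can try it with an off-the-shelf compressor; this sketch uses Python's zlib on random (hence incompressible) data:

```python
import os
import zlib

# Feed a real compressor data with no exploitable structure and the
# output does not shrink; feeding the result back in ("recursive
# compression") only piles up framing overhead each round.
data = os.urandom(100_000)          # incompressible input
for rnd in range(3):
    out = zlib.compress(data, 9)    # maximum compression effort
    print(rnd, len(data), "->", len(out))
    data = out                      # attempt the next recursive round
```

The sizes only creep upward, exactly as the counting argument predicts for data that is already at maximum entropy.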
> Not being versed in "science" or scientific terms does not excuse
> ignoring facts -- but it does help to prove or disprove your theories.
> I'm reminded of the concise summary by Brian Raiter regarding this:
> "One of the hallmarks of a mathematical crank is that they invent their
> own terminology, thus obscuring (to themselves, as well as other
> people) that their new method is either poorly thought-out or is
> isomorphic to something much, much simpler."
Your last point is very important; it is the reason professors and
scientists will not waste time reading a paper written in a strange new
math. In the past I have even invented my own type of matrix math (no, it
does not follow the standard rules) to solve certain problems. For my
needs it works great and even gives me the correct answers. However, a
friend of mine (Larry Baydak) who studied nuclear science showed me how
to solve the problem the way he was taught in university. Guess what: I
could not understand how he did the math, but he did have the correct
answer. Too often, if a kook does invent a 'working' math system to solve
a problem and then gets rejected over his math, he assumes he knows
something that others don't. He is wrong! It is 99.99% more probable that
the math for solving the problem has already been invented and the
experts out there already know it. They see no reason to learn one
individual's new math when it gains them nothing over what they already
know. It really is up to the kook to first find out whether there is
already a system of math to solve the problem at hand before inventing a
new one.
> > Inventors often
> > face a practical problem and solve it; sometimes they only discover
> > later what the impact of the invention can be.
> That is a nice, romantic idea, but is just that: A romantic idea.
Personally, I consider it a true statement, but the statement applies to
anyone making new discoveries: inventors, scientists, explorers,
colonists. None of them know what the true long-term results of their
efforts will be. The point I am making is that the effect is real, but it
does not make inventors special compared to others.
> And now, I'd like to call on your beliefs in the impossible to explain
> the following:
> > Jan Sloot found new investors and built two new devices,
> > one with 5 times 12 Mb memory, 5 CPUs, and an external chip card
> > reader where he stored a 1 Kb key for every movie. Before he died he
> > was busy with a similar version with 5 times 74 Mb instead of 5
> > times 12 Mb.
> Why was he busy with a similar version that used more RAM? If his
> first invention actually worked, why would he need to improve it?
Sounds like a con job to me! Remember the number of supercompressor
claims we have seen, where just before the software is to ship the
inventor announces that a move of the compressor to hardware is needed,
and another round of getting money from the investors starts up? How many
times have we heard this before?
> > What happens when they see the working device?
> Where *is* this working device? Does it still "work" now that he is
> dead?
The $X million question: all these years, and yet there never seems to
be a working device to independently test.
> > If you are a cheater, can you earn money by only demonstrating your
> > cheating device?
Oh boy, can you! Been done lots of times in the past.
> Of course, this is what all con men do. They earn money by getting
> investments on the promise that it will work, and even show
> "demonstrations" of it working. It's snake oil -- you're left with
> nothing when they depart.
Every single time for so many years I have lost count.
> > If you are a cheater, how big is the chance you get your money
> > before you show the investors exactly how the device works and how
> > they must rebuild it, and before they test whether the rebuilt
> > device works?
It has worked in the past.
> Very high. The "con" in "con man" stands for "confidence". In Britian
> I routinely heard people like this referred to as "confidence
> tricksters", which I believe is the root of the slang term. A con man
> builds your confidence in whatever he is selling, purely to get your
> money.
Sportman should do a Google search on standard cons; it is amazing how
easy it is to get money off greedy people. Example:
http://www.eastnorritontwp.org/conartist.html
> > The first problem is that he must have the right education and speak
> > the right language, and it must be theoretically possible, otherwise
> > specialists won't trust it beforehand.
> This isn't a problem, it's a requirement. Assuming the invention
> wasn't completely bogus, you'll need proper terminology to have it
> analyzed so that it can be fabricated for mass production. See
> previous point. Even if you lack the motivation or means to become
> educated, you should at least explicitly define the terminology you use
> so that it can be mapped to more widely-understood terminology.
Heck, another point I disagree with. When conning people who are not
experts in the field, he just has to 'sound' like he knows what he is
talking about. How many times has a red flag gone up for the people here
about a compression system when you see a string of words that just don't
belong together? Too many people just need to hear a person use technical
terms, and for some reason their brain turns off and they assume he knows
what he is talking about.
> > In the meantime you can only accept that the inventor
> > demonstrates that his device works while there is no theory that
> > supports it.
I don't care which. Either supply the full theory in enough detail that I
can write the software myself, or show us a working example of the
hardware. But these claims that a third party was told by the inventor
that it works, so it must be real, are a waste of time.
> Sorry, science != faith. And I definitely won't open up *that* can of
> worms in this forum.
Science == Results! That is what makes real science so great: what it
says works, does work, period.
You can stand in front of a moving car and believe as hard as you want
that it cannot move, but it will still drive over you. In comparison, you
can claim this so-called compression works all you want, but without a
working example you are just pushing hot air.
Earl Colby Pottinger
--
I make public email sent to me! Hydrogen Peroxide Rockets, OpenBeos,
SerialTransfer 3.0, RAMDISK, BoatBuilding, DIY TabletPC. What happened to
the time? http://webhome.idirect.com/~earlcp
"Jim Leonard" <Moby...@gmail.com> :
Jim I have to agree with 90% of what you said, but I have a few 'small'
disagreements.
>Sportman wrote:
>> Most inventors are not a scientist or high educated, they don't talk
>> the language of science.
Mostly not true today.
>That may have been the case up to 100 years ago, but not today and most
>definitely not in the realm of computer science. You don't just go
>"stumbling" onto a new invention in the realm of computer science or
>information theory just because you "had an idea to do it a different
>way".
Infact, in the 1980s and early 1990s a lot of the work in compression coding
for microcomputers were by people who were 'just fooling around'. They often
did not have degrees in information science, data processing or computer
science.. However Jim, you still right because in your defense if you sat
down and counted up all the hours these people spend working, all the books
they studied, all the time they spent in researching thier subject, you would
see that they spent as much or more effort and/or money that the people with
the real degrees from colleges/universities.
Sportman, it is not that you must have formal education!
It is that you must have an education.
Most of the stuff you need to learn you can go to the local library or better
yet the local college/university and use thier library to teach yourself.
That is how I learnt my digital logic skills - and there is no teacher to
force you to learn this and only this one fact. But you still need to learn
facts before you can do real work.
> We're not talking about making a recumbant bicycle or a light
>bulb, things you can just screw around with until you stumble onto a
>better design -- these "inventions" claim to break the laws of physics
>and mathematics.
That is where Sportman lack of education really is pulling himself up short.
There is only a small list of things we know to be absolute, the logic and
math behind them are fixed in the nature of the universe itself, change those
rules and the universe we observe around us could not exist. The ratio Pi
represents an example, or the fact you can't square a circle with a compass
and a straight edge, like these limitations the counting arguement is fixed
in nature, it can not be broken in this universe.
>There is no such thing as free energy, nor is there such a thing as a
>method to compress all bitstreams by a single bit. I'm sorry you don't
>want to accept this, but it's the simple truth.
The problem I think is he will not sit down and develop/follow the logic step
by step for himself, he just reads the words and rejects them - to think
carefully on the counting arguement is to see there are no loopholes.
>Not being versed in "science" or scientific terms does not excuse
>ignoring facts -- but it does help to prove or disprove your theories.
>I'm reminded of the concise summary by Brian Raiter regarding this:
>"One of the hallmarks of a mathematical crank is that they invent their
>own terminology, thus obscuring (to themselves, as well as other
>people) that their new method is either poorly thought-out or is
>isomorphic to something much, much simpler."
Your last point is very important, it is the reason why professors/sciencists
will not waste time reading a paper writting in a strange new math.
In the past I have even invent my own type of matrix math (no it does not
follow the standard rules) to solve certain problems. For my needs it works
great and even gives me the correct answers. However, a friend of mine
(Larry Baydak) who studied nuclear science showed me how to solve the problem
the way he was taught in university. Guess what, I could not understand how
he did the math but he did have the correct answer.
Too often if a kook does invent a 'working' math system to solve a problem
then gets reject over his math he assumes he knows something that others
don't. He is wrong!
It rather is 99.99% more probable that the math to solving the problem is
already invented and the general experts out there already know of it. They
see no reason to learn one individual's new math system when it gains them
nothing over what they already know. It really is up to the kook to first
find out if there is already a system of math to solve the problem at hand
before invent a new system.
>>Inventors often
>>face a practical problem and solute it, sometimes they only discover
>>later what the impact of the inventing can be.
>That is a nice, romantic idea, but is just that: A romantic idea.
Personally, I consider it a true statement, but the statement applies to
anyone making new discoveries - Inventors - Sciencist - Explorers - Colonists
- none of them know what the true long term results of thier efforts will be.
The point I am making is the effect is real, but it does not make inventors
special compared to others.
>And now, I'd like to call on your beliefs in the impossible to explain
>the following:
>>Jan Sloot found new investors and build two new devices
>>one with 5 times 12 Mb memory and 5 CPU's and an external chip card
>>reader where he stored a 1 Kb key for every movie. Before he died he
>>was busy with similar version with 5 times 74 Mb instead of 5 times
>>12 Mb.
>Why was he busy with a similar version that used more RAM? If his
>first invention actually worked, why would he need to improve it?
Sounds like a con job to me! Remember the number of supercompressor claims
we have seen already before? And how just before the software is to ship
the inventor claims a need to move the compressor to hardware is really,
really needed? And then another round of getting money from the investors
starts up to support the hardware version? How many times have we heard this
before.
>>What happen when they see the working device?
>Where *is* this working device? Does it still "work" now that he is
>dead?
The $X million dollar question, all these years and yet there never seems to
be a working device to indepentantly test.
>>If you are cheater, can you earn money by only demonstrating you
>>cheating device?
Oh boy, can you! Been done lots of times in the past.
>Of course, this is what all con men do. They earn money by getting
>investments on the promise that it will work, and even show
>"demonstrations" of it working. It's snake oil -- you're left with
>nothing when they depart.
Every single time so far, for so many years I have lost count.
>>If you are cheater, how big is the chance you get your money before you
>>show the investors how the device is exactly working and how they must
>>rebuild that device and before they test if the rebuild device is
>>working?
It has worked in the past.
>Very high. The "con" in "con man" stands for "confidence". In Britian
>I routinely heard people like this referred to as "confidence
>tricksters", which I believe is the root of the slang term. A con man
>builds your confidence in whatever he is selling, purely to get your
>money.
Sportman should do a Google Search on standard cons - it is amazing how easy
it is to get money of greedy or careless people. Example:
http://www.eastnorritontwp.org/conartist.html
>>The first problem is that he must have the right education and talk the
>>right language and it must be theoretical possible otherwise
>>specialists don't trust it by for hand.
>This isn't a problem, it's a requirement. Assuming the invention
>wasn't completely bogus, you'll need proper terminology to have it
>analyzed so that it can be fabricated for mass production. See
>previous point. Even if you lack the motivation or means to become
>educated, you should at least explicitly define the terminology you use
>so that it can be mapped to more widely-understood terminology.
Heck, another point I disagree with. When conning people who are not
experts in the field, he just has to 'sound' like he knows what he is
talking about. How many times has a red flag gone up for the people
here in this usenet group about compression claims, when you see a
string of words that just don't belong together? Too many people just
need to hear a person use technical terms, and for some reason their
brain turns off and they assume he knows what he is talking about.
>>In the mean time you can only accept the inventor's proof that his
>>device is working, even while there is no theory to support it.
I don't care which. Either supply the full theory in enough detail so I can
write the software myself, or show us a working example of the hardware. But
these claims that a third party was told by the inventor it works thus it
must be real is a waste of time.
> Sorry, science != faith. And I definitely won't open up *that* can of
> worms in this forum.
Science == Results! That is what makes real science so great.
No, but if I must gamble, I'd bet on Gauss.
>On one occasion I got such strong
>proof that something exists which is not possible according to
>science, that I researched this subject for a long time.
Care to disclose what this one occasion was?
> My conclusion
>was that almost everything you can imagine is possible, is reality,
>or has been done before.
What is your proof for this?
>I researched
>this subject and found no proof that {Jan Sloot's} invention didn't work.
That's not how you come to valid conclusions. You have to find proof
that it did work.
For the people who are still struggling with how to bring 19 minutes
down to 17 minutes, here is one hint: what if the person who walks back
with the flashlight skips one crossing?
Your fellow retired scientist Bearden - who, as an ex-scientist for the
US Army, I suppose is a bit more experienced - says the Maxwell laws
you study in universities are "polished up" for the sake of a beautiful
academic "formula"... missing input voltage spikes a hundred times the
input.
Science can be ...uhmm... driven by politics? ...pretty much like
everything else. ;)))
Best,
E.
Thanks, Denis.
> Now someone can say: I use this compressor to compress N movies and
> then I connect all these movies together to obtain a new,
> incompressible movie. If I send this movie to the compressor, what
> happens? The answer is that the compressor does not compress it;
> after compression the movie has the same size, because it is not a
> movie!
It doesn't matter if the data is a movie or not; it's still impossible
if the claim is to reproduce the movie *exactly*.
Now, if you can accept a *lossy* conversion -- the movie will look
*almost* the same -- then there is still a discussion here (although
compression ratios of 1000:1 is still extremely far-fetched).
I think it is possible because there do not exist 2^8.000.000.000
different movies.
Do you agree with me?
Denis.
NO! As has already been pointed out, a single movie can be cut up into
hundreds of segments and shuffled about to produce that many different
movies from just that one set of data.
Heck, the way movies are made there is so much cut out for timing
reasons that you probably could shuffle the contents and make that many
'meaningful' movies if you wanted to. But that would still be only the
film from one movie - and there are plenty more out there.
I don't think the inventors named in this posting used this approach,
but the idea is interesting.
For example, if you take 10.000 commercial DVD video movies and lay all
frames on top of each other, to check which combinations in every frame
are used and not used, what would the result be? What percentage of the
theoretical combinations in one frame is used and not used? How quickly
does the number of used frame combinations grow with every new movie
compared? Are there combinations that are used often and others that
are almost never used? Is it possible to write a video alphabet, as
Samuel Morse did with his Morse code alphabet? Are the results the same
when using a matrix instead of a frame, where multiple matrices form
exactly one frame?
Or turn it around: how many different frame combinations can a true
random generator generate in, for example, 1 hour on a fast computer?
What percentage of the theoretical frame combinations is generated? How
quickly are new combinations found? Is it possible to write a video
alphabet for only the combinations not found?
And if both tests are done, what percentage of the random frame
combinations found equals the combinations found in the DVD frames?
Have these tests been done? If yes, what were the results?
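The first experiment can be sketched in a few lines. A real version
would need actual DVD frame data (each frame is hundreds of kilobytes);
as a stand-in - an assumption, not the proposed setup - this sketch
uses random bytes and tiny 4-byte blocks:

```python
import os

BLOCK = 4                 # bytes per block; a real DVD frame is ~600 KB
SAMPLE = 1_000_000        # bytes of sample "video" data

# Assumption: random bytes stand in for frame data. Real video repeats
# itself far more, so its coverage of the value space would be even lower.
data = os.urandom(SAMPLE)

seen = {data[i:i + BLOCK] for i in range(0, SAMPLE - BLOCK + 1, BLOCK)}
total = 2 ** (8 * BLOCK)  # theoretical combinations for one 4-byte block

print(f"distinct blocks seen: {len(seen)}")
print(f"coverage of the 2^32 space: {len(seen) / total:.2e}")
```

Even at this toy scale the sample covers well under 0.1% of the 2^32
possible 4-byte values; for full frames the fraction is unimaginably
smaller, which suggests why a "video alphabet" of used combinations
cannot be enumerated this way.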
Yes, you can do something like that, but you cannot obtain
2^8.000.000.000 different data sets.
From a set of data, in the best case you can subdivide it into
8.000.000.000 different segments of one bit, and the number of
different new data sets you can obtain is 8.000.000.000!.
How can you compute all these combinations? It is impossible. Among
this number there are many, many configurations that are impossible to
find, impossible to compute.
When I make a movie I perform some operations, and this set of
operations is a "program", and there is no program to build
2^8.000.000.000 combinations!
If you subdivide the data into 100 segments you have 100! combinations,
about 9.3x10^157; using 1024 bits you can map about 1.8x10^308. So only
128 bytes are needed to map the 100! orderings of 1Gb movies.
Denis.
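The orders of magnitude in Denis's claim can be checked directly: 100!
has 158 digits (about 9.3x10^157) while 2^1024 has 309 digits (about
1.8x10^308), so 128 bytes can indeed index any one of the 100!
orderings - in fact ceil(log2(100!)) = 525 bits, about 66 bytes, would
already suffice:

```python
import math

perms = math.factorial(100)   # number of orderings of 100 segments
index_space = 2 ** 1024       # what 1024 bits (128 bytes) can address

print(len(str(perms)))              # → 158 (so 100! ~ 9.3e157)
print(len(str(index_space)))        # → 309 (so 2^1024 ~ 1.8e308)
print(index_space > perms)          # → True
print(math.ceil(math.log2(perms)))  # → 525 bits actually needed
```

Of course, such an index is only meaningful to a receiver who already
holds all 100 segments, which is a different thing from compression.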
Mapping is not the same as compression. What do you map to? Where is
the original data stored?
I have 100 movies on my shelf. I have given them all a number. If I
put a number into a text file and give it to you, do you have all of
the digital information needed to watch the movie?
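The shelf analogy can be made concrete with a toy sketch (the catalogue
contents below are placeholders): the transmitted "file" is tiny, but
decoding requires the full catalogue on the receiving side, so nothing
was actually compressed away.

```python
# Hypothetical shared "shelf": both sides must already hold the full data.
shelf = {n: bytes(100) for n in range(100)}   # stand-in for 100 movies
shelf[7] = b"placeholder for the digital contents of movie #7"

def send(movie_id):
    return str(movie_id)              # the "compressed" file: 1-3 bytes

def receive(msg, catalogue):
    return catalogue[int(msg)]        # useless without the catalogue

print(receive(send(7), shelf) == shelf[7])  # → True
# The message itself carries only log2(100) ~ 6.6 bits of information;
# the gigabytes live in the catalogue, which still had to be stored and
# shared in full.
```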
No, the flashlight is not lost. Here is the answer:
1 minute person and 2 minute person cross together with the flashlight
(takes 2 minutes).
1 minute person returns with the flashlight to the original side (takes
1 minute).
5 minute person and 10 minute person cross together with the flashlight
(takes 10 minutes).
2 minute person returns with the flashlight to the original side
(takes 2 minutes).
1 minute person and 2 minute person cross together with the flashlight
(takes 2 minutes).
Adding all the times together (2+1+10+2+2) provides a total time of 17
minutes.
Sometimes things can be surprising :-)
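The schedule above can be checked by brute force. A minimal sketch,
assuming the standard rules (at most two cross at once, the flashlight
must accompany every crossing, a pair moves at the slower walker's
pace), searches all two-forward/one-back schedules and confirms that 17
minutes is optimal:

```python
from itertools import combinations

# Walking times for the four people in the puzzle.
times = {1: 1, 2: 2, 5: 5, 10: 10}

def best_time(left, torch_left, elapsed, best):
    if not left:                      # everyone is across
        return min(best, elapsed)
    if elapsed >= best:               # prune schedules already too slow
        return best
    if torch_left:
        for pair in combinations(left, 2):       # two cross forward
            best = best_time(left - set(pair), False,
                             elapsed + max(times[p] for p in pair), best)
    else:
        for p in set(times) - left:              # one returns with the light
            best = best_time(left | {p}, True, elapsed + times[p], best)
    return best

print(best_time(frozenset(times), True, 0, float("inf")))  # → 17
```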
Dr. Piotr Blass
2001
Chief Architect, ZeoSync Inc. Data compression, financial software,
controlled fusion research. Video on demand, HDTV. Simultaneously
teaching Software Engineering and Data Structures classes at Florida
Atlantic University as well as Mathematics classes at Broward Community
College. Several patents are in progress. Developed software for Time
Warner AOL and for Sony. Led a group of developers including top
national and international experts in data compression, financial
computing and controlled fusion research. Multimillion-dollar revenue
for the company resulted. Chairman of Zaamen Inc., a newly established
industry leader.
2004
US Senate Candidate for the State of Florida. Placed third in the 2004
General Election.
2004
Candidate for Governor of the State of Florida in 2006 General
Election.
Full CV:
http://www.pblass.com
http://uk.geocities.com/pblass2002/
http://www.floridian.biz
Statement of Dr. Piotr Blass:
http://www.backseatdriver.com/clients/ZeoSync/docs/drblass.htm
Cover-Up Of 2000 And 2004 Florida Vote Continues:
http://rense.com/general59/FLORID.HTM
Where There's Smoke...:
http://www.bradblog.com/archives/00000968.htm
Candidate for U.S. Senate, FL (Write-In):
http://216.239.59.104/search?q=cache:Rths8PWXHaIJ:action.endabuse.org/congressorg/e4/cinfo/%3Fstate%3DFL%26id%3D123296+Dr.+Piotr+Blass+site:org&hl=en
DR PIOTR BLASS FOR FLORIDA GOVERNOR IN 2006:
http://www.petitionspot.com/petitions/BLASSFORGOVERNOR
I didn't build a compression program, but I think it is possible to
build it.
Thanks, Denis