
Security Analyses of One-time System


hel...@126.com
Oct 21, 2007, 12:54:31 AM
Security Analyses of One-time System
Yong WANG
(School of Computer and Control, Guilin University of Electronic
Technology, Guilin City, Guangxi Province, China, 541004)
hel...@126.com
Abstract-Shannon put forward the concept of perfect secrecy and proved
that certain kinds of cryptosystems are perfectly secure. This paper
analyzes Shannon's proof that such cryptosystems achieve perfect
secrecy and points out that Bayes' theorem was used mistakenly in the
proof because probabilities under different conditions were mixed up.
An example is given to show that the one-time system is not perfectly
secure, which lays a foundation for further study of cryptosystem
secrecy.
Index Terms-probability, one-time system, cryptography, perfect
secrecy
I. Introduction
The one-time system (one-time pad) has been thought to be unbreakable
since 1859 [1]. Shannon put forward the concept of perfect secrecy and
proved that some kinds of cryptosystems, including the one-time
system, are perfectly secure [2]. For a long time its perfect secrecy
was not questioned. Recently it has been found that Shannon's argument
that the one-time system is perfectly secure is flawed. In [3-5],
Shannon's mistake was analyzed from different aspects; the one-time
pad was found not to be perfectly secure, and an improvement was
given. In [6], a cryptanalysis method based on probability was
presented (though the method cannot give a certain result) and was
used to attack the one-time pad. In this paper, the author analyzes
the problem further.
II. Shannon's statement and proof about perfect secrecy
Shannon defined perfect secrecy by requiring of a system that, after
a cryptogram is intercepted by the enemy, the a posteriori
probabilities of this cryptogram representing various messages be
identically the same as the a priori probabilities of the same
messages before the interception. The following is Shannon's proof:
Let us suppose the possible messages are finite in number, M1, ..., Mn,
and have a priori probabilities P(M1), ..., P(Mn), and that these are
enciphered into the possible cryptograms E1, ..., Em by
E = TiM.
The cryptanalyst intercepts a particular E and can then calculate, in
principle at least, the a posteriori probabilities for the various
messages, PE(M). It is natural to define perfect secrecy by the
condition that, for all E the a posteriori probabilities are equal to
the a priori probabilities independently of the values of these. In
this case, intercepting the message has given the cryptanalyst no
information. Any action of his which depends on the information
contained in the cryptogram cannot be altered, for all of his
probabilities as to what the cryptogram contains remain unchanged. On
the other hand, if the condition is not satisfied there will exist
situations in which the enemy has certain a priori probabilities, and
certain key and message choices may occur for which the enemy's
probabilities do change. This in turn may affect his actions and thus
perfect secrecy has not been obtained. Hence the definition given is
necessarily required by our intuitive ideas of what perfect secrecy
should mean.
A necessary and sufficient condition for perfect secrecy can be found
as follows: We have by Bayes' theorem

PE(M) = P(M) PM(E) / P(E)
in which:
P(M)= a priori probability of message M
PM(E)= conditional probability of cryptogram E if message M is chosen
i.e. the sum of the probabilities of all keys which produce cryptogram
E from message M.
P(E)= probability of obtaining cryptogram E from any cause.
PE(M)= a posteriori probability of message M if cryptogram E is
intercepted.
For perfect secrecy PE(M) must equal P(M) for all E and all M. Hence
either P(M)=0, a solution that must be excluded since we demand the
equality independent of the values of
P(M), or
PM(E)= P(E)
for every M and E. Conversely if PM(E)= P(E) then
PE(M)= P(M)
and we have perfect secrecy. Thus we have the result:
Theorem 1. A necessary and sufficient condition for perfect secrecy is
that
PM(E)= P(E)
for all M and E. That is, PM(E) must be independent of M.
Stated another way, the total probability of all keys that transform
Mi into a given cryptogram E is equal to that of all keys transforming
Mj into the same E, for all Mi, Mj and E.
Now there must be as many E's as there are M's since, for a fixed
i, Ti gives a one-to-one correspondence between all the M's and some
of the E's. For perfect secrecy PM(E)= P(E)≠0 for any of these E's and
any M. Hence there is at least one key transforming any M to any of
these E's. But all the keys from a fixed M to different E's must be
different, and therefore the number of different keys is at least as
great as the number of M's. It is possible to obtain perfect secrecy
with only this number of keys, as one shows by the following example:
Let the Mi be numbered 1 to n and the Ei the same, and using n keys
let
TiMj=Es
where s=i+j(Mod n). In this case we see that PE(M)=1/n=P(E) and we
have perfect secrecy. An example is shown in Fig. 1 with s=i+j-1(mod
5).
Perfect systems in which the number of cryptograms, the number of
messages, and the number of keys are all equal are characterized by
the properties that (1) each M is connected to each E by exactly one
line, (2) all keys are equally likely. Thus the matrix representation
of the system is a Latin square[2].

Fig. 1 Perfect system
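Shannon's construction can be checked mechanically. A minimal sketch in
Python (the non-uniform prior below is an arbitrary illustrative
choice; keys are taken equally likely, as in the construction):

```python
from fractions import Fraction

n = 5
# Arbitrary (non-uniform) prior over the n messages; sums to 1.
prior = [Fraction(5, 10), Fraction(2, 10), Fraction(1, 10),
         Fraction(1, 10), Fraction(1, 10)]
key_prob = Fraction(1, n)  # keys equally likely

for e in range(n):
    # P(E=e) = sum_j P(M=j) * P(K = e-j mod n) = sum_j P(M=j)/n = 1/n
    p_e = sum(prior[j] * key_prob for j in range(n))
    assert p_e == Fraction(1, n)
    for j in range(n):
        # Bayes: PE(M=j) = P(M=j) * PM(E=e) / P(E=e), with PM(E=e) = 1/n
        posterior = prior[j] * key_prob / p_e
        assert posterior == prior[j]  # posterior equals prior for every pair
```

With exact rational arithmetic, the computed posterior equals the prior
for every (M, E) pair in this construction, whatever prior is chosen.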
III. Counterexamples for the perfect secrecy of one-time system
To expose the mistake and motivate the analysis below, the following
counterexample is given to show that the one-time system is not
perfectly secure.
Example 1: The plaintext space is M = {0, 1}. From the prior
condition, which is generally the context of the correspondence, it is
known beforehand that the prior probability of the plaintext being 0
is 0.9, while the prior probability of the plaintext being 1 is 0.1.
The ciphertext space is C = {0, 1}, the key space is K = {0, 1}, and
the keys are equally likely. The algorithm is the one-time system.
Later, the information is obtained that the ciphertext is 0. When only
this later information is considered (disregarding the prior
probability of the plaintext), then for the fixed ciphertext there is
a one-to-one correspondence between the plaintexts and the keys, so
the plaintexts would be judged equally likely; that is, the
probability of the plaintext being 1 would be 0.5. As the probability
obtained this way is not consistent with the prior probability, a
compromise is needed. The compromised posterior probability of the
plaintext would be no more than the larger and no less than the
smaller of the two probabilities corresponding to the two conditions.
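For reference, the direct Bayes computation for this binary setup (the
textbook calculation, which the analysis in Section IV below argues
mixes up conditions) can be written out; all numbers are taken from the
example above:

```python
from fractions import Fraction

p_m0, p_m1 = Fraction(9, 10), Fraction(1, 10)  # priors from the example
p_k = Fraction(1, 2)                           # keys equally likely

# C = M XOR K; the ciphertext is observed to be 0
p_c0 = p_m0 * p_k + p_m1 * p_k                 # P(C=0) = 1/2
post_m0 = p_m0 * p_k / p_c0                    # textbook P(M=0 | C=0)
post_m1 = p_m1 * p_k / p_c0                    # textbook P(M=1 | C=0)
print(post_m0, post_m1)                        # 9/10 1/10
```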
When considering probabilities, the different conditions should be
distinguished. For the above example there are several relevant
conditions: the prior probability distribution of M, the conditions of
the cryptosystem, the probability distributions of C and K, and the
value of C. Under different conditions we get different probability
distributions for the plaintext. Through the mapping among M, K and C,
their probabilities interact in complicated ways. In the above
example, the probability of the plaintext changes once the ciphertext
is fixed, even though the ciphertext is unknown.
IV. Analyses of Shannon's Mistake
We can find that Shannon had the result PE(M) = 1/n in his example
above. According to the definition of perfect secrecy,
PE(M) = P(M)
so we get P(M) = 1/n; but the prior probabilities of the plaintexts
are seldom all equal to 1/n. So there may be some mistake in Shannon's
proof. As perfect secrecy would require PE(M) = P(M) = 1/n, the one-
time system is not perfectly secure unless P(M) = 1/n.
The following is an analysis of the mistakes in Shannon's proof.
Shannon used Bayes' theorem to prove Theorem 1 and substituted
PE(M) = P(M) into the equation. There is a hidden presupposition that
all the probabilities in the equation
PE(M) = P(M) PM(E) / P(E)
are under the same precondition. That is to say, in the equation, P(M)
and P(E) are probabilities under the condition we call the prior
condition. PE(M) is the probability of M under both the prior
condition and the condition given by E's value. PM(E) is the
probability of E under both the prior condition and the condition
given by M's value.
But we can find that P(M) is the probability under the prior
condition, while PE(M) is the probability under the condition that E
and the key's probability distribution are known. The two
probabilities are not under the same prior condition. Relative to
P(M), the condition of PE(M) includes not only the value of E, but
also the fact that E is fixed, and the mapping among M, K and C.
Strictly speaking, therefore, PE(M) should be written PEFG(M) if we
call the other conditions F and G, so PE(M) (strictly speaking,
PEFG(M)) cannot be substituted into the equation. But Shannon did so
and derived the wrong conclusion. When only the conditions that E and
the key's probability distribution are known are considered, we can
infer that all the plaintexts are equally likely, for the one-time
system gives a one-to-one correspondence between all the M's and some
of the K's when E is fixed (even though unknown). But the prior
probabilities of the plaintexts are seldom equally likely, so this
probability is not consistent with the prior probability. When all the
conditions are considered, a compromise is needed. The compromised
posterior probability of the plaintext would not be the same as the
prior probability, and then we have the result
PE(M) ≠ P(M)
We can find that the probability distribution of M changes when C is
taken as a fixed value rather than a random variable (even though C is
unknown). That is the crux of the mistake.
As we take PE(M) to be the probability under all the conditions, it
must be a compromise between the probabilities under the partial
conditions. In [6], we gave an algorithm for compromising two
probability distributions. As the probabilities under the two
conditions are usually inconsistent, the compromise is not the same as
either of them, so we usually get the result that PE(M) ≠ P(M), and
the one-time system is not perfectly secure except in particular
cases.
From Shannon's result that PE(M) = 1/n, we can see that Shannon
completely ignored the prior condition when he considered the
posterior probability, so that posterior probability is one-sided. We
can confirm Shannon's mistake by using his result to derive a
contradiction. Using Shannon's result that the given example is
perfectly secure, we get PE(M) = P(M); as Shannon got PE(M) = 1/n, it
follows that P(M) = 1/n. But that is wrong, for plaintexts are seldom
equally likely.
What is more, there is another hidden presupposition in Shannon's
proof: that the plaintexts should all be of the same length. But
usually the plaintexts cannot meet this presupposition. When a
ciphertext is intercepted, its length L is known. For the one-time
system the length of every possible plaintext must be L; otherwise
the prior probability differs from the posterior probability, which
is zero.
V. Conclusion
From the above analyses, we can find that the one-time system is not
perfectly secure unless extra conditions are given. Despite this, it
has good cryptographic properties, and we can take measures to
improve its security.
References
[1] F. L. Bauer, Decrypted Secrets: Methods and Maxims of Cryptology.
Berlin, Heidelberg: Springer-Verlag, 1997.
[2] C. E. Shannon, "Communication Theory of Secrecy Systems," Bell
System Technical Journal, vol. 28, no. 4, 1949, pp. 656-715.
[3] Yong WANG, "Security of One-time System and New Secure System,"
Netinfo Security, 2004, (7): 41-43.
[4] Yong WANG, "Perfect Secrecy and Its Implementation," Network &
Computer Security, 2005, (5).
[5] Yong WANG, Fanglai ZHU, "Security Analysis of One-time System and
Its Betterment," Journal of Sichuan University (Engineering Science
Edition), 2007, supp. 39(5): 222-225.
[6] Yong WANG, Shengyuan ZHOU, "On Probability Attack," Information
Security and Communications Privacy, 2007, (8): 39-40.


This work was supported by the Guangxi Science Foundation (0640171)
and the Modern Communication National Key Laboratory Foundation (No.
9140C1101050706).

Biography:
Yong WANG (b. 1977), from Tianmen City, Hubei Province; male; Master
of Cryptography. Research fields: cryptography, information security,
generalized information theory, quantum information technology.
School of Computer and Control, Guilin University of Electronic
Technology, Guilin City, Guangxi Province, China, 541004.
E-mail: hel...@126.com, wang197...@sohu.com
Mobile: 13978357217; fax: (86)7735601330 (office)

wangyong
Oct 21, 2007, 7:15:34 AM
Request for comments and criticisms about Shannon's mistakes

I have found mistakes of Shannon's concerning the one-time pad and
conditional entropy. The mistakes relate to many problems in
mathematics, information theory and other fields. The papers can be
found at http://www.arxiv.org.
1. http://arxiv.org/abs/0709.4420
Title: Confirmation of Shannon's Mistake about Perfect Secrecy of One-
time-pad
2. http://arxiv.org/abs/0709.4303
3. http://arxiv.org/abs/0709.3334
Title: Mistake Analyses on Proof about Perfect Secrecy of One-time-
pad
4. http://arxiv.org/abs/0708.3127
Title: Question on Conditional Entropy
As the works relate to Shannon's mistakes, I ask for comments and
criticisms about my papers. But note that you should read my papers
(there are more than 3 papers about OTP and one paper about
conditional entropy available at http://arxiv.org in English)
carefully and think twice before you reach your conclusion, for there
are a few scholars who have been wrong, and I have thought about the
problem for more than three years. If any expression is not proper or
my English is poor, please point out the place, and that will help me
improve. I am very glad to see your letters of comments and
criticisms. My email is hel...@126.com. If I am wrong, you can submit
papers and inform me of my mistake, and I will withdraw my papers.
Later research on the problem will have an important impact on
cryptography, probability, and information.

John E. Hadstate
Oct 21, 2007, 7:57:25 AM

"wangyong" <hel...@126.com> wrote in message
news:1192965334.9...@t8g2000prg.googlegroups.com...

> Request for comments and criticisms about Shannon's
> mistakes
>
> The later research about the problem will have important
> impact on
> cryptography,probability,and informtaion.
>

I sincerely hope that your papers influence the government
of China to forever abandon the use of one-time pads.


Peter Pearson
Oct 21, 2007, 2:20:45 PM
On Sun, 21 Oct 2007 04:15:34 -0700, wangyong <hel...@126.com> wrote:
> Request for comments and criticisms about Shannon's mistakes

Dear Wang Yong -

This newsgroup receives posts from some very intelligent
people and from some crazy people. The typical reader will
give you a limited "word budget" in which to establish
whether you belong to the former set or the latter.
Regrettably, because of the difficulty of your exposition,
you probably get a relatively small word budget.

You appear to be asserting that the security of the One-Time
Pad is deficient in some respect. I am offering to help you
to establish this assertion clearly and concisely. The
following proposal is guided by what little I could
understand of your posts, and may require some adjustment to
meet your true goals.

I will use a cryptographically strong random bit generator
to generate keystream to encrypt, say, one thousand
single-bit messages. The single-bit messages will be
generated by a biased generator of independent bits, the
bias being set to whatever you specify. I will post to this
newsgroup the ciphertexts for all the messages and, if you
require it, the plaintexts for some subset of the messages.
You will then make some prediction about the unknown
plaintexts. If your prediction is both surprising and true,
readers of this group will have some concrete evidence that
your posts are worth the trouble of reading.
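The proposed experiment might be sketched roughly as follows; the
bias, message count, and seed here are placeholder values, not part of
the proposal:

```python
import random
import secrets

def run_challenge(n_messages=1000, bias=0.9, seed=1):
    """Encrypt biased single-bit messages with a fresh uniform keystream."""
    msg_rng = random.Random(seed)  # biased plaintext source (illustrative)
    plaintexts = [1 if msg_rng.random() < bias else 0
                  for _ in range(n_messages)]
    keystream = [secrets.randbits(1) for _ in range(n_messages)]  # one pad bit each
    ciphertexts = [p ^ k for p, k in zip(plaintexts, keystream)]
    return plaintexts, ciphertexts

plain, cipher = run_challenge()
# A predictor sees only `cipher`; with a uniform keystream each
# ciphertext bit is unbiased regardless of the plaintext bias.
```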

--
To email me, substitute nowhere->spamcop, invalid->net.

wangyong
Oct 21, 2007, 8:48:00 PM
I have received some opposing views; they show that the one-time pad
has some good properties. I agree with that, but that is not perfect
secrecy. So when you send opposing views, make sure your views are
based on Shannon's definition of perfect secrecy.
Any opposing views are welcome.
Yong Wang

Harris
Oct 22, 2007, 12:12:25 AM
wangyong <hel...@126.com> wrote in news:1193014080.810210.222300
@t8g2000prg.googlegroups.com:


Regarding the texts posted here for the in-depth analysis of this
proposal: they are not well written, in the sense of making the claim
and its proof clear; they read more like a longer explanation of the
original claim itself.

Regarding the validity of the claim itself, it doesn't need a
super-duper math proof to show that, given a truly random bit stream
for the pad and an unbiased bit function like XOR, you cannot infer
the pad from the encrypted message bits. Problems emerge when the pad
is reused (not of sufficient length), or the random source is not
truly random, or the bit function is biased.

And of course, the major problem with OTP is key generation and distribution, not the (trivial)
encryption function. Shannon has made that very clear in his work.


--
Harris

Michael Scott
Oct 22, 2007, 3:37:05 AM
I think maybe he is saying that the length of the ciphertext reveals the
length of the plaintext, which in turn leaks information about the
plaintext.

??

Mike Scott

rossum
Oct 22, 2007, 4:46:40 AM
On Mon, 22 Oct 2007 08:37:05 +0100, "Michael Scott" <msc...@indigo.ie>
wrote:

Has he never heard of padding?

rossum

wangyong
Oct 22, 2007, 5:26:11 AM

Not so. It does not reveal only the length.

wangyong
Oct 22, 2007, 5:27:41 AM
On Oct 22, 12:12 pm, Harris <xgeorg...@pathfinder.gr> wrote:
> wangyong <hell...@126.com> wrote in news:1193014080.810210.222300

What you discuss is not Shannon's perfect secrecy.

Harris
Oct 22, 2007, 6:50:19 AM
wangyong <hel...@126.com> wrote in
news:1193045261.6...@e34g2000pro.googlegroups.com:


No, it's OTP encryption, as the thread topic is.


--
Harris

wangyong
Oct 22, 2007, 10:57:55 AM

> No, it's OTP encryption, as the thread topic is.

--------------------------Can you express it clearly? My topic is to
find Shannon's mistake.

Herbert Paulis
Oct 23, 2007, 5:13:50 AM
FWIW, apart from a single exception, and there only to denote the
disputed fact, the author has cited only other texts by himself,
covering more or less the same topic in other words. Very convincing
literature indeed ...

Herbert


frankg...@gmail.com
Oct 23, 2007, 7:32:32 AM

> And of course, the major problem with OTP is key generation and distribution, not the (trivial)
> encryption function. Shannon has made that very clear in his work.


Well, during Shannon's life there were no cheap mass-storage media
such as CDs or DVDs.
Just think of the diplomatic service of a country named Utopia. They
set up a well-protected communications center in the capital of
Utopia, named Utopia City. For each of Utopia's 150 embassies, two
DVDs with an OTP are generated at the communications center. Then
couriers distribute the two DVDs to each embassy. Of course, there is
a copy of each DVD at the communications center. To transmit a
message from Utopia's embassy in London to the embassy in Tokyo, the
diplomat in London encrypts the data with the OTP and sends it to
Utopia City, where it is decrypted. There, it is re-encrypted using
the Tokyo OTP and sent to the Tokyo embassy. So, by generating a very
limited amount of key material (300 DVDs), a global diplomatic
communication system can be set up. Of course, the diplomats may not
do broadband communications, because that would quickly "consume" the
DVDs, but for just textual communication (email or text chat), a
single DVD with 4 gig of data will suffice for several months...
My guess is that this is definitely the easiest and most economic way
of securing confidential communications against the most resourceful
opponents. Utopia does not need its own cryptologic service, but it
would still be safe from rich-nation cryptanalysis.
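Back-of-the-envelope, with an assumed traffic figure (the 2 MB/day
rate is a guess for illustration, not from the post above):

```python
pad_bytes = 4 * 10**9        # one DVD-sized pad, roughly 4 GB
daily_traffic = 2 * 10**6    # assumed 2 MB/day of text traffic per embassy
lifetime_days = pad_bytes // daily_traffic
print(lifetime_days)         # 2000 days, i.e. years of purely textual traffic
```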

Harris
Oct 23, 2007, 8:45:04 AM
"frankg...@gmail.com" <frankg...@gmail.com> wrote in
news:1193139152.2...@v29g2000prd.googlegroups.com:

>
>> And of course, the major problem with OTP is key generation and
>> distribution, not the (trivial) encryption function. Shannon has made
>> that very clear in his work.
>
>
> Well, during Shannon's life there were no cheap mass-storage media

> such as CDs or DVDs........................


> My guess is that this is definitely the easiestand most economic way
> of securing confidential communications against the most resourceful
> opponents. Utopia does not need its own cryptologic service, but it
> would still be safe from rich-nation cryptanalysis.
>
>


This logic is precisely why OTP-like schemes are the primary
encryption method for military-grade communications, especially
between static and low-traffic nodes. The real problem, even today,
is that you must be able to transfer the keys in a completely secure
way, in a volume equal to the intended future traffic.

For open-air communications, the military uses codebooks that are
changed monthly (at least). Even this is not always practical,
carrying around a bag of codebooks and keys every 31st. If someone
really wants to, sooner or later one of these bags will get into the
wrong hands.

However, the basic idea of the OTP is so popular that it is used in
everyday practice without people realizing it, from the codes of
pre-paid telephone cards to the activation of software over the
phone. I've actually created an OTP-like engine in Java, like the one
you describe, for the creation of massive volumes of random bits (OTP
keys using a secure PRNG); it's not more than 50-60 lines of code,
including encryption/decryption streams.


--
Harris

Kristian Gjøsteen
Oct 23, 2007, 8:57:31 AM
frankg...@gmail.com <frankg...@gmail.com> wrote:
>My guess is that this is definitely the easiestand most economic way
>of securing confidential communications against the most resourceful
>opponents. Utopia does not need its own cryptologic service, but it
>would still be safe from rich-nation cryptanalysis.

If Utopia had a reasonably competent communications security service,
they would realise that the one time pad does not have integrity. There
are of course information-theoretic MACs available, but most silly one
time pad proponents do not even realise the need for integrity.
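One standard information-theoretic MAC is the one-time affine hash
over a prime field; a minimal sketch, assuming a fresh key pair per
message (the prime and the names here are illustrative):

```python
import secrets

P = (1 << 61) - 1  # a Mersenne prime; the tag space bounds forgery probability

def keygen():
    # One-time key: nonzero multiplier a and additive mask b, fresh per message
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def mac(key, m):
    a, b = key
    # Affine universal hash: given one (m, tag) pair, a forged (m', t')
    # with m' != m verifies with probability on the order of 1/P.
    return (a * m + b) % P

key = keygen()
tag = mac(key, 123456789)
assert mac(key, 123456789) == tag  # the verifier recomputes the tag
```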

Now count the cost and inconvenience of this "solution" against the
cost and convenience of generating 150 or so public keys for SSL.

The words "no-brainer" come to mind.

--
Kristian Gjøsteen

wangyong
Oct 23, 2007, 11:00:08 AM

Herbert
--------We have found no one else questioning the problem.

frankg...@gmail.com
Oct 24, 2007, 4:17:06 AM

>
> If Utopia had a reasonably competent communications security service,
> they would realise that the one time pad does not have integrity. There
> are of course information-theoretic MACs available, but most silly one
> time pad proponents do not even realise the need for integrity.
Quite right. I did not write about the problem of integrity.

>
> Now count the cost and inconvenience of this "solution" against the
> cost and convenience of generating 150 or so public keys for SSL.
>
> The words "no-brainer" comes to mind.
Oh yes. The Germans probably said the same about Enigma 65 years ago.
Now they look like bloody fools...
My point was that a small country with limited resources will
probably be better off setting up a secure courier service for OTPs
than trying to compete with UKUSA cryptanalysis.

>
> --
> Kristian Gjøsteen


David Wagner
Oct 24, 2007, 2:19:27 PM
frankg...@gmail.com wrote:
>My point was that a small country with limited resources will probably
>be better off by setting up a
>secure courier service for OTPs than trying to compete with UKUSA
>cryptanalysis.

But a small country does not need to compete with UKUSA cryptanalysts to
use TLS. I think the point Kristian was making is that solutions like
TLS are freely available to everyone, small or large, and are widely
believed to be good enough that the crypto algorithms are unlikely to
be the weakest link in the system. Of course it's always possible that
the conventional wisdom is wrong.

wangyong
Oct 24, 2007, 11:29:52 PM
Oh yes. The Germans probably said the same about Enigma 65 years ago.
Now they look like bloody fools...
My point was that a small country with limited resources will
probably be better off setting up a secure courier service for OTPs
than trying to compete with UKUSA cryptanalysis.

-----------It is hard to understand.

wangyong
Oct 26, 2007, 11:35:30 PM
hot discussion

http://groups.google.com/group/sci.math/browse_thread/thread/df158aa13e6a94b4/2b558f1bf5b277fb

Dav170627
Oct 27, 2007, 7:28:04 AM
wangyong wrote:
> hot discussion
>
> http://groups.google.com/group/sci.math/browse_thread/thread/df158aa13e6a94b4/2b558f1bf5b277fb
>
>
This "hot discussion" is just more of your embarrassing spew in
newsgroups.

I will tell you directly that you are wrong.

If you still think there is a flaw in Shannon's security proof, then
write it up, submit it to a math or cryptography journal, and wait
for the rejections.

hagman
Oct 27, 2007, 11:10:48 AM

Probably because OTP perfect secrecy is so simple,
i.e. Shannon's proof is very straightforward with today's knowledge.

I have the impression that, e.g. in the first sentence of section IV
of http://arxiv.org/abs/0709.4303, you simply mix up P_E(M) with
P_M(E).
(You write that "[w]e can find that Shannon had the result [...]"
without pinning that down more precisely.)

Without access to Shannon's original right now, it might even be the
case that some typo to the same effect exists there.
But since you additionally claim that the *result* of perfect secrecy
is wrong -- a straightforward result, as I said above -- I advise you
to rethink your ideas.

hagman

wangyong
Oct 28, 2007, 11:32:20 AM
This "hot discussion" is just more of your embarrassing spew in
newsgroups.

I will tell you directly that you are wrong.

--------You are unreasonable.

If you still think there is a flaw in Shannon's security proof then
write it up and submit it to a math or cryptography journal and wait
for the rejections.

-----Some papers were published, though I have met reviewers like
you. But their proofs were analyzed and found wrong.

wangyong
Oct 28, 2007, 11:36:53 AM
Probably because OTP perfect secrecy is so simple,
i.e. Shannon's proof is very straightforward with today's knowledge.

I have the impression that e.g. in the first sentence of section IV
of http://arxiv.org/abs/0709.4303, you simply mix up P_E(M) with
P_M(E).
(You write that "[w]e can find that Shannon had the result [...]"
without pinning that down more precisely).

----You can see his paper of 1949.

Without access to Shannon's original right now, it might even be the
case that some typo to the same effect exists there.
But since you additionally claim that the *result* of perfect secrecy
is wrong -- a straightforward result, as I said above -- I advise you
to rethink your ideas.

---------I have thought about it for more than 4 years.

Phil Carmody
Oct 28, 2007, 12:15:24 PM

For fuck's sake, learn how to attribute and quote messages.

Phil
--
Dear aunt, let's set so double the killer delete select all.
-- Microsoft voice recognition live demonstration

Quadibloc
Oct 28, 2007, 12:33:32 PM
hell...@126.com wrote:
> Shannon defined perfect secrecy by requiring of a system that after a
> cryptogram is intercepted by the enemy the a posteriori probabilities
> of this cryptogram representing various messages be identically the
> same as the a priori probabilities of the same messages before the
> interception.

> We can find that Shannon had the result PE(M)=1/n in his example
> above. According to the definition of perfect secrecy,
> PE(M)= P(M)
> We can get that P (M)=1/n, but the prior probabilities of the
> plaintexts are seldom the same as 1/n. So there maybe some mistakes in
> Shannon's proof. As perfect secrecy require PE(M) = P(M) = 1/n, one-
> time system is not perfectly secure unless P(M) = 1/n.

No. P(M) = 1/n is in no way required for perfect secrecy by the
Shannon definition, even if he used a case where P(M) = 1/n as an
example in part of his proof.

Perfect secrecy requires only PE(M) = P(M), as is clear from the
definition you yourself quote.

It can be shown that P(M) = 1/n is the "worst case", and if
P(M) = 1/n and PE(M) = 1/n also, then for any other set of values for
P(M), PE(M) will also not be changed. This is because P(M) = 1/n is
the state of minimum advance information about the messages.
Regardless of the probability distribution of the plaintexts,
P(C) = 1/n for each of the n possible ciphertexts in the
one-time-pad. So the probability distribution of the ciphertexts
remains absolutely identical, regardless of whether all plaintexts
are equally likely or only one plaintext is possible.

Since the plaintext is powerless to change anything about the
ciphertext, the ciphertext is powerless to reveal anything about the
plaintext, because it is the same, exactly, for every probability
distribution of plaintexts.
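This invariance can be verified exactly; a small sketch in Python (the
skewed prior is an arbitrary illustration):

```python
from fractions import Fraction

def ciphertext_dist(prior, n):
    """Exact distribution of C = (M + K) mod n, with K uniform and independent of M."""
    dist = [Fraction(0)] * n
    for m, p_m in enumerate(prior):
        for k in range(n):
            dist[(m + k) % n] += p_m * Fraction(1, n)
    return dist

n = 4
skewed = [Fraction(97, 100), Fraction(1, 100), Fraction(1, 100), Fraction(1, 100)]
uniform = [Fraction(1, n)] * n
# The ciphertext distribution is uniform either way: the plaintext
# distribution leaves no trace in the ciphertext.
assert ciphertext_dist(skewed, n) == ciphertext_dist(uniform, n) == [Fraction(1, n)] * n
```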

John Savard

Unruh
Oct 28, 2007, 3:15:26 PM
wangyong <hel...@126.com> writes:

Analysed by whom? You? But you are hardly a disinterested party, and
it is well known that people with a stake in the outcome accept much
weaker proofs in favour of their preferred outcome than they do in
opposition.

By the way, your quoting style stinks. The standard on netnews is to
quote by placing a > at the beginning of the line you are quoting.
That way your text and the quoted text are clearly distinguishable,
and the level of quoting is also clear.

Placing a ---- in front of your new contribution is bad because, as
in the example above, you forgot to place it in front of your third
sentence.

Sebastian G.
Oct 28, 2007, 5:09:28 PM
wangyong wrote:

> Probably because OTP perfect secrecy is so simple,
> i.e. Shannon's proof is very straightforward with today's knwledge.


You can write it even more elegantly using simple group theory.

The OTP is an addition + within a group G using a random element. For
any fixed first summand, addition by it is a bijection of G onto
itself, so the equidistribution of the second summand implies the
equidistribution of the sum. Since the first summand plays no role
here, P(M) = P_E(M).

q.e.d.

BTW, since this proof can be reversed, one can show that every cipher
with the perfect secrecy property can be written as an addition
within a group with an equidistributed second element, and thus must
be an OTP; so all claims of a second perfect-secrecy cipher besides
the OTP are void.
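The group argument can be spot-checked for the XOR group on bytes:
for any fixed m, the map k -> m XOR k permutes the group, so a
uniform key yields a uniform sum:

```python
# XOR on 8-bit values is addition in the group (Z/2Z)^8.
group = list(range(256))
for m in (0x00, 0x5A, 0xFF):           # a few fixed "messages"
    images = sorted(m ^ k for k in group)
    # k -> m XOR k is a bijection, so a uniform k yields a uniform
    # ciphertext whose distribution carries no information about m.
    assert images == group
```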

Kristian Gjøsteen
Oct 28, 2007, 5:42:37 PM
Sebastian G. <se...@seppig.de> wrote:
>so all claims of
>a second perfect secrecy cipher beside the OTP are void.

This is, as I'm sure I've told you before, perfect nonsense.

I just checked. I told you this Jan 14, 2006.

--
Kristian Gjøsteen

Simon Johnson

unread,
Oct 28, 2007, 6:59:16 PM10/28/07
to

> Analysed by whom? You? But you are hardly a disinterested party and it is
> well known that people with a stake in the outcome accept much weaker
> proofs in favour of their prefered outcome than they do in opposition.
>

I personally find this one of the most interesting things about humanity in
general.

We're much better at attacking other people's ideas than our own.

I think all half-decent programmers have had at least one humbling
experience where their proposed approach has been shown to be dim-witted,
suboptimal or at worst downright incompetent.

I think that our inability to attack our own ideas is directly evolved.
In the past, and perhaps more so today, modeling other people's
intentions and finding faults in their thoughts is far more important
than ensuring that your own thoughts are self-consistent.

As such, when you have incomplete data on a situation, boldness in the face
of ignorance is not as dumb as it sounds.

Only five hundred years ago, half of all people born died in their
childhood. In other species the survival rate is often worse. Often,
being passive is more deadly than being bold in the face of an unknown
threat. This trait for boldness has been selected for down countless
generations, and the result of this process has been that people have
developed "conviction."

In other words, the more ignorant you are on a subject, the more strongly
you hold some unsupported fact to be true. We even go so far as to ask for
this in our politicians. After all, in politics to change your mind is a
sign of weakness.

So is it any surprise that people can utterly convince themselves of the
security of their own ciphers or pet theories? I think not.

Simon.


Paul Rubin

unread,
Oct 28, 2007, 7:39:58 PM10/28/07
to
Simon Johnson <Simon....@gmail.com> writes:
> We're much better at attacking other people's ideas than our own....

> So is it any surprise that people can utterly convince themselves of the
> security of their own ciphers or pet theories? I think not.

But it should be the other way in cryptography. Attacking other
people's ideas (e.g. by finding ways to break AES) is far more
worthwhile than concocting bogus new ciphers of our own.
Unfortunately, the clueless who like to attack other people's ideas in
every other subject do the opposite once they enter a field where
attacking existing ideas is better than spewing out new ones (which
are usually bogus). I think it's just a tendency to do whatever's
easier...

Simon Johnson

unread,
Oct 28, 2007, 7:46:59 PM10/28/07
to
Kristian Gjøsteen wrote:

An honest question: why is his argument bogus? It seemed pretty watertight
to me.

Simon.

Simon Johnson

unread,
Oct 28, 2007, 7:58:21 PM10/28/07
to
Paul Rubin wrote:

I completely agree. The problem with cryptography is that it is hard.
Really, really hard. I've been following cryptography for many years now
and it doesn't get any easier.

I still consider myself rather ignorant - take me more than ten feet off the
beaten track of questioning and I get lost rather quickly.

What is the hope of me making any contribution towards breaking AES or any
mainstream cipher? None! Zero! Nada! Zilch.

How easy is it for me to create a cipher I'm convinced is totally secure?
Very easy!

That ease is very seductive. Given that paranoia is a fundamental part of
threat modeling, it is easy to convince yourself that no public
block-cipher can be trusted.

Therefore it is easier to trust your own flawed design.

I'm not saying that's right, I'm just saying that as a race we're
predisposed to it.

Simon.


wangyong

unread,
Oct 28, 2007, 9:50:17 PM10/28/07
to
For fuck's sake, learn how to attibute and quote messages.

you are reasonable.

attibute ???????????????????

wangyong

unread,
Oct 28, 2007, 9:57:29 PM10/28/07
to
No. P(M) = 1/n is in no way required for perfect secrecy by the
Shannon definition, even if he used a case where P(M) = 1/n as an
example in part of his proof.

Perfect secrecy requires only PE(M) = P(M), as is clear from the
definition you yourself quote.


It can be shown that P(M) = 1/n is the "worst case", and if P(M) = 1/n,
and PE(M) = 1/n also, then for any other set of values for P(M), PE(M)
will also not be changed. This is because P(M) = 1/n is the state of
minimum advance information about the messages.

---------------------------------------- PE(M) = 1/n is Shannon's
result; see the 1949 paper.

Regardless of the probability distribution of the plaintexts, P(C) = 1/n
for each of the n possible ciphertexts in the one-time-pad. So the
probability distribution of the ciphertexts remains absolutely
identical, regardless of whether all plaintexts are equally likely, or
only one plaintext is possible.

----------------------------------- you are unreasonable; we should
consider all the conditions. I never deny P(C) = 1/n.

Since the plaintext is powerless to change anything about the
ciphertext, the ciphertext is powerless to reveal anything about the
plaintext, because it is the same, exactly, for every probability
distribution of plaintexts.

----------------------------- Is the posterior equal to the prior? That
is the key.
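For reference, the posterior the question points at can be computed
directly. Under the model assumed in Shannon's proof (key uniform on the
group and independent of the message), a short enumeration — my own sketch,
not part of the original posts — gives P(M|C) = P(M) for every ciphertext:

```python
from fractions import Fraction

# A toy OTP over Z/4: M, K, C in {0, 1, 2, 3}, C = (M + K) mod 4.
n = 4
# Non-uniform prior P(M), e.g. gleaned from the communication context.
p_m = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(1, 6), 3: Fraction(0)}
p_k = Fraction(1, n)  # key uniform and independent of M

def posterior(c):
    """Bayes: P(M=m | C=c) = P(m) P(K = c-m) / sum_m' P(m') P(K = c-m')."""
    joint = {m: pm * p_k for m, pm in p_m.items()}  # exactly one key maps m to c
    p_c = sum(joint.values())                        # = 1/n for every c
    return {m: j / p_c for m, j in joint.items()}

for c in range(n):
    assert posterior(c) == p_m  # posterior equals prior for every ciphertext
```

Whether this model captures all the conditions in play is exactly what the
thread is arguing about; the computation itself is the standard one.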


wangyong

unread,
Oct 28, 2007, 10:19:33 PM10/28/07
to
>This "hot discussion" is just more of your embarrassing spew in news
>groups.
>I will tell you directly that you are wrong.
>--------you are unreasonable.
>If you still think there is a flaw in Shannon's security proof then
>write it up and submit it to a math or cryptography journal and wait
>for
>the rejections.
>----- some papers were published, though I have met reviewers like
>you. But their proofs were analyzed and found wrong.


Analysed by whom? You? But you are hardly a disinterested party and it is
well known that people with a stake in the outcome accept much weaker
proofs in favour of their preferred outcome than they do in opposition.

------------- that is just windbaggery without any reason.


By the way, your quoting style stinks. The standard on netnews is to quote
by placing a > at the beginning of the line you are quoting. Thus your text
and the quoted text are clearly distinguishable and the level of quotes is
also clear.


Placing a ---- in front of your new contribution is bad, because, as in the
example above, you forgot to place it in front of your third sentence.

----------------what is a,

The standard on netnews is to quote
by placing a > at the beginning of the line you are quoting.

Placing a ---- in front of your new contribution is bad, because, as in the
example above, you forgot to place it in front of your third sentence.

------ can you quote it and state it clearly?


wangyong

unread,
Oct 28, 2007, 10:20:40 PM10/28/07
to
On Oct 29, 5:42, Kristian Gjøsteen <kristiag+n...@math.ntnu.no>
wrote:
> Kristian Gjøsteen

This is, as I'm sure I've told you before, perfect nonsense.

I just checked. I told you this Jan 14, 2006.

------- you are cheating; how did you tell me?

wangyong

unread,
Oct 28, 2007, 10:29:25 PM10/28/07
to
this paper is helpful to the discussion:


On Relativity of Probability
Yong WANG
(School of Computer and Control, Guilin University of Electronic
Technology, Guilin City, Guangxi Province, China, 541004)
hel...@126.com
Abstract-This paper points out limitations of present probability
theory, which fails to recognize the following characteristics of
probability: Firstly, the division between prior probability and
posterior probability is not absolute, in that prior probability is
actually conditional probability; Secondly, probability is not
absolutely fixed, and it may be random; Thirdly, probability evolves
as conditions accumulate; Finally, probability is complicated.
Meanwhile, it analyzes some misuses of probability theory in
application.
Index Terms-probability theory, relativity, conditional probability,
information theory

MR:60A10
1. Introduction
The present probability theory cannot solve all problems arising from
probability. For example, the probability of an event may be different
under different conditions or for different people, yet probability
theory does not say how to fuse and reconcile such probabilities.
These issues have not been studied and solved due to the limitations
of probability theory itself. The present probability theory is based
on the Kolmogorov axiomatic system [1]. The system has its own
limitations, as scholars such as B. de Finetti and Xiong Daguo have
pointed out [2]. But the relativity of probability is not recognized
there: probability is treated as fixed and absolute, and thus the
theory has limitations.

2. Relativity of probability and limitations of probability theory
The present probability theory doesn't take into consideration that
conditions are varied. For instance, people may hastily give a
conditional probability value without hesitation. However, the value
is usually unknown and may be random. It is impossible for us to solve
the problem of probability by present probability theory under
multiple conditions if the conditional probability is unknown. It is
possible that the value of probability is random, but we usually
describe probability as a fixed value, which leads to the fact that
probability is always considered as a fixed value while probability
theory is applied. But only in limited conditions, the probability is
fixed. That results in many limitations. Similar problem also exists
in information theory [3]. We analyze limitations of probability
theory and relativity of probability from the following perspectives:
Firstly, prior probability and posterior probability are absolutely
divided in probability theory; in fact, the division is relative.
Prior probability can be elicited only under certain conditions. There
must be some known conditions, or the elicitation of probability is
groundless. If we have no idea of an event, we cannot know how many
possible random values there are, let alone the corresponding
probabilities of those values. Indeed, the prior probability
distribution itself can be regarded as one condition. In addition,
there is also the case of more than one condition, in which the order
of the conditions can be interchanged. For
example, some conditions can influence the sex of a newborn. Under
condition A, the probability of a male baby P(A) is c, and under
condition B, the probability of a male baby P(B) is d. When we want to
obtain the probability of a male baby under the two conditions, P(A)
and P(B) can each be considered a prior probability, for they are
parallel. Therefore, the prior probability we have obtained is based
on the known conditions, and it is also a conditional probability.
Under different conditions we can get different probabilities, so
probability is relative to the corresponding conditions.
The understanding that probability is relative to corresponding
various conditions is helpful to analyze and recognize each existing
condition consciously and carefully so as to differentiate various
conditions rather than get confused. Actually, many existing
conditions cannot be recognized, because they are covert.
Secondly, probability is not fixed under some conditions. Probability
depicts random uncertainty, but it may be uncertain and inconstant in
itself; that is to say, it may be a random variable. In reality, there
are more uncertain events than certain ones. A fixed value is only a
special case of a random variable. Probability has its own random
uncertainty, just as a derivative has its own derivative, and
higher-order derivatives, etc. Take an example to explain the
uncertainty of probability: we gain the pass rate of a product by
sampling inspection. The true pass rate may not be the same as the
gained pass rate, for the probability gained by sampling inspection is
random.
Relative to the gained pass rate, the true pass rate is a random
variable distributed round the gained one if we have no more idea of
the product. Take another example: if we get the corresponding
probabilities of the possible results of an event in an unreliable
way, then those probabilities are unreliable, and each true
probability may have more than one value; so the true probability is
still a random variable around the probability obtained in the
unreliable way, and the probability is therefore uncertain at that
time.
Some people hold the viewpoint that probability is certain, due to the
limitation of probability theory. What's worse, sometimes they do not
differentiate changes of conditions, and so take the probability under
one condition as the probability under another condition, which leads
to mistakes. Being aware that probability has the character of
uncertainty helps us free ourselves from the limitations and framework
of traditional probability theory, and not take random values as fixed
values. In reality, the conditions and information we get are usually
not absolutely reliable, so the true probability is random to a
certain degree. That is to say, we replace the true probability with
the probability based on the unreliable conditions and information,
which is relative and unreliable.
Thirdly, probability keeps evolving with the increase of known
conditions, and is relative to our known conditions. Both Popper and
Darwin embraced evolutionary thinking; probability likewise evolves
with conditions. Just as human beings usually move from unknown to
known in acquainting themselves with events, under most circumstances
they realize a probability distribution from uncertain to certain. The
author brings forth the issue of the geometric mean of probability.
When certain qualifications are satisfied, by normalizing the
geometric mean of the probabilities under different conditions, we can
work out the probability in the circumstance where all the conditions
exist [4]. In this way, if any one of the conditions can ascertain
that the occurrence probability of one possible result is 1 and that
of the other results 0, the final probability of that result always
remains 1. The present probability theory does not realize this
evolution thoroughly, and lacks a way of probability computation that
can fuse different
conditions. People finally realize objects from uncertain to certain
as the known conditions change, and probability also changes with the
known conditions. Understanding the relativity of this gradual
evolution helps us understand and apply probability theory better, and
realize that the change itself from unknown to known and from
uncertain to certain is also a sort of evolution of probability. In
reality, the probability gained from imperfect conditions is
one-sided. The probability is relative to our known conditions and is
seldom the same as the true probability. To illustrate the problem, we
can analyze an example. We hope to determine the probability of an
event m when certain conditions occur together; these conditions may
be relevant or irrelevant to the probability. We can select all of the
relevant conditions, assumed to be c1, c2, ..., cn. We can assume that
the probability is determined by those conditions; then the
probability may be expressed as
P(m) = f(c1, c2, ..., cn)
For a sophisticated case, P(m) may be a random variable. For
convenience of analysis, we consider the value of the above function
to be fixed. When certain conditions are still unknown, the
probability P(m) itself is not fixed, and the probability gained from
the imperfect conditions is not reliable. The more conditions we know,
the more reliable the corresponding probability is. Because the
conditions are imperfect, the probability gained from those conditions
is not the probability gained from the complete conditions. Therefore,
conflicts under these imperfect conditions are understandable, as they
do not perfectly represent the probability under complete conditions.
In fact, the conditions are usually imperfect in probability theory.
It is not rigorous to take the probability in that case as the
probability under complete conditions. In practice, it is easy to
ignore the existence of this substitution, and thereby confusion and
absurdity may appear. For instance, it is certain whether it rained or
not in some place yesterday, but when we do not hold complete
conditions and information about that, we can only get randomly
uncertain result. The result cannot be taken as the probability of
rain yesterday. Strictly speaking, it should be taken as the
probability of rain under all the grasped conditions and information.
As a stopgap, we expediently take the probability as the probability
of rain yesterday for the moment when no more information is given.
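The fusion rule mentioned above — normalizing the geometric mean of the
probabilities obtained under different conditions, per reference [4] — can
be sketched in code. This is my own reading of the rule as stated; the
exact formula in [4] may differ, so treat it as an assumed illustration.

```python
import math

def fuse(dists):
    """Normalized geometric mean of several probability distributions
    over the same outcomes -- a sketch of the fusion rule described in
    the text (the exact rule in reference [4] may differ)."""
    outcomes = dists[0].keys()
    n = len(dists)
    raw = {m: math.prod(d[m] for d in dists) ** (1.0 / n) for m in outcomes}
    total = sum(raw.values())
    return {m: v / total for m, v in raw.items()}

# Two conditions give different estimates for the same event.
p_a = {"male": 0.6, "female": 0.4}
p_b = {"male": 0.4, "female": 0.6}
print(fuse([p_a, p_b]))  # symmetric inputs fuse to 0.5 / 0.5

# If one condition makes an outcome certain (probability 1), the fused
# probability of that outcome stays 1, as the text notes.
p_certain = {"male": 1.0, "female": 0.0}
print(fuse([p_a, p_certain]))
```

The zero-preserving property of the geometric mean is what makes a certain
condition dominate: any factor of 0 zeroes out the product for that outcome.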
Finally, probabilities in reality are much more complicated than those
in the present probability theory, and we usually get only a
relatively true probability. For instance, the probability of the
rise-fall of Stock Market is very complicated. To begin with, suppose
some experts sum up a law for the probabilities of the rise-fall of
Stock Market on the basis of the current situation and the attitudes
of shareholders; the rule of Stock Market would change once people
know the law and apply it to share trading. For example, all
shareholders might simply follow the trend, which may lead to a state
where Stock Market keeps rising or falling. However, if they find out
through research that Stock Market will turn to the opposite when
reaching its extreme, they will change their ways of share trading:
they will trade shares as soon as stocks rise to an appropriate point.
By doing that, the law of Stock Market changes, and then the
corresponding probability comes to
change. Then, let's still take Stock Market for example. If a
shareholder doesn't know others' decisions, especially their future
decisions, he may pay much attention to the rise-fall of Stock Market
and afterwards work out the probability of the rise-fall. However, if
he knows future decisions of other shareholders, he will optimize his
decision according to the corresponding decisions, which results in
change of the rise-fall probability of Stock Market. Finally, because
the fluctuation of Stock Market depends on many uncertain factors,
such as the shareholders and the operational positions of the listed
companies, and because these factors are interactive, the rise-fall
probability of Stock
Market is also a value with multiple uncertainty. The above-mentioned
examples are sufficient to show that probability is complicated.
However, because the present probability theory does not take into
consideration the complexity of problems of probability, some
expressions such as the Full Probability Formula and Bayes' formula
take conditional probability as a known and fixed value, which leads
to misunderstanding by some scholars and then results in mistakes when
they apply the expressions in a clumsy manner.
The aforesaid analyses also show that conditions themselves are
various. For example, conditions can be results of experiments,
results of sampling inspection, theorems, rules, knowledge, common
sense, grammar, language translation, encoding schemes, reliability of
information, etc. Some conditions, such as encoding scheme, grammar
and definition, are commonly neglected because they are quotidian.
Likewise, conditions that are implicit may be neglected. If we fail to
recognize such conditions, paradoxes can arise.
3. Some problems in application of probability theory
Traditional probability theory does not explicitly declare itself an
absolute probability theory, but it is trapped in the mindset of
absolute probability, both in theory and in application, due to a lack
of recognition of the relativity of probability.
Information theory is an important application field of probability
theory. In information theory, Shannon usually adopted the mode of
traditional probability theory when he utilized probability theory.
Therefore, information theory has its limitations in many fields. For
instance, when computing the conditional entropy of X given Y, Shannon
only thought of Y as a known condition, and did not list the
conditional probabilities as known conditions but placed them into the
formula as known values. Conditional entropy has other problems [5].
Additionally, another problem arises: when many conditions coexist,
how do we compute the final conditional entropy when the conditional
probabilities are unknown and only the probabilities under each
condition alone are known? It is a problem that Shannon's information
theory does not take into account. Because information theory only
considers the case of reliable communication, many problems, such as
the reliability of information and the uncertainty of probability, can
be ignored, but these problems must be solved when considering
information issues in practice.
Shannon realized the uncertainty of events, but did not realize that
probability itself may also be uncertain. For instance, if the
information is unreliable, it is possible that the probability is not
a fixed value but a random variable; that is, the random uncertainty
in Shannon's information theory is itself subject to random
uncertainty. When information is imperfect, the probability may be a
random variable too. The case of unreliable and imperfect information
widely exists. Information theory cannot effectively solve the problem
in that case, partly because it neglects this multiple random
uncertainty. The present hyper-entropy theory in essence researches
this problem as well. Hyper entropy means the entropy of entropy,
namely the random uncertainty of random uncertainty. However, it is
not certain whether the uncertainty of probability is accurately
denoted by hyper entropy.
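The paper's pass-rate example can be simulated to make this "uncertainty
of a probability" concrete: a rate estimated by sampling inspection is
itself a random variable with nonzero spread. The sketch below is my own
illustration only, not the hyper-entropy formula.

```python
import random
random.seed(1)  # fixed seed so the run is reproducible

true_rate = 0.9  # the (unknown) true pass rate of the product

def estimate(n=50):
    """Estimate the pass rate from a sample of n inspected items."""
    return sum(random.random() < true_rate for _ in range(n)) / n

# Repeating the inspection gives a different estimate each time:
# the estimated probability is itself a random variable.
estimates = [estimate() for _ in range(1000)]
mean = sum(estimates) / len(estimates)
var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
print(mean, var)  # mean near 0.9, variance clearly above zero
```

The nonzero variance of the estimates is the second-order randomness the
text describes: uncertainty about the value of the probability itself.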
In the Full Probability Formula and Bayes' formula, all probabilities
share the same precondition; that is to say, the preconditions of the
prior probabilities are the same, and subsequent conditions are built
upon the prior condition. That is a presupposition for the formulas to
hold, but it is not directly mentioned in probability theory. What's
more, in probability theory the conditions are not well recognized and
consciously differentiated. That may lead to mistakes. The author once
pointed out [6, 7] that Bayes' formula is misused when two different
conditions are not differentiated.
4. Conclusion
Only when we have a good command of all the conditions related to an
unknown event and their relations, such as their conditional
probabilities, and when these conditions are reliable, can we know the
probability of the unknown event. However, in reality the probability
we get is usually relative and different from the true probability,
because of unreliable and incomplete conditions and information. This
paper analyzes the relativity of probability, offering a starting
point to induce more valuable viewpoints, for many new problems remain
to be studied, for example, the theory of the reliability and
completeness of information. Beyond probability theory, these
questions universally exist in other fields, especially in the field
of information technology. The relativity put forward in this paper is
not only helpful to the development of probability theory, but can
also effectively rectify some current misuses and shallow
understandings of probability theory, which can promote the
development and improvement of other applied subjects.


wangyong

unread,
Oct 28, 2007, 10:30:49 PM10/28/07
to
Analyses on the Origins of Shannon's Blemish about OTP

Yong WANG
(School of Computer and Control, Guilin University of Electronic
Technology, Guilin City, Guangxi Province, China, 541004)
hel...@126.com
Abstract: This paper further analyzes the origin of Shannon's proof
that the one-time-pad is perfectly secure, based on the prior
analyses. The limitations of information theory and probability theory
are analyzed. The key lies in that the two theories take probability
as a fixed value and do not carefully recognize conditions. It is
pointed out that these are the origins of the blemish.
Keywords: one-time-pad; cryptography; perfect secrecy; information
theory; probability theory
1. Introduction
Shannon put forward the concept of perfect secrecy and proved that the
one-time-pad (one-time system, OTP) was perfectly secure [1, 2]. For a
long time, OTP has been thought to be unbreakable and is still used to
encrypt high-security information. In literature [3], an example was
given to argue that OTP was not perfectly secure. In literature [4],
detailed analyses of the mistake in Shannon's proof were given. It
was proven that more requirements were needed for OTP to be perfectly
secure and that homophonic substitution could make OTP approach
perfect secrecy [5]. Literature [6] analyzed the problem and gave ways
to disguise the length of the plaintext. In literature [7], a
cryptanalysis method based on probability was presented, and the
method was used to attack the one-time-pad. Literature [4] considered
a special understanding under which OTP could be thought perfectly
secure if some added conditions were satisfied. In literature [8],
that special understanding was confirmed not to be Shannon's notion.
It was pointed out that the conditions in OTP could not coexist: when
all the conditions were considered, some of them must change, so it
was not proper to use these conditions when computing the final
posterior probability. The above works provoked extensive debates, so
the problem and its origins should be further analyzed in detail.
2. Debate and Analysis on Perfect Secrecy of OTP
As these works relate to Shannon's mistakes, our analyses should be
prudent and long-tested. We have sent the papers to many scholars,
released some of the papers on the internet, discussed the problem
with some scholars, carefully analyzed all the opposing views and
replied to all of them. Some opposing views stemmed from
misapprehending my papers, such as confusing a proof with a reduction
to absurdity, confusing my conclusion with Shannon's conclusion, or
taking the probability obtained when considering only partial
conditions as the posterior probability. Some opposing views cited
proofs of the perfect secrecy of OTP. It was found that these proofs
were largely identical, with minor differences, and were wrong because
they took different conditions as the same conditions [9].
There is a kind of opposing view that directly and erroneously
believes that the probability is unchanged; that is to say, since the
plaintexts already have a prior probability distribution, that
distribution can change no longer. It can be shown that this statement
finally brings about a contradiction when the ciphertext is considered
as a fixed value and the keys as equiprobable, and it is also
inconsistent with Shannon's argumentation, for he got the result that
the posterior probability is equiprobable [2]. But in-depth analyses
of the problem are still necessary.
Firstly, if this argument were established, we would not have to prove
anything: we could directly get the conclusion that after the
ciphertext is intercepted the posterior probability of each plaintext
equals its prior probability, since the probability does not change.
Namely, the cryptosystem would automatically be perfectly secure.
Secondly, we can also understand it from the view of probability. The
prior probability, which we call P(x), is gained under the condition
that only the context of the communications is known; subsequently,
when we gain the ciphertext and the cryptography-related conditions,
which we call y, the final probability can be seen as the conditional
probability P(x | y). The final conditional probability is not the
same as the prior probability unless the two events are independent.
It seems Shannon proved that the two events are independent. But
Shannon only took the value of the ciphertext as y and ignored the
existence of the condition that the ciphertext is a fixed value, not a
random variable. It is just this condition that influences the
probabilities of the plaintexts in company with the cryptosystem.
Thirdly, our knowledge of events is often unreliable and incomplete.
Although sometimes the event is certain, if our knowledge of the event
is incomplete, we may get random results [9]. For example, it is
certain whether it rained or not in some place yesterday, but when we
do not hold complete conditions and information about that, we can
only get a randomly uncertain result. The result cannot be taken as
the probability of rain yesterday. Strictly speaking, it should be
taken as the probability of rain under all the grasped conditions and
information. As a stopgap, we expediently take that probability as the
probability of rain yesterday for the moment when no more information
is given. The probability under this condition is seldom equal to the
probability under that condition. In the case of OTP, based on
communication contexts, we can gain a prior probability distribution
of the plaintexts. As we have no further understanding and no further
related information about the plaintext, we have to use the prior
probability distribution for the moment [10]. After gaining the
ciphertext, we can try to get more information from the ciphertext and
the probability distribution of keys. At that time the information is
still incomplete, but a cryptanalyst will not abandon such information
and will make full use of these conditions to obtain a more reliable
probability. Considering only that the ciphertext is given, that there
is a one-to-one correspondence between the keys and the plaintexts,
and that all keys are equally likely, we can deduce that the
plaintexts are equally likely. That is inconsistent with the prior
probability, and hence a fusion and compromise of the probabilities is
needed. The compromise probability may be more complete and reliable,
but it is not equal to the prior probability.
Fourthly, we can illustrate from a new perspective that we can get
different probabilities under different conditions and that the
probability of an event changes with the corresponding conditions.
Suppose we want to determine the probability of an event m in a
situation where certain conditions occur together. These conditions
may be relevant or irrelevant to the probability. We can select all of
the influencing conditions, assumed to be c1, c2, ..., cn. We can
assume that the probability is determined by the n influencing
conditions c1, c2, ..., cn; then the probability can be expressed as
P(m) = f(c1, c2, ..., cn)
For a sophisticated case, P(m) may be a random variable. For
convenience of analysis, we consider the value of the above function
to be fixed. When certain conditions are still unknown, the
probability P(m) itself is not fixed; we may get the expectation of
P(m) under the imperfect conditions. But the expectation (probability)
gained from the imperfect conditions is not reliable and is seldom
equal to the probability gained from the complete conditions. The more
conditions we know, the more reliable the corresponding probability
is. Because the conditions are imperfect, the probability gained from
those conditions is not the probability gained from the complete
conditions. Therefore, conflicts under these imperfect conditions are
understandable, as they do not perfectly represent the probability
under complete conditions. In fact, the conditions are usually
imperfect in probability theory. It is not rigorous to take the
probability in that case as the probability under complete conditions.
In practice, it is easy to ignore the existence of this unequal
substitution, and thereby confusion and absurdity may appear.
Another kind of opposing view argued that the plaintext and ciphertext
in the OTP are completely independent, and thus OTP is perfectly
secure. It argued that any ciphertext might be regarded as the same
for OTP, and that therefore the plaintext and ciphertext in the OTP
are completely independent. Independence can be considered from the
angle of probability theory or from the angle of usual understanding,
and this kind of view confuses the two angles. From the angle of
probability theory, if the plaintext and ciphertext in the OTP were
completely independent, then OTP would be perfectly secure. But from
the angle of probability theory, that any ciphertext may be regarded
as the same for OTP does not mean that the plaintext and ciphertext in
the OTP are completely independent, as probability theory gives a
strict definition of independence. The above conclusion may be correct
when considered from the angle of usual understanding. For example,
any ciphertext changes the probability of the plaintext in the same
way. In this case, any ciphertext may be regarded as the same, for it
gives the same influence on the probability of the plaintext. But this
does not mean perfect secrecy, for the probability changes. It is the
same in OTP.
3. Analyses of the origin of the mistakes in probability theory and information theory

The mistakes in Shannon's proof are related to certain limitations of
probability theory; moreover, Shannon did not realize these
limitations when developing his information theory.
From the view of probability theory, he realized the random
uncertainty of events and expressed this uncertainty with probability,
but he ignored the random uncertainty of probability itself. Though it
is never directly stated that probability is a fixed value [11], it
can be seen from many formulas in probability theory and information
theory that probability is always taken as a fixed value; otherwise
the formulas would be impossible to compute. For example, the formula
for entropy cannot be computed if the probability is a random
variable. The case where probability is a random variable is the
general one, since a fixed value is only a special case of a random
variable.
For instance, a probability that comes from incomplete or unreliable
conditions has more than one possible value; it is not fixed, so the
probability is a random variable and correspondingly has random
uncertainty [10]. The expectation of the probability as a random
variable is only a simple static summary with merely local
significance; its concrete distribution and its degree of
concentration are of great significance to the compromise of
probabilities and the reliability of information. Because probability
is always treated as a fixed value in probability theory and
information theory, the two theories cannot solve problems in which
the probability is a random variable. This has also led a few scholars
to believe that once a probability is given, it can never change,
since the probability is fixed. Generally speaking, with fixed
probabilities, neither the analysis of the reliability of information
itself nor the fusion of unreliable and incomplete information is
possible. Most information in reality is not absolutely reliable or
complete, so different pieces of information must be compromised and
fused. As information is expressed by probability, information is
unchangeable if the corresponding probability is considered a fixed
value. Taking probability as a fixed value is one of the fundamental
reasons why information theory cannot be used to study the reliability
of information itself or information fusion, even though it can be
used to study reliable communication.
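The claim that the entropy formula presupposes a fixed probability can
be illustrated with a small sketch. The numbers below are
hypothetical: when the probability p is itself a random variable, the
entropy of the averaged distribution and the average of the entropies
disagree, so the usual formula has no single well-defined input.

```python
import math

# Binary entropy (in bits) for a fixed probability p.
def H(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Suppose the probability itself is a random variable:
# p = 0.1 or p = 0.9, each with probability 1/2 (hypothetical values).
values = [0.1, 0.9]
weights = [0.5, 0.5]

# Entropy of the expected (averaged) distribution ...
H_of_mean = H(sum(w * p for w, p in zip(weights, values)))  # H(0.5) = 1.0

# ... versus the expected entropy over the random probability.
mean_of_H = sum(w * H(p) for w, p in zip(weights, values))  # about 0.469

print(H_of_mean, mean_of_H)
```

The two quantities differ, so treating an uncertain probability as if
it were a single fixed value changes the computed entropy.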
We usually obtain information from the incomplete conditions we know,
and as a stopgap we treat information under incomplete conditions as
information under complete conditions. When incomplete information
obtained under incomplete conditions is taken as information under
complete conditions, the information is unreliable, for it is not the
same as the information under complete conditions; such incomplete
information can therefore be regarded as unreliable information. When
we know nothing at all about an event, we cannot even know how many
possible random values there are, let alone the corresponding
probabilities of those values. Therefore the prior probability we
obtain is based on the known conditions and is itself a conditional
probability, and both information and probability are relative to our
known conditions. But this unequal substitution is never pointed out
directly in probability theory or information theory. In practice, the
unequal substitution of the incomplete for the complete is easily
overlooked, and confusion and absurdity may result.
When many conditions that influence the probability of an event are
considered, and the conditions are parallel rather than in series,
there may be more than one prior or posterior probability according to
present probability theory. For example, some conditions can influence
the birth gender of a baby. Under condition A, the probability of a
male baby P(M|A) is c, and under condition B, the probability of a
male baby P(M|B) is d. A question now arises: if both conditions hold,
how can we obtain the probability of a male baby P(M|A, B)? Unlike the
typical problems of present probability theory, here the two
conditions are parallel, for when A is considered, B is not, and when
B is considered, A is not, and we do not know the conditional
probability P(A|B) or P(B|A). It is the same in the example of the
one-time pad: the final posterior probability, if it is considered, is
the compromise of the two probabilities under the two conditions. The
problem of such compromise is not settled in present probability
theory and is very complex. When there are parallel conditions, there
may be more than one probability, and the probabilities may conflict.
Uncertainty theory and information fusion theory attempt to solve the
problem, but they are not rigorous, their algorithms are approximate
rather than exact, and some of the algorithms even contain
absurdities. It is necessary to develop an exact theory of this kind
from the perspective of probability theory.
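The indeterminacy in the birth-gender example can be checked
concretely. In the sketch below the values c = 0.6 and d = 0.4 are
hypothetical: two explicit joint distributions over the four cells
formed by conditions A and B both satisfy P(M|A) = 0.6 and
P(M|B) = 0.4, yet they give different values of P(M|A, B), so the two
parallel conditions alone do not determine the combined probability.

```python
# Two joint distributions over the four cells (A yes/no, B yes/no),
# each cell carrying a weight w and a male fraction m. Both satisfy
# P(M|A) = 0.6 and P(M|B) = 0.4 (hypothetical values of c and d),
# but they disagree on P(M|A, B).

def conditionals(cells):
    """cells: {(a, b): (weight, male_frac)} -> (P(M|A), P(M|B), P(M|A,B))."""
    def cond(pred):
        num = sum(w * m for (a, b), (w, m) in cells.items() if pred(a, b))
        den = sum(w for (a, b), (w, m) in cells.items() if pred(a, b))
        return num / den
    return (cond(lambda a, b: a),          # P(M | A)
            cond(lambda a, b: b),          # P(M | B)
            cond(lambda a, b: a and b))    # P(M | A, B)

# Equal weight 0.25 on each cell; only the male fractions differ.
dist1 = {(1, 1): (0.25, 0.4), (1, 0): (0.25, 0.8),
         (0, 1): (0.25, 0.4), (0, 0): (0.25, 0.5)}
dist2 = {(1, 1): (0.25, 0.8), (1, 0): (0.25, 0.4),
         (0, 1): (0.25, 0.0), (0, 0): (0.25, 0.5)}

for dist in (dist1, dist2):
    pMA, pMB, pMAB = conditionals(dist)
    print(round(pMA, 3), round(pMB, 3), round(pMAB, 3))
# Both rows show P(M|A) = 0.6 and P(M|B) = 0.4,
# but P(M|A,B) is 0.4 in the first and 0.8 in the second.
```

Knowing only the two marginal conditional probabilities is therefore
not enough; extra information about how A and B interact is needed,
which is exactly the compromise problem discussed above.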
In addition, probability theory does not recognize the complexity of
conditions. Some conditions may be hidden and may interact, so it is
necessary to carefully recognize and strictly distinguish each
condition, and to realize that the probabilities under different
conditions are not the same. For example, the probability may be easy
to calculate when only condition x or only condition y holds, but when
x and y coexist, calculating the corresponding probability is
difficult. This makes the ready-made formulas of probability theory
hard to apply when the conditional probability is unknown. Information
theory does not consider these issues either, so it cannot be used in
some fields such as information fusion and artificial intelligence.
Due to this problem, Shannon failed to discover and distinguish the
different conditions in his proof [10].
From Shannon's result PE(M) = 1/n in his example, we can see that
Shannon confused the posterior probability under all the conditions
with the incomplete probability obtained when only the ciphertext and
the OTP were considered (the prior probability, as a condition, was
not considered; otherwise the result is impossible unless the prior
probability happens to be P(M) = 1/n, which is uncommon). That may be
owed to the limitation of probability theory, which neglects the
discrimination and recognition of different conditions.
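The dependence on the prior described in this paragraph can be traced
in the standard Bayes computation. The sketch below uses a one-letter
pad over a 4-symbol alphabet with an invented non-uniform prior (both
are hypothetical choices for illustration): with a uniformly random
key, Bayes' theorem returns the prior P(M) as the posterior, and the
posterior equals 1/n for every plaintext only in the special case
where the prior itself is uniform.

```python
from fractions import Fraction

n = 4  # alphabet {0, 1, 2, 3}; encryption: c = (m + k) mod n, key uniform

# Invented non-uniform prior over plaintexts (hypothetical values).
prior = {0: Fraction(1, 2), 1: Fraction(1, 4),
         2: Fraction(1, 8), 3: Fraction(1, 8)}

def posterior(c):
    """P(M = m | C = c) by Bayes' theorem, with the key uniform on 0..n-1."""
    # For every m there is exactly one key k = (c - m) mod n that maps
    # m to c, so P(C = c | M = m) = 1/n regardless of c.
    likelihood = {m: Fraction(1, n) for m in range(n)}
    joint = {m: prior[m] * likelihood[m] for m in range(n)}
    total = sum(joint.values())  # P(C = c)
    return {m: joint[m] / total for m in range(n)}

post = posterior(c=2)
print(post == prior)  # True: the posterior reproduces the prior
```

So the standard computation gives PE(M) = 1/n for all plaintexts only
when the prior P(M) is itself 1/n, which is the point the paragraph
above makes about the prior being a necessary condition.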
In the case of the OTP, the uncertainty of the plaintext increases
after the compromise and fusion, but the conditional entropy does not
increase in information theory; this seems to contradict information
theory. In fact, Shannon's conditional entropy is just one kind of
weighted average of a series of conditional entropies, and it has been
pointed out that Shannon's conclusion is not absolutely correct [12].
From the above examples, we can see that the reliability and
completeness of information are not considered in information theory.
However, reliability and completeness are very important: information
is valuable only when it is reliable to a certain extent; otherwise it
is worthless. The relevant theories of information reliability and
completeness (such as uncertainty theory and information fusion) are
far from perfect and systematic, offering only approximate algorithms,
some of which even contain absurdities. Probability theory still lags
behind in the study of these questions.

4. Conclusion
This paper gives further analyses of the flaw in Shannon's proof and
points out its root, and the debates about the problem with several
experts and scholars are analyzed. The conclusion is that the OTP is
not, as commonly thought, perfectly secure and unbreakable;
nevertheless, it still has good statistical properties and superior
security. The paper also traces the origin of Shannon's mistake to
probability theory and information theory, in which probability is
always taken as a fixed value rather than a random variable. This
provides a direction for the development of probability theory and
information theory, helping the two theories expand to solve more
problems in reality. Shannon made many great contributions to
information theory and cryptography, but given the limitations of
probability theory and the knowledge of his time, it is hard to ensure
that so many contributions contain absolutely no mistakes.

References
[1]. Bruce Schneier, Applied Cryptography, Second Edition: Protocols,
Algorithms, and Source Code in C [M], John Wiley & Sons, Inc., 1996.
[2]. C. E. Shannon, Communication Theory of Secrecy Systems [J], Bell
System Technical Journal, v.28, n.4, 1949, 656-715.
[3]. Yong WANG, Security of One-time System and New Secure System [J],
Netinfo Security, 2004, (7): 41-43.
[4]. Yong WANG, Fanglai ZHU, Reconsideration of Perfect Secrecy [J],
Computer Engineering, 2007, 33(19).
[5]. Yong WANG, Perfect Secrecy and Its Implement [J], Network &
Computer Security, 2005, (5).
[6]. Yong WANG, Fanglai ZHU, Security Analysis of One-time System and
Its Betterment [J], Journal of Sichuan University (Engineering Science
Edition), 2007, supp. 39(5): 222-225.
[7]. Yong WANG, Shengyuan ZHOU, On Probability Attack [J], Information
Security and Communications Privacy, 2007, (8): 39-40.
[8]. Yong WANG, Confirmation of Shannon's Mistake about Perfect
Secrecy of One-time-pad, http://arxiv.org/abs/0709.4420.
[9]. Yong WANG, Mistake Analyses on Proof about Perfect Secrecy of
One-time-pad, http://arxiv.org/abs/0709.3334.
[10]. Yong WANG, On Relativity of Probability, www.paper.edu.cn, Aug.
27, 2007.
[11]. Yong WANG, On Relativity of Information, presented at the First
National Conference on Social Information Science, Wuhan, China, 2007.
[12]. Yong WANG, Question on Conditional Entropy,
http://arxiv.org/pdf/0708.3127.

>
> ignored the prior condition when he considered the posterior
> probability, so that posterior probability is unilateral. We can
> affirm Shannon's mistake by using his result to get cockeyed result.
> Using Shannon's result that the given example is perfectly secure, we
> can get PE(M)= P(M), as Shannon got PE(M)=1/n, so we can get P(M)=1/n.
> But that is wrong for plaintexts are seldom equally likely.
> What's more, there is another crytic presupposition in Shannon's proof
> that the plaintext should all be the same in length. But usually the
> plaintext cannot meet the presupposition. When the ciphertext is
> intercepted, the ciphertext length is known as L. The length of all
> possible plaintext must be L for one-time system; otherwise, the prior
> probability is not the same as the posterior probability that is
> zero.
> V. Conclusion
> From the above analyses, we can find that one-time system is not
>
> perfectly secure unless extra conditions are given. In despite of
> that, it has good cryptographic property. We can take measures to
> improve its security.
> Reference
> [1]. Bauer, F.L. Decrypted Secrets-Methods and Maxims of
> Cryptology[M], Berlin, Heidelberg, Germany: Springer-verlag, 1997.
> [2]. C.E.Shannon, Communication Theory of Secrecy Systems[J], Bell
> System Technical journal, v.28, n. 4, 1949, 656-715.
> [3]. Yong WANG, Security of One-time System and New Secure System
> [J],Netinfo Security, 2004, (7):41-43
> [4]. Yong WANG, Perfect Secrecy and Its Implement [J],Network &
> Computer Security,2005(05)
> [5]. Yong WANG, Fanglai ZHU, Security Analysis of One-time System and
> Its Betterment, Journal of Sichuan University (Engineering Science
> Edition), 2007, supp. 39(5):222-225
> [6]. Yong WANG, Shengyuan Zhou, On Probability Attack, Information
> Security and Communications Privacy, 2007,(8):39-40
>
> The Project Supported by Guangxi Science Foundation (0640171) and
> Modern Communication National Key Laboratory Foundation (No.
> 9140C1101050706)
>
> Biography:
> Yong WANG (1977-) Tianmen city, Hubei province, Male, Master of
> cryptography, Research fields: cryptography, information security,
> generalized information theory, quantum information technology. GuiLin
> University of Electronic Technology, Guilin, Guangxi, 541004 E-mail:
> hell...@126.com wang197733y...@sohu.com
> Mobile 13978357217 fax: (86)7735601330(office)

Kristian Gjøsteen

unread,
Oct 29, 2007, 4:38:03 AM10/29/07
to
Simon Johnson <Simon....@gmail.com> wrote:

Do you really understand what "The concatenation of the addition is a
group..." means? If you do, can you tell me what it means? His entire
argument seems like nonsense to me.

Anyway, I gave a counter-example last time he made this claim.

http://groups.google.com/group/sci.crypt/browse_frm/thread/d45b863a221e4181/63e19fe32799821b?lnk=st&q=#63e19fe32799821b

--
Kristian Gjøsteen

wangyong

unread,
Oct 30, 2007, 4:57:05 AM10/30/07
to
On Oct 29, 4:38 pm, Kristian Gjøsteen <kristiag+n...@math.ntnu.no>
wrote:
> Simon Johnson <Simon.John...@gmail.com> wrote:

>
> >Kristian Gjøsteen wrote:
>
> >> Sebastian G. <se...@seppig.de> wrote:
> >>>so all claims of
> >>>a second perfect secrecy cipher beside the OTP are void.
>
> >> This is, as I'm sure I've told you before, perfect nonsense.
>
> >> I just checked. I told you this Jan 14, 2006.
>
> >An honest question: Why is his argument bogus, it seemed pretty water tight
> >to me?
>
> Do you really understand what "The concatenation of the addition is a
> group..." means? If you do, can you tell me what it means? His entire
> argument seems like nonsense to me.
>
> Anyway, I gave a counter-example last time he made this claim.
>
> http://groups.google.com/group/sci.crypt/browse_frm/thread/d45b863a22...
>
> --
> Kristian Gjøsteen

I don't know,
do you mean he used it to prove OPT was perfectly secure.
If so, cite the proof.

rossum

unread,
Oct 30, 2007, 7:17:36 AM10/30/07
to
On 30 Oct 2007 01:57:05 -0700, wangyong <hel...@126.com> wrote:

>On Oct 29, 4:38 pm, Kristian Gjøsteen <kristiag+n...@math.ntnu.no>


>wrote:
>> Simon Johnson <Simon.John...@gmail.com> wrote:
>>

>> >Kristian Gjøsteen wrote:
>>
>> >> Sebastian G. <se...@seppig.de> wrote:
>> >>>so all claims of
>> >>>a second perfect secrecy cipher beside the OTP are void.
>>
>> >> This is, as I'm sure I've told you before, perfect nonsense.
>>
>> >> I just checked. I told you this Jan 14, 2006.
>>
>> >An honest question: Why is his argument bogus, it seemed pretty water tight
>> >to me?
>>
>> Do you really understand what "The concatenation of the addition is a
>> group..." means? If you do, can you tell me what it means? His entire
>> argument seems like nonsense to me.
>>
>> Anyway, I gave a counter-example last time he made this claim.
>>
>> http://groups.google.com/group/sci.crypt/browse_frm/thread/d45b863a22...
>>
>> --

>> Kristian Gjøsteen


>
>I don't know,
>do you mean he used it to prove OPT was perfectly secure.
>If so, cite the proof.

A much clearer way of indicating who posted what. Thankyou for
changing.

rossum

wangyong

unread,
Nov 1, 2007, 11:40:39 PM11/1/07
to
On Oct 30, 7:17 pm, rossum <rossu...@coldmail.com> wrote:
> On 30 Oct 2007 01:57:05 -0700, wangyong <hell...@126.com> wrote:
>
>
>
>
>
> >On Oct 29, 4:38 pm, Kristian Gjøsteen <kristiag+n...@math.ntnu.no>

> >wrote:
> >> Simon Johnson <Simon.John...@gmail.com> wrote:
>
> >> >Kristian Gjøsteen wrote:
>
> >> >> Sebastian G. <se...@seppig.de> wrote:
> >> >>>so all claims of
> >> >>>a second perfect secrecy cipher beside the OTP are void.
>
> >> >> This is, as I'm sure I've told you before, perfect nonsense.
>
> >> >> I just checked. I told you this Jan 14, 2006.
>
> >> >An honest question: Why is his argument bogus, it seemed pretty water tight
> >> >to me?
>
> >> Do you really understand what "The concatenation of the addition is a
> >> group..." means? If you do, can you tell me what it means? His entire
> >> argument seems like nonsense to me.
>
> >> Anyway, I gave a counter-example last time he made this claim.
>
> >>http://groups.google.com/group/sci.crypt/browse_frm/thread/d45b863a22...
>
> >> --
> >> Kristian Gjøsteen

>
> >I don't know,
> >do you mean he used it to prove OPT was perfectly secure.
> >If so, cite the proof.
>
> A much clearer way of indicating who posted what. Thankyou for
> changing.
>
> rossum

A much clearer way of indicating who posted what. Thankyou for
changing.

rossum
--------------------------------Do you thank me? Why do you thank me?

wangyong

unread,
Nov 5, 2007, 7:27:16 PM11/5/07
to
All of you except one bring to mind The Emperor's New Clothes.
You just think Shannon is right, but none of you except that one has
read through Shannon's paper or my papers.
You are so addlepated, for you do not even know what Shannon's result
is: his posterior probability of the plaintext is uniform, as I have
mentioned in my papers.
