
Confirmation of Shannon’s Mistake about Perfect Secrecy of One-time-pad


wangyong

Oct 22, 2007, 12:22:38 AM
Confirmation of Shannon’s Mistake about Perfect Secrecy of One-time-pad
Yong WANG
(School of Computer and Control, Guilin University of Electronic Technology, Guilin, 541004, Guangxi Province, China)
hel...@126.com
Abstract: This paper analyzes Shannon’s proof that the one-time pad is perfectly secure, discusses all the possible interpretations, and then confirms which of them was Shannon’s notion. The confirmed notion is found to be wrong because it neglects the prior probability and rests on absolutely irreconcilable conditions. Therefore, the one-time pad is not perfectly secure.
Keywords: one-time system, cryptography, perfect secrecy, information theory, probability
1. Introduction
Shannon put forward the concept of perfect secrecy and proved that the one-time pad (one-time system, OTP) was perfectly secure [1, 2]. For a long time, OTP has been thought to be unbreakable and is still used to encrypt high-security information. In [3], an example was given to prove that OTP is not perfectly secure. In [4], a detailed analysis of the mistake in Shannon’s proof was given. It was argued that more requirements are needed for OTP to be perfectly secure and that homophonic substitution could make OTP approach perfect secrecy [5]. Reference [6] analyzed the problem and gave ways to disguise the length of the plaintext. In [7], a cryptanalysis method based on probability was presented and used to attack the one-time pad. Reference [4] also considered a special interpretation under which OTP could be thought perfectly secure if some added conditions were satisfied. In this paper, I will confirm that this special interpretation is not Shannon’s notion and analyze the source of his mistake.
2. Counterexample to Shannon’s Conclusion

For the moment, we set aside the impractical limitation that the length of any plaintext must be the same as the length of the ciphertext in OTP. The following discussion supposes that all the plaintexts, ciphertexts and keys are of the same length.
We give a simple example of OTP to discuss the problem: the plaintext space is M = {0, 1}, the ciphertext space is C = {0, 1} and the key space is K = {0, 1}. According to the information that cryptanalysts obtained beforehand, they can take the prior probabilities of the plaintext to be P(M=0) = 0.9 and P(M=1) = 0.1. Later the ciphertext C = 0 is intercepted. When considering only C = 0 and the cryptosystem (regardless of the prior probability of the plaintext), we can deduce that the plaintexts are equally likely, for there is a one-to-one correspondence between the plaintexts and the keys for C = 0. The prior probabilities of the plaintexts are seldom the same, so the two probability distributions of the plaintext obtained from the different conditions conflict. A compromise between the two probability distributions is then indispensable. The compromised posterior probability of the plaintext would lie between the two corresponding probabilities of the two partial conditions. When C = 0 is intercepted, the posterior probability P(M=0) lies between 0.9 and 0.5, and P(M=1) between 0.1 and 0.5. The compromised posterior probability of the plaintext is not equal to the prior probability, so OTP is not perfectly secure.
3. Confirmation of the Mistake in Shannon’s Proof
Because of the mapping between M, K and C, the probabilities of M, K and C interact in a complicated way. For the above example, the probability of the plaintext changes when the ciphertext is fixed, even though the ciphertext is unknown.
When considering only the fixed ciphertext and the equiprobability of the keys, we find that the plaintexts are equally likely, for there is a one-to-one correspondence between the plaintexts and the keys for a fixed ciphertext. This uniform distribution conflicts with the prior probability.
In order to understand the inconsistency of the probabilities in this example, and the need to fuse them, we consider combinations of different conditions in the following deduction and analyze where the conflicts arise.
For our simple OTP example, when considering the condition that the ciphertext is 0, the probability of the ciphertext being 0 is 1 and the probability of it being 1 is 0. But according to the given prior distribution of the plaintexts and the uniformly distributed keys, we easily find that the ciphertext is uniformly distributed; that is, all ciphertexts are equally likely. The two probability distributions of the ciphertext under the different conditions conflict.
When considering only that the intercepted ciphertext is 0 and the prior probabilities P(M=0) = 0.9 and P(M=1) = 0.1, the probability of the key being 0 is P(K=0) = 0.9 and that of the key being 1 is P(K=1) = 0.1, because there is a one-to-one correspondence between the plaintexts and the keys. However, according to the requirement of OTP, all the keys are equally likely, so a conflict of probabilities occurs as before.
Such conflicts show that under different conditions we may derive inconsistent probabilities, so fusion and compromise are needed. The probabilities obtained from different combinations of partial conditions are inconsistent; that is to say, the conditions in OTP cannot coexist. When all the conditions are considered together, some of them must change, so it is not proper to use these conditions when computing the final posterior probability. It is like the four uneven feet of a table: there is always one foot off the ground when the table stands on a level floor, and if all four feet are forced to touch the floor at the same time, distortion occurs. In [7], a formula was presented to fuse the inconsistent probabilities.
Shannon did not realize that the conditions could not coexist. When they are substituted into the formula, a mistake must arise, for the conditions cannot coexist and the probabilities have changed once all the conditions are considered at the same time.
Because the conditions in the example are complex, and some are implicit, it is essential to list them and analyze their impact on the probabilities. Reference [4] considered a special interpretation under which OTP could be thought perfectly secure if some added conditions were satisfied, and argued that this was unlikely to be Shannon’s view. This paper analyzes the problem in detail and confirms, using the information gained from Shannon’s proof, that the special interpretation is not Shannon’s view.
As different conditions yield different probability distributions, we list the conditions that affect the probability distribution of the plaintext, together with the corresponding probabilities of the plaintext when only some of the conditions are considered.

(1) Considering the information that cryptanalysts obtained beforehand, we get P1(M=0) = 0.9, P1(M=1) = 0.1.
(2) Considering the cryptosystem (including that the keys are equally likely) and an unknown but fixed ciphertext (C can be 0 or 1, but is not a random variable), we get P2(M=0) = 0.5 and P2(M=1) = 0.5, for there is a one-to-one correspondence between the plaintexts and the keys for a fixed ciphertext.
(3) Considering the cryptosystem and the known ciphertext C = 0, we get P3(M=0) = 0.5 and P3(M=1) = 0.5, for the same one-to-one correspondence holds.
(4) Considering the information that cryptanalysts obtained beforehand and the cryptosystem, we get P4(M=0) = 0.9, P4(M=1) = 0.1, for the cryptosystem alone does not affect the probability of the plaintext.
(5) Considering the information obtained beforehand, the cryptosystem and an unknown but fixed ciphertext, we get, after compromise, that P5(M=0) lies between 0.9 and 0.5 and P5(M=1) between 0.1 and 0.5.
(6) Considering the information obtained beforehand, the cryptosystem and the ciphertext C = 0, we get, after compromise, that P6(M=0) lies between 0.9 and 0.5 and P6(M=1) between 0.1 and 0.5.
The posterior probability is derived as a conditional probability, but some precondition or information is also needed to obtain the prior probability. If we had no idea of an event, we could not know how many possible values it has, let alone their probabilities. Therefore, the prior probability is based on the known conditions, and it too is a conditional probability. Shannon did not state what the precondition for the prior probability is, and did not define the prior probability precisely. Allowing for possible misunderstandings, the prior probability may be any of (1), (4) and (5), and the posterior probability may be either of (3) and (6). If we take the prior probability as in (5) and the posterior probability as in (6), the two probabilities may be the same. If that had been Shannon’s view, he would have needed the concept of information fusion and would have had to distinguish the conditions, but he never did so. Moreover, there was no algorithm for fusing the probabilities at that time. Shannon always took fixedness of the probability as the mark of perfect secrecy. In the specially supposed case, the initial probability in (1) still changes into the probability in (5), and on Shannon’s view that change should count as insecure.
Strictly speaking, the posterior probability should be that in (6), but there is another reading on which the posterior probability is that in (3), since it was never distinctly defined. In (3) the probability is easy to obtain and no conflicting probabilities need be explained, whereas the probability in (6) is hard to understand and to compute, because the probabilities under the different partial conditions are inconsistent and a compromise is needed. From his proof we can see that Shannon did not notice the problem of compromise, so he may have taken the posterior probability as that in (3). Moreover, we can confirm that Shannon took the posterior probability as that in (3) from the following passage of his proof [2]:
It is possible to obtain perfect secrecy with only this number of
keys, as one shows by the following example: Let the Mi be numbered 1
to n and the Ei the same, and using n keys let
TiMj=Es
where s=i+j(Mod n). In this case we see that PE(M)=1/n=P(E) and we
have perfect secrecy.
P(E)= probability of obtaining cryptogram E from any cause.
PE(M)= a posteriori probability of message M if cryptogram E is
intercepted.
P(M)= a priori probability of message M
Shannon obtained the result PE(M) = 1/n = P(E); that is to say, the plaintexts are equally likely when the cryptogram E is intercepted. That is exactly case (3). But it is wrong, for the prior probability of the plaintext is not considered; otherwise the plaintexts would not be equally likely. Shannon seems to have deemed the result very easy to obtain, for he stated it without a detailed proof. Had he considered case (6) rather than case (3), the problem would have been very complex.
We can confirm Shannon’s mistake by using his result to derive an absurd conclusion. Using Shannon’s result that the given example is perfectly secure, we get PE(M) = P(M); as Shannon obtained PE(M) = 1/n, we get P(M) = 1/n. But that is wrong, for plaintexts are seldom equally likely.
In summary, Shannon was wrong and the one-time pad is not perfectly secure. The special interpretation is not Shannon’s view.
Shannon was the first to prove that OTP was perfectly secure, but his proof is very brief. Detailed proofs of the perfect secrecy of OTP were given by later scholars who took Shannon’s proof as their reference, and there are many such proofs. Some directly conclude that all plaintexts are equally likely, which is obviously wrong, for the prior probabilities of the plaintexts are seldom equal. The other proofs are largely identical with minor differences, and they make similar mistakes.
4. Conclusion
This paper analyzes Shannon’s proof, discusses all the possible interpretations and confirms which of them was Shannon’s notion. The confirmed notion is found to be wrong because it neglects the prior probability and rests on absolutely irreconcilable conditions. Though the one-time pad is not perfectly secure, it still has good cryptographic properties, and measures can be taken to improve its security. The above analyses not only confirm Shannon’s mistake but also bring out a limitation of probability theory and information theory: in both theories the value of a probability is deemed fixed, but in some cases a probability may in practice be a random variable [8, 9].
References
[1] Bruce Schneier. Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C. John Wiley & Sons, Inc., 1996.
[2] C. E. Shannon. Communication Theory of Secrecy Systems. Bell System Technical Journal, v. 28, n. 4, 1949, 656-715.
[3] Yong WANG. Security of One-time System and New Secure System. Netinfo Security, 2004, (7): 41-43.
[4] Yong WANG, Fanglai ZHU. Reconsideration of Perfect Secrecy. Computer Engineering, 2007, 33(19).
[5] Yong WANG. Perfect Secrecy and Its Implementation. Network & Computer Security, 2005, (5).
[6] Yong WANG, Fanglai ZHU. Security Analysis of One-time System and Its Betterment. Journal of Sichuan University (Engineering Science Edition), 2007, supp. 39(5): 222-225.
[7] Yong WANG, Shengyuan ZHOU. On Probability Attack. Information Security and Communications Privacy, 2007, (8): 39-40.
[8] Yong WANG. On Relativity of Probability. www.paper.edu.cn, Aug. 27, 2007.
[9] Yong WANG. On the Perversion of Information’s Definition. Presented at the First National Conference on Social Information Science, Wuhan, China, 2007.
This work was supported by the Guangxi Science Foundation (0640171) and the Modern Communication National Key Laboratory Foundation (No. 9140C1101050706).


matt271...@yahoo.co.uk

Oct 22, 2007, 9:59:23 AM
On Oct 22, 5:22 am, wangyong <hell...@126.com> wrote:
> Confirmation of Shannon’s Mistake about Perfect Secrecy of One-time-
> pad
> Yong WANG
> (School of Computer and Control, GuiLin University Of Electronic
> Technology ,Guilin, 541004, Guangxi Province, China)
> hell...@126.com

Let's say that the prior probabilities are Pr(M=0) = p and Pr(M=1) = 1-p. We assume that the key is chosen randomly, independently of M. So, Pr(K=0) = 1/2 and Pr(K=1) = 1/2, independent of M. And let's say, just to be explicit, that K=0 maps 0->0, 1->1, and K=1 maps 0->1, 1->0. We intercept the encrypted message C=0. Then,

Pr(M=0|C=0) = Pr(M=0 & C=0)/Pr(C=0) = (p*1/2)/(p*1/2 + (1-p)*1/2) = p

Pr(M=1|C=0) = Pr(M=1 & C=0)/Pr(C=0) = ((1-p)*1/2)/(p*1/2 + (1-p)*1/2) = 1-p

So, intercepting and reading the message has, as expected, yielded no
new information.
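The two conditional probabilities above can be checked by direct enumeration. The following sketch (my own; the function name `posterior` is not from the thread) computes Pr(M=0|C=0) exactly with rational arithmetic:

```python
from fractions import Fraction

def posterior(p):
    """Exact Bayes computation for the 1-bit OTP above:
    Pr(M=0) = p, key uniform and independent of M, C = M xor K.
    Returns Pr(M=0 | C=0)."""
    pr_m = {0: p, 1: 1 - p}
    pr_k = {0: Fraction(1, 2), 1: Fraction(1, 2)}
    # Joint probability Pr(M=m & C=0): C = 0 forces K = m.
    joint = {m: pr_m[m] * pr_k[m] for m in (0, 1)}
    return joint[0] / (joint[0] + joint[1])

print(posterior(Fraction(9, 10)))  # prints 9/10: the posterior equals the prior
```

The same holds for any prior p, which is exactly the "no new information" point made above.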

matt271...@yahoo.co.uk

Oct 22, 2007, 10:07:40 AM
On Oct 22, 5:22 am, wangyong <hell...@126.com> wrote:
> Confirmation of Shannon’s Mistake about Perfect Secrecy of One-time-
> pad
> Yong WANG
> (School of Computer and Control, GuiLin University Of Electronic
> Technology ,Guilin, 541004, Guangxi Province, China)
> hell...@126.com

[Repost due to continuing problems with Google Groups. Apologies for
any duplications.]

wangyong

Oct 22, 2007, 10:36:27 AM


Thank you; the proof you give is analyzed in another paper of mine. You can see my paper.

wangyong

Oct 22, 2007, 10:41:34 AM

matt271...@yahoo.co.uk

Oct 22, 2007, 1:25:45 PM

Fair enough, I admit that I have not read all your posts and papers.
However, this particularly simple example seems like an excellent way
for a lazy person to get to the bottom of what you are talking about.
So, if you feel like explaining here in brief and simple terms what
you think is wrong with my calculation, or how the scenario that you
are envisaging differs from the one I described, then it might lead
more quickly to some better understanding. If not then no worries...

hagman

Oct 22, 2007, 2:48:22 PM
On 22 Okt., 16:41, wangyong <hell...@126.com> wrote:
> http://groups.google.com/group/sci.math/browse_thread/thread/831c48af...

>
> please consider the case that ciphertext is a fixed value.

??? If the cyphertext were a fixed value, there would be no need to transfer it, hence it could not be intercepted.


hagman

Oct 22, 2007, 3:53:07 PM
On 22 Okt., 06:22, wangyong <hell...@126.com> wrote:
> Confirmation of Shannon’s Mistake about Perfect Secrecy of One-time-
> pad
> Yong WANG
> (School of Computer and Control, GuiLin University Of Electronic
> Technology ,Guilin, 541004, Guangxi Province, China)
> hell...@126.com
> [8]. Yong WANG, On Relativity of Probability,www.paper.edu.cn, Aug,

> 27, 2007.
> [9]. Yong WANG, On the Perversion of information’s Definition,
> presented at First National Conference on Social Information Science
> in 2007, Wuhan, China, 2007.
> The Project Supported by Guangxi Science Foundation (0640171) and
> Modern Communication National Key Laboratory Foundation (No.

> 9140C1101050706)
>
> Biography:


If I understand you right, then from the encrypted message found below you can get more characters right than by guessing.
I use the following C code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    srand(atoi(argv[1]));                       /* seed from first argument */
    int limit = atoi(argv[2]) * (RAND_MAX / 100); /* percentage of zeros */
    for (int row = 0; row < 20; ++row) {
        for (int col = 0; col < 50; ++col)
            printf((rand() < limit) ? "0" : "1");
        printf("\n");
    }
    return 0;
}

to generate a 1000 character plaintext with P(M=0) = 0.9:
#./foo (non-disclosed-random-seed) 90 > plaintext
and a onetime pad
#./foo (non-disclosed-random-seed) 50 > onetimepad
Just to make sure I cannot change the data afterwards behind your
back:
#cat plaintext onetimepad | md5sum
044fc613804c45291d6e557a1ef265da -

Applying the onetimepad to the plaintext (xor'ing them), I obtain
00001011001110101101100100010000001010101110000010
10100110010110101111100011011101110101010100101011
00011101101111111101111010111101000101111100101000
01000111100101010000000100000110110111100011010111
10110101010111001001100010001010000110100010100100
01110011110001100100111110111110001111100111100111
10000110000111010001000001100011011011000001110000
01100011001010011000000100010000111011111101101000
10101001111101011000100101110111110110000101010001
11010101011110110110000000011011111111111010101111
00101011011101111010000101011111111100000101010100
11011111110011101110011010111000110001011100111110
11011011110001000110101001111110011001101001011010
01100110001101000101110001011000110010110111011001
00111001010000000011101011110010111010011101000111
10111100011110110011111011001010111010111001000010
01110111111101111000101101110010010100001011101100
00111110011000111000000110011001001011111111111110
00001010010111110111110110000000011000011110001110
01111111011000101001011101101100101010110011000010

Your task is to guess the plaintext or rather to be significantly
better than guessing: Simply saying "0000...00" will get ~900
characters
right; producing a new random text with P(M=0)=0.9 would get
about 0.9*900 0's and about 0.1*100 1's correct, in total only ~820.

But can *your* theory produce a suggested decrypted text that coincides with the plaintext at significantly more than 900 characters? It seems to claim so.
If you can post a text that has 905 or more correct
characters, I'll read your paper more thoroughly
and become one of your supporters.
If you can get 910 or more characters right, you'll have
totally shattered my gut feelings about one-time pads.

Will you try?

hagman

matt271...@yahoo.co.uk

Oct 22, 2007, 5:00:13 PM
On Oct 22, 8:53 pm, hagman <goo...@von-eitzen.de> wrote:
> On 22 Okt., 06:22, wangyong <hell...@126.com> wrote:
>
<snip>

Well, the probability that your original message contains >= 905 zeros
is about 0.32, and the probability that it contains >=910 zeros is
about 0.16. So if one ignored the encrypted message and just guessed
all zeros it would not be that remarkable to hit those targets.
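Those tail probabilities can be computed directly from the binomial distribution. A sketch of mine (not from the thread), assuming 1000 independent plaintext bits, each 0 with probability 0.9:

```python
from math import comb

def binom_tail(n, p, k):
    """Pr(X >= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(1000, 0.9, 905))  # about 0.32, as quoted above
print(binom_tail(1000, 0.9, 910))  # about 0.16
```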

wangyong

Oct 23, 2007, 6:58:00 AM


Will you try?


hagman
-------------------------- You should use Shannon's definition, not yours.

wangyong

Oct 23, 2007, 7:02:08 AM
Fair enough, I admit that I have not read all your posts and papers.
However, this particularly simple example seems like an excellent way
for a lazy person to get to the bottom of what you are talking about.
So, if you feel like explaining here in brief and simple terms what
you think is wrong with my calculation, or how the scenario that you
are envisaging differs from the one I described, then it might lead
more quickly to some better understanding. If not then no worries...

---- I have given the link. What I want to say is that you are lazy.

wangyong

Oct 23, 2007, 7:06:36 AM
Well, the probability that your original message contains >= 905
zeros
is about 0.32, and the probability that it contains >=910 zeros is
about 0.16. So if one ignored the encrypted message and just guessed
all zeros it would not be that remarkable to hit those targets.


--------------- The way the cryptanalyst guesses is not the way you guess; he should rely on theory, such as probability theory, not on horsefeathers.

wangyong

Oct 23, 2007, 7:09:26 AM
If the cyphertext were a fixed value, there would be no need to
transfer it,
hence it could not be intercepted.

----------- Fixed but unknown; that is, not a random variable.

Denis Feldmann

Oct 23, 2007, 7:41:01 AM
wangyong wrote [cut of a non-pertinent copy (twice) of hagman's post], then:

> Your task is to guess the plaintext or rather to be significantly
> better than guessing: Simply saying "0000...00" will get ~900
> characters
> right; producing a new random text with P(M=0)=0.9 would get
> about 0.9*900 0's and about 0.1*100 1's correct, in total only ~820.
>
>
> But can *your* theory produce a suggested decrypted text that
> conincides with the plaintext at significantly more than 900
> characters? It seems to claim so.
> If you can post a text that has 905 or more correct
> characters, I'll read your paper more thoroughly
> and become one of your supporters.
> If you can get 910 or more characters right, you'll have
> totally shattered my gut feelings about one-time pads.
>
>
> Will you try?
>
>
> hagman


> --------------------------YOU should see Shannon's definition. but not
> yours.
>

This is not an answer. Actually, it is a proof you are a complete crank,
with P>0.99 . So *Plonk*


wangyong

Oct 23, 2007, 7:54:20 AM
--------------------------YOU should see Shannon's definition. but
not
> yours.

This is not an answer. Actually, it is a proof you are a complete
crank,
with P>0.99 . So *Plonk*


--------- His question is not related to the perfect secrecy of OTP; how can I guess blindly in the way he asked?
It is a proof that you are uneducated.

wangyong

Oct 23, 2007, 8:12:36 AM
Firstly, your question is completely outside my topic; as someone says, "This is not an answer. Actually, it is a proof you are a complete crank."
I tell you the answer: even in the case of P(M=0) = 0.9 or 0.1, no one can guess with a probability of more than 0.9 of being right. As I state, P(M=0) lies between 0.9 and 0.5 (or 0.1 and 0.5), so it is even more impossible to get more than 0.9 right.
----------------------------------------------------------------------------------------------------

matt271...@yahoo.co.uk

Oct 23, 2007, 9:30:08 AM

You referred me to your "other paper" and I don't know which one you
mean. Anyway, in your post at http://groups.google.com/group/sci.math/msg/ac43dd1ffe1a3ffb
you say:

<begin quote>
A necessary and sufficient condition for perfect secrecy can be found
as follows: We have by Bayes' theorem

PE(M) = P(M) PM(E) / P(E)

in which:


P(M)= a priori probability of message M

PM(E)= conditional probability of cryptogram E if message M is chosen
i.e. the sum of the probabilities of all keys which produce
cryptogram
E from message M.


P(E)= probability of obtaining cryptogram E from any cause.
PE(M)= a posteriori probability of message M if cryptogram E is
intercepted.

For perfect secrecy PE(M) must equal P(M) for all E and all M.
<end quote>

I have already shown that, in the simple scenario being discussed,
PE(M) = P(M) for E = 0 ("E" was called "C" in my post). It is equally
easy to show that PE(M) = P(M) for E = 1. So, your conditions for
"perfect secrecy" are satisfied.

Where's the problem?

wangyong

Oct 23, 2007, 10:09:43 AM
Shannon misused Bayes' formula, and the above proof misuses it in the same way. From P(M = x)·P(K = x⊕y) = P(M = x)·2^(−n), we can see that the condition that the ciphertext y is a fixed value is never considered when computing P(M = x ∧ C = y). We can show this by reductio ad absurdum. Suppose that for fixed y we had P(K = x⊕y) = 2^(−n), as is used in the proof (it is in fact wrong, and is used only to reach a wrong conclusion). Then we would get P(M = x | C = y) = 2^(−n), because there is a one-to-one correspondence between the plaintexts and the keys for a fixed ciphertext in OTP. But that is obviously wrong, for the prior probabilities of the plaintexts are seldom all equal. So P(M = x)·P(K = x⊕y) stands for the joint probability of x and y only when y is not fixed. Shannon, however, took the posterior probability to be the probability of the plaintext once the ciphertext has been intercepted; thus there is a presupposition in P(M = x | C = y) that y is fixed, while in P(M = x), P(K = x⊕y) and P(C = y), y is not fixed, otherwise we get obviously wrong results. In this way Bayes's formula was misused, for the probabilities were not under the same presupposition and the equation does not hold.
In OTP there are complex and hidden conditions that influence the probabilities of the plaintext, the key and the ciphertext, so it is essential to recognize all the conditions and use probability theory carefully. The proof did not notice the hidden condition that the ciphertext is a fixed value (even though unknown) rather than a random variable.

William Hughes

Oct 23, 2007, 12:49:16 PM
On Oct 22, 12:22 am, wangyong <hell...@126.com> wrote:


>The following discussion supposes the all the
> plaintexts, ciphertexts and keys are the same in length.
> We give a simple example of OPT to discuss the problem, plaintext
> space is M={0,1}, ciphertext space is C={0,1} and key space is K=
> {0,1}.

And note that the probability distribution on C is always the
same, no matter what the probability distribution on M.
So no observation of C can give us any information about the
probability distribution on M. In particular, observing
C=0 cannot change how likely we consider the uniform
distribution on M.
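The claim above can be checked by enumerating all (M, K) pairs. A sketch of mine (not part of the post), for the 1-bit system with a uniform key:

```python
from fractions import Fraction
from itertools import product

def ciphertext_dist(p):
    """Distribution of C = M xor K for the 1-bit OTP,
    with Pr(M=0) = p and a uniformly random key."""
    pr_m = {0: p, 1: 1 - p}
    pr_k = {0: Fraction(1, 2), 1: Fraction(1, 2)}
    dist = {0: Fraction(0), 1: Fraction(0)}
    for m, k in product((0, 1), repeat=2):
        dist[m ^ k] += pr_m[m] * pr_k[k]
    return dist

for p in (Fraction(9, 10), Fraction(1, 2), Fraction(1, 100)):
    print(ciphertext_dist(p))  # both masses come out 1/2, whatever p is
```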


> According to the information that cryptanalysts got beforehand,
> they can get the prior probability of plaintext as P(M=0) = 0.9 and
> P(M=1) = 0.1. Later the ciphertext C=0 is intercepted. When only
> considering C=0 and the cryptosystem (regardless of the prior
> probability of plaintext), we can educe that the plaintexts are
> equally likely,

No. This is simply false. We can educe nothing about the
probabilities of the plaintext.

> for there is a one-to-one correspondence between all
> the plaintexts and keys for C=0.

So what? There is no relationship between the probabilities.

- William Hughes

hagman

Oct 23, 2007, 1:29:24 PM

I counted the one's and zero's before posting, so I can assure you
that this method happens not to work for the specific (though
random) text I posted. :)

hagman

matt271...@yahoo.co.uk

Oct 23, 2007, 4:26:20 PM

I suspect that a combination of typos and/or character-set incompatibilities has garbled some of your equations. So, let me have a guess at what you're saying.

Let's keep with the simple scenario where P(M=0) = p, P(M=1) = 1-p,
P(K=0) = 1/2, P(K=1) = 1/2, K and M independent. K=0 maps 0->0, 1->1,
and K=1 maps 0->1, 1->0. We intercept the encrypted message C=0.

I'm guessing that you are reasoning as follows: Given that C=0, there
is no longer an equal chance of K=0 and K=1. Because the proof uses
the fact that these probabilities are equal, the proof must be wrong.

In fact, the K-probabilities used in the calculation of the
conditional probabilities must be the *a priori* probabilities, which
are indeed equal, and the proof is sound.

You've obviously spent a while working with this symbolically, so you
might like to try a different approach to satisfy yourself. From your
affiliation I assume you are familiar with computer programming, so
try running a Monte Carlo-style simulation such as the following. You
will find that always M_equals_0 / total ~ p, and M_equals_1 / total ~
1 - p. This demonstrates that that probabilities of M=0 and M=1 are,
as expected, unaffected by the fact that C=0.

-----------------------------------------------------------

trials = 100000
p = 0.9

total = 0
M_equals_0 = 0
M_equals_1 = 0

For trial = 1 To trials

    If Rnd < p Then M = 0 Else M = 1
    If Rnd < 0.5 Then K = 0 Else K = 1
    C = M Xor K

    If C = 0 Then
        total = total + 1
        If M = 0 Then M_equals_0 = M_equals_0 + 1 Else M_equals_1 = M_equals_1 + 1
    End If

Next

Print M_equals_0 / total
Print M_equals_1 / total

-----------------------------------------------------------
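The same simulation can be run in Python; this is my own port of the sketch above, under the same assumptions (Pr(M=0) = p, uniform independent key, condition on C = 0):

```python
import random

def simulate(trials=100_000, p=0.9, seed=1):
    """Monte Carlo estimate of Pr(M=0|C=0) and Pr(M=1|C=0)
    for the 1-bit OTP with Pr(M=0) = p and a uniform key."""
    rng = random.Random(seed)  # seeded for reproducibility
    total = m0 = 0
    for _ in range(trials):
        m = 0 if rng.random() < p else 1
        k = 0 if rng.random() < 0.5 else 1
        c = m ^ k
        if c == 0:            # keep only the trials where C = 0 was "intercepted"
            total += 1
            m0 += (m == 0)
    return m0 / total, (total - m0) / total

print(simulate())  # close to (0.9, 0.1): conditioning on C=0 leaves the prior unchanged
```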


matt271...@yahoo.co.uk

Oct 23, 2007, 7:32:48 PM

Hey ... that's CHEATING!!!

(just kidding)

wangyong

Oct 23, 2007, 9:34:25 PM
And note that the probability distribution on C is always the
same, no matter what the probability distribution on M.
So no observation of C can give us any information about the
probability distribution on M.
------------ I see that the probability distribution on M is changed due to OTP, though not the value of C. But that does not mean perfect secrecy (posterior = prior).

In particular, observing
C=0 cannot change how likely we consider the uniform
distribution on M.

----------- Will you express that clearly?

> According to the information that cryptanalysts got beforehand,
> they can get the prior probability of plaintext as P(M=0) = 0.9 and
> P(M=1) = 0.1. Later the ciphertext C=0 is intercepted. When only
> considering C=0 and the cryptosystem (regardless of the prior
> probability of plaintext), we can educe that the plaintexts are
> equally likely,


No. This is simply false. We can educe nothing about the
probablities of the plaintext.

----- You do not think it through; the probability changes.

> for there is a one-to-one correspondence between all
> the plaintexts and keys for C=0.


So what? There is no relationship between the probabilities.

---what is perfect secrecy.

wangyong

Oct 23, 2007, 9:39:03 PM
> hagman

I counted the one's and zero's before posting, so I can assure you
that this method happens not to work for the specific (though
random) text I posted. :)

hagman
----- It is outside the scope of my paper.

wangyong

Oct 23, 2007, 9:48:26 PM

well

William Hughes

Oct 23, 2007, 10:16:49 PM
On Oct 23, 9:34 pm, wangyong <hell...@126.com> wrote:

> > According to the information that cryptanalysts got beforehand,
> > they can get the prior probability of plaintext as P(M=0) = 0.9 and
> > P(M=1) = 0.1. Later the ciphertext C=0 is intercepted. When only
> > considering C=0 and the cryptosystem (regardless of the prior
> > probability of plaintext), we can educe that the plaintexts are
> > equally likely,
>
> No. This is simply false. We can educe nothing about the
> probablities of the plaintext.
> -----you do not think roundly, the probability changes.


The probability distribution of the plaintext has no relationship at
all to the possible values of the cyphertext or their probabilities.
How can any
observation of the cyphertext provide any information about the
probability distribution of the plaintext?


- William Hughes

wangyong

unread,
Oct 23, 2007, 10:29:31 PM10/23/07
to

>The following discussion supposes that all the
> plaintexts, ciphertexts and keys are the same in length.
> We give a simple example of OPT to discuss the problem, plaintext
> space is M={0,1}, ciphertext space is C={0,1} and key space is K=
> {0,1}.

And note that the probability distribution on C is always the
same, no matter what the probability distribution on M.

So no observation of C can give us any information about the
probability distribution on M. In particular, observing


C=0 cannot change how likely we consider the uniform
distribution on M.

===========That is one of the good properties of OTP, but it does not
mean perfect secrecy.
The probability changes even when the value of C is unknown.
In my paper I state that the condition that changes the probability is
that the ciphertext is a fixed value, not a random variable.

> According to the information that cryptanalysts got beforehand,
> they can get the prior probability of plaintext as P(M=0) = 0.9 and
> P(M=1) = 0.1. Later the ciphertext C=0 is intercepted. When only
> considering C=0 and the cryptosystem (regardless of the prior
> probability of plaintext), we can educe that the plaintexts are
> equally likely,

No. This is simply false. We can educe nothing about the
probabilities of the plaintext.

-----where is the mistake?

> for there is a one-to-one correspondence between all
> the plaintexts and keys for C=0.

So what? There is no relationship between the probabilities.

------------that does not mean perfect secrecy.

wangyong

unread,
Oct 24, 2007, 2:54:15 AM10/24/07
to
I counted the ones and zeros before posting, so I can assure you
that this method happens not to work for the specific (though
random) text I posted. :)

hagman

wangyong

unread,
Oct 24, 2007, 5:45:40 AM10/24/07
to

The probability distribution of the plaintext has no relationship at
all to the possible values of the cyphertext or their probabilities.
How can any
observation of the cyphertext provide any information about the
probability distribution of the plaintext?

- William Hughes
It is the condition that the ciphertext is a fixed
value.
The paper has stated that view.

matt271...@yahoo.co.uk

unread,
Oct 24, 2007, 7:38:28 AM10/24/07
to

By this phrase "fixed value" that you keep using, do you just mean
that after we've observed the ciphertext we know for sure what it is?

If so, then that is the *WHOLE POINT* of conditional probabilities:
the value of some random variable becomes known (i.e. "fixed"), or is
constrained in some way, and this potentially affects other
probabilities, which then become conditional probabilities. It is not
some subtle effect that someone might have not noticed when applying
the conditional probability formula; it is the very essence of what
the formula is calculating.

William Hughes

unread,
Oct 24, 2007, 10:53:31 AM10/24/07
to

There is no connection between the probablity distribution
on the plaintext and that on the cyphertext. Thus, knowing the
value of the cyphertext does not tell us anything about the
distribution of the plaintext.

- William Hughes

wangyong

unread,
Oct 24, 2007, 12:16:30 PM10/24/07
to

> conditional probabilities must be the *a priori* probabilities, which
> are indeed equal, and the proof is sound.

-----------------you are wrong at this point. If you are right,

=============================================================
=====The probabilities are all the ones for the case where C is a random
variable, but not for C=0.

> You've obviously spent a while working with this symbolically, so you
> might like to try a different approach to satisfy yourself. From your
> affiliation I assume you are familiar with computer programming, so
> try running a Monte Carlo-style simulation such as the following. You
> will find that always M_equals_0 / total ~ p, and M_equals_1 / total ~
1 - p. This demonstrates that the probabilities of M=0 and M=1 are,
> as expected, unaffected by the fact that C=0.

====as I have pointed out above.

matt271...@yahoo.co.uk

unread,
Oct 24, 2007, 4:05:20 PM10/24/07
to

I don't think so.

> if you are right
>
> =============================================================
> =====The probabilities are all the ones in the case   c is a random
> variable, but not C=0.
>
> > You've obviously spent a while working with this symbolically, so you
> > might like to try a different approach to satisfy yourself. From your
> > affiliation I assume you are familiar with computer programming, so
> > try running a Monte Carlo-style simulation such as the following. You
> > will find that always M_equals_0 / total ~ p, and M_equals_1 / total ~
> > 1 - p. This demonstrates that the probabilities of M=0 and M=1 are,
> > as expected, unaffected by the fact that C=0.
>
> ====as I have pointed out above.

Huh???? So, now you *agree* that the probabilities of M=0 and M=1 are
unaffected by observing C=0? Then you should have no difficulty in
also agreeing that these probabilities are unaffected by observing C=1,
and, therefore, that they are unaffected by observing C, whatever
value we may find that C takes. So, intercepting the message gives us
no additional information about M. What's the problem?

I get the impression that you are inventing complexities where none
exist. EITHER C is unknown and random (and M has its prior, or
initial, distribution), OR C is known and fixed (and M has its
conditional distribution). In the scenario we are discussing, the
conditional distribution of M (after C is observed) is exactly the
same as the prior distribution of M (before C is observed), whatever
value of C actually is observed. And that's all there is to it.
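The Monte Carlo simulation proposed earlier in the thread was snipped from the quotes; a minimal sketch of the idea in Python (the function name and parameters are illustrative, not from the original post) would be:

```python
import random

def simulate(p, trials=200_000, seed=1):
    """Estimate P(M=0 | C=0) for the one-bit one-time pad.

    M=0 with prior probability p, key K uniform on {0,1}, C = M XOR K.
    Only trials in which the observed ciphertext is C=0 are counted.
    """
    rng = random.Random(seed)
    m_equals_0 = m_equals_1 = 0
    for _ in range(trials):
        m = 0 if rng.random() < p else 1
        k = rng.randrange(2)
        c = m ^ k
        if c == 0:                      # condition on the observed ciphertext
            if m == 0:
                m_equals_0 += 1
            else:
                m_equals_1 += 1
    total = m_equals_0 + m_equals_1
    return m_equals_0 / total, m_equals_1 / total

est0, est1 = simulate(0.9)
# est0 comes out close to 0.9 and est1 close to 0.1: conditioning on
# C=0 leaves the prior distribution of M unchanged.
```

This is what the snipped post appears to describe: M_equals_0 / total ~ p and M_equals_1 / total ~ 1 - p.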

David Bernier

unread,
Oct 24, 2007, 5:52:04 PM10/24/07
to

[matt271829:]



> I don't think so.
>
> > if you are right
> >
> > =============================================================
> > =====The probabilities are all the ones in the case c is a random
> > variable, but not C=0.
> >
> > > You've obviously spent a while working with this symbolically, so you
> > > might like to try a different approach to satisfy yourself. From your
> > > affiliation I assume you are familiar with computer programming, so
> > > try running a Monte Carlo-style simulation such as the following. You
> > > will find that always M_equals_0 / total ~ p, and M_equals_1 / total ~
> > > 1 - p. This demonstrates that the probabilities of M=0 and M=1 are,
> > > as expected, unaffected by the fact that C=0.
> >
> > ====as I have pointed out above.

[matt271829:]

I thought of an experiment that might help.
If I give an adversary the ciphertext C and a
randomly chosen string of data with the same length
as C, called D, then what can the adversary do
to tell apart the real ciphertext C from the fake
one D?

I don't see how the adversary can tell them apart, or
say "this one has a better chance of being the
real ciphertext."

David Bernier
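One way to run the experiment described above is to compare the empirical distribution of real OTP ciphertexts (from a heavily biased plaintext source) against uniformly random strings; the sample sizes and bias below are my own illustrative choices:

```python
import random
from collections import Counter

rng = random.Random(42)
N, L = 50_000, 4  # number of samples, bit-length per string (illustrative)

def real_ciphertext():
    # Heavily biased plaintext: each bit is 0 with probability 0.9.
    m = tuple(0 if rng.random() < 0.9 else 1 for _ in range(L))
    k = tuple(rng.randrange(2) for _ in range(L))   # uniform one-time pad
    return tuple(mi ^ ki for mi, ki in zip(m, k))

def fake_ciphertext():
    return tuple(rng.randrange(2) for _ in range(L))  # uniform random string

real = Counter(real_ciphertext() for _ in range(N))
fake = Counter(fake_ciphertext() for _ in range(N))

# Total variation distance between the two empirical distributions.
outcomes = set(real) | set(fake)
tvd = 0.5 * sum(abs(real[x] / N - fake[x] / N) for x in outcomes)
# tvd stays near zero: the real ciphertexts and the fake random strings
# are statistically indistinguishable, exactly as Bernier suggests.
```

The small residual distance is pure sampling noise; XOR with a uniform independent key makes the real ciphertext exactly uniform regardless of the plaintext bias.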

wangyong

unread,
Oct 24, 2007, 8:55:20 PM10/24/07
to
On Oct 24, 7:32 am, matt271829-n...@yahoo.co.uk wrote:
> (just kidding)

wangyong

unread,
Oct 24, 2007, 11:44:02 PM10/24/07
to
I thought of an experiment that might help.
If I give an adversary the ciphertext C and a
randomly chosen string of data with the same length
as C, called D, then what can the adversary do
to tell apart the real ciphertext C from the fake
one D?

I don't see how the adversary can tell them apart, or
say "this one has a better chance of being the
real ciphertext."


David Bernier
----------------This is similar to the above statement, but that does
not mean perfect secrecy.

wangyong

unread,
Oct 24, 2007, 11:51:24 PM10/24/07
to

-----It is hard to understand. What is the whole point?

wangyong

unread,
Oct 24, 2007, 11:53:41 PM10/24/07
to
There is no connection between the probability distribution
on the plaintext and that on the cyphertext. Thus, knowing the
value of the cyphertext does not tell us anything about the
distribution of the plaintext.

----you should follow the definition of perfect secrecy. I state that
OTP has good attributes.

wangyong

unread,
Oct 25, 2007, 12:01:14 AM10/25/07
to
Huh???? So, now you *agree* that the probabilities of M=0 and M=1 are
unaffected by observing C=0? Then you should have no difficulty in
also agreeing that these probabilities are unaffected by observing
C=1,
and, therefore, that they are unaffected by observing C, whatever
value we may find that C takes. So, intercepting the message gives us
no additional information about M. What's the problem?
-----------your mistake lies in not following the definition of perfect
secrecy. The probability changed in the whole process. I point out that
OTP has good attributes.

I get the impression that you are inventing complexities where none
exist. EITHER C is unknown and random (and M has its prior, or
initial, distribution), OR C is known and fixed (and M has its
conditional distribution). In the scenario we are discussing, the
conditional distribution of M (after C is observed) is exactly the
same as the prior distribution of M (before C is observed), whatever
value of C actually is observed. And that's all there is to it.

-------------your prior is not "C is unknown and random (and M
has its prior)",

but Shannon's is. You do not read this paper carefully; you perpetrate a
fraud by substitution.
We discuss perfect secrecy.

matt271...@yahoo.co.uk

unread,
Oct 25, 2007, 9:58:47 AM10/25/07
to

This sentence does not make sense. If you have an "EITHER" you must
also have an "OR".

> you do not see this paper clearly.you perpetrate a
> fraud by substitute.
> we discuss perfect secrecy.

From some of your follow-up replies, I'm wondering if your English
(which seemed reasonably intelligible in the main text of your
original posts) is actually good enough to continue this discussion.
Unfortunately I do not understand any Chinese.

But let's try one more thing. Let me repeat the definition of "perfect
secrecy" that you gave:

<begin quote>
A necessary and sufficient condition for perfect secrecy can be found
as follows: We have by Bayes' theorem

P_E(M) = P(M) P_M(E) / P(E)

in which:

P(M) = a priori probability of message M.
P_M(E) = conditional probability of cryptogram E if message M is chosen,
i.e. the sum of the probabilities of all keys which produce cryptogram
E from message M.
P(E) = probability of obtaining cryptogram E from any cause.
P_E(M) = a posteriori probability of message M if cryptogram E is
intercepted.

For perfect secrecy P_E(M) must equal P(M) for all E and all M. Hence
either P(M)=0, a solution that must be excluded since we demand the
equality independent of the values of P(M), or P_M(E) = P(E)
for every M and E. Conversely if P_M(E) = P(E) then P_E(M) = P(M)
and we have perfect secrecy. Thus we have the result:
Theorem 1. A necessary and sufficient condition for perfect secrecy is
that P_M(E) = P(E) for all M and E. That is, P_M(E) must be independent
of M.
<end quote>

Assume, as before, that the a priori probabilities are P(M=0) = p,


P(M=1) = 1-p, P(K=0) = 1/2, P(K=1) = 1/2, K and M independent. K=0
maps 0->0, 1->1, and K=1 maps 0->1, 1->0.

Now please show explicitly how you calculate each of the relevant
quantities mentioned in your above definition of perfect secrecy, and
explain how you conclude that perfect secrecy is not achieved.
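For reference, the requested quantities can be enumerated exactly for this two-element system. The sketch below (function names are mine; the prior p = 9/10 is one illustrative choice) checks both Theorem 1's condition and the posterior:

```python
from fractions import Fraction

p = Fraction(9, 10)                  # illustrative prior; any p works
P_M = {0: p, 1: 1 - p}               # a priori probabilities of the message
P_K = {0: Fraction(1, 2), 1: Fraction(1, 2)}  # uniform key, C = M XOR K

def P_given_M(E, M):
    """P_M(E): sum of probabilities of all keys mapping message M to E."""
    return sum(pk for K, pk in P_K.items() if M ^ K == E)

def P_of_E(E):
    """P(E): total probability of cryptogram E from any cause."""
    return sum(P_M[M] * P_given_M(E, M) for M in P_M)

def P_post(M, E):
    """P_E(M): a posteriori probability of M given E, by Bayes' theorem."""
    return P_M[M] * P_given_M(E, M) / P_of_E(E)

for E in (0, 1):
    for M in (0, 1):
        # Theorem 1's condition: P_M(E) = P(E), independent of M...
        assert P_given_M(E, M) == P_of_E(E) == Fraction(1, 2)
        # ...and hence the posterior equals the prior: perfect secrecy.
        assert P_post(M, E) == P_M[M]
```

Using exact rationals rather than floats makes the equalities exact, so the check is not obscured by rounding.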

William Hughes

unread,
Oct 25, 2007, 12:17:20 PM10/25/07
to

You also state that knowing the value of the cyphertext does
tell us something about the distribution of the plaintext (i.e.
if all we know is that the value of the cyphertext is 0
then we know that the probability distribution of the plaintext is
uniform).
Are you now admitting this statement is false?

- William Hughes

Puppet_Sock

unread,
Oct 25, 2007, 12:41:29 PM10/25/07
to
On Oct 22, 12:22 am, wangyong <hell...@126.com> wrote:
> Confirmation of Shannon's Mistake about Perfect Secrecy of One-time-
> pad
[snip]

Carry out item six.
Socks

wangyong

unread,
Oct 26, 2007, 11:40:27 PM10/26/07
to

---------------------where? Cite it.

wangyong

unread,
Oct 26, 2007, 11:41:18 PM10/26/07
to
[snip]

Carry out item six.
Socks

------what?

wangyong

unread,
Oct 26, 2007, 11:46:57 PM10/26/07
to
EITHER C is unknown and random (and M has its prior, or
initial, distribution), OR C is known and fixed (and M has its
conditional distribution).

your above statement is wrong; you do not see this clearly.

There are three conditions:

1. C is unknown and random (and M has its prior, or initial,
distribution)

2. C is unknown and fixed

3. C is known and fixed

my paper has discussed the three conditions.

William Hughes

unread,
Oct 27, 2007, 7:51:16 AM10/27/07
to


I did give a paraphrase:

if all we know is that the value of the cyphertext is 0
then we know that the probability distribution of
the plaintext is uniform

The actual quote from your original post in this thread was

When only considering C=0 and the cryptosystem
(regardless of the prior probability of plaintext),
we can educe that the plaintexts are equally likely

Are you now admitting this statement is false?

- William Hughes

matt271...@yahoo.co.uk

unread,
Oct 27, 2007, 9:35:45 AM10/27/07
to

From the perspective of the person intercepting the message (which is
the perspective we're interested in), there are two cases: either the
message has been intercepted (case 3) or it hasn't (case 1).

In case 1 it doesn't matter if the message has actually been written.
Is that what you're getting at? The intercepter has some prior
probabilities from whatever existing knowledge he possesses, and
nothing relating to the decision about the contents of the message, or
the writing or despatching of the message, all of which events are
presumed unknown to the intercepter, can change these. Intercepting
the message potentially *could* change the probabilities, but we know
in the OTP case that actually it doesn't.

Of course, the intercepter's probabilities may change over time for
other reasons, such as information received from other sources, but
whatever they are when the message is intercepted, the act of
intercepting and reading the encrypted message does not change them.

hagman

unread,
Oct 27, 2007, 10:24:11 AM10/27/07
to

The way I see it, the problem of your paper is that it tries to show
that OTP is not perfectly secure.
BTW, as you keep referring to fixed cypher texts, you may try to
decrypt only the part of my message consisting of all 0's;
then try the part consisting of all 1's.

hagman

matt271...@yahoo.co.uk

unread,
Oct 27, 2007, 9:40:45 PM10/27/07
to

In fact, rather than bogging down in detailed sequences of events, it
may be simpler just to focus on the last statement above.

At the point immediately before the interception of the message, the
message probabilities, from the intercepter's point of view, reflect
the intercepter's knowledge at that time. Immediately after the OTP-
encrypted message has been intercepted and read, the intercepter's
knowledge has not increased: the message probabilities are identical
to what they were before.

(When I say "the intercepter's knowledge has not increased" I am
ignoring the possibility that the intercepter's knowledge is enhanced
by the very fact that a message has been sent at all, or by the length
of message, or by the mode of message transmission etc. I assume that
all these things are outside the scope of what's being discussed.)

wangyong

unread,
Oct 28, 2007, 10:47:29 AM10/28/07
to

another reply I posted is not seen.

wangyong

unread,
Oct 28, 2007, 10:50:30 AM10/28/07
to
When only considering C=0 and the cryptosystem
(regardless of the prior probability of plaintext),
we can educe that the plaintexts are equally likely


Are you now admitting this statement is false?


================================When only considering C=0 and the
cryptosystem (regardless of the prior probability of plaintext):

can you see the precondition?

wangyong

unread,
Oct 28, 2007, 10:54:51 AM10/28/07
to
> intercepting and reading the encrypted message does not change them.

you are unwilling to accept the important condition. It is just that
condition that affects the probability.
One of my replies to you did not appear.

wangyong

unread,
Oct 28, 2007, 11:10:57 AM10/28/07
to
On Oct 27, 10:24 am, hagman <goo...@von-eitzen.de> wrote:
> On 24 Okt., 08:54, wangyong <hell...@126.com> wrote:
>
> > I counted the one's and zero's before posting, so you I can assure
> > you
> > that this method happens not to work for the specific (though
> > random) text I posted. :)
>
> > hagman
> > -----it is out of the problem of my paper.
>
> The way I see it, the problem of your paper is that it tries to show
> that OTP is not perfectly secure.

----Yes, but what is my mistake?


> BTW, as you keep referring to fixed cypher texts, you may try to
> decrypt only the part of my message consisting of all 0's;
> then try the part consisting of all 1's.
>

Is what you discussed perfect secrecy?

wangyong

unread,
Oct 28, 2007, 11:25:28 AM10/28/07
to
> all these things are outside the scope of what's being discussed.)
-----The probability in OTP is very complex;
case 2 should be considered.

David R Tribble

unread,
Oct 28, 2007, 11:36:37 AM10/28/07
to
wangyong wrote:
> If I understand you right, then you can get more characters right
> than by guessing from the encrypted message found below.
>
> I use the following C code:
> [snipped]

> to generate a 1000 character plaintext with P(M=0) = 0.9:
> and a onetime pad

Why are you using a pseudo-random sequence to generate the
OTP data? Such a sequence is entirely predictable once the
first few bytes have been deciphered, and defeats the whole
design of a OTP encryption scheme.
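Tribble's point can be made concrete. A linear congruential generator (the classic weak `rand()` construction; the constants below are glibc-style values used purely for illustration) exposes its entire future keystream to anyone who sees a single output:

```python
# Parameters of a typical linear congruential generator (illustrative).
A, C, MOD = 1103515245, 12345, 2**31

def lcg(state):
    """Infinite stream of LCG outputs; each output IS the next state."""
    while True:
        state = (A * state + C) % MOD
        yield state

# A "one-time pad" generated from a secret seed...
gen = lcg(123456789)
keystream = [next(gen) for _ in range(10)]

# ...is fully predictable once any single output is known: an attacker
# who learns keystream[0] can regenerate everything that follows.
attacker = lcg(keystream[0])
predicted = [next(attacker) for _ in range(9)]
assert predicted == keystream[1:]
```

This is why the pad must be truly random: the security proof of OTP assumes keys that carry no internal structure, and an LCG stream violates that assumption completely.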

William Hughes

unread,
Oct 28, 2007, 12:24:40 PM10/28/07
to


You have agreed

A: No observation on the cyphertext gives you
any information about the probability distribution
on the plaintext,

Even when not considering a prior probability
on the plaintext ("regardless of the prior
probability of plaintext") the observation of the
value of the cyphertext tells you nothing about
the probability distribution on the plaintext.
In particular, you cannot "educe that the plaintexts
are equally likely"

- William Hughes

wangyong

unread,
Oct 28, 2007, 1:09:32 PM10/28/07
to

===========================what you discuss is under the condition
that the ciphertext is random;
under the condition that the ciphertext is fixed, even though unknown,
K and M are not independent.

matt271...@yahoo.co.uk

unread,
Oct 28, 2007, 2:27:28 PM10/28/07
to


I don't understand what case 2 ("unknown and fixed") means. Does it
refer to the time after the message has been written and encrypted,
and yet before the message is intercepted?

Please explain how case 2 arises.

wangyong

unread,
Oct 28, 2007, 10:31:28 PM10/28/07
to
Analyses on the Origins of Shannon's Blemish about OTP
Yong WANG
(School of Computer and Control, GuiLin University Of Electronic
Technology ,Guilin City, Guangxi Province , China, 541004)
hel...@126.com
Abstract: This paper further analyzes the origins of Shannon's proof
that one-time-pad is perfectly secure, based on the prior analyses. The
limitations of information theory and probability theory are analyzed.
The key lies in the fact that the two theories take probability as a
fixed value and that the conditions are not carefully recognized. It is
pointed out that these are the origins of the blemish.
Keywords: one-time-pad; cryptography; perfect secrecy; information
theory; probability theory
1. Introduction
Shannon put forward the concept of perfect secrecy and proved that one-
time-pad (one-time system, OTP) was perfectly secure [1, 2]. For a
long time, OTP has been thought to be unbreakable and is still used to
encrypt high-security information. In literature [3], an example was
given to prove that OTP was not perfectly secure. In literature [4],
detailed analyses of the mistake in Shannon's proof were given. It
was proven that more requirements were needed for OTP to be perfectly
secure and that homophonic substitution could make OTP approach perfect
secrecy [5]. Literature [6] analyzed the problem and gave ways to
disguise the length of the plaintext. In literature [7], a cryptanalysis
method based on probability was presented, and the method was used to
attack one-time-pad. Literature [4] considered a special
understanding under which OTP could be thought perfectly secure if
some added conditions were satisfied. In literature [8], that
special understanding was confirmed not to be Shannon's notion. It
was pointed out that the conditions in OTP could not coexist: when all
the conditions were considered, some of the conditions had to change, so
it was not proper to use these conditions when computing the final
posterior probability. The above works provoked extensive debates, so
the problem and its origins should be further analyzed in detail.
2. Debate and Analysis on Perfect Secrecy of OTP
As these works relate to Shannon's mistakes, our analyses should be
prudent and long-tested. We have sent the papers to many scholars,
released some of the papers on the internet, discussed the
problem with some scholars, carefully analyzed all the opposing views,
and replied to all of them. Some opposing views were due to
misapprehensions of my papers, such as confusion of proof with reduction
to absurdity, confusion of my conclusion with Shannon's conclusion, or
taking the probability obtained when considering only partial conditions
as the posterior probability. Some opposing views cited the proofs of
perfect secrecy of OTP. It was found that the proofs were largely
identical, with minor differences, and were wrong because they took
different conditions to be the same conditions [9].
There is a kind of opposing view that directly and erroneously
believes that the probability is unchanged; that is to say, since the
plaintexts already have a prior probability distribution, that
probability distribution can change no longer. It is proved that this
statement finally brings about a contradiction when the ciphertext as a
fixed value and equiprobable keys are considered, and it is also
inconsistent with Shannon's argumentation, for he got the result that
the posterior probability is equiprobable [2]. But in-depth analyses of
the problem are still necessary.
Firstly, if this argument is proved to be established, we do not have
to prove and can directly get the conclusion that after the ciphertext
is intercepted the posterior probability of each plaintext is equal to
its prior probability for the probability unchanges. Namely the
cryptosystem is automatically perfectly secure.
Secondly, we can also understand it from the view of the probability.
Because the prior probability we call P(x) is gained from the
condition when only the context of communications is known, and
subsequently when gaining the ciphertext and cryptography interrelated
conditions we call all of them as y, the final probability can be seen
as conditional probability P (x | y), the final conditional
probability is not the same as prior probability unless the two events
are independent. It seems Shannon have proven that two events are
independent. But Shannon only took the value of ciphertext as y and
ignored the existence of the condition that ciphertext is a fixed
value, but not a random variable. It is just this condition that
influences the probabilities of plaintexts in company with the
cryptosystem.
Thirdly, our knowledge of events is often unreliable and incomplete.
Although sometimes an event is certain, if our knowledge of the event
is incomplete, we may get random results [9]. For example, it is
certain whether it rained or not in some place yesterday, but when we
do not hold complete conditions and information about that, we can
only get a randomly uncertain result. The result cannot be taken as the
probability of rain yesterday. Strictly speaking, it should be taken
as the probability of rain under all the grasped conditions and
information. As a stopgap, we expediently take this probability as the
probability of rain yesterday for the moment, when no more information
is given. The probability under one condition is seldom equal to the
probability under another condition. In the case of OTP, based on
communication contexts, we can gain a prior probability distribution
of the plaintexts. As we have no further understanding and no further
related information about the plaintext, we have to use the prior
probability distribution for the moment [10]. After gaining the
ciphertext, we can try to get more information according to the
ciphertext and the probability distribution of the keys. At that time
the information is still incomplete, but the cryptanalyst will not
abandon the use of such information; he will make full use of such
information and conditions to obtain a more reliable probability. When
considering only that the ciphertext is given, that there is a
one-to-one correspondence between all the keys and plaintexts, and that
all keys are equally likely, we can educe that the plaintexts are
equally likely. That is inconsistent with the prior probability, and
hence a fusion and compromise of the probabilities is needed. The
compromise probability may be more perfect and reliable, but it is not
equal to the prior probability.
Fourthly, we can illustrate the problem that we get different
probabilities under different conditions, and that the probability of
an event changes with the corresponding conditions, from a new
perspective. We want to determine the probability of an event m in the
situation where certain conditions occur together. These conditions may
be relevant or irrelevant to the probability. We can select all of
the influencing conditions, assumed to be c1, c2, ..., cn. We assume
that the probability is determined by the n influencing conditions
c1, c2, ..., cn; then the probability can be expressed as
P(m) = f(c1, c2, ..., cn)
For a sophisticated case, P(m) may be a random variable. In order to
be convenient for the analysis, we consider the value of the above
function to be fixed. When certain conditions are still unknown, the
probability P(m) itself is not fixed; we may get the expectation of
P(m) under the imperfect conditions. But the expectation (probability)
gained from imperfect conditions is not reliable and is seldom equal
to the probability gained from the complete conditions. The more
conditions we know, the more reliable the corresponding probability
is. Due to the imperfect conditions, the probability gained from those
conditions does not equal the probability gained from complete
conditions. And therefore the conflicts under these imperfect
conditions are understandable, as they do not perfectly represent the
probability under complete conditions. In fact, the conditions are
usually imperfect in probability theory. It is not strict to take the
probability in that case as the probability under complete conditions.
In practice, it is easy to ignore the existence of this unequal
substitution, whereby confusion and absurdity may appear.
Another kind of opposing view argued that the plaintext and ciphertext
in OTP are completely independent, and thus OTP is perfectly secure.
It argued that any ciphertext might be regarded as the same for OTP
and that therefore the plaintext and ciphertext in OTP are completely
independent. The independence can be explained from the angle of
probability theory or from the angle of usual understanding; this kind
of opposing view confuses the two angles.
From the angle of probability theory, if the plaintext and ciphertext
in OTP are completely independent, then OTP is perfectly secure.
But from the angle of probability theory, that any ciphertext may be
regarded as the same for OTP does not mean that the plaintext and
ciphertext in OTP are completely independent, as probability theory
gives a strict definition of independence. The above conclusion may be
correct when it is considered from the angle of usual understanding.
For example, any ciphertext changes the probability of the plaintext in
the same way. In this case, any ciphertext may be regarded as the same,
for each gives the same influence on the probability of the plaintext.
But this does not mean perfect secrecy, for the probability changes. It
is the same in OTP.
3. Origin analyses of probability theory and information theory

The mistakes in Shannon's proof have certain relations with the
limitations of probability theory. Moreover, Shannon did not realize
the limitations of probability theory when developing his information
theory.
From the view of probability theory, he realized the random
uncertainty of events and expressed this uncertainty with probability,
but ignored the random uncertainty of probability itself. It was not
directly stated that the probability was a fixed value [11],
but it can be seen from many formulas in probability theory and
information theory that probability is always taken as a fixed value;
otherwise, the formulas may be impossible to compute. For example, the
formula for entropy would be impossible to compute if probability were
a random variable. The case where probability is a random variable is
universal, as a fixed value is only a special case of a random
variable. For instance, a probability that comes from incomplete or
unreliable conditions has more than one possible value; it is not
fixed, so the probability is a random variable and has random
uncertainty correspondingly [10]. The expectation of a probability
that is a random variable is a simple static token, with merely local
significance, but its concrete probability distribution and its
degree of concentration are of great significance to the compromise of
probabilities and the reliability of information. As probability
is always treated as a fixed value in probability theory and
information theory, this causes the limitation that the two theories
cannot solve problems in which the probability is a random variable. It
also causes a few scholars to believe that once the probability is
given, it will not change any more, for the probability is
fixed. Generally speaking, for fixed probability, neither the analysis
of the reliability of information itself nor the fusion of unreliable
and incomplete information is doable. Most information in reality is
not absolutely reliable or perfect; we should compromise and fuse
different pieces of information. As information is expressed by
probability, information is unchangeable if the corresponding
probability is considered a fixed value. Taking probability as a fixed
value is one of the fundamental reasons why information theory cannot
be used to research the reliability of information itself and
information fusion, while it can be used to research reliable
communication.
We usually get information from the imperfect conditions we know, and,
as a stopgap, take information under imperfect conditions as
information under complete conditions. When imperfect information under
imperfect conditions is taken as information under complete
conditions, the information is unreliable, for it is not
the same as the information under complete conditions; so the
imperfect information can be taken as unreliable information. When we
know utterly nothing about an event, we cannot know how many
possible random values there are, not to mention the corresponding
probabilities of the possible values. Therefore, the prior probability
we obtain is based on the known conditions, and it is also a
conditional probability. Therefore the information and probability are
relative to our known conditions. But this unequal substitution is
never pointed out directly in probability theory and information
theory. In practice, it is easy to ignore the existence of the
unequal substitution of the imperfect one for the perfect one, whereby
confusion and absurdity may appear. When many conditions that
influence the probability of an event are considered and the
conditions are parallel, not in series, there may be more than one
prior probability or posterior probability according to present-day
probability theory. For example, some conditions can influence the
birth gender of a baby. Under condition A, the probability of a male
baby P(M|A) is c, and under condition B, the probability of a male baby
P(M|B) is d. Now a question occurs: if both of the above conditions
exist, how can we gain the probability of a male baby P(M|A, B)? Unlike
other problems in present-day probability theory, in this case the two
conditions are parallel, for when A occurs, B is not considered, and
when B occurs, A is not considered, and we do not know the conditional
probability P(A|B) or P(B|A). It is the same in the example of the
one-time pad: if the final posterior probability is considered, it is
the compromise of the two probabilities under the two conditions. The
problem of compromise is not settled in present-day probability theory
and is very complex. When there are parallel conditions, there may be
more than one probability, and the probabilities may be conflicting.
Uncertainty theory and information fusion theory attempt to solve the
problem, but they are not strict and their algorithms are approximate,
not accurate; what's more, there are absurdities in some
algorithms. It is necessary to develop a relevant accurate theory from
the perspective of probability theory.
In addition, probability theory does not recognize the complexity of
conditions; some conditions may be hidden and may interact, so it is
necessary to identify and strictly distinguish each condition, and to
realize that the probabilities under different conditions are not the
same. For example, the probability is easy to calculate when only
condition x or only condition y holds, but when x and y hold together,
calculating the corresponding probability is difficult. This makes the
ready-made formulas of probability theory hard to use when the
conditional probability is unknown. Information theory does not
consider these issues either, so it cannot be applied in fields such
as information fusion and artificial intelligence. Because of this
problem, Shannon failed to discover and distinguish these different
conditions in his proof [10].
From Shannon's result PE(M) = 1/n in his example, we can see that
Shannon confused the posterior probability under all the conditions
with the imperfect probability obtained when only the ciphertext and
the OTP were considered (the prior probability, as a condition, was
not considered; otherwise the result is impossible unless the prior
happens to be P(M) = 1/n). That may be owed to the limitation of
probability theory, which neglects the discrimination and recognition
of different conditions.
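The difficulty with parallel conditions can be made concrete with a small numerical sketch (all numbers, and the uniform distribution over the condition pairs, are hypothetical and chosen only for illustration): two joint distributions can agree on P(M|A) and P(M|B) and yet disagree on P(M|A,B), so the fused probability is genuinely underdetermined without further information such as P(A|B).

```python
from itertools import product

# Hypothetical setup: M is the event "male baby"; A and B are two parallel
# conditions. We put a uniform distribution over the four (A, B) pairs and
# choose P(M=1 | A=a, B=b) freely -- all numbers are illustrative only.

def build_joint(cond_m):
    """Joint P(M, A, B) with P(A, B) uniform and given P(M=1 | A, B)."""
    joint = {}
    for a, b in product((0, 1), repeat=2):
        joint[(1, a, b)] = 0.25 * cond_m[(a, b)]
        joint[(0, a, b)] = 0.25 * (1 - cond_m[(a, b)])
    return joint

def p_m_given(joint, a=None, b=None):
    """P(M=1 | A=a, B=b), marginalizing over any unspecified condition."""
    num = sum(p for (m, aa, bb), p in joint.items()
              if m == 1 and a in (None, aa) and b in (None, bb))
    den = sum(p for (m, aa, bb), p in joint.items()
              if a in (None, aa) and b in (None, bb))
    return num / den

# Two joints that agree on P(M|A) and P(M|B) ...
j1 = build_joint({(1, 1): 0.8, (1, 0): 0.4, (0, 1): 0.2, (0, 0): 0.5})
j2 = build_joint({(1, 1): 0.4, (1, 0): 0.8, (0, 1): 0.6, (0, 0): 0.5})

print(p_m_given(j1, a=1), p_m_given(j2, a=1))            # both 0.6
print(p_m_given(j1, b=1), p_m_given(j2, b=1))            # both 0.5
# ... and yet disagree once A and B are combined:
print(p_m_given(j1, a=1, b=1), p_m_given(j2, a=1, b=1))  # 0.8 vs 0.4
```

Both joints give P(M|A) = 0.6 and P(M|B) = 0.5, yet P(M|A,B) is 0.8 in one and 0.4 in the other, which is exactly the indeterminacy described above.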
In the case of OTP, the uncertainty of the plaintext increases after
the compromise and fusion, but the conditional entropy in information
theory does not increase; this seems to contradict information theory.
In fact, Shannon's conditional entropy is just one kind of weighted
average of a series of conditional entropies. It has been pointed out
that Shannon's conclusion is not absolutely correct [12].
From the above examples we can see that the reliability and
completeness of information are not considered in information theory.
Yet reliability and completeness are very important: information is
valuable only when it is reliable to a certain extent; otherwise it is
worthless. However, the relevant theories of information reliability
and completeness (such as uncertainty theory and information fusion)
are far from complete and systematic; they offer only approximate
algorithms, some of which even contain absurdities. Probability theory
still lags behind in the study of these theories.
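The statement that Shannon's conditional entropy is a weighted average of per-condition entropies can be checked directly. The sketch below uses a hypothetical joint distribution; it confirms that H(X|Y), computed as the p(y)-weighted average of the entropies H(X|Y=y), agrees with the chain-rule form H(X,Y) - H(Y).

```python
from math import log2

# Hypothetical joint distribution p(x, y) over X, Y in {0, 1}
joint = {(0, 0): 0.4, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.3}

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginal p(y)
p_y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

def h_x_given(y):
    """H(X | Y = y): entropy of the conditional distribution for one y."""
    cond = {x: joint[(x, y)] / p_y[y] for x in (0, 1)}
    return entropy(cond)

# Shannon's H(X|Y) is exactly the p(y)-weighted average of the H(X|Y=y),
# and it matches H(X,Y) - H(Y).
h_cond = sum(p_y[y] * h_x_given(y) for y in (0, 1))
print(h_cond, entropy(joint) - entropy(p_y))
```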

4. Conclusion
This paper gives a further and deeper analysis of the flaw in
Shannon's proof and points out its root; the debates on the problem
with several experts and scholars are also analyzed. The conclusion is
that OTP is not, as commonly thought, perfectly secure and
unbreakable, although it still has good statistical properties and
superior security. The paper also traces the origin of Shannon's
mistake to probability theory and information theory, where
probability is always taken as a fixed value rather than a random
variable. This provides a useful direction for developing probability
theory and information theory so that they can solve more problems in
reality. Shannon made many great contributions to information theory
and cryptography, but given the limitations of probability theory and
of the knowledge of his time, it would be hard to guarantee that so
many contributions contain no mistakes at all.

References
[1]. Bruce Schneier. Applied Cryptography, Second Edition: Protocols,
Algorithms, and Source Code in C [M]. John Wiley & Sons, Inc., 1996.
[2]. C. E. Shannon. Communication Theory of Secrecy Systems [J]. Bell
System Technical Journal, v.28, n.4, 1949, 656-715.
[3]. Yong WANG. Security of One-time System and New Secure System [J].
Netinfo Security, 2004, (7): 41-43.
[4]. Yong WANG, Fanglai ZHU. Reconsideration of Perfect Secrecy [J].
Computer Engineering, 2007, 33(19).
[5]. Yong WANG. Perfect Secrecy and Its Implement [J]. Network &
Computer Security, 2005, (05).
[6]. Yong WANG, Fanglai ZHU. Security Analysis of One-time System and
Its Betterment [J]. Journal of Sichuan University (Engineering Science
Edition), 2007, supp. 39(5): 222-225.
[7]. Yong WANG, Shengyuan ZHOU. On Probability Attack [J]. Information
Security and Communications Privacy, 2007, (8): 39-40.
[8]. Yong WANG. Confirmation of Shannon's Mistake about Perfect
Secrecy of One-time-pad. http://arxiv.org/abs/0709.4420.
[9]. Yong WANG. Mistake Analyses on Proof about Perfect Secrecy of
One-time-pad. http://arxiv.org/abs/0709.3334.
[10]. Yong WANG. On Relativity of Probability. www.paper.edu.cn, Aug
27, 2007.
[11]. Yong WANG. On Relativity of Information. Presented at the First
National Conference on Social Information Science, Wuhan, China, 2007.
[12]. Yong WANG. Question on Conditional Entropy. http://arxiv.org/pdf/0708.3127.

wangyong

Oct 28, 2007, 10:32:03 PM

I never use a pseudo-random sequence to generate the OTP data.

wangyong

Oct 28, 2007, 10:34:09 PM

> Even when not considering a prior probability
> on the plaintext ("regardless of the prior
> probability of plaintext") the observation of the
> value of the cyphertext tells you nothing about
> the probability distribution on the plaintext.
> In particular, you cannot "educe that the plaintexts
> are equally likely"
>
> - William Hughes

On Relativity of Probability


Yong WANG
(School of Computer and Control, GuiLin University Of Electronic
Technology, Guilin, Guangxi Province, China, 541004)
hel...@126.com

Abstract-This paper points out the limitations of present probability
theory, which fails to recognize the following characteristics of
probability: first, the division between prior probability and
posterior probability is not absolute, since a prior probability is
actually a conditional probability; second, probability is not
absolutely fixed, and it may be random; third, probability evolves as
the conditions increase; finally, probability is complicated.
Meanwhile, the paper analyzes some misuses of probability theory in
application.
Index Terms-probability theory, relativity, conditional probability,
information theory

MR 60A10
1. Introduction
The present probability theory cannot solve all problems arising from
probability. For example, the probability of an event may differ under
different conditions or for different people, yet probability theory
does not say how to fuse and compromise between such probabilities.
These issues have not been studied and solved because of the
limitations of probability theory itself. The present probability
theory is based on the Kolmogorov axiomatic system [1]. That system
has its own limitations, as scholars such as B. de Finetti and Xiong
Daguo have pointed out [2]. But the relativity of probability has not
been recognized: some factors, such as probability itself, are treated
as fixed and absolute, and so the theory has limitations.

2. Relativity of probability and limitations of probability theory
The present probability theory does not take into consideration that
conditions vary. For instance, people may hastily assign a conditional
probability value without hesitation, although the value is usually
unknown and may be random. Present probability theory cannot solve a
problem involving multiple conditions when the conditional
probabilities are unknown. The value of a probability may be random,
but we usually describe probability as a fixed value, so probability
is treated as fixed whenever probability theory is applied; yet only
under limited conditions is the probability fixed. That results in
many limitations. A similar problem also exists in information theory
[3]. We analyze the limitations of probability theory and the
relativity of probability from the following perspectives:
Firstly, prior probability and posterior probability are absolutely
divided in probability theory. In fact, the order is relative. A prior
probability can be elicited only under certain conditions: there must
be some known conditions, or the elicitation of the probability is
groundless. If we know nothing about an event, we cannot know how many
possible random values there are, let alone the corresponding
probabilities of those values. The prior probability distribution
itself can certainly be regarded as one condition. In addition, there
may be more than one condition, and their order may be
interchangeable. For example, some conditions influence the gender of
a baby at birth. Under condition A, the probability of a male baby
P(M|A) is c, and under condition B, the probability of a male baby
P(M|B) is d. When we want to obtain the probability of a male baby
under both conditions, P(M|A) and P(M|B) can each be regarded as the
prior probability, for they are parallel. Therefore, the prior
probability we obtain is based on the known conditions, and it is also
a conditional probability. Under different conditions we get different
probabilities, so probability is relative to the corresponding
conditions. Understanding that probability is relative to various
conditions helps us analyze and recognize each existing condition
consciously and carefully, so as to differentiate the conditions
rather than confuse them. In fact, many existing conditions go
unrecognized because they are covert.
Secondly, probability is not fixed under some conditions. Probability
depicts random uncertainty, but it may itself be uncertain and
inconstant; that is to say, it may be a random variable. In reality
there are more uncertain events than certain ones, and a fixed value
is only a special case of a random variable. Probability has its own
random uncertainty, just as a derivative has its own derivative,
higher-order derivatives, and so on. Take an example of the
uncertainty of probability: we estimate the pass rate of a product by
sampling inspection. The true pass rate may differ from the estimated
one, because a probability obtained by sampling inspection is random.
Relative to the estimated pass rate, the true pass rate is a random
variable distributed around the estimate if we know nothing more about
the product. Take another example: if we obtain the probabilities of
the possible results of an event in an unreliable way, those
probabilities are unreliable, and each true probability may take more
than one value; the true probability is then a random variable around
the probability obtained in the unreliable way, so the probability is
uncertain at that point. Some people hold that probability is certain,
owing to the limitation of probability theory. Worse, they sometimes
fail to distinguish changes of conditions and take the probability
under one condition as the probability under another, which leads to
mistakes. Being aware that probability has this character of
uncertainty helps free us from the limitations and framework of
traditional probability theory, and keeps us from taking random values
as fixed values. In reality, the conditions and information we obtain
are usually not absolutely reliable, so the true probability is random
to a certain degree. That is to say, we replace the true probability
with the probability based on unreliable conditions and information,
which is relative and unreliable.
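The sampling-inspection example can be illustrated numerically. In the sketch below the true pass rate (a hypothetical 0.9) is fixed, yet each inspection returns a different estimate, so the estimated probability is itself a random variable scattered around the true one.

```python
import random

random.seed(0)

TRUE_PASS_RATE = 0.9   # hypothetical true rate, unknown to the inspector
SAMPLE_SIZE = 50

def estimate_pass_rate():
    """Inspect SAMPLE_SIZE random items and return the observed pass rate."""
    passes = sum(random.random() < TRUE_PASS_RATE for _ in range(SAMPLE_SIZE))
    return passes / SAMPLE_SIZE

# Ten inspections of the same product line give ten estimates: the
# estimated probability is itself a random variable around 0.9.
estimates = [estimate_pass_rate() for _ in range(10)]
print(estimates)
```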
Thirdly, probability keeps evolving as the known conditions increase,
and it is relative to our known conditions. Both Popper and Darwin
embraced evolutionary thinking; probability evolves with conditions
too. Just as human beings come to know events by moving from the
unknown to the known, in most circumstances they move from an
uncertain to a certain view of a probability distribution. The author
has raised the issue of the geometric mean of probabilities: when
certain qualifications are satisfied, by normalizing the geometric
mean of the probabilities under different conditions, we can work out
the probability in the circumstance where all the conditions hold [4].
In this way, if any one of the conditions establishes that the
occurrence probability of one possible result is 1 and that of the
other results is 0, the final probability of that result always
remains 1. The present probability theory does not fully recognize
this evolution, and it lacks a method of probability computation that
can fuse different conditions. People come to know objects, finally
moving from uncertainty to certainty, because the known conditions
change and the probability changes with them. Understanding the
relativity of this gradual evolution helps us understand and apply
probability theory better, and helps us see that the change from
unknown to known and from uncertain to certain is itself a kind of
evolution of probability. In reality, a probability gained from
imperfect conditions is one-sided: the probability is relative to our
known conditions and is seldom the same as the true probability. To
illustrate the problem, consider an example. We wish to determine the
probability of an event m when certain conditions occur together;
these conditions may or may not be relevant to the probability. Select
all of the relevant conditions, say c1, c2, ..., cn, and assume that
the probability is determined by those conditions. Then the
probability may be expressed as

P(m) = f(c1, c2, ..., cn)

In a sophisticated case, P(m) may be a random variable; for
convenience of analysis, we take the value of the above function to be
fixed. When some conditions are still unknown, the probability P(m)
itself is not fixed, and the probability gained from the imperfect
conditions is not reliable. The more conditions we know, the more
reliable the corresponding probability is. Because the conditions are
imperfect, the probability gained from them is not the probability
gained from the complete conditions; the conflicts under these
imperfect conditions are therefore understandable, since they do not
perfectly represent the probability under complete conditions. In
fact, the conditions in probability theory are usually imperfect, and
it is not rigorous to take the probability in that case as the
probability under complete conditions. In practice, it is easy to
overlook this substitution, and confusion and absurdity may result.
For instance, whether or not it rained in some place yesterday is
certain; but when we do not possess complete conditions and
information about it, we can only obtain a randomly uncertain result.
That result cannot be taken as the probability of rain yesterday;
strictly speaking, it should be taken as the probability of rain given
all the conditions and information we have grasped. As a stopgap, we
expediently take it as the probability of rain yesterday when no more
information is given.
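The normalized geometric-mean fusion mentioned above can be sketched as follows. This is only a plausible reading of the rule attributed to [4] (the exact formula there may differ); the sketch also checks the stated property that a condition assigning probability 1 to one result forces the fused probability of that result to remain 1.

```python
from math import prod

def fuse(dists):
    """Normalized geometric mean of distributions over the same outcomes."""
    outcomes = dists[0].keys()
    n = len(dists)
    raw = {o: prod(d[o] for d in dists) ** (1.0 / n) for o in outcomes}
    total = sum(raw.values())
    return {o: v / total for o, v in raw.items()}

# Two hypothetical conditions giving conflicting distributions
d_a = {"male": 0.7, "female": 0.3}
d_b = {"male": 0.4, "female": 0.6}
print(fuse([d_a, d_b]))          # a compromise between the two

# If one condition is certain, the fused result stays certain:
d_sure = {"male": 1.0, "female": 0.0}
print(fuse([d_a, d_sure]))       # "male" keeps probability 1.0
```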

Finally, probabilities in reality are much more complicated than those
in present probability theory; we usually obtain only a relatively
true probability. Consider the probability of the rise or fall of the
stock market, which is very complicated. To begin with, suppose some
experts summarize the law of the probabilities of rise and fall on the
basis of the current situation and the attitudes of shareholders. The
rule of the market would change once people knew the law and applied
it to share trading. For example, if all shareholders simply follow
the trend, the market may keep rising or keep falling; but if research
finds the law that the market turns when it reaches an extreme,
shareholders will change their way of trading and trade as soon as
stocks rise to an appropriate point. In doing so, the law of the
market changes, and the corresponding probability changes with it.
Next, still taking the stock market as an example: if a shareholder
does not know others' decisions, especially their future decisions, he
may watch the rise and fall of the market and work out the probability
of rise and fall; but if he knows the future decisions of other
shareholders, he will optimize his own decision accordingly, which
changes the rise-fall probability of the market. Finally, because the
fluctuation of the stock market depends on many uncertain and
interacting factors, such as the shareholders and the operating
positions of the listed companies, the rise-fall probability of the
market is a value with multiple uncertainties. These examples suffice
to show that probability is complicated. However, because present
probability theory does not take the complexity of probability
problems into consideration, expressions such as the total probability
formula and Bayes' formula take conditional probabilities as known,
fixed values, which misleads some scholars and results in mistakes
when they apply the expressions mechanically.
The foregoing analyses also show that conditions themselves are
various: a condition can be the result of an experiment or of sampling
inspection, a theorem, a rule, knowledge, common sense, grammar, a
language translation, an encoding scheme, the reliability of
information, and so on. Some conditions, such as an encoding scheme,
grammar, or a definition, are commonly neglected because they are
commonplace; likewise, connotative conditions may be neglected. If we
fail to recognize the conditions, we may produce paradoxes.
3. Some problems in the application of probability theory
Traditional probability theory does not explicitly declare itself an
absolute probability theory, but it is trapped in the thought of
absolute probability both in theory and in application, for lack of a
realization of the relativity of probability.
Information theory is an important application field of probability
theory. In information theory, Shannon usually adopted the mode of
traditional probability theory. Therefore, information theory has its
limitations in many fields. For instance, when computing the
conditional entropy of X given Y, Shannon treated only Y as a known
condition; he did not list the conditional probabilities as known
conditions but placed them into the formula as known values.
Conditional entropy has other problems as well [5]. Additionally,
another problem arises: when many conditions coexist, how is the final
conditional entropy to be computed when the conditional probabilities
are unknown and only the probabilities under each single condition are
known? This is a problem that Shannon's information theory does not
take into account. Because information theory considers only the case
of reliable communication, many problems, such as the reliability of
information and the uncertainty of probability, can be ignored; but
these problems must be solved if information issues in practice are
considered.
Shannon recognized the uncertainty of events, but not that probability
itself may also be uncertain. For instance, if the information is
unreliable, the probability may be not a fixed value but a random
variable; that is, the random uncertainty in Shannon's information
theory is itself subject to random uncertainty. When information is
imperfect, the probability may likewise be a random variable.
Unreliable and imperfect information is widespread, and information
theory cannot effectively solve problems in that case, partly because
it neglects this multiple random uncertainty. The present
hyper-entropy theory in essence studies this problem as well: hyper
entropy means the entropy of entropy, namely the random uncertainty of
random uncertainty. However, it is not certain that the uncertainty of
probability is accurately denoted by hyper entropy.
In the total probability formula and Bayes' formula, all probabilities
share the same precondition; that is, the preconditions of the prior
probabilities are the same, and subsequent conditions are built upon
the prior condition. That is a presupposition for the equations to
hold, but it is not directly mentioned in probability theory.
Moreover, in probability theory the conditions are not consciously
recognized and differentiated, which may lead to mistakes. The author
has pointed out [6, 7] that Bayes' formula is misused when two
different conditions are not differentiated.
4. Conclusion
Only when we have a good command of all the conditions related to an
unknown event and of their relations, such as their conditional
probabilities, and when these conditions are reliable, can we know the
probability of the unknown event. In reality, however, the probability
we obtain is usually relative and differs from the true probability,
because the conditions and information are unreliable and incomplete.
This paper analyzes the relativity of probability only to offer a
starting point for more valuable viewpoints, for many new problems
remain to be studied, for example, the theory of the reliability and
completeness of information. Beyond probability theory, these
questions exist universally in other fields, especially in information
technology. The relativity put forward in this paper is not only
helpful to the development of probability theory; it can also
effectively rectify some current misuses of and shallow views about
probability theory, and so promote the development and improvement of
other applied subjects.

wangyong

Oct 28, 2007, 10:36:04 PM
> Please explain how case 2 arises.

The ciphertext is a fixed value, not random, when we consider
decryption. Case 2 arises because the conditions are inconsistent.

wangyong

Oct 28, 2007, 10:38:49 PM

You should know that the one probability is under imperfect
conditions, while the posterior is under complete conditions. The
probabilities are not the same.

William Hughes

Oct 29, 2007, 7:22:38 AM

Your "prior condition" is that before noting that C=0 you know nothing
about the probability distribution on the plaintext. You then note
that C=0, an observation that gives you no information about the
probability distribution on the plaintext. You then conclude that the
probability distribution on the plaintext is uniform.

- William Hughes


matt271...@yahoo.co.uk

Oct 29, 2007, 9:38:58 AM

I have no idea what you mean. Frankly, even if you yourself know what
you are trying to say, I am not convinced that your English is good
enough to explain it to anyone else. Your replies consist mostly of
disconnected sentence fragments, and are largely unintelligible.

wangyong

Oct 29, 2007, 8:52:09 PM
> - William Hughes

There are two conditions (Case 1 and Case 2) which have different
probabilities before noting C=0. That is the key of the problem.

wangyong

Oct 30, 2007, 12:11:05 AM
> that is the key of the problem.

I found a lot of my replies were invisible.

wangyong

Oct 30, 2007, 12:14:24 AM
Due to the mapping of M, K and C, the probabilities of M, K and C
interact in complicated ways. In the above example, the probability of
the plaintext changes when the ciphertext is fixed, even though the
ciphertext is unknown.
When considering only the fixed ciphertext and the equiprobability of
the key, we can conclude that the plaintexts are equally likely, for
there is a one-to-one correspondence between all the plaintexts and
keys for a fixed ciphertext. There is a conflict between the prior
probability and the uniformly distributed probability obtained above.
In order to understand the inconsistency of probabilities in the
example, and the need to fuse the probabilities in this case, we use
combinations of different conditions in the following deduction to
analyze the existence of the probability conflict.
For our simple example of OTP, when considering the condition that the
ciphertext is 0, the probability of the ciphertext being 0 is 1 and
the probability of it being 1 is 0. But according to the given prior
probability distribution of plaintexts and the uniformly distributed
keys, we easily find that the ciphertext is uniformly distributed;
that is to say, all ciphertexts are equally likely. The two
probability distributions of the ciphertext under the different
conditions conflict.
When considering only that the intercepted ciphertext is 0, and that
the prior probability of the plaintext being 0, P(M=0), is 0.9 and the
prior probability of the plaintext being 1, P(M=1), is 0.1, the
probability of the key being 0, P(K=0), is 0.9 and the probability of
the key being 1, P(K=1), is 0.1, because there is a one-to-one
correspondence between the plaintexts and the keys. However, according
to the requirement of OTP, all keys are equally likely, so a conflict
of probabilities occurs as before.
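For readers following the numbers, the conventional textbook computation that this thread disputes can be written out for the one-bit example (prior P(M=0) = 0.9, uniform key, C = M XOR K). The sketch below applies Bayes' rule directly; it presents the standard calculation, not the fusion analysis advocated in the papers above.

```python
# One-bit one-time pad: C = M XOR K, key uniform, prior from the example
p_m = {0: 0.9, 1: 0.1}   # prior on the plaintext bit
p_k = {0: 0.5, 1: 0.5}   # key bit chosen uniformly, independent of M

def p_m_and_c(m, c):
    """Joint P(M=m, C=c); the key must equal m XOR c for C=c to occur."""
    return p_m[m] * p_k[m ^ c]

# Posterior P(M=m | C=0) by Bayes' rule
p_c0 = sum(p_m_and_c(m, 0) for m in (0, 1))
posterior = {m: p_m_and_c(m, 0) / p_c0 for m in (0, 1)}
print(posterior)   # the textbook result equals the prior: {0: 0.9, 1: 0.1}
```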
Such conflicts show that under different conditions we may draw
inconsistent probabilities, so fusion and compromise are needed. The
probabilities obtained from the different combinations of one-sided
conditions are inconsistent; that is to say, the conditions in OTP
cannot coexist. When all the conditions are considered, some of them
must change, so it is not proper to use these conditions when
computing the final posterior probability. It is like the four uneven
legs of a table: there is always one leg off the floor when the table
stands on level ground, and if all four legs are to touch the level
ground at the same time, distortion must occur. In literature [7], a
formula was presented to fuse the inconsistent probabilities.
Shannon did not realize that the conditions could not coexist. When
they are put into the formula, there must be a mistake, for the
conditions cannot coexist, and the probabilities have changed once all
the conditions are considered at the same time.
Because the conditions in the example are very complex, and some are
connotative, it is essential to list them and analyze their impact on
the probability. Literature [4] considered a special understanding
under which OTP could be thought perfectly secure if some added
conditions were satisfied, and argued that this was unlikely to be
Shannon's view. This paper analyzes the problem in detail and confirms
that the special understanding is not Shannon's view, using the
information gained from Shannon's proof.

wangyong

Oct 30, 2007, 1:52:41 AM
> disconnected sentence fragments, and are largely unintelligible.

----There are so many questions; most of them are answered by my
papers, so I reply in a brief way. The detailed answers lie in the
papers.

Case 2 arises because the prior probability, the key distribution, and
the fixed ciphertext are inconsistent.

wangyong

Oct 30, 2007, 3:07:14 AM
As different conditions can yield different probability distributions,
we list the conditions that impact the probability distribution of the
plaintext, together with the corresponding probabilities of the
plaintext when only some of the conditions are considered.

wangyong

Oct 30, 2007, 4:59:42 AM

William Hughes

Oct 30, 2007, 7:24:03 AM

This does not matter.

Case 2 is a probability distribution on the plaintext. You know
nothing about the probability distribution on the plaintext, so you
know nothing about the Case 2 distribution. You make an observation
(C=0) which tells you nothing about the Case 2 distribution. You
conclude that the Case 2 distribution is uniform.

- William Hughes


William Hughes

Oct 30, 2007, 7:32:29 AM
On Oct 30, 2:07 am, wangyong <hell...@126.com> wrote:
> Due to the mapping of M, K and C, the probabilities of M, K and C are
> complicatedly interactional. For the above example, the probability
> of
> plaintext changes when the ciphertext is fixed, even though the
> ciphertext is unknown.
> When only considering the fixed ciphertext and the equiprobability of
> key, ...

If the cyphertext is fixed then the key probability is not uniform. It
is unknown (it depends both on the fixed value of the cyphertext and
on the probability distribution on the plaintext, neither of which is
known). The conclusion is not that the plaintext probability is
uniform, but that the plaintext probability is unknown.


- William Hughes
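Hughes' point can be sketched numerically with the prior used earlier in the thread (P(M=0) = 0.9, a hypothetical value): once C is observed, Bayes' rule makes the key posterior mirror the plaintext prior rather than stay uniform.

```python
# With C = M XOR K and C observed to be 0, compute P(K = k | C = 0).
p_m = {0: 0.9, 1: 0.1}   # hypothetical plaintext prior from the example
p_k = {0: 0.5, 1: 0.5}   # key drawn uniformly, independent of M

c = 0
# Bayes: P(K=k | C=c) is proportional to P(M = k XOR c) * P(K = k)
unnorm = {k: p_m[k ^ c] * p_k[k] for k in (0, 1)}
total = sum(unnorm.values())
post_k = {k: v / total for k, v in unnorm.items()}
print(post_k)   # {0: 0.9, 1: 0.1} -- the key posterior mirrors the prior
```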


matt271...@yahoo.co.uk

Oct 30, 2007, 8:42:10 AM

Of course they can.

For example, I have a biased coin that comes up heads with probability
p and tails with probability 1-p. I toss the coin, and if it's heads I
choose M=0, otherwise M=1. Then I take a fair coin (heads and tails
probability both 1/2) and toss that. If it comes up heads I choose
K=0, otherwise K=1. I then generate C from M and K.
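The construction above can be checked by enumeration: whatever the bias p of the coin choosing M, XOR with a fair-coin key makes the ciphertext uniform. A small sketch:

```python
# M from a biased coin (heads with probability p gives M=0),
# K from a fair coin, and C = M XOR K.
def p_c(p, c):
    """P(C = c) by enumerating the two plaintext possibilities."""
    p_m = {0: p, 1: 1.0 - p}      # biased coin picks the plaintext bit
    p_k = {0: 0.5, 1: 0.5}        # fair coin picks the key bit
    return sum(p_m[m] * p_k[m ^ c] for m in (0, 1))

# Whatever the bias, the ciphertext distribution is uniform:
for p in (0.1, 0.5, 0.9):
    print(p, p_c(p, 0), p_c(p, 1))   # always 0.5 and 0.5
```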

matt271...@yahoo.co.uk

Oct 30, 2007, 8:51:22 AM

You have posted an awful lot of material on this, much of it
repetitious. There is little incentive to read it all because, in my
estimation, the probability that it contains anything worthwhile is
very low.

However, I've read through some parts of your posts and my honest
opinion -- which I will be happy to retract if subsequently proved
wrong -- is that you are completely confused and your papers are
nonsense.

wangyong

Oct 30, 2007, 9:28:25 AM
You have posted an awful lot of material on this, much of it
repetitious. There is little incentive to read it all because, in my
estimation, the probability that it contains anything worthwhile is
very low.
-----That is a good way to avoid the problem.

However, I've read through some parts of your posts and my honest
opinion -- which I will be happy to retract if subsequently proved
wrong -- is that you are completely confused and your papers are
nonsense.
------------You are unreasonable: how can you reach a conclusion about
problems you have not seen through? It seems you are in the wrong but
just avoid the problem.

wangyong

Oct 30, 2007, 9:33:26 AM
Of course they can.

For example, I have a biased coin that comes up heads with
probability
p and tails with probability 1-p. I toss the coin, and if it's heads
I
choose M=0, otherwise M=1. Then I take a fair coin (heads and tails
probability both 1/2) and toss that. If it comes up heads I choose
K=0, otherwise K=1. I then generate C from M and K.


--------------Do you know that in OTP the key is randomly generated?
In this case the fact that the keys are equally likely is not
considered, just like the mistake in OTP.

wangyong

unread,
Oct 30, 2007, 9:41:44 AM10/30/07
to

============================You should see the preconditions I set. You
are addlepated, for you do not know that there is a one-to-one
correspondence between all the plaintexts and keys, so the
probabilities of the corresponding plaintext and key are the same. As
all the keys are equally likely, all the plaintexts are equally
likely. Note that this is under my preconditions.

So the plaintext probability is uniform, not unknown.

=====It seems you are insistent.

wangyong

unread,
Oct 30, 2007, 9:45:46 AM10/30/07
to
> K=0, otherwise K=1. I then generate C from M and K.


My first reply was wrong; I missed something.

But your example is not complete. If completed, it does not fit the
case of a fixed ciphertext.

wangyong

unread,
Oct 30, 2007, 9:48:01 AM10/30/07
to
You have posted an awful lot of material on this, much of it
repetitious. There is little incentive to read it all because, in my
estimation, the probability that it contains anything worthwhile is
very low.

However, I've read through some parts of your posts and my honest
opinion -- which I will be happy to retract if subsequently proved
wrong -- is that you are completely confused and your papers are
nonsense.


----I find that you tried to give an example, but you finally found
your example did not fit, so you gave up and just repeated the above.

matt271...@yahoo.co.uk

unread,
Oct 30, 2007, 9:48:12 AM10/30/07
to

In an OTP system the key is randomly generated with uniform
distribution. In my example the key is randomly generated with uniform
distribution. There is no difference.


wangyong

unread,
Oct 30, 2007, 9:55:31 AM10/30/07
to


In an OTP system the key is randomly generated with uniform
distribution. In my example the key is randomly generated with uniform
distribution. There is no difference.

--------------You are right. Then is the ciphertext a fixed value?
That is the key point.

matt271...@yahoo.co.uk

unread,
Oct 30, 2007, 9:57:54 AM10/30/07
to

Despite not having read every word of your posts, I have in fact made
a significant effort (probably far more than is justified) to actually
*understand* the supposed problem. If I wanted to "avoid the problem"
then I would not have bothered responding in the first place.
Unfortunately, none of the parts of your papers that I have read, or
that you have quoted, make any sense to me.

matt271...@yahoo.co.uk

unread,
Oct 30, 2007, 10:03:46 AM10/30/07
to

You keep mentioning this stuff about the ciphertext being a "fixed
value", but I have absolutely no idea what you mean by this. In *any
particular realisation* of the message encryption, the ciphertext is
obviously a fixed value. Just like every time you toss a coin you get
either heads or tails, not some probabilistic superimposition of the
two. So what?


wangyong

unread,
Oct 30, 2007, 10:13:45 AM10/30/07
to

---------------------If so, why did you not point that out first?

You can point it out now.

wangyong

unread,
Oct 30, 2007, 10:15:48 AM10/30/07
to
On Oct 30, 9:57 am, matt271829-n...@yahoo.co.uk wrote:

---------------------If so, why did you not point that out first?

wangyong

unread,
Oct 30, 2007, 10:46:03 AM10/30/07
to

----------------In the case of OTP the conditions are conflicting;
that is not the same as your case. We should consider them separately.
Under the conditions of OTP, the key is random from the standpoint of
statistics.

William Hughes

unread,
Oct 30, 2007, 11:14:46 AM10/30/07
to
On Oct 30, 8:41 am, wangyong <hell...@126.com> wrote:
> On Oct 30, 7:32 am, William Hughes <wpihug...@hotmail.com> wrote:
>
>
>
> > On Oct 30, 2:07 am, wangyong <hell...@126.com> wrote:
>
> > > Due to the mapping of M, K and C, the probabilities of M, K and C are
> > > complicatedly interactional. For the above example, the probability
> > > of
> > > plaintext changes when the ciphertext is fixed, even though the
> > > ciphertext is unknown.
> > > When only considering the fixed ciphertext and the equiprobability of
> > > key, ...
>
> > If the cyphertext is fixed then the key probablility is not uniform.
> > It is unknown (it depends both on the fixed value of the cyphertext,
> > and the probability distribution on the plaintext, neither
> > of which is known). The conclusion is not that the plaintext
> > probability
> > is uniform, but that the plaintext probability is unknown.
>
> > - William
> > Hughes
>
> ============================you should see the preconditions I set.you
> are addlepated for you do not know there is a one-to-one
> correspondence between all the plaintexts and keys, so the
> probabilities of the corresponding plaintext and key are the same. As
> all the keys are equally likely ...

No.

True: For a fixed plaintext the keys are equally likely.

False: For a fixed cyphertext the keys are equally likely.

For a fixed cyphertext the probability of the keys depends
on the probability of the plaintext, which is unknown. So for
a fixed cyphertext the probability distribution on the keys
is unknown.


-William Hughes
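Hughes's Bayes argument above can be checked numerically. The sketch below uses a one-bit OTP with XOR as the assumed operation; the 0.9/0.1 plaintext prior and the name `key_posterior` are illustrative assumptions, not from the thread:

```python
# One-bit OTP: C = M ^ K, with K uniform on {0, 1}.
PM = {0: 0.9, 1: 0.1}   # assumed plaintext prior, not uniform
PK = {0: 0.5, 1: 0.5}   # key prior, uniform

def key_posterior(c):
    """P(K = k | C = c) by Bayes: proportional to P(M = c ^ k) * P(K = k)."""
    joint = {k: PM[c ^ k] * PK[k] for k in (0, 1)}
    total = sum(joint.values())
    return {k: v / total for k, v in joint.items()}

# For a fixed ciphertext the key posterior mirrors the plaintext prior,
# so it is non-uniform whenever that prior is non-uniform:
print(key_posterior(0))
```

This is exactly Hughes's point: conditioned on a fixed ciphertext, the key distribution depends on the (unknown) plaintext distribution.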

wangyong

unread,
Oct 30, 2007, 11:23:06 AM10/30/07
to
On Oct 30, 10:03 am, matt271829-n...@yahoo.co.uk wrote:

----------------In the case of OTP, the conditions are conflicting.

William Hughes

unread,
Oct 30, 2007, 12:16:05 PM10/30/07
to
On Oct 30, 8:41 am, wangyong <hell...@126.com> wrote:
> On Oct 30, 7:32 am, William Hughes <wpihug...@hotmail.com> wrote:
>
>
>
> > On Oct 30, 2:07 am, wangyong <hell...@126.com> wrote:
>
> > > Due to the mapping of M, K and C, the probabilities of M, K and C are
> > > complicatedly interactional. For the above example, the probability
> > > of
> > > plaintext changes when the ciphertext is fixed, even though the
> > > ciphertext is unknown.
> > > When only considering the fixed ciphertext and the equiprobability of
> > > key, ...
>
> > If the cyphertext is fixed then the key probablility is not uniform.
> > It is unknown (it depends both on the fixed value of the cyphertext,
> > and the probability distribution on the plaintext, neither
> > of which is known). The conclusion is not that the plaintext
> > probability
> > is uniform, but that the plaintext probability is unknown.
>
> > - William
> > Hughes
>
> ============================you should see the preconditions I set.you
> are addlepated for you do not know there is a one-to-one
> correspondence between all the plaintexts and keys, so the
> probabilities of the corresponding plaintext and key are the same.

Correct

> As all the keys are equally likely,

Not correct.

True: for a fixed plaintext the keys are equally likely.

False: for a fixed cyphertext the keys are equally likely.

For a fixed cyphertext the probability of the keys depends on the
probability of the plaintext (which is unknown). So for a fixed
cyphertext the probability of the keys is unknown.

- William Hughes

matt271...@yahoo.co.uk

unread,
Oct 30, 2007, 2:15:28 PM10/30/07
to

What on earth does "random from the angle of Stat" mean?

wangyong

unread,
Oct 30, 2007, 8:50:44 PM10/30/07
to
> -William Hughes

True: For a fixed plaintext the keys are equally likely.


False: For a fixed cyphertext the keys are equally likely.

-----------------See my preconditions.

For a fixed cyphertext the probability of the keys depends
on the probability of the plaintext, which is unknown. So for
a fixed cyphertext the probability distribution on the keys
is unknown.


-------------------You repeat the question regardless of my replies.

Can you tell me: if so, for a fixed cyphertext, does the posterior
probability of the plaintext equal the prior?

wangyong

unread,
Oct 30, 2007, 8:55:15 PM10/30/07
to

>
> > ----------------In the case of OPT,the conditions are conflicting,
> > that is not the same as your
>
> > case.we should consider it respectively.In the condition of OPT, key
> > is random from the angle of
> > Stat..
>
> What on earth does "random from the angle of Stat" mean?


If you divide it into parts, you can take each part as a fixed value,
but from the angle of statistics and probability, they are random on
the whole if not divided.

wangyong

unread,
Oct 30, 2007, 8:56:41 PM10/30/07
to
> - William Hughes

-------See my preconditions: only considering the fixed ciphertext
and the equiprobability of the key.

matt271...@yahoo.co.uk

unread,
Oct 30, 2007, 9:49:40 PM10/30/07
to

I have no idea what you mean by this. Since I cannot understand any of
your replies I am giving up on this thread. I wish you good luck!

William Hughes

unread,
Oct 30, 2007, 10:04:10 PM10/30/07
to


Your precondition is

There is no information about the probability distribution
on the plaintext. Only information about the OTP is used.

Under this precondition: for a fixed cyphertext the
probability distribution on the keys is unknown.

>
> For a fixed cyphertext the probability of the keys depends
> on the probability of the plaintext, which is unknown. So for
> a fixed cyphertext the probability distribution on the keys
> is unknown.
>
> -------------------you repeat question regardless of my replies.
>
> can you tell me
> If so, for a fixed cyphertext , the postior probability of plaintext
> =the prior????

Yes. If you make an observation that tells you nothing about
the probability of the plaintext (e.g. the value of the cyphertext)
the posterior probability is equal to the prior probability.


- William Hughes
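Hughes's last statement (an observation carrying no information about the plaintext leaves the posterior equal to the prior) can be verified for a one-bit OTP. In the sketch below the 0.7/0.3 prior and the helper name `plaintext_posterior` are assumed examples:

```python
# One-bit OTP with uniform key: observing C = c tells nothing about M,
# so the posterior P(M = m | C = c) equals the prior P(M = m).
PM = {0: 0.7, 1: 0.3}   # assumed plaintext prior (any values work)

def plaintext_posterior(c):
    # P(M = m, C = c) = P(M = m) * P(K = m ^ c) = P(M = m) * 0.5
    joint = {m: PM[m] * 0.5 for m in (0, 1)}
    total = sum(joint.values())      # always 0.5, independent of c
    return {m: v / total for m, v in joint.items()}

# The posterior equals the prior for every fixed ciphertext value:
for c in (0, 1):
    print(c, plaintext_posterior(c))
```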

wangyong

unread,
Oct 30, 2007, 11:08:40 PM10/30/07
to

---------------An easy way to understand it is that you can find the
conditions cannot coexist, so the conditional probability cannot be
used in a simple way.

wangyong

unread,
Oct 30, 2007, 11:11:41 PM10/30/07
to
On Oct 30, 10:03 am, matt271829-n...@yahoo.co.uk wrote:

----------------In the case of OTP, the conditions are conflicting.

wangyong

unread,
Oct 30, 2007, 11:14:08 PM10/30/07
to
> - William Hughes


Your precondition is

There is no information about the probability distribution
on the plaintext. Only information about the OTP is used.

-----------------My precondition is to consider only the ciphertext
fixed and the keys equally likely.

Under that condition the keys and plaintexts are both equally likely,
not unknown.


Yes. If you make an observation that tells you nothing about
the probability of the plaintext (e.g. the value of the cyphertext)
the posterior probability is equal to the prior probability.

---------------------Then it is not unknown. If so, the probability
of K is known, by the one-to-one correspondence between M and K.
Then you are self-contradictory.

