
another news article on Kryptos


Douglas A. Gwyn

Jul 19, 1999, 3:00:00 AM

http://www.washingtonpost.com/wp-srv/national/daily/july99/kryptos19.htm

Mok-Kong Shen

Jul 19, 1999, 3:00:00 AM
Douglas A. Gwyn wrote:
>
> http://www.washingtonpost.com/wp-srv/national/daily/july99/kryptos19.htm


I have a (very very) stupid question:

Jim Gillogly has "tried on the order of 20 billion trial decryptions
spread over two dozen different systems with perhaps 5 or 10 variations
each, on average". If there were many more candidate systems and (known
and less well-known or unknown) variations being tried, couldn't it
happen that the decryption of a sufficiently short ciphertext becomes
ambiguous, i.e. that there would be more than one readable, probable
plaintext? How can one go about excluding such a possibility?


M. K. Shen

Jim Gillogly

Jul 19, 1999, 3:00:00 AM

There are 97 characters in this cryptogram. The chance of having it
decrypt to two totally different plausible plaintexts is negligible.
The precise value of "negligible" is left as an exercise for the reader,
but I'll point out that 20 billion isn't a very big number as key spaces
go, and one doesn't expect that it would take more than two or three
8-byte blocks to nail down a 56-bit DES key beyond a shadow of a doubt.

--
Jim Gillogly
Mersday, 26 Afterlithe S.R. 1999, 17:14
12.19.6.6.14, 12 Ix 2 Xul, Eighth Lord of Night
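Jim's closing point can be made quantitative with a back-of-the-envelope model (my own sketch, not from the thread): if a wrong 56-bit DES key produces effectively random 64-bit output blocks, the expected number of spurious keys consistent with k known blocks is about 2^(56 - 64k), which is already below 1 at k = 1.

```python
# Expected number of spurious DES keys surviving k known 8-byte blocks,
# under the idealized model that a wrong key's output is uniform over
# 64-bit blocks: 2^56 candidates, each matching by chance 2^-64 per block.
def expected_spurious_keys(key_bits=56, block_bits=64, k_blocks=1):
    return 2.0 ** (key_bits - block_bits * k_blocks)

for k in (1, 2, 3):
    print(k, expected_spurious_keys(k_blocks=k))
```

At k = 1 the expectation is already 2^-8, so two or three blocks leave no realistic doubt, as Jim says.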

Mok-Kong Shen

Jul 19, 1999, 3:00:00 AM
Jim Gillogly wrote:
>

> There are 97 characters in this cryptogram. The chance of having it
> decrypt to two totally different plausible plaintexts is negligible.
> The precise value of "negligible" is left as an exercise for the reader,
> but I'll point out that 20 billion isn't a very big number as key spaces
> go, and one doesn't expect that it would take more than two or three
> 8-byte blocks to nail down a 56-bit DES key beyond a shadow of a doubt.

I believe what you said is true, since one knows (or it can be
assumed) that the encryption was done with some 'classical' (very
old) methods. However, recently in another thread I put up the
following question: If one XORs the plaintext M_r with n plausible
messages M_1, ... M_n and a keystream K, how likely is one to find
the true message M_r? I conjecture that perhaps an analogous situation
could be envisaged with the 'classical' methods.

M. K. Shen

wtshaw

Jul 19, 1999, 3:00:00 AM
In article <379346F0...@stud.uni-muenchen.de>, Mok-Kong Shen
<mok-ko...@stud.uni-muenchen.de> wrote:

> Douglas A. Gwyn wrote:
> >
> > http://www.washingtonpost.com/wp-srv/national/daily/july99/kryptos19.htm
>
>
> I have a (very very) stupid question:
>
> Jim Gillogly has "tried on the order of 20 billion trial decryptions
> spread over two dozen different systems with perhaps 5 or 10 variations
> each, on average". If there were many more candidate systems and (known
> and less well-known or unknown) variations being tried, couldn't it
> happen that the decryption of a sufficiently short ciphertext becomes
> ambiguous, i.e. that there would be more than one readable, probable
> plaintext? How can one go about excluding such a possibility?
>

The chances of having multiple somethings readable fall rather fast with
increases in ciphertext lengths for simple ciphers. Jim's problem is one
of kind, figuring out what simple cipher and key were used.

I figure he knows of lots of possibilities for ciphers, more than most of
us, but falls short of the numbers of them that are known to NSA. As far
as kinds of ciphers and associated possible key structures go, there are
many more than any of us could ever describe, and heaven knows; I try to
describe lots of them to prove that, to a lesser degree.

On the cipher-busting side, luck and labor are the twin secret agents one
is normally forced to conspire with to attempt to solve the unknown
cipher. Good Luck, Jim.
--
When I talk about running the bases, it's not baseball.

Jim Gillogly

Jul 19, 1999, 3:00:00 AM

Same answer as in all the other threads. If the key is random and as
long as the ciphertext and the method is XORing it, then you can have
any number of plausible plaintexts. While theoretically it's mildly
interesting (the first time), in this case it's not relevant, since
Scheidt says in the new article that he and Sanborn wanted the cipher
to be solvable. This means it does not have a random key as long as
the plaintext.

--
Jim Gillogly
Mersday, 26 Afterlithe S.R. 1999, 18:15
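Jim's one-time-pad point is easy to demonstrate concretely (a minimal sketch of my own, with made-up messages): for any alternative plaintext of the right length, there exists a key that "decrypts" the ciphertext to it, so the ciphertext pins down nothing.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"ATTACK AT DAWN"
k1 = os.urandom(len(m1))       # random key as long as the message
c = xor_bytes(m1, k1)          # the actual encryption

m2 = b"RETREAT TO SEA"         # any other 14-byte candidate plaintext
k2 = xor_bytes(c, m2)          # a key under which c "decrypts" to m2
assert xor_bytes(c, k2) == m2  # both decryptions are equally plausible
```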

Mok-Kong Shen

Jul 19, 1999, 3:00:00 AM
Jim Gillogly wrote:
>

> Scheidt says in the new article that he and Sanborn wanted the cipher
> to be solvable. This means it does not have a random key as long as
> the plaintext.

I am sorry not to be able to give examples to strongly support my
conjecture. But there are languages where there can be sentences
such that missing or wrongly placed punctuation can lead
to different meanings. I 'extrapolated' this in conceiving my
conjecture. I mean that even with a classical transposition system
there could be two different transpositions of the same bunch
of (ciphertext) characters that are both meaningful, if the plaintext
is 'particular' enough. That the author intends that his puzzle
be solvable doesn't guarantee the non-existence of such possibilities,
I am afraid.

M. K. Shen

Dave Salovesh

Jul 19, 1999, 3:00:00 AM
In article <jgfunj-1907...@dial-243-091.itexas.net>,
jgf...@vgrknf.arg (wtshaw) opined:

>I figure he knows of lots of possibilities for ciphers, more than most of
>us, but falls short of the numbers of them that are known to NSA. As far
>as kinds of ciphers and associated possible key structures go, there are
>many more than any of us could ever describe, and heaven knows; I try to
>describe lots of them to prove that, to a lesser degree.

I didn't check the online version, so it may be different, but the print
version has a bit in it that three (unnamed) NSA cryptographers have
also gotten to the same point, working on their own time. In the story,
there are no more details to that statement.

Call for speculation: If an NSA cryptographer was the first to solve
the last cipher, would the NSA allow an announcement of that fact?

I ask only because I can't decide which answer makes more sense. On one
hand, I think they might want the bragging rights for this exercise,
even if the details of who, how, and what remained secret as it has for
the current claim (yeah, I know that anyone could claim that if they
didn't need to support the claim, but I'm assuming that there's some
degree of honor involved).

On the other hand, I can imagine that they would want to keep their
precise level of skill somewhat obscured, and so would not want to make
even an oblique admission that this cipher is in their reach. Thus,
they'd never want to announce that they've solved the last cipher, even
once Jim manages to find and announce the solution.

Any thoughts?


Jim Gillogly

Jul 19, 1999, 3:00:00 AM
Mok-Kong Shen wrote:
> I am sorry not to be able to give examples to strongly support my
> conjecture. But there are languages where there can be sentences

Let me suggest you try the experiment -- it's actually quite instructive.
Construct the longest sequence you can that can be decrypted with two
different simple substitutions into sensible English (or German or
Esperanto) sentences. Make the substitutions different, of course:
don't just have a long sentence with "now" replaced with "not",
for example. You'll find this is quite challenging, and that it may
give you some visceral intuitions about unicity points.

Now note that you have a rather large key-space to work from: 26!
possible substitution alphabets, which is about an 88-bit keyspace.

If you have trouble constructing two such credible sentences of more
than 30 or 40 characters, then I suggest that your concern about
accidentally running into the wrong sentence in only 20 billion
trials is not well-founded. From my own experience, the difficulty
is not in general finding too many incorrect solutions, but in
finding even one credible one. As with the discussion we had about
the chance that a correctly-operating random number generator might
generate a few million 0 bits in a row, the answer is (again) it
won't happen in real life, and for real applications it's not worth
discussing at this length.

--
Jim Gillogly
Mersday, 26 Afterlithe S.R. 1999, 20:43
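Jim's 88-bit figure checks out, and the "unicity point" he alludes to can be estimated with Shannon's formula U = H(K)/D. This is my own sketch: the ~1.5 bits-per-letter entropy of English (hence ~3.2 bits of redundancy) is an assumed textbook value, not something from the thread.

```python
import math

# Keyspace of a simple substitution: log2(26!) bits.
key_bits = math.log2(math.factorial(26))     # about 88.4

# Shannon unicity distance U = H(K) / D, with D the per-letter
# redundancy of English: log2(26) minus ~1.5 bits/letter of entropy.
redundancy = math.log2(26) - 1.5             # assumed estimate, ~3.2
unicity = key_bits / redundancy              # about 28 letters
print(round(key_bits, 1), round(unicity))
```

Two consistent 30-40 character decryptions would sit right at this boundary, which is why constructing them is so hard.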

Jim Gillogly

Jul 19, 1999, 3:00:00 AM
Dave Salovesh wrote:
> I didn't check the online version, so it may be different, but the print
> version has a bit in it that three (unnamed) NSA cryptographers have
> also gotten to the same point, working on their own time. In the story,
> there are no more details to that statement.

I can add a couple more. The three NSA analysts spent most of 1992
(off hours) working on it, and got the third part near the end of the
year. The NSA Public Affairs Office explicitly gave me permission to
pass on this information.

> Call for speculation: If an NSA cryptographer was the first to solve
> the last cipher, would the NSA allow an announcement of that fact?

Why not? They were willing to have the other priority information known.
They did not, however, wish to release any details of the analysts' identity.

> On the other hand, I can imagine that they would want to keep their
> precise level of skill somewhat obscured, and so would not want to make
> even an oblique admission that this cipher is in their reach. Thus,
> they'd never want to announce that they've solved the last cipher, even
> once Jim manages to find and announce the solution.
>
> Any thoughts?

Yes -- why wouldn't the same reasoning have kept them from taking
credit for having solved the first three ciphers first, and even
telling us how long it took their team to solve them? Note that
Ed Scheidt explicitly said he had picked ciphers with a historical
unclassified basis. This suggests the solution to this one will not
compromise national security in any way... including giving away that
it's the same cipher used by the Lower Slobbovian spies whose traffic
we've been reading since 1989.

--
Jim Gillogly
Mersday, 26 Afterlithe S.R. 1999, 22:30

Jim Gillogly

Jul 19, 1999, 3:00:00 AM
Douglas A. Gwyn wrote:
> Actually, the idea of running billions of guesses isn't very
> productive when you don't know what kind of system you're dealing
> with. If I were analyzing it, which I don't currently have time
> to do, I'd assume one of the likely candidate systems such as
> plaintext autokey, then use special methods (such as chaining)
> that are relevant for C/A of such systems. Any guesswork at the
> initial stage would be limited to (a) trying probable words and
> (b) using ancillary possible clues, such as the KRYPTOS mixed
> alphabet.

While I agree that that's preferable in general, I don't have any
special methods for one of my top three candidates, and it has a
lot of potential initial key set-up conditions. Another of my
favorites has some nice special methods, but only for more ciphertext
or known plaintext, so again I'm reduced to massive hill-climbing.
I've definitely had the KRYPTOS mixed alphabet in all its forms as
one of my numero uno tries on each possible system, though.

--
Jim Gillogly
Highday, 27 Afterlithe S.R. 1999, 04:52
12.19.6.6.15, 13 Men 3 Xul, Ninth Lord of Night

David Lesher

Jul 20, 1999, 3:00:00 AM
mr.kil...@anagrams.r.us (Mr. Kile A. Noy) writes:


>>Call for speculation: If an NSA cryptographer was the first to solve
>>the last cipher, would the NSA allow an announcement of that fact?

>I can't imagine any government agency passing up a chance to look good.
>--

Especially not one starved for funds & desperate for good PR.

--
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close........[v].(301) 56-LINUX
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433

Douglas A. Gwyn

Jul 20, 1999, 3:00:00 AM
Mok-Kong Shen wrote:
> ... doesn't guarantee the non-existence of such possibilities,
> I am afraid.

That is where experience comes in. To have much risk of an accidental
decipherment to a substantially wrong plaintext, with most classical
encryption schemes, the message would have to be no more than three
dozen characters. 97 puts it out of the realm of practical concern.

Ultimately, when one thinks he has found the correct decryption, the
coherence of the plaintext and the complexity of the supposed method
of encryption must be evaluated to see how likely the answer really
is. ("Baconian" results, such as the strange alternate Kryptos
messages about God etc. we saw here recently, typically fail to meet
this criterion.)

Douglas A. Gwyn

Jul 20, 1999, 3:00:00 AM
wtshaw wrote:
> I figure he knows of lots of possibilities for ciphers, more than
> most of us, but falls short of the numbers of them that are known
> to NSA.

Undoubtedly, but that's not very important for Kryptos.

Douglas A. Gwyn

Jul 20, 1999, 3:00:00 AM
I think it's simpler than that -- most NSA employees are still
operating under their old security indoctrination, which among
other things tells them that there is a law making it a Federal
felony to identify active employees of any US intelligence
agency. (There is such a law; its intent was to stop antiwar
protesters from publishing the names and addresses of CIA
employees to be "hit" or harrassed.) Also, there is a carryover
from the Cold War policy of never saying anything about their
capabilities, even when clearly there is no national security
impact.

Douglas A. Gwyn

Jul 20, 1999, 3:00:00 AM
Just out of curiosity, what does its modulo-26 FFT spectrum look like?

Jim Gillogly

Jul 20, 1999, 3:00:00 AM
Douglas A. Gwyn wrote:
> Just out of curiosity, what does its modulo-26 FFT spectrum look like?

Beats me -- would I go to Kullback's "Statistical methods" for that?

--
Jim Gillogly
Highday, 27 Afterlithe S.R. 1999, 07:33

Mok-Kong Shen

Jul 20, 1999, 3:00:00 AM
Douglas A. Gwyn wrote:

> Actually, the idea of running billions of guesses isn't very
> productive when you don't know what kind of system you're dealing
> with. If I were analyzing it, which I don't currently have time
>

Could the present instance be interpreted to mean that there is
some substantial advantage in letting the analyst guess what
encryption algorithm one uses? The search space of the analyst could
apparently be much enlarged through variabilities realized not only
in switching from one to another of a set of algorithms but also
in rendering, where feasible, the algorithms suitably parametrized
(e.g. with variable round numbers, block sizes, etc.), with different
parameters being chosen by the user at different times. There are
certainly also disadvantages of this approach. But I suppose perhaps
there is a break-even point in practice such that in certain cases
it is better always to stick to one fixed algorithm for all times,
while in others it is preferable to exploit the benefits of
variabilities.

M. K. Shen
-----------------------------
http://www.stud.uni-muenchen.de/~mok-kong.shen/ (Updated: 12 Apr 99)

Doug Gwyn (ISTD/CNS) <gwyn>

Jul 20, 1999, 3:00:00 AM
Jim Gillogly wrote:
> Beats me -- would I go to Kullback's "Statistical methods" for that?

No, Fourier analysis wasn't commonly used before Cooley & Tukey invented
the FFT (actually, Good and others came up with similar methods
earlier).

The intent is to analyze the periodicities, if any are evident. This is
done in classical C/A literature by running ICs at various column
widths.

Jim Gillogly

Jul 20, 1999, 3:00:00 AM
Doug Gwyn (ISTD/CNS) wrote:
> The intent is to analyze the periodicities, if any are evident. This is
> done in classical C/A literature by running ICs at various column
> widths.

Oh, that part I've done. Sorted by high IC, with the first column being
the assumed column widths:

50 0.0800
38 0.0789
43 0.0775
34 0.0686
44 0.0682
45 0.0667
42 0.0635
31 0.0538
25 0.0533
41 0.0488

and down further into the noise.

I find the 50 and 25 suspiciously round, but haven't been able to get them
to mean anything. I'd been looking for Fibonacci-style generators with
period 50, and not finding any. An ordinary period 25 or 50 Quag III
with KRYPTOSABC... alphabet key is appealing, but again I haven't
found anything to support it after some reasonably extensive crib-dragging
and hill-climbing.

--
Jim Gillogly
Sterday, 28 Afterlithe S.R. 1999, 03:56
12.19.6.6.16, 1 Cib 4 Xul, First Lord of Night
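The column-width scan behind Jim's table takes only a few lines to reproduce. This sketch (my own, with a toy periodic ciphertext rather than the Kryptos text) computes the raw coincidence rate per assumed width, Sinkov-style: same-letter pairs within columns divided by total pairs. The exact normalization Jim used for his figures isn't stated, so the values here are illustrative only.

```python
import random
from collections import Counter

def kappa(text, width):
    """Coincidence rate when `text` is written into `width` columns:
    same-letter pairs within each column / total within-column pairs."""
    cols = [text[i::width] for i in range(width)]
    hits = pairs = 0
    for col in cols:
        n = len(col)
        pairs += n * (n - 1) // 2
        hits += sum(c * (c - 1) // 2 for c in Counter(col).values())
    return hits / pairs if pairs else 0.0

# Toy Vigenere-like ciphertext with key period 7: multiples of the
# true period show up as a bulge in kappa.
random.seed(1)
key = [random.randrange(26) for _ in range(7)]
plain = "THEQUICKBROWNFOXJUMPSOVERTHELAZYDOG" * 4
ct = "".join(chr((ord(c) - 65 + key[i % 7]) % 26 + 65)
             for i, c in enumerate(plain))
scores = sorted(((w, kappa(ct, w)) for w in range(2, 30)),
                key=lambda t: -t[1])
print(scores[:3])
```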

Jim Gillogly

Jul 20, 1999, 3:00:00 AM
Dave Salovesh wrote:
> Was there an announcement in 1992 when the NSA people solved the first
> three parts? From my warped perspective, the announcement was made on
> Monday in the Post.

No. My first clue was this June, when I called CIA and they sent me an
announcement of the talk David Stein gave over a year ago; that
announcement mentioned the NSA break and that he'd not known about it.

Since that time I've learned there is a "Kryptos Society" at NSA that
has newsletters and Christmas puzzles. I don't know whether it's open
to outsiders.

> At the risk of further revealing my ignorance, I thought I knew what
> Scheidt and Sanborn had publicly given as the background, yet I didn't
> know that the ciphers were known flavors until just now. Because of
> that missing fact, I was speculating that the last cipher was a
> completely home-grown variety or variation, especially suited to this
> exact plaintext and circumstance.

Roger -- the blurb the CIA had been sending out speculated (among other
wrong speculations) that the last part was one-time pad. We've learned
quite a lot since June.

--
Jim Gillogly
Sterday, 28 Afterlithe S.R. 1999, 05:04

Dave Salovesh

Jul 21, 1999, 3:00:00 AM
In article <3793A8B0...@acm.org>,
Jim Gillogly <j...@acm.org> opined:

(on details about NSA analysts working on Kryptos)

>I can add a couple more. The three NSA analysts spent most of 1992
>(off hours) working on it, and got the third part near the end of the
>year. The NSA Public Affairs Office explicitly gave me permission to
>pass on this information.

Was there an announcement in 1992 when the NSA people solved the first
three parts? From my warped perspective, the announcement was made on
Monday in the Post.

>Dave Salovesh wrote:
>> Call for speculation: If an NSA cryptographer was the first to solve
>> the last cipher, would the NSA allow an announcement of that fact?
>

>Why not? They were willing to have the other priority information known.
>They did not, however, wish to release any details of the analysts' identity.

Understandable. This was just the first mainstream article I could find
that mentioned that anyone other than you and Stein had gotten this far.
No, that's not quite right either, but the others seem to have all
jumped onto Kryptos during this recent wave of interest.

On the other hand, it was news articles that appeared a little over a
month ago that turned my attention to this group. I haven't been
following this for very long, and I'm sure there are parts of this story
that I'm missing. I hope someone is chronicling the activities for
posterity. Someday, when I really think I might understand what it all
means, I'd like to review the work on Kryptos.

>> Any thoughts?
>
>Yes -- why wouldn't the same reasoning have kept them from taking
>credit for having solved the first three ciphers first, and even
>telling us how long it took their team to solve them? Note that
>Ed Scheidt explicitly said he had picked ciphers with a historical
>unclassified basis. This suggests the solution to this one will not
>compromise national security in any way... including giving away that
>it's the same cipher used by the Lower Slobbovian spies whose traffic
>we've been reading since 1989.

At the risk of further revealing my ignorance, I thought I knew what
Scheidt and Sanborn had publicly given as the background, yet I didn't
know that the ciphers were known flavors until just now. Because of
that missing fact, I was speculating that the last cipher was a
completely home-grown variety or variation, especially suited to this
exact plaintext and circumstance.

Continuing with this mistaken idea, publicizing the actual cipher or its
method wouldn't risk anyone's security, but publicly announcing that it
could be broken by a third party might, if doing so would reveal that a
cipher that was previously considered unbreakable and safe was not.
Clearly, if the ciphers are historical and unclassified, that risk is
nil.

I found out about the sculpture and the puzzle within when I moved to
Washington about two years ago. This means that the NSA people had
solved the first three ciphers 5 years before I knew they existed. And
all I knew about it then was that it existed.

So I guess that for me the greatest mystery of Kryptos is my own lack of
knowledge. That's not entirely surprising to me, as this is the first
time I've really attempted to understand the science of cryptography in
any way other than what is required to use PGP or read the novel
Cryptonomicon. It's a hobby for me, and at this point I don't quite
understand it, let alone getting anywhere near good at it. I think I
better keep lurking a while longer...


Doug Gwyn (ISTD/CNS) <gwyn>

Jul 21, 1999, 3:00:00 AM
Mok-Kong Shen wrote:
> Could the present instance be interpreted to mean ...

Certainly, the more complex you make the encryption system,
*usually* more work (and input data) is needed to crack it.

But "switching algorithms" under control of a key is itself
a fixed algorithm, just more complex than its components.

Doug Gwyn (ISTD/CNS) <gwyn>

Jul 21, 1999, 3:00:00 AM
Jim Gillogly wrote:
> 50 0.0800
> 25 0.0533

An IC should be around 1. There are 4 coincidences at a width of 50,
and (47*1+3*0)/26 are expected for random, so IC(50) = 4*26/47 = 2.2.
For 25 columns, there are 6 coincidences, with (22*6+3*3)/26 expected
at random, so IC(25) = 1.1. If the bulge at 50 is causal, then the
smaller bulge at 25 would be a natural consequence.

Another interesting characteristic is that there are 6 doubled
(adjacent) letters, with 96/26 = 3.7 expected for random.

> I'd been looking for Fibonacci-style generators with period 50,
> and not finding any.

Not surprising, since they're unlikely to be used for alphabetic
(non-binary) systems.

> An ordinary period 25 or 50 Quag III with KRYPTOSABC...

I don't recall what a Quag III (ACA terminology) is, but it is
unlikely that a long key was used, unless it was an autokey or
running key.

Have you tried running key, with KRYPTOSABC... (both forms of
encipherment)? This could be tested by crib dragging, looking
for potential plaintext in the recovered key fragment.
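Doug's normalized IC, actual coincidences over the random-model expectation, can be reproduced from the counts he gives: with 97 letters at width 50 there are 47 two-deep columns (47 pairs, 47/26 expected), and at width 25 there are 22 four-deep and 3 three-deep columns (22*6 + 3*3 = 141 pairs, 141/26 expected). A sketch of my own:

```python
def normalized_ic(coincidences, pairs, alphabet=26):
    """Observed coincidence count divided by the expectation for
    random text: pairs / alphabet."""
    expected = pairs / alphabet
    return coincidences / expected

# Width 50 over 97 letters: 47 columns of depth 2 -> 47 pairs.
ic50 = normalized_ic(4, 47)     # ~2.21, Doug's "2.2"
# Width 25: 22 columns of depth 4, 3 of depth 3 -> 141 pairs.
ic25 = normalized_ic(6, 141)    # ~1.11, Doug's "1.1"
print(round(ic50, 2), round(ic25, 2))
```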

Jim Gillogly

Jul 21, 1999, 3:00:00 AM
Doug Gwyn (ISTD/CNS) wrote:
>
> Jim Gillogly wrote:
> > 50 0.0800
> > 25 0.0533
>
> An IC should be around 1. There are 4 coincidences at a width of 50,
> and (47*1+3*0)/26 are expected for random, so IC(50) = 4*26/47 = 2.2.

Unfortunately there are a couple of definitions of IC, and I grew up
using the Sinkov one, which has the same range as kappa (Dorothy Denning
uses it this way also). But yes, the bulge at 50 is suspicious.

> Another interesting characteristic is that there are 6 doubled
> (adjacent) letters, with 96/26 = 3.7 expected for random.

Yup. Depending on the alphabet and plaintext, Wheatstone can lead
to more doubled letters than random (and more than in plain English,
which I think is a little less than random), but I don't see a
smoking gun here.

> > I'd been looking for Fibonacci-style generators with period 50,
> > and not finding any.
>
> Not surprising, since they're unlikely to be used for alphabetic
> (non-binary) systems.

I was thinking in particular of Gromark (Gronsfeld with Mixed Alphabet
and Fib-style Running Key), but yes, they're not common.

> Have you tried running key, with KRYPTOSABC... (both forms of
> encipherment)? This could be tested by crib dragging, looking
> for potential plaintext in the recovered key fragment.

Yes -- also with KRYPTOSABC... with mixing in a transposition block
using a few different widths. Of course, I may not have dragged the
right cribs. I seem to recall that one of Allen Sherman's students
did a running key program with pretty good results, so that ought
to be feasible. Hurm...

--
Jim Gillogly
29 Afterlithe S.R. 1999, 02:37
12.19.6.6.17, 2 Caban 5 Xul, Second Lord of Night

Jim Gillogly

Jul 21, 1999, 3:00:00 AM
Jim Gillogly wrote:

>
> Doug Gwyn (ISTD/CNS) wrote:
>
> > Have you tried running key, with KRYPTOSABC... (both forms of
> > encipherment)? This could be tested by crib dragging, looking
> > for potential plaintext in the recovered key fragment.
>
> Yes -- also with KRYPTOSABC... with mixing in a transposition block
> using a few different widths. Of course, I may not have dragged the
> right cribs. I seem to recall that one of Allen Sherman's students
> did a running key program with pretty good results, so that ought
> to be feasible. Hurm...

I should add I've tried more than "both forms of encipherment": Vigenere,
Beaufort, Variant Beaufort and Porta, both enciphering and deciphering,
since the dragged crib could be in either key or plaintext.

--
Jim Gillogly
29 Afterlithe S.R. 1999, 03:30

wtshaw

Jul 21, 1999, 3:00:00 AM
In article <3795D991...@arl.mil>, "Doug Gwyn (ISTD/CNS) <gwyn>"
<gw...@arl.mil> wrote:

> Mok-Kong Shen wrote:
> > Could the present instance be interpreted to mean ...
>
> Certainly, the more complex you make the encryption system,
> *usually* more work (and input data) is needed to crack it.

This is not going to be always true. Complication is one way to strength,
but it might involve doing things inefficiently. The better goal is to
have as simple an encryption system as possible, with the strength being
entirely in the key(s).


>
> But "switching algorithms" under control of a key is itself
> a fixed algorithm, just more complex than its components.

Consider a little ditty I did called Code Blue, which is nothing more than
a takeoff on tableau ciphers, with a deranged alphabet as the base for
the table, another deranged alphabet for the usual keystring, and another
alphabet of still the same size which determines which of three modes the
table is used in. The base string is 27 characters, the keystring is 27
characters, and the mode-determining key is 27 characters, each
amounting to a trit selecting one of the three modes. The whole thing
cycles in 81 encryptions, the keystring being used 3 times in the process.
--
When I talk about running the bases, it's not baseball.

wtshaw

Jul 21, 1999, 3:00:00 AM
In article <3795FAE0...@arl.mil>, "Doug Gwyn (ISTD/CNS) <gwyn>"
<gw...@arl.mil> wrote:
>
> Have you tried running key, with KRYPTOSABC... (both forms of
> encipherment)? This could be tested by crib dragging, looking
> for potential plaintext in the recovered key fragment.

Knowing Jim is a rather clever attacker, I'm sure that he has tried lots of
things that might be suggested. However, I wonder if some earlier part of
ciphertext could be used as some sort of key, possibly pulled off in some
unusual fashion, for solving the last part...my two cents worth.

Mok-Kong Shen

Jul 22, 1999, 3:00:00 AM
Doug Gwyn (ISTD/CNS) wrote:
>
> Certainly, the more complex you make the encryption system,
> *usually* more work (and input data) is needed to crack it.
>
> But "switching algorithms" under control of a key is itself
> a fixed algorithm, just more complex than its components.

Besides switching algorithms and employing parametrized algorithms,
as mentioned previously, I tend to think that there is substantial
potential for variability obtainable when one uses superencipherment,
for chaining n algorithms can be done in n factorial different
ways. Even for n as low as 3 the impact on the analyst could be
huge. (It is assumed, of course, that the chaining of the specific
algorithms concerned does not introduce weakness.)

BTW, in a recent thread one learned that in the realm of classical
methods double transpositions result in an effective key length which
is the LCM of the key lengths of the components. Could it be that the
yet unsolved part of Kryptos is superenciphered?

M. K. Shen

Jim Gillogly

Jul 22, 1999, 3:00:00 AM
Mok-Kong Shen wrote:
> Could it be that the
> yet unsolved part of Kryptos is superenciphered?

Yes. One of my three leading candidates is polyalphabetic substitution
(like Kryptos-I and Kryptos-II) followed by transposition (like K-III).
This would be an interesting "closure" of the methods used previously.
I don't know whether this would qualify under Scheidt's appellation of
"a whole different ball game", but it would certainly be much more
challenging than either taken separately.

--
Jim Gillogly
29 Afterlithe S.R. 1999, 11:18

Douglas A. Gwyn

Jul 22, 1999, 3:00:00 AM
Jim Gillogly wrote:
> Doug Gwyn (ISTD/CNS) wrote:
> > An IC should be around 1. ...

> Unfortunately there are a couple of definitions of IC, and I grew up
> using the Sinkov one, which has the same range as kappa (Dorothy Denning
> uses it this way also).

An IC is supposed to be the ratio of the actual coincidence rate to the
expected coincidence rate (for a random model). That way, one doesn't
have to know kappa(r) nor mentally divide by kappa(r). The best tutorial
I have seen on this subject is Mountjoy's "The Bar Statistics", which is
unfortunately still classified (for no good reason that I can see,
other than that nobody has requested a declassification review).
("Bar" has to do with averages. If you examine the way I computed the
IC in my previous posting, you can figure out all there really is to
it.)

full...@aspi.net

Jul 22, 1999, 3:00:00 AM
Jim Gillogly wrote:
>
> Mok-Kong Shen wrote:
> >
> > Douglas A. Gwyn wrote:
> > >
> > > http://www.washingtonpost.com/wp-srv/national/daily/july99/kryptos19.htm
> >
> > I have a (very very) stupid question:
> >
> > Jim Gillogly has "tried on the order of 20 billion trial decryptions
> > spread over two dozen different systems with perhaps 5 or 10 variations
> > each, on average". If there were many more candidate systems and (known
> > and less well-known or unknown) variations being tried, couldn't it
> > happen that the decryption of a sufficiently short ciphertext becomes
> > ambiguous, i.e. that there would be more than one readable, probable
> > plaintext? How can one go about excluding such a possibility?
>
> There are 97 characters in this cryptogram. The chance of having it
> decrypt to two totally different plausible plaintexts is negligible.
> The precise value of "negligible" is left as an exercise for the reader,

For a "real" message this is undetectably close to zero, but we aren't
dealing with a "real" message. <evil thought> What is the chance that
the author designed the text to be ambiguous? Didn't Dennis Ritchie
show a comparably long sentence with multiple (semi-sensible)
decryptions?

> but I'll point out that 20 billion isn't a very big number as key spaces
> go, and one doesn't expect that it would take more than two or three
> 8-byte blocks to nail down a 56-bit DES key beyond a shadow of a doubt.
>
> --
> Jim Gillogly
> Mersday, 26 Afterlithe S.R. 1999, 17:14

Terry Ritter

Jul 23, 1999, 3:00:00 AM

On Wed, 21 Jul 1999 14:30:41 GMT, in <3795D991...@arl.mil>, in
sci.crypt "Doug Gwyn (ISTD/CNS) <gwyn>" <gw...@arl.mil> wrote:

>[...]


>But "switching algorithms" under control of a key is itself
>a fixed algorithm, just more complex than its components.

Note that this statement is not true if the algorithm set keeps
expanding, because then the algorithm is certainly *not* fixed.

---
Terry Ritter rit...@io.com http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM


Douglas A. Gwyn

unread,
Jul 23, 1999, 3:00:00 AM7/23/99
to
Terry Ritter wrote:
> Note that this statement is not true if the algorithm set keeps
> expanding, because then the algorithm is certainly *not* fixed.

If you can actually implement it, it certainly is.

Terry Ritter

unread,
Jul 24, 1999, 3:00:00 AM7/24/99
to

No, it is not. One can implement a system which supports the dynamic
introduction of new algorithms. Any particular description of "the"
overall system must thus be continually updated and so is certainly
*not* fixed. Indeed, the actual system cannot even be described in
any more than "handwave" precision such as: "in addition to the known
set of algorithms, currently unknown additional algorithms of
virtually unlimited nature may be present."

David Wagner

unread,
Jul 24, 1999, 3:00:00 AM7/24/99
to
In article <3795FAE0...@arl.mil>, Doug Gwyn (ISTD/CNS) <gwyn> wrote:
> An IC should be around 1. There are 4 coincidences at a width of 50,
> and (47*1+3*0)/26 are expected for random, so IC(50) = 4*26/47 = 2.2.

May I ask a question (out of the blue) about why the IC is defined
this way?

For a random source, the number of coincidences has approximately a
Gaussian distribution, with expected value ~ 1.81 and standard deviation
~ 1.32 in the case you give above (a period of 50).

It seems to me, then, that it would be more natural to characterize
the observed number of coincidences as about DI(50) = (4 - 1.81)/1.32
= 1.66 standard deviations above the mean, instead of IC(50) = 2.2.
Let's call this new measure DI, short for Dave's Index.

Note that, in comparison to DI, the IC exaggerates the deviation from
random for small samples and underrepresents the deviation when you
have a lot of data. (An IC of 2 should be very interesting if you have
a lot of data, but with less data there's a greater chance that it just
happened by chance. This is relevant because less data is available with
the larger periods, i.e. the larger column-widths, so raw IC values for
different periods aren't directly comparable.)

So it seems to me that the DI gives a more uniform and representative
way of summarizing the number of coincidences than the classical IC.
Why do people use the IC, and not the DI? Is this a stupid question?
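
The numbers above can be reproduced directly. A small sketch (in Python,
assuming the 97-letter text is written row by row into 50 columns, as in
Gwyn's computation): count the vertical pairs, then form both the classical
IC and the DI, i.e. the deviation in standard-deviation units.

```python
import math

def column_pairs(n, width):
    """Vertical letter pairs when n letters are written row by row
    into `width` columns (column heights differ by at most one)."""
    tall = n % width   # columns holding one extra letter
    h = n // width     # height of the short columns
    return tall * math.comb(h + 1, 2) + (width - tall) * math.comb(h, 2)

pairs = column_pairs(97, 50)                 # 47 pairs at a width of 50
expected = pairs / 26                        # ~1.81 coincidences at random
sigma = math.sqrt(pairs * (1/26) * (25/26))  # ~1.32, binomial std. dev.

observed = 4
ic = observed / expected             # IC(50) = 4*26/47, about 2.2
di = (observed - expected) / sigma   # DI(50), about 1.66
```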

Douglas A. Gwyn

unread,
Jul 25, 1999, 3:00:00 AM7/25/99
to
David Wagner wrote:
> It seems to me, then, that it would be more natural to characterize
> the observed number of coincidences as about DI(50) = (4 - 1.81)/1.32
> = 1.66 standard deviations above the mean, instead of IC(50) = 2.2.
> Let's call this new measure DI, short for Dave's Index.

In the crypto trade, that has traditionally been called the "sigmage"
of the IC.

> Note that, in comparison to DI, the IC exaggerates the deviation from
> random for small samples and underrepresents the deviation when you
> have a lot of data. (An IC of 2 should be very interesting if you have
> a lot of data, but with less data there's a greater chance that it just
> happened by chance. This is relevant because less data is available with
> the larger periods, i.e. the larger column-widths, so raw IC values for
> different periods aren't directly comparable.)

Actually, IC values *are* comparable, since they are indexed to 1
(in economics-speak).

If someone tells me that the delta IC of a literal ciphertext is
1.7, I immediately suspect it is a simple substitution or
transposition. If somebody told me only that the IC had a
sigmage of 3.0, but not the IC itself, I'd say it was significant
but significant of what, I wouldn't know.

> So it seems to me that the DI gives a more uniform and representative
> way of summarizing the number of coincidences than the classical IC.
> Why do people use the IC, and not the DI? Is this a stupid question?

No, it's a good question.

The sigmage shows how significant the deviation from random is, and
comes into evaluating how reliable an indication a "large" IC value
is, when one knows the variance of the reference model.

The various ICs are closely related to chi-square and to other
correlation measures. There are also information-theoretic
measures of the "distance" between distributions. For chi-square,
the number of d.f. enters into interpretation of the significance,
which fully corrects for sample size.

Usually, one uses ICs to compare hypotheses with similar sample
sizes, in which case the slight gain in power from comparing
sigmages does not offset the additional computation required.

In practice, both are usually computed when using a digital
computer, but only the IC is computed when using pencil-and-paper,
unless the question of significance seems important. Experienced
cryppies can usually sense how significant bulges are without
making any explicit computations.

Mok-Kong Shen

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
Terry Ritter wrote:
>
> No, it is not. One can implement a system which supports the dynamic
> introduction of new algorithms. Any particular description of "the"
> overall system must thus be continually updated and so is certainly
> *not* fixed. Indeed, the actual system cannot even be described in
> any more than "handwave" precision such as: "in addition to the known
> set of algorithms, currently unknown additional algorithms of
> virtually unlimited nature may be present."

I guess that there is the practical problem of not having an
inexhaustible source of new algorithms. Hence, in my humble view,
switching among a sufficiently large set of algorithms, using the
combinatorial variations of superencipherment, and exploiting the
variability of parametrized algorithms are the measures one can
realistically take in practice to obtain security beyond what is
inherent in the algorithms themselves.

M. K. Shen

Mok-Kong Shen

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
Jim Gillogly wrote:
>

> Yes. One of my three leading candidates is polyalphabetic substitution
> (like Kryptos-I and Kryptos-II) followed by transposition (like K-III).
> This would be an interesting "closure" of the methods used previously.
> I don't know whether this would qualify under Scheidt's appellation of
> "a whole different ball game", but it would certainly be much more
> challenging than either taken separately.

I am sure that we all hope that you will soon have success in solving
the last part of Kryptos.

Just a question (independent of Kryptos): Is it better to have
polyalphabetic substitution followed by transposition or the
other way round? Or does it make no difference? Why?

If the unsolved part is really 'a whole different ball game', then
I suppose that there is practically nothing left (after excluding
substitution and transposition) in the realm of classical methods
excepting perhaps code book, which seems to be quite unlikely, I guess.

M. K. Shen

Jim Gillogly

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
Mok-Kong Shen wrote:
> Jim Gillogly wrote:
> > Yes. One of my three leading candidates is polyalphabetic substitution
> > (like Kryptos-I and Kryptos-II) followed by transposition (like K-III).
> > This would be an interesting "closure" of the methods used previously.
> > I don't know whether this would qualify under Scheidt's appellation of
> > "a whole different ball game", but it would certainly be much more
> > challenging than either taken separately.

> Just a question (independent of Kryptos): Is it better to have
> polyalphabetic substitution followed by transposition or the
> other way round? Or does it make no difference? Why?

I'd say it's better to substitute first and transpose second, because
it's easier to diagnose a transposition cipher than a polyalphabetic.
If it's transposition first, then polyalphabetic, you can try solving
the polyalphabetic for something that gives not only a good index of
coincidence for English, but also that has the right individual letter
frequencies for English. If you're going the other way, when the
transposition is unwound correctly, you then need to nearly solve the
polyalphabetic to see whether it's the right one. If it's a long
text with a shortish key it will be obvious (though still more
expensive than recognizing a transposition), but if it's a short text
and a longish key, even incorrect transpositions will result in ICs
that give false positives that need to be checked before moving on
to the next transposition candidate.
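
The diagnostic above can be sketched in a few lines: a transposition merely
permutes letters, so single-letter frequencies survive it, and a chi-squared
score against English frequencies cheaply flags a correctly unwound
transposition, while polyalphabetic output (flattened frequencies) scores
badly. The frequency table below is a rough approximation of my own, and
the interleave is a toy transposition, not a Kryptos method; the sample text
is the recovered Kryptos-I plaintext.

```python
from collections import Counter

ENGLISH_FREQ = {  # approximate relative frequencies (rounded)
    'E': .127, 'T': .091, 'A': .082, 'O': .075, 'I': .070, 'N': .067,
    'S': .063, 'H': .061, 'R': .060, 'D': .043, 'L': .040, 'C': .028,
    'U': .028, 'M': .024, 'W': .024, 'F': .022, 'G': .020, 'Y': .020,
    'P': .019, 'B': .015, 'V': .010, 'K': .008, 'J': .002, 'X': .002,
    'Q': .001, 'Z': .001,
}

def chi_squared_vs_english(text):
    """Chi-squared distance of the letter counts of `text` from English."""
    letters = [c for c in text.upper() if c.isalpha()]
    n, counts = len(letters), Counter(letters)
    return sum((counts[l] - n * f) ** 2 / (n * f)
               for l, f in ENGLISH_FREQ.items())

# The recovered Kryptos-I plaintext, put through a toy transposition
# (an interleave), keeps its English-like letter profile...
k1 = "BETWEENSUBTLESHADINGANDTHEABSENCEOFLIGHTLIESTHENUANCEOFIQLUSION"
transposed = k1[::2] + k1[1::2]
# ...while a flat, polyalphabetic-looking string does not.
flat = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" * 2
```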

> If the unsolved part is really 'a whole different ball game', then
> I suppose that there is practically nothing left (after excluding
> substitution and transposition) in the realm of classical methods
> excepting perhaps code book, which seems to be quite unlikely, I guess.

You can have substitutions and transpositions that are a whole
different ballgame from what's gone before in Kryptos. I've seen
no reason to change my top three candidates from the first time I
posted them here. In no particular order, they're some form of
autokey, some form of running key, and some form of combined
polyalphabetic with transposition.

Sure wish we had more ciphertext!

--
Jim Gillogly
Mersday, 3 Wedmath S.R. 1999, 16:51
12.19.6.7.1, 6 Imix 9 Xul, Sixth Lord of Night

wtshaw

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
In article <379C252E...@stud.uni-muenchen.de>, Mok-Kong Shen
<mok-ko...@stud.uni-muenchen.de> wrote:

> Terry Ritter wrote:
> >
> > No, it is not. One can implement a system which supports the dynamic
> > introduction of new algorithms. Any particular description of "the"
> > overall system must thus be continually updated and so is certainly
> > *not* fixed. Indeed, the actual system cannot even be described in
> > any more than "handwave" precision such as: "in addition to the known
> > set of algorithms, currently unknown additional algorithms of
> > virtually unlimited nature may be present."
>
> I guess that there is the practical problem of not having an
> inexhaustible source of new algorithms.

I would not be so sure that as many algorithms as you might want could
not be developed. The question of exhaustion pertains more to the people
involved, and therefore indirectly to the rate at which new algorithms
might become available.

> Hence in my humble view
> switching among a sufficiently large set of algorithms, utilizing
> the combinatorial variations of superencipherment and exploiting
> the variabilities of parametrized algorithms are the measures that
> one can realistically have in practice to obtain security beyond
> what is inherent in the algorithms.
>

That too.
--
Real Newsreaders do not read/write in html.

wtshaw

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
In article <379C28FD...@stud.uni-muenchen.de>, Mok-Kong Shen
<mok-ko...@stud.uni-muenchen.de> wrote:
>
> If the unsolved part is really 'a whole different ball game', then
> I suppose that there is practically nothing left (after excluding
> substitution and transposition) in the realm of classical methods
> excepting perhaps code book, which seems to be quite unlikely, I guess.
>
You seem to accept that the system will be of a popularly known classical
method; it could just as well be of an obscure method popularly known to
obscure people, at least at the time.

John Savard

unread,
Jul 26, 1999, 3:00:00 AM7/26/99
to
rit...@io.com (Terry Ritter) wrote, in part:

>On Wed, 21 Jul 1999 14:30:41 GMT, in <3795D991...@arl.mil>, in
>sci.crypt "Doug Gwyn (ISTD/CNS) <gwyn>" <gw...@arl.mil> wrote:

>>[...]
>>But "switching algorithms" under control of a key is itself
>>a fixed algorithm, just more complex than its components.

>Note that this statement is not true if the algorithm set keeps
>expanding, because then the algorithm is certainly *not* fixed.

It certainly is possible to devise an open ended encryption program.
For example, GPG, GNU Privacy Guard, (currently still in beta)
provides for the addition of new encryption algorithms as modules.

However, if one switches algorithms _under the control of a key_, at
some point one must define what actions the program is to take for any
particular key. Otherwise, the programs belonging to the sender and
recipient may not be compatible. Perhaps this is the source of the
current objection.
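
A minimal sketch of that point: if the algorithm choice is driven by the
key, both ends must map keys to algorithms identically, so the mapping (and
the registry it indexes) becomes part of the fixed protocol. The registry
and cipher names below are hypothetical, chosen only for illustration.

```python
import hashlib

# Both ends must hold an identical, agreed-upon registry.
REGISTRY = ["caesar", "vigenere", "columnar"]

def select_algorithm(key: bytes) -> str:
    """Derive the algorithm choice deterministically from the shared key."""
    digest = hashlib.sha256(key).digest()
    return REGISTRY[digest[0] % len(REGISTRY)]
```

Sender and recipient holding the same key always agree on the choice; but
if one side's registry grows, the indices (and thus the choices) can
diverge, which is why an open-ended algorithm set still needs a pinned-down
negotiation rule.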

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

Douglas A. Gwyn

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
wtshaw wrote:
> You seem to accept that the system will be of a popularly known
> classical method; it could just as well be of an obscure method
> popularly known to obscure people, at least at the time.

It was evident from the outset that Kryptos must be using
classical methods of the sort encountered in MilCryp.
And this assumption was bolstered by the recent recoveries.
There is no reason to change that assumption for the final part.

Douglas A. Gwyn

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Jim Gillogly wrote:
> ... I've seen
> no reason to change my top three candidates from the first time I
> posted them here. In no particular order, they're some form of
> autokey, some form of running key, and some form of combined
> polyalphabetic with transposition.

Don't forget the suggestion in the ABCNews forum that the running
key might be one of the recovered messages.

wtshaw

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
In article <379cd73b...@news.prosurfr.com>,

jsa...@tenMAPSONeerf.edmonton.ab.ca (John Savard) wrote:
>
> It certainly is possible to devise an open ended encryption program.
> For example, GPG, GNU Privacy Guard, (currently still in beta)
> provides for the addition of new encryption algorithms as modules.
>
I ran across a program called Ahoy! for the Mac which allows for new
plugins. The company even offers a package for designing new ones. Ahoy!
is a chat program, best I can tell, and Blowfish is already available.
--
Crop report--It's been a very good year for figs. Garlic was abundant, but berries were few.

wtshaw

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
In article <379D20B2...@null.net>, "Douglas A. Gwyn"
<DAG...@null.net> wrote:

Best to not put blinders on prematurely. If I got it right, *a whole new
ball game* could be sort of a cryptic clue. So, I think of numbers, and of
what different ball games suggest, the most likely being baseball. The mind
races to see something with a loop of four characters like the bases, an
autokey-like deal, or the number of players or innings being significant.

As deceit is a basic in crypto, figure that it is not ruled out, even to
the point of being enticed down a path that is not going to lead you where
you want to go.

Douglas A. Gwyn

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
wtshaw wrote:
> Best to not put blinders on prematurely. If I got it right, *a whole new
> ball game* could be sort of a cryptic clue.

Not likely; that's a standard American idiom.

wtshaw

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

Some people use double meanings.
--
Freedom means having the right to choose to be isolated and left
alone. It also means not having the right to force someone to get
involved. But, the continuation of freedom demands that some of
us act for those that can't or won't.

Mok-Kong Shen

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
John Savard wrote:
>

> It certainly is possible to devise an open ended encryption program.
> For example, GPG, GNU Privacy Guard, (currently still in beta)
> provides for the addition of new encryption algorithms as modules.
>

> However, if one switches algorithms _under the control of a key_, at
> some point one must define what actions the program is to take for any
> particular key. Otherwise, the programs belonging to the sender and
> recipient may not be compatible. Perhaps this is the source of the
> current objection.

You can switch algorithms during the processing of a single message
in a variety of ways, e.g. after a number of records that depends on
the key, etc. I think that even switching algorithms from message to
message, i.e. one algorithm per message, is quite advantageous if the
schedule is such that the analyst has great difficulty figuring out
which algorithm pertains to which message.
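
The record-wise variant can be sketched as follows. The two "ciphers" here
are toy stand-ins (assumptions, not real algorithms); the key is assumed to
have been expanded into a schedule of (cipher, run length) pairs by some
agreed-upon derivation.

```python
def cipher_a(record):
    return record[::-1]        # toy stand-in for algorithm 1

def cipher_b(record):
    return record.swapcase()   # toy stand-in for algorithm 2

def encrypt(records, key_schedule):
    """Encrypt records, switching cipher after key-dependent run lengths.

    key_schedule: list of (cipher, run_length) pairs derived from the key;
    the schedule is cycled if the message is longer than it covers.
    """
    out, i = [], 0
    cipher, remaining = key_schedule[0]
    for rec in records:
        if remaining == 0:                      # run exhausted: switch
            i = (i + 1) % len(key_schedule)
            cipher, remaining = key_schedule[i]
        out.append(cipher(rec))
        remaining -= 1
    return out
```

The analyst now faces not one cipher but a key-dependent patchwork, and
must first recover the switching schedule before attacking any one segment.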

M. K. Shen
