Live from the Second AES Conference


dian...@tecapro.com

Mar 23, 1999

Here is my report on the first day of the AES2 in Rome. Lots of
participants, 180+ people from (according to NIST) 23 countries.
About half the speakers today were not native English speakers, so
this is indeed an international process. 28 papers were submitted
(all are at NIST's site: aes.nist.gov) and 21 will be presented
here.

First some general observations about today: A very full schedule
with 13 presentations and a rump session with another 15 or so
short talks, which finally concluded at 9 pm. Many surprises
and, frankly, some confusion right now. Almost all candidate
ciphers were tarnished one way or another - and a surprisingly
strong showing by Rijndael (in the negative sense that very little
negative was said against it). Certainly many of the speakers were
frankly biased, but this is normal in a competition. Please keep
in mind that I am biased too. Here I express what I understood
from the proceedings as well as my personal opinions, both of
which may not be correct. Also, sometimes I only mention
approximate numbers. If you are really interested, read the papers
on NIST's site.

In the morning we had 8 presentations with surveys of the ciphers
- actually lots of measurements mainly about speed. Speed numbers
can be bewildering because there are so many parameters beyond the
cipher itself: hardware platform, OS, compiler, quality of
implementation, memory/speed trade-offs, key scheduling versus
encryption. Speed may be easy to measure, but I think its
significance is overrated (the advance of hardware technology will
double AES's effective speed every 18 months or so, and I don't
believe that worldwide speed requirements will grow that fast).
Even worse: different designers chose different margins of
security, and an evaluation process that is obsessed with speed
would penalize those who, probably wisely, chose to be more
conservative. Thus, Eli Biham proposed a normalization of the
candidate algorithms depending on the number of rounds that are
known not to be secure. This view also has its detractors.

In the afternoon smart card implementations were discussed. First
efficiency estimates (both speed and space) were shown. After that
came three very interesting presentations about attacks against
smart card implementations. These are real world, practical
attacks. IBM's Pankaj Rohatgi explained how he got all 128 bits of
a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
card!

Now some more information about the presentations:

First, NIST's Jim Foti explained that they did a lot of speed
measurements on different platforms. On the reference platform the
fastest algorithms were Crypton, Rijndael, RC6, MARS, Twofish, and
E2. The algorithms with the fastest key scheduling are: Crypton,
Magenta, Safer+, E2, RC6, Rijndael. Dead last, by three orders of
magnitude, is Frog, which is my own candidate. (I wanted to have a
complex scheduler and on my Pentium it used "only" one hundredth
of a second, so I thought that was good enough!) Rankings change
considerably from one speed survey to another, but in general
the "fast" algorithms are Crypton, Mars, RC6, Rijndael and
Twofish.

The next paper was by Brian Gladman, the Briton who singlehandedly
coded all 15 algorithms in C based only on their descriptions; he
was not present, however. His basic result is that for small, medium and
large blocks RC6, Rijndael, Mars, Twofish and Crypton are the
fastest.

Then came Bruce Schneier's presentation about speed. Again Bruce's
exposition was interesting and clear. He made a good point that
only assembly code speeds should be compared because otherwise you
measure a particular compiler's efficiency rather than the
cipher's. This is true, but in a networked world pure Java is
going to be used too as a platform independent solution. He
mentioned that the CPU also affects relative speeds, especially
with ciphers like RC6 and MARS that use data-dependent rotations
and multiplications. (RC6 in particular took a beating for its
relative inefficiency both on 8-bit smart cards and later,
surprisingly, on next-generation CPUs.) Bruce mentioned that 8-bit
processors with little RAM will stay with us forever, because
they will be used more and more in embedded systems. Less
convincingly, he claimed that 32-bit processors will also stay
with us for a long time. I think the prevailing view is that in
the future 64-bit CPUs and beyond will dominate high-end
personal applications. His final ranking: Twofish, Rijndael,
Crypton, E2, Mars for Pentiums; Rijndael, Crypton, E2, RC6,
Twofish for hashing.

Then came SUN's Alan Folmsbee who compared ciphers from the Java
perspective. He used two platforms: UltraSPARC and "MicroJava", a
new processor that executes Java bytecode directly. He measured
"excess avalanche", RAM size, ROM size and speed. Excess avalanche
tries to measure how conservative the choice of number of rounds
is. According to his particular weights (and at the very least he
is considered impartial) the five best algorithms are: RC6, Mars,
Safer+, Serpent and Crypton. Guess who number six is? Frog.
The trouble is that he did not even pretend to measure security as
such, relying only on the avalanche criterion - which is not
synonymous with security. By the way, Rijndael did not fare very
well here, being one of the slower algorithms.
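
For readers who have not met the avalanche criterion before, here
is a minimal sketch of how such a measurement is usually made: flip
one input bit, encrypt both blocks, and count how many output bits
change (an ideal cipher averages about 50%). This is only the
standard criterion, not Folmsbee's "excess avalanche" metric, and
the encrypt routine passed in is a hypothetical placeholder for any
128-bit candidate.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_BITS  128
#define BLOCK_BYTES (BLOCK_BITS / 8)

/* Any candidate cipher, seen as a routine that encrypts one 128-bit
   block in place under a 128-bit key. */
typedef void (*block_encrypt_fn)(uint8_t block[BLOCK_BYTES],
                                 const uint8_t key[BLOCK_BYTES]);

static int popcount8(uint8_t x)
{
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

/* Average fraction of ciphertext bits that flip when a single random
   plaintext bit is flipped; an ideal cipher gives close to 0.5. */
double avalanche(block_encrypt_fn encrypt,
                 const uint8_t key[BLOCK_BYTES], int trials)
{
    double total = 0.0;
    for (int t = 0; t < trials; t++) {
        uint8_t p1[BLOCK_BYTES], p2[BLOCK_BYTES];
        for (int i = 0; i < BLOCK_BYTES; i++)
            p1[i] = (uint8_t)(rand() & 0xFF);
        memcpy(p2, p1, BLOCK_BYTES);

        int bit = rand() % BLOCK_BITS;            /* flip one input bit */
        p2[bit / 8] ^= (uint8_t)(1u << (bit % 8));

        encrypt(p1, key);                         /* encrypt in place */
        encrypt(p2, key);

        int flipped = 0;
        for (int i = 0; i < BLOCK_BYTES; i++)
            flipped += popcount8(p1[i] ^ p2[i]);
        total += (double)flipped / BLOCK_BITS;
    }
    return total / trials;
}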

Serge Vaudenay was another designer presenting a survey. His
recommendation is Mars, RC6, Serpent, ... and DFC. He criticized
NIST on the choice of ANSI-C for their reference platform. I tend
to agree: assembler and then Java for interoperability probably
makes more sense. His criteria were speed, transparency in the
design of the S-boxes (with some criticism for Serpent in that
respect) and simplicity. I liked his emphasis on simplicity in
algorithm design. He stated that a complicated (but insecure)
algorithm only increases the delay before somebody finds a
practical attack. His last criterion was "state of the art
research". HPC and Frog were deemed "empirical", with no reference
to any research results, and were unceremoniously thrown overboard
(I wonder what happened to Frog's splendid simplicity?). Deal and
DFC were judged to be based on a theoretical design, everybody
else on a mere heuristic design.

Then came Craig Clapp (who I believe participates in sci.crypt)
with a rather interesting paper on instruction level parallelism
of the candidates. He tried to measure what would happen on the
highly parallel processors projected for the future. For this he
actually used a compiler that produces code for hypothetical
processors with between one and ten pipelines. As expected, with
hypothetical processors that are similar to current designs RC6,
Mars and Twofish came out best. With higher levels of parallelism
though, Rijndael and Crypton were faster. By the way, in these
measurements he used Biham's notion of ciphers with "normalized"
security, i.e. sometimes fewer rounds than in the standard
implementation.

After that, Eli Biham made his case for normalizing the algorithms
to an equal level of security before making speed measurements. So
he cut down the number of rounds to an "insecure" number depending
either on their authors' estimates or on his own estimate - and
then added two rounds. By "insecure" here is meant that an attack
is known of roughly less complexity than exhaustive key search.
Now, of course, this is a lower limit; nobody really knows at what
number of rounds any of these ciphers become insecure in that
sense. Even so it is a good method because none of the candidates
should be allowed to be faster than that. If somebody finds a
better attack, then this would move up the minimum number of
rounds resulting in a slower cipher. Now, he was criticized for
adding two rounds instead of increasing the number of rounds by a
constant factor. Anyway, in his final analysis (surprise!) Serpent
becomes the fastest normalized cipher. For all that, Eli Biham's
experience in cryptography is enormous and his basic point is
undoubtedly sound. I think it is useful to copy his estimates here:

                    original              normalized
Cipher          rounds    cycles    min. secure rounds    cycles
Serpent           32       1800            17                956
Mars              32       1600            20               1000
Rijndael          10       1276             8               1021
Twofish           16       1254            14               1097
Crypton           12       1282            11               1175
E2                12       1808            10               1507
RC6               20       1436            21               1508
CAST-256          48       2088            40               1740
DES            (with the NIST API it would fall more or less here)
Safer+             8       4424             7               3871
DFC                8       5874             9               6608
Deal               6       8950            10              14917
LOKI97            16       6376            38              15143
Magenta            6      23186            11              42508
Frog               8       2182             ?
HPC                8       2602             ?

(He does not try to estimate Frog's and HPC's minimal secure
rounds - a pity - but Wagner's paper on Frog estimates double the
number of rounds, which would put me in 10th place.)
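
Reading the table, the "now" cycle counts appear to be nothing more
than the measured cycles scaled by the ratio of minimally secure
rounds to original rounds (e.g. Serpent: 1800 * 17 / 32 = 956).
Here is a small sketch of that reading - my own inference from the
numbers, not Biham's code:

#include <stdio.h>

/* Normalized cost ~= measured cycles * (minimal secure rounds /
   original rounds), rounded to the nearest cycle.  This reproduces
   the figures in the table above. */
static long normalized_cycles(long cycles, int rounds, int minimal_rounds)
{
    return (cycles * minimal_rounds + rounds / 2) / rounds;
}

int main(void)
{
    printf("Serpent: %ld\n", normalized_cycles(1800, 32, 17)); /* 956   */
    printf("RC6:     %ld\n", normalized_cycles(1436, 20, 21)); /* 1508  */
    printf("Deal:    %ld\n", normalized_cycles(8950,  6, 10)); /* 14917 */
    return 0;
}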

Next, Jim Foti explained some preliminary results of NIST's round
1 randomness testing. For that battery of tests they used NIST's own
statistical tools, Crypt-X (which I had never heard of before) and
Diehard. In general everybody made the grade, even though there
were some quirks with RC6 and HPC that are being interpreted as
statistical flukes while more testing is done.

Next, thankfully, came lunch. One amazing piece of information I
picked up at the table (while eating pasta al dente and a literally
bleeding piece of meat). As it happens, I sat at a table with a
guy from the Federal Reserve. So I asked him if it was true that
every day more than a hundred billion dollars is transmitted by
wire. I explained that I had read this somewhere but that it
seemed an incredible amount. Well, he answered that every day more
than three trillion dollars are transferred by electronic means -
in other words, every five days an amount comparable to the USA's
annual economic output is translated into electrons.

After lunch, Prof. Koeune from Belgium, also a sometime
participant in sci.crypt, presented a paper on the implementation
of several AES candidates on smart cards. He chose two instances:
the low cost, 8-bit Intel 8051 and the advanced, 32-bit ARM with
1kB of RAM. The approximate results are as follows:

                     8051                ARM
Cipher        RAM (bytes)   cycles     cycles
E2                344         9K         2K
RC6               205        14K         1K
Rijndael           49         4K         2K
Twofish            49          ?        10K

For more up-to-date information visit the page of the cAESar
project: http://www.dice.ucl.ac.be/crypto/CAESAR/caesar.html

Geoffrey Keating presented a similar paper for the Motorola 6805,
also an 8-bit CPU. Best algorithms: Rijndael, Twofish, Crypton and
RC6. RC6 does need a lot of RAM (170-200 bytes); later in the
evening Rivest presented RC6a, with a different key scheduler that
requires only 25 bytes of RAM and is also twice as fast in
assembler and five times as fast in C. (Rivest's middle name, of
course, is Merlin.)

Then came three presentations about attacks against smart cards,
no doubt orchestrated by NIST so that each paper made an even more
powerful impression than the one before.

The first paper was presented by Adi Shamir. I don't know whether
Shamir is a teacher, but if he is then he must be a very good one
indeed. First he explained about Paul Kocher's power attacks
against smart cards (explained in Kocher's home page
www.cryptography.com). The basic idea is to observe how much power
a smart card consumes while executing a cipher (this can be done
down to the level of microcode!). So every instruction has a
footprint, and the "Hamming weight" of the data processed
(i.e. the number of bits with value 1) also affects the power
consumption, since you need more power to charge a register or
memory bit to one than to clear it to zero. So if you write a value
with a lot of ones you will draw more power. This attack, by
the way, is non-invasive. Say you use a smart card and its
reader has been doctored by an attacker and transmits details
about the power consumption. Now Shamir showed that it is not
necessary to have known texts encrypted. By using data from many
different encryptions (different data but with the same key) and
comparing their power graphs you will find that there are places
where the power consumption strongly correlates. These are the
times when the cipher is doing work that is not dependent on the
variable data streams - i.e. when the cipher is doing its key
scheduling. The technique explained is slightly more complex, but
the point made was that DES could easily (actually especially
easily) be broken in this way. On the other hand, Skipjack, a
modern cipher produced by the US government, would be especially
hard to attack in this way.
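
The Hamming-weight leakage model behind these attacks is easy to
state in code. This is only a toy statement of the model, with
arbitrary units of my own choosing, not a real measurement setup:

#include <stdint.h>
#include <stdio.h>

/* Number of bits set to 1 in a byte: the usual first-order model of
   the data-dependent part of the power drawn when that byte is
   written to a register or bus. */
static int hamming_weight(uint8_t v)
{
    int w = 0;
    while (v) { w += v & 1; v >>= 1; }
    return w;
}

int main(void)
{
    /* Writing a value with more 1 bits is modelled as drawing more
       power: a constant per-instruction footprint plus one unit per
       set bit. */
    const double footprint = 10.0;            /* arbitrary units */
    uint8_t samples[] = { 0x00, 0x0F, 0xFF };
    for (int i = 0; i < 3; i++)
        printf("write 0x%02X -> modelled power %.1f\n",
               samples[i], footprint + hamming_weight(samples[i]));
    return 0;
}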

Next, Joan Daemen, one of the authors of Rijndael, presented
another comparative study about attacks against smart cards. He
differentiated between timing attacks (useful mainly against older
implementations of public key smart cards), power analysis
attacks, and Differential Power Analysis (DPA) attacks. The latter
is based on the fact that for some instructions the average power
consumption correlates with the value of one input bit. As
possible defense mechanisms he mentioned artificially produced
noise, but this can be cancelled out by using a larger sample of
cases. Another possibility is "balancing", an expensive
proposition where, for example, you use 4-bit arithmetic on an
8-bit smart card, taking care that the other 4 bits are always the
complement of the "true" 4 bits, i.e. maintaining a constant
Hamming weight. According to his analysis the algorithms that are
easiest to protect against such attacks are Crypton, DEAL,
Magenta, Rijndael and Serpent. The worst would be CAST-256, DFC,
E2, HPC, Mars and RC6. His conclusion was that these kinds of
implementation attacks are really much more relevant than the
academic attacks often discussed, because they can be implemented
in practice and do some real harm.
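
Here is a hedged sketch of the "balancing" idea as I understood it:
each 4-bit value is stored together with its 4-bit complement in
the other half of the byte, so every byte written has Hamming
weight exactly 4 whatever the data. The function names are mine,
not from Daemen's paper, and the sketch ignores the cost and
leakage of the arithmetic itself:

#include <assert.h>
#include <stdint.h>

/* Store a 4-bit value v in the low nibble and its complement in the
   high nibble; every encoded byte then has Hamming weight 4, so a
   Hamming-weight power model learns nothing from loads and stores. */
static uint8_t balanced_encode(uint8_t v)
{
    v &= 0x0F;
    return (uint8_t)(((v ^ 0x0F) << 4) | v);
}

static uint8_t balanced_decode(uint8_t b)
{
    return b & 0x0F;
}

int main(void)
{
    for (uint8_t v = 0; v < 16; v++) {
        uint8_t b = balanced_encode(v);
        int w = 0;
        for (uint8_t t = b; t; t >>= 1)
            w += t & 1;
        assert(w == 4);                  /* constant Hamming weight */
        assert(balanced_decode(b) == v); /* data still recoverable  */
    }
    return 0;
}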

This point was made dramatically clear by the next speaker, IBM's
Pankaj Rohatgi, who actually implemented the Twofish 6805
reference code on a ST16 smart card and was able to recover the
full 128-bit secret key using only about 50 independent block
encryptions. He showed how noise can be cancelled out in these
attacks and how he guessed the bits of the whitening key. Whitening
XORs data bits with constant key bits, i.e. D <- D xor K. Using DPA
one can find out whether D retains its value (in which case K=0) or
is inverted (in which case K=1). In this way all of the whitening
keys' bits can be found. In the case of Twofish the recovery of
the whitening key allows the definition of a small number (10^6)
of candidate master keys, from which the correct key can be
derived. He surveyed all 15 candidates declaring them all unfit.
The easiest to attack were judged to be Crypton, Deal, Loki-97,
Magenta, Rijndael, Safer+, Serpent and Twofish, where DPA needs to
be done only on very few rounds. Slightly harder would be ciphers
like CAST-256, DFC, E2, Mars and RC6. Hardest to attack would be
Frog and HPC, but only because they employ large key dependent
tables - which make them more difficult to implement in smart
cards in the first place. The moral here may be that the availability
of more RAM can make a smart card more secure. He mentioned some
possible defenses that do not work: adding noise or reordering the
sequence of operations. He made clear that even balancing would
not work, because this by definition is a technique where two
physically separate parts of hardware are used; these parts can
never be identical, so a correlation of bit values can always be
determined. He rather cryptically mentioned that secret-sharing
techniques, where one bit is split into several values, might have
an application here. I asked him whether a capacitor built into the
smart card and large enough to power several instructions
wouldn't effectively hide the detailed timing of the power
consumption. He said that a capacitor can easily be disabled, but
that this defense might work in a non-invasive threat model (the
most dangerous kind, where a smart card's key can be stolen
while the user is using the card in the normal manner).
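
To make the whitening attack concrete, here is a small simulation
of the idea as I understood it: the "power trace" for each
encryption is just the Hamming weight of the whitened byte
D = P xor K, and correlating it with each known plaintext bit
recovers the corresponding whitening-key bit. This is a noise-free
toy model of DPA with names of my own choosing, not Rohatgi's
measurement code:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int hamming_weight(uint8_t v)
{
    int w = 0;
    while (v) { w += v & 1; v >>= 1; }
    return w;
}

/* Recover one whitening-key byte K from simulated traces of D = P ^ K.
   Partition the traces on a known plaintext bit: if the key bit is 0
   the bit passes through and the "bit = 1" group draws more power on
   average; if the key bit is 1 the bit is inverted and the average
   difference changes sign. */
static uint8_t recover_whitening_byte(uint8_t key, int traces)
{
    double sum[8][2] = {{0.0}};
    int    cnt[8][2] = {{0}};

    for (int t = 0; t < traces; t++) {
        uint8_t p = (uint8_t)(rand() & 0xFF);  /* known plaintext byte  */
        uint8_t d = (uint8_t)(p ^ key);        /* whitened value D      */
        int power = hamming_weight(d);         /* simulated observation */
        for (int b = 0; b < 8; b++) {
            int bit = (p >> b) & 1;
            sum[b][bit] += power;
            cnt[b][bit]++;
        }
    }

    uint8_t guess = 0;
    for (int b = 0; b < 8; b++) {
        double diff = sum[b][1] / cnt[b][1] - sum[b][0] / cnt[b][0];
        if (diff < 0.0)                        /* inverted => key bit 1 */
            guess |= (uint8_t)(1u << b);
    }
    return guess;
}

int main(void)
{
    uint8_t key = 0xB7;
    /* With a few dozen traces the contribution of the other,
       unrelated bits largely averages out. */
    printf("recovered 0x%02X (true key byte 0x%02X)\n",
           recover_whitening_byte(key, 50), key);
    return 0;
}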

After that came the rump session. A few highlights:

Doug Whiting described some efficiency estimates on Merced
(supposedly a secret future Intel CPU). Here are the results:

Cipher        Pentium II    Merced
RC6               250         620
Rijndael          283         170
Serpent           900         800
Twofish           258         230

RC6 (as mentioned before) actually seems to get worse. The same is
estimated to happen with Mars too.

Takeshi Shimoyama claimed that some S-boxes of Serpent were not
secure against high-order differential cryptanalysis. His
presentation was laced with a lot of jokes and laughter, and I
cannot judge how serious (if at all) this problem is for
Serpent.

David Wagner showed that there is a large set of equivalent keys
in HPC that can be found just by flipping one bit of the master key
and then trying again about 500 times. This would mean that HPC
is not a good algorithm for implementing hashes. Wagner, by the
way, will present the paper about weak keys in Frog tomorrow.

Rivest presented RC6a with the much more efficient key scheduler
variant (see above). He claimed that 8-bit chips "will go away",
but nobody really believed that he really believed it.

Bruce Schneier made a case against the idea of choosing not one
but several "approved" AES algorithms. He used the argument that
cryptanalytic efforts should not be divided between several
ciphers because then each individual cipher would be less trusted.
I didn't quite understand this argument: all ciphers that make it
to the second round will be strongly analyzed anyway no matter
whether only one or more than one is finally declared the winner.
He mentioned that if there were two AES ciphers and you used one
bit of the key to choose one of them, then half the time you would
use a less powerful cipher. But one could also say that half the
time one would use the more powerful cipher. (Say, after the
second round cipher A is deemed the strongest followed by cipher
B. Now suppose NIST chooses both as a standard and somebody in the
future finds an attack that works better against A than against B,
which would make B stronger than A). Also, if there should be a
catastrophic failure of cipher A (a non-zero probability),
cipher B could easily be substituted just by fixing this one bit
in the key. He argued that multiple standard ciphers would be much
more expensive to implement. This is probably true but I think
this measurable cost should be balanced against the risk (a
non-zero probability) that a single AES may fail catastrophically
some time in the future when it is already broadly used. If that
should happen the cost would be extremely large. The possibility
of a catastrophic failure is really what should keep the NIST
people awake at night.

One wonders what is so wrong in declaring several good algorithms
for specialized environments. Actually, after today's presentation
about attacks against smart cards, somebody mentioned that a smart
card is a sufficiently different environment to justify a
specialized cipher. There is a large class of applications where
ciphers are implemented in software, where speed is not important,
and where security is important. In these cases one might choose
to combine several algorithms in series at no particular additional
cost.

So who will make it to the second round? No idea.



Christopher Jobmann

Mar 23, 1999

dian...@tecapro.com wrote:
>
> Here is my report on the first day of the AES2 in Rome. Lots of
<...lots of news...>

Thanks !


> So who will make it to the second round? No idea.

Will the second round candidates be chosen/announced at this conference,
or will we have to wait to find out?

jsa...@ecn.ab.ca

Mar 23, 1999

dian...@tecapro.com wrote:
: Almost all candidates
: ciphers were stained one way or the other - and a surprising
: strong showing of Rijndael (in the negative sense that very little
: negative was said against it).

This reminds me of something I've noticed. Although Magenta had serious
problems, the cryptanalysis of it was based on a weakness in its key
schedule. The only other possible problem with it is that the S-box,
although nonlinear, seems a bit too simple.

If both these problems were corrected - which wouldn't change the speed of
enciphering blocks in it - the design seems like potentially a very good
one. (I had thought, though, that it would be slow. If that isn't the
case, it seems to be a good basic design.)

: Thus, Eli Biham proposed a normalization of the
: candidate algorithms depending on the number of rounds that are
: known not to be secure. This view has also its detractors.

That is a way to compare the designs for one type of merit. I'd consider
that a valid thing to examine, but I'd also note that it was more relevant
to determining which of the ciphers have ideas in them that would be
useful in a new cipher than in choosing between the existing designs as
they are: if this point were missed, I'd be a detractor.

: IBM's Pankaj Rohatgi explained how he got all 128 bits of
: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
: card!

I wonder how secure some of the other ciphers would be, if the kind of
optimizations Bruce suggested for fitting Twofish on a smart card were
applied to them. That is, if it were possible.

This result may well mean that the whole idea of using a cipher on a
smartcard will have to lead to the design of special ciphers optimized for
that particular application.
: He made a good point that
: only assembly code speeds should be compared because otherwise you
: measure a particular compiler's efficiency rather than the
: cipher's.

If you compare all the ciphers on the same compiler ... and I remember at
a computer chess competition, someone suggested that it wouldn't be a bad
idea to require the entrants all to use C, so that it wouldn't be a
competition of assembly-language coding skills.

Possibly one should really be comparing assembly code speeds for an
idealized machine, with an architecture free of ... peculiarities. But
comparing everything on one computer running Java was a good approach to
take if there were very limited resources to set the thing up.

: Less
: convincingly, he claimed that also 32 bit processors will stay
: with us for a long time.

I think that 32-bit integer arithmetic will stay with us for a long time,
even if few microprocessors have a 32-bit data bus. I'll definitely admit
I don't think that there even *are* many 386s and 486s in new products
today. But I also don't think that it makes any sense for a computer to
force people to do integer arithmetic exclusively on 64-bit quantities.
The DEC Alpha, on which an int is 64 bits and a short int is 32 bits, is,
and I think will remain, very unusual.

Memory will always cost money, and people will always want more of it, so
they're unlikely to want to store quantities with useless precision.

: The technique explained is slightly more complex, but
: the point made was that DES could easily (actually especially
: easily) be broken in this way. On the other hand, Skipjack, a
: modern cipher produced by the US government, would be especially
: hard to attack in this way.

This is a significant result, that will add to our knowledge.

: Another possibility is "balancing", an expensive
: proposition where, for example, you use 4 bit arithmetic on an 8
: bit smart card, taking care that the other 4 bits are always the
: complement of the "true" 4 bits, i.e. maintaining a constant
: Hamming weight.

Ideally, a specialized microprocessor designed to do its calculations in a
balanced way should be used for encryption...

: Bruce Schneier made a case against the idea of choosing not one
: but several "approved" AES algorithms. He used the argument that
: cryptanalytic efforts should not be divided between several
: ciphers because then each individual cipher would be less trusted.
: I didn't quite understand this argument: all ciphers that make it
: to the second round will be strongly analyzed anyway no matter
: whether only one or more than one is finally declared the winner.

Well, I don't think that particular point demolishes his argument. He is
thinking of the kind of analysis DES received - over decades.

But that is not to say I necessarily agree with that conclusion. Because
that applies to how well the AES will be trusted ten years from now - *if*
it isn't broken. (Suppose they choose the wrong one.)

Ten years from now, people may be using other ciphers. This doesn't seem
to be relevant to what is most secure for AES users in the near term
period after adoption of the standard.

: The possibility
: of a catastrophic failure is really what should keep the NIST
: people awake at night.

I agree with you there, but I don't think choosing two AES candidates
(although it at least provides people with alternatives) really prevents
that danger. Maybe we should conclude that so much has been learned from
the AES process that all the entrants - having been designed before it -
are obsolete. And so what is needed is to take the best ideas from all the
entrants, and create a new block cipher based on this new knowledge.

: One wonders what is so wrong in declaring several good algorithms
: for specialized environments. Actually, after today's presentation
: about attacks against smart cards, somebody mentioned that a smart
: card is a sufficiently different environment to justify a
: specialized cipher.

Yes, but I think that a special cipher for smart cards wouldn't look like
an AES candidate. Instead, it would be more like RC4. Block ciphers don't
offer maximum security for minimal resources.

The AES effort sought a block cipher because, in certain ways, a block
cipher is more flexible than a stream cipher. This is fine; but block
ciphers also have fundamental limitations too, and so while I think an
effort to find a good cipher for smart cards is a good idea, an effort to
find a good block cipher with a 128-bit block and 128, 192, and 256-bit
key sizes for smart cards is *not* necessarily such a good idea.

John Savard

DJohn37050

Mar 23, 1999

For an alternative viewpoint to Schneier's "choose one" perspective, see my
"Future Resiliency" paper on the NIST website.

Don Johnson

STL137

Mar 23, 1999

Merced isn't a supposed chip - it is definitely going to come out mid-2000.

-*---*-------
S.T.L. ==> STL...@aol.com <== My quotes page is at: http://quote.cjb.net
~~~ My main website is at: http://137.tsx.org ~~~
If you see a message of mine posted on two newsgroups, then it is because I
have replied to a crossposted message. I *never* crosspost of my own accord!
I block all unapproved E-mail. If you wish to talk to me, post to alt.test.9
with the subject "Moo" and your E-mail address in the body. I will allow you
as soon as I sign on next.
"This universe is not hostile, or yet is it friendly. It is simply
indifferent" - John H. Holmes, The Sensible Man's View of Religion

lam...@bite.me.spammers

Mar 23, 1999

jsa...@ecn.ab.ca () writes:
>The DEC Alpha, on which an int is 64 bits and a short int is 32 bits, is,
>and I think will remain, very unusual.

on an alpha (linux and digital unix) longs are 64, ints are 32 and shorts
are 16.

--
Lamont Granquist (lam...@u.washington.edu)
ICBM: 47 39'23"N 122 18'19"W

wtshaw

Mar 23, 1999

In article <7d6q7j$mvs$1...@nnrp1.dejanews.com>, dian...@tecapro.com wrote:

> This point was made dramatically clear by the next speaker, IBM's
> Pankaj Rohatgi...... He surveyed all 15 candidates declaring them
> all unfit.

I maintain that the criteria were written in order to prevent a really
strong cipher from getting much consideration. It is refreshing to see
this statement actually being made.
.....

>
> So who will make it to the second round? No idea.
>

Thanks for your report
--
If I have not offended you lately, be patient as I will probably get a round to it. As a critic, I am probably duty bound to say things that I otherwise would not if I were trying to butter people up; is honesty the best policy, or just politically correct at certain times?

Brian Gladman

Mar 24, 1999

<dian...@tecapro.com> wrote in message
news:7d6q7j$mvs$1...@nnrp1.dejanews.com...

>
>
> Here is my report on the first day of the AES2 in Rome. Lots of
> participants, 180+ people from (according to NIST) 23 countries.
> About half the speakers today were not native English speakers, so
> this is indeed an international process. 28 papers were submitted
> (all are at NIST's site: aes.nist.gov) and 21 will be presented
> here.

Many thanks for a helpful and informative summary of the AES2 conference. As
a contributor who could not attend I much appreciate this feedback.

[snip]


> Then came Bruce Schneier's presentation about speed. Again Bruce's
> exposition was interesting and clear. He made a good point that
> only assembly code speeds should be compared because otherwise you
> measure a particular compiler's efficiency rather than the
> cipher's. This is true...

No it's not! A number of factors are involved in ***all*** comparisons:

1. algorithm efficiency
2. the efficiency of the tools used to convert the code into machine format
3. the quality of the implementation.

and others as well. It is quite wrong to suggest that comparisons in high
level languages test only item 2 and, if Bruce really did say this, then he
clearly does not understand the issues involved. It is true that assembler
comparisons will reduce the uncertainties caused by item 2 but they often do
so at the expense of introducing far worse uncertainties as a result of item
3. In Bruce's own assembler comparisons, for example, the 'quality'
(non-existence!) of assembler implementations of some AES candidates
produced results in earlier versions of his report that were quite wrong. I
personally spent quite a bit of time communicating with Bruce and his team
seeking corrections of quite large magnitude in some of his comparisons.

In practice it is important to compare AES candidate performance in
assembler and in higher level languages. Other things being equal, an
algorithm that can be coded efficiently in both assembler and high level
languages is better than one that is inefficient unless it is coded in
assembler. If an algorithm is efficient in high level languages we can have
both efficiency AND portability and these are both valuable properties.
Fortunately there are several AES candidates that meet this requirement.

[snip]

> Bruce Schneier made a case against the idea of choosing not one
> but several "approved" AES algorithms.

It is worth reading Don Johnson's paper here. I too have suggested that it
would be wrong to select a single AES algorithm (in comments to the NIST AES
forum). It is simply not good security practice to 'put all our eggs in one
basket' in this way.

But I do not support choosing algorithms for good performance in different
domains. If we choose a good 'smartcard algorithm', a good 'PC algorithm',
.... we will not get the algorithm redundancy that we need in each domain.

Hence if we do go for (say) three AES winners, it will be important to
select algorithms that are all good across a broad range of application
domains, not the best algorithms in each individual application domain. So
we do need to take Don's ideas on board and consider carefully whether we
want to take the risks involved in selecting a single winner.

> One wonders what is so wrong in declaring several good algorithms
> for specialized environments. Actually, after today's presentation
> about attacks against smart cards, somebody mentioned that a smart
> card is a sufficiently different environment to justify a specialized
> cipher.

While I support the selection of more than one AES winner, I don't like the
idea of selection on the basis of the best performers in each application
domain since this will undermine the very algorithm redundancy that we are
seeking.

> So who will make it to the second round? No idea.

We need much more on security rather than performance here. And we don't
yet understand implementation assurance differences between the AES
candidates. Here, we are only just starting to see (with smartcards) some
of the many issues involved and we have very little information for other
processor families in this respect. In AES time scales it will be attacks on
implementation weaknesses that will almost certainly offer the easiest 'way
in' and we are really only just starting to look at these issues.

Brian Gladman


dian...@tecapro.com

Mar 24, 1999

Here is my report on the second and last day of AES2.

First came five papers with analysis of several algorithms (in
general it was felt that the analytic results of the first round
were relatively few):

Sean Murphy described properties of Twofish pertaining to its key
schedule. First it was shown that the whitening keys (XORed with
the data before and after the main cipher) are not uniformly
distributed, i.e. do not take all possible values. In a similar vein,
each 8-byte round subkey also cannot take all possible values.
Ideally, of course, all key material within a cipher should be as
close to random as possible, otherwise an attacker gains some
predictive power. Later in the afternoon workers from the Twofish
team explained exactly how much entropy was lost: the whitening
key has 117 bits of entropy (instead of the optimal 128) and each
round key has 63.2 bits of entropy (instead of 64). They claimed
this is not a useful cryptanalytic property, which does seem
reasonable.

John Kelsey, also from the Twofish design team, described a
weakness in Safer+'s key schedule (the 256-bit key variant) that led
to a possible attack requiring 2^200 encryptions - clearly a very
theoretical attack. He finished suggesting an improvement to the
cipher's key schedule. Later in the afternoon, Safer+'s designer,
James Massey, conceded that his main attention was invested in the
128-bit variant and described a similar fix.

Vincent Rijmen, one of Rijndael's designers, explained the attack
on LOKI97, the first serious published attack against an AES
candidate: a differential attack that needs 2^56 chosen plaintexts
and a linear attack that needs 2^56 known plaintexts but works only
on 2^(-25) of the keys.

Then came David Wagner's exposition of the attack on Frog. David,
by the way, is still only a student in California. I liked his
description of Frog as a cipher where the "wires" are parametrized
by the key. This means that Frog looks like a large collection of
individual ciphers with different internal wiring. The basic point
is that the attacker can search and choose the instances where
this wiring defines a weaker cipher, and then mount an attack on
the weak key that gave rise to this configuration. (For those
familiar with Frog the weak wiring happens when the bombpermu
pointer addresses the previous byte.) It turns out that 2^(-33) of
the keys are now weak and that in these cases the key can be
broken with 2^58 chosen plaintexts. The decryption function of
Frog has much slower diffusion (something first noticed by John
Savard, a frequent participant in sci.crypt) which allows for a
more successful attack that requires 2^36 chosen ciphertexts and
works for 2^(-29.3) of the keyspace. A linear attack on Frog was
also described.

Frog-like ciphers do have this kind of problem: by putting some or
most of the functional definition of the cipher within the key,
the possibility exists that some of these functions will be weak.
I suppose there are ways to fix Frog in order to strengthen it
against the differential and linear characteristics discovered by
Wagner and achieve a much lower or zero number of weak keys,
trivially: suppress the weak wiring configurations. (I did ask him
to propose a fix but he was not forthcoming - I wonder whether
using ADDs instead of XORs for diffusion would cut it). The really
interesting question though is to find out if a generalized
version of Frog that just follows its design principles to a
higher degree would increase or decrease the frequency of Wagner
weak keys. The whole idea in Frog is to do nothing designed
specifically as a defense against a particular attack. So if
Generalized Frog turns out to be weaker against Wagner's attack
then it is bad news for the basic idea. But, if it turns out that
"purer" (even less structured) forms of Frog are more resistant,
then confidence in Frog's design principle will increase. So
Wagner's attack can be used as a platform to evaluate the design
methodology, something even more important than the candidate
cipher itself.

Finally Eli Biham described the attack against Magenta. The attack
works with 2^64 chosen plaintexts and 2^64 complexity or 2^33
known plaintexts but 2^97 complexity. In the afternoon, Magenta's
Klaus Huber gave a very short rebuttal, explaining that this
weakness can easily be fixed and that Magenta does have some
interesting theoretical and implementational properties. In
software it is relatively slow though.

Next came a series of theoretical papers about various candidates:

Jacques Stern of DFC described an update of the cipher that is
faster and can be made scalable to allow for different block sizes
(a useful property of several other candidates). The DFC team
worked really hard to have decorrelation theory serve as the
theoretical underpinning of their cipher. (Also it seems that
decorrelation modules can be inserted into other ciphers such as
E2.) He explained that the proofs apply to an idealized version of
the cipher that DFC approximates. By the way, later this
afternoon somebody asked whether "proofs" are really important in
cipher design. Bruce Schneier's reasonable comment was that any
proven property in a cipher increases the quality of its analysis
and hence its security. He also made clear that proofs discuss a
specific thread model and that general proofs for all possible
threats are not really possible. Well, I tried to go the other way
and asked Jaques whether DFC's designers had put any thought in
defending against unknown attacks (the proofs on DFC pertain only
to first order differential attacks). He clearly disliked the
question and his answer was somehow patronizing in the sense that
it is meaningless to discuss unknowns. There was laughter around
the room but I really think this is no laughing matter: Every ten
years or so a new powerful cryptanalytic technique is discovered.
In AES's life span maybe five new types of attack will be
published. Only in the last year unforeseen and extremely powerful
attacks against smart cards were demonstrated. Also, several
ciphers including Mars, E2 and Twofish (not to mention Frog) did
include in their documentation at least some reasoning about their
strength against unknown attacks. I understand that mathematicians
prefer to work with well defined problems and consider vague
problems as problems not worth thinking about, but ciphers are
machines to be used in the real world where the givens are not
always clear-cut.

Kazukuni Kobara next discussed a very theoretical paper on
pseudorandomness. After her talk somebody had to ask her how this
work relates to E2, to which she answered, as if just remembering,
that E2 does indeed achieve optimal pseudorandomness after two
rounds. It is interesting to observe how cultural characteristics
have an impact in this international process. The designers of the
Asian candidates (E2 and Crypton) are really far too modest and
this, I think, hurts their chances. American designers, of course,
are the most crass, and I think this is a winning strategy.

Next, James Massey explained why Safer+ diffusion is optimal.
Safer+ is a mathematician's kind of cipher and its weaknesses were
discovered in the key scheduler, precisely the place where math
does not help a lot.

RSA's Scott Contini quantified the differential properties of
data-dependent rotations followed by multiplications as used in
MARS and RC6 - a technique that appears to be powerful and is also
under the cloud of an RSA patent. In general, I am in favor of
patents but not in favor of being able to patent such broad and
obvious ideas as using data-dependent rotations in ciphers; after
all, this is an available machine instruction. My very first cipher
design, done several years ago, GodSave, uses data-dependent
rotations like crazy and I had no idea that RSA was using them
too.

Finally, Doug Whiting described some new results on Twofish. Even
better speeds and better implementation flexibility (even though
it appears that the very fastest code is self-modifying - this may
look "unpure" to some but I think that it is a valid optimization
technique). Even more carefully crafted cryptanalytic results were
given.

Next came lunch. Today I sat with a guy who works with a large
organization that builds encryption hardware for the government!
Sorry, not NSA; I never met any of these guys even though they
were there - they appeared to be playing No Such Agency. I
certainly would have enjoyed talking to somebody from the NSA. No,
my table companion worked for a private company and the government
in question is the UK's. I asked him why government ciphers are still
implemented in hardware and not in software working in dedicated
one chip computers. Apparently speed is not the answer - very high
security communication is really low bandwidth. I didn't find his
answers very convincing: He said that an alpha particle from space
can knock out a transistor in the chip and modify the cipher. Then
again, it is a trivial matter to have not one but many CPUs
computing the same cipher in parallel to account for such
events. He also claimed that software correctness is more costly
to verify than a hardware design. Anyhow, I suspect that for all
the mathematical image of cryptography, tradition and the inertia
of ideas play an important role.

Later, I happened to speak to one of the top cryptographers of the
world and I asked him one of my favourite questions: First assume
two very good ciphers A and B, say two of the better AES
candidates. If your life depended on it and you had to choose
between A.A (doubling A's rounds), B.B (doubling B's rounds) or
A.B (sequentially executing both ciphers), what would you do? The
standard answer is that mixing ciphers is a bad idea because the
resulting cipher could be weaker than each individual one, that
the resulting cipher would be a completely new cipher which would
require fresh analysis which is the only way to gain trust, etc.
Well my companion's answer was "A.B", and he gave precisely the
reasons I expected: a future attack that may work against A could
very well stop at B's gate.
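
For what it is worth, here is a minimal sketch of what "A.B" means
in practice: the two ciphers are simply composed, with
independently chosen keys, so that a future break of one still has
to get past the other. The encrypt_A/encrypt_B prototypes are
hypothetical placeholders for any two 128-bit-block candidates;
link against whichever implementations you like.

#include <stdint.h>

/* Hypothetical single-block interfaces for two independent ciphers
   with 128-bit blocks and 128-bit keys. */
extern void encrypt_A(uint8_t block[16], const uint8_t keyA[16]);
extern void encrypt_B(uint8_t block[16], const uint8_t keyB[16]);

/* "A.B": run cipher A and then cipher B on the same block, with keys
   chosen independently of each other. */
void encrypt_AB(uint8_t block[16],
                const uint8_t keyA[16], const uint8_t keyB[16])
{
    encrypt_A(block, keyA);
    encrypt_B(block, keyB);
}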

After lunch came a long session with open discussions. Supposedly
candidates would slug it out between themselves while seated in
front of the audience - you know the Christians and lions metaphor
that NIST's Miles had used to explain the choice of Rome for the
second AES conference. Well, to play it safe I sat at the very
back of the room close to the exit. Thankfully, nothing dramatic
happened. A few designers talked for a few minutes defending their
ciphers and that was it. By the way all candidate ciphers, with
the exception of HPC and LOKI97, were represented at this
conference.

Then came a long discussion about intellectual property (IP) issues.
Miles really seemed unhappy about the whole situation. Here is the
problem: All candidates have renounced their property rights on
their own ciphers if they are declared winners. This was a
condition to enter the competition. So far so good, but even in
the first conference the issue was raised of what would happen
if a losing candidate claimed IP rights on some parts of the
winning algorithm: this certainly could result in a very messy
litigious situation that would defeat one of the basic ideas
behind the AES: that it should be a global royalty-free algorithm.
A few weeks back NIST sent to all candidates an informal question
trying to measure how they stood in this matter. Several of them
(CAST-256, Crypton, Deal, Frog, LOKI97, Rijndael, Serpent and
Twofish) gave a clear positive answer. The others (DFC, E2, HPC,
Magenta, MARS, RC6 and Safer) were not that forthcoming. The
situation is not as bad as it looks. For example, one of the
favorite ciphers, RC6, in only asking for some kind of honorific
recognition if some of its IP is used by the AES winner. Another
favorite, IBM's MARS, had a more convoluted position. They
claimed the NIST's clarification question was not very clear
itself and that it literally said something very different -
therefore their convoluted answer which striked me also as somehow
ambiguous. Overall, my feeling is that NIST will apply gentle arm
twisting behind the scenes before announcing the finalists for the
second round and that it will succeed in getting all the important
designers to agree to play it clean.

After that a lively discussion followed with the participation of
many people from the audience. The main themes:

Smart cards. Several people felt that the importance that was
given to implementation issues of the AES on a low-end smart card
was not reasonable. The standard argument in favor of very tight
implementations on smart cards was that there are cases where
millions of smart cards are needed where even a small price
difference has a large impact. Against that, somebody from the
audience presented the following strong argument: cases where the
AES is going to be used in a smart card are cases where you want
to protect information that is valuable - otherwise you don't
really need a 128-bit key and can use any of the well-known
medium-range ciphers. If you are protecting information that is valuable,
then spending one dollar more for each card is a reasonable
investment. Miles explained that originally NIST had not intended
to require implementations on 8-bit platforms, but the initial
comments supplied by the public were strongly in favor of NIST
adding smart card flexibility to the AES. So, even though it
will be judged positively if a candidate does operate efficiently
on a smart card, he felt that the whole issue was becoming
overrated. My feeling is that the DPA attacks against smart cards
have really confused the situation here - possibly security for
smart cards will have to follow completely novel design paradigms,
maybe even hardware designs. (At the coffee break I approached a
small group of people that included Adi Shamir who had talked
about power analysis attacks the previous day. The normal
implementation of Generalized Frog takes the secret key and
compiles it to actual code that can then be loaded in a smart
card. So I asked Adi if a situation where each smart card actually
holds a different program processing only data would be an
improvement. As expected the answer is no, because one of the
easiest things to read from a card is the "signature" of the
instructions being executed. My other idea, to have a card hold a
string of keys that will be used only once might work with some
applications, such as high security home banking. On the whole
though the situation with smart cards is considered hopeless).

Another big theme was to have not one but several candidates
declared AES. The previous day Miles had explained the main ideas
of Don Johnson's paper (Johnson, unfortunately, was not present) and
Bruce Schneier had presented a passionate case in favor of a
unique AES. Today several participants spoke in favor of more than
one AES (the main arguments were insurance against a weakness
being found in the main winner and allowing for more specialized
ciphers); I think these arguments won the day. Miles said that
NIST would very carefully consider the advantages that more than
one winner would represent. Having more than one NIST-approved
cipher does have another advantage nobody mentioned: it would
quiet those paranoid minds that might suspect that the AES winner
could be an NSA-implanted mole.

The question of number of rounds came up. NIST said it would
consider a cipher secure if it could not be broken with 2^64
chosen texts. A thoughtful counter-argument was that different
amounts of analysis result in different apparent minimal secure
rounds. For example, complicated ciphers have an advantage here,
because simpler ciphers allow for more elegant analysis and
therefore attract more cryptanalysis. This gave me the idea that
instead of trying to normalize security (a difficult proposition)
and then comparing speeds, a better methodology could be to
normalize speed (best assembly implementation on one or two
reference platforms) and then compare security. After all,
optimized speed on a given platform gives a good idea about an
algorithm's workload. Also, speed could be normalized at such a
high level that ALL competing ciphers would be pushed into
insecurity, which would then allow more or less reasonable
quantitative comparisons to be made of their relative
"structural" strength. It is clearly too late to use this
methodology in the first round, but maybe it could be used in the
second. I feel like sending NIST an official comment to this
effect. I wonder what the reader thinks about this?

At the very end of the session Miles explained what the future of
the AES competition looks like:

Comment period for Round I ends April 15. So if the reader wants
to influence the process, the time to send in a comment is now.
The email address for that is aesfir...@nist.gov

April 19, the official comments will be published.

May 15, NIST will accept "tweaks" to the algorithms. NIST is the
final judge of what qualifies as a tweak.

Mid-summer, probably by July 99, the finalists will be announced
(probably no more than five, but possibly more). At the same time
the justification for their choice will be given. At that point,
Round 2 begins.

Now the finalists may submit new optimized code for any platforms
they feel like.

CD-ROMs will be distributed 2-3 months after the start of Round 2
with the new code and most relevant information about the AES
process.

Miles expressed the hope that in Round 2 more serious
cryptanalytic work will be presented.

Efficiency will now be tested also for the 192 and 256 bit
variants.

Hardware performance estimates for all finalists will be supplied
by NSA.

IP issues will be clarified.

January 15, 2000 is closing day for presenting papers for AES3.

Week of April 10, 2000, AES3 will be held in New York.

May 15, 2000 comment period of Round 2 closes.

Approximately August 2000 the winner(s) are announced. The "(s)"
was on NIST's slide.


So who will the winner(s) be? No idea. Many people expressed the
opinion that this may indeed not be a fully technical choice. For
example, it is difficult to imagine that the main AES chosen by NIST
for the US government could be a foreign cipher. (By the way, this
makes another point in favor of more than one winner. Jacques Stern
did mention something rather cryptic about a possible European
cipher competition.)

Who will make it to the second round? I do feel there is some general
consensus. Here are my choices:

RC6 and MARS will make it for their pedigree not to mention the
fact that they are fine ciphers with probably a lot of work
invested in them.

Rijndael should make it on merit alone.

After that, things become somewhat less clear.

Serpent is a favorite. It is considered a very conservative
design and everybody agrees that security is indeed the
preeminent concern. Its designers have some of the sharpest
minds in the field and I think NIST will choose Serpent just
to keep Shamir and Biham in the competition, not to mention to
motivate them to attack everybody else.

Twofish is all over the place; it would be cruel to kill it off
so soon. Also it is a very valiant effort and certainly it is a
fine cipher. Extracting the full key out of a smart card with
Twofish is widely considered an implementation issue rather
than a weakness of the cipher itself. If you really want to be
logical here, this makes no sense. On the other hand, if you
kicked Twofish out for this reason alone then there would be a
blood bath, with most of the well-considered candidates being
thrown overboard too. Twofish does have a good pedigree (it
came out of Blowfish), is fast and flexible on many platforms
and appears to be strong (results to date are not really
significant).

Finally, as a nod to Asia, I think NIST will pick E2 too. It is
certainly a very fine cipher considering its analysis, its
theoretical underpinning and its speed. The Japanese presenters
are probably the most sympathetic and choosing E2 would include
some ladies in the finalist makeup. Also nobody wants to miss
the nice accents.

So here is my guess: MARS, RC6, Serpent and Twofish from the US,
Rijndael from the EU and E2 from Japan.

Alan Braggins

Mar 24, 1999

dian...@tecapro.com writes:
> it appears that the very fastest code is self-modifying - this may
> look "unpure" to some but I think that it is a valid optimization
> technique).
[...]

> improvement. As expected the answer is no, because one of the
> easiest things to read from a card is the "signature" of the
> instructions being executed.

Any ideas whether self-modifying code would make reading instruction
signatures easier or more difficult?

Jim Gillogly

Mar 24, 1999

Thanks very much for the reports from the AES conference.
Your views and notes are invaluable, as they were from the
first conference. Well done!

dian...@tecapro.com wrote:
> rounds. It is interesting to observe how cultural characteristics
> have an impact in this international process. The designers of the
> Asian candidates (E2 and Crypton) are really far too modest and
> this, I think, hurts their chances. American designers, of course,
> are the most crass, and I think this is a winning strategy.

Ouch!

> The standard answer is that mixing ciphers is a bad idea because the
> resulting cipher could be weaker than each individual one, that
> the resulting cipher would be a completely new cipher which would
> require fresh analysis which is the only way to gain trust, etc.

I don't know whose standard answer this is, but it has to be
totally bogus for immediately obvious reasons: if A.B is
E2.RC6 and it's weaker than either of them, then you
can mount an effective attack on E2 by encrypting it with
RC6, and vice versa. This can be done by the cryptanalyst
in possession of A-ciphertext, and doesn't need to have been
done by the sender.

Whoever gives this "standard answer" isn't thinking it through.

--
Jim Gillogly
2 Astron S.R. 1999, 16:56
12.19.6.0.17, 12 Caban 10 Cumku, Eighth Lord of Night

DJohn37050

Mar 24, 1999

Yes, I mention AB encryption (or even more) as Super AES in my paper as a
possibility for the paranoid who want to spend the MIPS.
Don Johnson

William hugh Murray

Mar 25, 1999

Of course, the value of super-encryption, even 3DES, assumes that
encryption is the weak link in one's security. Since that is almost
never the case, why bother? I ask this as a serious question. It seems
to me that the entire AES effort spends incredible amounts of treasure
on the easy part of the security problem.

What am I missing?


John Savard

Mar 25, 1999

William Hugh Murray <whmu...@sprynet.com> wrote, in part:

>Of course, the value of super-encryption, even 3DES, assumes that
>encryption is the weak link in one's security. Since that is almost
>never the case, why bother? I ask this as a serious question. It seems
>to me that the entire AES effort spends incredible amounts of treasure
>on the easy part of the security problem.

>What am I missing?

You are fundamentally correct.

However,

- when there is a weakness in most parts of computer security, it can
only be exploited by certain people, who may need physical proximity, or
who may need to take a risk of getting caught. Weaknesses in ciphers can be
exploited by passive eavesdroppers halfway around the world.

- improvements in the speed and cost of computers mean that DES _is_ too
weak. Yes, encryption is only the easy and obvious tip of the security
iceberg. (Did somebody else say this before? I wouldn't want to plagiarize
Bruce.) But that's no reason not to make a certain minimal effort to make
it adequate before going on to the "real" problems. And, no offence, but
the AES effort can be described as "minimal" - in terms of the expenditures
made by the U.S. Government, the rewards offered to participants, and so
on. When the U.S. Government is *serious* about wanting a new cryptosystem,
it goes to the NSA.

So, in conclusion, I'd say that the AES effort is spending no more effort
than necessary on what is the easy part of the security problem, but still
a part that needed a little work at the moment.

John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html

Nicol So

unread,
Mar 25, 1999, 3:00:00 AM3/25/99
to
William hugh Murray wrote:
>
> DJohn37050 wrote:
> >
> > Yes, I mention AB encryption (or even more) as Super AES in my paper as a
> > possibility for the paranoid that want to spend the MIPS.
> > Don Johnson
>
> Of course, the value of super-encryption, even 3DES, assumes that
> encryption is the weak link in one's security. Since that is almost
> never the case, why bother? I ask this as a serious question. It seems
> to me that the entire AES effort spends incredible amounts of treasure
> on the easy part of the security problem.
>
> What am I missing?

I see things a little differently. It is true that cryptography is
often not the weakest link of a system, but it has the potential of
becoming it when you start fixing other weak links. When you have fixed
things to a point where all the weak links are approximately equally
difficult to breach, you might be tempted to stop because at that point
strengthening any single link will merely shift the weakest link to
somewhere else. To strengthen security beyond that level, you've got to
start somewhere and you don't want to be limited by the _current_ weak
links. Unless strengthening the cryptography is taking away a
significant amount of resources that could otherwise be used to
profitably strengthen other weak links, doing it is worthwhile (up to a
point).

I don't think we are really spending that many resources on an improved
encryption standard, considering the number of future systems that can
benefit from the effort.

Nicol

Volker Hetzer

unread,
Mar 26, 1999, 3:00:00 AM3/26/99
to
dian...@tecapro.com wrote:
>
> Here is my report on the second and last day of AES2.
First, thanks.

>
> So who will the winner(s) be? No idea. Many people expressed the
> opinion that this may indeed not be a fully technical choice. For
> example, it is difficult to imagine the main AES chosen by NIST
> for the US government could be a foreign cipher.

Yeah, and then restricting export to the very country they got the
algorithm from.

BTW, thanks to any non-US cryptographers who overcame any possible grudges against
the US gvt. and participated anyway. I really hope the US gvt. honors this
by finally abandoning those stupid export restrictions.

Volker

Trevor Jackson, III

unread,
Mar 26, 1999, 3:00:00 AM3/26/99
to
John Savard wrote:
>
> William Hugh Murray <whmu...@sprynet.com> wrote, in part:
>
> >Of course, the value of super-encryption, even 3DES, assumes that
> >encryption is the weak link in one's security. Since that is almost
> >never the case, why bother? I ask this as a serious question. It seems
> >to me that the entire AES effort spends incredible amounts of treasure
> >on the easy part of the security problem.
>
> >What am I missing?
>
> You are fundamentally correct.
>
> However,
>
> - when there is a weakness in most parts of computer security, they can
> only be exploited by certain people, who may need physical proximity, or
> who may need to take a risk of getting caught. Weaknesses in ciphers can be
> exploited by passive eavesdroppers halfway around the world.

In addition to the physical locality you described, I think temporal
locality is another factor that influences the desire for "unreasonably"
strong ciphers. Given that we want some secrets to remain secret for up
to 50 years, we have to anticipate that an adversary might spend 50
years beating on a message whose interception took only a fraction of a
second.

As an example of the difficulty of this problem, consider the
duplication of keys. A physical key is easy to duplicate given a master
as a reference. But what if, by simply recording the usage of the key,
its properties could later be determined and a duplicate constructed?
An extreme example would be the reconstruction of the key from a video
or picture of a user applying it to a lock. If this kind of capability
was suspected we'd soon see "unreasonable" key architecture and
deployment.

Of course the real situation is even worse in that a duplicate physical
key cannot be tested without applying it to the actual target lock,
which is not a hidden process. A cipher key can be tested in secret by
applying it to intercepted messages (the metaphorical equivalent of
applying the duplicate key to a picture of the target lock). Since the
attacker has all of these after-the-fact advantages in his favor, the
defender has to invest a large amount of effort before the fact. This
inequality can never be redressed.

Physical security can be increased after the fact. A suspect lock can
be changed without disturbing the objects being secured. A
cryptographic lock can never be changed or even tweaked. There is
*nothing* the defender can do to protect messages already sent. Thus
the premium will always be on the side of "unreasonably" strong ciphers.

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
>Subject: Live from the Second AES Conference
>Date: Tue, 23 Mar 1999 01:19:57 GMT
>From: dian...@tecapro.com
>Organization: Deja News - The Leader in Internet Discussion
>Newsgroups:sci.crypt

>
> First some general observations about today: A very full schedule
> with 13 presentations and a rump session with another 15 or so
> short themes, that was finally concluded at 9pm. Many surprises
> and, frankly, some confusion right now.

NIST has a hard job ahead of them. Still, I think we will get a
second round with many strong candidates. It's a good field.

> Almost all candidates
> ciphers were stained one way or the other - and a surprising
> strong showing of Rijndael (in the negative sense that very little
> negative was said against it).

I didn't think so. I was not surprised at all at Rijndael's showing.
It deserves the attention it is getting. To me the big surprise was
the complete lack of attention that Mars has been receiving. It just
wasn't talked about.

> Speed may be easy to measure, but I think its
> significance is overrated (the advance of hardware technology will
> double AES's effective speed every 18 months or so, and I don't
> believe that world wide speed requirements will grow that fast
> too).

Don't believe it. Moore's Law has two effects. The first is what you
said: the effective speed of AES will double every 18 months or so on
a specific platform: the computer on your desk, for example. This is
a good thing, but remember that Moore's law will also apply to the
network your computer is attached to, the disk drive in your computer,
etc., etc., etc. What matters is not the raw speed of AES, but the
effective speed with respect to the various devices it is being used in
conjunction with. Sure, if you can encrypt at 50 MBits/sec today you
can encrypt at 100 MBits/sec in 18 months, but that will just keep up
with the demand for encryption speed.

The other effect of Moore's Law is that cost halves every 18
months. So while a 6805 used to be an expensive processor that
powered your desktop computer, today it's in a smart card selling for
less than a dollar. It's in your dishwasher, your bread machine, your
car, and lots of other places. As things get even cheaper, you'll
find existing devices in smaller and weirder places. If you can
encrypt on the 6805 in 5000 clock cycles per byte, you'll be doing it
for decades to come.

What this all means is that performance does matter. It matters a
lot. If you have five secure algorithms to choose from, you'll choose
on the basis of performance. And you'll choose on the basis of
performance across a wide variety of platforms.

> In the afternoon smart card implementations were discussed. First
> efficiency estimates (both speed and space) were shown. After that
> came three very interesting presentations about attacks against
> smart card implementations. These are real world, practical
> attacks. IBM's Pankaj Rohatgi explained how he got all 128 bits of
> a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
> card!

As he pointed out, this attack works against ANY of the AES candidates
as well as any other algorithm we know about. These attacks are very
real, and have very little to do with the specifics of the
cryptography.

> (I wanted to have a
> complex scheduler and on my Pentium it used "only" one hundredth
> of a second, so I thought that was good enough!)

It is good enough for applications like email and other client
protocols. The problem comes when the Pentium is an SSL server that
needs to set up 1000 keys a second to keep up with the connection
demand. Then you need a fast key schedule. Or think about AES being
used as a hash function, where you have to do one key schedule for
each encryption function. That's even worse.
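
To make the hashing point concrete, here is a minimal sketch of the
Davies-Meyer construction, one common way to build a hash from a block
cipher; E is a placeholder for any hypothetical (key, block) cipher call,
and the point is simply that the message block plays the role of the key,
so a fresh key schedule is needed for every block hashed:

    # Davies-Meyer sketch: H_i = E(m_i, H_{i-1}) XOR H_{i-1}.
    # E(key, block) is a placeholder for any block cipher.
    def xor_bytes(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    def davies_meyer(blocks, E, iv):
        h = iv
        for m in blocks:
            # the message block m is used as the cipher KEY, so the
            # key schedule runs once per block hashed
            h = xor_bytes(E(m, h), h)
        return h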

> Then came Bruce Schneier's presentation about speed. Again Bruce's
> exposition was interesting and clear. He made a good point that
> only assembly code speeds should be compared because otherwise you
> measure a particular compiler's efficiency rather than the
> cipher's. This is true, but in a networked world pure Java is
> going to be used too as a platform independent solution.

Yes, but those will never be time-critical applications. Java is just
too slow, and not just for encryption. When speed really matters, the
code will be in assembly.

> He
> mentioned that the CPU also affects relative speeds, especially
> with ciphers like RC6 and MARS that use data depending rotations
> and multiplications. (RC6 in particular took some beating for its
> relative inefficiency both with 8-bit smart cards and later,
> surprisingly, with next generation CPUs.)

It is interesting. Algorithms that could be implemented using
instructions that are part of the RISC subset (XOR, ADD, table lookup,
shifts) were fast and consistent across all CPUs. Algorithms that
used complex operations (multiplies, data-dependent rotates) had
wildly different performance on different platforms. DFC, for
example, is very slow on 32-bit CPUs and very fast on the 64-bit DEC
Alpha, and much less fast on the 64-bit Merced and McKinley (the
next-generation Intel chips). RC6 and Mars, because they use both
data-dependent rotates and 32-bit multiplies, are fast on the Pentium
Pro, Pentium II, and Pentium III, but are slow everywhere else. Their
performance is mediocre on Pentium, ARM, and most other 32-bit CPUs.
Their performance is even worse on Merced, McKinley, and Alpha.

> Bruce mentioned that 8
> bit processors with little RAM will stay with us forever, because
> they will be used more and more for embedded systems. Less
> convincingly, he claimed that also 32 bit processors will stay
> with us for a long time. I think the prevalent view is that in the
> future 64 bits CPUs and beyond will be prevalent for high-end
> personal applications.

Of course, but that does not mean that 32-bit CPUs will disappear.
Think of 32-bit CPUs in cellphones, smart cards, pagers, PDAs,
etc. Think of the ARM. These CPUs will be used for a long time to
come.

> His final ranking: Twofish, Rijndael,
> Crypton, E2, Mars for Pentiums; Rijndael, Crypton, E2, RC6,
> Twofish for hashing.

For overall performance, Rijndael is the best candidate. I did not
have Twofish first.

> Geoffrey Keating presented a similar paper for the Motorola 6805,
> also an 8-bit CPU. Best algorithms: Rijndael, Twofish, Crypton and
> RC6. RC6 does need a lot of RAM (170-200 bytes) and later in the
> evening River [ed- Rivest?] presented RC6a with a different key
> scheduler that requires only 25 bytes of RAM and also is twice as
> fast in assembler and five times as fast in C (Rivest's middle
> name of course is Merlin).

Beware that the RC6a key schedule is not part of AES, and has not
received any analysis. I strongly caution against using it at this
time. NIST has said that they would allow "tweaks" to the algorithm
for the second round, but a completely redesigned key schedule is
almost certainly not a tweak.

> Next, Joan Daemen, one of the authors of Rijndael, presented
> another comparative study about attacks against smart cards. He
> differentiated between timing attacks (useful mainly against older
> implementations of public key smart cards), power analysis
> attacks, and Differential Power Analysis (DPA) attacks. The latter
> is based on the fact that for some instructions the average power
> consumption correlates with the value of one input bit. As
> possible defense mechanisms he mentioned artificially produced
> noise but this can be cancelled out using a larger sample of
> cases. Another possibility is "balancing", an expensive
> proposition where, for example, you use 4 bit arithmetic on an 8
> bit smart card, taking care that the other 4 bits are always the
> complement of the "true" 4 bits, i.e. maintaining a constant
> Hamming weight. According to his analysis the algorithms which are
> easiest to protect against such attacks are Crypton, DEAL,
> Magenta, Rijndael and Serpent. The worst would be CAST-256, DFC,
> E2, HPC, Mars and RC6. His conclusion was that these kinds of
> implementation attacks are really much more relevant than the
> academic attacks often discussed, because they can be implemented
> in practice and do some real harm.

The one sloppy part of this analysis was that he assumed that XORs are
easier to balance and hence more secure than ADD. This is, of course,
nonsense...since ADD can be built out of XORs.

> The easiest to attack were judged to be Crypton, Deal, Loki-97,
> Magenta, Rijndael, Safer+, Serpent and Twofish, where DPA needs to
> be done only on very few rounds. Slightly harder would be ciphers
> like CAST-256, DFC, E2, Mars and RC6. Hardest to attack would be
> Frog and HPC, but only because they employ large key dependent
> tables - which make them more difficult to implement in smart
> cards in the first place.

We've looked at a lot of these DPA attacks. Basically, all algorithms
are subject to attack. If someone thinks that one is easier than the
other, it's because he hasn't looked hard enough for an attack. This
problem needs to be solved at the protocol layer (best) or at the
smart-card hardware layer. It cannot be solved with algorithm design.
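
As a concrete illustration of the "balancing" idea in the quoted summary
above, here is a minimal sketch (not from the talk) of the
complemented-nibble representation: each 4-bit value is stored next to
its complement, so every byte handled has Hamming weight 4 no matter
what the data is:

    # Complemented-nibble ("balanced") representation: high nibble is
    # the 4-bit value, low nibble its complement, so the Hamming
    # weight of the stored byte is always 4.
    def balance(v):
        return ((v & 0xF) << 4) | (~v & 0xF)

    def hamming_weight(x):
        return bin(x).count("1")

    assert all(hamming_weight(balance(v)) == 4 for v in range(16))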

> Doug Whiting described some efficiency estimates on Merced
> (supposedly a secret future Intel CPU). Here are the results:
>
> cipher      Pentium II   Merced
> RC6              250        620
> Rijndael         283        170
> Serpent          900        800
> Twofish          258        230
>
> RC6 (as mentioned before) actually seems to get worse. The same is
> estimated will happen with Mars too.

Bizarre, isn't it?

> Takeshi Shimoyama claimed that some S-boxes of Serpent were not
> secure against high-order differential cryptanalysis. His
> presentation was laced with a lot of jokes and laughter and I
> cannot judge how serious (if at all) a problem this is for
> Serpent.

It is a serious problem, but it is by no means an attack. It is VERY
unlikely that a higher-order differential attack can be carried out to
anywhere near the total number of rounds that Serpent has; the attacks
only seem to be effective against ciphers with few rounds.

> Rivest presented RC6a with the much more efficient key scheduler
> variant (see above). He claimed that 8-bit chips "will go away",
> but nobody really believed that he really believed it.

Agreed.

> Bruce Schneier made a case against the idea of choosing not one
> but several "approved" AES algorithms. He used the argument that
> cryptanalytic efforts should not be divided between several
> ciphers because then each individual cipher would be less trusted.
> I didn't quite understand this argument: all ciphers that make it
> to the second round will be strongly analyzed anyway no matter
> whether only one or more than one is finally declared the winner.

My argument is that after a single algorithm is chosen, there is still
about a one-year period before it becomes a FIPS. And even after it
becomes a FIPS, we will continue to analyze AES. The question is
whether we are analyzing one or two ciphers.

> He mentioned that if they were two AES ciphers and you used one
> bit of the key to chose one of them, then half the time you would
> use a less powerful cipher. But one could also say that half the
> time one would use the more powerful cipher.

Indeed. But if you lose your privacy, security, money, or whatever 50%
of the time, it doesn't matter about the other 50%. There are few
operational systems where saying "half the time you use the stronger
cipher" is good enough.

> He argued that multiple standard ciphers would be much
> more expensive to implement. This is probably true but I think
> this measurable cost should be balanced against the risk (a
> non-zero probability) that a single AES may fail catastrophically
> some time in the future when it is already broadly used. If that
> should happen the cost would be extremely large. The possibility
> of a catastrophic failure is really what should keep the NIST
> people awake at night.

NIST can point to other algorithms as alternates, but I feel there
should only be one AES.

> One wonders what is so wrong in declaring several good algorithms
> for specialized environments. Actually, after today's presentation
> about attacks against smart cards, somebody mentioned that a smart
> card is a sufficiently different environment to justify a
> specialized cipher. There is a large class of applications where
> ciphers are implemented in software, where speed is not important,
> and where security is important. In these cases one might choose
> to combine several algorithms in series at no particular additional
> cost.

Because there are few specialized environments. Smart cards talk to
desktop PCs and remote servers. Cellphones talk to base stations.
Software talks to hardware. There are some applications where one
smart card encrypts and the other decrypts, but they are very much in
the minority.

> So who will make it to the second round? No idea.

Thanks for writing this, by the way. I'm glad someone is keeping a
public record. It was good seeing you, and I look forward to reading
your summary of the second day.

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
Free crypto newsletter. See: http://www.counterpane.com

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
On 23 Mar 99 09:32:56 GMT, jsa...@ecn.ab.ca () wrote:
>This reminds me of something I've noticed. Although Magenta had serious
>problems, the cryptanalysis of it was based on a weakness in its key
>schedule. The only other possible problem with it is that the S-box,
>although nonlinear, seems a bit too simple.

There are a bunch of other problems with the security. I believe
one--the massive number of fixed points--is mentioned in our paper.
Honestly, there are many other ways of attacking Magenta. It's just
not worth spending the time writing it up.

Magenta now replaces McGuffin as my favorite toy cipher to give
budding cryptanalysts.

>: IBM's Pankaj Rohatgi explained how he got all 128 bits of
>: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
>: card!
>
>I wonder how secure some of the other ciphers would be, if the kind of
>optimizations Bruce suggested for fitting Twofish on a smart card were
>applied to them. That is, if it were possible.

He said in his talk that every cipher is vulnerable. We've done this
sort of work, too, and we have found that you can't defend against
these types of attack with the algorithm. You can do some things with
the implementation and some things with the hardware, but basically
you need to defend in the protocol layer.

>If you compare all the ciphers on the same compiler ... and I remember at
>a computer chess competition, someone suggested that it wouldn't be a bad
>idea to require the entrants all to use C, so that it wouldn't be a
>competition of assembly-language coding skills.

Then you're comparing compiler efficiency. What we did in our
comparison is assume best possible ASM. We didn't implement
everything, but we gave all algorithms the benefit of the doubt.

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
On 23 Mar 1999 20:58:36 GMT, stl...@aol.com (STL137) wrote:

>Merced isn't a supposed chip - it is definitely going to come out mid-2000.

The Merced design (and McKinley) is frozen at this point. Both of
them are definitely coming out; they are the Intel 64-bit CPUs.

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
>Subject: Re: Live from the Second AES Conference
>Date: Wed, 24 Mar 1999 14:18:38 GMT
>From: dian...@tecapro.com
>Newsgroups: sci.crypt

>
> Sean Murphy described properties of Twofish pertaining to its key
> schedule. First it was shown that the whitening keys (XORed with
> the data before and after the main cipher) are not uniformly
> distributed, i.e. do not take all possible values. In a similar vein,
> each 8-byte round subkey also cannot take all possible values.
> Ideally, of course, all key material within a cipher should be as
> close to random as possible, otherwise an attacker gains some
> predictive power. Later in the afternoon workers from the Twofish
> team explained exactly how much entropy was lost: the whitening
> key has 117 bits of entropy (instead of the optimal 128) and each
> round key has 63.2 bits of entropy (instead of 64). They claimed
> this is not a useful cryptanalytic property, which does seem
> reasonable.

This was a really nice result, by the way. They sent us a draft of
the paper some weeks before the conference, and we were able to
investigate what they found further. The whitening issue is not a
problem because all you need cryptographically out of a whitening
function is half the length of the block in entropy. Hence, for AES
candidates only 64 bits of entropy are needed. For example, RC6
only bothered using whitening on half the block for this reason. We
decided to whiten the entire block, but still this is not an issue

The other thing they noticed is that sequential round subkeys do
not have full entropy. This is, of course, true, as it is for pretty
much any cipher with round subkeys. DES has this feature even more
severely. There's no attack here; there never has been against any cipher.

Copies of our report discussing this observation and its ramifications
can be found on the Twofish homepage:

http://www.counterpane.com/twofish.html

> Then came David Wagner's exposition of the attack on Frog. David,
> by the way is still only a student in California. I liked his
> description of Frog as a cipher where the "wires" are parametrized
> to the key. This means that Frog looks like a large collection of
> individual ciphers with different internal wiring.

This is really the proper way to look at ciphers with key-dependent
diffusion properties.

> The basic point
> is that the attacker can search and choose the instances where
> this wiring defines a weaker cipher, and then mount an attack on
> the weak key that gave rise to this configuration. (For those
> familiar with Frog the weak wiring happens when the bombpermu
> pointer addresses the previous byte.) It turns out that 2^(-33) of
> the keys are now weak and that in these cases the key can be
> broken with 2^58 chosen plaintexts. The decryption function of
> Frog has much slower diffusion (something first noticed by John
> Savard, a frequent participant in sci.crypt) which allows for a
> more successful attack that requires 2^36 chosen ciphertexts and
> works for 2^(-29.3) of the keyspace. A linear attack on Frog was
> also described.

Look at CAST-256 and Mars, and you can see what a cipher looks like
when it has balanced diffusion in both directions.

> Frog-like ciphers do have this kind of problem: by putting some or
> most of the functional definition of the cipher within the key,
> the possibility exists that some of these functions will be weak.
> I suppose there are ways to fix Frog in order to strengthen it
> against the differential and linear characteristics discovered by
> Wagner and achieve a much lower or zero number of weak keys,
> trivially: suppress the weak wiring configurations. (I did ask him
> to propose a fix but he was not forthcoming - I wonder whether
> using ADDs instead of XORs for diffusion would cut it).

Better would be to get rid of the key-dependent permutation. Figure
out which permutation gives you the best diffusion and use that for
all the keys. Why make some of the keys worse than the others?

> The really
> interesting question though is to find out if a generalized
> version of Frog that just follows its design principles to a
> higher degree would increase or decrease the frequency of Wagner
> weak keys. The whole idea in Frog is to do nothing designed
> specifically as a defense against a particular attack. So if
> Generalized Frog turns out to be weaker against Wagner's attack
> then it is bad news for the basic idea. But, if it turns out that
> "purer" (even less structured) forms of Frog are more resistant,
> then confidence in Frog's design principle will increase. So
> Wagner's attack can be used as a platform to evaluate the design
> methodology, something even more important than the candidate
> cipher itself.

I think that extending this type of design strategy will end up with a
weak cipher. If you take a design element and make it key dependent,
then you run the risk of having weak keys. Diffusion is a
particularly bad thing to make key dependent.

Some years ago a cryptographer proposed RDES, a variant of DES
where the left-half/right-half swapping was key dependent. It was
broken using exactly this same kind of "weak key" cryptanalysis.

> Kazukuni Kobara next discussed a very theoretical paper on
> pseudorandomness. After her talk somebody had to ask her how this
> work relates to E2, to which she answered, as if just remembering,
> that E2 does indeed achieve optimal pseudorandomness after two
> rounds. It is interesting to observe how cultural characteristics
> have an impact in this international process. The designers of the
> Asian candidates (E2 and Crypton) are really far too modest and
> this, I think, hurts their chances. American designers, of course,
> are the most crass, and I think this is a winning strategy.

I agree that E2 is faring worse than it deserves to.

> Next came lunch. Today I sat with a guy who works with a large
> organization that builds encryption hardware for the government!
> Sorry, not NSA; I never met any of these guys even though they
> were there - they appeared to be playing No Such Agency.

I thought the half-dozen or so NSA people were very open and
forthcoming. I think you were just unlucky not to run into one of
them.

> Later, I happened to speak to one of the top cryptographers of the
> world and I asked him one of my favourite questions: First assume
> two very good ciphers A and B, say two of the better AES
> candidates. If your life depended on it and you had to choose
> between A.A (doubling A's rounds), B.B (doubling B's rounds) or
> A.B (sequentially executing both ciphers), what would you do? The
> standard answer is that mixing ciphers is a bad idea because the
> resulting cipher could be weaker than each individual one, that
> the resulting cipher would be a completely new cipher which would
> require fresh analysis which is the only way to gain trust, etc.
> Well my companion's answer was "A.B", and he gave precisely the
> reasons I expected: a future attack that may work against A could
> very well stop at B's gate.

I agree with this, and I suspect that most cryptographers would.
While it cannot be proven that a cascade of A B is no more secure than
A, and could possibly be weaker than A, in reality that just won't
happen. Your analysis is sound.

> Then came a long discussion about intellectual rights (IP) issues.

The only really outstanding IP issue is the data-dependent rotations.
RSADSI has a patent on them in their RC6 patent. IBM claims that
they have a prior patent on the technique. IBM has NOT AGREED that
it would not sue AES if RC6 is chosen. This is a big problem, and
needs to be resolved.

> Smart cards. Several people expressed that the importance that was
> given to implementation issues of the AES on a low end smart card
> was not reasonable. The standard argument in favor of very tight
> implementations on smart cards was that there are cases where
> millions of smart cards are needed where even a small price
> difference has a large impact. Against that, somebody from the
> audience presented the following strong argument: cases where the
> AES is going to be used in a smart card are cases where you want
> to protect information that is valuable - otherwise you don't
> really need a 128 bit key and can use any of well known medium
> range ciphers. If you are protecting information that is valuable,
> then spending one dollar more for each card is a reasonable
> investment.

This only makes sense if the analysis is weak+cheap vs strong+expensive.
We don't have to make that tradeoff in this process. We can choose
an AES that is strong and cheap on smart cards, or one that is strong
and expensive on smart cards. Given that trade-off, it makes no sense
to tax smart-card applications even though those applications might be
willing to spend the money if forced.

> This gave me the idea that
> instead of trying to normalize security (a difficult proposition)
> and then comparing speeds, a better methodology could be to
> normalize speed (best assembly implementation on one or two
> reference platforms) and then compare security. After all,
> optimized speed on a given platform gives a good idea about an
> algorithm's workload. Also, speed could be normalized at such a
> high level that ALL competing ciphers would be pushed into
> insecurity which would then allow more or less reasonable
> quantitative comparisons to be made about their relative
> "structural" strenght. It is clearly too late to use this
> methodology in the first round but maybe it could be used in the
> second. I feel like sending NIST an official comment in this
> sense. I wonder what the reader thinks about this?

Someone told me you made this suggestion, and I think it is an
interesting idea.

The one thing you forgot to mention is the straw poll. At the end
of the day, Miles asked everyone to vote for the various candidates.
You could either vote "yes," as in "yes, it should make it into the
second round," "no," or "maybe." This vote was unofficial and
anonymous.

At the rump session of FSE two days later, Miles presented the
results. (My apologies if this doesn't format well on your
particular netnews readers. I formatted it quickly for my machine.)

Out of the 170 or so participants, 104 filled the survey out. The
first column is the number of people who left the space blank.
Then, the number of people who said yes, this algorithm should
go into the second round. Then, the number of people who were
unsure. Then, the number of people who said no. The next
column contains the yes votes minus the no votes; this gives
the algorithm a relative rank in the ordering.

              blank   yes   ???    no   yes-no   rank
Rijndael          7    77    19     1     +76      1
RC6               4    79    15     6     +73      2
Twofish           9    64    28     3     +61      3
Mars              5    58    35     6     +52      4
Serpent           6    52    39     7     +45      5
E2               11    27    53    13     +14      6
CAST-256         12    16    58    18      -2      7
SAFER+           13    20    47    24      -4      8
DFC              12    22    43    27      -5      9
Crypton          14    16    43    31     -15     10
DEAL             10     1    22    71     -70     11
HPC              12     1    13    78     -77     12
Magenta           9     1    10    84     -83     13
Frog             11     1     6    86     -85     14 (tie)
Loki97           10     1     7    86     -85     14 (tie)
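
The yes-no column and the rank are purely mechanical; a few lines of
Python reproduce them from the yes/no tallies above:

    # Recompute the yes-no ranking from the straw-poll tallies above.
    votes = {"Rijndael": (77, 1), "RC6": (79, 6), "Twofish": (64, 3),
             "Mars": (58, 6), "Serpent": (52, 7), "E2": (27, 13),
             "CAST-256": (16, 18), "SAFER+": (20, 24), "DFC": (22, 27),
             "Crypton": (16, 31), "DEAL": (1, 71), "HPC": (1, 78),
             "Magenta": (1, 84), "Frog": (1, 86), "Loki97": (1, 86)}
    ranked = sorted(votes, key=lambda n: votes[n][0] - votes[n][1],
                    reverse=True)
    for rank, name in enumerate(ranked, start=1):
        yes, no = votes[name]
        print("%2d  %-10s %+d" % (rank, name, yes - no))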

What is interesting is that the algorithms divide pretty neatly
into five categories:

Overwhelming Yes: Rijndael, RC6, Twofish, Mars, Serpent
Mild Yes: E2
Middle-of-the-Road: CAST-256, SAFER+, DFC
Mild No: Crypton
Overwhelming No: DEAL, HPC, Magenta, Frog, Loki97

You should have stayed for FSE. The best analysis papers of
the AES algorithms were presented there, and there were a lot
more interesting papers.

And I did notice that your six picks were the same top six picks
from the survey.

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
On 24 Mar 1999 16:14:17 +0000, Alan Braggins <ar...@ncipher.com> wrote:

>dian...@tecapro.com writes:
>> it appears that the very fastest code is self-modifying - this may
>> look "unpure" to some but I think that it is a valid optimization
>> technique).

>[...]


>> improvement. As expected the answer is no, because one of the
>> easiest things to read from a card is the "signature" of the
>> instructions being executed.
>

>Any ideas whether self-modifying code would make reading instruction
>signatures easier or more difficult?

It doesn't matter. Sometimes techniques like this require a little
smarter measuring; sometimes they require a little smarter analysis.
None of them help.

Bruce Schneier

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
On Wed, 24 Mar 1999 09:15:06 -0800, Jim Gillogly <j...@acm.org> wrote:

>Thanks very much for the reports from the AES conference.
>Your views and notes are invaluable, as they were from the
>first conference. Well done!
>
>dian...@tecapro.com wrote:

>> rounds. It is interesting to observe how cultural characteristics
>> have an impact in this international process. The designers of the
>> Asian candidates (E2 and Crypton) are really far too modest and
>> this, I think, hurts their chances. American designers, of course,
>> are the most crass, and I think this is a winning strategy.
>

>Ouch!

Unfortunately, I agree with him. I think both of those ciphers have
gotten less support than they deserve. With Crypton there was also a
cultural barrier; I believe the designer misunderstood the rules, and
hence has been redesigning his cipher for the past year. E2 is one
of the ciphers I really like, and I will be disappointed if it does
not make it into the second round.

David Crick

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
Bruce Schneier wrote:
>
> To me the big suprise was the complete lack of attention that Mars
> has been receiving. It just wasn't talked about.

One for the conspiracy theorists? (remember the NSA/IBM angle of DES)

David.

--
+---------------------------------------------------------------------+
| David Crick dac...@mcmail.com http://members.tripod.com/~vidcad/ |
| Damon Hill WC '96 Tribute: http://www.geocities.com/MotorCity/4236/ |
| Brundle Quotes Page: http://members.tripod.com/~vidcad/martin_b.htm |
| PGP Public Key: (RSA) 0x22D5C7A9 00252D3E4FDECAB3 F9842264F64303EC |
+---------------------------------------------------------------------+

Terry Ritter

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to

On Sun, 28 Mar 1999 11:21:37 GMT, in <36fe106...@news.visi.com>,
in sci.crypt schn...@counterpane.com (Bruce Schneier) wrote:

>[...]


>a 6805 used to be an expensive processor that
>powered your desktop computer,

That was never true: I was in the NMOS microprocessor design group at
Mot when the 6805 was designed.

The 6805 was specifically intended to be the smallest (lowest cost)
6800-like processor that we wanted to build. It was generally
targeted at the "cost-sensitive" automotive people who wanted an even
cheaper single-chip solution than the 6801. Many different versions
included various forms of on-chip parallel I/O, timers, serial ports,
etc. One smart-card design had a serial ALU.

The 6805 came out *after* both the 6809, which I helped design, and
the 68000, which started the 16-bit era for Mot. Thus, the 6805 never
was "an expensive processor," and as far as I know, nobody was foolish
enough to use it as the heart of a desktop computer.

The 6805 started out cheap.

---
Terry Ritter rit...@io.com http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM


Terry Ritter

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to

On Sun, 28 Mar 1999 11:21:37 GMT, in <36fe106...@news.visi.com>,
in sci.crypt schn...@counterpane.com (Bruce Schneier) wrote:

>[...]


>The one sloppy part of this analysis was that he assumed that XORs are
>easier to balance and hence more secure than ADD. This is, of course,
>nonsense...since ADD can be built out of XORs.

Sorry. ADD (integer addition) cannot be built out of XOR's.

ADD requires (e.g.) an AND function as a carry. AND cannot be built
from XOR.
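
To see concretely why the carry is the sticking point, here is a minimal
sketch (just an illustration, not Ritter's) of building 8-bit ADD from
bitwise operations: the per-bit sum is an XOR, but the carry chain needs
an AND:

    # Ripple-carry addition from bitwise ops: XOR gives the per-bit
    # sum; the carry -- the part XOR alone cannot supply -- needs AND.
    def add8(a, b):
        while b:
            carry = (a & b) << 1
            a = (a ^ b) & 0xFF
            b = carry & 0xFF
        return a

    assert all(add8(a, b) == (a + b) & 0xFF
               for a in range(256) for b in range(256))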

William hugh Murray

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to

Bruce Schneier wrote:
>
> On 23 Mar 99 09:32:56 GMT, jsa...@ecn.ab.ca () wrote:

> >This reminds me of something I've noticed. Although Magenta had serious
> >problems, the cryptanalysis of it was based on a weakness in its key
> >schedule. The only other possible problem with it is that the S-box,
> >although nonlinear, seems a bit too simple.
>

> There are a bunch of other problems with the security. I believe
> one--the massive number of fixed points--is mentioned in our paper.
> Honestly, there are many other ways of attacking Magenta. It's just
> not worth spending the time writing it up.
>
> Magenta now replaces McGuffin as my favorite toy cipher to give
> budding cryptanalysts.
>

> >: IBM's Pankaj Rohatgi explained how he got all 128 bits of
> >: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
> >: card!
> >
> >I wonder how secure some of the other ciphers would be, if the kind of
> >optimizations Bruce suggested for fitting Twofish on a smart card were
> >applied to them. That is, if it were possible.
>

> He said in his talk that every cipher is vulnerable. We've done this
> sort of work, too, and we have found that you can't defend against
> these types of attack with the algorithm. You can do some things with
> the implementation and some things with the hardware, but basically
> you need to defend in the protocol layer.

Principle: Any phenomenon of the implementation (e.g., time or power
used) that is a function of the key, including its intended use, leaks
information about the key.

I cannot speak for Dr. Rohatgi, but IBM understands this full well.
They have always asserted that the algorithm is only a necessary, but
not sufficient, part of the encryption solution. They have always
recognized the importance to security of both the protocol and the
implementation. That is one reason why they offered a suite of
protocols, mostly ignored, with the DES. It is one reason that they
only offer tamper resistant encryption modules. It is one reason why
they invented the control vector. It is one reason they developed the
Common Cryptographic Architecture. It is one reason why their cards will
not talk in the clear or encrypt an arbitrary value.

Given that, I wonder why this presentation was given, or why, if it had
to be given, it was silent on this point. I must confess to some
disappointment, not to say suspicion of the motive. Perhaps it is only
an omission in an otherwise excellent report.

You and I both know that it is incredibly easy to make gross mistakes in
both protocols and implementations, that "encryption is harder than it
seems." We also know that such mistakes are more vulnerable to
discovery and exploitation than those in algorithms. However, good
practices in protocols and implementations are easier to codify, and
variances from those practices easier to detect and correct for.

>
> >If you compare all the ciphers on the same compiler ... and I remember at
> >a computer chess competition, someone suggested that it wouldn't be a bad
> >idea to require the entrants all to use C, so that it wouldn't be a
> >competition of assembly-language coding skills.
>

> Then you're comparing compiler efficiency. What we did in our
> comparison is assume best possible ASM. We didn't implement
> everything, but we gave all algorithms the benefits of the doubt.
>


Richard Outerbridge

unread,
Mar 28, 1999, 3:00:00 AM3/28/99
to
-----BEGIN PGP SIGNED MESSAGE-----

1999-03-28 15:40:59 EDT

In article <37011231...@news.visi.com>, schn...@counterpane.com
(Bruce Schneier) wrote:

On the basis of NIST's straw poll I think there are only three
categories.

Must consider:
Rijndael, RC6

Should consider:
Twofish, Mars, Serpent, E2, CAST-256, Safer+, DFC, Crypton

Don't Consider:
All the rest.

This isn't much different than your average Academic bell curve.

So what's interesting to me is the inverse relationship: as many of
the respondents felt that only Rijndael and RC6 should be seriously
considered as felt, on the yes-no tally, that everything beyond
Crypton should be seriously excluded. Everything in between is
wait-and-see, as far as I can see.

outer

-----BEGIN PGP SIGNATURE-----
Version: PGP Personal Privacy 6.0.2

iQCVAwUBNv6UBNNcQg4O6q8hAQFicAP8CYA4SoMtZhjD76er2r4IB+4SPWIXsfKk
m3Yw+T88WQkNIE+HY80eyJdMNXnInVAZjetf6sU/GRkUbcYYvENfyJDdXRla9CoA
lmOoZA5Cdo3GkwOPaM34tbxoFO0yIOO8Owkl2dk9k2gHU34GsldrSCsd9MZgFTHd
WVRJ+Q28TqI=
=yfq/
-----END PGP SIGNATURE-----

--
"Just an eccentric soul with a curiosity for the bizarre."
PGPpubkey 1024/0EEAAF21 1994/07/23 <ou...@interlog.com>
Fingerprint = 6A89 D49F D3DA 12E4 040A 273B F383 0127


Lassi Hippeläinen

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
Bruce Schneier wrote:

> <...>


> While it cannot be proven that a cascade of A B is no more secure than
> A, and could possibly be weaker than A, in reality that just won't
> happen. <...>

I have a pragmatic proof (i.e. good enough for an engineer):

Assumption: A and B are secure ciphers.
Claim: Chained A & B is at least as safe as A or B alone.
Proof: If not, A is a crack of B or vice versa, which violates the
assumption.

-- Lassi

wtshaw

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
In article <36FF56DA...@avoidspammers.ieee.org>, "Lassi Hippeläinen"
<lahi...@avoidspammers.ieee.org> wrote:

> Bruce Schneier wrote:
>
> > <...>


> > While it cannot be proven that a cascade of A B is no more secure than
> > A, and could possibly be weaker than A, in reality that just won't
> > happen. <...>
>
> I have a pragmatic proof (i.e. good enough for an engineer):
>
> Assumption: A and B are secure ciphers.
> Claim: Chained A & B is at least as safe as A or B alone.
> Proof: If not, A is a crack of B or vice versa, which violates the
> assumption.
>

If the two are used with the key in common, then you are only testing the
one common key, not necessarily the mutual security of the algorithms with
different keys. Key structure might make one key equivalent to another
one with the other algorithm, so then a single key might act like
separate keys.
--
Ethnic cleansing does not appear to be a response to a high moral calling.

John Savard

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
schn...@counterpane.com (Bruce Schneier) wrote, in part:

>E2 is one
>of the ciphers I really like, and I will be disappointed if it does
>not make it into the second round.

I remember looking at E2, and I found the description understandable, and
the cipher (like LOKI-97) was quite DES-like.

But because the PDF document describing it had such large copyright notices
on every page, I was worried there would be an objection to my describing
it on my web page. (I omitted describing the other AES candidates mostly
because I found their descriptions hard to follow. MAGENTA I eventually
figured out, and described in a post, but I omitted it because of its
misfortune. Since you've told me its design is instructive, I may
reconsider.)

John Savard

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
schn...@counterpane.com (Bruce Schneier) wrote, in part:

>There are a bunch of other problems with the security. I believe
>one--the massive number of fixed points--is mentioned in our paper.
>Honestly, there are many other ways of attacking Magenta. It's just
>not worth spending the time writing it up.

>Magenta now replaces McGuffin as my favorite toy cipher to give
>budding cryptanalysts.

Clearly, I'll have to take another look. The design certainly seemed
impressive.

quoting me:
>>If you compare all the ciphers on the same compiler ... and I remember at
>>a computer chess competition, someone suggested that it wouldn't be a bad
>>idea to require the entrants all to use C, so that it wouldn't be a
>>competition of assembly-language coding skills.

>Then you're comparing compiler efficiency. What we did in our
>comparison is assume best possible ASM. We didn't implement
>everything, but we gave all algorithms the benefits of the doubt.

If everybody's C code is compiled on the same compiler, one may be
comparing optimizations or something, but one isn't comparing compilers.

John Savard

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
rit...@io.com (Terry Ritter) wrote, in part:

>The 6805 came out *after* both the 6809, which I helped design, and
>the 68000, which started the 16-bit era for Mot. Thus, the 6805 never
>was "an expensive processor," and as far as I know, nobody was foolish
>enough to use it as the heart of a desktop computer.

Doubtless, Bruce just had a slip of memory, and he meant "6502", which was
used in a number of famous desktop computers.

John Savard

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
rit...@io.com (Terry Ritter) wrote, in part:

>On Sun, 28 Mar 1999 11:21:37 GMT, in <36fe106...@news.visi.com>,
>in sci.crypt schn...@counterpane.com (Bruce Schneier) wrote:

>>[...]

>>The one sloppy part of this analysis was that he assumed that XORs are
>>easier to balance and hence more secure than ADD. This is, of course,
>>nonsense...since ADD can be built out of XORs.

>Sorry. ADD (integer addition) cannot be built out of XOR's.

>ADD requires (e.g.) an AND function as a carry. AND cannot be built
>from XOR.

I can't think of a way to build ADD from XOR, but if one also allows a
table lookup, one can build XOR from ADD without using AND, NOT, or any
other logic operation.
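
One way to make the table-lookup remark concrete (a sketch only, for
4-bit values): precompute a 256-entry XOR table and form the index with
additions alone:

    # 4-bit XOR from ADD plus one table lookup: no AND/OR/NOT is used
    # outside the table precomputation.
    XOR4 = [a ^ b for a in range(16) for b in range(16)]

    def xor4(a, b):
        idx = a
        for _ in range(4):          # idx = 16*a, by repeated doubling
            idx = idx + idx
        return XOR4[idx + b]

    assert all(xor4(a, b) == a ^ b for a in range(16) for b in range(16))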

Reuben Sumner

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
On Tue, 23 Mar 1999 01:19:57 GMT,
dian...@tecapro.com <dian...@tecapro.com> wrote:
> The first paper was presented by Adi Shamir. I don't know whether
> Shamir is a teacher, but if he is then he must be a very good one
> indeed. First he explained about Paul Kocher's power attacks

Indeed I can assure you that Prof Shamir is an excellent teacher.
I have taken two one semester courses in algorithms taught by him
and am now taking the second of his one semester courses in
cryptography. He is faculty at the Weizmann Institute of Science
together with Professors Oded Goldreich, Shafi Goldwasser,
and Moni Naor. All are very accomplished cryptographers, who also
work in other fields of research.

The Weizmann Institute of Science is located in Rehovot, Israel,
about a half hour's drive in clear traffic (rare) from Tel Aviv.
The institute is the home of the Feinberg Graduate School with about
700 MSc and PhD students from around the globe.

Courses and seminars at the Weizmann Institute are given
in English. All applicants accepted as full time students are given
housing plus a generous stipend. Application forms and other information
can be found at
http://www.weizmann.ac.il/feinberg/appforms.html
Further information about research in computer science taking place at
the institute can be found at
http://www.wisdom.weizmann.ac.il/~naor/foundation.html

Reuben

Trevor Jackson, III

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
LSI-11 uber alles! (esp. unibus junk)

Your attack would work only if it were unnecessary. If the same key is
used for both the inner and outer encryption it may be weaker than
either cipher alone. But given a message encrypted with the inner
cipher, one must use the original key for the outer encipherment in
order to get a weaker ciphertext.

But if you have the key used for the inner cipher, you don't need to
weaken the cipher text in order to get the plain text. You'd just
decrypt the message with the inner cipher.


Matthias Bruestle wrote:
>
> Mahlzeit


>
> Bruce Schneier (schn...@counterpane.com) wrote:
> > I agree with this, and I suspect that most cryptographers would.
> > While it cannot be proven that a cascade of A B is no more secure than
> > A, and could possibly be weaker than A, in reality that just won't
> > happen. Your analysis is sound.
>

> If A/B is weaker than A alone, doesn't that mean, that an attacker can
> weaken something which is encrypted by A by encrypting it again with B?
> Then A/B would again be at least as secure as A.
>
> Mahlzeit
>
> endergone Zwiebeltuete
>
> --
> PGP: SIG:C379A331 ENC:F47FA83D I LOVE MY PDP-11/34A, M70 and MicroVAXII!
> --
> Take my mind
> All the way
> The darkside calls
> I shan't resist

Bruce Schneier

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
On Mon, 29 Mar 1999 16:15:16 GMT, jsa...@tenMAPSONeerf.edmonton.ab.ca
(John Savard) wrote:
>quoting me:
>>>If you compare all the ciphers on the same compiler ... and I remember at
>>>a computer chess competition, someone suggested that it wouldn't be a bad
>>>idea to require the entrants all to use C, so that it wouldn't be a
>>>competition of assembly-language coding skills.
>
>>Then you're comparing compiler efficiency. What we did in our
>>comparison is assume best possible ASM. We didn't implement
>>everything, but we gave all algorithms the benefits of the doubt.
>
>If everybody's C code is compiled on the same compiler, one may be
>comparing optimizations or something, but one isn't comparing compilers.

One is comparing both how well the coder optimized his code, and how
well the compiler optimizes the particular algorithm. For example,
the Borland C compiler can't do rotates well. Any algorithm using
rotates will look relatively worse than an algorithm that does not, if
compared using a Borland compiler. This relative difference won't
exist if the algorithms are compared using a Microsoft compiler.

David Wagner

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
In article <36F91D9A...@acm.org>, Jim Gillogly <j...@acm.org> wrote:
> I don't know whose standard answer this is, but it has to be
> totally bogus for immediately obvious reasons: if A.B is
> E2.RC6 and it's weaker than either of them, then you
> can mount an effective attack on E2 by encrypting it with
> RC6, and vice versa. This can be done by the cryptanalyst
> in possession of A-ciphertext, and doesn't need to have been
> done by the sender.

You might want to check out
``Cascade Ciphers: The Importance of Being First''
in J. Cryptology vol. 6 1993 pp. 55--61.
The authors show that A.B (encrypting with A first, then B second)
is at least as secure as A, but not necessarily as secure as B (under
some threat models, anyway -- in particular, under ciphertext-only
probable-plaintext attack, A.B might be weaker than B, if you are
especially unlucky).

Also, one requires the assumption that A and B are keyed independently,
which raises the question: what key schedule should we use for A.B?
Should we use A's key schedule or B's key schedule, or something totally
new (and of unknown security)? The best answer isn't entirely clear.

Jim Gillogly

unread,
Mar 29, 1999, 3:00:00 AM3/29/99
to
David Wagner wrote:
> In article <36F91D9A...@acm.org>, Jim Gillogly <j...@acm.org> wrote:
> > I don't know whose standard answer this is, but it has to be
> > totally bogus for immediately obvious reasons: if A.B is
> > E2.RC6 and it's weaker than either of them, then you
> > can mount an effective attack on E2 by encrypting it with
> > RC6, and vice versa. This can be done by the cryptanalyst
> > in possession of A-ciphertext, and doesn't need to have been
> > done by the sender.
>
> You might want to check out
> ``Cascade Ciphers: The Importance of Being First''
> in J. Cryptology vol. 6 1993 pp. 55--61.
> The authors show that A.B (encrypting with A first, then B second)
> is at least as secure as A, but not necessarily as secure as B (under
> some threat models, anyway -- in particular, under ciphertext-only
> probable-plaintext attack, A.B might be weaker than B, if you are
> especially unlucky).

I'll check the reference. However, unless the two use related keys and
the ciphertext of the first is distinguishable from random, I have yet to
be convinced that applying B to A in this case doesn't constitute an
attack on A that could be applied by the cryptanalyst who simply obtains
A ciphertext. But I'll read the paper...

> Also, one requires the assumption that A and B are keyed independently,
> which raises the question: what key schedule should we use for A.B?
> Should we use A's key schedule or B's key schedule, or something totally
> new (and of unknown security)? The best answer isn't entirely clear.

The obvious suggestion is to use each component's native key schedule.
Are you suggesting merging the E2 and RC6 key schedules in some way, to
use my example above? The advantage to keeping them separate is that
this is the version that has been tested. Further, if you merge them
you have related keys, which (if the ciphertext is distinguishable from
random) would be a serious hazard. What's the disadvantage of keeping
their native key schedules?

--
Jim Gillogly
Highday, 7 Astron S.R. 1999, 23:38
12.19.6.1.2, 4 Ik 15 Cumku, Fourth Lord of Night


Serge Vaudenay

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
In article <slrn7fvaj7....@bach.wisdom.weizmann.ac.il>, rasu...@bach.wisdom.weizmann.ac.il (Reuben Sumner) writes:
|> On Tue, 23 Mar 1999 01:19:57 GMT,
|> dian...@tecapro.com <dian...@tecapro.com> wrote:
|> > The first paper was presented by Adi Shamir. I don't know whether
|> > Shamir is a teacher, but if he is then he must be a very good one
|> > indeed. First he explained about Paul Kocher's power attacks
|>
|> Indeed I can assure you that Prof Shamir is an excellent teacher.
|> I have taken two one semester courses in algorithms taught by him
|> and am now taking the second of his one semester courses in
|> cryptography. He is faculty at the Weizmann Institute of Science
|> together with Professors Oded Goldreich, Shafi Goldwasser,
|> and Moni Naor. All very accomplished cryptographers, together
|> with other fields of research.
|>

This reminds me of the chair of the session, who introduced him with
"if you don't know Adi Shamir, please have a look at your badge in order to
check that you are in the right conference".

Serge Vaudenay


Serge Vaudenay

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
In article <7das7k$42p$1...@nnrp1.dejanews.com>, dian...@tecapro.com writes:
|>
|> Finally, as a nod to Asia, I think NIST will pick E2 too. It is
|> certainly a very fine cipher considering its analysis, its
|> theoretical underpinning and its speed. The Japanese presenters
|> are probably the most sympathetic and choosing E2 would include
|> some ladies in the finalist makeup. Also nobody wants to miss
|> the nice accents.
|>

Choosing a cipher because some ladies are in?
I wish we had paid more attention to this interesting criterion during
the workshop.

Of course, you should include DFC in your list (because of Helena).

Serge


John Savard

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
Jim Gillogly <j...@acm.org> wrote, in part:
>David Wagner wrote:

>> You might want to check out
>> ``Cascade Ciphers: The Importance of Being First''
>> in J. Cryptology vol. 6 1993 pp. 55--61.

>I'll check the reference. However, unless the two use related keys and
>the ciphertext of the first is distinguishable from random, I have yet to
>be convinced that applying B to A in this case doesn't constitute an
>attack on A that could be applied by the cryptanalyst who simply obtains
>A ciphertext. But I'll read the paper...

But it could be an attack on B - basically, as I understood how this work
was cited in Applied Cryptography (Bruce Schneier), the paper shows that
the first cipher can weaken the second one if it is designed to introduce
known plaintext, additional redundancy, or things like that. If one feels -
for example, as the implementor of both ciphers - that this is not a real
worry, then one has gained security from a cascade. (But notice too that AB
might be subject to the same attack as double-DES, so one may really want
ABC.)

Robert Harley

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to

Dianelos Georgoudis (dian...@tecapro.com) writes:
> After lunch, Prof. Koeune from Belgium, also a sometime
> participant in sci.crypt, presented a paper on the implementation
> of several AES candidates on smart cards. He chose two instances:
> the low cost, 8-bit Intel 8051 and the advanced, 32-bit ARM with
> 1kB of RAM. The approximate results are as follows:
>
>                    8051            ARM
> Cipher      RAM    cycles          cycles
> E2          344      9K              2K
> RC6         205     14K              1K
> Rijndael     49      4K              2K
> Twofish      49       ?             10K

I did an implementation of DFC for ARM and timed it at 540 cycles.

That works out at a little over 60 MBits/sec on the ARM chip here
beside me, which is *great* performance for portable and embedded
applications.
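
To put the cycle count in throughput terms - the clock rate below is an
assumed figure for a StrongARM-class part, not something stated in this
thread - the conversion is simply:

    throughput = block size * clock rate / cycles per block
               = 128 bits * 250 MHz / 540 cycles
               ~ 59 Mbit/s

which lines up with the MBits/sec number above.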

If anyone wants the source code, just let me know. It's for ARM Linux
but should be easy to port to any decent development environment.

Bye,
Rob.


David Wagner

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
In article <3700114C...@acm.org>, Jim Gillogly <j...@acm.org> wrote:
> > The authors show that A.B (encrypting with A first, then B second)
> > is at least as secure as A, but not necessarily as secure as B (under
> > some threat models, anyway -- in particular, under ciphertext-only
> > probable-plaintext attack, A.B might be weaker than B, if you are
> > especially unlucky).
>
> I'll check the reference. However, unless the two use related keys and
> the ciphertext of the first is distinguishable from random, I have yet to
> be convinced that applying B to A in this case doesn't constitute an
> attack on A that could be applied by the cryptanalyst who simply obtains
> A ciphertext. But I'll read the paper...

Here's the idea (intuition only, hope I remember this correctly).
Let S be the set of English-looking plaintexts, and suppose A maps S
to some other set T. It might happen that B is weak for encrypting T
but good for encrypting S. In this case, A.B would be weaker than B
is, for probable-plaintext attacks.

(I believe that if there is a probable-plaintext attack on A.B, then
there is necessarily a chosen-plaintext attack on B. But typically a
cipher that falls to probable-plaintext attacks is considered weaker
than a cipher which resists everything short of chosen-text attacks,
so this doesn't contradict the claim that A.B might be weaker than B.)

> > Also, one requires the assumption that A and B are keyed independently,
> > which raises the question: what key schedule should we use for A.B?
>

> The obvious suggestion to use each component's native key schedule.

Let me make sure I understand correctly. To encipher under E2.RC6
with a 128-bit key K, you first encrypt under E2 with the subkeys
determined by E2's key schedule applied to K, then encrypt under RC6
with the subkeys determined by RC6's key schedule on K?

If I've got that right, this doesn't work -- the two ciphers' keys
aren't independent. In fact, they're about as dependent as you
can get -- they're equal!
So the proof of security doesn't go through.

Using dependent keys somehow "voids the security warranty" provided
by the theoretical proofs...
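
As a minimal sketch of the keying discipline the proof does want - with
e2_* and rc6_* as placeholder names for whatever real E2 and RC6
implementations one links against, not actual library calls - the cascade
would look something like this in C:

    #include <stdint.h>

    /* Placeholder contexts and interfaces -- stand-ins for real E2/RC6 code. */
    typedef struct { uint32_t subkeys[64]; } e2_ctx;
    typedef struct { uint32_t subkeys[44]; } rc6_ctx;

    void e2_key_schedule(e2_ctx *c, const uint8_t key[16]);
    void e2_encrypt(const e2_ctx *c, uint8_t block[16]);
    void rc6_key_schedule(rc6_ctx *c, const uint8_t key[16]);
    void rc6_encrypt(const rc6_ctx *c, uint8_t block[16]);

    /* Encrypt one 128-bit block under E2.RC6 with a 256-bit master key.
       The two 128-bit halves are chosen independently, which is exactly
       the assumption the cascade proof needs; feeding the *same* 128 bits
       to both key schedules would void it. */
    void cascade_encrypt(const uint8_t master_key[32], uint8_t block[16])
    {
        e2_ctx  a;
        rc6_ctx b;

        e2_key_schedule(&a, master_key);        /* first half keys A = E2   */
        rc6_key_schedule(&b, master_key + 16);  /* second half keys B = RC6 */

        e2_encrypt(&a, block);   /* A first...       */
        rc6_encrypt(&b, block);  /* ...then B on top */
    }

(The subkey array sizes above are arbitrary; the point is only that the
two key schedules never see each other's key material.)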

Robert Harley

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to

Dianelos Georgoudis (dian...@tecapro.com) wrote:
> Speed may be easy to measure, but I think its significance is overrated

I concur absolutely with this opinion.


*** What is going on? ***

Is the AES process supposed to choose a very fast algorithm that is
somewhat secure or a very secure algorithm that is somewhat fast?

I sincerely hope it is the latter but if the discussions being
reported are anything to go by, it looks like the process is off
track.

The purpose of the whole thing is to replace DES and that is not
because of DES's performance: it is because DES is not secure enough.


As Dianelos mentioned, speed is a relatively easy issue. That's
presumably why everyone is rushing to measure it today and guesstimate
it for the future.

But if speed were really the primary concern, the choice would be much
easier. The fastest candidates are clearly RC6 running on x86 and DFC
running on Alpha, so we could narrow it down to two. Great!

In fact speed is by far the lesser concern compared to security.


Does it matter if transactions take three seconds (or milliseconds or
whatever) instead of two? It matters a little, but it just is not a big
deal; after all, they have to take some amount of time.

Would it matter if many billions of dollars worth of transactions were
intercepted or endangered or plain held up because of a catastrophic
failure of an AES algorithm? That would be a nightmare and must be
avoided.


Thus it seems clear that the important question is NOT:

Which is the fastest algorithm (with decent security)?


but instead:

Which is the most secure algorithm (with decent performance)?


This is the more difficult question but it is the one we have to ask
and the one we have to answer.

I hope the guys at NIST can still see the wood for the trees.

Bye,
Rob.

Jim Gillogly

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
David Wagner wrote:
>
> Here's the idea (intuition only, hope I remember this correctly).
> Let S be the set of English-looking plaintexts, and suppose A maps S
> to some other set T. It might happen that B is weak for encrypting T
> but good for encrypting S. In this case, A.B would be weaker than B
> is, for probable-plaintext attacks.

Oh. I had assumed we were talking about modern general-purpose
ciphers rather than theoretical ciphers that were broken in
specific ways. I'm less interested in the latter, and will bow
out of the thread.

> > > Also, one requires the assumption that A and B are keyed independently,
> > > which raises the question: what key schedule should we use for A.B?
> >
> > The obvious suggestion to use each component's native key schedule.
>
> Let me make sure I understand correctly. To encipher under E2.RC6
> with a 128-bit key K, you first encrypt under E2 with the subkeys
> determined by E2's key schedule applied to K, then encrypt under RC6
> with the subkeys determined by RC6's key schedule on K?

My suggestion was that to use E2.RC6, where each of E2 and RC6 is to
be used in its 128-bit mode, you would need a 256-bit key, with all
256 bits independently chosen. E2 would use 128 bits, and RC6 would
use 128 bits, each using its own key schedule. I agree that 128-bit
E2.RC6 is not well-defined, and probably not desirable.

> If I've got that right, this doesn't work -- the two ciphers' keys
> aren't independent. In fact, they're about as dependent as you
> can get -- they're equal!
> So the proof of security doesn't go through.

I agree, and didn't intend this. I thought that's what <you> were
suggesting.

> Using dependent keys somehow "voids the security warranty" provided
> by the theoretical proofs...

Absolutely agree.
--
Jim Gillogly
Sterday, 8 Astron S.R. 1999, 17:18
12.19.6.1.3, 5 Akbal 16 Cumku, Fifth Lord of Night

David Wagner

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
In article <370108A1...@acm.org>, Jim Gillogly <j...@acm.org> wrote:
> Oh. I had assumed we were talking about modern general-purpose
> ciphers rather than theoretical ciphers that were broken in
> specific ways. I'm less interested in the latter, and will bow
> out of the thread.

Ok. I just wanted to point out that the "provable security" isn't
necessarily quite as provable as you might think -- there are subtle
pitfalls.

> My suggestion was that to use E2.RC6, where each of E2 and RC6 is to
> be used in its 128-bit mode, you would need a 256-bit key, with all
> 256 bits independently chosen. E2 would use 128 bits, and RC6 would
> use 128 bits, each using its own key schedule.

Ok, I see now.
But then I don't see how this can be reasonably promulgated as an
AES standard, because it has only (at most) 128 bits of strength, not
256 bits of strength (a meet-in-the-middle attack over the two independent
128-bit halves takes on the order of 2^128 work). I believe that if we
standardize on an algorithm with a 256-bit key, it should have 256 bits
of strength. Moreover, it was a design requirement for AES that the
ciphers be able to support 128-bit key lengths, which E2.RC6 would not
satisfy.

Thus, I don't think cascading multiple algorithms will give us a good
AES standard. (Doesn't mean it's not a useful technique, just that it
doesn't immediately solve the problem of finding an AES cipher.)

wtshaw

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
In article <rz7yake...@corton.inria.fr>, Robert Harley
<har...@corton.inria.fr> wrote:
....

> *** What is going on? ***
...

> The purpose of the whole thing is to replace DES and that is not
> because of DES's performance: it is because DES is not secure enough.

> As Dianelos mentioned, speed is a relatively easy issue....


> In fact speed is by far the lesser concern compared to security.

...


> Thus it seems clear that the important question is NOT:
> Which is the fastest algorithm (with decent security)? but instead:
>
> Which is the most secure algorithm (with decent performance)?

> This is the more difficult question but it is the one we have to ask
> and the one we have to answer.
>
> I hope the guys at NIST can still see the wood for the trees.
>

Sometimes it is best to let the hot bloods run around in circles; they
seem to enjoy it so much. As you point out, the serious government need
is not for speed, whatever the needs of other people in other places
may be.

One might wonder whether the whole business is more a PR measure, or the
making of some political statement, than a sincere effort to appeal for
help in a fair and open manner. We doubt the latter, since we are so used
to government doing so much for the chosen, and in private, and there are
those that continue to do so.

An adopted standard may not really mean much after all, other than setting
up a target for additional study, which may or may not reach results on a
preset schedule. I mention this because it is another area where time,
like speed, may seem to push - or actually push - wrong-headed evaluation.

A slow and deliberate process is necessary to maintain proper
perspective. And the government has the option to can the whole thing,
regardless of the effort that has been involved. After all, there are
always more, and surely new, options, if the goal is really to put the
security need first.
--
Too much of a good thing can be much worse than none.

Terje Mathisen

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
Bruce Schneier wrote:
>
> On Mon, 29 Mar 1999 16:15:16 GMT, jsa...@tenMAPSONeerf.edmonton.ab.ca
> (John Savard) wrote:
[snip]

> >If everybody's C code is compiled on the same compiler, one may be
> >comparing optimizations or something, but one isn't comparing compilers.
>
> One is comparing both how well the coder optimized his code, and how
> well the compiler optimizes the particular algorithm. For example,
> the Borland C compiler can't do rotates well. Any algorithm using
> rotates will look relatively worse than an algorithm that does not, if
> compared using a Borland compiler. This relative difference won't
> exist if the algorithms are compared using a Microsoft compiler.

Indeed.

Even though there might exist crypto algorithms which would happen to
compile into near-optimal code on almost all compilers, I believe a new
standard encryption algorithm is more than important enough to deserve
being implemented in hand-optimized asm code for all major cpu
architectures.

I.e. there is no particular reason to handicap an algorithm just because
it uses a normal CPU instruction which is hard or impossible to express
directly in portable C.
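
To make that concrete with the rotate example from Bruce's post - this is
just the standard C idiom, not anything taken from a candidate's code -
the portable way to spell a 32-bit rotate is:

    #include <stdint.h>

    /* Portable 32-bit left rotate.  Compilers that handle rotates well
       collapse this to a single ROL instruction; weaker ones emit two
       shifts and an OR, which penalizes rotate-heavy candidates such as
       RC6 and MARS.  The masking keeps the shift counts defined even
       when r is 0 or a multiple of 32. */
    static uint32_t rotl32(uint32_t x, unsigned r)
    {
        r &= 31;
        return (x << r) | (x >> ((32 - r) & 31));
    }

so the C source is identical everywhere, and only the compiler decides
whether the native CPU instruction actually gets used.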

This is why I really like the AES analysis submitted by B. Schneier's
group, where they compared the relative speed of a theoretically perfect
asm implementation of each algorithm.

The numbers they came up with seem to correlate well with what good
coders have been able to do on several of the algorithms.

Terje

--
- <Terje.M...@hda.hydro.com>
Using self-discipline, see http://www.eiffel.com/discipline
"almost all programming can be viewed as an exercise in caching"

Brian Gladman

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to

Terje Mathisen <Terje.M...@hda.hydro.com> wrote in message
news:37012098...@hda.hydro.com...

I wonder why :-)

Brian Gladman


Jim Gillogly

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
David Wagner wrote:
>
> In article <370108A1...@acm.org>, Jim Gillogly <j...@acm.org> wrote:
>
> > My suggestion was that to use E2.RC6, where each of E2 and RC6 is to
> > be used in its 128-bit mode, you would need a 256-bit key, with all
> > 256 bits independently chosen. E2 would use 128 bits, and RC6 would
> > use 128 bits, each using its own key schedule.
>
> Ok, I see now.
> But then I don't see how this can be reasonably promulgated as an
> AES standard, because it has only (at most) 128 bits of strength, not
> 256 bits of strength. I believe that if we standardize on an algorithm
> with a 256-bit key, it should have 256 bits of strength. Moreover, it
> was a design requirement for AES that the ciphers be able to support
> 128 bit key lengths, which E2.RC6 would not satisfy.

I agree that this does not produce a candidate for AES, and this was not
in fact Dianelos' intent when he brought it up. He introduced it as a
lunch-time question to an unnamed cryppie, and framed it as A and B
representing two strong ciphers, using as a specific example two AES
candidate ciphers. I reacted to his question on that basis.

> Thus, I don't think cascading multiple algorithms will give us a good
> AES standard. (Doesn't mean it's not a useful technique, just that it
> doesn't immediately solve the problem of finding an AES cipher.)

Agreed, again. I'm reminded of King Gama (from Princess Ida), who found
that he couldn't persuade anybody to argue with him.

Before ditching the question completely I'll also agree with William
Hugh Murray's point that crypto strength is unlikely to be the weak
link in one's security. However, if CPU is not an issue, cascading
protects against unexpected breakage of one of the components.

--
Jim Gillogly
Sterday, 8 Astron S.R. 1999, 21:40

John Savard

unread,
Mar 30, 1999, 3:00:00 AM3/30/99
to
Robert Harley <har...@corton.inria.fr> wrote, in part:

>Is the AES process supposed to choose a very fast algorithm that is
>somewhat secure or a very secure algorithm that is somewhat fast?

>I sincerely hope it is the latter but if the discussions being
>reported are anything to go by, it looks like the process is off
>track.

Back at the time this whole process began, the basic requirement was for
a cipher both faster and more secure than Triple-DES. And, if the
security requirement is met, the faster the better.

It is not so much that the process has gotten off its intended track but
that the track is perhaps the wrong one to be on. Because a stream cipher
can make use of an ever-changing portion of a large key (but look at
Mishmash, a component of my Quadibloc III), it is possible for a stream
cipher to be more secure at a given level of speed than a block cipher: so
if speed is an over-riding concern, why a block cipher?

One other initial stipulation is (slightly) violated by MARS: that ciphers
should be based on a unified structure, deriving from a single idea or at
least a single round type.

It certainly sounded to me at the beginning that the fastest algorithm
that wasn't felt to be insecure would be the winner. There is some
justification for this approach: without a strong emphasis on speed,
there is no incentive to produce a theoretically novel design, or even a
design whose quality reflects real expertise.

A mere duffer - a cryptographic dilettante - like myself can come up with a
design like Quadibloc III. *Very* weak, thanks to the Mishmash component,
against power consumption measuring attacks - and requiring large
key-dependent S-boxes, hence unsuitable for smart cards - but yes, it's
secure. A structure like that is not going to be susceptible to any form of
cryptanalysis that improves on brute force any time soon.

It's the effort to obtain security from a speedy design that will lead to
identifying the best primitives or developing new ones to use in a block
cipher.

However, that means that while the AES process has produced beneficial
output, using the finished product from it is something like going up in a
rocket whose millions of parts were all "supplied by the lowest bidder".
What if the one great idea turns out to be susceptible to some non-obvious
transform of the differential cryptanalysis idea?

How to escape from the conundrum, namely that:

- an emphasis on speed forces people to think and produce well-engineered
designs, while an emphasis on security can be met in lazy ways;

- yet, in the end, cipher users usually need security much more than
speed?

Don't use the exact design the AES process turns up - but use the good
ideas the process produced? Of course, one will have to be selective for
another reason: only the winner is required to relinquish patent rights
to his block cipher - and only with respect to implementations of the
exact winning design, not with respect to its principles or to other
designs similar to, or based on, it. Fortunately, not all the candidate
algorithms are proprietary in this fashion.

I'm very glad the AES process is taking place, but I admit that if one
looks really hard at it, there are reasons to fear that it hasn't just
gone off track, but was never on a track to begin with. It's much too
late, I fear, to remedy this; still, it has produced a major advance in
the state of the art, and that surely counts for more than any doubts
about its ability to produce a good *direct* result.

Paul Koning

unread,
Mar 30, 1999, 3:00:00 AM3/30/99