
crypto flaw in secure mail standards (was: Order of encryption and authentication)


Don Davis

Jun 22, 2001, 10:18:12 AM
All current secure-mail standards specify, as their "high-
security" option, a weak use of the public-key sign and encrypt
operations. On Thursday the 28th of this month, I'll present
my findings and my proposed repairs of the protocols, at the
Usenix Technical Conference here in Boston:
http://www.usenix.org/events/usenix01/usenix01.pdf

Citation:
Don Davis, "Defective Sign & Encrypt in S/MIME, PKCS#7, MOSS,
PEM, PGP, and XML." To appear in Proc. Usenix Tech. Conf. 2001,
Boston. June 25-30, 2001.

A short summary: All current secure-mail standards have a
significant cryptographic flaw. There are several standard
ways to send and read secure e-mail. The most well-known
secure mail systems are PGP and S/MIME. All current public-
key-based secure-mail standards have this flaw. Here are some
examples of the flaw in action:

Suppose Alice and Bob are business partners, and are setting
up a deal together. Suppose Alice decides to call off the
deal, so she sends Bob a secure-mail message: "The deal is off."
Then Bob can get even with Alice:

* Bob waits until Alice has a new deal in the works
with Charlie;
* Bob can abuse the secure e-mail protocol to re-encrypt
and resend Alice's message to Charlie;
* When Charlie receives Alice's message, he'll believe
that the mail-security features guarantee that Alice
sent the message to Charlie.
* Charlie abandons his deal with Alice.

Suppose instead that Alice & Bob are coworkers. Alice uses
secure e-mail to send Bob her sensitive company-internal
sales plan. Bob decides to get his rival Alice fired:

* Bob abuses the secure e-mail protocol to re-encrypt and
resend Alice's sales-plan, with her digital signature,
to a rival company's salesman Charlie.
* Charlie brags openly about getting the sales plan from
Alice. When he's accused in court of stealing the plan,
Charlie presents Alice's secure e-mail as evidence of
his innocence.

Surprisingly, standards-compliant secure-mail clients will
not detect these attacks.
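
The attack needs nothing beyond the decrypt and encrypt operations every
mail client already has. The toy Python sketch below makes the mechanics
concrete; it uses Ed25519 signatures from the third-party cryptography
package, and Fernet keys as stand-ins for each party's public-key mailbox
(the asymmetry of the encryption is irrelevant to the attack), so the
primitives are illustrative assumptions, not what S/MIME or PGP actually
specify.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    alice_sign = Ed25519PrivateKey.generate()
    alice_verify = alice_sign.public_key()
    bob_box = Fernet(Fernet.generate_key())      # stand-in for Bob's public key
    charlie_box = Fernet(Fernet.generate_key())  # stand-in for Charlie's public key

    # Alice: naive sign & encrypt -- the signature covers only the body.
    body = b"The deal is off."
    signed = body + b"\n--sig--\n" + alice_sign.sign(body)
    to_bob = bob_box.encrypt(signed)

    # Bob: strip his encryption layer and re-encrypt the still-signed body.
    to_charlie = charlie_box.encrypt(bob_box.decrypt(to_bob))

    # Charlie: decrypts and verifies Alice's signature.  Nothing in the
    # signed data says the message was ever meant for Bob rather than him.
    msg, sig = charlie_box.decrypt(to_charlie).split(b"\n--sig--\n", 1)
    alice_verify.verify(sig, msg)   # raises InvalidSignature only if tampered
    print("Charlie sees a valid signature from Alice on:", msg.decode())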

------------------------------
Abstract
Simple Sign & Encrypt, by itself, is not very secure.
Cryptographers know this well, but application programmers
and standards authors still tend to put too much trust
in simple Sign-and-Encrypt. In fact, every secure e-mail
protocol, old and new, has codified naïve Sign & Encrypt
as acceptable security practice. S/MIME, PKCS#7, PGP,
OpenPGP, PEM, and MOSS all suffer from this flaw.
Similarly, the secure document protocols PKCS#7, XML-
Signature, and XML-Encryption suffer from the same flaw.
Naïve Sign & Encrypt appears only in file-security and
mail-security applications, but this narrow scope is
becoming more important to the rapidly-growing class
of commercial users. With file- and mail-encryption
seeing widespread use, and with flawed encryption in
play, we can expect widespread exposures.

In this paper, we analyze the naïve Sign & Encrypt flaw,
we review the defective sign/encrypt standards, and we
describe a comprehensive set of simple repairs. The
various repairs all have a common feature: when signing
and encryption are combined, the inner crypto layer must
somehow depend on the outer layer, so as to reveal any
tampering with the outer layer.

-----------------------------

Once I've presented the paper, I'll make this link live:
http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.ps

- don davis, boston
http://world.std.com/~dtd


----------------------------------------------------------------------------
David Hopwood (david....@zetnet.co.uk) wrote:

> Alice has a public encryption key PKB for Bob, with fingerprint FB.
> Alice's own public verification key is PKA, with fingerprint FA.
> Enc is an IND-CCA2-secure encryption scheme and Sign is an EUF-CMA-secure
> signature scheme with appendix. Alice sends a signed and encrypted
> message with plaintext M to Bob, as:
>
> FB, Enc_PKB(FA, M, Sign_PKA(FB, M))


reply from: lcs Mixmaster Remailer (m...@anon.lcs.mit.edu):

The interesting one is the FB inside the signature. This seems
to serve as a sort of "Dear Bob" within the signed portion of
Alice's message, so that it is clear that she meant to send it
to Bob.
There was a debate about a similar issue in the early days of
the XML encryption effort.
It was argued, though, that ... what was really needed (at
least as an option) was a combined signature and encryption
transform. The reasoning was essentially that shown by David's
example above. If you just sign and then encrypt, it is as
if the inner "FB" doesn't exist. Then Bob can decrypt the
data and re-encrypt it for someone else. They then receive
a message encrypted to them and signed by Alice, and they
might be misled into thinking that Alice intended to send
it to them. If the message contained instructions, they might
even think Alice intended for those instructions to be
followed. Much mischief could result.

In the end the group decided not to try to support a combined
mode like this. Partially it was for organizational reasons; ...
But it was also argued that the mistake above was simply a matter
of not understanding the semantics of public key signatures.
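
For concreteness, here is a small Python sketch of the signed part of the
construction David quotes above, with the recipient's fingerprint FB bound
into the signature and checked by the verifier. It assumes Ed25519 from the
cryptography package and uses a SHA-256 hash of the raw public key as the
fingerprint; the framing is illustrative, not any standard's wire format.
The whole payload would then be encrypted to PKB, as in
Enc_PKB(FA, M, Sign_PKA(FB, M)).

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def fpr(pub: Ed25519PublicKey) -> str:
        # Fingerprint = SHA-256 of the raw public key bytes (illustrative).
        return hashlib.sha256(pub.public_bytes(Encoding.Raw, PublicFormat.Raw)).hexdigest()

    def sign_for(sender_priv: Ed25519PrivateKey, recipient_fpr: str, msg: str) -> bytes:
        # The signed data carries FB, the intended recipient's fingerprint.
        signed = json.dumps({"to": recipient_fpr, "msg": msg}, sort_keys=True).encode()
        return signed + sender_priv.sign(signed)   # Ed25519 signatures are 64 bytes

    def verify_as(my_fpr: str, sender_pub: Ed25519PublicKey, payload: bytes) -> str:
        signed, sig = payload[:-64], payload[-64:]
        sender_pub.verify(sig, signed)             # authentic: the sender signed it
        fields = json.loads(signed)
        if fields["to"] != my_fpr:                 # ...but was it meant for me?
            raise ValueError("valid signature, but it names a different recipient")
        return fields["msg"]

    # Bob can still strip the encryption and re-encrypt the payload to Charlie,
    # but Charlie's verify_as() now rejects it at the recipient check.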

-

lcs Mixmaster Remailer

Jun 22, 2001, 1:20:15 PM
Don Davis writes,

> All current secure-mail standards have a
> significant cryptographic flaw. There are several standard
> ways to send and read secure e-mail. The most well-known
> secure mail systems are PGP and S/MIME. All current public-
> key-based secure-mail standards have this flaw. Here are some
> examples of the flaw in action:
>
> Suppose Alice and Bob are business partners, and are setting
> up a deal together. Suppose Alice decides to call off the
> deal, so she sends Bob a secure-mail message: "The deal is off."

The only thing protected in a signed message is the portion that is signed.
Alice needs to say, "Bob, the deal is off."

Actually this is not enough. Suppose Alice sends this, or equivalently
suppose we use an encryption scheme similar to what David Hopwood
describes where the inner signed portion includes the outer key.

There can still be trouble. Suppose at some later time Alice and Bob
negotiate a new contract, and Bob wants to get out of it. He pulls out
this old message of Alice's and stamps a new date on it, claiming that
it was with regard to their new contract negotiation. He says that
Alice withdrew from the contract so he is not liable for any penalties.

Again the problem is that only what is signed is protected. If the date
is not signed, it is not protected. So the protocol has to include the
date in the signature. (Actually I think most email encryption protocols
do this, but the point is that the formal description of what is signed
may not show that.) Only what is signed is protected.

Even the date may not be enough. Suppose Alice and Bob are separately
negotiating two different contracts, using a threaded mail reader
which uses Reply-To: or some similar fields in the mail header so
that exchanges with regard to one contract are shown separately from
exchanges with regard to the other. Then Alice might send, "Bob, the
deal is off," including a date in the signature, and expect it to apply
just to the deal being negotiated in that thread, because that's how her
mail software shows it. However Bob can take the message and claim that
it applied to the other thread.

In this case, other context that was in the minds of Alice and Bob is
not being covered by the signature. This is really the general form of
the issue being discussed. What is in the minds of the participants,
what assumptions are they making that are not being written down?

This is why we have lawyers and contracts and fine print. These
institutions and practices are the result of centuries of people weaseling
out of contracts in various ways.

It is mistaken to think that we can solve this problem by a little
cryptographic legerdemain involving copying a field from the outer
encryption envelope into the inner signature. That does not begin to
cover all of the things that can go wrong.

The only real solution is to use the advice and experience of the
legal system when negotiating a contract which will bind the parties.
Make sure everything is written down and sign a document which is as
clear, specific and free of ambiguity as possible.

It's not a cryptographic issue, and failures of this kind are not
cryptographic failures. Cryptography can't read the minds of the
parties involved and know that all of their assumptions are included in
the signed portion. The real solution is for the communicants to take
the responsibility to put everything there that is needed. Only what
is signed is protected.

David Wagner

Jun 22, 2001, 2:00:54 PM
lcs Mixmaster Remailer wrote:
>It is mistaken to think that we can solve this problem by a little
>cryptographic legerdemain involving copying a field from the outer
>encryption envelope into the inner signature. That does not begin to
>cover all of the things that can go wrong.

Maybe we can't completely solve every variation on the problem, but
we can prevent the common case. It seems that, with some judicious
engineering, we can take a big bite out of the most important areas,
even if we don't solve all the corner cases. And if it doesn't anything,
it would appear to be a win-win situation.

The point is to try to build a system that is as robust as we can make it.
Even if it is not perfect in principle, we want to make it as good as can
be in practice. And, in practice, many users might not realize the subtle
details of exactly what is and isn't protected by the crypto. We can
try to educate them, and that is worth doing, too, but I believe it would
be a mistake to expect that we can solve this problem just with education.

Mark Wooding

Jun 22, 2001, 2:31:58 PM
Don Davis <d...@world.std.com> wrote:

> Suppose Alice and Bob are business partners, and are setting
> up a deal together. Suppose Alice decides to call off the
> deal, so she sends Bob a secure-mail message: "The deal is off."
> Then Bob can get even with Alice:
>
> * Bob waits until Alice has a new deal in the works
> with Charlie;
> * Bob can abuse the secure e-mail protocol to re-encrypt
> and resend Alice's message to Charlie;
> * When Charlie receives Alice's message, he'll believe
> that the mail-security features guarantee that Alice
> sent the message to Charlie.
> * Charlie abandons his deal with Alice.

If this is a failure in S/MIME (and I could accept arguments that it
is), it's because it doesn't sign the right things. It has omitted
important metadata from the mail headers including the intended
recipient.

PGP's view of the world is different, I think -- it doesn't assume it'll
be used in email, and so shouldn't get involved with deciding which bits
of metadata should be included and which shouldn't. This makes PGP a
more general tool, but a more dangerous one and a harder one to use
properly.

> Suppose instead that Alice & Bob are coworkers. Alice uses
> secure e-mail to send Bob her sensitive company-internal
> sales plan. Bob decides to get his rival Alice fired:
>
> * Bob abuses the secure e-mail protocol to re-encrypt and
> resend Alice's sales-plan, with her digital signature,
> to a rival company's salesman Charlie.
> * Charlie brags openly about getting the sales plan from
> Alice. When he's accused in court of stealing the plan,
> Charlie presents Alice's secure e-mail as evidence of
> his innocence.

This attack depends on stupidity in the court. Merely having an
encrypted email signed by someone doesn't mean that it was actually sent
to you (as, indeed, is the point of your attack), but I'd expect a
sensible expert witness to be able to explain this fairly
straightforwardly. It's the exact analogue of Bob giving Charlie a
signed paper document in a nice envelope, addressed to Charlie.


I think there's a rather more pernicious problem with separable signing
and encryption. Suppose that Alice sends a `secure' message to Bob
saying `I think our boss Charlie is molesting little boys'. Bob can
strip off the outer encryption layer and then give Charlie the message,
still signed by Alice's key. This can't, I believe, be solved by
cryptography. Procedural fix: don't sign things you don't want to be
held accountable for.

I'm very cautious about what I sign, and I've tried to design protocols
which don't leave certificates lying about after they're finished.


> The various repairs all have a common feature: when signing and
> encryption are combined, the inner crypto layer must somehow depend on
> the outer layer, so as to reveal any tampering with the outer layer.

This is the wrong fix. The right fix involves inserting appropriate
metadata (e.g., recipient and context information) within the signature
boundary.

-- [mdw]

David A Molnar

Jun 22, 2001, 3:34:15 PM
Mark Wooding <m...@nsict.org> wrote:

> strip off the outer encryption layer and then give Charlie the message,
> still signed by Alice's key. This can't, I believe, be solved by
> cryptography. Procedural fix: don't sign things you don't want to be
> held accountable for.

Designated verifier signatures might be useful here.

-David

vedaal

Jun 22, 2001, 4:32:07 PM
Joseph Ashwood wrote:

>there is a flaw in the argument, specifically I believe that
>there is little to no weight to the argument, since it has been
>contradicted by the standard in one case, and the implementation in the other.
>Joe

If what you mean is that pgp will verify the original time and date
of signing, and that when it is re-sent, it will be clearly seen from
the signature that it was not signed at the time the re-sender would
like to claim it to be, then you are quite correct.

The issue i was addressing is, if Alice sends sensitive or critical
information to Bob, and Bob wants to demonstrate that this information
came from 'Alice', and points to Alice's verifiable signature as proof
of this (independent of the time and date that it was signed), then,
if Alice sends a signed and encrypted message to Bob using a pgp
dh/dss key, then Bob *cannot* demonstrate that Alice has signed this
without either giving out his secret key, or decrypting the message in
front of those to whom he wishes to demonstrate this.

The most common way that a message is both signed and encrypted in pgp
or gpg is to use the *sign and encrypt* option, not to first sign and
then encrypt the already-signed message as a separate step.

What i intended to point out is the not-well-known point that if a
message is encrypted and signed in pgp, where both the receiver and
sender are using RSA keys, it is possible for the receiver to decrypt
the message and separate it into a verifiable signed message, as if
the sender had sent it 'without' encrypting it.

This has both good and bad implications:

'good':

it is possible for someone acting upon orders or information to
demonstrate this with a verifiable message from the sender, by sending
the separated clear-signed text, without doing so in person or
compromising his/her key.

'bad':

sensitive information meant only for the recipient can be publicly
posted as a clear-signed message, with no indication that it was
intended to be a private message to only one person.


hope this clarifies things,


vedaal

lcs Mixmaster Remailer

Jun 22, 2001, 5:40:16 PM
David Wagner writes:

> Maybe we can't completely solve every variation on the problem, but
> we can prevent the common case. It seems that, with some judicious
> engineering, we can take a big bite out of the most important areas,
> even if we don't solve all the corner cases. And if it doesn't anything,
> it would appear to be a win-win situation.

I think you meant, "if it doesn't [cost] anything". But actually it does
cost something, in that it means that signature and encryption can no
longer be decoupled in the software design. When you sign you have to
know if you are going to encrypt. Depending on how the software works
and is segmented, that may not be easy or convenient.

It also depends to some extent on the protocol. It could be that
with some protocols you sign first and only later does the message
get encrypted, maybe in some kind of gateway. It might even be that
encryption occurs at a different network layer than signature, since it
is really there for quite a different reason.

(Granted, the original article only mentioned secure-mail standards,
and in that case encryption and signature probably are closely linked.
But similar concerns could apply to other standards as well, and in fact
the same issue was raised with regard to XML encryption and signatures.
The fact that it becomes difficult to say whether a given protocol
is vulnerable to this weakness demonstrates that the supposed flaw is
actually rather poorly defined. It amounts to a concern that people will
misunderstand what the crypto is doing, and so to judge how much of a
flaw it is we have to guess at how badly people will fail to understand
their crypto.)

> The point is to try to build a system that is as robust as we can make it.
> Even if it is not perfect in principle, we want to make it as good as can
> be in practice. And, in practice, many users might not realize the subtle
> details of exactly what is and isn't protected by the crypto. We can
> try to educate them, and that is worth doing, too, but I believe it would
> be a mistake to expect that we can solve this problem just with education.

Let's look at how this flaw would actually play out in practice, say
for this case:

: Suppose instead that Alice & Bob are coworkers. Alice uses
: secure e-mail to send Bob her sensitive company-internal
: sales plan. Bob decides to get his rival Alice fired:
:
: * Bob abuses the secure e-mail protocol to re-encrypt and
: resend Alice's sales-plan, with her digital signature,
: to a rival company's salesman Charlie.
: * Charlie brags openly about getting the sales plan from
: Alice. When he's accused in court of stealing the plan,
: Charlie presents Alice's secure e-mail as evidence of
: his innocence.

Now look more closely at where things go wrong. It's not when Alice
sent the email, it's not when Bob re-encrypted it and sent it to Charlie,
or when Charlie brags about getting it from Alice.

The problem happens when Charlie opens Alice's secure email and the
court says, oh, that means that Alice must have sent it to you.

The court is dead wrong in reaching this conclusion. And if Alice is
there, she can speak up and say, no I didn't, that's mail I sent to
Bob. He must have re-sent it to Charlie.

In any realistic scenario where this fraud is played out, it is very
likely that the damaged party is going to contest it in this manner.
And it will quickly become apparent that the encryption does not prove
anything about the sender's intended recipient, as the fraudster claimed.

But in fact, it's highly unlikely in practice that things would even get
this far. People simply don't understand cryptography well enough to
draw these conclusions. No judge is going to be convinced because Charlie
sits down at a computer, fiddles with his keys, and some kind of message
appears on the screen. The judge isn't an expert on crypto and he won't
try to use his personal understanding of the field in resolving the case.
He will appoint a master or rely on expert testimony. And experts,
if they are honest, will be able to explain the true facts.

So for this fraud to be successful, you have to assume a very specific
degree of cryptographic misunderstanding on the part of the participants.
And you have to assume that this misunderstanding will stand even when
contested by the injured party, who can demonstrate the true facts of
the matter.

In fact, very few people today understand crypto well enough even to
rise to the level where they could be fooled by this fraud. And those
who do are probably smart enough that with a little thought they can
quickly understand the true facts of the matter in a situation like this.

As a result, the best solution to this problem is probably
education after all. It's not that there is a vast body of
misunderstanding out there which has to be corrected. Most people know
nothing about the issue. All that is necessary is to make sure people
understand, as they are introduced to cryptography, that only what is
signed is protected.

This is crucial information for them to learn anyway, as demonstrated
in the earlier examples. Failing to drive this point home will cause
considerable grief down the road, if and when people begin to be held
to a nonrepudiation signature standard. Making the point clear from the
beginning is the best way to solve the particular problem about who the
intended recipient of the message is, and a host of others as well.

lcs Mixmaster Remailer

Jun 22, 2001, 5:40:16 PM
vedaal writes:

> The issue i was addressing is, if Alice sends sensitive or critical
> information to Bob, and Bob wants to demonstrate that this information
> came from 'Alice', and points to Alice's verifiable signature as proof
> of this, (independent of the time and date that it was signed), then,
> if Alice sends a signed and encrypted message to Bob using a pgp dh/dss
> key, then Bob *cannot* demonstrate that Alice has signed this without
> either giving out his secret key, or decrypting the message in front of
> those to whom he wishes to demonstrate this.

Based on RFC 2440, the OpenPGP spec, this does not appear to be the case.
It should be possible to strip off the outer encryption layer and leave
just a (binary) signed message which could be verified by PGP. GPG may
have a command to do this stripping, if PGP does not, or you might have
to write a custom program to do the stripping. But the resulting signed
message should be verifiable by any OpenPGP compliant program since it
is a legal OpenPGP message.

You could also strip off the outer encryption layer and then re-encrypt
the resulting binary signed message for someone else's key, which is
exactly the fraud being discussed here. So PGP is not immune to it at
all, whether using a dh/dss key or an rsa key.

David Wagner

Jun 22, 2001, 6:26:57 PM
lcs Mixmaster Remailer wrote:
>I think you meant, "if it doesn't [cost] anything".

Right.

>But actually it does
>cost something, in that it means that signature and encryption can no
>longer be decoupled in the software design. When you sign you have to
>know if you are going to encrypt.

Can you help me understand this? Obviously I must be missing something.
Why do you need to know whether you are going to encrypt? In particular,
one obvious proposal would be the following: When you sign, include any
relevant contextual information (e.g., date, time, To:) in the signature.
Does this not work?
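
A rough sketch of what that might look like, using Python's email module
and an Ed25519 key from the cryptography package; which headers to cover
and how to canonicalize them are exactly the open questions, so the list
below is only an assumption for illustration:

    from email.message import EmailMessage
    from email.utils import formatdate
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SIGNED_HEADERS = ("From", "To", "Date", "Subject")   # illustrative choice

    def signing_input(msg: EmailMessage) -> bytes:
        # Fold the chosen headers in front of the body so the signature
        # covers who, when, and what -- not just the text itself.
        head = "".join(f"{h}: {msg.get(h, '')}\r\n" for h in SIGNED_HEADERS)
        return head.encode() + b"\r\n" + msg.get_payload().encode()

    key = Ed25519PrivateKey.generate()
    mail = EmailMessage()
    mail["From"], mail["To"] = "alice@example.com", "bob@example.com"
    mail["Date"], mail["Subject"] = formatdate(), "Deal status"
    mail.set_content("Bob, the deal is off.")

    signature = key.sign(signing_input(mail))
    # Re-targeting To:, backdating Date:, or splicing the body into another
    # thread now breaks verification instead of going unnoticed.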

P.S. Regarding the example about the court: That's not the example I find
most compelling, because in a court case there are enough resources that
the potential for such a failure mode would perhaps be discovered.
Rather, I'm most worried about everyday users being confused by such an
attack, where there is no issue about taking things to court.

Henrick Hellström

Jun 22, 2001, 8:10:32 PM
As far as I can see, there really is a problem here. It might very well be
the case that Alice does not include vital information in the original
message, and hence is vulnerable to the attack. The signature should not
only, as it is, depend on the private key of the sender, but also on
the public key of the intended recipient. This is easily fixed by
changing the protocol.

David Wagner et al. are perfectly right about it being possible to use the
existing protocols in such a way that the attack will fail. However, it would
be better if the protocols themselves made the attack impossible, and it is
certainly possible to design such protocols (e.g. use ElGamal but replace
the generator of the group by the key of the intended recipient (untested,
unproved)).

--
Henrick Hellström hen...@streamsec.se
StreamSec HB http://www.streamsec.com

"David Wagner" <d...@mozart.cs.berkeley.edu> skrev i meddelandet
news:9h0gnh$r7$1...@agate.berkeley.edu...

SCOTT19U.ZIP_GUY

Jun 22, 2001, 6:48:17 PM
d...@mozart.cs.berkeley.edu (David Wagner) wrote in
<9h0gnh$r7$1...@agate.berkeley.edu>:


>
>P.S. Regarding the example about the court: That's not the example I find
>most compelling, because in a court case there are enough resources that
>the potential for such a the failure mode would perhaps be discovered.
>Rather, I'm most worried about everyday users being confused by such an
>attack, where there is no issue about taking things to court.
>

Having lived many years and seen our courts in action, let me
summarize how it works. If you get to be called an expert, you can
work for whichever side pays you the most, and slant or bend your
words to make your client happy. On the other side of the coin, if
you show some intelligence, like being an engineer, then when it
comes to jury selection you will rarely get on one. Lawyers don't
like jurors who can use their own brains; they prefer those they can
control, who lack the ability to think for themselves.


David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptographic
system is only as strong as its weakest link"

David A Molnar

Jun 22, 2001, 11:53:00 PM
lcs Mixmaster Remailer <m...@anon.lcs.mit.edu> wrote:

> : * Charlie brags openly about getting the sales plan from
> : Alice. When he's accused in court of stealing the plan,
> : Charlie presents Alice's secure e-mail as evidence of
> : his innocence.

> Now look more closely at where things go wrong. It's not when Alice
> sent the email, it's not when Bob re-encrypted it and sent it to Charlie,
> or when Charlie brags about getting it from Alice.

> The problem happens when Charlie opens Alice's secure email and the
> court says, oh, that means that Alice must have sent it to you.

No, the problem happens long before any court is involved. Charlie has a
copy of the sales plan with Alice's signature. Alice's boss hears about
this from Charlie and fires her. Alice launches a wrongful termination
suit which makes Jarndyce & Jarndyce look like a schoolyard quarrel.

By the time a court reaches the verdict that there is a reasonable doubt
as to whether Alice sent the e-mail, Bob has achieved his goal of ruining
his rival. Unless I'm missing something, the crime can't even be pinned
affirmatively on Bob, at least in the absence of mail records.

Port 25 e-mail forging can cause lots of trouble, even though the
forgeries are usually trivial to unmask by looking at the headers. This is
an order of magnitude worse. Not only does the e-mail bear the "secure"
imprimatur, but there's *less* evidence that spoofing has occurred.

It seems that the best you can do is show that there's a possibility that
the message was not sent by Alice. In the real world, outside a court,
this may not be anywhere near enough.

> The court is dead wrong in reaching this conclusion. And if Alice is
> there, she can speak up and say, no I didn't, that's mail I sent to
> Bob. He must have re-sent it to Charlie.

Or maybe Alice sent it to Charlie and she's just trying to pin the blame
on Bob. Or maybe Alice sent it to sixteen people. Unless I'm missing
something, there's no way to determine which is the case. (We have to
assume that mail records between the two companies are discredited,
inconclusive, or unavailable; otherwise we could check to see which of
Alice and Bob communicated the message Charlie received with the sales
plan.)

> In any realistic scenario where this fraud is played out, it is very
> likely that the damaged party is going to contest it in this manner.

With due respect, I believe that there are realistic scenarios in
which the parties will not go to court until after damage has already been
done. Or ever go to court, for that matter. Personal relationships may be
an area for examination.

> But in fact, it's highly unlikely in practice that things would even get
> this far. People simply don't understand cryptography well enough to
> draw these conclusions. No judge is going to be convinced because Charlie

People understand the "From:" line in their inbox pretty well. They also
seem to understand "This document signed by Alice." Combine this attack
with the ability to spoof "From:" lines and you have a Charlie who really,
truly, honestly *believes* that Alice sent him the sales plan.

> And you have to assume that this misunderstanding will stand even when
> contested by the injured party, who can demonstrate the true facts of
> the matter.

I'm not clear as to how exactly Alice demonstrates conclusively that it
was Bob that forwarded the message on to Charlie. As opposed to merely
"it is possible both that Alice sent the message and that Charlie sent the
message." This seems to leave the injured pary at an impasse.
On reflection, I'm not sure whether you're claiming that Alice can so
demonstrate, but in any case her position if she cannot seems
unsatisfactory.

Unless I'm missing something?

-David

Don Davis

Jun 23, 2001, 1:51:00 AM
> All current secure-mail standards specify, as their
> "high-security" option, a weak use of the public-key
> sign and encrypt operations. ...

i've received permission from usenix to release the
paper on saturday (6/23):

http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.ps
http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.html

lcs Mixmaster Remailer

Jun 24, 2001, 11:40:39 PM
David Wagner writes:

> Can you help me understand this? Obviously I must be missing something.
> Why do you need to know whether you are going to encrypt? In particular,
> one obvious proposal would be the following: When you sign, include any
> relevant contextual information (e.g., date, time, To:) in the signature.
> Does this not work?

I agree that something like this is the best solution. In particular it
is much better than what was actually proposed in the paper, which was
to put the encryption key into the signature, or the signature key into
the encryption, or to sign or encrypt twice. Those solutions advance the
illusion that this is a cryptographic problem related to sign+encrypt,
when it is not. (Others have observed that the same kinds of problems
arise even if the message is not encrypted at all.) It is a confusion
about what is protected specifically in an email environment, or perhaps
it is a failure to protect some information that could or should be
protected. More on this below.

Adding To:, etc. to the signature is the best solution in an email
environment. As other discussions have noted, there are other fields
which could be important as well, such as Subject, Keywords, References,
In-Reply-To, etc. In fact some have proposed that the entire set of
email headers should be protected by the signature. This produces the
least ambiguity and possibility of error.

One cost in the context of this solution is that on the sending side the
software may have to be restructured somewhat. Presently it is likely
that signature and encryption are done before the message is formatted
for transmission. Many of the mail headers may be stamped on only at
that last point. The software may have to be rearranged to make sure
that everything is available at an earlier point in the processing.
Granted, this is more of a problem in the context of retrofitting an
existing system. It might be argued that this problem should have been
recognized from the beginning and secure email been designed to protect
the mail headers all along.

The other cost happens on the receiving side: what to do when the
protected headers don't match the outer ones? Is this worth raising a
red flag over? Or perhaps should the inner ones silently overwrite the
outer ones?

It might be that a certain amount of mismatch commonly occurs.
Mail headers are far from sacrosanct, and gateways, mail exploders and
forwarders do sometimes rewrite them. If we raise a red flag every time
then people will learn to just ignore the warnings. If we silently
overwrite then we might lose some of the advantages of the rewriting
which is done (for example mailing lists sometimes rewrite Subject to
tag it with the name of the list, to move a "Re" past the list name, etc.)

These issues can probably be solved but they require some thought and
care in implementing this proposed new capability.
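
One way a receiving client might reconcile the two sets of headers is
sketched below: hard-fail on the identity-bearing fields, and merely
annotate cosmetic rewrites such as a list tag prepended to Subject. The
classification of fields is an assumption chosen for illustration, not
something any of the mail standards define.

    CRITICAL = {"From", "To", "Cc", "Date"}   # mismatch here is a red flag
    COSMETIC = {"Subject"}                    # lists routinely rewrite these

    def reconcile(inner: dict, outer: dict):
        problems, notes = [], []
        for field in CRITICAL | COSMETIC:
            i, o = inner.get(field), outer.get(field)
            if i is None or i == o:
                continue              # header not signed, or unchanged in transit
            if field in CRITICAL:
                problems.append(f"{field}: signed {i!r} != delivered {o!r}")
            else:
                notes.append(f"{field} rewritten in transit; showing signed value {i!r}")
        return problems, notes

    problems, notes = reconcile(
        inner={"To": "bob@example.com", "Subject": "Deal status"},
        outer={"To": "charlie@example.com", "Subject": "[contracts] Deal status"},
    )
    # problems -> worth an intrusive warning; notes -> display annotation only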

> P.S. Regarding the example about the court: That's not the example I find
> most compelling, because in a court case there are enough resources that
> the potential for such a failure mode would perhaps be discovered.
> Rather, I'm most worried about everyday users being confused by such an
> attack, where there is no issue about taking things to court.

Okay, but again, we are talking about confusion here. The real problem in
these examples is a mismatch between users' expectations and/or beliefs,
and what the software actually does. The proposed solution, especially
the crypto-only one, is to partially change the software so that it
slightly more closely approximates users' mistaken beliefs. However, this
is only a partial fix and still leaves the fundamental problem in place.

Actually the problem has not been diagnosed correctly. The issue is
not just that people will mistakenly believe that the software protects
the recipient identity. The more important problem is that the software
fails to routinely protect the recipient identity (and other information).

Here is how the important problem manifests itself: Alice is a manager,
and before leaving on vacation she sends mail to Bob, her subordinate,
saying, "I got the go-ahead from the VP. We are to put the plan we
discussed into action immediately. I'll expect to see a full status
report when I return in a week." She comes back a week later and Bob
didn't do anything! "Didn't you get my email?" "Sure, but I wasn't
sure it was legitimate." "But didn't you see I signed it?" "Yeah,
but I couldn't be sure you sent it to me. It might have been meant for
someone else and redirected to me."

In the real problem, the failure is that the software did not routinely
protect the fact that Bob was the recipient. Hence he could not go on the
assumption that he was the legitimate receiver, and Alice's intention was
not met. The difference from the earlier examples is that in those cases,
someone mistakenly thought the recipient was protected. In this example,
someone correctly thought the recipient was not protected. That is why
the problem is more important and fundamental, in that it does not rely
on persistent misunderstandings, but rather the problem is that the
default behavior of the software did not represent the sender's intention.

This real problem will remain in place even once people have learned that
the fake one is not an issue. The only solution at present is to
manually copy the relevant header information into the message. David
is right that a better fix is to do so automatically.

But again, let us not be misled into thinking it is a cryptographic
failure with a cryptographic fix. It is actually a problem that is very
specific to email, and the fix is specific to the email environment.
The problem is that the sender has no easy way to protect relevant email
header information, and so the fix needs to be to provide a way to do so.
This will require some redesign of email software and of how it interfaces
to encryption. The sender side needs to figure out the headers before
it goes to encrypt/sign, and the receiver side needs to be prepared to
do something reasonable when the inner headers don't match the outer ones.

BTW, adding this capability would also allow for greater privacy
protection of messages as well. The failure to encrypt Subject lines
is something that people have complained about for years. Even the
recipient data could be hidden until the mail got to the receiving
mail server, which could decrypt an outer envelope to discover the
To: lines in full detail. Or someone could use an anonymous remailer
to hide the source of the mail from outsiders, but put a true From:
line in the inner envelope so that the recipient sees who it is from.
There are many additional advantages to being able to put mail headers
inside encryption/signature wrappers in a transparent way.

David Hopwood

Jun 26, 2001, 12:21:35 AM
-----BEGIN PGP SIGNED MESSAGE-----

lcs Mixmaster Remailer wrote:
> David Wagner writes:
> > Can you help me understand this? Obviously I must be missing something.
> > Why do you need to know whether you are going to encrypt? In particular,
> > one obvious proposal would be the following: When you sign, include any
> > relevant contextual information (e.g., date, time, To:) in the signature.
> > Does this not work?
>
> I agree that something like this is the best solution. In particular it
> is much better than what was actually proposed in the paper, which was
> to put the encryption key into the signature, or the signature key into
> the encrypt, or to sign or encrypt twice. Those solutions advance the
> illusion that this is a cryptographic problem related to sign+encrypt,
> when it is not.

Correct; nevertheless, it happens that putting a 2nd pre-image resistant
hash of the encryption key into the signature solves another problem that
*is* cryptographic (see my post on compatible weak keys).

> Adding To:, etc. to the signature is the best solution in an email
> environment. As other discussions have noted, there are other fields
> which could be important as well, such as Subject, Keywords, References,
> In-Reply-To, etc. In fact some have proposed that the entire set of
> email headers should be protected by the signature. This produces the
> least ambiguity and possibility of error.

All the headers that are generated by the sender, yes.

> One cost in the context of this solution is that on the sending side the
> software may have to be restructured somewhat. Presently it is likely
> that signature and encryption are done before the message is formatted
> for transmission. Many of the mail headers may be stamped on only at
> that last point. The software may have to be rearranged to make sure
> that everything is available at an earlier point in the processing.
> Granted, this is more of a problem in the context of retrofitting an
> existing system. It might be argued that this problem should have been
> recognized from the beginning and secure email been designed to protect
> the mail headers all along.
>
> The other cost happens on the receiving side: what to do when the
> protected headers don't match the outer ones? Is this worth raising a
> red flag over? Or perhaps should the inner ones silently overwrite the
> outer ones?
>
> It might be that a certain amount of mismatch commonly occurs.
> Mail headers are far from sacrosanct, and gateways, mail exploders and
> forwarders do sometimes rewrite them. If we raise a red flag every time
> then people will learn to just ignore the warnings.

I agree there shouldn't be an intrusive warning when headers are changed,
but the way headers are displayed should make it clear which have been
authenticated and which have not.

> If we silently
> overwrite then we might lose some of the advantages of the rewriting
> which is done (for example mailing lists sometimes rewrite Subject to
> tag it with the name of the list, to move a "Re" past the list name, etc.)

I consider those advantages fairly trivial, and not worth losing strict
authentication of the subject line (and similar fields) as provided by
the sender. There is a separate "Mailing-List" header for indicating the
name of a list, for example.

(Moving Re: past the list name is an ugly workaround to prevent continuously
expanding subject lines of the form "[listname] Re: [listname] Re: ...".
It's straightforward to prevent those on the client side instead.)

> These issues can probably be solved but they require some thought and
> care in implementing this proposed new capability.

I agree, but a slightly imperfect solution is better than leaving headers
unauthenticated.

[...]


> Actually the problem has not been diagnosed correctly. The issue is
> not just that people will mistakenly believe that the software protects
> the recipient identity. The more important problem is that the software
> fails to routinely protect the recipient identity (and other information).

Absolutely.

> But again, let us not be misled into thinking it is a cryptographic
> failure with a cryptographic fix. It is actually a problem that is very
> specific to email, and the fix is specific to the email environment.

No, I don't think this is specific to email. The problem of not including
necessary context in the data to be signed (or otherwise authenticated) is
widespread across many protocols. For example, it's very rare for signature
standards that operate on files to include the type (e.g. MIME content type
and encoding) of a file in the signed data; they usually just sign the raw
file.

This is made worse by protocols that assume public keys are dedicated to
that single protocol, but that use a general-purpose certificate standard
such as X.509 to certify keys, without any information restricting the use
of the key in the certificate. (It doesn't help that some X.509
implementations ignore even critical extensions.)

> The problem is that the sender has no easy way to protect relevant email
> header information, and so the fix needs to be to provide a way to do so.
> This will require some redesign of email software and of how it interfaces
> to encryption. The sender side needs to figure out the headers before
> it goes to encrypt/sign, and the receiver side needs to be prepared to
> do something reasonable when the inner headers don't match the outer ones.

I don't think those are difficult problems.

- --
David Hopwood <david....@zetnet.co.uk>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip


-----BEGIN PGP SIGNATURE-----
Version: 2.6.3i
Charset: noconv

iQEVAwUBOzgNwDkCAxeYt5gVAQGcLAf+IUHTT38/4DW+nFcxVfZNJ0c++NIM0HpH
S6tfyH3/2Sg2BUitBDJ1KJJNZMnVepF3uNGYGhrggH6XADz5ZzdfHk+/zn1A/JpZ
KZj6HYbAm2J8vtWpk1xrZrsNkSkFFZJXDsFV7TGAZiygwiioepdJc2gV7LHBsHP4
+k+G+9xppSjdOeN1CiZSTDuRUmwRHipnf/B3LbAvaX2HJJZCW445wwC67Nfy97aC
Ni9Xu7+02Bfi/m6dLmAUiL9/EIgBh/WFsVwENx8UQNBNIg2W2/ykuT5XxK/pObW7
0N7qXnuAr+5Jj9uwRW0/jX7PG479i7ZeQJCpZPVi7LU5ksRB3v0nlg==
=l556
-----END PGP SIGNATURE-----

D. J. Bernstein

Jul 2, 2001, 12:24:05 AM
Anyone can verify a public-key signature.

This is the whole point of public-key signatures. Verification isn't
limited to the people who can create signatures.

This is, however, a useless feature for private email. Sometimes it's
downright dangerous.

In contrast, with public-key authenticators, verification is limited to
the sender and the receiver. The receiver can't convince anyone else
that the message was created by the sender; the receiver could have
computed the same authenticator.

What is a public-key authenticator? It's a secret-key authenticator,
with a key derived from g^xy, where g^x and g^y are the public keys of
the sender and the receiver.

If you were already planning to encrypt the message, using another key
derived from g^xy, then you don't have to do any extra public-key work.
A secret-key authenticator is easier to implement than a public-key
signature, and it takes less CPU time to compute.

---Dan
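
A minimal sketch of such an authenticator, with X25519 standing in for the
g^x, g^y key pairs and HKDF-SHA256 deriving the MAC key from the shared
secret; the concrete primitives are illustrative choices, not anything Dan
prescribes, and the Python cryptography package is assumed.

    import hmac, hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()   # x, with public key g^x
    bob_priv = X25519PrivateKey.generate()     # y, with public key g^y

    def auth_key(my_priv, their_pub) -> bytes:
        shared = my_priv.exchange(their_pub)   # both sides compute g^xy
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"mail authenticator").derive(shared)

    msg = b"The deal is off."
    tag = hmac.new(auth_key(alice_priv, bob_priv.public_key()),
                   msg, hashlib.sha256).digest()

    # Bob verifies with the *same* key he could have used to forge the tag,
    # which is exactly why it convinces him and nobody else.
    expected = hmac.new(auth_key(bob_priv, alice_priv.public_key()),
                        msg, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)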

Roger Schlafly

Jul 2, 2001, 3:45:38 PM
Someday, I think people will eventually come around to the view
that Public Key Infrastructure (PKI), as it is popularly conceived,
has some undesirable and unnecessarily anti-privacy aspects. Dan
points out one.

"D. J. Bernstein" <d...@cr.yp.to> wrote in message
news:2001Jul204....@cr.yp.to...

Nhi Trương

Jun 18, 2021, 9:39:57 AM
On Tuesday, July 3, 2001 at 02:45:38 UTC+7, Roger Schlafly wrote:

Max

Jun 18, 2021, 12:16:56 PM
>>> ---Dan


*Drops the mike and continues working on ed25519*