Something to think about...

Tom Zych

Jun 17, 2002, 10:02:10 PM

Let's suppose that some powerful organization wanted to read all
messages encrypted with GnuPG. There's a way this might be
accomplished, which would be very hard to detect.

I'll call this hypothetical organization TLA. (Prudence dictates
that I should never say anything to suggest a specific real
agency.) Let's say that TLA decides to plant a back door in GnuPG
that will leak a few bits of your private key in every message.
GnuPG is of course free software and its source code has been
scrutinized by many people. It would seem impossible for such a
back door to go undetected.

Now, if I mention the title "Reflections on Trusting Trust", some
of you will be way ahead of me. For those who are unfamiliar with
it, RoTT is the title of Ken Thompson's 1983 Turing Award
lecture. He described a clever but not terribly arcane way to
modify a compiler. The modified compiler would insert arbitrary
source code while compiling another program. He used this to
insert a back door into an early UNIX login program, without
having to touch the login source code.

Of course, the back door was visible in the compiler. So he went a
step further: he modified the compiler so it would also modify
itself. He recompiled the compiler, and restored the original
source. He then had an executable compiler which would insert the
back door into login, and maintain itself in its trojaned
state...with nothing in the source code to show that anything had
happened.
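
To make the trick concrete, here is a toy model in Python. It's only
a sketch: the fingerprints and the payload are placeholders, and a
real attack would live inside a binary compiler, not a script.

# Toy model of the Thompson trick. codegen() stands in for real
# code generation; the source fingerprints below are made up.

BACKDOOR = '    if password == "magicword": return True  # injected\n'

def codegen(source):
    return source  # stand-in for actual compilation

def trojaned_compile(source):
    # Stage 1: recognize the login program and splice in a back door.
    if "def check_password(password):" in source:
        source = source.replace(
            "def check_password(password):\n",
            "def check_password(password):\n" + BACKDOOR)
    # Stage 2: recognize the compiler itself and splice both stages
    # back in, so the trojan survives a rebuild from pristine source.
    # The self-reproducing splice (a quine) is the clever bit and is
    # omitted here.
    if "def compile(source):" in source:
        pass  # insert the text of this very function
    return codegen(source)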

It's clear how TLA could use this technique. They hack into
the machines used by the GnuPG team. They replace gcc with their
trojaned version. For good measure they trojan login too, to make
their lives easier when a gcc upgrade is released. Maybe they even
trojan gcc on the computers used by the gcc team, to catch the
folks who build GnuPG from source.

Obviously the same argument applies to PGP from NAI or anyone
else. Can anyone think of a reason this wouldn't work?

Reflections on Trusting Trust:
http://www.acm.org/classics/sep95/

--
Tom Zych
This email address will expire at some point to thwart spammers.
Permanent address is at http://pobox.com/~tz/email.html

James Preston

Jun 17, 2002, 10:47:32 PM

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Mon, 17 Jun 2002 22:02:10 -0400, Tom Zych <tzte...@pobox.com> wrote:
> Now, if I mention the title "Reflections on Trusting Trust", some
> of you will be way ahead of me.

I had been aware of naughty compilers before, but I read that lecture
only a few weeks ago - Mr. Thompson puts it very nicely.

My question is: What can be done to mitigate that attack? The only
thing I can think of, considering that gcc is used to compile itself, is
to compile GnuPG with *as many different compilers as possible* and run
a large set of test cases, checking for any divergence among the
outputs produced by the different binaries.

The logic goes that since most versions of GnuPG are compiled using
perhaps at most two compilers, if both of those are r00ted, then
all binaries produced by them are at risk.
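
Something like this, say. The compiler list and the make invocation
are made up - substitute whatever your platforms actually provide:

# Differential build-and-test sketch: build GnuPG with several
# independent compilers, then compare a deterministic operation (a
# message digest) across the resulting binaries.
import shutil
import subprocess

COMPILERS = ["gcc", "cc", "icc"]              # hypothetical list
VECTORS = [b"hello world\n", b"attack at dawn\n"]

def build(cc):
    subprocess.run(["make", "clean"], check=True)
    subprocess.run(["make", "CC=" + cc], check=True)
    dest = "gpg." + cc
    shutil.copy("g10/gpg", dest)              # 1.x puts gpg in g10/
    return dest

def sha1_via(binary, data):
    p = subprocess.run(["./" + binary, "--print-md", "SHA1"],
                       input=data, capture_output=True, check=True)
    return p.stdout

binaries = [build(cc) for cc in COMPILERS]
for data in VECTORS:
    results = {b: sha1_via(b, data) for b in binaries}
    if len(set(results.values())) > 1:
        print("DIVERGENCE on", data, results)

One catch: this only flags trojans that disturb deterministic output.
A back door hiding in the choice of the random session key, like the
one Tom describes, would sail straight through, since honest binaries
produce different ciphertext on every run anyway.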

Hare-brained idea: would something as silly as a scripted
implementation of parts of GnuPG (e.g. Python or Perl) provide a solid
reference against this sort of attack? I.e., it would be *very* hard to
fiddle gcc in such a way that it produced a /usr/bin/python that
detected the act of encryption.
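
For the deterministic pieces, at least, the cross-check is easy.
Something like the following - the parsing is a guess at the usual
format of gpg's --print-md output:

# Cross-check a deterministic primitive between the compiled gpg
# binary and an independent scripted implementation. This only works
# for deterministic operations; encryption embeds a random session
# key, so ciphertexts differ run to run even between honest binaries.
import hashlib
import subprocess

data = b"test vector\n"

out = subprocess.run(["gpg", "--print-md", "SHA1"],
                     input=data, capture_output=True, check=True)
# gpg prints the digest as spaced groups of hex digits; normalize it.
gpg_hex = out.stdout.decode().split(":")[-1].replace(" ", "").strip().lower()

ref_hex = hashlib.sha1(data).hexdigest()
print("OK" if gpg_hex == ref_hex else "MISMATCH - start worrying")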

- --
James Preston
-----BEGIN PGP SIGNATURE-----

iD8DBQE9Dp85gXK32hUOOt0RAnCxAJ4mpIohldVzut5oapBDtbci3RZhKQCfREjB
2vzDNyW2lirYw5/raQdjsPk=
=8ZCE
-----END PGP SIGNATURE-----

Robert J. Hansen

Jun 18, 2002, 1:57:31 AM

> It's clear how TLA could use this technique. They hack into
> the machines used by the GnuPG team. They replace gcc with their
> trojaned version. For good measure they trojan login too, to make

... and for good measure, they just happen to be God. Or possess
incredible psychic powers. Or...

Really, this is both a major issue and a complete nonissue. What you're
really asking is, "what do you trust?" Do you trust that the compiler
hasn't been hijacked? Do you trust your own PC not to have eavesdropping
hardware built into the motherboard? Do you... etc.

This is why we have threat models. If you have a good threat model, then
you have an answer to this question. If you don't have a good threat
model, then you're going to lie awake at night wondering what the answer
is.

If you have a good threat model, then this is a nonissue--you've already
solved the problem. And if you don't have a good threat model, then this
is a huge issue, because you can't solve the problem.

James Preston

Jun 18, 2002, 2:43:23 AM

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Tue, 18 Jun 2002 00:57:31 -0500, Robert J. Hansen <rjha...@inav.net> wrote:
> If you have a good threat model, then this is a nonissue--you've already
> solved the problem. And if you don't have a good threat model, then this
> is a huge issue, because you can't solve the problem.

This rhetorical proposition doesn't really address what is actually a
rather nasty failure mode, i.e. the general compromise of a trusted piece
of software which is used to establish trust in many other systems.

The one or two failure points (e.g. one of "gcc" or "the physical box
hosting GnuPG's development environment") are here theoretically
susceptible to an undetectable hack which is manifest in all
incarnations of the product. Granted, you can't trust software on
untrusted hardware, but if you *do* trust your hardware, then *how* can
you make certain that you have a "good" GnuPG when source inspection
won't help? I suggested an algorithmic benchmark in a non-compiled
language.

- --
James Preston
-----BEGIN PGP SIGNATURE-----

iD8DBQE9DtZlgXK32hUOOt0RAtxgAKCx0J92y4wx2rJzRCUVCDL0TN5miACdGDB/
ItSxa2sPvyImX1rz/crxMfs=
=KBHh
-----END PGP SIGNATURE-----

Tom Zych

Jun 18, 2002, 8:11:55 AM

"Robert J. Hansen" wrote:

> This is why we have threat models. If you have a good threat model, then
> you have an answer to this question. If you don't have a good threat
> model, then you're going to lie awake at night wondering what the answer
> is.

> If you have a good threat model, then this is a nonissue--you've already
> solved the problem. And if you don't have a good threat model, then this
> is a huge issue, because you can't solve the problem.

Ok. Threat model: there exists at least one large and very
talented intelligence agency, part of whose mandate is to read as
much traffic as possible. They would be willing to spend up to a
million US dollars on a scheme that has a fair chance of
succeeding for at least a few years, and they would be willing and
able to break into several machines, physically or over the
network. I think this is a realistic threat model.

To clarify the key-bit-leaking scheme, I think it would be
feasible to leak a couple of bits per session key: two bits of
data, and one or two bits that give an idea of which bits they
are. If the window moves in a predictable way, and most messages
are intercepted, it would be enough. For four bits, on average the
program would have to try sixteen session keys to find one whose
ciphertext carries the target bits.
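
In miniature, with a toy cipher standing in for the real thing, and
the low bits of the first ciphertext byte as one arbitrary choice of
channel:

# Toy model of the leak: keep drawing random session keys until the
# ciphertext happens to carry the bits we want to smuggle out. For a
# 4-bit payload that takes 2**4 = 16 tries on average.
import hashlib
import os

def encrypt(session_key, plaintext):
    # Stand-in for the real cipher; any deterministic mixing will do.
    return hashlib.sha256(session_key + plaintext).digest()

def leaky_session_key(plaintext, payload, nbits=4):
    tries = 0
    while True:
        tries += 1
        key = os.urandom(16)
        ct = encrypt(key, plaintext)
        if ct[0] & ((1 << nbits) - 1) == payload:   # channel: low bits
            return key, tries

# Smuggle out four bits of the private key per message; the window
# advances predictably, so the eavesdropper can reassemble the key.
payload = 0b1011
key, tries = leaky_session_key(b"dear alice", payload)
print(f"leaked {payload:04b} after {tries} tries")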

Perhaps this could be detected by carefully tracing the RNG and
seeing what it should produce, and comparing this to what it does
produce.

Robert J. Hansen

Jun 18, 2002, 12:42:25 PM

> This rhetorical proposition doesn't really address what is actually a

It is not a rhetorical proposition. Reread the message.

What the original poster's question boils down to is, in essence, "what
can you trust? After all, there could be a point of failure anywhere
along the chain."

The answer is, "Evaluate the threats you find likely and plan for those.
That will tell you what you should and should not trust."

I am often amazed at how few people here in a.s.p know what a threat model
is, how to draft one, or how to apply one. Given the choice between
having GnuPG or having an accurate threat model, I'd much rather have the
latter.


Robert J. Hansen

Jun 18, 2002, 12:52:52 PM

> much traffic as possible. They would be willing to spend up to a
> million US dollars on a scheme that has a fair chance of
> succeeding for at least a few years, and they would be willing and
> able to break into several machines, physically or over the
> network. I think this is a realistic threat model.

In which case, you're screwed. If you've gone to the point where the FBI
is considering you a target of surveillance, you've just walked smack into
a game-over condition. Talk to a priest; you need a miracle, not crypto.

There is also a (nontrivial) risk of discovery and/or exposure from this
scheme. That also needs to be included in the threat model. After all,
assuming the USG were to do this, it would be enough to end careers and
send people to prison were it ever to be discovered. So now you have to
state, "... and a likelihood of discovery of not more than one in a
thousand."

Now you have to nail everyone with a GCC CVS repository. You can compile
GCC on Solaris, on HP-UX, on Windows, on... etc., using the compilers
available for each. It's very unlikely that you could nail all the
manufacturers' compilers. It's also very unlikely that you could nail all
the source copies of GCC floating around.

I am not concerned about this hypothetical situation. It is interesting
in the form of a thought exercise, but if three-letter-agencies want to
listen in on your traffic, they're going to come up with much better ways.
I have tremendous respect for the intelligence and low animal cunning of
the United States intelligence agencies.

> are. If the window moves in a predictable way, and most messages
> are intercepted, it would be enough. For four bits, on average the

Peer review also applies to the outputs of the system, not merely the
source code. There are a surprising number of people who pore through PGP
traffic trying to understand the message format and related incidentals.
A scheme that leaked key bits into the message would be discovered sooner
or later.
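
Even a crude statistical check would catch the scheme upthread. A
sketch, with a toy cipher standing in for the real message format:

# Output analysis in miniature: collect many ciphertexts of the same
# plaintext and check that bytes which ought to be uniformly random
# really are. A rejection-sampled session key of the kind sketched
# earlier pins four bits of the first byte and fails instantly.
import hashlib
import os
from collections import Counter

def encrypt(session_key, plaintext):
    return hashlib.sha256(session_key + plaintext).digest()

counts = Counter()
for _ in range(1600):
    ct = encrypt(os.urandom(16), b"dear alice")
    counts[ct[0] & 0x0F] += 1      # low 4 bits: 16 possible values

# Honest output: each of the 16 values lands near 1600/16 = 100.
# A 4-bit leak: one value at ~1600 and the other fifteen at zero.
print(sorted(counts.items()))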

Keep in mind that PGP's most serious flaw, the ADK bug, wasn't discovered
by source analysis but by output analysis. Yes, there really are people
who do this thing for fun.

Tom Zych

Jun 18, 2002, 7:23:15 PM

"Robert J. Hansen" wrote:

> In which case, you're screwed. If you've gone to the point where the FBI
> is considering you a target of surveillance, you've just walked smack into
> a game-over condition. Talk to a priest; you need a miracle, not crypto.

If a major intelligence agency is targeting a particular person,
then yes, that person will have no secrets. For a single target
or small group, an agency can go to lengths that would be
impractical on a wide scale: keyboard loggers, emissions scanning,
going through their garbage. The thing about this scheme is that
it isn't targeted at anyone in particular, and it doesn't require
that kind of effort. It simply enables TLA to sweep up whatever's
out there. We already know this is done with unencrypted traffic;
that's what Echelon is.

> There is also a (nontrivial) risk of discovery and/or exposure from this
> scheme. That also needs to be included in the threat model. After all,
> assuming the USG were to do this, it would be enough to end careers and
> send people to prison were it ever to be discovered. So now you have to
> state, "... and a likelihood of discovery of not more than one in a
> thousand."

If they were caught planting it, yes. But if someone notices
it after the fact, there's nothing to say who did it. Wild
suspicions, sure, but those won't end anyone's career.

> Now you have to nail everyone with a GCC CVS repository. You can compile
> GCC on Solaris, on HP-UX, on Windows, on... etc., using the compilers
> available for each. It's very unlikely that you could nail all the
> manufacturers' compilers. It's also very unlikely that you could nail all
> the source copies of GCC floating around.

True. But you don't have to. Every copy you do nail will break
some keys. It doesn't have to break everyone's keys to be
worthwhile.

> Peer review also applies to the outputs of the system, not merely the
> source code. There are a surprising number of people who pore through PGP
> traffic trying to understand the message format and related incidentals.
> A scheme that leaked key bits into the message would be discovered sooner
> or later.

> Keep in mind that PGP's most serious flaw, the ADK bug, wasn't discovered
> by source analysis but by output analysis. Yes, there really are people
> who do this thing for fun.

Ah. Good. That's what I was hoping people would do about this
idea. (Don't look at ME! I'd have to spend a year tooling up :)
