
PGP *2.0* available


Randy Bush

Sep 9, 1992, 12:12:27 AM

bar...@think.com (Barry Margolin) writes:

> Note that this FTP server can't be used from systems behind many
> packet-filtering gateways. It tries to contact our authd, and fails
> because we don't allow incoming connections, and the FTP connection hangs.

> Note that it *does* allow connections from systems that don't run authd, so
> it's not a security issue, it's just a poor implementation of the auth
> client.

Nope. Some folk chased this bug down. It turns out the NET/1 distribution
has a bug where, if connection B gets a host-unreachable error, the kernel
kills an already existing connection A to the same host. I.e., the
firewall's denial of authd kills the ftp.

NET/2 fixed the bug. But all NET/1-derived systems will see this problem.

randy
--
ra...@psg.com ...!uunet!m2xenix!randy

David Vincenzetti

Sep 7, 1992, 10:23:36 AM

The program Pretty Good Privacy, version 2.0, is available
in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.

I bet most of you already know what PGP is; anyway, here are two extracts
from the documentation included in the archive:

[from: readme.doc]

> Pretty Good Privacy version 2.0 - READ ME FIRST
>
>
> You are looking at the README file for PGP release 2.0. PGP, short for
> Pretty Good Privacy, is a public key encryption package; with it, you
> can secure messages you transmit against unauthorized reading and
> digitally sign them so that people receiving them can be sure they
> come from you.
>
>

[from: pgpdoc1.txt]

> Synopsis: PGP uses public-key encryption to protect E-mail and data
> files. Communicate securely with people you've never met, with no
> secure channels needed for prior exchange of keys. PGP is well
> featured and fast, with sophisticated key management, digital
> signatures, data compression, and good ergonomic design.
> [...]
> Contents
> ========
>
> Quick Overview
> Why Do You Need PGP?
> How it Works
> Installing PGP
> How to Use PGP
> To See a Usage Summary
> Encrypting a Message
> Signing a Message
> Signing and then Encrypting
> Using Just Conventional Encryption
> Decrypting and Checking Signatures
> Managing Keys
> RSA Key Generation
> Adding a Key to Your Key Ring
> Removing a Key from Your Key Ring
> Extracting (copying) a Key from Your Key Ring
> Viewing the Contents of Your Key Ring
> How to Protect Public Keys from Tampering
> How Does PGP Keep Track of Which Keys are Valid?
> How to Protect Secret Keys from Disclosure
> Revoking a Public Key
> Advanced Topics
> Sending Ciphertext Through E-mail Channels: Radix-64 Format
> Environmental Variable for Path Name
> Setting Configuration Parameters: CONFIG.TXT
> Vulnerabilities
> Trusting Snake Oil
> PGP Quick Reference
> Legal Issues
> Acknowledgments
> About the Author
>
>
>
>
> Quick Overview
> =============
>
> Pretty Good(tm) Privacy (PGP), from Phil's Pretty Good Software, is a
> high security cryptographic software application for MSDOS, Unix,
> VAX/VMS, and other computers. PGP allows people to exchange files or
> messages with privacy, authentication, and convenience. Privacy
> means that only those intended to receive a message can read it.
> Authentication means that messages that appear to be from a
> particular person can only have originated from that person.
> Convenience means that privacy and authentication are provided
> without the hassles of managing keys associated with conventional
> cryptographic software. No secure channels are needed to exchange
> keys between users, which makes PGP much easier to use. This is
> because PGP is based on a powerful new technology called "public key"
> cryptography.
>
> PGP combines the convenience of the Rivest-Shamir-Adleman (RSA)
> public key cryptosystem with the speed of conventional cryptography,
> message digests for digital signatures, data compression before
> encryption, good ergonomic design, and sophisticated key management.
> And PGP performs the public-key functions faster than most other
> software implementations. PGP is public key cryptography for the
> masses.

Ciao, David
--
David Vincenzetti -*- System Administrator and C Programmer
Dip. di Scienze dell'Informazione, Internet: vi...@ghost.dsi.unimi.it
V. Comelico 41, I-20133, Milano, Italy Phone : +39 2 55006 392
($PK available by finger$) Fax : +39 2 55006 373

fr...@d012s436.sniap.mchp.sni.de

Sep 8, 1992, 8:46:14 AM

vi...@ghost.dsi.unimi.it (David Vincenzetti) writes:
: The program Pretty Good Privacy, version 2.0, is available
: in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.
:
: I bet most of you already know what PGP is; anyway, here are two extracts
: from the documentation included in the archive:

Thanks for the pointer to the ftp site.

: [from: readme.doc]
:
: > Pretty Good Privacy version 2.0 - READ ME FIRST

delete, delete

: > files. Communicate securely with people you've never met, with no
: > secure channels needed for prior exchange of keys. PGP is well


As was recently discussed here, secure exchange of public keys is in
general not possible without a secure channel. This is because
although the keys do not need to be _confidentiality_ protected,
they very often need to be _integrity_ protected.

What the PGP docs should say is that you don't need to keep your
public key _confidential_.


: David Vincenzetti -*- System Administrator and C Programmer


: Dip. di Scienze dell'Informazione, Internet: vi...@ghost.dsi.unimi.it
: V. Comelico 41, I-20133, Milano, Italy Phone : +39 2 55006 392
: ($PK available by finger$) Fax : +39 2 55006 373

--
Frank O'Dwyer Disclaimer:
Siemens-Nixdorf AG I will deny everything
Tel. : +49 (89) 636-40639 Fax. : +49 (89) 636-45860
e-mail: Frank....@sniap.mchp.sni.de <--Use this, reply-to is broken

Barry Margolin

Sep 8, 1992, 12:19:39 PM

In article <1992Sep7.1...@ghost.dsi.unimi.it> vi...@ghost.dsi.unimi.it (David Vincenzetti) writes:
>The program Pretty Good Privacy, version 2.0, is available
>in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.

Note that this FTP server can't be used from systems behind many
packet-filtering gateways. It tries to contact our authd, and fails
because we don't allow incoming connections, and the FTP connection hangs.

Note that it *does* allow connections from systems that don't run authd, so
it's not a security issue, it's just a poor implementation of the auth
client.

--
Barry Margolin
System Manager, Thinking Machines Corp.

bar...@think.com {uunet,harvard}!think!barmar

Perry E. Metzger

Sep 8, 1992, 5:11:51 PM

fr...@D012S436.sniap.mchp.sni.de () writes:
>vi...@ghost.dsi.unimi.it (David Vincenzetti) writes:
>: The program Pretty Good Privacy, version 2.0, is available
>: in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.
>:
>
>As was recently discussed here, secure exchange of public keys is in
>general not possible without a secure channel. This is because
>although the keys do not need to be _confidentiality_ protected,
>they very often need to be _integrity_ protected.
>
>What the PGP docs should say is that you don't need to keep your
>public key _confidential_.

Having examined it, I can say PGP has addressed this question by providing
for keys to be certified by "authorities", those being people that you
designate. This allows your friends, with whom you have securely exchanged
keys (presumably in person), to in turn certify new keys. There are also
key compromise certificates. There is also neat built-in radix-64 handling,
a good Unix port, apparent ports for VMS, Amigas, and Ataris, and all sorts
of other goodies, and now it uses IDEA as its base conventional cypher
instead of Bass-O-Matic. It's all quite winningly slick. I'd look at the
docs before saying "yes, but does it do X".
--
Perry Metzger pmet...@shearson.com
--
Just say "NO!" to death and taxes.
Extropian and Proud.

Marc VanHeyningen

Sep 8, 1992, 9:45:13 PM

Thus said pmet...@snark.shearson.com (Perry E. Metzger):
[ ... ]

>built in radix-64 handling, a good unix port, apparent ports for VMS,
>amigas and ataris, and all sorts of other goodies, and now it uses
>IDEA as its base conventional cypher instead of Bass-O-Matic. Its all
>quite winningly slick. I'd look at the docs before saying "yes, but
>does it do X".

It looks like a pretty decent program. I'm glad it makes a closer
attempt to comply with the RFCs than the old version, though it looks
like it was more with the objective of doing different encoding rather
than complying per se; it doesn't look possible for PGP to decrypt
messages sent by some other program, or vice-versa. It appears to have
been designed with a near-paranoid rabidity about security, which is good.

It's too bad it (through little fault of its own) fails the test of "Can
I use it without fear of being sued, fired or expelled?" for us here in
the U.S. For that matter, the addition of IDEA [depending which legal
forecasts you believe] may allow our European friends to share in this
sensation. (Would multiple DES encryptions with some padding provide
comparable security without legal clouds?)

(Oh, and there's just one "But does it..." offhand: does it allow for
authentication without encrypting or encoding the plaintext [i.e. doing
the MIC-CLEAR encryption type]?)

I wonder if it would be possible to port it to use RSAREF and make it
"legitimate". It would sort of change PGP's "flavor" though.
--
Marc VanHeyningen mvan...@whale.cs.indiana.edu MIME accepted

What's the use of a good quotation if you can't change it?
- Doctor Who

Steven L. Johnson

Sep 8, 1992, 10:46:23 PM

"Marc VanHeyningen" <mvan...@silky.cs.indiana.edu> writes:

>I wonder if it would be possible to port it to use RSAREF and make it
>"legitimate". It would sort of change PGP's "flavor" though.

Not being familiar with "RSAREF", and assuming for the purpose of argument
that PKP's patent covering "all public key" implementations is valid,
why would the use of RSAREF make PGP legitimate? Is there licensing
for it that is not available for PGP, or is it for some reason not
covered? I guess I'm looking for your definition of legitimate.

-Steve

Marc VanHeyningen

Sep 8, 1992, 11:20:04 PM

Thus said joh...@tigger.jvnc.net (Steven L. Johnson):

"Legal to use for noncommercial purposes in the United States (and maybe
Canada, I forget)."

RSAREF is an implementation of RSA for which one may obtain a license
from RSADSI. (Actually, it would be nice if, now that RSA is available
under license anyway, they would just allow licensing of PGP under the
same terms, but that would make too much sense.)

(Oh, and I guess the patent holders of IDEA aren't as shortsighted as
PKP, so the legal cloud I mentioned doesn't seem to exist. Who knows,
maybe it could get hacked into RIPEM.)


--
Marc VanHeyningen mvan...@whale.cs.indiana.edu MIME accepted


Patriotism is, in fact, the *first* refuge of the scoundrel.

Rich Winkel

Sep 8, 1992, 10:57:54 PM

In <1992Sep8.2...@news.cs.indiana.edu> "Marc VanHeyningen" <mvan...@silky.cs.indiana.edu> writes:
>(Oh, and there's just one "But does it..." offhand: does it allow for
>authentication without encrypting or encoding the plaintext [i.e. doing
>the MIC-CLEAR encryption type]?)

It allows you to generate signatures separate from the plaintext file.
That sounds adequate for your purposes.
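
For reference, the detached-signature invocation, as I recall it from the
PGP 2.0 usage summary (the exact flags here are from memory; check the
docs quoted above):

% pgp -sb message.txt -u your_userid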

Rich

Gumby - The unknown user

Sep 9, 1992, 12:05:42 AM

joh...@tigger.jvnc.net (Steven L. Johnson) writes:

>Not being familiar with "RSAREF" and assuming for purpose of argument
>that PKP's patent covering "all public key" implementations is valid
>why would the use of RSAREF make PGP legitimate? Is there licensing
>for it that is not available for PGP or is it for some reason not
>covered? I guess I'm looking for your definition of legitimate.

Although I hate to be a spoil-sport...

The license that you must agree to in order to obtain RSAREF is
quite restrictive. It rules out any use other than that of a
hobbyist nature.

I'm probably not the only one who destroyed their copy of this
package just because of the licensing agreement.

--
Rob McKeever VE7ICJ rmck...@sfu.ca mcke...@sfu.ca 604-463-3863
"Actually, I'm a little envious of Murphy Brown. At least she's guaranteed
of coming back this fall" - Dan Quayle, on the sitcom 'Murphy Brown'
-*- Standard Disclaimers should be adequate -*-

Nonsenso

Sep 9, 1992, 12:56:54 AM

-> What the PGP docs should say is that you don't need to keep your
-> public key _confidential_.

It DOES say so. The manual explicitly states that you should share your
public key with as many people/systems as possible. What you _should_ keep
confidential is your private key.

There could have been a problem with PGP 1.0, where keys could not be
certified: it would be easy for anyone to forge a key under your user ID
and sit in the middle of communications. PGP 2.0 features key
certification and is thus a lot stronger with regard to key security
than the 1.0 version.

Wietse Venema

Sep 9, 1992, 7:52:14 AM

bar...@think.com (Barry Margolin) writes:

>Note that this FTP server can't be used from systems behind many
>packet-filtering gateways. It tries to contact our authd, and fails
>because we don't allow incoming connections, and the FTP connection hangs.
>
>Note that it *does* allow connections from systems that don't run authd, so
>it's not a security issue, it's just a poor implementation of the auth
>client.

It's a kernel bug. It reportedly happens with Ultrix, HP-UX, and others
(ghost.dsi.unimi.it runs HP-UX). When the authd connection fails with an
ICMP unreachable error, these kernels zap all existing connections to that
host. The bug is there for all to see in the 4.3BSD NET/1 code.
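
For readers without the source handy, the failure mode looks schematically
like this in C. This is a hedged illustration with invented names, not the
actual NET/1 code; only the drop-everything logic is the point:

#include <stddef.h>

struct pcb {                       /* stand-in for a protocol control block */
    struct pcb   *next;
    unsigned long faddr;           /* foreign (remote) host address */
    int           state;
};

static void drop_connection(struct pcb *p)
{
    p->state = 0;                  /* connection torn down with an error */
}

/* NET/1-style handling of an ICMP unreachable for host faddr: every
 * connection to that host gets dropped, so the failed authd connection
 * takes the established FTP connection down with it.  NET/2 reports
 * the error without killing established connections. */
void icmp_unreach_notify(struct pcb *head, unsigned long faddr)
{
    struct pcb *p;

    for (p = head; p != NULL; p = p->next)
        if (p->faddr == faddr)
            drop_connection(p);
}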

How to find out if you have the bug:

% ftp 131.155.70.100
(suspend ftp when connection has been established)
% telnet 131.155.70.100 111

The telnet command should fail. If this messes up the ftp connection,
your kernel has the bug.

Wietse

Aleksander Wittlin

Sep 9, 1992, 2:11:17 PM

Hello,
PGP 2.0, as it is now, won't run on SUN3 systems.
Make tries to use the sparc.s rsalib assembler primitives
and crashes (all Suns are supposed to be sparcs!).
Is there a simple fix for that problem?

A. Wittlin

Perry E. Metzger

Sep 10, 1992, 10:56:09 AM


Yes. Just tell it not to use the sparc assembly-code RSA lib and to
use the C code instead. It's just a matter of hacking the makefile
slightly.
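
Concretely, something like this (the flag set is taken from the portable
targets in the PGP 2.0 Makefile, quoted in a patch later in this thread;
treat it as a sketch, not gospel):

% make all CC=gcc LD=gcc \
        CFLAGS="-O -DUNIX -DHIGHFIRST -DPORTABLE -DMPORTABLE"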

Adam Shostack

Sep 10, 1992, 2:00:09 PM

In article <1992Sep9.1...@sci.kun.nl> wit...@sci.kun.nl (Aleksander Wittlin) writes:
>Hello,
> PGP 2.0 as it is now, won't run on SUN3 systems.
>Make is trying to use sparc.s rsalib assembler primitives
>and crashes (all Suns are supposed to be sparcs!).

No, they're not. All SPARCstations are based on SPARC; no Sun3 is.
Sun3s are usually 68020- or 68030-based.

>Is there a simple fix of that problem?

Yeah, learn a bit more about your machine. :)

Adam

Adam Shostack ad...@das.harvard.edu

What a terrible thing to have lost one's .sig. Or not to have a .sig
at all. How true that is.

Terry Ritter

Sep 10, 1992, 5:13:51 PM


In <71601551...@utopia.hacktic.nl> Nons...@utopia.hacktic.nl
(Nonsenso) writes:

>There could have been a problem with pgp 1.0 when keys could not be
>certified. That way it'd be easy for anyone to forge a key on your user-ID
>and sit in the middle of communications. PGP 2.0 features this
>key-certification and is thus a lot stronger regarding to key-security
>than the 1.0 version.

Really? So how do we "certify" a key without already having a
previously-certified key from the other end?

How does the first guy certify his key to others so they can certify
their keys to him?

---
Terry Ritter rit...@cactus.org


Jyrki Kuoppala

Sep 11, 1992, 10:10:20 AM

In article <1992Sep10....@vixvax.mgi.com>, cepek@vixvax writes:
>In article <1992Sep7.1...@ghost.dsi.unimi.it>,
>vi...@ghost.dsi.unimi.it (David Vincenzetti) writes:
>
>> The program Pretty Good Privacy, version 2.0, is available
>> in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.

It is also available on host nic.funet.fi, directory
pub/unix/security/crypt (in both MS-DOG and Unix versions).
Nic.funet.fi might have better connectivity to the net than the host
in Italy - at least I had some trouble ftp'ing to Italy. Your mileage
may vary.

//Jyrki

Perry E. Metzger

Sep 11, 1992, 1:03:18 PM


You and your friend Alice get together and sign each other's keys in
person. Later, Bob sends you a key of his, signed by Alice. Since you
know Alice's key to be valid, you suspect that Bob's is. This way, a
small number of people you trust can validate a large number of other
people whom you can't meet in person.

I'd just read the manual. Most questions are answered.
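
In code terms, the check this implies looks something like the following C
sketch (a toy model with made-up structures, not PGP's actual key ring
format):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model of the "introducer" idea: accept a newly received key if it
 * carries a certifying signature from a key we already validated in
 * person.  PGP's real key rings carry far more state than this. */
struct pubkey {
    char user_id[64];
    char signers[4][64];        /* user IDs of keys that signed this one */
    int  nsigners;
};

static bool key_is_introduced(const struct pubkey *k,
                              const char trusted[][64], int ntrusted)
{
    int i, j;

    for (i = 0; i < k->nsigners; i++)
        for (j = 0; j < ntrusted; j++)
            if (strcmp(k->signers[i], trusted[j]) == 0)
                return true;    /* a trusted introducer vouches for it */
    return false;
}

int main(void)
{
    const char trusted[][64] = { "Alice" };  /* keys exchanged in person */
    struct pubkey bob = { "Bob", { "Alice" }, 1 };

    printf("accept Bob's key? %s\n",
           key_is_introduced(&bob, trusted, 1) ? "yes" : "no");
    return 0;
}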

Aleksander Wittlin

Sep 11, 1992, 1:10:40 PM


>Is there a simple fix for that problem?
>
>A. Wittlin

Indeed there is! Thanks to Stephan <neu...@informatik.uni-kl.de> for the
following patch to the Makefile:

*** Makefile.sun3 Fri Sep 11 13:12:16 1992
--- Makefile Tue Sep 8 00:07:06 1992
***************
*** 97,113 ****

sunspc:
$(MAKE) all CC="ccspc -B/1.8.6/sun4 -ansi -w -I/usr/include" \
! CFLAGS="-O -DUNIX -DHIGHFIRST -DPORTABLE -DMPORTABLE"

# Sun with gcc
sungcc:
! $(MAKE) all CC=gcc LD=gcc \
! CFLAGS="-O -DUNIX -DHIGHFIRST -DPORTABLE -DMPORTABLE"

# Sun with standard cc: compile with unproto
suncc: unproto/cpp
! $(MAKE) all CC=cc LD=cc \
! CFLAGS="-Qpath unproto -O -DUNIX -DHIGHFIRST -DPORTABLE -DMPORTABLE"

sysv:
$(MAKE) all CPP=/usr/lib/cpp \
--- 97,114 ----

sunspc:
$(MAKE) all CC="ccspc -B/1.8.6/sun4 -ansi -w -I/usr/include" \
! CFLAGS="-O -DUNIX -DHIGHFIRST -DUNIT32 -DMERRITT" \
! OBJS_EXT=sparc.o

# Sun with gcc
sungcc:
! $(MAKE) all CC=gcc LD=gcc OBJS_EXT=sparc.o \
! CFLAGS="-O -DUNIX -DHIGHFIRST -DUNIT32 -DMERRITT" \

# Sun with standard cc: compile with unproto
suncc: unproto/cpp
! $(MAKE) all CC=cc LD=cc OBJS_EXT=sparc.o \
! CFLAGS="-Qpath unproto -O -DUNIX -DHIGHFIRST -DUNIT32 -DMERRITT"

sysv:
$(MAKE) all CPP=/usr/lib/cpp \

-----------------
A. Wittlin

Perry E. Metzger

Sep 11, 1992, 12:26:31 PM

ce...@vixvax.mgi.com writes:
>In article <1992Sep7.1...@ghost.dsi.unimi.it>,
>vi...@ghost.dsi.unimi.it (David Vincenzetti) writes:
>
>> The program Pretty Good Privacy, version 2.0, is available
>> in the ~ftp/pub/crypt directory on site ghost.dsi.unimi.it.
>
>> David Vincenzetti -*- System Administrator and C Programmer
>
>Page 21 of the PGPGUIDE (Ver1.0) said that future PGP releases could be
>verified using PRZ's public key. Ver2.0 doesn't seem to do this. Does
>this bother anybody?

Remember, PRZ is under legal orders not to have anything to do with
new PGP development. The DOS executable of the new distribution does
come signed by Branko Lankester, the ringleader of the overseas
development effort. Admittedly, there is no way to know if Branko's
signature is trustworthy.

Ari Juhani Huttunen

Sep 11, 1992, 6:28:01 PM

In article <1992Sep11....@shearson.com> pmet...@snark.shearson.com (Perry E. Metzger) writes:

! Remember, PRZ is under legal orders not to have anything to do with
! new PGP development. The DOS executable of the new distribution does
! come signed by Branko Lankester, the ringleader of the overseas
! development effort. Admittedly, there is no way to know if Branko's
! signature is trustworthy.

Would it not be possible for Branko Lankester to publish his public
key in BYTE Magazine (for example)? I don't know what a 1/4 page
advertisement costs, but it can't be that much. And the key would have
to be published only once.
--
...............................................................................
Ari Huttunen Any similarity to other alien life forms
is purely coincidental.
<Alien 3 misquote>

Bill Stewart 908-949-0705 erebus.att.com!wcs

Sep 12, 1992, 1:53:57 AM

In article <ARI.HUTTUNEN....@supergirl.hut.fi> Ari.Hu...@hut.fi (Ari Juhani Huttunen) writes:
] Would it not be possible for Branko Lankester to publish his public
] key in the BYTE Magazine (for example)? I don't know what a 1/4 page ad[...]

You don't really need to do that - broadcasting to the net can be adequate,
provided the ostensible author of the broadcast has the opportunity to
repudiate fake broadcasts, and to receive enough separate transmissions
of the broadcast that it's unlikely they've all been spoofed on their
way back to him. Spread it around by ftp...
--

Pray for peace; Bill
#Bill Stewart 908-949-0705 w...@anchor.ho.att.com AT&T Bell Labs 4M312 Holmdel NJ
# Any technology distinguishable from magic is not sufficiently advanced ...

Rob Stampfli

Sep 11, 1992, 11:25:06 PM

>Page 21 of the PGPGUIDE (Ver1.0) said that future PGP releases could be
>verified using PRZ's public key. Ver2.0 doesn't seem to do this. Does
>this bother anybody?
>
>(P.S. To stave off flames on the obviously related issue: I admit that
>I don't have an absolutely trustable certificate of PRZ's public key.)

Yeah, I wondered about this, too. The probable answer is that Phil does
not officially have anything to do with pgp2.0. He could, of course,
give his secret key to the current developers, but I suspect his
attorneys advised him not to do anything that would let this release
be linked back to him.

It would be nice if the current folk would issue their own signature.

Another reason may be that they are using MD5 in 2.0. Does anyone know
if pgp2.0 is compatible with pgp1.0?
--
Rob Stampfli r...@colnet.cmhnet.org The neat thing about standards:
614-864-9377 HAM RADIO: kd...@n8jyv.oh There are so many to choose from.

Ragnar Nielsen

Sep 12, 1992, 4:54:10 PM

r...@colnet.cmhnet.org (Rob Stampfli) writes:


>Another reason may be that they are using MD5 in 2.0. Does anyone know
>if pgp2.0 is compatible with pgp1.0?

No, according to the documentation it's not. Looks like we all have to
distribute and collect new keys.

Hardly surprising, as one of the great new features of version 2.0 is
key authentication, where a trusted third party can vouch for a public
key unknown to you. Lack of key authentication was a serious deficiency
in PGP 1.0, and I guess the key layout had to be changed in order to
incorporate this feature.

Note: I haven't studied the source too closely, so I might be in error
on this one, but it seems logical to me.

The good news is that the authors of version 2.0 have said that they
intend to do what they can to keep future versions of PGP key-compatible
with version 2.0.

regards,

ragnar

Hugh Miller

Sep 12, 1992, 3:16:43 PM


>Another reason may be that they are using MD5 in 2.0. Does anyone know
>if pgp2.0 is compatible with pgp1.0?

Not key-compatible, i.e., you can't use your old keys & keyrings
with the new version, and there is no conversion utility. You've got to
redo your keyrings from the ground up.

This was intentional. The idea is that with the new keyring
management features, you will more or less automatically build a good
level of security in the right way -- the FIRST time.

-=- Hugh

--
Hugh Miller | Dept. of Philosophy | Loyola University of Chicago
Voice: 312-508-2727 | FAX: 312-508-2292 | hmi...@lucpul.it.luc.edu
In this world of sin and sorrow, there is always something to be thankful
for; as for me, I rejoice that I am not a Republican. -- H. L. Mencken

Message has been deleted

Seth Robertson

Sep 13, 1992, 8:53:02 AM


There appears to be a minor bug in PGP which can cause PGP to
reference element -1 in an array. When I compiled it with gcc -O on
my Sun, it caused PGP to go into an infinite loop, though of course
anything could happen.

I came across it when, er, satisfying my intellectual curiosity.

The patch is in unified diff format (see the latest version of patch
on prep.ai.mit.edu).

--- config.c~ Sat Sep 5 04:46:51 1992
+++ config.c Sat Sep 12 15:27:22 1992
@@ -309,9 +309,12 @@
}
}

+
/* Skip trailing whitespace and add der terminador */
- while( ( theCh = inBuffer[ lineBufCount - 1 ] ) == ' ' || theCh == '\t' )
- lineBufCount--;
+ if (lineBufCount)
+ while( ( theCh = inBuffer[ lineBufCount - 1 ] ) == ' ' || theCh == '\t' )
+ lineBufCount--;
+
inBuffer[ lineBufCount ] = '\0';

/* Process the line unless its a blank or comment line */

Greg Byrd

Sep 13, 1992, 12:18:12 PM

I've been following the PGP discussion with interest.
Although I have not been involved in the infringement side of patent law,
I'd like to make a few observations. If any of my comments are wrong,
please correct me, since I'm no patent person.

My understanding of hobbyist use of patented technology is that you are
free to build a unit for your own use, regardless of whether it is
patented. This is essential in order to foster a creative environment.
You are not allowed to make money off it, nor to distribute your unit to
third parties. As an example, I have certainly read magazine articles
detailing how to build a hobbyist project based on patented technology.
The entire patent system is designed to *disclose* and disseminate
technology. If you want to keep it secret, don't patent it ;=)

The concept of patented algorithms seems flaky at best. But the law is
aimed at preventing unfair commercial competition against the developer.
As long as the law accomplishes that, I shouldn't think there is anything
to worry about.

Well, this is further astray than the lock discussion. (Which I liked.
Anyone know how to defeat sidebar locks like Medecos? Email me! ;=)

Don't let me sidetrack the discussion, but I welcome email from anyone who
knows the law to work differently from my understanding stated above. I'll
share any significant variations which affect PGP distribution.

Greg Byrd ---> Bell (803) 686-4575
Snail Island Technical Group / 3 Cardinal Ct. #619
Hilton Head Island / SC 29926
Email gr...@cup.portal.com
----- Uh, I didn't say that. My boss said it! Yeah. Yeah. That's it. -----

Hugh Miller

Sep 14, 1992, 12:23:59 AM

<Greg Byrd's interesting points elided>...

Probably best to move this discussion to misc.legal.computing or
comp.patents (or both), since sci.crypt isn't set up for such. Meet you
there.

Peter Claus Gutmann

Sep 15, 1992, 12:01:58 AM

In <1992Sep13.1...@sol.ctr.columbia.edu> space aliens made
se...@startide.ctr.columbia.edu (Seth Robertson) write:

>There appears to be a minor bug in PGP which can cause PGP to
>reference element -1 in an array. When I compiled it with gcc -O on
>my Sun, it caused PGP to go into an infinite loop, though of course
>anything could happen.
>

>[patch for config.c, ~line 310]

Gack! I recognize that code (turns bright red). I actually ripped it out of
another program I've been working on and it seems to have lost something in
the translation. The original line of code (before it got stuck into PGP) reads
(sound of dusty archives being searched):

while( bufCount && \
       ( ( theCh = buffer[ bufCount - 1 ] ) == ' ' || theCh == '\t' ) )
    bufCount--;

Apologies to anyone who's been inconvenienced by this problem....

Peter.
--
pg...@cs.aukuni.ac.nz || pet...@kcbbs.gen.nz || pe...@nacjack.gen.nz
(In order of preference)

Marc Thibault

Sep 14, 1992, 6:53:34 AM

In article <1992Sep12....@colnet.cmhnet.org>
(Rob Stampfli) writes:
...

> It would be nice if the current folk would issue their own signature.

It's hard to believe they don't read this group - how about it
guys? Just a short message with your public keys for comparison.

>
> Another reason may be that they are using MD5 in 2.0. Does anyone know
> if pgp2.0 is compatible with pgp1.0?

It isn't, but they have put in the hooks to assure upward
compatibility in future versions.

MOREOVER:

To get the value of this, it seems to me that we all should take
the earliest opportunity to begin broadcasting public keys and
maybe convincing some ftp archives to start collecting them. I'm a
newbie on Usenet so I have no idea how best to do this.

Re spoofing: If I'm worried that someone's got a wedge in my mail,
I could send messages to a number of people who have published
their keys. The message would be encrypted with their public key
and contain copies of my public key inside and out. The message
would include instructions like "Please reply with a single word
'tomato' if both are the same and 'asparagus' if they are not,"
using a different pair of words for each message.

Whoever is spoofing would have no way of knowing what to do,
unless they were also spoofing everybody else. The best they could
do is stop the replies entirely, which would tell me what I needed
to know. At this point I would send my public key under cover to a
bunch of people with a request to broadcast it, along with a
disclaimer on the spoofer's key.


-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.0

mQBNAiqxYTkAAAECALfeHYp0yC80s1ScFvJSpj5eSCAO+hihtneFrrn+vuEcSavh
AAUwpIUGyV2N8n+lFTPnnLc42Ms+c8PJUPYKVI8ABRG0I01hcmMgVGhpYmF1bHQg
PG1hcmNAdGFuZGEuaXNpcy5vcmc+
=HLnv
-----END PGP PUBLIC KEY BLOCK-----

---

Marc Thibault | | Any Warming,
ma...@tanda.isis.org | Oxford Mills, Ontario | Global or otherwise,
CIS:71441,2226 | Canada | appreciated.

bo...@ids.net

Sep 16, 1992, 11:19:23 AM

In article <57331154...@tanda.isis.org>, ma...@tanda.isis.org (Marc Thibault) writes:
>
> guys? Just a short message with your public keys for comparison.

Ok, here's mine.

> To get the value of this, it seems to me that we all should take
> the earliest opportunity to begin broadcasting public keys and
> maybe convincing some ftp archives to start collecting them. I'm a
> newbie on Usenet so I have no idea how best to do this.

I'm pretty new myself (actually this is my first reply <G>),
but isn't the idea with PGP that a centralized place for keys
isn't needed or perhaps...wanted? I thought the whole idea is
that we personally trust the person acting as an "introducer".



> Re spoofing: If I'm worried that someone's got a wedge in my mail,
> I could send messages to a number of people who have published
> their keys. The message would be encrypted with their public key
> and contain copies of my public key inside and out. The message
> would include instructions like "Please reply with a single word
> 'tomato' if both are the same and 'asparagus' if they are not,"
> using a different pair of words for each message.
>
> Whoever is spoofing would have no way of knowing what to do,
> unless they were also spoofing everybody else. The best they could
> do is stop the replies entirely, which would tell me what I needed
> to know. At this point I would send my public key under cover to a
> bunch of people with a request to broadcast it, along with a
> disclaimer on the spoofer's key.
>

Whew! This is all new to me but I don't see the point in sending
a public key "under cover". It would seem to defeat the whole
purpose of having a public key. From what I read in the docs for
PGP20, it's totally up to me to make sure a public key is valid.
If I add a key to my keyring, I better make sure I know where I
got it from. And...no one that has my "real" public key can read
my stuff unless I lose control of my secret key. Again, all based
on levels of trust.

If I have this all wrong, I'll be more than willing to listen to
someone who knows more about public-key encryption than I do.
(which includes just about everyone <G>)



-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.0

mQCNAiq3Le4AAAEEAMZWWqB9U3zPOMjc9tQZEU9BViMqIjx0eY/g8ciWGmbhyInZ
3QRUF9I62iPU+R+GrGNpKSC8vNtVpAa5lzcMi/mvHlqhwt3Dk6okuWqgh/VnLXZT
7D/R8vUsKpfEL8i10sqlChxmIiaqGg0lNnsS2dVZeAxsCXNhgo/6FY8KKUyLAAUR
tC9Cb2IgRmF5bmUgPEJvYiBGYXluZS0+Q1lCRVJaT05FPjxib2JmZUBpZHMubmV0
Pg==
=3cYE


-----END PGP PUBLIC KEY BLOCK-----

Bob
~~~
Bob Fayne <bo...@ids.net>

Terry Ritter

Sep 18, 1992, 1:13:21 PM


In <57331154...@tanda.isis.org> ma...@tanda.isis.org (Marc Thibault)
writes:

> To get the value of this, it seems to me that we all should take
> the earliest opportunity to begin broadcasting public keys and
> maybe convincing some ftp archives to start collecting them. I'm a
> newbie on Usenet so I have no idea how best to do this.

This is incredibly dangerous.

No one can keep the same key forever; the only question is, "How
soon will the key be changed?" (Things happen: People leave,
strangers appear, fires occur, divorces happen, partnerships fail,
enemies are made, equipment is repaired, hackers get in, serious
investigations do "bag jobs" while the target is asleep, on vacation,
at a movie, etc., etc.)

Nor should anyone *want* to keep the same key: "Occasional" key
changes are *necessary* to terminate any access which somehow has
been "acquired" without the user's knowledge. Even more importantly,
key changes are necessary to limit the amount of data which could be
revealed by a *future* insecurity. The meaning of "occasional" is
the individual balance between the difficulty of key management and
the amount of information a user is willing to lose from a *single
insecure event.*

Without a protocol for assuring that any particular key is current,
accumulating public keys can only result in a file which is
increasingly composed of old keys, some of which will have been
disclosed. Any users who get disclosed keys will be completely at
risk. Nor can an old key be used to validate a new key (unless one
is willing to allow a single past (or future!) insecurity to disclose
all future data).


> Re spoofing: If I'm worried that someone's got a wedge in my mail,
> I could send messages to a number of people who have published
> their keys. The message would be encrypted with their public key
> and contain copies of my public key inside and out. The message

You may have some "published" keys, but you cannot know that they
are valid for that user. Maybe they never were.


> would include instructions like "Please reply with a single word
> 'tomato' if both are the same and 'asparagus' if they are not,"
> using a different pair of words for each message.

You can send a message, but you cannot know who gets that message,
and who replies 'tomato' or 'asparagus' until *after* a validated
set of keys has been set up. Alas, in general, a "published" key
is not sufficient leverage to allow you to perform that validation.

In other words, you cannot just "publish" your public key on a
store-and-forward computer network and expect that the same key
will always arrive unchanged. Anyone who uses a changed key
is at risk, and you are at risk if you use any key which "they"
send back to you. There need be no indication of insecurity.
You cannot tell the difference between an insecure conversation
and a secure one unless you have previously validated the keys.


> Whoever is spoofing would have no way of knowing what to do,
> unless they were also spoofing everybody else.

No. All they had to do was to get you to accept their public key
as the valid key for your correspondent, and then intercept your
messages. *After* validated keys have been established (and before
they are changed again), *then* one can sign.

But if you just use a public key that somehow "appeared," you
have no hope of knowing who can read your response. Consequently,
you cannot use the key to validate your key to a particular
correspondent. And the network has no protocol which forces the
same message to travel by alternate paths, through different nodes
so that you can compare them for transit differences. (Such a
protocol could create a huge increase in traffic, and would not,
in the end, solve the problem.)

After validated keys have been established, then, if only one party
changes their key, the new key can be validated by signing with
the key from the other end. But if the other end had been
compromised, then sending your key under the compromised key is
scarcely better than publishing it in the open. The effect is to
put the recipient at risk, since the key they receive may not be
the one you sent. Of course, you are at risk too, if they
reveal your information in a message to you. And if *both* ends
change keys (not at all unlikely, for occasional conversations),
then new validation is required.

The best practice requires "frequent" key changes, with explicit
separate-channel validation every time.


>The best they could
> do is stop the replies entirely, which would tell me what I needed
> to know. At this point I would send my public key under cover to a
> bunch of people with a request to broadcast it, along with a
> disclaimer on the spoofer's key.

If all of your messages flow through:

a) the spoofing node (say, some sort of virus process on your
home node), or

b) any one of multiple nodes equivalently spoofed (say, those
"convinced" to assist a government inquiry),

you can send messages to anyone you like and it won't make any
difference.

A serious investigation intended to prove your node is clean is
going to mean a lot of work, frequently, and all this work won't
handle case (b). Since few corporations or individuals will oppose
serious government intent, case (b) is not at all unreasonable to
consider. (Indeed, the Government might not even be involved; an
agent who purports to represent the Government may not.) Presumably,
some serious situation would be the motivation to cause someone to
implement (b). But once (b) is in place, it can be used for
anything. So, being "small change" is no protection.

---
Terry Ritter rit...@cactus.org

Marc Thibault

Sep 18, 1992, 8:27:34 PM

In article <1992Sep16.101923.32@ids>
bo...@ids.net writes:

> I'm pretty new myself, (actually this is my first reply <G>)
> but isn't the idea with PGP that a centralized place for keys
> isn't needed or perhaps...wanted? I thought the whole idea is
> that we personally trust the person acting as an "introducer".

Agreed. But a great bunch of FTP sites aren't a centralised
place, and the idea is to have a reliable directory, not a
trusted source. Trust still requires the equivalent of a
face-to-face introduction. A name on the Internet, like a name
in the phone book, relates only to a "persona" until someone
you trust provides a "true identity". The main objective at
this point is not trust but privacy.

On this point I firmly agree with PRZ and intend to encrypt as
much mail as possible. In this case, my signature and public
key are inside the encrypted part. This will make it
effectively impossible to later spoof communications with
current correspondents. I wonder, if I normally encrypt all my
mail, what *they* will think if I send one message in clear?

> > Re spoofing: If I'm worried that someone's got a wedge in my mail,
> >

> Whew! This is all new to me but I don't see the point in sending
> a public key "under cover". It would seem to defeat the whole
> purpose of having a public key.

The reason for sending it under cover of someone else's key is
to get it past the spoofer unmodified, keeping in mind that,
at this stage, we know we are being spoofed. When your friends
start broadcasting your correct key, the spoofer will know
that the fat lady has sung and there's not thing one he can do
about it (except maybe show up with a warrant).


---

Marc Thibault | | Any Warming,
ma...@tanda.isis.org | R.R.1, Oxford Mills, | Global or otherwise,
CIS:71441,2226 | Ontario, Canada K0G 1S0 | appreciated.

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.0

mQBNAiqxYTkAAAECALfeHYp0yC80s1ScFvJSpj5eSCAO+hihtneFrrn+vuEcSavh
AAUwpIUGyV2N8n+lFTPnnLc42Ms+c8PJUPYKVI8ABRG0I01hcmMgVGhpYmF1bHQg
PG1hcmNAdGFuZGEuaXNpcy5vcmc+
=HLnv

D. J. Bernstein

Sep 19, 1992, 11:57:29 PM

In article <1992Sep18....@cactus.org> rit...@cactus.org (Terry Ritter) writes:
> But if you just use a public key that somehow "appeared," you
> have no hope of knowing who can read your response.

That is correct. One solution to this ``problem'' is to run around
screaming ``Public key cryptography is useless!'' Another solution is to
shrug your shoulders and say, ``I don't _need_ to know _who_ can read my
response. All I need to know is that it's the same person or group of
people as last time. What I need is continuity.''

---Dan

Terry Ritter

Sep 20, 1992, 3:51:36 PM


In <27740.Sep2...@virtualnews.nyu.edu> brn...@nyu.edu
(D. J. Bernstein) writes:

I have seen you make this point several times, and can honestly
say that I have never understood it.

As far as I can tell, the whole point of cryptography is to be
able to restrict information to only those who are intended to
have it. When a public key "appears," it may really be a key
which was generated inside a spoofing node. When one responds
to that key, one's response may be deciphered in the spoofing
node, then re-enciphered to the ultimate recipient.

Note that the recipient does, in fact, get the message. One
can communicate in cipher. The problem is that the spoofing
node gets to read all the communications. So unless we actually
*intended* that there be a spoofing node, and that they should
read our messages, I think the cryptography has failed.

As to public key being "useless," one should remember that,
in the classical model of cryptography (point-to-point direct
communication: the army camp radio communication model),
public key does indeed solve the problem. But direct radio
communications normally admit no possibility of a "spoofer"
who can intercept *and re-encipher* messages in transit.

The problem is that the old model does not apply to store-and-
forward computer networks, where it *is* possible for messages
to be changed in transit. So it should be no real surprise
that there are problems beyond those predicted in the old model.
This does not mean that public key is worthless. However, as
far as I can tell, the simple naive use of public key--the way
it is described in most texts--cannot achieve the usual expected
goals of cryptography in a network environment.

---
Terry Ritter rit...@cactus.org


Roger Books

Sep 21, 1992, 10:40:38 AM

In article <1992Sep18....@cactus.org> rit...@cactus.org (Terry Ritter) writes:

> In <57331154...@tanda.isis.org> ma...@tanda.isis.org (Marc Thibault)
> writes:

>> To get the value of this, it seems to me that we all should take
>> the earliest opportunity to begin broadcasting public keys and
>> maybe convincing some ftp archives to start collecting them. I'm a
>> newbie on Usenet so I have no idea how best to do this.

> This is incredibly dangerous.


> In other words, you cannot just "publish" your public key on a
> store-and-forward computer network and expect that the same key
> will always arrive unchanged. Anyone who uses a changed key
> is at risk, and you are at risk if you use any key which "they"
> send back to you. There need be no indication of insecurity.
> You cannot tell the difference between an insecure conversation
> and a secure one unless you have previously validated the keys.

So what you are saying is that any public key encryption is useless to me,
because the odds of my being able to get in direct touch with people to
verify keys (especially if they change them often) are next to nil. As
I am connected more than Joe Public, this implies encryption is useless to
them. Get a grip; I find it hard to believe that there is someone between
me and the machine that carries news to the world. (Unless they get at some
of MY hardware, I don't think it's economically feasable.) Here's my public
key. My USENET access is screwy; I get everything at least twice. This
includes my own posts. If you don't hear me screaming soon, you'll know this
is my public key.


> ---
> Terry Ritter rit...@cactus.org

PS. Sorry, I don't know how to spell feasable.

Roger
bo...@fsunuc.physics.fsu.edu or bo...@nucmar.physics.fsu.edu

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.0

mQCNAiqt8FAAAAEEAOUVK2GVYcQOicCX/9xqz20cjTPO044h5av55m0gKTxILg1M
Mr+K8INVA9o6NOaEud6XEdxSUWWBJbKEAxl67rrPd0VMnBzYaFhYxclwkdeEN6gG
WnD2Tz0MPixsrXxuY3WxqzLh+w0FkZ0XMRWHQEEPmxSr2ViARK43eKKsy+9HAAUR
tC1BbGxhbiBSLiBCb29rcyA8Ym9va3NAbnVjbWFyLnBoeXNpY3MuZnN1LmVkdT4=
=3HJo

Mr. Lyn R. Kennedy

Sep 21, 1992, 3:13:18 PM

rit...@cactus.org (Terry Ritter) writes:

> In <27740.Sep2...@virtualnews.nyu.edu> brn...@nyu.edu
> (D. J. Bernstein) writes:
>
> >In article <1992Sep18....@cactus.org> rit...@cactus.org
> >(Terry Ritter) writes:
> >> But if you just use a public key that somehow "appeared," you
> >> have no hope of knowing who can read your response.
>
> >That is correct. One solution to this ``problem'' is to run around
> >screaming ``Public key cryptography is useless!'' Another solution is to
> >shrug your shoulders and say, ``I don't _need_ to know _who_ can read my
> >response. All I need to know is that it's the same person or group of
> >people as last time. What I need is continuity.''

> Note that the recipient does, in fact, get the message. One
> can communicate in cipher. The problem is that the spoofing
> node gets to read all the communications. So unless we actually
> *intended* that there be a spoofing node, and that they should
> read our messages, I think the cryptography has failed.

True, the spoofer might do this for a single route, but doesn't the
mechanism in 2.0 sound workable? If you get a key for someone you know
at a distant site via several mutual friends, how likely is it that the
spoofer could manage to change all of them? Seems like the difficulty
of a spoofer dealing with multiple routes is real near infinity.


-------------------------------------------------------------------------

73, internet | l...@k5qwb.lonestar.org
Lyn Kennedy packet radio | K5QWB @ N5LDD.#NTX.TX.US.NA
pony express | P.O. Box 5133, Ovilla, TX, USA 75154

-------------- "We have met the enemy and he is us." Pogo --------------

Terry Ritter

Sep 21, 1992, 5:10:37 PM


In <books...@fsunuc.physics.fsu.edu> bo...@fsunuc.physics.fsu.edu
(Roger Books) writes:

>So what you are saying is that any public key encryption is useless to me,
>because the odds of my being able to get in direct touch with people to
>verify keys (especially if they change them often) are next to nil. As
>I am connected more than Joe Public, this implies encryption is useless to
>them. Get a grip;

Get a grip yourself; I don't make the rules. If you want to blame
something, blame Mathematics, blame Logic or blame Philosophy.
Blame yourself for expecting what reality does not deliver. The
naive application of public key technology is limited in what it
can achieve. Sorry. Not my fault.


>I find it hard to believe that there is someone between
>me and the machine that carries news to the world. (Unless they get at some
>of MY hardware I don't think it's economically feasable.)

The designers of public key cryptosystems are fond of saying that
their system will need x-hundred years for a q-level supercomputer
to break the cipher. If we describe such an investment in dollars,
these designs are apparently intended to resist multi-trillion-dollar
attacks. Now, describe in dollars the effort needed to subvert
Internet itself, and "every" node in Internet. It seems to me that
subverting Internet must be a *far* cheaper approach. Therefore,
the real security is not that reported in the scientific papers.

Note that once the "entire" system is subverted (or just a lot of
it), *all* traffic is available, and not just the original target.
Access would be extremely cheap, once the subversion is in place.
Probably much cheaper than a "bag job." Depending on how much of
that goes on, it may actually be cheaper to subvert networks
just to replace existing physical approaches.


>Here's my public
>key. My USENET access is screwy; I get everything at least twice. This
>includes my own posts. If you don't hear me screaming soon, you'll know this
>is my public key.

Well, dream on. I expect that any local subversion process will
anticipate the possibility of a changed message coming back. There
are inherent limits to this, of course, but simply finding reflected
messages should be pretty easy. News systems generally do this as
a matter of course.

And none of this says anything at all about the possibility that the
*next* nodes could be subverted. Or that those nodes close to the
recipient might be subverted, or that the recipient's home node
might be subverted.

Yeah, you could laugh about the unthinkable possibility that anyone
could subvert internetworks. Laugh on. The problem of cryptography
is to *absolutely prevent* information disclosure *even in* such an
environment. Naive public key technology does not do this.

That said, no program whatsoever can provide real secrecy by itself.
Security is a philosophy, a way of life. If all someone has to do
is to sit across the street and monitor your CRT screen emissions,
then your level of security may be worth a few hundred dollars.
Someone really worried about their own security probably has a lot
of work--and a lot of changing--to do.

However, because a subverted network can be applied so cheaply
(once it is in place), network subversion is an issue even at
relatively low levels of security. Indeed, network insecurity
is insidious in the way it can expand to include all those who
use a subverted channel to validate keys. All it takes is an
non-validated key like those now being attached to various news
signatures.

---
Terry Ritter rit...@cactus.org

a-bo...@cup.portal.com

Sep 22, 1992, 2:39:17 AM

Re: the question of how to post your public key reliably to an
insecure network.

Suppose I have posted a public key, but that a spoofer between my node
and the rest of the net is catching and replacing it in each message that
I send which displays it. Further, on each message which comes back to
me which has the public key in it (a key which will be the wrong one, the
one created by the spoofer), the spoofer changes it back to my key. So I
have no way of knowing that others are not seeing the same key that I am.

This seems to be the attack which Terry Ritter discusses, one which leads
him to think that it is actually bad for people to be putting public keys
in their .sigs on Usenet posts.

I think there are some ways around this attack. What if I submit a
message with a random number string embedded in it: 074859b7s87b6a87a6...
I wait until I see this message and get confirmation from others that it
has been received.

Then, I send another message specifying a functional relationship between
that random string and my key. Maybe it is the MD5 hash of my key, or
something more complicated.

The spoofer has a problem if he has already let that first message go
through. Now he doesn't dare let this second message go, as it will
immediately reveal that my publically visible key is invalid. If the
second message includes other discussion that is likely to elicit a
response, the fact that it has not appeared will become obvious to me.

The spoofer would also have had the option of not letting the first
message go through unchanged; perhaps he could have substituted his own
random string for mine, assuming that mine would be a function of my
public key, and instead putting in some function of his key. But I can
foil that by having certain bits of the random string be unrelated to my
key, but being, say, an MD5 hash of some message I have sent or will
send. There is no way for the spoofer to anticipate all possible kinds
of information I might embed in that random string.

Worse, I can conceal the random string in the letter frequencies of an
innocuous-appearing message, and later reveal the secret to allow the
interpretation of a hidden message which includes my public key.

The general idea of this strategy is to use redundancy to send my key in
multiple ways, but concealed so that a spoofer can't detect that a
redundant key has been sent. I think this can reduce the already-slim
possibility that Usenet is being tampered with on a large scale, so that
some of the public keys we see are not the ones the message originators
sent.

Adam Boyles - <a-bo...@cup.portal.com>

Ross Anderson

Sep 22, 1992, 6:23:31 AM

In <920921233...@cup.portal.com>, Adam Boyles <a-bo...@cup.portal.com>
writes:

> Suppose I have posted a public key, but that a spoofer between my node
> and the rest of the net is catching and replacing it in each message that
> I send which displays it.
>

> (stuff deleted)


>
> I think there are some ways around this attack. What if I submit a
> message with a random number string embedded in it: 074859b7s87b6a87a6...
> I wait until I see this message and get confirmation from others that it
> has been received.
>
> Then, I send another message specifying a functional relationship between
> that random string and my key. Maybe it is the MD5 hash of my key, or
> something more complicated.
>

The spooks will replace your key hash value 07485... with theirs 46123...
on all outgoing messages from your node, and will also replace 46123...
with 07485... on all incoming messages.

>
> (stuff deleted)


>
> Worse, I can conceal the random string in the letter frequencies of an
> innocuous-appearing message, and later reveal the secret to allow the
> interpretation of a hidden message which includes my public key.

This is a lot more promising. Kahn's book relates that during the war
there were large numbers of spooks engaged in looking for hidden messages
in telex traffic, postcards etc. They used to change messages to see if
this would elicit a response. Eg a postcard saying `father has deceased'
would be changed to `father has died' at which a dumb spy would give
himself away by replying `has father died or deceased?'

It has been reported in the UK press that, at Maggie's orders, the spooks
rewrote the print routines used by PCs in the Cabinet Office so that each
user's identity was encoded in the word spacing of documents he printed
out. This is so that if a document was leaked, the investigators could
find out where it came from.

The problem with steganography is that once the method is known to the
spooks they'll watch out for it. Once they spot a text with a hidden
message they'll rewrite it for you or start other active spoofing
measures such as message replacement. Having to invent a new means of
hiding messages with each of your correspondents defeats the object of
public key cryptography; you might as well agree secret keys with them.

So can there in fact exist a published method of steganography which can
be widely implemented in free software, but such that the spooks won't be
able to tell whether a given message contains a hidden message or not?

My idea: key the steganographic encoding.

Let K be a secret key of 127 bits. For each successive 40 bytes of text,
concatenate the key with it, giving 447 bits, which is input to the
secure hash algorithm. Take the least significant bit of the result to
be the next bit of the hidden message.

This can be programmed into a text editor: you input your secret key
and the hidden message, and start to type. About every 80 characters,
the terminal beeps to tell you that the latest hidden-text bit is wrong.
You delete the last few words and rephrase. This should not be too hard
to put in gnuemacs!

You can then reveal the hidden message at any time by publishing the key.
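
The bit-extraction step, sketched in C (a toy hash stands in for the
secure hash algorithm, and the 127-bit key is held in a 16-byte buffer):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for the secure hash algorithm -- any cryptographic hash
 * would be dropped in here. */
static uint32_t toy_hash(const unsigned char *buf, size_t len)
{
    uint32_t h = 2166136261u;                  /* FNV-1a, illustration only */
    size_t i;

    for (i = 0; i < len; i++)
        h = (h ^ buf[i]) * 16777619u;
    return h;
}

/* One step of the scheme: hash(K || 40-byte block of cover text), where
 * the least significant bit of the result is the next hidden bit. */
static int hidden_bit(const unsigned char key[16],   /* K, ~127 bits */
                      const unsigned char block[40])
{
    unsigned char buf[56];

    memcpy(buf, key, 16);
    memcpy(buf + 16, block, 40);
    return (int)(toy_hash(buf, sizeof buf) & 1u);
}

int main(void)
{
    const unsigned char key[16]   = "0123456789abcdef";
    const unsigned char block[40] = "forty bytes of innocuous cover text.....";

    printf("next hidden bit: %d\n", hidden_bit(key, block));
    return 0;
}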

You might try to hide text automatically by varying word spacing or line
lengths, but this could be detected automatically. The point about using
a keyed steganographic technique together with human choice of text changes
is that without knowing your key the spooks can't tell what message you
have encoded (if any) or mess about with it in any constructive way.

Ross

D. J. Bernstein

Sep 22, 1992, 8:39:37 PM

In article <1992Sep20....@cactus.org> rit...@cactus.org (Terry Ritter) writes:
> As far as I can tell, the whole point of cryptography is to be
> able to restrict information to only those who are intended to
> have it.

That description is far too broad. You cannot singlehandedly restrict
information to a chosen recipient, because he can in turn give the
information to someone else. You pose an impossible problem; is it a
surprise that there are no solutions?

For all you know, Terry, there could be a middleman between you and the
entire rest of the world. Call him God if you want. He's spoofing every
message your eyes send to your brain! (In fact he switched purple and
yellow when you were a kid.) Nothing you can ever do will prove the
non-existence of this middleman, or remove his interference. Do you
care?

Public-key cryptography ensures continuity, in a sense that is easy to
define mathematically. If you want more than continuity---if, for
instance, you want to stop God from spoofing your messages---then I'm
afraid you're up the creek. That doesn't negate the usefulness of
public-key cryptography for the real world.

---Dan

Terry Ritter

unread,
Sep 22, 1992, 6:15:22 PM9/22/92
to

In <VwTgRB...@k5qwb.lonestar.org> l...@k5qwb.lonestar.org
(Mr. Lyn R. Kennedy) writes:

>>The problem is that the spoofing
>> node gets to read all the communications. So unless we actually
>> *intended* that there be a spoofing node, and that they should
>> read our messages, I think the cryptography has failed.

>True, the spoofer might do this for a single route, but doesn't the
>mechanism in 2.0 sound workable? If you get a key for someone you know
>at a distant site via several mutual friends, how likely is it that he
>could avoid the spoofer changing all of them? Seems like the difficulty
>of a spoofer dealing with multiple routes is nearly infinite.

First, if you mean *phoning* the distant site and verifying the
key verbally, I think this can be made to work.

However, I suspect that you instead meant the use of *email* from
intermediate sites to "confirm" the key. But multiple sites
provide advantage only to the extent that they are logically
"independent." And this is only partly true: Ultimately, they
all got the key from the same place, and they all send it to the
same place. This means that a single subverted node at either end
would eliminate any supposed advantage of multiple message paths.

Similarly, any single node through which all messages happened
to pass would also eliminate the advantage (note that routing
software would tend to route messages similarly, close to their
origin or destination). And if most of the network was
subverted, different message paths could hardly matter.

I guess if we want to talk about the "difficulty" of all this,
I would compare it to the mail or news programs themselves;
difficult, yes, but not unbelievably so. Once such a program
is done, it is only necessary to distribute it covertly, or
in some plausible way get it installed on many systems (major
mail channels first, of course). Possibly it could be
distributed as a Trojan, or as a worm or virus. The guys or
gals doing all this would be pros; the stuff would work.

But I would guess that a serious request from your government
would probably suffice to get the program installed in 80-90%
of the major cases, wouldn't you? They wouldn't say what they
were really doing of course; there would be some other plausible
cover story (and they might not even really *be* the government).
For extreme cases the Opponent could buy a computer company (or
part of one), supply new equipment or operators, buy the company
owning the troublesome node, compromise the existing operators,
put in their own new node and volunteer to handle most of the
traffic, or whatever else works.

Frankly, I think the "difficulty" of all this could be comparable
to conventional intelligence activities which are funded as a
matter of course. Maybe "they" could even think of a way to make
all this turn a profit. Costs would be mostly front-end; once the
system was set up, "per-intercept" costs could be almost zero.

Note that we normally do not use probabilities to describe the
situation where the Opponent merely needs to use their assumed
resources in order to gain access to sensitive information.
Normally we call such a cipher "broken." In this case the
problem is in key distribution (instead of the cipher proper)
but the result is the same.

---
Terry Ritter rit...@cactus.org

Marc Thibault

unread,
Sep 22, 1992, 1:22:31 PM9/22/92
to
In article <1992Sep18....@cactus.org>
(Terry Ritter) writes:

> You may have some "published" keys, but you cannot know that they
> are valid for that user. Maybe they never were.

I'm getting knee-jerk reactions from people who think I'm
suggesting an alternative to the "web of trust". Be assured
that that is not the case. I am, however, distinguishing
between trust and privacy, and thinking about the latter.


I'm talking about using keys published in postings to
newsgroups. A spoofer would have to create new keys for every
contributor to every newsgroup I receive, otherwise he would
not be able to read my outgoing mail, and would not know what
responses I was requesting, or whether a message was a spoof
check or just a note about a posting. The essential feature of
spoofing is that it must be *preemptive*, and cover all
communications which might contain a key. It must also start
before my public key is widely distributed. This is a good
reason *not* to change one's key very often, because each of
these occasions will present an opportunity to a lurking
spoofer. It is also a good reason to start publishing your key
*now*, before the three-letter (four-letter in my country)
agencies get a grip on it.

Finally, the people I am using to check my key would
specifically not be "my correspondent", who I must assume
would also be spoofed. Since my reader automatically scans
incoming news and mail for new keys to add to my public key
ring, it will also detect a new public key for an old user,
warning of a possible problem.
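
A minimal sketch of that check (the key ring as a plain dictionary;
names here are illustrative only):

    def note_key(keyring, sender, key):
        # Record the first key seen for each sender; a different key
        # arriving later for a known sender is flagged, since it may
        # be a lurking spoofer rather than an ordinary key change.
        old = keyring.get(sender)
        if old is None:
            keyring[sender] = key
            return "new sender recorded"
        if old != key:
            return "WARNING: possible spoof, key changed for " + sender
        return "key unchanged"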

> The best practice requires "frequent" key changes, with explicit
> separate-channel validation every time.

This totally negates the advantages of public-key. If I've got
to use a separate channel for validation, I may as well use
private keys.

> If all of your messages flow through:
>
> a) the spoofing node (say, some sort of virus process on your
> home node), or
>
> b) any one of multiple nodes equivalently spoofed (say, those
> "convinced" to assist a government inquiry),
>
> you can send messages to anyone you like and it won't make any
> difference.

OK - I send messages encrypted with sig keys to randomly
selected people in Finland, New Zealand, France, VietNam and
Austria. The only thing these folk have in common is that they
include public keys in their news sig and are frequent posters
to news groups or BBS conferences that have nothing to do with
my work or hobbies. Some of them may be pulled from the README
file in software I've bought, or files from any of a
half-dozen BBS's in my local calling area. These messages
include my public key *inside* the encrypted part. What was it
you were saying?

,Marc

---

Marc Thibault        |                          | Any Warming,
ma...@tanda.isis.org | R.R.1, Oxford Mills,     | Global or otherwise,
CIS:71441,2226       | Ontario, Canada K0G 1S0  | appreciated.

-----BEGIN PGP PUBLIC KEY BLOCK-----

Stephan Neuhaus (HiWi Mattern)

unread,
Sep 23, 1992, 9:32:32 AM9/23/92
to
rit...@cactus.org (Terry Ritter) writes:

> attacks. Now, describe in dollars the effort needed to subvert
> Internet itself, and "every" node in Internet. It seems to me that

> subverting Internet must be a *far* cheaper approach [than to break
> a cipher].

I beg to disagree. Since the Internet spans multiple continents and
contains many sites, the amount of manpower needed to do that covertly
is very large. Since you spoke of subverting the whole net or a large
part of it, that cannot be done just by monitoring the transatlantic
cables or satellites or a few key sites. You must subvert the
Internet at every commercial place or University. This manpower will
cost a lot of money. How much? I don't know.

> The problem of cryptography
> is to *absolutely prevent* information disclosure *even in* such an
> environment. Naive public key technology does not do this.

Okay. In your opinion, what constitutes a non-naive way of doing
public key cryptography? Or indeed cryptography per se? (I'm not
flaming you and I have missed the beginning of this thread, sorry.)

> That said, no program whatsoever can provide real secrecy by itself.

> Security is a philosophy, a way of life. [...]


> Someone really worried about their own security probably has a lot
> of work--and a lot of changing--to do.

That is undoubtedly true. But there is a difference between wanting
security and being paranoid. I mean, I don't send snail mail letters
in safes just because some three-letter agency might steam them open
otherwise.

> Indeed, network insecurity
> is insidious in the way it can expand to include all those who
> use a subverted channel to validate keys.

That's true. You cannot validate keys over an insecure channel.

> All it takes is a
> non-validated key like those now being attached to various news
> signatures.

I disagree. In fact, I seem to have missed your point. If I post my
public key and someone chooses to trust it, that's fine for me. It's
his or her credibility that is at stake if he or she publishes a
signed copy of my public key. Nobody has to trust a public key in
order to use it. Of course, people have to understand that just
because the public key appeared on a BBS, it's not necessarily valid.

Oh, not to make you angry, but, see, I just don't agree with you:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.0

mQBNAiqv03gAAAECANmK/tc23Jo04+yKuSc6YpelVUjeCKEzyCmCYrCbsavqp/UB
HTO7lvmoBunxJUSBbqzOk1a3U8mg37FplbUySz0ABRG0LlN0ZXBoYW4gTmV1aGF1
cyA8bmV1aGF1c0BpbmZvcm1hdGlrLnVuaS1rbC5kZT4=
=FKAu
-----END PGP PUBLIC KEY BLOCK-----

If you reply, please send mail to neu...@informatik.uni-kl.de, not
neu...@vier.informatik.uni-kl.de. This is a bug in our news software,
and I'm not sure when we'll get it fixed. Thank you.

Have fun.
--
Stephan <neu...@informatik.uni-kl.de>
sig closed for inventory. Please leave your pickaxe outside.
PGP 2.0 public key available on request.

Terry Ritter

unread,
Sep 23, 1992, 2:24:19 PM9/23/92
to

In <1992Sep22.1...@cl.cam.ac.uk> rj...@cl.cam.ac.uk (Ross
Anderson) writes:

>> Worse, I can conceal the random string in the letter frequencies of an
>> innocuous-appearing message, and later reveal the secret to allow the
>> interpretation of a hidden message which includes my public key.

>My idea: key the string encoding.

I *like* it.

But I don't think it solves our problem. Not yet.

The way I understand it, Ross proposes to hide a small amount of
data in normal text by hashing 40-byte blocks into a single bit
using a secret key. During text construction, the text editor
would "beep" if a just-created block did not produce the correct
bit; in that case the user would back up and try again with
different words.

The advantage is that we can get a hidden secret message past
the Opponent because the outer message looks innocuous. Then,
once everybody has the message, we just announce the secret key
and everyone can unlock the hidden data (the validated public key).

Unfortunately, if the Opponent sees such an announcement, the
announcement may be gobbled, in which case nobody knows about the
"validated" key except the originator and the Opponent. Then the
Opponent can create a similar message, send that, and announce
that the key is in the false message, so that we all get the
wrong "valid" key.

Still, it's an interesting idea. Maybe it's already in place:
Everything we read could be just a gloss covering the sub-rosa true
message. (That would explain the content of much of the news
traffic, and virtually all of TV.) Maybe everyone else already
knows about this. Weird.

Can we make it work?

---
Terry Ritter rit...@cactus.org

Terry Ritter

unread,
Sep 23, 1992, 2:26:25 PM9/23/92
to

In <17939.Sep2...@virtualnews.nyu.edu> brn...@nyu.edu
(D. J. Bernstein) writes:

>In article <1992Sep20....@cactus.org> rit...@cactus.org
>(Terry Ritter) writes:
>> As far as I can tell, the whole point of cryptography is to be
>> able to restrict information to only those who are intended to
>> have it.

>That description is far too broad. You cannot singlehandedly restrict
>information to a chosen recipient, because he can in turn give the
>information to someone else. You pose an impossible problem; is it a
>surprise that there are no solutions?

>For all you know, Terry, there could be a middleman between you and the
>entire rest of the world. Call him God if you want. He's spoofing every
>message your eyes send to your brain! (In fact he switched purple and
>yellow when you were a kid.) Nothing you can ever do will prove the
>non-existence of this middleman, or remove his interference. Do you
>care?

If there really were no way to assure that a powerful network
middleman could not read our messages, then I think there would
be little point in using cryptography at all. In fact, I suggest
that cryptography is actually *dangerous* *unless* we can meet
such a goal.

But two people *can* meet in person and exchange secret keys, and
they *can* communicate securely *despite* any network middlemen.
Personally, I do not need to know whether or not a middleman is
present; all I need is secure communication.

We are not discussing whether secure communication is *possible*;
we are discussing *how* to obtain it, short of single key ciphers
and personal meetings. But if that's what it takes, then that's
what it takes. Any cipher system which is only secure until the
Opponent decides to break it is not secure at all.

---
Terry Ritter rit...@cactus.org

William Unruh

unread,
Sep 23, 1992, 3:26:12 PM9/23/92
to
ma...@tanda.isis.org (Marc Thibault) writes:

>In article <1992Sep18....@cactus.org>
>(Terry Ritter) writes:

>> You may have some "published" keys, but you cannot know that they
>> are valid for that user. Maybe they never were.

> I'm getting knee-jerk reactions from people who think I'm
> suggesting an alternative to the "web of trust". Be assured
> that that is not the case. I am, however, distinguishing
> between trust and privacy, and thinking about the latter.

It seems to me that public keys are useful against tapping. There are
two types of insecure channels: ones in which your enemy may be making a
copy of all correspondence, and ones in which your enemy can control what
the channel actually carries. Public keys are very useful in the former
case, but Ritter argues that there may be problems in the latter case.
Obviously if someone can control all I/O between you and the world, he
can cocoon you in a virtual reality, so you need a separate channel
which may be insecure but at least not controlled to check this out.

The second point is how I send messages to someone over a partially
controlled channel: how do I know the public key I find on the Net is
yours and not my enemy's? Again I need an insecure but uncontrolled
channel to check. I also cannot see any way in which a potentially
controlled channel can be used to assure myself that your published key
is really your key (or that the key you have for me is really mine).
However the ability to dispense with the necessity for a completely
secure channel seems to me to be a great benefit of public keys.


>> The best practice requires "frequent" key changes, with explicit
>> separate-channel validation every time.

> This totally negates the advantages of public-key. If I've got
> to use a separate channel for validation, I may as well use
> private keys.

No, because the separate channel may be tapped; i.e., you cannot be sure
that your enemy isn't listening in. Public keys allow you to use such
channels, as far as I can see.
Bill Unruh
un...@physics.ubc.ca


William Unruh

unread,
Sep 23, 1992, 5:02:30 PM9/23/92
to
neu...@vier.informatik.uni-kl.de (Stephan Neuhaus (HiWi Mattern)) writes:

>That's true. You cannot validate keys over an insecure channel.

Sure you can as long as the opponent can only listen in, not alter the
contents. Just send your friend your public key, and he sends you his.

Terry Ritter

unread,
Sep 23, 1992, 3:59:58 PM9/23/92
to

In <52212866...@tanda.isis.org> ma...@tanda.isis.org (Marc Thibault)
writes:

> The essential feature of
> spoofing is that it must be *preemptive*, and cover all
> communications which might contain a key. It must also start
> before my public key is widely distributed. This is a good
> reason *not* to change one's key very often, because each of
> these occasions will present an opportunity to a lurking
> spoofer. It is also a good reason to start publishing your key
> *now*, before the three-letter (four-letter in my country)
> agencies get a grip on it.

There *can be* no good reason to not change one's key. If a key
is compromised it must be changed, no matter when it happens
nor how often it happens. If we must depend upon beating the
Opponent to the punch, all they have to do is compromise the
current key (Fire! Fire!), making us publish a new one, and then
they are first and they win.

All this is completely beside the point that old information may
yet be valuable and important, and it is in our interest to know
what the loss of a single key may cost. When the benefit from
penetration exceeds our ability to protect, we *must* change keys.


>> The best practice requires "frequent" key changes, with explicit
>> separate-channel validation every time.

> This totally negates the advantages of public-key. If I've got
> to use a separate channel for validation, I may as well use
> private keys.

I doubt that the situation is that serious, but if it is, it is.
I do not make the rules. Nor is it necessary to *believe* me.
The attacks I describe are effective, viable, and well within the
capabilities of a government agency. The only way to assure that
these attacks will not be fielded is to use systems which do not
fall to such attacks.

But separate-channel validation does not "totally negate" the
advantages of public-key. We still have the advantage that the
public portion of a public key may be monitored or copied with no
effect on our security. This supports secure key distribution by
radio or in the presence of wiretaps. Less than you would like,
perhaps, but it's what we've got.


> OK - I send messages encrypted with sig keys to randomly
> selected people in Finland, New Zealand, France, VietNam and
> Austria. The only thing these folk have in common is that they
> include public keys in their news sig and are frequent posters
> to news groups or BBS conferences that have nothing to do with
> my work or hobbies. Some of them may be pulled from the README
> file in software I've bought, or files from any of a
> half-dozen BBS's in my local calling area. These messages
> include my public key *inside* the encrypted part. What was it
> you were saying?

> ,Marc

I was saying that you had to get the sig keys first, and when
you use an unvalidated key you have no security at all on that
channel. I was saying that if you use an old compromised key
you have no security on *that* channel either, *regardless* of
whether the owner knows the key is compromised or not.

Welcome to the real world.

---
Terry Ritter rit...@cactus.org

D. J. Bernstein

unread,
Sep 24, 1992, 1:22:39 AM9/24/92
to
In article <1992Sep22.2...@cactus.org> rit...@cactus.org (Terry Ritter) writes:
> First, if you mean *phoning* the distant site and verifying the
> key verbally, I think this can be made to work.

Gee, Terry, why can't God be spoofing your phone calls? When you meet
someone in person to ``verify'' your public key, how do you know it
isn't actually God, posing as that person? Is all cryptography useless?

(Maybe God [ya know, big guy, beard, rests on weekends] doesn't do such
things, but the CIA does. In fact _everyone Terry has ever met or
communicated with_ works covertly for the CIA. No wonder he's paranoid.)

The point of cryptography is to _increase the cost_ to the enemy of
decoding or spoofing messages. Security is not a yes-no proposition.
It's a matter of economics. You want to increase the (minimum perceived)
cost of an attack past the (maximum perceived) benefits. Sure, God can
always get around your security, but it costs a bit much to defend
against his attacks, so why bother?

---Dan

D. J. Bernstein

unread,
Sep 24, 1992, 1:50:56 AM9/24/92
to
In article <1992Sep23.1...@cactus.org> rit...@cactus.org (Terry Ritter) writes:
> There *can be* no good reason to not change one's key. If a key
> is compromised it must be changed, no matter when it happens
> nor how often it happens.

Set up ten keys. Require that at least three of them cooperate in order
to change another one. Use just one of them for normal communications.
End of problem.
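
A sketch of the acceptance rule this implies, with verify() assumed to
be an ordinary signature check and master_keys the pre-published ten:

    def key_change_valid(announcement, signatures, master_keys,
                         verify, quorum=3):
        # Accept a key-change announcement only if at least `quorum`
        # distinct keys from the original set have signed it; an
        # attacker now has to compromise three keys, not one.
        signers = set()
        for key, sig in signatures:
            if key in master_keys and verify(key, announcement, sig):
                signers.add(key)
        return len(signers) >= quorum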

> I was saying that you had to get the sig keys first, and when
> you use an unvalidated key you have no security at all on that
> channel.

There is a network called MagNet (TM). Anyone connected to MagNet can
send a message to everyone else connected to MagNet. Messages do not, by
default, come with any indication of their origin. If you look at MagNet
messages you'll see some signed with public keys. Only the creator of a
key (by which I mean to include the actual person who generated the key,
and anyone he's cooperating with) can sign messages with that key. If
you see two messages on MagNet signed by the same key, you can assume
that they were both sent by the creator of that key.

Certainly there is nothing to say ``who'' the creator of a key ``really
is,'' even if the concept of ``who'' makes sense. Keys are not
``validated'' in any way. RSADSI would have a fit if it knew about
MagNet. Nevertheless MagNet is secure.
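
The continuity property can be stated mechanically; a sketch, with
verify() assumed, in which keys are the only identities there are:

    def group_by_creator(messages, verify):
        # Two MagNet messages are attributed to the same (pseudonymous)
        # creator exactly when they verify under the same public key;
        # no external notion of "who" is needed.
        by_key = {}
        for key, body, sig in messages:
            if verify(key, body, sig):
                by_key.setdefault(key, []).append(body)
        return by_key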

---Dan

Don Alvarez

unread,
Sep 24, 1992, 9:20:11 AM9/24/92
to
un...@physics.ubc.ca (William Unruh) and
neu...@vier.informatik.uni-kl.de (Stephan Neuhaus (HiWi Mattern)) write:

>>That's true. You cannot validate keys over an insecure channel.
>Sure you can as long as the opponent can only listen in, not alter the
>contents. Just send your friend your public key, and he sends you his.

Wrong. Provision against altering the contents is not enough. The
opponent must be unable to impersonate your friend. That means (1) he
is unable to insert messages (different and more restrictive than
altering) or (2) you must have some way to test the authorship of
messages.

Consider case (1): If an opponent can't insert messages, then this
channel is functionally equivalent to the case in which you, your
friend, and your opponent all possess each other's public keys, and you
and your friend are the only ones who know your respective private
keys. (*NOTE* I did *NOT* say that the only way to have an insertion
proof channel was public key encryption. What I said was that any
insertion proof channel is functionally equivalent to the above public
key channel.) Conclusion from case (1): Any channel which can protect
against insertion is equivalent to a secure PK channel.

Now consider case (2): If an opponent can't impersonate your friend,
then this channel is functionally equivalent to the case in which you
and your friend each have the ability to validate each other's digital
signatures, and only you and your friend have the ability to generate
your respective digital signatures. Digital signatures are an encryption
technique, based on public key cryptography, and can be converted back into
a conventional (if inefficient) public key encryption channel by end users
such as you and your friend. Conclusion from case (2): If you have some
way to test that your friend authored a message, then that mechanism
is essentially *equivalent* to a public key encryption mechanism (no,
I didn't say it *was* public key, I just said that it could be used in
the same way), and hence any channel which allows you to distribute a

-don
d...@athena.princeton.edu

Greg - Byrd

unread,
Sep 24, 1992, 7:06:38 PM9/24/92
to

With regard to public key distribution and security, I favor public
repositories such as FTP sites maintained by volunteers as well as public
distribution by postings on Usenet and the like. True, the keys distributed
in such a manner are subject to alteration, but so what? If some elaborate
spoofing mechanism *were* in place, I seriously doubt that the secrecy of
its existence would be allowed to be compromised by routine alteration of
*widely disseminated* information. My point is that it is one thing to
spoof private communications, but it is quite another to spoof an entire
network in such a way that the spoofee is guaranteed to be fooled.

By posting your public key here, running it in the New York Times,
trademarking it, or even tattooing it on your forehead, you make it hard
to fool everyone... (But be sure to leave room on your forehead for new
keys.)

All you have to do is occasionally verify, through an unpredictable third
party, that the various postings of your public key are uncorrupted.

Likewise, as a recipient of encrypted messages it is prudent to
periodically check the public key postings/archives for alterations.
Again, this is meaningless (to you, at least) unless done through an
unpredictable third party. And of course you would want to check a large
number of keys through the third party, not just the particular key you
are interested in (to prevent selective spoofing of all individuals
interested in a particular key.)

If this degree of security were insufficient for me, I would tend to worry more
about burglars and the like, rather than global conspiracies. Security of the
*private* key looks to me like the weakest link.

Greg Byrd ---> Bell (803) 681-2120
Snail Island Technical Group / 3 Cardinal Ct. #619
Hilton Head Island / SC 29926
Email gr...@cup.portal.com

--- You can spoof some users some of the time. But not all users all of the time ---

Terry Ritter

unread,
Sep 25, 1992, 1:34:22 AM9/25/92
to

In <neuhaus.717255152@vier> neu...@vier.informatik.uni-kl.de
(actually neu...@informatik.uni-kl.de) (Stephan Neuhaus
(HiWi Mattern)) writes:


>rit...@cactus.org (Terry Ritter) writes:

>> attacks. Now, describe in dollars the effort needed to subvert
>> Internet itself, and "every" node in Internet. It seems to me that
>> subverting Internet must be a *far* cheaper approach [than to break
>> a cipher].

>I beg to disagree. Since the Internet spans multiple continents and
>contains many sites, the amount of manpower needed to do that covertly
>is very large. Since you spoke of subverting the whole net or a large
>part of it, that cannot be done just by monitoring the transatlantic
>cables or satellites or a few key sites. You must subvert the
>Internet at every commercial place or University. This manpower will
>cost a lot of money. How much? I don't know.

First of all, my money comparison was made with respect to the
putative level of secrecy often thought inherent in a public-
key cipher. This level is presently thought to be extremely high.
And this makes almost any other attack less expensive.

Moreover, the cost of lots of things is different when we work
with data. The cost of penetration might just be some clever way
of getting a Trojan program widely distributed; i.e., virtually
no cost at all. There also may be little delay.

Next, I would be rather surprised to find that the US government
is the only sneaky organization in the world with the resources
to initiate such an attack. In fact, I would expect that other
governments, and various forms of private enterprise would enter
the picture as well (probably first). This assumes, of course,
that we effectively standardize on a protocol they can break
(like unvalidated public keys).

Last, it is only necessary to subvert the *entire* network if you
want access to *everything* on the network. If you subvert the
home node of the target of an investigation (or every node normally
carrying mail to and from that node) then you have that node.
(Assuming that unvalidated public keys are used.) You also have
everybody on that node, and perhaps everything traveling through
that node as well. Everyone for the price of one, so to speak.


>Okay. In your opinion, what constitutes a non-naive way of doing
>public key cryptography? Or indeed cryptography per se?

The accepted method is to create a bureaucracy of key-certification
authorities and certificates.

I would vastly prefer some alternative to CA's, but not so much
that I would sacrifice security. If we don't really care about
security, we might as well use rot13, or give up ciphers entirely.
If all the Opponent has to do is to turn its attention toward
you to penetrate your security, you cannot reasonably be said to
*have* security; what you have is the *illusion* of security.
We call such ciphers "broken." Nobody wants to use them. Why?

It seems to me that separate channel validation (e.g., by phone)
would catch much of the spoofing (assuming the phone company is
secure). Maybe we can come up with something better. If not,
that doesn't change the limitations inherent in what we have.


>But there is a difference between wanting
>security and being paranoid. I mean, I don't send snail mail letters
>in safes just because some three-letter agency might steam them open
>otherwise.

Presumably, you do not send information in your letters which
could produce extremely serious consequences (loss of life,
freedom, or treasure) were it released. If you do, you should
re-think this policy.

Perhaps you do not need cryptography at all. It is highly
unlikely that traffic-monitoring programs can do much more than
scan for words of "special interest." Stay away from such words,
communicate by euphemism, and you should be OK. If you are
willing to accept *probable* security, you have that now, and
you don't need cryptography to get it.

But if you need cryptography, you aren't likely to want it to
fail. This simply means doing everything reasonable to prevent
failure; occasionally failure does happen. But you probably
would want to avoid the situation where the Opponent merely has
to look your way to penetrate your security. If you want to
price your security, note that, once your part of the network
has been subverted, the "cost-per-intercept" would be exceedingly
low. A cipher *known* to be vulnerable to this attack should be
described as having little or no security *value* whatsoever.


>> All it takes is a
>> non-validated key like those now being attached to various news
>> signatures.

>I disagree. In fact, I seem to have missed your point. If I post my
>public key and someone chooses to trust it, that's fine for me.

It's only fine for you if you don't need security. Unless your
correspondent uses your currently-valid key, your communications
may be going through a spoofing node (and being read) and you would
not know it. This is "easy" because the spoofing node may have
published a *different* key as your own, or one of your old keys.
The correspondent uses that key, and the spoofing node translates
those messages into your key before handing them on. The first
message to you will carry the key which you will use to reply.
If that key is replaced with a key from a spoofer, *your* security
is at risk. This is only possible because of unvalidated public
keys.
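
The translation step itself is mechanical; a sketch of what such a node
does, with decrypt(), encrypt(), and log() as assumed primitives:

    def translate(ciphertext, spoof_priv, victim_pub,
                  decrypt, encrypt, log):
        # Traffic arrives encrypted to the key the spoofer published
        # in the victim's name; it is opened, read, re-encrypted
        # under the victim's real key, and forwarded, so both ends
        # see an apparently secure channel.
        plaintext = decrypt(spoof_priv, ciphertext)
        log(plaintext)                      # the intercept itself
        return encrypt(victim_pub, plaintext)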


>It's
>his or her credibility that is at stake if he or she publishes a
>signed copy of my public key. Nobody has to trust a public key in
>order to use it. Of course, people have to understand that just
>because the public key appeared on a BBS, it's not necessarily valid.

There really is no point in using a public key if it is not
valid. You might as well not bother with cryptography at all.
If you want privacy, just ask everybody not to look.
Cryptography is used when you want to make it (almost)
impossible for them to do so, regardless of what *they* want.


>Oh, not to make you angry, but, see, I just don't agree with you:

Well, any attacks would be in the future, which is naturally
speculation, and you may well disagree with my predictions.
The attacks require unvalidated public keys: If people see the
problem and avoid using such keys, this will "change the future"
such that these attacks make no sense.

But my description of the weakness inherent in unvalidated public
keys is not speculation; it is fact. It is not necessary for you
to *agree* with this for you to be affected by it.

Why do you imagine that there has been a lot of talk about public
key "certification authorities?" Do you imagine this was proposed
for fun, because of a lack of common sense, or as some government
plot? It is an attempt to acquire security which is simply not
available from unvalidated public keys.

---
Terry Ritter rit...@cactus.org

Terry Ritter

unread,
Sep 25, 1992, 1:35:22 AM9/25/92
to

In <26169.Sep2...@virtualnews.nyu.edu> brn...@nyu.edu
(D. J. Bernstein) writes:

>In article <1992Sep22.2...@cactus.org> rit...@cactus.org
>(Terry Ritter) writes:
>> First, if you mean *phoning* the distant site and verifying the
>> key verbally, I think this can be made to work.

>Gee, Terry, why can't God be spoofing your phone calls? When you meet
>someone in person to ``verify'' your public key, how do you know it
>isn't actually God, posing as that person? Is all cryptography useless?

Perhaps I *am* naive to promote the phone system as an alternate
channel. But it seems to me that spoofing the phone system, with
real-time audio, in addition to spoofing the data, *must* be much
more difficult than simply spoofing data in packets with little
or no time pressure. It becomes even more difficult if the parties
know each other. The point of this is to increase the cost of
penetration.


>The point of cryptography is to _increase the cost_ to the enemy of
>decoding or spoofing messages.

I agree.


>Sure, God can
>always get around your security, but it costs a bit much to defend
>against his attacks, so why bother?

A good cryptosystem must require a significant effort to penetrate
*each message*. When there exists an approach which allows security
to be penetrated at low cost (after some initial investment in
analysis or machinery), we call that system "broken." The use of
unvalidated public keys is the use of a broken system.

---
Terry Ritter rit...@cactus.org


Terry Ritter

unread,
Sep 25, 1992, 1:36:24 AM9/25/92
to

In <26375.Sep2...@virtualnews.nyu.edu> brn...@nyu.edu
(D. J. Bernstein) writes:

>In article <1992Sep23.1...@cactus.org> rit...@cactus.org (Terry Ritter) writes:
>> There *can be* no good reason to not change one's key. If a key
>> is compromised it must be changed, no matter when it happens
>> nor how often it happens.

>Set up ten keys. Require that at least three of them cooperate in order
>to change another one. Use just one of them for normal communications.
>End of problem.

Unless, of course, an intruder penetrates your physical security and
gets all ten at once. Multiples only count if they are independent.


>There is a network called MagNet (TM). Anyone connected to MagNet can
>send a message to everyone else connected to MagNet. Messages do not, by
>default, come with any indication of their origin. If you look at MagNet
>messages you'll see some signed with public keys. Only the creator of a
>key (by which I mean to include the actual person who generated the key,
>and anyone he's cooperating with) can sign messages with that key. If
>you see two messages on MagNet signed by the same key, you can assume
>that they were both sent by the creator of that key.

>Certainly there is nothing to say ``who'' the creator of a key ``really
>is,'' even if the concept of ``who'' makes sense. Keys are not
>``validated'' in any way. RSADSI would have a fit if it knew about
>MagNet. Nevertheless MagNet is secure.

Since I know nothing about MagNet, I have no idea whether it really
is secure or not. But if operates generally in the way described
here, I cannot see how it possibly could be called "secure." Or,
maybe "MagNet" is secure, while the information conveyed on it isn't.

If you just get a public key from somewhere, you can assume that
it was created, but not by whom. If MagNet is penetrated, the
key you see in any message may not be the one originally sent.
If a spoofer replaced the original key with one of his (or her) own,
the spoofer can read any mail you send under that key. If you send
your key back signed under that key, the spoofer can replace your
key with a different key, re-sign and send it on. Now the spoofer
has access to both directions.

In a sense, you have "secure communications." Unfortunately, you
are secure with a spoofer, who then decides to pass the information
along. When the recipient replies it appears that you are secure
end-to-end, but you are not. Normally, when we see that this sort
of thing is possible, we call the system "broken" and "insecure."

---
Terry Ritter rit...@cactus.org

Terry Ritter

unread,
Sep 25, 1992, 3:48:08 AM9/25/92
to

In <66...@cup.portal.com> gr...@cup.portal.com (Greg - Byrd) writes:

>With regard to public key distribution and security, I favor public
>repositories such as FTP sites maintained by volunteers as well as
>public distribution by postings on Usenet and the like. True, the
>keys distributed in such a manner are subject to alteration, but so
>what? If some elaborate spoofing mechanism *were* in place, I
>seriously doubt that the secrecy of its existence would be allowed
>to be compromised by routine alteration of *widely disseminated*
>information. My point is that it is one thing to spoof private
>communications, but it is quite another to spoof an entire network
>in such a way that the spoofee is guaranteed to be fooled.

Apparently I have been unable to communicate the problem: It is
not necessary to spoof an entire network to ruin your security.
Your security is gone if you reply to unvalidated public keys and:
a) your home node is subverted, OR b) all the nodes through which
your node gets news (the unvalidated keys) are subverted. One of
the worst things about this is that you generally cannot tell
whether (a) or (b) are true or not. When you use unvalidated
public keys, your security is completely out of your hands.

If you try to validate your keys by email, you will be foiled if:
a) your home node is subverted, OR b) all nodes your node uses
for email delivery are subverted. And you cannot tell whether
(a) or (b) are true either.


>By posting your public key here, running it in the New York Times,
>trademarking it, or even tattooing it on your forehead, you make it
>hard to fool everyone...

While that particular key is valid, this is fine. But as soon as
you have a security incident (robbery, fire, maybe even computer
maintenance, divorce, partnership breakup, an employee change,
etc., etc.), you need a new key. You also need a new key when you
decide that you really cannot afford to risk more data than has so
far been enciphered under one key. And all those people out there
now get to choose between two (and, as time goes on, "n") different
keys for you.

Suppose we say "Just use the latest one:" Then the Opponent simply
has to "publish" a new key for you to your correspondents, which
they accept as yours, and then they are in. You may publish in
the NYT once, but every time? Even so, will everyone read it?
(You cannot email anyone to read it, since the spoofer wouldn't
allow that; the spoofer might just reply: "Got it, thanks").
And what happens when you must change your key *now*? Certainly
you get no confirmation from trademark publication for months, and
by that time, you may have been forced--by events out of your
control--to change your key again.


>All you have to do is occasionally verify through an unpredictable
>third party that the various postings of your public key are uncorrupted.

OK, but you must do this by separate channel, or some protocol
which is not yet clear. If you do this by email, the spoofer has
access to it, raises a flag to the human controller, who then
responds "Yeah, we got that all OK here."


>Likewise, as a recipient of encrypted messages it is prudent to
>periodically check the public key postings/archives for alterations.

Again, this must be via separate channel in order for it
to do what you want it to do.


>Again, this is meaningless (to you, at least) unless done through
>an unpredictable third party. And of course you would want to check
>a large number of keys through the third party, not just the
>particular key you are interested in (to prevent selective spoofing
>of all individuals interested in a particular key.)

Well, at least we are now getting to a protocol. If you are going
to interrogate someone almost at random with large numbers of keys,
some program is going to have to do it and some other program is
going to reply. And this means that the spoofer can simulate
that reply automatically.


>If this degree of security were insufficient for me, I would tend to
>worry more about burglars and the like, rather than global conspiracies.

Again, there need be no global conspiracy to eliminate your
security as long as you are using unvalidated public keys.

The problem with "worrying about burglars" is the implication
that spoofing must be difficult and expensive, so that physical
security would be the attack of choice. In fact, we are dealing
with computers here. Once a spoofing program works, it is only
necessary to get it installed; this may not cost anything at all.
Once coverage is complete around you, the Opponent can monitor
you again and again at almost no cost. This is a far different
situation than a physical attack, and may actually be far cheaper,
making it the attack of choice.

Worry about burglars, but worry more about spoofing.

---
Terry Ritter rit...@cactus.org

Eric Hughes

unread,
Sep 25, 1992, 10:01:00 AM9/25/92
to
In article <1992Sep25.0...@cactus.org> rit...@cactus.org
(Terry Ritter) writes:

> If you try to validate your keys by email, you will be foiled if:
> a) your home node is subverted, OR b) all nodes your node uses
> for email delivery are subverted. And you cannot tell whether
> (a) or (b) are true either.

Forgive me for just not knowing what you are referring to. What do
you mean by subversion here, Terry? Do you mean any of the following?

1. All communication channels are monitored and filtered.
2. Your private keys are known to the adversary.
3. All public keys have been spoofed by an interposer, both yours and others'.


Eric Hughes

Jeff Hull

unread,
Sep 25, 1992, 6:18:54 PM9/25/92
to
I am greatly enjoying this discussion of vulnerabilities of public key encryption systems.

I have two requests that I think will help all of us enjoy the discussion more and make it far
easier to both follow the discussion and to get something useful accomplished.

First, as the primary topic of discussion shifts, would the poster who changes the
direction *please* change the subject line to be something meaningful *AND* leave
the old subject in the Subject line (in parentheses, at the end, after the word "Was").
You know, the standard Usenet way of handling multi-threaded conversations.

Second, could we (begin to) develop a set of "standard procedures" for using public
key encryption. This would entail someone, like Terry, who wants to identify a particular
vulnerability, publishing a detailed transaction (I do this, you do that). Some (much)
of this is already available in various published papers, but I have never seen a FAQ
that tells people (*me!*) where to get an electronic copy of them. If we will all do
this, the end result will be a set of procedures with known residual vulnerabilities
that people can actually use. Who knows, it may even develop into workable public
domain software. (You mean, we could actually influence how things happen? Yup.)

As I try to follow the current discussion, I feel somewhat handicapped by not having
in front of me a copy of the exact security protocol exchange that is being discussed.
Hey, Terry, would you please, *pretty please with sugar on it*, publish one?

--
_ _
/_ / ) / ) Jeff Hull jh...@muse.den.mmc.com
//_) -/- -/- 1544 S. Vaughn Cir
/_/ \_/ / / Aurora, CO 80012 303-977-1061

Greg - Byrd

unread,
Sep 25, 1992, 11:48:30 PM9/25/92
to
OK Terry, your main point seems to be the same as mine. Use of a public key
received via email (for example) opens you to interception and the like.
The single-point-failure (email channel) is too easy to spoof.

Your solution is to use a secure channel (such as in the flesh) to exchange
keys. This is ideal when practical.

My solution is to widely disseminate my public key. And for you to widely
disseminate your public key. Same for everyone else.

Now - the single-point-failure is eliminated. All that is needed is for
anyone suspecting spoofing to verify that the public postings of his key and
of his correspondents keys are unaltered. As an example, go to a payphone
and call a dozen people around the world who post to the net. Total strangers
whom you have never communicated with. Have them fetch the keys from their
handiest local source. Do they all match? And do they match *your* copies
of the keys? Yes? Then your public keys are good. No? Then your node or
path is being spoofed. Grab the shotgun and go hunting.
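
The comparison itself is trivial; a sketch (the security lies in the
independence of the channels, not in the code):

    def spoof_check(my_copy, reported_copies):
        # reported_copies holds the key blocks the strangers read
        # back over the phone; any mismatch with your own copy means
        # some node or path is rewriting keys.
        return all(copy == my_copy for copy in reported_copies)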

The only time this check could fail is if your correspondent didn't bother
to also perform the above procedure. But the mere fact that he *might*
do this should deter any prospective spoofer who values his hide...

If you do change your public key, just allow enough time for all of the
public key archives to be updated, perform the check again, and keep on
chugging along.

Greg Byrd ---> Bell (803) 686-4575

a-bo...@cup.portal.com

unread,
Sep 26, 1992, 1:36:54 AM9/26/92
to
I think Terry Ritter is underestimating the effort needed to successfully
carry out the spoofing attack he describes.

As others have pointed out, the spoofer, to attack me, must filter out
*all* messages which would allow me to receive someone else's public key.
(That is, the spoofer must, for each such message, generate his own
"fake" version of the sender's key, and substitute it in the message.)

Otherwise, if I do get an accurate key for someone else, the spoofer
can't tell what I might be sending to him. Also, any message which that
person *signs* and sends to me cannot be altered by the spoofer without
being detected. Something as simple as "your public key is XXX", if sent
to me in signed form, would expose the spoofing.

A corollary to this is that if I validate *any* other key through a
non-email channel, such as the telephone or a personal meeting, the jig
is again up. If the key has been spoofed (and all keys must be spoofed
if any are), it will be detected.

(Of course, an even simpler measure is to call someone up and ask what my
public key .sig was at his site. If I am being attacked as above,
it must be different from my actual public key.)

Ritter makes much of "validated" keys using a Certification Authority. I
will note that it will be cheaper to subvert the CA than to crack RSA,
so one of Ritter's arguments against decentralized public key use applies
to the bureaucratic approach he favors as well. Plus, any system which
involves a national monopoly arrangement for CA's (as we may have on a
de facto basis in the U.S. due to the patent protection of RSA) is, IMO,
going to be *more* vulnerable to government pressure than a decentralized
system. A compromised CA is far more dangerous than a single compromised
net node. (And thus, far more valuable to the attacker.)

Another corollary to the need for a successful spoofer to alter all
incoming keys has been stated here as well: if such spoofing is a danger
more of the future than of the present, then collecting keys *now*, before
such spoofing begins, is a very good idea. If, ten years from now, a
spoofer attacks me as above, and I've saved some of these currently-
published keys, they can be used to expose the spoofing. All it takes
is for me to send a message encrypted with one of these old keys which
includes my public key in the encrypted portion. Since I collected these
keys before the spoofer went into business, he can't tell what I am sending,
and he can't change my public key in the encrypted part of the message.
The difference between what everybody else thinks my public key is and
what I think it is will again be exposed.
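
A sketch of that probe, with encrypt() assumed; the point is only that
the enclosed key travels where the spoofer can neither read it nor
rewrite it:

    def spoof_probe(old_peer_key, my_current_key, encrypt):
        # Encrypt my current public key under a key collected before
        # the spoofer went into business; the recipient compares the
        # enclosed key against what the net now claims it is.
        return encrypt(old_peer_key, b"my key is: " + my_current_key)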

Hence, this practice of publishing keys now, which Ritter decries, may
actually provide a foundation for preventing the spoofing attack in the
future.

Ritter points out that even if I follow all of these precautions, there is
no assurance that my correspondent is. If he is the one being attacked,
and, as a naive user, has not taken any of these elementary measures to
prevent it, then my messages to him will be compromised.

But of course, any encrypted communication with a person inexperienced
in the need for care must be viewed with caution. There are plenty of
other ways inexperienced users can compromise our conversation. Failure
to guard the secret key carefully is probably a far greater danger. And
no CA hierarchy will help with that. The best approach is to enlighten
my correspondent as to the need for the simple precautions I've outlined.
But hopefully, in an environment in which most email conversations are
routinely encrypted (a practice I consider likely in a few years), the
need for this kind of care will be widely understood.

Adam Boyles - a-bo...@cup.portal.com

Terry Ritter

unread,
Sep 26, 1992, 11:55:26 PM9/26/92
to

In <66...@cup.portal.com> gr...@cup.portal.com (Greg - Byrd) writes:

>OK Terry, your main point seems to be the same as mine. Use of a public key
>received via email (for example) opens you to interception and the like.
>The single-point-failure (email channel) is too easy to spoof.

As I pointed out to you politely by email, I am ready to get
out of this discussion which seems to be sucking up my time
endlessly without positive result. It is time for others to
jump in, should they feel the need.

However, no, your comment is not really my point; it is *one*
of my points. Email itself is *not* the only problem. Widely-
distributed keys are also a problem because keys must change,
and this leaves the old keys out there for people to use. If
the secret half of an old key pair was actually revealed, then
everyone using the widely-published old key is inherently at risk.
Users who do not insist on a "new" key get what they deserve, just
like the users who do not validate a public key.

Yet another problem with widely-distributed keys is somehow
equating wide distribution with security. I see no reason why
the Opponent could not distribute lots of keys very widely
indeed; all they have to do now is to convince you to use one
of those. ("My new key is this: ....., and is also listed under
codename XYZ in the NYT for Feb. 10, and will be coming out
as a trademark under my *real* name UVW.")


>Your solution is to use a secure channel (such as in the flesh) to exchange
>keys. This is ideal when practical.

Actually, my "solution" (had you bothered to read and understand
the last dozen or so of my postings, or my email response), is
to use one or more *insecure* but separate channels to *validate*
keys. Open phone lines are hardly secure, and this is not a minor
point. And I have said repeatedly that, while separate channel
validation should be a great improvement over using unvalidated
keys, it is not and cannot be perfect.


>My solution is to widely disseminate my public key. And for you to widely
>disseminate your public key. Same for everyone else.

Your "solution" inherently carries the risk (indeed, the
expectation) that people will rely upon the old keys, even when
we know that some of them may have been revealed.

How is this a "solution"? What problem does it solve? Do you
mean the problem of secure key distribution being harder than you
would like? Anybody can "solve" that problem just by ignoring
the security requirement. The point is to solve the problem
while yet *retaining* security.


>Now - the single-point-failure is eliminated. All that is needed is for
>anyone suspecting spoofing to verify that the public postings of his key and
>of his correspondents keys are unaltered. As an example, go to a payphone
>and call a dozen people around the world who post to the net. Total strangers
>whom you have never communicated with. Have them fetch the keys from their
>handiest local source. Do they all match? And do they match *your* copies
>of the keys? Yes? Then your public keys are good. No? Then your node or
>path is being spoofed. Grab the shotgun and go hunting.

First of all, calling people who read the net hardly changes the
single-point problem if we are talking about a .sig key. The
node of the sender, or nodes around him, may have changed that
key. Everyone knows the wrong key. How does that help?

Moreover, if you got the phone numbers from the net, they are
suspicious as well. Calling lots of suspicious numbers is hardly
different from calling one. And calling one is useless. We're
talking about security here, not games.

Next, even if separate channel publications match, to the extent
that they are not *current* publications they may be out of date
and penetrated. Those who use old keys risk using a revealed
key *even if the owner does not know this*.

Last, not only are you unlikely to find a person associated with
the spoofing, you may find that your local node has "willingly"
set this up for "national security" or "law enforcement" reasons.
Few nodes could or would give a direct "No" to a serious approach
by their government. If the node owner approves, I expect that
"monitoring" data flowing through that node is legal. Even
changing the messages may be legal. You really have nothing
to complain about. Even if these things were made illegal, you
still could not prevent them from happening surreptitiously.

I believe our goal should be that, by ciphering on our home
machines, we may send ciphertext on the network and *nothing* on
the network can do anything about it (except gobble it up). Real
data security means that we don't have to *worry* about "them" or
what "might" be happening on the network, because it really
doesn't affect our security at all.


>The only time this check could fail is if your correspondent didn't bother
>to also perform the above procedure. But the mere fact that he *might*
>do this should deter any prospective spoofer who values his hide...

I have already discussed various failures which are immediately
evident. There may well be others.


>If you do change your public key, just allow enough time for all of the
>public key archives to be updated, perform the check again, and keep on
>chugging along.

How much time is sufficient for published keys to change?

I wish you would confine yourself to under 70-character lines.

---
Terry Ritter rit...@cactus.org

Terry Ritter

unread,
Sep 26, 1992, 11:57:22 PM9/26/92
to

In <920925223...@cup.portal.com> a-bo...@cup.portal.com writes:

>I think Terry Ritter is underestimating the effort needed to successfully
>carry out the spoofing attack he describes.

Look, I really want to get out of this discussion, but this is
really quite outrageous. We're talking about computers here.
Once a decent spoofing program is written, how much does it "cost"
to get it running on a bunch of nodes? Perhaps the expense of
visiting the various node owners and pulling out badges?


>As others have pointed out, the spoofer, to attack me, must filter out
>*all* messages which would allow me to receive someone else's public key.
>(That is, the spoofer must, for each such message, generate his own
>"fake" version of the sender's key, and substitute it in the message.)

Well, no.

To attack you, all a spoofer must do is get you to accept one of
his keys as valid. Then you have a secure channel to the spoofer,
and he may pass the information along to make you think you have
a secure connection to someone else.

One way to get you to accept one of the spoofed keys is to
"always" translate a news-signature key to one of his. (Always?
How often do you check each character in such a key?) If this
is done at or near the sending node, it shouldn't be too
difficult at all.

Another way to do it is to send: "Emergency, I had to change
my key. Here is my new one!", preferably to both ends. If
the users didn't validate their keys originally, why should
they bother later? And the spoofer is in (like Flint?).


>Otherwise, if I do get an accurate key for someone else, the spoofer
>can't tell what I might be sending to him.

Right. Once we *assume* that we have at least one valid key
(that is, we *assume* validation), then we can leverage the world.


>Also, any message which that
>person *signs* and sends to me cannot be altered by the spoofer without
>being detected.

Right. *After* we have a valid key. The whole discussion here
is about *how to get that key*.


>A corollary to this is that if I validate *any* other key through a
>non-email channel, such as the telephone or a personal meeting, the jig
>is again up. If the key has been spoofed (and all keys must be spoofed
>if any are), it will be detected.

I believe I mentioned this.


>(Of course, an even simpler measure is to call someone up and ask what my
>public key .sig was at his site. If I am being attacked as above,
>it must be different from my actual public key.)

Yes, you are attempting to *validate* your key by a separate
channel, as I have advocated.

But when you find out it is different, what do you do? Do you
move to another node? Can you?

As I see it, it is more desirable to be in a position where you
*do not care* what the network does, as long as they deliver your
messages. Then we don't have to say "Spoofing must be hard, so
it can't happen to us, so it's silly to worry about it." If we
do it right, we just don't care how easy it is.

We get to this enviable position by designing a *secure* protocol
for key distribution. Right now, a reasonable ad-hoc approach is
separate-channel validation. But email .sig delivery is a poor
approach because, *if there is* a spoofer, we could be affected.
That is hardly a ringing endorsement of security.


>Ritter makes much of "validated" keys using a Certification Authority.

As far as he can remember, Mr. Ritter has mentioned CA's in two
contexts:

1) As a form of key distribution far more secure than
email .sig delivery. (We *assume* that the government
will have access. But at least CA's would tend to
keep the riff-raff out.)

2) As the basis for key-distribution in most of the
academic papers on the subject. Why is this?
General incompetence and stupidity? Conspiracy?
Let's think about it . . . .


>I
>will note that it will be cheaper to subvert the CA than to crack RSA,
>so one of Ritter's arguments against decentralized public key use applies
>to the bureaucratic approach he favors as well.

Yes, this is my point. No, I do not favor CA's.


>Plus, any system which
>involves a national monopoly arrangement for CA's (as we may have on a
>de facto basis in the U.S. due to the patent protection of RSA) is, IMO,
>going to be *more* vulnerable to government pressure than a decentralized
>system. A compromised CA is far more dangerous than a single compromised
>net node. (And thus, far more valuable to the attacker.)

I do not like CA's. But, to return to the question, why is it
that they are so often used as the basis for key-distribution
protocols? Is it some sort of government PLOT?

Or is it simply because any crypto corporation or academic who
has addressed the issue sees that the obvious trivial approach
(e.g., .sig delivery) does not maintain security?


>Another corollary to the need for a successful spoofer to alter all
>incoming keys has been stated here as well: if such spoofing is a danger
>more of the future than of the present, then collecting keys *now*, before
>such spoofing begins, is a very good idea.

Sure, you can have a good key out there as long as you don't
change your key. Too bad you have to change your key.

As long as you keep the same key, there is an increasing
probability that someone has penetrated your security. If you
have a security incident (fire, computer repair, fire an employee
or divorce a spouse) you either keep the unspoofed key and accept
a *probable* lack of security, or change the key and fall into the
domain of the spoofer.

But, in the real world, we are rarely aware that someone *has*
penetrated our security, *even if they have*. To foil this sort
of thing, it is necessary to change keys *whether or not* we are
sure we have been penetrated.

Wouldn't it be easier to just accept the potential existence of
a spoofer and then arrange things so that it doesn't matter?
CHANGE YOUR PUBLIC KEYS FREQUENTLY! VALIDATE EVERY PUBLIC KEY
BEFORE USE! It's a little more trouble to do it right, of
course. My, how we fight and complain to avoid it.
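
The validation step itself is cheap. Here is the separate-channel
check as a Python sketch (MD5 is, as far as I recall, what PGP-era
software uses to fingerprint keys; the byte strings below are
placeholders):

    import hashlib

    def fingerprint(key_material):
        # hash the key down to a short string two humans can compare
        return hashlib.md5(key_material).hexdigest()

    key_as_received = b"...Bob's key block, as it arrived by email..."
    key_bob_reads_out = key_as_received   # dictated over the phone

    if fingerprint(key_as_received) == fingerprint(key_bob_reads_out):
        print("fingerprints match -- key validated")
    else:
        print("MISMATCH -- the key was substituted in transit")

The phone call works because a spoofer who owns the mail path
presumably does not own the voice line too.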


>If, ten years from now, a
>spoofer attacks me as above, and I've saved some of these currently-
>published keys, they can be used to expose the spoofing. All it takes
>is for me to send a message encrypted with one of these old keys which
>includes my public key in the encrypted portion. Since I collected these
>keys before the spoofer went into business, he can't tell what I am sending,
>and he can't change my public key in the encrypted part of the message.
>The difference between what everybody else thinks my public key is and
>what I think it is will again be exposed.

You mean that you think that individuals can keep the secret portion
of a public key secret, with high probability, for a period of
ten years? Remember that once the secret part is exposed, *anybody*
with that secret can use it to "sign." Useful signatures are only
possible with secret key portions which have never been exposed.
And we only know that for sure when they have just been created.


>Hence, this practice of publishing keys now, which Ritter decries, may
>actually provide a foundation for preventing the spoofing attack in the
>future.

Ritter "decries" things which just don't work.


>Ritter points out that even if I follow all of these precautions, there is
>no assurance that my correspondent is. If he is the one being attacked,
>and, as a naive user, has not taken any of these elementary measures to
>prevent it, then my messages to him will be compromised.

Without a quote to nail this down, I'm not even sure what you
are referring to; I doubt it was something that obvious.

Perhaps my point was this: If your correspondent doesn't validate
the key he *thinks* is yours, when he contacts you, *you* may be
in trouble. You get a message *in your valid key* with a key for
you to use to reply. A shame, isn't it, that that key came from
a spoofer? Who is being attacked here, the naive user, or the
suave sophisticate who does not insist that his correspondents
validate their use of his public key?


>But of course, any encrypted communication with a person inexperienced
>in the need for care must be viewed with caution. There are plenty of
>other ways inexperienced users can compromise our conversation.

Obviously.


>Failure
>to guard the secret key carefully is probably a far greater danger.

Really? How do you know this? How do you measure a "greater"
danger than something which costs almost nothing to install,
and delivers intercepts forever at almost no cost?


>And
>no CA hierarchy will help with that. The best approach is to enlighten
>my correspondent as to the need for the simple precautions I've outlined.

The precautions you outlined are simple enough; they just
don't preserve security.


>But hopefully, in an environment in which most email conversations are
>routinely encrypted (a practice I consider likely in a few years), the
>need for this kind of care will be widely understood.

Yes. In particular, people need to understand why they should
validate public keys. If this happens soon enough, perhaps we
can avoid having the network overrun with spoofers.

---
Terry Ritter rit...@cactus.org

Marc T. Kaufman

unread,
Sep 27, 1992, 11:40:07 AM9/27/92
to
rit...@cactus.org (Terry Ritter) writes:
> In <920925223...@cup.portal.com> a-bo...@cup.portal.com writes:

->I think Terry Ritter is underestimating the effort needed to successfully
->carry out the spoofing attack he describes.

> Look, I really want to get out of this discussion, but this is
> really quite outrageous. We're talking about computers here.
> Once a decent spoofing program is written, how much does it "cost"
> to get it running on a bunch of nodes? Perhaps the expense of
> visiting the various node owners and pulling out badges?

This entire argument has revolved around establishing the public key of
someone you haven't met, or can't meet, in person or by telephone. Pray
tell, just why would you want to do that? No matter how the person comes
across on netnews, he or she could be an agent provocateur or spy or...
I mean, the ultimate (and simplest) spoof is to just pretend to be someone
else when you are on the net.

Why would you attempt to set up a secret channel with someone you haven't
vetted by other means?
--
Marc Kaufman (kau...@CS.Stanford.EDU)

Robert Lewis Glendenning

unread,
Sep 27, 1992, 2:43:16 PM9/27/92
to
I believe that the intelligence agencies only manage to crack a small percentage
(maybe less than 1%) of all traffic. I have heard they save all traffic,
and that it takes a freight-car of tape per day.

It is not possible that they are intercepting all interesting traffic, even
though foreign military and intelligence traffic is only a small percentage
(again, less than 1%) of the interesting traffic of the world.

Now, the Internet is in its infancy, and it already carries many megabytes
per day. If we include the private networks, it is certainly many thousands
of megabytes per day.

So, at the time the world gets serious about networks, it will have gotten
serious about data and line security. All comm lines will be separately
encrypted. All mail, etc. will use at least one level of sophisticated
encryption.

From a cracker's point of view, this is horrid. They have to expend a lot of
effort to determine where files begin and end (due to the data link encryption),
and lots of those files are copies of the latest pinup sloshing around the
net.

I believe that no intelligence agency can handle the logistics. Spoofing is
supposed to be a way of circumventing these logistical problems, but even
sorting out the interesting traffic from all the chaff isn't going to
be easy.

Logistics also protects against spoofing, which has to be complete, or
even lousy security measures will detect it. There are thousands of nodes
on Internet. There will be millions when we get serious. The network will
be extremely dynamic. So, we only need a Consumer Reports-type group (or
perhaps several) to give a daily update of the number of attempted spoofs
against their test sites and mail to/from cooperating users. Once
procedures are published to test for spoofs, lots of people will do it
occasionally and lots of internal security organizations will test
systematically.

So, IMHO, spoofing is something to be concerned about, but it is just another
annoying environmental factor, like viruses (both biological and computer).

Lew
--
Lew Glendenning rlgl...@netcom.com 408-245-7488

"Perspective is worth 80 IQ points." Nils Bohr (or somebody like that).

D. J. Bernstein

unread,
Sep 27, 1992, 6:23:27 PM9/27/92
to
Terry, you keep talking about ``security'' and ``validation'' and
``trust'' without saying what these words _mean_.

A security violation is, by definition, a violation of the applicable
security policy. When there is no policy, there can be no violations. If
you're going to claim that a system is not ``secure,'' Terry, please
point out the security policy which has been violated. (Note that a
policy need not be a formal policy.)

Only the interpretation of a message can be valid or invalid. A message,
absent any interpretation, is neither valid nor invalid. If you're going
to claim that something has not been ``validated,'' Terry, please point
out what interpretation is in question.

``I trust T when he says X'' means that if T says X then I believe X.
Trust, absent any definition of T or X, is meaningless. If you're going
to claim that something is not ``trustworthy,'' Terry, please point out
who T is and what X is.

Now, Terry, why do you claim that MagNet is insecure? You say that its
keys are not ``validated'': what interpretation of the keys are you
talking about?

---Dan

Hal

unread,
Sep 29, 1992, 11:21:07 AM9/29/92
to
I think people are coming down too hard on Terry Ritter, who
is just trying to alert people to problems with using public
keys without validation. He is getting all these sarcastic
comments just because people don't like to hear the things he
is saying.

Actually, one of the big advances in PGP 2.0 is to make it easy
to distribute _signed_ keys. That was one of the main points in
making changes to the data structures. People should use this
capability and try to collect signatures for their keys. By
distributing signed keys the hope is that the person you want to
communicate with can verify your key, at least to the extent that
the signer vouches for you.

If you don't know anybody else who is using PGP, you need to do
as Terry suggests and use some other channel to validate their
keys. Call each other up, read your keys over the phone, and if
they check out, sign them. This is probably safe enough today,
in my opinion. This way you can get legitimate signatures on your
keys which can improve the security of the whole system, as Terry
says.
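
The rule that signed keys encode can be stated in a few lines. A toy
sketch in Python (the structures are invented for illustration, not
PGP's actual key-ring format):

    # user IDs whose keys I have validated myself (phone, in person)
    trusted_directly = {"alice"}

    # key owner -> set of user IDs who have signed that key
    signatures = {
        "bob":   {"alice"},   # Alice signed Bob's key
        "carol": {"bob"},     # Bob signed Carol's key
    }

    def acceptable(owner):
        # simplest policy: accept a key only if it carries a
        # signature from someone I already trust directly
        return bool(signatures.get(owner, set()) & trusted_directly)

    print(acceptable("bob"))    # True: Alice vouches for Bob
    print(acceptable("carol"))  # False: I never checked Bob myself

Real policies can be more generous (longer chains, several signers),
but the idea is the same: a signature carries trust from a validated
key to an unvalidated one.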

Also, Terry is absolutely right about the need to change keys
periodically. The longer a key is used, the more chances someone
has to discover your secret key, and also the more it is worthwhile
for someone to spend money to try to find your key. The PGP 2.0 key
data structures actually have an expiration field in them. This
is not implemented in this version but it would be good to see this
feature used in a later version.

The idea would be that each key would have a fixed lifetime, maybe
one year or two years. After that time the key would expire. Any
attempt to use an expired key would lead to a strong warning from
the program. A few weeks before the key expires, you create a new
key, sign it with the old one, and distribute it. By signing it
with the old one you don't lead to an opportunity for someone to
easily fake a new key from you.
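
Here is that rotation as a toy Python sketch (the record layout is
invented here; PGP's actual data structures differ):

    import time

    YEAR = 365 * 24 * 3600

    def make_key(owner, signed_by=None, lifetime=YEAR):
        return {"owner": owner,
                "expires": time.time() + lifetime,
                "signed_by": signed_by}   # old key vouches for new

    def check(key, now):
        if now > key["expires"]:
            print("WARNING: key for %s has expired" % key["owner"])

    old = make_key("alice")
    new = make_key("alice", signed_by=old)   # rotated before expiry
    check(old, now=old["expires"] + 1)       # should complain loudly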

(This is all for the case of a "normal" key transition. Switching
keys when you know your private key has been discovered will require
more drastic measures. Key revocation certificates are another feature
which should be implemented in future versions of PGP.)

Hal Finney

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.01

mQCNAiqsNkwAAAEEAMKWM52m5EWi0ocK4u1cC2PPyHT6tavk9PC3TB5XBYDegf3d
sldRpnjJj1r+aO08FFO+QLEI9wtBqvf1PPP5iLX7sD2uIVlJH14MPtyVtjm9ZKb8
JMtCW74045BgtHBC9yQ3V7vXNV5jM6dE2ocnH4AI/pBFrGLJPKgTA69YIUw3AAUR
tCZIYWwgRmlubmV5IDw3NDA3Ni4xMDQxQGNvbXB1c2VydmUuY29tPokAlQIFECqu
M1Tidd4O/2f3CwEByrUD/3uoV2y+Fuicrrd2oDawgOw9Ejcx6E+Ty9PVPqKvflLs
0zYyGfeFVSgBbTSDP3X91N3F68nydl9J9VA6QRCGelHM1cZRukCJ0AYbKYfpwUN0
xjEGHsDrd2gT5iWlB3vBZvi+6Ybs4rSq+gyZzVm1/+oRrMen32fz2r0CLgUtHok2
=fF6Z
-----END PGP PUBLIC KEY BLOCK-----

aspri...@eagle.wesleyan.edu

unread,
Sep 28, 1992, 9:55:46 AM9/28/92
to
In article <1992Sep23.1...@cactus.org>, rit...@cactus.org (Terry Ritter) writes:
>
>
> But I don't think it solves our problem. Not yet.
>
> The way I understand it, Ross proposes to hide a small amount of
> data in normal text by hashing 40-byte blocks into a single bit
> using a secret key. During text construction, the text editor
> would "beep" if a just-created block did not produce the correct
> bit; in that case the user would back up and try again with
> different words.
>
> The advantage is that we can get a hidden secret message past
> the Opponent because the outer message looks innocuous. Then,
> once everybody has the message, we just announce the secret key
> and everyone can unlock the hidden data (the validated public key).

You are right. It does not solve the problem. Every move of the
person trying to distribute their key can be intercepted by the opponent in
this model. Even if the opponent can't tell what's up with a message,
because the outer message looks innocuous, he can still change its
wording, destroying the inner bit pattern without alerting the readers
to the change. The opponent could then, as you said, announce that the
public key is in the new message and publish a key and algorithm to
find it (after the distributor attempts to announce such a key).
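
For concreteness, here is a toy version of the scheme, which also
shows why rewording breaks it (the keyed hash is an ad-hoc stand-in
chosen for illustration, not a serious construction):

    import hashlib

    SECRET = b"shared secret"
    BLOCK = 40

    def block_bit(text):
        # keyed hash of one block of cover text, reduced to one bit
        return hashlib.md5(SECRET + text.encode()).digest()[0] & 1

    def hidden_bits(cover_text, n):
        # one hidden bit per 40-character block
        return [block_bit(cover_text[i*BLOCK:(i+1)*BLOCK])
                for i in range(n)]

    cover = "The quick brown fox jumps over a lazy dog.  " * 4
    print(hidden_bits(cover, 4))

The author rewords each block until block_bit() yields the bit he
wants (the editor "beeps" otherwise). An opponent who rewords the
text scrambles exactly those bits, without leaving any visible trace.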

The key to initial key distribution is SECURITY, not just CRYPTOGRAPHY.
There must be some form of communication that is un-interceptable to validate
any such distribution. For example, after the initial distribution of the
outer text, two or three of the readers could physically bring the text they
received to the distributor for a comparison. If they are murdered on the way
and substituted by an excellent team of imposters, we'd have a problem :)

This solution is exactly the type of solution we are trying to avoid,
so I would love for someone to prove me wrong.

Andrew Springman

Uri Blumenthal,35-016,8621267,

unread,
Sep 29, 1992, 9:12:35 PM9/29/92
to
From article <>, by 74076...@CompuServe.COM (Hal):

> -----BEGIN PGP PUBLIC KEY BLOCK-----
> Version: 2.01

Hey, what's "Version 2.01"? Was the patch released yet?
Why isn't the world notified? (:-)

--
Regards,
Uri. u...@watson.ibm.com
------------
<Disclaimer>

Ross Anderson

unread,
Sep 30, 1992, 5:21:38 AM9/30/92
to
In <1992Sep28...@eagle.wesleyan.edu>, aspri...@eagle.wesleyan.edu
(Andrew Springman) writes:

> In article <1992Sep23.1...@cactus.org>, rit...@cactus.org (Terry
> Ritter) writes:
>
> > The way I understand it, Ross proposes to hide a small amount of
> > data in normal text by hashing 40-byte blocks into a single bit
> > using a secret key. During text construction, the text editor
> > would "beep" if a just-created block did not produce the correct
> > bit; in that case the user would back up and try again with
> > different words.
>

> You are right. It does not solve the problem. Every move of the
> person trying to distribute their key can be intercepted by the opponent in
> this model. Even if the opponent can't tell what's up with a message,
> because the outer message looks innocuous, he can still change its
> wording, destroying the inner bit pattern without alerting the readers
> to the change. The opponent could then, as you said, announce that the
> public key is in the new message and publish a key and algorithm to
> find it (after the distributor attempts to announce such a key).

I believe I did mention in my original posting that the spooks did this
during the war. If they saw a telegram saying `father is deceased' they'd
change it to `father is dead'; if they got lucky, a dumb spy would query,
`is father dead or deceased?' (This is all in Kahn's book.)

If the opponent isolates you totally, intercepts and modifies _all_ your
communications, then you've indeed got a problem. But using steganography
in the way I suggested can make his job much harder, especially if you
couple it with a non electronic communications medium.

For example, if you write occasional articles, then you can hide your
certificate in text which you email to the editor. Then stop to buy a copy
of the magazine in a randomly chosen town; this should reassure all but the
clinically paranoid that Big Brother didn't monkey with that particular
email message.

I don't claim that steganography is a panacea, for the key certification
problem or anything else. It's just one of the tools in the kit. If you're
going to use crypto to transfer trust, you may want to take a lot of care
about how you establish the first link in the chain of trust.

At the moment the military and diplomatic people do this with couriers, and
the banks do it by writing letters to their correspondent banks' senior
managers' home addresses; both can be subverted with enough effort. Using a
broadcast medium like the internet, perhaps together with steganography, is
one possible improvement; another might be for one trusted party, such as
the Bank of England, to broadcast its key on teletext and the BBC World
Service. You pays your money and you takes your choice.

However if the real goal is to shield oneself from harm at the hands of the
State - whether deliberate or accidental - then cryptology is probably not
the right approach. In that case, why not form a pressure group and raise
twenty thousand a year for the local ruling party? That way you get access
to ministers if you ever need it.

Ross

R A Hollands

unread,
Sep 30, 1992, 8:42:12 AM9/30/92
to
In article <920929152106_74...@CompuServe.COM> 74076...@CompuServe.COM (Hal) writes:
>I think people are coming down too hard on Terry Ritter, who
>is just trying to alert people to problems with using public
>keys without validation. He is getting all these sarcastic
>comments just because people don't like to hear the things he
>is saying.

I'll agree with that. Terry seems to me to be saying, "It's a good system
but there are weaknesses. What are you going to use it for?"

In the PGP documentation, Phil Z compares encrypted e-mail to the envelope
we use for letters. Well, you can steam letters open; put medicinal alcohol
on them and make them transparent; poke in a split stick, wind the letter
round it, remove the letter, read it, and put it back. Should we give up
using envelopes?

(I hope I didn't tell you something you didn't already know there)

Richard
[NULL signature]

Terry Ritter

unread,
Oct 4, 1992, 3:54:56 AM10/4/92
to

In <BvE5y...@exnet.co.uk> s0...@exnet.co.uk (R A Hollands) writes:

>Terry seems to me to be saying,"It's a good system but
>there are weaknesses. What are you going to use it for?"

Not quite, actually. I was *trying* to say: "It's a good system,
and should have almost *no* weaknesses, PROVIDED that it is *used
correctly*." I was attempting to describe incorrect usage, and
show *why* that usage really was incorrect.

I guess I would like to see public-key documentation put more
emphasis on user responsibilities. It seems unlikely that *any*
system can deliver true secrecy *by itself*; each user is
necessarily responsible for operating the system in ways which
will secure the secrecy that the system can deliver.

Individuals in the US (and probably most of the "free world") have
very little experience with information security. One of the clippings
in my files is a column from Insight for Dec. 10, 1990 (p. 28),
during the buildup in the Saudi desert. It seems that Iraqi
radiomen picked up names of US servicemen, and then got Marine
patrols "to bite on a phony message." Obviously, proper military
equipment, training, and execution could have prevented this (and
probably quickly did). But normal people in our society simply
do not expect such attacks, and don't operate in ways to minimize
those possibilities. Until people understand the possible problems,
they are vulnerable, *even if* they have a good cryptosystem.

(Indeed, I would argue that the lack of understanding of information
security by the general populace--some of whom must inevitably get
into critical situations--constitutes a national security problem
much larger than any use of cryptography by criminals.)


>In the PGP documentation, Phil Z compares encrypted e-mail to the
>envelope we use for letters. Well, you can steam letters open; put
>medicinal alcohol on them and make them transparent; poke in a split
>stick, wind the letter round it, remove the letter, read it, and put
>it back. Should we give up using envelopes?

We should absolutely give up sending *serious* secrets in
envelopes :-).

But I guess you mean that the cryptosystem can be less than secure
if we only use it for small secrets. I never know how to respond
to this: We have to set up a secure link *before* we use it. Can
we know the extent of the secrecy we will need in the future? Can
we afford to squander secrecy up front simply because we are lazy
and would prefer not to validate public keys? I wouldn't think so,
but I'm certainly willing to listen.

---
Terry Ritter rit...@cactus.org

R A Hollands

unread,
Oct 5, 1992, 12:43:54 PM10/5/92
to
In article <1992Oct4.0...@cactus.org> rit...@cactus.org (Terry Ritter) writes:
>
> But I guess you mean that the cryptosystem can be less than secure
> if we only use it for small secrets. I never know how to respond
> to this: We have to set up a secure link *before* we use it. Can
> we know the extent of the secrecy we will need in the future? Can
> we afford to squander secrecy up front simply because we are lazy
> and would prefer not to validate public keys? I wouldn't think so,
> but I'm certainly willing to listen.
>

(Just yell if "I really want to get out of this discussion" still applies)

If we expand "small secrets" so that our secrets are evaluated in terms of
the cost to us of their being compromised - big secrets, high cost - then
doesn't it make sense to spend only as much on secrecy as we think our
secret is worth?

Since "absolute" security is absolutely unattainable aren't we actually
obliged to do this (unless we're in it for amusement sake only)?

Isn't "good enough" good enough? PGP seems good enough to me, but, then,
I've only got small secrets. And a public key posted here is just a
supply of envelopes.

Richard

Terry Ritter

unread,
Oct 6, 1992, 5:13:29 AM10/6/92
to

In <BvnqH...@exnet.co.uk> s0...@exnet.co.uk (R A Hollands) writes:

>> But I guess you mean that the cryptosystem can be less than secure
>> if we only use it for small secrets. I never know how to respond
>> to this: We have to set up a secure link *before* we use it. Can
>> we know the extent of the secrecy we will need in the future? Can
>> we afford to squander secrecy up front simply because we are lazy
>> and would prefer not to validate public keys? I wouldn't think so,
>> but I'm certainly willing to listen.

>(Just yell if "I really want to get out of this discussion" still applies)

It's not permanent, it's just that composing a serious response
for each of many different postings seriously limits real life.
I'm still catching up on the latter.


>If we expand "small secrets" so that our secrets are evaluated in terms of
>the cost to us of their being compromised - big secrets, high cost - then
>doesn't it make sense to spend only as much on secrecy as we think our
>secret is worth?

OK, but exactly what is the operational "cost" of a spoofing
process running on somebody else's computer? What kind of
secret costs less than sending "new key" messages to both ends?
(I see no reason to imagine that this would not succeed if the
ends never validated their keys originally.)

Certainly, no such program can handle *all* situations, but those
messages it could not handle could be detoured for human action.
Then, once spoofing has been accepted by both ends, deciphering
and re-enciphering should be automatic and invisible to the users.
This is cheap, clean, white-collar intelligence; a real threat.


>Since "absolute" security is absolutely unattainable aren't we actually
>obliged to do this (unless we're in it for amusement sake only)?

True, absolute security is a goal; not reality. But it would
seem rather strange to argue that this means that we don't need
to worry about enforcing security (since we know it can never be
"absolute"). It seems to me that we need to fix every hole we
can find, but most especially the cheap and easy holes.


>Isn't "good enough" good enough? PGP seems good enough to me, but, then,
>I've only got small secrets. And a public key posted here is just a
>supply of envelopes.

Again, PGP is *not* the problem. The problem is *users* who do
not validate their public keys.

A public key posted here may or may not be spoofed. If the key is
not spoofed, then we save all the trouble of validating the key.
But if the key *is* spoofed, then there is no secret at all. There
is *no* envelope; the subsequent conversation is *completely* open.

Thus, the "good enough" aspect of non-validated keys mainly acts
to enable the construction of a spoofing system. Once the system
is constructed, it can be applied virtually for free. This does
not mean that *everyone* could be attacked, but since we could not
know who *would* be at risk, we all have to accept the possibility.
When spoofing is applied, the "cost" of attacking a non-validated
key is almost zero.

Why would anyone bother using a cryptosystem if they will only
use it in a way that can be completely negated almost at will?

---
Terry Ritter rit...@cactus.org


Ed Carp

unread,
Oct 6, 1992, 5:11:35 PM10/6/92
to
Ross Anderson (rj...@cl.cam.ac.uk) wrote:

: I believe I did mention in my original posting that the spooks did this
: during the war. If they saw a telegram saying `father is deceased' they'd
: change it to `father is dead'; if they got lucky, a dumb spy would query,
: `is father dead or deceased?' (This is all in Kahn's book.)

Tom Clancy wrote about this in one of his books. The CIA used it to
discover who had leaked what to the press by distributing memos that were
each worded slightly differently. They all said the same thing, yet a
quote taken from any one would immediately identify which copy had been
leaked.
--
Ed Carp, N7EKG e...@apple.com 801/538-0177
"This is the final task I will ever give you, and it goes on forever. Act
happy, feel happy, be happy, without a reason in the world. Then you can love,
and do what you will." -- Dan Millman, "Way Of The Peaceful Warrior"

Christopher Browne

unread,
Oct 6, 1992, 3:28:56 PM10/6/92
to
Would it not be appropriate for people to at least change the subject
line once in a while?

PGP *2.0* has now been "available" for a while, and the subject line
"PGP *2.0* available" really ought to be applied to messages
concerning the availability of the program. It would be relevant to
messages where new FTP sites are announced; it would be relevant to
questions like "Where can I get it?"

Many of the discussions that have gone under this subject line REALLY
should fall under a subject like: "Key Transmission."

And I think a good point has been brought up:

For most of the people playing with PGP, the actual security of the
system is really not ALL that important.

If someone is REALLY interested in security, it is because there are
two or more people that have some CRITICAL communications. And
they'll be willing to put the time and effort in to use the key system
exactly as it was designed; they'll use some system of key exchange
that is sufficiently independent that they WON'T have spoofing
problems. They will NOT be using finger to check the key!

They'll dictate armour code numbers over the telephone if they have
to; they'll more likely send a printout/disk containing the keys
through the mail. Since the data is IMPORTANT, and they NEED to send
it safely, they'll KNOW one another sufficiently well that TRUST won't
be quite the same issue that it is for those that would like to send
encrypted email around for fun.

What's important for most of the people around here is that playing
with PGP is FUN.

- It's FUN to feel that you get to "fool" sundry government agencies
- It's FUN to try to figure out ways of breaking the system
- It's FUN to try to figure out ways of strengthening the system

There may be a few professionals around this group; most of the people
here probably aren't, at least with respect to cryptology.

Now can we start putting just a little bit of the creativity into the
subject lines? Thanks!

--
Christopher Browne
cbbr...@csi.uottawa.ca
University of Ottawa
Master of System Science Program

R A Hollands

unread,
Oct 6, 1992, 11:58:04 AM10/6/92
to

Well, now you've said it three times, slowly and loudly, and I think I'm
beginning to get the point!

How does this analysis sound to you?

- The only person who can verify my key with reasonable certainty is
me. (Leaving aside the real paranoiacs in the other thread talking
about PGP having been spoofed).

- The other element in the system that needs verification is the
communications path between me and the net.

- So, if I have access to more than one route I can test my local
mail system by sending a copy of my key from one of my mailboxes to
another and comparing. (The userids on the different systems are
different, so a spoofer doesn't know what I'm about).

What can I tell you?

- "My mail system is verified. Use it with confidence"? This message
can be spoofed but to what end? If the spoofer isn't substituting my
key then he hasn't gained anything and if she is then I know and she
gains nothing but a sex change.

- "My mail system is compromised. Do not talk to me." We now have
the same security as the write-only file (called "NUL:" on DOS
systems). But I presume I can take measures - sue the network
administrator for negligence or something.

If this works, can I extend it to users with only one net link: you
send me your key (ASCII format) and I print it and post it to you? Then
somebody steams the envelope open!!

Any good?

Richard

[ "Hey baby, you should be in real life!" - Zaphod Beelblebrox I ]

Ken Pizzini

unread,
Oct 6, 1992, 6:55:42 PM10/6/92
to
In article <1992Oct6.1...@csi.uottawa.ca> cbbr...@csi.uottawa.ca (Christopher Browne) writes:
>And I think a good point has been brought up:
>
>For most of the people playing with PGP, the actual security of the
>system is really not ALL that important.
[...]

>What's important for most of the people around here is that playing
>with PGP is FUN.
>
>- It's FUN to feel that you get to "fool" sundry government agencies
>- It's FUN to try to figure out ways of breaking the system
>- It's FUN to try to figure out ways of strengthening the system
>
>There may be a few professionals around this group; most of the people
>here probably aren't, at least with respect to cryptology.

I agree, and two more points I'd like to toss in:

- PGP lets us explore user interfaces for cryptographic communications
- Using programs like PGP helps create a legal presumption that
ordinary folk encrypt messages (i.e. it's not a "profile" for
illicit activities, like using large quantities of cash has become)

--Ken Pizzini

Phil Karn

unread,
Oct 7, 1992, 6:46:01 PM10/7/92
to
In article <17939.Sep2...@virtualnews.nyu.edu>, brn...@nyu.edu (D. J. Bernstein) writes:
|> In article <1992Sep20....@cactus.org> rit...@cactus.org (Terry Ritter) writes:
|> > As far as I can tell, the whole point of cryptography is to be
|> > able to restrict information to only those who are intended to
|> > have it.
|>
|> That description is far too broad. You cannot singlehandedly restrict
|> information to a chosen recipient, because he can in turn give the
|> information to someone else. You pose an impossible problem; is it a
|> surprise that there are no solutions?

Indeed. I've been making this point regarding anti-piracy devices for
some time. Even if Videocipher, for example, had not been broken there
would have been no way to prevent someone from taping the output of a
legitimate decoder and selling it or giving it away. The same is
undoubtedly true for software packages, at least those running on
standard general purpose machines. If you can't trust the authorized
user, then all bets are off.

Phil

Mark C. Henderson

unread,
Oct 7, 1992, 8:27:18 PM10/7/92
to
In article <1992Oct6.1...@csi.uottawa.ca> cbbr...@csi.uottawa.ca (Christopher Browne) writes:
>...

>Many of the discussions that have gone under this subject line REALLY
>should fall under a subject like: "Key Transmission."
>
>And I think a good point has been brought up:
>
>For most of the people playing with PGP, the actual security of the
>system is really not ALL that important.
>
>If someone is REALLY interested in security, it is because there are
>two or more people that have some CRITICAL communications. And
>...

>They'll dictate armour code numbers over the telephone if they have
>to; they'll more likely send a printout/disk containing the keys
>
>- It's FUN to feel that you get to "fool" sundry government agencies
>- It's FUN to try to figure out ways of breaking the system
>- It's FUN to try to figure out ways of strengthening the system

There are all sorts of uses for PGP and the various alternatives with
less than perfect key verification and hence far less than perfect
privacy/authentication.

1. Mail is often misdirected unintentionally. Life is like that. Your
sysadmin probably sees at least part of a message not intended for
his/her eyes on a regular basis. Sometimes mail even ends up in the
wrong mailbox. If it is encrypted, your privacy is probably not
compromised. For instance, the day before yesterday, due to a bizarre
system problem at one of our feed sites, my boss received a mail
message directed to me (true story). These things happen.

2. It is possible for more or less any determined, knowledgeable user
on a unix system (for example) to read your mail. If it is encrypted,
he/she has one more hoop to jump through before he/she can read it.

3. With some method of key verification (even finger!) you are a lot
less likely to be taken in by a forgery. Of course, some god-like
organisation can spoof all your lines of communication. But if some
country's government wants the information on your computer they can
probably get it anyway. However, anyone with a little knowledge can
forge E-mail and news articles at will. Most of us can't spoof all
of the finger requests from system x to system y on the internet. Of
course, your system software could be rewritten, your router might be
grabbing packets and the CIA might be sitting outside your house
grabbing all of the electromagnetic radiation from your computer
monitor.

People have said it here before. If you want perfect security for your
communications, you are out of luck. However if you want a greater
degree of privacy, PGP used with some, but not extraordinary care,
will give you that. I'm personally not worried about the possibility
of someone sitting outside of my home picking up the electromagnetic
radiation from my computer screen.

For the sake of an analogy. People put burglar alarms in their houses
not to give them perfect security, but to make it more difficult to
steal or damage their goods/homes. Anyone with sufficient skill,
determination, and resources will probably be able to circumvent most
alarm systems. Same thing with privacy enhanced email.

Mark
--
Mark Henderson, SoftQuad Inc, 108-10070 King George Hwy, Surrey, B.C. V3T 2W4
Internet: ma...@wimsey.bc.ca, m...@sq.com
UUCP: {van-bc,sq}!sqwest!mch Telephone: +1 604 585 8394 Fax: +1 604 585 1926
RIPEM public key available by Email, finger ma...@wimsey.bc.ca or keyserver

Terry Ritter

unread,
Oct 8, 1992, 2:43:16 AM10/8/92
to

In <1992Oct7.2...@qualcomm.com> ka...@qualcom.qualcomm.com
(Phil Karn) writes:

The rest of my original comment was:

"When a public key "appears," it may really be a key
which was generated inside a spoofing node. When one responds
to that key, one's response may be deciphered in the spoofing
node, then re-enciphered to the ultimate recipient.

"Note that the recipient does, in fact, get the message. One
can communicate in cipher. The problem is that the spoofing
node gets to read all the communications. So unless we actually
*intended* that there be a spoofing node, and that they should
read our messages, I think the cryptography has failed."


The reality that someone privy to a secret can betray it is
something most of us learned the hard way, before third grade,
and may have re-learned many times since. This is not news.

What *is* news is the idea that anyone would equate betrayal at
the other end with the possibility that a spoofer could *also*
be reading the mail and distributing it.

Expressed perhaps a bit more clearly: The *whole point* of
cryptography is to deliver information, *unexposed*, to the far
end. Then, if the secret *is* exposed, at least you know it was
them (or you). The ability to identify and eliminate channels
of exposure is a major part of security. (No previous part of
this thread has concerned the broadcast distribution of secure
information.)

If a spoofer *is* reading the mail (something well within the
range of a cracker, but which could easily be prevented by the
user), the system is *not* pretty good cryptography, it is not
even good cryptography, it is just *failed* cryptography. Instead
of a fancy two-key cipher with a strong one-key data engine, it
might as well have been a modest homophonic substitution, or a
stream cipher with a little 32-bit LCG or LFSR. But this would
*not* be real cryptography, it would be *toy* cryptography.

---
Terry Ritter rit...@cactus.org

Ted Dunning

unread,
Oct 9, 1992, 4:40:53 PM10/9/92
to

In article <1992Oct6.2...@unislc.uucp> e...@unislc.uucp (Ed Carp) writes:

Ross Anderson (rj...@cl.cam.ac.uk) wrote:

: I believe I did mention in my original posting that the spooks did this
: during the war. If they saw a telegram saying `father is deceased' they'd
: change it to `father is dead'; if they got lucky, a dumb spy would query,
: `is father dead or deceased?' (This is all in Kahn's book.)

Tom Clancy wrote about this in one of his books. The CIA used it
to discover who had leaked what to the press by distributing memos
that were each worded slightly differently. They all said the same
thing, yet a quote taken from any one would immediately identify
which copy had been leaked.

the british government has taken to doing essentially this. different
cabinet members are given copies of documents with different spacing
patterns. if they leak a xerox to the press, then it is very clear
whose copy it is.
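
a toy version of the trick in python (the encoding here is invented
for illustration; any systematic variation in the layout would do):

    def mark_copy(text, recipient_id, width=4):
        # encode recipient_id, one bit per sentence boundary, as
        # single vs. double spacing after the full stop
        bits = [(recipient_id >> i) & 1 for i in range(width)]
        parts = text.split(". ")
        out = []
        for i, part in enumerate(parts[:-1]):
            pad = "  " if i < width and bits[i] else " "
            out.append(part + "." + pad)
        return "".join(out) + parts[-1]

    doc = "First point. Second point. Third point. Fourth point. End."
    print(repr(mark_copy(doc, recipient_id=5)))   # spacing reads 1,0,1,0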

the countermeasure is to scan and reprint anything you want to leak.

David K. Black

unread,
Oct 10, 1992, 11:38:35 PM10/10/92
to

Damon

unread,
Oct 25, 1992, 9:33:14 AM10/25/92
to

You can't get a visa to visit them, or maybe (allow 100 years...) they
are on another planet and *can't* come to meet you. Or just generally
that visiting or external validation costs would be too high for the
purposes of the communication.

Damon
--
Damon Hart-Davis | Tel/Fax: +44 81 755 0077 |1.29|| Cheap Sun eqpt available.
Internet: d...@exnet.co.uk | Also: Da...@ed.ac.uk || Mail/news feeds available.
--------------------------+ Will exchange uucped local articles free over V22b.
Public access UNIX (Suns), news and mail for #5 per month. FIRST MONTH FREE.
