As such, I have been trying to figure out a way for us to make email
encryption standard through the mail server rather than supporting it at the
client level as we currently do (using PGP clients or GPG as appropriate).
What I want to do is somehow have every email that goes out automatically
encrypted through PGP if there is a recipient public key available, and at the
same time automatically decrypt any messages that come in, so that the
encryption is completely transparent to the user and can be handled at the
sysadmin level (i.e. keys are generated and managed on the server and
passphrases are supplied somehow on demand by the server).
I have seen projects such as pgpmail on SourceForge that aim to do this as an
MTA (unfortunately it has no files released and is in a pre-alpha state), but
I keep wondering whether anyone can think of a devastatingly clever way to do
this with Postfix...
Any help appreciated...
ciao!
Daryl.
Paris, France
____________________________
Daryl Manning
dman...@micheldyens.com
-
you're forgetting the important part of that...
on the mail gateway you would need _ALL_ the passphrases for the
private keys of your co-workers...
so you have a security hole there (a sysadmin/hacker could
read/encrypt/sign with your keys).
the client-based solution is far better imho.
just my 2 cents
--
intraDAT AG http://www.intradat.com
Wilhelm-Leuschner-Strasse 7 Tel: +49 69-25629-0
D - 60329 Frankfurt am Main Fax: +49 69-25629-256
But the issue we have is that most people have trouble using the client
and so do not use encryption at all, which is particularly dangerous in
our line of business (banking).
Having all keys and passphrases on the server is, of course, also a security
risk, but it is one that is more manageable, and it also provides the ability to
decrypt messages should a user leave or something happen to them (you never
know when they're going to make me angry... =} ).
So, the question still stands: anyone got a good way of implementing this
by spawning from Postfix or something?
Thanks!
Daryl.
-----Original Message-----
From: owner-pos...@postfix.org
[mailto:owner-pos...@postfix.org] On Behalf Of Sven Michels
Sent: Friday, October 5, 2001 14:58
To: Postfix-Users
Subject: Re: Implementing a PGP encryption/decryption gate
You'd accomplish this with a content_filter (man 8 smtpd). I'm not aware
of any content filters that'll do encryption for you, however. You
might have to roll your own.
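Something along these lines might serve as a starting point - a rough,
untested sketch only. It assumes a master.cf pipe entry such as

  pgpfilter unix - n n - - pipe
    flags=Rq user=filter argv=/usr/local/bin/pgp-encrypt.sh -f ${sender} -- ${recipient}

with content_filter = pgpfilter: on the receiving smtpd and
pgpfilter_destination_recipient_limit = 1 so the script sees one recipient
per delivery; the service name, filter user, and paths are all made up, and
real mail would need proper MIME handling rather than just armoring the raw
body:

  #!/bin/sh
  # pgp-encrypt.sh (sketch): encrypt the body with GnuPG if we hold a public
  # key for the recipient, otherwise pass the message through unchanged
  # (or bounce here, if policy demands encryption).
  SENDMAIL="/usr/sbin/sendmail -i"
  EX_TEMPFAIL=75

  TMP=`mktemp /var/spool/filter/msg.XXXXXX` || exit $EX_TEMPFAIL
  trap "rm -f $TMP" 0 1 2 3 15
  cat > "$TMP" || exit $EX_TEMPFAIL

  # with the argv above, the last argument is the envelope recipient
  for arg in "$@"; do rcpt="$arg"; done

  if gpg --batch --no-tty --list-keys "$rcpt" >/dev/null 2>&1; then
      ( sed '/^$/q' "$TMP"                       # keep the original headers
        sed '1,/^$/d' "$TMP" | gpg --batch --no-tty --always-trust \
              --armor --recipient "$rcpt" --encrypt ) | $SENDMAIL "$@"
  else
      $SENDMAIL "$@" < "$TMP"                    # no key: send in the clear
  fi
  exit $?

Re-injection through /usr/sbin/sendmail comes back in via pickup/cleanup,
which doesn't have the content_filter set, so the message isn't filtered
twice.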
--
matt
Oh, forgot to mention: this leaves your users open to identity theft by
anyone else inside your organization (or with access to that internal SMTP
server). Since you'd have to use the MAIL FROM address to encrypt the mail
on behalf of whoever is sending it, the mail can obviously be spoofed. It's
just not a good situation -- PGP secures the package, not the transport.
John
--
John Madden
UNIX Systems Engineer
Ivy Tech State College
jma...@ivy.tec.in.us
Presumably, you could have this one dealt with "politically". If it's as
important and critical as you say, then just reject any emails that come
through in the clear. Just make sure management OK's that policy first,
because the users are gonna scream like stuck pigs.
I'd personally set that up for a list of domains that must be encrypted,
letting people still get their non-critical personal stuff, but YMMV. It's
a policy thing, really.
Although, rejecting anything not encrypted would be one hell of a spam
beater. :D
What client are they using, if I may ask? I use KMail on Mandrake, which has
pretty much 100% seamless PGP integration, but I certainly expect I'm in the
minority.
Outlook does a better job and is the weapon of choice for most of the
office, but most people have difficulty getting the idea of the PGP public
vs. private key stuff.
I ran across pgpforwarder which seems like a pretty good idea, but I think
it needs some work (and docs) before I try and play with it.
And to be honest, I'd prefer a technological solution to this problem rather
than depending upon users or the boss. Really, the whole point is to make
this seamless. If encryption is painless then everyone will use it, even if
they don't know about it. =}
ciao!
Daryl.
-----Original Message-----
From: owner-pos...@postfix.org
[mailto:owner-pos...@postfix.org] On Behalf Of Jason Baker
Sent: Friday, October 5, 2001 18:16
To: postfix-users
Subject: Re: Implementing a PGP encryption/decryption gate
Jeffrey
Quoting Daryl Manning <dman...@micheldyens.com>:
> It's a Win98/2000 shop except for my servers and workstations. Exchange or
> Outlook Express. Another problem is that the OE client does not encrypt
> attachments, so people have sent things thinking they are encrypted and
> well, they're not.
>
> Outlook does a better job and is the weapon of choice for most of the
> office, but most people have difficulty getting the idea of the PGP public
> vs. private key stuff.
>
> I ran across pgpforwarder which seems like a pretty good idea, but I think
> it needs some work (and docs) before I try and play with it.
>
> And to be honest, I'd prefer a technological solution to this problem rather
> than depending upon users or the boss. Really, the whole point is to make
> this seamless. If encryption is painless then everyone will use it, even if
> they don't know about it. =}
--
I don't do Windows and I don't come to work before nine.
-- Johnny Paycheck
The idea is that plaintext/HTML mail with attachments would be sent to a
PGP forwarding server (or something) which would encrypt/sign and then it
would zoom through to Postfix (or in reverse order, if that would work).
The reverse happens coming back. That way, things are encrypted as they move
outside of our domain, and the user here inside doesn't have to worry in the
least about encryption at all. All done automagically.
At least that's what I'd like...
Daryl.
-----Original Message-----
From: owner-pos...@postfix.org
[mailto:owner-pos...@postfix.org] On Behalf Of Jeffrey Taylor
Sent: Friday, October 5, 2001 18:30
To: postfix-users
Subject: Re: Implementing a PGP encryption/decryption gate
Lots of people would like that (portions of unmentioned management *swears*
it can be done as part of an SMTP transaction), and of course, you can
circumvent the PKI process if you'd like, but that'd defeat its security.
Something like this might be possible:
Incoming Mail: Inet -> border/DMZ SMTP server -> users'
.forwards/procmail/whatever -> verify/unencrypt mail -> forward to internal
SMTP server -> mail client
Outgoing Mail: Mail client -> internal SMTP server -> *not sure on this one,
but some sort of filtering that'll do the PGP stuff for you* -> DMZ SMTP
server -> Inet.
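For the inbound leg, the .forwards/procmail step could be as dumb as a
wrapper like this (only a hypothetical sketch - the internal relay name,
passphrase file, and paths are invented, and keeping a passphrase in a file
on the gateway is exactly the weakness discussed further down this thread):

  #!/bin/sh
  # pgp-decrypt.sh <user> (sketch): called on the DMZ box, e.g. via
  #   "|/usr/local/bin/pgp-decrypt.sh jdoe"
  # in ~jdoe/.forward; decrypts PGP bodies and relays the result inward.
  USER="$1"
  INTERNAL_RELAY="internal-smtp.example.com"       # assumed internal server
  PASSFILE="/etc/pgp-gate/$USER.pass"              # assumed passphrase store

  TMP=`mktemp /tmp/pgpin.XXXXXX` || exit 75
  trap "rm -f $TMP" 0 1 2 3 15
  cat > "$TMP"

  if grep -q '^-----BEGIN PGP MESSAGE-----' "$TMP"; then
      ( sed '/^$/q' "$TMP"                         # keep the headers
        sed '1,/^$/d' "$TMP" | gpg --batch --no-tty --quiet \
              --passphrase-fd 3 --decrypt 3< "$PASSFILE"
      ) | /usr/sbin/sendmail -i "$USER@$INTERNAL_RELAY"
  else
      /usr/sbin/sendmail -i "$USER@$INTERNAL_RELAY" < "$TMP"
  fi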
But please, user education is a much better solution here. Email's not
secure, sending attachments is bad, HTML mail is bad, Outlook is bad, etc.
*Then* tell them that email can be made secure with only a little bit of
trouble.
John
--
John Madden
UNIX Systems Engineer
Ivy Tech State College
jma...@ivy.tec.in.us
-
Jeffrey
Quoting Daryl Manning <dman...@micheldyens.com>:
> already have a VPN... this is about mail... =}
>
> The idea is that plaintext/HTML mail with attachments would be sent to a
> PGP forwarding server (or something) which would encrypt/sign and then it
> would zoom through to Postfix (or in reverse order, if that would work).
>
> The reverse happens coming back. That way, things are encrypted as they move
> outside of our domain, and the user here inside doesn't have to worry in the
> least about encryption at all. All done automagically.
>
> At least that's what I'd like...
>
> Daryl.
>
>
Hmm... why not route mail to those locations through the VPN, then?
I'm starting to agree with others - the easiest way to implement this
will be server-to-server wrapping - not encrypting per recipient, but per
server.
Setting the whole thing up to use SSL/TLS (RFC 2487,
http://www.rfc-editor.org/rfc/rfc2487.txt) may be the best bet, provided you
can get along well with the admins on the other side and their servers will
handle it. There are still some security concerns with TLS (Sam Varshavchik
from courier-mta.org mentions DNS cache poisoning), but it's better than
plaintext.
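With the Postfix/TLS patch, the server side boils down to a few main.cf
lines along these lines (a sketch from memory - the certificate paths are
made up, and whether you dare enforce TLS depends on who mails you):

  smtpd_use_tls = yes
  smtpd_tls_cert_file = /etc/postfix/server-cert.pem
  smtpd_tls_key_file = /etc/postfix/server-key.pem
  # use STARTTLS on outgoing connections where it is offered
  smtp_use_tls = yes
  # uncomment to refuse unencrypted connections outright
  #smtpd_enforce_tls = yes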
I'm thinking of this as basically going to the rest of the world, not
communicating directly between our offices.
So, plaintext (our user) --> mail server/PGP encryptor --> out encrypted,
and back again:
incoming encrypted --> mail server/PGP encryptor --> plaintext (our user)
So, email would be auto-encrypted if there was a public key available for
the other person on the server, and if someone's email came in encrypted it
would be automatically decrypted.
However, the point about spoofing is a good one. I figure I can get around
that with better authentication at the SMTP server level.
Hmmm...
-----Original Message-----
From: owner-pos...@postfix.org
[mailto:owner-pos...@postfix.org] On Behalf Of Jason Baker
Sent: Friday, October 5, 2001 19:04
To: postfix-users
Subject: Re: Implementing a PGP encryption/decryption gate
Ah, I thought this was between your office and partner offices / other
companies you work with closely, rather than in general.
Might want to go "belt and suspenders" on this - if you enable TLS on your
server and the other side supports it, then you get encrypted communication
whether the recipient has PGP or not. If the other side supports TLS and the
recipient also has a PGP key, then you've got a really nicely double-secure
communication.
Unless your server is already running at 80% CPU, I don't think you'll have a
load problem.
The biggest problem with the incoming stage is, as people mentioned, the
passwords have to be on the server. Just how secure is that box? :)
If you have control of the servers at both ends, you can use the OpenSSL
support in Postfix to use STARTTLS for the sessions in which you send the mail
back and forth to these clients. You can even verify the identity of
the servers at the other end of the pipe with certificates.
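With the TLS patch that is roughly a per-site table (a sketch; the partner
domain and map path are invented):

  # main.cf
  smtp_tls_per_site = hash:/etc/postfix/tls_per_site

  # /etc/postfix/tls_per_site - MUST requires TLS *and* a verified
  # certificate matching the peer; MAY just uses TLS when it is offered.
  partner-bank.example    MUST
  example.net             MAY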
--
War is an ugly thing, but it is not the ugliest of things. The decayed and
degraded state of moral and patriotic feeling which thinks that nothing is
worth war is much worse. A man who has nothing for which he is willing to
fight, nothing he cares about more than his own personal safety, is a
miserable creature who has no chance of being free, unless made so by the
exertions of better men than himself. -- John Stuart Mill
Nick Simicich - n...@scifi.squawk.com
What do you do if there is more than one recipient?
What do you do with attachments, MIME parts, etc.?
> same time automatically decrypt any messages that come in, so that the
> encryption is completely transparent to the user and can be handled at the
Then you would need to store your users' private keys, which is a very
bad thing to do. I suggest you educate your users about using
encryption software; that is the best solution.
olive
--
Olivier Tharan <olivier...@IDEALX.com>
you're trying to solve a non-technical problem with technical means, which
simply won't work.
this is a company policy issue. assuming adequate training (incl.
followup training) on security procedures has been given to staff,
if company policy dictates that all messages shall be encrypted then
staff who fail to abide by company policy can be reprimanded or sent on
another training course, or sacked.
IMO, security in the banking industry is very important, and staff
members who don't or won't understand the importance of security have no
business being in that industry.
sloppy security practices among bank staff are a very good reason for
customers to look for another bank - one that takes security and privacy
issues seriously.
craig
--
craig sanders <c...@taz.net.au>
Fabricati Diem, PVNC.
-- motto of the Ankh-Morpork City Watch
> * Daryl Manning <dman...@micheldyens.com> (20011005 14:28):
> > What I want to do is somehow have every email that goes out automatically
> > encrypted through PGP if there is a recipient public key available, and at the
>
> What do you do if there is more than one recipient?
Encrypt to each one.
> What do you do with attachments, MIME parts, etc.?
Encrypt the whole thing - extracting To:, From: and Subject: in my case to
ensure that the recipient can handle the mail appropriately. Subject
exposure is of course a potential leak, but in my case the
cost/benefit came down in favor of exposing it.
It's easy to implement this in Postfix using the appropriate exit and a
shell or Perl script. I need to add an IDEA license to GPG to get the
level of compatibility I need for production, but it works.
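For the multiple-recipient case the gpg invocation just grows extra
--recipient options - a fragment of the sort of shell such a script ends up
containing (a sketch; it assumes the envelope recipients arrive as the
script's arguments, and body.txt/body.asc stand in for wherever the message
body actually lives):

  RCPTS=""
  for rcpt in "$@"; do
      # only encrypt to recipients we actually hold a public key for
      if gpg --batch --no-tty --list-keys "$rcpt" >/dev/null 2>&1; then
          RCPTS="$RCPTS --recipient $rcpt"
      fi
  done
  if [ -n "$RCPTS" ]; then
      # one pass encrypts the body to every listed key at once
      gpg --batch --no-tty --always-trust --armor --encrypt $RCPTS \
          < body.txt > body.asc
  fi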
>
> > same time automatically decrypt any messages that come in, so that the
> > encryption is completely transparent to the user and can be handled at the
>
> Then you would need to store your users' private keys, which is a very
> bad thing to do. I suggest you educate your users about using
> encryption software; that is the best solution.
You have two choices: use a server-only key, or use individual keys. In
neither case is it any worse than users having their own keys on insecure
desktop boxes, so long as it's understood that (a) it's a WORK key, not a
PERSONAL key, and (b) unless you go to a compartmented OS and encrypted
backups, administrators and backups have the secret keys.
Encrypting it on behalf of the user with a key that indicates as much seems
to be as good as or better than forcing ADKs on users, which is critical in a
business environment where encryption goes to the desktop.
My own hack at this took a whopping 15 minutes to set up.
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
> > client and so do not use encryption at all. This is more dangerous
> > particularly in the line of business we have (banking).
>
> you're trying to solve a non-technical problem with technical means, which
> simply won't work.
What evidence do you have that supports an assertion that it won't work?
I'm curious, because having built a prototype of just such a system
several months ago, none of my testing indicated a failure mode that
wasn't solvable with code.
It's perfectly possible to build a mail system with Postfix that encrypts
outbound traffic message-by-message and bounces mail that keys don't exist
for. It's even theoretically possible to do so in a way that system
administrators can't get the keys for without an auditable event - though I
haven't had time to finish that part of my particular implementation.
> this is a company policy issue. assuming adequate training (incl.
> followup training) on security procedures has been given to staff,
> if company policy dictates that all messages shall be encrypted then
> staff who fail to abide by company policy can be reprimanded or sent on
> another training course, or sacked.
Policy and training *always* fail a large percentage of the time. Systems
can be designed and modeled for much lower failure rates than policies and
staff training.
> IMO, security in the banking industry is very important, and staff
> members who don't or won't understand the importance of security have no
> business being in that industry.
Anyone who has to try to explain key-handling procedures to an
administrative assistant, deal with the company having either an ADK or
the individual's key (how many IT staffs don't have users' passwords?),
and THEN deal with key revocation issues when an employee leaves will start
looking for centralized and automated mechanisms pretty darned quickly.
> sloppy security practices among bank staff are a very good reason for
> customers to look for another bank - one that takes security and privacy
> issues seriously.
What about a bank that goes to the trouble of building a fool-proof
mechanism for encrypting customer mail? Surely this last statement flies
in the face of your first? Being serious enough to automate around human
failure modes seems more serious to me than being serious enough to write
a policy.
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
-
the main problem with it is that the design is based on a
best-case-scenario assumption. it's only secure IFF everything works,
IFF there is absolute zero chance of the server being compromised,
IFF there is absolute zero chance of the keys or passphrases being
compromised, IFF there is absolute zero chance of insider attack (i.e.
misuse/abuse by staff), ....
security systems should *never* be designed with such an assumption.
they should be designed to cope with the worst-case-scenario - i.e.
assume that everything that could go wrong will go wrong and figure
out ways to eliminate or minimise the risk. you will never get it
100% secure, but the point of the game is risk assessment and risk
management, not optimistic wishing that "it can't happen here".
> It's perfectly possible to build a mail system with Postfix that
> encrypts outbound traffic message-by-message and bounces mail that
> keys don't exist for. It's even theoretically possible to do so in
> a way that system administrators can't get the keys for without an
> auditable event - though I haven't had time to finish that part of my
> particular implementation.
that inherently defeats the purpose of what you're attempting by
storing the private keys and/or passphrases on the mail server where an
attack could conceivably compromise them. it also allows forgery where
one staff member could pretend to be another simply by forging the From:
header (and/or sniffing/guessing their password if smtp auth is required
to send a message - it's not hard to get someone's password on a
LAN, automated packet-sniffing tools are readily available, and most
companies don't give a damn about securing their internal network...they
think that a firewall to protect against external attack is enough).
there's little or no point in implementing security technology if you're
going to leave such gaping holes in it.
you'd be much better off just using the TLS patches for postfix and
encrypting the smtp sessions - perhaps even configuring postfix so that
it refused non-encrypted connections. that's the limit of what you can
do in an MTA - encrypt the transfer of the mail, not the content of the
mail. this only defeats packet-sniffing snoops and nothing more. if you
want the contents encrypted (for authentication and/or privacy) then it
*must* be done in the client.
> > this is a company policy issue. assuming adequate training (incl.
> > followup training) on security procedures has been given to staff,
> > if company policy dictates that all messages shall be encrypted then
> > staff who fail to abide by company policy can be reprimanded or sent
> > on another training course, or sacked.
>
> Policy and training *always* fail a large percentage of the time.
> Systems can be designed and modeled for much lower failure rates than
> policies and staff training.
so replace the idiot staff with clueful staff, and train the
ignorant-but-not-completely-clueless staff.
it's a staffing problem, not a technical problem.
> > IMO, security in the banking industry is very important, and staff
> > members who don't or won't understand the importance of security
> > have no business being in that industry.
>
> Anyone who has to try to explain key-handling procedures to an
> administrative assistant, deal with the company having either an
> ADK or the individual's key (how many IT staffs don't have users'
> passwords?), and THEN deal with key revocation issues when an employee
> leaves will start looking for centralized and automated mechanisms
> pretty darned quickly.
what you're saying is "security is too hard, so we won't bother. let's
just pretend instead".
banking security is not a game, and there is no excuse for being lax
with customers' personal & financial information.
> sloppy security practices among bank staff are a very good reason for
> customers to look for another bank - one that takes security and
> privacy issues seriously.
>
> What about a bank that goes to the trouble of building a fool-proof
> mechanism for encrypting customer mail?
if the so-called "fool-proof" mechanism requires storing the keys and
passphrases on the mail server, then that's a good reason to abandon
that bank - any bank which is that clueless about security is way too
high a risk to accept.
> Surely this last statement flies in the face of your first? Being serious
> enough to automate around human failure modes seems more serious to me than
> being serious enough to write a policy.
technology is NOT and never will be a substitute for intelligent human
decision making. human failure modes can only be solved by dealing with
the humans involved - that means either training or replacement of
staff.
what you're suggesting doesn't provide any real security, it just gives
a false sense of security - which is WORSE than none at all, as it
encourages staff to take risks that they wouldn't take if they were
aware of the lack of security.
craig
--
craig sanders <c...@taz.net.au>
Fabricati Diem, PVNC.
-- motto of the Ankh-Morpork City Watch
> > What evidence do you have that supports an assertion that it won't
> > work? I'm curious, because having built a prototype of just such a
> > system several months ago, none of my testing indicated a failure mode
> > that wasn't solvable with code.
>
> the main problem with it is that the design is based on a
> best-case-scenario assumption. it's only secure IFF everything works,
*All* designs are based on best-case scenarios.
> IFF there is absolute zero chance of the server being compromised,
> IFF there is absolute zero chance of the keys or passphrases being
> compromised, IFF there is absolute zero chance of insider attack (i.e.
> misuse/abuse by staff), ....
Versus IF every user follows a procedure, IF every user has a secure
desktop, IF every user doesn't execute locally malicious code, and IF
everyone leaving the company is good enough not to use their keys?
> security systems should *never* be designed with such an assumption.
> they should be designed to cope with the worst-case-scenario - i.e.
> assume that everything that could go wrong will go wrong and figure
> out ways to eliminate or minimise the risk. you will never get it
> 100% secure, but the point of the game is risk assessment and risk
> management, not optimistic wishing that "it can't happen here".
I still submit that machine failure is much easier to model than human
failure and much more predictable. It's certainly easy to build a server
that has such an extremely low chance of compromise that it's not worth
worrying about as a primary risk vector. If that's done right, then
you've handled the passphrase problem, and you're left with the staff
problem. You have the staff problem in any case - IT staff has access to
users' desktops and machine backups. You've simply moved that down a
scale, from protecting every desktop (use AV as an example of where that's a
vector with a high failure mode) to protecting a single server and
the backups for a single machine, and extending trust to a single set of
administrators/operations personnel.
> > It's perfectly possible to build a mail system with Postfix that
> > encrypts outbound traffic message-by-message and bounces mail that
> > keys don't exist for. It's even theoretically possible to do so in
> > a way that system administrators can't get the keys for without an
> > auditable event - though I haven't had time to finish that part of my
> > particular implementation.
>
> that inherently defeats the purpose of what you're attempting by
> storing the private keys and/or passphrases on the mail server where an
> attack could conceivably compromise them. it also allows forgery where
No, it doesn't. An attack could conceivably compromise them *anywhere*.
> one staff member could pretend to be another simply by forging the From:
> header (and/or sniffing/guessing their password if smtp auth is required
> to send a message - it's not hard to get someone's password on a
That depends on how you architect the authentication piece (SSL'd mail
from a Web-based scheme on-server for instance works pretty well.)
> LAN, automated packet-sniffing tools are readily available, and most
> companies don't give a damn about securing their internal network...they
> think that a firewall to protect against external attack is enough).
I've got news for anyone who still believes that firewalls contain network
integrity...
>
> there's little or no point in implementing security technology if you're
> going to leave such gaping holes in it.
>
So, based on the last few years of internal AV events, user password
management failures, etc., you think that moving it out to the desktop is
*better*? I think your assessment is flawed.
> you'd be much better off just using the TLS patches for postfix and
> encrypting the smtp sessions - perhaps even configuring postfix so that
> it refused non-encrypted connections. that's the limit of what you can
> do in an MTA - encrypt the transfer of the mail, not the content of the
> mail. this only defeats packet-sniffing snoops and nothing more. if you
> want the contents encrypted (for authentication and/or privacy) then it
> *must* be done in the client.
TLS requires both ends to speak TLS.
> > > if company policy dictates that all messages shall be encrypted then
> > > staff who fail to abide by company policy can be reprimanded or sent
> > > on another training course, or sacked.
> >
> > Policy and training *always* fail a large percentage of the time.
> > Systems can be designed and modeled for much lower failure rates than
> > policies and staff training.
>
> so replace the idiot staff with clueful staff, and train the
> ignorant-but-not-completely-clueless staff.
1. We live in the real world, where replacing staff because they don't
understand trust management when their job isn't security is generally an
actionable item.
> it's a staffing problem, not a technical problem.
No, it's an infrastructure problem. If we had reasonably secure
infrastructure we wouldn't need to educate the staff.
> what you're saying is "security is too hard, so we won't bother. let's
> just pretend instead".
No, what I'm saying is "Security is difficult enough that we have to use a
pragmatic approach and put it where we can manage the risk in the best
way."
>
> banking security is not a game, and there is no excuse for being lax
> with customers' personal & financial information.
I doubt you've worked with many banks if you believe the level of security
is that much different than any other organization. People are people, no
matter what their profession.
> > sloppy security practices among bank staff are a very good reason for
> > customers to look for another bank - one that takes security and
> > privacy issues seriously.
> >
> > What about a bank that goes to the trouble of building a fool-proof
> > mechanism for encrypting customer mail?
>
> if the so-called "fool-proof" mechanism requires storing the keys and
> passphrases on the mail server, then that's a good reason to abandon
> that bank - any bank which is that clueless about security is way too
> high a risk to accept.
You seem to think that remote access to keys on a server is a given,
yet on the desktop it's not; maybe it's you that's clueless?
MLS systems have been able to protect and compartmentalize information for
what ~25 years now. If you can't trust two people in an organization,
then (a) you've hired the wrong people and (b) they can compromise all the
desktops and get all the keys anyway.
"Fool-proof" of course refers to the encrypting of e-mail, not the key
part anyway- you're expecting a 0% human failure rate- and worse-yet
replacement *after* a problem (which means that you've already done the
evil bad thing you profess before you start to change out staff- meaning
that you could have lots of failures without even having a key
compromise.)
> > Surely this last statement flies in the face of your first? Being serious
> > enough to automate around human failure modes seems more serious to me than
> > being serious enough to write a policy.
>
> technology is NOT and never will be a substitute for intelligent human
> decision making. human failure modes can only be solved by dealing with
> the humans involved - that means either training or replacement of
> staff.
Training is never better than about 85% effective in the real world, under
the best of all possible situations. You've yet to address 15% of the
cases.
Replacement requires failure, sometimes multiple times in a given legal
climate/contract situation.
While you're budgeting for all those brilliant people, your competitors
are going to be taking your customers with better rates.
> what you're suggesting doesn't provide any real security, it just gives
> a false sense of security - which is WORSE than none at all, as it
> encourages staff to take risks that they wouldn't take if they were
> aware of the lack of security.
Your proposal offers *less* effective security by putting key management
in the hands of novices, placing keys on desktops of users who have a
proven proclivity for executing malicious code, completely ignoring the
lack of real-time key revocation systems, and it *still* contains the same
IT-person-access-to-machine failure mode. The only potentially larger issue
is that complete compromise of all keys is easier[1] in my scenario.
That's taken care of by (a) splitting across multiple servers, or (b) using an
OS with MLS to restrict access to the keys to a verified key
management daemon.
You should game it out more carefully.
Paul
[1] Not really in most offices, as a simple script added to their user
profiles will give the same access, same type of compromise, different
vector. But for argument's sake, I'll take that hit because the math
still works in my favor.
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
-
no. you are wrong. the design of security systems is inherently based
around a worst-case analysis - "how could an attacker get in? what could
they do if they did? how can i stop them? how can i minimise the damage
if they do get past all my defences?"
if the private key on one desktop is compromised because the user used a
crappy passphrase or their keystrokes were being logged or whatever then
you've got ONE compromised key.
this minimises the damage - if an attacker gets in and steals one key +
passphrase, they have to do just as much work for each and every other
key.
OTOH, if a server containing all of the private keys with all of
the passphrases in plain text is compromised, then *all* keys are
compromised at once (instead of the attacker having to work hard for
everything they gain).
this maximises the damage to total, or catastrophic, failure. if an
attacker gets in, they get all keys at once with no further effort
required.
> > IFF there is absolute zero chance of the server being compromised,
> > IFF there is absolute zero chance of the keys or passphrases being
> > compromised, IFF there is absolute zero chance of insider attack (i.e.
> > misuse/abuse by staff), ....
>
> Versus IF every user follows a procedure, IF every user has a secure
> desktop, IF every user doesn't execute locally malicious code,
these are user workstation issues, you don't deal with them by
"securing" the server and pretending they don't exist any more. you fix
them at the point of origin, i.e. on the desktop machines and in the
desktop users.
> and IF everyone leaving the company is good enough not to use their
> keys?
you revoke the key when they leave.
> > security systems should *never* be designed with such an assumption.
> > they should be designed to cope with the worst-case-scenario - i.e.
> > assume that everything that could go wrong will go wrong and figure
> > out ways to eliminate or minimise the risk. you will never get it
> > 100% secure, but the point of the game is risk assessment and risk
> > management, not optimistic wishing that "it can't happen here".
>
> I still submit that machine failure is much easier to model than human
> failure and much more predictable.
that may be true, but it's also irrelevant. if the problem is human
failure then dealing with the more predictable machine failure doesn't
do any good - you're fixing the wrong problem.
in other words, if the problem is poor driving skills, fixing the
automatic transmission isn't going to make for a safer driver.
> It's certainly easy to build a server that has such an extremely low
> chance of compromise that it's not worth worrying about as a primary
> risk vector. If that's done right, then you've handled the passphrase
> problem, and you're left with the staff problem.
that, to be blunt, is just so much bullshit. you *haven't* "handled the
passphrase problem", you've completely ignored it. closing your eyes and
hoping the problem will go away won't make it do so.
> You have the staff problem in any case- IT staff has access to users'
> desktops and machine backups. You've simply moved that down a scale
> to protecting every desktop (use AV as an example of where that's a
no, you haven't "moved that down a scale" at all, because you simply
don't store the passphrase on the desktop. it is a phrase that the user
remembers and types in when it's needed.
this is the whole point of having a passphrase, this is the basis of
the security model used by PGP and similar programs such as GnuPG - the
private key is protected by a passphrase known only by the authorised
user(s).
given enough time and CPU power, it may be possible to perform a
brute-force attack on the pass-phrase, which is why it is so important
to pick a good (i.e. long and complicated yet memorable) pass-phrase and
also to be very careful about how and where it is used.
> > one staff member could pretend to be another simply by forging the
> > From: header (and/or sniffing/guessing their password if smtp auth
> > is required to send a message - it's not hard to get someone's
> > password on a
>
> That depends on how you architect the authentication piece (SSL'd mail
> from a Web-based scheme on-server for instance works pretty well.)
it doesn't matter whether the MUA is a web application or a binary
running on windows or any other system. if you're careless with the
pass-phrase and store it on the server in plain-text then it is at risk
of being stolen.
if ssl encryption is used on internal network traffic, then it's very
difficult to steal login passwords...but most networks use unencrypted
POP or IMAP or similar protocols, and many even allow unencrypted telnet
because their alleged network admin believes that they don't have to
worry about security on anything happening inside the firewall (which
is cretinous because the vast majority of packet sniffing compromises
are performed on internal machines - whether by dodgy staff or after a
script-kiddie has compromised a box).
> > there's little or no point in implementing security technology if
> > you're going to leave such gaping holes in it.
>
> So, based on the last few years of internal AV events, user password
> management failures, etc. you think that moving it out to the desktop
> is *better*? I think your assessment is flawed.
i *know* your understanding of security issues is deeply flawed.
> > you'd be much better off just using the TLS patches for postfix and
> > encrypting the smtp sessions - perhaps even configuring postfix so
> > that it refused non-encrypted connections. that's the limit of what
> > you can do in an MTA - encrypt the transfer of the mail, not the
> > content of the mail. this only defeats packet-sniffing snoops and
> > nothing more. if you want the contents encrypted (for authentication
> > and/or privacy) then it *must* be done in the client.
>
> TLS requires both ends to speak TLS.
yes, i said that - "perhaps even configuring postfix so that it refused
non-encrypted connections".
> > so replace the idiot staff with clueful staff, and train the
> > ignorant-but-not-completely-clueless staff.
>
> 1. We live in the real world, where replacing staff because they don't
> understand trust management when their job isn't security is generally
> an actionable item.
depending on what their actual job is, security IS part of their job.
someone working with sensitive & confidential material has an ethical
duty and a legal obligation to keep that material secret and secure.
that means not leaving it on their desk for the cleaner to read when
they go home at night, and it also means not sending it unencrypted
over the internet.
the technology to encrypt it is available, and
with training can be used by a reasonably intelligent person - it is
negligent NOT to use it where appropriate.
> > it's a staffing problem, not a technical problem.
>
> No, it's an infrastructure problem. If we had reasonably secure
> infrastructure we wouldn't need to educate the staff.
that, to be blunt again, is even more bullshit. security is NOT a
product. security is a process, a process that involves *everyone*. a
security system is no stronger than the weakest link in the chain...and
in this case, the weakest link is ignorant, poorly trained staff who not
only know very little about security, they don't even comprehend the
importance of security. one cretin (or malicious insider) in the wrong
position can utterly compromise a system beyond repair.
if your security procedures are broken, then no amount of technology is
going to give you a secure system.
> > what you're saying is "security is too hard, so we won't bother.
> > let's just pretend instead".
>
> No, what I'm saying is "Security is difficult enough that we have to
> use a pragmatic approach and put it where we can manage the risk in
> the best way."
you're operating under the severe delusion that your model does
*anything* at all to manage the risk. it doesn't. it's broken. what's
more, it's broken by design.
you're still playing the "it's too hard" tune. sure, it's hard - but
pretending won't make it any easier.
> > > What about a bank that goes to the trouble of building a
> > > fool-proof mechanism for encrypting customer mail?
> >
> > if the so-called "fool-proof" mechanism requires storing the keys
> > and passphrases on the mail server, then that's a good reason to
> > abandon that bank - any bank which is that clueless about security
> > is way too high a risk to accept.
>
> You seem to think that remote access to keys on a server is a given,
> yet on the desktop it's not, maybe it's you that's clueless?
again, you completely miss the point.
the private key is encrypted using the passphrase, and can not be
decrypted without knowing the passphrase. if it can't be decrypted, it
can't be used.
in short, the private key stored on the desktop is worthless without the
passphrase.
the passphrase is not stored on any computer, it is 'stored' in the
user's head and typed in when it's needed - that's why it's called a
passphrase.
storing it alongside the key on the server completely defeats PGP/GPG's
security model of key + passphrase.
this doesn't mean that you can be careless with private keys - you
certainly should take adequate precautions to keep them secret and
secure...but storing a key alongside its passphrase is absolutely insane.
> "Fool-proof" of course refers to the encrypting of e-mail, not the key
> part anyway- you're expecting a 0% human failure rate- and worse-yet
no, i'm not. i'm saying that you can't solve human failure by fixing the
server. you solve human failure by fixing or replacing the human.
this is not a technological problem. it is a training problem.
> > > Surely this last statement flys in the face of your first?
> > > Serious enough to automate around human failure modes seems more
> > > serious than serious enough to write a policy to me.
> >
> > technology is NOT and never will be a substitute for intelligent
> > human decision making. human failure modes can only be solved by
> > dealing with the humans involved - that means either training or
> > replacement of staff.
>
> Training is never better than about 85% effective in the real-world
> under the best of all possible situations. You've yet to address 15%
> of the cases.
the way to do that is to give those 15% something else to do and not
let them work on anything that is security-sensitive. if they can't
understand the need for security or learn good security practices then
they should not be working in an area where security is important.
period.
> > what you're suggesting doesn't provide any real security, it just
> > gives a false sense of security - which is WORSE than none at all,
> > as it encourages staff to take risks that they wouldn't take if they
> > were aware of the lack of security.
>
> Your proposal offers *less* effective security by putting key
> management in the hands of novices,
no, you are wrong. it requires trained staff to type in their passphrase
in order to send encrypted mail.
your model has *zero* effective security because it commits the
cretinously negligent crime of storing the passphrase alongside the
private key. this is sheer idiocy.
please, do some research and get a clue about security issues before
responding. i'll ignore any response which is just more of the same.
craig
--
craig sanders <c...@taz.net.au>
Fabricati Diem, PVNC.
-- motto of the Ankh-Morpork City Watch
We have been talking about using individual encryption and decryption
keys. In some areas, a message that is signed electronically is considered
to have the same legal effect as if it were signed, certified mail. So a
message comes out of your mail system from a bank officer, signed with the
bank officer's key, obligating the bank to do something that would be
within this officer's purview, except that the officer in question claims
that he never wrote it. But the bank is obligated to do something that
costs them.
The problem I see is that you are going to be doing SMTP's typical security
through assertion, with no proof of ID inside the bank other than a "from"
header. For signature verification, you are going to replace a clerk's
verification of an electronic signature coming from outside of the bank
with a line in the e-mail from your automated verifier that asserts that
the signature was verified somewhere else.
Of course, either in the queue from the officer to outside, or in the queue
from outside to the functionary, you have no chain of custody; all you are
doing is assertion. If you don't sign the outbound messages and only
encrypt them to the public key of the recipient, you are simply adding
transport security (if the recipient knows what they are doing). On the
inbound side, you should include the original encrypted and possibly signed
message somewhere, so that the clerk can verify the signature (if it is
important).
It is true that, in corporate environments, private keys are held on key
recovery servers so that the loss of a machine does not mean that encrypted
mail is lost, but in many cases, those keys are held split across a couple
of servers and they require multiple people acting together to recover a
private key. This is so that one person, acting alone, does not have the
keys needed to begin forging signed mail. And so forth.
You talk about human failure vs. machine failure, and malicious acts. The
issue I see is that you are creating a system that potentially puts a
cryptographic rubber stamp on malicious acts.
--
War is an ugly thing, but it is not the ugliest of things. The decayed and
degraded state of moral and patriotic feeling which thinks that nothing is
worth war is much worse. A man who has nothing for which he is willing to
fight, nothing he cares about more than his own personal safety, is a
miserable creature who has no chance of being free, unless made so by the
exertions of better men than himself. -- John Stuart Mill
Nick Simicich - n...@scifi.squawk.com
-
> > *All* designs are based on best-case scenerios.
>
> no. you are wrong. the design of security systems is inherently based
> around a worst-case analysis - "how could an attacker get in? what could
> they do if they did? how can i stop them? how can i minimise the damage
> if they do get past all my defences?"
No, worst-case scenarios are *always* doomsday scenarios. You *always*
lose a worst-case scenario (that's the worst that can happen - duh.)
Security systems are designed to operate under normal conditions and to
account for reasonable failure modes. That's true of firewalls, IPSec
devices, AV products, IDS systems, mail systems, TCSEC/CC evaluated
products, SCIF design, and a *bunch* of other things.
I don't know how many security systems you've designed, but I've designed
at least hundreds over the last 18 years, and I've still never had a
failure in real operations.
Hey, don't take my word for it though- try talking with Bill Murray or Bob
Abbot, both of whom have more individual INFOSEC experience than anyone,
and who have observed the failure modes in question at thousands of sites
each. I think you'll find that my statistics are pretty close to their
experiences. Feel free to offer counter-statistics though.
> if the private key on one desktop is compromised because the user used a
> crappy passphrase or their keystrokes were being logged or whatever then
> you've got ONE compromised key.
But if it's an admin, you've got all the keys compromised, just as in my
scenario. It's a lot easier to get one trustworthy admin than it is 300
trustworthy employees *and* one trustworthy admin.
Also, you have the users in _my_ scenario sniffing for passwords and
forging mail, but you don't have them breaking into co-workers' machines
in yours (and that's ignoring the fact that I can take away both the
sniffing and the password problems.) Let's not forget the social
engineering attacks either, which are likely to succeed in about 13% of
cases under the best circumstances in average companies[2]. The tests
I've seen show about a two-week window between training and returning to 87%
compliance with social engineering attacks _from_an_unknown_attacker_.
> this minimises the damage - if an attacker gets in and steals one key +
> passphrase, they have to do just as much work for each and every other
> key.
Once you have the key, the passphrase is unnecessary most of the time.
FWIW, "good" passphrases happen about 78% of the time (not accounting for
them being written down.) Your failure modes are climbing up.
> OTOH, if a server containing all of the private keys with all of
> the passphrases in plain text is compromised, then *all* keys are
> compromised. they have to work hard for every thing they gain.
Once again, MLS solves this problem reasonably effectively. Even
setting that aside, it's definitely statistically better to have one
hardened server than 300 hardened desktops from a defensive point of view.
Maybe you're planning on taking all the money we're saving on fired staff
and putting them all on Trusted Solaris though? ;)
> this maximises the damage to total, or catastrophic, failure. if an
> attacker gets in, they get all keys at once with no further effort
> required.
Yes, but the value here is in encryption for transit, not for veracity,
you're dealing with two known sets of people (employees and customers),
not two unknown sets of people, and therefore the encryption boundary is
significantly more manageable at a single point than at multiple points.
So the chance of failure of my one point is statistically insignificant
compared to the chance of a failure at your hundreds of points.
I can provide reasonable proof of inability to break my model without
significant duress; I've yet to see comprehensive proofs for MS-based
desktops where n>5 (and never in a real company.) In fact, if you look at
the non-IIS infection rates for NIMDA, I think you'll see that desktop
security isn't. You're expecting every bank to be better than 98% of
sites out there, which of course simply isn't possible.
[I'm talking formal proof here BTW.]
> > Versus IF every user follows a procedure, IF every user has a secure
> > desktop, IF every user doesn't execute locally malicious code,
>
> these are user workstation issues, you don't deal with them by
> "securing" the server and pretending they don't exist any more. you fix
> them at the point of origin, i.e. on the desktop machines and in the
> desktop users.
If the server's authentication mechanisms are well-done[2], they don't
exist anymore as a vector of attack on that particular mechanism.
Passphrases are reusable authentication tokens, and that's never been a
best-case scenario. I can add one-time, two-factor, hardware-based
authentication trivially. Changes to your scheme don't scale and provide
greater opportunity for problems.
> > and IF everyone leaving the company is good enough not to use their
> > keys?
>
> you revoke the key when they leave.
Which doesn't propagate to your customers'/business partners' personal
keyrings (whoops!), or are you now asserting that you're to educate all
your customers and partners on cryptographic operations and lose
all the ones who don't measure up?
> > I still submit that machine failure is much easier to model than human
> > failure and much more predictable.
>
> that may be true, but it's also irrelevant. if the problem is human
It's not irrelevant. You haven't removed machine failure, and you've
added significantly more human failure in your solution.
> failure then dealing with the more predictable machine failure doesn't
> do any good - you're fixing the wrong problem.
No, you're focusing on the wrong part of the problem.
> in other words, if the problem is poor driving skills, fixing the
> automatic transmission isn't going to make for a safer driver.
Actually, that's not true, fixing the transmission problem will allow
the class of driver who would be unable to stop on a hill without rolling
backwards into the car behind them to successfully drive. Perfect is the
enemy of good enough. If their attention isn't on shifting gears, it's
more likely to be on the road and traffic around them. So it can make
safer drivers. The true danger is in turning people who shouldn't be
driving into drivers, not in failing to make drivers safer. As a matter
of fact, in Shinar et al. (1998)[3], subjects showed decreased detection of roadside
signs during manual shifting in comparison with automatic transmission
cars. So yes, you *can* make safer drivers with automatic transmissions,
and apparently provably so.
> > It's certainly easy to build a server that has such an extremely low
> > chance of compromise that it's not worth worrying about as a primary
> > risk vector. If that's done right, then you've handled the passprase
> > problem, and you're left with the staff problem.
>
> that, to be blunt, is just so much bullshit. you *haven't* "handled the
> passphrase problem", you've completely ignored it. closing your eyes and
> hoping the problem will go away won't make it do so.
Your assertion was that putting all the phrases on the server made them
vulnerable, and your characterization was that it was a major problem.
Real-world experience shows that to be false. As a matter of fact,
real-world desktops with unauthorized users are much more likely to happen
than compromise of a well-managed server.
> > You have the staff problem in any case- IT staff has access to users'
> > desktops and machine backups. You've simply moved that down a scale
> > to protecting every desktop (use AV as an example of where that's a
>
> no, you haven't "moved that down a scale" at all, because you simply
> don't store the passphrase on the desktop. it is a phrase that the user
> remembers and types in when it's needed.
Get access to the desktop and getting the passphrase is trivial, and may
not even be necessary. Become the IT guy and you've probably got all the
passwords and half of the passphrases anyway. Make mail break for a user
(one filter rule) then go to their desk to "fix" their problem and I'd bet
that you get the passphrase >80% of the time if you're the normal IT guy.
Heck, if it was only 33%, you'd have 100 in a 300 person bank. Think
that's not enough to do damage? Now start thinking about people whose
secretaries read their mail for them... Yep- that's normally the execs...
> this is the whole point of having a passphrase, this is the basis of
> the security model used by PGP and similar programs such as GnuPG - the
> private key is protected by a passphrase known only by the authorised
> user(s).
That protection has proven weak at best in the past [1] if you have access
to the secret key file. However, you can always do the Webmail passphrase
thing with my solution, where yours allows no relocation of credential
checking and only static repeated credentials.
> given enough time and CPU power, it may be possible to perform a
> brute-force attack on the pass-phrase, which is why it is so important
> to pick a good (i.e. long and complicated yet memorable) pass-phrase and
> also to be very careful about how and where it is used.
There have been implementation flaws and successful non-brute-force
attacks on secret keys before[1]. No doubt there will be again. Brute
force will still work in ~10% of the cases anyway- that's 30 keys in a
company of 300 (again, best-case.)
> > > one staff member could pretend to be another simply by forging the
> > > From: header (and/or sniffing/guessing their password if smtp auth
> > > is required to send a message - it's not hard to get someone's
> > > password on a
> >
> > That depends on how you architect the authentication piece (SSL'd mail
> > from a Web-based scheme on-server for instance works pretty well.)
>
> it doesn't matter whether the MUA is a web application or a binary
> running on windows or any other system. if you're careless with the
> pass-phrase and store it on the server in plain-text then it is at risk
> of being stolen.
Only if the server has a vulnerability which can be exploited remotely.
Compare that to desktop vulnerabilities and you'll find that real-world
statistics favor the server. Also, there's nothing stopping the user from
encrypting prior to the message hitting the server. In a past company, I
had one server handling mail for well over 30,000 users. That server,
even pre-postfix, and certainly since the second alpha version of postfix,
had zero compromises. The desktop number was greater than zero. You're
ignoring threat rate completely. That's a significant flaw if you expect
to field such systems.
> if ssl encryption is used on internal network traffic, then it's very
> difficult to steal login passwords...but most networks use unencrypted
> POP or IMAP or similar protocols, and many even allow unencrypted telnet
> because their alleged network admin believes that they don't have to
> worry about security on anything happening inside the firewall (which
> is cretinous because the vast majority of packet sniffing compromises
> are performed on internal machines - whether by dodgy staff or after a
> script-kiddie has compromised a box)
But at least that mechanism *requires* that an attacker *break the law*
for a failure mode to happen, versus requiring that a legitimate user
simply forget procedure, not understand procedure, be tricked or make a
mistake. Once again, the math works for my solution not yours.
> > > you'd be much better off just using the TLS patches for postfix and
> > > encrypting the smtp sessions - perhaps even configuring postfix so
> > > that it refused non-encrypted connections. that's the limit of what
> > > you can do in an MTA - encrypt the transfer of the mail, not the
> > > content of the mail. this only defeats packet-sniffing snoops and
> > > nothing more. if you want the contents encrypted (for authentication
> > > and/or privacy) then it *must* be done in the client.
> >
> > TLS requires both ends to speak TLS.
>
> yes, i said that - "perhaps even configuring postfix so that it refused
> non-encrypted connections".
That'll make all your customers/business partners happy when they can't
get your mail or send you mail. Been in business long?
> > > so replace the idiot staff with clueful staff, and train the
> > > ignorant-but-not-completely-clueless staff.
> >
> > 1. We live in the real world, where replacing staff because they don't
> > understand trust management when their job isn't security is generally
> > an actionable item.
>
> depending on what their actual job is, security IS part of their job.
Funnily enough, though, companies aren't tripping over each other to send
every single worker to training every two weeks. You'll still get >=3%
failure rates if you do that, but that's the best it ever gets. You'd
better consider working for a government though; most companies can't
afford such solutions.
> someone working with sensitive & confidential material has an ethical
> duty and a legal obligation to keep that material secret and secure.
Which presupposes an understanding of attack methodologies, and
appropriate defenses. Once again, not likely to happen organization-wide
and still prone to a ~15% failure rate when it does happen.
> that means not leaving it on their desk for the cleaner to read when
> they go home at night, and it also means not sending it unencrypted
> over the internet.
And yet when solving the "unencrypted over the Internet" problem you try
to focus on the local problem, which has as many other equal attacks
against either infrastructure.
> the technology to encrypt it is available, and
> with training can be used by a reasonably intelligent person - it is
> negligent NOT to use it where appropriate.
Encryption boundaries are strongest when well-managed. Yet you propose
always ending the boundary on a machine that normally *can't* strongly
enforce such a boundary. I doubt you've ever worked anywhere with a
significant number of formally trained crypto people on-staff, or you'd
see the folly in such an assertion. Extending trust to an untrustworthy
realm weakens it.
I can do MAC layers to ensure that the encryption boundary is
unidirectional on an MLS system, how do you propose to do the equivalent on
the desktop?
> > > it's a staffing problem, not a technical problem.
> >
> > No, it's an infrastructure problem. If we had reasonably secure
> > infrastructure we wouldn't need to educate the staff.
>
> that, to be blunt again, is even more bullshit. security is NOT a
> product. security is a process, a process that involves *everyone*. a
> security system is no stronger than the weakest link in the chain...and
> in this case, the weakest link is ignorant, poorly trained staff who not
> only know very little about security, they don't even comprehend the
> importance of security. one cretin (or malicious insider) in the wrong
> position can utterly compromise a system beyond repair.
I didn't say it wasn't a process, I said that secure infrastructure
removes the need to educate around insecure infrastructure.
Let's go back to your car example- if I remove the manual transmission and
replace it with an automatic, I still have to teach the person how to
drive, but I certainly don't have to teach them to upshift or downshift.
Downshifting is still useful, but not for 99.99% of drivers.
I can (and have) put a user on a secure and contained network with
complete strong multi-layer access control, output control and strong data
labeling and not have to worry about educating them on quite a few things
because there's no way for them to operate in anything other than a
specific mode. That doesn't make them inculpable or incapable of doing
bad things, it does make insecurity something they have to try very hard
to achieve. That's much more likely to be successful than trying the
opposite way. If the technology is default deny in nature, then the risk
is significantly less than if the technology is default allow in nature
and the policy is all that's "enforcing" default deny.
If you don't know how to engineer systems so that one person can't
compromise them without a significant auditable failure that requires
physical access, that's not my problem. In YOUR scenario you're relying
on that not happening on hundreds of stations in a complex way. I can
narrow that universe down to one system in a complex way effectively and
auditably.
> if your security procedures are broken, then no amount of technology is
> going to give you a secure system.
And still, if your procedures are there, there's no amount of training
that will make everyone follow them.
> > No, what I'm saying is "Security is difficult enough that we have to
> > use a pragmatic approach and put it where we can manage the risk in
> > the best way."
>
> you're operating under the severe delusion that your model does
> *anything* at all to manage the risk. it doesn't. it's broken. what's
> more, it's broken by design.
You're wrong, and you have yet to prove otherwise. You've yet to find a
flaw in a system that poses equivalent authentication risk, better
revocation, and more likely less administrative risk. Definitely it moves
the social engineering risk down considerably, physical risk
exponentially, and malcode risk down to near zero.
> you're still playing the "it's too hard" tune. sure, it's hard - but
> pretending won't make it any easier.
Pretending a policy and firing people will fix it is more deluded. That's
been tried several times over the last 25 years, and it has yet to have
the results its proponents desire.
> > > > What about a bank that goes to the trouble of building a
> > > > fool-proof mechanism for encrypting customer mail?
> > >
> > > if the so-called "fool-proof" mechanism requires storing the keys
> > > and passphrases on the mail server, then that's a good reason to
> > > abandon that bank - any bank which is that clueless about security
> > > is way too high a risk to accept.
> >
> > You seem to think that remote access to keys on a server is a given,
> > yet on the desktop it's not, maybe it's you that's clueless?
>
> again, you completely miss the point.
>
> the private key is encrypted using the passphrase, and can not be
> decrypted without knowing the passphrase. if it can't be decrypted, it
> can't be used.
1. Wrong, it *can* be decrypted without the passphrase, and more
importantly:
2. It's *easy* to get the passphrase either physically (camera, shoulder
surfing, social engineering...) or electronically (sniffing, trojaning.)
Your multiple points of failure have a statistically higher failure rate
than my single one, and more importantly significantly so. I've designed
binary and trinary systems before that use the same principles to move
even the single point of failure out, but in this case the threat is so
statistically small that it's not worth the effort or increase in
complexity.
> in short, the private key stored on the desktop is worthless without the
> passphrase.
You're relying on something that's proven to be unreliable in the past.
See [1] for instance. You're also making a 100% assertion on something
that hasn't proven to be 100% true.
> > "Fool-proof" of course refers to the encrypting of e-mail, not the key
> > part anyway- you're expecting a 0% human failure rate- and worse-yet
>
> no, i'm not. i'm saying that you can't solve human failure by fixing the
> server. you solve human failure by fixing or replacing the human.
I'm saying that if the "requirement" is "unencrypted e-mails don't come
from our network" then it's easier to solve with technology than with
humans. A bank with internal security issues has a *lot* more to worry
about than privacy, and solving the privacy piece doesn't necessitate
firing half the staff.
> the way to do that is you give those 15% something else to do and don't
> let them work on anything that is security sensitive. if they can't
> understand the need for security or learn good security practices then
> they should not be working in an area where security is important.
> period.
Remember, 15% is the *best* case- now maybe in your dreamworld companies
fire 3 in 20 people to manage security, but in the real world that doesn't
happen, especially if those 3 happen to be the revenue producing ones out
of the equation, or more likely the customer care ones.
> > > what you're suggesting doesn't provide any real security, it just
> > > gives a false sense of security - which is WORSE than none at all,
> > > as it encourages staff to take risks that they wouldn't take if they
> > > were aware of the lack of security.
> >
> > Your proposal offers *less* effective security by putting key
> > management in the hands of novices,
>
> no, you are wrong. it requires trained staff to type in their passphrase
> in order to send encrypted mail.
And to protect those hundreds of passphrases and the hundreds of platforms
they type that passphrase in on. Each is significantly more likely to be a
point of failure than a single well-administered system and trusted
administrator.
> your model has *zero* effective security because it commits the
> cretinously negligent crime of storing the passphrase alongside the
> private key. this is sheer idiocy.
That's only true *if* the risk to compromise of that server is high, and
then only if you choose to not perform additional authentication on the
sender. You've yet to provide any real-world experience that the
risk is higher than it is at the desktop. That's because you *can't*
provide such experience because it simply isn't there.
> please, do some research and get a clue about security issues before
> responding. i'll ignore any response which is just more of the same.
I doubt you've done 1/100th of the research that I have. Certainly you
haven't even tried to look at real companies and where real security
failures really happen and how they happen. Ignoring me doesn't make you
any more right, but this is off-topic for postfix-users, so feel free to
reply directly.
Paul
[1] http://www.icz.cz/en/onas/krypto.html
[2] Secur-ID, Cryptocard...
[3] Shinar, D., Meir, M., & Ben-Shoham, I. (1998). How automatic is manual
gear shifting? Human Factors, 40(4), 647-654.
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
-
> The real problem is expectations and effects. You are taking a system that
> has two purposes, transport security and authentication, and you are using
> it for transport security only. Digital signatures have legal standing in
Absolutely. In fact, that was exactly the piece that I expected people
like Bob Moskowitz to freak out about (but he really wasn't all that
concerned as long as the key was marked appropriately.)
Though if you either aren't too risk-averse, or if you couple additional
authentication or verification on to the system you can provide
authentication at a different level- that is more like corporate
authentication than personal authentication, I'd argue that such a
scheme is more appropriate for business use anyway.
Managing that "inside the company" risk of forgery, theft, etc. is rightly
what the business should be doing anyway. There are many, many ways to do
so, from "Here's the envelope logs for this week, please check them" to
"mail client is a Web page, type in your ID/token response here, to
"outbound mail is escrowed until validated by the sender."
Also, please note that automatic encryption does *not* preclude manual
encryption earlier in the process.
I'm always complaining to "security professionals" who use the same key
(with no expiration date most times) for all their work and home mails
because I don't think it's in the business's best interests, and
revocation becomes a real and major issue. If you can't get professionals
to do things right, I have little hope of getting generic employees to.
> many places. If you are only doing transport security, but you are using a
> system which is used for digital signatures, you have to be careful not to
> create the effect of a signed message coming from an individual. Whether
> you use a signing key that is identified as "The East Bank of the
> Mississippi's automated process signing key - creates no legal obligation"
> or the individual's asserted (to your system, internally) private key is
> something that you may want to discuss with the corporate lawyers.
I've had those discussions in the past, and indeed "Automated signing key"
identification is what they recommended. I've always also tried to
auto-footer the mail with a URI that points to an explanation.
> We have been talking about using individual encryption and decryption
> keys. In some areas, a message that is signed electronically is considered
> to have the same legal effect as if it were signed, certified mail. So a
> message comes out of your mail system from a bank officer, signed with the
> bank officer's key, obligating the bank to do something that would be
> within this officer's purview, except that the officer in question claims
> that he never wrote it. But the bank is obligated to do something that
> costs them.
This is also true of certified mail, so the risk is roughly the same.
That is- you have an obligation to provide an audit mechanism which shows
the action to a reasonable degree of certainty, and the business must make
a cost/benefit analysis on the viability of that degree of certainty.
This is classic risk assessment in the financial sector, and moving it to
systems doesn't change much other than the math on potential
attack vectors.
> The problem I see is that you are going to be doing smtp's typical security
> through assertion, with no proof of ID inside the bank other than a "from"
> header. For signature verification, you are going to replace a clerk's
> verification of an electronic signature coming from outside of the bank
> with a line in the e-mail from your automated verifier that asserts that
> the signature was verified somewhere else.
You can do verification in different places, such as escrowing outbound
mail and requiring two-factor token-based Web authentication of the
message on a second system. That's certainly a higher standard than you
would generally get with paper mail, which has many opportunities to be
modified or replaced in the physical path. It's also a higher standard
than you'd get on a desktop system in the mail room.
I'd have the automated verifier electronically sign the message depending
on the level of verification (which for an organization may start at none,
and climb up to pre-shared IPSec keys with token validation.) The X509
level of certificate stuff seems to fail in this regard real-world, but
I'd make the signature dependent on verification, and have a class of
signature for each level. Everything I've deployed to date has but a
single level though.
For mailing list use, I find SMTP to be fine, as I can equate to an IP
address and equate that to a Media Access Control (the other kind of MAC
in this discussion) address. Typically, I've also allowed the user to
send pre-encrypted to a daemon's key; the daemon then decrypts, re-encrypts
and signs as well.
> Of course, either in the queue from the officer to outside, or in the queue
> from outside to the functionary, you have no chain of custody, all you are
> doing is assertion. If you don't sign the outbound messages, you only
This is a flaw in digital signature laws that isn't addressed well. Chain
of custody issues are best addressed by encrypting the message, and
placing it on an external Web server and having the recipient come and get
it. That, of course requires pre-shared authentication credentials or
one-time retrieval capability.
> encrypt them in the public key of the recipient, you are simply adding
> transport security (if the recipient knows what they are doing). On the
> inbound side, you should include the original encrypted and possibly signed
> message somewhere, so that the clerk can verify the signature (if it is
> important).
That still invites non-repudiation problems on behalf of the recipient.
However, if the message is that critical, or must pass such a test, then
e-mail probably isn't the appropriate transmission medium. I think though
that overall, including the original is a good idea for some applications,
and perhaps the best use of an ADK yet. I'll add that capability to my
future implementations.
> It is true that, in corporate environments, private keys are held on key
> recovery servers so that the loss of a machine does not mean that encrypted
> mail is lost, but in many cases, those keys are held split across a couple
> of servers and they require multiple people acting together to recover a
> private key. This is so that one person, acting alone, does not have the
> keys needed to begin forging signed mail. And so forth.
Right, adding such systems improves veracity while driving up cost and
complexity. Typically it's only done in extremely large environments.
ADKs are starting to take favor, but of course don't give the company any
reasonable assurance about outbound mails.
> You talk about human failure vs. machine failure, and malicious acts. The
> issue I see is that you are creating a system that potentially puts a
> cryptographic rubber stamp on malicious acts.
This is manageable to approximately the same level as in physical mail,
though the totality of the failure mode is normally significantly higher,
which means that more effort must be put into ensuring that you don't
reach a failure mode (diligence in design, more formal validation and
stronger auditing than is normally used in such systems are most of the
additional cost.)
Since there is generally no stamp now, adding a stamp isn't a significant
downside (especially at customer trust) and the same failure modes and
rubber stamp happen at the normally untrustworthy desktop as well. The
reason I normally advocate MLS systems is that you can then provide
administrative audit trails, removing the most likely vector of
compromise. Webmail via SSL with token authentication (same server) tends
to be a better standard of authentication than most paper contracts
receive (and gives a degree of non-repudiation.) Combined with MLS, where
the mail-authentication and encryption daemon's MAC compartment (or role,
if you're a fan of roles over MAC) doesn't exist on the box for any other
entity and the ability to grant such a role/MAC is revoked, this means that
an auditor must simply verify proper MLS installation/setup and then check
physical uptime logs.
That's about as narrow as the window gets, and I still say that it's
narrower than the window that relies solely on the general user population
and their desktops, especially for revocation issues.
TLS is _obviously_ a better transport-layer solution; it just isn't widely
enough deployed to be practical and really wants a different machine than
SMTP to keep the encryption boundary hard.
Thanks,
Paul
If you are generating a huge amount of custom code, why not just use a
product which does it all.
I know this is off of the topic of this list, which is postfix.
The other point is, if you are going to run this sort of solution, why not
use S/mime instead of PGP? It seems a lot more "standard". Is PGP the
de facto banking industry standard?
I advertise STARTTLS on one of my smtp servers and I have had spammers make
secure connections to me to try to send me spam.
> Good answers to my points, but I have one question: If you really need all
> of this, why not use Lotus notes? End-user to end-user encryption, signed
> messages, everything works pretty much automatically. Even S/mime for
I've never been comfortable with the quality of code in Notes, ever since
seeing how it talked to my SMTP gateways and got all confused about who it
was talking to. Also, given the questionability of the crypto in Notes,
having a full solution that's verifiable seemed to be a better idea.
Also, the ability to drop new nodes in places without licensing costs and
export issues was attractive- though not a primary concern.
Certainly Lotus has done a magnificent job with regards to key handling.
If it weren't for my mistrust of the rest of their code it'd be higher on
my list.
> non-Notes recipients. Verification of sender in agents if the sender signs
> the mail, which can just happen. Yes, I know, it seems like no one at Lotus
> has ever actually used an RFC821/822-based mail system sometimes, but the
> new gateways that are in use at IBM are much better, and I think that those
> are part of the products now.
>
> If you are generating a huge amount of custom code, why not just use a
> product which does it all.
It's not a huge amount of code really- right now about 20 lines of perl
as a front-end to GPG. Adding a Web-based mail server and RADIUS auth to
Apache means plugging together Lego-like blocks, not really generating
much custom code. Using tokens means about another 30 lines of code to do
cookies right rather than having to retry auth for SecurID.
Of course I haven't tackled automating key management yet, which will
indeed need some moderate coding effort.
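For readers who want something concrete, a minimal sketch of such a GPG
front-end content filter might look roughly like the following. This is
not Paul's actual code; the paths, the gateway keyring location, the
argument convention and the "pass through when no key is found" policy
are all assumptions.

#!/usr/bin/perl -w
# Rough sketch of a GPG front-end content filter for Postfix, pipe(8)
# style.  Reads one message on stdin, encrypts it to the recipients'
# public keys held on a gateway keyring when they exist, and reinjects
# the result with sendmail.  Paths and options are illustrative only.
use strict;

my $SENDMAIL = '/usr/sbin/sendmail';                 # reinjection (assumed path)
my $GPG      = '/usr/bin/gpg';                       # GnuPG binary (assumed path)
my @KEYRING  = ('--no-default-keyring', '--keyring',
                '/var/spool/filter/pubring.gpg');    # gateway public keyring

sub defer { warn @_; exit 75 }                       # EX_TEMPFAIL: Postfix retries later

# pipe(8) hands us, with the usual master.cf argv:
#   filter -f sender -- recipient [recipient ...]
@ARGV >= 4 and shift(@ARGV) eq '-f'
    or die "usage: filter -f sender -- recipient...\n";
my $sender = shift @ARGV;
shift @ARGV;                                         # the '--' separator
my @recipients = @ARGV;

# Slurp the message and split headers from body at the first blank line.
my $message = do { local $/; <STDIN> };
my ($headers, $body) = split(/\r?\n\r?\n/, $message, 2);
($headers, $body) = ($message, '') unless defined $body;

# Encrypt only if every recipient has a public key on the gateway
# keyring; otherwise the message passes through unchanged.
my $have_keys = 1;
for my $rcpt (@recipients) {
    system($GPG, @KEYRING, '--batch', '--quiet', '--list-keys', $rcpt) == 0
        or $have_keys = 0;
}

if ($have_keys) {
    # Run the body through gpg via temporary files to avoid shell quoting.
    my ($in, $out) = ("/tmp/filter-in.$$", "/tmp/filter-out.$$");
    open(my $fh, '>', $in) or defer("cannot write $in: $!\n");
    print $fh $body;
    close $fh;
    my @rcpt_args = map { ('-r', $_) } @recipients;
    system($GPG, @KEYRING, '--batch', '--armor', '--always-trust',
           '--output', $out, @rcpt_args, '--encrypt', $in) == 0
        or defer("gpg encryption failed\n");
    open($fh, '<', $out) or defer("cannot read $out: $!\n");
    $body = do { local $/; <$fh> };
    close $fh;
    unlink $in, $out;
}

# Reinject the (possibly encrypted) message back into Postfix.
open(my $mail, '|-', $SENDMAIL, '-i', '-f', $sender, '--', @recipients)
    or defer("cannot run sendmail: $!\n");
print $mail $headers, "\n\n", $body;
close $mail or defer("sendmail reinjection failed\n");
exit 0;

It would be hooked in with a content_filter setting in main.cf and a
pipe(8) entry in master.cf, along the lines of the Postfix content-filter
documentation; a production version would also need to rewrite the MIME
headers once the body is ASCII-armoured, and decide what the policy is
when only some recipients have keys.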
> I know this is off of the topic of this list, which is postfix.
>
Yep, I'll happily accept non-list mail on this topic for those who want to
discuss it, but it's probably best to keep it off list.
> The other point is, if you are going to run this sort of solution, why not
> use S/mime instead of PGP? It seems a lot more "standard". Is PGP the
> de facto banking industry standard?
GPG is easy to integrate; S/MIME is slated for phase 3 of my project
simply because institutionally PGP is more widespread and S/MIME needed
more custom code. Plus I use pine, so GPG/PGP was an easier thing for me
to test ;)
On another front, I've decided that having the bot send back a fingerprint
of a message to the sender helps with both verification and against
spoofing. That should be another line of code.
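A small hedged sketch of that fingerprint idea, written as a standalone
helper rather than the single line Paul mentions (the digest algorithm,
the wording and the sendmail path are illustrative assumptions):

#!/usr/bin/perl -w
# Sketch of the "mail the sender a fingerprint" idea: read the outbound
# message on stdin, compute a digest, and mail it back to the envelope
# sender named on the command line.  MD5 and the wording are
# illustrative choices only.
use strict;
use Digest::MD5 qw(md5_hex);

my $SENDMAIL = '/usr/sbin/sendmail';                 # assumed path
my $sender = shift @ARGV or die "usage: fingerprint-ack sender <message\n";

my $digest = md5_hex(do { local $/; <STDIN> });

open(my $ack, '|-', $SENDMAIL, '-i', '--', $sender)
    or die "cannot run sendmail: $!\n";
print $ack "To: $sender\n",
           "Subject: fingerprint of your outbound message\n\n",
           "MD5 fingerprint of the message as sent: $digest\n";
close $ack or die "sendmail failed\n";

The point is simply that the sender gets an out-of-band receipt to compare
against what they actually wrote, which makes spoofing through the gateway
easier to notice.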
Phase II will involve moving the system into the TCB of an Open Source
implementation of a GFAC system. That's when custom code will be
necessary. Fortunately now there are distributions with most all the
right pieces, so it should be easier sailing than when I first spec'd it
out last year.
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
-
They get revoked, I hope.
> > that inherently defeats the purpose of what you're attempting by
> > storing the private keys and/or passphrases on the mail server where an
> > attack could conceivably compromise them. it also allows forgery where
>
> No, it doesn't. An attack could conceivably compromise them *anywhere*.
Anywhere the passphrase is available, assuming the encryption is
sound.
Thus it is available on the server even when the user is not
around, which is I think the point he was trying to make.
> I've got news for anyone who still believes that firewalls contain network
> integrity...
As a reseller I like to think they help - but as said before in
the discussion security is about minimising risks.
> I doubt you've worked with many banks if you believe the level of security
> is that much different than any other organization.
You noticed that too *8-). Actually it does vary; some banks do
deploy and manage fairly sensible internal security systems. I
think if a bank has IT staff that are committed to security, it
tends to be easier to get the money to do such things than in
a non-financial organisation, but it can take a long time to turn
around an organisation which has been lax.
One bank someone I knew worked at described the IT security
policy as "we assume all the staff are honest", as I said it
varies.
There are a number of issues here, and arguing about whether it
is a staff or technical problem isn't going to solve them.
First, the person is talking about securing Microsoft Outlook.
Surely this is a bit like taking a tea strainer (non-tea drinkers
should substitute a sieve or other suitably perforated kitchen
utensil), and thinking it is the right shape to make a spoon out
of.
Second, the encryption being proposed is too difficult for some
staff members. Now we could train staff, and sack those who
can't master it, or we could look for a simpler piece of
software. Once set up, public key encryption should mainly
require that the user enter a password (possibly when they log
in, or use a security token!). Ideally, software that can take
administrator-determined actions when a security condition
arises (mail from a revoked key, etc.) should be chosen. Most staff
need to know how to enter passwords to use their computers -
although I appreciate some find it challenging.
I'm not up on PGP based mail clients - but I dare say someone on
the list can advise on easy to use mail clients. Ironically
despite less use S/MIME is probably better supported - this
shows the power of open standards against troublesome
proprietary formats.
The idea of rejecting unencrypted e-mail appears to have merit -
easy to do, and enforces the policy.
then use the right tool for the job. use transport layer encryption. use
the TLS patches for postfix. don't use a tool designed for a completely
different job.
that gives you the encryption you want without giving the false
impression of authentication/identity-verification.
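For concreteness, a minimal sketch of that transport-layer approach in
main.cf on a TLS-capable postfix might look like the fragment below; the
parameter names follow the TLS patch of that era and the certificate
paths are placeholders, so treat it as illustrative and check the
documentation for the build you actually run:

# Offer STARTTLS to connecting clients (placeholder certificate paths):
smtpd_use_tls       = yes
smtpd_tls_cert_file = /etc/postfix/server-cert.pem
smtpd_tls_key_file  = /etc/postfix/server-key.pem

# Use TLS opportunistically when sending to servers that announce it:
smtp_use_tls = yes

# Refusing non-encrypted connections, as suggested earlier, would be
# smtpd_enforce_tls = yes, but that breaks mail exchange with any peer
# that cannot speak TLS, so it is only workable for a controlled set of
# partners.
#smtpd_enforce_tls = yes

This only protects the transfer of the mail, not its content, which is
exactly the limitation craig describes.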
> > > and IF everyone leaving the company is good enough not to use
> > > their keys?
> >
> > you revoke the key when they leave.
>
> Which doesn't propagate to your customers'/business partners' personal
> keyrings (whoops!),
so? it gives you what is important, which is repudiation - e.g. you
publicly revoked the key on 2001-10-08 (the same day you noticed the
compromise, or the same day the staff member left) by alerting various
public key servers, so any recipient who continues to treat it as being
authoritative has only themselves to blame.
in the case of business partners with whom you know you have ongoing
communications about particular projects, it would be normal to send
them a copy of the key revocation certificate too.
> or are you now asserting that you're to educate all
> your customers and partners on cryptographic operations and lose
> all the ones who don't measure up?
you can only be responsible for your own security practices. if your
correspondent's procedures are lax, that's their problem. in some cases,
you should refrain from sending sensitive confidential material to
people/companies who are known to have inadequate security procedures.
in any case, the same problem exists when using PGP - if the recipient
doesn't have or doesn't know how to use PGP, they can't read the mail.
> > > I still submit that machine failure is much easier to model than
> > > human failure and much more predictable.
> >
> > that may be true, but it's also irrelevant. if the problem is human
>
> It's not irrelevant. You haven't removed machine failure, and you've
> added significantly more human failure in your solution.
no, the *same* amount of human failure is there...or probably less,
since you've bothered training the staff and informing them of their
responsibilities.
> > failure then dealing with the more predictable machine failure
> > doesn't do any good - you're fixing the wrong problem.
>
> No, you're focusing on the wrong part of the problem.
not at all.
the problem is NOT technology. the problem is staff and training.
> Get access to the desktop and getting the passphrase is trivial, and
> may not even be necessary. Become the IT guy and you've probably got
> all the passwords and half of the passphrases anyway. Make mail break
> for a user (one filter rule) then go to their desk to "fix" their
> problem and I'd bet that you get the passphrase >80% of the time if
> you're the normal IT guy. Heck, if it was only 33%, you'd have 100 in
> a 300 person bank. Think that's not enough to do damage? Now start
> thinking about people whose secretaries read their mail for them...
> Yep- that's normally the execs...
this is a training problem.
"do not give your passphrase to anyone for any reason, not even the IT
staff. failure to observe this rule may result in immediate termination
of your employment. this is your only warning".
make an example of the first few mid-level executives who think it
doesn't apply to them and the rest will soon take notice.
if you don't have the political will to do what is necessary to ensure
security, then don't even bother. you may fool yourself, but you
certainly won't fool any attacker.
> > this is the whole point of having a passphrase, this is the basis of
> > the security model used by PGP and similar programs such as GnuPG
> > - the private key is protected by a passphrase known only by the
> > authorised user(s).
>
> That protection has proven weak at best in the past [1] if you have
> access to the secret key file. However, you can always do the Webmail
> passphrase thing with my solution, where yours allows no relocation of
> credential checking and only static repeated credentials.
you just don't get it, do you?
it makes no difference whether the client is a webmail app or not.
> > given enough time and CPU power, it may be possible to perform
> > a brute-force attack on the pass-phrase, which is why it is so
> > important to pick a good (i.e. long and complicated yet memorable)
> > pass-phrase and also to be very careful about how and where it is
> > used.
>
> There have been implementation flaws and successful non-brute-force
> attacks on secret keys before[1]. No doubt there will be again.
> Brute force will still work in ~10% of the cases anyway- that's 30
> keys in a company of 300 (again, best-case.)
(i'll use your example of 10% because it's convenient, not because i
agree with it)
to get that 10%, every passphrase would have to be attacked - a
process that is likely to take weeks or months or even years FOR EACH
PASSPHRASE.
even if the attackers are only after the passphrase for one key, a
brute-force attack against it is still going to take weeks/months/years,
with no guarantee of success. the better the passphrase, the longer
the brute-force attack is going to take, and the lower the chance of
successful compromise.
obviously, it's stupid to use well known quotations as pass-phrases as
they are subject to dictionary attack (it's not hard to get books full
of quotations), but even that's not an insurmountable problem - e.g.
use the initial letters of two (or more) easily remembered quotations
plus some numerals or punctuation characters and capitalisation of some
letters to make up the passphrase.
the idea is to make the passphrase long and complicated yet easy to
remember.
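As a toy illustration of that recipe (the quotations and decoration are
placeholders, and obviously a published example should never be used as
a real passphrase):

#!/usr/bin/perl -w
# Toy illustration of the passphrase recipe above: take the initial
# letters of two easily remembered quotations, then mix in capitals,
# digits and punctuation.  Placeholder strings only.
use strict;

my @quotes = ('the quality of mercy is not strained',
              'now is the winter of our discontent');

# Initial letters of each word; first quote upper-cased, second lower.
my @initials = map { join '', map { substr($_, 0, 1) } split ' ', $_ } @quotes;
my $passphrase = uc($initials[0]) . '#42!' . lc($initials[1]);

print "$passphrase\n";    # prints TQOMINS#42!nitwood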
> > depending on what their actual job is, security IS part of their job.
>
> Funnily enough, though, companies aren't tripping over each other to send
> every single worker to training every two weeks. You'll still get >=3%
> failure rates if you do that, but that's the best it ever gets. You'd
> better consider working for a government though; most companies can't
> afford such solutions.
again, you're refusing to understand that this is not a technological
problem.
this is a problem of staff and training - and the will to do what is
required.
craig
--
craig sanders <c...@taz.net.au>
Fabricati Diem, PVNC.
-- motto of the Ankh-Morpork City Watch
> "Paul D. Robertson" wrote:
> >
> > ... and IF
> > everyone leaving the company is good enough not to use their keys?
>
> They get revoked I hope.
Maybe I missed someone putting in the infrastructure for real-time key
revocation? In general keys are good for as long as they say they are,
and there isn't a good way to centrally revoke keys that sit on other
people's public keyrings. Once the IPSEC guys get their act together (and
we've been waiting a long time in IPSEC land for vendors to achieve key
management- about two years AIR.) maybe we'll see something useful in that
regard, but it isn't here today.
> > No, it doesn't. An attack could conceivably compromise them *anywhere*.
>
> Anywhere the passphrase is available, assuming the encryption is
> sound.
And the implementation is sound and the passphrase isn't guessable...
Take 300 users, tell them all what a passphrase should be, have them all
make one, and see what percentage still use something guessable.
Now add anyone who can engineer their passphrase out of them, shoulder
surf, shake and season to taste.
> Thus it is available on the server even when the user is not
> around, which is I think the point he was trying to make.
Yes, it can be, but as I've pointed out, that doesn't equate to compromise
any faster than compromise of a particular user, and in my
experience with large organizations, I can pretty much count on user
compromise.
> > I've got news for anyone who still believes that firewalls contain network
> > integrity...
>
> As a reseller I like to think they help - but as said before in
> the discussion security is about minimising risks.
Feel free to contact me off list for that discussion. It's a long and
sordid tale of SSL, vendors and things that go bump in the night ;)
> > I doubt you've worked with many banks if you believe the level of security
> > is that much different than any other organization.
>
> You noticed that too *8-). Actually it does vary; some banks do
> deploy and manage fairly sensible internal security systems. I
Yep, some do, others struggle to stay afloat, pinch every penny and
generally work just like every other company in the world.
> think if a bank has IT staff that are committed to security, it
> tends to be easier to get the money to do such things than in
> a non-financial organisation, but it can take a long time to turn
> around an organisation which has been lax.
I've seen as many non-financial institutions do the right thing as
financial institutions.
> One bank someone I knew worked at described the IT security
> policy as "we assume all the staff are honest", as I said it
> varies.
My favorite was the bank that wanted to do 802.11b so they wouldn't have
to run any more cable.
> Second, the encryption being proposed is too difficult for some
> staff members. Now we could train staff, and sack those who
> can't master it, or we could look for a simpler piece of
> software. Once set up, public key encryption should mainly
> require that the user enter a password (possibly when they log
> in, or use a security token!). Ideally, software that can take
> administrator-determined actions when a security condition
> arises (mail from a revoked key, etc.) should be chosen. Most staff
> need to know how to enter passwords to use their computers -
> although I appreciate some find it challenging.
That still leaves the institutional issues surrounding the company knowing
what's being communicated to their customers/partners.
> I'm not up on PGP based mail clients - but I dare say someone on
> the list can advise on easy to use mail clients. Ironically
> despite less use S/MIME is probably better supported - this
> shows the power of open standards against troublesome
> proprietary formats.
Hmmm, I thought OpenPGP was an open standard?
Seriously, if the mail clients made it easy enough, this wouldn't be an
issue because everyone would be using encrypted mail.
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
prob...@patriot.net which may have no basis whatsoever in fact."
-
> On Sun, 7 Oct 2001, Simon Waters wrote:
> > I'm not up on PGP based mail clients - but I dare say someone on
> > the list can advise on easy to use mail clients. Ironically
> > despite less use S/MIME is probably better supported - this
> > shows the power of open standards against troublesome
> > proprietary formats.
Evolution supports S/MIME in its current state (I believe), and you
probably saw the posts over the weekend about the Germans and Kmail in
this area.
UK work on S/MIME is documented at
http://www.skills-1st.co.uk/papers/afindlay.html. There are several papers
relating to work carried out by Netproject members on PKI.
Netproject were planning to work with Exim (largely for the valid
reason that the team members knew it). I can't think postfix (or any
other serious, general-purpose MTA) would have any problems being
dropped in, but Wietse may have looked at this sometime.
--
Keith Matthews Spam trap - my real account at this
node is keith_m
Frequentous Consultants - Linux Services,
Oracle development & database administration
Thanks to everyone who argued over encryption methodology though I hope
everyone is still friends after some of the posts... =}
In that case, figuring we will most likely be standardizing on an Outlook
and Evolution desktop, would S/MIME make more sense? More people *seem* to
use PGP and I have to admit I'm more familiar with PGP/GPG.
What advantages besides support would there be to an S/MIME configuration?
Daryl.
-----Original Message-----
From: owner-pos...@postfix.org
[mailto:owner-pos...@postfix.org] On behalf of Keith Matthews
Sent: Monday 8 October 2001 09:44
To: Postfix-Users
Subject: Re[2]: Implementing a PGP encryption/decryption gate
I just found out about GEAM, a « specialized MTA which takes care of
encrypting all messages leaving your company ».
See http://www.g10code.com/p-geam.html
PGP:
* Has been around for quite some time; however, it has changed over time, which may
lead to trouble with respect to compatibility:
- Old versions (2.x) used RSA keys. Their encryption techniques are
not compatible with recent versions of PGP. You cannot send an email
to two parties using old and new keys at the same time.
- PGP uses the "web of trust" concept. Keys are exchanged between parties
and people countersign each other's keys. It can be very difficult to
actually check the authenticity of some other person's key if you don't
have an entry point into the "web of trust" near to this person.
- Some mailers have PGP support built in (e.g. mutt), for other mailers
you can get PGP support. Unfortunately, RFC2015 (PGP/MIME), as used
by mutt, did not make its way into all other mailers, so that you may
see interoperability problems.
S/MIME
* Came along later:
- Please take it with a grain of salt, but for me it seems that both
Outlook (Express) and Netscape mailer together dominate the market
of email software with respect to the number of users. Both support
S/MIME, which makes a strong point for S/MIME for me. I do not say
that I like this software or propose its use, but as a matter of fact,
most people I know tend to use one of them, leaving a small number of
command line enthusiasts like myself using mutt (elm, pine,...)
- S/MIME supports an X509 concept with a central CA (certificate authority).
This makes the handling of the authenticity problem much simpler, if
you are using it in a company (or university) context. Most people
ignore this point, but when we are talking about security, we must talk
about authenticity. Using keys without proper authentication renders
encryption finally useless.
- It seems, that the S/MIME RFC is more or less closely followed by
the software, so interoperability should be fine.
Summarizing: I would recommend going the S/MIME way for the long-term
advantages.
Best regards,
Lutz
--
Lutz Jaenicke Lutz.J...@aet.TU-Cottbus.DE
BTU Cottbus http://www.aet.TU-Cottbus.DE/personen/jaenicke/
Lehrstuhl Allgemeine Elektrotechnik Tel. +49 355 69-4129
Universitaetsplatz 3-4, D-03044 Cottbus Fax. +49 355 69-4153
For end-to-end, use encryption functions in the MUA: PGP, S/MIME.
For hop-by-hop, use encryption functions in the MUA+MTA: TLS (SSL).
Sounds like some people are looking for a model where the post
office signs and seals letters for the sender. That makes sense
only when there is some method to authenticate the sender to
sender-side post office, and when there is some method for the
recipient-side post office to inform the recipient that a message
is "OK" or "BAD", whatever that means. This all seems very clumsy
compared to end-to-end, MUA-to-MUA, encryption and authentication.
It also has a centralist flavor that I am not in favor of.
Wietse
> Wow! I had no idea asking this question would generate so much traffic -
> sorry for all the non-postfix stuff though it is related as postfix would be
> the hub of the email system.
Yeah, that's the problem with this list, so many members know about so
much <g>.
> In that case, figuring we will most likely be standardizing on an Outlook
> and Evolution desktop, would S/MIME make more sense? More people *seem* to
> use PGP and I have to admit I'm more familiar with PGP/GPG.
> What advantages besides support would there be to an S/MIME configuration?
Government systems will be using it. Before long if you want to send
a signed e-mail to a government department you'll have to use S/MIME.
Once people have to support S/MIME regardless then PGP will get
dropped except for minority groups; who wants to run with 2 systems?
I make no comment about the technical 'betterness' of either.