
SSL_library_init() & EVP_sha256


Phil Pennock

Jun 15, 2009, 12:02:04 AM
Folks,

The approach of the Exim MTA to cryptography is simple -- don't
second-guess the SSL library developers when it comes to choosing which
algorithms/digests/etc to load, and provide a knob
("tls_require_ciphers") for administrators to restrict what can be
loaded. The MTA developers do not want to be in the cryptanalysis
game, deciding when digests are or are not safe to use and reason that
this is best handled by the SSL libraries which are maintained by people
who understand this stuff better.

There's a pending request, from February 2008, to load SHA-256 in Exim
to let people verify certificates from CAs which have migrated to
SHA-256. http://bugs.exim.org/show_bug.cgi?id=674

When RFC 5246 came out, specifying TLS 1.2 and having all mandated
cipher suites use SHA-256, we assumed that to aid the transition OpenSSL
would add EVP_sha256() to the list of digests initialised in
SSL_library_init(), even before support of TLS 1.2 itself. I've checked
OpenSSL 1.0.0 beta 2 and see that this is still not the case.

I'm seeing usage of SHA-256 become more widespread by CAs today.

Are there plans to add this digest to the list initialised by
SSL_library_init() ?

If not, why not please?

Thanks,
-Phil
______________________________________________________________________
OpenSSL Project http://www.openssl.org
Development Mailing List opens...@openssl.org
Automated List Manager majo...@openssl.org

Bodo Moeller

Jun 15, 2009, 5:10:24 AM
On Mon, Jun 15, 2009 at 5:46 AM, Phil Pennock<opens...@spodhuis.org> wrote:

> When RFC 5246 came out, specifying TLS 1.2 and having all mandated
> cipher suites use SHA-256, we assumed that to aid the transition OpenSSL
> would add EVP_sha256() to the list of digests initialised in
> SSL_library_init(), even before support of TLS 1.2 itself.  I've checked
> OpenSSL 1.0.0 beta 2 and see that this is still not the case.
>
> I'm seeing usage of SHA-256 become more widespread by CAs today.
>
> Are there plans to add this digest to the list initialised by
> SSL_library_init() ?

I think SSL_library_init() is meant to provide just the subset of
algorithms needed by the SSL/TLS protocol implementation itself, which
currently doesn't include SHA-256.

Most applications, however, just call OpenSSL_add_all_algorithms() to
get more than that subset. If you'd rather not define more encryption
algorithms than needed to cut down some overhead, you should be able
to make do with calling SSL_library_init() and
OpenSSL_add_all_digests(). Then the hash algorithms available for
certificate verification will include SHA-256.

Bodo

Phil Pennock

Jun 15, 2009, 4:39:32 PM
On 2009-06-15 at 11:02 +0200, Bodo Moeller wrote:

> On Mon, Jun 15, 2009 at 5:46 AM, Phil Pennock<opens...@spodhuis.org> wrote:
>
> > When RFC 5246 came out, specifying TLS 1.2 and having all mandated
> > cipher suites use SHA-256, we assumed that to aid the transition OpenSSL
> > would add EVP_sha256() to the list of digests initialised in
> > SSL_library_init(), even before support of TLS 1.2 itself.  I've checked
> > OpenSSL 1.0.0 beta 2 and see that this is still not the case.
> >
> > I'm seeing usage of SHA-256 become more widespread by CAs today.
> >
> > Are there plans to add this digest to the list initialised by
> > SSL_library_init() ?
>
> I think SSL_library_init() is meant to provide just the subset of
> algorithms needed by the SSL/TLS protocol implementation itself, which
> currently doesn't include SHA-256.
>
> Most applications, however, just call OpenSSL_add_all_algorithms() to
> get more than that subset. If you'd rather not define more encryption
> algorithms than needed to cut down some overhead, you should be able
> to make do with calling SSL_library_init() and
> OpenSSL_add_all_digests(). Then the hash algorithms available for
> certificate verification will include SHA-256.

Doesn't this add various insecure digests, since OpenSSL_add_all_digests()
registers everything, so the application is back in the crypto-engineering
game of figuring out which digests to exclude? And my understanding is that
since this is certificate path verification, the cipher-suite spec passed
to SSL_CTX_set_cipher_list() does not help filter this out, even if we set
a default which has !LOW in it? I could well be wrong here; please correct
me if so.

For an application which just wants to, by default, support the normal
ciphers and expect SSL_CTX_set_verify() to (a) work and (b) not support
digests more trivially broken than MD5, what's the correct way to go
please?

Thanks,
-Phil

David Schwartz

Jun 15, 2009, 5:17:36 PM

Phil Pennock wrote:

> The approach of the Exim MTA to cryptography is simple -- don't
> second-guess the SSL library developers when it comes to choosing which
> algorithms/digests/etc to load, and provide a knob
> ("tls_require_ciphers") for administrators to restrict what can be
> loaded. The MTA developers do not want to be in the cryptoanalysis
> game, deciding when digests are or are not safe to use and reason that
> this is best handled by the SSL libraries which are maintained by people
> who understand this stuff better.

That just won't work. Cryptography is not a "drop in a library and mark a
checkbox on your product" thing. It has to be properly integrated in an
application with decisions made as to what the application actually needs,
what threat models it faces, and so on.

If the Exim MTA takes that approach to cryptography, I would consider it
unreliable from a security standpoint. The OpenSSL folks don't necessarily
have any idea, nor care, what the Exim MTA needs from OpenSSL and won't make
sure it gets what it needs. If the Exim MTA folks don't do that, then
nobody's doing that.

OpenSSL is a library that provides security services to applications, but it
has no idea what those applications need, what threats they face, what
security model they live in, and so on. You cannot simply accept the
defaults and hope for the best. That might work, but to be reliable,
somebody somewhere has to make sure it does in fact work.

DS

Phil Pennock

Jun 15, 2009, 6:03:40 PM
On 2009-06-15 at 14:17 -0700, David Schwartz wrote:
> Phil Pennock wrote:
> > The approach of the Exim MTA to cryptography is simple -- don't
> > second-guess the SSL library developers when it comes to choosing which
> > algorithms/digests/etc to load, and provide a knob
> > ("tls_require_ciphers") for administrators to restrict what can be
> > loaded. The MTA developers do not want to be in the cryptoanalysis
> > game, deciding when digests are or are not safe to use and reason that
> > this is best handled by the SSL libraries which are maintained by people
> > who understand this stuff better.
>
> That just won't work. Cryptography is not a "drop in a library and mark a
> checkbox on your product" thing. It has to be properly integrated in an
> application with decisions made as to what the application actually needs,
> what threat models it faces, and so on.

While so true, nobody's even come up with a definition of which hostname
should be verified by an MTA when delivering to another MTA, so there's
no hostname verification, so the security model for TLS for MTA->MTA
communications is broken, independent of which software is used for the
MTA. So at *best*, there's opportunistic link encryption.

Mail client (MUA) -> Mail submission server (MSA, aka MTA on port 587)
is tolerably defined, but that's about it.

None of this is specific to this application.

So the only real verification is "trust path for the certificate, used
for this purpose, to a trusted CA with every step valid". *This* is
supposed to be bog-standard SSL/TLS usage.

> OpenSSL is a library that provides security services to applications, but it
> has no idea what those applications need, what threats they face, what
> security model they live in, and so on. You cannot simply accept the
> defaults and hope for the best. That might work, but to be reliable,
> somebody somewhere has to make sure it does in fact work.

Right. But what in this requires that the application know, or encode,
which particular digests are or are not currently considered safe?
Aside from the unfortunate reality of no hostname verification, the MTA
use-case is a standard profile and should accept "normal" algorithms in
real-world use by Certificate Authorities in the wild today.

You cannot simply take a bunch of common-sense stances and extrapolate
past "understand your security model" to "every application needs to be
maintained by cryptanalysts who keep track of which ciphers are
currently needed by or broken for operation of SSL/TLS". Not if there's
any real-world expectation that the applications all keep up and that a
crypto breakthrough can be fixed by configuration and recompiling the
SSL libraries, instead of recompiling each and every SSL application,
after waiting for code updates from those developers too.

-Phil

David Schwartz

Jun 15, 2009, 6:48:52 PM

Phil Pennock wrote:


> > That just won't work. Cryptography is not a "drop in a library
> > and mark a
> > checkbox on your product" thing. It has to be properly integrated in an
> > application with decisions made as to what the application
> > actually needs,
> > what threat models it faces, and so on.

> While so true, nobody's even come up with a definition of which hostname
> should be verified by an MTA when delivering to another MTA, so there's
> no hostname verification, so the security model for TLS for MTA->MTA
> communications is broken, independent of which software is used for the
> MTA. So at *best*, there's opportunistic link encryption.

This is all application-specific stuff that OpenSSL cannot and does not
know. So there are two possibilities:

1) This is irrelevant to how Exim uses OpenSSL, in which case why bring it
up?

2) This affects how Exim uses OpenSSL. In which case, how can OpenSSL just
magically do the right thing?

> None of this is specific to this application.

Yes, it all is. As far as OpenSSL is concerned, anything about how mail
works is specific to the application, since OpenSSL knows nothing special
about mail.

> So the only real verification is "trust path for the certificate, used
> for this purpose, to a trusted CA with every step valid". *This* is
> supposed to be bog-standard SSL/TLS usage.

No, that's very unusual, since it provides no MITM rejection.

> > OpenSSL is a library that provides security services to
> > applications, but it
> > has no idea what those applications need, what threats they face, what
> > security model they live in, and so on. You cannot simply accept the
> > defaults and hope for the best. That might work, but to be reliable,
> > somebody somewhere has to make sure it does in fact work.

> Right. But what in this requires that the application know, or encode,
> which particular digests are or are not currently considered safe?

Because there is no such thing as "currently considered safe". MD5 is
currently safe for some applications and not others. In some cases, for
example where encryption is purely opportunistic, you may prefer to use a
known-unsafe hash algorithm rather than have no encryption at all. So the
issue is not if they're "currently considered safe", the issue is whether
using them or not using them provides the best mix of security and other
values for this particular application.

> Aside from the unfortunate reality of no hostname verification, the MTA
> use-case is a standard profile and should accept "normal" algorithms in
> real-world use by Certificate Authorities in the wild today.

I don't think this "standard profile" exists. And certainly preferring weak
encryption to none at all is *not* part of any standard profile. Look at,
for example, what Firefox makes you go through to connect to a site when the
name mismatches.

> You cannot simply take a bunch of common-sense stances and extrapolate
> past "understand your security model" to "every application needs to be
> maintained by cryptanalysts who keep track of which ciphers are
> currently needed by or broken for operation of SSL/TLS". Not if there's
> any real-world expectation that the applications all keep up and that a
> crypto breakthrough can be fixed by configuration and recompiling the
> SSL libraries, instead of recompiling each and every SSL application,
> after waiting for code updates from those developers too.

Wishing won't make it so. Unfortunately, it does require every application
to confirm that a crypto vulnerability is fixed for that application. For
example, if there's a bug in OpenSSL certificate verification code, without
talking to the people who wrote a particular application, how do you know
whether it even uses OpenSSL's certificate verification code? It may plug
in its own, which may or may not have this vulnerability. This same type
of reasoning
applies to the majority of bugs. (Obvious exceptions might include, for
example, bugs in handling raw SSL protocol data over the wire.)

Every application *does* need to be maintained by programmers with a pretty
significant knowledge of cryptography. Every bug *does* need to be
evaluated by every application to ensure that a library upgrade is
sufficient to fix the bug for that application.

That is the unfortunate reality. And I think it's an insoluble problem
because security is, fundamentally, hard.

I wouldn't go so far as to say a professional cryptographer needs to be
on-staff for every application that uses OpenSSL. But I would go so far as
to say that no application can ever assume it is "bog standard" and can
trust that the default settings will always do what is right for that
application. And, in fact, if that were so, this bug report would not exist.

DS
