______________________________________________________________________
OpenSSL Project http://www.openssl.org
User Support Mailing List openss...@openssl.org
Automated List Manager majo...@openssl.org
I expect 0.9.8m to have the new renegotiation semantics enabled.
Until then, you can go back to 0.9.8k -- just be aware that
renegotiation cannot be performed within the guarantees that TLS
otherwise makes (namely, that the endpoint cannot be arbitrarily
changed without collusion between the old and new endpoints).
The attack is simple:
1) Set up a secure webserver, with an (innocent) file that you don't
want accessed, somewhere. This is 'server'. (This demo works better
if it's in a directory that requires a client certificate for access.)
2) Set up a TLS client endpoint and a server endpoint in the same
process, preferably somewhere else. We'll call this the 'proxy'.
3) Arrange DNS such that the name that server thinks it has (the
HostName directive in Apache) actually goes to proxy.
4) Attack:
a) Client connects to HostName, ends up talking to proxy. It sends
ClientHello, but it doesn't necessarily receive an answer immediately.
b) proxy makes connection to server over TLS.
c) proxy sends something like GET /file/you/don't/want/shown HTTP/1.1
\n Host: server \n\n
d) proxy then sends ClientHello from client and passes it to server,
and transparently proxies the connection from here forward.
e) server receives ClientHello and interprets it (per spec) as a
request to renegotiate.
f) (re)negotiation finishes, server thinks all data received can be
treated as being under the same security veil, processes it (including
prefix), and the attack is successful.
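The splice in steps (c)-(f) can be illustrated at the byte level. Everything below is hypothetical (the paths, headers, and cookie value are made up for illustration); the point is only that the server sees the attacker's prefix and the victim's request as one continuous stream:

```python
# Attacker-chosen prefix, sent inside the proxy's own TLS session.
# The final header line is deliberately left unterminated.
attacker_prefix = (b"GET /file/you/dont/want/shown HTTP/1.1\r\n"
                   b"Host: server\r\n"
                   b"X-Ignore: ")

# What the victim sends once its handshake (which the server took
# to be a renegotiation) completes -- hypothetical request/cookie.
victim_request = (b"GET /login HTTP/1.1\r\n"
                  b"Host: server\r\n"
                  b"Cookie: session=s3cr3t\r\n\r\n")

# The server treats both as one stream under one security veil:
blended = attacker_prefix + victim_request

def first_request_line(stream: bytes) -> bytes:
    """The request line the server will actually act on."""
    return stream.split(b"\r\n", 1)[0]

# The executed request is the attacker's; the victim's own request
# line is swallowed into the dangling X-Ignore header, while the
# victim's cookie still authorizes the attacker's chosen request.
assert first_request_line(blended) == b"GET /file/you/dont/want/shown HTTP/1.1"
assert b"X-Ignore: GET /login HTTP/1.1" in blended
```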
Since there's no cryptographic verification of the prior state in the
renegotiation (client thinks it's initial negotiation, server thinks
it's a renegotiation, but there's no way within TLS to signal that),
client and server then create new master_secrets. Then, server
processes the entire request, including the prefix that the attacker
injected.
It has been suggested that this is an inherent flaw with how HTTP
handles paths with different security characteristics, and I am
inclined to agree -- a client/server pair that understands the
difference between "pre-renegotiation data" and "post-renegotiation
data" won't trip on this, even if renegotiation is "insecure". That's
because such a pair would treat the data prior to the latest
renegotiation's Finished messages as suspect (assuming that the
client-server pair did mutual authentication), and generally (as a
best practice) it's best to drop suspect data from a potential
attacker.
It is also, though, undeniably a flaw in the TLS specification that's
amplified by the clients of the libraries that implement it -- because
the clients don't understand the concept of "security veil", the TLS
implementations tend to provide a raw stream of bytes (akin to a
read()/write() pair) without the application necessarily being aware
of the change.
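A stream API that did understand the "security veil" might tag every read with a handshake epoch, so the application can tell pre- from post-renegotiation data. This is a hypothetical API sketch, not how any real TLS library behaves:

```python
from dataclasses import dataclass

@dataclass
class TaggedRead:
    data: bytes
    epoch: int   # increments at every completed (re)negotiation

class VeilAwareStream:
    """Hypothetical TLS-style stream whose reads carry the handshake
    epoch. Real implementations expose a raw byte stream instead,
    which is exactly the problem described above."""
    def __init__(self):
        self._epoch = 0
        self._buf = []
    def handshake_completed(self):
        self._epoch += 1
    def feed(self, data: bytes):
        self._buf.append(TaggedRead(data, self._epoch))
    def read(self) -> TaggedRead:
        return self._buf.pop(0)

s = VeilAwareStream()
s.handshake_completed()        # initial negotiation -> epoch 1
s.feed(b"injected prefix")     # arrives under epoch 1
s.handshake_completed()        # renegotiation -> epoch 2
s.feed(b"victim request")      # arrives under epoch 2
early, late = s.read(), s.read()
assert early.epoch != late.epoch  # app can now treat the
                                  # pre-renegotiation bytes as suspect
```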
-Kyle H
I honestly don't know. You would do better asking on an Apache
support list, I think; most of us don't know the Apache codebase (or
mod_ssl's requirements on the library) at all.
I *think* you might be able to do it at a <VirtualHost> level, for the
https server, as long as you get rid of any SSL configuration
statements (other than SSLRequireSSL) in the <Directory> clauses.
(Use the highest security you need, and disable renegotiation.)
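A sketch of what that might look like (hostname and paths are placeholders, and this is untested guesswork about a mod_ssl setup, not a vetted configuration): one uniform SSL policy for the whole vhost, declared up front, so nothing ever has to change mid-connection.

```apache
<VirtualHost *:443>
    ServerName secure.example.org
    SSLEngine on
    # Declare the strictest settings once, for the whole vhost:
    SSLVerifyClient require
    SSLCipherSuite HIGH:!aNULL
    <Directory "/var/www/secure">
        # No SSLVerifyClient / SSLCipherSuite overrides here --
        # per-<Directory> changes would force a renegotiation.
        SSLRequireSSL
    </Directory>
</VirtualHost>
```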
-Kyle H
The SSL renegotiation insecurity has two aspects, namely client
initiated renegotiation and server initiated renegotiation. Both of them
can be used by a man in the middle as an attack vector.
Renegotiations are needed for an Apache https configuration only if
you have a complex SSL configuration with various different SSL
requirements in the same vhost, like requiring client certs only for
some <Directory>, or changing the allowed cipher specs for some
<Directory> (or <Location>).
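For illustration, this is the kind of mixed configuration that forces mod_ssl to renegotiate mid-connection (paths are placeholders, offered as a sketch rather than a recommended setup): the vhost starts without client-cert verification, then tightens it for one directory, so a request reaching that directory triggers a renegotiation to ask for the certificate.

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLVerifyClient none
    <Directory "/var/www/private">
        SSLVerifyClient require   # stricter than the vhost default
        SSLVerifyDepth  2
    </Directory>
    <Location "/legacy">
        SSLCipherSuite DEFAULT    # different cipher specs here
    </Location>
</VirtualHost>
```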
If you do not use such a configuration, the best and at the moment only
way to be safe against the attack is upgrading to OpenSSL 0.9.8l.
There is a patch for Apache 2.2.14 which completely disables client
initiated renegotiation thereby still allowing server side renegotiation:
http://www.apache.org/dist/httpd/patches/apply_to_2.2.14/
This makes you safe from (only) one half of the attack without an
OpenSSL upgrade and still allows the complex configs to work. An
enhancement of this patch which should prevent all server side
renegotiation attacks known at the moment has been applied to the 2.2.x
branch very recently:
http://svn.apache.org/viewvc?rev=896900&view=rev
The first patch has been backported and suggested for 2.0:
http://svn.apache.org/viewvc?view=revision&revision=882861
http://people.apache.org/~rjung/patches/cve-2009-3555_httpd_2_0_x-v2.patch
and for 1.3:
http://www.mail-archive.com/modssl...@modssl.org/msg17939.html
A backport for the second patch does not yet exist.
I think further discussion of Apache-specific questions is a better
fit for the Apache httpd users list.
Regards,
Rainer
I am missing something about the renegotiation flaw and fail to
understand why it is a flaw in TLS. I hope I am missing just a small
piece. Could anyone please enlighten me?
* Kyle Hamilton wrote on Thu, Jan 07, 2010 at 16:22 -0800:
> It is also, though, undeniably a flaw in the TLS specification
> that's amplified by the clients of the libraries that implement
> it -- because the clients don't understand the concept of
> "security veil", the TLS implementations tend to provide a raw
> stream of bytes (akin to a read()/write() pair) without the
> application necessarily being aware of the change.
Could it be considered that a mis-assumption about SSL/TLS
capabilities caused this situation?
I think since TLS should be considered a layer, its payload
should not make any assumptions about it (or vice versa). But the
moment some application `looks at the TLS state' and tries to
associate this information with some data in some buffer, I think
it makes a mistake.
When using HTTP over IPSec, I think no one ever had the idea to
open or block URLs based on the currently used IPSec
certificate...
Am I wrong when I think that this level-mixing causes the
trouble? If a user (by the browser's default configuration) first
accepts some sillyCA or www.malicious.com but then later does not
accept it any longer and expects the trust that was initially
given to be taken back in retrospect, and finds this failing
and unsafe (impossible), is this really a TLS weakness?
It seems it is, so what do I miss / where is my mistake in
thinking?
I also wondered a lot about the Extended Validation attack from
last year; I had assumed that in `EV mode' a browser's tab is
completely isolated from all others, and second that no other
connectivity is possible than with the locked EV parameters, but as
it turned out this is not the case. Everything can change but the
green indicator remains. Strange...
Now I ask myself what happens if I connect via HTTPS and read the
crypto information as displayed by my browser and decide to
accept it - but after a renegotiation different algorithms are
used. As far as I understand, I would get absolutely no notice
about that. I could find myself suddenly using a 40 bit export
grade or even a NULL cipher to a different peer (key) without
any notice! If I understand correctly, even if I re-verify the
contents of the browser's security information pane right before
pressing a SUBMIT button, even then the data could be transferred
with different parameters if a renegotiation happens at the
`right' time!
If this is true, it means the information Firefox shows
when clicking the lock icon does not tell anything about the
data I will send; at most it can tell about the past, how the
page was loaded, but not reliably, because maybe it changed for
the last part of the page.
Where is my mistake in thinking?
oki,
Steffen
> Could it be considered that a mis-assumption about SSL/TLS
> capabilities caused this situation?
Only with hindsight.
> I think since TLS should be considered a layer, its payload
> should not make any assumptions about it (or vice versa). But the
> moment some application `looks at the TLS state' and tries to
> associate this information with some data in some buffer, I think
> it makes a mistake.
Well, then TLS is basically useless. A secure connection whose properties I cannot trust is not particularly useful. If I receive "foo" over the connection and cannot ever know where the middle "o" came from, what can I do with the "foo"? Answer -- nothing.
> When using HTTP over IPSec, I think no one ever had the idea to
> open or block URLs based on the currently used IPSec
> certificate...
I'm not sure I get the point of your analogy.
> Am I wrong when I think that those level-mixing causes the
> trouble? If a user (by browsers default configuration) first
> accepts some sillyCA or www.malicious.com but then later does not
> accept it any longer and expects the trust that initially was
> given to be taken back in retrospect and finds this failing
> and unsafe (impossible), is this really a TLS weakness?
No, that's not a TLS weakness, because in that case the client's behavior is objectively unreasonable. But looking at the state of the current connection to decide what privileges to give it is part of TLS's intended use.
> It seems it is, so what do I miss / where is my mistake in
> thinking?
The mistake is in thinking that any security protocol is useful as a security measure on end A if the security parameters can be changed by end B at any time with no notification to higher levels on end A.
> Now I ask myself what happens if I connect via HTTPS and read the
> crypto information as displayed by my browser and decide to
> accept it - but after a renegotiation different algorithms are
> used. As far as I understand, I would get absolutely no notice
> about that. I could find myself suddenly using a 40 bit export
> grade or even a NULL cipher to a different peer (key) without
> any notice! If I understand correctly, even if I re-verify the
> contents of the browsers security information pane right before
> pressing a SUBMIT button, even then the data could be transferred
> with different parameters if a re-negotiation happens at the
> `right' time!
That could be argued to be a bug. Ideally, a renegotiation should not be permitted to reduce the security parameters unless, at absolute minimum, the intention to renegotiate is confirmed at both ends using at least the security level already negotiated.
> If this is true, it means the information Firefox shows
> when clicking the lock icon does not tell anything about the
> data I will send; at most it can tell about the past, how the
> page was loaded, but not reliably, because maybe it changed for
> the last part of the page.
>
> Where is my mistake in thinking?
Correct, and to the extent TLS permits a renegotiation to reduce the security parameters without confirming the intention to reduce those parameters at the current level, TLS is broken. If the two endpoints negotiate a particular level of security, no attacker should be able to reduce that level of security within the connection without having to break the current level of security.
That is, if the two ends negotiate 1,024-bit RSA and 256-bit AES, then an attacker should not be able to renegotiate a lower (or different) security within that connection without having to break either 1,024-bit RSA, 256-bit AES, or one of the hard algorithms inside TLS itself (such as SHA1). TLS permitted an attacker to do this, and so was deemed broken.
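The check being argued for could be sketched as a policy an application enforces after any (re)negotiation: refuse to continue if the new parameters are weaker than what was first agreed. The cipher names and bit counts below are illustrative; the pairs mimic the (name, secret_bits) shape of Python's `ssl.SSLSocket.cipher()` minus its protocol field.

```python
def acceptable(initial, renegotiated):
    """Return True only if the renegotiated parameters are at least
    as strong as the initially negotiated ones. Each argument is a
    (cipher_name, secret_bits) pair (illustrative values)."""
    name, bits = renegotiated
    _, initial_bits = initial
    if bits < initial_bits:
        return False               # strength was reduced
    if "NULL" in name or "EXP" in name:
        return False               # never accept these after the fact
    return True

# A same-strength renegotiation passes; downgrades do not.
assert acceptable(("AES256-SHA", 256), ("AES256-SHA", 256))
assert not acceptable(("AES256-SHA", 256), ("EXP-RC4-MD5", 40))
assert not acceptable(("AES256-SHA", 256), ("NULL-SHA", 0))
```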
DS
This allows a MITM to initiate a session with a server, inject
data, then splice in the client that has connected to the MITM. For the
attack to work, additional attacks (such as IP network
untrustworthiness, IP rerouting, or DNS spoofing) are necessary.
Unfortunately, network untrustworthiness is something that's more
common than before. Many places offer free wifi, and the installation
of a proxy to perform this attack is only a little more complex than
child's play.
The worst thing about this attack is that it provides no means for
either the client or server to detect it. The client will receive the
server's correct certificate, the same way it expects to. The server
will receive either the client's correct certificate or no certificate
(as the client decides), the same way it expects to. There is no way
to identify this attack at the TLS protocol level. Applications can
mitigate the effect of the attack in several ways; the most important
example is webservers, which could (for example) identify any Content
(defined as "the portion of data which is separated from the headers
by the sequence \r\n\r\n, and which starts immediately after the
last \n of the separator") sent from the client to the server that
looks like the start of an HTTP request, after the header has already
been transmitted, and deny it. There is no mechanism defined in
HTML for any method where a form POST begins its data with (^POST )
or (^GET ), so any POST data from a client which contains those
strings -- along with other HTTP method strings -- is not valid, and
should be given a 400 Bad Request, preferably with a Location:
header so that the client is redirected to a place which describes
how the error was detected, what it means, and what the client should
do (change to another network) -- all within the cover of the 'true'
session between the 'true' client and server. In the preferable
implementation, it would redirect to a location which would accept
the POST data, send it to /dev/null, and then print out the
information for the client.
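That heuristic could be sketched as follows. This is a hypothetical server-side check, not part of any real webserver; the method list and status-code handling are simplified for illustration:

```python
import re

# No legitimate HTML form submission begins its body with an HTTP
# request line, so a body that does is treated as injected.
METHOD_RE = re.compile(rb"^(GET|POST|HEAD|PUT|DELETE|OPTIONS|TRACE) ")

def looks_like_injected_request(body: bytes) -> bool:
    return METHOD_RE.match(body) is not None

def handle_post(body: bytes) -> int:
    """Return the HTTP status code for a received POST body."""
    if looks_like_injected_request(body):
        # 400 Bad Request -- optionally with a Location: header
        # pointing at an explanatory page, as suggested above.
        return 400
    return 200

assert handle_post(b"GET /secret HTTP/1.1\r\nHost: x\r\n\r\n") == 400
assert handle_post(b"user=alice&pass=hunter2") == 200
```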
But I'm not an HTTP/HTML guru, and I have not evaluated the security
of this. (Seriously, I didn't think of this until I started writing
this email. But the reason for accepting POST data, then voiding it,
is to provide a mechanism for the semantics of the Location: redirect
to still function. It states that when posting to a location, if a
client receives a Location header, it should post the data to the
Location as well.)
On Mon, Jan 11, 2010 at 5:59 AM, Steffen DETTMER
<Steffen...@ingenico.com> wrote:
> Hi all!
>
> I am missing something about the renegotiation flaw and fail to
> understand why it is a flaw in TLS. I hope I am missing just a small
> piece. Could anyone please enlighten me?
>
> * Kyle Hamilton wrote on Thu, Jan 07, 2010 at 16:22 -0800:
>> It is also, though, undeniably a flaw in the TLS specification
>> that's amplified by the clients of the libraries that implement
>> it -- because the clients don't understand the concept of
>> "security veil", the TLS implementations tend to provide a raw
>> stream of bytes (akin to a read()/write() pair) without the
>> application necessarily being aware of the change.
>
> Could it be considered that a mis-assumption about SSL/TLS
> capabilities caused this situation?
Nobody thought of this attack until late 2009, so everyone mis-assumed
that the protocol was as secure as it had always been thought to be
(since 1995/1998/2001+).
> I think since TLS should be considered a layer, its payload
> should not make any assumptions about it (or vice versa). But the
> moment some application `looks at the TLS state' and tries to
> associate this information with some data in some buffer, I think
> it makes a mistake.
No, it doesn't. The reason why is inherent in authentication,
authorization, and accountability: data which is accepted from an
unauthenticated source MUST BE considered potentially hazardous as a
matter of course. (It's rather telling that Microsoft changed the
meaning of unauthenticated connections to its RPC server in Windows NT
4.0 Service Pack 3. Prior to NT4SP3, unauthenticated data was
automatically mapped into the only realm that existed that could hold
it: the Everyone group. NT4SP3 created "Unauthenticated Users", and
provided a means for "Unauthenticated Users" to be excluded from
"Everyone" -- which essentially turned "Everyone" into "Authenticated
Users" without having to change the Everyone SID on all the objects in
the system.)
Any system that uses TLS is automatically attempting to impose some
form of security on the communication (be it 'security from the
sysadmin who runs the network, without any regard for whoever is at
the other end' or 'a bank imposing its policies on the connection so
that it doesn't have its arse handed to it by the cops'). This means
that it is considered to be "trusted", so to speak, for that purpose.
In order for a system to be trusted, it must reliably be able to
identify when the event happened, who asked for the event to happen,
what was asked to happen, and whether the request was rejected,
accepted-and-errored, or accepted-and-succeeded. This means that even
if the TLS-speaking process is not necessarily part of the
host-computer's trusted computing base, it is still part of a
different trusted computing base.
This means that data which is accepted via an unauthenticated cover
cannot be later converted to an authenticated cover, *unless the state
where the data was accepted could provide adequate security for the
request, AND the state where the data was accepted is reliably
attested to by the now-authenticated entity sending the data*.
TLS (as of now, pre-Secure Renegotiation Indication RFC) satisfies the
first prong of that test, but not the second.
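The Secure Renegotiation Indication fix (RFC 5746) supplies that second prong by having a renegotiating client echo the verify_data of the previous handshake's Finished message inside a renegotiation_info extension. A heavily simplified sketch of the server-side check, with placeholder byte values standing in for real TLS PRF output:

```python
import hmac

def check_renegotiation_info(previous_client_verify_data: bytes,
                             extension_payload: bytes) -> bool:
    """Sketch of the RFC 5746 server-side rule. An initial handshake
    carries an empty payload; a renegotiation must echo the prior
    handshake's client verify_data exactly."""
    if previous_client_verify_data == b"":
        return extension_payload == b""
    return hmac.compare_digest(previous_client_verify_data,
                               extension_payload)

# The MITM's victim never saw the prior Finished message (from the
# victim's view this is an initial handshake, so it sends an empty
# payload), and the splice is detected:
assert check_renegotiation_info(b"\x12" * 12, b"") is False
# An honest renegotiating client passes:
assert check_renegotiation_info(b"\x12" * 12, b"\x12" * 12) is True
# And an honest initial handshake passes:
assert check_renegotiation_info(b"", b"") is True
```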
> When using HTTP over IPSec, I think no one ever had the idea to
> open or block URLs based on the currently used IPSec
> certificate...
I don't know the truth of what other people have thought, so I can
only hypothesize and theorize: it is possible to do, in several ways.
If the default route of the local-to-user IPsec peer goes to the
remote-to-user IPsec peer, the remote peer can place any set of
firewall policies in place based on the identity authenticated by the
IPsec certificate. (It could also do it with shared-secret systems,
based on the secret used. This becomes something like a 'group
password', though.) Microsoft ISA Server has had the ability to
accept or deny requests based on the identity attempting them ever
since it came out, as has Squid, and all of the other commercial and
noncommercial proxy server software out there.
All that is required is 802.1X and passing that authentication to
other components.
> Am I wrong when I think that those level-mixing causes the
> trouble? If a user (by browsers default configuration) first
> accepts some sillyCA or www.malicious.com but then later does not
> accept it any longer and expects the trust that initially was
> given to be taken back in retrospect and finds this failing
> and unsafe (impossible), is this really a TLS weakness?
That is not a TLS weakness, and PKI theory is out-of-scope for this
discussion. Caveat emptor -- if you mess with things you don't
understand, you're going to get bitten in ways you don't understand.
This is why Mozilla Firefox has a huge "YOU DO NOT WANT TO DO THIS"
page that shows up whenever a certificate issued by an unrecognized
authority is found. The user is ultimately responsible for his or her
own choices.
> It seems it is, so what do I miss / where is my mistake in
> thinking?
TLS is authentication-agnostic. There is no *requirement* that a
server authenticate itself; the only requirement is that the server
not ask the client for authentication if it hasn't authenticated
itself. (This is in stark contrast to IPsec, where IKE requires the
client to offer its credentials before the server ever responds.)
There are two currently-defined cryptographic credential systems for
TLS, both based on certifications of some kind. The first one is the
standard X.509 certificate which has been in place since Netscape put
Verisign's root key in Navigator, back in 1995. The second is the
OpenPGP key/certificate format.
> I also wondered a lot about the Extended Validation attack from
> last year; I had assumed that in `EV mode' a browsers tab is
> completely isolated against all others and second no other
> connectivity is possible as with the locked EV parameters, but as
> it turned out this is not the case. Everything can change but the
> green indicator remains. Strange...
Well, partly that's because it's not possible to retrofit a different
type and brand of security onto a model which didn't have it as a
primary design goal. As an example: Firefox didn't want to do EV, and
certainly didn't want to rewrite everything. The NSS team implemented
EV, and Johnathan Nightingale managed to convince the security-UI team
to put in the green bar. But the semantics of an EV certificate were
never well-defined as they relate to non-EV certificates and to EV
certificates issued to other corporations.
(Hell, there's not even any defined algorithm for checking to see
whether two X.509 Subjects are the same -- the best I've been able to
come up with is "check to see that the Distinguished Name components
match, in the same order, and that the Authority matches.")
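That ad-hoc comparison could be sketched like this, modeling each Relative Distinguished Name as a (type, value) tuple; the names and authorities below are made up for illustration:

```python
def same_subject(dn_a, dn_b, authority_a, authority_b):
    """Two Subjects are 'the same' only if their issuing Authority
    matches and their DN components match in the same order
    (order-sensitive by design, per the heuristic above)."""
    if authority_a != authority_b:
        return False
    return list(dn_a) == list(dn_b)

subject = [("C", "US"), ("O", "Example Corp"), ("CN", "example.com")]
reordered = [("O", "Example Corp"), ("C", "US"), ("CN", "example.com")]

assert same_subject(subject, subject, "Example CA", "Example CA")
# Same components in a different order do not match:
assert not same_subject(subject, reordered, "Example CA", "Example CA")
# Same DN from a different authority does not match either:
assert not same_subject(subject, subject, "Example CA", "Other CA")
```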
> Now I ask myself what happens if I connect via HTTPS and read the
> crypto information as displayed by my browser and decide to
> accept it - but after a renegotiation different algorithms are
> used. As far as I understand, I would get absolutely no notice
> about that. I could find myself suddenly using a 40 bit export
> grade or even a NULL cipher to a different peer (key) without
> any notice! If I understand correctly, even if I re-verify the
> contents of the browsers security information pane right before
> pressing a SUBMIT button, even then the data could be transferred
> with different parameters if a re-negotiation happens at the
> `right' time!
HTTP is stateless. There is essentially a new connection built for
each request (the only caveat being 'HTTP pipelining'). The Submit
button could go to a completely different server, with a completely
different security setup.
This implies that a renegotiation *is* going to happen when you hit
Submit, unless ALL of the following four things are true: The client
keeps a session cache, the server keeps a session cache, the Submit
button goes to the same server (or to a server which the client also
has a connection in its cache for), and the Submit is triggered before
either the server or the client clears the session from its cache.
If ANY of those four things is not true, a full renegotiation occurs.
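The four conditions fold into one predicate (a trivial sketch, with names invented here for illustration):

```python
def full_handshake_needed(client_has_cache: bool,
                          server_has_cache: bool,
                          submit_goes_to_cached_server: bool,
                          session_still_cached_on_both: bool) -> bool:
    """The session is resumed only if every condition holds;
    otherwise a full handshake (with fresh negotiation) occurs."""
    resumable = (client_has_cache and server_has_cache and
                 submit_goes_to_cached_server and
                 session_still_cached_on_both)
    return not resumable

assert full_handshake_needed(True, True, True, True) is False
# Any single failing condition forces a full handshake:
assert full_handshake_needed(True, True, False, True) is True
assert full_handshake_needed(False, True, True, True) is True
```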
This is why NULL ciphers are required to be off by default, and I
don't know any browser vendor which still ships anything that accepts
40- or 56-bit keys by default either. In Firefox, you can disable the
negotiation of lower-strength ciphers through about:config.
> If this is true, it means the information Firefox shows
> when clicking the lock icon does not tell anything about the
> data I will send; at most it can tell about the past, how the
> page was loaded, but not reliably, because maybe it changed for
> the last part of the page.
This is correct, as far as it goes. There is an implicit contract
between the browser and the server: the server will not send HTML
scripting data (and if you don't think "show this form on the page,
and then submit what the user responds" is scripting, then realize
that javascript can change the behavior of *anything* in the page that
it has access to within the Document Object Model) which will cause
the browser to behave maliciously, and the browser will not interpret
the data in a malicious manner. There's an implicit contract between
the user and the browser (and the computer the browser is running on):
the browser will not behave maliciously, and the user can rely on its
presentation.
Any one of those contracts can be violated without any side other than
the violating one knowing -- for example, Firefox extensions can
silently change the content of pages, and if the user installs a
malicious one...
> Where is my mistake in thinking?
>
> oki,
>
> Steffen
Please also see Dave Schwartz's excellent response.
-Kyle H