
[Suggestion] Allow JavaScript to get SSL certificates chain params


Alex Syrnikov

Feb 19, 2012, 12:16:31 AM
to mozilla-dev-s...@lists.mozilla.org
Hi, All.

I have a suggestion for the Mozilla developers to improve SSL CA security.

I think that if JavaScript were able to read the SSL certificate
parameters, it would be possible to write a script that gets those
parameters, sends them to the original server, and gets back a result
(is this the certificate the server actually sent, or was it replaced
by a third party?). In case of a man-in-the-middle attack, the
JavaScript would show a warning to the user (the SSL session is being
listened to by somebody), and the server-side code would inform the
webmaster that the client at some IP received a forged certificate
chain. The client would then be able to tell their provider what they
think about their SSL traffic being intercepted, and the webmaster
would never again use SSL certificates from that CA.

Is it possible to modify JavaScript in Firefox this way?

Jan Schejbal

Feb 19, 2012, 8:56:15 PM
to mozilla-dev-s...@lists.mozilla.org
On 2012-02-19 06:16, Alex Syrnikov wrote:
> In case of a man-in-the-middle attack, the JavaScript would
> show a warning

In case of Man in the Middle attacks, the attacker can remove or change
the JavaScript, too. Of course, the attacker may have trouble finding
the JS, but this is security-by-obscurity and rarely effective.

Kind regards,
Jan

--
Please avoid sending mails, use the group instead.
If you really need to send me an e-mail, mention "FROM NG"
in the subject line, otherwise my spam filter will delete your mail.
Sorry for the inconvenience, thank the spammers...


Tom Ritter

Feb 20, 2012, 9:39:14 AM
to mozilla-dev-s...@lists.mozilla.org
On Feb 19, 8:56 pm, Jan Schejbal <jan.schejbal_n...@gmx.de> wrote:
> In case of Man in the Middle attacks, the attacker can remove or change
> the JavaScript, too. Of course, the attacker may have trouble finding
> the JS, but this is security-by-obscurity and rarely effective.

I challenge "rarely". Go look at the JavaScript used in Gmail, and
tell me if you think you'd be able to find the check of the SSL
certificate.

Alex - I've been arguing for this property for some time now [0].
Exposing SSL certificate information to the browser allows two things:

1) Defensive JavaScript practices for sites. Yes, theoretically,
someone doing an active attack on an SSL connection could rewrite the
JavaScript. And if that JavaScript were 'standard', say a library,
then it wouldn't be theoretical; it'd be a switch on a command-line
tool or intercept box. But obfuscated JavaScript would have a decent
chance of slipping by. And the sites with an interest in this:
Facebook, Google, Twitter, all have the engineering resources to
create a custom obfuscator that can change with each request, or each
day. Let them at least *try*.

2) I'm not sure how extensions in Firefox retrieve SSL cert
information. But I do know how extensions in Chrome do it: they
don't. It's not exposed. Exposing this information via JavaScript
properties allows JavaScript-only extensions, like GreaseMonkey and
ChromeScripts, to get in on the defensive game and help users.

I don't see what there is to *lose* by exposing them as JavaScript
properties. Maybe some misguided developer thinks they've made their
site secure against a MITM by adding a cert check in JavaScript.
But they haven't made it *less* secure. And with custom solutions all
around, if 99 are blocked and just one gets through and tips off
someone being middled, we can get value out of that. Remember that a
single Chrome user in Iran blew open DigiNotar because Gmail had
pinned keys, and the MITM didn't handle that case.

-tom

[0] http://www.ietf.org/mail-archive/web/websec/current/msg00517.html

Jan Schejbal

Feb 20, 2012, 6:40:32 PM
to mozilla-dev-s...@lists.mozilla.org
On 2012-02-20 15:39, Tom Ritter wrote:
> 2) I'm not sure how extensions in Firefox retrieve SSL cert
> information. But I do know how extensions in Chrome do it: they
> don't. It's not exposed. Exposing this information via JavaScript
> properties allows JavaScript-only extensions, like GreaseMonkey and
> ChromeScripts, to get in on the defensive game and help
> users.

This is a valid point; however, aren't there interfaces available only
to extensions? Those should certainly include it.

> I don't see what there is to *lose* by exposing them as JavaScript
> properties.

I am a bit worried about possible privacy implications, but this is just
a vague feeling.

Peter Kurrasch

Feb 20, 2012, 8:53:24 PM
to mozilla-dev-s...@lists.mozilla.org
Yes! There are add-ons, and I can highly recommend the following:
CipherFox
CertWatch
Certificate Patrol

You might also consider:
Cert Viewer Plus
Conspiracy
ShareMeNot

Good stuff. And I would agree that cert chain access should be available
in some manner, but I think there are some heavy cost/benefit
considerations to be made.

Jean-Marc Desperrier

Feb 22, 2012, 6:57:44 AM
to mozilla-dev-s...@lists.mozilla.org
Jan Schejbal wrote:
> On 2012-02-20 15:39, Tom Ritter wrote:
>> 2) I'm not sure how extensions in Firefox retrieve SSL cert
>> information. [...]
> This is a valid point; however, aren't there interfaces available only
> to extensions? Those should certainly include it.

In the case of Certificate Patrol, the extension is 100% JavaScript and
the info is fully available to JavaScript.
Actually, the last time I looked there was a weak point in CP: it could
have had access to the whole chain, but didn't use it.

Jan Schejbal

Feb 22, 2012, 10:01:53 AM
to mozilla-dev-s...@lists.mozilla.org
On 2012-02-22 12:57, Jean-Marc Desperrier wrote:
>
> Actually, the last time I looked there was a weak point in CP: it could
> have had access to the whole chain, but didn't use it.

The current Certificate Patrol does display the full chain.

Michael

Feb 28, 2012, 12:36:36 PM
to mozilla-dev-s...@lists.mozilla.org
For the benefit of people who find this thread in the future, the
relevant extension coding techniques in Mozilla are:

* Use the observer service to register an "http-on-examine-response"
observer.
* When you observe this event, use
channel.QueryInterface(Ci.nsIHttpChannel) to get the HTTP channel from
it.
* Pull the securityInfo attribute of that channel.
* Use QueryInterface(Ci.nsISSLStatusProvider) on the securityInfo to
access its SSL data.
* Pull the SSLStatus attribute of the securityInfo.
* Use QueryInterface(Ci.nsISSLStatus) on the SSLStatus to access the
details of the connection. The serverCert attribute contains the
server certificate.
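
Put together, those steps look roughly like this (a minimal sketch for
the chrome-privileged XPCOM extension environment of that era; error
handling omitted):

// Minimal sketch: log the server certificate of every HTTPS response.
const Cc = Components.classes;
const Ci = Components.interfaces;

const observer = {
  observe: function (subject, topic, data) {
    if (topic !== "http-on-examine-response")
      return;
    const channel = subject.QueryInterface(Ci.nsIHttpChannel);
    const secInfo = channel.securityInfo;
    if (!secInfo)
      return; // plain HTTP response, nothing to inspect
    const status = secInfo.QueryInterface(Ci.nsISSLStatusProvider).SSLStatus;
    if (!status)
      return;
    const cert = status.QueryInterface(Ci.nsISSLStatus).serverCert;
    dump(channel.URI.host + ": " + cert.subjectName +
         " (issuer: " + cert.issuerName + ")\n");
  }
};

Cc["@mozilla.org/observer-service;1"]
  .getService(Ci.nsIObserverService)
  .addObserver(observer, "http-on-examine-response", false);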

CertPatrol contains JavaScript code which does all these steps, as
well as much more to parse the certificate and display the chain.
https://addons.mozilla.org/en-US/firefox/addon/certificate-patrol/

Sample code on MDN:
https://developer.mozilla.org/En/How_to_check_the_security_state_of_an_XMLHTTPRequest_over_SSL

The docs for these interfaces are somewhat thin, but here's the source
references:
http://mxr.mozilla.org/mozilla-central/source/security/manager/boot/public/nsISSLStatusProvider.idl
http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/public/nsISSLStatus.idl
http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/public/nsIX509Cert.idl

Jean-Marc Desperrier

Feb 29, 2012, 8:29:04 AM
to mozilla-dev-s...@lists.mozilla.org
Michael wrote:
> CertPatrol contains JavaScript code which does all these steps, as
> well as much more to parse the certificate and display the chain.
> https://addons.mozilla.org/en-US/firefox/addon/certificate-patrol/

And the source is available online here:
https://addons.mozilla.org/fr/firefox/files/browse/134629/
I find it a bit unfortunate that it doesn't demonstrate that the
getChain() method of nsIX509Cert allows getting the full chain up to
the root:
http://hg.mozilla.org/mozilla-central/annotate/7d7179d2d809/security/manager/ssl/public/nsIX509Cert.idl#l204

If you are worried you may have been MITMed by an intermediate CA,
getting at the full chain is very important (it's perfectly possible for
the intermediate CA to generate further intermediate certs that make it
very much look like the certificate that directly issued your cert was
the correct one, unless you check the complete list).
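
For illustration, walking that chain from extension code might look like
this (a sketch; getChain() returns an nsIArray of nsIX509Cert, and "cert"
is assumed to be the serverCert obtained as described earlier in the
thread):

// Sketch: enumerate every certificate in the chain reported by getChain().
const chain = cert.getChain();
for (let i = 0; i < chain.length; i++) {
  const link = chain.queryElementAt(i, Ci.nsIX509Cert);
  dump(i + ": " + link.subjectName +
       " (SHA1: " + link.sha1Fingerprint + ")\n");
}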

Brian Smith

Mar 1, 2012, 2:11:05 AM
to Jean-Marc Desperrier, mozilla-dev-s...@lists.mozilla.org
Jean-Marc Desperrier wrote:
> I find it a bit unfortunate that it doesn't demonstrate that the
> getChain() method of nsIX509Cert allows getting the full chain up to
> the root.

The getChain() method doesn't use PKIX path building rules and so it won't always give correct results if/when we switch to libpkix. From what I can tell, this is something that isn't particularly easy to fix.

Also, depending on what changes we make to our cert validation, we may not always have X.509 certificates to return as the chain.

Consequently the getChain() method may soon give wrong results and/or we may just remove it to avoid it giving wrong results.

Cheers,
Brian

Jan Schejbal

Mar 1, 2012, 11:00:11 AM
to mozilla-dev-s...@lists.mozilla.org
On 2012-02-20 02:56, Jan Schejbal wrote:
> In case of Man in the Middle attacks, the attacker can remove or change
> the JavaScript, too. Of course, the attacker may have trouble finding
> the JS, but this is security-by-obscurity and rarely effective.

After thinking about this a bit, I am starting to like the idea. While
it will not *prevent* targeted MitM attacks, it could probably be used
to *detect* the certificates being used in broad attacks (check the
fingerprint, and if it doesn't match, POST the chain to the server).

This would mean that websites would have a realistic chance to obtain
certificates used to perform MitM attacks on them, making it very likely
that any CA issuing MitM certs will get caught.
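
The reporting half of that is ordinary JavaScript already; only reading
the served chain needs the new API. A rough sketch, where both
getReceivedCertInfo() and the /mitm-report endpoint are invented
placeholders:

// Hypothetical sketch: EXPECTED_FP is embedded by the server at render
// time; getReceivedCertInfo() stands in for the proposed, non-existent API.
var EXPECTED_FP = "AB:CD:EF:..."; // placeholder fingerprint
var info = getReceivedCertInfo(); // hypothetical browser API
if (info.fingerprint !== EXPECTED_FP) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/mitm-report", true); // invented endpoint
  xhr.setRequestHeader("Content-Type", "text/plain");
  xhr.send(info.chainPem); // let the site's staff review the served chain
}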

Unfortunately, on the other hand, this is also a perfect example of
privacy implications even if the attribute was accessible only under a
strict same-origin policy: The MitM chain can contain e.g. company names
and locations.

Jean-Marc Desperrier

Mar 1, 2012, 11:26:25 AM
to mozilla-dev-s...@lists.mozilla.org
Brian Smith wrote:
> Jean-Marc Desperrier wrote:
>> I find it a bit unfortunate that it doesn't demonstrate that the
>> getChain() method of nsIX509Cert allows getting the full chain up to
>> the root.
>
> The getChain() method doesn't use PKIX path building rules and so it
> won't always give correct results if/when we switch to libpkix. From
> what I can tell, this is something that isn't particularly easy to fix.

We're slipping into a discussion that might be more appropriate in the
crypto list, if you wish to redirect it.

But anyway: I *don't* actually care about that, as long as the path
returned is the one used by Firefox to decide if the cert was valid or
not. Are you telling me the way it's calculated is not actually from the
result of the cert validation? Ugh, that hurts if that's the case.

> Also, depending on what changes we make to our cert validation, we
> may not always have X.509 certificates to return as the chain.
>
> Consequently the getChain() method may soon give wrong results
> and/or we may just remove it to avoid it giving wrong results.

The only result that would be wrong for me is the one that doesn't match
what Firefox/NSS used to decide if the cert is valid.

If the way Firefox/NSS calculates that path doesn't always conform to
X.509/PKIX, then I could see that as a conformity bug in Firefox/NSS
that'd be worth fixing, but certainly not as a reason to hide the chain.
And actually, I could imagine a situation where that very error would
make me want to access the chain, to be able to check whether NSS
accepted it for the right reason or not.

OTOH, I can understand if the property disappears for
performance/optimization reasons, or if getChain() is somewhat lying to
me and not actually showing me the path coming from the certificate
validation.

But if you want to keep the current functionality in Firefox of being
able to visualize the certificate hierarchy from the page info, as well
as from the certificate manager, you'll need some alternate way of
accessing the result anyway.
And I don't see that part disappearing; I don't see Firefox moving to
making the issuing CA information completely unavailable, especially
after what happened lately with several CAs.

Jean-Marc Desperrier

Mar 1, 2012, 11:40:11 AM
to mozilla-dev-s...@lists.mozilla.org
Jan Schejbal wrote:
> Unfortunately, on the other hand, this is also a perfect example of
> privacy implications even if the attribute was accessible only under a
> strict same-origin policy: The MitM chain can contain e.g. company names
> and locations.

Of the server. Where the JS is expected to come from.

Privacy implications arise if the JS is not really from the server the
page comes from; this really sounds like the domain of Content Security
Policy. I don't know if CSP could make the attribute disabled by default,
and only enabled if a proper CSP rule says it should be.

Jan Schejbal

Mar 1, 2012, 7:16:32 PM
to mozilla-dev-s...@lists.mozilla.org
On 2012-03-01 17:40, Jean-Marc Desperrier wrote:
>
> Of the server. Where the JS is expected to come from.

Visitor visits a page containing the JS. His company MitMs the
connection, using a cert issued by a non-public company CA, to inspect
the page. The JS detects the MitM and uploads the cert chain containing
the internal company-CA cert. This certificate contains the name of the
company and the city where Visitor works.

Of course, "MitM is not supposed to happen so the company is responsible
if this happens" is an understandable reaction to this scenario.

Brian Smith

Mar 1, 2012, 9:57:29 PM
to Jean-Marc Desperrier, mozilla-dev-s...@lists.mozilla.org
Jean-Marc Desperrier wrote:
> Brian Smith wrote:
> But anyway: I *don't* actually care about that, as long as the path
> returned is the one used by Firefox to decide if the cert was valid
> or not. Are you telling me the way it's calculated is not actually
> from the result of the cert validation? Ugh, that hurts if that's
> the case.

It is calculated using the classic (non-PKIX) cert chain building algorithm. When we switch to libpkix and/or other validation, getChain will not necessarily match. Depending on what prefs you have set, the libpkix-based chain building requires an async interface, so we won't be able to implement getChain() in a way that matches the libpkix semantics. Plus, doing the validation and then calling getChain() is racy, at least in theory (even now, before we start using libpkix).

> But if you want to keep the current functionality in Firefox of being
> able to visualize the certificate hierarchy from the page info, as
> well as from the certificate manager, you'll need some alternate way
> of accessing the result anyway.
> And I don't see that part disappearing; I don't see Firefox moving to
> making the issuing CA information completely unavailable, especially
> after what happened lately with several CAs.

Instead, we will probably have to introduce a new API that allows extensions to get the chain built during validation from the nsISSLStatus interface. I am not sure if there will be a time between the time we switch to libpkix and the time when extensions will be able to get the chain that was constructed during validation.

Cheers,
Brian

richar...@googlemail.com

May 19, 2013, 11:11:51 PM
to
A year later... is there any update on this?

Here's my take - as the operator of a small website which deals with personal data, I'd like to at least be able to warn my users when they are unwittingly being MITMed by an HTTPS proxy (e.g. a work computer).

My approach would be:

1. From the server, embed the true SHA1 fingerprint in the page body.
2. In the browser, use JS to get the SHA1 fingerprint that the browser received (this is the value the user can see by clicking the padlock icon).
3. Compare them, and if not equal, warn.
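
In code, the check might look like this (a hypothetical sketch:
window.sslFingerprint is an invented name for the JS function being
requested here; no such API exists in any browser today):

// Hypothetical: EXPECTED is the true SHA1 fingerprint embedded
// server-side in step 1; window.sslFingerprint() is the invented API
// standing in for step 2.
var EXPECTED = "AA:BB:CC:..."; // placeholder value
var seen = window.sslFingerprint && window.sslFingerprint();
if (seen && seen !== EXPECTED) {
  alert("Warning: your connection to this site appears to be " +
        "intercepted by an HTTPS proxy.");
}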

Now I know that a really well implemented evil proxy would be able to edit my page on the fly and spoof #1. But the chances are that if I write my own JS code, most standard corporate proxies won't actually do it. So there's a 99.99% chance I can warn my users.

However, for this to work, we need a JS function that gets some of the SSL certificate info (just the SHA1 fingerprint would do), but this needs to be in standard Firefox, rather than in an extension. If I want to seamlessly protect my less technically aware users, I can't ask them to install a browser extension! This function should ideally get into the core JavaScript definition so that all browsers can use it.

What do you think? Thanks very much,

Richard

P.S. Having searched extensively for how to do this, there seem to be quite a few people wanting such a feature. Also, there is a good explanation for end-users here: https://www.grc.com/fingerprints.htm

Gervase Markham

May 20, 2013, 8:25:33 AM
to mozilla-dev-s...@lists.mozilla.org
On 20/05/13 04:11, richar...@googlemail.com wrote:
> A year later... is there any update on this?

Is there a bug filed? If so, its status will answer your question. If
not, then nothing gets done without a bug, so it might be a good idea to
file one :-)

Gerv

Brian Smith

May 23, 2013, 3:29:34 AM
to richar...@googlemail.com, dev-secur...@lists.mozilla.org
richar...@googlemail.com wrote:
> Here's my take - as the operator of a small website which deals with
> personal data, I'd like to at least be able to warn my users when
> they are unwittingly being MITMed by an HTTPS proxy (e.g. a work
> computer).

dev-tech-crypto is a better forum for this discussion, IMO. You are more likely to get a response from people who do the coding work on Firefox and NSS in one of those forums.

> Now I know that a really well implemented evil proxy would be able to
> edit my page on the fly and spoof #1. But the chances are that if I
> write my own JS code, most standard corporate proxies won't actually
> do it. So there's a 99.99% chance I can warn my users.

I personally don't see a feature that allows evil proxies to MITM the user, while forbidding usually-helpful (or at least benign) proxies from functioning correctly, to be worth spending resources on. In particular, I am not sure it is reasonable to implement a feature that will cause NetNanny, Kaspersky Internet Security, and other security software on the local computer to stop working.

Further, the law of unintended consequences would be in full effect here: If a site could obtain the SHA1 hash of the EE certificate's public key, and if such security software were to generate a random keypair for each user, then this certificate public key hashing API would be a vector for tracking internet users; it would be akin to an undeletable, cryptographically-secure tracking cookie.

I *do* think it would be valuable to give users more control and understanding of how MITM proxies are affecting the privacy of their web browsing, though I'm unsure of what a usable way of doing that is. Right now, a pretty good solution for it would be to do your private/personal browsing using Firefox for Android, using your mobile carrier's network (not your employer's wifi), and not any desktop machine owned/operated/maintained by your employer. However, I suspect that there will be pressure on us to add the features to Firefox for Android that (unintentionally) enable the type of snooping by employers that you are trying to prevent.

Does your site use an Extended Validation (EV) certificate? This is one of the problems that EV is purported to solve, because MITM certs are never shown as EV. Recently, I've been thinking of whether or not Firefox's EV indicator is actually valuable to end users, and I am curious to hear your thoughts about why EV would or wouldn't work for solving this problem for your site.

Cheers,
Brian

Richard Neill

May 23, 2013, 9:29:01 AM
to Brian Smith, dev-secur...@lists.mozilla.org
On Thu, May 23, 2013 at 8:29 AM, Brian Smith <bsm...@mozilla.com> wrote:

> richar...@googlemail.com wrote:
> > Here's my take - as the operator of a small website which deals with
> > personal data, I'd like to at least be able to warn my users when
> > they are unwittingly being MITMed by an HTTPS proxy (e.g. a work
> > computer).
>
> dev-tech-crypto is a better forum for this discussion, IMO. You are more
> likely to get a response from people who do the coding work on Firefox and
> NSS in one those forums.
>

That's probably correct - though this was the first discussion that
Google turned up.

>
> > Now I know that a really well implemented evil proxy would be able to
> > edit my page on the fly and spoof #1. But the chances are that if I
> > write my own JS code, most standard corporate proxies won't actually
> > do it. So there's a 99.99% chance I can warn my users.
>
> I personally don't see a feature that allows evil proxies to MITM the
> user, while forbidding usually-helpful (or at least benign) proxies from
> functioning correctly, to be worth spending resources on. In particular, I am
> not sure it is reasonable to implement a feature that will cause NetNanny,
> Kaspersky Internet Security, and other security software on the local
> computer to stop working.
>

Well, "benign" proxies aren't benign to the person being monitored,
especially if they don't know that it's happening - and while I wouldn't be
able to break the proxy's functionality, I would at least be able to notify
the user of its existence. Also, though this feature wouldn't defend
against really hardcore evil proxies, it would be pretty easy to defend
against most of them most of the time. For an evil proxy to suppress
notifications, it would need to be able to parse arbitrary JavaScript
functions and modify them before sending the JS to the browser. Provided
each site's admin writes their own function, it would break the majority of
the evil proxies often enough that users would discover them.


>
> Further, the law of unintended consequences would be in full effect here:
> If a site could obtain the SHA1 hash of the EE certificate's public key,
> and if such security software were to generate a random keypair for each
> user, then this certificate public key hashing API would be a vector for
> tracking internet users; it would be akin to an undeletable,
> cryptographically-secure tracking cookie.
>

Interesting idea. But the site (except when proxied) already knows its own
SHA1 fingerprint. All we could get would be the proxy's key... which might
be slightly useful to an attacker, but it only identifies the organisation,
and most businesses with SSL appliances use a static gateway IP anyway.
Also, I already have login credentials for the users!


>
> I *do* think it would be valuable to give users more control and
> understanding of how MITM proxies are affecting the privacy of their web
> browsing, though I'm unsure of what a usable way of doing that is. Right
> now, a pretty good solution for it would be to do your private/personal
> browsing using Firefox for Android, using your mobile carrier's network
> (not your employer's wifi), and not any desktop machine
> owned/operated/maintained by your employer. However, I suspect that there
> will be pressure on us to add the features to Firefox for Android that
> (unintentionally) enable the type of snooping by employers that you are
> trying to prevent.
>

Yes, I know that. But most users don't - and I'm trying to protect them.


>
> Does your site use an Extended Validation (EV) certificate? This is one of
> the problems that EV is purported to solve, because MITM certs are never
> shown as EV. Recently, I've been thinking of whether or not Firefox's EV
> indicator is actually valuable to end users, and I am curious to hear your
> thoughts about why EV would or wouldn't work for solving this problem for
> your site.
>

For me, it's partly about cost (we're volunteers, and we can't really
justify the expense for "official" validation). Also, most users don't
notice the security warnings the browser gives them.

Best wishes,

Richard

Peter Kurrasch

May 28, 2013, 2:32:21 PM
to dev-secur...@lists.mozilla.org
Without getting into the merits of these ideas, I would just mention
that using hashes in any form is not really a viable approach to
validating certs.

On the one hand it is a quick and easy way to verify you are looking at
the right cert. On the other hand, as soon as someone changes a piece
of data in the cert--thereby changing the hash outcome--the whole
process falls apart.

For example, if I have an intermediate cert which has an updated
expiration date, the thumbprint/hash/whatever of it is now different.
If my code only expects the old hash value, I have a problem. If, as a
consequence, I start reporting errors or blocking connections, that can
lead to real trouble--and it can be difficult to diagnose, too!

So, just be careful.
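
One common way to soften that failure mode is to pin a set of acceptable
fingerprints and treat a mismatch as a signal to report rather than to
hard-fail. A sketch (placeholder values and names throughout):

// Sketch: accept several fingerprints so a planned reissue or an updated
// intermediate doesn't trigger false alarms or block connections.
var PINNED = [
  "AA:BB:CC:...", // current cert (placeholder)
  "DD:EE:FF:..."  // replacement cert, added before the rollover
];
function reportSuspectCert(fp) {
  // placeholder: in practice, POST the details somewhere for human review
  console.log("unexpected certificate fingerprint: " + fp);
}
function checkFingerprint(fp) {
  if (PINNED.indexOf(fp) === -1)
    reportSuspectCert(fp);
}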


Previously...
>>> Here's my take - as the operator of a small website which deals with
>>> personal data, I'd like to at least be able to warn my users when
>>> they are unwittingly being MITMed by an HTTPS proxy (e.g. a work
>>> computer).
>>> Now I know that a really well implemented evil proxy would be able to
>>> edit my page on the fly and spoof #1. But the chances are that if I
>>> write my own JS code, most standard corporate proxies won't actually
>>> do it. So there's a 99.99% chance I can warn my users.

al...@equenext.com

Jun 5, 2013, 2:20:48 PM
to

If the code only expects the old hash value but gets a new one, the whole certificate could be uploaded to the server and reviewed by security staff.

It's likely that it would be some kind of mistake. But if it's not, even one such illegitimate certificate in the hands of a site owner (who knows it isn't theirs) is a very powerful thing, because it contains enough proof that somebody in the trust chain is not trustworthy.

I'd say it's very much worth it.

As for unintended consequences: yes, it would be hard, but possible, to track when a user first browsed a site, as long as that site is able to run JavaScript. It would probably require a patch to the OpenSSL library. But that kind of tracking is already possible: https://panopticlick.eff.org/