--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CAPP_2SZ0OAwc7ou2mmCk3A21Jp2xrt0pQ83Bcc9XBU0jYEuwZw%40mail.gmail.com.
Excited to see this happen, especially given the stats listed at the top of the doc.
On Wed, Sep 21, 2016 at 1:33 AM Emily Stark <est...@chromium.org> wrote:
Following some discussion with rsleevi@ and eroman@, I'd like to add intermediate fetching during Chrome's certificate verification on Android. From Safe Browsing Extended Reporting data, we estimate that a significant percentage of certificate errors are due to servers serving chains that omit the necessary intermediates. (Android does not fetch intermediates during certificate verification.)

I wrote up a doc explaining more about the problem and motivation, why we'd like to do the intermediate fetching from within Chrome on Android, and an attempt at a plan for implementing it: https://docs.google.com/document/d/1ryqFMSHHRDERg1jm3LeVt7VMfxtXXrI8p49gmtniNP0/edit#

As usual, all sorts of feedback is welcome and much appreciated. Thanks!

Emily
Hey Ryan, I'll continue the discussion here on the thread since I find doc comment boxes tend to get a little constraining.

I was initially thinking about putting this logic in CertVerifyProc, and I talked with Eric a little bit about that off-list before I wrote up this proposal. He made an argument, which I found convincing, that worker threads shouldn't block on URLRequests happening on the IO thread. However, I hadn't seen nss_ocsp before (which you pointed to in the doc), and didn't realize that it does exactly that! I'm guessing maybe he hadn't seen that code either? Eric, do you feel differently about pursuing an approach where CertVerifyProc kicks off the AIA fetches on the IO thread, given that we already do basically the same thing for HTTP requests that NSS needs?
(All that said, even if we agree that CertVerifyProc can kick off and wait on the fetches on the IO thread, I haven't sketched out that approach in detail yet so it might still turn out to be very hairy for other reasons, in which case I would explore the idea of composing an AIAChasingCertVerifier.)
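For readers less familiar with the threading pattern under discussion, here is a conceptual sketch (pure illustration in Python, not Chromium's actual C++; all names are invented): a certificate-verification worker thread posts a fetch task to an "IO thread" message loop and then blocks until the fetch completes, much as nss_ocsp already does to satisfy NSS's HTTP requests.

```python
# Conceptual sketch of a worker thread blocking on a fetch performed by a
# separate "IO thread" loop. This is an illustration of the pattern only;
# Chromium's CertVerifyProc / URLRequest machinery works differently in detail.
import queue
import threading

io_tasks = queue.Queue()

def io_thread_loop():
    # Stand-in for the IO thread's message loop that runs URLRequests.
    while True:
        url, result, done = io_tasks.get()
        result.append(f"certs-from:{url}")  # pretend this was a real AIA fetch
        done.set()

threading.Thread(target=io_thread_loop, daemon=True).start()

def blocking_aia_fetch(url):
    # Called on a verification worker thread: post the fetch to the IO
    # thread, then block on its completion. This blocking is exactly the
    # downside raised in the thread.
    result, done = [], threading.Event()
    io_tasks.put((url, result, done))
    done.wait()
    return result[0]

print(blocking_aia_fetch("http://ca.example.test/issuer.crt"))
# prints "certs-from:http://ca.example.test/issuer.crt"
```

The sketch makes the trade-off concrete: the worker thread is parked for the full duration of a network round trip, which is why the alternative of composing a separate AIA-chasing verifier was considered.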
On Wed, Sep 21, 2016 at 5:35 PM, Emily Stark <est...@chromium.org> wrote:
> Hey Ryan, I'll continue the discussion here on the thread since I find doc comment boxes tend to get a little constraining.
>
> I was initially thinking about putting this logic in CertVerifyProc, and I talked with Eric a little bit about that off-list before I wrote up this proposal. He made an argument, which I found convincing, that worker threads shouldn't block on URLRequests happening on the IO thread. However, I hadn't seen nss_ocsp before (which you pointed to in the doc), and didn't realize that it does exactly that! I'm guessing maybe he hadn't seen that code either? Eric, do you feel differently about pursuing an approach where CertVerifyProc kicks off the AIA fetches on the IO thread given that we already do basically the same thing for HTTP requests that NSS needs?

Right, my uncertainty around your design was because I wasn't sure if you'd considered this and intentionally ruled it out or just weren't familiar with it (since it is subtle).

There's another catch to the NSS design, which, while primarily driven by NSS requirements, might have been a reason for you to design the way you did. NSS sets a global HTTP context for satisfying requests, which means that we can't associate the HTTP client with each CertVerifyProc. As a consequence, we need a 'global' URLRequestContext that's available, which means the SystemURLRequestContext (in effect). The implications of this for Linux/ChromeOS are that any user-defined proxy settings (such as via extensions or cloud policies) are left on the user's URLRequestContext, and don't propagate to the SystemURLRequestContext (which is user-agnostic), which means that some AIA fetches fail.

So, these are two downsides (blocking worker threads, using the system URL request context), but it's unclear whether they're big enough to warrant breaking the CertVerifier threading abstraction.
They may be, but my gut suggests that they're reasonable tradeoffs (that is, we're happy with them for Linux/ChromeOS for now).
Just want to check that I understand: for the system URLRequestContext issue, it seems like that's imposed by the NSS API, but in implementing AIA fetching for Android, we could use the profile URLRequestContext if we so desire. Is that right, or am I misunderstanding?
Emily Stark <est...@chromium.org> wrote:
> Following some discussion with rsleevi@ and eroman@, I'd like to add intermediate fetching during Chrome's certificate verification on Android. From Safe Browsing Extended Reporting data, we estimate that a significant percentage of certificate errors are due to servers serving chains that omit the necessary intermediates. (Android does not fetch intermediates during certificate verification.)
>
> I wrote up a doc explaining more about the problem and motivation, why we'd like to do the intermediate fetching from within Chrome on Android, and an attempt at a plan for implementing it: https://docs.google.com/document/d/1ryqFMSHHRDERg1jm3LeVt7VMfxtXXrI8p49gmtniNP0/edit#
>
> As usual, all sorts of feedback is welcome and much appreciated. Thanks!
Could you also share the documentation of the alternatives to AIA fetching you explored and why they are unacceptable? Also, what are the reasons that the Android teams have given for avoiding AIA fetching for so long?
I think everybody agrees that false positive certificate errors are bad and that it is worth spending effort to avoid them.

As is typical in any feature which exists to wallpaper over a problem instead of solving it, the AIA fetching mechanism has several downsides as well. In particular, such fallback mechanisms actually encourage people to leave their servers misconfigured since the misconfiguration will be "magically" fixed by the browsers. Thus, realistically, we should expect that this change will increase the number of (mobile) websites that are misconfigured. Then we'd be in a vicious spiral where even more implementations would feel compelled to implement AIA fetching to be compatible with Chrome for Android, which would encourage even more websites to be misconfigured, ad infinitum. For this reason, I think it is worth trying to find an alternative that has positive short-term effects for Chrome for Android *and* positive long-term effects on the web as a whole.
In the document, you write "Using data from Chrome’s Safe Browsing Extended Reporting program, we estimate that server chains with incorrect or missing intermediates account for >10% of all certificate validation errors in Chrome, and >30% of all certificate validation errors that occur in Chrome for Android. About 90% of the errors caused by missing or misconfigured intermediates occur on Android."

Could you explain how these numbers 10%, 30%, and 90% are calculated? Are the 10%/30%/90% numbers indicative of the number of times that users see the certificate error page, or are they indicative of the number of pageloads with such certificate errors, or are they indicative of the number of TLS connections with such certificate errors, or HTTP requests on TLS connections with such certificate errors, or something else?
In particular, do you expect that this change will account for a 10%/30% reduction in certificate error pages seen by users? And, if this change isn't expected to improve things by the full 10%/30%, then what is expected improvement?
Hi Brian, thanks for your email. I hadn't heard some of the arguments you make before and appreciate hearing this different perspective. In addition to the replies Ryan's making about the core disagreement, I want to answer some of your specific questions -- answers inline.

On Thu, Sep 22, 2016 at 5:04 PM, Brian Smith <br...@briansmith.org> wrote:
> Could you also share the documentation of the alternatives to AIA fetching you explored and why they are unacceptable? Also, what are the reasons that the Android teams have given for avoiding AIA fetching for so long?

I'm hesitant to speak for another team and it's a bit difficult to answer this without doing so. I guess one thing to point out is that I haven't personally heard the arguments you make below as one of the reasons that Android doesn't do AIA fetching. To me, the important thing is the slow release/update cycle I mentioned in the doc. Even if Android goes all-in on AIA fetching tomorrow, we'd probably still want it in Chrome until the Android implementation reaches enough users.

Independent of Android, we've talked about doing outreach to site owners: contact the top N sites that have misconfigured intermediates, do a notification in Webmaster Tools, something in DevTools, etc. I think those are things worth exploring in parallel.

> Could you explain how these numbers 10%, 30%, and 90% are calculated? Are the 10%/30%/90% numbers indicative of the number of times that users see the certificate error page, or are they indicative of the number of pageloads with such certificate errors, or are they indicative of the number of TLS connections with such certificate errors, or HTTP requests on TLS connections with such certificate errors, or something else?

Chrome sends a report every time an opted-in user sees the certificate error page, and the 10%/30%/90% is the percentage of those reports that we estimate are due to misconfigured intermediates.
Brian Smith <br...@briansmith.org> wrote:
> As is typical in any feature which exists to wallpaper over a problem instead of solving it, the AIA fetching mechanism has several downsides as well.

Hi Brian,

I think I'd have to disagree with this characterization of AIA. I realize we may simply disagree. AIA fetching represents an important aspect of PKI mobility and changes, much like root autoupdates do. While you present it as "papering over a problem", it's equally fair (and perhaps more accurate) to highlight that it allows for PKI transitions to happen independently, without centralized coordination and management.
> For this reason, I think it is worth trying to find an alternative that has positive short-term effects for Chrome for Android *and* positive long-term effects on the web as a whole.

As stated above, some of us believe that AIA fetching represents an important aspect of ecosystem health, and that all PKI clients should implement it for robustness. While I know you have experience with Firefox deciding not to fetch intermediates, it's also clear that the decisions behind that have necessitated Mozilla trusting CAs well beyond when it's advisable. This problem has equally affected other consumers of the Mozilla Root Store, such as RHEL.
> FWIW, my hypothesis, which I haven't gotten around to verifying or refuting with an experiment, is that a small number of intermediates likely account for a huge percentage of the problem, and that if the browser simply shipped with a small number (say, less than 100) of these intermediates, without doing AIA fetching on any platform, the problem would mostly go away, while minimizing the negative effect of encouraging misconfiguration over time. I think it would be great to have a browser try an experiment along these lines.

I believe the Web PKI is best served by avoiding such centrally managed solutions. Perhaps this may change in time, particularly as the industry moves to better disclosure of intermediates in a technically discoverable way.
We may simply have to agree to disagree about the value of AIA.
My suggestion is that people experiment with downloading the commonly missing intermediates from the same place that roots are already downloaded from, secured using the same mechanisms. The AIA fetching mechanism, by contrast, proposes to download them from an arbitrary site that an attacker, or perhaps even a legitimate peer, asks us to download them from.
Could you make this more concrete? What do you mean, exactly, by a PKI transition? Could you provide a concrete example? Again, as far as I understand the original proposal, enabling such agility was not the problem that the proposal is trying to solve, so it might not matter anyway.
I am sure that you genuinely believe this is true. But, I am still not convinced that it is necessary. If all HTTPS clients are going to be pushed towards implementing this, then I think it is important that the motivation is stated clearly.
FWIW, it appears from reading Firefox's bug database that they're still open to the idea of preloading the intermediate certificates before they decide to implement AIA fetching. (That is based on my reading of decisions made by people other than me, after I left, so I might be misunderstanding them.)
I doubt this is true, but I don't follow Mozilla stuff as closely as you seem to. What specific cases do you have in mind?
Red Hat doesn't even use the same certificate verification code as Firefox, so if Red Hat also does not enable AIA fetching in their products then that is their own independent choice.
This is my main point: The more implementations that implement AIA fetching now, the harder it will be to drop it later in favor of better alternatives if/when they become viable.
Thanks for your helpful reply, Emily. Replies inline.

Emily Stark <est...@chromium.org> wrote:
> I'm hesitant to speak for another team and it's a bit difficult to answer this without doing so. I guess one thing to point out is that I haven't personally heard the arguments you make below as one of the reasons that Android doesn't do AIA fetching. To me, the important thing is the slow release/update cycle I mentioned in the doc. Even if Android goes all-in on AIA fetching tomorrow, we'd probably still want it in Chrome until the Android implementation reaches enough users.

I also don't know why Android hasn't done AIA fetching up to this point, or why Chrome for Android doesn't do AIA fetching up to this point. I do seem to remember it being said years ago (yes, I've been discussing this issue with people for years) that it was a conscious decision to *not* implement AIA fetching on Android, but I don't think any specific reasons were given then either. Regardless, I was hoping that some of the reasons for not doing AIA fetching must have been brought up in some discussion.

> Independent of Android, we've talked about doing outreach to site owners: contact the top N sites that have misconfigured intermediates, do a notification in Webmaster Tools, something in DevTools, etc. I think those are things worth exploring in parallel.

These seem like useful things, though I would guess that they won't change the state of things too much on Android unless you simulate Android's certificate validation logic and root store(s) in the Chrome developer tools, which seems like a lot of work relative to the effectiveness I would expect it to have.

> Chrome sends a report every time an opted-in user sees the certificate error page, and the 10%/30%/90% is the percentage of those reports that we estimate are due to misconfigured intermediates.

Usually when we look at the compatibility impact of a change, we do it in terms of percentage of total pageloads or similar. For example, in the "Intent to Deprecate/Remove" emails, there is almost always a use count figure quoted that shows the compatibility impact is minimal. In order to help convert the numbers 10%/30% into comparable figures, could you share the total percentage of pageloads that result in any kind of certificate error page?

> I'm not sure if the above explanation answers this or not? Conceptually, I'd expect a 30% reduction in certificate error pages seen by Android Chrome users.

Yes, you answered exactly the question I had. Thanks!

Cheers,
Brian
Yep, for something like Webmaster Tools, we could conceivably simulate Android's certificate verification, but for DevTools, not so much. There I was thinking that we might be able to do something super simple that covers some but not all cases, like warn if we successfully built a chain but the server only served a leaf. I know Ryan doesn't think this kind of stuff belongs in DevTools though, so you two might be able to agree to agree on that point at least. :)
One thing I haven't looked closely at is whether it's a small number of sites causing most of the problem. I half-expect that, among these ideas for reaching site owners, individual outreach to top sites might have the biggest impact (similar to your hypothesis about 100 intermediates).
> Usually when we look at the compatibility impact of a change, we do it in terms of percentage of total pageloads or similar. For example, in the "Intent to Deprecate/Remove" emails, there is almost always a use count figure quoted that shows the compatibility impact is minimal. In order to help convert the numbers 10%/30% into comparable figures, could you share the total percentage of pageloads that result in any kind of certificate error page?

Unfortunately I can't share an exact number, but it's well under 1% of page loads. Another, somewhat more qualitative way to look at it is that certificate errors are consistently a top source of user complaints.
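For what it's worth, the "super simple" leaf-only check mentioned earlier in the thread could be as small as the sketch below. This is purely hypothetical; the function and argument names are invented, not Chromium's, and a real implementation would need to distinguish where the extra certificates came from.

```python
# Hypothetical sketch of a DevTools-style warning heuristic: verification
# succeeded, but the server itself sent only its leaf certificate, implying
# the missing intermediates came from somewhere else (a cache, a preload,
# or an AIA fetch).

def leaf_only_warning(served_chain, verified_chain):
    """served_chain: certs the server sent in the TLS handshake.
    verified_chain: the chain the verifier actually built (empty on failure).
    Returns True when the server relied on the client to supply intermediates."""
    return len(verified_chain) > 1 and len(served_chain) == 1

# Server sent only "leaf", but the verifier built leaf -> intermediate -> root:
print(leaf_only_warning(["leaf"], ["leaf", "intermediate", "root"]))  # True
# Properly configured server (sent the intermediate too): no warning.
print(leaf_only_warning(["leaf", "intermediate"],
                        ["leaf", "intermediate", "root"]))  # False
```

As the thread notes, this covers only the clear-cut leaf-only case, not chains with wrong or stale intermediates.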
On Thu, Sep 22, 2016 at 6:22 PM, Brian Smith <br...@briansmith.org> wrote:
> My suggestion is that people experiment with downloading the intermediates that are commonly missing from the same place that roots are already downloaded from and secured using the same mechanisms, whereas the AIA fetching mechanism proposes to download them from an arbitrary site an attacker or perhaps even a legit peer asks us to download them from.

While true, this doesn't materially change the threat model, as browser clients must already be capable of loading arbitrary certificates (the sites), and capable of following arbitrary resources (after all, this is Hypertext). That is, to a browser client, an AIA fetch is conceptually quite similar to fetching jQuery from a CDN (secured with SRI).
As Emily clarified, we see both: things that are much more clearly misconfigurations (such as leaf-only certs), but also that the variety of root stores across Android revisions - which, unfortunately, don't autoupdate - need the ability to handle transitions. A PKI transition can be observed in both sites I posted - crt.sh (the transition away from AddTrust) and google.com (the transition away from Equifax). In both cases, new roots are stood up, which are supported on new clients, but older clients need older intermediates.

When sites are going through such transitions, they have to choose who the 'default' configuration works for. When a new version of Android is released, for example, the site likely can't rely on that version reaching 100% ubiquity within days, and thus needs to continue to supply the 'old' chain to the root. However, it equally can't wait for 100% ubiquity of the new version before it may decide to stop sending the older chain - even if that's what Google has, in effect, been doing.
As you know, this is complicated all the more during the transition from, say, 1024-bit RSA to 2048-bit RSA. The lack of AIA fetching, for example, has required applications running on RHEL (as well as some versions of Android) to support 1024-bit RSA roots to facilitate the transition, even when it's completely undesirable to support 1024-bit keys *and* paths of purely 2048-bit certificates are possible.
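As a hedged illustration of the transition dynamic described above (all CA names are invented, not real data): a site's chain can terminate in a new root or, via a cross-sign, in an old root, but the server can only pick one 'default' chain to send. AIA fetching lets the other population of clients recover the variant they need.

```python
# Toy model of a PKI transition with cross-signing. Illustrative names only.
# "Root G2" is the new root; it is cross-signed by the legacy "Root G1" so
# that clients with old, non-updating root stores can still build a path.

CROSS_SIGNS = {"Root G2": "Root G1"}  # new root -> the old root that cross-signs it

def validates(served_root, client_roots, aia_fetching):
    if served_root in client_roots:
        return True  # the served chain already terminates in a trusted root
    if aia_fetching and CROSS_SIGNS.get(served_root) in client_roots:
        return True  # client fetched the cross-signed variant via AIA
    return False

old_android = {"Root G1"}            # old root store, never updated
new_android = {"Root G1", "Root G2"}

# The site switches its default chain over to the new root:
print(validates("Root G2", new_android, aia_fetching=False))  # True
print(validates("Root G2", old_android, aia_fetching=False))  # False: error page
print(validates("Root G2", old_android, aia_fetching=True))   # True
```

The middle case is the one Ryan describes: without AIA fetching, the site must keep serving the old chain until the old clients are gone.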
While it sounds like you're positioning this as a Chrome move that pushes the ecosystem one way, it's worth noting that AIA fetching is already the norm on most platforms and browsers. The exceptions, rather than the rule, are Chrome on Android and Firefox.
Firefox's stance is largely possible because they consider changes only relevant to the current version of Firefox - older root stores are not considered, asymmetric release schedules (such as OS vs root store) are not considered, root autoupdate (or the lack thereof) is not considered. While it's elegant in some ways, it unfortunately does not reflect where the broader ecosystem - of browsers and non-browser TLS clients - is.
> FWIW, it appears from reading Firefox's bug database that they're still open to the idea of preloading the intermediate certificates before they decide to implement AIA fetching. (That is based on my reading of decisions made by people other than me, after I left, so I might be misunderstanding them.)

Indeed, and in a forward-thinking world in which all paths are permuted and all versions of the root store are known (as in the case of Firefox), this is perhaps tenable. However, for clients such as Chrome - which execute on a variety of platforms that may have disjoint stores - the notion of a 'one size fits all' solution is not consistent with how the Web PKI historically works or, I would argue, is desired to work. At a minimum, an 'intermediate bundle' either needs to be built per set of trusted CAs (to be minimal), or it contains unrelated or perhaps conflicting intermediates (if using a single union).
> I doubt this is true, but I don't follow Mozilla stuff as closely as you seem to. What specific cases do you have in mind?

Equifax
> This is my main point: The more implementations that implement AIA fetching now, the harder it will be to drop it later in favor of better alternatives if/when they become viable.

I think we might disagree here, but ultimately, it depends on the objectives. I believe you're attaching a distinction to different types of configurations - one, such as only supplying the leaf cert, is 'bad' and the 'server's fault', and so displaying a warning is entirely appropriate to force it to send additional certs - while the other, the aforementioned root store disharmony, is 'good', and something the client should fix (perhaps with better alternatives).
I don't believe that, in the system we have, we can reliably or meaningfully distinguish those two in such a way that it will result in a positive outcome, and it's better to overcorrect for the ecosystem and minimize any user interstitials than it is to take the ideological approach and refuse to fix.
I'm incredibly sympathetic to the ecosystem arguments, but as I mentioned in the other mail, I believe that some form of AIA fetching is a net-win for all clients, properly optimized (such as with proper caching). And once that feature is introduced, the ability to distinguish 'bad' from 'good' is necessarily lost, and I don't see that as a bad thing either, which may be our point of disagreement.
I think that's oversimplifying things too much because it doesn't take into account the difference in what happens in the trusted parent process vs the less-trusted content processes.
and other mechanisms to limit what third parties are contacted when connecting to it, except for OCSP and AIA fetching, which are beyond its control.
AIA fetching has basically the same risks as OCSP (including also similar terrible performance and reliability impact).
Also, in general it seems there is a common goal to move to an HTTPS-only web, but AIA fetches are virtually always http://, so they can't be a long-term solution. Browsers' use of a mechanism that relies on http:// fetching places a compatibility burden on non-browser clients that would prevent them from disabling non-https:// fetching entirely.
(if I understand http://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html correctly, this latter part is fixed in Android 7).
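To make the http:// point concrete, here is a hedged sketch of pulling the caIssuers URLs out of a certificate summary. The dict below is hand-written in the shape that Python's ssl.SSLSocket.getpeercert() returns (it includes a 'caIssuers' key when the AIA extension is present); it is not fetched from a live server, and the URL is invented.

```python
# Sketch: flag plain-http AIA caIssuers URLs. In practice nearly all of them
# are http://, because the fetched certificate is itself signed, so serving
# it over https would add nothing -- which is exactly the tension with an
# HTTPS-only web described above.
from urllib.parse import urlparse

def insecure_ca_issuer_urls(peercert):
    """peercert: a dict shaped like ssl.SSLSocket.getpeercert() output.
    Returns the AIA caIssuers URLs that use plain http://."""
    return [u for u in peercert.get("caIssuers", ())
            if urlparse(u).scheme == "http"]

# Hand-written example (hypothetical CA hostname):
cert = {
    "subject": ((("commonName", "example.com"),),),
    "caIssuers": ("http://cacerts.example-ca.test/intermediate.crt",),
}
print(insecure_ca_issuer_urls(cert))
# prints ['http://cacerts.example-ca.test/intermediate.crt']
```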
The question is about what is the least harmful workaround that is effective.
But, in this case, in addition to the privacy concerns with the OCSP mechanism mentioned in the Chromium Security FAQ that also apply to AIA fetching, users running the newer Android have to sit around and wait for the AIA fetch to finish.
From Adam Langley's blog post at https://www.imperialviolet.org/2012/02/05/crlsets.html we can see that "the median time for a successful OCSP check is ~300ms and the mean is nearly a second."
Firefox's telemetry for Firefox 48 on Android (https://tinyurl.com/hfx42s9) reports the following (in milliseconds):
- 58.05
- 117.23
- 244.6
- 590.38
- 2.62k
I think it would be better to find a solution to get rid of the AIA mechanism on *all* platforms and *all* browsers that doesn't break the web. (FWIW, I also think Firefox's caching of every intermediate cert it comes across on the web is bad for the web.)
1. Are you open to the possibility of pre-populating the cache from some browser update or root-store source, like CRLSets are already pre-populated?
2. Are you open to measuring the effects that such pre-populating the cache would have?
3. Are you open to the possibility that pre-populating and occasionally refreshing the cache, as is done already for CRLSets, may mitigate the raised problems enough that we could avoid the hazards, poor user experience, and complexity inherent in AIA fetching?
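To make question 1 concrete, here is a toy sketch of the proposed lookup order: consult a pushed, pre-populated intermediate store first, and only fall back to a network AIA fetch on a miss. The names and the keying-by-subject scheme are illustrative simplifications; a real cache would key on something like an SPKI hash or SubjectKeyIdentifier:

```python
# Sketch of a CRLSet-style pushed intermediate store, consulted before
# any network AIA fetch. Keying by subject name is a simplification.

PUSHED_INTERMEDIATES = {
    "Example Intermediate CA": {
        "subject": "Example Intermediate CA",
        "issuer": "Example Root CA",
    },
}

fetch_log = []  # records AIA URLs we would have fetched over the network

def find_issuer(cert, cache=PUSHED_INTERMEDIATES):
    """Return (issuer_cert, source): cache hit avoids the network entirely."""
    issuer = cache.get(cert["issuer"])
    if issuer is not None:
        return issuer, "cache"
    fetch_log.append(cert.get("aia_ca_issuers"))  # fall back to AIA fetch
    return None, "network"

leaf = {"subject": "www.example.com", "issuer": "Example Intermediate CA"}
issuer, source = find_issuer(leaf)
```

With a reasonably complete pushed set, the common case becomes a local dictionary lookup with no round trip, which is the latency and privacy win the questions above are probing.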
I appreciate your feedback, Brian, but I'm afraid I have to point out there are serious flaws in your arguments here, as appealing as they are at first glance.
On Wed, Sep 28, 2016 at 5:38 PM, Brian Smith <br...@briansmith.org> wrote:

> I think that's oversimplifying things too much because it doesn't take into account the difference in what happens in the trusted parent process vs the less-trusted content processes.

This is not a meaningful argument. You are going to parse the server-sent certificates in order to determine whether they're trusted. For your argument to be that AIA harms security, you would need to show that there's no trust at all in the server. Unfortunately for that argument, this is easily and empirically demonstrable as 'no additional risk' beyond what's already accepted.
> and other mechanisms to limit what third parties are contacted when connecting to it, except for OCSP and AIA fetching, which are beyond its control.

The introduction of OCSP is clearly a misdirect;
let's focus on what we're discussing, which is AIA. Your assertion previously was that sites can control what third-parties are contacted by supplying the 'correct' intermediates.
I showed that the notion of 'correct' is flawed and improper.
You now assert that the site cannot control AIA fetching.
One, this is logically inconsistent with your previous argument; are you now swayed to believe that the notion of 'correct' is flawed, or are you still holding on to that argument?
Two, there is clearly a choice in CA selection, and if we were to accept your previous supposition that there is a 'correct' way to do it (and perhaps CAs simply are not doing it yet), then clearly, there's choice. So I don't buy this argument at all: it's inconsistent with your previous arguments and internally inconsistent besides.
> AIA fetching has basically the same risks as OCSP (including a similarly terrible performance and reliability impact).

This is a gross oversimplification, and I'm disappointed to see it made, because I know you are intimately familiar with what I'm about to point out: OCSP directly refers to the certificate you're using (that is, you ask the CA, "I'd like to ask about the certificate for google.com"), whereas an AIA fetch asks the CA for a certificate the CA itself provides ("Please tell me about your CA"). In practice, this does not reveal the site you're visiting.
Now, I anticipate you might respond with a hypothetical concern: what if CAs wanted to track?
> Also, in general it seems there is a common goal to move to an HTTPS-only web, but AIA fetches are virtually always http://, so they can't be a long-term solution. Browsers' use of a mechanism that relies on http:// fetching places a compatibility burden on non-browser clients that would prevent them from disabling non-https:// fetching entirely.

This is to suggest the move to eliminate HTTP is on ideological purity grounds, which is not the case.
This is not saying "All HTTP for all protocols is bad". It's acknowledging the risks and balances, and in the case of AIA fetches, the browser has strong confidence in the authenticity and integrity of the message (vis-a-vis the signature on the AIA-signed certificate), has strong confidence that the privacy is preserved (vis-a-vis CT and WebPKI disclosures), and has strong assurances that the risks of HTTP for browsing *do not apply*.
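The integrity point above can be illustrated with a toy example: because the fetched intermediate carries the issuing CA's own signature, tampering on the plain-http transport is detected when signature verification fails, so the transport adds no authenticity that the certificate doesn't already provide. HMAC stands in here for a real RSA/ECDSA certificate signature; everything in this sketch is illustrative:

```python
import hashlib
import hmac

# Toy stand-in for signature verification on a fetched intermediate.
# A real verifier checks an RSA/ECDSA signature against the issuer's
# public key; HMAC with a fixed key models "only the CA can sign".
CA_KEY = b"example-root-ca-signing-key"

def sign(cert_bytes: bytes) -> bytes:
    """Model the CA signing the intermediate certificate's bytes."""
    return hmac.new(CA_KEY, cert_bytes, hashlib.sha256).digest()

def verify(cert_bytes: bytes, signature: bytes) -> bool:
    """Model the client checking the fetched bytes end-to-end."""
    return hmac.compare_digest(sign(cert_bytes), signature)

intermediate = b"subject=Example Intermediate CA,issuer=Example Root CA"
sig = sign(intermediate)

ok = verify(intermediate, sig)
tampered_ok = verify(intermediate.replace(b"Intermediate", b"Evil"), sig)
```

An on-path attacker on plain HTTP can modify the fetched bytes, but the modified certificate fails verification, which is why the argument treats http:// AIA fetches as an integrity-preserved channel (privacy is a separate question).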
> users running the newer Android have to sit around and wait for the AIA fetch to finish.

Yes. They do.
> From Adam Langley's blog post at https://www.imperialviolet.org/2012/02/05/crlsets.html we can see that "the median time for a successful OCSP check is ~300ms and the mean is nearly a second."

This is comparing chickens and eggs. As you know, OCSP responders are unfortunately quite often deployed in 'live' scenarios (e.g. signing responses on the fly); AIA does not have that problem. As you also know, OCSP responses are poorly suited to caching (which is why Microsoft pushed for things like the "High Performance OCSP" profile, which our other platforms, notably NSS, did not implement at the time we measured), and even under ideal conditions their validity is limited to 7 days. AIA caches are measured in years.

> Firefox's telemetry for Firefox 48 on Android (https://tinyurl.com/hfx42s9) reports the following (in milliseconds):
> - 58.05
> - 117.23
> - 244.6
> - 590.38
> - 2.62k

This is comparing apples to oranges.
> 1. Are you open to the possibility of pre-populating the cache from some browser update or root-store source, like CRLSets are already pre-populated?

Explicitly: no. There are far more important, user-facing, security-relevant things to be investing resources in.

> 2. Are you open to measuring the effects that such pre-populating the cache would have?

This suggests I accept as valid your arguments for the benefits of a cache. I explicitly reject that, and thus naturally reject the conclusion that there is value in measuring it.
> 3. Are you open to the possibility that pre-populating and occasionally refreshing the cache, as is done already for CRLSets, may mitigate the raised problems enough that we could avoid the hazards, poor user experience, and complexity inherent in AIA fetching?

No, because I disagree with you on the hazards, I disagree with you on the poor user experience, and I disagree with the notion that it is inherently complex.