Allowing SSL interception


Nick Cullen

Sep 28, 2015, 3:59:38 PM
to certificate-transparency

A number of suppliers offer 'SSL interception' as part of the security functionality offered by Corporate Proxy servers with protective Web filtering capabilities. These products dynamically sign a 'spoof' certificate, using an on-board root cert - which has been added to corporate assets as an additional Trusted Root.

When the CT project is up and running, and browsers have been updated to block connections (or produce really horrid security warnings) when the SSL certificate has been 'spoofed', what will be the impact on such Corporate (Spying / Protection) devices?

Is there any provision within the Certificate Transparency eco-system to allow Corporations to 'legitimise' this intrusion into the end-to-end security of SSL? If so, can someone point me to where I can find out how to do so.

Regards,
Nick Cullen

Adam Eijdenberg

Sep 28, 2015, 7:36:03 PM
to certificate-transparency
Hi Nick,

I think that's a good question.  This is something that CT clients will need to decide on - my feeling is that for Chrome we will most likely take the same route that HPKP does, i.e. disable checking when the received cert chains to a private (vs. public) root.  See the following note from their FAQ:

"Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning."

I think in general the same considerations/tradeoffs would apply to CT.
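For illustration, the policy decision described above might be sketched as follows. This is a hypothetical sketch, not Chrome's actual implementation; the function and variable names are invented, and roots are identified by fingerprint strings for simplicity.

```python
def should_enforce_ct(chain_root, public_roots, user_installed_roots):
    """Return True if CT compliance should be required for this chain.

    chain_root: fingerprint of the root the served chain terminates at.
    public_roots / user_installed_roots: fingerprint sets.
    """
    if chain_root in user_installed_roots:
        # Locally or enterprise-installed anchor: treat as a private root
        # and skip CT enforcement, mirroring the HPKP policy quoted above.
        return False
    # Otherwise require CT only for chains ending at a publicly trusted root.
    return chain_root in public_roots
```

Under this policy, a corporate proxy's dynamically signed certificates are exempt from CT checks as long as the proxy's root was explicitly installed on the client.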

Cheers, Adam




--
You received this message because you are subscribed to the Google Groups "certificate-transparency" group.
To unsubscribe from this group and stop receiving emails from it, send an email to certificate-transp...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Daniel Kahn Gillmor

Sep 29, 2015, 11:50:29 AM
to certificate-...@googlegroups.com
On Mon 2015-09-28 19:35:53 -0400, 'Adam Eijdenberg' via certificate-transparency <certificate-...@googlegroups.com> wrote:
> Hi Nick,
>
> I think that's a good question. This is something that CT clients will
> need to decide on - my feeling is that for Chrome we will most likely take
> the same route that HPKP does, ie disable checking when the cert received
> chains to a private (vs public) root. See the following note from their
> FAQ:
>
> "Chrome does not perform pin validation when the certificate chain chains
> up to a private trust anchor. A key result of this policy is that private
> trust anchors can be used to proxy (or MITM
> <http://en.wikipedia.org/wiki/Man-in-the-middle_attack>) connections, even
> to pinned sites. “Data loss prevention” appliances, firewalls, content
> filters, and malware can use this feature to defeat the protections of key
> pinning."
> http://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-
>
> I think in general the same considerations/tradeoffs would apply to CT.

i don't think that user-installed CAs should automatically be considered
"MITM-able" or "CT-violable" -- browsers that want to permit certs from
certain CAs to not need CT logs in this way should require an extra
setting for that chained root. I've reported this for mozilla here:

https://bugzilla.mozilla.org/show_bug.cgi?id=1168603

And, if a browser implements an "all-local-roots-are-allowed-to-MITM"
mode (both chrome and firefox have it), it should be optional by
default.

The bug for Mozilla to change the default for
security.cert_pinning.enforcement_level to 2 (strict) is here:

https://bugzilla.mozilla.org/show_bug.cgi?id=1059392

I think the same conservative constraints should be followed for any
sort of systematized override of CT requirements.

--dkg

Brad Hill

Sep 29, 2015, 12:28:50 PM
to certificate-...@googlegroups.com
The user's computer is the user's computer.  It's not really a good use of anyone's time and energy to engage in an arms race about whether they are allowed to modify their own trust stores.  That way lies DRM.

Daniel Kahn Gillmor

Sep 29, 2015, 12:39:57 PM
to Brad Hill, certificate-...@googlegroups.com
On Tue 2015-09-29 12:28:40 -0400, Brad Hill <hill...@gmail.com> wrote:
> The user's computer is the user's computer. It's not really a good use of
> anyone's time and energy to engage in an arms race about whether they are
> allowed to modify their own trust stores. That way lies DRM.

I'm not suggesting that the user be disallowed from modifying their own
trust stores. I'm pointing out that the current standard conflates "i'd
like to trust this additional root X" with "i'd like any site certified
by root CA X to be able to disregard all additional safeguards that
other CAs are subject to."

This conflation is a mistake, and a disservice to users.

There are currently non-standard CAs that some users are obliged to use
(see the examples given in
https://bugzilla.mozilla.org/show_bug.cgi?id=1059392) which absolutely
should be subject to the additional safeguards that we have, and should
not be allowed to override the browser's standard HSTS or CT policy.

Browsers that conflate these sentiments put their users at higher risk
than they need to be. Not every non-public CA is (or should be) acting as
a MITM.

--dkg

Brian Smith

Sep 29, 2015, 5:26:13 PM
to certificate-...@googlegroups.com, Brad Hill
On Tue, Sep 29, 2015 at 6:39 AM, Daniel Kahn Gillmor <d...@fifthhorseman.net> wrote:
On Tue 2015-09-29 12:28:40 -0400, Brad Hill <hill...@gmail.com> wrote:
> The user's computer is the user's computer.  It's not really a good use of
> anyone's time and energy to engage in an arms race about whether they are
> allowed to modify their own trust stores.  That way lies DRM.

I'm not suggesting that the user be disallowed from modifying their own
trust stores.  I'm pointing out that the current standard conflates "i'd
like to trust this additional root X" with "i'd like any site certified
by root CA X to be able to disregard all additional safeguards that
other CAs are subject to."

In other words, some user-installed CAs are installed for MitM purposes, but some are installed only to be able to access certain websites that use non-public CAs. In an ideal world, client software would let the user indicate which purpose(s) a private CA certificate is used for. If a private CA cert isn't installed for MitM purposes, then it shouldn't be able to subvert the HPKP mechanism. Also, in an ideal world, the client software would let the user name constrain such private CAs to domains they care about.

For example, if I have to install a private CA cert to access https://mail.mycorp.intranet, I don't want the sysadmins at my company to be able to MitM my connections to Google or Facebook. I just want to use that cert to access https://mail.mycorp.intranet.
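As a rough sketch of the name-constraint idea (entirely hypothetical - no browser exposes this data model today), the client could keep a per-CA set of user-granted domain suffixes and refuse to let the private CA vouch for anything else:

```python
def private_ca_allowed_for(hostname, allowed_suffixes):
    """Return True if the user granted this private CA authority over hostname.

    allowed_suffixes: domain suffixes the user scoped the CA to,
    e.g. {"mycorp.intranet"}.
    """
    hostname = hostname.lower().rstrip(".")
    for suffix in allowed_suffixes:
        suffix = suffix.lower().strip(".")
        # Match the suffix itself or any subdomain of it.
        if hostname == suffix or hostname.endswith("." + suffix):
            return True
    return False
```

With a constraint of `{"mycorp.intranet"}`, the CA could vouch for `mail.mycorp.intranet` but a cert it issued for `www.google.com` would be rejected.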

Cheers,
Brian
--

Brad Hill

Sep 29, 2015, 8:23:54 PM
to Brian Smith, certificate-...@googlegroups.com
I wholeheartedly agree that browsers should give users easy tools to make more granular trust decisions in such situations.  But once a user has made such a choice, within the scope of the trust they've granted the only logical way to respect that choice is that CT, pinning and other defense-in-depth mechanisms (mostly intended to reduce the risk of the default, public trust sets) shouldn't undo it.  So I think this is a general browser user experience request, not something that should be specific to CT.

It makes sense to try to provide good options for granting limited trust when a site like China Rail attempts to social-engineer a user into installing a root cert. It doesn't make sense to try to engage in an arms race with root-privileged, manufacturer-installed malware like the Superfish software mentioned in that bug (or even to try to distinguish stuff like that from "legitimate" enterprise-installed software; root is root).

Daniel Kahn Gillmor

Sep 30, 2015, 4:07:31 AM
to Brad Hill, Brian Smith, certificate-...@googlegroups.com
On Tue 2015-09-29 17:23:43 -0700, Brad Hill wrote:
> I wholeheartedly agree that browsers should give users easy tools to make
> more granular trust decisions in such situations. But once a user has made
> such a choice, within the scope of the trust they've granted the only
> logical way to respect that choice is that CT, pinning and other
> defense-in-depth mechanisms (mostly intended to reduce the risk of the
> default, public trust sets) shouldn't undo it.

I'm not sure how you conclude that HPKP-style pinning and other
defense-in-depth mechanisms are just to reduce the risk of the default,
public trust sets. HPKP should be at least as useful against private
CAs as it is against public CAs.

> So I think this is a general browser user experience request, not
> something that should be specific to CT.

right, CT is only a part of this pattern, but it's a relevant part.

> It makes sense to try to provide good options for granting limited trust
> when a site like China Rail attempts to social engineer a user into
> installing a root cert

This is exactly what i was describing, and is what is mentioned in that
bug report. Everyone trying to get a Brazilian visa [0] should not need
to give up their defenses-in-depth against possible MitMs from
ICP-Brazil.

> it doesn't make sense to try to engage in an arms-race with
> root-privileged, manufacturer-installed malware like the Superfish
> software mentioned in that bug.

Agreed, and i don't think anyone suggested that goal except one
commenter on a bug. Superfish-style, root-enabled attacks can always
set the flags they need (or modify the software directly) to make their
MitM cert avoid any defense-in-depth measure. That doesn't mean that we
should lose those same defenses against any other locally-imported CA,
or that we should deliberately carve out accommodations for the easy
facilitation of TLS-interception MitM boxes.

--dkg

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=438825#c127

Florian Weimer

Sep 30, 2015, 4:18:29 AM
to certificate-...@googlegroups.com
On 09/18/2015 12:53 PM, Nick Cullen wrote:

> A number of suppliers offer 'SSL interception' as part of the security
> functionality offered by Corporate Proxy servers with protective Web
> filtering capabilities. These products dynamically sign a 'spoof'
> certificate, using an on-board root cert - which has been added to
> corporate assets as an additional Trusted Root.

And this means that the trust decision, cipher suite selection, etc. has
to happen at the reencrypting proxy, without a good way of signaling
back problems to the original application.

Maybe it's time to revisit this model and replace it with something
technically more reasonable (like session key escrow)?

--
Florian Weimer / Red Hat Product Security

Nick Cullen

Aug 27, 2016, 6:20:17 AM
to certificate-transparency
I think we have to accept and enable MiTM interception by EXPLICIT corporate proxies, carving out an accommodation that allows them to be used, provided we can be clear that the user KNOWS they are there, and that the privacy of their communications via Corporate Infrastructure is not absolute (in so far as Local Law allows).

By ALLOWING the interception under specific technical conditions, we can actually make the User's position stronger, and improve defences against unauthorised interception (including transparent MiTM attacks).

As I spend my time at work trying to implement such a proxy (and trying to ensure it can enforce a Corporate Policy about which sites the internal users are allowed to visit), it strikes me that the standards around HTTP CONNECT are in need of updating. A new response code to inform the user that their connection will not be allowed because of a corporate policy (allowing a link to that Corporate Policy to be provided) would be VERY useful - and would fill a serious functionality gap, since browsers do not display such information when it accompanies an HTTP 403 (or other error code) from the Proxy.

The existence of specific codes (and even specific syntax) to support an intercepted version of HTTPS CONNECT via an explicit proxy could also allow the existence of interception to be disclosed to the Browser (and so to the User) in a way that would enhance rather than disadvantage their privacy.
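To make the proposal concrete, a policy-aware CONNECT denial might look something like the sketch below. Everything here is hypothetical: the `Proxy-Denial-Reason` header and the `blocked-by-policy` link relation are invented for illustration, and no standard defines them.

```python
def policy_denial_response(policy_url):
    """Build a (hypothetical) proxy response to a CONNECT blocked by policy."""
    return (
        "HTTP/1.1 403 Forbidden\r\n"                 # today: opaque to the browser
        "Proxy-Denial-Reason: corporate-policy\r\n"  # hypothetical signalling header
        f'Link: <{policy_url}>; rel="blocked-by-policy"\r\n'  # hypothetical relation
        "Content-Length: 0\r\n"
        "\r\n"
    )
```

A browser that understood such a response could show the user the policy link directly, with no interception of the TLS channel required.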

WITHOUT this facilitation, I find myself installing a new local trusted root and configuring the proxy to interfere with every HTTP CONNECT request, so that it can insert its own 'fake certificate' in order to send the user an HTTP Redirect within the 'trusted' HTTPS channel, so as to provide an explanation of why Corporate Policy prevents the access (and how they can do something about it).

It seems very wrong that I should need to breach privacy in this way, just to give the user a helpful message.

Regards,
Nick

Eran Messeri

Aug 30, 2016, 8:42:20 AM
to certificate-...@googlegroups.com
I'll prefix by saying that while this is an interesting and valid discussion, the CT group may not be the best place to get a broad audience involved as the points you raise apply more broadly to TLS/SSL and its implementation than just CT. chromium-dev or one of IETF's workgroups in the security area may be more appropriate.

On Sat, Aug 27, 2016 at 11:20 AM, 'Nick Cullen' via certificate-transparency <certificate-...@googlegroups.com> wrote:
> I think we have to accept and enable MiTM interception by EXPLICIT corporate proxies, carving out an accomodation that allows them to be used, provided we can be clear the the user KNOWS that they are there, and that the privacy of their communications via a Corporate Infrastructures is not absolute (in so far as Local Law allows).
To the best of my understanding, Chrome's long-standing policy is that it cannot, and will not, protect the user against MITM in such a scenario, where there's a locally-installed trust root. I expect (but cannot guarantee) that compliance with Chrome's CT policy will not apply to certificates chaining to a locally-installed trust root. 


Maxwell Funk

Jun 16, 2017, 8:14:55 AM
to certificate-transparency
Great question. Currently, CT checks are only enforced for Extended Validation (EV) certs issued by publicly trusted CAs... I believe Google recently announced that it will extend CT checks to OV and DV certs as well in October 2017.

Does anyone know if this will affect the 'session' certs issued by transparent proxies that chain to a private trust anchor?

- Max

Rob Percival

Jun 16, 2017, 10:40:17 AM
to certificate-transparency

CT is not enforced for private trust anchors, so it shouldn't affect them.



Maxwell Funk

Jun 16, 2017, 11:30:40 AM
to certificate-transparency
Much obliged.  I also want to confirm that this is true for session certs covering domains that are available to the public... For instance: a user visits google.com, and the proxy issues a cert for *.google.com that is good for a day or two. Would CT ignore that cert even though it covers a domain whose certificates exist in the CT logs?

- Max

Rob Percival

Jun 16, 2017, 11:41:32 AM
to certificate-transparency

Yes, such certificates would be exempt from CT enforcement. It doesn't matter what the domain is, so long as it chains to a private trust anchor.



Nick Cullen

Jun 16, 2017, 4:29:19 PM
to certificate-...@googlegroups.com
Which is slightly disappointing in a way, since there have been a number of cases where 'private' trust anchors have been forced on citizens - required to complete registration for some central service, and thereafter open to possible or actual exploitation. So Google certificates could be in use for privacy-breaching purposes, and yet the domain owner remains unaware and their customers unprotected. I thought that CT was intended to bring transparency, yet this type of situation will remain opaque.

Regards, Nick


Rob Percival

Jun 16, 2017, 4:35:35 PM
to certificate-...@googlegroups.com

Matt Palmer

Jun 16, 2017, 10:53:18 PM
to 'Nick Cullen' via certificate-transparency
On Fri, Jun 16, 2017 at 09:28:36PM +0100, 'Nick Cullen' via certificate-transparency wrote:
> Which is slightly disappointing in a way, since there have been a number
> of cases where 'private' trust anchors have been Forced on citizens.
> Required to complete registration for some central service, and
> thereafter, open to possible or actual exploitation. So Google
> certificates could be in use for privacy breach in purposes, and yet the
> domain owner remains unaware, and their customers unprotected. I thought
> that CT was intended to bring transparency, yet this type of situation
> will remain opaque.

The situation you describe could just as easily happen with an organisation
that forces the use of an alternate piece of software to complete
registration for some central service, where CT *also* won't protect you.

The answer to both situations is to only use the software (or install the
trust anchor) for the times it is needed, and then not use it (or remove it)
when it isn't.

- Matt

Shiladitya dey

Jan 12, 2018, 5:06:44 AM
to certificate-transparency
Rob,
My colleague had a one-on-one communication with you sometime last week. I thought it would be more appropriate to post the follow-up questions in the open forum.
This is in the context of an SSL inspection product.

It will be very helpful if you could kindly answer the questions below:

  1. QA: It's important that we get to see the exact behavior of Chrome with CT validation enabled (warnings / errors etc.). Is such a distribution available to download, please? If not, what are the alternatives? (Earlier I tried rebuilding from a git clone, but it didn't fly.)
  2. SCT Validation: For upstream traffic in an inspected flow (e.g. Proxy <-> linkedin.com) the proxy must validate the SCT (delivered as a TLS extension, X.509 extension, or via OCSP stapling) and also the fact that the leaf certificate has been added to a Merkle tree in one of the public log servers that Chrome trusts.
    1. Do you think validating the signature in the SCT (assuming that I have the public key of the log) alone will suffice?
    2. In order to perform signature validation, we would require the public key of the log. The spec doesn't explain how to obtain the public key for a given Log ID, and public keys are not currently available in the list of trusted logs.
    3. How does Chrome validate SCTs at this point, please? Where can we find the list of trusted logs in Chrome? We didn't notice any request going out to the log servers.
  3. Trust store for logs: I presume Google will be concerned about a CA if Chrome finds a cert issued by that CA that wasn't part of a public log. This means the list of trusted logs can change. How do I keep my product in sync with the trusted log store that's centrally managed at Chromium?
Regards,
S.Dey

Matt Palmer

Jan 12, 2018, 8:54:20 PM
to certificate-...@googlegroups.com
On Fri, Jan 12, 2018 at 02:04:44AM -0800, Shiladitya dey wrote:
> 2. *SCT Validation: *For upstream traffic in an inspected flow (e.g.
> Proxy <–> linkedin.com) the proxy must validate the SCT (delivered as TLS
> extension or X509 attribute or OCSP stapling) and also the fact that the
> leaf certificate has been added to a merkle tree in one of the public log
> servers that Chrome trust.
> 1. Do you think validating the signature in SCT (assuming that I have
> the public key of the log) alone will suffice?

That would be a question for the customers of your product. What
requirements do your customers have for CT validation? Chromium's position
is different to most consumers of SCTs, because Chromium requires an SCT
from a Google log, and as such the threat of a misbehaving log is suitably
mitigated. However, unless you're intending on running your own logs and
mandating an SCT from one of your logs be present in any certificate you
choose to trust (which I have doubts would be a demand you could reasonably
make), then your threat model is different to Chromium's, and you should
take appropriate steps to mitigate the threats in your model.

> 2. In order to perform signature validation, we would require the
> public key of the Log. The spec doesn’t explain how to get the public key
> for the Log ID. Currently, public keys are not available in the list of
> trusted logs. How to obtain the public key for a logID has not been
> delineated in the spec either.

Getting a public key from a logID hasn't been specified because it's a
policy issue, not a specification one -- how a log is considered "trusted"
by a user agent is something for each user agent to consider individually.
The canonical source of public keys *for Chromium* is in the Chromium source
code (somewhere; I don't have a reference to hand because I'm not a Chromium
developer).

> 3. How does Chrome validate SCT at this point please?

Chromium's source code is publicly available, and that would be the best way
to determine this.

> Where to find the list of trusted logs in Chrome?

They're embedded in the Chromium source.

> We didn’t notice any request going out to the log servers.

No, in general user agents shouldn't be making requests to log servers to
validate SCTs; CT is explicitly designed to not require that -- all the
information you need to validate that an SCT was properly issued is in the
SCT and the certificate (and the log public key). Validating proper
behaviour of a log (certificate inclusion in the merkle tree, presentation
of consistent tree heads) involves auditing the logs, and gossiping tree
heads. Making requests to log servers to validate SCTs both increases the
load on the log server, and also leaks detailed information about browsing
habits to third parties, which is *probably* something your customers
wouldn't want to have happen.
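For reference, the bytes an SCT signature covers are fully specified by RFC 6962: for an ordinary X.509 (non-precert) entry, the log signs the serialized structure sketched below, and a verifier checks the log's signature (typically ECDSA/P-256 with SHA-256) over exactly these bytes using the log's public key. This sketch only builds the signature input; the actual signature check would use a crypto library.

```python
import struct

def sct_signed_data(timestamp_ms, leaf_cert_der, extensions=b""):
    """Serialize the RFC 6962 `digitally-signed` input for a v1 SCT
    over an x509_entry (precert entries use a different signed_entry)."""
    out = b"\x00"                            # sct_version: v1 (0)
    out += b"\x00"                           # signature_type: certificate_timestamp (0)
    out += struct.pack(">Q", timestamp_ms)   # uint64 timestamp, ms since epoch
    out += struct.pack(">H", 0)              # entry_type: x509_entry (0)
    # ASN.1Cert is opaque<1..2^24-1>: 3-byte big-endian length prefix + DER
    out += len(leaf_cert_der).to_bytes(3, "big") + leaf_cert_der
    # CtExtensions is opaque<0..2^16-1>: 2-byte length prefix + data
    out += struct.pack(">H", len(extensions)) + extensions
    return out
```

Note this only proves the SCT was issued by the log; detecting a misbehaving log additionally requires the auditing and gossip described above.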

> 3. *Trust store for logs: *I presume Google will be concerned about a CA
> if Chrome finds a cert issued by the same CA that wasn’t part of a public
> log. This means, the list of trusted logs can change. How do I keep my
> product in sync with the trusted log store that’s centrally managed at
> Chromium?

You keep an eye on the relevant category in the Chromium bug tracker for
inclusion requests, and/or the portions of the Chromium source code that list
the logs Chromium trusts.

- Matt

Andrew Ayer

Jan 16, 2018, 12:18:45 PM
to certificate-...@googlegroups.com, Shiladitya dey
On Fri, 12 Jan 2018 02:04:44 -0800 (PST)
Shiladitya dey <shilad...@gmail.com> wrote:

> 3. *Trust store for logs: *I presume Google will be concerned
> about a CA if Chrome finds a cert issued by the same CA that wasn't
> part of a public log. This means, the list of trusted logs can change.

Yes, and this happens frequently.

In 2016, two logs were distrusted by Chrome and shut down: Certly, for
excessive downtime, and Izenpe, for violating the append-only
property. One log, Google Aviator, was frozen (made read-only) for
violating the Maximum Merge Delay.

In 2017, two logs were distrusted and shut down: Venafi, for violating the
append-only property, and PuChuangSiDa, for disappearing.

Late in 2017, two logs, StartCom and WoSign, were discovered to be
failing to log submitted certificates, and will probably be distrusted.

In addition, the newest logs (Cloudflare Nimbus, DigiCert Yeti, and
Google Argon) are being operated on a yearly rotation schedule: every
year, the oldest log is frozen and a new log is created to handle
newer certificates.

Therefore, any vendor which wants to implement CT enforcement needs a
way to track changes to the log ecosystem and push out new log lists to
the vast majority of its userbase within weeks. If they don't, their
users will soon be unable to establish TLS connections.

At the very least, the trusted log list should have an expiration date,
so if it doesn't get updated, CT enforcement is automatically
disabled. Chrome's log lists are valid for only 10 weeks - if Chrome
is not updated for 10 weeks, Chrome disables CT enforcement.
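That fail-open behaviour is simple to sketch. This is illustrative only (the function and field names are invented); the ten-week window matches the Chrome behaviour described above.

```python
import time

TEN_WEEKS_SECONDS = 70 * 24 * 60 * 60

def ct_enforcement_active(log_list_built_at, now=None):
    """Enforce CT only while the bundled log list is still fresh.

    log_list_built_at: Unix timestamp when the trusted log list was built.
    Returns False (enforcement disabled) once the list is older than 10 weeks.
    """
    if now is None:
        now = time.time()
    return (now - log_list_built_at) <= TEN_WEEKS_SECONDS
```

A client that ships a stale log list thus degrades to no CT enforcement rather than breaking TLS connections outright.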

Regards,
Andrew

Prabha Loganayaki

Jan 22, 2018, 5:14:45 AM
to certificate-transparency
Hi Andrew

We are looking at implementing a CT client. Is there a way we can get the updated list of trusted log servers from Chromium?
And how do we get updates on any changes to the list at runtime?

Also, is there a distribution of Chromium available that behaves exactly as it will post-April? How do we get it?

Thanks
Prabha

Prabha Loganayaki

Jan 22, 2018, 5:41:17 AM
to certificate-transparency
Also, do you think validating the SCT signature alone will be enough (assuming I have the public key of the log server)?

Thanks
Prabha

Victor Valle

Feb 20, 2018, 2:59:48 AM
to certificate-transparency

Salz, Rich

Feb 20, 2018, 8:27:50 AM
to certificate-...@googlegroups.com

There is no provision within CT to support what you want. Usually this is done by having "Enterprise" versions of browsers that, for example, blindly accept any certificate from certain configured root CAs, without enforcing more stringent security requirements on them.
