
EV Certificate-based Security Model


Gervase Markham

Aug 24, 2011, 7:48:34 AM
to mozilla-d...@lists.mozilla.org
On the last WebAPI call, I outlined a permissions model for web apps
based on EV certs. Chris has asked me to write it up in more detail.

Background and Initial Analysis
-------------------------------

The aim of a security model is to prevent evil actions. Evil actions can
occasionally be done unwittingly by good people, but in general evil
actions are taken by evil people. How can one prevent such actions?
There are the following ways:

1) Prevent the action being taken; subdivided into:
A) prevent it technically
B) prevent it organizationally
2) Increase the cost to the attacker of taking the action, such that
the action becomes unappealing; subdivided into:
A) increase monetary cost
B) increase non-monetary cost

Examples of how you might do each of these are:

1 A) - APIs which pop up "can this app do this action?" prompts
(although their effectiveness in prevention is debated)
1 B) - App stores with manual code review (e.g. AMO)
2 A) - Require app developers to post a $50,000 bond or similar
2 B) - Require app developers to reveal their real-world identities so
they can be arrested if their app does something illegal

I suggest that 1 A) is very difficult to make usable, 1 B) has scaling
issues, and 2 A) restricts innovation. My proposal is for a solution
which falls mostly into bucket 2 B) (with a little 2 A) thrown in, and 1
A) as a fallback option).

EV Certificates
---------------

EV ("Extended Validation") certificates are what (some say) certificates
should always have been - a strong binding from a domain name to the
identity of its owner. The standard was developed over several years by
the CA/Browser Forum, which includes all the major CAs and browser
vendors, and is now at 1.3:
http://cabforum.org/Guidelines_v1_3.pdf

The aim of EV is to make the information inside a certificate as to the
identity of the owner accurate, by doing vetting which is prohibitively
expensive to fool. You could perhaps obtain an EV certificate with false
information in it if you were prepared to spend tens of thousands of
dollars on the subterfuge; but it would be very difficult to make that
money back in criminal enterprise. Hence, false EV certificates are
economically unattractive investment propositions for criminals.

(We would need to be careful, if we were to use them as part of our
scheme, not to shift that balance by significantly increasing the
rewards of getting a false one.)

My Proposal
-----------

My proposal is that when you install a web application from a website
over an EV-SSL connection, you ask the user "Do you trust <Company X>?"
If they say yes, then you save the EV certificate somewhere in a browser
or OS-level store associated with that app, and which the app can't
touch. You then know who to go after if the app turns out to be
malicious. Whenever the app was reloaded, you would re-check the EV-ness
of the currently-used certificate, and update the store.

Apps loaded in this way would gain full access to your hardware etc.
without prompting.
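The install and reload flow described above might look something like the following sketch. All names here are hypothetical illustrations - no such browser API exists - but it shows the shape of an identity store the app itself can never touch:

```javascript
// Sketch of the proposed install/reload flow. All names are hypothetical
// illustrations - no such browser API exists - but the shape is: a store
// the app itself can never touch, holding the EV identity.
class AppIdentityStore {
  constructor() {
    this.identities = new Map(); // appOrigin -> stored EV identity
  }

  // Install time: only proceed if the serving cert is EV; record who
  // stands behind the app so there is someone to go after later.
  install(appOrigin, cert) {
    if (!cert || !cert.isEV) {
      throw new Error("EV certificate required to install with full access");
    }
    this.identities.set(appOrigin, {
      organization: cert.organization,
      country: cert.country,
      installedAt: Date.now(),
    });
  }

  // Every reload: re-check the EV-ness of the currently-used certificate
  // and update the store; a non-EV cert drops the app back to the
  // prompt-based model.
  revalidate(appOrigin, currentCert) {
    if (!currentCert || !currentCert.isEV) {
      this.identities.delete(appOrigin);
      return false;
    }
    const entry = this.identities.get(appOrigin) || { installedAt: Date.now() };
    entry.organization = currentCert.organization;
    entry.country = currentCert.country;
    this.identities.set(appOrigin, entry);
    return true;
  }

  // Apps with a stored EV identity get full hardware access, no prompts.
  hasFullAccess(appOrigin) {
    return this.identities.has(appOrigin);
  }
}
```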

Apps loaded by other means (over non-EV SSL or plain HTTP) would need to
be restricted in one of the other ways listed above - by 1 A)-style
OS-level prompts for dangerous actions, for example. The UX here would
be worse - we'd try not to make it too much worse, but it would be worse
- and that would drive popular apps to the EV model, and better UX,
while still enabling small, niche apps to run. A bit like AMO vs.
off-AMO, or Android Market vs. 3rd party website.

Note that this model also enables a form of "app store" - a company
could buy an EV cert and go into business certifying the "goodness" of
other people's apps using any method, and hosting them on their
infrastructure. The cost of this would perhaps be less than an EV cert,
and would further extend the opportunities for promptless apps. The
company could use any certification model, from code review to "he is an
alumnus of my old school", as long as they were willing to put their
reputation and identity behind the app.

Advantages
----------

This works like the web, leveraging its existing secure transmission and
verified real-world identity infrastructure. It doesn't require a single
central authority, scales much better than code review, and has better
UX than a prompt-based approach. It extends the "it just works" of
websites into the realm of hardware interactions and greater capabilities.

What This Is Not
----------------

People have proposed certificate-based solutions for "code signing"
before. They have tended to founder on implementation complexity. But
here we are not signing the code, we are taking any code served up by a
verified identity. There is no need for additional build steps.

Having said that, we should carefully analyse previous certificate-based
efforts and see what the pitfalls were, so we can hopefully avoid them.

Disadvantages
-------------

EV is currently only available to companies and not natural persons,
although we might be able to change that. (It's an issue which has come
up regularly in the CAB Forum.)

EV can be expensive - although it seems GoDaddy have found a way to sell
them for $99:
http://www.sslshopper.com/cheapest-ev-ssl-certificates.html
(I'm not sure if there's a catch there. Reading the reviews, perhaps
it's that their process is slow and difficult.)

This only helps in preventing actions which are _illegal_, or for which
legal redress would be possible. Therefore, it may not be so effective
at preventing unwanted privacy violations. This would need further
analysis. We could perhaps also have a certificate blacklist; if someone
knew their $400 investment would become useless if they behaved badly,
that might be an incentive. But that does hand a fair bit of control to
Mozilla that we might not want.

I have an off-the-wall idea about this; what if we made it so that when
an app developer used an API, that created a binding contract with the
user to behave in a certain way? So, in order to enable the Contacts
API, say, the app had to call:

var ppol = getContactsPrivacyPolicy();
myAppAgreesToThisPolicyPleaseEnableContactAPI(ppol);

We would document that calling that call legally required the app
developer to respect the machine-readable privacy policy returned by the
call. If they wrote an app which didn't, the user could sue! If
click-through agreements can be legally binding, then surely we can find
a way to make this legally binding.
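A minimal platform-side sketch of this "API call as contract" idea, fleshing out the two calls above (everything here is hypothetical, including the policy fields):

```javascript
// Hypothetical platform-side sketch of the "API call as contract" idea.
// getContactsPrivacyPolicy() and the agree-call follow the names used in
// the text above; none of this is a real API, and the policy fields are
// purely illustrative.
const contactsPrivacyPolicy = Object.freeze({
  purpose: "display-only",      // machine-readable policy terms
  retention: "none",
  thirdPartySharing: "forbidden",
});

let contactsApiEnabled = false;
const agreementLog = [];        // evidence of the app's agreement

function getContactsPrivacyPolicy() {
  return contactsPrivacyPolicy;
}

function myAppAgreesToThisPolicyPleaseEnableContactAPI(policy) {
  // The app must agree to the exact policy object the platform handed out,
  // so it cannot claim later to have agreed to something weaker.
  if (policy !== contactsPrivacyPolicy) {
    throw new Error("must agree to the policy returned by the platform");
  }
  agreementLog.push({ policy, agreedAt: Date.now() });
  contactsApiEnabled = true;
}
```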


I'm sure this doesn't cover all the possible issues; comments welcome.

Gerv

Robert Kaiser

Aug 24, 2011, 10:53:13 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham schrieb:

> My proposal is that when you install a web application from a website
> over an EV-SSL connection, you ask the user "Do you trust <Company X>?"

There is a problem with this from the beginning. I trust an entity for
certain things and not others - and even more, I trust that one app of
them can access one feature, while another may not.

And that's in addition to the proposed EV policy 1) being dependent on
the CA model, which we know to be seriously flawed, if not broken, by
design (see various problems with CAs we have seen over the recent
years) and 2) catering to the rich major companies while making it hard
for small individuals or non-profits to do things reasonably.

Robert Kaiser

--
Note that any statements of mine - no matter how passionate - are never
meant to be offensive but very often as food for thought or possible
arguments that we as a community should think about. And most of the
time, I even appreciate irony and fun! :)

Robert Accettura

Aug 24, 2011, 11:50:31 AM
to Robert Kaiser, mozilla-d...@lists.mozilla.org
On Wed, Aug 24, 2011 at 10:53 AM, Robert Kaiser <ka...@kairo.at> wrote:
> Gervase Markham schrieb:

>>
>> My proposal is that when you install a web application from a website
>> over an EV-SSL connection, you ask the user "Do you trust <Company X>?"
>
> There is a problem with this from the beginning. I trust an entity for
> certain things and not others - and even more, I trust that one app of them
> can access one feature, while another may not.
>
> And that's in addition of the proposed EV policy 1) being dependent on the
> CA model, which we know to be seriously flawed, if not broken, by design
> (see various problems with CAs we have seen over the recent years) and 2)
> catering to the rich major companies while making it hard for small
> individuals or non-profits to do things reasonably.
>

I'd also argue that companies will eventually learn to scale identity
verification (GoDaddy may already have). Also, prices may not keep up
with inflation, making certificates effectively "cheaper" over time,
and market demand will push in the same direction.

I'm not sure relying on economics is a sound security mechanism.
Secondly, I'm not sure relying on being a verified physical identity
really does that much unless you trust the government creating the
documents behind it. In the US that even comes down to the state
level and can be done largely online. I'm not sure there are any
known fraud rates for that, but I suspect if it's like anything else
it's well above 0%. Politicians likely don't care terribly much since
"business growth" sounds positive.

Lastly there are other identity initiatives going around including
NSTIC[1] in the US which isn't clear if it would be limited to
individuals, or what would even come out of it. Verified identities
in one place could be transferable in some way further altering the
economics.

Summary: I like the technical perspective, but I'm uneasy that this
will be future proof, or anything close to it.


1. http://www.nist.gov/nstic/

Gervase Markham

Aug 24, 2011, 1:34:16 PM
to mozilla-d...@lists.mozilla.org
On 24/08/11 15:53, Robert Kaiser wrote:
> There is a problem with this from the beginning. I trust an entity for
> certain things and not others - and even more, I trust that one app of
> them can access one feature, while another may not.

Sure. I would imagine that each app would present such a dialog; giving
permission for App A from Google doesn't mean you give permission for App B.

> And that's in addition of the proposed EV policy 1) being dependent on
> the CA model, which we know to be seriously flawed, if not broken, by
> design (see various problems with CAs we have seen over the recent
> years)

You have to do a bit better than that :-) Which flaws in particular, and
why do they apply to EV?

> and 2) catering to the rich major companies while making it hard
> for small individuals or non-profits to do things reasonably.

I don't think that's necessarily so, but to the extent it is, it's a
trade-off against the poor UI of multiple hard-to-understand permissions
popups, and the lack of scalability of having a team code-reviewing the
world's web apps.

Looking at my general analysis at the top, what sort of system do you
favour, and which of my four buckets does it fall into?

Gerv

Brian Hernacki (Palm GBU)

Aug 24, 2011, 5:00:36 PM
to mozilla-d...@lists.mozilla.org

On Aug 24, 2011, at 7:53 AM, Robert Kaiser wrote:

> Gervase Markham schrieb:


>> My proposal is that when you install a web application from a website
>> over an EV-SSL connection, you ask the user "Do you trust <Company X>?"
>

> There is a problem with this from the beginning. I trust an entity for
> certain things and not others - and even more, I trust that one app of
> them can access one feature, while another may not.
>

> And that's in addition of the proposed EV policy 1) being dependent on
> the CA model, which we know to be seriously flawed, if not broken, by
> design (see various problems with CAs we have seen over the recent

> years) and 2) catering to the rich major companies while making it hard

> for small individuals or non-profits to do things reasonably.

While you may want a certification/validation system *like* EV, CA, etc., I don't think you want to overload the existing ones, as they serve a different purpose and may have flaws that make them unsuitable.

As we've looked at supporting more expansive APIs to web applications one of the important goals of source validation is to provide a strong enough identity connection to enable remediation. It's reasonable to expect that each platform/product or even user population will want to draw the line on that differently. Whatever scheme is used should support the notion of multiple authorities and the expectation of different validation standards.

--brian

Robert O'Callahan

Aug 24, 2011, 5:33:50 PM
to Gervase Markham, mozilla-d...@lists.mozilla.org
I don't think prompting people to make a trust decision for "company X" or
"application Y" is going to be effective. It doesn't seem different from
other trust prompts that have been ineffective in the past. You're simply
placing a barrier between users and completing an action they have already
initiated, and asking them to click to jump over it.

I don't think relying on the economics of "being able to go after someone"
is going to be effective. It relies on identity verification being effective
worldwide, which it clearly is not. It relies on people with real identities
not being able to just disappear, which is also unreasonable. It relies on
economic arguments that the cost of being a "fall guy", or just
disappearing, or taking some punishment in whatever jurisdiction, exceeds
the rewards of abusing an EV. Those arguments will be very difficult to
verify even if they're plausible, which I don't think they are.

You say:

> We would need to be careful, if we were to use them as part of our
> scheme, not to shift that balance by significantly increasing the
> rewards of getting a false one
>

That's going to be tough, since your proposal would do just that.

Rob
--
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]

Robert O'Callahan

Aug 24, 2011, 5:38:50 PM
to Gervase Markham, mozilla-d...@lists.mozilla.org
We may be able to use identity to improve permissions in other ways.

For example, where we have a passive geolocation-style prompt for permission
for a particular feature, and we have a "remember this decision" checkbox, I
think it would make sense to offer "remember this decision for all Google
applications", since Google can get that effect anyway in most cases by
framing an application from one domain in another application and exchanging
information in their backend, or by just serving multiple applications from
the same domain.
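One way to model that "remember this decision for all Google applications" idea (a hypothetical structure, not an existing browser API) is to let a permission grant be keyed either by origin or by the organization named on the EV cert:

```javascript
// Sketch of permission grants keyed by either origin or EV organization.
// Hypothetical structure, not an existing browser API.
class PermissionStore {
  constructor() {
    this.byOrigin = new Map();
    this.byOrganization = new Map();
  }

  // "Remember this decision" can apply to one origin, or - when the site
  // is served with an EV cert - to every app from that organization.
  grant(feature, { origin, organization, rememberForOrganization = false }) {
    if (rememberForOrganization && organization) {
      this.byOrganization.set(organization + "|" + feature, true);
    } else {
      this.byOrigin.set(origin + "|" + feature, true);
    }
  }

  // Granted if either the origin or the cert's organization was remembered.
  isGranted(feature, { origin, organization }) {
    return this.byOrigin.get(origin + "|" + feature) === true ||
           (organization != null &&
            this.byOrganization.get(organization + "|" + feature) === true);
  }
}
```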

Cesar Oliveira

Aug 24, 2011, 5:56:40 PM
to Brian Hernacki (Palm GBU), mozilla-d...@lists.mozilla.org
On Wed, Aug 24, 2011 at 17:00, Brian Hernacki (Palm GBU) <
Brian.H...@palm.com> wrote:

>
> While you may want a certification/validation system *like* EV,CA,etc I
> don't think you want to overload the existing ones as they serve a different
> purpose and may have flaws that make them unsuitable.
>

Yes. This also makes revocation tricky, because you couldn't use
existing infrastructure (CRL) to do so. A website that does something
malicious over EV-SSL does not necessarily mean their certificate gets
revoked. Or at least, I think that's how it works. Just a thought anyways.

Cesar

Gervase Markham

Aug 25, 2011, 6:58:34 AM
to mozilla-d...@lists.mozilla.org
On 24/08/11 16:50, Robert Accettura wrote:
> I'd also argue that companies will eventually learn to scale identity
> verification criteria (GoDaddy may already have). Also price over
> time may not keep up with inflation, therefore making prices "lower"
> over the course of time. + market demand etc.

Great - the more people who have access to verified identity, the better.

> I'm not sure relying on economics is a sound security mechanism.

I think you are misunderstanding slightly. We are not relying on the
high price of obtaining a real EV certificate as the primary security
mechanism, we are relying on the high price of obtaining one with bad
information in it. This price is related to the level of verification
done, which is defined by the EV standard, and is independent
(hopefully) of CA efficiency.

> Secondly I'm not sure relying on being a verified physical identity
> really does that much unless you trust the government creating the
> documents behind it.

EV certificates also include the country, so people can make that
determination too.

> In the US that even comes down to the state
> level and can be done largely online. I'm not sure there are any
> known fraud rates for that, but I suspect if it's like anything else
> it's well above 0%. Politicians likely don't care terribly much since
> "business growth" sounds positive.

If you think the EV standard can be broken, break it - and the CAB Forum
will patch it :-) Just like security software. But we have a reasonable
level of confidence that the verification required means that breaking
it would be a pretty expensive proposition.

> Lastly there are other identity initiatives going around including
> NSTIC[1] in the US which isn't clear if it would be limited to
> individuals, or what would even come out of it. Verified identities
> in one place could be transferable in some way further altering the
> economics.

I would anticipate, in this case, that people would use their NSTIC
identity to more easily obtain a certificate containing it.

Certificates have other roles in the Internet ecosystem - they provide
an encrypted channel to a known domain name. That part won't go away.

Gerv

Gervase Markham

Aug 25, 2011, 7:00:08 AM
to mozilla-d...@lists.mozilla.org
On 24/08/11 22:00, Brian Hernacki (Palm GBU) wrote:
> While you may want a certification/validation system *like* EV,CA,etc
> I don't think you want to overload the existing ones as they serve a
> different purpose and may have flaws that make them unsuitable.

They serve the purpose of binding an identity to a domain name. And
that's what this scheme needs.

You'll need to be more specific than "they may have flaws which make
them unsuitable". :-)

> As we've looked at supporting more expansive APIs to web applications
> one of the important goals of source validation is to provide a
> strong enough identity connection to enable remediation. It's
> reasonable to expect that each platform/product or even user
> population will want to draw the line on that differently. Whatever
> scheme is used should support the notion of multiple authorities and
> the expectation of different validation standards.

Why do you need different validation standards? Surely "strong enough to
enable remediation" is the single required standard? Or have I
misunderstood you?

Gerv

Gervase Markham

Aug 25, 2011, 7:06:52 AM
to mozilla-d...@lists.mozilla.org
On 24/08/11 22:56, Cesar Oliveira wrote:
> Yes. This also makes revocation tricky, because you couldn't use
> existing infrastructure (CRL) to do so. A website that does something
> malicious over EV-SSL does not necessarily mean their certificate gets
> revoked. Or at least, I think that's how it works. Just a thought anyways.

http://cabforum.org/Guidelines_v1_3.pdf

11.3.2 Investigation

CAs MUST begin investigation of all Certificate Problem Reports within
twenty-four hours of receipt and decide whether revocation or other
appropriate action is warranted based on at least the following criteria:

(1) The nature of the alleged problem;

(2) The number of Certificate Problem Reports received about a
particular EV Certificate or Web site;

(3) The identity of the complainants (for example, a complaint from a
law enforcement official that a Web site is engaged in illegal
activities should carry more weight than a complaint from a consumer
alleging that they didn't receive the goods they ordered); and

(4) Relevant legislation.


So it is at the CA's discretion, but there is certainly a capability for
revocation based on illegal activity, and the CA's Terms of Use will
probably prohibit it (and that allows them to revoke also).

Gerv

Gervase Markham

Aug 25, 2011, 7:13:01 AM
to mozilla-d...@lists.mozilla.org
Hi Robert,

Some reasonable points :-) Let me continue to argue my corner:

On 24/08/11 22:33, Robert O'Callahan wrote:
> I don't think prompting people to make a trust decision for "company X" or
> "application Y" is going to be effective. It doesn't seem different from
> other trust prompts that have been ineffective in the past. You're simply
> placing a barrier between users and completing an action they have already
> initiated, and asking them to click to jump over it.

Perhaps you are right. Perhaps we don't need the prompt; we just need to
store the identity for future redress.

> I don't think relying on the economics of "being able to go after someone"
> is going to be effective. It relies on identity verification being effective
> worldwide, which it clearly is not.

The EV Guidelines were written to be applicable worldwide, with input
from many different countries. But if a CA cannot sufficiently verify an
identity in a country, it won't issue EV certificates to applicants from
that country.

> It relies on people with real identities
> not being able to just disappear, which is also unreasonable.

Well, it depends. If they've managed to use their illegal actions to
clear $10M, then disappearing is probably an option they could take. If
all they can do is make a few thousand from credit card number abuse,
that doesn't really finance a Caribbean island lifestyle.

> It relies on
> economic arguments the that the cost of being a "fall guy", or just
> disappearing, or taking some punishment in whatever jurisdiction, exceeds
> the rewards of abusing an EV.

Legal systems and courts do tend to try and see to it that crime doesn't
pay. Clearly, they are not 100% successful, as we still have crime :-)
But I think it's at least worth looking into whether we can set up the
incentives right.

>> We would need to be careful, if we were to use them as part of our
>> scheme, not to shift that balance by significantly increasing the
>> rewards of getting a false one
>>
> That's going to be tough, since your proposal would do just that.

Well, to say that, we'd need to do a threat analysis and work out how
criminals could cash in, and how much they could cash in, from having
full access to your device and those of N other people who also
installed the app.

Gerv

Robert Accettura

Aug 25, 2011, 9:42:24 AM
to Gervase Markham, mozilla-d...@lists.mozilla.org
On Thu, Aug 25, 2011 at 6:58 AM, Gervase Markham <ge...@mozilla.org> wrote:
> On 24/08/11 16:50, Robert Accettura wrote:
>> I'd also argue that companies will eventually learn to scale identity
>> verification criteria (GoDaddy may already have).  Also price over
>> time may not keep up with inflation, therefore making prices "lower"
>> over the course of time.  + market demand etc.
>
> Great - the more people who have access to verified identity, the better.
>
>> I'm not sure relying on economics is a sound security mechanism.
>
> I think you are misunderstanding slightly. We are not relying on the
> high price of obtaining a real EV certificate as the primary security
> mechanism, we are relying on the high price of obtaining one with bad
> information in it. This price is related to the level of verification
> done, which is defined by the EV standard, and is independent
> (hopefully) of CA efficiency.
>

I don't think that can be assumed. Prices will drop over time,
malicious people will get more clever, and the EV standard is largely
static, or at least moving at a much slower rate than the criminals.
The odds are against them.

>> Secondly I'm not sure relying on being a verified physical identity
>> really does that much unless you trust the government creating the
>> documents behind it.
>

> EV certificates also include the country, so people can make that
> determination too.
>

This might be useful for many, but at least in America, this property
has no value in helping people make decisions as geographic ignorance
is legendary...

1/5 can't find the US on a map:
http://www.youtube.com/watch?v=lj3iNxZ8Dww (ignore her answer)

- Only 37% of young Americans can find Iraq on a map—though U.S.
troops have been there since 2003.
- 6 in 10 young Americans don't speak a foreign language fluently.
- 20% of young Americans think Sudan is in Asia. (It's the largest
country in Africa.)
- 48% of young Americans believe the majority population in India is
Muslim. (It's Hindu—by a landslide.)
- Half of young Americans can't find New York on a map.

Per: http://www.nationalgeographic.com/roper2006/findings.html

I could go on...

>> In the US that even comes down to the state
>> level and can be done largely online.  I'm not sure there are any
>> known fraud rates for that, but I suspect if it's like anything else
>> it's well above 0%.  Politicians likely don't care terribly much since
>> "business growth" sounds positive.
>

> If you think the EV standard can be broken, break it - and the CAB Forum
> will patch it :-) Just like security software. But we have a reasonable
> level of confidence that the verification requires means that breaking
> it would be a pretty expensive proposition.
>

I also question some of the vetting requirements, e.g.:
The Private Organization must not be listed on any government denial
list or prohibited list (e.g., trade embargo) under the laws of the
CA's jurisdiction.

This is actually pretty oppressive as someone in an embargoed country
(say Libya until 2003) would likely need to use a CA within their
dictatorship government, or someone who recognizes them. That CA
however likely couldn't legally be included in a browser like Firefox.

Granted, that's a gripe against EV standards and US law and not
specifically your proposal. I should make that perfectly clear.

I do think a standard should be as open as possible to allow for free
flowing information. I think history has shown how important the
internet is to personal freedoms. We should be doing what we can to
keep the internet and web applications as open to as many people as
possible and ensuring the ability for governments to regulate or
restrict information is kept to an absolute minimum (read: 0). IMHO
this is part of principle 2 of the Mozilla manifesto: "The Internet is
a global public resource that must remain open and accessible."

Giving governments the ability to control a whitelist/blacklist just
seems counter to the principles that have guided the internet to date.

>> Lastly there are other identity initiatives going around including
>> NSTIC[1] in the US which isn't clear if it would be limited to
>> individuals, or what would even come out of it.   Verified identities
>> in one place could be transferable in some way further altering the
>> economics.
>

> I would anticipate, in this case, that people would use their NSTIC
> identity to more easily obtain a certificate containing it.
>
> Certificates have other roles in the Internet ecosystem - they provide
> an encrypted channel to a known domain name. That part won't go away.
>

Agreed. Certificates do have a role, but I don't think they should be
gatekeepers unless we have absolute trust in governments and CAs.

--
Robert Accettura
rob...@accettura.com

Robert Kaiser

Aug 25, 2011, 10:07:56 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham schrieb:

> On 24/08/11 15:53, Robert Kaiser wrote:
>> There is a problem with this from the beginning. I trust an entity for
>> certain things and not others - and even more, I trust that one app of
>> them can access one feature, while another may not.
>
> Sure. I would imagine that each app would present such a dialog; giving
> permission for App A from Google doesn't mean you give permission for App B.

Would I need to entrust all my local data, and give up my privacy, to
that company before even being able to try out some of the app's
functionality? That was pointed out by some Planet Mozilla post in
recent months as one of the failures of the Android permission model
that we should not run into - just that in your proposed case, the user
wouldn't even know *what* the app wants to access, it would just be able
to access virtually everything, up to wiretapping all communications and
sending all his private pictures to the company. At least that's how I
understand it.

>> And that's in addition of the proposed EV policy 1) being dependent on
>> the CA model, which we know to be seriously flawed, if not broken, by
>> design (see various problems with CAs we have seen over the recent
>> years)
>
> You have to do a bit better than that :-) Which flaws in particular, and
> why do they apply to EV?

For one thing, the CA anyone uses is a SPOF for them, and the number of
companies able to grant valid EV certs is very small and therefore
easily corruptible. And then, if you manage to pressure them or break
into them (the former for national agencies, the latter for black hats,
probably with the same effects), they are too large to fail, so the
argument that "we can throw them out of validity" gets smashed by that.
Others have already explained that it's not that hard to get a false
identity if you want to. And given that EV issuers will be pressured to
keep prices low - even more so if we also build on those certs here -
they will only do preliminary checks, at the limit of what's enough for
EV (or even below, hoping they don't get caught). And even then, what
does "Do you trust AKtiv Software, AT?" mean? How is some user out there
supposed to know whether that's a trustworthy company? (Well, I know
they are, as it's my dad's company, but who else does?)

Robert Kaiser

Aug 25, 2011, 10:11:31 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham schrieb:

> So it is at the CA's discretion

And it's their money, so they will let a lot pass. Or at least some CAs
will, and we can't pull the CAs from validity because they are too large
to fail, we can't block thousands or even hundreds of apps just because
one does wrong stuff.

Also, the user said (s)he "trusts" them, so why is wiretapping all
communication or sending all their photos to Flickr without asking
illegal in the first place?

Robert Accettura

Aug 25, 2011, 10:48:12 AM
to Robert Kaiser, mozilla-d...@lists.mozilla.org
On Thu, Aug 25, 2011 at 10:11 AM, Robert Kaiser <ka...@kairo.at> wrote:
> Gervase Markham schrieb:
>>
>> So it is at the CA's discretion
>
> And it's their money, so they will let a lot pass. Or at least some CAs
> will, and we can't pull the CAs from validity because they are too large to
> fail, we can't block thousands or even hundreds of apps just because one
> does wrong stuff.
>
> Also, the user said (s)he "trusts" them, so why is wiretapping all
> communication or sending all their photos to Flickr without asking illegal
> in the first place?
>

This raises an interesting point. If a CA used by Facebook and Google
were to start giving out certs to invalid individuals... would Mozilla
revoke? AFAIK there's no clear policy other than the typical "reserve
the right"[1].

That's also beyond webapi (it impacts Firefox as it ships today), but
still worth questioning before tying more things into this policy. Do
big CA's have anything to fear?

1. https://www.mozilla.org/projects/security/certs/policy/EnforcementPolicy.html

Robert Kaiser

Aug 25, 2011, 3:01:14 PM
to mozilla-d...@lists.mozilla.org
Robert Accettura wrote:

> This raises an interesting point. If a CA used by Facebook and Google
> were to start giving out certs to invalid individuals... would Mozilla
> revoke? AFAIK there's no clear policy other than the typical "reserve
> the right"[1].

We already touched that question briefly in s-g (but IIRC noted that
this should be discussed in public) when the Comodo hacking case came
up. It's not clear if we would even have been able to pull the Comodo
root given how much of the web depends on it - and there are some more
(also given that I just learned that Equifax is now owned by GeoTrust
which together with Verisign seems to belong to Symantec) that are just
too large to fail. Actually, the real question is where the threshold of
pain/risk lies at which we actually _can_ do something. But I guess this all
ultimately belongs to a different forum - just its outcome is quite
relevant to this discussion.

> That's also beyond webapi (it impacts Firefox as it ships today), but
> still worth questioning before tying more things into this policy. Do
> big CAs have anything to fear?

That's the question, and IMHO shows once more why the rather
centralistic CA model is problematic.

Brian Hernacki (Palm GBU)

Aug 25, 2011, 3:13:46 PM
to Gervase Markham, mozilla-d...@lists.mozilla.org

On Aug 25, 2011, at 4:00 AM, Gervase Markham wrote:

> On 24/08/11 22:00, Brian Hernacki (Palm GBU) wrote:

>> While you may want a certification/validation system *like* EV,CA,etc
>> I don't think you want to overload the existing ones as they serve a
>> different purpose and may have flaws that make them unsuitable.
>

> They serve the purpose of binding an identity to a domain name. And
> that's what this scheme needs.
>
> You'll need to be more specific than "they may have flaws which make
> them unsuitable". :-)

I was thinking more of a trust relationship flaw. Individual CAs may not operate in a manner consistent with how the owner of the platform or users desire.

>> As we've looked at supporting more expansive APIs to web applications
>> one of the important goals of source validation is to provide a
>> strong enough identity connection to enable remediation. It's
>> reasonable to expect that each platform/product or even user
>> population will want to draw the line on that differently. Whatever
>> scheme is used should support the notion of multiple authorities and
>> the expectation of different validation standards.
>

> Why do you need different validation standards? Surely "strong enough to
> enable remediation" is the single required standard? Or have I
> misunderstood you?


Perhaps the use of "standards" was vague. I don't mean different technical standards to communicate the information. Those I think should be common. But the nature of the validation of the entity (e.g. company or developer) providing the application will naturally vary. Different markets and uses will require different levels of validation of the source. Different geo-regions also have different capabilities to validate.

In some cases you might only care that you have reliable contact information for the developer. In other cases you may want to ensure stronger identity to enable legal remediation. Consider the differences now in validation within the various mobile platform developer environments.

--brian

Gervase Markham

Aug 26, 2011, 7:55:52 AM
to mozilla-d...@lists.mozilla.org
On 25/08/11 15:48, Robert Accettura wrote:
> This raises an interesting point. If a CA used by Facebook and Google
> were to start giving out certs to invalid individuals... would Mozilla
> revoke? AFAIK there's no clear policy other than the typical "reserve
> the right"[1].

Once we have the technical infrastructure in place (which is being
worked on), we would probably put a date-based revocation in - so
existing certs would continue to work, and those issued after the CA
decided to throw away its multi-million dollar business (a somewhat
unlikely event) would not.
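[The date-based distrust described here can be sketched as a simple check. This is purely illustrative: the function name and the cut-off date are invented for the example, not actual Mozilla policy.]

```python
from datetime import date

# Hypothetical sketch of date-based distrust: certificates the affected CA
# issued on or before the cut-off keep working, while anything issued
# afterwards is rejected. The cut-off date here is illustrative only.
DISTRUST_AFTER = date(2011, 9, 1)

def cert_still_trusted(issued_on: date,
                       distrust_after: date = DISTRUST_AFTER) -> bool:
    """Trust a cert from the sanctioned CA only if issued by the cut-off."""
    return issued_on <= distrust_after
```

Under this sketch, a cert the CA issued before the cut-off continues to validate, and one issued after it is refused, without revoking the whole root.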

Gerv

Gervase Markham

Aug 26, 2011, 8:03:38 AM
to mozilla-d...@lists.mozilla.org
On 25/08/11 15:07, Robert Kaiser wrote:
> Would I need to trust all my local data and loss of privacy to that
> company before being able to even try out something of the app
> functionality?

See my other suggestion in the same message about making app makers
legally agree to respect your privacy.

> That was pointed out by some Planet Mozilla post in
> recent months as one of the failures of the Android permission model
> that we should not run into - just that in your proposed case, the user
> wouldn't even know *what* the app wants to access, it would just be able
> to access virtually everything, up to wiretapping all communications and
> sending all his private pictures to the company. At least that's how I
> understand it.

I'm not sure how you would solve that problem. If an app needs access to
your contacts in order to do anything useful (say it's an app which
tries to locate the social media presence of each of your friends), then
how can you "try out some of the app functionality" without giving it
permission to read your contacts? This is a problem whatever your
permissions model is.

>> You have to do a bit better than that :-) Which flaws in particular, and
>> why do they apply to EV?
>
> For one one thing, that the CA anyone uses is a SPOF for them,

In what way?

> that the
> amount of companies being able to grant valid EV certs is very small and
> therefore easily corruptable

I don't understand your point here. You are saying that if there were
more companies able to issue EV, they would be less corruptible?

> and then, if you manage to pressure them
> or break into them (the former for national agencies, the latter for
> black hats, probably with the same effects, though), they are too large
> to fail, so the argument that "we can throw them out of validity" gets
> smashed by that.

See my other message in this post. But if a CA is broken into and some
bad certs are issued, we don't necessarily throw out the CA, we revoke
the certs.

> Others have already explained that it's not that hard to get a false
> identity if you want to,

Please show the flaws in the EV vetting standard, perhaps by explaining
the steps you would go through to successfully acquire a false EV cert.

> and given that EV issuers will be pressured to
> have low prices, even more if we also build on those here, they will
> only do preliminary checks,

You make this assertion without evidence.

> at the limit of what's enough for EV (or
> even below and hope they don't get caught).

They are audited in various ways to make sure the checks are done, and
a failed audit is the end of their business. Would it really be worth
it? We see no evidence of this happening.

> And even then, what does "Do
> you trust AKtiv Software, AT?" mean? How should you know if that's a
> trustworthy company as some user out there? (Well, I know they are, as
> it's my dad's company, but who else does?)

Good question. But let's take it out of the webapps realm, as it's
exactly the same problem. Why should I download and install some binary
software on my PC from AKtiv Software, AT?

Gerv

Gervase Markham

Aug 26, 2011, 8:08:20 AM
to mozilla-d...@lists.mozilla.org
On 25/08/11 14:42, Robert Accettura wrote:
> I don't think that can be assumed. Prices will drop over time,
> malicious people get more clever, EV standard is largely static or at
> least moving at a much slower rate than the criminals. Odds are
> against them.

That's an assertion without evidence. The EV standard moves at the rate
necessary to eliminate any attacks. No successful attacks so far -> not
much change. But the CAB Forum still exists, and is ready to make
changes should the need arise.

>> EV certificates also include the country, so people can make that
>> determination too.
>
> This might be useful for many, but at least in America, this property
> has no value in helping people make decisions as geographic ignorance
> is legendary...

But isn't "who can I trust?" a non-webapp-specific problem?

We have this problem with websites today. How do I decide to trust "Foo
Sellers, Inc." with my credit card details?

I don't think you will find any websites conducting criminal activity
behind an EV cert. Criminals don't like to be accurately identified, and
have that identity sent to every victim.

>> If you think the EV standard can be broken, break it - and the CAB Forum
>> will patch it :-) Just like security software. But we have a reasonable
>> level of confidence that the verification requires means that breaking
>> it would be a pretty expensive proposition.
>
> I also question some of the vetting requirements.... i.e.:
> The Private Organization must not be listed on any government denial
> list or prohibited list (e.g., trade embargo) under the laws of the
> CA's jurisdiction.

The political implications of this, while interesting, are outside the
scope of the question of whether the EV guidelines are sufficient to
provide a reliable determination of identity.

> Giving governments the ability to control a whitelist/blacklist just
> seems counter to the principles that have guided the internet to date.

Governments will do what governments do, and those under their
jurisdiction have to respect that. That's not something we can change
when defining the security model for web apps.

Gerv

Gervase Markham

Aug 26, 2011, 8:10:07 AM
to mozilla-d...@lists.mozilla.org
On 25/08/11 20:13, Brian Hernacki (Palm GBU) wrote:
> I was thinking more of a trust relationship flaw. Individual CAs may
> not operate in a manner consistent with how the owner of the platform
> or users desire.

That is why I am suggesting using the documented, industry-wide EV
standard rather than just "any cert".

>>> As we've looked at supporting more expansive APIs to web
>>> applications one of the important goals of source validation is
>>> to provide a strong enough identity connection to enable
>>> remediation. It's reasonable to expect that each platform/product
>>> or even user population will want to draw the line on that
>>> differently. Whatever scheme is used should support the notion of
>>> multiple authorities and the expectation of different validation
>>> standards.
>>
>> Why do you need different validation standards? Surely "strong
>> enough to enable remediation" is the single required standard? Or
>> have I misunderstood you?
>
> Perhaps the use of "standards" was vague. I don't mean different
> technical standards to communicate the information. Those I think
> should be common. But the nature of the validation of the entity
> (e.g. company or developer) providing the application will naturally
> vary. Different markets and uses will require different levels of
> validation of the source.

I don't think I agree, so can you give an example?

> Different geo-regions also have different
> capabilities to validate.

If a geo-region has no capability for anyone to establish their
identity, I would think twice about doing business with anyone from that
region.

> In some cases you might only care that you have reliable contact
> information for the developer. In other cases you may want to ensure
> stronger identity to enable legal remediation. Consider the
> differences now in validation within the various mobile platform
> developer environments.

Can you expand more on that?

Gerv

Brian Hernacki (Palm GBU)

Aug 26, 2011, 12:28:45 PM
to mozilla-d...@lists.mozilla.org

On Aug 26, 2011, at 5:10 AM, Gervase Markham wrote:

> On 25/08/11 20:13, Brian Hernacki (Palm GBU) wrote:
>> I was thinking more of a trust relationship flaw. Individual CAs may
>> not operate in a manner consistent with how the owner of the platform
>> or users desire.
>

> That is why I am suggesting using the documented, industry-wide EV
> standard rather than just "any cert".

I have a couple of issues with relying on EV certs. First, they create a high barrier due to cost and process that may not be suitable in some cases and may adversely affect the ability of smaller players to participate. Second, I have no trust in the parties that currently issue them. I suppose that can be solved by separating the issue of which authorities I trust from the mechanism used to communicate the identity. Which brings us mostly back to the first point. And that's really my point below.

>> Perhaps the use of "standards" was vague. I don't mean different
>> technical standards to communicate the information. Those I think
>> should be common. But the nature of the validation of the entity
>> (e.g. company or developer) providing the application will naturally
>> vary. Different markets and uses will require different levels of
>> validation of the source.
>

> I don't think I agree, so can you give an example?

So let's get down to the brass tacks then. How do you validate a user? Email address? Drivers license? Credit card? Passport? Do you require them to do it live and in person? Or over the wire? Does it have an associated cost?

Is it the same level of validation needed for access to location services, camera, audio, data storage, file system access, encrypted storage access, etc? Is it the same for all applications?

A one-man development shop making an application that would use location or camera will not want to incur much cost or friction. Consider what they are required to do for Android or iOS today. However an enterprise development company looking to do high sensitivity applications would be willing to pay large fees, provide articles of incorporation, DUNS #s, etc.


>> Different geo-regions also have different
>> capabilities to validate.
>

> If a geo-region has no capability for anyone to establish their
> identity, I would think twice about doing business with anyone from that
> region.

Identity validation capabilities differ widely across the globe. The mechanisms used to validate, the friction, the cost, and the legal structure all vary considerably. While in the US and EU there may be some (though not very strong) means to validate efficiently, other regions like India are more challenging. But our scheme needs to support global use.

>> In some cases you might only care that you have reliable contact
>> information for the developer. In other cases you may want to ensure
>> stronger identity to enable legal remediation. Consider the
>> differences now in validation within the various mobile platform
>> developer environments.
>

> Can you expand more on that?
>

See my above example

--brian


Robert Kaiser

Aug 26, 2011, 5:29:51 PM
to mozilla-d...@lists.mozilla.org
Gervase Markham wrote:

> On 25/08/11 15:07, Robert Kaiser wrote:
>> Would I need to trust all my local data and loss of privacy to that
>> company before being able to even try out something of the app
>> functionality?
>
> See my other suggestion in the same message about making app makers
> legally agree to respect your privacy.

Nice suggestion, but there's no way to guarantee it, or even to have
everybody sign a legal contract, and still keep this the open,
decentralized web.

> I'm not sure how you would solve that problem.

Exactly. And that's why the model fails. We need an answer here. See
http://weblogs.mozillazine.org/roc/archives/2011/06/permissions_for.html

>> For one one thing, that the CA anyone uses is a SPOF for them,
>
> In what way?

If the CA is in trouble, so are they. And if we allow a new version with
a new cert from a different CA to just take over, we only need one
temporarily corrupted CA to do a lot of damage to the whole system.

> I don't understand your point here. You are saying that if there were
> more companies able to issue EV, they would be less corruptible?

Yes, because if one of those large CAs goes with some bad practice, we
cannot do anything about it anyhow, as they are too big to fail.

> Please show the flaws in the EV vetting standard, perhaps by explaining
> the steps you would go through to successfully acquire a false EV cert.

1) I acquire a false identity from some country I can get it from.
2) I acquire some illegal money, or use the stockpile of it I already have.
3) I guess no problem to get the EV cert from there.

Also, in reality, it doesn't matter, because people install the app
anyhow and grant all privs to me if the app looks cool enough, as my
legal name is not negatively known to them and they get prompted for
trusting some unknown-to-them name with every second app anyhow.

> They are audited in various ways to make sure they checks are done, and
> a failed audit is the end of their business. Would it really be worth
> it? We see no evidence of this happening.

No evidence yet, as EV is still quite young and we have been lucky so
far. The problem is that it would not end their business as we can't let
them fail anyhow. If GeoTrust/Verisign or Comodo would start issuing
non-standard weakly vetted EV certs tomorrow, it would not destroy their
business because we cannot take them down without taking down too much
of the Internet that people would just move to a browser or platform
that still accepts them.

>> And even then, what does "Do
>> you trust AKtiv Software, AT?" mean? How should you know if that's a
>> trustworthy company as some user out there? (Well, I know they are, as
>> it's my dad's company, but who else does?)
>
> Good question. But let's take it out of the webapps realm, as it's
> exactly the same problem. Why should I download and install some binary
> software on my PC from AKtiv Software, AT?

Because you read some email that said they had a cool app you need to
try and that makes you see all ponies all around.

Robert O'Callahan

Aug 28, 2011, 1:29:03 AM
to Gervase Markham, mozilla-d...@lists.mozilla.org
On Thu, Aug 25, 2011 at 11:13 PM, Gervase Markham <ge...@mozilla.org> wrote:

> Some reasonable points :-) Let me continue to argue my corner:
>

Thanks. Sure :-).


> On 24/08/11 22:33, Robert O'Callahan wrote:

> > I don't think relying on the economics of "being able to go after
> someone"
> > is going to be effective. It relies on identity verification being
> effective
> > worldwide, which it clearly is not.
>

> The EV Guidelines were written to be applicable worldwide, with input
> from many different countries. But if a CA cannot sufficiently verify an
> identity in a country, it won't issue EV certificates to applicants from
> that country.
>

That would put us in the unfortunate position of saying that if you're in
that country, you can't deploy full-fledged Web apps. Conceivably that's
"just the way it has to be", but it certainly grates against our mission.

> It relies on people with real identities
> > not being able to just disappear, which is also unreasonable.
>

> Well, it depends. If they've managed to use their illegal actions to
> clear $10M, then disappearing is probably an option they could take. If
> all they can do is make a few thousand from credit card number abuse,
> that doesn't really finance a Caribbean island lifestyle.
>

You don't have to disappear to the Caribbean. Cases of people disappearing
and assuming false identities are still heard of in my part of the world, at
least.

>> We would need to be careful, if we were to use them as part of our
> >> scheme, not to shift that balance by significantly increasing the
> >> rewards of getting a false one
> >>
>
> > That's going to be tough, since your proposal would do just that.
>

> Well, to say that we need to do a threat analysis, and work out how
> criminals could cash in, and how much they could cash in, from having
> full access to your device and those of N other people who also
> installed the app.
>

It's just such a dynamic argument. You can do an analysis this year, under
some set of assumptions, and even if you get it right, all kinds of things
could change in the future and potentially destabilize your entire model.

While I wouldn't rule out this approach entirely, I would want to exhaust
every other possibility first.

Gervase Markham

Aug 29, 2011, 6:11:02 AM
to mozilla-d...@lists.mozilla.org
On 26/08/11 22:29, Robert Kaiser wrote:
>> See my other suggestion in the same message about making app makers
>> legally agree to respect your privacy.
>
> Nice suggestion, but there's no way to guarantee it, or even to have
> everybody sign a legal contract, and still keep this the open,
> decentralized web.

Did you actually read my proposal?

A contract is not just a piece of paper that you sign. You can signify
acceptance of a contract in a number of ways, including verbally. Why
not by having your code call a certain API?

This would be good for e.g. the Facebooks of this world, who are
legitimate businesses but have an interest in violating your privacy.
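[A minimal sketch of the "acceptance by API call" idea discussed here. Everything below is hypothetical: none of these class or method names exist in any real WebAPI; they just illustrate how a platform could treat an explicit call as assent.]

```python
# Hypothetical sketch: the platform records an explicit API call by the
# app as its assent to the privacy terms, and only enables privileged
# APIs for apps that have assented. All names are invented.
class AppPlatform:
    def __init__(self) -> None:
        self._accepted: set[str] = set()

    def accept_privacy_terms(self, app_id: str) -> None:
        # The app calling this signifies contract acceptance, much like
        # clicking "I agree", but in a form the platform can check.
        self._accepted.add(app_id)

    def may_use_privileged_api(self, app_id: str) -> bool:
        # Privileged access (contacts, camera, ...) is gated on assent.
        return app_id in self._accepted
```

The point of the sketch is only that acceptance becomes machine-checkable: the platform can refuse privileged APIs to any app that never made the call.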

>> I'm not sure how you would solve that problem.
>
> Exactly. And that's why the model fails. We need an answer here. See
> http://weblogs.mozillazine.org/roc/archives/2011/06/permissions_for.html

My point is, I'm not sure how you would solve the problem, as you state
it, in _any_ model.

>>> For one one thing, that the CA anyone uses is a SPOF for them,
>>
>> In what way?
>
> If the CA is in trouble, so are they. And if we allow a new version with
> a new cert from a different CA to just take over, we only need one
> temporarily corrupted CA to do a lot of damage to the whole system.

Your predictions of doom need to come with more detail :-)

Just like with a website, a CA is not a SPOF - you can switch CAs. And,
when we have CAA, you will be able to specify a small set of CAs which
are permitted to issue certs for your domain, avoiding the need for
single-CA locking (which might be a SPOF) but also avoiding the risk of
J. Random CA issuing a cert for your domain.
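[The CAA mechanism mentioned here (at the time an IETF draft, later standardized as RFC 6844) comes down to a simple issuance check. A rough sketch of the core decision, with the DNS record representation simplified to plain strings; real CAA records carry flags and tags as well.]

```python
# Rough sketch of the CAA issuance rule: if a domain publishes no CAA
# "issue" records, any CA may issue for it; if it publishes some, only
# the CAs named in them may. Record format simplified for illustration.
def ca_may_issue(issue_records: list[str], ca_domain: str) -> bool:
    if not issue_records:
        return True  # no policy published: any CA may issue
    return ca_domain in issue_records
```

So a site could pin itself to a small set of CAs: publishing issue records naming only "goodca.example" (an invented name) would stop J. Random CA from legitimately issuing for that domain, while still leaving more than one permitted CA.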

>> I don't understand your point here. You are saying that if there were
>> more companies able to issue EV, they would be less corruptible?
>
> Yes, because if one of those large CAs goes with some bad practice, we
> cannot do anything about it anyhow, as they are too big to fail.

Like I said, that is not true. We will soon be able to issue a
time-based revocation.

I would suggest that you read up on what has been done and what we are
doing with the CA model, before listing a whole load of criticisms which
have been or are being addressed.

>> Please show the flaws in the EV vetting standard, perhaps by explaining
>> the steps you would go through to successfully acquire a false EV cert.
>
> 1) I acquire a false identity from some country I can get it from.
> 2) I acquire some illegal money, or use the stockpile of it I already have.
> 3) I guess no problem to get the EV cert from there.

Sorry, you need to do better than that. There is one big obvious thing
in what you have said which makes it clear that you haven't even read
the EV standard at all. (Read it and you'll soon find out what that
obvious thing is.)

Kairo, I have a lot of respect for you, but it is difficult to have a
design discussion with someone who refuses to read up on the details of
specific proposals, and instead argues based on out-of-date generalities.

>> Good question. But let's take it out of the webapps realm, as it's
>> exactly the same problem. Why should I download and install some binary
>> software on my PC from AKtiv Software, AT?
>
> Because you read some email that said they had a cool app you need to
> try and that makes you see all ponies all around.

So you are saying that I should tell everyone never to download and
install software that comes from your Dad's company?

Gerv

Gervase Markham

Aug 29, 2011, 6:14:38 AM
to mozilla-d...@lists.mozilla.org
On 26/08/11 17:28, Brian Hernacki (Palm GBU) wrote:
> I have a couple of issues with relying on EV certs. First, they create
> a high barrier due to cost and process that may not be suitable in some
> cases and may adversely affect the ability of smaller players to
> participate.

Perhaps you would care to engage with the points I raised on this issue?

>>> Perhaps the use of "standards" was vague. I don't mean different
>>> technical standards to communicate the information. Those I
>>> think should be common. But the nature of the validation of the
>>> entity (e.g. company or developer) providing the application will
>>> naturally vary. Different markets and uses will require different
>>> levels of validation of the source.
>>
>> I don't think I agree, so can you give an example?
>
> So let's get down to the brass tacks then. How do you validate a
> user? Email address? Drivers license? Credit card? Passport? Do you
> require them to do it live and in person? Or over the wire? Does it
> have an associated cost?

Read the EV document.

> Is it the same level of validation needed for access to location
> services, camera, audio, data storage, file system access, encrypted
> storage access, etc? Is it the same for all applications?

For those APIs which need any restrictions at all (and I agree we should
build as many as possible to not need them), then yes, because it's
about establishing identity to a certain standard, and knowing someone's
identity is an incentive for them to behave well.

>> If a geo-region has no capability for anyone to establish their
>> identity, I would think twice about doing business with anyone from
>> that region.
>
> Identity validation capabilities differ widely across the globe. The
> mechanisms used to validate, the friction, the cost, and the legal
> structure all vary considerably. While in the US and EU there may be
> some (though not very strong) means to validate efficiently, other
> regions like India are more challenging. But our scheme needs to
> support global use.

Your argument seems to be: "It's very hard to properly identify someone
in e.g. India. And yet I want to run the code they send me (even though
I don't know who they are), and your scheme would stop me."

Er, isn't that a good thing, that it stops you running code you don't
know the provenance of?

Gerv

Gervase Markham

Aug 29, 2011, 6:19:09 AM
to mozilla-d...@lists.mozilla.org
On 28/08/11 06:29, Robert O'Callahan wrote:
> That would put us in the unfortunate position of saying that if you're in
> that country, you can't deploy full-fledged Web apps. Conceivably that's
> "just the way it has to be", but it certainly grates against our mission.

Well, you can, but you get warning dialogs.

Also, it wouldn't be as bad as all that. Remember, this model allows
delegated trust. If I see lots of app developers in Bhutan suffering
because there's no trust infrastructure, and I (as a foreigner) know
about Bhutan and know how to reliably identify people, I can get an EV
cert, start a hosting service, and validate them and then host their apps.

>> Well, it depends. If they've managed to use their illegal actions to
>> clear $10M, then disappearing is probably an option they could take. If
>> all they can do is make a few thousand from credit card number abuse,
>> that doesn't really finance a Caribbean island lifestyle.
>
> You don't have to disappear to the Caribbean. Cases of people disappearing
> and assuming false identities are still heard of in my part of the world, at
> least.

But it is true that you need to have made a certain amount of money to
make it worthwhile. I mean, no criminal is going to make the plan:

1) Spend $50,000 fooling the EV system to get a cert
2) Make $50,000 defrauding people before it's revoked
3) Spend lots of money assuming a false identity because now everyone
has associated my old one with criminality
4) ???
5) Profit!

>> Well, to say that we need to do a threat analysis, and work out how
>> criminals could cash in, and how much they could cash in, from having
>> full access to your device and those of N other people who also
>> installed the app.
>
> It's just such a dynamic argument. You can do an analysis this year, under
> some set of assumptions, and even if you get it right, all kinds of things
> could change in the future and potentially destabilize your entire model.
>
> While I wouldn't rule out this approach entirely, I would want to exhaust
> every other possibility first.

What do you think of my initial analysis dividing solution types into 4
buckets? Do you think it covers all the ground, or did I miss something?

Gerv


Robert Kaiser

Aug 29, 2011, 9:55:32 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham wrote:

> So you are saying that I should tell everyone never to download and
> install software that comes from your Dad's company?

I doubt they will ever produce a web app that could be interesting for
you. But I was just taking them as an example. You can take any company
that you don't know but creates an app that someone has told you to be
"so cool" or even "awesome".

Robert Kaiser

Aug 29, 2011, 9:57:51 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham wrote:
> 4) ???
> 5) Profit!

That part at least works for a lot of people. ;-)

Robert Kaiser

Aug 29, 2011, 10:07:37 AM
to mozilla-d...@lists.mozilla.org
Gervase Markham wrote:

> Er, isn't that a good thing, that it stops you running code you don't
> know the provenance of?

But it doesn't for the normal user, as someone told him this was good,
or this app just looked good for him, and he would accept any billboard
to first get into it, esp. one that doesn't even tell him what the app
needs access to (but as the Android case shows well, even that).

Robin Berjon

Aug 29, 2011, 10:22:05 AM
to Gervase Markham, mozilla-d...@lists.mozilla.org
On Aug 29, 2011, at 12:14 , Gervase Markham wrote:
> Er, isn't that a good thing, that it stops you running code you don't
> know the provenance of?

Please don't hit me, but in terms of the information provided to the user to make her decision whether or not to run a given app, this proposal seems rather similar to the ActiveX security model, in which users would choose to "trust software from X". It might be interesting to see a postmortem of how well that worked (a quick search didn't bring one up, but I might well have been looking for the wrong stuff).

--
Robin Berjon - http://berjon.com/ - @robinberjon

Brian Hernacki (Palm GBU)

Aug 30, 2011, 12:21:53 AM
to mozilla-d...@lists.mozilla.org

> > So let's get down to the brass tacks then. How do you validate a
> > user? Email address? Drivers license? Credit card? Passport? Do you
> > require them to do it live and in person? Or over the wire? Does it
> > have an associated cost?

> Read the EV document.

EV always seemed very focused on business and government entities, but if there is accommodation for over-the-wire verification of an individual that's very, very cheap and suitable for students, garage devs, etc., please point me at it, as I'm not aware of it.

Of course, Comodo and DigiNotar are listed as EV issuers, so I'm not sure I'd trust them anyway. But as I said, mechanism is different from trust policy.

> Your argument seems to be: "It's very hard to properly identify someone
> in e.g. India. And yet I want to run the code they send me (even though
> I don't know who they are), and your scheme would stop me."

Whatever mechanism is chosen needs to work as close to globally as possible. I may choose not to trust an application from India, but people in India may have different opinions.

A good design would separate the mechanism used to communicate from the trust policy and allow the platform and/or user to determine the policy. We will however need a mechanism that accommodates multiple authorities to assert on behalf of a developer to allow an application to reach non-overlapping trust configurations.

--brian

Gervase Markham

Aug 30, 2011, 9:38:36 AM
to mozilla-d...@lists.mozilla.org
On 29/08/11 15:22, Robin Berjon wrote:
> Please don't hit me, but in terms of the information provided to the
> user to make her decision whether or not to run a given app, this
> proposal seems rather similar to the ActiveX security model, in which
> users would choose to "trust software from X". It might be interesting
> to see a postmortem of how well that worked (a quick search didn't
> bring one up, but I might well have been looking for the wrong stuff).

Not hitting you at all; that's a reasonable comparison (although we
might do different things with defaults, and be able to present more
information). I would be very interested to hear how well this worked.

Microsoft's technology for assigning reputation to code (SmartScreen) is
also partly digital certificate based. It also has a crowdsourced
reputation component, but this is rather privacy-violating.

Gerv


Gervase Markham

Aug 30, 2011, 9:41:22 AM
to mozilla-d...@lists.mozilla.org
On 30/08/11 05:21, Brian Hernacki (Palm GBU) wrote:
> EV always seemed very focused on business and government entities,
> but if there is accommodation for over-the-wire verification of an
> individual that's very cheap and suitable for students, garage
> devs, etc., please point me at it, as I'm not aware of it.

Is there such a thing as cheap and good verification of individual
identity?

> Whatever mechanism is chosen needs to work as close to globally as
> possible. I may choose not to trust an application from India, but
> people in India may have different opinions.

Why does living in India or being Indian make a particular Indian more
trustworthy?

> A good design would separate the mechanism used to communicate from
> the trust policy and allow the platform and/or user to determine the
> policy.

I'm not sure I quite understand how that works. Can you give a
hypothetical example?

> We will however need a mechanism that accommodates multiple
> authorities to assert on behalf of a developer to allow an
> application to reach non-overlapping trust configurations.

Multiple authorities can assert on behalf of a developer if the
developer is happy to use one URL per authority for their app.

Gerv

Robert O'Callahan

Aug 30, 2011, 8:12:51 PM
to Gervase Markham, mozilla-d...@lists.mozilla.org
On Mon, Aug 29, 2011 at 10:19 PM, Gervase Markham <ge...@mozilla.org> wrote:

> On 28/08/11 06:29, Robert O'Callahan wrote:

> > That would put us in the unfortunate position of saying that if you're in
> > that country, you can't deploy full-fledged Web apps. Conceivably that's
> > "just the way it has to be", but it certainly grates against our mission.
>

> Well, you can, but you get warning dialogs.
>

... which suck.

> Also, it wouldn't be as bad as all that. Remember, this model allows
> delegated trust. If I see lots of app developers in Bhutan suffering
> because there's no trust infrastructure, and I (as a foreigner) know
> about Bhutan and know how to reliably identify people, I can get an EV
> cert, start a hosting service, and validate them and then host their apps.
>

Yeah, but to some extent you're going to have to rely on national
infrastructure for legally pursuing people. I just don't see how this is
going to work in an environment with corrupt law enforcement.

> >> Well, it depends. If they've managed to use their illegal actions to
> >> clear $10M, then disappearing is probably an option they could take. If
> >> all they can do is make a few thousand from credit card number abuse,
> >> that doesn't really finance a Caribbean island lifestyle.
> >
> > You don't have to disappear to the Caribbean. Cases of people
> disappearing
> > and assuming false identities are still heard of in my part of the world,
> at
> > least.
>

> But it is true that you need to have made a certain amount of money to
> make it worthwhile. I mean, no criminal is going to make the plan:
>
> 1) Spend $50,000 fooling the EV system to get a cert
> 2) Make $50,000 defrauding people before it's revoked
> 3) Spend lots of money assuming a false identity because now everyone
> has associated my old one with criminality
> 4) ???
> 5) Profit!
>

OK, but my wild guess is that a $1M reward in step 2 would make it appealing
to a lot of people.

Then you have to ask yourself "what permissions can I grant that will scale
to a maximum of $1M of reward if abused?" I have no idea how to answer that
today, let alone for some arbitrary time into the future.

> What do you think of my initial analysis dividing solution types into 4
> buckets? Do you think it covers all the ground, or did I miss something?
>

Within your buckets, I think bucket 1A is far preferable to the others and
we should scrape to the very bottom of it before trying the others.

I might add
3) Make it cheap/easy to recover from the action being taken
4) Make it difficult for attackers to benefit from their actions

#4 assumes that sheer malice is dwarfed by self-interested actions, which
seems to be the case currently.

Cesar Oliveira

Sep 1, 2011, 11:06:11 AM
to Gervase Markham, mozilla-d...@lists.mozilla.org
On Mon, Aug 29, 2011 at 06:19, Gervase Markham <ge...@mozilla.org> wrote:

>
> What do you think of my initial analysis dividing solution types into 4
> buckets? Do you think it covers all the ground, or did I miss something?
>
>

(I'm throwing in a bunch of notes/ideas just from reading everything)

I wonder if WOT could play a part in alleviating the problem of trust.
That's a whole other thread/lifetime. Someone with a lot more knowledge of
WOT would have to argue for it :)

I think we are focusing hard on EV's shortcomings and saying "we shouldn't
use it because it isn't perfect" (not everyone is saying that; personally
I am not convinced that EV would be the best solution to all permission
problems, but I still think it can play a role). I think it would be more
useful to compare it against the other proposed ideas. Nothing will be
100% effective against all malicious parties, and even getting close
would not necessarily mean the solution is a good idea.

AMO has been startlingly resilient. It has had to code review thousands of
extensions and updates, though I think at one point they had to hire
additional help to deal with backlog. I suppose if we had to handle
orders of magnitude more, you are right that it would not be scalable
without a huge amount of resources. But code review might have a part to
play in this solution.

This might benefit from a multifaceted approach. For example, we can use EV
certs for Telephony/USB/Filesystem access if it makes sense to do so. But
for the Contacts API, I can imagine a scenario where we could use tokens
instead of real data to protect people's privacy, so that reading a contact
is a low security/privacy risk (I am imagining a use case like listing
contacts to send an e-mail using mailto:. This might make other use cases
impossible, like sending an e-mail server-side, in which case a website
would need escalated privileges). Or use a sandbox mode where random
contacts are sent to the webapp, so the user can at least figure out
whether the webapp is maliciously deleting them.

One aspect of EV that I am worried about is revocation. As you mentioned,
it is at the discretion of the CA, and I think the browser needs that power
in extreme cases of abuse where the CA might refuse to act.

Cesar
