_______________________________________________
WG-UMA mailing list
WG-...@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-uma
Paul,
I’m a newbie at UMA, but two cases come to mind where digital signing may come in handy.
The first is for logging purposes. The Host may wish to log all requests it receives (successful or failed), and it needs to distinguish real requests from bogus requests sent by external parties.
The other is sub-function outsourcing, where the Host is in fact outsourcing some functions to another entity (opaque to the Requestor, and even to the AM). This outsourced entity may want to see the original signed request.
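For the logging case, here is a minimal sketch of what a Host might record per request. The function name and fields are illustrative, not from the thread; it assumes the signature arrives as a hex string and that verification itself happens elsewhere:

```python
import hashlib
import json
import time

def log_request(log: list, request_body: bytes, signature_hex: str,
                verified: bool) -> dict:
    """Append one access event to the Host's log.

    The raw signature is kept so it can be re-checked later; the request
    body is stored as a digest to keep the log compact.
    """
    entry = {
        "ts": time.time(),
        "body_sha256": hashlib.sha256(request_body).hexdigest(),
        "signature": signature_hex,
        "verified_at_receipt": verified,  # real vs. bogus, as judged on arrival
    }
    log.append(json.dumps(entry, sort_keys=True))
    return entry
```

Keeping the signature alongside a digest of the signed bytes is what lets the "real vs. bogus" question be revisited after the fact.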
Just some thoughts.
/thomas/
hardjono[at]mit.edu
Hi Paul,
>>> If the issue is that in order to log a message, you'd want to log
>>> the signature along with it—presumably for the purposes of
>>> non-repudiation—which you could verify at a later time,
>>> which would be a problem with bearer token because you
>>> don't want to write the bearer tokens—secrets—in your log,
>>> then we need to understand something: what type of key
>>> infrastructure will be in place to allow signatures in
>>> logs to be verified potentially long after the transaction was logged?
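One way around the bearer-secrets-in-logs problem the quoted passage raises is to log a keyed digest of the token rather than the token itself. A stdlib sketch (function names and salt handling are illustrative, not from the thread):

```python
import hashlib
import hmac
import json
import time

def redact_token(bearer_token: str, log_salt: bytes) -> str:
    # Log a keyed digest instead of the secret: entries made with the same
    # token stay correlatable, but the log never contains the token itself.
    return hmac.new(log_salt, bearer_token.encode(), hashlib.sha256).hexdigest()

def log_access(log: list, bearer_token: str, resource: str,
               log_salt: bytes) -> dict:
    entry = {
        "ts": time.time(),
        "resource": resource,
        "token_digest": redact_token(bearer_token, log_salt),
    }
    log.append(json.dumps(entry, sort_keys=True))
    return entry
```

Note this only protects the secret; unlike an asymmetric signature, it gives the Host no evidence it could show a third party.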
Unfortunately, when it comes to legal disputes (i.e. in court),
all the forensic evidence will come into play.
I’m thinking of the healthcare scenario under the recent HITECH Act.
Consider the Host as a small third-party medical database provider (supposedly
neutral) and the Requester as a Health Insurance Co. (which
wants to know every detail of a person’s health record).
If I was the Host, I would like to have all possible evidence
of access to be logged. So much so, that I would insist that actual
digital signatures be applied (which holds-up in court BTW).
>>> Let's presume public key infrastructure because symmetric keys
>>> will be near impossible for host to use in any non-repudiation
>>> case. Okay, so now what will you log? Well, the message, including
>>> the signature, and presumably including the certificate of the signer.
>>> This certificate was signed by some trusted third-party.
>>> It's unlikely to be that of an industry-trusted third
>>> party (such as Verisign); more likely it will be that of AM.
ok, so I don’t really get this one. I would have thought that
the Host would insist that the Requestor use a cert issued
by a public CA like Verisign. (It’d be nice if the CA was also an AM,
but we can’t rely on that.)
>>> So, host now needs to store the certificate of the issuing AM,
>>> and be able to prove at a later time that it was from the issuing AM.
If a public CA were the issuer, then the Host need only keep
a small number of CA certs. There aren’t that many
industry-grade CAs around; even fewer than the number
of CA certs shipped in browsers :)
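To illustrate the "small number of CA certs" point, a Host could simply pin the fingerprints of the few roots it accepts. A toy sketch (the trust-store contents are placeholders, not real certificates):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(cert_der).hexdigest()

# Placeholder trust store: in practice these would be the DER fingerprints
# of the handful of public-CA roots the Host accepts.
TRUSTED_ROOT_FINGERPRINTS = {
    fingerprint(b"placeholder DER bytes for public CA #1"),
    fingerprint(b"placeholder DER bytes for public CA #2"),
}

def issuer_is_trusted(ca_cert_der: bytes) -> bool:
    return fingerprint(ca_cert_der) in TRUSTED_ROOT_FINGERPRINTS
```

Because the set is tiny and changes rarely, retaining it for years (for later verification of logged requests) is cheap.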
>>> Does this sound like it's getting hard? It does to me.
>>> Seems like a high price to pay so that you can verify long after
>>> the fact the authenticity of logged requests.
If the Host was taken to court and is later penalized on a per-incident
basis (as in HITECH Act), then no this is not a high price for
business survivability. It’d be interesting to hear the opinions of
real-world Hosts.
/thomas/
Hardjono[at]mit.edu
> If I was the Host, I would like to have all possible evidence of
> access to be logged. So much so, that I would insist that actual
> digital signatures be applied (which holds-up in court BTW).
Currently, we do not have a requirement that reads along the lines of
"provide hosts the maximum amount of forensic evidence such that they
can prove their cases in court."
Please consider that my conclusion was based on the published
requirements in the UMA working group; if the requirements change, the
conclusion should likely be revisited. First, I think it's important to
get some buy-in to change the requirements.
So, presuming such a requirement was added, what exactly would such
digital signatures be used to prove in court? What supporting evidence
would be required to prove this? What kinds of added burdens will this
add to all parties to achieve this?
> ok, so I don’t really get this one. I would have thought that the Host
> would insist that the Requestor use a cert issued by a public CA like
> Verisign. (It’d be nice if the CA was also an AM, but we can’t rely on
> that.)
What exactly would Verisign provide assurance of when issuing such a
certificate? How would this be used by AM and host to correlate on this
requester? I agree we can't assume that all AMs are CAs or vice-versa.
> If the Host was taken to court and is later penalized on a
> per-incident basis (as in HITECH Act), then no this is not a high
> price for business survivability. It’d be interesting to hear the
> opinions of real-world Hosts.
Perhaps if the host's liability is particularly high, it would/should invoke
technology beyond the UMA protocol to protect itself, or in some cases
even forgo distributed policy delegation models altogether, as they take
too much access-control authority out of the host's hands.
Also, if the host is subject to statutory per-incident fines for breaches of
security or privacy, it's likely that the host would not accept
that the user is free to choose an arbitrary AM either; the lawyers
would probably conclude that it's too risky, digital signatures or not.
Paul
Since our list of requirements is "emergent" and we sometimes pick up new requirements and even design principles after a need emerges, we might want to get more specific here. Note that requirement R0b (meaning it came from our charter) already talks about "policies and enforceable contract terms", which must be balanced with DP8, "We should avoid adding crypto burdens as part of our simplicity goal".
The answer to Paul's question below might be found in the answer to what sorts of assurance the host will seek that it's safe to defer access decisions to another party (that is, to become a mere policy enforcement point). While some low-sensitivity use cases may make this handover of control easy to do, we've got other use cases where we can anticipate hosts whitelisting trustworthy AMs, and possibly wanting extra assurances that granting access "wasn't their fault". (I'm thinking of custodian and hData...)
Issues like this are one reason why it's important for us to continue the process of vetting our scenarios and deciding their disposition. We can probably expect our requirements doc to grow, if slowly, as a result.
Eve
Eve Maler
e...@xmlgrrl.com
http://www.xmlgrrl.com/blog
Comments inline.
> -----Original Message-----
> From: wg-uma-...@kantarainitiative.org [mailto:wg-uma-
> bou...@kantarainitiative.org] On Behalf Of Paul C. Bryan
> Sent: Friday, April 02, 2010 11:40 AM
> To: 'WG UMA'
> Subject: Re: [WG-UMA] Signed messages
>
> On Fri, 2010-04-02 at 10:54 -0400, Thomas Hardjono wrote:
>
> > If I was the Host, I would like to have all possible evidence of
> > access to be logged. So much so, that I would insist that actual
> > digital signatures be applied (which holds-up in court BTW).
>
> Currently, we do not have a requirement that reads along the lines of
> "provide hosts the maximum amount of forensic evidence such that they
> can prove their cases in court."
I would think that a broader "auditability" should be a requirement for any entity (offering any services today) wishing to garner trust from its users/consumers and partners.
Currently there is already an audit requirement R0d ("Allow an individual to audit and monitor various aspects of access relationships"). This is kind of vague/fuzzy, and I'm not sure how one would implement this requirement.
There is also another proposed requirement, P8 "Access to Audit Logs" ("A Host must inform the AM protecting a particular resource on that Host in a timely way of all successful Requester access events").
[NB. I don't get the current P8: I thought the Host is holding/protecting the resource (and not the AM). The AM is issuing/managing the tokens for the resource sitting at the Host.]
Perhaps we could make P8 tighter. How's this:
A Host protecting a resource belonging to a User must log all access events (in a non-repudiable manner) to that resource, and make such log data available to both the AM (who issued all access tokens to that resource) and the User owning that resource.
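As a sketch of what "non-repudiable" logging could look like in practice, here is a hash-chained audit log: each entry commits to its predecessor, so any edit or deletion is detectable on verification. Names are illustrative, and this alone gives tamper evidence rather than full non-repudiation; for the latter the Host would additionally sign the chain head periodically:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_event(chain: list, event: dict) -> dict:
    """Append an access event; each entry commits to the previous one."""
    prev = chain[-1]["entry_hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    entry = {"prev": prev, "event": event, "entry_hash": entry_hash}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited or removed entry breaks the chain."""
    prev = GENESIS
    for e in chain:
        body = json.dumps(e["event"], sort_keys=True)
        if e["prev"] != prev:
            return False
        if e["entry_hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["entry_hash"]
    return True
```

Both the AM and the User could then audit the chain independently, which is the "make such log data available" part of the proposed wording.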
>
> Please consider that my conclusion was based on the published
> requirements in the UMA working group; if the requirements change, the
> conclusion should likely be revisited. First, I think it's important to
> get some buy-in to change the requirements.
>
> So, presuming such a requirement was added, what exactly would such
> digital signatures be used to prove in court? What supporting evidence
> would be required to prove this? What kinds of added burdens will this
> add to all parties to achieve this?
Yes, I absolutely understand this (crawl first, then run). There are technologies usable for non-value transactions (e.g. logging in to Facebook), and others for value-carrying transactions (e.g. giving access to my health records or credit reports).
So, I won't argue any longer for this signing thing :)
> > ok, so I don’t really get this one. I would have thought that the Host
> > would insist that the Requestor use a cert issued by a public CA like
> > Verisign. (It’d be nice if the CA was also an AM, but we can’t rely on
> > that.)
>
> What exactly would Verisign provide assurance of when issuing such a
> certificate? How would this be used by AM and host to correlate on this
> requester? I agree we can't assume that all AMs are CAs or vice-versa.
FYI, when you buy a certain class of certificate, Verisign does a bunch of manual checks/verifications of your identity. If you are a company, it checks Dun & Bradstreet data, domain records, and other data (which is why some certs take up to two weeks to issue). Then there is the actual recorded issuance, which may involve attorneys being present in person.
So to summarize, a CA like Verisign ensures that your real-world brick-and-mortar identity matches your "digital" identity. That is all Verisign claims/implies in its X.509 certificates (nothing more). Underlying all this is the Certification Practice Statement (a 105-page legal doc) which explains the liabilities and "warranties" (e.g. $10K to $10M). IMHO, it is precisely this CPS doc that is one of the key reasons corporations buy certs from Verisign.
(PS. The above does not apply to the cornflakes-box $19 email cert :)
/thomas/
If we are contemplating adding new requirements, I think it would be
wise to project the effect/impact such requirements would have, and
balance these against other considerations, namely the complexity burden
for implementors and the ability of users (namely, my mom) to understand and
effectively use such a system.
Paul
-----Original Message-----
From: Eve Maler <e...@xmlgrrl.com>
To: Paul C. Bryan <em...@pbryan.net>
Cc: 'WG UMA' <wg-...@kantarainitiative.org>
Subject: Re: [WG-UMA] Signed messages
- No hosts are interested in signed messages from requesters, when it comes right down to it, even in cases of high trust/sensitivity (maybe in this case the signing option even withers in the OAuth 2.0 spec?)
- Some deployed hosts demand signed messages from requesters for their own protection, but UMA doesn't "care" and doesn't enforce it -- it's entirely out of band (it becomes something negotiated separately, whether statically or at run time, by means standard or proprietary)
- We decide to take on-board some UMA protocol implications in some hosts wanting signed messages, and provide some interoperable means for a host to convey this demand and for a requester to heed it
Right now we're in a mode of passively accepting the OAuth 2.0 ability to sign messages. Do we need to say something in our own spec? advocate some change to OAuth? or just continue to be silent on the matter? That's what I'm interested in figuring out in order to put the issue to bed, and I invite everyone to weigh in...
Eve