[WG-P3] to facilitate the Privacy Framework discussion


j stollman

Jan 13, 2011, 6:25:52 AM
to Kantara P3WG
All,

I have significantly updated my Trust Framework presentation to facilitate our update on the Privacy Framework subgroup activity.

Enjoy.

Jeff

--
Jeff Stollman
stoll...@gmail.com
1 202.683.8699
Elements of a Trust Framework.ppt

Bob Pinheiro

Jan 13, 2011, 10:34:26 AM
to wg...@kantarainitiative.org
Jeff et al,

Two comments:

1.  I think the term "framework" might be a bit overused.  To me, "framework" should be reserved for the highest level entity we are talking about; in this case, the trust framework.  That is, the framework provides the structure that defines how all of the component pieces will work together to achieve the overall goal, which in this case is trust.  [Of course, there are different trust relationships, and presumably the trust framework will deal with all of these].   

What then to call these components of trust  (ie, privacy, notification, control, etc)?   Each of these components seems to provide a set of criteria for enabling one aspect of trust (ie, privacy, etc) to be satisfied.  Would it make sense to call these components "criteria"?  So the trust framework would be composed of privacy criteria, identity criteria, notification criteria, etc.
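As a rough sketch of this terminology, the trust framework could be modeled as a collection of named criteria sets (the class names, category names, and criteria below are invented for illustration, not a proposal for the actual documents):

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """A single requirement that one party commits to satisfy."""
    name: str
    description: str

@dataclass
class TrustFramework:
    """Highest-level entity: composed of named criteria sets."""
    criteria_sets: dict = field(default_factory=dict)

    def add_criteria(self, category: str, criteria: list):
        self.criteria_sets.setdefault(category, []).extend(criteria)

# Illustrative composition: privacy, identity, and notification criteria
tf = TrustFramework()
tf.add_criteria("privacy", [Criterion("use-restriction", "Limits on use of collected data")])
tf.add_criteria("identity", [Criterion("proofing", "Identity proofing at registration")])
tf.add_criteria("notification", [Criterion("breach-notice", "Notify subjects of breaches")])
```

The framework itself then stays the single top-level structure, and each "criteria" set covers one aspect of trust.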

2.  My second comment concerns the privacy criteria.  Privacy criteria include things such as restrictions on use, restrictions on collection, informed consent, etc.  What about restrictions on monitoring?  One aspect of privacy that people are concerned about regarding the identity ecosystem is that the government (or someone) will be able to monitor their online activities easily.  U-Prove technology, in particular, provides a way to prevent this by ensuring the untraceability and unlinkability of the U-Prove tokens that convey identity claims.

Bob P.
_______________________________________________ WG-P3 mailing list WG...@kantarainitiative.org http://kantarainitiative.org/mailman/listinfo/wg-p3

 

Rainer Hörbe

Jan 13, 2011, 10:36:17 AM
to j stollman, Kantara P3WG
May I post a few comments to your presentation:

a) Definition of Trust:
There are some common alternatives to the definition "Willingness to engage in a transaction", like
"Reliance on the enforcement of a certain policy", or
"Understood probability in the expected behavior of another party".

There are more scientific definitions of trust available, like in the Wikipedia article on computational trust:
http://en.wikipedia.org/wiki/Computational_trust#Defining_Trust

I think that the normative text should use a precise definition, even at the expense of clarity, and a comment can contain the easier ones.

b) Trust element
"A performance commitment by a single party (Object) to a second single party (Subject) .. ". 
Sorry if I give the impression of splitting hairs, but trust implies the point of view of a party that requires/expects some behavior. So a trust element would have to be defined from the view of that party, as a requirement on another party that asserts/commits to some behavior. Is this term really needed? If the perspective is reversed as I suggest, it sounds like a synonym for a requirement.

c) Trust Framework
"A set of verifiable commitments from the various parties of a transaction to the other parties."
Is a single transaction the right pivot? There might be commitments that are not clearly related to a transaction, like service level agreements or regular auditing.

d) Laws of Trust
I do not understand "5. Trust is not personal". Please can you explain?

e) Defining a Privacy Framework
"The selection of Trust Elements to be included in the subset is not critical."
That is a very good point! I think I got too fascinated by the question of how to break the TF into modules.
But the definition of requirements/trust elements is critical if we need interoperability between frameworks and policies.

f) Classifying Trust Elements

When I look at the slide "Classifying Trust Elements", it appears to be a user-centric view. Is this Privacy Framework defined to be all-encompassing or user-centric? In the enterprise use case we have privacy requirements as well. However, these are not requirements from the user as a data subject on the service provider, but from the data controller on the user.

Also we need to be aware that there is a known conflict between data protection and information security. Each one regards the other as an auxiliary tool for its own objectives. That makes a single hierarchical breakdown impossible.

I agree that we cannot map the whole space at once. But we should at least coordinate the territorial claims between the KI WGs.

g) Parties in a Trust Framework
Interesting idea to categorize parties by their level of involvement. I am glad to see that the IAF Glossary found its way into this listing.

But it also raises a point that I have difficulty with: What kind of roles do we discuss at the level of the trust framework? Actors that can act as legally responsible parties? Or do we also include roles that act on behalf of legally responsible actors?

E.g. the audit repository: is the audit repository a separate role, or even multiple roles per provider? There could be a common audit repository as well. A colleague of mine from Germany argued recently that it is the hinge for establishing trust on a legal level, particularly if a transaction ends up in court. So he thinks it needs to be part of the TF. I am not convinced of that.

h) Example Trust Elements - Data Collection 1
As mentioned recently, "Extent of risk imposed on subject" does not fit into this class of requirements. We cannot assess risk at this point; we can only require the protection of the subject through the data collected by the object.

i) Matrix View of Trust Framework Map
I agree with your comments - it will not be possible to create a comprehensive presentation in a flat matrix. However, in a database-powered approach (like cmmls.portalvebund.at) it should be possible, because you can always work on particular views and have reporting tools to analyze gaps and overlaps.

But any suggestions to simplify the model are very welcome! 

Regarding the IAF: Identity Assurance is primarily a trust relationship between RP and IdP. It implies that the IdP has to implement a contract with the subscriber, but that is secondary from the position of the IAF, because subscribers and subjects are not participants in the IAF.

There are more trust relationships in the matrix than are explained in the preceding slides ... what does that mean?

j) Parties in a Trust Framework
Could you include the supplements that I added to your spreadsheet, and discuss the open points? A list of Parties/Roles/Actors common to all Kantara Trust Frameworks could be its own document, because it is referenced in different places, like the TF, use cases, glossary and TF architecture.

Good to see this document progressing. Thank you, Jeff.

- Rainer
<Elements of a Trust Framework.ppt>

j stollman

Jan 13, 2011, 10:54:27 AM
to Bob Pinheiro, wg...@kantarainitiative.org
Bob,

Regarding your point about the use of the term "Framework":  I agree with your perspective, but the term is already out there and we have enough headwinds to fight without getting stuck on that point.  In the long run, I would like to implement your suggestion.  We'll see how the winds blow over time.

Regarding your point about monitoring:  You raise a good point.  I am not sure that anything I have so far covers this, and it certainly needs to be added to the matrix.

Thanks for the insight.

Jeff

John Bradley

Jan 13, 2011, 10:59:43 AM
to j stollman, wg...@kantarainitiative.org
In the FICAM privacy document there are restrictions on what IdPs can do with information about which Government web sites people visit.

General restrictions on the collection and use of that information are an interesting question, as that sort of monitoring is part of some IdPs' business model.
Beware what you get for free:)

John B.

j stollman

Jan 13, 2011, 1:00:22 PM
to Rainer Hörbe, Kantara P3WG
Rainer,

I appreciate the time and effort you have expended on reviewing my presentation.

As usual, you have made some astute and helpful observations.  I have attempted to address each one in line below.

Jeff

On Thu, Jan 13, 2011 at 10:36 AM, Rainer Hörbe <rai...@hoerbe.at> wrote:
May I post a few comments to your presentation:

a) Definition of Trust:
There are some common alternatives to the definition "Willingness to engage in a transaction", like
"Reliance on the enforcement of a certain policy", or
"Understood probability in the expected behavior of another party".

There are more scientific definitions of trust available, like in the Wikipedia article on computational trust:
http://en.wikipedia.org/wiki/Computational_trust#Defining_Trust

I think that the normative text should use a precise definition, even at the expense of clarity, and a comment can contain the easier ones.

In the end, we may be obliged to select a different word than "trust" in order not to be saddled with all of the baggage that this word brings with it. But for the purposes of our "Trust" Framework, the definition is appropriate to our purpose -- even if the word "trust" is not.

b) Trust element
"A performance commitment by a single party (Object) to a second single party (Subject) .. ". 
Sorry if I give the impression of splitting hairs, but trust implies the point of view of a party that requires/expects some behavior. So a trust element would have to be defined from the view of that party, as a requirement on another party that asserts/commits to some behavior. Is this term really needed? If the perspective is reversed as I suggest, it sounds like a synonym for a requirement.

Your observation is correct.  I tried to circumvent the problem with the text at the end of the definition: 
A performance commitment by a single party (Object) to a second single party (Subject) that engenders the trust of the Subject in the performance of the Object.

Trust still flows from the Subject to the Object.  I welcome anyone's help on improving the phrasing.

c) Trust Framework
"A set of verifiable commitments from the various parties of a transaction to the other parties."
Is a single transaction the right pivot? There might be commitments that are not clearly related to a transaction, like service level agreements or regular auditing.

Again, you are correct.  The Trust Framework needs to be broad enough to cover a wide range of transactions.  (Covering all transactions would be ideal but is likely unachievable.)  I will amend this.

d) Laws of Trust
I do not understand "5. Trust is not personal". Please can you explain?

Generally, my trust does not align to a particular entity.  Rather, it is engendered by one or more commitments that I have reason to believe that entity will honor.  I may not trust Google, but I might trust that they take pains to anonymize certain attributes about me.  I may have a strong belief that my mother would not intentionally do anything to hurt me, but I might not trust her ability to protect my personal information that resides on her computer.

e) Defining a Privacy Framework
"The selection of Trust Elements to be included in the subset is not critical."
That is a very good point! I think I got too fascinated by the question of how to break the TF into modules.
But the definition of requirements/trust elements is critical if we need interoperability between frameworks and policies.

We are in agreement.

f) Classifying Trust Elements

When I look at the slide "Classifying Trust Elements", it appears to be a user-centric view. Is this Privacy Framework defined to be all-encompassing or user-centric? In the enterprise use case we have privacy requirements as well. However, these are not requirements from the user as a data subject on the service provider, but from the data controller on the user.

Also we need to be aware that there is a known conflict between data protection and information security. Each one regards the other as an auxiliary tool for its own objectives. That makes a single hierarchical breakdown impossible.

I agree that we cannot map the whole space at once. But we should at least coordinate the territorial claims between the KI WGs.

I merely picked Trust Elements that are popular for purposes of illustration.  I am in full agreement that Privacy issues apply to a broad range of roles.


g) Parties in a Trust Framework
Interesting idea to categorize parties by their level of involvement. I am glad to see that the IAF Glossary found its way into this listing.

But it also raises a point that I have difficulty with: What kind of roles do we discuss at the level of the trust framework? Actors that can act as legally responsible parties? Or do we also include roles that act on behalf of legally responsible actors?

E.g. the audit repository: is the audit repository a separate role, or even multiple roles per provider? There could be a common audit repository as well. A colleague of mine from Germany argued recently that it is the hinge for establishing trust on a legal level, particularly if a transaction ends up in court. So he thinks it needs to be part of the TF. I am not convinced of that.

The question is valid, but might be premature right now.  I think that we will have to do a lot more work before we will be ready to answer it.  Possibly, the audit repository can be part of an entity's control system and need not be a role unto itself.  But I haven't given this enough thought to provide you with an answer at this time.

h) Example Trust Elements - Data Collection 1
As mentioned recently, "Extent of risk imposed on subject" does not fit into this class of requirements. We cannot assess risk at this point, only require the protection of the subject through the data collected by the object.

Correct.  How about, "The limitation of risk imposed on Subject through the selection of data collected by Object." 

This is a uniquely interesting Trust Element. I assert that different attributes create different levels of risk.  For example, retaining a Subject's social insurance number (social security number in the US) likely creates a higher risk than a surname.  But in certain circumstances (e.g., countries involved in a tribal civil war), a surname may be enough to target the Subject for genocide.  For this reason, careful selection of low-risk attributes can reduce risk and, as a result, increase trust.
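To make the point concrete, here is a small sketch of selecting the least risky attribute that still satisfies a relying party's need. The attribute names and risk weights are invented for illustration; as noted, real risk is context-dependent:

```python
# Hypothetical risk weights per attribute (higher = riskier to collect/retain)
ATTRIBUTE_RISK = {
    "social_insurance_number": 10,
    "date_of_birth": 6,
    "surname": 3,   # may be far riskier in some contexts, as the text notes
    "age_over_18": 1,
}

def lowest_risk_choice(acceptable: set) -> str:
    """Of the attributes an RP will accept for a given purpose,
    pick the one imposing the least risk on the Subject."""
    return min(acceptable, key=lambda a: ATTRIBUTE_RISK[a])

# An RP verifying legal age could accept any of these; the derived
# claim ("age_over_18") imposes the least risk on the Subject.
choice = lowest_risk_choice({"social_insurance_number", "date_of_birth", "age_over_18"})
```

The design point is simply that data minimization is a selection problem: when several attributes satisfy the same purpose, picking the lowest-risk one increases trust.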


i) Matrix View of Trust Framework Map
I agree with your comments - it will not be possible to create a comprehensive presentation in a flat matrix. However, in a database-powered approach (like cmmls.portalvebund.at) it should be possible, because you can always work on particular views and have reporting tools to analyze gaps and overlaps.

But any suggestions to simplify the model are very welcome! 

Regarding the IAF: Identity Assurance is primarily a trust relationship between RP and IdP. It implies that the IdP has to implement a contract with the subscriber, but that is secondary from the position of the IAF, because subscribers and subjects are not participants in the IAF.

I interpret your comment as pointing out that what I have identified as a relationship covered by the IAF between Subject and IdP is not strictly accurate.  I can't dispute this.  But for purposes of illustration (since the illustration matrix is only a subset of the "real" matrix), I am hopeful that this inaccuracy can be overlooked.

There are more trust relationships in the matrix than are explained in the preceding slides ... what does that mean?

I keep expanding the underlying spreadsheet as new trust elements are identified.   For example, based on Bob Pinheiro's input, I just added "Monitoring" today.
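One way to picture the underlying spreadsheet is as a sparse matrix keyed by (trust element, Subject role, Object role), so that new elements can be added and coverage queried. The element and role names below are illustrative, not the actual spreadsheet contents:

```python
# Sparse matrix: which Object role commits to which trust element,
# toward which Subject role. Names are illustrative only.
matrix = {
    ("Data minimization", "Subject", "IdP"): "committed",
    ("Use restriction",   "Subject", "RP"):  "committed",
}

# Adding a newly identified element, e.g. "Monitoring":
matrix[("Monitoring", "Subject", "IdP")] = "proposed"

def elements_for(object_role: str) -> list:
    """Report all trust elements a given Object role is involved in,
    e.g. to look for gaps and overlaps across roles."""
    return sorted(e for (e, subj, obj) in matrix if obj == object_role)
```

A database-backed version of the same structure would support the particular views and gap-analysis reporting mentioned above.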

j) Parties in a Trust Framework
Could you include the supplements that I added to your spreadsheet, and discuss the open points? A list of Parties/Roles/Actors common to all Kantara Trust Frameworks could be its own document, because it is referenced in different places, like the TF, use cases, glossary and TF architecture.

I thought I did include your additions.  Admittedly, I did not include the ones for which we did not yet have names.  I left these out until we figure out what to call them, to keep the presentation simpler.  It is not meant to be a complete model -- just a conceptual model.  I still maintain the unnamed categories in the spreadsheet that underlies the ppt.

Good to see this document progressing. Thank you, Jeff.

Thank you for all of your hard work.

- Rainer

Am 13.01.2011 um 12:25 schrieb j stollman:

All,

I have significantly updated my Trust Framework presentation to facilitate our update on the Privacy Framework subgroup activity.

Enjoy.

Jeff

-- 
Jeff Stollman
stoll...@gmail.com
1 202.683.8699
<Elements of a Trust Framework.ppt>

Mark Lizar

Jan 13, 2011, 6:20:13 PM
to Bob Pinheiro, wg...@kantarainitiative.org
Good points, in reference to your second. 


On 13 Jan 2011, at 15:34, Bob Pinheiro wrote:



2.  My second comment concerns the privacy criteria.  Privacy criteria includes things such as restrictions on use, restrictions on collection, informed consent, etc.  What about restrictions on monitoring?  One aspect of privacy that people are concerned about regarding the identity ecosystem is that the government (or someone) will be able to monitor their online activities easily.  U-Prove technology, in particular, provides a way to prevent this by providing ways to ensure untraceability and unlinkability of the U-Prove tokens that convey identity claims.


What you are referring to here sounds and looks like surveillance. In this way U-Prove provides greater control over surveillance, addressing issues in the consistent application of levels of control. But it does not solve the issues of transparency or control over surveillance practices, e.g. the gaps between a code of practice, implementation, and use (trustworthiness).

- M

Susan Landau

Jan 13, 2011, 7:12:36 PM
to wg...@kantarainitiative.org
On 1/13/11 6:20 PM, Mark Lizar wrote:
> Good points, in reference to your second.
>
>
> On 13 Jan 2011, at 15:34, Bob Pinheiro wrote:
>
>>
>>
>> 2. My second comment concerns the privacy criteria. Privacy
>> criteria includes things such as restrictions on use, restrictions on
>> collection, informed consent, etc. What about restrictions on
>> monitoring? One aspect of privacy that people are concerned about
>> regarding the identity ecosystem is that the government (or someone)
>> will be able to monitor their online activities easily. U-Prove
>> technology, in particular, provides a way to prevent this by
>> providing ways to ensure untraceability and unlinkability of the
>> U-Prove tokens that convey identity claims.
>>
>
> What you are referring to here sounds and looks like surveillance. IN
> this way (above) U-Prove provides greater control over surveillance,
> addressing issues in the consistent application of levels of control.
> But does not solve the issues of transparency or control over
> surveillance practices. e.g. the gaps between a code of practice,
> implementation, and use. (trustworthiness)
I have some trouble with the U-Prove-fits-all model, and I think perhaps
a more useful way to think about the control-over-surveillance issue
that Bob raises is through use cases. There are use cases where
tracking where the user is going and what the user is doing is highly
appropriate (think of someone using identity management tools to access
control functions of the power grid), some use cases where anonymity and
U-Prove or Tor type solutions may be appropriate (think of someone
accessing HIV/AIDS or STD information at a government site), and use
cases in between, some of which may offer different levels of service
dependent on the type of monitoring permitted.

Susan

John Bradley

Jan 13, 2011, 8:02:30 PM
to Susan Landau, wg...@kantarainitiative.org
I am trying to work with MS on how U-Prove could fit into a larger picture.

At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.

That will change over time.

The IMI protocol supported barrier tokens where the IdP did not know who the RP was.
It had a security problem with not being able to audience restrict the tokens, allowing a bad RP to replay them as the user before they expire.

The current u-Prove implementation has similar issues.

To get around that the client needs to go back to the issuer each time to get new crypto for the token.
Doable but gives up the advantage of not needing the IdP online.

The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.

If the user is using a "Cloud Selector", under the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the user's information, including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.

If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)

With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.

U-Prove is not a solution to everything, it is particularly useful for certain crypto but will not be widespread any time soon.

If the Web 2.0 folks reject RSA as being too complicated they will never get zero knowledge proofs.

Policy will be needed to protect people for a long time.

Regards
John B.

Susan Landau

Jan 13, 2011, 8:20:24 PM
to Kantara P3WG
Concur with all; one question:

On 1/13/11 8:02 PM, John Bradley wrote:
> I am trying to work with MS on how U-Prove could fit into a larger picture.
>
> At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.
>
> That will change over time.
>
> The IMI protocol supported bearer tokens where the IdP did not know who the RP was.
> It had a security problem with not being able to audience restrict the tokens,

audience restrict?


> allowing a bad RP to replay them as the user before they expire.
>
> The current u-Prove implementation has similar issues.
>
> To get around that the client needs to go back to the issuer each time to get new crypto for the token.
> Doable but gives up the advantage of not needing the IdP online.
>
> The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
>
> If the user is using a "Cloud Selector" the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the users information including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.
>
> If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)
>
> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.

Interesting. Who are the clients that have signed on?


> U-Prove is not a solution to everything, it is particularly useful for certain crypto but will not be widespread any time soon.
>
> If the Web 2.0 folks reject RSA as being too complicated they will never get zero knowledge proofs.

Right.


> Policy will be needed to protect people for a long time.

A couple of years ago, Hubert Le Van Gong, Robin Wilton, and I wrote a
paper on protecting user privacy in FIM, the point of which was that
sometimes it was technology, sometimes policy, sometimes law. It
appeared at Financial Cryptography and Data Security in 2009; I can send
it on if anyone is interested.

Best,

Susan Landau

Jan 13, 2011, 8:59:40 PM
to wg...@kantarainitiative.org
On 1/13/11 6:25 AM, j stollman wrote:
> All,
>
> I have significantly updated my Trust Framework presentation to
> facilitate our update on the Privacy Framework subgroup activity.
Jeff,

The mathematician in me is a little confused by the definitions and
"laws" you have on slides 4 and 5. I think you are trying for more
precision than is appropriate. It could be my misunderstanding. Here
are some of my confusions:

o I don't understand your definition of a trust framework: if A and B
commit to a transaction, then I don't get the role "controls" play. Do
you mean instead that if A and B commit to a transaction, then the
transaction MUST (as in IETF "MUST") follow the regulatory and
contractual obligations or the transaction is not valid within the trust
framework? Or do you mean something else?

o If trust is "the willingness of a party to engage in a transaction,"
then the law that says "Trust is not uniform" is logically unnecessary.

o I have no idea what you mean by "Trust is not personal." Is it "Each
transaction involving a party is handled individually within the
framework and does not rely upon previous transactions between these
parties"? --- or is something else meant?

o I don't understand the comment about permutations (and suspect it is
unnecessary).

o I'm not sure I understand the function of slide 6; I can't figure out
how it adds to the definition of Trust Framework.

In general, I think it is best to be as simple as possible about what is
meant by Trust Framework, and I'd urge you to move in the direction of
more rather than less. That said, while I understand the EU
requirements on use limitation, I also have my doubts about the efficacy of
that principle in practice. Thus I would urge that the Privacy
Framework include auditing of the use of data.

Finally, I have a question that perhaps should have been my first
question here. What is the purpose of this set of slides? They seem
somewhat diffuse at present, and I am confused by them. Thanks.

John Bradley

Jan 13, 2011, 9:22:26 PM
to Susan Landau, Kantara P3WG

On 2011-01-13, at 10:20 PM, Susan Landau wrote:

> Concur with all; one question:
>
> On 1/13/11 8:02 PM, John Bradley wrote:
>> I am trying to work with MS on how U-Prove could fit into a larger picture.
>>
>> At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.
>>
>> That will change over time.
>>
>> The IMI protocol supported bearer tokens where the IdP did not know who the RP was.
>> It had a security problem with not being able to audience restrict the tokens,
> audience restrict?

Setting the saml:AudienceRestrictionCondition in the returned SAML token.

All RP/SP should check that to be certain the token was generated for them.
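A minimal sketch of such an RP-side check, using SAML 2.0 element names (`AudienceRestriction`/`Audience`) and a contrived sample assertion; a real deployment would of course also verify the signature, validity period, and subject confirmation:

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

SAMPLE_ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Conditions>
    <saml:AudienceRestriction>
      <saml:Audience>https://rp.example.org</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>
"""

def audience_ok(assertion_xml: str, my_entity_id: str) -> bool:
    """RP check: reject a token unless it was issued for this audience,
    so a bad RP cannot replay it elsewhere."""
    root = ET.fromstring(assertion_xml)
    audiences = [a.text for a in root.findall(
        ".//saml:AudienceRestriction/saml:Audience", NS)]
    return my_entity_id in audiences
```

The point of the check is exactly the replay problem described above: a token without (or with an unchecked) audience restriction can be presented to any RP.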

In IMI the selector sends ic:AppliesTo in the RST along with the certificate of the RP; that allows the Issuer to place the CN of the certificate in the audience restriction condition and encrypt the token for the RP.

You can generate an RST without sending ic:AppliesTo, but that allows the token to be replayed at any RP.

In principle, as long as the IdP is generating a proper PPID, the token is not useful for impersonating the user at a different RP; however, if the RP is only looking at the claims in the SAML assertion it could be fooled.

In SAML the IdP will generally use audience restriction as it knows who the RP is anyway. I suppose the same issues might apply if you were using the ECP profile.

You could provide a RP generated pseudonymous audience restriction using a smart client and still keep the SAML token with only some changes to the authn request flow in the IMI case.

I am not pushing IMI, just saying that there are ways of achieving the same privacy with EC or RSA.

I don't have anything against U-Prove, but I care more about the relative privacy profiles than the exact crypto.

>> allowing a bad RP to replay them as the user before they expire.
>>
>> The current u-Prove implementation has similar issues.
>>
>> To get around that the client needs to go back to the issuer each time to get new crypto for the token.
>> Doable but gives up the advantage of not needing the IdP online.
>>
>> The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
>>
>> If the user is using a "Cloud Selector" the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the users information including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.
>>
>> If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)
>>
>> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
> Interesting. Who are the clients that have signed on?

I was referring to having enough Info-Card selectors on people's computers to be able to make the proposition attractive to RPs.

There are really only two IMI selectors, MS Cardspace and Azigo. Azigo storing credentials in a hosted service introduced some other security considerations as well.

Regards
John B.

Susan Landau

Jan 13, 2011, 9:33:18 PM
to Kantara P3WG
On 1/13/11 9:22 PM, John Bradley wrote:
>
>> audience restrict?
> Setting the saml:AudienceRestrictionCondition in the returned SAML token.
>
> All RP/SP should check that to be certain the token was generated for them.
Okay, thanks.

> In IMI the selector sends ic:AppliesTo in the RST along with the certificate of the RP, that allows the Issuer to place the CN of the certificate in the audience restriction condition and encrypt the token for the RP.
>
> You can generate a RST without sending ic:AppliesTo in the RST, however that allows the token to be replayed at any RP.
>
> In principle, as long as the IdP is generating a proper PPID, the token is not useful for impersonating the user at a different RP; however, if the RP is only looking at the claims in the SAML assertion it could be fooled.
>
> In SAML the IdP will generally use audience restriction as it knows who the RP is anyway. I suppose the same issues might apply if you were using the ECP profile.
>
> You could provide a RP generated pseudonymous audience restriction using a smart client and still keep the SAML token with only some changes to the authn request flow in the IMI case.
>
> I am not pushing IMI, just saying that there are ways of achieving the same privacy with EC or RSA.
Yes.

> I don't have anything against U-Prove, but I care more about the relative privacy profiles than the exact crypto.
Makes sense.

Thanks a lot for the detail.
>
>
>>> ...


>>>
>>> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
>> Interesting. Who are the clients that have signed on?
> I was referring to having enough Info-Card selectors on people's computers to be able to make the proposition attractive to RPs.
>
> There are really only two IMI selectors, MS Cardspace and Azigo. Azigo storing credentials in a hosted service introduced some other security considerations as well.

Okay, thanks.

Bob Pinheiro

Jan 13, 2011, 11:46:50 PM
to wg...@kantarainitiative.org
On 1/13/2011 8:02 PM, John Bradley wrote:
To get around that the client needs to go back to the issuer each time to get new crypto for the token. 
Doable but gives up the advantage of not needing the IdP online. 

John, could you just explain what exactly this "new crypto" is that prevents a long-lived u-prove token from being used without the IdP being online?  My understanding is that a u-prove token is created with both a token-specific public key as well as a corresponding private key that is controlled by the subject.  When the subject attempts to use the token to convey a claim to a relying party, the RP sends a challenge to the subject's selector, and the selector then encrypts this challenge with the token's private key (under the direction of the subject).  The encrypted challenge, along with the token, is returned to the RP.  The RP uses the token's public key to decrypt the encrypted challenge, and thus verifies the original challenge.  Hence the RP knows that the claim presented by the token refers to the subject, since only that subject controls the private key.

I may be missing an important detail here, but why would an interaction with a "live" IdP be required for this scenario to work?
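The flow described above amounts to a proof that the presenter possesses the token's private key. A minimal sketch, using a toy Schnorr-style scheme rather than the actual U-Prove crypto (the group parameters and key sizes below are purely illustrative), shows how an RP can check the response without the issuer being online:

```python
import hashlib
import secrets

# Toy Schnorr-style proof of possession. NOT real U-Prove; parameters
# are illustrative only (real systems use standardized, vetted groups).
P = 2**127 - 1   # a Mersenne prime, used here as the group modulus
G = 3            # generator (illustrative)

def keygen():
    """Token-specific key pair: private key held by the subject."""
    x = secrets.randbelow(P - 1) or 1
    return x, pow(G, x, P)           # (private x, public y)

def prove(x, challenge: bytes):
    """Subject's selector answers the RP's challenge using private key x."""
    k = secrets.randbelow(P - 1) or 1
    r = pow(G, k, P)                 # commitment
    c = int.from_bytes(hashlib.sha256(r.to_bytes(16, "big") + challenge).digest(), "big")
    s = (k + c * x) % (P - 1)        # response bound to the challenge
    return r, s

def verify(y, challenge: bytes, r, s):
    """RP checks the response against the token's public key y."""
    c = int.from_bytes(hashlib.sha256(r.to_bytes(16, "big") + challenge).digest(), "big")
    return pow(G, s, P) == (r * pow(y, c, P)) % P

x, y = keygen()
challenge = secrets.token_bytes(16)          # RP-chosen nonce
r, s = prove(x, challenge)
assert verify(y, challenge, r, s)            # holder of x convinces the RP
assert not verify(y, b"other challenge", r, s)
```

Note the issuer appears nowhere in this exchange; that is the offline property being discussed. What this toy does not show is the part U-Prove actually adds, namely making the showing unlinkable to the token's issuance.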


The big problem will be revenue models: claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.

Even if the IdP doesn't collect a fee from the RP every time a claim is transmitted to the RP, other business models may be possible.  For instance, OIX assumes that different "trust communities" (ie, libraries, telecom, government, financial, healthcare, etc) may emerge that may each be governed by its own trust framework.  If that happens, maybe RPs that are members of some particular trust community will only choose to trust claims issued by IdPs that are members of the same community.  This does not seem unreasonable, since claims generated by IdPs in a given trust community may not necessarily contain attributes required by RPs in a different community.  Possibly the RP would pay a flat fee to the community for claims issued by IdPs within the community, and these fees would support the IdPs in the community. 


Under the current deployment theory, if the user is using a "Cloud Selector", you hide the information from the Issuer but create a new entity that has access to all of the user's information, including where they use it.   It could be argued that this is not worth the advantage of hiding the info from the issuer.

If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims) 

I agree; I think a smart client on the user's device provides the most advantage.  If the user desires to use different devices, however, that does present a problem.  If a provider of high value services must depend on direct interaction with an IdP, or must rely on claims residing in a cloud-based claims agent, there is the possibility that the IdP or claims agent will be unavailable.  That would mean that the service provider's customers would not be able to access their service.  As far as I know, there is no component of a trust framework that defines the reliability of specific identity providers (or cloud-based claims agents) in terms of the expected availability of that IdP or claims agent.  If subjects need to depend on the availability of certain IdPs or cloud-based claims agents in order to be able to access their high value services, it would seem there should be some way for them to assess the reliability of these IdP/agents. 

 
With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.

U-Prove is not a solution to everything, it is particularly useful for certain crypto but will not be widespread any time soon.

If the Web 2.0 folks reject RSA as being too complicated they will never get zero knowledge proofs.

Policy will be needed to protect people for a long time.

Regards
John B.

If neither u-prove nor information cards will be widely available anytime soon, I think that is mostly because there hasn't been sufficient demand for those things.  So the question is, how can demand be generated, either for u-prove and infocards or for comparable alternatives?  Will the government's push for NSTIC help to spur that demand?

- Bob P. 

Mark Lizar

Jan 14, 2011, 5:39:33 AM
to Susan Landau, wg...@kantarainitiative.org
Excellent idea..


On 14 Jan 2011, at 01:59, Susan Landau wrote:

> That said, while I understand the EU
> requirements on use limitation, I also have my doubts on efficacy of
> that principle in practice. Thus I would urge that the Privacy
> Framework include auditing of the use of data.


I think we should collect these sorts of comments/PF requirement options in a nice warm place on our wiki. ;-)

Maybe even an audit focus at some point in the PF strategy would be appropriate?

-M

Mark Lizar

Jan 14, 2011, 7:54:52 AM
to Kantara P3WG
Thanks,

+1- Very Useful..

Anna Slomovic

Jan 14, 2011, 11:51:21 AM
to Bob Pinheiro, wg...@kantarainitiative.org

Bob,

 

Clarification question. Why wouldn’t restriction on monitoring be a special case of restriction on collection (which is required for monitoring), combined with restriction on use?

 

Thanks.

 

Anna

 

 

Anna Slomovic

Chief Privacy Officer

Anakam, an Equifax company

1010 N. Glebe Rd.

Suite 500

Arlington, VA 22201

 

P: 703.888.4620

F: 703.243.7576

John Bradley

Jan 14, 2011, 12:27:12 PM
to Bob Pinheiro, wg...@kantarainitiative.org
On 2011-01-14, at 1:46 AM, Bob Pinheiro wrote:

On 1/13/2011 8:02 PM, John Bradley wrote:
To get around that the client needs to go back to the issuer each time to get new crypto for the token. 
Doable but gives up the advantage of not needing the IdP online. 

John, could you just explain what exactly this "new crypto" is that prevents a long-lived u-prove token from being used without the IdP being online?  My understanding is that a u-prove token is created with both a token-specific public key and a corresponding private key that is controlled by the subject.  When the subject attempts to use the token to convey a claim to a relying party, the RP sends a challenge to the subject's selector, and the selector then signs this challenge with the token's private key (under the direction of the subject).  The signed challenge, along with the token, is returned to the RP.  The RP uses the token's public key to verify the signature on the challenge.  Hence the RP knows that the claim presented by the token refers to the subject, since only that subject controls the private key.

I may be missing an important detail here, but why would an interaction with a "live" IdP be required for this scenario to work?


The client can't change anything in the u-prove token on its own, so the subject cannot be pseudonymous.  If the token has a long validity period, the RP also cannot tell whether the token has been revoked since issuance.

The way they avoid having the IdP learn who the RP is through an OCSP-like check is to have the client effectively re-sign the token with an updated issuance date and a pseudonymous subject.  This functionality is not in the current CTP.  That is why it can operate without the Issuer being online; however, it may not be appropriate where you want a pseudonym or a guarantee that the claims have not been revoked.

I understand they are targeting the next release of the CTP to add that functionality.

For more information on how showing protocols work.


The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.

Even if the IdP doesn't collect a fee from the RP every time a claim is transmitted to the RP, other business models may be possible.  For instance, OIX assumes that different "trust communities" (ie, libraries, telecom, government, financial, healthcare, etc) may emerge that may each be governed by its own trust framework.  If that happens, maybe RPs that are members of some particular trust community will only choose to trust claims issued by IdPs that are members of the same community.  This does not seem unreasonable, since claims generated by IdPs in a given trust community may not necessarily contain attributes required by RPs in a different community.  Possibly the RP would pay a flat fee to the community for claims issued by IdPs within the community, and these fees would support the IdPs in the community. 


I think that is a possibility.  However, will they see blinding the Issuer to who the RP is as a sufficient use case to justify investing in new crypto infrastructure?
We have to be realistic: it will take time to deploy this, and for people to understand the difference between this and conventional RSA/EC crypto.

There are some models where the Issuer wants to know who is consuming the claims for various reasons.   That leads me to think that u-prove won't be completely replacing conventional asymmetric cryptography any time soon.

John B.

Bob Pinheiro

Jan 14, 2011, 12:33:33 PM
to Anna Slomovic, wg...@kantarainitiative.org
I don't know; maybe it would.  I guess it depends on how you define these terms.  I assumed that "collection" refers to the act of collecting personally identifiable information such as name, address, etc.   To me, there seems to be a distinct difference between that and discerning one's online behavior through observation and inference; that is, through monitoring and surveillance.  But again, it depends on how "collection" is defined.  I did not see a definition of this term in Jeff's slide deck, so perhaps I made the wrong assumption. 

Robin Wilton

Jan 15, 2011, 9:30:11 AM
to Rainer Hörbe, j stollman, Kantara P3WG
 Hi folks -
 
Just catching up on a week's worth of Inbox after being out of the office. So, just a very brief comment on the definition of trust. Here is one which I have found very useful:
 
"Trust is a belief that a party will act in your interests (or at least not act against your interests), even if they find themselves in a position to do otherwise"
 
For me, this captures two essential characteristics of trust:
 
1 - it is a belief, not a 'fact'. That is, trust is a matter of judgement and, as Rainer suggests, a certain degree of probability or prediction. Beliefs can be rational and justified, or not. There are many factors which may be good grounds for a belief, and many factors which look like good grounds... right up to the point where they are shown not to be. (Turkeys trust the farmer not to chop their heads off, right up to Thanksgiving [or Christmas] - which is a long-established example of the problem of inductive reasoning).
 
We often base trust decisions on inductive reasoning ("John has been trustworthy every time so far, so he will be trustworthy this time"), which also kind of explains why violated trust is so hard to rebuild... ... ...
 
2 - The trusted party has to have a real option to act in an untrustworthy way... otherwise what you are trusting is not them, but something else. Thus, for instance, suppose that a bank employee has the keys to the vault, but doesn't steal all the cash because there's a camera in the vault and it would be obvious who the thief was. You're not really trusting the employee, because they don't really have a practical option of acting in an untrustworthy way; you're trusting the locked vault and the camera.
 
Hope this helps -
 
Robin
Robin Wilton
+44 (0)705 005 2931

j stollman

Jan 15, 2011, 4:50:29 PM
to Anna Slomovic, wg...@kantarainitiative.org
Anna,

I think you are correct that surveillance is a special case of collection.  I would suggest that the differentiation comes from the fact that, in collection, the Subject consciously transmits personal information to the collector, while in surveillance the Subject may be unaware of what information has been transmitted, or even of the fact that information has been transmitted.

The distinction may turn out to be primarily about the issue Mark is working on regarding Notification and Consent.  Using the rough distinctions above, Collection would include Notification and Consent; surveillance might not.

Jeff

j stollman

Jan 15, 2011, 8:05:35 PM
to Susan Landau, wg...@kantarainitiative.org
Susan,

I'll start with your last question first.


What is the purpose of this set of slides?  
In my view, one of the reasons that we have had a proliferation of organizations proffering disparate attempts at creating a Privacy Framework is that no one has taken the time to define the problem in sufficient detail to create a common understanding of its purpose.  So, like the blind men describing the elephant, the various offerings reflect each group's perspective, but fail to address the entirety of the problem.

A second reason that none of the proffered solutions has gained widespread traction is that a Privacy Framework, by itself, is insufficient to solve the larger problem which I suggest is creating sufficient trust among relevant parties to cause each of them to conduct a transaction.  I, therefore, believe that Trust is the real problem and Privacy is just a subset.

If we can accept the second reason above, this then leads me to two conclusions:
  1. We need to define a solution (or a road map to achieve the solution) to the Trust problem before we worry about the Privacy subset.
  2. The scope of the Privacy subset (i.e., which trust elements are included in it) is less critical than the fact that every trust element will eventually be addressed (e.g., as an element of Identity, Notification, Controls, or whatever other subsets we deem useful).
The slide deck attempts to describe the larger Trust Framework problem to allow us to consciously select the Trust Elements that we consider most valuable to include in the Privacy Framework that we compose.   Criteria for selection may be some sense of the relative importance of certain Trust Elements or may be the ability to gain significant reduction in complexity by selecting Trust Elements that can be addressed in a common way.

I include other responses in line below.

Thank you for your feedback.

jeff


On Thu, Jan 13, 2011 at 8:59 PM, Susan Landau <susan....@privacyink.org> wrote:
On 1/13/11 6:25 AM, j stollman wrote:
> All,
>
> I have significantly updated my Trust Framework presentation to
> facilitate our update on the Privacy Framework subgroup activity.
Jeff,

The mathematician in me is a little confused by the definitions and
"laws" you have on slides 4 and 5.   I think you are trying for more
precision than is appropriate.   It could be my misunderstanding.  Here
are some of my confusions:

o  I don't understand your definition of a trust framework: if A and B
commit to a transaction, then I don't get the role "controls" play.  Do
you mean instead that if A and B commit to a transaction, then the
transaction MUST (as in IETF "MUST") follow the regulatory and
contractual obligations or the transaction is not valid within the trust
framework?  Or do you mean something else?

A and B may not commit to the transaction unless each trusts that certain controls are in place that will support their need for protection.  The trust may be dependent upon knowing things like (1) how does Party B intend to protect my PII? (2) does Party B offer me the ability to restrict passing of my data to other parties? (3) does Party B afford me the opportunity to review and correct errors in my data that it holds?

o  If trust is "the willingness of a party to engage in a transaction,"
then the law that says "Trust is not uniform" is logically unnecessary.

I don't dispute your point, but all definitions are in flux.  And I think there is value in emphasizing that trust will vary with the particulars of a transaction, including (1) who the other parties are, (2) the nature of the transaction, (3) the levels of assurance and protection committed to by the parties, etc.


o  I have no idea what you mean by "Trust is not personal."  Is it "Each
transaction involving a party is handled individually within the
framework and does not rely upon previous transactions between these
parties"? --- or is something else meant?

Yes, you are correct.  Party B may not have blanket trust in the commitments made by Party A.  He may trust Party A for one transaction (e.g., a low value purchase), but not trust Party A for another (e.g., a high value transaction). 

o  I don't understand the comment about permutations (and suspect it is
unnecessary).

The point is that the problem space is large, and the number of permutations yields an upper bound on the size of the problem space.
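For illustration only, a quick count with made-up category sizes (none of these numbers come from the slide deck) shows how permutations bound the problem space:

```python
from math import prod

# Hypothetical category sizes, purely illustrative:
party_roles = 4         # e.g. subject, IdP, RP, attribute provider
transaction_types = 6
assurance_levels = 4
trust_elements = 20     # privacy, notification, controls, ...

# If every (role, transaction type, assurance level) combination may
# need its own stance on each trust element, the number of distinct
# cases is bounded above by the product of the category sizes:
upper_bound = prod([party_roles, transaction_types, assurance_levels,
                    trust_elements])
print(upper_bound)  # 1920 distinct cases even for these small counts
```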

o  I'm not sure I understand the function of slide 6; I can't figure out
how it adds to the definition of Trust Framework.

You are right.  I certainly don't consider this a polished presentation.  I created it as a straw man to start the conversation and collect feedback.  If this ever grows into a polished final version, one or the other definition will need to be excised.

In general, I think it is best to be as simple as possible about what is
meant by Trust Framework, and I'd urge you to move in the direction of
more rather than less.   That said, while I understand the EU
requirements on use limitation, I also have my doubts on efficacy of
that principle in practice.  Thus I would urge that the Privacy
Framework include auditing of the use of data.

The decision of what Trust Elements are included or excluded from the Privacy Framework we create is not mine.  It will be made by P3 as a group.  Personally, I have no strong opinions at this point about what items we include and which we leave to other groups to address. 

I am concerned that whatever plan we use to move forward addresses the entirety of the problem, though.  This is not to suggest that P3 lead the overall Trust Framework effort.  Rather, I would like to know that a road map exists that includes building out the entire Trust Framework (over time) so that subsystems like Privacy have sufficient support to be effective. 

I consider this like a Space Shuttle mission.  I don't know if Privacy is the launch pad, the launch vehicle, the shuttle, the NASA command center or the landing strip.  But I know that if we don't have all of these systems built out to a master set of specifications, the mission is unlikely to succeed.  Simple failures to align all the pieces (e.g., the landing strip being too short for the speed and mass of the shuttle, or specs from one supplier being read as inches when they are in centimeters) will result in mission failure.  But, so far, I haven't observed that other attempts at privacy frameworks have acknowledged the larger system-of-systems environment that needs to be addressed.
 

Finally, I have a question that perhaps should have been my first
question here. What is the purpose of this set of slides?   They seem
somewhat diffuse at present, and I am confused by them.  Thanks.

Best,

Susan






_______________________________________________
WG-P3 mailing list
WG...@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-p3

Susan Landau

Jan 16, 2011, 10:21:53 AM
to wg...@kantarainitiative.org
Susan,

I'll start with your last question first.

What is the purpose of this set of slides?  
In my view, one of the reasons that we have had a proliferation of organizations proffering disparate attempts at creating a Privacy Framework is that no one has taken the time to define the problem in sufficient detail to create a common understanding of its purpose.  So, like the blind men describing the elephant, the various offerings reflect each group's perspective, but fail to address the entirety of the problem.
Thanks.   I have a slightly different view, and suspect that we first need to solve the problems faced by each of the different groups in order to understand what the commonalities are.


A second reason that none of the proffered solutions has gained widespread traction is that a Privacy Framework, by itself, is insufficient to solve the larger problem which I suggest is creating sufficient trust among relevant parties to cause each of them to conduct a transaction.  I, therefore, believe that Trust is the real problem and Privacy is just a subset.
Trust is certainly the real issue.


If we can accept the second reason above, this then leads me to two conclusions:
  1. We need to define a solution (or a road map to achieve the solution) to the Trust problem before we worry about the Privacy subset.
  2. The scope of the Privacy subset (i.e., which trust elements are included in it) is less critical than the fact that every trust element will eventually be addressed (e.g., as an element of Identity, Notification, Controls, or whatever other subsets we deem useful).
The slide deck attempts to describe the larger Trust Framework problem to allow us to consciously select the Trust Elements that we consider most valuable to include in the Privacy Framework that we compose.   Criteria for selection may be some sense of the relative importance of certain Trust Elements or may be the ability to gain significant reduction in complexity by selecting Trust Elements that can be addressed in a common way.

I include other responses in line below.

Thank you for your feedback.

Sure.

We have a difference of opinion here; I think the attempt to do a Trust Framework, then a Privacy Framework is an attempt to boil the ocean.  But now that I understand the motivation and purpose for what you're doing, it helps me understand where I want to concentrate my efforts.  So thanks.


On Thu, Jan 13, 2011 at 8:59 PM, Susan Landau <susan....@privacyink.org> wrote:
On 1/13/11 6:25 AM, j stollman wrote:
> All,
>
> I have significantly updated my Trust Framework presentation to
> facilitate our update on the Privacy Framework subgroup activity.
Jeff,

The mathematician in me is a little confused by the definitions and
"laws" you have on slides 4 and 5.   I think you are trying for more
precision than is appropriate.   It could be my misunderstanding.  Here
are some of my confusions:

o  I don't understand your definition of a trust framework: if A and B
commit to a transaction, then I don't get the role "controls" play.  Do
you mean instead that if A and B commit to a transaction, then the
transaction MUST (as in IETF "MUST") follow the regulatory and
contractual obligations or the transaction is not valid within the trust
framework?  Or do you mean something else?

A and B may not commit to the transaction unless each trusts that certain controls are in place that will support their need for protection.  The trust may be dependent upon knowing things like (1) how does Party B intend to protect my PII? (2) does Party B offer me the ability to restrict passing of my data to other parties? (3) does Party B afford me the opportunity to review and correct errors in my data that it holds?

o  If trust is "the willingness of a party to engage in a transaction,"
then the law that says "Trust is not uniform" is logically unnecessary.

I don't dispute your point, but all definitions are in flux.  And I think there is value in emphasizing that trust will vary with the particulars of a transaction, including (1) who the other parties are, (2) the nature of the transaction, (3) the levels of assurance and protection committed to by the parties, etc.
I would ask: are you doing a logical framework, a legal framework, or a policy framework?   Words mean different things in different contexts.  Math, and also law, work by minimization, and if something is covered earlier, added text doesn't help.   For my own part, clarity is helped by precision and a narrow definition.



o  I have no idea what you mean by "Trust is not personal."  Is it "Each
transaction involving a party is handled individually within the
framework and does not rely upon previous transactions between these
parties"? --- or is something else meant?

Yes, you are correct.  Party B may not have blanket trust in the commitments made by Party A.  He may trust Party A for one transaction (e.g., a low value purchase), but not trust Party A for another (e.g., a high value transaction). 

o  I don't understand the comment about permutations (and suspect it is
unnecessary).

The point is that the problem space is large, and the number of permutations yields an upper bound on the size of the problem space.
I would omit this.


o  I'm not sure I understand the function of slide 6; I can't figure out
how it adds to the definition of Trust Framework.

You are right.  I certainly don't consider this a polished presentation.  I created it as a straw man to start the conversation and collect feedback.  If this ever grows into a polished final version, one or the other definition will need to be excised.

In general, I think it is best to be as simple as possible about what is
meant by Trust Framework, and I'd urge you to move in the direction of
more rather than less.   That said, while I understand the EU
requirements on use limitation, I also have my doubts on efficacy of
that principle in practice.  Thus I would urge that the Privacy
Framework include auditing of the use of data.

The decision of what Trust Elements are included or excluded from the Privacy Framework we create is not mine.  It will be made by P3 as a group.  Personally, I have no strong opinions at this point about what items we include and which we leave to other groups to address. 

I am concerned that whatever plan we use to move forward addresses the entirety of the problem, though.  This is not to suggest that P3 lead the overall Trust Framework effort.  Rather, I would like to know that a road map exists that includes building out the entire Trust Framework (over time) so that subsystems like Privacy have sufficient support to be effective. 

I consider this like a Space Shuttle mission.  I don't know if Privacy is the launch pad, the launch vehicle, the shuttle, the NASA command center or the landing strip.  But I know that if we don't have all of these systems built out to a master set of specifications, the mission is unlikely to succeed.  Simple failures to align all the pieces (e.g., the landing strip being too short for the speed and mass of the shuttle, or specs from one supplier being read as inches when they are in centimeters) will result in mission failure.  But, so far, I haven't observed that other attempts at privacy frameworks have acknowledged the larger system-of-systems environment that needs to be addressed.
Here we are going to disagree.  I think the way to work on a privacy framework --- and the work happening in OASIS PNRM (that I'll report on during the next P3 call)  --- is bottom up, by really understanding use cases and the differences/commonalities.

In any case, thanks very much for the clear answers.  I now understand what you are trying to do, and I appreciate the explanation.

Best,

Susan

Rainer Hörbe

Jan 16, 2011, 2:56:23 PM
to Susan Landau, wg...@kantarainitiative.org

Am 16.01.2011 um 16:21 schrieb Susan Landau:
Am 16.01.2011 um 02:05 schrieb j stollman:
Susan,


What is the purpose of this set of slides?  
In my view, one of the reasons that we have had a proliferation of organizations proffering disparate attempts at creating a Privacy Framework is that no one has taken the time to define the problem in sufficient detail to create a common understanding of its purpose.  So, like the blind men describing the elephant, the various offerings reflect each group's perspective, but fail to address the entirety of the problem.
Thanks.   I have a slightly different view, and suspect that we first need to solve the problems faced by each of the different groups in order to understand what the commonalities are.

From the RP perspective we already have a number of Entity Authentication Assurance frameworks, like the IAF. These frameworks lack a clean and explicit model in certain aspects, like the categorization of trust requirements, actors, and scope. I have not seen a more advanced concept in the PF either. If we ignore these shortcomings, we may get some documents finished earlier, but at the expense of compatibility between the IAF and the PF. And I would claim that mapping them to non-Kantara frameworks and policies would be more difficult as well.


A second reason that none of the proffered solutions has gained widespread traction is that a Privacy Framework, by itself, is insufficient to solve the larger problem which I suggest is creating sufficient trust among relevant parties to cause each of them to conduct a transaction.  I, therefore, believe that Trust is the real problem and Privacy is just a subset.
Trust is certainly the real issue.

If we can accept the second reason above, this then leads me to two conclusions:
  1. We need to define a solution (or a road map to achieve the solution) to the Trust problem before we worry about the Privacy subset.
  2. The scope of the Privacy subset (i.e., which trust elements are included in it) is less critical than the fact that every trust element will eventually be addressed (e.g., as an element of Identity, Notification, Controls, or whatever other subsets we deem useful).
The slide deck attempts to describe the larger Trust Framework problem to allow us to consciously select the Trust Elements that we consider most valuable to include in the Privacy Framework that we compose.   Criteria for selection may be some sense of the relative importance of certain Trust Elements or may be the ability to gain significant reduction in complexity by selecting Trust Elements that can be addressed in a common way.

I include other responses in line below.

Thank you for your feedback.

Sure.

We have a difference of opinion here; I think the attempt to do a Trust Framework, then a Privacy Framework is an attempt to boil the ocean.  But now that I understand the motivation and purpose for what you're doing, it helps me understand where I want to concentrate my efforts.  So thanks.

The parties in a federation need to have a complete trust framework. Delivering a patchwork of several non-harmonized frameworks would not get the business of the federation participants done earlier. In software development it is common sense that it is more expensive to fix design shortcomings in the field than in the development cycle.

Of course we cannot build a world model and need to restrict our scope. But I think that some modeling can be done that is not as vast as the ocean. Approaches like the BSI baseline protection model for IT security (http://en.wikipedia.org/wiki/IT_baseline_protection) are a realistic alternative. It has a common model and provides the details in a catalogue, thereby decoupling the generic concepts from the voluminous details.


Rainer

Susan Landau

Jan 16, 2011, 3:10:48 PM
to wg...@kantarainitiative.org
On 1/16/11 2:56 PM, Rainer Hörbe wrote:
>
> Thanks. I have a slightly different view, and suspect that we first
> need to solve the problems faced by each of the different groups in
> order to understand what the commonalities are.
>
> From the RP perspective we already have a number of frameworks for
> Entity Authentication Assurance frameworks, like the IAF. These models
> lack a clean and explicit model in certain aspects, like
> categorization of trust requirements, actors and scope. I have not
> learned about a more advanced concept in the PF either. If we ignore
> these shortcomings, we may get some documents finished earlier, but at
> the expense of compatibility between IAF and PF. And I would claim
> that mapping them to non-Kantara frameworks and policies would be more
> difficult as well.
I wasn't clear about what I meant by "different" groups. I meant different
organizations working on Trust Frameworks, not different groups within KI.

>
>>
>> We have a difference of opinion here; I think the attempt to do a
>> Trust Framework, then a Privacy Framework is an attempt to boil the
>> ocean. But now that I understand the motivation and purpose for what
>> you're doing, it helps me understand where I want to concentrate my
>> efforts. So thanks.
>
>
> The parties in a federation need to have a complete trust framework.
> Delivering a patchwork of several non-harmonized frameworks would not
> get the business of the federation participants done earlier.
Of course.

> In software development it is common sense that it is more expensive
> to fix design shortcomings in the field than in the development cycle.
Of course.

My concern is whether P3WG has the expertise and is the appropriate
group to be developing the Trust Framework. That's why I asked where the
decision to do this is coming from.

Rainer Hörbe

Jan 16, 2011, 3:23:57 PM
to Susan Landau, Kantara P3WG

Am 16.01.2011 um 21:10 schrieb Susan Landau:

>
> My concern is whether P3WG has the expertise and is the appropriate
> group to be developing the Trust Framework. That's why I asked where the
> decision to do this is coming from.

There is an effort underway in the IAWG, now two weeks old, to create a Trust Framework Model, and the UMA WG plans to start a similar effort, AFAIK. I am interested in coordinating these efforts and in receiving input from the user-centric side, as the IAWG is a bit enterprise-biased.

Rainer

Susan Landau

unread,
Jan 16, 2011, 3:36:03 PM1/16/11
to Kantara P3WG
On 1/16/11 3:23 PM, Rainer Hörbe wrote:
> Am 16.01.2011 um 21:10 schrieb Susan Landau:
>
>> My concern is whether P3WG has the expertise and is the appropriate
>> group to be developing the Trust Framework. That's why I asked where the
>> decision to do this is coming from.
> There is an effort underway in the IAWG, now two weeks old, to create a Trust Framework Model, and the UMA WG plans to start a similar effort, AFAIK. I am interested in coordinating these efforts and in receiving input from the user-centric side, as the IAWG is a bit enterprise-biased.
Having these two groups do it, with their efforts coordinated, makes a
great deal of sense to me.

Susan

Eve Maler

unread,
Jan 16, 2011, 5:42:57 PM1/16/11
to Susan Landau, Kantara P3WG
Just to offer a bit more context, the UMA group is planning to document its own trust model, and we're very much hoping to leverage the "meta-model" that Rainer et al. have been developing. For example, in UMA, the requester endpoint (a software tool) and its associated requesting party (a type of "relying" entity that can potentially carry liability for its actions) start out 100% untrusted, but by virtue of various interactions, become trusted by some other parties for some things. This affects our security and privacy considerations and, I hope, will also be a pedagogical tool for certain audiences to explain UMA's whole reason for being.

Eve


Eve Maler http://www.xmlgrrl.com/blog
+1 425 345 6756 http://www.twitter.com/xmlgrrl

Anna Slomovic

unread,
Jan 18, 2011, 2:09:44 PM1/18/11
to j stollman, wg...@kantarainitiative.org
Jeff,

When the privacy community talks about Collection, it includes both collection of which the subject is aware and collection of which s/he is unaware. In fact, below is the OECD definition of the Collection Limitation FIP:

Collection Limitation Principle

There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.

As you can see, "knowledge or consent of the data subject" is delimited by "where appropriate."

Anna

Anna Slomovic
Chief Privacy Officer

Anakam, Inc.
1010 N. Glebe Road


Suite 500
Arlington, VA 22201

T: 703.888.4620
F: 703.243.7576
W: www.anakam.com
E: aslo...@anakam.com
________________________________________
From: j stollman [stoll...@gmail.com]
Sent: Saturday, January 15, 2011 4:50 PM
To: Anna Slomovic
Cc: Bob Pinheiro; wg...@kantarainitiative.org


Subject: Re: [WG-P3] to facilitate the Privacy Framework discussion

Anna,

I think you are correct that surveillance is a special case of collection. I would suggest that the differentiation comes from the fact that, in collection, the Subject consciously transmits personal information to the collector, while in surveillance the Subject may be unaware of what information has been transmitted, or even of the fact that information has been transmitted.

The distinction may turn out to be primarily about the issue Mark is working on regarding Notification and Consent. Using the rough distinctions above, Collection would include Notification and Consent; surveillance might not.

Jeff

On Fri, Jan 14, 2011 at 11:51 AM, Anna Slomovic <aslo...@anakam.com> wrote:
Bob,

Clarification question. Why wouldn’t restriction on monitoring be a special case of restriction on collection (which is required for monitoring), combined with restriction on use?

Thanks.

Anna


Anna Slomovic
Chief Privacy Officer
Anakam, an Equifax company
1010 N. Glebe Rd.
Suite 500
Arlington, VA 22201

P: 703.888.4620
F: 703.243.7576

From: wg-p3-...@kantarainitiative.org [mailto:wg-p3-...@kantarainitiative.org] On Behalf Of Bob Pinheiro
Sent: Thursday, January 13, 2011 10:34 AM
To: wg...@kantarainitiative.org
Subject: Re: [WG-P3] to facilitate the Privacy Framework discussion

Jeff et al,

Two comments:

1. I think the term "framework" might be a bit overused. To me, "framework" should be reserved for the highest-level entity we are talking about; in this case, the trust framework. That is, the framework provides the structure that defines how all of the component pieces will work together to achieve the overall goal, which in this case is trust. [Of course, there are different trust relationships, and presumably the trust framework will deal with all of these.]

What then to call these components of trust (i.e., privacy, notification, control, etc.)? Each of these components seems to provide a set of criteria for enabling one aspect of trust (i.e., privacy, etc.) to be satisfied. Would it make sense to call these components "criteria"? So the trust framework would be composed of privacy criteria, identity criteria, notification criteria, etc.

2. My second comment concerns the privacy criteria. Privacy criteria include things such as restrictions on use, restrictions on collection, informed consent, etc. What about restrictions on monitoring? One aspect of privacy that people are concerned about regarding the identity ecosystem is that the government (or someone) will be able to monitor their online activities easily. U-Prove technology, in particular, provides a way to prevent this by ensuring untraceability and unlinkability of the U-Prove tokens that convey identity claims.

Bob P.

On 1/13/2011 6:25 AM, j stollman wrote:
All,

I have significantly updated my Trust Framework presentation to facilitate our update on the Privacy Framework subgroup activity.

Enjoy.

Jeff

--
Jeff Stollman
stoll...@gmail.com
1 202.683.8699


_______________________________________________

WG-P3 mailing list

WG...@kantarainitiative.org

http://kantarainitiative.org/mailman/listinfo/wg-p3



--
Jeff Stollman
stoll...@gmail.com
