_______________________________________________
WG-P3 mailing list
WG...@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-p3
Let me post a few comments on your presentation:
a) Definition of Trust:
There are some common alternatives to the definition "Willingness to engage in a transaction", like
"Reliance on the enforcement of a certain policy", or
"Understood probability in the expected behavior of another party".
There are more scientific definitions of trust available, such as in the Wikipedia article on computational trust:
http://en.wikipedia.org/wiki/Computational_trust#Defining_Trust
I think that the normative text should use a precise definition, even at the expense of clarity; a comment can carry the simpler ones.
b) Trust element
"A performance commitment by a single party (Object) to a second single party (Subject) .. ". Sorry if I give the impression of splitting hairs, but trust implies the point of view of a party that requires/expects some behavior. So a trust element would have to be defined from the view of that party, as a requirement on another party that asserts/commits to some behavior. Is this term really needed? If the perspective is reversed as I suggest, it sounds like a synonym for a requirement.
c) Trust Framework
"A set of verifiable commitments from the various parties of a transaction to the other parties." Is a single transaction the right pivot? There might be commitments that are not clearly related to a transaction, like service level agreements or regular auditing.
d) Laws of Trust
I do not understand "5. Trust is not personal". Please can you explain?
e) Defining a Privacy Framework
"The selection of Trust Elements to be included in the subset is not critical." That is a very good point! I think I got too fascinated by the question of how to break the TF into modules. But the definition of requirements/trust elements is critical if we need interoperability between frameworks and policies.
f) Classifying Trust Elements
When I look at the slide "Classifying Trust Elements", it appears to take a user-centric view. Is this Privacy Framework defined to be all-encompassing or user-centric? In the enterprise use case we have privacy requirements as well. However, these are not requirements from the user as a data subject to the service provider, but from the data controller to the user. Also, we need to be aware of a known conflict between data protection and information security: each one regards the other as an auxiliary tool for its own objectives. That makes a single hierarchical breakdown impossible. I agree that we cannot map the whole space at once, but we should at least coordinate the territorial claims between the KI WGs.
g) Parties in a Trust Framework
Interesting idea to categorize parties by their level of involvement. I am glad to see that the IAF Glossary found its way into this listing. But it also raises a point I have difficulty with: what kind of roles do we discuss on the level of the trust framework? Actors that can act as legally responsible parties? Or do we also include roles that act on behalf of legally responsible actors? E.g. the audit repository: is it a separate role, or even multiple roles per provider? There could be a common audit repository as well. A colleague of mine from Germany argued recently that it is the hinge for establishing trust on the legal level, particularly if a transaction ends up in court. So he thinks it needs to be part of the TF. I am not convinced of that.
h) Example Trust Elements - Data Collection 1
As mentioned recently, "Extent of risk imposed on subject" does not fit into this class of requirements. We cannot assess risk at this point; we can only require that the object protect the subject with respect to the data it collects.
i) Matrix View of Trust Framework Map
I agree with your comments - it will not be possible to create a comprehensive presentation in a flat matrix. However, in a database-powered approach (like cmmls.portalvebund.at) it should be possible, because you can always work on particular views and have reporting tools to analyze gaps and overlaps. But any suggestions to simplify the model are very welcome!
Regarding the IAF: Identity Assurance is primarily a trust relationship between RP and IdP. It implies that the IdP has to implement a contract with the subscriber, but that is secondary from the position of the IAF, because subscribers and subjects are not participants of the IAF.
There are more trust relationships in the matrix than are explained in the preceding slides .. what does that mean?
j) Parties in a Trust Framework
Could you include the supplements that I added to your spreadsheet, and discuss the open points? A list of Parties/Roles/Actors common to all Kantara Trust Frameworks could be a document of its own, because it is referenced in different places, like the TF, use cases, glossary and TF architecture.
Good to see this document progressing. Thank you, Jeff.
- Rainer
On 13.01.2011, at 12:25, j stollman wrote:
All,
I have significantly updated my Trust Framework presentation to facilitate our update on the Privacy Framework subgroup activity.
Enjoy.
Jeff
--
Jeff Stollman
stoll...@gmail.com
1 202.683.8699
2. My second comment concerns the privacy criteria. Privacy criteria includes things such as restrictions on use, restrictions on collection, informed consent, etc. What about restrictions on monitoring? One aspect of privacy that people are concerned about regarding the identity ecosystem is that the government (or someone) will be able to monitor their online activities easily. U-Prove technology, in particular, provides a way to prevent this by providing ways to ensure untraceability and unlinkability of the U-Prove tokens that convey identity claims.
Susan
At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.
That will change over time.
The IMI protocol supported bearer tokens where the IdP did not know who the RP was.
It had a security problem with not being able to audience restrict the tokens, allowing a bad RP to replay them as the user before they expire.
The current u-Prove implementation has similar issues.
To get around that the client needs to go back to the issuer each time to get new crypto for the token.
Doable but gives up the advantage of not needing the IdP online.
The big problem will be revenue models: claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
If the user is using a "Cloud Selector", under the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the user's information, including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.
If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)
With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
U-Prove is not a solution to everything, it is particularly useful for certain crypto but will not be widespread any time soon.
If the Web 2.0 folks reject RSA as being too complicated, they will never adopt zero-knowledge proofs.
Policy will be needed to protect people for a long time.
Regards
John B.
On 1/13/11 8:02 PM, John Bradley wrote:
> I am trying to work with MS on how U-Prove could fit into a larger picture.
>
> At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.
>
> That will change over time.
>
> The IMI protocol supported bearer tokens where the IdP did not know who the RP was.
> It had a security problem with not being able to audience restrict the tokens,
audience restrict?
> allowing a bad RP to replay them as the user before they expire.
>
> The current u-Prove implementation has similar issues.
>
> To get around that the client needs to go back to the issuer each time to get new crypto for the token.
> Doable but gives up the advantage of not needing the IdP online.
>
> The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
>
> If the user is using a "Cloud Selector" the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the users information including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.
>
> If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)
>
> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
Interesting. Who are the clients that have signed on?
> U-Prove is not a solution to everything, it is particularly useful for certain crypto but will not be widespread any time soon.
>
> If the Web 2.0 folks reject RSA as being too complicated they will never get zero knowledge proofs.
Right.
> Policy will be needed to protect people for a long time.
A couple of years ago, Hubert Le Van Gong, Robin Wilton, and I wrote a
paper on protecting user privacy in FIM, the point of which was that
sometimes it was technology, sometimes policy, sometimes law. It
appeared at Financial Cryptography and Data Security in 2009; I can send it on if
anyone is interested.
Best,
> Concur with all; one question:
>
> On 1/13/11 8:02 PM, John Bradley wrote:
>> I am trying to work with MS on how U-Prove could fit into a larger picture.
>>
>> At the moment their implementation doesn't support pseudonymous identifiers, or token claim revocation.
>>
>> That will change over time.
>>
>> The IMI protocol supported bearer tokens where the IdP did not know who the RP was.
>> It had a security problem with not being able to audience restrict the tokens,
> audience restrict?
Setting the saml:AudienceRestrictionCondition in the returned SAML token.
All RP/SP should check that to be certain the token was generated for them.
In IMI the selector sends ic:AppliesTo in the RST along with the certificate of the RP, which allows the Issuer to place the CN of the certificate in the audience restriction condition and encrypt the token for the RP.
You can generate an RST without sending ic:AppliesTo, however that allows the token to be replayed at any RP.
In principle, as long as the IdP is generating a proper PPID the token is not useful for impersonating the user at a different RP; however, if the RP is only looking at the claims in the SAML assertion it could be fooled.
In SAML the IdP will generally use audience restriction as it knows who the RP is anyway. I suppose the same issues might apply if you were using the ECP profile.
You could provide a RP generated pseudonymous audience restriction using a smart client and still keep the SAML token with only some changes to the authn request flow in the IMI case.
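To make the RP-side check concrete, here is a minimal sketch in Python of the audience test John describes. The element names follow the SAML 1.1 assertion schema (AudienceRestrictionCondition, as named above); the sample assertion and entity IDs are invented for illustration, and a real RP would of course also verify the signature, validity period, and PPID:

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:1.0:assertion"

# Invented, stripped-down assertion fragment; a real token is signed
# and carries subject, attribute, and other condition elements as well.
assertion_xml = f"""
<Assertion xmlns="{SAML_NS}">
  <Conditions NotOnOrAfter="2011-01-14T01:00:00Z">
    <AudienceRestrictionCondition>
      <Audience>https://rp.example.com</Audience>
    </AudienceRestrictionCondition>
  </Conditions>
</Assertion>
"""

def audience_ok(xml_text, my_entity_id):
    """RP-side check: accept the token only if it was issued for us."""
    root = ET.fromstring(xml_text)
    audiences = [e.text.strip() for e in root.iter(f"{{{SAML_NS}}}Audience")]
    # A token with no audience restriction at all could be replayed at
    # any RP, so treat that case as a failure too.
    return bool(audiences) and my_entity_id in audiences

print(audience_ok(assertion_xml, "https://rp.example.com"))   # True
print(audience_ok(assertion_xml, "https://evil.example.org"))  # False
```

Skipping this check (or accepting a token with no restriction) is exactly what lets a bad RP replay the token elsewhere before it expires.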
I am not pushing IMI, just saying that there are ways of achieving the same privacy with EC or RSA.
I don't have anything against U-Prove, but I care more about the relative privacy profiles than the exact crypto.
>> allowing a bad RP to replay them as the user before they expire.
>>
>> The current u-Prove implementation has similar issues.
>>
>> To get around that the client needs to go back to the issuer each time to get new crypto for the token.
>> Doable but gives up the advantage of not needing the IdP online.
>>
>> The big problem will be revenue models, claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
>>
>> If the user is using a "Cloud Selector" the current deployment theory you hide the information from the Issuer but create a new entity that has access to all of the users information including where they use it. That could be argued as not being worth the advantage of hiding the info from the issuer.
>>
>> If you use u-prove with a smart client then you start getting the real advantages. (If you can find someone willing to issue the claims)
>>
>> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
> Interesting. Who are the clients that have signed on?
I was referring to having enough Info-Card selectors on people's computers to be able to make the proposition attractive to RPs.
There are really only two IMI selectors, MS Cardspace and Azigo. Azigo storing credentials in a hosted service introduced some other security considerations as well.
Regards
John B.
Thanks a lot for the detail.
>
>
>>> ...
>>>
>>> With IMI (Information Cards) we discovered that even MS has a hard time building a critical mass of client agents.
>> Interesting. Who are the clients that have signed on?
> I was referring to having enough Info-Card selectors on peoples computers to be able to make the proposition attractive to RP.
>
> There are really only two IMI selectors, MS Cardspace and Azigo. Azigo storing credentials in a hosted service introduced some other security considerations as well.
Okay, thanks.
On 14 Jan 2011, at 01:59, Susan Landau wrote:
> That said, while I understand the EU
> requirements on use limitation, I also have my doubts on efficacy of
> that principle in practice. Thus I would urge that the Privacy
> Framework include auditing of the use of data.
I think we should collect these sorts of comments/PF requirement
options in a nice warm place on our wiki ;-)
Maybe even an audit focus at some point in the PF strategy would be
appropriate?
-M
+1- Very Useful..
Bob,
Clarification question. Why wouldn’t restriction on monitoring be a special case of restriction on collection (which is required for monitoring), combined with restriction on use?
Thanks.
Anna
Anna Slomovic
Chief Privacy Officer
Anakam, an Equifax company
1010 N. Glebe Rd.
Suite 500
Arlington, VA 22201
P: 703.888.4620
F: 703.243.7576
On 1/13/2011 8:02 PM, John Bradley wrote:
To get around that the client needs to go back to the issuer each time to get new crypto for the token. Doable but gives up the advantage of not needing the IdP online.
John, could you just explain what exactly this "new crypto" is that prevents a long-lived U-Prove token from being used without the IdP being online? My understanding is that a U-Prove token is created with both a token-specific public key as well as a corresponding private key that is controlled by the subject. When the subject attempts to use the token to convey a claim to a relying party, the RP sends a challenge to the subject's selector, and the selector then signs this challenge with the token's private key (under the direction of the subject). The signed challenge, along with the token, is returned to the RP. The RP uses the token's public key to verify the signature over the challenge. Hence the RP knows that the claim presented by the token refers to the subject, since only that subject controls the private key.
I may be missing an important detail here, but why would an interaction with a "live" IdP be required for this scenario to work?
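The challenge-response step described above can be modeled with a toy Schnorr-style proof of knowledge. This is not the actual U-Prove presentation protocol (which uses blinded issuance and large, standardized groups); the parameters and flow below are purely illustrative of why no live IdP is needed at presentation time:

```python
import secrets

# Tiny illustrative group; a real system would use a large,
# standardized prime-order group.
P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup of squares mod P
G = 4      # generator of that subgroup

# Issuance (simplified): the token embeds the public key h; only the
# subject holds the private key x. The issuer plays no further part.
x = secrets.randbelow(Q - 1) + 1
h = pow(G, x, P)

# 1. The selector commits to a fresh random value and sends it to the RP.
r = secrets.randbelow(Q - 1) + 1
commitment = pow(G, r, P)

# 2. The RP replies with a fresh, nonzero challenge.
challenge = secrets.randbelow(Q - 1) + 1

# 3. The selector answers using the token's private key.
response = (r + challenge * x) % Q

# 4. The RP verifies against the token's public key:
#    G^response == commitment * h^challenge (mod P)
valid = pow(G, response, P) == (commitment * pow(h, challenge, P)) % P
print(valid)  # True
```

A party that does not know x cannot produce a valid response to a fresh challenge, so the RP learns that the token is being presented by its key holder without contacting the issuer.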
The big problem will be revenue models: claims issuers will need to charge the user if they can't recover money from the RP for valuable claims.
Even if the IdP doesn't collect a fee from the RP every time a claim is transmitted to the RP, other business models may be possible. For instance, OIX assumes that different "trust communities" (ie, libraries, telecom, government, financial, healthcare, etc) may emerge that may each be governed by its own trust framework. If that happens, maybe RPs that are members of some particular trust community will only choose to trust claims issued by IdPs that are members of the same community. This does not seem unreasonable, since claims generated by IdPs in a given trust community may not necessarily contain attributes required by RPs in a different community. Possibly the RP would pay a flat fee to the community for claims issued by IdPs within the community, and these fees would support the IdPs in the community.
Jeff,
On 1/13/11 6:25 AM, j stollman wrote:
> All,
>
> I have significantly updated my Trust Framework presentation to
> facilitate our update on the Privacy Framework subgroup activity.
The mathematician in me is a little confused by the definitions and
"laws" you have on slides 4 and 5. I think you are trying for more
precision than is appropriate. It could be my misunderstanding. Here
are some of my confusions:
o I don't understand your definition of a trust framework: if A and B
commit to a transaction, then I don't get the role "controls" play. Do
you mean instead that if A and B commit to a transaction, then the
transaction MUST (as in IETF "MUST") follow the regulatory and
contractual obligations or the transaction is not valid within the trust
framework? Or do you mean something else?
o If trust is "the willingness of a party to engage in a transaction,"
then the law that says "Trust is not uniform" is logically unnecessary.
o I have no idea what you mean by "Trust is not personal." Is it "Each
transaction involving a party is handled individually within the
framework and does not rely upon previous transactions between these
parties"? --- or is something else meant?
o I don't understand the comment about permutations (and suspect it is
unnecessary).
o I'm not sure I understand the function of slide 6; I can't figure out
how it adds to the definition of Trust Framework.
In general, I think it is best to be as simple as possible about what is
meant by Trust Framework, and I'd urge you to move in the direction of
more rather than less. That said, while I understand the EU
requirements on use limitation, I also have my doubts on efficacy of
that principle in practice. Thus I would urge that the Privacy
Framework include auditing of the use of data.
Finally, I have a question that perhaps should have been my first
question here. What is the purpose of this set of slides? They seem
somewhat diffuse at present, and I am confused by them. Thanks.
Best,
Susan
Susan,
I'll start with your last question first.
What is the purpose of this set of slides?
In my view, one of the reasons that we have had a proliferation of organizations proffering disparate attempts at creating a Privacy Framework is that no one has taken the time to define the problem in sufficient detail to create a common understanding of its purpose. So, like the blind men describing the elephant, the various offerings reflect each group's perspective, but fail to address the entirety of the problem.
A second reason that none of the proffered solutions has gained widespread traction is that a Privacy Framework, by itself, is insufficient to solve the larger problem, which I suggest is creating sufficient trust among relevant parties to cause each of them to conduct a transaction. I, therefore, believe that Trust is the real problem and Privacy is just a subset.
If we can accept the second reason above, this then leads me to two conclusions:
- We need to define a solution (or a road map to achieve the solution) to the Trust problem before we worry about the Privacy subset.
- The scope of the Privacy subset (i.e., which trust elements are included in it) is less critical than the fact that every trust element will eventually be addressed (e.g., as an element of Identity, Notification, Controls, or whatever other subsets we deem useful).
The slide deck attempts to describe the larger Trust Framework problem to allow us to consciously select the Trust Elements that we consider most valuable to include in the Privacy Framework that we compose. Criteria for selection may be some sense of the relative importance of certain Trust Elements or may be the ability to gain a significant reduction in complexity by selecting Trust Elements that can be addressed in a common way.
I include other responses in line below.
Thank you for your feedback.
On Thu, Jan 13, 2011 at 8:59 PM, Susan Landau <susan....@privacyink.org> wrote:
Jeff,
On 1/13/11 6:25 AM, j stollman wrote:
> All,
>
> I have significantly updated my Trust Framework presentation to
> facilitate our update on the Privacy Framework subgroup activity.
The mathematician in me is a little confused by the definitions and
"laws" you have on slides 4 and 5. I think you are trying for more
precision than is appropriate. It could be my misunderstanding. Here
are some of my confusions:
o I don't understand your definition of a trust framework: if A and B
commit to a transaction, then I don't get the role "controls" play. Do
you mean instead that if A and B commit to a transaction, then the
transaction MUST (as in IETF "MUST") follow the regulatory and
contractual obligations or the transaction is not valid within the trust
framework? Or do you mean something else?
A and B may not commit to the transaction unless each trusts that certain controls are in place that will support their need for protection. The trust may be dependent upon knowing things like (1) how does Party B intend to protect my PII? (2) does Party B offer me the ability to restrict passing of my data to other parties? (3) does party B afford me the opportunity to review and correct errors in my data it holds?
o If trust is "the willingness of a party to engage in a transaction,"
then the law that says "Trust is not uniform" is logically unnecessary.
I don't dispute your point, but all definitions are in flux. And I think there is value in emphasizing that trust will vary with the particulars of a transaction, including (1) who the other parties are, (2) the nature of the transaction, (3) the levels of assurance and protection committed to by the parties, etc.
o I have no idea what you mean by "Trust is not personal." Is it "Each
transaction involving a party is handled individually within the
framework and does not rely upon previous transactions between these
parties"? --- or is something else meant?
Yes, you are correct. Party B may not have blanket trust in the commitments made by Party A. He may trust Party A for one transaction (e.g., a low value purchase), but not trust Party A for another (e.g., a high value transaction).
o I don't understand the comment about permutations (and suspect it is
unnecessary).
The point is that the problem space is large and the number of permutations yields an upper bound to the size of the problem space.
o I'm not sure I understand the function of slide 6; I can't figure out
how it adds to the definition of Trust Framework.
You are right. I certainly don't consider this a polished presentation. I created it as a straw man to start the conversation and collect feedback. If this ever grows into a polished final version, one or the other definition will need to be excised.
In general, I think it is best to be as simple as possible about what is
meant by Trust Framework, and I'd urge you to move in the direction of
more rather than less. That said, while I understand the EU
requirements on use limitation, I also have my doubts on efficacy of
that principle in practice. Thus I would urge that the Privacy
Framework include auditing of the use of data.
The decision of what Trust Elements are included or excluded from the Privacy Framework we create is not mine. It will be made by P3 as a group. Personally, I have no strong opinions at this point about what items we include and which we leave to other groups to address.
I am concerned that whatever plan we use to move forward addresses the entirety of the problem, though. This is not to suggest that P3 lead the overall Trust Framework effort. Rather, I would like to know that a road map exists that includes building out the entire Trust Framework (over time) so that subsystems like Privacy have sufficient support to be effective.
I consider this like a Space Shuttle mission. I don't know if Privacy is the launch pad, the launch vehicle, the shuttle, the NASA command center or the landing strip. But I know that if we don't have all of these systems built out to a master set of specifications, the mission is unlikely to succeed. Simple failures to align all the pieces (e.g., the landing strip being too short for the speed and mass of the shuttle, or specs from one supplier being read as inches when they are in centimeters) will result in mission failure. But, so far, I haven't observed that other attempts at privacy frameworks have acknowledged the larger system-of-systems environment that needs to be addressed.
On 16.01.2011, at 02:05, j stollman wrote:
Susan,
What is the purpose of this set of slides?
In my view, one of the reasons that we have had a proliferation of organizations proffering disparate attempts at creating a Privacy Framework is that no one has taken the time to define the problem in sufficient detail to create a common understanding of its purpose. So, like the blind men describing the elephant, the various offerings reflect each group's perspective, but fail to address the entirety of the problem.
Thanks. I have a slightly different view, and suspect that we first need to solve the problems faced by each of the different groups in order to understand what the commonalities are.
Trust is certainly the real issue.
A second reason that none of the proffered solutions has gained widespread traction is that a Privacy Framework, by itself, is insufficient to solve the larger problem which I suggest is creating sufficient trust among relevant parties to cause each of them to conduct a transaction. I, therefore, believe that Trust is the real problem and Privacy is just a subset.
Sure.
If we can accept the second reason above, this then leads me to two conclusions:
- We need to define a solution (or a road map to achieve the solution) to the Trust problem before we worry about the Privacy subset.
- The scope of the Privacy subset (i.e., which trust elements are included in it) is less critical than the fact that every trust element will eventually be addressed (e.g., as an element of Identity, Notification, Controls, or whatever other subsets we deem useful).
The slide deck attempts to describe the larger Trust Framework problem to allow us to consciously select the Trust Elements that we consider most valuable to include in the Privacy Framework that we compose. Criteria for selection may be some sense of the relative importance of certain Trust Elements or may be the ability to gain a significant reduction in complexity by selecting Trust Elements that can be addressed in a common way.
I include other responses in line below.
Thank you for your feedback.
We have a difference of opinion here; I think the attempt to do a Trust Framework, then a Privacy Framework is an attempt to boil the ocean. But now that I understand the motivation and purpose for what you're doing, it helps me understand where I want to concentrate my efforts. So thanks.
My concern is whether P3WG has the expertise and is the appropriate
group to be developing the Trust Framework. That's why I asked where the
decision to do this is coming from.
>
> My concern is whether P3WG has the expertise and is the appropriate
> group to be developing the Trust Framework. That's why I asked where the
> decision to do this is coming from.
There is an effort going on in the IAWG to create a Trust Framework Model, now two weeks in, and the UMA WG plans to start a similar effort AFAIK. I am interested in coordinating these efforts, and in receiving input from the user-centric side, as the IAWG is a bit enterprise-biased.
Rainer
Susan
Eve
Eve Maler http://www.xmlgrrl.com/blog
+1 425 345 6756 http://www.twitter.com/xmlgrrl
When the privacy community talks about Collection, it includes both collection of which the subject is aware and collection of which s/he is unaware. In fact, below is the OECD definition of the Collection Limitation FIP:
Collection Limitation Principle
There should be limits to the collection of personal data and any such data should
be obtained by lawful and fair means and, where appropriate, with the knowledge
or consent of the data subject.
As you can see, "knowledge or consent of the data subject" is delimited by "where appropriate."
Anna
Anna Slomovic
Chief Privacy Officer
Anakam, Inc.
1010 N. Glebe Road
Suite 500
Arlington, VA 22201
T: 703.888.4620
F: 703.243.7576
W: www.anakam.com
E: aslo...@anakam.com
________________________________________
From: j stollman [stoll...@gmail.com]
Sent: Saturday, January 15, 2011 4:50 PM
To: Anna Slomovic
Cc: Bob Pinheiro; wg...@kantarainitiative.org
Subject: Re: [WG-P3] to facilitate the Privacy Framework discussion
Anna,
I think you are correct that surveillance is a special case of collection. I would suggest that the differentiation comes from the fact that, in collection, the Subject consciously transmits personal information to the collector, while in surveillance the Subject may be unaware of what information has been transmitted, or even of the fact that information has been transmitted.
The distinction may turn out to be primarily about the issue Mark is working on regarding Notification and Consent. Using the rough distinctions above, Collection would include Notification and Consent; surveillance might not.
Jeff
On Fri, Jan 14, 2011 at 11:51 AM, Anna Slomovic <aslo...@anakam.com> wrote:
Bob,
Clarification question. Why wouldn’t restriction on monitoring be a special case of restriction on collection (which is required for monitoring), combined with restriction on use?
Thanks.
Anna
Anna Slomovic
Chief Privacy Officer
Anakam, an Equifax company
1010 N. Glebe Rd.
Suite 500
Arlington, VA 22201
P: 703.888.4620
F: 703.243.7576
From: wg-p3-...@kantarainitiative.org [mailto:wg-p3-...@kantarainitiative.org] On Behalf Of Bob Pinheiro
Sent: Thursday, January 13, 2011 10:34 AM
To: wg...@kantarainitiative.org
Subject: Re: [WG-P3] to facilitate the Privacy Framework discussion
Jeff et al,
Two comments:
1. I think the term "framework" might be a bit overused. To me, "framework" should be reserved for the highest level entity we are talking about; in this case, the trust framework. That is, the framework provides the structure that defines how all of the component pieces will work together to achieve the overall goal, which in this case is trust. [Of course, there are different trust relationships, and presumably the trust framework will deal with all of these].
What then to call these components of trust (ie, privacy, notification, control, etc)? Each of these components seems to provide a set of criteria for enabling one aspect of trust (ie, privacy, etc) to be satisfied. Would it make sense to call these components "criteria"? So the trust framework would be composed of privacy criteria, identity criteria, notification criteria, etc.
2. My second comment concerns the privacy criteria. Privacy criteria includes things such as restrictions on use, restrictions on collection, informed consent, etc. What about restrictions on monitoring? One aspect of privacy that people are concerned about regarding the identity ecosystem is that the government (or someone) will be able to monitor their online activities easily. U-Prove technology, in particular, provides a way to prevent this by providing ways to ensure untraceability and unlinkability of the U-Prove tokens that convey identity claims.
Bob P.
On 1/13/2011 6:25 AM, j stollman wrote:
All,
I have significantly updated my Trust Framework presentation to facilitate our update on the Privacy Framework subgroup activity.
Enjoy.
Jeff
--
Jeff Stollman
stoll...@gmail.com
1 202.683.8699
--
Jeff Stollman
stoll...@gmail.com