About privacy using robots and possible ways to protect us


vjrj

Jun 17, 2009, 8:12:42 PM
to Google Wave API
Although I'm not an expert in Wave, these days I'm a little worried
about the privacy implications of using third-party Wave robots.

Since robots act like any other participant, I can imagine ordinary
people adding cute, proprietary, popular robots that misuse the data
in their waves.

If we use robots/services similar to Spelly or Linky by default, and
they are not FLOSS (that is, we cannot install them on our own wave
server and we cannot verify the code), the "owner" of such a robot
can use the data in our waves for other purposes: tracking our
conversations, mapping our contacts, building profiles of us, etcetera.

Also, looking at the "Delegation" section of
http://www.waveprotocol.org/whitepapers/access-control
this kind of misuse can stay even better hidden behind a trusted participant.

On the other hand, I can imagine robots that try to protect us from
these kinds of unverified robots: adblock robots and other
protective robots.

Am I wrong in this perception? What do you think?

PS: Sorry for my English...

Michael Van Biesbrouck

Jun 17, 2009, 8:41:10 PM
to google-...@googlegroups.com
On Wed, Jun 17, 2009 at 5:12 PM, vjrj <vruiz....@gmail.com> wrote:
>
> Although I'm not an expert in Wave, these days I'm a little worried
> about the privacy implications of using third-party Wave robots.
>
> Since robots act like any other participant, I can imagine ordinary
> people adding cute, proprietary, popular robots that misuse the data
> in their waves.
>
> If we use robots/services similar to Spelly or Linky by default, and
> they are not FLOSS (that is, we cannot install them on our own wave
> server and we cannot verify the code), the "owner" of such a robot
> can use the data in our waves for other purposes: tracking our
> conversations, mapping our contacts, building profiles of us, etcetera.

Spelly and Linky are provided by the owner of the wave server (Google,
in this case). Your data is at the whim of your wave server provider
regardless of which first-party robots are used. Check their privacy
policy and make sure that you are okay with it. In the case of
third-party robots (the ones that anyone can develop and add to
waves), privacy will be determined by the wisdom of the participants
(the only people that can add third-party robots). If you are
discussing top-secret information, adding third-party robots to format
smilies would be inadvisable.

> Also, looking at the "Delegation" section of
> http://www.waveprotocol.org/whitepapers/access-control
> this kind of misuse can stay even better hidden behind a trusted participant.
>
> On the other hand, I can imagine robots that try to protect us from
> these kinds of unverified robots: adblock robots and other
> protective robots.

Third-party robots could be used to remove spam or help the wave owner
moderate a wave.

In general, you need to decide if you trust any participant you add,
whether human or robot. In the case of a robot, you are trusting the
human behind it.

There is no extant mechanism to block unwanted participants, but
someone could create a server that blocks untrusted participants
(random robots) and lets wave participants vote before someone can be
added to the wave. Note that anyone who wants to add another
undesirable wave participant could cut-and-paste wave contents to the
same detrimental effect.
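
A minimal sketch of that voting idea, in Python. Everything here is
hypothetical; no such server hook actually exists:

# Hypothetical sketch: current participants vote before a new
# participant (human or robot) is admitted to a wave.
class GatekeeperWave:
    def __init__(self, participants):
        self.participants = set(participants)
        self.pending = {}  # candidate -> set of approving voters

    def vote(self, voter, candidate):
        """Record a vote; admit the candidate once a majority approves."""
        if voter not in self.participants:
            raise ValueError("only current participants may vote")
        approvals = self.pending.setdefault(candidate, set())
        approvals.add(voter)
        if len(approvals) > len(self.participants) / 2:
            self.participants.add(candidate)
            del self.pending[candidate]
            return True
        return False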

Michael

Leonard Badi

Jun 18, 2009, 3:29:48 AM
to google-...@googlegroups.com
What happens when a participant in a wave adds a malicious robot (without knowing that it is malicious)? How will the other participants be affected by the robot's actions? In other words, will it affect only the machine from which the malicious robot was added, or the entire federation?

Jeff Brown

Jun 18, 2009, 4:13:52 AM
to google-...@googlegroups.com
It sounds like it might be useful to enlist another robot for the purpose of vetting whether certain participants can be added or removed from a wave.  For example, we could have a corporate "Secrecy" bot ensure that only authorized participants were added.  We could also have a "Supervisory" bot ensure that no required participants were removed.
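
A toy sketch of the "Secrecy" bot logic, in Python. The event hook and
wavelet interface are made up for illustration; a real robot would use
the Wave robots API:

# Hypothetical sketch of a "Secrecy" bot: whenever the participant
# list changes, remove anyone who is not on the corporate whitelist.
AUTHORIZED = {"alice@example.com", "bob@example.com",
              "secrecy-bot@example.com"}

def on_participants_changed(wavelet, added_participants):
    for participant in added_participants:
        if participant not in AUTHORIZED:
            # remove_participant is assumed here, not a documented call.
            wavelet.remove_participant(participant)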
 
Of course the idea of bots vetting changes gets us into some dangerous semantic territory (we would likely need some kind of messy transaction mechanism to make it work, ugh).
 
I think a more promising avenue would be to enforce security policies within the scope of the wave server itself.  The wave server could take into account the content of the wave (participant information, annotations, etc.) to decide whether to allow certain state changes to be performed.  Servers could be configured to deny relaying of certain content to external participants.  This arrangement mirrors how things work today, including the risk of implementing convoluted ad-hoc local policies...
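
A rough sketch of that server-side policy check (all names are
hypothetical; no wave server exposes such a hook today):

# Hypothetical server-side policy: before relaying a state change to
# an external participant, consult the wave's annotations.
TRUSTED_DOMAINS = {"example.com"}

def may_relay(recipient, wave_annotations):
    """Return True if a delta may be federated to this participant."""
    domain = recipient.split("@", 1)[1]
    if domain in TRUSTED_DOMAINS:
        return True
    # Deny relaying any content the wave has marked confidential.
    return "confidential" not in wave_annotations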
 
Jeff.


From: google-...@googlegroups.com [mailto:google-...@googlegroups.com] On Behalf Of Leonard Badi
Sent: Thursday, June 18, 2009 12:30 AM
To: google-...@googlegroups.com
Subject: [Google Wave APIs] Re: About privacy using robots and possible ways to protect us

Bastian Hoyer

Jun 18, 2009, 9:20:33 AM
to Google Wave API
I think you don't need robots for the invite/remove stuff because of
the group feature. There are rights that let you add yourself to a
wave read-only or read-write, depending on your rights in the group.

The question is how groups work, e.g. whether you can have robots
adding/removing people from groups. That way you wouldn't need robots
in individual waves, as long as you add the correct groups to the wave.
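
Something like this toy Python sketch, with hypothetical group and
ACL tables:

# Hypothetical sketch: waves grant rights to groups, and a robot (or
# admin) manages group membership instead of per-wave participant lists.
from enum import Enum

class Right(Enum):
    NONE = 0
    READ_ONLY = 1
    READ_WRITE = 2

groups = {"team@example.com": {"alice@example.com", "bob@example.com"}}
wave_acl = {"team@example.com": Right.READ_WRITE}  # group -> right on the wave

def effective_right(user):
    """A user's right is the strongest right of any group containing them."""
    rights = [wave_acl[g] for g, members in groups.items()
              if user in members and g in wave_acl]
    return max(rights, key=lambda r: r.value, default=Right.NONE)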


On 18 Jun., 10:13, "Jeff Brown" <jeff.br...@gmail.com> wrote:
> It sounds like it might be useful to enlist another robot for the purpose of
> vetting whether certain participants can be added or removed from a wave.
> [...]

Creq

Jun 18, 2009, 4:08:53 PM
to Google Wave API
what vjrj says is a really important issue. i personally wouldn't add
a robot unless i knew for sure that it only does what it claims to
do, even for a normal private conversation. two possible solutions
for this:

there could be an independent authority which verifies robots (and
their code) and visibly marks them for users.
the other option is that when google, or whichever trustworthy wave
provider, runs a robot library anyway, they will very likely review
the robots before including them. (i assume apple reviews apps
before they provide them in the app store, too?)

Avital Oliver

Jun 18, 2009, 4:26:41 PM
to google-...@googlegroups.com
This is up to the users more than up to Google. What constitutes a "bad" robot? Different people will have different views.

I think users will learn to be weary of non-open source robots and different communities will learn to "allow" or "disallow" different robots (where people will look at the source code of robots and write independent descriptions of what they do).

Avital Oliver

Jun 18, 2009, 4:27:52 PM
to google-...@googlegroups.com
s/weary/wary/

Creq

Jun 18, 2009, 4:35:18 PM
to Google Wave API
> What constitutes a "bad" robot?

i would consider a robot that forwards every keystroke directly to
its programmer, without declaring that, a bad robot ;-)

vjrj

Jun 18, 2009, 4:41:06 PM
to Google Wave API
Yesterday I started to think that an Advogato-style robot/gadget
could be useful for this task (and could also be used in wave-based
barter and/or auction systems more complex than the Bidder gadget).

Maybe an adaptation of http://advogato.org/trust-metric.html. I was
looking at a Java port, http://www.saddi.com/projects/index.html,
although some distributed system would be better.
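
A much-simplified toy version in Python (the real Advogato metric
computes a maximum network flow over the certification graph; here
trust capacity just halves with each hop from a trusted seed):

# Toy approximation of a trust metric, not the actual Advogato algorithm.
from collections import deque

def trusted(cert_graph, seed, capacity=8):
    """Return accounts reachable from `seed` while trust capacity lasts.

    cert_graph maps an account to the accounts it certifies.
    """
    accepted = {seed}
    queue = deque([(seed, capacity)])
    while queue:
        node, cap = queue.popleft()
        if cap <= 1:
            continue  # too far from the seed to extend trust
        for peer in cert_graph.get(node, ()):
            if peer not in accepted:
                accepted.add(peer)
                queue.append((peer, cap // 2))
    return accepted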

Is anyone interested in participating in some kind of robot/gadget?
I have no access to the sandbox yet, so I have to wait...

gmurray

Jun 19, 2009, 1:34:29 AM
to Google Wave API
I haven't read much of the APIs etc. and don't have access to the
sandbox yet, so I don't know if this is addressed, but isn't it a
problem that anyone can add a participant to a wave? Whether the
incoming participant is human or robot, there may be info that you
submitted to the wave that was fine for the current participants but
that you would nevertheless not want shared with a newly added robot
or human participant you never "ok"ed. Is there any facility for
saying that you want to "ok" participants that other users add to
waves you have contributed content to?

This all sounds a bit tricky. And it's all well and good if you are
just using waves for mindless entertainment and keep personal details
out, but for any usage profile that entails adding sensitive content
to a wave, one would think you would want tighter controls on the
info.

Or is the onus on the user: if they are adding sensitive content to a
wave, should they create it as a private chunk even if they intend
all the current participants to see it? That way, later additions to
the wave cannot see the content unless they are added to the private
sections.
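
A toy sketch of such a private chunk (entirely hypothetical):

# Hypothetical sketch: each section keeps its own reader list, so
# participants added to the wave later see nothing extra by default.
class PrivateSection:
    def __init__(self, content, readers):
        self.content = content
        self.readers = set(readers)

    def render_for(self, participant):
        return self.content if participant in self.readers else "[restricted]"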

This all starts to sound a lot like some of the patterns in the
identity selection/management realm (InfoCard etc.), where users need
to share claims with relying parties but don't want their identity
providers to promiscuously share all the available claims with any
RP.

It may even be appropriate for a user to set various options in their
profile about which kinds of data they would NOT like to share with
which types of participants. There is opportunity for a lot of
trouble here: although this all seems wonderfully easy to use, it
also seems like it would be very easy to mistakenly dump a bunch of
sensitive content out onto Twitter, for example.