Nick Mikhailovsky wrote:
> Nathan,
>
> Nice job! +1
>
> I believe it is important to keep OpenSocial in the loop (even though
> internally we can make it FOAF-based), because it can provide seamless
> integration of OpenLike/RelationshipButtons with second-tier social services
> like MySpace, LinkedIn and Blogger. This is where the key interface part is
> - making OpenLike/RelationshipButtons work as smoothly as Facebook's Like
> button, without the need to log in when any of these apps are running in a
> different browser window. I will look into that later today/tomorrow.
My main concern with the current OpenLike is that it isn't really
anything (no offence at all); it's nothing more than the knowledge that
one service uses ?url= and another uses ?addLike= (as examples).
IMHO what's really needed is a single protocol that all services, and
indeed anybody, can implement. This protocol has two sides; one is to
publish likes in a single format which can be consumed by anybody:
<person> likes <thing>
The "likes" verb could of course be changed, and ideally would be a
property from an RDFS or OWL ontology; it could just as easily be
"knows" <person>, "follows" <user-account>, or any other link you could
imagine.
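Such a triple might look like this in Turtle (a minimal sketch; the
`rel:` namespace and `rel:likes` property are hypothetical placeholders,
not a settled ontology):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rel:  <http://example.org/ns/rel#> .   # hypothetical ontology

<http://example.org/people/alice#me>
    rel:likes  <http://example.org/articles/42> ;
    foaf:knows <http://example.org/people/bob#me> .
```

Any vocabulary with an agreed "likes"-style property would do; the point
is that the shape is a plain subject-predicate-object triple.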
The above is already very much handled by structured linked data and
could be implemented in a snap (widgets made, etc.).
The bit which isn't really handled is a common protocol for sending
those relations (together with any additional information such as a
title, a comment, dates, etc.).
Again though, the common data format is covered by linked data, as is
zero-click authorisation and person identification (via FOAF+SSL).
I wholeheartedly feel this is an opportunity for multiple parties to
come together and forge the missing link: a protocol for sending this
structured information to endpoints in a RESTful manner.
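As a rough sketch of what sending such a relation to an endpoint might
look like (the host, path, and vocabulary below are all hypothetical,
purely illustrative):

```http
POST /likes HTTP/1.1
Host: service.example.org
Content-Type: text/turtle

<https://alice.example.org/foaf#me>
    <http://example.org/ns/rel#likes> <http://example.org/articles/42> .
```

Extra details (title, comment, dates) would just be further triples in
the same body; the client is identified via FOAF+SSL on the connection
rather than via anything in the payload.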
> The thing I did not quite get in your post was about delegating Likes to the
> third party. Can you detail a bit more - what specifically do you propose to
> delegate?
We can delegate two things:
1: the listing of this additional data. Let's say all services provided
a URL for a user's likes; then in my own FOAF document I could simply
assert:
#me rdfs:seeAlso <http://facebook.com/nathan/likes>,
    <http://digg.com/nathan/links>,
    <http://reddit.com/nathan> .
Then any client that so wished could look up those likes and display
them, or subscribe to each of the endpoints via PubSubHubbub, or handle
it in countless other ways. The point is that by having a uniform data
structure that is understood by all, every service and machine can
understand the data from any other service. It moves HTTP into the API
role, and no documentation or further info is needed: you simply GET one
of the URLs and parse the structured linked data returned, whoever or
whatever you are.
> Protocol for sending the triple from the end-client should most likely be a
> GET request, for simplicity and brevity. Internally looks like it should
> use sparql update, as you have suggested.
Yup, it's one way to handle it. As long as GET remains "safe", the
actual save action would have to be a POST after a user confirmation.
However, if a browser extension simply POSTed it to an HTTPS endpoint
then it could be a single action: just click one button and FOAF+SSL
does all the identification and handling; the process at the endpoint
collects the data and saves it, then publishes it and notifies
subscribers. All safe, secure, instant, delegated and distributed -
simples.
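At the endpoint, the save step could be a straightforward SPARQL Update
(a sketch, assuming a per-user named graph and the same hypothetical
vocabulary as above - none of these URIs are settled):

```sparql
INSERT DATA {
  GRAPH <https://service.example.org/likes/alice> {
    <https://alice.example.org/foaf#me>
        <http://example.org/ns/rel#likes> <http://example.org/articles/42> .
  }
}
```

Publishing is then just serving that graph back out, and notification is
a ping to whatever hub the graph's feed is registered with.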
> Nick.
>
> 2010/4/26 Nathan <nat...@webr3.org>