What does this mean exactly?


f8

Jan 7, 2008, 3:02:46 AM
to DataPortability.Public.General
Does this mean that data (photos, 'friends lists', blogs, etc) will be
able to be imported to a first-party website from a third-party
website?

e.g.
I import my photos and friends list from Myspace to Orkut, therefore
making this data available to my Friends on Orkut without having any
further link to myspace.

Is the above example the case?

thanks

Chris Saad

Jan 7, 2008, 3:08:29 AM
to DataPortability.Public.General
It means " our identity, photos, videos and other forms of personal
data should be discoverable by, and shared between our chosen tools or
vendors." :)

Discoverable and shared mean exactly that.

Think of it like a Hard Drive.

Let's say Photoshop saves your photos in your "My Photoshop
Pictures" folder.

Let's say you save your digital camera photos in your "My Digital
Camera Photos" folder.

Photoshop can look outside its directory (its 'Silo') into the other
folders on the hard drive.

Can it MOVE the files? Sure it can.

Should it? Probably not.

Unless you want to remove a "hard drive" (a service) from your
'Hard Drive' (your list of trusted data stores/apps).

Does this help?

Zef Hemel

Jan 7, 2008, 3:12:38 AM
to dataportabi...@googlegroups.com
That would be one of the use cases, yes. But it's not only about
importing and exporting; it is also about using and manipulating data
stored on different sites through another, without even "importing"
it.

Zef

--
Zef Hemel
E-Mail: z...@zefhemel.com
Phone: (+31) (0)6 156 19 280
Web: http://www.zefhemel.com

Bart Stevens

Jan 7, 2008, 9:19:24 AM
to DataPortability.Public.General
Zef,

Do you mean creating some sort of meta-system, hovering over the
different silos?

Bart

On Jan 7, 9:12 am, "Zef Hemel" <zefhe...@gmail.com> wrote:
> [SNIP]

Zef Hemel

Jan 7, 2008, 9:45:11 AM
to dataportabi...@googlegroups.com
Bart,

In a sense, although it's more about creating an alternative interface
to different web applications. Many web apps have their own
non-standard APIs to retrieve files, list collections of things, query
the data, and so on. If we standardize these interfaces (or at least
if everyone agrees to implement a certain interface), data portability
becomes a lot easier.

A year ago or so I created a prototype of this, including a simple
"web shell" like a bash/Windows command prompt. I had a "web root"
where some services were "mounted", for example Flickr and S3. I could
cd into the Flickr "directory", which contained a list of my Flickr
albums, and then use cp (the copy command) to copy an album from
Flickr to my S3 account while preserving all of its metadata. To do
this I implemented a thin wrapper around the Flickr API and did the
same for my JungleDisk S3 storage; it wasn't that complicated and
worked reasonably well.

This is similar to Chris Saad's vision here:
http://dataportability-public.googlegroups.com/web/WebFS-on-the-desktop.gif
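
For a concrete picture of what such a shell and its wrappers might look
like, here is a minimal sketch in Python. This is not Zef's actual code:
WebShell, Mount, and the service wrapper names in the usage comment are
all hypothetical.

    # A "web filesystem": services are mounted under named points, and
    # generic ls/cp operations work across all of them.
    from abc import ABC, abstractmethod

    class Mount(ABC):
        """One mounted service, i.e. a thin wrapper around its API."""

        @abstractmethod
        def list(self, path):
            """Return the entries under path within this service."""

        @abstractmethod
        def read(self, path):
            """Return (bytes, metadata) for the item at path."""

        @abstractmethod
        def write(self, path, data, metadata):
            """Store the item at path, preserving its metadata."""

    class WebShell:
        def __init__(self):
            self.mounts = {}  # mount point name -> Mount

        def mount(self, point, service):
            self.mounts[point] = service

        def ls(self, point, path=""):
            return self.mounts[point].list(path)

        def cp(self, src_point, src_path, dst_point, dst_path):
            # Copy one item across services, metadata and all.
            data, metadata = self.mounts[src_point].read(src_path)
            self.mounts[dst_point].write(dst_path, data, metadata)

    # Usage, with hypothetical FlickrMount/S3Mount wrapper classes:
    #   shell = WebShell()
    #   shell.mount("flickr", FlickrMount(api_key))
    #   shell.mount("s3", S3Mount(credentials))
    #   shell.cp("flickr", "albums/holiday/1.jpg", "s3", "backup/1.jpg")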

Zef

esdaniel

Jan 7, 2008, 10:51:08 AM
to DataPortability.Public.General
Besides portability there is the benefit of integration to pursue...
unless I'm mistaken, the challenge is to remove repetition and
duplication, i.e. to apply the DRY (don't repeat yourself) rule.

Thus, one might import and export 'keys': these keys know where the
underlying data lives and have their own meta-index.

An interface could then query an index server to retrieve the URI and
load the data for the corresponding key. Essentially I'm alluding to a
massively distributed RDBMS-type approach that leverages semantic
forms such as tagging, among other capabilities, to index and query
data.
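
A rough sketch of that resolution step in Python, assuming a
hypothetical index server that maps keys to URIs (its URL, query
interface, and plain-text response format are all invented here):

    import urllib.parse
    import urllib.request

    INDEX_SERVER = "https://index.example.org/resolve"  # hypothetical

    def resolve(key):
        """Ask the index server which URI the data for this key lives at."""
        query = urllib.parse.urlencode({"key": key})
        with urllib.request.urlopen(INDEX_SERVER + "?" + query) as resp:
            return resp.read().decode("utf-8").strip()

    def load(key):
        """Dereference a key: resolve it to a URI, then fetch the data."""
        with urllib.request.urlopen(resolve(key)) as resp:
            return resp.read()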

On Jan 7, 2:45 pm, "Zef Hemel" <zefhe...@gmail.com> wrote:
> [SNIP]

kidehen

Jan 7, 2008, 12:03:55 PM
to DataPortability.Public.General


On Jan 7, 10:51 am, esdaniel <edward.dan...@gmail.com> wrote:
> [SNIP]

This is basically what you get with OpenLink Data Spaces (ODS)
courtesy of Linked Data URIs :-)

Links:

1. Live Instance: http://myopenlink.net/ods
2. Live Demo Instance: http://demo.openlinksw.com/ods

All you have to do is create an Application Instance and then create
links to your existing Web 2.0 data on the Web. Once this is done, you
will be able to dereference all your data via your system-generated
URIs. The URI pattern is
http://myopenlink.net/dataspace/person/<your-ods-id>#this for you, the
Person Entity, and http://myopenlink.net/dataspace/person/<your-ods-id>
for your FOAF file (and OpenID) URI.
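
Dereferencing such a URI is standard Linked Data practice: an HTTP GET
with content negotiation. A minimal sketch in Python (the ODS ID is a
placeholder, exactly as in the pattern above):

    import urllib.request

    # Replace <your-ods-id> with a real ODS account name before running.
    uri = "http://myopenlink.net/dataspace/person/<your-ods-id>"
    req = urllib.request.Request(uri,
                                 headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as resp:
        foaf_rdf = resp.read()  # the FOAF profile as RDF/XML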

Of course, you can install ODS yourself and own your domain, but even
better you can install the EC2 version of ODS based on the following:
AMI ID: ami-e2ca2f8b
Manifest: virtuoso-images/virtuoso-dataspace-server-3.manifest.xml

You can follow the guide in my blog post to set this all up and then
create your own point of control for your Identity and Data in the Clouds:
http://www.openlinksw.com/dataspace/kid...@openlinksw.com/weblog/kid...@openlinksw.com%27s%20BLOG%20%5B127%5D/1261



[SNIP]

Kingsley

Josh Patterson

Jan 8, 2008, 12:57:08 AM
to DataPortability.Public.General
Really quickly: that's our WRFS "wNode", analogous to a filesystem
"inode" -- an index of where you store your data. It's discovered via
Attribute Exchange and OpenID 2.0; there's an official spec of this
coming. Third parties can query, read, write, and update it via a
protocol similar to how OpenID works. The security mechanics are done
with OAuth + Discovery. Some people say the FOAF file alone
accomplishes this, and the wNode does serve the same basic function:
it indexes "above" the silo and links data. We could quickly write a
plugin that exposes the wNode as a FOAF XML document, but I'd say down
the road the wNode will have more complex functionality, like only
showing certain data to certain parties.
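
No wNode spec has been published in this thread, so the following
Python sketch is only a guess at its shape (every field name here is
invented, not taken from WRFS). It illustrates the "certain data to
certain parties" idea: entries point at data that stays in its silo,
and each entry lists who may see it.

    wnode = {
        "owner": "https://example.org/openid/alice",  # hypothetical OpenID
        "entries": [
            {
                "type": "photos",
                "location": "http://flickr.com/photos/alice",
                "visible_to": ["*"],  # public
            },
            {
                "type": "contacts",
                "location": "http://social.example.com/alice/friends",
                "visible_to": ["https://example.org/openid/bob"],
            },
        ],
    }

    def entries_for(requester):
        """Return only the entries this requesting party may see."""
        return [e for e in wnode["entries"]
                if "*" in e["visible_to"] or requester in e["visible_to"]]

A FOAF export plugin of the kind Josh mentions could then simply render
the result of entries_for() for the asking party as a FOAF document.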

Josh Patterson


Josh Patterson

Jan 8, 2008, 1:03:56 AM
to DataPortability.Public.General
Another aspect of the wNode is that it is platform-independent and was
designed with a high degree of decentralization in mind, as well as
run-time discovery. We want to be able to deal with a user's data as a
logically whole filesystem and a logically whole database. The
aggregation part is key to this.

Josh Patterson


Alexandre Passant

Jan 8, 2008, 2:50:57 AM
to dataportabi...@googlegroups.com
Hi,

On Jan 8, 2008 6:57 AM, Josh Patterson <jpatt...@floe.tv> wrote:
>
> OAuth + Discovery. Some people say the FOAF file alone accomplishes
> this, but the wNode serves the same function, basically, indexes
> "above" the silo, links data. We could quickly write a plugin that
> exposes the wNode as a FOAF xml document, but id say down the road the
> wNode will have more complex functionality like only showing certain
> data to certain parties.

Well, most FOAF files expose all their data because they're created
once and then saved on the server, or because they don't take any
parameters into account when rendered. Now, imagine those FOAF files
are created on the fly for certain services, depending on who asks,
and you have solved the problem you mention, since you can restrict
what is exported in some cases.
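
A minimal sketch in Python of that on-the-fly generation; the profile
data and the per-requester policy are invented for illustration:

    # Which fields each requester may see; "*" is the anonymous default.
    PROFILE = {
        "name": "Alice",
        "mbox": "mailto:alice@example.org",
        "knows": ["http://example.org/bob#me", "http://example.org/carol#me"],
    }
    POLICY = {
        "*": ["name"],
        "http://example.org/bob#me": ["name", "mbox", "knows"],
    }

    def foaf_for(requester="*"):
        """Render a FOAF document with only the fields this requester may see."""
        allowed = POLICY.get(requester, POLICY["*"])
        out = ['<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"',
               '         xmlns:foaf="http://xmlns.com/foaf/0.1/">',
               '  <foaf:Person>']
        if "name" in allowed:
            out.append('    <foaf:name>%s</foaf:name>' % PROFILE["name"])
        if "mbox" in allowed:
            out.append('    <foaf:mbox rdf:resource="%s"/>' % PROFILE["mbox"])
        if "knows" in allowed:
            for friend in PROFILE["knows"]:
                out.append('    <foaf:knows rdf:resource="%s"/>' % friend)
        out += ['  </foaf:Person>', '</rdf:RDF>']
        return "\n".join(out)

    # An anonymous requester gets only the name; Bob gets the full profile.
    print(foaf_for())
    print(foaf_for("http://example.org/bob#me"))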

Could you say more about what the wNode offers with respect to FOAF,
especially what FOAF can't achieve? Isn't that a problem of how the
file is built rather than of the format/model of RDF/FOAF itself? If
so, a wNode would be a FOAF file created according to the WRFS
protocol, a kind of "dynamic, 3rd-party-permission-based profile".

Best,

Alex.
