Rebooting discussion around assets, containers, and inventory


Ryan McDougall

Jun 11, 2009, 2:44:43 AM6/11/09
to kyor...@googlegroups.com
Before the discussion wanders into the deep end, let us see if we can
agree on a common understanding, set of principles, or even just
assumptions:

== On Meta-data

There are several orthogonal concerns that require meta-data, and
while some implementations may choose to combine these concerns
together, that should be done explicitly as a design choice.

(With thanks to Metaforik for use as an example)

* Any to Any: Size, Dates (create, modify, publish...), Version
* Content creator to End-user: the attribution terms or conditions
under which he has made the content available (copyright, licensing)
* Content creator to End-user: a description or purpose for the content
* Content creator to User-agent: the best ways to view the content
under different situations (device profiles, LoD)
* Content creator to User-agent: the language the item was created in,
and other possible local versions
* Content creation application to User-agent: MIME (or otherwise) Type
for the user-agent to distinguish how to handle the asset
* Content creation application to User-agent: Location of actual
binary (PURL-style asset)
* Security permissions (fill in more details here...)
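
As a rough sketch, the orthogonal concerns above could each become a field on a single metadata record. The field names below are invented for illustration; this is not Metaforik's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AssetMetadata:
    # Any to Any
    size_bytes: int
    created: str               # ISO-8601 timestamps
    modified: str
    version: int
    # Content creator -> End-user
    license: Optional[str] = None       # copyright / licensing terms
    description: Optional[str] = None   # description or purpose
    # Content creator -> User-agent
    lod_profiles: list = field(default_factory=list)  # device/LoD hints
    language: str = "en"                # plus pointers to localized versions
    # Content creation application -> User-agent
    mime_type: str = "application/octet-stream"
    location: Optional[str] = None      # PURL-style pointer to the binary
    # Security
    permissions: dict = field(default_factory=dict)
```

Combining these into one record is exactly the explicit design choice mentioned above; an implementation could just as well split them across several documents.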

== On Container formats

There are several use cases for container formats as well.

* To attach meta-data from any set of concerns above.
* To link together pre-existing container formats, such as Ogre .mesh,
.material, .particle, etc.
* To provide hierarchical or relational spatial information -- layout
containers
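
A toy container manifest covering all three use cases might look like the following (file names and keys are made up; the point is that metadata, links to pre-existing formats, and spatial layout can live in one document):

```python
import json

# Hypothetical container manifest: attaches metadata, links existing
# Ogre-style files, and carries spatial layout for its children.
manifest = {
    "metadata": {"version": 1, "license": "CC-BY"},
    "links": ["robot.mesh", "robot.material", "sparks.particle"],
    "layout": {
        "children": [
            {"ref": "robot.mesh",      "position": [0.0, 0.0, 0.0]},
            {"ref": "sparks.particle", "position": [0.0, 1.8, 0.0]},
        ]
    },
}
print(json.dumps(manifest, indent=2))
```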

== On Relating and Addressing Assets

There are several ways in which assets relate to each other in the
pipeline from creation to display.

* Content creation: assets are related to each other in container
formats using simple local addressing that is easy for a content
creator to understand and manipulate
* Publication: assets are related to each other in container formats
using globally unique addressing that is friendly to global indexing,
searching, retrieving and caching
* Simulation: assets are related to each other in computer memory
using locally unique addressing that is friendly to local indexing,
searching, retrieving and caching
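
The publication step can be sketched as rewriting creator-friendly local names into content-derived global addresses (the `urn:sha1:` form is one possible convention, not a settled choice):

```python
import hashlib

def publish(assets: dict) -> dict:
    """Map creator-friendly local names to globally unique,
    cache-friendly content addresses (here: SHA-1 of the bytes)."""
    global_index = {}
    for local_name, data in assets.items():
        digest = hashlib.sha1(data).hexdigest()
        global_index[local_name] = "urn:sha1:" + digest
    return global_index

index = publish({"robot.mesh": b"...mesh bytes...",
                 "robot.material": b"...material bytes..."})
```

Because the address is a pure function of the content, the same bytes always publish to the same address, which is what makes global caching and retrieval friendly.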

== On Layers of Abstraction

Clearly the following are all present in any VW system. The question
is at what layers we expose our interfaces.

* Asset Layer
- stored in uniform memory
- uniquely addressed in machine friendly way
- 1-1 mapping of addresses to names makes assets immutable
- operations are GET and SET

* File/Inventory Layer
- stored in a hierarchical memory
- addressed by arbitrary, human-readable names
- easy renaming makes assets mutable
- addresses may contain cycles
- operations are GET, SET, RENAME, LIST, COPY, MOVE, DELETE, etc.

* Versioning Filesystem Layer
- inherits all properties of File Layer, but adds versioning, history,
deltas, replication, etc.

* Database Layer
- stored in relational memory
- addressed by arbitrarily complex expressions (machine-level
addressing irrelevant)
- mutability depends on database design (mutability is a policy detail)
- operations are many and can be combined to arbitrary complexity
(though not Turing-complete)
- not available in serializable format without expensive, explicit step
- complex to use and administer
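
A minimal sketch of the Asset Layer alone, assuming content-derived addressing (SHA-256 here is an arbitrary choice): SET computes the address from the bytes, so one address can never point at two different contents, which is the 1-1 mapping that makes assets immutable:

```python
import hashlib

class AssetStore:
    """Asset Layer sketch: flat, machine-addressed, immutable.
    Only GET and SET are exposed."""

    def __init__(self):
        self._blobs = {}

    def set(self, data: bytes) -> str:
        addr = hashlib.sha256(data).hexdigest()
        self._blobs[addr] = data   # idempotent: same data, same address
        return addr

    def get(self, addr: str) -> bytes:
        return self._blobs[addr]
```

A File/Inventory Layer could then be built on top as a mutable name-to-address map, without ever mutating the store itself.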

Complaints, additions?

Cheers,

Patrick Horn

Jun 11, 2009, 3:07:26 AM6/11/09
to kyor...@googlegroups.com
Ryan McDougall said in a message sent at 10.06.2009 23:44:
I'm pretty happy with this list overall.

The one thing I am confused about is what you mean by the Database
Layer... I'm a bit confused where this fits into the discussion,
assuming metadata is already covered by the File Layer (i.e. as an xml
format such as Metaforik).

-Patrick

Ryan McDougall

Jun 11, 2009, 3:14:05 AM6/11/09
to kyor...@googlegroups.com

I think I'm kinda just extending the focus, walking up the
feature/abstraction ladder, for the sake of completeness.

In theory the content creator tool writes directly to a local DB. The
local DB replicates to a global database during publish. The simulator
and client issue SQL statements which pull their views from the global
DB -- serialized in any format you choose.
I think that might be some of what Wonderland does -- I think it's a
database-heavy design...
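
To make the idea concrete, a toy "global DB" where the simulator pulls its view with plain SQL (table and column names are invented; SQLite stands in for whatever engine):

```python
import sqlite3

# Toy global database; the simulator selects its view and serializes
# the result however it likes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assets (name TEXT, region TEXT, data BLOB)")
db.executemany("INSERT INTO assets VALUES (?, ?, ?)",
               [("robot.mesh", "plaza", b"..."),
                ("tree.mesh", "forest", b"...")])

# The simulator's view of one region:
view = db.execute(
    "SELECT name FROM assets WHERE region = ?", ("plaza",)).fetchall()
print(view)
```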

> -Patrick
>

Cheers,

Tommi Laukkanen

Jun 12, 2009, 2:37:09 AM6/12/09
to kyoryoku
How about also adding the possibility of defining asset bundles which
can be used as preloadables? Instead of downloading 1000 assets and
their dependencies you could download one zip file. This would usually
be a good alternative when connecting to a new world for the first
time. This might be premature optimization, but it is still one use case.

-tommi

Alon Zakai

Jun 12, 2009, 2:45:53 PM6/12/09
to kyor...@googlegroups.com
I think this is useful for more reasons than optimizing
the initial load of a new world: often many assets
logically fit together and do not make sense by
themselves. To risk sounding like a broken record,
in Linux dependency systems this sort of thing is
standard: each package contains multiple files,
and it would simply be odd to have every single file
be a separate package. The same seems true if
we replace 'file' with 'asset' IMO.

We are currently implementing a simple version
of this: If an asset is in fact a compressed
archive, then it is uncompressed upon
reception, and the result is the same as if
each individual file were a separate asset
that was acquired individually.
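
The receive-side behavior described above could be sketched like this (a hypothetical `receive` function, not the actual implementation being discussed):

```python
import io
import zipfile

def receive(asset_bytes: bytes) -> dict:
    """If the received asset is a zip archive, expand it so each
    member behaves as if it had been fetched as a separate asset."""
    buf = io.BytesIO(asset_bytes)
    if zipfile.is_zipfile(buf):
        with zipfile.ZipFile(buf) as zf:
            return {name: zf.read(name) for name in zf.namelist()}
    return {"<single asset>": asset_bytes}

# Sender side: build a bundle of two assets.
out = io.BytesIO()
with zipfile.ZipFile(out, "w") as zf:
    zf.writestr("robot.mesh", b"mesh bytes")
    zf.writestr("robot.material", b"material bytes")

assets = receive(out.getvalue())
```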

Best,
Alon Zakai / kripken

daniel miller

Jun 12, 2009, 3:43:03 PM6/12/09
to kyor...@googlegroups.com
I'm new to this area, and have only followed the discussion loosely,
but it seems to me that the dependency system you're referring to
(i.e. apt-get etc.) has a conceptual relationship to the version
control analogy that has also been bandied about.

I guess the deep question is, what is the responsibility of the CDN vs
the application? Who is in charge of keeping track of these
dependency relations?

-dan

Alon Zakai

Jun 13, 2009, 2:20:56 AM6/13/09
to kyor...@googlegroups.com
On Fri, Jun 12, 2009 at 10:43 PM, daniel miller<danb...@gmail.com> wrote:
>
> I'm new to this area, and have only followed the discussion loosely,
> but it seems to me that the dependency system you're referring to (ie
> apt-get etc) has a conceptual relationship to the version control
> analogy that has also been bandied about.
>

My conceptualization is indeed along the lines of
apt-get, while another approach (Ryan, etc.)
is to see it like git. So I guess the question is
which one a virtual world asset system is more
comparable to.

The two analogies are more similar than not, but
differ in some ways. Perhaps mainly that apt-get
is more focused on the deployment stage, while
git is more focused on the development
stage.

> I guess the deep question is, what is the responsibility of the CDN vs
> the application?  Who is in charge of keeping track of these
> dependency relations?
>

I agree, this is the right question here.

My take:

1. A metadata server (one of many) holds
the dependency information. Different
versions of assets are managed as
in apt-get, i.e., 'manual' versioning (e.g.,
how python2.5 and python2.6 are stored).
This is basically all the complexity that
the protocol should provide, as it is all
the complexity that the end-user needs to
see.

2. Per-file versioning, including diffs and
the ability to view snapshots from any
point in time, as with git, are useful, and
might be done on the asset server itself,
leaving the metadata server completely
out of it. Simple asset servers might
not have this (they would just allow
downloading of content, and simple
uploading by content owners), while
more complex ones would, and each
might do so differently.

In other words, a clean separation between
metadata server and asset server(s)
makes sense to me.
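
The apt-get-style "manual" versioning in point 1 could be sketched as metadata-server records like the following, where different major versions simply live side by side under distinct names, and each record points at some asset server for the actual bytes (names, URLs, and the `resolve` helper are all hypothetical):

```python
# Hypothetical metadata-server records: the metadata server holds only
# names, dependencies, and pointers; the asset servers hold the bytes.
METADATA = {
    "robot-2.5": {
        "depends": ["robot-textures-1.0"],
        "asset_url": "http://assets.example.com/urn:sha1:aaaa...",
    },
    "robot-2.6": {
        "depends": ["robot-textures-1.1"],
        "asset_url": "http://assets.example.com/urn:sha1:bbbb...",
    },
}

def resolve(name, seen=None):
    """Return a package and its transitive dependencies,
    dependencies first (apt-get-style install order)."""
    seen = seen if seen is not None else []
    for dep in METADATA.get(name, {}).get("depends", []):
        resolve(dep, seen)
    if name not in seen:
        seen.append(name)
    return seen
```

Per-file history (point 2) would live behind the `asset_url`, on whichever asset server serves it, leaving this record format untouched.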

Tommi Laukkanen

Jun 14, 2009, 12:26:44 AM6/14/09
to kyoryoku
Now that we have basic requirements, could we give a serious
evaluation to Metaforik and see if it would satisfy some of them
as-is? Could we evolve Metaforik to better fit our needs? I guess this
would include defining a set of use/test cases which you would need to
implement with sample documents/messages. We could also use this set
to evaluate any new format proposals.

-tommi