Fwd: Sirikata Viewer specs

Ryan McDougall

Feb 17, 2009, 3:49:32 AM
to realxtend-a...@googlegroups.com
The VW group at Stanford University has created a research-level VW
protocol and implementation called Sirikata. They have expressed
interest in re-using the realxtend viewer as the viewer part of their
system.

I asked them for some information about their protocol so we can make
sure we don't assume too much about SL in our design. The following is
their reply.

I will make time to dissect the reply further, but I wanted to forward
it right away for your comments.

Cheers,


---------- Forwarded message ----------
From: Daniel Horn <dani...@graphics.stanford.edu>
Date: Fri, Feb 13, 2009 at 1:33 PM
Subject: Sirikata Viewer specs
To: platfo...@googlegroups.com


Ryan-
I hope you don't mind my CCing this answer to the dev list, since it
is useful information for everyone :-) I'll start very low-level and
move up, titling each section so you can skip ahead if a section is
uninteresting.

--Wire protocol--
To begin with, a basic client has a fairly straightforward wire
protocol to the server. Right now the server uses a simplified form of
structured streams (http://pdos.csail.mit.edu/uia/sst/) to
communicate, allowing many objects to communicate over a single
connection and separating lossy, unordered packets from ordered
ones.
Right now we just use the SST *software* abstraction, leaving the very
efficient UDP protocol implementation for a future generation--so
everything happens over TCP with a small and constant number of TCP
sockets.
It begins with a simple TCP connection and a 24-byte handshake that
includes a version number and the number of sockets to use.
For a single socket, each packet begins with a 4-byte size and a byte
valued at 0x1 (to indicate that the first stream is in use, since 0x0
is a control stream).
A simple 7-byte packet (4 bytes of size, 3 bytes of payload) on each
socket closes off the connection, and should be acknowledged by the
other side.
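
To make the framing concrete, here is a rough Python sketch; the byte
order and the exact layout of the 24-byte handshake beyond a version
number and a socket count are assumptions for illustration, not part
of what is described above.

import socket
import struct

def open_connection(host, port, version=1, num_sockets=1):
    sock = socket.create_connection((host, port))
    # Assumed layout: version and socket count as little-endian uint32s,
    # zero-padded out to the 24-byte handshake.
    sock.sendall(struct.pack('<II', version, num_sockets).ljust(24, b'\0'))
    return sock

def send_packet(sock, payload, stream_id=0x1):
    # Each packet: a 4-byte size, then a stream byte (0x1 means the
    # first stream is in use; 0x0 is the control stream), then the
    # payload. Whether the size covers the stream byte is assumed here.
    body = bytes([stream_id]) + payload
    sock.sendall(struct.pack('<I', len(body)) + body)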

--Serialization format--
Within these packets we use protocol buffers to encode our data. We
wanted to use a well-documented, well-established standard that had
room for floating-point, double, signed, and unsigned datatypes--as
well as an efficient encoding. We considered Thrift and the Linden Lab
serialization format, but neither offered sufficient improvement over
protocol buffers aside from nice XML formats.
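
For reference, much of that size efficiency comes from protocol
buffers' standard varint wire format (this is generic protobuf
behavior, not anything Sirikata-specific); a minimal Python sketch:

def encode_varint(value):
    # Base-128 varint: 7 bits per byte, high bit set on all but the last.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | (0x80 if value else 0x00))
        if not value:
            return bytes(out)

def encode_uint_field(field_number, value):
    # Each field on the wire is a key ((field_number << 3) | wire_type)
    # followed by its value; wire type 0 means varint.
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# encode_uint_field(1, 300) == b'\x08\xac\x02'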

We have .proto files from an earlier version of our client/server, but
these are currently undergoing some rewriting, so I will talk about
them at a higher level in the next section.

--Communication methodology--
Right now we essentially have two types of messages. The first
interacts with the proximity system on the space server to which a
client connects: it asks the server to notify clients when items
enter into range, where range may be defined by a solid angle or a
distance metric. The second is a generic message directed at one of
the objects that has been introduced via the proximity system; these
messages make up the majority of the communication in the system thus
far.
The messages themselves are viewed simply as UUIDs for the object
name plus byte-array data content. These byte arrays comprise the
function name and the arguments to that function. Arguments are
encoded, again, using protocol buffers.

The messages to objects can include things like getMeshURI() or
getPosition()--and in the future they may include requests to
subscribe to various update services located on other servers.
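
As a rough illustration (Python) of what one of these object-directed
messages carries--a UUID naming the target object plus a byte array
holding the function name and its protobuf-encoded arguments--the
exact layout below is an assumption, since our real definitions are
still being rewritten:

import uuid

def build_object_message(object_id, function_name, encoded_args):
    # Assumed layout: the target object's 16-byte UUID, a
    # length-prefixed function name, then the arguments already
    # serialized with protocol buffers.
    name = function_name.encode('utf-8')
    return object_id.bytes + bytes([len(name)]) + name + encoded_args

# e.g. build_object_message(uuid.uuid4(), 'getMeshURI', b'')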

--Graphics Display--
Aside from generic transforms, geometry is not sent down the control
pipe; instead it is farmed off to any CDN accessible through a URI.
Our prototype system already supports HTTP, and we are working on
adding S3 support very soon. All our content is stored behind a
number of levels of indirection, the last of which is named by hash.
So you may see a client requesting
meerkat://daniel@server/mymesh.mesh
and the mesh itself may contain a reference to an Ogre material
called
meerkat://daniel@server/mymaterial.material:Metal
and that material may reference a hash directly:
texture_unit {
    texture mhash://server/0x1924498214
}

or it may reference it by name, or via an inherited material, etc.
The very detailed rules of the material system are quite interesting
and include addressing some of the issues around running trusted
shaders, etc.
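
To illustrate that last, hash-named level of indirection, here is a
rough Python sketch of fetching an mhash resource over HTTP; the
translation of mhash:// to an HTTP URL and the choice of hash
algorithm are assumptions for illustration, not our actual resolution
rules.

import hashlib
import urllib.request

def fetch_by_hash(mhash_uri):
    # e.g. 'mhash://server/0x1924498214' -> host 'server',
    #      name '0x1924498214'
    _, _, rest = mhash_uri.partition('://')
    host, _, content_hash = rest.partition('/')
    data = urllib.request.urlopen('http://%s/%s' % (host, content_hash)).read()
    # Because the name is a hash of the content, the client can verify
    # the download (hash algorithm and hex format are assumed here).
    if '0x' + hashlib.sha256(data).hexdigest() != content_hash:
        raise ValueError('content does not match its hash name')
    return data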

One of the other significant challenges we faced was to convert Ogre
to allow streaming content like this from the web in a way that would
not impact framerate and would not require all dependencies to already
exist on the disk.

We would be very excited if the code we have for doing this sort of
dependency analysis and mesh/material/texture/animation download
found some use outside of Sirikata, potentially in your viewer or for
anyone else. At the very least, the changes we made to Ogre greatly
improve its usability in streaming applications :-)

We have also worked on some interesting code that tries to limit the
portion of the scene shown to the client based on available graphics
memory and (soon) framerate. This code depends heavily on having all
data stored on the CDN by hash and keeping those references on the
client for instancing opportunities and updates.
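
A rough Python sketch of that idea (the budgeting policy and the size
estimates are illustrative assumptions); objects whose meshes share a
content hash cost nothing extra, which is where the instancing win
comes from:

def select_objects(objects, memory_budget_bytes):
    # objects: dicts with a 'mesh_hash' and an estimated 'gpu_bytes',
    # assumed pre-sorted by importance (e.g. solid angle on screen).
    loaded_hashes = set()
    used = 0
    visible = []
    for obj in objects:
        extra = 0 if obj['mesh_hash'] in loaded_hashes else obj['gpu_bytes']
        if used + extra > memory_budget_bytes:
            continue  # over budget: leave this object out of the scene
        loaded_hashes.add(obj['mesh_hash'])
        used += extra
        visible.append(obj)
    return visible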

Anyhow, if you have questions about any particular area I've covered,
please let me know and I'll do my best to fill in the numerous gaps.

-Daniel


----

What sort of assumptions does your protocol make about a viewer? What
is the expected architecture for a viewer?

Cheers,
