Week 3 Project Summary


Ryan McDougall

Jan 28, 2009, 9:07:38 AM
to ou...@adminotech.com, cr...@ludocraft.com, realxtend-a...@googlegroups.com
== ModreX
=== Mikko
ModRex
-Get Python scripts to "work" in ModRex. (Scripts can be attached to objects
and run; they respond to touch, but not all functionality exists.)
-Test and document all python methods in Rex. Write status of all
methods to wiki. Similar to this:
http://opensimulator.org/wiki/LSL_Status/Functions
-MediaURL: RexAssetData wrapper, DB module for storing rex asset data

=== Tuomo
Modrex: rex ode physics
- make rex collision physics work with modrex. Requires implementing our own
rex ode physics plugin. Implement as cleanly as possible, with no hacks.

== Heikki
Task: rex avatar, migration from current viewer to rexNG. Continue work
with framework.

1. Compile framework on Visual Studio 2008. Estimated: 0.5 days.

2. Gather avatar specific asset server / authentication information to one
place. Estimated: 0.5 days.

3. Go over rex avatar specific code in the current viewer, see what parts
can be used and how. Rough overview of how the rex avatar works
clientside. Estimated: 2 days.

4. Propose an API for using the Avatar System in the new viewer.
Estimated: 4 days

5. Discuss framework prototype implementation / design with clb.
Estimated: 0.5 days if no major issues, otherwise might be a completely
new task.

Result: documentation for the wiki. Interface code for the framework.

== MattiK & MattiR ==
Task: What is our IM/Communication Library?
Estimated time to completion: 12 days (6 working days)

* Test the telepathy framework and form an opinion on whether or not to
use telepathy for IM.

Sub tasks:
- finish building telepathy-glib on the Windows platform with cmake (1 day)
- build telepathy-haze / telepathy-gabble on the Windows platform with
cmake (3 days)
- get example telepathy applications (python etc.) to run in the
telepathy/dbus environment (2 days)
- cross compile telepathy libraries (2-4 days)
- write a sample app that uses telepathy for text chat (4 days)

Deliverable 5.1.2008:
* build environment for all necessary telepathy libraries for Windows platform
* demo text chat application
* Recommendation for IM/Communication library
! because of possible sick absences some of these might not be done by 5.1.2008

== Lasse
Task: Which 3D UI works with Ogre
Estimated time to completion: 6 days

Investigate existing 3D UI libraries (OpenGL-based) for integration
into the rex-NG viewer.
Check ease of integration & features & eyecandy level.

Some libraries that have been found:
Clutter - http://clutter-project.org/
Gigi - http://gigi.sourceforge.net/
Pigment - https://code.fluendo.com/pigment/trac/wiki/WikiStart

Time breakdown:
- Investigate clutter (2 days, maybe more if it's hard to build)
- Investigate pigment (2 days)
- Investigate gigi (2 days)

Deliverables:
- Source code, some simple demos
- Research results

== Ali
???

== Jukka
???

Toni Alatalo

Feb 4, 2009, 7:52:10 AM
to realxtend-a...@googlegroups.com, ou...@adminotech.com, cr...@ludocraft.com, petri...@gmail.com
> 4. Propose an API for using the Avatar System in the new viewer.

A view to this on-going effort based on the work we did earlier for
custom controls of the avatar. The full doc (draft) is at
http://www.playsign.fi/engine/rex/controldevice but I summarize main
points here. Also at the end a note about direct bone control. I
realize that the RexNG avatar work so far is mostly about how to get
the assets from the server, how to render it etc., but am thinking that
the API exposed for controlling is related, as it is about how the code
will be in the viewer.

a) Basic movements

Openviewer provides a singleton class called AvCtl for controlling the
avatar. It is a single point of entry which hides the underlying
details related to updating the view, networking etc, so it is nice to
use when implementing support for a new controller device, without
exposing the murky details like: what is server side (movement), what
is client side (turning), how different parts of the viewer are updated
(camera, scene, networking).

For our purposes simple methods like this work well:
av.move(Vector3(0, 0.1, 0))  # motion vector - perhaps should be setting it (like setting speed) instead
av.turn(0.2)  # radians, for rotating the avatar sideways

The first example, arbitrary movement, is AFAIK not possible with the SL
protocol (we've planned workarounds for now, perhaps using vehicles)
but I think it should be a basic feature for the new API (and hence a future
protocol extension or a part of a new protocol). Currently the Openviewer
avctl provides methods like av.fwd(), which is the only kind of thing you
can do with the SL way of doing the move on the server side.

This all is perhaps self evident, but I wanted to point it out to be sure.

Oh and a note about the camera: it's kinda nice how avctl automatically
moves the cam too. Perhaps if in RexNG we don't have a separate
AvatarControl class but just an Avatar class that provides the
interface, the camera can be somehow dependent on the AV pos&ort so
that no special code is needed in control methods. *BUT* one thing
we've learned when doing controls is that sometimes you *want* to fine
tune the camera behavior w.r.t. how the controls work, so that should
be possible in some nice way too (like writing a custom CameraControl
class). I guess I should put these requirement notes to the wiki somehow.
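
To make that concrete, here is a rough sketch of what I mean - all names
here are purely hypothetical, not the existing avctl API nor a proposal as such:

class FollowCamera:
    """Default behaviour: the camera just follows the avatar."""
    def on_avatar_moved(self, motion_vector):
        pass  # would reposition the camera scene node here

    def on_avatar_turned(self, radians):
        pass  # would rotate the camera with the avatar here

class OverheadCamera(FollowCamera):
    """A controller-specific override, e.g. for a dance mat setup."""
    def on_avatar_turned(self, radians):
        pass  # ignore turning, keep the bird's-eye view

class Avatar:
    """Single entry point that hides view, scene and network updates."""
    def __init__(self, camera_control=None):
        self.camera_control = camera_control or FollowCamera()

    def move(self, motion_vector):
        # would send movement to the server and update the local scene here
        self.camera_control.on_avatar_moved(motion_vector)

    def turn(self, radians):
        # client side turning; the camera decides whether to follow
        self.camera_control.on_avatar_turned(radians)

# a controller device script could then do e.g.:
av = Avatar(camera_control=OverheadCamera())
av.move((0, 0.1, 0))
av.turn(0.2)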

b) Direct bone control

Besides moving the avatar around and playing back pre-made animations &
evaluating IKs w.r.t. scene objects, I think supporting direct
control of the skeleton should be pursued relatively soon, so the new
viewer arch should help there. The avatar art people (Tomi and Laura)
were requesting that already a year ago, then related to motion
sensors. Now recently Dan noted:

> I've seen motion capture solutions that claim to be moving towards
> consumer use, ie < $100 to capture your face and upper body. Once
> that sort of thing is possible, the issues of timing and
> synchronization of avatar movement and voice will become much more
> critical.

The other day I came across another solution that would be available to
basically anyone already now -- an open source toolkit that maps poses
from normal full-body video camera image of the user to a 2d skeleton,
a really fun looking VJ / real time performance tool:
http://animata.kibu.hu/ .. no strings attached.

Am so itching to get to test that for avatar control! :)

Of course that is limited: 2d, probably not very accurate, etc. etc.,
but as a key purpose of posing avatars in social VWs (and I guess this
includes videoconferencing for business too etc.) is to express emotions,
I think that would already be fun and useful.

Haven't looked at the code at all yet, but am guessing/hoping that
integrating this to get pose info would be easy enough. As long as the
viewer / avatar api, and the protocol and the server, supported it..
Dunno if something clever could be made if the traffic load would
become too much with multiple participants (like communicate only
certain nodes in the skeleton and let the rigging IK/FK calc rest?).

Have been thinking of a setting where I would combine a dance mat, a
smart board, and such a camera -> pose setup to be able to move around,
express, and interact within a VW .. a simple CAVE with cheap commodity
hardware and open source software. This video of kids using Croquet
with a SmartBoard was really inspiring, http://edusim.greenbush.us/
(the first demo there is also with fishes! :)

Smartboards are simple, for the software it is basically just a mouse,
and the device is just a large touchpad where the image is projected.
And a dance mat is a keyboard. Yet to get a smooth fine-tuned overall
UI for such a combination, it will be really nice to be able to
customize how all those different inputs are used together in a certain
setting (I wonder how e.g. pose info from camera and dance mat
keypresses could be used together for flying around, or touchboard
touches or wireless device input together with pose info from cam for
editing etc.).

~Toni

daniel miller

Feb 4, 2009, 5:58:46 PM
to realxtend-a...@googlegroups.com
On Wed, Feb 4, 2009 at 4:52 AM, Toni Alatalo <ant...@kyperjokki.fi> wrote:

hi Toni --

excellent post! You bring up some critical points. I am totally in
sync with the idea of using Python as a rapid-application tool. You
can look at openviewer as basically a rapid prototype of a viewer. I
hope we can continue to use Python this way through the lifecycle of
the project. I might be able to contribute a bit on that front, in
terms of getting boost-python working to allow easier intermix of C++
and Py.

> b) Direct bone control

Your description lays out the issues precisely. Let me note a funny
asymmetry in the SL protocol: Avatars have beautiful mesh skin and
clothes, but can only be controlled through pre-canned animations.
Prims don't have skin and bones, but you can easily control each part
of a set of prims to do complex realtime behaviors. One of the first
things I would like to see added to the protocol is mesh for prim
sets, and direct part control for avatars. Perhaps they should both
derive from a fundamental agent object type, but that's an
implementation issue. The functionality should be there in the
protocol.

This brings up my other pet peeve: timing information. As long as we
are talking about extending the protocol, this issue should be
addressed as well. Both for prims and direct avatar control,
real-time messaging of the behavior is going to be very brittle as
long as the assumption is made that each message from the server
represents "now". Just as we do in VOIP and streaming video, there
should be a buffer, and messages should include information on exactly
when they are supposed to be executed.
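
To illustrate what I mean by a buffer, here is a minimal sketch (not any
existing protocol code; the 100 ms playout delay is an arbitrary assumption)
of holding messages and executing them at their scheduled time rather than
on arrival, the way a VOIP jitter buffer does:

import heapq
import time

class PlayoutBuffer:
    """Hold incoming messages and release them at their scheduled time."""
    def __init__(self, delay=0.1):            # fixed playout delay (assumption)
        self.delay = delay
        self.queue = []                        # min-heap of (execute_at, seq, message)
        self._seq = 0

    def on_message(self, message, server_timestamp):
        execute_at = server_timestamp + self.delay
        heapq.heappush(self.queue, (execute_at, self._seq, message))
        self._seq += 1

    def poll(self, now=None):
        """Return the messages whose scheduled execution time has arrived."""
        now = time.time() if now is None else now
        due = []
        while self.queue and self.queue[0][0] <= now:
            due.append(heapq.heappop(self.queue)[2])
        return due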

> Haven't looked at the code at all yet, but am guessing/hoping that
> integrating this to get pose info would be easy enough. As long as the
> viewer / avatar api, and the protocol and the server, supported it..
> Dunno if something clever could be made if the traffic load would
> become too much with multiple participants (like communicate only
> certain nodes in the skeleton and let the rigging IK/FK calc rest?).

Ok, this leads to another protocol issue: compression. I haven't
looked too deeply, but other than the use of Jpeg2K for textures, I
don't believe the SL protocol does much by way of compressing
messages. To really do this stuff well, you want to reduce the number
of bits sent to the absolute minimum. In the case of direct bone
control, that means a specific encoding algorithm for skeletons, that
takes advantage of the redundancies, precision needs, and entropy
characteristics of skeleton motion. It's not as hard as it sounds,
and this is something I can commit to providing (at least a decent
first pass). The same technique could and should be applied to
primsets, with some caveats.
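
A toy example of the kind of encoding I have in mind (my own illustration,
not a committed design; the bone id scheme and bit widths are arbitrary):
quantize each changed bone rotation to a few bytes and only send bones that
actually moved.

import math
import struct

def quantize_angle(radians, bits=10):
    """Map an angle to a small integer; 10 bits is roughly 0.35 degree steps."""
    steps = (1 << bits) - 1
    normalized = (radians % (2 * math.pi)) / (2 * math.pi)
    return int(round(normalized * steps))

def encode_pose(changed_bones):
    """changed_bones: list of (bone_id, yaw, pitch, roll) for bones that moved.
    Packs each bone into 5 bytes: a 1-byte id plus three 10-bit angles."""
    out = bytearray()
    for bone_id, yaw, pitch, roll in changed_bones:
        packed = (quantize_angle(yaw) << 20) | (quantize_angle(pitch) << 10) | quantize_angle(roll)
        out += struct.pack(">BI", bone_id, packed)
    return bytes(out)

# e.g. 8 moving bones -> 40 bytes per update instead of full float transforms
payload = encode_pose([(3, 0.12, -0.40, 0.0), (7, 1.57, 0.0, 0.2)])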

I know, I know -- I'm involved in feature begging when we should be
focused on bare-bones functionality delivered on a deadline. I'm
going to try to come up with a plan to put in the groundwork for some
of this stuff without requiring a substantial effort up front. I
believe I can make the case that it will pay off even in the first
year, by way of reducing the time spent debugging and performance
tuning.

-danx0r

Toni Alatalo

Feb 6, 2009, 8:09:00 PM
to realxtend-a...@googlegroups.com
daniel miller wrote:

> sync with the idea of using Python as a rapid-application tool. You
> can look at openviewer as basically a rapid prototype of a viewer. I
> hope we can continue to use Python this way through the lifecycle of
>

yes, that is kind of how we started using it in that work in december.

how to make the arch with c++ & py is still open; we agreed to return to
it a bit later in a meeting some weeks ago. That time is soon now I
guess, at least some research effort should be put into it now in Feb I think.

> the project. I might be able to contribute a bit on that front, in
> terms of getting boost-python working to allow easier intermix of C++
> and Py.
>

when looking at how to call py functions from c++, I came across how it is in
boost, and iirc it was basically:
f(args) .. no matter whether f is a c++ or py func. Is that right? Does it
work by calling some PyThing wrapper?

probably a good area for you to contribute in. among the others :)

~Toni

daniel miller

Feb 6, 2009, 9:28:19 PM
to realxtend-a...@googlegroups.com
On Fri, Feb 6, 2009 at 5:09 PM, Toni Alatalo <ant...@kyperjokki.fi> wrote:

> when looking how to call py functions from c++, came across how it is in
> boost, and iirc it was basically:
> f(args) .. no matter where f is a c++ or py func. is that right? does it
> work by calling some PyThing wrapper?

For calling C++ functions from Py, that is how it works: it's
transparent to the calling function, looks just like a call to any
other Python library.

Calling Python from C++ is the same I believe, though typically I've
used it in the other direction. Aside from some tricky type
conversions, most of the work involves writing wrappers of the C++
classes to make them accessible from Python.

The workflow I've seen used successfully usually involves writing
things in Python, then progressively rewriting things in C++ from the
bottom up, ie low-level classes and functions get ported to C++ as
necessary for performance or other considerations. In this scenario,
your top-level code is typically all Python, and you tend to call C++
from Py but not so much the other way around.
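
As a tiny illustration of that transparency - the module and class names
here are invented, not an existing project - top-level Python code can stay
oblivious to whether the implementation underneath is the Python prototype
or a Boost.Python-wrapped C++ port:

try:
    from pathing import PathPlanner        # hypothetical Boost.Python extension
except ImportError:
    class PathPlanner:                     # pure-Python stand-in / prototype
        def find_path(self, start, goal):
            return [start, goal]           # trivial straight-line "plan"

# the call site is identical either way:
planner = PathPlanner()
route = planner.find_path((0, 0), (12, 4))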

For my money, if this project ever comes to fruition:
http://shed-skin.blogspot.com/ I would just replace the C++ part with
something like this. Interop between these two languages should be
almost trivial then, at least in theory.

-danx0r

Kripken

Feb 7, 2009, 1:25:28 AM
to realxtend-a...@googlegroups.com

> For my money, if this project ever comes to fruition:
> http://shed-skin.blogspot.com/ I would just replace the C++ part with
> something like this. Interop between these two languages should be
> almost trivial then, at least in theory.
>
> -danx0r

IIANM Shed skin uses static type analysis, so it will never be able to compile Python - just a subset of it. The PyPy project is further along with a similar effort, compiling RPython into C, but it still isn't production-ready - these things are *hard*. And, even if they succeed, you aren't using a dynamic language anymore, just one with implicit type definitions.

- Kripken

daniel miller

Feb 7, 2009, 5:13:11 PM
to realxtend-a...@googlegroups.com
On Fri, Feb 6, 2009 at 10:25 PM, Kripken <kripke...@gmail.com> wrote:

> IIANM Shed skin uses static type analysis, so it will never be able to
> compile Python - just a subset of it. The PyPy project is further along with
> a similar effort, compiling RPython into C, but it still isn't
> production-ready - these things are *hard*. And, even if they succeed, you
> aren't using a dynamic language anymore, just one with implicit type
> definitions.

Indeed, your points are taken. There are two possible approaches to
this issue. The first is to mix "duck typing" with static typing, as
was attempted here:
http://boo.codehaus.org/Duck+Typing

Another approach is to mix "real" python with something like shed
skin, in a fashion similar to how we mix C++ and Python using Boost.

-dan

Kripken

Feb 8, 2009, 1:16:28 AM
to realxtend-a...@googlegroups.com


On Sun, Feb 8, 2009 at 12:13 AM, daniel miller <danb...@gmail.com> wrote:

> Another approach is to mix "real" python with something like shed
> skin, in a fashion similar to how we mix C++ and Python using Boost.
>
> -dan

Yeah, that approach can be useful. Cython and Weave are doing interesting things in that area.

- Kripken

Toni Alatalo

Feb 12, 2009, 12:34:14 AM
to realxtend-a...@googlegroups.com
On Feb 7, 2009, at 4:28 AM, daniel miller wrote:
>> when looking how to call py functions from c++, came across how it is
>> in
>> boost, and iirc it was basically: f(args) ..
>
> For calling C++ functions from Py, that is how it works: it's
> transparent to the calling function, looks just like a call to any
> other Python library.
>
> Calling Python from C++ is the same I believe, though typically I've
> used it in the other direction. Aside from some tricky type
> conversions, most of the work involves writing wrappers of the C++
> classes to make them accessible from Python.
>
> The workflow I've seen used successfully usually involves writing
> things in Python, then progressively rewriting things in C++ from the
> bottom up, ie low-level classes and functions get ported to C++ as
> necessary for performance or other considerations. In this scenario,
> your top-level code is typically all Python, and you tend to call C++
> from Py but not so much the other way around.

That's what I've usually done too, except that so far it has always
gone so that by reusing existing low level things (like Ogre) I haven't
yet encountered the situation where I would have needed to port something
to c/c++. Some years ago I once made the mistake of premature
optimization by starting a project with writing (what I thought was)
the cpu intensive part in c, wrapped it in py for the rest of the app,
and later learned that the c part was better off in py as well (it was a
scroller, using pygame, so the native blit operations in pygame/sdl did
the heavy work anyway and the timing etc. was easier to tweak from the
py side .. still took only 2% of cpu).

In fact the reason why I was looking into calling py from c++ is that
currently the core RexNG components (framework / module, event system)
are being researched / prototyped in c++ (also because those devs are
previously familiar with that env, visual studio etc), and the guys
have looked into the PoCo library, which has some sort of event /
delegate system, so I was curious whether py written components could be
easily hooked up to be called by such a system.

Before the actual implementation starts in March we should have a good
idea of how to go about it, so like I said in a previous post I hope some
research effort is now put into figuring that out.

For example, in an earlier thread about the '3d widgets' I was
drafting, Heikki agreed that (something like) the Ogre manual object
API would be good to expose for plugins. I've been thinking that code
for such 'drawables' should also be possible to send over the network,
e.g. for a special targeter visualization to work for a specific
weapon in a game. In that case you'd most certainly want that code to
be in an interpreted lang so that viewers on different platforms could
run it (in a safe env too). So if such an API is needed, should we
actually be using python-ogre which already exposes all of Ogre,
including ManualObject, to py? If we use Ogre only from c++ directly,
the API work has to be done separately - perhaps that could be done
also by reusing parts of pyogre?

Another question is the integration of the components. So far we know
that at least one part will most probably be written in c++, that is the
graphics renderer, and quite probably it will remain Ogre. What
about the other components? In line with the kind of standard workflow
you described, the basic rule is that cpu intensive parts should be in
native code and other things in py. Two parts that are now being researched
in c++ are the network stack and the component system (framework). I'm
not sure how it is with those w.r.t. cpu intensity - certainly with a
lot of packets the handling needs to be efficient. Also the
internal events could become numerous if they are used for everything - can
it really go up to tens of thousands a second? PyOGP does messaging now
in pure py, and openviewer (and many other systems) has its internal
event system in py, so there are codebases where we can see how it
works. One example is MultiVerse3d, which uses pyogre for gfx and does
its own networking in Twisted, http://www.mv3d.com/ (a one man project).
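
One quick way to get a feel for that event-rate question would be a
micro-benchmark of a trivial pure-py dispatcher - just a sketch, not
openviewer's actual event system, and the handler count is an arbitrary
assumption:

import time

handlers = [(lambda event: None) for _ in range(5)]   # five no-op listeners

def dispatch(event):
    for handler in handlers:
        handler(event)

n = 100000
start = time.time()
for i in range(n):
    dispatch(("prim_update", i))
print("%.0f events/second" % (n / (time.time() - start)))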

Anyhow back to the integration: one model would be to use py as a
module system, even for just putting the different c++ written modules
to work together. This would be the 'extend' model in the literature.
We may be approaching that as the c++ parts are now planned so that
they'll be independent libraries. The benefit of py in gluing them
together would be in the ease of customization, e.g. switching
components and tweaking how they are run. One old rant promoting this
kind of extending is
http://www.twistedmatrix.com/users/glyph/rant/extendit.html - dunno if
there'd be some more recent and perhaps more balanced(?) articles
somewhere .. the tech doc is http://docs.python.org/extending/
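
A toy sketch of what that 'extend' style glue could look like - purely
illustrative, and the module classes here are invented stand-ins for what
would really be the c++ written libraries exposed to py:

class Bus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, name, fn):
        self.handlers.setdefault(name, []).append(fn)
    def publish(self, name, *args):
        for fn in self.handlers.get(name, []):
            fn(*args)

class NetworkModule:
    def register(self, bus):
        bus.subscribe("send_chat", self.on_send_chat)
    def on_send_chat(self, text):
        pass          # would go out on the wire
    def update(self):
        pass          # would pump the socket

class RendererModule:
    def register(self, bus):
        pass
    def update(self):
        pass          # would render one frame

# the "glue": easy to swap modules or tweak how and in what order they run
modules = [NetworkModule(), RendererModule()]
bus = Bus()
for m in modules:
    m.register(bus)
for frame in range(3):                # stand-in for the main loop
    for m in modules:
        m.update()
bus.publish("send_chat", "hello world")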

This also touches the GUI integration, which currently seems to be
experimented with using Qt (which I expect can be good but dunno the results
yet, e.g. how Lasse got it to run with Ogre in cpp).

> http://shed-skin.blogspot.com/ I would just replace the C++ part with
> something like this. Interop between these two languages should be
> almost trivial then, at least in theory.

I agree that shed skin and friends are interesting, have had good
experience with Pyrex (and more recently with the Cython work of it in
another project) from how it works in Soya3d.

Anyhow we are not going to replace Ogre with a shed-skin written or any
other port of it, so code written in plain c++ will most probably be
around. Also it can be that the whole framework is just made in c++,
'cause like I said that's what the devs busy preparing it now are
familiar with, and that's the de facto standard in game engines anyway
(commercial games typically use interpreted langs only by embedding them and
exposing a custom api for restricted things; they don't have the needs
for customization and extensibility of this viewer project, where a
module system like the one py provides might be helpful).

Oh well, there are a lot of questions around this issue; some I perhaps
managed to touch here, but I feel that many didn't get communicated clearly
at all yet, and I am out of time now - also I don't have much of a chance
to look into these in the coming weeks (2 hours/day tops), and can't
make it to the meeting now either, but am hoping that some sort of a
strategy is pulled together for getting a plan (and am willing to help
there where I can).

In one way the question is more about the style of the framework than
languages. I mean with e.g. Twisted you use and write deferreds, no
matter whether in c, c++ or py. I guess it's the same with Kamaelia's
pipeline components. So with RexNG we'll write .. I guess I'll need to
do some reading in the wiki to catch up later today when my daughter is
having her nap.

Now to babycare business,

> -danx0r

~Toni

Toni Alatalo

Feb 21, 2009, 6:14:43 AM
to realxtend-a...@googlegroups.com, ou...@adminotech.com, cr...@ludocraft.com
Hi,

just a quick remark that I had forgotten to send earlier regarding the
work on the networking for the rexng viewer.

It was mentioned in some meeting early on (by Jukka?) that it's
impossible to 'design for an unknown protocol', so only the stripped
down sludp has been considered now. The wiki doc seems to now echo that
idea: "One of the concerns was whether this implementation would later
on work for connecting to a totally different virtual world system that
is using a different protocol, but it was seen more effective in that
case to write another specialized protocol stack from scratch rather
than trying to build too many abstractions for unknown systems in mind."

There are known systems, existing implementations, that could be
examined now in order to evaluate the design of the viewer, and whether
e.g. that idea of replacing the protocol stack would go smoothly in
practice.

One example is MXP that has been discussed on this list (rexarch)
before, http://www.bubblecloud.org/

Was reminded of this now when talking with Tommi L., one of the MXP
authors, on irc about possible experimental server and client
implementations. He is planning to add it to OpenSim at some point, and
I was telling / wondering a bit how we could perhaps experiment with it
using Openviewer or Idealist before RexNG exists (also the ref impl of
mxp is in c#, like libomv used via python.net in pyov and in the c#
idealist, so testing with those might be quick now. For rexng perhaps a
new implementation in c++ would be in order if that protocol seems like
the way to go).

There are of course other protocols too which could be used to get
perspectives for analyzing the viewer w.r.t. protocol dependencies,
like Croquet or gaming protocols like what Quake etc. use perhaps, but
MXP seems like a cool fresh start and the guys are planning work on it
anyway, so perhaps that's a good first case now. The idea being that the
viewer should not be too entangled with SLisms all over.

At minimum I think it'd be good for both LC guys to take a look at MXP,
and the MXP guys to look at the RexNG netstack plans (I already pointed
Tommi to
http://www.rexdeveloper.org/wiki/index.php?title=Low-level_client_networking_interface)

But actually prototyping would of course be very exciting.

BTW Adam F. also mentioned MXP on the LL & IBM initiated MMOX IETF list
but I didn't see any reactions to that yet (there's been some debate
over whether it's ok for that standardization effort to go on focused on
OGP, LLSD etc., or whether to start more from scratch / look at others
too etc).

~Toni

Tommi Laukkanen

Feb 21, 2009, 9:57:42 AM
to realxtend-a...@googlegroups.com, ou...@adminotech.com, cr...@ludocraft.com
Hello
 
The wiki page (http://www.rexdeveloper.org/wiki/index.php?title=Low-level_client_networking_interface) is a solid design for a network layer implementation. What it seems to be missing from my viewpoint is how to plug in a network library which already has connectivity and session handling. One might consider wrapping the existing design in the wiki with a higher level API. There is probably a need for two different APIs at this level: one for the client and another for the server, as their use cases are pretty different. For a working implementation example please check out the MXP reference implementation at:
 
 
One important observation based on experience: the code outside the API should expect the messages to be plain objects with property accessors, and object aggregation if needed. Creating a dynamic object model for messages is in most cases an anti-pattern, as any changes to the protocol will usually require code changes as well, so creating a DOM-type model only makes development more complex. One mandatory exception worth listing is asset text metadata, which could be presented as a dictionary with string key-value pairs.
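
To illustrate the difference, a tiny sketch (in python only for brevity; the reference implementation itself is C#, and these class names are made up):

# Plain message object: fixed, named fields the rest of the code can rely on.
class ChatActionEvent:
    def __init__(self, source_object_id, text):
        self.source_object_id = source_object_id
        self.text = text

# Discouraged DOM-style alternative: generic containers and string keys.
# A protocol change still breaks the call sites, they just fail later:
#     msg.get_fragment("ActionFragment").get_field("SourceObjectId")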
 
MxpClient and MxpServer are the respective API classes. MxpServer has MxpBubble aggregate objects, which roughly correspond to regions and have event hooks and message sending methods.
 
One of the key MXP concepts is to have a small set of UDP messages concentrated on connectivity, transport and distributed virtual environment scene graph synchronization, which form the core of the protocol. All application specific information is transmitted as payload of the core UDP messages, encoded in MXMP format. MXMP is designed to be XML but is defined only to a skeletal level, as it is application specific in nature.
 
Another key concept is to multiplex all the data through one UDP port to simplify session handling and networking issues like firewall and NAT penetration. There is a lot of speculation about using TCP + UDP or several TCP connections, but if you look at any commercial real time shooter or MMORPG, they are based on UDP (at least to my best knowledge). You could argue that there is a reason for this.
 
The most obvious one is the inability of TCP to cope with congestion in an intelligent manner. In other words, TCP does not know how to drop unimportant packets. It will keep collecting everything the virtual world server produces in the transmit buffer and either use a long lag time to unload the buffer or overflow everything, not minding which is important and which is not. In practice, for smooth operation you need to keep the transmission rate very low to avoid running TCP in a congestion state, whereas with UDP you can safely go closer to the bandwidth limit, as congestion recovery is not a problem.
 
Summing up my opinions in a slightly provocative tone: it is a good idea to dedicate a little time to smartly engineer a network layer which can multiplex guaranteed and non-guaranteed packets through the same UDP connection. It is misinformation that this processing would be any kind of bottleneck on today's computers. You do not need hardware optimized TCP stacks for guaranteed delivery of data. It is also misinformation that it is too hard to implement. If in doubt, look at the MXP reference implementation and run a performance test. This implementation was written in roughly two man-weeks.
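
As a very small illustration of the multiplexing idea (not MXP code, just a sketch in python; header layout and timeout are arbitrary assumptions): tag each packet with a reliability flag and a sequence number, and resend only the guaranteed ones until they are acked.

import struct
import time

RELIABLE, UNRELIABLE = 1, 0

class Mux:
    """Multiplex guaranteed and non-guaranteed messages over one UDP socket.
    'send' is any callable that writes a datagram, e.g.
    lambda pkt: sock.sendto(pkt, addr)."""
    def __init__(self, send):
        self.send = send
        self.seq = 0
        self.pending = {}                       # seq -> (packet, last_sent)

    def push(self, payload, reliable):
        self.seq += 1
        header = struct.pack(">BI", RELIABLE if reliable else UNRELIABLE, self.seq)
        packet = header + payload
        self.send(packet)
        if reliable:
            self.pending[self.seq] = (packet, time.time())

    def on_ack(self, seq):
        self.pending.pop(seq, None)             # guaranteed packet got through

    def tick(self, timeout=0.25):
        """Call periodically: resends unacked guaranteed packets; unreliable
        traffic is simply dropped by the network when congested."""
        now = time.time()
        for seq, (packet, sent) in list(self.pending.items()):
            if now - sent > timeout:
                self.send(packet)
                self.pending[seq] = (packet, now)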
 
I would be happy to go through both realxtend and mxp designs in irc, skype or phone if you are interested in collaboration.
 
best regards,
Tommi Laukkanen

Ryan McDougall

Feb 21, 2009, 10:55:01 AM
to realxtend-a...@googlegroups.com
Only speaking for myself, I'd love to hear your opinions in more
detail in a more interactive fashion. If we reach any conclusions, we
can make sure we forward them to this list.

When will you have time?

Cheers,

Tommi Laukkanen

Feb 21, 2009, 11:14:55 AM
to realxtend-architecture
Does rex have a Google calendar yet? Would be nice to book this kind of
session there. Basically I have time all night today, tomorrow 18:00
onwards Finnish time, or then next week Monday 17:00 onwards or
Tuesday-Thursday 19:00 onwards Finnish time.

regards,
Tommi

Tommi Laukkanen

Feb 22, 2009, 1:54:36 AM
to realxtend-architecture
After reflecting on it I could add that the client network API should
be abstracted so that it has no coupling with the messages or other
concepts of the protocol implementation. Instead it should offer two-way
interaction mechanisms from client to server and back. Practically
this would mean methods for invoking interactions towards the server and
event hooks for the other direction. The method and delegate arguments can
be either primitives or engine value objects. They should not be tied to,
say, the SL protocol messages, as then you will have coupling to SL all
over your engine core. I tried to write a short pseudo code example to
illustrate:

interface RemoteApi
{
    ...
    void AddEstateManager(Guid estateId, Guid userId);
    void ListEstateManagers(Guid estateId);
    // Event handler hook where you can attach your listening methods:
    EstateListReceivedDelegate EstateListReceived;
    ...
    // Drive this from your main thread; it invokes the events when something
    // is available from the underlying network modules:
    void Process();
    ...
}

regards,
Tommi


Tommi Laukkanen

Feb 22, 2009, 3:58:23 AM
to realxtend-architecture
Here, as a practical example, is the beginning of an MXP module for OpenSim,
implementing a similar high level network API on the server side:

using System;
using System.Collections.Generic;
using System.Net;
using System.Reflection;
using System.Text;
using log4net;
using MXP;
using MXP.Messages;
using OpenMetaverse;
using OpenMetaverse.Packets;
using OpenSim.Framework;
using OpenSim.Framework.Client;
using Packet=OpenMetaverse.Packets.Packet;

namespace OpenSim.Client.MXP.ClientStack
{
class MXPClientView : IClientAPI, IClientCore
{
internal static readonly ILog m_log = LogManager.GetLogger
(MethodBase.GetCurrentMethod().DeclaringType);

private readonly Session mxpSession;
private readonly UUID mxpSessionID;
private readonly IScene mxpHostBubble;
private readonly string mxpUsername;

private int debugLevel;

public MXPClientView(Session mxpSession, UUID mxpSessionID,
IScene mxpHostBubble, string mxpUsername)
{
this.mxpSession = mxpSession;
this.mxpUsername = mxpUsername;
this.mxpHostBubble = mxpHostBubble;
this.mxpSessionID = mxpSessionID;
}

public Session Session
{
get { return mxpSession; }
}

public bool ProcessMXPPacket(Message msg)
{
if (debugLevel > 0)
m_log.Warn("[MXP] Got Action/Command Packet: " + msg);

return false;
}

#region IClientAPI

public Vector3 StartPos
{
get { return new Vector3(128f, 128f, 128f); }
set { } // TODO: Implement Me
}

public UUID AgentId
{
get { return mxpSessionID; }
}

public UUID SessionId
{
get { return mxpSessionID; }
}

public UUID SecureSessionId
{
get { return mxpSessionID; }
}

public UUID ActiveGroupId
{
get { return UUID.Zero; }
}

public string ActiveGroupName
{
get { return ""; }
}

public ulong ActiveGroupPowers
{
get { return 0; }
}

public ulong GetGroupPowers(UUID groupID)
{
return 0;
}

public bool IsGroupMember(UUID GroupID)
{
return false;
}

public string FirstName
{
get { return mxpUsername; }
}

public string LastName
{
get { return "@mxp://" + Session.RemoteEndPoint.Address; }
}

public IScene Scene
{
get { return mxpHostBubble; }
}

public int NextAnimationSequenceNumber
{
get { return 0; }
}

public string Name
{
get { return FirstName; }
}

public bool IsActive
{
get { return Session.SessionState ==
SessionState.Connected; }
set
{
if (!value)
Stop();
}
}

// Do we need this?
public bool SendLogoutPacketWhenClosing
{
set { }
}

public uint CircuitCode
{
get { return mxpSessionID.CRC(); }
}

public event GenericMessage OnGenericMessage;
public event ImprovedInstantMessage OnInstantMessage;
public event ChatMessage OnChatFromClient;
public event TextureRequest OnRequestTexture;
public event RezObject OnRezObject;
public event ModifyTerrain OnModifyTerrain;
public event BakeTerrain OnBakeTerrain;
public event EstateChangeInfo OnEstateChangeInfo;
public event SetAppearance OnSetAppearance;
public event AvatarNowWearing OnAvatarNowWearing;
public event RezSingleAttachmentFromInv
OnRezSingleAttachmentFromInv;
public event UUIDNameRequest OnDetachAttachmentIntoInv;
public event ObjectAttach OnObjectAttach;
public event ObjectDeselect OnObjectDetach;
public event ObjectDrop OnObjectDrop;
public event StartAnim OnStartAnim;
public event StopAnim OnStopAnim;
public event LinkObjects OnLinkObjects;
public event DelinkObjects OnDelinkObjects;
public event RequestMapBlocks OnRequestMapBlocks;
public event RequestMapName OnMapNameRequest;
public event TeleportLocationRequest
OnTeleportLocationRequest;
public event DisconnectUser OnDisconnectUser;
public event RequestAvatarProperties
OnRequestAvatarProperties;
public event SetAlwaysRun OnSetAlwaysRun;
public event TeleportLandmarkRequest
OnTeleportLandmarkRequest;
public event DeRezObject OnDeRezObject;
public event Action<IClientAPI> OnRegionHandShakeReply;
public event GenericCall2 OnRequestWearables;
public event GenericCall2 OnCompleteMovementToRegion;
public event UpdateAgent OnAgentUpdate;
public event AgentRequestSit OnAgentRequestSit;
public event AgentSit OnAgentSit;
public event AvatarPickerRequest OnAvatarPickerRequest;
public event Action<IClientAPI> OnRequestAvatarsData;
public event AddNewPrim OnAddPrim;
public event FetchInventory OnAgentDataUpdateRequest;
public event TeleportLocationRequest
OnSetStartLocationRequest;
public event RequestGodlikePowers OnRequestGodlikePowers;
public event GodKickUser OnGodKickUser;
public event ObjectDuplicate OnObjectDuplicate;
public event ObjectDuplicateOnRay OnObjectDuplicateOnRay;
public event GrabObject OnGrabObject;
public event ObjectSelect OnDeGrabObject;
public event MoveObject OnGrabUpdate;
public event UpdateShape OnUpdatePrimShape;
public event ObjectExtraParams OnUpdateExtraParams;
public event ObjectSelect OnObjectSelect;
public event ObjectDeselect OnObjectDeselect;
public event GenericCall7 OnObjectDescription;
public event GenericCall7 OnObjectName;
public event GenericCall7 OnObjectClickAction;
public event GenericCall7 OnObjectMaterial;
public event RequestObjectPropertiesFamily
OnRequestObjectPropertiesFamily;
public event UpdatePrimFlags OnUpdatePrimFlags;
public event UpdatePrimTexture OnUpdatePrimTexture;
public event UpdateVector OnUpdatePrimGroupPosition;
public event UpdateVector OnUpdatePrimSinglePosition;
public event UpdatePrimRotation OnUpdatePrimGroupRotation;
public event UpdatePrimSingleRotation
OnUpdatePrimSingleRotation;
public event UpdatePrimGroupRotation
OnUpdatePrimGroupMouseRotation;
public event UpdateVector OnUpdatePrimScale;
public event UpdateVector OnUpdatePrimGroupScale;
public event StatusChange OnChildAgentStatus;
public event GenericCall2 OnStopMovement;
public event Action<UUID> OnRemoveAvatar;
public event ObjectPermissions OnObjectPermissions;
public event CreateNewInventoryItem OnCreateNewInventoryItem;
public event CreateInventoryFolder OnCreateNewInventoryFolder;
public event UpdateInventoryFolder OnUpdateInventoryFolder;
public event MoveInventoryFolder OnMoveInventoryFolder;
public event FetchInventoryDescendents
OnFetchInventoryDescendents;
public event PurgeInventoryDescendents
OnPurgeInventoryDescendents;
public event FetchInventory OnFetchInventory;
public event RequestTaskInventory OnRequestTaskInventory;
public event UpdateInventoryItem OnUpdateInventoryItem;
public event CopyInventoryItem OnCopyInventoryItem;
public event MoveInventoryItem OnMoveInventoryItem;
public event RemoveInventoryFolder OnRemoveInventoryFolder;
public event RemoveInventoryItem OnRemoveInventoryItem;
public event UDPAssetUploadRequest OnAssetUploadRequest;
public event XferReceive OnXferReceive;
public event RequestXfer OnRequestXfer;
public event ConfirmXfer OnConfirmXfer;
public event AbortXfer OnAbortXfer;
public event RezScript OnRezScript;
public event UpdateTaskInventory OnUpdateTaskInventory;
public event MoveTaskInventory OnMoveTaskItem;
public event RemoveTaskInventory OnRemoveTaskItem;
public event RequestAsset OnRequestAsset;
public event UUIDNameRequest OnNameFromUUIDRequest;
public event ParcelAccessListRequest
OnParcelAccessListRequest;
public event ParcelAccessListUpdateRequest
OnParcelAccessListUpdateRequest;
public event ParcelPropertiesRequest
OnParcelPropertiesRequest;
public event ParcelDivideRequest OnParcelDivideRequest;
public event ParcelJoinRequest OnParcelJoinRequest;
public event ParcelPropertiesUpdateRequest
OnParcelPropertiesUpdateRequest;
public event ParcelSelectObjects OnParcelSelectObjects;
public event ParcelObjectOwnerRequest
OnParcelObjectOwnerRequest;
public event ParcelAbandonRequest OnParcelAbandonRequest;
public event ParcelGodForceOwner OnParcelGodForceOwner;
public event ParcelReclaim OnParcelReclaim;
public event ParcelReturnObjectsRequest
OnParcelReturnObjectsRequest;
public event RegionInfoRequest OnRegionInfoRequest;
public event EstateCovenantRequest OnEstateCovenantRequest;
public event FriendActionDelegate OnApproveFriendRequest;
public event FriendActionDelegate OnDenyFriendRequest;
public event FriendshipTermination OnTerminateFriendship;
public event MoneyTransferRequest OnMoneyTransferRequest;
public event EconomyDataRequest OnEconomyDataRequest;
public event MoneyBalanceRequest OnMoneyBalanceRequest;
public event UpdateAvatarProperties OnUpdateAvatarProperties;
public event ParcelBuy OnParcelBuy;
public event RequestPayPrice OnRequestPayPrice;
public event ObjectSaleInfo OnObjectSaleInfo;
public event ObjectBuy OnObjectBuy;
public event BuyObjectInventory OnBuyObjectInventory;
public event RequestTerrain OnRequestTerrain;
public event RequestTerrain OnUploadTerrain;
public event ObjectIncludeInSearch OnObjectIncludeInSearch;
public event UUIDNameRequest OnTeleportHomeRequest;
public event ScriptAnswer OnScriptAnswer;
public event AgentSit OnUndo;
public event ForceReleaseControls OnForceReleaseControls;
public event GodLandStatRequest OnLandStatRequest;
public event DetailedEstateDataRequest
OnDetailedEstateDataRequest;
public event SetEstateFlagsRequest OnSetEstateFlagsRequest;
public event SetEstateTerrainBaseTexture
OnSetEstateTerrainBaseTexture;
public event SetEstateTerrainDetailTexture
OnSetEstateTerrainDetailTexture;
public event SetEstateTerrainTextureHeights
OnSetEstateTerrainTextureHeights;
public event CommitEstateTerrainTextureRequest
OnCommitEstateTerrainTextureRequest;
public event SetRegionTerrainSettings
OnSetRegionTerrainSettings;
public event EstateRestartSimRequest
OnEstateRestartSimRequest;
public event EstateChangeCovenantRequest
OnEstateChangeCovenantRequest;
public event UpdateEstateAccessDeltaRequest
OnUpdateEstateAccessDeltaRequest;
public event SimulatorBlueBoxMessageRequest
OnSimulatorBlueBoxMessageRequest;
public event EstateBlueBoxMessageRequest
OnEstateBlueBoxMessageRequest;
public event EstateDebugRegionRequest
OnEstateDebugRegionRequest;
public event EstateTeleportOneUserHomeRequest
OnEstateTeleportOneUserHomeRequest;
public event EstateTeleportAllUsersHomeRequest
OnEstateTeleportAllUsersHomeRequest;
public event UUIDNameRequest OnUUIDGroupNameRequest;
public event RegionHandleRequest OnRegionHandleRequest;
public event ParcelInfoRequest OnParcelInfoRequest;
public event RequestObjectPropertiesFamily
OnObjectGroupRequest;
public event ScriptReset OnScriptReset;
public event GetScriptRunning OnGetScriptRunning;
public event SetScriptRunning OnSetScriptRunning;
public event UpdateVector OnAutoPilotGo;
public event TerrainUnacked OnUnackedTerrain;
public event ActivateGesture OnActivateGesture;
public event DeactivateGesture OnDeactivateGesture;
public event ObjectOwner OnObjectOwner;
public event DirPlacesQuery OnDirPlacesQuery;
public event DirFindQuery OnDirFindQuery;
public event DirLandQuery OnDirLandQuery;
public event DirPopularQuery OnDirPopularQuery;
public event DirClassifiedQuery OnDirClassifiedQuery;
public event EventInfoRequest OnEventInfoRequest;
public event ParcelSetOtherCleanTime
OnParcelSetOtherCleanTime;
public event MapItemRequest OnMapItemRequest;
public event OfferCallingCard OnOfferCallingCard;
public event AcceptCallingCard OnAcceptCallingCard;
public event DeclineCallingCard OnDeclineCallingCard;
public event SoundTrigger OnSoundTrigger;
public event StartLure OnStartLure;
public event TeleportLureRequest OnTeleportLureRequest;
public event NetworkStats OnNetworkStatsUpdate;
public event ClassifiedInfoRequest OnClassifiedInfoRequest;
public event ClassifiedInfoUpdate OnClassifiedInfoUpdate;
public event ClassifiedDelete OnClassifiedDelete;
public event ClassifiedDelete OnClassifiedGodDelete;
public event EventNotificationAddRequest
OnEventNotificationAddRequest;
public event EventNotificationRemoveRequest
OnEventNotificationRemoveRequest;
public event EventGodDelete OnEventGodDelete;
public event ParcelDwellRequest OnParcelDwellRequest;
public event UserInfoRequest OnUserInfoRequest;
public event UpdateUserInfo OnUpdateUserInfo;

public void SetDebugPacketLevel(int newDebug)
{
debugLevel = newDebug;
}

public void InPacket(object NewPack)
{
//throw new System.NotImplementedException();
}

public void ProcessInPacket(Packet NewPack)
{
//throw new System.NotImplementedException();
}

public void Close(bool ShutdownCircuit)
{
m_log.Info("[MXP ClientStack] Close Called with SC=" +
ShutdownCircuit);

// Tell the client to go
SendLogoutPacket();

// Let MXPPacketServer clean it up
if (Session.SessionState != SessionState.Disconnected)
{
Session.SetStateDisconnected();
}

// Handle OpenSim cleanup
if (ShutdownCircuit)
{
if (OnConnectionClosed != null)
OnConnectionClosed(this);
}
else
{
Scene.RemoveClient(AgentId);
}
}

public void Kick(string message)
{
Close(false);
}

public void Start()
{
// We dont do this
}

public void Stop()
{
// Nor this
}

public void SendWearables(AvatarWearable[] wearables, int
serial)
{
// Need to translate to MXP somehow
}

public void SendAppearance(UUID agentID, byte[] visualParams,
byte[] textureEntry)
{
// Need to translate to MXP somehow
}

public void SendStartPingCheck(byte seq)
{
// Need to translate to MXP somehow
}

public void SendKillObject(ulong regionHandle, uint localID)
{
DisappearanceEventMessage de = new
DisappearanceEventMessage();
de.ObjectIndex = localID;

Session.Send(de);
}

public void SendAnimations(UUID[] animID, int[] seqs, UUID
sourceAgentId, UUID[] objectIDs)
{
// Need to translate to MXP somehow
}

public void SendRegionHandshake(RegionInfo regionInfo,
RegionHandshakeArgs args)
{
// Need to translate to MXP somehow
}

public void SendChatMessage(string message, byte type, Vector3
fromPos, string fromName, UUID fromAgentID, byte source, byte audible)
{
ActionEventMessage chatActionEvent = new ActionEventMessage
();
chatActionEvent.ActionFragment.ActionName = "Chat";
chatActionEvent.ActionFragment.SourceObjectId =
fromAgentID.Guid;
chatActionEvent.ActionFragment.ObservationRadius = 180.0f;
chatActionEvent.ActionFragment.ActionPayloadDialect =
"TEXT";
chatActionEvent.SetPayloadData(Encoding.UTF8.GetBytes
(message));
chatActionEvent.ActionFragment.ActionPayloadLength = (uint)
chatActionEvent.GetPayloadData().Length;

Session.Send(chatActionEvent);
}

public void SendInstantMessage(UUID fromAgent, string message,
UUID toAgent, string fromName, byte dialog, uint timeStamp)
{
// Need to translate to MXP somehow
}

public void SendInstantMessage(UUID fromAgent, string message,
UUID toAgent, string fromName, byte dialog, uint timeStamp, UUID
transactionID, bool fromGroup, byte[] binaryBucket)
{
// Need to translate to MXP somehow
}

public void SendGenericMessage(string method, List<string>
message)
{
// Need to translate to MXP somehow
}

public void SendLayerData(float[] map)
{
// Need to translate to MXP somehow
}

public void SendLayerData(int px, int py, float[] map)
{
// Need to translate to MXP somehow
}

public void SendWindData(Vector2[] windSpeeds)
{
// Need to translate to MXP somehow
}

public void MoveAgentIntoRegion(RegionInfo regInfo, Vector3
pos, Vector3 look)
{
//throw new System.NotImplementedException();
}

public void InformClientOfNeighbour(ulong neighbourHandle,
IPEndPoint neighbourExternalEndPoint)
{
//throw new System.NotImplementedException();
}

public AgentCircuitData RequestClientInfo()
{
AgentCircuitData clientinfo = new AgentCircuitData();
clientinfo.AgentID = AgentId;
clientinfo.Appearance = new AvatarAppearance();
clientinfo.BaseFolder = UUID.Zero;
clientinfo.CapsPath = "";
clientinfo.child = false;
clientinfo.ChildrenCapSeeds = new Dictionary<ulong, string>
();
clientinfo.circuitcode = CircuitCode;
clientinfo.firstname = FirstName;
clientinfo.InventoryFolder = UUID.Zero;
clientinfo.lastname = LastName;
clientinfo.SecureSessionID = SecureSessionId;
clientinfo.SessionID = SessionId;
clientinfo.startpos = StartPos;

return clientinfo;
}

public void CrossRegion(ulong newRegionHandle, Vector3 pos,
Vector3 lookAt, IPEndPoint newRegionExternalEndPoint, string capsURL)
{
// TODO: We'll want to get this one working.
// Need to translate to MXP somehow
}

public void SendMapBlock(List<MapBlockData> mapBlocks, uint
flag)
{
// Need to translate to MXP somehow
}

public void SendLocalTeleport(Vector3 position, Vector3
lookAt, uint flags)
{
//throw new System.NotImplementedException();
}

public void SendRegionTeleport(ulong regionHandle, byte
simAccess, IPEndPoint regionExternalEndPoint, uint locationID, uint
flags, string capsURL)
{
// Need to translate to MXP somehow
}

public void SendTeleportFailed(string reason)
{
// Need to translate to MXP somehow
}

public void SendTeleportLocationStart()
{
// Need to translate to MXP somehow
}

public void SendMoneyBalance(UUID transaction, bool success,
byte[] description, int balance)
{
// Need to translate to MXP somehow
}

public void SendPayPrice(UUID objectID, int[] payPrice)
{
// Need to translate to MXP somehow
}

public void SendAvatarData(ulong regionHandle, string
firstName, string lastName, string grouptitle, UUID avatarID, uint
avatarLocalID, Vector3 Pos, byte[] textureEntry, uint parentID,
Quaternion rotation)
{
// TODO: This needs handling - to display other avatars
}

public void SendAvatarTerseUpdate(ulong regionHandle, ushort
timeDilation, uint localID, Vector3 position, Vector3 velocity,
Quaternion rotation)
{
// TODO: This probably needs handling - update other
avatar positions
}

public void SendCoarseLocationUpdate(List<Vector3>
CoarseLocations)
{
// Minimap function, not used.
}

public void AttachObject(uint localID, Quaternion rotation,
byte attachPoint, UUID ownerID)
{
// Need to translate to MXP somehow
}

public void SetChildAgentThrottle(byte[] throttle)
{
// Need to translate to MXP somehow
}

public void SendPrimitiveToClient(ulong regionHandle, ushort
timeDilation, uint localID, PrimitiveBaseShape primShape, Vector3 pos,
Vector3 vel, Vector3 acc, Quaternion rotation, Vector3 rvel, uint
flags, UUID objectID, UUID ownerID, string text, byte[] color, uint
parentID, byte[] particleSystem, byte clickAction, byte material, byte
[] textureanim, bool attachment, uint AttachPoint, UUID AssetId, UUID
SoundId, double SoundVolume, byte SoundFlags, double SoundRadius)
{
MXPSendPrimitive(localID, ownerID, acc, rvel, primShape,
pos, objectID, vel, rotation);
}

private void MXPSendPrimitive(uint localID, UUID ownerID,
Vector3 acc, Vector3 rvel, PrimitiveBaseShape primShape, Vector3 pos,
UUID objectID, Vector3 vel, Quaternion rotation)
{
PerceptionEventMessage pe = new PerceptionEventMessage();

pe.ObjectFragment.ObjectIndex = localID;
pe.ObjectFragment.ObjectName = "Object";
pe.ObjectFragment.OwnerId = ownerID.Guid;
pe.ObjectFragment.TypeId = Guid.Empty;

pe.ObjectFragment.Acceleration = new[] { acc.X, acc.Y,
acc.Z };
pe.ObjectFragment.AngularAcceleration = new float[4];
pe.ObjectFragment.AngularVelocity = new[] { rvel.X,
rvel.Y, rvel.Z, 0.0f };
pe.ObjectFragment.BoundingSphereRadius =
primShape.Scale.Length()/2.0f;
pe.ObjectFragment.Location = new[] { pos.X, pos.Y,
pos.Z };
pe.ObjectFragment.Mass = 1.0f;
pe.ObjectFragment.ObjectId = objectID.Guid;
pe.ObjectFragment.Orientation = new[] {rotation.X,
rotation.Y, rotation.Z, rotation.W};
pe.ObjectFragment.ParentObjectId = Guid.Empty;
pe.ObjectFragment.Velocity = new[] { vel.X, vel.Y,
vel.Z };

pe.ObjectFragment.StatePayloadDialect = "";
pe.ObjectFragment.StatePayloadLength = 0;
pe.ObjectFragment.SetStatePayloadData(new byte[0]);

Session.Send(pe);
}

public void SendPrimitiveToClient(ulong regionHandle, ushort
timeDilation, uint localID, PrimitiveBaseShape primShape, Vector3 pos,
Vector3 vel, Vector3 acc, Quaternion rotation, Vector3 rvel, uint
flags, UUID objectID, UUID ownerID, string text, byte[] color, uint
parentID, byte[] particleSystem, byte clickAction, byte material)
{
MXPSendPrimitive(localID, ownerID, acc, rvel, primShape,
pos, objectID, vel, rotation);
}

public void SendPrimTerseUpdate(ulong regionHandle, ushort
timeDilation, uint localID, Vector3 position, Quaternion rotation,
Vector3 velocity, Vector3 rotationalvelocity, byte state, UUID
AssetId, UUID owner, int attachPoint)
{
MovementEventMessage me = new MovementEventMessage();
me.ObjectIndex = localID;
me.Location = new[] {position.X, position.Y, position.Z};
me.Orientation = new[] {rotation.X, rotation.Y,
rotation.Z, rotation.W};

Session.Send(me);
}

public void SendInventoryFolderDetails(UUID ownerID, UUID
folderID, List<InventoryItemBase> items, List<InventoryFolderBase>
folders, bool fetchFolders, bool fetchItems)
{
// Need to translate to MXP somehow
}

public void SendInventoryItemDetails(UUID ownerID,
InventoryItemBase item)
{
// Need to translate to MXP somehow
}

public void SendInventoryItemCreateUpdate(InventoryItemBase
Item, uint callbackId)
{
// Need to translate to MXP somehow
}

public void SendRemoveInventoryItem(UUID itemID)
{
// Need to translate to MXP somehow
}

public void SendTakeControls(int controls, bool passToAgent,
bool TakeControls)
{
// Need to translate to MXP somehow
}

public void SendTaskInventory(UUID taskID, short serial, byte
[] fileName)
{
// Need to translate to MXP somehow
}

public void SendBulkUpdateInventory(InventoryNodeBase node)
{
// Need to translate to MXP somehow
}

public void SendXferPacket(ulong xferID, uint packet, byte[]
data)
{
// SL Specific, Ignore. (Remove from IClient)
}

public void SendEconomyData(float EnergyEfficiency, int
ObjectCapacity, int ObjectCount, int PriceEnergyUnit, int
PriceGroupCreate, int PriceObjectClaim, float PriceObjectRent, float
PriceObjectScaleFactor, int PriceParcelClaim, float
PriceParcelClaimFactor, int PriceParcelRent, int
PricePublicObjectDecay, int PricePublicObjectDelete, int
PriceRentLight, int PriceUpload, int TeleportMinPrice, float
TeleportPriceExponent)
{
// SL Specific, Ignore. (Remove from IClient)
}

public void SendAvatarPickerReply
(AvatarPickerReplyAgentDataArgs AgentData,
List<AvatarPickerReplyDataArgs> Data)
{
// Need to translate to MXP somehow
}

public void SendAgentDataUpdate(UUID agentid, UUID
activegroupid, string firstname, string lastname, ulong grouppowers,
string groupname, string grouptitle)
{
// Need to translate to MXP somehow
// TODO: This may need doing - involves displaying the
users avatar name
}

public void SendPreLoadSound(UUID objectID, UUID ownerID, UUID
soundID)
{
// Need to translate to MXP somehow
}

public void SendPlayAttachedSound(UUID soundID, UUID objectID,
UUID ownerID, float gain, byte flags)
{
// Need to translate to MXP somehow
}

public void SendTriggeredSound(UUID soundID, UUID ownerID,
UUID objectID, UUID parentID, ulong handle, Vector3 position, float
gain)
{
// Need to translate to MXP somehow
}

public void SendAttachedSoundGainChange(UUID objectID, float
gain)
{
// Need to translate to MXP somehow
}

public void SendNameReply(UUID profileId, string firstname,
string lastname)
{
// SL Specific
}

public void SendAlertMessage(string message)
{
SendChatMessage(message, 0, Vector3.Zero, "System",
UUID.Zero, 0, 0);
}

public void SendAgentAlertMessage(string message, bool modal)
{
SendChatMessage(message, 0, Vector3.Zero, "System" +
(modal ? " Notice" : ""), UUID.Zero, 0, 0);
}

public void SendLoadURL(string objectname, UUID objectID, UUID
ownerID, bool groupOwned, string message, string url)
{
// TODO: Probably can do this better
SendChatMessage("Please visit: " + url, 0, Vector3.Zero,
objectname, UUID.Zero, 0, 0);
}

public void SendDialog(string objectname, UUID objectID, UUID
ownerID, string msg, UUID textureID, int ch, string[] buttonlabels)
{
// TODO: Probably can do this better
SendChatMessage("Dialog: " + msg, 0, Vector3.Zero,
objectname, UUID.Zero, 0, 0);
}

public bool AddMoney(int debit)
{
SendChatMessage("You were paid: " + debit, 0,
Vector3.Zero, "System", UUID.Zero, 0, 0);
return true;
}

public void SendSunPos(Vector3 sunPos, Vector3 sunVel, ulong
CurrentTime, uint SecondsPerSunCycle, uint SecondsPerYear, float
OrbitalPosition)
{
// Need to translate to MXP somehow
// Send a light object?
}

public void SendViewerEffect(ViewerEffectPacket.EffectBlock[]
effectBlocks)
{
// Need to translate to MXP somehow
}

public void SendViewerTime(int phase)
{
// Need to translate to MXP somehow
}

public UUID GetDefaultAnimation(string name)
{
return UUID.Zero;
}

public void SendAvatarProperties(UUID avatarID, string
aboutText, string bornOn, byte[] charterMember, string flAbout, uint
flags, UUID flImageID, UUID imageID, string profileURL, UUID
partnerID)
{
// Need to translate to MXP somehow
}

public void SendScriptQuestion(UUID taskID, string taskName,
string ownerName, UUID itemID, int question)
{
// Need to translate to MXP somehow
}

public void SendHealth(float health)
{
// Need to translate to MXP somehow
}

public void SendEstateManagersList(UUID invoice, UUID[]
EstateManagers, uint estateID)
{
// Need to translate to MXP somehow
}

public void SendBannedUserList(UUID invoice, EstateBan[]
banlist, uint estateID)
{
// Need to translate to MXP somehow
}

public void SendRegionInfoToEstateMenu
(RegionInfoForEstateMenuArgs args)
{
// Need to translate to MXP somehow
}

public void SendEstateCovenantInformation(UUID covenant)
{
// Need to translate to MXP somehow
}

public void SendDetailedEstateData(UUID invoice, string
estateName, uint estateID, uint parentEstate, uint estateFlags, uint
sunPosition, UUID covenant, string abuseEmail, UUID estateOwner)
{
// Need to translate to MXP somehow
}

public void SendLandProperties(int sequence_id, bool
snap_selection, int request_result, LandData landData, float
simObjectBonusFactor, int parcelObjectCapacity, int simObjectCapacity,
uint regionFlags)
{
// Need to translate to MXP somehow
}

public void SendLandAccessListData(List<UUID> avatars, uint
accessFlag, int localLandID)
{
// Need to translate to MXP somehow
}

public void SendForceClientSelectObjects(List<uint> objectIDs)
{
// Need to translate to MXP somehow
}

public void SendLandObjectOwners(Dictionary<UUID, int>
ownersAndCount)
{
// Need to translate to MXP somehow
}

public void SendLandParcelOverlay(byte[] data, int
sequence_id)
{
// Need to translate to MXP somehow
}

public void SendParcelMediaCommand(uint flags,
ParcelMediaCommandEnum command, float time)
{
// Need to translate to MXP somehow
}

public void SendParcelMediaUpdate(string mediaUrl, UUID
mediaTextureID, byte autoScale, string mediaType, string mediaDesc,
int mediaWidth, int mediaHeight, byte mediaLoop)
{
// Need to translate to MXP somehow
}

public void SendAssetUploadCompleteMessage(sbyte AssetType,
bool Success, UUID AssetFullID)
{
// Need to translate to MXP somehow
}

public void SendConfirmXfer(ulong xferID, uint PacketID)
{
// Need to translate to MXP somehow
}

public void SendXferRequest(ulong XferID, short AssetType,
UUID vFileID, byte FilePath, byte[] FileName)
{
// Need to translate to MXP somehow
}

public void SendInitiateDownload(string simFileName, string
clientFileName)
{
// Need to translate to MXP somehow
}

public void SendImageFirstPart(ushort numParts, UUID
ImageUUID, uint ImageSize, byte[] ImageData, byte imageCodec)
{
// Need to translate to MXP somehow
}

public void SendImageNextPart(ushort partNumber, UUID
imageUuid, byte[] imageData)
{
// Need to translate to MXP somehow
}

public void SendImageNotFound(UUID imageid)
{
// Need to translate to MXP somehow
}

public void SendShutdownConnectionNotice()
{
// Need to translate to MXP somehow
}

public void SendSimStats(SimStats stats)
{
// Need to translate to MXP somehow
}

public void SendObjectPropertiesFamilyData(uint RequestFlags,
UUID ObjectUUID, UUID OwnerID, UUID GroupID, uint BaseMask, uint
OwnerMask, uint GroupMask, uint EveryoneMask, uint NextOwnerMask, int
OwnershipCost, byte SaleType, int SalePrice, uint Category, UUID
LastOwnerID, string ObjectName, string Description)
{
//throw new System.NotImplementedException();
}

public void SendObjectPropertiesReply(UUID ItemID, ulong
CreationDate, UUID CreatorUUID, UUID FolderUUID, UUID FromTaskUUID,
UUID GroupUUID, short InventorySerial, UUID LastOwnerUUID, UUID
ObjectUUID, UUID OwnerUUID, string TouchTitle, byte[] TextureID,
string SitTitle, string ItemName, string ItemDescription, uint
OwnerMask, uint NextOwnerMask, uint GroupMask, uint EveryoneMask, uint
BaseMask, byte saleType, int salePrice)
{
//throw new System.NotImplementedException();
}

public void SendAgentOffline(UUID[] agentIDs)
{
// Need to translate to MXP somehow (Friends List)
}

public void SendAgentOnline(UUID[] agentIDs)
{
// Need to translate to MXP somehow (Friends List)
}

public void SendSitResponse(UUID TargetID, Vector3 OffsetPos,
Quaternion SitOrientation, bool autopilot, Vector3 CameraAtOffset,
Vector3 CameraEyeOffset, bool ForceMouseLook)
{
// Need to translate to MXP somehow
}

public void SendAdminResponse(UUID Token, uint AdminLevel)
{
// Need to translate to MXP somehow
}

public void SendGroupMembership(GroupMembershipData[]
GroupMembership)
{
// Need to translate to MXP somehow
}

public void SendGroupNameReply(UUID groupLLUID, string
GroupName)
{
// Need to translate to MXP somehow
}

public void SendJoinGroupReply(UUID groupID, bool success)
{
// Need to translate to MXP somehow
}

public void SendEjectGroupMemberReply(UUID agentID, UUID
groupID, bool success)
{
// Need to translate to MXP somehow
}

public void SendLeaveGroupReply(UUID groupID, bool success)
{
// Need to translate to MXP somehow
}

public void SendLandStatReply(uint reportType, uint
requestFlags, uint resultCount, LandStatReportItem[] lsrpia)
{
// Need to translate to MXP somehow
}

public void SendScriptRunningReply(UUID objectID, UUID itemID,
bool running)
{
// Need to translate to MXP somehow
}

public void SendAsset(AssetRequestToClient req)
{
// Need to translate to MXP somehow
}

public void SendTexture(AssetBase TextureAsset)
{
// Need to translate to MXP somehow
}

public byte[] GetThrottlesPacked(float multiplier)
{
// LL Specific, get out of IClientAPI
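            // Packs the seven LL throttle categories (resend, land, wind, cloud,
            // task, texture, asset) as consecutive 4-byte floats; since MXP has no
            // equivalent throttling concept, each category is simply set to the multiplier.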

const int singlefloat = 4;
float tResend = multiplier;
float tLand = multiplier;
float tWind = multiplier;
float tCloud = multiplier;
float tTask = multiplier;
float tTexture = multiplier;
float tAsset = multiplier;

byte[] throttles = new byte[singlefloat * 7];
int i = 0;
Buffer.BlockCopy(BitConverter.GetBytes(tResend), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tLand), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tWind), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tCloud), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tTask), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tTexture), 0,
throttles, singlefloat * i, singlefloat);
i++;
Buffer.BlockCopy(BitConverter.GetBytes(tAsset), 0,
throttles, singlefloat * i, singlefloat);

return throttles;
}

public event ViewerEffectEventHandler OnViewerEffect;
public event Action<IClientAPI> OnLogout;
public event Action<IClientAPI> OnConnectionClosed;


public void SendBlueBoxMessage(UUID FromAvatarID, string
FromAvatarName, string Message)
{
SendChatMessage(Message, 0, Vector3.Zero, FromAvatarName,
UUID.Zero, 0, 0);
}

public void SendLogoutPacket()
{
LeaveRequestMessage lrm = new LeaveRequestMessage();
Session.Send(lrm);
}

public ClientInfo GetClientInfo()
{
return null;
//throw new System.NotImplementedException();
}

public void SetClientInfo(ClientInfo info)
{
//throw new System.NotImplementedException();
}

public void SetClientOption(string option, string value)
{
// Need to translate to MXP somehow
}

public string GetClientOption(string option)
{
// Need to translate to MXP somehow
return "";
}

public void Terminate()
{
Close(false);
}

public void SendSetFollowCamProperties(UUID objectID,
SortedDictionary<int, float> parameters)
{
// Need to translate to MXP somehow
}

public void SendClearFollowCamProperties(UUID objectID)
{
// Need to translate to MXP somehow
}

public void SendRegionHandle(UUID regoinID, ulong handle)
{
// Need to translate to MXP somehow
}

public void SendParcelInfo(RegionInfo info, LandData land,
UUID parcelID, uint x, uint y)
{
// Need to translate to MXP somehow
}

public void SendScriptTeleportRequest(string objName, string
simName, Vector3 pos, Vector3 lookAt)
{
// Need to translate to MXP somehow
}

public void SendDirPlacesReply(UUID queryID, DirPlacesReplyData
[] data)
{
// Need to translate to MXP somehow
}

public void SendDirPeopleReply(UUID queryID, DirPeopleReplyData
[] data)
{
// Need to translate to MXP somehow
}

public void SendDirEventsReply(UUID queryID, DirEventsReplyData
[] data)
{
// Need to translate to MXP somehow
}

public void SendDirGroupsReply(UUID queryID, DirGroupsReplyData
[] data)
{
// Need to translate to MXP somehow
}

public void SendDirClassifiedReply(UUID queryID,
DirClassifiedReplyData[] data)
{
// Need to translate to MXP somehow
}

public void SendDirLandReply(UUID queryID, DirLandReplyData[]
data)
{
// Need to translate to MXP somehow
}

public void SendDirPopularReply(UUID queryID,
DirPopularReplyData[] data)
{
// Need to translate to MXP somehow
}

public void SendEventInfoReply(EventData info)
{
// Need to translate to MXP somehow
}

public void SendMapItemReply(mapItemReply[] replies, uint
mapitemtype, uint flags)
{
// Need to translate to MXP somehow
}

public void SendAvatarGroupsReply(UUID avatarID,
GroupMembershipData[] data)
{
// Need to translate to MXP somehow
}

public void SendOfferCallingCard(UUID srcID, UUID
transactionID)
{
// Need to translate to MXP somehow
}

public void SendAcceptCallingCard(UUID transactionID)
{
// Need to translate to MXP somehow
}

public void SendDeclineCallingCard(UUID transactionID)
{
// Need to translate to MXP somehow
}

public void SendTerminateFriend(UUID exFriendID)
{
// Need to translate to MXP somehow
}

public void SendAvatarClassifiedReply(UUID targetID, UUID[]
classifiedID, string[] name)
{
// Need to translate to MXP somehow
}

public void SendClassifiedInfoReply(UUID classifiedID, UUID
creatorID, uint creationDate, uint expirationDate, uint category,
string name, string description, UUID parcelID, uint parentEstate,
UUID snapshotID, string simName, Vector3 globalPos, string parcelName,
byte classifiedFlags, int price)
{
// Need to translate to MXP somehow
}

public void SendAgentDropGroup(UUID groupID)
{
// Need to translate to MXP somehow
}

public void SendAvatarNotesReply(UUID targetID, string text)
{
// Need to translate to MXP somehow
}

public void SendAvatarPicksReply(UUID targetID,
Dictionary<UUID, string> picks)
{
// Need to translate to MXP somehow
}

public void SendAvatarClassifiedReply(UUID targetID,
Dictionary<UUID, string> classifieds)
{
// Need to translate to MXP somehow
}

public void SendParcelDwellReply(int localID, UUID parcelID,
float dwell)
{
// Need to translate to MXP somehow
}

public void SendUserInfoReply(bool imViaEmail, bool visible,
string email)
{
// Need to translate to MXP somehow
}

public void KillEndDone()
{
Stop();
}

public bool AddGenericPacketHandler(string MethodName,
GenericMessage handler)
{
// Need to translate to MXP somehow
return true;
}

#endregion

#region IClientCore

public bool TryGet<T>(out T iface)
{
iface = default(T);
return false;
}

public T Get<T>()
{
return default(T);
}

#endregion
}
}

Jukka Jylänki

unread,
Feb 25, 2009, 8:12:10 AM2/25/09
to realxtend-a...@googlegroups.com
On Sat, 21 Feb 2009 13:14:43 +0200, Toni Alatalo <ant...@kyperjokki.fi>
wrote:

> It was mentioned in some meeting early on (by Jukka?) that it's
> impossible to 'design for an unknown protocol', so only the stripped
> down sludp has been considered now.

The plan is to implement the whole sludp protocol, but not all of the
packets. This is because some of the packets are just uninteresting for
OpenSim and reX. I don't know if that's what you meant by 'stripped down'.

> The wiki doc seems to now echo that idea: "One of the concerns was
> whether this implementation would later on work for connecting to a
> totally different virtual world system that is using a different
> protocol, but it was seen more effective in that case to write another
> specialized protocol stack from scratch rather than trying to build too
> many abstractions for unknown systems in mind."

What I wrote refers to an earlier proposal that we should try to
come up with an "ideal" set of abstract app-level Virtual World
Communication messages, and achieve protocol abstraction by mapping
both in- and outbound SL/other protocol messages to this abstract
set. I see this as pointless, because no such utopian abstraction
exists: the VW messages are far too specific to the application and
world state (browse http://wiki.secondlife.com/wiki/Category:Messages for a
while to see the gory details). Such an abstraction layer would be too narrow
and obsolete already in its infancy.

> There are known systems, existing implementations, that could be
> examined now in order to evaluate the design of the viewer, and whether
> e.g. that idea of replacing the protocol stack would go smoothly in
> practice.

Replacing the protocol is being planned for, and the low-level protocol
library (which we have already been working on for the SL protocol) will be
easy to swap out. Of course it requires extra work to separate the
app-level scene logic from the app-level network message
processing, but that is only natural. We could not seriously expect to
run the OpenSim-centric app logic while connected to e.g. MXP and
pretend we could perform all the OpenSim actions there. This is not a case
of HTTP vs. FTP.

We will build the viewer scene model and core modules in an abstract way
so that they don't have 'SLisms'. But rather than hiding the understanding
of these SLisms inside some bijective protocol-mapping layer, I think we are
better off running specific OpenSimWorldLogic code in an OpenSim world and
MXPWorldLogic code in an MXP world. This will give our application vastly
better understanding of, and integration with, the target world.
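
A minimal C# sketch of that split; the names used here (IWorldLogic,
OpenSimWorldLogic, MXPWorldLogic, Scene) are hypothetical illustrations,
not the actual rexNG interfaces:

using System;

// Hypothetical sketch: the viewer core depends only on an abstract scene model
// and an abstract world-logic interface; protocol-specific behaviour lives in
// per-world implementations rather than in a generic "ideal message set" layer.
public class Scene { /* abstract viewer-side scene model, no SLisms */ }

public interface IWorldLogic
{
    void Connect(string address);   // log in / join the target world
    void Update(Scene scene);       // translate incoming messages into scene changes
    void Disconnect();
}

// OpenSim/SL-specific logic: understands SL packets, regions, agent handshake.
public class OpenSimWorldLogic : IWorldLogic
{
    public void Connect(string address) { Console.WriteLine("SLUDP login to " + address); }
    public void Update(Scene scene)     { /* apply ObjectUpdate etc. to the scene */ }
    public void Disconnect()            { }
}

// MXP-specific logic: understands bubbles, object injection, perception events.
public class MXPWorldLogic : IWorldLogic
{
    public void Connect(string address) { Console.WriteLine("MXP join to bubble at " + address); }
    public void Update(Scene scene)     { /* apply perception/movement events to the scene */ }
    public void Disconnect()            { }
}

public static class ViewerExample
{
    // The viewer selects the logic module per world instead of hiding the
    // difference behind a bijective protocol-mapping layer.
    public static IWorldLogic CreateLogic(bool useMxp)
    {
        return useMxp ? (IWorldLogic)new MXPWorldLogic() : new OpenSimWorldLogic();
    }
}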

Toni Alatalo

unread,
Feb 25, 2009, 9:07:25 AM2/25/09
to realxtend-a...@googlegroups.com
Jukka Jylänki kirjoitti:

> The plan is to implement the whole sludp protocol, but not all of the
> packets. This is because some of the packets are just uninteresting for
> OpenSim and reX. I don't know if that's what you meant by 'stripped down'.
>

Exactly that: implementing just a subset of the packets. I should have
said 'SL protocol' instead of sludp, or better just 'not all the
packets'; sorry for the inexact expression in the hasty mail.

> What I wrote refers to an earlier proposal that perhaps we should try to
> come up with an "ideal" set of abstract app-level Virtual World
> Communication messages, and the protocol abstraction would be achieved by
> mapping both in- and outbound sl/other protocol messages to this abstract
> set. I see this being senseless, as there doesn't exist this kind of
> utopistic abstraction, the VW messages are far too app/world state
> -specific (browse http://wiki.secondlife.com/wiki/Category:Messages for a
>

Right, that issue I know; I just didn't know that the expression
'unexisting protocol' referred to the 'ideal' one, but thought it might
have meant possible future alternative protocols (like MXP).

You are probably right that no ideal protocol can exist, as everything
involves compromises: by choosing one solution you are not choosing some other.

Openviewer kind of attempts that, as it has an abstract World model (the
World class in world.py), and it converts all the SL stuff to that in
OMVProtocol. I just got MXPProtocol working there well enough to log in and
create an object on the server, but am not bringing anything from MXP into
that World model yet. I hope to get to that soon and see how the abstraction
may fail to be abstract. Also, in one discussion with Dan we suspected
that all the conversion going on there may well not be wise at
all, but we haven't looked closer at that yet.

Tommi also added an IProtocol to Idealist (where SL wasn't separated that
much earlier); it will be interesting to see how that goes (in the absence of
an abstract World, perhaps the Idealist model is even close to your plan?).

> while to see the gory details). This abstraction layer would be too narrow
> and outright obsolete already at its infancy.
>

Well, I hope to see what happens with it in Openviewer. It may also work
to some extent; many of these things have common ground.

> processing part, but that is just natural. Seriously we couldn't expect to
> run the OpenSim-centric app logic when we're connected to e.g. MXP and
> pretend we could do all the OpenSim actions there. This is not something
>

There you may be wrong in the sense that the guys are planning to add
MXP support to Opensim itself too, to have a real server implementation
that uses it and not just the simple minimal test server.

So how similar or different the app logics will be when using the SL protocol
vs. MXP with OpenSim remains to be seen.

Then again, I'm not sure what exactly you mean by logic here, but that
I can perhaps read from the wiki docs / prototype sources, or we can
talk about it in a meeting.

> better off running specific OpenSimWorldLogic code in an OpenSim world and
> MXPWorldLogic code in an MXP world. This will get our application vastly
> better understanding/integration to the target world.
>

So in a way MXP as a protocol is independent of 'world logic', and can
be used to implement e.g. the current SL-like logic in OpenSim. For
example, currently the MXP test setup works so that the client injects
the avatar into the server when it logs in, whereas in SL the client
logs on to the server, which injects the avatar and sends it to
the client with the rest of the scene data. I don't know if that sort of
thing is what you call logic. But of course either way can be
implemented using the MXP protocol, which just gives the means to inject
and receive data etc.

Basically my point in the original mail was just to note that MXP and
others exist and can be looked at, to see the concrete issues that different
protocols may introduce - it is nontrivial to achieve the goal of
'staying away from SLisms' exactly because every working
implementation has to have the gory details somewhere.

but i guess we have a fairly good understanding, thanks for the reply
~Toni

Tommi Laukkanen

unread,
Feb 25, 2009, 12:07:28 PM2/25/09
to realxtend-a...@googlegroups.com
The idea of MXP is exactly what you stated. It implements a minimal set of messages and offers an extension mechanism to add application-specific extensions as message payload. The reference implementation now has OpenSimulator prim shape parameters as such an extension for the Perception event. I am most interested in sparring against any use cases to see whether MXP is missing something in the message set - in other words, is there some area of functionality which cannot be covered sanely by using the extension mechanism?

regards,
Tommi
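
A rough illustration of the extension idea Tommi describes above; the types
and field names below are hypothetical sketches and do not reflect the actual
MXP wire format or reference implementation:

using System;

// Hypothetical illustration of the extension mechanism: the core message set
// stays minimal, and application-specific data (here OpenSim-style prim shape
// parameters) rides along as an opaque, typed payload on a perception event.
public class PerceptionEvent
{
    public uint ObjectId;
    public float[] Location = new float[3];
    public string ExtensionDialect;   // identifies how to interpret the payload
    public byte[] ExtensionData;      // serialized application-specific extension
}

public class PrimShapeExtension
{
    public byte PathCurve;
    public byte ProfileCurve;
    public float PathScaleX;
    public float PathScaleY;

    // Trivial serialization for the sketch; a real implementation would use
    // whatever encoding the protocol specifies.
    public byte[] Serialize()
    {
        byte[] data = new byte[10];
        data[0] = PathCurve;
        data[1] = ProfileCurve;
        Buffer.BlockCopy(BitConverter.GetBytes(PathScaleX), 0, data, 2, 4);
        Buffer.BlockCopy(BitConverter.GetBytes(PathScaleY), 0, data, 6, 4);
        return data;
    }
}

public static class ExtensionExample
{
    public static PerceptionEvent BuildPerception(uint id, PrimShapeExtension shape)
    {
        return new PerceptionEvent
        {
            ObjectId = id,
            ExtensionDialect = "OpenSimPrimShape", // hypothetical dialect name
            ExtensionData = shape.Serialize()
        };
    }
}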

daniel miller

unread,
Feb 25, 2009, 7:57:40 PM2/25/09
to realxtend-a...@googlegroups.com
gosh, I'm behind -- been very busy with other projects lately.

I did quickly review Ryan's alpha arch doc. My immediate reaction was
that I didn't see anything related to the timing issues we've discussed a
bit on the list. Perhaps I missed it by skimming. I really want to dig
in, but I've been hustling for paying gigs lately...

One thought I had was to look seriously at the Croquet protocol, as I
know it implements some of my favorite functionality. It might just
give another perspective on common features that need to be
implemented if rexNG is really going to be able to separate itself
from the SL stack down the line.

Here's a quick primer on 'tea time', the simultaneous distributed
simulation engine at the core of Croquet and Qwaq:

http://www.opencroquet.org/index.php/The_Core_Model

-dan

Ben Lindquist

unread,
Feb 25, 2009, 11:23:03 PM2/25/09
to realxtend-a...@googlegroups.com
I think a forced router clock is a limitation.  Better to let time be the emergent flow of commands issued by participants.

Instead of imagining the simulated space(time?) as a set of states spaced evenly apart, what if we imagine it as a nexus of commands flowing in and events flowing out, synchronized only by the feedback loops in the participants' brains?

Arkowitz

daniel miller

unread,
Feb 26, 2009, 12:41:56 AM2/26/09
to realxtend-a...@googlegroups.com
> I think a forced router clock is a limitation.  Better to let time be the
> emergent flow of commands issued by participants.
>
> Instead of imagining the simulated space(time?) as a set of states spaced
> evenly apart, what if we imagine it as a nexus of commands flowing in and
> events flowing out, synchronized only by the feedback loops in the
> participants' brains?

This approach will pretty much make it impossible to do any serious
distributed simulation. The issue I'm trying to bring home here is
that "good enough for gamerz" is not the only criterion to
consider. The platform should be able to do entertainment, but it
should also be possible to do real, valid simulation work, such as is
done with robotics, vehicle design, battlefield simulation and
training, medical, artificial intelligence, molecular simulation and
so on.

Doing any of this kind of stuff depends on the idea of time being
deterministic across all the nodes in a network. You need to know
which events occurred in what order. It's not just an issue of
framerate, or the annoyance of lag. It's a question of being able to
extract useful information after the fact, and being able to repeat an
experiment, or at least understand what would constitute replication
of a set of distributed events.
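
A tiny sketch of the kind of ordering this requires, using hypothetical
types: each command carries a shared simulation tick plus a tie-breaking
sender id, so every node can apply the same command log in the same order
and reproduce the same result.

using System;
using System.Collections.Generic;

// Hypothetical sketch of deterministic ordering: commands are stamped with a
// shared simulation tick (not wall-clock arrival time) and a sender id used
// as a tie-breaker, giving all nodes the same total order over events.
public struct SimCommand : IComparable<SimCommand>
{
    public ulong Tick;       // shared simulation time step
    public uint SenderId;    // tie-breaker so the order is identical everywhere
    public string Action;

    public int CompareTo(SimCommand other)
    {
        int byTick = Tick.CompareTo(other.Tick);
        return byTick != 0 ? byTick : SenderId.CompareTo(other.SenderId);
    }
}

public static class ReplayExample
{
    // Sorting by (Tick, SenderId) instead of arrival order makes a run
    // repeatable: the same log always yields the same sequence of state changes.
    public static IList<SimCommand> Order(List<SimCommand> received)
    {
        received.Sort();
        return received;
    }
}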

I just think it's shortsighted to punt this issue because it's not
implemented in SL and it hurts our brains a bit to think about it.
We're potentially crossing out much of the interesting stuff that a
Metaverse should be capable of.

-dan

Toni Alatalo

unread,
Feb 26, 2009, 2:05:50 AM2/26/09
to realxtend-a...@googlegroups.com
Toni Alatalo kirjoitti:

A couple of remarks here after some more thought, and after work on the
MXPProtocol implementation against the Openviewer World:


>> come up with an "ideal" set of abstract app-level Virtual World
>> Communication messages, and the protocol abstraction would be achieved by
>>

> Openviewer kind of attempts that, as it has an abstract World model (the
> World class in world.py), and it converts all the SL stuff to that in
> OMVProtocol. I just got MXPProtocol working there enough to login and
>

In fact I think the Openviewer solution is not about an abstract set of
communication messages, but just that there is an abstract model of the
scene - and then every protocol implementation can communicate with whatever
network messages it wishes and update the scene correspondingly. It is just
that the events the world uses pretty much match the (SL)
packets, so in that sense the world model also describes how the network
protocol is supposed to behave.
http://www.openviewer.org/attachment/ticket/71/mxp1.patch shows how I
added a new protocol impl and basically didn't need to change anything
in the World (I just made each event in the set a protocol implements
optional).
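
A small C# sketch of that shape (Openviewer itself is Python; the names below
are hypothetical): the World holds the abstract scene and subscribes to
whichever events a protocol implementation happens to raise.

using System;
using System.Collections.Generic;

// Hypothetical sketch: an abstract World owns the scene data, and each protocol
// implementation raises only the events it actually supports, so adding a new
// protocol needs no changes in the World itself.
public interface IProtocol
{
    event Action<uint, string> OnObjectCreated;
    event Action<uint> OnObjectRemoved;
    void Login(string address);
}

public class World
{
    private readonly Dictionary<uint, string> objects = new Dictionary<uint, string>();

    public void Attach(IProtocol protocol)
    {
        protocol.OnObjectCreated += (id, name) => objects[id] = name;
        protocol.OnObjectRemoved += id => objects.Remove(id);
        // A protocol that never raises a given event simply never updates that
        // part of the world; nothing else has to change.
    }
}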

So perhaps the current pyov model is what you are planning, and the MXP
thing I wrote for it would be similar to how MXP would be added to RexNG? I
haven't looked yet at how Tommi's work on the Idealist side looks (I only know
that he refactored it so that it also now has an IProtocol) - one difference
is that Openviewer converts all the coordinate etc. info to nonspecific
types, whereas Idealist just uses the LibOMV types all over (and hence
depends on that lib even when using MXP, unlike Openviewer), but that
does not make such a big difference w.r.t. these principles.

> create an object on the server, but am not bringing anything from MXP to
> that World model yet. Hope to get to do that soon how the abstraction
> may fail to be abstract. Also in one discussion with Dan we suspected
>

Well, yesterday evening I got to creating the user's own avatar, which in OV
means that it triggers the creation of the region too (any new object coming
into a previously unknown region does), and that already brought up an
interesting difference between the MXP and SL models:

In SL the viewer connects to several regions, to be able to show the
neighbouring ones as well. Internally, both in OV and Idealist there is a
dictionary/map of regionIDs to region instances where the info is kept
locally, and coordinates are calculated from region + within-region
coords.

In MXP the viewer is always connected to a single bubble, and doesn't
need to know about other bubbles. So the viewers having that region
dictionary is just extra complexity (both Tommi and I now just set it up
so that the single 'region' is there with the ID 1, which is not
bad overhead, but still unnecessary from MXP's point of view). I don't
know yet how MXP communicates about the objects in the neighbouring
bubbles (if you can see them) and how you are supposed to deal with
coordinates within a viewer.
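
A minimal sketch of the workaround described above, with hypothetical names:
the MXP adapter registers a single fixed 'region' with ID 1 in the viewer's
regionID-to-region map, and within-bubble coordinates are treated as
within-region coordinates.

using System.Collections.Generic;

// Hypothetical sketch of the single-bubble workaround: the viewer keeps its
// regionID -> region dictionary (needed for the SL model of multiple regions),
// and the MXP adapter registers exactly one entry with a fixed ID so the rest
// of the viewer code keeps working unchanged.
public class Region
{
    public uint Id;
    public string Name;
}

public class RegionMap
{
    private readonly Dictionary<uint, Region> regions = new Dictionary<uint, Region>();

    public void Add(Region region) { regions[region.Id] = region; }
    public Region Get(uint id)     { return regions[id]; }
}

public static class MxpAdapterExample
{
    public const uint SingleBubbleRegionId = 1; // fixed ID for the only bubble

    public static void RegisterBubble(RegionMap map, string bubbleName)
    {
        // MXP connects to one bubble at a time, so one region entry is enough;
        // within-bubble coordinates map directly to within-region coordinates.
        map.Add(new Region { Id = SingleBubbleRegionId, Name = bubbleName });
    }
}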

Perhaps that sort of thing is what you mean by 'logic'?

It will be interesting to see how that will play out in the RexNG viewer's
internal scene code.

> ~Toni

same.

Kripken

unread,
Feb 26, 2009, 3:19:15 AM2/26/09
to realxtend-a...@googlegroups.com
My $0.02: There are plenty of use cases which can't be covered under MXP and its extension mechanism - or any other such generic approach, in fact. But maybe you don't mean for MXP to cover those.

For example, "Client is connected to a single bubble at a time which is responsible for routing information from [adjacent] bubbles" - rules out various non-client/server architectures.

Also, high-performance protocols (FPSes, etc.) are generally hand-crafted. You might in theory do those in an extension mechanism, but I doubt many would want to (I wouldn't) because of the overhead (instead of a protocol with extensions, I'd want a generic method for generating protocols for this sort of thing).

Also, not all protocols should be UDP-based, e.g. if responsiveness is not an issue - perhaps a virtual world focused on document collaboration - then it would be simpler to just use TCP.

So, there are plenty of virtual world-related use cases that wouldn't fit. Still, MXP is a great alternative to the open-but-privately-controlled SL protocol, which works under the various assumptions of client-server (star) networking, responsiveness being of medium relevance, etc. I guess my point is that MXP's extension mechanism can surely cover quite a lot of areas, so long as it is clear what MXP is meant to do and what it isn't.

- kripken

Ryan McDougall

unread,
Feb 26, 2009, 8:19:35 AM2/26/09
to realxtend-a...@googlegroups.com
On Thu, Feb 26, 2009 at 2:57 AM, daniel miller <danb...@gmail.com> wrote:
>
> gosh, I'm behind -- been very busy with other projects lately.
>
> I did quickly review Ryan's alpha arch doc.  My immediate reaction was
> I didn't see anything related to the timing issues we've discussed a
> bit on list.  Perhaps I missed it by skimming.  I really want to dig
> in but I've been hustling for paying gigs lately...

You're unlikely to find anything in there about those timing issues,
and that's purely a function of my attention being focused on use cases and
user interactions, since they are primary, and of my inability to
articulate your point of view for you.

Can you suggest a way to add your point of view in somewhere?

Cheers,

Ryan McDougall

unread,
Feb 26, 2009, 8:24:41 AM2/26/09
to realxtend-a...@googlegroups.com
Quite possibly; although hurting brain or not, it has no more room left
in it at the moment, and it cannot make the kinds of arguments you could make,
not having had the luxury of being you throughout its life. It's a sad
fact of humans lacking telepathy. :)

I eagerly await your additions to our wiki.

Cheers,

Tommi Laukkanen

unread,
Feb 26, 2009, 9:17:59 AM2/26/09
to realxtend-a...@googlegroups.com
++ to that, Arkowitz; don't try to control the uncontrollable, but aim for optimal immersion and minimize lag where you can.

Ben Lindquist

unread,
Feb 26, 2009, 10:12:09 AM2/26/09
to realxtend-a...@googlegroups.com
If somebody wants to do full-on simulations with the platform then a hub-based hard clock could be provided... but I don't get why this capability is so important. Given the choice I would pick emergence over determinism any day. Hard clock synchronization would lead to an overall update rate throttled down to the worst latency any participant has; either that or not allowing high-latency people to participate - or am I missing something? Simulation, by the way, does not always imply determinism - there are simulations of emergent behaviour which get very interesting... so let's not hogtie everything to a hard clock in order to support one specific use case. At the same time we *should* try to support Dan's hard-clock-required use case.

On the subject of adjacent bubbles in MXP, the plan is to have a bubble<->bubble set of messages which hubs use to inform each other of things going on that their respective participants need to know about. Regions in MXP are really just land content; if some land is adjacent to some other land, it could simply be injected into the same bubble a participant is already in, once one of the participant's objects gets to where it would perceive that land. It would be up to the "land daemon" (most likely coexisting with the hub) to do this injection.

This brings us to load balancing at the server (hub) level.  Load balancing should be separated from space coordinates and tied instead to activity levels.  There could be a small bubble with lots of objects and lots of updates - this will need more CPU thrown at it; multiple hub processes need to be able to cooperatively serve the comms needs.  There could be a gigantic space with only a few participants wandering it and few objects; this should be handled by one hub process.  A "hub" should really be a federation of comms processes.

Bubbles do not have fixed sizes, I think... they expand and contract as the participants' objects' awareness bounds move around. The overlap, and the bubble<->bubble part of the protocol, come into play when two bubbles expand into or move onto the same land content; now the participants in each need to know about each other's objects (based on awareness bounds, of course). So the two hubs handling the comms can talk to each other, rather than having each participant join two bubbles.

Arkowitz

daniel miller

unread,
Feb 26, 2009, 7:18:56 PM2/26/09
to realxtend-a...@googlegroups.com
[ryan:] I eagerly await your additions to our wiki.

Fair enough. I am trying to find the time to get my thoughts in order
and put them up.

[ben:] If somebody wants to do full-on simulations with the platform
then a hub-based hard clock could be provided... but I don't get why
this capability is so important. Given the choice I would pick
emergence over determinism any day. The hard clock synchro would lead
to an overall update rate which is throttled down to the worst latency
any participant has; either that or not allowing high latency people
to participate - or am I missing something?

No, that's a fair observation. There are really two inter-related
issues here -- time sync and determinism *within* our application, and
time sync over the wire. My point is that if you don't have the
former, you can never properly implement the latter. OTOH, you can
have strong time control within the app but still support protocols
that don't care so much about that.

I think the best I can do right now is to look more deeply into
Croquet, since I am pretty sure it does things "my" way. I do
acknowledge the issue that you don't want a large, multi-node system
to be so interdependent that when one node has problems it propagates
to everyone else's experience. There are ways to address this -- it's
basically an issue of proper error handling. Nodes that cannot keep
up with their responsibilities in terms of throughput and lag should
be considered in fault mode, and handled accordingly. Frankly this is
an issue that bugged me with Croquet, since it seems like any machine
could grind the whole scenario to a halt. That's another reason I
want to understand it better.

So I guess I will research Croquet and Qwaq as another protocol we may
wish to support in the future, and let's see if that brings some of
these issues into sharper focus.

I appreciate everyone's valuable time on the project and I don't mean
to drag it behind schedule; I just feel passionately that we might be
missing a crucial ingredient that will be very painful to introduce
later in the game.

-dan

daniel miller

unread,
Feb 26, 2009, 7:24:13 PM2/26/09
to realxtend-a...@googlegroups.com
> On the subject of adjacent bubbles in MXP, the plan is to have a
> bubble<->bubble set of messages which hubs use to inform eachother of things
> going on that their respective participants need to know about.

As Kripken mentioned, doesn't this preclude a p2p model? In general,
I'm in favor of *any* resource, including land and static objects,
being handled in the webby way, with indirect URIs or what have you
(not my expertise, but I think you know what I mean). Hard-coding
things so that you get all information through a 'hub' you need to
connect to seems too strict to handle all possible scenarios.

Ben Lindquist

unread,
Feb 27, 2009, 9:06:28 AM2/27/09
to realxtend-a...@googlegroups.com
The issue is even bigger than hub<->hub communication. I think it is solvable but has not been fully fleshed out (at least not that I have seen yet). The issue is: how can users who want to experience a bubble together, with some particular land resource defining the starting coordinates of the bubble, procure a hub they can use to communicate?

Peer-to-peer would be great, but it has two problems: one, it gets slow and laggy; two, you still need some sort of directory for users to find each other. So what I think we need is a mechanism for users to find each other and get a hub launched. Hubs should really be like cloud resources - they launch and die, and federate with each other, as user needs demand. Then we get the good experience of hub-based networking, with the spontaneity of peer-to-peer.

The "land directory", "pools of hubs" ready to be launched as users visit land... who owns these?  Lots of providers should be able to provide these things... and how do they get paid back for the servers they must run to provide them?  And how do they authenticate and federate across providers?

Resources, as always, are loaded via HTTP from whatever repository they live on - the hub<->hub stuff, just to make sure it's clear, is simply for commands and events to be proxied from one bubble to another, so that participants don't have to join multiple bubbles. And, also to be clear, I for one have no issue with allowing a participant to join as many bubbles as her hardware can handle.

Arkowitz