== Tuomo
Task: ModRex / rex ODE physics
- Make rex collision physics work with ModRex. Requires implementing our own
rex ODE physics plugin. Implement as cleanly as possible, with no hacks.
== Heikki
Task: rex avatar, migration from the current viewer to rexNG. Continue work
on the framework.
1. Compile the framework with Visual Studio 2008. Estimated: 0.5 days.
2. Gather avatar-specific asset server / authentication information in one
place. Estimated: 0.5 days.
3. Go over the rex avatar-specific code in the current viewer and see what
parts can be reused and how. Produce a rough overview of how the rex avatar
works client-side. Estimated: 2 days.
4. Propose an API for using the Avatar System in the new viewer.
Estimated: 4 days.
5. Discuss framework prototype implementation / design with clb.
Estimated: 0.5 days if no major issues, otherwise might be a completely
new task.
Result: documentation for the wiki. Interface code for the framework.
== MattiK & MattiR
Task: What is our IM/Communication Library?
Estimated time to completion: 12 days (6 working days)
* Test the Telepathy framework and form an opinion on whether or not to
use Telepathy for IM.
Subtasks:
- Finish building telepathy-glib on the Windows platform with CMake (1 day)
- Build telepathy-haze / telepathy-gabble on the Windows platform with
CMake (3 days)
- Get example Telepathy applications (Python etc.) running in the
Telepathy/D-Bus environment (2 days)
- Cross-compile the Telepathy libraries (2-4 days)
- Write a sample app that uses Telepathy for text chat (4 days)
Deliverables by 5.1.2008:
* build environment for all necessary telepathy libraries for Windows platform
* demo text chat application
* Recommendation for IM/Communication library
Note: because of possible sick absences, some of these might not be done by 5.1.2008.
== Lasse
Task: Which 3D UI library works with Ogre?
Estimated time to completion: 6 days
Investigate existing 3D UI libraries (OpenGL-based) for integration
into the rex-NG viewer.
Check ease of integration, features, and eye candy level.
Some libraries that have been found:
Clutter - http://clutter-project.org/
Gigi - http://gigi.sourceforge.net/
Pigment - https://code.fluendo.com/pigment/trac/wiki/WikiStart
Time breakdown:
- Investigate Clutter (2 days, maybe more if it's hard to build)
- Investigate Pigment (2 days)
- Investigate Gigi (2 days)
Deliverables:
- Source code, some simple demos
- Research results
== Ali
???
== Jukka
???
A view of this ongoing effort, based on the work we did earlier on
custom controls for the avatar. The full doc (draft) is at
http://www.playsign.fi/engine/rex/controldevice but I summarize the main
points here. Also, at the end there is a note about direct bone control. I
realize that the RexNG avatar work so far is mostly about how to get
the assets from the server, how to render them etc., but I'm thinking that
the API exposed for controlling the avatar is related, as it is about how
the code in the viewer will look.
a) Basic movements
Openviewer provides a singleton class called AvCtl for controlling the
avatar. It is a single point of entry which hides the underlying
details related to updating the view, networking etc., so it is nice to
use when implementing support for a new controller device, without
exposing murky details such as what is server-side (movement), what
is client-side (turning), and how different parts of the viewer are updated
(camera, scene, networking).
For our purposes, simple methods like these work well:
av.move(Vector3(0, 0.1, 0))  # motion vector - perhaps should set it (like a speed) instead
av.turn(0.2)                 # radians, for rotating the avatar sideways
The first example, arbitrary movement, is AFAIK not possible with the SL
protocol (we've planned workarounds for now, perhaps using vehicles),
but I think it should be a basic feature of the new API (and hence a future
protocol extension or a part of a new protocol). Currently the Openviewer
avctl provides methods like av.fwd(), which is about all you can do
when the movement is done the SL way, on the server side.
This is all perhaps self-evident, but I wanted to point it out to be sure.
Oh, and a note about the camera: it's kinda nice how avctl automatically
moves the cam too. Perhaps if in RexNG we don't have a separate
AvatarControl class but just an Avatar class that provides the
interface, the camera can somehow depend on the avatar position & orientation
so that no special code is needed in the control methods. *BUT* one thing
we've learned when doing controls is that sometimes you *want* to fine-tune
the camera behavior w.r.t. how the controls work, so that should
be possible in some nice way too (like writing a custom CameraControl
class). I guess I should put these requirement notes on the wiki somehow.
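To make this concrete, here is a minimal Python sketch of what such an
Avatar interface with a pluggable CameraControl might look like - the class
and method names are illustrative, not the actual Openviewer AvCtl or any
agreed RexNG API:

class Vector3:
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class CameraControl:
    # Default camera behaviour: follow the avatar. Subclass this to
    # fine-tune the camera for a particular control device.
    def on_avatar_moved(self, position, orientation):
        pass  # e.g. keep the camera at a fixed offset behind the avatar

class Avatar:
    def __init__(self, camera=None):
        self.position = Vector3()
        self.orientation = 0.0              # yaw in radians
        self.camera = camera or CameraControl()

    def move(self, motion):
        # Arbitrary motion vector (not just 'forward' as in the SL protocol).
        self.position.x += motion.x
        self.position.y += motion.y
        self.position.z += motion.z
        self._changed()

    def turn(self, radians):
        # Rotate the avatar sideways.
        self.orientation += radians
        self._changed()

    def _changed(self):
        # Networking and scene updates would be hidden here; the camera is
        # notified so no special camera code is needed in the control methods.
        self.camera.on_avatar_moved(self.position, self.orientation)

av = Avatar()
av.move(Vector3(0, 0.1, 0))
av.turn(0.2)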
b) Direct bone control
Besides moving the avatar around, playing back pre-made animations, and
evaluating IK w.r.t. scene objects, I think supporting direct
control of the skeleton should be pursued relatively soon, so the new
viewer architecture should help there. The avatar art people (Tomi and Laura)
were requesting that already a year ago, back then related to motion
sensors. Now, recently, Dan noted:
> I've seen motion capture solutions that claim to be moving towards
> consumer use, ie < $100 to capture your face and upper body. Once
> that sort of thing is possible, the issues of timing and
> synchronization of avatar movement and voice will become much more
> critical.
The other day I came across another solution that would be available to
basically anyone already now -- an open source toolkit that maps poses
from a normal full-body video camera image of the user to a 2D skeleton,
a really fun-looking VJ / real-time performance tool:
http://animata.kibu.hu/ .. no strings attached.
I'm so itching to get to test that for avatar control! :)
Of course that is limited: 2D, probably not very accurate, etc., but
since a key purpose of posing avatars in social VWs (and I guess this
includes videoconferencing for business too) is to express emotions,
I think that would already be fun and useful.
Haven't looked at the code at all yet, but I'm guessing/hoping that
integrating this to get pose info would be easy enough. As long as the
viewer / avatar API, and the protocol and the server, supported it..
Dunno if something clever could be done if the traffic load became
too much with multiple participants (like communicating only
certain nodes in the skeleton and letting the rig's IK/FK calculate the rest?).
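As a rough illustration of that idea (the message format and skeleton API
here are hypothetical, nothing comes from Animata or the SL protocol),
partial pose updates could look something like this:

# Hypothetical message format and skeleton API, purely illustrative.
KEY_BONES = ("head", "hand_left", "hand_right", "pelvis")

def encode_pose_update(avatar_id, pose):
    # pose: dict of bone name -> rotation, e.g. from a camera tracker.
    return {
        "avatar": avatar_id,
        "bones": dict((name, pose[name]) for name in KEY_BONES if name in pose),
    }

def apply_pose_update(skeleton, update):
    for name, rotation in update["bones"].items():
        skeleton.set_bone_orientation(name, rotation)  # assumed skeleton API
    skeleton.solve_ik()  # the local rig's IK/FK fills in untransmitted joints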
I have been thinking of a setting that would combine a dance mat, a
smart board, and such a camera-to-pose setup, to be able to move around,
express, and interact within a VW .. a simple CAVE with cheap commodity
hardware and open source software. This video of kids using Croquet
with a SmartBoard was really inspiring, http://edusim.greenbush.us/
(the first demo there is also with fishes! :)
Smartboards are simple: for the software it is basically just a mouse,
and the device is just a large touchpad onto which the image is projected.
And a dance mat is a keyboard. Yet to get a smooth fine-tuned overall
UI for such a combination, it will be really nice to be able to
customize how all those different inputs are used together in a certain
setting (I wonder how e.g. pose info from camera and dance mat
keypresses could be used together for flying around, or touchboard
touches or wireless device input together with pose info from cam for
editing etc.).
~Toni
hi Toni --
excellent post! You bring up some critical points. I am totally in
sync with the idea of using Python as a rapid-application tool. You
can look at openviewer as basically a rapid prototype of a viewer. I
hope we can continue to use Python this way through the lifecycle of
the project. I might be able to contribute a bit on that front, in
terms of getting boost-python working to allow easier intermix of C++
and Py.
> b) Direct bone control
Your description lays out the issues precisely. Let me note a funny
asymmetry in the SL protocol: Avatars have beautiful mesh skin and
clothes, but can only be controlled through pre-canned animations.
Prims don't have skin and bones, but you can easily control each part
of a set of prims to do complex realtime behaviors. One of the first
things I would like to see added to the protocol is mesh for prim
sets, and direct part control for avatars. Perhaps they should both
derive from a fundamental agent object type, but that's an
implementation issue. The functionality should be there in the
protocol.
This brings up my other pet peeve: timing information. As long as we
are talking about extending the protocol, this issue should be
addressed as well. Both for prims and direct avatar control,
real-time messaging of the behavior is going to be very brittle as
long as the assumption is made that each message from the server
represents "now". Just as we do in VOIP and streaming video, there
should be a buffer, and messages should include information on exactly
when they are supposed to be executed.
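As a small sketch of what that could mean in practice (a hypothetical
playout buffer, not a concrete protocol proposal): each message would carry
the time it should be executed at, and the receiver would hold it until then:

import heapq, itertools, time

class TimedMessageBuffer:
    # A playout buffer: each message carries an execute_at time and is held
    # until then, like a jitter buffer in VOIP or streaming video.
    def __init__(self, playout_delay=0.1):
        self.playout_delay = playout_delay        # seconds of added buffering
        self._queue = []
        self._counter = itertools.count()         # tie-breaker for equal times

    def push(self, message):
        due_at = message["execute_at"] + self.playout_delay
        heapq.heappush(self._queue, (due_at, next(self._counter), message))

    def pop_due(self, now=None):
        now = time.time() if now is None else now
        due = []
        while self._queue and self._queue[0][0] <= now:
            due.append(heapq.heappop(self._queue)[2])
        return due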
> Haven't looked at the code at all yet, but I'm guessing/hoping that
> integrating this to get pose info would be easy enough. As long as the
> viewer / avatar API, and the protocol and the server, supported it..
> Dunno if something clever could be done if the traffic load became
> too much with multiple participants (like communicating only
> certain nodes in the skeleton and letting the rig's IK/FK calculate the rest?).
Ok, this leads to another protocol issue: compression. I haven't
looked too deeply, but other than the use of Jpeg2K for textures, I
don't believe the SL protocol does much by way of compressing
messages. To really do this stuff well, you want to reduce the number
of bits sent to the absolute minimum. In the case of direct bone
control, that means a specific encoding algorithm for skeletons, that
takes advantage of the redundancies, precision needs, and entropy
characteristics of skeleton motion. It's not as hard as it sounds,
and this is something I can commit to providing (at least a decent
first pass). The same technique could and should be applied to
primsets, with some caveats.
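To give a feel for the kind of encoding meant here, a rough sketch (the wire
format and bit widths are made up for illustration, not a committed design):
quantize per-bone angles to a few bits and send only the bones that changed
since the previous frame:

import math, struct

def quantize_angle(angle, bits=10):
    # Map an angle in [-pi, pi] to an integer with 2**bits levels.
    levels = (1 << bits) - 1
    normalized = (angle + math.pi) / (2.0 * math.pi)
    return int(round(max(0.0, min(1.0, normalized)) * levels))

def encode_frame(prev_quantized, current_angles, bits=10):
    # Return (payload, new_quantized); only bones whose quantized value
    # changed since the previous frame are put on the wire.
    changed, new_q = [], {}
    for bone_id, angle in sorted(current_angles.items()):
        q = quantize_angle(angle, bits)
        new_q[bone_id] = q
        if prev_quantized.get(bone_id) != q:
            changed.append((bone_id, q))
    payload = struct.pack("<H", len(changed))        # made-up wire format
    for bone_id, q in changed:
        payload += struct.pack("<BH", bone_id, q)    # bone id byte + value
    return payload, new_q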
I know, I know -- I'm involved in feature begging when we should be
focused on bare-bones functionality delivered on a deadline. I'm
going to try to come up with a plan to put in the groundwork for some
of this stuff without requiring a substantial effort up front. I
believe I can make the case that it will pay off even in the first
year, by way of reducing the time spent debugging and performance
tuning.
-danx0r
Yes, that is kind of how we started using it in that work in December.
How to make the architecture with C++ & Py is still open; we agreed to return
to it a bit later in a meeting some weeks ago. That time is soon now, I
guess - at least some research effort should be put into it now in Feb, I think.
> the project. I might be able to contribute a bit on that front, in
> terms of getting boost-python working to allow easier intermix of C++
> and Py.
>
When looking at how to call Py functions from C++, I came across how it is
done in Boost, and IIRC it was basically:
f(args) .. no matter whether f is a C++ or Py func. Is that right? Does it
work by calling some PyThing wrapper?
Probably a good area for you to contribute in, among the others :)
~Toni
> When looking at how to call Py functions from C++, I came across how it is
> done in Boost, and IIRC it was basically:
> f(args) .. no matter whether f is a C++ or Py func. Is that right? Does it
> work by calling some PyThing wrapper?
For calling C++ functions from Py, that is how it works: it's
transparent to the calling function, looks just like a call to any
other Python library.
Calling Python from C++ is the same I believe, though typically I've
used it in the other direction. Aside from some tricky type
conversions, most of the work involves writing wrappers of the C++
classes to make them accessible from Python.
The workflow I've seen used successfully usually involves writing
things in Python, then progressively rewriting things in C++ from the
bottom up, ie low-level classes and functions get ported to C++ as
necessary for performance or other considerations. In this scenario,
your top-level code is typically all Python, and you tend to call C++
from Py but not so much the other way around.
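A small hypothetical illustration of that workflow (the module name
nativemath is made up): the top-level Python code calls the same function
whether it gets the pure-Python version or a Boost.Python-wrapped C++ port:

try:
    from nativemath import transform_points          # assumed Boost.Python extension
except ImportError:
    def transform_points(points, offset):            # pure-Python fallback
        return [(x + offset[0], y + offset[1], z + offset[2])
                for (x, y, z) in points]

# Calling code does not care which implementation it got:
moved = transform_points([(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)], (0.0, 0.1, 0.0))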
For my money, if this project ever comes to fruition:
http://shed-skin.blogspot.com/ I would just replace the C++ part with
something like this. Interop between these two languages should be
almost trivial then, at least in theory.
-danx0r
> IIANM Shed skin uses static type analysis, so it will never be able to
> compile Python - just a subset of it. The PyPy project is further along with
> a similar effort, compiling RPython into C, but it still isn't
> production-ready - these things are *hard*. And, even if they succeed, you
> aren't using a dynamic language anymore, just one with implicit type
> definitions.
Indeed, your points are taken. There are two possible approaches to
this issue. The first is to mix "duck typing" with static typing, as
was attempted here:
http://boo.codehaus.org/Duck+Typing
Another approach is to mix "real" python with something like shed
skin, in a fashion similar to how we mix C++ and Python using Boost.
-dan
That's what I've usually done too, except that so far it has always
gone so that by reusing existing low-level things (like Ogre) I haven't
yet encountered a situation where I would have needed to port something
to C/C++. Some years ago I once made the mistake of premature
optimization by starting a project by writing (what I thought was)
the CPU-intensive part in C, wrapping it in Py for the rest of the app,
and later learned that the C part was better off in Py as well (it was a
scroller, using pygame, so the native blit operations in pygame/SDL did
the heavy work anyway and the timing etc. was easier to tweak from the
Py side .. it still took only 2% of CPU).
In fact the reason why I was looking into calling Py from C++ is that
currently the core RexNG components (framework / module, event system)
are being researched / prototyped in C++ (also because those devs are
already familiar with that environment, Visual Studio etc.), and the guys
have looked into the PoCo library, which has some sort of event /
delegate system, so I was curious whether Python-written components could
be easily hooked up to be called by such a system.
Before the actual implementation starts in March we should have a good
idea of how to go about it, so as I said in a previous post, I hope some
research effort is now put into figuring that out.
For example, in an earlier thread about the '3d widgets' I was
drafting, Heikki agreed that (something like) the Ogre ManualObject
API would be good to expose for plugins. I've been thinking that code
for such 'drawables' should also be sendable over the network,
e.g. for a special targetter visualization to work for a specific
weapon in a game. In that case you'd most certainly want that code to
be in an interpreted language so that viewers on different platforms could
run it (in a safe environment too). So if such an API is needed, should we
actually be using python-ogre, which already exposes all of Ogre,
including ManualObject, to Py? If we use Ogre only from C++ directly,
the API work has to be done separately - perhaps that could also be done
by reusing parts of pyogre?
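For reference, a rough fragment (written from memory and untested, assuming
the python-ogre bindings and an already-initialized SceneManager) of what
drawing such a 'targetter' drawable through the ManualObject API could look
like from Python:

import math
import ogre.renderer.OGRE as ogre      # python-ogre bindings (assumed import path)

def create_targetter(scene_manager, name="targetter"):
    # Draw a simple line-strip ring as a stand-in targetter visualization.
    obj = scene_manager.createManualObject(name)
    obj.begin("BaseWhiteNoLighting", ogre.RenderOperation.OT_LINE_STRIP)
    for i in range(33):
        angle = 2 * math.pi * i / 32
        obj.position(math.cos(angle), 0.0, math.sin(angle))
    obj.end()
    node = scene_manager.getRootSceneNode().createChildSceneNode()
    node.attachObject(obj)
    return obj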
Another question is the integration of the components. So far we know
that at least one part will most probably be written in C++, namely the
graphics renderer, and quite probably it will remain Ogre. What
about the other components? In line with the kind of standard workflow
you described, the basic rule is that CPU-intensive parts should be in
native code and other things in Py. Two parts that are now being researched
in C++ are the network stack and the component system (framework). I'm
not sure how CPU-intensive those are - certainly with a lot of packets the
handling needs to be efficient. Also, the internal events may become
numerous if they are used for everything - can it really go up to tens of
thousands a second? PyOGP does messaging now in pure Py, and Openviewer
(and many other systems) has its internal event system in Py, so there are
codebases where we can see how it works. One example is MultiVerse3d, which
uses pyogre for gfx and does its own networking in Twisted,
http://www.mv3d.com/ (a one-man project).
Anyhow, back to the integration: one model would be to use Py as a
module system, even for just putting the different C++-written modules
to work together. This would be the 'extend' model in the literature.
We may be approaching that, as the C++ parts are now planned so that
they'll be independent libraries. The benefit of Py in gluing them
together would be the ease of customization, e.g. switching
components and tweaking how they are run. One old rant promoting this
kind of extending is
http://www.twistedmatrix.com/users/glyph/rant/extendit.html - dunno if
there'd be some more recent and perhaps more balanced(?) articles
somewhere .. the tech doc is http://docs.python.org/extending/
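A tiny sketch of that 'extend' model (the component classes here are
stand-ins; in practice they could be C++ extensions wrapped with
Boost.Python or pure-Python modules, interchangeably): Python acts as the
glue that wires the components together and decides how they are run:

class OgreRenderer:                      # stand-in; could wrap a C++ extension
    def update(self):
        print("render frame")

class SLUDPNetwork:                      # stand-in network stack component
    def update(self):
        print("pump network")

class AvatarModule:                      # a component kept in pure Python
    def update(self):
        print("update avatars")

COMPONENTS = {"renderer": OgreRenderer, "network": SLUDPNetwork, "avatar": AvatarModule}

def build_viewer(config=COMPONENTS):
    # Swapping a component is just a change to this mapping.
    return dict((role, cls()) for role, cls in config.items())

def run_frame(modules):
    # The glue layer decides the order in which the components are ticked.
    for role in ("network", "avatar", "renderer"):
        modules[role].update()

run_frame(build_viewer())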
This also touches on the GUI integration, which currently seems to be being
experimented with using Qt (which I expect can be good, but I don't know the
results yet, i.e. how Lasse got it to run with Ogre in C++).
> http://shed-skin.blogspot.com/ I would just replace the C++ part with
> something like this. Interop between these two languages should be
> almost trivial then, at least in theory.
I agree that Shed Skin and friends are interesting; I have had good
experience with Pyrex (and more recently with Cython, its continuation, in
another project) from how it works in Soya3d.
Anyhow, we are not going to replace Ogre with a Shed Skin-written or any
other port of it, so code written in plain C++ will most probably be
around. Also, it may be that the whole framework is simply made in C++,
because, as said, that's what the devs busy preparing it now are
familiar with, and that's the de facto standard in game engines anyway
(commercial games typically use interpreted languages only by embedding and
exposing a custom API for restricted things; they don't have the needs
for customization and extensibility of this viewer project, where a
module system like the one Py provides might be helpful).
Oh well, there are a lot of questions around this issue, some of which I
perhaps managed to touch on here; I feel that many didn't get communicated
clearly at all yet, but I am out of time now - I also don't have much of a
chance to look into these in the coming weeks (2 hours/day tops), and can't
make it to the meeting now either, but I am hoping that some sort of a
strategy is pulled together for getting a plan (and I am willing to help
there where I can).
In one way the question is more about the style of the framework than
about languages. I mean, with e.g. Twisted you use and write deferreds, no
matter whether in C, C++ or Py. I guess it's the same with Kamaelia's
pipeline components. So with RexNG we'll write .. I guess I'll need to
do some reading in the wiki to catch up later today when my daughter is
having her daytime nap.
Now to babycare business,
> -danx0r
~Toni
Just a quick remark that I had forgotten to send earlier, regarding the
work on the networking for the RexNG viewer.
It was mentioned in some meeting early on (by Jukka?) that it's
impossible to 'design for an unknown protocol', so only the stripped
down sludp has been considered now. The wiki doc seems to now echo that
idea: "One of the concerns was whether this implementation would later
on work for connecting to a totally different virtual world system that
is using a different protocol, but it was seen more effective in that
case to write another specialized protocol stack from scratch rather
than trying to build too many abstractions for unknown systems in mind."
There are known systems, existing implementations, that could be
examined now in order to evaluate the design of the viewer, and whether
e.g. that idea of replacing the protocol stack would go smoothly in
practice.
One example is MXP that has been discussed on this list (rexarch)
before, http://www.bubblecloud.org/
I was reminded of this now when talking with Tommi L., one of the MXP
authors, on IRC about possible experimental server and client
implementations. He is planning to add it to OpenSim at some point, and
I was telling / wondering a bit how we could perhaps experiment with it
using Openviewer or Idealist before RexNG exists (also, the reference impl
of MXP is in C#, like libomv used via python.net in pyov and in the C#
Idealist, so testing with those might be quick now; for RexNG, perhaps a
new implementation in C++ would be in order if that protocol seems like
the way to go).
There are of course other protocols too which could be used to get
perspectives for analyzing the viewer w.r.t. protocol dependencies,
like Croquet, or gaming protocols like what Quake etc. use perhaps, but
MXP seems like a cool fresh start and the guys are planning work on it
anyway, so perhaps that's a good first case now. The idea being that the
viewer should not be too entangled with SLisms all over.
At minimum I think it'd be good for both LC guys to take a look at MXP,
and the MXP guys to look at the RexNG netstack plans (I already pointed
Tommi to
http://www.rexdeveloper.org/wiki/index.php?title=Low-level_client_networking_interface)
But actually prototyping would of course be very exciting.
BTW, Adam F. also mentioned MXP on the LL & IBM initiated MMOX IETF list,
but I didn't see any reactions to that yet (there's been some debate over
whether it's OK for that standardization effort to go on focused on
OGP, LLSD etc., or whether to start more from scratch / look at others
too).
~Toni
> It was mentioned in some meeting early on (by Jukka?) that it's
> impossible to 'design for an unknown protocol', so only the stripped
> down sludp has been considered now.
The plan is to implement the whole sludp protocol, but not all of the
packets. This is because some of the packets are just uninteresting for
OpenSim and reX. I don't know if that's what you meant by 'stripped down'.
> The wiki doc seems to now echo that idea: "One of the concerns was
> whether this implementation would later on work for connecting to a
> totally different virtual world system that is using a different
> protocol, but it was seen more effective in that case to write another
> specialized protocol stack from scratch rather than trying to build too
> many abstractions for unknown systems in mind."
What I wrote refers to an earlier proposal that perhaps we should try to
come up with an "ideal" set of abstract app-level Virtual World
Communication messages, and the protocol abstraction would be achieved by
mapping both in- and outbound sl/other protocol messages to this abstract
set. I see this as senseless, as that kind of utopian abstraction doesn't
exist; the VW messages are far too app/world-state specific (browse
http://wiki.secondlife.com/wiki/Category:Messages for a while to see the
gory details). This abstraction layer would be too narrow and outright
obsolete already in its infancy.
> There are known systems, existing implementations, that could be
> examined now in order to evaluate the design of the viewer, and whether
> e.g. that idea of replacing the protocol stack would go smoothly in
> practice.
Replacing the protocol is being planned for, and the low-level protocol
library (that we've already been working on for the SL protocol) will be
easily changeable. Of course it requires extra work to create a separation
between the app-level scene logic part and the app-level network message
processing part, but that is just natural. Seriously we couldn't expect to
run the OpenSim-centric app logic when we're connected to e.g. MXP and
pretend we could do all the OpenSim actions there. This is not something
like HTTP vs FTP here.
We will build the viewer scene model and core modules in an abstract way
so that they don't have 'SLisms'. But rather than hiding the understanding
of these SLisms in some bijective protocol-mapping layer, I see we're
better off running specific OpenSimWorldLogic code in an OpenSim world and
MXPWorldLogic code in an MXP world. This will give our application vastly
better understanding of / integration with the target world.
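To illustrate the difference (class and field names here are hypothetical,
not from the actual RexNG plans): the scene model stays protocol-neutral,
while per-world logic classes know the quirks of the protocol they talk to:

class Scene:
    # Protocol-neutral scene model: no SLisms here.
    def __init__(self):
        self.entities = {}

    def upsert(self, entity_id, data):
        self.entities.setdefault(entity_id, {}).update(data)

class OpenSimWorldLogic:
    # Knows about SL/OpenSim packets, regions, IDs, etc.
    def __init__(self, scene):
        self.scene = scene

    def on_object_update(self, packet):
        self.scene.upsert(packet["FullID"], {"pos": packet["Position"]})

class MXPWorldLogic:
    # Knows about MXP bubbles and perception events.
    def __init__(self, scene):
        self.scene = scene

    def on_perception(self, event):
        self.scene.upsert(event["object_id"], {"pos": event["location"]})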
Exactly that, implementing just a subset of the packets. I should have
said 'SL protocol' instead of sludp, or better just 'not all the
packets' - sorry for the inexact expression in the hasty mail.
> What I wrote refers to an earlier proposal that perhaps we should try to
> come up with an "ideal" set of abstract app-level Virtual World
> Communication messages, and the protocol abstraction would be achieved by
> mapping both in- and outbound sl/other protocol messages to this abstract
> set. I see this as senseless, as that kind of utopian abstraction doesn't
> exist; the VW messages are far too app/world-state specific (browse
> http://wiki.secondlife.com/wiki/Category:Messages for a
>
Right, that issue I know; I just didn't know that the expression about a
non-existent protocol referred to the 'ideal' thing - I thought it may
have meant possible future alternative protocols (like MXP).
You are probably right that no ideal protocol can exist, as everything
has to make compromises - by choosing one solution you are not doing some other.
Openviewer kind of attempts that, as it has an abstract World model (the
World class in world.py), and it converts all the SL stuff to that in
OMVProtocol. I just got MXPProtocol working there enough to log in and
create an object on the server, but I'm not bringing anything from MXP into
that World model yet. I hope to get to do that soon, to see where the
abstraction may fail to be abstract. Also, in one discussion with Dan we
suspected that all the conversion that's going on there may well not be
wise at all, but we haven't looked closer at that yet.
Tommi also added an IProtocol to Idealist (where SL wasn't separated that
much earlier); it will be interesting to see how that goes (in the absence
of an abstract World, perhaps the Idealist model is even close to your plan?).
> while to see the gory details). This abstraction layer would be too narrow
> and outright obsolete already in its infancy.
>
Well, I hope to see what happens with it in Openviewer. It may also work
to some extent; many of these things have common ground.
> processing part, but that is just natural. Seriously we couldn't expect to
> run the OpenSim-centric app logic when we're connected to e.g. MXP and
> pretend we could do all the OpenSim actions there. This is not something
>
There you may be wrong, in the sense that the guys are planning to add
MXP support to OpenSim itself too, to have a real server implementation
that uses it and not just the simple minimal test server.
So how the app logic will be similar / different when using the SL protocol
/ MXP with OpenSim remains to be seen.
Then again, I'm not sure what exactly you mean by logic here, but that
I can perhaps read from the wiki docs / prototype sources, or we can
discuss it in a meeting.
> better off running specific OpenSimWorldLogic code in an OpenSim world and
> MXPWorldLogic code in an MXP world. This will give our application vastly
> better understanding of / integration with the target world.
>
So in a way MXP as a protocol is independent of 'world logic', and can
be used to implement e.g. the current SL-like logic in OpenSim. For
example, currently the MXP test things work so that the client injects
the avatar into the server when it logs in, whereas SL works so that the
client logs on to the server, which injects the avatar and sends it to
the client with the rest of the scene data. I don't know if that sort of
stuff is what you call logic. But of course either way can be
implemented using the MXP protocol, which just gives the means to inject
and receive data etc.
Basically my point in the original mail was just to note that MXP and
others exist and can be looked at, to see the concrete issues that different
protocols may introduce - it is nontrivial to achieve what is wanted
from 'staying away from SLisms', exactly because every working
implementation has to have the gory details somewhere.
But I guess we have a fairly good understanding - thanks for the reply.
~Toni
This approach will pretty much make it impossible to do any serious
distributed simulation. The issue I'm trying to bring home here is
that "good enough for gamerz" is just not the only criterion to
consider. The platform should be able to do entertainment, but it
should also be possible to do real, valid simulation work, such as is
done with robotics, vehicle design, battlefield simulation and
training, medical, artificial intelligence, molecular simulation and
so on.
Doing any of this kind of stuff depends on the idea of time being
deterministic across all the nodes in a network. You need to know
which events occurred in what order. It's not just an issue of
framerate, or the annoyance of lag. It's a question of being able to
extract useful information after the fact, and being able to repeat an
experiment, or at least understand what would constitute replication
of a set of distributed events.
I just think it's shortsighted to punt this issue because it's not
implemented in SL and it hurts our brains a bit to think about it.
We're potentially crossing out much of the interesting stuff that a
Metaverse should be capable of.
-dan
A couple of remarks here after some more thought, and after work on the
MXPProtocol impl against the Openviewer World:
>> come up with an "ideal" set of abstract app-level Virtual World
>> Communication messages, and the protocol abstraction would be achieved by
>>
> Openviewer kind of attempts that, as it has an abstract World model (the
> World class in world.py), and it converts all the SL stuff to that in
> OMVProtocol. I just got MXPProtocol working there enough to login and
>
In fact I think the Openviewer solution is not about an abstract set of
communication messages, but just that there is an abstract model of the
scene - and then every protocol implementation can communicate with whatever
network messages it wishes, and update the scene correspondingly. It's just
that the events the world uses pretty much match the (SL) packets, so in
that sense the world model also describes how the network protocol is
supposed to behave.
http://www.openviewer.org/attachment/ticket/71/mxp1.patch shows how I
added a new protocol impl and basically didn't need to change anything
in the World (I just made the set of events a protocol implements so that
each is optional).
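The pattern described there, sketched roughly in Python (hypothetical code,
not the actual Openviewer implementation): the World dispatches events to
whichever handlers a protocol object happens to provide, so every event is
optional:

class World:
    def __init__(self, protocol):
        self.protocol = protocol

    def fire(self, event_name, *args):
        handler = getattr(self.protocol, "on_" + event_name, None)
        if handler is not None:          # protocols implement only what they need
            handler(*args)

class MXPProtocol:
    def on_avatar_moved(self, avatar_id, position):
        print("send MXP movement for", avatar_id, position)
    # no on_region_handshake here: the World just skips that event for MXP

world = World(MXPProtocol())
world.fire("avatar_moved", 42, (128.0, 128.0, 25.0))
world.fire("region_handshake")           # silently ignored - an optional event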
So perhaps the current pyov model is what you are planning, and the MXP
thing I wrote for it would be similar to how MXP would be added to RexNG? I
haven't yet looked at how Tommi's work on the Idealist side turned out (I
only know that he refactored it so that it also now has an IProtocol) - one
difference is that Openviewer converts all the coordinate etc. info to
nonspecific types, whereas Idealist just uses the LibOMV types all over (and
hence depends on that lib even when using MXP, unlike Openviewer), but that
does not make such a big difference w.r.t. these principles.
> create an object on the server, but I'm not bringing anything from MXP into
> that World model yet. I hope to get to do that soon, to see where the
> abstraction may fail to be abstract. Also, in one discussion with Dan we suspected
>
Well, yesterday evening I got to creating the user's own avatar, which in OV
means that it triggers the creation of the region too (any new object coming
into a previously unknown region does), and that already brought up an
interesting difference between the MXP and SL models:
In SL the viewer connects to several regions, to be able to show the
neighbouring ones as well. Internally, both in OV and Idealist there is a
dictionary/map of region IDs to region instances where the info is kept
locally, and coordinates are calculated based on region + within-region
coords.
In MXP the viewer is always connected to a single bubble, and doesn't
need to know about other bubbles. So the viewers having that region
dictionary is just extra complexity (both Tommi and I now just put
it so that the single 'region' is there with the ID 1, and that's not
bad overhead, but still unnecessary from MXP's point of view). I don't
know yet how MXP communicates about objects in neighbouring
bubbles (if you can see them) and how you are supposed to deal with
coordinates within a viewer.
Perhaps that sort of thing is what you mean by 'logic'?
It will be interesting to see how that business will go in the RexNG
viewer's internal scene code.
> ~Toni
same.
Fair enough. I am trying to find the time to get my thoughts in order
and put them up.
[ben:] If somebody wants to do full-on simulations with the platform
then a hub-based hard clock could be provided... but I don't get why
this capability is so important. Given the choice I would pick
emergence over determinism any day. The hard clock synchro would lead
to an overall update rate which is throttled down to the worst latency
any participant has; either that or not allowing high latency people
to participate - or am I missing something?
No, that's a fair observation. There are really two inter-related
issues here -- time sync and determinism *within* our application, and
time sync over the wire. My point is that if you don't have the
former, you can never properly implement the latter. OTOH, you can
have strong time control within the app but still support protocols
that don't care so much about that.
I think the best I can do right now is to look more deeply into
Croquet, since I am pretty sure it does things "my" way. I do
acknowledge the issue that you don't want a large, multi-node system
to be so interdependent that when one node has problems it propagates
to everyone else's experience. There are ways to address this -- it's
basically an issue of proper error handling. Nodes that cannot keep
up with their responsibilities in terms of throughput and lag should
be considered in fault mode, and handled accordingly. Frankly this is
an issue that bugged me with Croquet, since it seems like any machine
could grind the whole scenario to a halt. That's another reason I
want to understand it better.
So I guess I will research Croquet and Qwaq as another protocol we may
wish to support in the future, and let's see if that brings some of
these issues into sharper focus.
I appreciate everyone's valuable time on the project and I don't mean
to drag it behind schedule; I just feel passionately that we might be
missing a crucial ingredient that will be very painful to introduce
later in the game.
-dan
As Kripken mentioned, doesn't this preclude a P2P model? In general,
I'm in favor of *any* resource, including land and static objects,
being handled in the webby way, with indirect URIs or what have you
(not my expertise, but I think you know what I mean). Hard-coding
things so you get all information through a 'hub' that you need to
connect to seems somewhat too strict to handle all possible scenarios.