Alpha draft of reX-NG design document


Ryan McDougall

Feb 17, 2009, 4:47:15 PM2/17/09
to realxtend-a...@googlegroups.com
Available here:
http://rexdeveloper.org/wiki/index.php?title=ReX-NG_Design_Document#Sections

This documentation is of alpha form. Not really ready for consumption,
however you can take a peek at some of the sections now.

If you have spelling/grammar fixes, they can go directly in. If you
have other opinions or suggestions, please constrain them to the talk
page or this list.

Cheers,

Misterblue

Feb 21, 2009, 12:41:02 PM2/21/09
to realxtend-architecture
Hello all,
My name is Robert Adams (aka, Misterblue) and I've just been
introduced to this project. I have been working on my own virtual
world viewer for a while but Ryan thought I should check out what's
happening here.

Most comments and questions I have probably have been covered in
previous discussions so feel free to point me to threads.

I read over the Design Documents and have already sent some comments
to Ryan, but here I'd like to ask about the general viewer design.

From the design documents, the viewer is being designed like a tightly
integrated FPS application. Is that what's intended? Virtual world
systems are all about interaction, and while combat and twitch
reaction times are one kind of interaction, when most non-game people
talk about virtual world collaboration they mean something like
document sharing.

I'm not suggesting that document sharing cannot be implemented in an
FPS application; I was more wondering why every module has to deal
with FrameUpdate calls. For instance, if I built a document sharing
module which allows multiple people to edit a document by clicking on
it and typing, with everyone's view updating, that module must have a
portion of itself that is tied to the renderer and its frame update
requirements, but most of the module is dealing with networking and
talking to the versioning repository, etc. In a multi-processor system,
the hardware thread doing the communication and interaction logic
could be totally separate from any rendering threads.

To fit into the described framework, every module would have to
implement itself in (at least) two parts: the part that deals with
the renderer and its frame time requirements, and the other parts that
compute and communicate independently of the rendering. I would
think the framework would support that construction by providing
support for both the renderer and non-renderer parts. Such support for
modules is not described in the framework.
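To make the two-part split concrete, here is a minimal Python sketch of what such a module might look like. All names here (DocumentSharingModule, frame_update, on_remote_edit) are hypothetical illustrations, not part of the reX-NG design:

```python
import queue
import threading

class DocumentSharingModule:
    """Hypothetical module split into two parts, as described above:
    a background worker that talks to the network and versioning
    repository on its own thread, and a small renderer-facing part
    driven only by the frame update."""

    def __init__(self):
        self.incoming = queue.Queue()  # edits arriving from the network side
        self.worker = threading.Thread(target=self._network_loop, daemon=True)
        self.worker.start()

    def _network_loop(self):
        # Runs on its own hardware thread, independent of frame timing.
        # A real module would poll the versioning repository here;
        # stubbed out for this sketch.
        pass

    def on_remote_edit(self, edit):
        # Called from the networking side: just enqueue the edit.
        # Nothing here touches the renderer or its timing.
        self.incoming.put(edit)

    def frame_update(self, dt):
        # The only renderer-tied part: drain pending edits on the
        # frame boundary and hand them to the scene.
        applied = []
        while not self.incoming.empty():
            applied.append(self.incoming.get_nowait())
        return applied  # in a real viewer these would update scene entities
```

The point of the sketch is that only `frame_update` needs to know about frame time; everything else in the module lives in its own time domain.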

Architecturally, the current design has the renderer at the center.
Wouldn't a modular, flexible, extendable design have the renderer as a
subsystem with all the right interfaces for other modules to
interact with? Then the general framework would support multiple module
subsystems and provide the general services (threads, timing, module
management, inter-module plumbing, ...).

-- mb


On Feb 17, 1:47 pm, Ryan McDougall <ryan.mcdoug...@realxtend.org>
wrote:
> Available here:http://rexdeveloper.org/wiki/index.php?title=ReX-NG_Design_Document#S...

Ryan McDougall

Feb 23, 2009, 4:56:05 AM2/23/09
to realxtend-a...@googlegroups.com
Thank you very much for giving your input. I think it's necessary to
have a lot of differing viewpoints so we can arrive at a consensus
everyone can respect.

I don't think your criticism is without merit at all. In fact, it's a
point of some ongoing contention.

Basically, the idea is that the core framework should be single
threaded, so as to make it simple to debug. That means sharing the CPU
time of that single thread evenly out to all modules that need to
execute. A "frame" is simply an internal concept of time.

If you were to multi-thread the core framework, you'd need some sort
of synchronization point. One of which would be a frame boundary: you
could assume that no shared variables would change state within a
frame, only on a boundary.
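A single-threaded core of the kind described here can be sketched in a few lines of Python. The class and method names (SingleThreadedCore, tick, frame_update) are illustrative assumptions, not the actual reX-NG API:

```python
class SingleThreadedCore:
    """Sketch of the single-threaded core framework described above:
    one thread, each registered module gets a frame update call per
    frame, and shared state is only assumed to change on frame
    boundaries."""

    def __init__(self):
        self.modules = []
        self.frame = 0  # a "frame" is just an internal concept of time

    def register(self, module):
        self.modules.append(module)

    def tick(self, dt):
        # One frame: every module runs to completion before the next
        # module starts, so no shared variable changes mid-frame.
        self.frame += 1
        for m in self.modules:
            m.frame_update(dt)
```

Because modules run to completion in sequence, the frame boundary is the natural synchronization point: a debugger stepping through `tick` sees the whole world change in one deterministic order.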

As for your document sharing example: that module would
simply have its own idea of time, say from the RDP protocol, and would
ignore the client engine's concept of time.

However, I think your general concern is legitimate, and my
personal opinion is that while the renderer is one of the most
critical parts of the code, and cannot be slowed down by bad
architecture, we should end up with something that has the concept of
multi-threading explicit in the core design, and not dependent on the
choice of renderer.

I think it's our goal, or at least a possible goal, but one that might
only make itself clear after having these kinds of discussions.

Cheers,

Ryan McDougall

Feb 23, 2009, 4:59:12 AM2/23/09
to realxtend-a...@googlegroups.com

I really should have ended that more constructively:

Mr. Blue, can you propose an alternative?

reX-NG is an ongoing design, and the document you read is just a
snapshot in time for outsiders who want a quick overview. Nothing is
set in stone, only good or bad engineering.

Cheers,

Misterblue

Feb 28, 2009, 1:40:22 AM2/28/09
to realxtend-architecture
I think the framework should support the modules more. Not that the
renderer portion doesn't need a framework, but I think there will be
more modules than there will be renderers. The tight, single threaded
framework laid out in the architecture document would work great for a
twitch game where the click of the mouse has to turn into a UDP packet
on the wire as quickly as possible. Other uses will have queues and
abstraction; in fact, most uses of the viewer will.

I lean toward an architecture like the Idealist viewer: there is a
module that talks some virtual world protocol (P2P, MXP, LL, ...) and
handles all networking. This module understands the concepts from the
protocol and maps them into a high level abstraction used by the
viewer. By "high level" I mean something simple like the entity/scene
model in the proposal. These converted items pass from "network time"
to "renderer time" through some queue or synchronization interface
into the renderer, where they are rendered. The renderer itself could
be a single threaded, frame-time synchronized piece of code.

Network -> Protocol handler -> Object Mapper -> High level
abstraction -> Queue -> Renderer
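The pipeline above can be sketched in Python. This is a hedged illustration of the queue handoff between "network time" and "renderer time"; the class names (ProtocolModule, Renderer) and the message shape are invented for the sketch, not taken from Idealist or reX-NG:

```python
import queue

class ProtocolModule:
    """Network side: maps protocol messages into the high-level
    entity abstraction and hands them across a queue into the
    renderer's time domain."""

    def __init__(self, scene_queue):
        self.scene_queue = scene_queue

    def on_message(self, msg):
        # "Object Mapper" step: protocol concept -> simple entity update.
        entity_update = {"id": msg["id"], "pos": msg.get("pos")}
        self.scene_queue.put(entity_update)  # crosses into "renderer time"

class Renderer:
    """Render side: single threaded, drains the queue once per frame."""

    def __init__(self, scene_queue):
        self.scene_queue = scene_queue
        self.scene = {}

    def frame_update(self):
        # Pull all pending entity updates on the frame boundary,
        # then the frame would be drawn from self.scene.
        while True:
            try:
                update = self.scene_queue.get_nowait()
            except queue.Empty:
                break
            self.scene[update["id"]] = update
```

Because `queue.Queue` is thread-safe, the protocol module can run on its own thread at network pace while the renderer drains updates at frame pace; several protocol modules could also share one queue into the same renderer.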

Since things are pluggable, there could be multiple of these protocol
modules connected into the renderer at the same time.

This makes the construction of the network side of the virtual world
separate from the rendering side. This frees the module writer from
worrying about the timing and context required by the renderer. If one
needed closer integration, a module writer could eliminate the queue,
but I think most would want the freedom to focus on all the problems
of maintaining the distributed virtual world state.

--mb

On Feb 23, 1:59 am, Ryan McDougall <sempu...@gmail.com> wrote:
> On Mon, Feb 23, 2009 at 11:56 AM, Ryan McDougall <sempu...@gmail.com> wrote: