Obviously at the very early planning stages here :-) I have a decision...
In my proposed Virtual World Environment, I am deciding whether I should write the whole stack in Newspeak or use Newspeak only at the general-user programming* level.
* That is, the language used to program scene graph entities from the user side (keeping in mind that the environment is explicitly about user-generated content).
My current idea is to split the system into multiple entirely independent parts that communicate via a single shared data structure (namely the 3D scenegraph)...
Independent execution threads (possibly, eventually, separate physical processors, each optimized for its task); all are strict black boxes from the user's perspective:
- The Renderer engine repeatedly scans through the scene graph and draws the graphics and activates/attenuates/deactivates scene sound sources relative to user position in world (it can use OpenGL, Mantle, DirectX, soft-render, or specialized GPU/DSP code on the back-end but this is entirely transparent to the environment the user sees).
- The Audio streamer handles the actual playback of audio streams and maps audio-source directionality in direct response to the Renderer engine's updates to audio-source positions in the world relative to the user.
- The Physics engine repeatedly scans through the scene graph and updates objects' physical properties (location, spin, etc.) according to their set delta-Vs, accounting for collisions, friction, etc.
- The scripting engine maintains a collection of event-triggered scripts for each scene graph object (events coming from user interaction, other scene-graph objects, and the physics engine). This one would definitely want to be in a language like Newspeak. Scripts themselves are, ideally, distributed as source for compile-on-load use as well as being available for user inspection. The primary appeal to me of Newspeak so far is its exceptional IDE. Not so keen on curly brackets (or even square ones) as I use a one-handed keyboard and those glyphs are a bit of a finger-twister! For that reason, I have been looking at Ada a bit too. Nice language, needs a Newspeak-like IDE :-)
- The synchronization engine keeps multiple users of the virtual world data-synchronized across a network (potentially allowing a distributed but contiguous world). This is more long-term. Initially I am targeting single-user applications, but keeping this in mind.
...
My concern with doing the whole thing in Newspeak is what the scene graph data would look like if it were constructed from actual Newspeak objects, rather than a naive array of C-like structs containing the raw values and pointers-to-more-complex-data, which the various engines above could quickly scan over, picking out just the bits relevant to them.
The Newspeak way seems to imply that each engine would have to message each object to 'draw yourself', 'update your physics' etc., which implies a more homogeneous overall system (this may be better than my way for all I know, so I'm happy to be talked into it if it is).
Also, while prototyping the graphics subsystem in Newspeak (or any high-level language) would certainly be easier in the short term, re-basing it to run on a dedicated GPU or DSP would likely be tricky, if possible at all, if the code is entwined in a homogeneous object.**
** My long-term hardware target at present is the Raspberry Pi, mainly because it has properly open video drivers, and secondarily because its education/hobbyist focus matches my own target demographic for this project. Also, replacing the VideoCore's custom RTOS with a non-OpenGL AV stack optimized explicitly for my own, simpler, system has some potential worth exploring (I'm rather relying on my project gaining enough momentum to attract someone who could actually do that, but it appears technically possible from my research into the matter).
Will stop there. I tend to be a bit long-winded!
-Lae.