http://gcc.gnu.org/projects/cxx0x.html
http://blogs.msdn.com/vcblog/archive/2008/10/28/lambdas-auto-and-static-assert-c-0x-features-in-vc10-part-1.aspx
Should we enumerate common features and use them in our code?
ps. http://sourceware.org/gdb/wiki/ProjectArcher improves GDB support for C++
Including the ability to use python to script within GDB
http://tromey.com/blog/?p=494
Well, I've been following the C++0x drafting process, and the features
the two compilers have in common, such as the re-purposed auto keyword
or the grammar extension to accept ">>" in nested templates, are really
minor: they were decided on years ago, are trivial to implement, and
have existed as extensions in GCC 4.3 for over a year now, and GCC is
most certainly not a beta compiler. I am similarly confident that while
MS's compiler is technically a "beta", its engineering team hasn't
ruined the compiler portion of VC++ since the last version.
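To make it concrete, here is a minimal sketch of the two features I
mean; it compiles with g++ -std=c++0x on GCC 4.3 (and should on the
VC10 beta, if the links above are accurate):

```cpp
// Minimal sketch of the re-purposed `auto` keyword and the ">>" grammar
// fix for nested templates, the two features mentioned above.
#include <map>
#include <string>
#include <vector>

int main()
{
    // Pre-0x, the closing ">>" required a space: std::vector<int> >
    std::map<std::string, std::vector<int>> table;
    table["prims"].push_back(42);

    // `auto` deduces the verbose iterator type for us.
    for (auto it = table.begin(); it != table.end(); ++it)
        it->second.push_back(0);

    return 0;
}
```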
That said, if you are uncomfortable with things you don't know, then
there is no need to force them.
However, this project has definite goals, and some of those goals can
only be accomplished with newer code. Whether the code happens to be
"beta" is up for a lot of interpretation. We will make our decisions
not on perceptions tied to version numbers, but by grounding them in
objectively evaluable experiments.
>
>> ps. http://sourceware.org/gdb/wiki/ProjectArcher improves GDB support for
>> C++
>> Including the ability to use python to script within GDB
>> http://tromey.com/blog/?p=494
>
> Dunno if it's anything serious, but Norton intercepts me when loading the
> ProjectArcher site saying that it's a known fraud site. Well, not a GDB
> fan anyway, so didn't start investigating if it was something to worry
> about.
Your Norton is clearly misconfigured. ProjectArcher is supported by
senior engineers from major multinational companies who, I dare say,
are more skilled than all of us combined.
>
> Yes, those are good questions. And before we actually have something to
> modularize, a lot of the discussion will just stay at a painfully abstract
> level. I suggest we don't try to stress over this one too much, instead
> just create abstractions/"modules"/plugins/whatever you call them only
> when we identify some functionality we need an indirection layer for. In
Yes, you are absolutely right. There is interplay between abstract
design and practical implementation. We will try to strike an
acceptable middle ground.
> more practical terms, what I mean is that we'd go the route of first
> building tightly-coupled implementations and refactoring them to go
> through an abstract interface only when we find necessary, instead of
> first trying to identify and write dozens of interfaces and/or modules
> before having the implementations nailed down.
I disagree somewhat here. I think that even if we accept that our
planning and design will have a decent amount of failure, it's still
our professional responsibility to attempt to plan and design. I will
spend a certain amount of time trying my best to think out design
issues beforehand, rather than be surprised later when it means a huge
re-write.
Moreover, the Steering Group is expecting a formal design document of
some kind to be drafted. We must accommodate this wish or find
ourselves out of a job, with my family and me booted out of the
country.
> As for the abstraction layers we'd initially need, I'm not sure. The
> protocol module was identified as being one, wanting to support several
> protocols, but what that means in more practical terms, I have no idea.
Well, at first I had no idea. Then I bent my will to getting some, and
over time I have started to. Let's continue this process together. It's
work, but it's also a requirement of our profession.
> The issues there I start thinking about at first are that lots of the
> protocols out there are nowhere near "equivalent" - they have different
> functionality with different focuses and I'm not at all sure if it's
> possible to create functioning indirection layers for them so that the
> application doesn't have to know about the specifics. The guys at Admino's
> end are probably more versed in this and know some ideas for solutions, so
> I'm not going to go any further with this.
I think making an abstraction to end all abstractions is senseless.
However, we can try to identify the classes of behavior we want based
on the high-level requirements I sent to you all. I think we can start
out by making a subtle generalization of the SL protocol and see where
it takes us.
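To make that concrete, here is a rough, purely hypothetical sketch of
what such a generalized "class of behavior" could look like; every name
here is a placeholder, not a committed design:

```cpp
// Hypothetical sketch of a protocol abstraction; names and methods are
// placeholders only. The viewer would talk to WorldProtocol, and an
// SL/OpenSim (or reX) implementation would live behind it.
#include <string>
#include <vector>

// A generic inbound event, decoded from whatever wire format the
// concrete protocol speaks.
struct NetworkEvent
{
    std::string type;                   // e.g. "ObjectUpdate", "ChatFromSimulator"
    std::vector<unsigned char> payload; // protocol-specific blob for now
};

class WorldProtocol
{
public:
    virtual ~WorldProtocol() {}
    virtual bool connect(const std::string& host, unsigned short port) = 0;
    virtual void disconnect() = 0;
    virtual void sendChat(const std::string& message) = 0;
    virtual bool poll(NetworkEvent& out) = 0;  // non-blocking receive
};
```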
>> What should our data model be?
>
> The OpenSim protocol (libOpenMetaverse) defines a lot of this data model.
> For example, see
> http://openmetaverse.org/svn/omf/libopenmetaverse/trunk/OpenMetaverse/Primitives/Primitive.cs
> to get a grasp of what's there. Our application does an internal data
Indeed; however, that would certainly tie us to SL and primitives, and
I am not sure we are ready to do that at this point. I think there is
more than one possible data model, as long as we bend our minds to
discovering it.
> mapping to our own data structures, which are most convenient to be ready
> for Ogre to use. If we're going to be OpenSim-compliant, then the question
> is more about what kind of extensions we want to the data model? And if we
> work our wholly own data model for the application, we need to overcome
> the fact that the OpenSim-protocol doesn't "talk our data model" language,
> and we would need to extend the protocol with our own corresponding
> messages. (which is the idea with our own reX protocol)
Yes. I think that there will be a certain amount of translation
between the SL data model, our modified reX model (as defined by the
modrex changes to the SL protocol), and the model within the viewer. We
will have to
investigate in some detail the trade-offs involved in this translation
on a case-by-case basis, as our design and implementation work
progresses.
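Purely as an illustration of the kind of translation I mean (all of
these types are hypothetical, not our data model), one direction of it
could look like this:

```cpp
// Illustrative only: a one-way mapping from an SL-style primitive update
// to a renderer-neutral internal scene object. All types are hypothetical.
#include <string>

struct SLPrimitive           // roughly what the SL/OpenSim protocol hands us
{
    unsigned int localId;
    float position[3];
    float rotation[4];       // quaternion
    std::string meshAssetId; // reX extension: reference to a mesh asset
};

struct SceneObject           // the viewer's own, renderer-neutral model
{
    unsigned int id;
    float position[3];
    float orientation[4];
    std::string meshRef;
};

SceneObject translate(const SLPrimitive& prim)
{
    SceneObject obj;
    obj.id = prim.localId;
    for (int i = 0; i < 3; ++i) obj.position[i] = prim.position[i];
    for (int i = 0; i < 4; ++i) obj.orientation[i] = prim.rotation[i];
    obj.meshRef = prim.meshAssetId;  // empty for a plain SL prim
    return obj;
}
```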
> In general, application-wise, it's best to go with the true-and-tried
> scene hierarchy where we have nodes with transformation bases that contain
> objects of different types, pretty much what Ogre already does for us.
Yes, a scene hierarchy is indeed a very good abstraction. However, as
the Intel paper lays out, there are ways to keep a universal model that
is not tightly coupled to the rendering engine.
Please read the paper and study the Smoke demo for more information.
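To sketch the spirit of that decoupling (this is my own toy example,
not code from the paper or the Smoke demo): the scene model owns the
data and notifies subscribers, and the Ogre renderer is just one
subscriber.

```cpp
// My own rough sketch of a renderer-neutral scene model, in the spirit of
// the Intel/Smoke approach; not code from the paper. The model never
// includes an Ogre header; a renderer (or physics, audio, ...) subscribes.
#include <cstddef>
#include <vector>

struct Transform { float position[3]; float orientation[4]; };

class ISceneObserver
{
public:
    virtual ~ISceneObserver() {}
    virtual void onObjectMoved(unsigned int id, const Transform& t) = 0;
};

class SceneModel
{
public:
    void addObserver(ISceneObserver* obs) { observers_.push_back(obs); }

    void moveObject(unsigned int id, const Transform& t)
    {
        // ... update internal state, then notify every subsystem.
        for (std::size_t i = 0; i < observers_.size(); ++i)
            observers_[i]->onObjectMoved(id, t);
    }

private:
    std::vector<ISceneObserver*> observers_;
};

// An OgreRenderer would implement ISceneObserver and map ids to
// Ogre::SceneNode*s internally, keeping the coupling one-directional.
```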
>> * how does a rendering engine play peacefully with a UI library in a
>> modern multi-threaded context?
>
> One of the things that we would find useful is to have our own "User
> Interface Data Model", i.e. the client wouldn't work in a hardcoded UI
> state at all, but the server would send the clients the UI components they
> have in use. This is one of the things that would be crucial for
> implementing worlds with better interaction.
Yes, skinning is, I think, a critical feature for enabling the
multiple use cases we're aiming for. However, the precise interactions
at the video card level are my concern. Right now I am thinking of
using an existing OpenGL UI layer to composite a 2D UI on top of the
rendered frame. This can be accomplished by having Ogre render to a
texture, then having the UI layer draw on top of that texture. However,
keeping the texture in video memory and avoiding OpenGL context
switches will be serious concerns.
AFAIK this is the technique game programmers use to build UIs with Flash.
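Roughly what I have in mind for the Ogre half (sketch only; the
resolution and texture name are placeholders, and the UI half is
hand-waved in comments because it depends on the library we pick):

```cpp
// Sketch: Ogre renders the 3D scene into a texture; the UI layer then
// composites its 2D widgets on top. Names and sizes are placeholders.
#include <Ogre.h>

Ogre::RenderTexture* createSceneTarget(Ogre::Camera* camera)
{
    Ogre::TexturePtr rtt = Ogre::TextureManager::getSingleton().createManual(
        "SceneRTT",
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D,
        1024, 768,                 // placeholder resolution
        0,                         // no mipmaps needed for a full-screen target
        Ogre::PF_R8G8B8,
        Ogre::TU_RENDERTARGET);

    Ogre::RenderTexture* target = rtt->getBuffer()->getRenderTarget();
    target->addViewport(camera);
    target->getViewport(0)->setClearEveryFrame(true);
    return target;
}

// Per frame, roughly (UI side in comments, since it depends on the library):
//   target->update();                  // Ogre renders the scene into "SceneRTT"
//   ui.drawFullScreenQuad("SceneRTT"); // UI draws the frame as a textured quad
//   ui.drawWidgets();                  // then the 2D UI on top
//   swapBuffers();
```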
> How the UI library works with the rendering engine depends on what UI
> library we choose to use. I'm not familiar with Python UI libraries, so
> can't say anything authoritative on the matter. I would actually prefer
> having a C++ UI library, since that would let us build the UI<->Renderer
> bridge in an easier way.
The precise language of the library, now that we have agreed that C++
and Python are acceptable, is of tertiary concern. We should figure
out our requirements first, which I hope I did a little, then decide
the general mechanism of enabling them, which I am now trying to do.
>
>
> Best Regards,
> Jukka
Thanks for your input; it is very helpful and critical to our success
going forward.
However, it would be even more helpful if you could read the emails I
have previously sent more carefully, specifically the requirements and
the Intel paper, and provide constructive criticism so we can move the
project forward instead of getting bogged down in disagreements.
It's easy to shoot something down; it's quite another task to build
something up on your own.
Cheers,
As a starting point, here are a couple of libraries worth investigating,
which integrate wxWidgets with Ogre:
http://www.ogre3d.org/phpBB2/viewtopic.php?t=41153
http://www.ogre3d.org/phpBB2/viewtopic.php?t=46073
Actually, a UI that does not use any 3D rendering interface itself
would be preferable, because otherwise we *will* get bogged down with
context switches and possibly the same kinds of odd bugs we have with
the current viewer. Furthermore, forcing OpenGL on the Win32 platform
would not be nice, because we know Ogre works better with Direct3D ;)
- Lasse / realXtend developer
My only concern is that a traditional GUI will not allow the sort of
innovative interfaces the Steering Group is interested in seeing, and
that the content creators are interested in using for their designs.
It'd look like MS Word.
If we do the UI in Ogre then we've tied ourselves to one rendering
engine forever, which IMO is worse than tying ourselves to one 3D
driver. Moreover, we'd have to maintain our own skinnable Ogre UI
library, when things like
http://elisa.fluendo.com/screenshots/
http://clutter-project.org/
http://git.moblin.org/repos/users/pippin/screencasts/2008-06-25.html
are already written and will be maintained by a large group of
programmers for a long time. Reusing code instead of inventing our own
will be the only way to be both innovative and timely.
So we have 3 choices:
1. Use an old-fashioned, non-skinnable, mouse-based, native UI
Pros: mature, maintained, features for a11y and i18n, simple, well known
Cons: boring, previous/current generation, not adaptable to multiple use cases
2. Write an Ogre-based custom UI toolkit
Pros: fastest performing choice, simple, flexible
Cons: time to develop, time to maintain, no a11y or i18n, lots of
work, tied to Ogre, mixing UI and 3D code
3. Use an existing, next-gen, scene-graph based, OpenGL UI
Pros: best usability, flexible, skinnable, maintained, becoming more mature
Cons: pays a performance overhead (though it by no means has to
completely kill a modern GPU), ties us to OpenGL
As you can guess, after weighing this matter for the past week, I have
decided I prefer option 3. What are your thoughts?
While your concerns about killing performance and debugging complexity
are well founded, based on my discussions with the Clutter maintainers
and my own research into the matter, there is no reason why it cannot
be done in a reasonably clean and performant way.
As usual, whether we choose to do so or not does depend on the results
of our feasibility prototyping in Jan/Feb.
Cheers,
One problem is graphics card drivers and their obscure optimizations:
if we have two OpenGL renderers (Ogre and the UI) and therefore two
OpenGL contexts, we will have the same kinds of problems as in the old reX
viewer. Unless we can convince Nvidia et al. to not do obscure
optimizations :)
On the other hand, we can hack both Ogre and the UI library to coexist
in a single context, but then we will have problems with "forking out"
from both libraries. I wouldn't call that clean.
Well, I have asked the Clutter devs, and they intend to make an API
for precisely this kind of compositing by sharing contexts.
Can you define "forking out"?
Cheers,
For example, hacking Ogre and its internals to such a degree (in this
case, to accommodate the UI in a single context) that when they release
the next version, we'll have an awful lot of trouble on our hands.
Basically like how the reX server is forked from OpenSim ;)
I'm not saying this will have to happen, but it could.
OK, I understand.
The current Clutter API allows you to create an OpenGL context when needed:
http://www.clutter-project.org/docs/cogl/0.8/cogl-Utility-API.html#cogl-create-context
It would be a simple patch to add getters/setters. It has been said
that such functions should be added in the future.
For the Ogre side, we have
http://www.ogre3d.org/docs/api/html/classOgre_1_1Root.html#537b7d1d0937f799cfe4936f6b672620
"Key: "externalGLContext" [Win32 OpenGL specific] Description: Use an
externally created GL context Values: <Context as="" unsigned=""
long>=""> Default: 0 (create own context)"
which appears to show that it only works on WGL for some reason.
However, this thread seems to imply it works on GLX too, and was done
for precisely the same reason we want it, which suggests it may become
better supported in the future:
http://www.ogre3d.org/phpBB2/viewtopic.php?p=230333&sid=ce193664e1d3d7c4af509e6f4e2718c6
That should mean any divergence from upstream can be kept in a small
patch, or this may even be supported directly upstream some day.
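To make it concrete, here is a sketch of what passing an external
context to Ogre might look like; the context handle is hypothetical and
would come from whatever the UI library (e.g. Clutter) ends up exposing:

```cpp
// Sketch: hand Ogre an externally created GL context via the
// "externalGLContext" misc parameter. The uiGlContext handle here is
// hypothetical; it would come from the UI library's own context.
#include <Ogre.h>
#include <sstream>

Ogre::RenderWindow* createSharedContextWindow(Ogre::Root* root,
                                              unsigned long uiGlContext)
{
    std::ostringstream handle;
    handle << uiGlContext;

    Ogre::NameValuePairList params;
    params["externalGLContext"] = handle.str();

    // Ogre should then issue its GL calls into the context we already own
    // instead of creating its own.
    return root->createRenderWindow("Viewer", 1024, 768,
                                    false /* windowed */, &params);
}
```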
Basically I'd like to actually try this before we rule it out.
Cheers,
I had another crazy idea: we could avoid the complexity of sharing
contexts, etc., by rendering Clutter in software using http://mesa3d.org
...
A CPU with two cores should be able to handle it fine.
Cheers,
then you'd basically lose the benefits of Clutter's ability to use a
GPU altogether, and I think the gfx ops it needs are better run on the
GPU, i.e. they would eat a lot of CPU.
i haven't read all the references and ideas thrown here, so dunno how it
was addressed, but don't yet understand the problem of separate contexts
for 2d and 3d. i mean you can of course have two different independent
things that use the gpu, like Blender or 3dsMax and the Rexviewer
running at the same time. or with MacOSX, Compiz or Vista the windowing
environment itself uses the gpu too (afaik the windows being textures
etc), but having other windows does not hurt Rexviewer, or does it?-o
so instead of using Mesa, wouldn't it be better to just give Clutter
its own (gl) context, and composite the images from there? at least if
you have a powerful gfx card. if you don't but have powerful CPUs
perhaps Mesa can make sense then. and if you don't have a gfx card at
all you can use Mesa to render the viewer display too, with 0.1 fps or so :)
i understand that this two-contexts situation is problematic right now
with the current viewer, but don't know yet where the problem actually
is - just based on seeing how things run in a windowed environment where
many things use the gpu, i think it can be ok too.
of course being within a single context would be the most optimal,
dunno if the VBO passing techniques etc. that were discussed the other
day could work for that (and if there is any way anything like that
could work for cross gl - d3d interop, i guess not)
~Toni
Been doing a lot of research on the issue over the holidays, and
basically the GPU is not unlike the CPU. Even if it has parallel
processing units, there is shared memory that has to be locked or
copied before it can be used without race conditions. Unfortunately
this will, especially on the GPU, kill performance.
So if we target 60Hz refresh using two contexts, that means swapping
out the entire state of the GPU (possibly across AGP to system
memory??), flushing all OpenGL commands, and flushing the whole
pipeline 60 times per second. Ouch.
If we share a context, we have to write a shim to manually save any
state that our UI code might trample, and then worry about badly
written drivers optimizing things they probably shouldn't, giving us
some angry users.
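For what it's worth, the shim I am picturing is little more than a GL
2.x-era state guard around the UI pass; whether that is actually
sufficient is exactly what the feasibility prototyping has to tell us:

```cpp
// Rough idea of the state-saving shim: push the GL state before the UI
// library draws, pop it afterwards, so the 3D renderer's assumptions
// aren't trampled. (glPushAttrib is fixed-function-era OpenGL, which is
// what we'd be dealing with in this scenario.)
#include <GL/gl.h>

void drawUiWithStateGuard(void (*drawUi)(void))
{
    glPushAttrib(GL_ALL_ATTRIB_BITS);              // save server-side state
    glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS); // save client-side state

    drawUi();   // the UI library's rendering pass

    glPopClientAttrib();                           // restore client state
    glPopAttrib();                                 // restore server state
}
```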
Unlike the CPU where we have well known OSes, the way GPUs handle
concurrency is heavily dependent on the driver, and because the
compositing work we want to do is a relatively new use case (roughly
as new as OS X, Vista, and Compiz), there is no guarantee that any
closed-source driver will behave well.
All that said, GUI development is clearly moving towards what we want
to do -- we are trying to be on the cutting edge. Libraries and
drivers will over time support our use case, and in the near future it
will be considered a normal thing.
See below for one example of how Linux intends to support "redirected
direct rendering" with the DRI2 driver infrastructure:
http://hoegsberg.blogspot.com/2007/08/redirected-direct-rendering.html
Microsoft, being as awesome as they are, is no doubt way ahead of the
game with Vista. It's only a matter of time before they insist driver
writers conform to whatever the Windows equivalent of DRI2 is.
I would like to have this conversation in depth, with some feasibility
studies, in January. Worst case we decide things aren't ready for what
we want to do, and fall back on a traditional UI library like
WX/GTK/QT while we work on other areas.
Cheers,
Intel's GEM GPU memory manager has been merged into the latest Linux
kernel released last week.
http://www.phoronix.com/scan.php?page=news_item&px=Njc2NA
Information about what GEM is, and how it will enable multiple
applications to share GPU resources, is at the bottom of this mail
from Keith Packard of Intel.
http://lwn.net/Articles/283798/
The open-source drivers for Intel- and AMD-based GPUs have preliminary
support for GEM. The proprietary AMD and Nvidia drivers are unlikely to
support GEM, but will likely have similar proprietary features.
Support is on the way.
Any ideas what the Windows or Mac picture looks like?
Cheers,