[osg-users] Using OpenXR in OSG


David Glenn

Jun 24, 2019, 6:07:53 PM
to osg-...@lists.openscenegraph.org
Greetings!

I guess I'm going to gripe on this subject like I did a year ago!
I know that OpenXR is at least in open beta, and I was wondering what progress anyone has made incorporating it into OSG.

While I was at GDC I did see Khronos make some progress in this area, and I even got to see someone demo a VR display using an HTC Vive. I challenged the group that worked on that and never heard from them again.

I think one of the holdups was that the interactive controls were not settled yet, but from my perspective, they could have worked on the visuals.

I know that if I had the time and resources I would hack this out, but one of the sad drawbacks of having a job is not having the time. It must be that most people still see this technology as a flash in the pan, but I think it's gaining traction.


...

Thank you!

David Glenn.

------------------------
David Glenn
---------------
D Glenn 3D Computer Graphics Entertainment.
www.dglenn.com

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=76327#76327

_______________________________________________
osg-users mailing list
osg-...@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

Garfield Pig

Jun 24, 2019, 11:25:57 PM
to osg-users
Hi David,
There is only OpenVR integrated with OSG, [osgOpenVrViewer](https://github.com/ChrisDenham/osgopenvrviewer), and it's not maintained anymore. It also only supports OSG's single-threaded mode.


------------------ Original Message ------------------
From: "David Glenn"<da...@dglenn.com>;
Sent: Tuesday, June 25, 2019, 6:08 AM
To: "osg-users"<osg-...@lists.openscenegraph.org>;
Subject: [osg-users] Using OpenXR in OSG

Chris Hanson

Jun 25, 2019, 4:42:38 PM
to OpenSceneGraph Users
We've used the osgOpenVRViewer codebase and its friends and cousins.

We could probably adapt it to work with OpenXR if anyone was really motivated and needed it, but so far nobody has come up with a compelling enough need to fund such work.

Maybe if it's important enough to someone, they can do the work themselves and release it, or contract someone else to do so. But so far, nobody has committed to it.
--
Chris 'Xenon' Hanson, omo sanza lettere. Xe...@AlphaPixel.com http://www.alphapixel.com/
Training • Consulting • Contracting
3D • Scene Graphs (Open Scene Graph/OSG) • OpenGL 2 • OpenGL 3 • OpenGL 4 • GLSL • OpenGL ES 1 • OpenGL ES 2 • OpenCL
Legal/IP • Forensics • Imaging • UAVs • GIS • GPS • osgEarth • Terrain • Telemetry • Cryptography • LIDAR • Embedded • Mobile • iPhone/iPad/iOS • Android
@alphapixel facebook.com/alphapixel (775) 623-PIXL [7495]

James Hogan

Jun 11, 2021, 4:37:18 PM
to OpenSceneGraph Users
Hi

On Monday, 24 June 2019 at 23:07:53 UTC+1 davidg...@gmail.com wrote:
> Greetings!
>
> I guess I'm going to gripe on this subject like I did a year ago!
> I know that OpenXR is at least in open beta, and I was wondering what progress anyone has made incorporating it into OSG.
>
> While I was at GDC I did see Khronos make some progress in this area, and I even got to see someone demo a VR display using an HTC Vive. I challenged the group that worked on that and never heard from them again.
>
> I think one of the holdups was that the interactive controls were not settled yet, but from my perspective, they could have worked on the visuals.
>
> I know that if I had the time and resources I would hack this out, but one of the sad drawbacks of having a job is not having the time. It must be that most people still see this technology as a flash in the pan, but I think it's gaining traction.


I had a play with this over the last week or so (in the hopes of eventually getting FlightGear working in VR with OpenXR, since it seems to be the future), and have managed to get something *extremely* minimal and half-broken going: enough to run some of the OSG demos on Linux with SteamVR's OpenXR runtime and an HTC Vive (but not FlightGear yet). In the spirit of releasing early and often, I've pushed a WIP version (see below) in case anybody here is interested in providing general feedback or helping. I'll be able to get back to it in a few weeks, when I'll try to get more of it working and cleaned up:

This is my first dive into OSG (and OpenXR), so I'm definitely open to suggestions for improvements or the best way to integrate it (or whether it should even be integrated into OSG rather than provided as an external plugin or viewer). Currently I think it should be built into OSG, since OSG already has a concept of stereo (which this code doesn't currently interact with), and this approach allows some rudimentary VR support even without the app explicitly supporting it (though clearly app support is preferable, especially for menus and interaction). But I am not very familiar with OSG.

Braindump below for anyone interested in the details.

Cheers
James

It is added as an osgViewer config, OpenXRDisplay, which can be applied automatically to the View by osgViewer::Viewer using the environment variables OSG_VR=1 and OSG_VR_UNITS_PER_METER=whatever. Some C++ abstractions of OpenXR live in src/osgViewer/OpenXR; these are used by src/osgViewer/config/OpenXRDisplay.cpp to set up the OpenXR instance, session, swapchains, and slave cameras for each OpenXR view (e.g. each eye for most HMDs, but it could be one display for handheld, or more for other setups), along with various callbacks for updating them and for draw setup/swapping. OpenXR provides multiple OpenGL textures to write to for each swapchain; we create a swapchain for each view, and an OpenGL framebuffer object for each image texture in each swapchain (I assume it's faster not to rebind the FBO attachments). Callbacks switch between the framebuffer objects (like in osgopenvrviewer), and OpenXR frames are started automatically before first render (or on first slave camera update) and ended in the swap callback. The OpenXR session is created using OpenGL graphics binding info provided via GraphicsWindow::getXrGraphicsBinding(), which is only implemented for X11.

Current issues:
* I haven't mirrored the image to the window yet (there's probably a nice OSG way to blit the view textures to the main camera?). It could perhaps integrate with the DisplaySettings stuff somehow to decide what should be mirrored.
* The application name (used when creating an XR instance, and shown on the HMD while the app is starting) isn't discovered automatically and is still set to "osgplanets". This can probably be discovered automatically in an OSG way from argv[0] with the arguments stuff... haven't quite figured out how yet.
* Performance is currently terrible. CPU usage and frame times don't seem high, so it's blocking excessively somewhere. I briefly tried modding the ViewerBase loop to avoid sleeping there, but haven't got to the bottom of it yet. OpenXR does complain about validation errors on EndFrame, but it's unclear why and whether that's related, and it doesn't stop the images being displayed in the HMD.
* Synchronisation isn't handled between threads, as I don't yet have a good enough grasp of how OSG uses threads to figure out exactly what's needed. Currently threaded rendering is disabled (like in osgopenvrviewer).
* FlightGear: currently it appears to fail due to SteamVR changing the GL context somewhere (a known bug, worked around in the swapchain abstraction), resulting in OSG framebuffer objects being unable to be created. I haven't had much time yet to figure it out. In any case it'll need a fair bit more custom setup eventually.
* There are a couple of places where I expect it may not build without OpenXR. I fully intend to fix that.

Other thoughts about integration into OSG:
* The custom projection matrix calculation based on fov{left,right,top,bottom} could be moved into osg::Matrix to join the other projection matrix functions there.
* Maybe controller inputs could be exposed to OSG applications via osg::Device events, though they're abstracted by OpenXR into app-specific actions.
* I wonder if OpenXR composition layers (which, with extensions, can be composited as cubemaps, quads, partial cylinders like a curved TV, or partial spheres, as well as the usual eye projections) should be represented as some weird windowing system API with GraphicsWindows etc. (though still tied to the underlying window manager one), to allow for easier rendering of 2D interfaces in VR...
* Advancement is ideally driven by the expected display times of individual frames, i.e. the next frame should show the scene at exactly the moment it is expected to be displayed to the user, to avoid jitter and nausea. This may well be more of an app-level concern (it certainly is for FlightGear, which AFAICT currently uses fixed 120Hz simulation steps), but a general VR-specific viewer main loop is probably needed in any case.
* Choosing the pixel format from the list provided by the OpenXR runtime... I haven't looked into how OSG picks one for non-VR yet.
* Each frame, the environment blend mode can be chosen from a set of supported ones, and the frame may be rendered differently depending on it, i.e. opaque (VR), alpha blended with camera (AR), or additive blending with camera/background (for some AR displays). I need to figure out where that should be decided, and whether to expose it in some rendering state somewhere.
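
For anyone following along, the projection-matrix point above is standard math: OpenXR reports a per-eye field of view as four (possibly asymmetric) half-angles, and the off-axis projection falls out of their tangents. A minimal sketch, where the `Fov` struct and `projectionFromFov` are simplified stand-ins (not the real `XrFovf`/SDK API), following the same construction as the OpenXR SDK's xr_linear.h helper but written row-major:

```cpp
#include <array>
#include <cmath>

// Simplified stand-in for OpenXR's XrFovf: four half-angles in radians
// (angleLeft and angleDown are typically negative).
struct Fov { double angleLeft, angleRight, angleUp, angleDown; };

// Build a row-major 4x4 off-axis perspective projection from a per-eye
// field of view, suitable for asymmetric HMD frusta.
std::array<double, 16> projectionFromFov(const Fov &fov, double zNear, double zFar)
{
    const double tanL = std::tan(fov.angleLeft);
    const double tanR = std::tan(fov.angleRight);
    const double tanU = std::tan(fov.angleUp);
    const double tanD = std::tan(fov.angleDown);

    std::array<double, 16> m{};             // all elements zero
    m[0]  = 2.0 / (tanR - tanL);            // x scale
    m[2]  = (tanR + tanL) / (tanR - tanL);  // x offset for asymmetric frusta
    m[5]  = 2.0 / (tanU - tanD);            // y scale
    m[6]  = (tanU + tanD) / (tanU - tanD);  // y offset
    m[10] = -(zFar + zNear) / (zFar - zNear);
    m[11] = -2.0 * zFar * zNear / (zFar - zNear);
    m[14] = -1.0;                           // projective divide by -z
    return m;
}
```

Equivalently, osg::Matrix::makeFrustum could be fed left/right/bottom/top = zNear * tan(angle), which is presumably what moving this into osg::Matrix would amount to.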

Other misc development todo:
* use depth info extension to provide runtime with depth info for better reprojection by runtime in event of missed deadlines
* use visibility mask extension to reduce rendering to part of screen visible to eyes
* haven't looked into multisampling properly yet
* internals of OpenXRDisplay.cpp need splitting out into multiple files

Robert Osfield

Jun 15, 2021, 12:30:29 PM
to OpenSceneGraph Users
Hi James,

I don't have a working VR headset to test against right now, so I can't pitch in with any testing or insights into OpenXR. I did take a quick look over your changes, though, and can share a few thoughts about the general direction.

My recommendation would be to move your code into its own osgXR library, but stick with the osgViewer::ViewConfig approach, as it should make it easier for developers to switch between desktop and VR configurations - a strength of your current implementation.

Creating a separate osgXR library will allow developers to use it against a wide range of OSG versions, so they won't need to do any updates: just link to osgXR, set up the viewer configuration, and away they go. This also decouples the XR functionality from needing to be integrated into mainline OSG and released as part of an official release. I'm spending most of my time on the VSG project these days, so I have put the OSG primarily into maintenance mode; new stable releases are off the table till I get the VSG to 1.0 (hopefully later this year).

Within the VSG community we've been discussing OpenXR integration as well. Again, I see this as the type of functionality that a dedicated vsgXR library would provide, rather than being integrated into the core VSG. While I haven't personally done any work in this direction, I can certainly see that OSG and VSG XR integration could well follow similar approaches, even at the code level, while remaining entirely separate. Potentially both efforts could draw experience and knowledge from each other.

Cheers,
Robert.

Mads Sandvei

Jun 15, 2021, 1:42:30 PM
to OpenSceneGraph Users
Hi

I have some experience integrating OpenXR and OSG from my work on OpenMW-VR.
I'll share some of what I've learned.

 > OSG already has a concept of stereo (which currently this code doesn't interact with)
OSG's multithreaded rendering works better with its own stereo method than with the slave camera method, so I would recommend integrating with that instead.
For example, if a user uses DrawThreadPerContext, the main thread can continue to the update phase of the next frame as soon as the last of the slave cameras has begun its draw traversal.
With two cameras you get two separate traversals, and the main thread may be held up until the first camera is done with its draw, costing performance.

In my work this meant using a single doublewide framebuffer instead of one framebuffer per eye. This is not a problem for OpenXR, as you can create a doublewide swapchain and use the subimage structure to control the region rendered for each eye when composing layers. I haven't looked too closely at whether OSG supports attaching different framebuffers per eye, so that might be a moot point.
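
Concretely, the doublewide layout just means each eye's composition-layer subimage points at half of one shared swapchain image. A tiny sketch with simplified stand-in types (not the real XrRect2Di/XrSwapchainSubImage):

```cpp
#include <cstdint>

// Simplified stand-in for OpenXR's XrRect2Di (the offset/extent part
// of an XrSwapchainSubImage).
struct Rect2Di { int32_t x, y, width, height; };

// For a doublewide swapchain image, both eyes reference the same
// swapchain; only the subimage rectangle differs: eye 0 takes the
// left half, eye 1 the right half.
Rect2Di eyeSubImageRect(int eye, int32_t fullWidth, int32_t fullHeight)
{
    const int32_t eyeWidth = fullWidth / 2;
    return Rect2Di{ eye * eyeWidth, 0, eyeWidth, fullHeight };
}
```

The same rectangles would double as the per-eye camera viewports when rendering into the shared framebuffer.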

It's worth noting that OSG is adding support for the GL_OVR_multiview2 extension: https://groups.google.com/g/osg-users/c/__WujmMK5KE
It would be worth integrating this in the future as this would easily be the fastest stereo method, though I don't have any personal experience with it.


 > Performance is currently terrible. CPU usage and frame times don't seem high, so its blocking excessively somewhere
Comparing your code to mine, the only notable performance issues that are under your control are forcing single-threaded rendering and the choice of stereo method.
The code that is blocking is the xrWaitFrame() method, which is by design. See what I wrote below about nausea. It is okay to delay xrWaitFrame until the first time you need the predictedDisplayTime, but no longer.

Forcing single-threaded rendering is undoubtedly the biggest issue for performance.
I see in your code a comment that the reason is so that no other thread can use the GL context.
I have never touched OpenVR, so it's possible that osgopenvrviewer has a good reason for this concern. With OpenXR I don't think there is any good reason for it.
https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_KHR_opengl_enable
The OpenXR spec explicitly demands that the runtime not use the context you give it except in a specific subset of functions you would only call from the rendering thread, which will have the context every time.
Inspecting your code, the flow of OpenXR calls is very similar to my own, and I have no issues running the threading mode DrawThreadPerContext. But I cannot speak for the other threading modes.


> due to SteamVR changing GL context somewhere (a known bug, worked around in the swapchain abstraction
My understanding is that the OpenXR spec doesn't actually forbid this behaviour. It only limits when the runtime is allowed to use the context you gave it, not whether it binds/unbinds that or other contexts.
This doesn't sound like behaviour anyone would want, though. Maybe an oversight in the OpenXR standard?

The runtime cost of verifying the OpenGL context after each of the relevant functions is low, since you're only doing it a handful of times per frame, so it might be a good idea to just wrap all of the mentioned methods in code that checks and restores the OpenGL context.
Of course, the best outcome would be if all vendors adopted reasonable behaviour.


 > Advancement is ideally driven by the expected display times of individual frames, i.e. the next frame should show the scene at exactly the moment when it is expected to be displayed to the user to avoid jitter and nausea. This may well be more of an app-level concern (certainly is for FlightGear which AFAICT currently uses fixed 120Hz simulation steps), but a general VR-specific viewer main loop is probably needed in any case.
This is the purpose of the xr[Wait,Begin,End]Frame loop, and why you're passing the predictedDisplayTime returned by xrWaitFrame() on to xrEndFrame().
https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#frame-synchronization
In short: you don't have to care, OpenXR is already doing this for you.

Perhaps this is the issue for osgopenvrviewer: that OpenVR doesn't have this, and so isn't automatically synchronized?
My interpretation of xrBeginFrame is that it exists precisely so that the next frame never begins rendering operations before the runtime is done compositing the latest xrEndFrame.
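
To make that ordering concrete, here is a runnable mock of the wait/begin/end pacing loop. Every name here is a simplified stand-in (not the real OpenXR API): waitFrame throttles the app and predicts a display time, and that same time is handed back to endFrame so the compositor can reproject for the moment the frame is actually shown.

```cpp
#include <cstdint>

// Mock of the OpenXR frame-pacing loop; all types and functions are
// simplified stand-ins for xrWaitFrame/xrBeginFrame/xrEndFrame.
struct MockRuntime {
    int64_t displayPeriodNs = 11111111;  // ~90 Hz HMD refresh
    int64_t nextDisplayNs = 0;
    int64_t lastSubmittedNs = -1;

    // xrWaitFrame: blocks until the runtime is ready for another frame,
    // then returns the predicted display time of that frame.
    int64_t waitFrame() { return nextDisplayNs += displayPeriodNs; }

    void beginFrame() {}  // xrBeginFrame: rendering may start now

    // xrEndFrame: the app hands the predicted time back with its layers,
    // and the runtime composites/reprojects for that time.
    void endFrame(int64_t displayTimeNs) { lastSubmittedNs = displayTimeNs; }
};

// The application's render loop: wait -> begin -> locate/render -> end.
int64_t runFrames(MockRuntime &rt, int frames)
{
    int64_t predicted = 0;
    for (int i = 0; i < frames; ++i) {
        predicted = rt.waitFrame();  // throttles; gives predictedDisplayTime
        rt.beginFrame();
        // ... locate views/spaces for `predicted`, then render ...
        rt.endFrame(predicted);      // same timestamp goes back to the runtime
    }
    return predicted;
}
```

The point is that the app never chooses its own frame times: the runtime paces the loop via waitFrame, and the app's only job is to render for, and return, the predicted time.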

 > The only nausea element you have to consider is when to locate an XrSpace.
When locating an XrSpace, what you get is a predicted pose for the time you give it (usually the predictedDisplayTime you got from xrWaitFrame()). The closer you get to the predicted time, the better the prediction will be, so it is encouraged to predict as close to draw time as possible.
By using the update slave callback, I believe you are accomplishing this about as well as can be done.
This is also the motivation for the xrWaitFrame call: it delays your processing so that your poses will be predicted closer to the time they will actually be displayed.

For the same reason (that predictions change in quality over time), it is encouraged to make all predictions at the same time rather than spread out over time.
Action spaces (i.e. motion controllers) have their pose data updated only when you sync actions, so sync these immediately before locating. I deal with this by putting all pose actions in their own action set so they don't get lumped together with other inputs.

 > The OpenXR session is created using OpenGL graphics binding info provided via GraphicsWindow::getXrGraphicsBinding() which is only implemented for X11
Just a heads up: on Windows you will find that some OpenXR runtimes, such as WMR, do not support OpenGL. Not surprising, it being Microsoft's own runtime.
I worked around this by using the WGL extension WGL_NV_DX_interop2 to share DirectX swapchains with OpenGL. I believe this would be the only way to support such runtimes in OSG.

Hope this is of some help!
Mads

Gareth Francis

Jun 15, 2021, 4:05:27 PM
to osg-...@googlegroups.com
Hi all,

I'm the one looking into VSG VR (at least when I get the time; slow progress overall). See https://github.com/geefr/vsg-vr-prototype for progress so far.

I'm only using OpenVR for the moment, but this looks very relevant, with some interesting concepts to consider if my stuff ends up with XR support as well. Being Vulkan will hopefully avoid some of the platform-specific issues.

Not much to add specifically for OpenXR yet, but I'm happy to test or debug if blockers/quirks pop up (Windows or Linux, HTC Vive, nothing fancy).


--
You received this message because you are subscribed to the Google Groups "OpenSceneGraph Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osg-users+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/osg-users/b978786b-1d06-4ade-b111-768d96363187n%40googlegroups.com.


--
----
Gareth Francis
www.gfrancisdev.co.uk

James Hogan

Jun 22, 2021, 5:15:44 PM
to osg-...@googlegroups.com
On Tue, 15 Jun 2021 at 17:30, Robert Osfield <robert....@gmail.com> wrote:
> My recommendation would be to move your code into it's own osgXR library, but stick with the osgViewer::ViewConfig approach as this as it should make it easier for developers to switch between desktop and VR configurations - a strength of your current implementation.
>
> Creating a separate osgXR library will allow developers to use it against a wide range of OSG versions, so won't need to do any updates, just link to osgXR set up the viewer configuration and away they go. This also decouples the XR functionality from needing to be integrated within mainly OSG and being released as part of an official release. I'm spending most of my time on the VSG project these days so have put the OSG primarily in maintenance mode, so new stable releases are off the table till I get the VSG to 1.0 (hopefully later this year.)

Okay, that makes sense. I'll work in that direction and see how it
goes. Thanks for the feedback!

> Within the VSG community we've been discussing OpenXR integration as well, Again I see this is type of functionality that a dedicate vsgXR library would provide, rather than being integrated into the core VSG. While I haven't personally done any work in this direction I can certainly see that OSG and VSG XR integration could well follow similar approaches even at the code level then are entirely separate. Potentially both efforts could draw experience and knowledge form each other.

Yes, I'd be interested in following any equivalent VSG XR library.
I'll keep an eye on the vsg mailing list.

Cheers
--
James Hogan

James Hogan

Jun 22, 2021, 6:10:45 PM
to osg-...@googlegroups.com
Hi,

On Tue, 15 Jun 2021 at 18:42, Mads Sandvei <sand...@gmail.com> wrote:
> I have some experience integrating OpenXR and OSG from my work on OpenMW-VR.
> I'll share some of what i've learned

Ooh, thanks, I'll have a peek at how you've gone about it.

> > OSG already has a concept of stereo (which currently this code doesn't interact with)
> OSG's multithreaded rendering works better with its own stereo method than the slave camera method, so i would recommend integrating with this instead.
> For example, if a user uses DrawThreadPerContext, the main thread can continue to the update phase of the next frame immediately when the last of slave cameras have begun its draw traversals.
> With two cameras you get two separate traversals and the main thread may is held up until the first camera is done with its draw, costing performance.

Ah okay, that's very useful to know. I can see that resulting in
preferential treatment for the stereo view configuration (but that is
the case that matters most to me anyway...).

> In my work this meant using a single doublewide framebuffer instead of one framebuffer per eye. This is not a problem for OpenXR as you can create a doublewide swapchain and use the subimage structure
> to control the regions rendered to each eye when composing layers. I haven't looked to closely at whether OSG supports attaching different framebuffers per eye so that might be a moot point.

Makes sense.

> It's worth noting that OSG is adding support for the GL_OVR_multiview2 extension: https://groups.google.com/g/osg-users/c/__WujmMK5KE
> It would be worth integrating this in the future as this would easily be the fastest stereo method, though I don't have any personal experience with it.

Thanks. Unfortunately it's still wholly in a separate branch of OSG, AFAICT?

> > Performance is currently terrible. CPU usage and frame times don't seem high, so its blocking excessively somewhere
> Comparing your code to mine the only notable performance issues, that are under your control, is forcing single-threaded and the choice of stereo method.
> The code that is blocking is the xrWaitFrame() method, which is by design. See what i wrote below about nausea. It is okay to delay xrWaitFrame until the first time you need the predictedDisplayTime, but not any longer.
>
> Forcing single-threaded is undoubtably the biggest issue for performance.
> I see in your code a comment that the reason is so that no other thread can use the GL context.
> I have never touched openvr, so it's possible to openvrviewer has a good reason for this concern. With OpenXR i don't think there is any good reason for this.

Agreed; it's mostly a hack to avoid having to understand how OSG uses
multithreading straight away.

> https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_KHR_opengl_enable
> The OpenXR spec explicitly demands that the runtime does not use the context you give it except for a specific subset of functions you would only call from the rendering thread, which will have the context every time.

That's probably where my multithreaded OSG was going wrong :-)

> Inspecting your code, the flow of openxr calls is very similar to my own and i have no issues running the threading mode DrawThreadPerContext. But i cannot speak for the other threading modes.
>
> > due to SteamVR changing GL context somewhere (a known bug, worked around in the swapchain abstraction
> My understanding is that the openxr spec doesn't actually forbid this behaviour. It only limits when when the runtime is allowed to use the context you gave it, not whether it binds/unbinds that or other contexts.
> This doesn't sound like behaviour anyone would want, though. Maybe an oversight in the openxr standard?

Certainly annoying behaviour, yes. It should be specified either way
whether that is permitted.
FTR: https://github.com/ValveSoftware/SteamVR-for-Linux/issues/421

> The runtime cost of verifying the OpenGL context after each of the relevant functions is low since you're only doing it a handful of times per frame,
> so it might be a good idea to just wrap all of the mentioned methods in code that checks and restores opengl context.
> Of course, the best would be if all vendors adopted reasonable behaviour.

Yes, that sounds like the best we can do right now, until it becomes
clearer whether the behaviour will be fixed.

> > Advancement is ideally driven by the expected display times of individual frames, i.e. the next frame should show the scene at exactly the moment when it is expected to be displayed to the user to avoid jitter and nausia. This may well be more of an app level concern (certainly is for flightgear which AFAICT currently uses fixed 120Hz simulation steps), but a general VR-specific viewer mainloop is probably needed in any case.
> This is the purpose of the xr[Wait,Begin,End]Frame loop, and why you're passing the predictedDisplayTime returned by xrWaitFrame() on to xrEndFrame().
> https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#frame-synchronization
> In short: you don't have to care, OpenXR is already doing this for you.

Won't that result in fixed-velocity objects not moving smoothly
though, since the objects won't necessarily be in the positions they
should be at the time of display? (I'm not getting high enough frame
rates yet for this to be apparent, so nvidia on Linux not supporting
async reprojection is the much larger cause of nausea! *sigh*)

I suppose in practice, if you do the common sampling of the time delta
each frame, it'd work out pretty steady; it's only with FlightGear's
"perform as many 120Hz timesteps as needed to catch up to the current
time" approach that a 90Hz HMD refresh would result in jittery motion.
I'll worry about it when I observe it!

> Perhaps this is the issue for the openvrviewer, that openvr doesn't have this and so isn't automatically synchronized?
> My interpretation of xrBeginFrame is that it exists precisely so that the next frame never begins rendering operations before the runtime is done compositing the latest xrEndFrame.
>
> The only Nausea element you have to consider is when to locate an XrSpace.
> When locating an XrSpace, what you get is a predicted pose for the time you give it (usually the predictedDisplayTime you got from xrWaitFrame()). The close you get to the predicted time, the better the prediction will be. So it is encouraged to predict as close to draw as possible.
> By using the update slave callback, i believe you are accomplishing this as well as can be.
> This is also the motivation for the xrWaitFrame call, it delays your processing so that your poses will be predicted closer to the time they will actually be displayed.
>
> For the same reason, that predictions change in quality over time, it is encouraged to make all predictions at the same time and not spread out over time.
> Action spaces (i.e. motion controllers) have their pose data updated only when you sync actions, so sync these immediately before locating. I deal with this by putting all pose actions in their own action set so they don't get lumped together with other inputs.

Okay, thanks for the tip.

> > The OpenXR session is created using OpenGL graphics binding info provided via GraphicsWindow::getXrGraphicsBinding() which is only implemented for X11
> Just a heads up. On windows you will find that some OpenXR runtimes, such as WMR, do not support OpenGL. Not surprising, being microsoft's own runtime.
> I worked around this by using the wgl extension WGL_NV_DX_interop2 to share DirectX swapchains with OpenGL. I believe this would be the only way to support such runtimes in OSG.

Yeah, I spotted that mentioned for Blender's OpenXR support, but since
I don't run Windows I'll let somebody else worry about implementing or
testing it :-)
Does that work out fairly straightforward in the end, though? I suppose
it depends on nvidia, which perhaps is why the person who did the
Blender work talked about doing a final DirectX frame copy, which
sounds more heavyweight than sharing swapchains between DX and GL.

> Hope this is of some help!

Definitely! Thanks for the detailed feedback!

Cheers
--
James Hogan

James Hogan

Jun 22, 2021, 6:13:36 PM
to osg-...@googlegroups.com
On Tue, 15 Jun 2021 at 21:05, Gareth Francis <gfranc...@gmail.com> wrote:
> Not much to add specifically for openXR yet, but happy to test or debug if blockers/quirks pop up (windows or linux, htc vive, nothing fancy)

Thanks Gareth!

Cheers
--
James Hogan

Robert Osfield

Jun 23, 2021, 6:29:30 AM
to OpenSceneGraph Users
Hi Guys,

I'm just lurking on this topic, so I can't provide guidance on the low-level stuff at this point, but on the high-level side I can provide a bit of background that might be helpful.

I'd like to chip in that the OVR_multiview functionality integrated into the MultiView branch will be rolled into the next stable release. The MeshShaders branch was made off the MultiView branch, so it can also be used.

My thought is that the MeshShaders branch would be a better basis for an OpenSceneGraph-3.8 stable release than the present master, as master contains a big block of experimental shader composition code that is only 60% complete. The VulkanSceneGraph project ended up kicking off before I completed the work on the experimental shader composition side. That project is now my primary focus, so finding safe paths to progress the OpenSceneGraph without requiring a major chunk of a year to complete is the route to take.

An osgXR library could work against OpenSceneGraph-3.6, and then, if the MultiView/MeshShaders branches are detected, OVR_multiview could be used. OVR_multiview does require custom shaders, but it pretty well doubles performance, so it can be well worth it. In the tests I did when working on OVR_multiview I found that you could essentially render stereo at the same cost as mono - simply because the bottleneck for most OSG/OpenGL applications is the CPU side, so even doubling the vertex load on the GPU doesn't result in a performance hit.

OVR_multiview is also supported in Vulkan, but I haven't implemented it yet in the VulkanSceneGraph. This is less critical, though, as the CPU overheads of the VSG and Vulkan are so much lower that the CPU is far less of a bottleneck; the VSG without multiview will likely still be much faster than the OSG with OVR_multiview.

Cheers,
Robert.

Mads Sandvei

Jun 30, 2021, 4:19:55 PM
to OpenSceneGraph Users
> Won't that result in fixed velocity objects not moving smoothly though, since the objects won't necessarily be in the positions they should at the time of display? (I'm not getting high enough frame rates yet for this to be apparent, so nvidia on linux not supporting async reprojection is the much larger cause of nausia! *sigh*).

> I suppose in practice if you do the common sampling of time delta each frame it'd work out pretty steady, its only with flightgear's "perform so many 120Hz timesteps until we've caught up to the current time" that a 90Hz HMD refresh would result in jittery motion. I'll worry about it when I observe it!
I read your note about FlightGear too quickly and did not fully understand the problem, my apologies! My understanding is that the 120Hz is a default value and can be changed to a multiple of the display refresh rate, which might be a better solution if jitter does become an issue; but I'm not very familiar with FG, so I shouldn't comment too much. Either way, make sure you know it's a real issue before optimizing for it!

> Does that work out fairly straightforward in the end though? I suppose it depends on nvidia, which perhaps is why the person who did the
> Blender work talked about doing a final DirectX frame copy, which sounds more heavyweight than sharing swapchains between DX and GL.
I've never had any big issues with WGL_NV_DX_interop2. I am forced to do a GPU-GPU copy, which is a fairly negligible cost, as the swapchains returned by WMR have attributes that prevent sharing them directly with OpenGL. Instead I share a second set of DirectX textures and then copy those back to the swapchains.
If the Blender developer means doing a GPU-CPU-GPU copy, then that is certainly a lot more heavyweight.
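For reference, the flow Mads describes maps onto the WGL_NV_DX_interop entry points roughly as follows (pseudocode; the wglDX* entry points come from the extension, everything else is made up for illustration):

```
// One-time setup
interopDevice = wglDXOpenDeviceNV(d3dDevice)
sharedTex     = CreateTexture2D(..., shareable)   // NOT the WMR swapchain image
glGenTextures(1, &glTex)
handle = wglDXRegisterObjectNV(interopDevice, sharedTex, glTex,
                               GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV)

// Per frame
wglDXLockObjectsNV(interopDevice, 1, &handle)     // GL may now render into glTex
//   ... render the eye view into glTex via an FBO ...
wglDXUnlockObjectsNV(interopDevice, 1, &handle)   // hand the texture back to D3D
CopyResource(wmrSwapchainImage, sharedTex)        // the GPU-GPU copy
```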

Mads.

James Hogan

unread,
Jul 4, 2021, 8:43:50 AMJul 4
to osg-...@googlegroups.com
On Tue, 22 Jun 2021 at 22:15, James Hogan <ja...@albanarts.com> wrote:
>
> On Tue, 15 Jun 2021 at 17:30, Robert Osfield <robert....@gmail.com> wrote:
> > My recommendation would be to move your code into it's own osgXR library, but stick with the osgViewer::ViewConfig approach as this as it should make it easier for developers to switch between desktop and VR configurations - a strength of your current implementation.
> >
> > Creating a separate osgXR library will allow developers to use it against a wide range of OSG versions, so they won't need to do any updates - just link to osgXR, set up the viewer configuration, and away they go. This also decouples the XR functionality from needing to be integrated into the main OSG and released as part of an official release. I'm spending most of my time on the VSG project these days, so I have put the OSG primarily in maintenance mode; new stable releases are off the table till I get the VSG to 1.0 (hopefully later this year.)
>
> Okay, that makes sense. I'll work in that direction and see how it
> goes. Thanks for the feedback!

FYI, I've separated it out into a separate library, which I'll work on here:
https://github.com/amalon/osgXR

The _visualInfo->visualid and _fbConfig of GraphicsWindowX11 used for
X11 graphics bindings aren't externally accessible, but they don't seem
to be needed in practice.

It also requires explicit integration into an application (since I
don't seem to be able to hook into Viewer creation), which in practice
is going to be needed anyway for proper VR support, so I'll go with
that.

Cheers
--
James Hogan

James Hogan

unread,
Jul 9, 2021, 6:32:16 PMJul 9
to osg-...@googlegroups.com


On 22 June 2021 23:10:32 BST, James Hogan <ja...@albanarts.com> wrote:
>> > Performance is currently terrible. CPU usage and frame times don't
>seem high, so its blocking excessively somewhere

Fortunately, the main performance issue turned out to be my misinterpretation of XrSwapchainSubImage::imageArrayIndex as referring to the index of the image within the swapchain. With that fixed, the xrEndFrame validation error is gone and performance is at least in the right ballpark :)
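For anyone hitting the same thing: in the OpenXR API, XrSwapchainSubImage::imageArrayIndex selects a layer within a single (possibly texture-array) swapchain image, while the index of which image in the swapchain to use each frame is returned by xrAcquireSwapchainImage. A tiny self-contained sketch of the distinction (the struct and function here are simplified stand-ins, not the real openxr.h definitions):

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-in for OpenXR's XrSwapchainSubImage (see openxr.h);
// only the field under discussion is modelled.
struct SubImageLite {
    uint32_t imageArrayIndex;  // layer within one (possibly layered) image
};

// imageArrayIndex does NOT select which image of the swapchain to render to -
// that comes from xrAcquireSwapchainImage each frame. It selects the
// texture-array layer inside a single image. So:
//   - one non-array swapchain per eye -> imageArrayIndex is always 0
//   - one texture-array swapchain     -> imageArrayIndex is the eye (0 or 1)
SubImageLite subImageForEye(bool arraySwapchain, uint32_t eye) {
    SubImageLite sub{};
    sub.imageArrayIndex = arraySwapchain ? eye : 0u;
    return sub;
}
```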

Cheers
James

James Hogan

unread,
Jul 11, 2021, 2:43:52 AMJul 11
to Mads Sandvei, OpenSceneGraph Users


Hi Mads

On 15 June 2021 18:42:29 BST, Mads Sandvei <sand...@gmail.com> wrote:
>> OSG already has a concept of stereo (which currently this code doesn't interact with)
>
> OSG's multithreaded rendering works better with its own stereo method than the slave camera method, so I would recommend integrating with this instead.
> For example, if a user uses DrawThreadPerContext, the main thread can continue to the update phase of the next frame immediately when the last of the slave cameras has begun its draw traversal.
> With two cameras you get two separate traversals, and the main thread is held up until the first camera is done with its draw, costing performance.

I'm not sure I follow this. Doesn't OSG's own stereo method use slave cameras too, or does it somehow avoid multiple cull traversals?

Cheers
James

Robert Osfield

unread,
Jul 11, 2021, 4:31:19 AMJul 11
to OpenSceneGraph Users
Hi James,

On Sun, 11 Jul 2021 at 07:43, James Hogan <ja...@albanarts.com> wrote:
> I'm not sure I follow this. Doesn't OSG's own stereo method use slave cameras too, or does it somehow avoid multiple cull traversals?

The built-in stereo is one of the early parts of the OSG, so 20+ years old, and can be found in osgUtil::SceneView.  Internally it manages two cull and draw traversals, but isn't thread-aware itself, so it just calls them in series.

Most modern OSG applications will use osgViewer, which was introduced in OSG-2.x. This has the capability of doing stereo at the viewer level, but it's up to the application to configure the master/slave cameras to create the stereo.  The osgViewer still uses osgUtil::SceneView under the hood, so it inherits its stereo capabilities.  Personally, I'd prefer to just implement high-level stereo using master/slave Cameras in osgViewer, and I had a plan to steadily replace SceneView usage, but never got there before starting the VSG project.

The OVR_multiview functionality in the MultiView branch uses osgViewer-level setup of stereo, but at the application level it's one cull and one draw traversal - the stereo happens entirely on the GPU.

Cheers,
Robert.

James Hogan

unread,
Jul 17, 2021, 6:12:12 PMJul 17
to OpenSceneGraph Users
Hi Robert,

On Sun, 11 Jul 2021 at 09:31, Robert Osfield <robert....@gmail.com> wrote:
> On Sun, 11 Jul 2021 at 07:43, James Hogan <ja...@albanarts.com> wrote:
>> I'm not sure I follow this. Doesn't OSG's own stereo method use slave cameras too, or does it somehow avoid multiple cull traversals?
>
> The built-in stereo is one of the early parts of the OSG, so 20+ years old, and can be found in osgUtil::SceneView. Internally it manages two cull and draw traversals, but isn't thread-aware itself, so it just calls them in series.

Thanks, the SceneView code is what I was looking for, and it all kind
of makes sense in my head now, I think. I've added a mode to hook into
SceneView's stereo matrices callbacks and got something rough going,
which automatically looks for existing slave cameras using
FRAME_BUFFER, and it even works(ish) with flightgear :-).
--
James Hogan