Simultaneous Multi-Projection with one graphics card


Hyper Sonic

May 7, 2016, 8:36:38 PM
to liquid-galaxy
"Simultaneous Multi-Projection is a new rendering pipeline for Pascal cards that allows them to render 16 independent "viewpoints" in a single rendering pass. In a regular graphics card, a single viewpoint—i.e. what a user sees on a monitor—is rendered in one pass. That's fine for most applications, but problems occur with multi-monitor setups and VR. In a triple monitor setup where a user curves the monitors, the graphics card can only render a single viewpoint, where it assumes all the monitors are arranged in a straight line, resulting in the images on the left and right monitors looking warped."

"Traditionally, this problem is solved by using three separate graphics cards in supported games, but with Multi-Projection, the single GPU can render three different viewpoints, with two of them correcting the distortion. The company uses a similar technique to speed up VR rendering, allowing for a stereo image to be rendered in a single pass, dramatically improving the frame rate—a particularly big problem to solve when VR needs to run at a hefty 90 FPS."
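The per-screen correction the article describes amounts to giving each monitor its own view direction instead of stretching one wide frustum across all three. A minimal sketch of that idea (the 45-degree screen angle and the helper names are made up for illustration, not Nvidia's actual API):

```python
import math

def yaw_matrix(deg):
    """3x3 rotation about the vertical (Y) axis by deg degrees."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# A hypothetical surround setup: side screens angled 45 degrees inward.
# Instead of one wide frustum, render one viewpoint per screen, each
# looking along the axis perpendicular to that screen.
forward = [0.0, 0.0, -1.0]          # camera looks down -Z
screen_angles = [-45.0, 0.0, 45.0]  # left, center, right monitor
views = [apply(yaw_matrix(a), forward) for a in screen_angles]
for a, v in zip(screen_angles, views):
    print(a, [round(x, 3) for x in v])
```

With SMP the GPU evaluates geometry once and reprojects it into each of these viewpoints in the same pass, rather than running three full render passes.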

16 independent viewpoints — now how do you hook up 16 displays to one graphics card!?

Nvidia published a video about this today

Andrew Leahy

May 8, 2016, 3:08:50 AM
to liquid-galaxy
Looks like I'm going to be ordering some new video cards!

Anyone with a Liquid Galaxy has a physical setup which is IDEAL for this technology, i.e. we should start playing with this stuff now.

hmmm.. someone close to Santa Clara should approach NVIDIA and see if they can get us a good deal on some hardware upgrades :-)

It's going to be a while before multi-projection is implemented in software though. It's not a "magic fix" for most existing applications. Most 3D engines clip or dump the content that is 'outside' a traditional 2D camera frustum, so a "driver" can't magically re-create the correct projection for the side scenes.
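To illustrate why a driver can't recover the side views: engines typically cull anything outside the camera's frustum before it ever reaches the GPU. A stripped-down sketch of that test (symmetric horizontal planes only; the function name and angles are illustrative, not any engine's real API):

```python
import math

def outside_frustum(point, hfov_deg):
    """True if a view-space point lies outside a symmetric horizontal
    frustum (camera at origin looking down -Z).  The top/bottom and
    near/far planes are omitted for brevity."""
    x, _, z = point
    if z >= 0:            # behind the camera: always culled
        return True
    half = math.radians(hfov_deg / 2.0)
    # Width of the frustum at depth -z; positive x within it = inside.
    limit = -z * math.tan(half)
    return abs(x) > limit

# With a 90-degree camera, geometry 60+ degrees off-axis is culled --
# exactly the content a wraparound side screen would need to show.
print(outside_frustum((0.5, 0.0, -1.0), 90.0))   # well inside the view
print(outside_frustum((2.0, 0.0, -1.0), 90.0))   # off to the side, culled
```

Once geometry is discarded by a test like this, no amount of reprojection in the driver can bring it back — the engine itself has to widen (or multiply) its frustums.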

Google Earth is a good example -- to keep the GPU memory demands down it quickly dumps geometry/texture outside the normal view port.

But going forward it should be much easier to get wraparound content onto Liquid Galaxy. Exciting times to have an interest in multi-screen displays!

Andrew | Western Sydney University


Hyper Sonic

May 8, 2016, 2:17:51 PM
to liquid-galaxy
"Most 3d engines clip or dump the content that is 'outside' a traditional 2D camera frustum"
With some re-programming it shouldn't be difficult to disable frustum culling, at least when you pause the game/simulation to take a 360-degree screenshot (or drive a surround display), since at that point there is no single frustum anymore, at least not for culling purposes. In a free camera mode, surfaces that weren't facing you could suddenly be facing you, and surfaces that were behind something might no longer be occluded. I wonder how those are handled; perhaps those surfaces are still not shown in free camera mode (or perhaps you can roam the entire map, depending on how it's implemented.)

For applications that stream geometry and textures like Cesium and Google Earth you'd probably have to wait for a few seconds for the resources outside of the frustum to load when you go into a '360 screenshot' mode.

Cesium seems to retain some resources behind the camera (at least for a few seconds), unlike Google Earth — though this observation is from one or two years ago, so it may have changed.

I wonder if this stuff will be available for WebGL applications, I'd love to see this in Cesium!

Some random thoughts:
- I wonder how curved displays are handled; the driver would have to know the radius and angle of the curve for proper rendering (cylindrical and spherical curves).
- For now it's assumed that the viewer is centered in front of a screen. I wonder when video drivers will also account for off-center viewing, though this would require head position tracking.
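Off-center viewing is usually handled with an asymmetric ("off-axis") projection: the frustum is skewed so its apex sits at the tracked eye position while the near-plane window still maps to the physical screen. A rough sketch under assumed units (metres; the function name and parameters are made up, but the output matches the left/right/bottom/top convention of a glFrustum-style projection):

```python
def off_axis_frustum(eye, screen_w, screen_h, near, screen_dist):
    """Asymmetric frustum bounds at the near plane for a viewer whose
    eye is offset (ex, ey) from the screen centre, for a physical
    screen of screen_w x screen_h at screen_dist from the eye.
    Returns (left, right, bottom, top) for a glFrustum-style matrix."""
    ex, ey = eye
    # Similar triangles: scale screen-plane extents back to the near plane.
    scale = near / screen_dist
    left   = (-screen_w / 2.0 - ex) * scale
    right  = ( screen_w / 2.0 - ex) * scale
    bottom = (-screen_h / 2.0 - ey) * scale
    top    = ( screen_h / 2.0 - ey) * scale
    return left, right, bottom, top

# Centred viewer: the frustum is symmetric, as drivers assume today.
print(off_axis_frustum((0.0, 0.0), 1.0, 0.6, 0.1, 0.7))
# Viewer shifted 0.2 m to the right: the frustum skews to compensate.
print(off_axis_frustum((0.2, 0.0), 1.0, 0.6, 0.1, 0.7))
```

Feed the head tracker's eye position in each frame and the rendered image stays geometrically correct from wherever the viewer stands — the same trick CAVE installations use.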

Jason Holt

May 8, 2016, 2:54:58 PM
to liquid...@googlegroups.com

> - For now it's assumed that the viewer is centered in front of a screen. I wonder when video drivers will also account for off-center viewing, though this would require head position tracking.

Yeah, I think there's a lot of neat potential in view dependent rendering.

sent from my android

Hyper Sonic

May 8, 2016, 5:07:59 PM
to liquid-galaxy
The HTC Vive already does head-position tracking via 'Lighthouse', though with an HMD your head never moves in relation to the display. It would be nice to use Valve's Lighthouse or Oculus's Constellation head-tracking systems on non-HMD displays.

Stereoscopic (via passive 3D) head-position-tracked, non-head-mounted displays might end up looking like genuine holograms! Though it would only work for one person at a time (just switch the head-position tracking on/off.)