I'm working on the 'multiple projectors on a budget' problem (one image blended across two projectors). I have some workable solutions and am pursuing more. But that's just the context of the question. Here's the machine:
Mac Pro - Quad core
5GB RAM
2 Nvidia GeForce GT 120 cards, 512MB each
One projector is connected to each Nvidia card on the DVI ports
One card also drives the system monitor from its Mini DisplayPort
This beefy little machine is pretty 'clean' (not a lot of extraneous resident stuff installed - I like 'em lean and mean).
Basically, this works beautifully except for one specific situation I ran into. I have a JPEG image file (299KB) of an 8:3 aspect ratio white block that I was using to test color and brightness between the projectors. I used custom geometry to map it across both - no problem.
Then I laid a two-projector-wide animation of a couple of leaves dancing on top of it (Apple ProRes 4444, 30 fps, Millions of colors - 8.1MB). This was done in 2 cues, one for each projector, using custom geometry to overlap them and smooth out the break between the projected images. Performance went right to… heck: choppy movement, LOTS of lost frames. One might think my animation is simply too big, EXCEPT that when I play it without the single-cue white block underneath, it performs flawlessly.
Curious…
Playing around further, I discovered that if I broke the white block into 2 cues instead of one, the problems went away and my animation operated smoothly. I dumped another animation on top of that and everything was still smooth despite having 6 cues active simultaneously.
SOOOO… the question I have is, what's really going on here? Is QLab getting bogged down by having to split the JPEG across screens? I'm not hugely concerned - we're a couple of months out from tech and I don't have to have that static image under everything - but it did make me ponder… Any thoughts? And any ideas about better ways to do this are always appreciated. I can't afford the pretty image-blending projectors, but I'm open to other ideas.
Thanks in advance!
Mike Post
________________________________________________________
Spanning multiple graphics cards means every frame has to be written to a buffer in main memory first and then copied out to each graphics card, and the CPU is probably already busy with other important work while doing its best to optimize your signal flow. That's just the way the data has to move when a single surface straddles two cards. When you split it into multiple cues, one per card, each cue can render straight to its own GPU and skip that problematic routing, which is why the problem went away. It sounds like you saw this only with the JPEG because only the JPEG was routed that way; any cue that spans multiple graphics cards should show the same issue. To test this, try hooking both projectors up to the same card and spanning them both with that same JPEG. The problem shouldn't present itself.
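If it helps to picture what that extra work looks like, here's a rough sketch in plain OpenGL. This is not QLab's actual code, and the function names are made up for illustration; the point is just that the spanned path has to re-upload the same CPU-side pixel buffer into each card's GL context on every frame, while the single-card path only draws a texture that's already sitting in that card's VRAM.

#include <OpenGL/gl.h>

/* Single-card path: the frame already lives as a texture in that card's
 * VRAM, so drawing it is just a texture bind and a quad. */
void draw_on_one_card(GLuint vram_texture)
{
    glBindTexture(GL_TEXTURE_2D, vram_texture);
    /* ...draw a textured quad covering the surface... */
}

/* Spanned path: the frame sits in a CPU-side pixel buffer and has to be
 * uploaded again into each card's GL context, every single frame.
 * (Assume the caller makes card A's / card B's context current in turn.) */
void upload_spanned_frame(const void *cpu_pixels, GLsizei w, GLsizei h,
                          GLuint tex_card_a, GLuint tex_card_b)
{
    /* card A's context current */
    glBindTexture(GL_TEXTURE_2D, tex_card_a);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, cpu_pixels);

    /* card B's context current: same pixels, uploaded a second time */
    glBindTexture(GL_TEXTURE_2D, tex_card_b);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, cpu_pixels);
}

Doing that second upload once for a static image is nothing; doing it thirty times a second on top of everything else is where the frames start to drop.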
I love that you have given a completely absurd, but perfect, illustration of that issue here. It's nothing QLab can fix, since it's about what happens to the data outside of QLab, and has more to do with the hardware. But it's wonderfully illustrative.
Thanks,
luckydave
Yup.
Just to add to Luckydave's previous answer, the technical details are as follows:
When a video cue is assigned to only a single screen, QLab can tell QuickTime to render video frames straight into an OpenGL buffer, which lives in video RAM on the card for that screen.
When a video cue is assigned to multiple screens, the frames are first rendered into a pixel buffer in normal RAM, which then must be moved around and copied to each of the screens that needs it.
There are pieces of this process which I can't claim to fully understand, because I can't see exactly what QuickTime is actually doing. But that's the gist of it.
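If it helps to picture the difference, here's a small sketch of those two setups using the old QuickTime visual-context API. It's just an illustration of the idea, not QLab's actual code, and error handling is omitted.

#include <QuickTime/QuickTime.h>
#include <OpenGL/OpenGL.h>

/* Single-screen case: frames are decoded straight into OpenGL textures in
 * the VRAM of the card whose GL context is passed in. */
QTVisualContextRef make_gpu_visual_context(CGLContextObj cgl,
                                           CGLPixelFormatObj pixelFormat)
{
    QTVisualContextRef ctx = NULL;
    QTOpenGLTextureContextCreate(kCFAllocatorDefault, cgl, pixelFormat,
                                 NULL, &ctx);
    return ctx;  /* attach to the movie with SetMovieVisualContext() */
}

/* Multi-screen case: frames are decoded into a pixel buffer in main RAM,
 * and each card that shows the cue has to get its own copy afterward. */
QTVisualContextRef make_cpu_visual_context(void)
{
    QTVisualContextRef ctx = NULL;
    QTPixelBufferContextCreate(kCFAllocatorDefault, NULL, &ctx);
    return ctx;
}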
Best,
Chris