Nudging fwd, or scrubbing Group timelines?


jean poole

Dec 16, 2015, 9:25:05 AM
to QLab

As someone new to QLab (but very familiar with Millumin, VDMX, Resolume, etc.) - 

I would love some advice on some basic workflow problems I’m having… 


Am figuring I must be missing something, and would love any pointers…

(need to finalise a solution within the next 48 hours)


What I can’t find / work out in QLab

(having searched the interface, manual, and site at length)


- a simple ability to nudge/drag a GROUP timeline forward… 

(for example: an actor speaks faster than usual, or omits something, and a grouped (projection-mapped, multi-surface) collection of video now runs behind where the actor is, and needs to be nudged forward slightly… or sometimes slightly backwards…)


…or have a button that allows jumping forward by 1, 2, 3, 4, 5, 10 or any arbitrary number of seconds…

…or to be able to set cue points within a GROUP timeline… 

e.g. a complex 5-minute piece plays, with a range of cue points at 0:30, 0:47, 0:53, 1:15, 2:35, etc., 

so the operator can jump ahead to a key grouped-video moment when the actor finally hits a cue...
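(For concreteness, the kind of "jump forward" button I'm imagining could presumably be driven over OSC - here's a rough stdlib-only Python sketch, assuming (and I haven't verified this) that QLab's OSC dictionary lets a cue be loaded to an arbitrary time with a /cue/{number}/loadAt message; the helper names are made up:)

```python
import socket
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message (address plus float/int args)."""
    def pad(b):
        # OSC strings are null-terminated, then padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags, payload = tags + "f", payload + struct.pack(">f", a)
        else:
            tags, payload = tags + "i", payload + struct.pack(">i", a)
    return pad(address.encode()) + pad(tags.encode()) + payload

def jump_to(cue_number, seconds, host="127.0.0.1", port=53000):
    """Hypothetical 'jump' button: re-point a cue at `seconds`, then play it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message(f"/cue/{cue_number}/loadAt", float(seconds)), (host, port))
    sock.sendto(osc_message(f"/cue/{cue_number}/start"), (host, port))
    sock.close()
```

(A row of such buttons - +1 s, +5 s, +10 s - would just call jump_to with different offsets.)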


My Mapping + QLab process so far…

- I’m involved with creating video for a 1 hour theatre show that has a full hour of video (8 video files / chapters )…

- The theatre company has specified it wants QLab used for the show...

- The video is projected onto a split-screen theatre set backdrop (which is being covered using 4 x QLab video surfaces) 

- I have made 8 video chapter files, each 1920x1080; these include a variety of built-in cinematic cross-fades between animated collages of nostalgic media from the actors’ family histories… 

- my mapping process: map a template image - which meant duplicating the image 4 times, then selecting a portion of that template image for each of the 4 video surfaces. I’ve then used shift+command+C / V to copy and paste the template mapping settings onto each of the 32 video files for the show (8 chapter videos, duplicated 4 times each, to align to the 4 different surfaces)


Project Oddities - 

- For this project, the video needs to stay time-aligned with non-professional actors who vary their talking speeds and occasionally skip lines…

 

- I am setting this up during a short rehearsal period, but another person will be taking care of the mapping at an interstate venue, and I have been asked to make this as minimal as possible for them… 

(At the moment we have 8 chapter videos, which need to be duplicated 4 times, so we have to copy and paste the mapping 32 times from the template. If we were to break each chapter into 5 parts, this’d mean copying the settings 160 times… and it would only add a degree of timing adjustment - we still couldn’t nudge forward to quickly improve sync between video and actor, only re-align every 1/5th of a chapter…) 


Close, but not quite??

- I can see the ‘Active Cues’ list on the far right of the interface, and can adjust the timelines of individual videos that are part of a group - but not the timeline of the group itself?


- I can pause a group - which is great if an actor slows down suddenly and we’re on a still… and then make the group play again shortly after…

but if an actor speeds up, or mistakenly makes his section shorter… I can’t find a way of keeping all the projection-mapped surfaces in sync and moving forward - only dragging their timelines individually… which isn’t really workable.


- Feasibly the clips could be cut up into 10-second sections, to give a more granular capacity to jump around - but this’d mean 6x60x4=1440 video files that the mapping would need to be copied to… 


Possible Options?

- Ask the theatre company to buy something like Resolume Avenue or VDMX - to allow access to timelines (+ easy timeline scrubbing) and send the video via Syphon to QLab for mapping. Or the more expensive Millumin, to do both timeline scrubbing + mapping. 


- Break the chapters up into the smallest number of meaningful chunks, and ‘suck it up’ that much more map-copying will have to be done at the next venue - and that we can’t nudge timelines forward, only re-align time-sync once per chunk of a chapter… 


ANY ADVICE GREATLY APPRECIATED…. 


(Especially if I’ve been tunnel-blind to something that’s been in front of me the whole time!! ) 


cheers from Melbourne, Australia




micpool

Dec 16, 2015, 12:26:52 PM
to QLab
My 2 cents below

Mic


On Wednesday, December 16, 2015 at 2:25:05 PM UTC, jean poole wrote:



- a simple ability to nudge/drag a GROUP timeline forward… 

(for example: an actor speaks faster than usual, or omits something, and a grouped (projection-mapped, multi-surface) collection of video now runs behind where the actor is, and needs to be nudged forward slightly… or sometimes slightly backwards…)


…or have a button that allows jumping forward by 1, 2, 3, 4, 5, 10 or any arbitrary number of seconds…

…or to be able to set cue points within a GROUP timeline… 

e.g. a complex 5-minute piece plays, with a range of cue points at 0:30, 0:47, 0:53, 1:15, 2:35, etc., 

so the operator can jump ahead to a key grouped-video moment when the actor finally hits a cue...


The aim of most video sequences would be to present something to the audience that appears as one seamless video but is actually composed of many individually cued chunks. The method you propose - skipping around a timeline in jumps of anything up to 10 seconds - wouldn't seem to be a good way of achieving this without glaring discontinuities.

You can slice video, loop it, vamp it, or hold on the end, so it should be possible that every time an actor says or does something where a change in the video is needed (commonly called a cue), the operator presses a button at that point and the next section of video is triggered. The trick is to have the right number of cues.

Sometimes you have to resort to other techniques. For instance, let's say that at an arbitrary cue in a 1-minute video you want the video to fade to black and white with television interference. Although a straight fade to black and white could be achieved by fading the video effect in a fade cue, the television interference can't. So what you could do in this instance is run 2 copies of the video in a group cue, one straight and one with the effects, and crossfade between them at the cue point.
 


My Mapping + QLab process so far…

- I’m involved with creating video for a 1-hour theatre show that has a full hour of video (8 video files / chapters)…

- The theatre company has specified it wants QLab used for the show...

- The video is projected onto a split-screen theatre set backdrop (which is being covered using 4 x QLab video surfaces) 

- I have made 8 video chapter files, each 1920x1080; these include a variety of built-in cinematic cross-fades between animated collages of nostalgic media from the actors’ family histories… 

- my mapping process: map a template image - which meant duplicating the image 4 times, then selecting a portion of that template image for each of the 4 video surfaces. I’ve then used shift+command+C / V to copy and paste the template mapping settings onto each of the 32 video files for the show (8 chapter videos, duplicated 4 times each, to align to the 4 different surfaces)



Unless I am misunderstanding what you are doing, you only need one surface. You then assign four screens (your 4 projectors) to this surface and move the screens to cover the area of the surface you want to send to each projector. You may want to make your master videos larger than standard HD. For instance, if you were going to send the video on this surface to four 1024x768 projectors, you might use a surface and source video of 2048x1536.

If you require greater control many people find the best way of dealing with this is to output a single syphon feed from QLab into MadMapper.

 

Andy Dolph

Dec 16, 2015, 3:34:47 PM
to ql...@googlegroups.com
I agree with everything Mic said below – but I want to highlight something here. The language you used, "group timeline", suggests to me that you are thinking about QLab the way you would think of a traditional video program – the idea that everything is locked to a timeline.

To me, the whole reason I use QLab for projection design is that it frees me from that timeline. Everything happens in cues – those cues can run for as long or as short a period of time as I need them to, and then I trigger the next one.

Because video has its own playback speed, there are times when I need to produce content with a specific length in mind – and I try very hard to do that as flexibly as possible. Ideally I've created a situation where, if the performers are faster than I expected, I can jump to the next cue in a way that feels natural to the audience, even if it isn't as perfect as I would like it to be. Likewise, if the performers are slower than I'm expecting, the cue ends in a way that is a natural hold. It may be that the motion gently comes to a stop and it ends on a still frame which can hold forever, or it may go into a loop which will loop forever, or it may be that I have created enough video that it's not reasonable to imagine getting to the end before I trigger the next cue. In that last scenario, I generally make 5 to 10 minutes more video than I expect I could use even with the slowest pacing of the performance that I can imagine.

So in effect what I'm doing is "editing" the video live, in real time during every performance. The order of the clips is selected ahead of time, as are the transitions, but the actual edit points are triggered live during the show.

To me this is the single greatest value that QLab provides.

*Looks down, wonders where the soapbox came from, steps off of it, wanders away*


--
--
Change your preferences or unsubscribe here:
http://groups.google.com/group/qlab
 
Follow Figure 53 on Twitter: http://twitter.com/Figure53
---
You received this message because you are subscribed to the Google Groups "QLab" group.
To unsubscribe from this group and stop receiving emails from it, send an email to qlab+uns...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/qlab/a1a3f98e-3505-46f2-ab5f-d0dfd023a23e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

jean poole

Dec 16, 2015, 5:53:35 PM
to QLab
Thanks for the reply Mic! 

The aim of most video sequences would be to present something to the audience that appears as one seamless video but is actually composed of many individually cued chunks. The method you propose, skipping around a timeline in anything up to a 10 sec jump wouldn't seem to be a good way of achieving this without glaring discontinuities.

I'm used to using apps like VDMX, Millumin + Resolume for this - which allow both cue triggers *and* scrubbing through a timeline...

While I'm hoping there'll be minimal need for adjusting the video as the show matures, I suspect it will still need a little... 
And with the type of material we have (composed photo collages more so than dynamic video action shots), scrubbing slightly forward to get to the next section would work fine...
And it does work fine within QLab - for one screen at a time, in the Active Cues list - but I'm looking for a group solution, to avoid having to go super-granular on the number of video files needed for the 1-hour show...

You can slice video, loop it, vamp it, or hold on the end, so it should be possible that every time an actor says or does something where a change in the video is needed (commonly called a cue), the operator presses a button at that point and the next section of video is triggered. The trick is to have the right number of cues.

Thanks - yeah, all my reading suggests the only way QLab can deal with needing to go forward slightly is having lots and lots of files... which, across a one-hour show, will unfortunately be trouble for the operator setting up in Sydney..
 
Sometimes you have to resort to other techniques. For instance, let's say that at an arbitrary cue in a 1-minute video you want the video to fade to black and white with television interference. Although a straight fade to black and white could be achieved by fading the video effect in a fade cue, the television interference can't. So what you could do in this instance is run 2 copies of the video in a group cue, one straight and one with the effects, and crossfade between them at the cue point.

That sounds useful for some things - but yeah, trying to get away from grouped files at the moment if possible, as there seems to be no capacity to access group timelines... 
Our problems would be solved if all 4 screens could be hit with one video file (we could then do slight scrubs with the Active Cues list timeline)

Unless I am misunderstanding what you are doing, you only need one surface. You then assign four screens (your 4 projectors) to this surface and move the screens to cover the area of the surface you want to send to each projector. You may want to make your master videos larger than standard HD. For instance, if you were going to send the video on this surface to four 1024x768 projectors, you might use a surface and source video of 2048x1536.

We are using one projector to cast light across four separate screens at different angles.

Bad phone rehearsal photo of stage set-up: 
(There are 4 screens: largest on the left, smallest in front (table-front), and - hard to see - a wide strip that connects with the main screen, and a tall strip about a metre behind that) 


 
If you require greater control many people find the best way of dealing with this is to output a single syphon feed from QLab into MadMapper.

QLab's mapping for these planes is fine - it's only the time control (or the number of files we'd otherwise have to make) that is a potential problem... 


Thanks again for your thoughts though - much appreciated; 
am still digesting the differences between QLab and the likes of VDMX, Millumin + Resolume...


micpool

Dec 16, 2015, 6:02:45 PM
to QLab


On Wednesday, December 16, 2015 at 10:53:35 PM UTC, jean poole wrote:
Thanks for the reply Mic! 

We are using one projector to cast light across four separate screens at different angles.

Bad phone rehearsal photo of stage set-up: 
(There are 4 screens: largest on the left, smallest in front (table-front), and - hard to see - a wide strip that connects with the main screen, and a tall strip about a metre behind that) 


 
If you require greater control many people find the best way of dealing with this is to output a single syphon feed from QLab into MadMapper.

QLab's mapping for these planes is fine - it's only the time control (or the number of files we'd otherwise have to make) that is a potential problem... 


If you are using one projector with four surfaces (one for each physical screen), then Syphoning to MadMapper may solve your problem. It would enable you to prepare your 4 screens of video, arranged however you like, in a single video file. MadMapper would then take 4 regions from this file, and you could map each of these regions to the physical position of its screen within the output projector beam.

At any one time you would only have a single video file running in QLab, and when you move venues you only have to move 16 corner points in MadMapper and the job is done.

Best Regards

Mic



jean poole

Dec 16, 2015, 6:10:35 PM
to QLab
Thanks for the reply Andy! 
 
I agree with everything Mic said below – but I want to highlight something here. The language you used, "group timeline", suggests to me that you are thinking about QLab the way you would think of a traditional video program – the idea that everything is locked to a timeline. 

Well - I'm more comparing it to VDMX, Millumin + Resolume (rather than video-editing software) - which let you have as many cues and samples as you want, or multiple cue points within any single video file, and group timelines for video files nested together (Millumin, in that case)... 

To me, the whole reason I use QLab for projection design is that it frees me from that timeline. Everything happens in cues – those cues can run for as long or as short a period of time as I need them to, and then I trigger the next one.

I love real-time video too... but for me that has always meant easy, fine-toothed access to a timeline - and video playback speed... *and* as many cues, x-fade layers and cutaways as desired... 
So I'm finding the lack of a group timeline in QLab a limitation, rather than a giver of freedom... 

I definitely don't know enough about QLab's cue capacities yet though.. 

Because video has its own playback speed, there are times when I need to produce content with a specific length in mind – I try very hard to do that as flexibly as possible – ideally I've created a situation that if the performers are faster than I expected, I can jump to the next cue in a way that feels natural to the audience even if it isn't as perfect as I would like it to be.
 
For a dialogue-heavy one-hour show with non-professional actors, how many clips would you make - or what would be your preferred minimum length? 
(Currently we're serving up 8 chapters of video - and I'm prepared to split each of those in half... or a bit more, but am reluctant to split much further than that, as it will increase the workload for the next operator in Sydney, who has many other things to do...) 

And what is a better way of shifting gracefully from one video file/cue to the next - if, say, an actor makes a mistake - rather than going to black and then triggering go for the next cue?

Likewise, if the performers are slower than I'm expecting, the cue ends in a way that is a natural hold. It may be that the motion gently comes to a stop and it ends on a still frame which can hold forever, or it may go into a loop which will loop forever, or it may be that I have created enough video that it's not reasonable to imagine getting to the end before I trigger the next cue. In that last scenario, I generally make 5 to 10 minutes more video than I expect I could use even with the slowest pacing of the performance that I can imagine.

This all makes sense... though in the short term I have a looming deadline, a theatre request to minimise the number of video files, and instructions to write for troubleshooting when actors are out of sync... 
 
So in effect what I'm doing is "editing" the video live, in real time during every performance. The order of the clips is selected ahead of time, as are the transitions, but the actual edit points are triggered live during the show.

*real-time video hi-5s from melbourne* 

To me this is the single greatest value that Qlab provides.
*Looks down, wonders where the soapbox came from, steps off of it, wanders away*

QLab pride noted ;) (someone give Andy a free licence extension..) 

Thanks again... 
 

jean poole

Dec 16, 2015, 6:17:46 PM
to QLab
Thanks Mic - 
I think that solution would work great - will see later today how the theatre group feels about 300 euros for MadMapper...

It just feels weird though - to be buying MadMapper when QLab's mapping works so well for planar screens like this... 

...effectively to solve the problem of not having good time control over grouped cues... 

I'm sure I'm not understanding QLab properly yet - but I'm so surprised at this lack of access to time controls... or, say, playback speed...

Thanks for your suggestion though, super helpful!

micpool

Dec 16, 2015, 8:30:31 PM
to QLab
Just one question.

Is your lighting and sound being programmed the same way?

Is the lighting operator putting all his cues in as timed waits, with just a single Go button push at the top of the show, and then hitting pause or speeding up cue playbacks to try and make the cues happen at the right time? Probably not.

And the sound: is it a 1-hour file that is sped up, slowed down and paused to try to make it match the actors' speed? Again, probably not. I would imagine that both lighting and sound are designed with a lot of cues that happen at precise cue points in the script.

Why would the video need to work in a completely different way? 

I find it difficult to understand how it is more difficult for an operator to execute a couple of hundred cues at defined and clearly understood points over an hour (which may be at the same time as lighting or sound cues), as opposed to what you seem to be suggesting: that they constantly monitor whether an actor is slightly behind or ahead of where they are meant to be in the video timeline, and then stab at scrub controls, nudge buttons, pause buttons and play buttons to try and maintain sync.

Obviously I don't know anything about your show, but your proposed method of running the video does seem a peculiarly difficult way of making cues happen in the right place.

I am also not really understanding the maths of your mapping. Regardless of the number of video files you have, you only have to adjust the corner points of each of the four surfaces once per venue. Do you mean you are playing 4 copies of a single HD video file and using the custom geometry in the cues to select which portion of the file you send to each surface? If that's the case, you are wasting an enormous amount of video bandwidth. If you ever did need to crossfade between one group of four video cues and another group, you would be playing 8 full-HD videos simultaneously, which would need a fairly powerful Mac.

Why not just chop up your master video by rendering out 4 video files, one per surface, with roughly the pixel dimensions that correspond to the number of projector pixels that will land on each screen area of the set?
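(To put rough numbers on the bandwidth point - a back-of-envelope sketch, assuming frames sit in memory as uncompressed 8-bit RGBA at 30 fps; real codec and GPU figures will differ:)

```python
# Rough in-memory bandwidth for simultaneous full-HD video streams.
WIDTH, HEIGHT = 1920, 1080   # full-HD frame
BYTES_PER_PIXEL = 4          # 8-bit RGBA, uncompressed (an assumption)
FPS = 30

per_stream = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS  # bytes/second, one stream
four_streams = 4 * per_stream    # one group of 4 copies playing
eight_streams = 8 * per_stream   # crossfading between two such groups

print(round(per_stream / 1e6, 1), "MB/s per stream")       # ~248.8 MB/s
print(round(eight_streams / 1e9, 2), "GB/s for 8 streams")  # ~1.99 GB/s
```

(Even at a quarter of that after codec decode, eight simultaneous streams is a lot to ask of a 2015-era Mac.)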




Mic

jean poole

Dec 16, 2015, 8:51:59 PM
to ql...@googlegroups.com
On 17 Dec 2015, at 12:30 pm, micpool <m...@micpool.com> wrote:

Is the lighting operator putting all his cues in as timed waits, with just a single Go button push at the top of the show, and then hitting pause or speeding up cue playbacks to try and make the cues happen at the right time? Probably not.

Lighting has cues that happen as per events in the show… 

And the sound: is it a 1-hour file that is sped up, slowed down and paused to try to make it match the actors' speed? Again, probably not. I would imagine that both lighting and sound are designed with a lot of cues that happen at precise cue points in the script.

Sound is being played live - songs, sample triggers etc. - it’s two musicians telling their life stories, and their live playing happens a lot in the show…
As musicians, all their sound cues are on target; it’s their vocal delivery and storytelling that varies so much… they’re new to theatre

Why would the video need to work in a completely different way? 

actor timing… 

I find it difficult to understand how it is more difficult for an operator to execute a couple of hundred cues at defined and clearly understood points over an hour (which may be at the same time as lighting or sound cues), as opposed to what you seem to be suggesting that they constantly monitor whether an actor is slightly behind or in front of where they are meant to be in the video timeline and then stab at scrub controls, nudge buttons, pause buttons, and play buttons to try and maintain sync.  

I understand what you mean - but as it is, the show works fine with those slight, occasional adjustments… if we just use 

Obviously I don't know anything about your show, but your proposed  method to run the video  does seem a peculiarly difficult way of making cues happen in the right place.


I am also not really understanding the maths of your mapping. 
Why not just chop up your master video by rendering out to 4 video files one for each surface,  with roughly the correct pixel dimensions that correspond to  the number of projector pixels which are going to land on each screen area on the set?

That was my first preference too - rendering separated versions - however the theatre company cannot afford to take me to the next site in Sydney, 
and the dimensions will be different there… Having the video in HD will allow the operator up there to line the QLab surfaces up as a panorama, by moving them around until the strips line up, rather than being stuck with the forced perspective of a strip rendered for one location in the image only.  

Regardless of the number of video files you have you only have to adjust the corner points of each of the  four surfaces once for each venue. Do you mean you are playing 4 copies of a single HD video file and are using the custom geometry in the cues to select which portion of the file you are sending to each surface? If that's the case you are wasting an enormous amount of video bandwidth.

It’s actually roughly 1440x1080 (I’ve cropped during render, to cover where the projectors hit and not bother rendering where they’d hit the stage floor)

If you ever do need to crossfade between 1 group of four video cues and another  group you would be playing 8 full HD videos simultaneously, which would need a fairly powerful Mac to achieve. 

The show doesn’t currently require mixing groups.. 

Again - thanks for your feedback - 

As a QLab newbie, I’ve just been throwing out questions for inherited circumstances..
that I would’ve been able to resolve easily in VDMX, Millumin or Resolume… 
but the theatre company insisted on QLab, isn’t sending me to the next venue, and wants to minimise the number of video files and mapping needed for the next operator, who will be multi-tasking up there..

Trying to manage it as best I can...
And for the record, it’s a wonderful show, though the budget and timeline aren’t… ;) 

cheers!

Andy Dolph

Dec 16, 2015, 9:14:27 PM
to ql...@googlegroups.com
I tend to use lots of small files - switching files whenever there is a natural dramatic beat that I want to hit visually.  I don't see lots of cues as a problem so long as there is a clearly marked script for the operator or a stage manager calling cues.


jean poole

Dec 16, 2015, 9:14:44 PM
to ql...@googlegroups.com

On 17 Dec 2015, at 12:51 pm, jean poole <jeanp...@gmail.com> wrote:

It’s actually roughly 1440x1080 (have cropped during render, to cover where the projectors hit and not bother rendering where it’ll hit the stage floor)

oops - wrong axis there…

it’s 1920 x 800… just cropping the bottom of a 1080P image off, 
as this hits the floor…

micpool

Dec 17, 2015, 4:04:25 AM
to QLab
Ok.

So it's clear that you need your operator to be able to select which area of a larger image is sent to which screen, and to corner-pin those 4 source areas to fit the physical screen areas covered by a single projector.

I would say the only sensible solution is to use QLab syphoned to MadMapper, where your op will just adjust 32 corner points (all available in a single screen layout), once per venue, and the job is done.

You would then have a single file playing in QLab, and all your varispeed, nudge buttons, pause buttons etc. could be scripted, so that the last active cue (which would be your only playing video file in QLab) could be controlled from something like a Contour ShuttlePro.
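(As a sketch of the kind of scripting I mean - assuming QLab listens for OSC on UDP port 53000 and, as I believe its OSC dictionary allows, that a cue can be re-pointed with a /cue/{number}/loadAt message; the class and method names here are invented for illustration:)

```python
import socket
import struct
import time

def _pad(b):
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc(address, *floats):
    """Encode a minimal OSC message with float32 arguments."""
    tags = "," + "f" * len(floats)
    return (_pad(address.encode()) + _pad(tags.encode())
            + b"".join(struct.pack(">f", f) for f in floats))

class NudgeController:
    """Tracks where a playing cue *should* be, and re-points it on demand."""

    def __init__(self, cue_number, host="127.0.0.1", port=53000):
        self.cue = cue_number
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.started_at = None

    def _send(self, suffix, *args):
        self.sock.sendto(osc(f"/cue/{self.cue}{suffix}", *args), self.addr)

    def start(self):
        self.started_at = time.monotonic()
        self._send("/start")

    def elapsed(self):
        """Locally tracked playback position (QLab isn't queried here)."""
        return 0.0 if self.started_at is None else time.monotonic() - self.started_at

    def nudge(self, seconds):
        """Jump the cue forward (or back, with negative seconds)."""
        target = max(0.0, self.elapsed() + seconds)
        self._send("/loadAt", target)   # assumed OSC command; check the QLab dictionary
        self._send("/start")
        self.started_at = time.monotonic() - target
```

(A ShuttlePro's buttons would then just map to calls like controller.nudge(2.0) or controller.nudge(-2.0).)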

Mic

John Rose

Jun 14, 2023, 12:11:55 AM
to QLab
Stop me if you've heard this one, but... if the actors are performing mostly as narrators and can occasionally glance at a dedicated video display concealed by part of the set, or on a lectern, why not show them a sequence of Text cues on it, like a teleprompter? Then they'll know when they're drifting out of sync. A video of scrolling text is probably better.

Or - have three small lights not visible to the audience, like a small traffic light: yellow to slow down, green to pick up the pace, and blue to continue at that rate. A stage manager or production assistant follows a script with thumbnail images or something, and manually selects which light is on. If the actor skips a whole page or some such, give them flashing yellow.