Re: [QLab] Guaranteed Sync


Christopher Ashworth

Nov 1, 2010, 9:42:45 AM
to Discussion and support for QLab users.
These are all good questions, Rich. I'm not confident I can give completely convincing answers without going into debilitating detail, but I'll try to provide some convincing hand waving.

(Editor's note: this ended up getting into a fair amount of detail. Warning: many words ahead.)

First, some context:

Cues, as an abstract concept in QLab, exist as actions that are placed by an operator into a timeline. Those actions may be just about anything: logical state changes to the flow of the workspace, audio events, video events, running a script, etc. So far, so easy.

The concept of a timeline implies the existence of clocks that mark the time. Here is where the first wrinkle starts to appear. Clocks never run at exactly the same rate, which means that different clocks define different timelines. Ideally, QLab would work with a single clock, to make it easy to schedule all events on that same timeline. But we don't get that luxury. Different audio devices, for example, each bring their own clock to the party. Video display can also bring a clock. And of course the computer QLab is running on has its own clock.
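
To put rough numbers on that clock disagreement, here is a toy calculation (the 50 ppm figure is an invented but typical crystal tolerance, not a measured QLab value):

```python
# Two clocks that are both nominally 48 kHz but differ by 50 ppm slowly
# slide apart, so events scheduled "at the same time" on each clock drift.

def frames_elapsed(nominal_rate, ppm_error, seconds):
    """Frames a clock with the given ppm error actually produces."""
    actual_rate = nominal_rate * (1.0 + ppm_error / 1_000_000)
    return actual_rate * seconds

# After one minute, the two "48 kHz" clocks disagree by ~144 frames:
drift = frames_elapsed(48000, 50, 60) - frames_elapsed(48000, 0, 60)
```

A few frames per second of disagreement is inaudible at first, but it accumulates, which is why no two clocks can be treated as interchangeable timelines.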

The whole point of QLab, however, is to provide a single timeline into which these arbitrary abstract cue thingies are sequenced. Thus, we need to pick a master clock. Do we pick the clock of an audio device? No good; the audio played out other devices can't use that time information, and if that audio device disappears we're screwed. The natural choice for the master clock is the integrated clock inside the computer.

Picking the master clock doesn't solve all our problems, but it does give us a place to start. To begin with, all the logical state changes of the workspace must be applied in a strict order on that timeline. A cue that is sent a "disarm" command immediately followed by a "play" command must (obviously) never receive those two commands out of order, even though they theoretically happen at the same instantaneous moment. They're placed in a strict order on a specific point in the master clock's timeline. One or more threads of operation in the program must then be responsible for processing these actions in the specified order.

But what about all those other clocks still hanging around? We don't get to ignore those. The audio device doesn't care about the computer's clock; it's got its own clock, and that's what it's using. If you tell QLab to play exactly one second of an audio clip, it can only do that by watching the clock of the audio device where the audio is actually being played. Moreover, providing audio to the device accurately requires doing it under a specific set of conditions. Namely, within a high-priority audio thread, where QLab receives requests for a small buffer of audio samples all at one time. The needs of audio being what they are, this audio thread is a sensitive place, where one can't just go doing any old computation willy-nilly. One must stick more or less to handing over the audio, with a little extra computation allowed if you have the time.

Now the wrinkles are getting deeper. Now we've got a master timeline that defines the overall logic and order of your actions, we've got a cast of secondary timelines into which some of those actions must be translated, and we've got different classes of threads in which the work of processing these different timelines must take place.

What kinds of threads? Well, we've got audio threads, which require a certain amount of fast-acting, non-blocking state by which to judge very quickly what series of audio samples get delivered from moment to moment. We've also got one or more other threads, which must process the state changes of the workspace in a strict order. And, critically, the latter must hold authority over the former. If the logic of the master timeline says a cue has devamped, by golly, the cue had better actually be devamped over in that mirror world of the audio timeline. Not all state between the two worlds must be mirrored that closely. It might sometimes be okay for an audio cue to report that, logically speaking, yes, it's playing, even if the audio samples won't be delivered for a few more milliseconds. But devamping at a loop point is the kind of thing that must always happen instantaneously. It's not okay in the audio timeline to go "oops, I started the loop over again for a few frames, but then I noticed I was supposed to devamp that time around, so I'll just go ahead and do that now". To handle this sort of thing, the fast-acting, non-blocking state in the audio threads must be sitting there prepared to handle instantaneous adjustments for (at least) certain kinds of state changes. In this case, devamps. But there's only so far we can take this, because at the limit the logic of the master timeline could instantaneously and without warning send the audio state to *any* location in playback. An audio cue could be told to stop, load to time X, and play, all instantly. Unless the entire set of audio frames is available for fast-acting, non-blocking delivery at all times, then we now are in the position of trying to set this state up ASAP when that is sufficient, and instantly in the special cases where it really must be instant.
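
One way to picture that "fast-acting, non-blocking state" for devamps is a flag the control thread sets and the audio thread checks right at the loop boundary. Here is a loose Python sketch (QLab is not written this way; `LoopState` is an invented name, and a real audio thread would use a lock-free atomic rather than a `threading.Event`):

```python
import threading

class LoopState:
    """Toy playhead for a vamped region [loop_start, loop_end)."""
    def __init__(self, loop_start, loop_end):
        self.loop_start = loop_start
        self.loop_end = loop_end
        self.devamp = threading.Event()  # flipped by the control thread

    def next_frame(self, frame):
        """Advance the playhead one frame, looping unless devamped."""
        frame += 1
        if frame == self.loop_end and not self.devamp.is_set():
            return self.loop_start   # wrap around: keep vamping
        return frame                 # not at the boundary, or devamp taken
```

Because the check happens at the exact boundary frame, the devamp can never be noticed "one loop too late"; that is the property Chris is describing.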

This, then, is the thorny world of arbitrary, human-triggered actions.

It is not, as it turns out, terribly difficult to schedule audio in a sample-accurate way for a single audio device when you know in advance when and what audio should play. All you have to do is get your buffers ready, count the frames as they are requested, and deliver the right audio at the right time.
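
That frame-counting idea can be sketched as a toy model (not QLab's actual code; `render` and the shape of the schedule are invented purely for illustration):

```python
# Single-device, known-in-advance scheduling: count frames as the device
# requests them, and splice each clip in at exactly its scheduled frame.

def render(schedule, frames_elapsed, buffer_size):
    """Mix one device buffer. `schedule` maps absolute start frame -> clip."""
    out = [0.0] * buffer_size
    for start_frame, clip in schedule.items():
        for i in range(buffer_size):
            pos = frames_elapsed + i - start_frame
            if 0 <= pos < len(clip):
                out[i] += clip[pos]
    return out
```

Since everything is keyed to one monotonically counting device clock, sync here is trivially sample-accurate; the hard part, as the next paragraph says, is changing the schedule mid-flight from another timeline.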

The thing that makes it hard is trying to update the schedule across different timelines based on either a human trigger or the meddlesome state change from some other cue. Buffers must be updated and logic must be synchronized across different timelines and threads. Some of the state MUST be changed instantly, and some can afford to be done ASAP.

In short, scheduling sample accurate audio to a single audio device is easy. Scheduling sample accurate audio across multiple audio devices within a framework of arbitrary generic (non-audio) events with unpredictable branching behavior is not easy.

NOW. With that context in place, it will (hopefully) be easier to answer your questions:

On Oct 31, 2010, at 11:16 PM, Rich Walsh wrote:
>
> I feel like I'm still missing something: why is triggering a cue with a 1s pre-wait different from triggering the same cue with 1s of silence built-in? If I've understood you correctly, the former would not be sample-accurate with a cue fired at the same time, but the latter would.

Depending on how one implements pre-waits, this could go either way.

It *could* be that in version 3, "starting two cues at the same time" just means that a trigger caused them to go from a completely stopped state to a state where *something* is running. (Either a pre-wait or an action.)

Because yes, as you say, a pre-wait can be seen as triggering a cue with a period of silence built in to the file. If pre-waits are something that the audio threads "know" about (as part of their fast-acting, non-blocking state), then two cues started at the same time will be in guaranteed sync, even if one of them began playing audio immediately and the other waited one second first.
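
Sketching that first scenario (purely illustrative; `sample_at` is an invented name): if the audio thread treats a pre-wait as leading silence, then two cues started together read off one shared frame counter and can never drift.

```python
def sample_at(clip, pre_wait_frames, frame):
    """Sample this cue contributes at `frame` on the shared device timeline."""
    pos = frame - pre_wait_frames
    if 0 <= pos < len(clip):
        return clip[pos]
    return 0.0   # still inside the pre-wait, or past the end: silence
```

The pre-wait is just an offset applied inside the audio timeline, so "waited one second first" and "silence baked into the file" are indistinguishable to the device.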

On the other hand, if audio threads do not store knowledge about pre-waits -- if that is a concept outside of their specialized audio timeline state -- then it could be that the master timeline is the only thing tracking the pre-wait. In that case, one could have two different cues where the times programmed into the cue sequence define that their actions will start at precisely the same time. Here, the master timeline might be keeping track of all the timing of pre-waits and post-waits, until the moment it sees that two actions should begin playing at exactly the same time, in which case it then ensures that these two separate cues will begin playing in sync as soon as possible.

Now, the former scenario sounds great. But why stop there? If a pre-wait is equivalent to some number of frames of silence, isn't a cue sequence just a collection of pre- and post-waits with some actions dropped in the intervals? In essence, shouldn't we just schedule all audio on its audio timeline as far in advance as possible? Then every cue in the sequence is in sync with every other cue.

It is this act of scheduling, however, that leads to tremendous complexity, as described in my introduction above. Suddenly we are in a world in which all branches of all possible schedules must be kept up to date for all cues that are now running or will be running or might be running in the future. If a cue sequence is stopped, the cues that were scheduled must be unscheduled. This may cause other branches of the logical flow of the workspace to assert themselves. Instead of working out the logic of "who goes when" just in the moment of doing it, we now are predicting a complex branching future of possible events, some number of layers deep. (Reference: notebook pages 1 through 23.)

There may be a compromise here, in which we only go one layer deep -- in which the audio timeline knows about pre-waits, and two cues are in sync if they both go from a stopped to a waiting or running state at the same moment. This covers the "first level" of Rich's scenario, and as I think about it now, could probably be done without introducing destabilizing complexity. But going the next logical steps beyond that is where I can't presently come up with an architecture that I trust. This may reflect my limitation as a programmer, but there it is.

> Even the most complex form of vamping can be reduced to the same basic thing: a cue fires at a (reasonably) determined interval after another (ie: the time is a quantised variable, not a continuous one). You presumably have a mechanism whereby the frames being delivered from the file can change at the drop of a hat, otherwise the internal devamping would not work. What is the subtle difference that means that you can jump around within a file with sample accuracy,

Just to re-iterate the description above, this comes down to "special-casing" loops and vamps, since a more general mechanism to change the frames at the drop of a hat is much more difficult (or at least requires much more RAM).

> but you can't deliver a second file with sample accuracy unless you start it at the same time? Do they both need to be actually presenting frames? Can the one that starts later be made to appear to present empty frames while the pre wait elapses?
>
> What is the current status of the mechanism? Sample-accurate sync has no meaning for cues that are not tied together in some way by a single GO-event-triggered domino cascade, so if guaranteed sync is off the 1s gap between the two cues may not be exactly 1s - whilst if it's on, the gap will be exactly 1s, but the silence between may be a fraction longer. Right?

If I am reading your description correctly, yes.

The current mechanism goes like this: every event in QLab takes place at One True Time, which is the time it was scheduled to run on the master timeline.

When translating this into the timeline of the audio devices, one of two things happens depending on whether sync is set:

1) If sync is NOT set, the cue simply begins playing the audio from the top of the file, ASAP. This means the audio is technically a bit later than the One True Moment that the cue started playing, but hey, big deal.

2) If sync IS set, the cue must account for the One True Moment when it nominally began to play. It supplies audio as if it had been doing it from the very instant of that One True Moment. (Perhaps shaving off a few frames from the top of the file in the process.) Since ALL cues with guaranteed sync are doing this, they are ALL locked together, no matter when or how they were triggered in this larger abstract world of QLab cues. They are all slaving themselves to the master timeline in a way that guarantees they will be in sync, but since it is done "just in time", rather than with a scheduling-all-possible-futures mechanism, it is at the cost of losing a few frames from the top of the file.
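
The "shaving off frames" arithmetic in case 2 might look something like this (an illustrative sketch with invented names, not QLab's code):

```python
# "Just in time" guaranteed sync: a cue that nominally started at
# one_true_time but only begins delivering audio now skips the frames it
# "missed", landing exactly where it would have been all along.

def start_offset_frames(one_true_time, now, sample_rate):
    """Frames to shave off the top of the file when delivery begins late."""
    elapsed = max(0.0, now - one_true_time)
    return int(round(elapsed * sample_rate))
```

Every synced cue computes its position from the same One True Moment, so they all agree on where "now" falls in their files, at the price of those skipped opening frames.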

I'm sure everything is perfectly and unambiguously clear now, yes? :)

Best,
Chris
________________________________________________________
WHEN REPLYING, PLEASE QUOTE ONLY WHAT YOU NEED. Thanks!
Change your preferences or unsubscribe here:
http://lists.figure53.com/listinfo.cgi/qlab-figure53.com

Jeremy Lee

Nov 1, 2010, 9:59:31 AM
to Discussion and support for QLab users.
You wrote this all at 9:42 in the morning? How many cups of coffee was this into your day?

This depth of thought, and this consideration of all the possibilities, is both why I don't envy you your work, and why I trust your software implicitly. Keep up the good work, allowing me to do mine!

On Nov 1, 2010, at 9:42 AM, Christopher Ashworth wrote:

> I'm sure everything is perfectly and unambiguously clear now, yes? :)

--
Jeremy Lee
Sound Designer, NYC - USA 829
http://www.jjlee.com

Christopher Ashworth

Nov 1, 2010, 10:04:56 AM
to Discussion and support for QLab users.
On Nov 1, 2010, at 9:59 AM, Jeremy Lee wrote:

> You wrote this all at 9:42 in the morning?

Sent at 9:42. Begun much earlier.

> How many cups of coffee was this into your day?

1.5.

Cheers,
Chris

*

Nov 1, 2010, 10:22:48 AM
to Discussion and support for QLab users.
Chris,

If I cut the front off a file mid note / music / sound outside of Qlab, I
take the risk of having a POP when that file plays (not just in Qlab).

I usually put a very fast fade on the front & end of audio files to make
sure that they start & stop on the 0 line.

In the case of "guaranteed sync", what happens within the "few frames"
period of time?

Does it get faded in? Is it unnoticeable?

I've had POP issues with as little as 1 sample's worth of trash & so I
would presume that Qlab is doing something to avoid this but I'd like to
understand what it's doing.

Re: Vamp / Devamp

In the old days, I would use ACID & create a loop point on a cue & once
the playback position hit that point it would get stuck inside the two
"yellow flags" & when I wanted to devamp, I'd turn off the LOOP function
& the audio would play on out.

If I were a DJ with two turn tables, I would have the same record on both
with marks on the middle & I would fade back & forth between the two to
create my "loop" & when I wanted to devamp, I would just let the current
record play out...

So we know that Qlab isn't the only way to loop audio.

Long before there was a vamp / devamp cue option, I was making two files
to do the same thing & manually triggering the change over in whatever
application I was using to play back audio.

IF the vamp / devamp cue was lost in a future version of Qlab, I would
still gladly use the future version.

Just as Qlab doesn't allow me to add reverb & perform EQ & compress, I
don't expect it to do everything else I might need. I don't mind asking.
If it can, great!!! But if it can't, I'll manage. There are limitations to
everything...

Best Regards,

*

On Mon, November 1, 2010 8:42 am, Christopher Ashworth wrote:
> (Perhaps shaving off a few frames from the top of the file in the
> process.) Since ALL cues with guaranteed sync are doing this, they are
> ALL locked together, no matter when or how they were triggered in this
> larger abstract world of QLab cues. They are all slaving themselves to
> the master timeline in a way that guarantees they will be in sync, but
> since it is done "just in time", rather than with a
> scheduling-all-possible-futures mechanism, it is at the cost of losing
> a few frames from the top of the file.

Christopher Ashworth

Nov 1, 2010, 11:32:55 AM
to Discussion and support for QLab users.
On Nov 1, 2010, at 10:22 AM, * wrote:
>
> In the case of "guaranteed sync", what happens within the "few frames"
> period of time?
>
> Does it get faded in? Is it unnoticeable?

No fade. Just jumps to the correct frame. Pops certainly possible.

> Re: Vamp / Devamp

To clarify, I'm not proposing the removal of vamping -- I think it's a good thing for QLab to make vamps easy.

The change would only affect how you create audio with multiple vamped sections.

Instead of this:

AUDIO
DEVAMP
AUDIO
DEVAMP
AUDIO

You'd have this:

AUDIO
DEVAMP
DEVAMP

The idea is that (I think) it's easier to put multiple looped sections in a single audio cue than it is to ensure that cues triggered later in a sequence are synced with cues earlier in the sequence.

The (current!) plan is that in v3 only cues started at the same time would be in sync, rather than cues started at any arbitrary time.

-C

Søren Knud

Nov 1, 2010, 12:27:52 PM
to Discussion and support for QLab users.
> The (current!) plan is that in v3 only cues started at the same time would be in sync, rather than cues started at any arbitrary time.

Sounds great to me.

best,

soren

sam kusnetz

Nov 1, 2010, 3:04:29 PM
to ql...@lists.figure53.com

first, chris, you are extraordinary. as jeremy said, this sort of thing is exactly why we trust you and your software.

second, here are some thoughts and some questions:

it is my understanding that core audio does not make as much use of a multi-core system as it could. is that true? it is certainly my experience that the hyperthreaded eight core mac pro that we have at portland center stage never, ever uses all its cores during show playback (i've also never used up all of the 6 GB of RAM, but i have repeatedly choked the "professional" video card by trying to fade a 720x480 h.264 video under a 50% transparent PNG. go figure.) historically, i have always noticed that qlab appears to be a very low-impact program on the processor. now that even a mac mini has four logical cores, is there any way to make more and better use of all the horsepower?

> Unless the entire set of audio frames is available for fast-acting, non-blocking delivery at all times, then we now are in the position of trying to set this state up ASAP when that is sufficient, and instantly in the special cases where it really must be instant.

well, in the case of a devamp as you mentioned, do you really need the entire set of frames available? don't you just need the set of frames from the looped section plus some reasonable amount of frames from directly after the looped section? it seems to me that that would be a fairly modest RAM requirement...

> It is not, as it turns out, terribly difficult to schedule audio in a sample-accurate way for a single audio device when you know in advance when and what audio should play. All you have to do is get your buffers ready, count the frames as they are requested, and deliver the right audio at the right time.

if this is true, then i would like to hear from the community about how many folks would be disappointed if guaranteed sync was only available per audio device. because that really doesn't sound like much of a compromise to me... i would bet that 90% of all qlab shows are being run off a single audio device anyway.

> It *could* be that in version 3, "starting two cues at the same time" just means that a trigger caused them to go from a completely stopped state to a state where *something* is running. (Either a pre-wait or an action.)


and then you wrote a lot more :) but i won't quote it.

chris, i think it may be possible that you are allowing edge cases to prevent you from neatly solving a problem. i may be wrong, and i would really like to hear from the community about this, but for me, there are only two conditions under which guaranteed sync matters to me:

- the first situation is when a single GO triggers more than one event (i.e. more than one line in the cue list) linked together either by virtue of being in a fire-all group cue, or by having autofollows/autocontinues with or without pre and post waits.

- the second situation is devamping. when i have a loop playing, i want the exit from the loop to be seamless.

within these two scenarios, there is really only one moment of arbitrary decision making going on (from the computer's point of view), and that is when the operator triggers the devamp. if there are multiple loops being devamped, i guess it's more than one moment of arbitrary decision making. but you get the idea.

to me, it is not some wide-open field in which anything can happen. even the most complex shows run in a pattern.

> isn't a cue sequence just a collection of pre- and post-waits with some actions dropped in the intervals? In essence, shouldn't we just schedule all audio on its audio timeline as far in advance as possible? Then every cue in the sequence is in sync with every other cue.

may i request some language clarity, please? what does "sequence" mean to you, in this context?

> Suddenly we are in a world in which all branches of all possible schedules must be kept up to date for all cues that are now running or will be running or might be running in the future. If a cue sequence is stopped, the cues that were scheduled must be unscheduled. This may cause other branches of the logical flow of the workspace to assert themselves. Instead of working out the logic of "who goes when" just in the moment of doing it, we now are predicting a complex branching future of possible events, some number of layers deep. (Reference: notebook pages 1 through 23.)

this is a very intelligent and cogent thought process here, and from the perspective of building a tool (qlab) which will be used in unpredictable ways (which is a good thing) i completely see the merit of what you're saying.

but

is it actually true that tracking the whole logical tree is necessary? what is an example of a situation in which stopping a cue sequence doesn't automatically imply that guaranteed sync is no longer necessary?

> There may be a compromise here, in which we only go one layer deep -- in which the audio timeline knows about pre-waits, and two cues are in sync if they both go from a stopped to a waiting or running state at the same moment.

and post-waits, please. :)

> This covers the "first level" of Rich's scenario, and as I think about it now, could probably be done without introducing destabilizing complexity. But going the next logical steps beyond that is where I can't presently come up with an architecture that I trust. This may reflect my limitation as a programmer, but there it is.

i do not believe that it reflects your limitations as a programmer, i believe it reflects your strengths as a programmer: the ability to foresee a situation which contains variables which you cannot anticipate. this is a good thing.

what i suspect, however, is that you are foreseeing something which though mechanically possible, may not ever actually come up.

> Just to re-iterate the description above, this comes down to "special-casing" loops and vamps, since a more general mechanism to change the frames at the drop of a hat is much more difficult (or at least requires much more RAM).

special-casing loops and vamps is, in my opinion, a good thing. as a designer and composer, i certainly think of vamps as a special case.

> Since ALL cues with guaranteed sync are doing this, they are ALL locked together, no matter when or how they were triggered in this larger abstract world of QLab cues.

think about it this way: if cue 1 is guaranteed sync and cue 12 is guaranteed sync, but there are no autofollows or devamps or anything which link them together, then the only thing that could happen to cause them to play simultaneously is the operator hitting GO at some point, right? well, if that's the case, then guaranteed sync is superfluous since the operator him/herself is not able to press the GO button with anywhere near to sample-level accuracy.

i suppose the other possibility is that the show is running off of timecode, in which case a sample accurate duration between the GO of cue 1 and the GO of cue 12 really is possible. but in that case, it's a bit of a fallacy to say that cue 1 and cue 12 aren't autofollowed, and it calls for another separate conversation (in short: i think it's ok to insist on specific best practices when building a workspace which will be fired by timecode and requires sample accurate sync across the *entire* workspace).

so. there you go.

cheers
sam

Christopher Ashworth

Nov 1, 2010, 5:07:04 PM
to Discussion and support for QLab users.
On Nov 1, 2010, at 3:04 PM, sam kusnetz wrote:
>
> it is my understanding that core audio does not make as much use of a multi-core system as it could.

That's an interesting observation. I'm not actually sure whether it's true off the top of my head. I'd imagine they'd put separate audio device callbacks in separate high priority threads, but maybe not. I've never had occasion to check.

Either way, CoreAudio doesn't require much CPU, period. Until DSP is involved, it's just not a very demanding kind of computation, once the audio is out of the file and in a buffer.

Getting the audio out of a file and in a buffer is the more time-consuming part, but that's also not really CPU-bound, except for compressed files, and even then it's still the disk that's going to be a problem first.

>> Unless the entire set of audio frames is available for fast-acting, non-blocking delivery at all times, then we now are in the position of trying to set this state up ASAP when that is sufficient, and instantly in the special cases where it really must be instant.
>
> well, in the case of a devamp as you mentioned, do you really need the entire set of frames available?

No, not in that case. But that's not the case that causes the problems.

>> It is not, as it turns out, terribly difficult to schedule audio in a sample-accurate way for a single audio device when you know in advance when and what audio should play. All you have to do is get your buffers ready, count the frames as they are requested, and deliver the right audio at the right time.
>
> if this is true, then i would like to hear from the community about how many folks would be disappointed if guaranteed sync was only available per audio device. because that really doesn't sound like much of a compromise to me... i would bet that 90% of all qlab shows are being run off a single audio device anyway.

I've been unclear. QLab already works this way. Guaranteed sync only makes sense for one device at a time -- one clock at a time. (Which, however, includes Apple's aggregate devices. But QLab doesn't see those as multiple devices, just as a single device.)

The addition of multiple devices is one reason why there must be a master timeline using a separate clock, but the real issue isn't so much the "multiple devices" part as it is the "plan all audio in advance" part.

> chris, i think it may be possible that you are allowing edge cases to prevent you from neatly solving a problem. i may be wrong, and i would really like to hear from the community about this, but for me, there are only two conditions under which guaranteed sync matters to me:
>
> - the first situation is when a single GO triggers more than one event (i.e. more than one line in the cue list) linked together either by virtue of being in a fire-all group cue, or by having autofollows/autocontinues with or without pre and post waits.

See, this is exactly the trick.

Think of it this way:

QLab already guarantees sync in this situation, but it does NOT guarantee that every single frame of the file will be played.

It's become clear that the sync guarantee is often not very useful to people if they don't have the "I hear every frame" guarantee.

You can get the "I hear every frame" guarantee without sync, but again, then sometimes people want the sync back.

What you're *really* asking for above is a guarantee for *both*. And this, in the context of what QLab is giving you in terms of cue building blocks that can be triggered at any time and modify each other and all that good stuff, is Way Not Easy.

Note that, yes, in the normal cases, we should be able to have buffers set up to anticipate the 90% of times when a series of cues just goes "down the line" and in that case it should be both synced and not drop frames.

But *guaranteeing* both means it's 100% of the time. And that's the thing that is hard.

Unless or until Sean and I can come up with a good way to *guarantee* it, I'd rather only try to guarantee something more limited, even if it works even better 90% of the time.

In other words, I'm trying to *avoid* the edge cases, and keep life simple for the most common cases. The goal is to *limit* the strict guarantees to things that really ought to be guaranteed, and avoid the insanity (and the destabilizing complexity) for the rest.

>> isn't a cue sequence just a collection of pre- and post-waits with some actions dropped in the intervals? In essence, shouldn't we just schedule all audio on its audio timeline as far in advance as possible? Then every cue in the sequence is in sync with every other cue.
>
> may i request some language clarity, please? what does "sequence" mean to you, in this context?

A series of cues linked by auto-follow, auto-continue, or start cues.

AKA: cues that will all eventually be fired from a single GO.

> is it actually true that tracking the whole logical tree is necessary? what is an example of situation in which stopping a cue sequence doesn't automatically imply that guaranteed sync is no longer necessary?

Here's a simple example:

Imagine a single audio cue that is triggered by two separate cue sequences.

In the first sequence, the audio cue would play back from time zero.

In the second sequence, due to a Load Cue, the audio cue would play back from time 00:01. (One second in.)

The first sequence begins to play. Then the second sequence also begins to play.

Assume that by the logic of how the first sequence is strung together with auto-follows and auto-continues, it will trigger the audio cue first. So the audio cue should be buffered to start at time zero. Also assume that if things proceed as-is, the trigger from the second sequence would be fired on an already-running audio cue, and thus do nothing.

But at some point let's say a cue in that first sequence is stopped. In that instant the second cue sequence is now the influential sequence. The audio cue should now be buffered at time 00:01.

Now imagine that we ADD a cue to the second sequence. This cue stops a critical cue in the first sequence *immediately* before the first sequence would have played the audio cue. Then the second sequence immediately triggers the cues that will cause the audio to play back from time 00:01.


Now, here we have two perfectly simple cue sequences. (Cues linked by auto-continue, auto-follow, etc.) Nothing all that fancy is going on. No scripting or anything.

But in order to *guarantee* that the audio cue will play *all* frames in a *synchronized* way, QLab must compute the playback logic of the cues from both sequences, and recognize that, instead of buffering the audio cue at time 00:00, it should actually be buffering it at time 00:01.

If you ask QLab to guarantee sync *and* no dropped frames for all cues along a sequence, this is the kind of thing this guarantee implies.
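
The re-buffering dependency in this example can be caricatured in a few lines (invented names; a gross simplification of the real problem):

```python
# The file position we must buffer for depends on which running sequence
# will reach the audio cue first, so stopping a sequence can instantly
# change the answer.

def required_buffer_start(sequences):
    """sequences: per running sequence targeting the cue, either
    (will_trigger_at, start_time_in_file) or None if stopped.
    The earliest surviving trigger decides the buffering position."""
    live = [s for s in sequences if s is not None]
    if not live:
        return None          # nothing will trigger the cue
    return min(live)[1]      # earliest trigger wins; return its file time
```

While sequence A is running, the cue buffers from 0.0; stop A, and B's load-to-00:01 suddenly governs. Guaranteeing no dropped frames means evaluating this kind of function continuously, for every cue, across every possible branch.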

And it can obviously get much thornier, if we want to make it thornier.


Now, again, I agree that these cases are perhaps not the common cases. But when it comes to what Figure 53 can "put on the tin" in terms of guarantees, this is the kind of stuff we need to think through.


>> Since ALL cues with guaranteed sync are doing this, they are ALL locked together, no matter when or how they were triggered in this larger abstract world of QLab cues.
>
> think about it this way: if cue 1 is guaranteed sync and cue 12 is guaranteed sync, but there are no autofollows or devamps or anything which link them together, then the only thing that could happen to cause them to play simultaneously is the operator hitting GO at some point, right? well, if that's the case, then guaranteed sync is superfluous since the operator him/herself is not able to press the GO button with anywhere near to sample-level accuracy.

Quite right.

Your instinct is right here; it only ever comes down to the cues that are bonded together in a cue sequence. At first blush, this looks like a simple thing to predict. But as described above, it's not.


Believe it or not, I think we're actually more or less on the same page here. The common cases are the important thing, and QLab should be only as complex as necessary -- and no more so.


At this point, I think I've probably hit most of the points that I can hit without having real, running code in my hands. I hesitate to make too many claims or statements before the proof is in the pudding.

My "you're talking without having done it yet!" alarm bells are going off something awful. ;-)

Cheers,
C

Sean Dougall

Nov 1, 2010, 6:51:57 PM
to Discussion and support for QLab users.
Just to address one other case that you brought up, Sam: strictly speaking, timecode itself isn't sample-accurate. QLab's LTC-reading code is very careful about timing, and gets extremely close, but actual sample-accurate sync is impossible. (I know MOTU uses "sample-accurate" and "SMPTE" in the same breath, but this is sheer marketing talk. Even in hardware, the best you can get is an approximation.) And MTC is carried by discrete packets that just get there whenever they happen to get there, so its sync is even worse than LTC. It's close enough for almost any scenario, but still not sample-accurate.

But, as far as QLab cues being triggered by timecode, what we can do is to minimize the delay between when they should trigger and when they do. So, if cue 1 and cue 12 have different timecode triggers, they won't have a sample-accurate offset from each other, because timecode isn't designed to provide that level of sync.

If you have two cues with the *same* timecode trigger, though, you can (and generally probably should) just have them all auto-continued. That way QLab will know they all go together, and as a bonus, if the sync point changes you only have to update the first cue in the sequence.

Cheers,
Sean

Rich Walsh

Nov 1, 2010, 9:32:19 PM
to Discussion and support for QLab users.
On 1 Nov 2010, at 13:42, Christopher Ashworth wrote:

> I'm sure everything is perfectly and unambiguously clear now, yes? :)

Yes. Thank you for finding the time to write the detailed explanation.

It looks like it comes down to where you want to draw the line between "all reasonable futures" and "all possible futures" when offering to guarantee sync. I don't see any need to promise sync when the triggering or modification of the cue/sequence requires human intervention - except of course for vamping.

I fear I am now stuck in a loop too, so I opt to devamp myself...

Rich
