Apologies if this has been discussed before, or is hidden in documentation, but I've been looking for a while without success, and I am new to the forum.
Media:
1 video track (currently ProRes HQ, 29.97 non-drop) with dummy audio (a mixdown of a performance)
10 mono audio tracks, 16-bit AIFF (one is a click track for live performers)
The media simply plays start to finish, continuously, for just over an hour. The design makes heavy use of audio/video sync, with video changes landing on percussive time points. Obviously, over the course of an hour, we can't count on two machines running independent clocks to end within one frame of each other.
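To put a rough number on that drift (the ±50 ppm figure below is a ballpark crystal tolerance I'm assuming, not a spec for any particular machine):

```python
# Rough worst-case drift between two free-running machines over an hour,
# assuming each clock can be off by up to 50 ppm (ballpark assumption).
ppm = 50e-6       # per-machine frequency error
duration = 3600   # seconds (one hour)
fps = 29.97

worst_case_drift = 2 * ppm * duration  # clocks erring in opposite directions
frames_apart = worst_case_drift * fps

print(f"{worst_case_drift:.2f} s ~ {frames_apart:.1f} frames")
# -> 0.36 s ~ 10.8 frames
```

So even with conservative assumptions, the machines could be roughly ten frames apart by the end, which is why free-running clocks are a non-starter for this piece.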
I have been charged with updating the playback method (from timecode-synced Beta SP and DA-88s), but it must be a redundant setup, hence my sync problem.
A lot of the sync discussions I have found are only relevant to QLab 2, so this is really a question of how the timeline functions in 3. What I did catch is that if you word-clock the two audio interfaces (of the two separate playback machines), they will stay in time with each other. Does this mean that video playback speed is then slaved to the audio interface (I believe I understood that the video MUST have an audio track for this to be true)? Even if the audio levels are all at -inf?
My proposed setup would then be:
A word clock generator (a Big Ben, say) feeds the two audio interfaces, such that the backup machine is not receiving clock from the primary machine, which would defeat its role as a backup. Of course the clock is then a shared resource, but obviously it's not being taxed the way one of the media machines will be.
With the interfaces slaved to their word clock inputs, is QLab 3 (and thus the video) deriving its timeline from the interface clock?
Is there any method to achieve a sample-accurate start, perhaps via MTC? If my understanding is correct, the relative time would be held, but not necessarily the actual start. With everything loaded and ready to go, I can't imagine the two machines would start more than 50 ms apart, which is probably the maximum allowable difference (imagine the primary machine fails and the click track goes out; the musicians keep playing, we fade in the audio of the backup machine, and the musicians have to lock back into the click track). With a sample-accurate start and lock, we could leave the backup audio playing all the time and overlay the projectors, such that a failure would only make the projection dimmer and drop the audio by 3 dB. To me, this is the 'right' way to do it.
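For what it's worth, here is a sketch of the kind of single-trigger start I'm imagining, short of true sample accuracy: one console fires the same OSC /go at both machines back to back, so the starts differ only by network jitter. QLab 3 does listen for OSC over UDP on port 53000; the IP addresses below are placeholders for the primary and backup machines.

```python
import socket

def osc_message(address: str) -> bytes:
    """Build a minimal OSC message with no arguments: the address string
    null-terminated and padded to a 4-byte boundary, followed by the
    type-tag string ',' padded the same way."""
    def pad(b: bytes) -> bytes:
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",")

# Placeholder addresses for the primary and backup playback machines.
MACHINES = [("192.168.1.10", 53000), ("192.168.1.11", 53000)]

def go_all() -> None:
    """Send /go to every machine from one console, back to back,
    so both playbacks start within network jitter of each other."""
    packet = osc_message("/go")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for host_port in MACHINES:
        sock.sendto(packet, host_port)
    sock.close()
```

This only gets the starts close (likely within a few milliseconds on a quiet wired network); holding them together afterwards is what the shared word clock would be for.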
Final thoughts: I've looked at other media server/advertising solutions that have frame sync, but this piece will be performed around the world by different organizations, so using locally and readily available tools such as Mac computers and audio interfaces, plus the daily rental licenses, is a huge plus for QLab, even if network sync isn't its forte right now (I hope it's coming!). Also, I am not building a rig that would follow the show, just media that would be shipped with the music; each group would have to build the hardware side themselves. So again, simplicity, flexibility, and common tools are a huge plus for why I want to use QLab. I also cannot count on the technical skill of the people building and running the show.
Many thanks in advance for any insights, comments, and suggestions!
Brian Mohr