Hi all
I have a project coming up where we'd like to send separate audio and video feeds to audience members' mobiles - has anyone done anything like this before, or does anyone know of a way to do it?!
We need to broadcast both audio-only and video-only streams to any and all audience members who choose to watch and/or listen. We can't hand out devices, so audience members need to be able to use their smartphone, though we can ask them to download an app if necessary.
I need the feeds to be low-latency, as they have to stay in sync with the action on stage.
We'll need to trigger both audio and video streams from Qlab (at specific, cued points in the action), but I don't mind whether that means playing back the media within Qlab or triggering another system by OSC/MIDI/AppleScript.
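For what it's worth, here's roughly the kind of OSC trigger I have in mind - a minimal Python sketch that hand-encodes a single-integer OSC message and fires it over UDP. The address pattern /clip/play, the port 9000 and the clip numbering are all made up for illustration; whatever app or server ends up on the receiving side would define the real ones.

```python
import socket
import struct

def osc_message(address: str, value: int) -> bytes:
    """Encode a minimal OSC message: padded address, ',i' type tag, one int32 arg."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",i") + struct.pack(">i", value)

# Hypothetical: tell listeners to start clip 3 (localhost/9000 just for the sketch;
# in practice this would go to the devices or a relay server on the venue network)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/clip/play", 3), ("127.0.0.1", 9000))
```

A Qlab network cue (or a script cue) could send the equivalent message at the right moment in the show, so the cueing stays in Qlab even if playback happens elsewhere.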
I've also thought about live-streaming the two feeds over the internet and sending audiences to a webpage to view in their browser - I suspect the latency wouldn't be low enough, or at least wouldn't be reliably consistent. I don't know whether the venue's wifi/internet access would be up to it either!
And finally, I wondered if there's an app out there that could cache the feeds in advance and be triggered to play back specific clips over wifi (ideally by e.g. OSC/AppleScript commands from Qlab) - something like Multivid or
https://itunes.apple.com/us/app/ivideoshow/id501016015?mt=8 - but it seems like neither of those would be that easy for audience members to self-install and use.
Mic, in the post above you mentioned the idea of "OSC cues off timecode in QLab targeting OSC apps on the iDevices. (or browsers targeting an OSC server)" - can you expand on that idea for me, please? It sounds like it might be promising, but I don't fully understand it!
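In case it helps show where my understanding runs out, here's my guess at the receiving half of that idea - a hypothetical bridge that listens for Qlab's OSC messages over UDP and hands each address pattern to a callback, which could then push the cue out to browsers (over WebSockets or similar). The port number and the callback are my assumptions, not anything from your post:

```python
import socket

def parse_osc_address(packet: bytes) -> str:
    """The OSC address pattern is the null-terminated string at the start of the packet."""
    return packet.split(b"\x00", 1)[0].decode()

def run_bridge(port: int = 9000, on_cue=print):
    """Listen for OSC packets over UDP and pass each address to on_cue,
    which could relay the cue to connected audience browsers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _ = sock.recvfrom(1024)
        on_cue(parse_osc_address(data))
```

Is that roughly the shape of it - Qlab fires OSC at a server, and the server fans the cues out to whatever the audience devices are running?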
Thanks all!
Will