Re: [QLab] Ableton -> Qlab -> MAX -> Speakers?


Daniel Perelstein

Apr 11, 2017, 9:52:45 AM
to ql...@googlegroups.com
Yes: a different DAW, without the limitations you're working so hard to overcome here.

Firing MIDI from QLab to REAPER takes both Soundflower and Max out of the equation, and also means you can easily use the same interface no matter how many outputs you need, so you're not changing interfaces or dealing with aggregates, etc.
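Dan's approach relies on nothing more exotic than ordinary MIDI messages. As a minimal sketch (the channel and note numbers are arbitrary examples, and this is not QLab's or REAPER's API, just the raw bytes any MIDI trigger boils down to):

```python
# Sketch of the raw bytes behind a "fire cue" trigger, the kind a QLab
# MIDI cue sends to a DAW. Channel 1 / note 60 are arbitrary examples,
# not values QLab or REAPER mandates.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message (channels are numbered 1-16)."""
    if not (1 <= channel <= 16 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("value out of MIDI range")
    # Status byte 0x90 = Note On on channel 1; low nibble selects channel.
    return bytes([0x90 | (channel - 1), note, velocity])

# Note On, channel 1, middle C, full velocity -> 0x90 0x3C 0x7F
msg = note_on(1, 60, 127)
```

In REAPER (or Live), a MIDI-learned action mapped to that note is all the "integration" needed.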

I understand making it more complex for pedagogical reasons, but your email states both that your professor is currently being driven crazy by it, and also that you're looking to make something you can eventually use in your professional work. So use the simple solution, I say.
Dan

On Tue, Apr 11, 2017 at 2:39 AM 'Sean Tingle' via QLab <ql...@googlegroups.com> wrote:
Hello everyone!

My name is Sean Tingle and I am a Sound Design student at CCM. I am working on a project for my Sound class and have hit a brick wall. I would love some outside opinions so I don't drive myself (or my professor) crazy trying to figure out how to make the next step happen.

*For the record, the signal chain in the title of this post (Ableton -> Qlab -> MAX -> Speakers) does already work at a very basic level. However, my main goal for this project is to constantly push these programs and their relationships with each other to the point of breaking, in order to establish a flexible workflow I can use in the professional field one day.*

So let's get into it!

Essentially, I am using QLab as a user interface for people such as a board op or stage manager to fire off scenes (cues) in Ableton. In the past I have simply used Ableton's Sends/Returns to route the audio to individual speakers in the system (via Dante, for instance). However, Ableton caps Sends/Returns at 12, so going this route I am stuck with only 12 outputs out of Ableton, which is very limiting.

So the thought is to send audio directly out of each Ableton track via a virtual sound card, such as Soundflower, and use Mic cues in QLab to pick up that audio and route it to whatever QLab output I want. We still use Soundflower to output audio from QLab because Mic cues need the same device for output as for input; that guarantees the outputs don't get scrambled, which can happen with an aggregate device (bouncing from 2 speakers to 8 and back to 2, etc.). The audio is then picked up by Max and routed to a physical interface such as a MOTU.
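Written out hop by hop, the chain looks like this; a sketch only, with device names taken from the post and the channel mapping purely illustrative:

```python
# The signal path described above, as an ordered hop list. Only the
# ordering comes from the post; "MOTU" is the example interface Sean
# names, and the 1:1 channel mapping below is illustrative.

SIGNAL_PATH = [
    "Ableton Live (one mono out per track)",
    "Soundflower (virtual device carrying Live's outputs)",
    "QLab Mic cue (input AND output both on Soundflower)",
    "Soundflower (carrying QLab's routed outputs)",
    "Max (picks up the Soundflower channels)",
    "MOTU interface -> speakers",
]

def soundflower_channel(track: int) -> int:
    """Illustrative 1:1 mapping: Live track N lands on Soundflower channel N."""
    return track

for hop in SIGNAL_PATH:
    print(hop)
```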

Yet again, this already works! However, I am being greedy and want to see what can actually be done. What I quickly realized is: what if I want to run stereo outs (for each track) of Ableton instead of mono? That doubles my channel count (which I can get back by using an aggregate device out of Ableton), but then I am limited to 24 Mic inputs in QLab. I can't use multiple mic patches because Ableton can only be configured with one output device.
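The arithmetic behind that wall is simple to check. A back-of-envelope sketch, taking the two ceilings cited in the thread (12 Sends/Returns in Live, 24 Mic-cue inputs in QLab) as given:

```python
# Channel math for the setup described above. The 24-input ceiling is the
# QLab Mic-cue limit cited in the post; everything else is arithmetic.

QLAB_MAX_MIC_INPUTS = 24  # Mic-cue input channel ceiling cited above

def channels_needed(tracks: int, stereo: bool = False) -> int:
    """Soundflower channels Live must feed QLab for this many tracks."""
    return tracks * (2 if stereo else 1)

def max_live_tracks(stereo: bool = False) -> int:
    """Most Live tracks QLab's Mic cues can ingest at once."""
    return QLAB_MAX_MIC_INPUTS // (2 if stereo else 1)

print(max_live_tracks(stereo=False))     # 24 mono tracks fit
print(max_live_tracks(stereo=True))      # only 12 stereo tracks fit
print(channels_needed(16, stereo=True))  # 32 channels: over the ceiling
```

Going stereo halves the usable track count, which is exactly the squeeze described above.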

So I guess my problem is, how do I use Ableton as playback for my audio but use Qlab for all of my routing needs? 

Thoughts? Sorry for all of the craziness and seeming ridiculousness when a valid answer is to just use a different DAW to get audio out. But then again, where is the fun in that? Thank you in advance for all of your input! I really do appreciate your time!

-Tingle

--
Contact support anytime: sup...@figure53.com
Follow Figure 53 on Twitter: http://twitter.com/Figure53
---
You received this message because you are subscribed to the Google Groups "QLab" group.
To unsubscribe from this group and stop receiving emails from it, send an email to qlab+uns...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/qlab/ede58f07-94eb-4dc8-8c94-cd1c26709cf4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Sam Kusnetz

Apr 11, 2017, 10:06:21 AM
to ql...@googlegroups.com
Hello Sean

If I were your professor, the question I’d ask you is this: why are you using Live in this scenario? As far as I can tell from your screen shots, each Live cue could easily be rendered out as a set of multitrack audio stems, which you could then cue directly in QLab, thus eliminating Live and Max from the equation.

If you’re doing more involved stuff in Live than it seems, my question becomes: why are you using QLab? Why not just let Live be Live, and operate your show straight from there?

Best
Sam

Sam Kusnetz | Figure 53

Sam Kusnetz

Apr 12, 2017, 10:57:43 AM
to ql...@googlegroups.com
Tingle

Well… I cannot say that I fully understand, but of course I don’t need to understand! Whatever works for you, works for you and that’s terrific.

I can say that it really seems to me you've gotten yourself to the point where it boils down to "one tool has ABC features and DEF limitations, and the other tool has PQR features and XYZ limitations," and the answer to your conundrum, ultimately, is really just "yes, you have correctly observed the situation!"

Best
Sam
Sam Kusnetz | Figure 53

On April 12, 2017 at 10:50:08 AM, 'Sean Tingle' via QLab (ql...@googlegroups.com) wrote:

Thank you guys for your replies!

The reason I am using Live is that it gives me complete control over what I am putting through the system. I don't want to sit in tech constantly exporting all of my tracks; I have done it that way in the past and get annoyed when I need to change something. Using Live's Session View gives me not only immediate access to my content but also a range of live control that I haven't been able to achieve in QLab.

I also have to use QLab because Ableton cannot route audio to enough sends for the flexibility of a larger system; the most I can do purely in Live is 12 Sends. (Which works great! Definitely recommend it!) QLab lets me route my audio anywhere I want it to go, but I am then limited to 24 inputs on the QLab side.

Thanks for the help!

-Tingle

Johannes Halvorsen

Apr 13, 2017, 9:12:48 PM
to QLab
Why do you need the audio tracks to go through QLab at all? I don't have extensive experience with Soundflower, but isn't it perfectly capable of doing all the routing you need without throwing QLab into the mix?

If Soundflower can't do this, there are certainly other solutions that can, e.g. Rogue Amoeba's Loopback (32 channels).

You may not even need third-party software for this: if I remember correctly, OS X has built-in support for creating aggregate audio devices.

So: let Live handle its own outputs, and use QLab for control.

Jeremy Lee

Apr 17, 2017, 12:46:16 PM
to ql...@googlegroups.com
Sam to the rescue!

Sean has indeed correctly observed the situation. As his professor, I'm not actually being driven crazy by this exercise; I always encourage students to push things as far as they can until they break!

Sean designed and performed an EDM version of Romeo & Juliet in October using Live.  It really was fantastic. Experience it in my experiment of hemispherical video and ambisonic audio here:


The thing is that there was no way to make this production "hit-by-a-bus-proof." If Sean couldn't make it to the show, nobody else on the planet could possibly have performed it. He was truly pushing Live to its limit, but it has a limitation of only being able to address 12 "Aux Sends" per track.

QLab is awesome, but a totally different beast than Live. He was doing lots of live filter sweeps, FX changes, etc., as well as live tempo shifts and triggering of new layers in musical time.

We’re really just looking for a way for him to be as creative as possible in Live, while using QLab as an operator interface as well as matrix-routing and MIDI-automation master. We’re partway there, but it is a rather complicated beast...

-- 
Jeremy Lee
    Sound Designer - USA 829


