Virtual Audio Revisited

nlogmusic (TempoRubato - NLogSynth)

Oct 19, 2011, 9:24:05 PM
to Open Music App Collaboration
Hi,

maybe it is too late here and I had a glass of wine too much:

But in my last post in the Sonoma thread I put forward an idea (or better said a 'hope' ;-) to piggyback audio buffers over Virtual Core MIDI.

Perhaps this is an impulse to reopen the discussion of how to do virtual audio?

Best
Rolf

Hamilton Feltman

Oct 20, 2011, 9:52:33 PM
to open-music-app...@googlegroups.com
Hello, I'm Hamilton of Harmonicdog (MultiTrack DAW) and interested in the work you've done. Previously we had a group for inter-app audio (AppWire) but it didn't get far, only some discussion and a few simple bandwidth tests using mach ports. What you have done here for MIDI is great, and I'm considering adding MIDI tracks to MultiTrack DAW.

I like that you're trying to keep it open, the way it should be.

If MTD (MultiTrack DAW) can record MIDI tracks, it should be able to send them out, whether to an external output or to a virtual destination on the same device. If it sends them to the virtual destination, either that destination is responsible for rendering and playing them, or it needs to render them and package up the audio data and send it back to MTD, or both. Both would be ideal, because MTD can use this prerendered data to make playback much more efficient, and it would also reduce latency.

So MTD doesn't need to keep asking the app to render and play back a track, only when the MIDI data changes. This would also mean that the other app doesn't even need to be started most of the time, if you only want to play back a song and not change anything.

That is my reasoning for the ability to send audio data between applications: it reduces the amount of processing and bandwidth, basically a massive optimization that fits in with the way MTD works.

The single playback or DAW application would be responsible for starting all the dependent apps, if those apps are needed to render a track. The command line could be used to start the other app in a specific mode or preset, like previously discussed. There must be instructions telling the user to close the dependent application after it starts and return to the DAW or playback app.

I hope my thinking is not fuzzy about all of this; I'm trying to think about it as cleanly as possible, but it seems a bit complex. Again, thanks for doing what you've done, it's great.

Regards,
Hamilton

nlogmusic (TempoRubato - NLogSynth)

Oct 21, 2011, 2:48:42 AM
to Open Music App Collaboration
Hi Hamilton,

welcome!

I guess what you describe goes in the direction of what is called "freezing" tracks on DAWs. I thought about it as well, and this may fit very well into MTD!

As I understand your idea, there are two modes with regard to "instrument app MIDI" tracks:

First, a realtime mode. MTD sends MIDI events via virtual MIDI to the instrument app, and the app (NLog etc.) plays these events in realtime to audio out. MTD plays to audio out as well, and iOS mixes them together.
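For reference, the synth side of this is straightforward with Core MIDI. A minimal sketch in C (names like "NLogSynth In" are just illustrative):

    #include <CoreMIDI/CoreMIDI.h>

    static MIDIClientRef   gClient;
    static MIDIEndpointRef gVirtualDest;

    // Called by Core MIDI when the DAW sends events to our virtual port.
    static void MyReadProc(const MIDIPacketList *pktlist,
                           void *readProcRefCon, void *srcConnRefCon)
    {
        const MIDIPacket *packet = &pktlist->packet[0];
        for (UInt32 i = 0; i < pktlist->numPackets; ++i) {
            // packet->timeStamp is in host time; the synth engine should
            // schedule the event against its audio render clock.
            packet = MIDIPacketNext(packet);
        }
    }

    void PublishVirtualDestination(void)
    {
        MIDIClientCreate(CFSTR("NLogSynth"), NULL, NULL, &gClient);
        MIDIDestinationCreate(gClient, CFSTR("NLogSynth In"),
                              MyReadProc, NULL, &gVirtualDest);
    }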

Here it would be great if you could link the respective track volume & pan UI elements to the standard MIDI CCs for vol & pan. Thus, the user could control the playback vol & pan of the synth app from MTD without toggling over to the synth app.
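On the synth side this would just be a small mapping in the MIDI handler; a sketch (SynthSetVolume and SynthSetPan are hypothetical hooks into the engine):

    #include <stdint.h>

    // Hypothetical hooks into the synth engine.
    void SynthSetVolume(float gain);   // 0.0 .. 1.0
    void SynthSetPan(float pan);       // -1.0 .. 1.0

    // CC 7 = channel volume, CC 10 = pan (64 = center).
    static void HandleControlChange(uint8_t cc, uint8_t value)
    {
        switch (cc) {
            case 7:  SynthSetVolume(value / 127.0f);      break;
            case 10: SynthSetPan((value - 64) / 64.0f);   break;
        }
    }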

Second, a freeze mode where MTD plays a pre-rendered audio recording of the instrument track. For this the synth app needs to be able to record its output and send the result back to MTD. We already discussed this sending back and found several approaches to be useful, like using the AudioCopy pasteboard or MIDI SysEx messages.

This could be a potential workflow:

A defined MIDI message (like an agreed OMAC SysEx) would put the synth app in record mode. Since Core MIDI messages always carry timestamps with host time precision, we could agree that the timestamp of this "start record" message is used as the sync point, so that MTD can later insert the recording with sample precision.
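The conversion is simple; a sketch of how MTD could turn the "start record" timestamp into a sample offset (assuming MTD remembers the host time of its own timeline start):

    #include <CoreMIDI/CoreMIDI.h>
    #include <mach/mach_time.h>

    static double HostTimeToSeconds(MIDITimeStamp hostTime)
    {
        static mach_timebase_info_data_t tb;        // host ticks -> nanoseconds
        if (tb.denom == 0) mach_timebase_info(&tb);
        return (double)hostTime * tb.numer / tb.denom / 1e9;
    }

    SInt64 SyncPointToSampleOffset(MIDITimeStamp startRecordTS,
                                   MIDITimeStamp timelineStartTS,
                                   double sampleRate)
    {
        double dt = HostTimeToSeconds(startRecordTS)
                  - HostTimeToSeconds(timelineStartTS);
        return (SInt64)(dt * sampleRate + 0.5);     // round to whole samples
    }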

MTD then sends its playback MIDI messages to the synth app.

When done, another defined OMAC SysEx stops recording and the synth app puts the recording onto the pasteboard. Since this could take a little while, it would be good if the synth app, after finishing the pasteboard work, sent a third OMAC SysEx like "recording on pasteboard ready". After receiving this, MTD would grab the audio data from the pasteboard.
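On the MTD side, sending the "start record" message could look roughly like this (a sketch; the header bytes and the 0x01 selector are just what I currently use in my beta, see below in the thread, nothing is fixed yet):

    #include <CoreMIDI/CoreMIDI.h>

    static const Byte kStartRecord[] =
        { 0xF0, 0x00, 0x00, 0x00, 'o', 'm', 'a', 'c', 0x01, 0xF7 };

    void SendStartRecord(MIDIPortRef outPort, MIDIEndpointRef synthDest,
                         MIDITimeStamp syncPoint)
    {
        Byte buffer[128];
        MIDIPacketList *pktlist = (MIDIPacketList *)buffer;
        MIDIPacket *pkt = MIDIPacketListInit(pktlist);
        // The timestamp of this message doubles as the agreed
        // sample-accurate sync point for the recording.
        pkt = MIDIPacketListAdd(pktlist, sizeof(buffer), pkt, syncPoint,
                                sizeof(kStartRecord), kStartRecord);
        MIDISend(outPort, synthDest, pktlist);
    }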

This would be a realtime recording scenario, which means that MTD needs to play (& the user wait) exactly the time needed to play the track. Of course, an offline bounce mode would also be thinkable. Here, MTD could send a MIDI file to the synth app, which simply gets rendered there. However, there is no MIDI pasteboard definition yet, and to my information from Sonoma they are working on something but won't get anything ready soon. So either we have to define our own MIDI pasteboard, or we use SysEx for this.

The other question is: is that offline rendering mode worth the effort if you take into consideration that most current "pro" ;-) synths max out the CPU quite a bit anyhow?


The "freeze" mode would also have the advantage that MTD could add
effects
and other audio processing on the synth track, which is currently not
possible
in realtime mode unless we have implemented virtual audio.


If you want to go further here, I am happy to volunteer with NLog since everything is already there: recording, pasteboard, virtual MIDI etc. We only need to agree on the three OMAC SysEx messages.

Cheers
Rolf

nlogmusic (TempoRubato - NLogSynth)

Oct 21, 2011, 11:32:50 AM
to Open Music App Collaboration
Hi

I have implemented a beta version of NLogSynth PRO which can run & record in the background and afterwards puts the recording on the pasteboard as described in my previous post.

If anyone wants to test this and experiment with their own app, just email me for an ad hoc version.

Cheers
Rolf


Hamilton Feltman

Oct 22, 2011, 4:28:03 AM
to open-music-app...@googlegroups.com
Good job. And thanks for the warm welcome, Rolf!

About playback in a DAW and sending to the synth app in realtime: I guess this is different from performing on one app and rendering on the other, as you can queue up MIDI messages beforehand (the way Core MIDI works anyway), since each message can be timestamped and queued. Therefore no latency. Is this what you had in mind?
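For example, a sketch of queueing a note ahead of its play time (Core MIDI delivers the packet early and the receiver honors the timestamp):

    #include <CoreMIDI/CoreMIDI.h>
    #include <mach/mach_time.h>

    void SendNoteOnAhead(MIDIPortRef outPort, MIDIEndpointRef dest,
                         double secondsFromNow)
    {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);                   // nanoseconds -> host ticks
        uint64_t delta = (uint64_t)(secondsFromNow * 1e9) * tb.denom / tb.numer;
        MIDITimeStamp when = mach_absolute_time() + delta;

        Byte noteOn[3] = { 0x90, 60, 100 };        // channel 1, C4, velocity 100
        Byte buffer[64];
        MIDIPacketList *pktlist = (MIDIPacketList *)buffer;
        MIDIPacket *pkt = MIDIPacketListInit(pktlist);
        pkt = MIDIPacketListAdd(pktlist, sizeof(buffer), pkt, when,
                                sizeof(noteOn), noteOn);
        MIDISend(outPort, dest, pktlist);
    }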

The freeze thing is a great idea. In fact, it might be the only way to make this work smoothly for many tracks and synths/samplers at the same time. MTD does a lot of this already, sort of a background autofreezing system (block based), and it only uses a couple of freeze tracks for all 24 stereo tracks in MTD! So this would fit in nicely with the way that works.

I like the idea of using SysEx and embedding audio into it; it seems the cleanest way with the least latency, as well as the least likely to cause problems with the Apple review process. The idea of using the pasteboard seems like a kludge, and would add a user wait step for no good reason.

If the synth app could receive input from MTD, render in realtime and send to Core Audio, and also send it back as audio in SysEx messages, this would be ideal, as it would use the least amount of processing and MTD could simply write it to a file as the packets come in.
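The receive side in MTD could be as simple as this sketch (assuming an agreed framing, say F0 + 7-byte magic + selector before the payload, and 7-bit packing of the audio, since SysEx data bytes can't have the high bit set):

    #include <stdint.h>
    #include <stdio.h>

    // Undo the 8-to-7 packing: each group is one byte of collected MSBs
    // followed by up to 7 data bytes with the high bit stripped.
    static size_t Unpack7to8(const uint8_t *in, size_t inLen, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < inLen; ) {
            uint8_t msbs = in[i++];
            for (int j = 0; j < 7 && i < inLen; ++j)
                out[o++] = (uint8_t)(in[i++] | (((msbs >> j) & 1) ? 0x80 : 0));
        }
        return o;
    }

    // Hypothetical framing: F0, 7-byte magic, selector, payload, F7.
    // Assumes chunks of a few KB so the stack buffer is large enough.
    void HandleAudioSysex(const uint8_t *sysex, size_t len, FILE *capture)
    {
        if (len < 10) return;
        uint8_t pcm[8192];
        size_t n = Unpack7to8(sysex + 9, len - 10, pcm);
        fwrite(pcm, 1, n, capture);                // append raw PCM
    }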

Another, simpler alternative is that the DAW app doesn't retain the MIDI data at all; it simply listens for audio-in-SysEx messages (or the mach ports way, if that's what is decided) and records them to files. This is similar to the initial thoughts on sending audio data with AppWire. This would definitely be simpler for the DAW application.

I guess a clean workflow needs to be developed: how to arm tracks, which messages are sent, the states of each app, and the communications/modes of all apps involved. I'm always of the mindset that the fewer "states" the better, as it's simpler for the user.

Anyway, you gave me a lot to think about and I'm going to get started :)

Regards,
Hamilton

nlogmusic (TempoRubato - NLogSynth)

Oct 22, 2011, 5:05:49 AM
to Open Music App Collaboration
"Therefor no latency. Is
this what you had in mind?"

Yes, exactly!

"The idea about using the pasteboard seems like a
kludge, and would add a user wait step for no good reason."

Not really, there is no user step involved. The current NLog beta automatically puts the recorded wav data on the pasteboard and just signals the DAW with a SysEx that new data is ready. Thus, no user interaction is needed.

Of course, if preferred I can also package the wave data into a SysEx.
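The packing itself would be the usual 8-to-7 scheme (as in the MIDI sample dump standard): 7 audio bytes become 8 MIDI bytes, so the data grows by a factor of 8/7. A sketch (the output buffer must hold inLen * 8 / 7 + 1 bytes):

    #include <stdint.h>

    size_t Pack8to7(const uint8_t *in, size_t inLen, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < inLen; i += 7) {
            size_t n = (inLen - i < 7) ? (inLen - i) : 7;
            uint8_t msbs = 0;
            for (size_t j = 0; j < n; ++j)         // collect the high bits
                msbs |= (uint8_t)((in[i + j] >> 7) << j);
            out[o++] = msbs;
            for (size_t j = 0; j < n; ++j)         // 7-bit-clean data bytes
                out[o++] = in[i + j] & 0x7F;
        }
        return o;
    }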

"... and also send it back as audio in sysex messages, this would
be ideal"

Oh yes, and in addition: if the DAW sends the MIDI data well ahead in time, then the audio SysEx messages would also arrive at the DAW ahead of time (minus synth buffer & Core MIDI latency), so that the DAW would be able to play the MIDI track with no latency!

There might even be the option that the synth doesn't output to Core Audio at all, because if the audio data arrives well ahead of time at the DAW, then the DAW can mix and even effect it.

This is true when the MIDI synth track in the DAW plays back ready-made MIDI data. If the user starts to edit it while running, the data must also be edited 'ahead of time', but that should be OK.

In the mode where somebody wants to record MIDI in realtime into a synth MIDI track, then due to the latency issues, the monitoring of the recording should be done by the synth app, which outputs directly to Core Audio.

So we probably need a way to control whether the synth app outputs to Core Audio as well or not. This could again be switched by SysEx.

The basic question is whether Core MIDI allows for enough throughput. But if programmed well by Apple, I see no reason why it shouldn't. I will prepare some tests.
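A first test could be as simple as timing one big SysEx send (a sketch; note this measures the handoff to the MIDI server, so the receiving app would have to timestamp arrival for the end-to-end picture):

    #include <CoreMIDI/CoreMIDI.h>
    #include <mach/mach_time.h>
    #include <stdio.h>

    static uint64_t gStart;
    static UInt32   gLen;

    static void SendDone(MIDISysexSendRequest *req)
    {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);
        double secs = (double)(mach_absolute_time() - gStart)
                      * tb.numer / tb.denom / 1e9;
        printf("%u bytes in %.3f s (%.1f KB/s)\n",
               (unsigned)gLen, secs, gLen / secs / 1024.0);
    }

    void ProbeThroughput(MIDIEndpointRef dest, Byte *sysex, UInt32 len)
    {
        static MIDISysexSendRequest req;   // must outlive the async send
        req.destination      = dest;
        req.data             = sysex;
        req.bytesToSend      = len;
        req.complete         = false;
        req.completionProc   = SendDone;
        req.completionRefCon = NULL;
        gStart = mach_absolute_time();
        gLen   = len;
        MIDISendSysex(&req);
    }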

In regards to SysEx, we do not have a manufacturer ID for OMAC. What I am currently using is a magic of 7 bytes after the 0xF0:

0x00, 0x00, 0x00, 0x6f, 0x6d, 0x61, 0x63

The first 3 bytes are just zeros so as not to interfere with any manufacturer ID. The last 4 bytes are the ASCII values for 'omac' in lower case. Since we do not have an organizational body for OMAC, I am considering applying for a manufacturer ID for TempoRubato and giving it free for OMAC use (I would just replace the three zeros with the TempoRubato ID, and the next 4 bytes would still be 'omac' in order to be able to differentiate it from our own use).
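Checking for the magic is trivial; a sketch:

    #include <stdint.h>
    #include <string.h>

    static const uint8_t kOmacMagic[7] =
        { 0x00, 0x00, 0x00, 0x6f, 0x6d, 0x61, 0x63 };  // 0,0,0,'o','m','a','c'

    int IsOmacSysex(const uint8_t *msg, size_t len)
    {
        // F0 + 7-byte magic + at least a selector byte.
        return len >= 9 && msg[0] == 0xF0
            && memcmp(msg + 1, kOmacMagic, 7) == 0;
    }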

After the 7-byte magic, I currently use the next byte in my beta as a message selector:

Received by synth app:

0x01 for start audio recording
0x02 for stop audio recording & copy wav to pasteboard
0x03 for start MIDI recording
0x04 for stop MIDI recording

Received by DAW app:

0x05 recorded audio is available on the Intua pasteboard
0x06 recorded MIDI embedded in sysex

For the extensions mentioned above we could simply use:

0x07 to signal the synth app to output to CoreAudio only silence
0x08 to signal the synth app to output to CoreAudio again its sound

0x09 would be an embedded audio package sent to the DAW

There could be 255 message selectors, keeping the value 0x00 reserved for future extensions.
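As a sketch, the selector byte (right after the magic) could live in a small shared header; the constant names are just my suggestion:

    typedef enum {
        kOmacReserved          = 0x00,  // reserved for future extensions
        kOmacStartAudioRecord  = 0x01,  // synth: start audio recording
        kOmacStopAudioRecord   = 0x02,  // synth: stop & copy wav to pasteboard
        kOmacStartMidiRecord   = 0x03,  // synth: start MIDI recording
        kOmacStopMidiRecord    = 0x04,  // synth: stop MIDI recording
        kOmacAudioOnPasteboard = 0x05,  // DAW: audio ready on Intua pasteboard
        kOmacMidiInSysex       = 0x06,  // DAW: recorded MIDI embedded in sysex
        kOmacOutputSilence     = 0x07,  // synth: output silence to CoreAudio
        kOmacOutputSound       = 0x08,  // synth: output sound again
        kOmacAudioPacket       = 0x09   // DAW: embedded audio package
    } OmacSelector;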

Anyway, this is just a simple beginning and things may change later if we find it's stupid ;-)
Any input is welcome! And if any DAW developer wants to play with NLog betas, just email.

Cheers
Rolf

Hamilton Feltman

Oct 24, 2011, 4:27:44 AM
to open-music-app...@googlegroups.com
Hey, nice! I'm going to send you a UID. I just got a Tascam US-800 for messing with MIDI.

Paul Slocum (SOFTOFT TECHECH)

Oct 24, 2011, 2:06:04 PM
to Open Music App Collaboration
I certainly don't have much influence with Apple, but do you guys think there's any possibility we could convince Apple to include virtual audio ports? If Apple would create some inter-app audio units/buffers and bundle them with the virtual MIDI port, then we'd have a VST-like system, and it seems like this could be introduced with minimal changes to existing apps. The DAWs could then freeze tracks, control the volume of and add effects to other apps, etc. Plus we could start developing audio effects apps. It seems possible that Apple might already be working on or planning something like this, since it is the next major step towards making iOS competitive with other platforms for creating music.

That said, using the audio pasteboard and SysEx MIDI, or just SysEx, as a workaround are great ideas. How fast are these methods for transferring data between apps?
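For a rough sense of scale (back-of-envelope, not a measurement): 16-bit/44.1 kHz mono audio is 88,200 bytes per second, and the 8-to-7 SysEx packing inflates that to roughly 100 KB/s per mono stream, so about 200 KB/s for stereo plus header overhead. The question is whether Core MIDI or mach ports can sustain a few hundred KB/s between apps; Rolf's planned tests should tell.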