Background MIDI comes very close to addressing this issue; the
remaining problems are mostly limitations on what can be
represented in MIDI, or cases where the controller happens to
have a really great sound engine that the recording background synth
just can't match.
As I understand it, copy/paste is limited to non-real-time transfers
of audio already recorded, so I have always immediately dismissed it
as a solution to the problem of properly doing audio tracks from an
on-board controller. (I.e., music decidedly not oriented around
recording short loops, but around multi-track bouncing of very long
performances.)
Is iOS 5 bringing something new? Something to do with Mach ports?
Something that cannot be discussed here? Or do I have this wrong?
I bet everyone in this group thinks about abusing MIDI as an
arbitrary byte stream, and I have as well. :-) But then I come to my
senses and think to myself... what a mess! But yes, MIDI can be abused
into being a subliminal channel (i.e., an alternate way to transport
information; another example is streaming MPEG over DNS so that you
can watch internet videos over a hotspot at a cafe even though it
doesn't let you actually connect to the internet, etc.).
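To make the "abusing MIDI" idea concrete: SysEx data bytes must have the high bit clear, so stuffing arbitrary bytes through MIDI requires a 7-bit encoding. A common trick (this is a minimal sketch of the general technique, not any particular app's wire format) packs each group of up to 7 payload bytes behind one header byte that carries their stripped MSBs:

```c
#include <stddef.h>
#include <stdint.h>

/* Pack `len` arbitrary bytes into SysEx-safe 7-bit bytes: each group of
 * up to 7 payload bytes is preceded by one byte holding their stripped
 * MSBs. Returns the number of bytes written (ceil(len/7) extra bytes). */
size_t pack7(const uint8_t *in, size_t len, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < len; i += 7) {
        size_t n = (len - i < 7) ? len - i : 7;
        size_t msb_pos = o++;          /* reserve the header byte */
        out[msb_pos] = 0;
        for (size_t j = 0; j < n; j++) {
            out[msb_pos] |= (uint8_t)((in[i + j] >> 7) << j);
            out[o++] = in[i + j] & 0x7F;
        }
    }
    return o;
}

/* Inverse: restore the MSBs from each header byte. */
size_t unpack7(const uint8_t *in, size_t len, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        uint8_t msbs = in[i++];
        for (size_t j = 0; j < 7 && i < len; j++, i++)
            out[o++] = (uint8_t)(in[i] | (uint8_t)(((msbs >> j) & 1) << 7));
    }
    return o;
}
```

The ~14% size overhead is part of why this gets messy for audio-rate data.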
Mach ports come to mind, and I have read enough about them existing on iOS
that I have my doubts that they are strictly forbidden. They appear to
provide an arbitrary byte stream with unknown latency characteristics,
correct? The ideal thing would be to push render buffers into one at
the place where the audio callback normally sits. Maybe it's doable but
not standardized? I think the right approach is to keep using MIDI
until it just gets so weird that it's no longer compatible enough to
justify the complexity of transporting over it, at which point we
investigate OSC. (OSC seems like such a good idea, but you can't do
anything with music over Wi-Fi, which seems to be the current
option... the latency guarantees are just nowhere near where they need
to be, for starters.)
1) Can you do Grand Central Dispatch between processes?
2) If so, are there obvious disadvantages to it, like callback
scheduling introducing latency and creating an unreasonable minimum
latency floor?
3) Is CoreMIDI basically just some trickery with Mach ports?
In the kernel, inter-process communication is carried out with Mach ports. The user-land API for this is CFMessagePort, so BSD sockets etc. are simply a stack on top of that. At the basic level, a task can hold a reference to a port; the port can send or receive a queue of messages, and a message is just a data structure.
On OS X, CFMessagePort works inter-process as well as inter-thread. The CFMessagePort is attached to a run loop and you get a callback on receive. I'm now using CFMessagePort for inter-thread communication in Arctic rather than the higher-level NSNotification and performSelectorOnMainThread. I've yet to submit this to the App Store, but it passes Xcode's Validation step.
In theory, all you'll need is some way to advertise which processes are available, plus a memory buffer; then you can copy the audio data directly between processes. In practice, you create a CFDataRef and send it to the port; you then get a receive callback which copies the data into a new buffer.
Check the sample code for BackgroundExporter. If you can do this on iOS then you can open remote ports and talk between processes. It's very likely that this is how CoreMIDI actually implements its virtual ports: there's simply a dictionary that everyone registers against, each port has a UID, and from there the lower levels know which (Mach) port to send to. Even though the tight timing means this is probably implemented in kernel space in the CoreAudio driver, the IPC is still likely done with Mach ports.
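The actual CFMessagePort/Mach calls are platform-specific, but the semantics described above (a task holds a reference to a port; the port carries a queue of messages; a message is just a data structure) can be modeled in plain C. This is purely an illustrative toy; real Mach ports add send/receive rights, kernel buffering, and cross-process delivery:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of a Mach-style port: a FIFO queue of opaque messages. */
typedef struct Message {
    struct Message *next;
    size_t len;
    uint8_t data[];              /* flexible array member: the payload */
} Message;

typedef struct {
    Message *head, *tail;
} Port;

/* Sender side: copy the payload into a message and enqueue it
 * (analogous to wrapping data in a CFDataRef and sending it). */
void port_send(Port *p, const void *buf, size_t len) {
    Message *m = malloc(sizeof(Message) + len);
    m->next = NULL;
    m->len = len;
    memcpy(m->data, buf, len);
    if (p->tail) p->tail->next = m; else p->head = m;
    p->tail = m;
}

/* Receiver side: dequeue the oldest message, or NULL if the queue is
 * empty (analogous to the receive callback firing with new data).
 * The caller owns the returned message and must free() it. */
Message *port_receive(Port *p) {
    Message *m = p->head;
    if (m) {
        p->head = m->next;
        if (!p->head) p->tail = NULL;
    }
    return m;
}
```

The important property for audio is that every hop copies the payload once, so the cost scales with buffer size, not message count.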
IMHO, this would be the only way to actually meet the real requirement of being able to bounce audio tracks correctly. It fits conceptually with background MIDI as well. My instrument is geared towards long improvisations, and recording little snippets misses the point completely.
I am currently balking at requests for ACP because that is a lot of work to get only part of what I really want, especially because the iOS memory model is to crash you when you are overbooked on memory. I also think it ridiculous that controllers not only embed a synth but a mini track recorder as well, rather than communicating with apps that have this as their core competence.
Sent from my iPhone, which is why everything is misspelled. http://rfieldin.appspot.com
I'm on it!
Michael Tyson | atastypixel.com
A Tasty Pixel: Artisan apps
I'd definitely be interested to hear about your impressions thus far, regarding the realtime SysEx transport - how far have you gotten to date? Does it look like it might be viable?
If it does, I'd love to put together a library and invite app developers to start supporting it - I think it could be quite a significant move!
> The bad news (though I'm nowhere near done exploring yet) is that, in "live mode" (where old samples are dropped to keep latency down), there's a lot of stuttering when doing almost anything on the device, even innocuous things like scrolling in a table view. I have no idea how backgrounding is implemented in iOS, but I suspect that background apps are given a low execution priority, presumably excepting the high-priority Core Audio thread. This isn't a problem when the audio at the end of the pipe isn't being played live (for example, when recording in one app a performance from another instrument app, where the audio timestamp is sufficient to record the audio at the right time - I'm very optimistic about this scenario), but for live audio a solution will need to be found before it's viable.
IMHO virtual MIDI ports are implemented on top of Mach ports (I use these to pass messages between the audio thread and the UI main thread) and the memory buffers are malloc'ed. I think you'll have to buffer the audio at the receiving thread (something like CARingBuffer) and then pass it off to the audio render proc. If you're copying the audio from the MIDI receive callback straight into the audio render proc, I think you'll get dropped audio. I'm also not sure what would happen if you overly delay the MIDIReadProc. Of course, if you're already doing this then please disregard my thread hijack.
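For anyone following along, here is a minimal single-producer/single-consumer ring buffer in the spirit of what CARingBuffer does (the names and sizes are mine, not Apple's): the MIDI receive callback pushes samples, and the render proc pops them. A single-threaded sketch; a real one would make the positions atomic:

```c
#include <stddef.h>
#include <stdint.h>

#define RB_CAPACITY 4096   /* power of two, so we can mask instead of modulo */

typedef struct {
    float buf[RB_CAPACITY];
    /* In production these would be atomics (C11 _Atomic); plain size_t
     * is fine for a single-threaded illustration. Positions grow
     * monotonically; the difference is the fill level. */
    size_t write_pos;
    size_t read_pos;
} RingBuffer;

static size_t rb_available(const RingBuffer *rb) {
    return rb->write_pos - rb->read_pos;
}

/* Producer (MIDI receive callback): returns frames actually stored;
 * excess frames are dropped rather than blocking the callback. */
size_t rb_push(RingBuffer *rb, const float *src, size_t frames) {
    size_t space = RB_CAPACITY - rb_available(rb);
    if (frames > space) frames = space;
    for (size_t i = 0; i < frames; i++)
        rb->buf[(rb->write_pos + i) & (RB_CAPACITY - 1)] = src[i];
    rb->write_pos += frames;
    return frames;
}

/* Consumer (audio render proc): returns frames actually read; the
 * caller zero-fills any shortfall to avoid glitches. */
size_t rb_pop(RingBuffer *rb, float *dst, size_t frames) {
    size_t avail = rb_available(rb);
    if (frames > avail) frames = avail;
    for (size_t i = 0; i < frames; i++)
        dst[i] = rb->buf[(rb->read_pos + i) & (RB_CAPACITY - 1)];
    rb->read_pos += frames;
    return frames;
}
```

The design choice that matters here is that neither side ever blocks: the producer drops on overflow and the consumer underruns on empty, which is exactly the trade-off behind "live mode".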
AUHAL and aggregate devices would be ideal
Actually, I am already using a ring buffer to store the incoming audio, which is then drained by the render thread. It occurs to me that my earlier theory is totally wrong, though - the holdup isn't at the sender's end, it's at the receiver's, as it has to skip buffers to keep the latency low. Turning live mode off to avoid skipping buffers prevents the glitching, but causes big latency issues.
I'm just tweaking the receiver now, trying to figure out where the bottleneck is; I was using a GCD queue to do the processing of the incoming MIDI packets, which might be lagging behind when things get busy. I'm getting rid of that double-handling and moving the processing straight into the buffer drain routine, which may help.
Lovely - that's totally fixed it! Live audio now coming through smoothly and with nice low latency.
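The fix described above (dropping the intermediate dispatch queue and decoding packets directly in the drain routine) can be sketched roughly like this; all the names and the "decoding" are illustrative, not the actual app's code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical packet type: a chunk of audio bytes received via MIDI. */
typedef struct {
    const uint8_t *bytes;
    size_t len;
} Packet;

/* Before the fix: the receive callback handed each packet to a worker
 * queue (e.g. GCD) that decoded it later - an extra hop that lagged
 * behind under load. After the fix: decode inline while draining, so
 * the render thread always consumes the freshest audio. */
size_t drain_and_decode(const Packet *pending, size_t count,
                        float *out, size_t max_frames) {
    size_t written = 0;
    for (size_t p = 0; p < count && written < max_frames; p++) {
        for (size_t i = 0; i < pending[p].len && written < max_frames; i++) {
            /* Toy "decoding": scale unsigned 8-bit samples to [-1, 1). */
            out[written++] = (pending[p].bytes[i] - 128) / 128.0f;
        }
    }
    return written;
}
```

The point is structural: one less handoff means one less place for packets to pile up when the device gets busy.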
If there is anything we can test, just let us know!
Also, it would be interesting to check with the Apple review process
whether they have anything against it. So, before
doing too much functional development on it, it might be wise to put
something in for review early to check it out.
You can even hold the app store release back.
Anyway, cheers to this great stuff!
Audio Damage, Inc.
Does anyone know if there's a way to actually directly ask the review team questions (like, "is this okay"), or do we truly have to go through the entire app submission pantomime to test acceptability? If so, it seems rather inefficient =)
Audio Damage, Inc.
It would be great if you put the code up in a git repository somewhere!
Is the way this is going to be accomplished going to involve any closed source third party libraries? How's the behavior of apps running in the background going to be changed? These are the questions I'd probably have to address.
My skype nick is sebastian.dittmann - if Michael (or anyone else) wants to contact me about this.