Background MIDI comes very close to addressing this issue; the
remaining problems are mostly the limitations of representing certain
things in MIDI, or cases where the controller happens to have a really
great sound engine that the recording app's background synth just
can't match.
As I understand it, copy/paste is limited to non-real-time transfers
of audio that has already been recorded, so I have always immediately
dismissed it as a solution to the problem of properly doing audio
tracks from an on-board controller. (I.e., music decidedly not
oriented around recording short loops, but around multi-track bouncing
of very long improvisations, etc.)
Is iOS 5 bringing something new? Something to do with Mach ports?
Something that cannot be discussed here? Or do I have this wrong?
I bet everyone in this group thinks about abusing MIDI as an
arbitrary byte stream; I have as well. :-) But then I come to my
senses and think to myself... what a mess! But yes, MIDI can be abused
into being a subliminal channel, i.e. an alternate way to transport
information. (Another example is streaming MPEG over DNS so that you
can watch internet videos at a cafe hotspot even though it doesn't let
you actually connect to its internet; etc.)
Mach ports come to mind, and I have read enough about them existing on
iOS that I doubt they are strictly forbidden. A Mach port appears to
be an arbitrary byte stream with unknown latency characteristics,
correct? The ideal thing would be to push render buffers into it at
the place where the audio callback normally sits. Maybe it's doable but
not standardized? I think the right approach is to keep using MIDI
until it gets so weird that it's no longer compatible enough to
justify the complexity of transporting over it; at which point we
investigate OSC. (OSC seems like such a good idea, but you can't do
anything with music over Wi-Fi, which seems to be the current
option... the latency guarantees are just nowhere near where they need
to be, for starters.)
1) Can you do Grand Central Dispatch between processes?
2) If so, are there obvious disadvantages to it, like callback
scheduling introducing latency and creating unreasonable minimum
buffer sizes?
3) Is CoreMIDI basically just some trickery with mach ports?
In the kernel, inter-process communication is carried out with Mach ports. The user-land API for this is CFMessagePort, so BSD sockets etc. are simply a stack on top of that. At the basic level, a task can hold a reference to a port, and the port can either send or receive a queue of messages. A message is a data structure.
On OS X, CFMessagePort works inter-process as well as inter-thread. The CFMessagePort is attached to a run loop and you get a callback on receive. I'm now using CFMessagePort to do inter-thread communication in Arctic rather than the higher-level NSNotification and performSelectorOnMainThread. I've yet to submit this to the App Store, but it passes Xcode's Validation step.
In theory, all you'll need is some way to advertise which processes are available, plus a memory buffer; then you can copy the audio data directly between processes. In practice, what happens is that you create a CFDataRef and send it to the port; you then get a receive callback, which copies the data into a new buffer.
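To make that flow concrete, here is a minimal sketch in C of the CFMessagePort pattern being described: one process registers a named local port on its run loop, the other looks it up as a remote port and fires a CFDataRef of audio bytes at it. The port name and the buffer handling are invented for illustration, and whether the named-port lookup actually works between sandboxed iOS apps is exactly the open question here.

    #include <stdio.h>
    #include <CoreFoundation/CoreFoundation.h>

    /* Receiver side: called on the run loop whenever a message arrives.
       'data' is only valid for the duration of the callback, so copy it
       out (e.g. into a ring buffer that the render callback later drains). */
    static CFDataRef ReceiveCallback(CFMessagePortRef local, SInt32 msgid,
                                     CFDataRef data, void *info) {
        printf("received %ld bytes (msgid %d)\n",
               data ? (long)CFDataGetLength(data) : 0L, (int)msgid);
        return NULL; /* no reply */
    }

    static void SetUpReceiver(void) {
        CFMessagePortRef local = CFMessagePortCreateLocal(
            kCFAllocatorDefault, CFSTR("com.example.audio-share"),
            ReceiveCallback, NULL, NULL);
        if (!local) return;
        CFRunLoopSourceRef source =
            CFMessagePortCreateRunLoopSource(kCFAllocatorDefault, local, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
        CFRelease(source); /* 'local' is kept alive for the life of the app */
    }

    /* Sender side: wrap a buffer in a CFDataRef and post it, fire-and-forget. */
    static void SendBuffer(const void *bytes, size_t byteCount) {
        CFMessagePortRef remote = CFMessagePortCreateRemote(
            kCFAllocatorDefault, CFSTR("com.example.audio-share"));
        if (!remote) return; /* receiver not running, or name not visible */
        CFDataRef payload = CFDataCreate(kCFAllocatorDefault,
                                         (const UInt8 *)bytes, (CFIndex)byteCount);
        CFMessagePortSendRequest(remote, 0, payload,
                                 1.0 /* send timeout */, 0.0, NULL, NULL);
        CFRelease(payload);
        CFRelease(remote);
    }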
Check the sample code for BackgroundExporter. If you can do this on iOS, then you can open remote ports and talk between processes. It's very likely that this is how CoreMIDI actually implements virtual ports: they simply have a dictionary that everyone registers against, each port has a UID, and from there the lower levels know which (Mach) port to send to. Though this is likely implemented in kernel space in the CoreAudio driver because of the tight timing, the IPC is still probably done with Mach ports.
IMHO, this would be the only way to actually meet the real requirement of being able to bounce audio tracks correctly. It fits with background MIDI conceptually as well. My instrument is geared towards long improvisations, and recording little snippets misses the point completely.
I am currently balking at requests for ACP (audio copy/paste) because that is a lot of work to get only part of what I really want, especially because the iOS memory model is to crash you when you are overbooked for memory. I also think it's ridiculous that controllers not only embed a synth, but a mini track recorder as well, rather than communicating with apps that do this as their core competence.
Sent from my iPhone, which is why everything is misspelled. http://rfieldin.appspot.com
http://rrr00bb.blogspot.com
I'm on it!
--
Michael Tyson | atastypixel.com
A Tasty Pixel: Artisan apps
aim: mikerusselltyson
twitter: MichaelTyson
I'd definitely be interested to hear about your impressions thus far, regarding the realtime SysEx transport - how far have you gotten to date? Does it look like it might be viable?
If it does, I'd love to put together a library and invite app developers to start supporting it - I think it could be quite a significant move!
Cheers,
Michael
--
Michael Tyson | atastypixel.com
A Tasty Pixel: Artisan apps
aim: mikerusselltyson
twitter: MichaelTyson
Very cool!
> The bad news (but I'm nowhere near done exploring yet) is that, in "live mode" (where old samples are dropped to keep latency down), there's a lot of stuttering when doing almost anything on the device, even innocuous things like scrolling in a table view. I have no idea how backgrounding is implemented in iOS, but I suspect that background apps are given a low execution priority, presumably except for the high-priority Core Audio thread. This isn't a problem when the audio at the end of the pipe isn't being played live (for example, when recording in one app a performance from another instrument app, where the audio timestamp is sufficient to record the audio at the right time - I'm very optimistic about this scenario), but for live audio, a solution will need to be found before it's viable.
>
IMHO virtual MIDI ports are implemented on top of Mach ports (I use these to pass messages between the audio thread and the UI main thread) and the memory buffers are malloc'ed. I think you'll have to buffer the audio at the receiving thread (something like CARingBuffer) and then pass it off to the audio render proc. If you're copying the audio from the MIDI receive callback straight into the audio render proc, I think you'll get dropped audio. I'm also not sure what would happen if you overly delay the MIDIReadProc. Of course, if you're already doing this then please disregard my thread hijack.
AUHAL and aggregate devices would be ideal
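In case it helps, here is a bare-bones sketch in C of that buffering scheme (a single-producer/single-consumer ring, in the spirit of CARingBuffer but not that code): the MIDI receive side writes decoded samples in, the render proc drains them and pads with silence on underrun. The names and the capacity are invented for the example.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_CAPACITY 16384 /* frames; must be a power of two */

    typedef struct {
        float buffer[RING_CAPACITY];
        _Atomic uint32_t writePos; /* advanced only by the MIDI receive thread */
        _Atomic uint32_t readPos;  /* advanced only by the audio render thread */
    } AudioRing;

    /* Producer: call from the MIDI receive side after decoding back to floats.
       Returns how many frames were actually stored (excess is dropped). */
    static uint32_t RingWrite(AudioRing *ring, const float *src, uint32_t frames) {
        uint32_t w = atomic_load_explicit(&ring->writePos, memory_order_relaxed);
        uint32_t r = atomic_load_explicit(&ring->readPos, memory_order_acquire);
        uint32_t space = RING_CAPACITY - (w - r);
        if (frames > space) frames = space;
        for (uint32_t i = 0; i < frames; i++)
            ring->buffer[(w + i) & (RING_CAPACITY - 1)] = src[i];
        atomic_store_explicit(&ring->writePos, w + frames, memory_order_release);
        return frames;
    }

    /* Consumer: call from the render proc; fills with silence on underrun. */
    static void RingRead(AudioRing *ring, float *dst, uint32_t frames) {
        uint32_t r = atomic_load_explicit(&ring->readPos, memory_order_relaxed);
        uint32_t w = atomic_load_explicit(&ring->writePos, memory_order_acquire);
        uint32_t avail = w - r;
        uint32_t n = frames < avail ? frames : avail;
        for (uint32_t i = 0; i < n; i++)
            dst[i] = ring->buffer[(r + i) & (RING_CAPACITY - 1)];
        memset(dst + n, 0, (frames - n) * sizeof(float));
        atomic_store_explicit(&ring->readPos, r + n, memory_order_release);
    }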
cheers
peter
Actually, I already am using a ring buffer to store the incoming audio, which is then drained by the render thread. It occurs to me that my prior theory is totally wrong, though - the holdup isn't at the sender's end, it's on the receiver, as it has to skip buffers to keep the latency low. Turning live mode off to avoid skipping buffers prevents the glitching, but causes big latency issues.
I'm just tweaking the receiver now, trying to figure out where the bottleneck is; I was using a GCD queue to do the processing of the incoming MIDI packets, which might be lagging behind when things get busy. I'm getting rid of that double-handling and moving the processing straight into the buffer drain routine, which may help.
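For what it's worth, the shape of that change is roughly the following (reusing the hypothetical AudioRing/RingRead sketch from earlier in the thread, and assuming a mono float stream format for brevity): the MIDI receive callback decodes straight into the ring, and the render callback drains it directly, with no GCD queue in between.

    #include <AudioToolbox/AudioToolbox.h>

    /* Render callback: pulls whatever the MIDI receive side has decoded so far
       straight out of the ring buffer. inRefCon is the shared AudioRing. */
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        AudioRing *ring = (AudioRing *)inRefCon;
        RingRead(ring, (float *)ioData->mBuffers[0].mData, inNumberFrames);
        return noErr;
    }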
Lovely - that's totally fixed it! Live audio now coming through smoothly and with nice low latency.
If there is anything we can test, just let us know!
Also, it would be interesting to check with the Apple review process
whether they have anything against it. So, before doing too much
functional development on it, it might be wise to submit something
early for review to check it out.
You can even hold the App Store release back.
Anyway, cheers to this great stuff!
Chris Randall
Audio Damage, Inc.
http://www.audiodamage.com
Cheers
Ben
Does anyone know if there's a way to actually directly ask the review team questions (like, "is this okay"), or do we truly have to go through the entire app submission pantomime to test acceptability? If so, it seems rather inefficient =)
--
Michael Tyson | atastypixel.com
A Tasty Pixel: Artisan apps
aim: mikerusselltyson
twitter: MichaelTyson
It would be great if you put the code up into a git repository somewhere!
jlc
Will the way this is accomplished involve any closed-source third-party libraries? How is the behaviour of apps running in the background going to be changed? These are the questions I'd probably have to address.
My skype nick is sebastian.dittmann - if Michael (or anyone else) wants to contact me about this.
Best,
Sebastian
Do you really think so? It all seems eminently innocuous to me - but it probably can't hurt to be a little cautious.
I'll most definitely git this baby up, in a couple days (or possibly sooner) once I've ironed out some more kinks.
There'll be no closed source third party things - it's going to be an open library (hosted on GitHub), which will build as a static lib that can be included in the host project. It includes PGMidi (with a few of my own improvements), and a couple of classes (APAudioSender and APAudioReceiver), which make calls to the PGMidi interface (which just uses the standard Core MIDI API, nothing extra).
As far as background behaviour goes, it won't be any different to the way that apps with MIDI sync (like MoDrum, Bassline, soon-to-be Loopy, etc.) already work. The only potentially funny part would be apps that *only* act as an audio filter for other apps (and don't actually create or play back audio themselves), as they need to (arguably spuriously) request background audio and keep an active audio session in order to continue to run in the background. This *may* be problematic.
If you don't mind waiting a couple of days for me to pop the source up on GitHub along with a couple of sample apps, then it could be scrutinised directly for kosher-ness.
I really don't think there'll be any issues with it, for the most part - I'm not doing anything at all unusual, or outside the public API, and as far as transporting audio data over SysEx messages goes - that's what SysEx was *designed* for, among other things =)
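For anyone curious what "audio over SysEx" means at the byte level, here's a toy packer in C: the F0/F7 framing and the rule that every data byte keeps its top bit clear are standard MIDI, but the three-bytes-per-sample layout and the 0x7D (non-commercial) manufacturer ID are only illustrative choices, not the actual wire format of APAudioSender/APAudioReceiver.

    #include <stdint.h>
    #include <stddef.h>

    /* Packs 16-bit samples into one SysEx message: each sample becomes three
       7-bit data bytes (7 + 7 + 2 bits). 'out' needs at least 3 + frames * 3
       bytes. Returns the number of bytes written. */
    static size_t PackAudioSysex(const int16_t *samples, size_t frames,
                                 uint8_t *out) {
        size_t n = 0;
        out[n++] = 0xF0; /* SysEx start */
        out[n++] = 0x7D; /* non-commercial / educational manufacturer ID */
        for (size_t i = 0; i < frames; i++) {
            uint16_t s = (uint16_t)samples[i];
            out[n++] = (uint8_t)( s        & 0x7F); /* low 7 bits */
            out[n++] = (uint8_t)((s >>  7) & 0x7F); /* middle 7 bits */
            out[n++] = (uint8_t)((s >> 14) & 0x03); /* top 2 bits */
        }
        out[n++] = 0xF7; /* SysEx end */
        return n;
    }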
I do know that the CoreAudio implementation makes my app something of a CPU pig, but I can still drive two synths on an iPad 1, as long as one of them isn't AniMoog. Being a Pro Synth really takes a lot of CPU, I guess. (NLog is fine if the effects are off, as one might expect.)
Also, Michael, that link you gave me the other day for your altered PGMidi thing unzips into the dreaded CPGZ loop, and none of the normal tricks can get it out of the loop. (I even tried a CPGZ utility on my PC, and it says the archive is corrupted.)
Chris Randall
Audio Damage, Inc.
http://www.audiodamage.com
No, no impact on your app unless you use the library =)
Whoops - sorry about that. My dodgy wifi connection died while I was uploading it and I couldn't get it back. Try again now: http://resources.atastypixel.com/PGMidi+TPAdditions.zip
When describing it to Apple, it's probably best to portray this as
simply audio sharing with *very* rich metadata about the audio.
If you are going to shoot audio between apps, then it's of negligible
cost to also embed a MIDI transcript of what the audio is.
I'm currently working on a very portable (i.e. no external references
at all) high-level fretless MIDI API that is essentially a MIDI stream
generator. It would actually make a whole lot of sense if it worked by
submitting audio buffers interleaved with MIDI messages. You can
dispense with attempts to analyze the signal in a lot of cases if you
are simply given a high-level description along with it.
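Just to sketch what "interleaved" could look like (an invented layout, not an existing API): each chunk in the stream would be tagged as either audio or MIDI and carry a shared timestamp, something like:

    #include <stdint.h>

    typedef enum { CHUNK_AUDIO = 0, CHUNK_MIDI = 1 } ChunkType;

    typedef struct {
        ChunkType type;      /* audio samples, or the MIDI events describing them */
        uint64_t  hostTime;  /* common timestamp so the two stay aligned */
        uint32_t  byteCount; /* size of payload */
        uint8_t   payload[]; /* float samples, or raw MIDI bytes */
    } StreamChunk;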
On Nov 29, 10:57 am, Christopher Randall <ch...@audiodamage.com>
wrote:
> My upcoming sequencer app doesn't generate any audio at all; it is only a MIDI sequencer. However, I use a CoreAudio record/playback loop for timing since this is the only way to get a rock solid clock on an iOS device that doesn't get superseded by UI events, best I can tell. Will my app affect this system at all? Or will it affect my app?
>
> I do know that the CoreAudio implementation makes my app something of a CPU pig, but I can still drive two synths on an iPad 1, as long as one of them isn't AniMoog. Being a Pro Synth really takes a lot of CPU, I guess. (NLog is fine if the effects are off, as one might expect.)
>
> Also, Michael, that link you gave me the other day for your altered PGMidi thing unzips into the dreaded CPGZ loop, and none of the normal tricks can get it out of the loop. (I even tried a CPGZ utility on my PC, and it says the archive is corrupted.)
>
> Chris Randall
> Audio Damage, Inc.
> http://www.audiodamage.com