Best way to capture Microphone input on iOS with low latency


rafael...@gmail.com

Jul 5, 2016, 4:57:27 PM
to AudioKit Users Group
Hello,

I am looking for a way to capture microphone samples in iOS and Swift with very low latency. There are two things complicating this matter:

1. I am new to iOS (or generally Apple-specific) development. I am experienced with many other platforms though.

2. Looking around the web, there are pointers in so many directions that it's hard for me to tell which ones to follow:
AudioKit
AudioQueues
AudioUnits
AVAudioXYZ
CoreAudio
EZAudio
Novocaine
Superpowered

My requirements are actually really simple: I'd just like to access the microphone and grab a continuous stream of data from it, just like I'd access a gyroscope for example. I realize that mic data has a much higher sampling rate so I probably need to get it in chunks (and need to set buffer sizes) but that's ok. I'd need very low latency and CPU overhead though. Also, I'd prefer a simple solution, i.e. one that doesn't require me to write all sorts of boilerplate and configuration and lifecycle stuff.

I apologize for my ignorance on this matter and for posting such a broad question on a board specific to AudioKit, but it looks like there are quite knowledgeable folks hanging out here, and Stack Overflow these days isn't what it used to be either.

Thanks!

Aurelius Prochazka Ph.D.

Jul 5, 2016, 5:17:51 PM
to AudioKit Users Group
AudioKit, AVAudioEngine, and EZAudio all make microphone access pretty easy. AudioKit is, of course, the best. :)
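For comparison, the bare AVAudioEngine route is only a tap on the input node. A rough sketch (untested, using the current AVFoundation spellings):

```swift
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

// Deliver chunks of raw mic samples to this closure as they arrive.
input.installTap(onBus: 0, bufferSize: 256, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let count = Int(buffer.frameLength)
    // process samples[0..<count] here
}

try? engine.start()
```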

Aure

rafael...@gmail.com

Jul 5, 2016, 6:35:55 PM
to AudioKit Users Group
Thanks. Is there a difference in performance / CPU load between the ones you mentioned?

Aurelius Prochazka Ph.D.

Jul 5, 2016, 6:38:03 PM
to AudioKit Users Group
Probably not much.

Aure

rafael...@gmail.com

Jul 8, 2016, 1:05:21 AM
to AudioKit Users Group
I've looked further into the examples, playgrounds, and reference docs, but I find it hard to get any traction on this. AudioKit looks very powerful, but what I'd like to do is very basic. I probably need to instantiate AKMicrophone at some point. But what comes before, and what comes next?

Most examples seem to either generate some audio output or do some analysis on the input. I don't need either, I'd just like to grab the raw signal from the mic as it comes in.

Any sample code or pointers would be highly appreciated.

Aurelius Prochazka Ph.D.

Jul 8, 2016, 1:48:55 AM
to AudioKit Users Group
There's a Recorder example in the develop branch.  Were you just looking in master maybe?

Aure

rafael...@gmail.com

Jul 8, 2016, 8:49:11 PM
to AudioKit Users Group
Thanks. I just looked at the recorder example, but unfortunately it's not what I need. The example is just using a pre-existing AKNodeRecorder. Also, in the example it's not obvious how the recorder interacts with the microphone.
I still have a hard time understanding the core concepts of AudioKit. In the example record() function there's a line

AudioKit.output = mic

Why do I assign the mic to the *output* of AudioKit? I don't understand the semantics of this.
What does the AudioKit class do in the first place? The reference says "Top level AudioKit managing class" but that doesn't help me much.

Is there a place that's explaining the basics of AudioKit?

As I said I'd like to do my own real-time signal processing on the microphone data as it streams in. I don't intend to record any data.

Regards,
Rafael

Aurelius Prochazka Ph.D.

Jul 8, 2016, 9:02:32 PM
to AudioKit Users Group
I think there are only two things that will help set you straight:

1. AudioKit works on a pull model.  Nodes only generate samples when they are requested by something down the line.  So, the output must be set to the mic so that the mic can deliver something, even if it's delivering zeroes.

2. If you want to get access to those samples and use them for something other than the output, you use a tap.  Taps can steal the samples off of almost any node (in theory, though it seems to work best if you access a mixer node).  Then you can do other things with those samples.  In AudioKit, we plot using those samples, do amplitude analysis, and FFT.  You'd probably write your own tap.
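A rough sketch of such a tap, assuming the AudioKit 3 convention that every AKNode exposes its underlying AVAudioNode as `avAudioNode` (untested):

```swift
// Sketch: pull raw mic samples off an AudioKit mixer node via a tap.
let mic = AKMicrophone()
let mixer = AKMixer(mic)       // taps seem to work best on a mixer node
AudioKit.output = mixer        // something downstream must pull the samples
AudioKit.start()

mixer.avAudioNode.installTap(onBus: 0, bufferSize: 512, format: nil) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let count = Int(buffer.frameLength)
    // do your own real-time processing on samples[0..<count]
}
```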

That's only if you're still set on using AudioKit.  EZAudio seems like a fine solution for what you're doing, though; you might want to look at that more closely.  It's not being maintained anymore, but that's only a recent change, so I think it's still fairly fresh.
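From memory of the EZAudio README (treat the exact names and signatures as assumptions), grabbing mic samples there is a delegate callback:

```swift
import EZAudio  // assumes the EZAudio pod is linked

class MicReader: NSObject, EZMicrophoneDelegate {
    lazy var microphone: EZMicrophone = EZMicrophone(delegate: self)

    // EZAudio calls this with a C array of per-channel Float buffers.
    func microphone(microphone: EZMicrophone!,
                    hasAudioReceived buffer: UnsafeMutablePointer<UnsafeMutablePointer<Float>>,
                    withBufferSize bufferSize: UInt32,
                    withNumberOfChannels numberOfChannels: UInt32) {
        // buffer[0] is the first channel's raw samples
    }
}

let reader = MicReader()
reader.microphone.startFetchingAudio()
```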

Aure

rafael...@gmail.com

Jul 9, 2016, 12:07:21 PM
to AudioKit Users Group
Ok, thanks. I'll have some more time over the weekend to dive into this.

Will also have a look at EZAudio. It seems to be in Objective-C, though, and I find Swift a lot easier to read and write.

Regards,
Rafael

laurent veliscek

Jul 11, 2016, 11:47:24 AM
to AudioKit Users Group
Hi Rafael,

We're working on AVAudioSession management, and I'm experimenting with some new AudioKit features that should land very soon.

I've just coded a recorder app and tested it using my iPhone 4s this morning.

I can monitor the mic (dry, or even processed through a reverb) while recording it, with unnoticeable latency, using AudioKit.
In fact, the internal latency is less than 1 ms; the total is probably a little more, since the hardware in/out latency cannot be reduced.

I'm sharing the viewDidLoad() code, just to demonstrate how incredibly cool and easy to use AudioKit is:

        // player, recorder, and mixer are assumed to be instance
        // properties declared elsewhere in the view controller.

        // .Shortest is 32 samples ≈ 0.73 ms @ 44100 Hz
        AKSettings.bufferLength = .Shortest
        let mic = AKMicrophone()

        let micMixer = AKMixer(mic)
        let tape = try? AKAudioFile()
        func setMonitorOn() {
            mixer!.balance = 0
        }
        player = try? AKAudioPlayer(file: tape!, completionHandler: setMonitorOn)
        let reverb = AKReverb(player!)
        mixer = AKDryWetMixer(micMixer, reverb, balance: 0)

        AudioKit.output = mixer
        AudioKit.start()

        recorder = try? AKNodeRecorder(node: micMixer, file: tape!)

Here's the code to monitor the mic through the iPhone, using AudioKit:

AKSettings.bufferLength = .Shortest
let mic = AKMicrophone()
AudioKit.output = mic
AudioKit.start()

Not so complex, is it? ;-)

If you can wait one or two days, the AudioKit develop branch should be updated so you can experiment with AudioKit's improved latency settings.

Best !

Laurent.

