Sequencer first beat timing issue / writing my own Sequencer


Tom Duncalf

Sep 18, 2017, 2:16:04 PM
to AudioKit Users Group
Hey,

First of all, I'd like to say how exciting it is to have found this project. The existence of such a well documented and professional library with a huge set of audio primitives has got me thinking of all kinds of fun ideas, so thanks for all your hard work!

I've been playing with AKSequencer and have come across the issue (documented in other threads, e.g. https://groups.google.com/forum/#!searchin/audiokit/sequencer|sort:relevance/audiokit/fXvtpPvSzO4/7pBgWwebBwAJ) with the first beat's timing being off. I've tried the seek/stop and rewind workarounds mentioned, but they don't make a difference. Unfortunately for my use case, putting everything on the second beat isn't an acceptable solution, as I need playback to start immediately and to loop: when I tried putting the events on beat 2 and then seeking to that beat before playing, the timing was off again, and the loop always seems to go back to beat 1.

My first question was whether anyone has any other ideas for getting around the first beat issue, while also starting playback immediately and being able to loop? I understand this is a problem with the underlying Apple implementation, so AudioKit's hands are somewhat tied.

Assuming there is no satisfactory solution, I was hoping to write my own sequencer class for my use case (which I guess could be extended to become another sequencing option for AudioKit in general, if it works well!).

My question is: has anyone got any thoughts on how I should go about doing this, primarily in terms of ensuring timing is sample accurate? I've written a sequencer in JUCE before, implemented as an AudioProcessor (i.e. a normal "processing" node in the audio/MIDI graph). In its processBlock method (called on every buffer with any audio/MIDI input, which you then fill if you want to output audio/MIDI), I emitted MIDI events with sample offsets to place each event at the correct position within the buffer. So the node was generating MIDI events (much as an arpeggiator would), and timing was sample accurate by the nature of being called on the audio thread at the same interval as everything else, combined with the ability to output events at a specified sample offset.
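To illustrate what I mean by sample offsets, here's a rough sketch of the arithmetic involved (in Swift rather than C++, and all the names are just for illustration – this isn't JUCE or AudioKit API): given the absolute sample position of the current buffer, you work out which beats fall inside it and at what frame offset.

```swift
import Foundation

// Hypothetical helper, not JUCE/AudioKit API: given a tempo, a sample rate,
// and the absolute sample position of the current buffer, return the frame
// offset within the buffer of every beat that falls inside it.
func eventOffsets(tempo: Double,       // beats per minute
                  sampleRate: Double,  // e.g. 44100
                  bufferStart: Int,    // absolute sample index of buffer start
                  bufferLength: Int) -> [Int] {
    let samplesPerBeat = sampleRate * 60.0 / tempo
    var offsets: [Int] = []
    // First beat index at or after the start of this buffer.
    var beat = ceil(Double(bufferStart) / samplesPerBeat)
    while true {
        let eventSample = Int((beat * samplesPerBeat).rounded())
        if eventSample >= bufferStart + bufferLength { break }
        offsets.append(eventSample - bufferStart) // frame offset within buffer
        beat += 1
    }
    return offsets
}
```

At 120 bpm and 44.1 kHz a beat is 22050 samples, so a 512-frame buffer starting at sample 21950 would contain beat 1 at frame offset 100 – that offset is what makes the event land exactly on the grid rather than at the start of whichever buffer happens to be processing.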

My first thought was to try to do the same kind of thing in AudioKit (create an AKNode which does the sequencing), but it seems that AudioUnits cannot output MIDI, so this isn't going to work. I could call .play() on the instruments directly in my process function, but the resolution/accuracy would only be at the buffer level, which is obviously no good.

My other thought is that I could implement a high-resolution timer, independent of everything else, which would be responsible for calling .play() on the instruments – but I wonder if there could be timing issues if it's on a different thread from the audio stuff?
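If anyone does go down the timer route, one detail worth getting right is to compute each tick's deadline from the absolute start time rather than adding the interval to "now" on every tick, so per-fire jitter doesn't accumulate into drift. A toy sketch (hypothetical names, nothing AudioKit-specific):

```swift
// Toy sketch, hypothetical names: each deadline is derived from the absolute
// start time, so per-tick timer jitter cannot accumulate into long-term drift.
struct TickClock {
    let start: Double     // host time in seconds when playback began
    let interval: Double  // seconds per tick
    var tick: Int = 0

    // Returns the absolute time at which the next tick should fire.
    mutating func nextDeadline() -> Double {
        tick += 1
        return start + Double(tick) * interval
    }
}
```

Even with this, the trigger itself still happens off the audio thread, so it bounds drift but not the per-event jitter – which is the part I'm unsure about.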

I'm no expert in this and am just hacking my way around, so if anyone has any thoughts or suggestions or other ideas, it would be much appreciated!

Thanks,
Tom

mekohler

Sep 19, 2017, 3:31:28 PM
to AudioKit Users Group
I think a few of us have this question. Someone suggested starting the sequencer immediately after init and never stopping it; you could then implement start/stop by ignoring MIDI and setting the time manually, perhaps. I haven't tried this yet.

mekohler

Sep 20, 2017, 6:38:56 PM
to AudioKit Users Group
Another thought: Whenever you start the sequencer, check if step 0.0 has any notes... If it does, temporarily remove 0.0 notes from the sequencer, trigger those notes on your own, and then add them back for when the sequencer loops. Could that work?
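Something like this, in sketch form (Note here is a stand-in type for illustration, not an AKSequencer class):

```swift
// Illustrative only: a hypothetical Note type standing in for a sequencer
// track's events. The idea: split off everything at position 0.0, trigger
// those by hand at start, and leave the rest in the sequencer for looping.
struct Note: Equatable {
    let position: Double  // position in beats
    let number: Int       // MIDI note number
}

func splitFirstBeat(_ notes: [Note]) -> (immediate: [Note], sequenced: [Note]) {
    let immediate = notes.filter { $0.position == 0.0 }
    let sequenced = notes.filter { $0.position != 0.0 }
    return (immediate, sequenced)
}
```

You'd trigger `immediate` yourself the instant playback starts, then re-add them before the loop wraps back to beat 1.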

Tom Duncalf

Sep 20, 2017, 6:39:03 PM
to AudioKit Users Group
Thanks, that's a good idea.

I just tried it though and timing of the first beat is still off if I start playback, wait a bit, add some events then seek to beat 0.

More importantly for my use case: I triggered an AKSampler wrapped in an AKPolyphonicNode on every beat (with a play() method on the node that just calls the sampler's play method), recorded the output through Soundflower into Ableton, and put a grid on the waveform – it shows jitter of up to 10 ms even on the other beats. Unfortunately, that isn't really going to be acceptable for my use case.

I wonder if the jitter is caused by the fact that there is Swift code being invoked to play the sample (e.g. in the AKPolyphonicNode), and from what I've read, Swift/Obj-C is too unpredictable in terms of "real time"-ness for audio? Or if it is a problem with the Apple MusicPlayer stuff? Or something else entirely? 

It would be great to get feedback from one of the AudioKit devs on whether they would expect sample-accurate sequencing of samples to be possible with AudioKit. If not, I'll pursue other avenues, as sample accuracy (or something very close) is unfortunately important for my project.

I'd also be happy to discuss this in more detail on Slack or whatever, as I'd love to make AK work for my use case, and I'd be happy to invest some time in writing a new sequencing engine if that is where the problem lies! 

Cheers,
Tom

Dave O'Neill

Sep 20, 2017, 6:39:12 PM
to AudioKit Users Group
Hi Tom,

I hope for a better AudioKit sequencer someday too. AudioUnits can send/schedule MIDI from the render thread, but the documentation is cryptic at best. Take a look at the implementation of AKSamplerMetronome for an example of how to schedule MIDI with the proper render offset (when you have a reference to a MusicDevice-type AudioUnit). If you do start to implement a sequencer within AudioKit, I know there are a number of contributors who would help.
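The offset arithmetic itself is simple once you have the render timestamp. A sketch of the idea (illustrative only – this is not the actual AKSamplerMetronome code, and the function name is made up): the offset-sample-frame argument you'd pass to MusicDeviceMIDIEvent is the event's absolute sample time minus the buffer's starting sample time (the timestamp's mSampleTime), provided the event lands within the current render cycle.

```swift
// Illustrative sketch, not actual AKSamplerMetronome code. Computes the
// render offset you'd pass as MusicDeviceMIDIEvent's offset-frame argument.
// Returns nil if the event doesn't fall within the current render cycle.
func renderOffset(eventSampleTime: Double,        // absolute sample time of the event
                  bufferStartSampleTime: Double,  // AudioTimeStamp.mSampleTime for the buffer
                  frameCount: Int) -> UInt32? {   // frames in this render cycle
    let offset = eventSampleTime - bufferStartSampleTime
    guard offset >= 0, offset < Double(frameCount) else { return nil }
    return UInt32(offset)
}
```

Events that miss the window are simply held for a later render cycle, which is why the tap has to look ahead by at least one buffer's worth of samples.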

Dave 

Tom Duncalf

Sep 21, 2017, 3:31:23 PM
to AudioKit Users Group
Thanks Dave! I was just looking at your commit – interesting. My research had led me to conclude that using MusicDeviceMIDIEvent on the render thread was the way to go, but I wasn't sure of the best way to actually do it. Your solution with the timeline tap looks quite neat.

What I now need to work out is how generally applicable your AKSamplerMetronome example is to the rest of AudioKit. Ideally what I want is to trigger samples in a sample-accurate way, but also trigger things like filter envelopes etc. at the same time (also sample accurate), while taking advantage of the ease of use of AudioKit. 

As I suspected, it seems Swift code has to be avoided in sample-accurate scenarios, which probably makes things a bit trickier. My vague plan had been to wrap a sampler plus filter, envelope, etc. in an AKPolyphonicNode and trigger everything at once in the play() method to get it all playing in sync. However, the fact that this code is in Swift makes me worry it might not be sample accurate – I don't want the envelopes triggering with jitter relative to the sample, for example.

I guess I'll have a play around – your code has certainly illuminated things, so thanks for that! I'm at the point of evaluating AudioKit and working out whether it can meet my needs, or whether I should just bite the bullet and go with JUCE, which is more complex and lacks the off-the-shelf nodes, but which I'm somewhat familiar with and know can handle sample accuracy no problem. I'm hoping AudioKit can do what I want, as it looks quicker to get something going with, but I'd hate to back myself into a corner with timing problems that could only be solved by moving away from AudioKit later.

I'll keep you posted – it would be interesting to hear your thoughts on triggering effect envelopes etc. in a sample accurate manner. If this is possible then I'd probably be up for investing the effort in writing a new sequencer class.

Cheers,
Tom

Dave O'Neill

Sep 22, 2017, 11:49:39 AM
to Tom Duncalf, AudioKit Users Group
Most AudioKit nodes don't do any scheduling (yet), but there are a few – check out AKClipPlayer and AKClipRecorder. They both conform to AKTiming, a protocol for synchronizing audio sources with sample accuracy. Regarding triggering effect envelopes: there's currently work being done on an AUAudioUnit subclass intended to act as a base class for future AudioUnits. Part of this effort will include generalized parameter ramping, which currently doesn't honor render offsets, so now would be a good time to get in on the design phase and ensure it meets your requirements. If you do end up using AudioKit, you'd be getting involved at the perfect time. Your experience with sample-accurate scheduling would be greatly appreciated.
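To make the render-offset question concrete, here's the kind of behavior an offset-honoring ramp would need (a sketch of the concept only – these names are hypothetical, not the subclass under development): the parameter stays at its start value until the scheduled offset within the buffer, then interpolates over the ramp duration.

```swift
// Concept sketch, hypothetical names: a per-sample linear parameter ramp that
// honors a render offset, i.e. it begins mid-buffer at the scheduled frame
// rather than at frame 0.
func rampValues(from start: Float, to end: Float,
                rampFrames: Int,      // duration of the ramp in frames
                offsetInBuffer: Int,  // scheduled render offset within the buffer
                bufferFrames: Int) -> [Float] {
    var out = [Float](repeating: start, count: bufferFrames)
    for frame in 0..<bufferFrames where frame >= offsetInBuffer {
        // Linear interpolation, clamped once the ramp completes.
        let t = min(Float(frame - offsetInBuffer) / Float(rampFrames), 1.0)
        out[frame] = start + (end - start) * t
    }
    return out
}
```

A ramp that ignores the offset would start interpolating at frame 0 of the buffer, which is exactly the buffer-level jitter Tom measured earlier in the thread.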

Dave
