Hey,
First of all, I'd like to say how exciting it is to have found this project. The existence of such a well-documented, professional library with a huge set of audio primitives has got me thinking of all kinds of fun ideas, so thanks for all your hard work!
I've been playing with AKSequencer, and have come across the issue (documented in other threads, e.g.
https://groups.google.com/forum/#!searchin/audiokit/sequencer|sort:relevance/audiokit/fXvtpPvSzO4/7pBgWwebBwAJ) where the first beat's timing is off. I've tried the seek/stop and rewind workarounds mentioned there, but they make no difference. Unfortunately for my use case, putting everything on the second beat isn't an acceptable solution, as I need playback to start immediately and to loop: when I tried putting the events on beat 2, then seeking to that beat and playing, the timing was again off, and the loop seems to always go back to beat 1.
My first question is whether anyone has any other ideas for working around the first-beat issue while still starting playback immediately and being able to loop? I understand this is a problem with the underlying Apple implementation, so AudioKit's hands are somewhat tied.
Assuming there is no satisfactory solution, I was hoping to write my own sequencer class for my use case (which I guess could be extended to become another sequencing option for AudioKit in general, if it works well!).
My question is: does anyone have any thoughts on how I should go about doing this, primarily in terms of ensuring the timing is sample accurate? I've written a sequencer in JUCE before, implemented as an AudioProcessor (i.e. just a normal "processing" node in the audio/MIDI graph). Its processBlock method is called on every buffer with any incoming audio/MIDI, and you fill those buffers if you want to output any audio/MIDI. In it, I output MIDI events with sample offsets, placing each event at the correct position within the buffer. So the node was generating MIDI events (like you would with, say, an arpeggiator), and the timing was sample accurate by its very nature: processBlock is called on the audio thread at the same interval as everything else, and events can be output at a specified sample offset.
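To make the JUCE approach concrete, the per-block scheduling logic boils down to something like the sketch below. This is standalone C++, not actual JUCE code; the names (MidiEvent, renderEventsInBlock) are made up for illustration. Events are stored in beats, and on each block the beat positions are converted to absolute sample positions, keeping only those that fall inside the current buffer, each with its exact sample offset.

```cpp
#include <cmath>
#include <vector>

// Hypothetical event type: just a position in beats.
struct MidiEvent { double beat; };

// Return the sample offset (within this buffer) of every event that falls
// inside the block starting at blockStartSample and spanning numSamples.
std::vector<int> renderEventsInBlock(const std::vector<MidiEvent>& events,
                                     double blockStartSample,
                                     int numSamples,
                                     double sampleRate,
                                     double bpm)
{
    const double samplesPerBeat = sampleRate * 60.0 / bpm;
    std::vector<int> offsets;
    for (const auto& e : events) {
        const double eventSample = e.beat * samplesPerBeat;  // absolute position
        const double offset = eventSample - blockStartSample;
        if (offset >= 0.0 && offset < numSamples)
            offsets.push_back(static_cast<int>(std::floor(offset)));
    }
    return offsets;
}
```

For example, at 120 BPM and 44.1 kHz a beat is 22050 samples, so an event on beat 1 lands at sample offset 34 inside a 512-sample buffer starting at absolute sample 22016.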
My first thought was to try the same kind of thing in AudioKit (create an AKNode which does the sequencing), but it seems that AudioUnits cannot output MIDI, so this isn't going to work. (I could call .play() on the instruments directly in my process function, but the resolution/accuracy would only be at the buffer level, which is obviously no good.)
My other thought is that I could implement a high-resolution timer, independent of everything else, that would be responsible for calling .play() on the instruments – but I wonder whether there could be timing issues if it's on a different thread to the audio code?
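If I do go the timer route, one thing I'd plan to do is schedule against absolute deadlines rather than sleeping for a fixed interval each tick, so that late wake-ups don't accumulate into drift. A sketch of what I mean (plain C++ with std::chrono, not tied to any AudioKit API; runTicker and onTick are names I've made up):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Drift-compensating timer loop: each tick's deadline is derived from the
// loop's start time, so a late wake-up on one tick doesn't push back the next.
void runTicker(int numTicks, std::chrono::microseconds interval,
               const std::function<void(int)>& onTick)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    for (int i = 0; i < numTicks; ++i) {
        std::this_thread::sleep_until(start + (i + 1) * interval);  // absolute deadline
        onTick(i);  // e.g. call .play() on an instrument here
    }
}
```

Even with this, wake-up jitter from the OS scheduler remains, which is exactly why I'm worried about a timer thread racing the audio thread.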
I'm no expert in this and am just hacking my way around, so if anyone has any thoughts or suggestions or other ideas, it would be much appreciated!
Thanks,
Tom