To follow up on my questions:
After some more digging I discovered that the OpenSL ES interface's
Buffer Queue abstraction provides a callback notification interface
(see the sketch after the quoted passage below). An example of this is
in the latest NDK at:
A lot of the questions I raised above are answered in the R5 NDK docs.
There's an online copy of that page you can read at the link below
(Google: that would be a useful doc to have online somewhere :-)):
As I think the CDD mentions too, it says:
As OpenSL ES is a native C API, non-Dalvik application threads which
call OpenSL ES have no Dalvik-related overhead such as garbage
collection pauses. However, there is no additional performance benefit
to the use of OpenSL ES other than this. In particular, use of OpenSL
ES does not result in lower audio latency, higher scheduling priority,
etc. than what the platform generally provides. On the other hand, as
the Android platform and specific device implementations continue to
evolve, an OpenSL ES application can expect to benefit from any future
system performance improvements.
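For anyone who hasn't used the buffer queue interface yet, here's a
minimal sketch of the callback mechanism I'm referring to. It assumes a
player object that has already been created and realized with an Android
simple buffer queue data source; the buffer sizing and the function
names are mine, purely for illustration:

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    #define FRAMES_PER_BUFFER 256
    static short buffers[2][FRAMES_PER_BUFFER]; /* double buffer, mono 16-bit */
    static int which = 0;

    /* Called by the OpenSL ES engine whenever a buffer finishes playing.
       Note this runs on an engine-owned thread, asynchronously to the app. */
    static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
    {
        short *next = buffers[which];
        which ^= 1;
        /* ... synthesize or copy FRAMES_PER_BUFFER samples into next ... */
        (*bq)->Enqueue(bq, next, sizeof(buffers[0]));
    }

    /* Assumes bq was obtained via GetInterface(player,
       SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bq) on the realized player. */
    static void startPlayback(SLAndroidSimpleBufferQueueItf bq)
    {
        (*bq)->RegisterCallback(bq, bqCallback, NULL);
        (*bq)->Enqueue(bq, buffers[which], sizeof(buffers[0])); /* prime */
    }

The engine calls bqCallback each time it drains a buffer, and that
callback path is exactly where the scheduling jitter discussed below
would show up.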
So I guess the CDD latency guidelines Glenn posted earlier are to be
interpreted in the context of OpenSL buffer queues. Correct me if I'm
wrong (since I'm clutching at straws trying to understand what the CDD
latency numbers actually mean), but if the warm-start latency is 10ms
and the continuous latency is 45ms, that implies to me that there is
an allowance of 35ms for scheduling jitter in the OpenSL buffer queue
notification callbacks. I don't know much about the Android kernel,
but on iOS, scheduling jitter for a real-time thread is around 200us,
so 35ms seems pretty high. Can someone who understands this better
explain what that 35ms covers, please?
Glenn, can I suggest that the CDD reference concrete test cases, in
both Java and the NDK, that are supposed to exhibit the required
latency? That way we can understand exactly what is being guaranteed.
I am a little unclear whether the 45ms continuous figure includes a
margin for masking Dalvik GC pauses. It would make sense to have
separate requirements/constraints on the platform (Dalvik) and on the
C-level/NDK capabilities.
In the end, a latency guarantee will require the vendor to make sure
that no driver causes audio-related native threads (in kernel or
userspace) to miss a deadline, that intermediate buffering can be
dimensioned to accommodate low latency, and that all inter-thread and
inter-process communication mechanisms are non-blocking (lock-free
FIFOs or similar mechanisms; see the sketch below).
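To make that last point concrete, here's a rough sketch of the kind of
lock-free single-producer/single-consumer FIFO I have in mind, written
with C11 atomics (the names and the fixed power-of-two capacity are
mine, purely for illustration). Neither end ever blocks, so the control
thread can never stall the audio thread:

    #include <stdatomic.h>
    #include <stddef.h>

    #define FIFO_SIZE 64 /* must be a power of two */

    typedef struct {
        int slots[FIFO_SIZE];
        _Atomic size_t head; /* advanced only by the consumer */
        _Atomic size_t tail; /* advanced only by the producer */
    } spsc_fifo;

    /* Producer side (e.g. a UI/control thread). Returns 0 when full. */
    static int fifo_push(spsc_fifo *f, int value)
    {
        size_t tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&f->head, memory_order_acquire);
        if (tail - head == FIFO_SIZE)
            return 0; /* full: fail immediately rather than block */
        f->slots[tail & (FIFO_SIZE - 1)] = value;
        atomic_store_explicit(&f->tail, tail + 1, memory_order_release);
        return 1;
    }

    /* Consumer side (the audio callback). Returns 0 when empty. */
    static int fifo_pop(spsc_fifo *f, int *out)
    {
        size_t head = atomic_load_explicit(&f->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&f->tail, memory_order_acquire);
        if (tail == head)
            return 0; /* empty: nothing to do this callback */
        *out = f->slots[head & (FIFO_SIZE - 1)];
        atomic_store_explicit(&f->head, head + 1, memory_order_release);
        return 1;
    }

The acquire/release pairing is what makes this safe with exactly one
producer and one consumer; anything fancier (multiple writers, resizing)
needs a different design.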
I am alarmed to see the following in the above-cited document:
But this point is asynchronous with respect to the application. Thus
you should use a mutex or other synchronization mechanism to control
access to any variables shared between the application and the
callback handler. In the example code, such as for buffer queues, we
have omitted this synchronization in the interest of simplicity.
However, proper mutual exclusion would be critical for any production
code.
Employing mutexes will almost certainly cause priority inversion at
some stage and glitch the audio rendering. The usual (and simplest)
safe technique for communicating with asynchronous audio callbacks is
a lock-free FIFO command queue. Try-locks may also be an option in
some cases (see the sketch below). I've already mentioned the learning
curve involved in writing real-time audio software on non-real-time
OSes; clearly whoever wrote that document hasn't traversed it.
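And for completeness, here's roughly what I mean by a try-lock: the
callback attempts the mutex but never waits on it, so contention
degrades to one callback's worth of stale parameters instead of a
blocked audio thread (the pthread calls are standard; the variable
names are just illustrative):

    #include <pthread.h>

    static pthread_mutex_t param_lock = PTHREAD_MUTEX_INITIALIZER;
    static float pending_gain = 1.0f; /* written by the UI thread, under the lock */
    static float current_gain = 1.0f; /* owned by the audio callback */

    /* Called from inside the audio callback. If the UI thread happens
       to hold the lock right now, we simply keep the previous value and
       try again on the next callback; we never block here. */
    static void update_params_from_ui(void)
    {
        if (pthread_mutex_trylock(&param_lock) == 0) {
            current_gain = pending_gain;
            pthread_mutex_unlock(&param_lock);
        }
    }

Even this is only safe if the lock holder (the UI thread) keeps its
critical section tiny; the lock-free FIFO above is still the more
general answer.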