Where in the Android OS is the single sink into which audio gets mixed?

AudioLinger

Mar 5, 2014, 9:20:02 PM
to android...@googlegroups.com
Hi all,

I've been reading through AudioFlinger and audio hardware code looking for the place where all audio from all tracks and sessions comes together. My initial guess was the audio mixer, but there are now two audio mixers (the normal one and the FastMixer). Looking at the AudioFlinger::PlaybackThread::threadLoop_write() method in frameworks/av/services/audioflinger/Threads.cpp, it seems that, depending on the existence of a "normal sink", the audio is either processed through a mixer or, if I understand it correctly, handed directly to the hardware to take care of. My goal is to be able to process all audio (except phone calls and other such special cases) that the device outputs. The method mentioned above was my best guess, but even at that point the code splits into different mixing strategies.

Any pointers?

Glenn Kasten

Mar 6, 2014, 10:10:13 AM
to android...@googlegroups.com
See the attached diagram, "Audio playback architecture.pdf".
It shows three major paths by which audio can be played out;
these are not the only paths, however.

1. Low-latency tracks are mixed directly by the fast mixer.
They have attenuation applied, but no resampling or app processor effects.

2. Normal-latency tracks are mixed by the normal mixer,
and in addition to attenuation they can have optional resampling
or app processor effects applied. Both of the latter can use
significant CPU time, and may be bursty in CPU consumption,
which is why they are limited to normal tracks.
The output of the normal mixer is a single fast track (via a memory pipe;
see the sketch after this list), which is then treated as in (1) above by the fast mixer.

3. Deep buffer tracks, used for music playback with the screen off,
go through a path similar to #2 but without the fast mixer part.
After the mix is done, it is written directly to the HAL.

There are other paths but they are less relevant to your question.
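
To make the "memory pipe" hand-off in (2) a bit more concrete, here is a toy,
self-contained single-producer/single-consumer PCM pipe: the normal mixer side
writes mixed frames without blocking, and the fast mixer side drains whatever is
available. This is only a conceptual stand-in for the non-blocking pipe that
AudioFlinger actually uses; the class and method names below are made up for
illustration.

    // Toy SPSC PCM pipe (conceptual only; "PcmPipe", "writeFrames" and
    // "readFrames" are invented names, not AudioFlinger classes).
    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class PcmPipe {
    public:
        // capacityFrames should be a power of two; 2 samples per frame (stereo).
        explicit PcmPipe(size_t capacityFrames)
            : mBuf(capacityFrames * 2), mCap(capacityFrames), mRead(0), mWrite(0) {}

        // Called by the "normal mixer" thread: copy as many frames as fit, never block.
        size_t writeFrames(const int16_t* src, size_t frames) {
            size_t w = mWrite.load(std::memory_order_relaxed);
            size_t r = mRead.load(std::memory_order_acquire);
            size_t space = mCap - (w - r);
            size_t n = frames < space ? frames : space;
            for (size_t i = 0; i < n; ++i) {
                size_t slot = (w + i) % mCap;        // safe because mCap is a power of two
                mBuf[slot * 2]     = src[i * 2];
                mBuf[slot * 2 + 1] = src[i * 2 + 1];
            }
            mWrite.store(w + n, std::memory_order_release);
            return n;  // frames actually written; the rest are dropped or retried
        }

        // Called by the "fast mixer" thread: drain whatever is available, never block.
        size_t readFrames(int16_t* dst, size_t frames) {
            size_t r = mRead.load(std::memory_order_relaxed);
            size_t w = mWrite.load(std::memory_order_acquire);
            size_t avail = w - r;
            size_t n = frames < avail ? frames : avail;
            for (size_t i = 0; i < n; ++i) {
                size_t slot = (r + i) % mCap;
                dst[i * 2]     = mBuf[slot * 2];
                dst[i * 2 + 1] = mBuf[slot * 2 + 1];
            }
            mRead.store(r + n, std::memory_order_release);
            return n;
        }

    private:
        std::vector<int16_t> mBuf;   // interleaved stereo samples
        const size_t mCap;           // capacity in frames
        std::atomic<size_t> mRead;   // frames consumed so far
        std::atomic<size_t> mWrite;  // frames produced so far
    };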

So to answer your question: no, there is no single point.
If you're applying CPU-intensive processing, I would recommend
adding it to the normal mixer path used for #2 and #3
(a rough sketch of such a hook follows below).
Avoid adding it to the fast mixer, as that will likely cause
performance problems for low-latency tracks.
Audio playback architecture.pdf

AudioLinger

Mar 6, 2014, 3:08:53 PM
to android...@googlegroups.com
Hi Glenn,

Thanks for your reply; it's very helpful and confirms most of what I've discovered in my investigation.

As a short follow-up question, where can I read more (either documents or source) about deep buffers? I enabled logging and added some custom log messages in AudioFlinger::PlaybackThread::threadLoop_write(), and I see that it writes to two different sinks depending on the application. One of them (the "normal sink") writes frames of audio according to how I configure AudioFlinger, and the other writes directly to an AudioStreamOut (HAL object). Whenever the latter happens, I also see messages about offloading being printed. Is this in any way related to deep buffers or to fast/normal mixer selection?

Glenn Kasten

Mar 6, 2014, 3:26:24 PM
to android...@googlegroups.com
I was afraid you would mention offloading! :-)
OK, so offloading is yet another way for audio to be played.

So here's the distinction between deep buffer and offloaded tracks:

 - Deep buffer tracks are decoded from MP3, AAC, etc. to PCM on the app processor
   and then written as PCM to the HAL [driver].
   The buffer size is larger than the normal mixer's typical 20 milliseconds,
   perhaps on the order of 100 to 200 milliseconds.
   The key thing is that the app processor still needs to wake up several times a second
   and must run the software decoder.
   This does save wakeup & context switch time relative to normal tracks,
   but the decode time is the same. The implementation is mostly portable,
   although it does require the device to support multiple concurrent output streams
   (recall that FastMixer is also using a stream), so not all devices can do it.

 - Offloaded tracks are new for Android 4.4 (KitKat),
   and the implementation is even more device-specific. It is enabled on the Nexus 5.
   Offloaded track data is transferred from the app processor to the HAL/driver
   in encoded format (MP3, AAC, etc.). Decode is then (partially) implemented
   in hardware, typically by a DSP attached to the app processor.
   This has the advantage of even fewer app processor wakeups,
   and no decode on the app processor. Presumably the DSP is optimized
   for decode and so can do it more efficiently than the app processor.
   Also, since the data sent to the DSP is in encoded format, which is much smaller,
   a given buffer size can translate to a longer elapsed time per buffer,
   on the order of seconds.
   The principal downsides to offloading are that it requires a DSP
   and that the implementation is more complex.

To read more (either documents or source) about deep buffers and offloading:
   grep -r AUDIO_OUTPUT_FLAG_DEEP_BUFFER frameworks/av/*
   grep -r AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD frameworks/av/*
These grep results will give you the starting points for your code reading.
Then continue following the code from there. Adding logs is always helpful! :-)
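
In case it helps orient the grep results: those two flags (plus
AUDIO_OUTPUT_FLAG_FAST) are bits in audio_output_flags_t, and the selection
logic you'll find essentially tests them on the output stream. The toy below
mirrors that idea; the numeric values are copied from my reading of
system/audio.h and should be treated as assumptions to verify against your tree.

    // Toy classification of an output by its flags. The AUDIO_OUTPUT_FLAG_*
    // names are the real ones the greps above will find; the numeric values
    // below are assumed from my reading of system/audio.h.
    #include <cstdint>
    #include <cstdio>

    enum : uint32_t {
        OUTPUT_FLAG_FAST             = 0x4,   // assumed value of AUDIO_OUTPUT_FLAG_FAST
        OUTPUT_FLAG_DEEP_BUFFER      = 0x8,   // assumed value of AUDIO_OUTPUT_FLAG_DEEP_BUFFER
        OUTPUT_FLAG_COMPRESS_OFFLOAD = 0x10,  // assumed value of AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD
    };

    const char* describeOutput(uint32_t flags) {
        if (flags & OUTPUT_FLAG_COMPRESS_OFFLOAD) return "offloaded (encoded data to DSP)";
        if (flags & OUTPUT_FLAG_DEEP_BUFFER)      return "deep buffer (large PCM buffers)";
        if (flags & OUTPUT_FLAG_FAST)             return "fast (low-latency PCM)";
        return "normal mixer output";
    }

    int main() {
        printf("%s\n", describeOutput(OUTPUT_FLAG_DEEP_BUFFER));
        printf("%s\n", describeOutput(OUTPUT_FLAG_COMPRESS_OFFLOAD));
        return 0;
    }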

AudioLinger

Mar 6, 2014, 8:56:03 PM
to android...@googlegroups.com
Thanks, Glenn. I'll take a look at those.