AudioEffect integration


eugene

Jan 20, 2011, 5:55:44 PM
to android-porting
Hi
I'm looking to add some new AudioEffects to the framework, and also
to get a good understanding of the implementation architecture of the
AudioEffects framework.
I can't find any documentation about these. Is there any publicly
available documentation that I've missed?

Specifically I'm not clear on how the Effects framework instantiates
effects when the same effect is applied to multiple tracks. Looking at
the AudioFlinger source code, it seems like effects are applied either
per-track before mixing, or on the output mix from the mixer. This all
makes sense, but the higher-level Java API refers to:
* the same effect engine being used multiple times
* CPU load limiting to allocate effects to particular users
* and some "connect mode" (EFFECT_INSERT or EFFECT_AUXILIARY), whose
ultimate purpose and implementation I'm unsure of.
In general, I'm not sure what this means for the implementation.
The documentation states that an EFFECT_AUXILIARY effect must be
created on the global output mix, but then implies that a media player
or audio track will only be "fed into this effect" if they are
explicitly attached and a send-level is specified. How can an effect
that is configured for session 0 (global output mix) end up not
applying to an audio track from an application? Does it get mixed
separately in AudioFlinger?

Also, is there a way to enforce that effects don't end up conflicting
with each other? E.g. a bass enhancement effect applied to an
AudioTrack and another bass enhancement effect on the global output
mix.

Finally, what do AudioEffect implementations need to consider for the
case where the output device is transitioning from one to another
(e.g. speaker to Bluetooth headset), or when two output devices are in
use at the same time (e.g. a ringtone can play over the speaker and a
Bluetooth headset simultaneously)?

Thanks
Eugene

Eric Laurent

Jan 21, 2011, 1:29:34 PM
to android-porting
Hi,

There is no documentation on the audio effects framework other than
the Javadoc for the APIs and the comments in EffectApi.h for effect
engine implementers.

You are almost correct when saying that "effects are applied either
per-track before mixing, or on the output mix".
The exact statement would be that "INSERT effects are applied either
per AUDIO SESSION before mixing, or on the output mix".
With the exception of audio session 0, which by convention refers to
the output mix, an audio session refers to either a single AudioTrack
or a group of AudioTracks. An insert effect created on a particular
session applies to all AudioTracks in that session.
Now the AUXILIARY effects (currently only the Reverb when attached to
the output mix - session 0) are handled differently. An auxiliary
effect processes several inputs from AudioTracks (or MediaPlayers) and
accumulates the result into the output mix. When the auxiliary effect
is created, it does not process anything by default. An AudioTrack
must be explicitly attached to the effect with the attachAuxEffect()
method. Then the amount of signal sent to the effect is controlled by
the setAuxEffectSendLevel() method. This means that the AudioTrack has
a WET path (going through the effect) and a DRY path (going directly
to the output mix). So, yes, although instantiated on session 0
(output mix), auxiliary effects do not apply equally to all sources.
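
Again as a sketch with the Java API; the player is assumed to have its
data source set already, and the 0.8 send level is just an example
value:

    import android.media.MediaPlayer;
    import android.media.audiofx.PresetReverb;

    class AuxEffectDemo {
        // AUXILIARY reverb on session 0 (the output mix). Nothing is
        // processed until a player is explicitly attached to it.
        static PresetReverb addAuxReverb(MediaPlayer player) {
            PresetReverb reverb =
                    new PresetReverb(0 /* priority */, 0 /* session 0 */);
            reverb.setPreset(PresetReverb.PRESET_LARGEHALL);
            reverb.setEnabled(true);
            player.attachAuxEffect(reverb.getId());  // opens the WET path
            player.setAuxEffectSendLevel(0.8f);      // amount fed to the effect
            return reverb;
        }
    }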

The statement "same effect engine being used multiple times" means
that INSIDE A GIVEN AUDIO SESSION, the same effect engine is reused if
2 applications create 2 instances of a the same effect type.
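
A sketch of what that looks like from the Java side (the session id is
assumed to come from elsewhere; note the priority argument decides
which instance gets control of the shared engine's parameters):

    import android.media.audiofx.AudioEffect;
    import android.media.audiofx.Equalizer;

    class SharedEngineDemo {
        // Two instances of the same effect type on one session map to a
        // single engine; the higher-priority instance controls it.
        static void sharedEngine(int session) {
            Equalizer eq1 = new Equalizer(0 /* priority */, session);
            Equalizer eq2 = new Equalizer(10 /* higher priority */, session);
            // eq1 is told when it loses or regains control of the engine.
            eq1.setControlStatusListener(
                    new AudioEffect.OnControlStatusChangeListener() {
                public void onControlStatusChange(AudioEffect effect,
                                                  boolean controlGranted) {
                    // controlGranted is false while eq2 holds control.
                }
            });
        }
    }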

There is no built-in mechanism in the framework to ensure that effects
applied in different sessions do not conflict with each other.
However, the current effect library implementation has optimized
processing when multiple effects are applied within the same session.
It is up to the implementor of platform-specific effect libraries that
replace the default ones to add this type of optimization.

A similar reply applies to your last question. The effect engines are
notified by the effect framework of audio device changes. It is up to
the implementor to alter the processing according to the type of
device or devices connected.
For instance, the current virtualizer implementation behaves
differently depending on whether headphones are connected or not.

Hope this helps.

Eric.


