I looked at the CDD section 5.3, and it says:
"continuous output latency of 45 milliseconds or less"
45ms is not low latency.
Implementing a drum pad, or a synth that reacts to touch events, is not
realistic with more than 10ms of latency, unless you want users to laugh at your product.
Olivier
------------
"warm output latency" is defined to be the interval between when an application
requests audio playback and when sound begins playing, when the
audio system has been recently used but is currently idle (that is, silent)
"continuous output latency" is defined to be the interval between when an
application issues a sample to be played and when the speaker physically
plays the corresponding sound, while the device is currently playing back audio
------------
IIUC, "continuous" latency is the delay between the moment audio data is passed
to the OpenSL API, and the moment it gets out of the speaker.
"warm" latency just seems to be some startup time. It seems to be about resuming
playback, in which case an internal buffer may already carry some data.
Therefore the 10ms allowed for warm latency could be the real hardware latency, while the
extra continuous latency may come from intermediate software layers, I suppose.
Unless I'm wrong, continuous latency is the main one for us app developers.
Olivier
Video (visual) sync is not that crucial to me. For example, with a drum pad,
what's important is that the user doesn't notice any delay between the moment he
or she taps the screen and the moment the sound gets out of the speaker. 10ms is
a *maximum* for this.
I'm not a game developer, but I suppose that similar constraints apply.
16ms is about 700 frames at 44.1kHz (0.016 × 44100 ≈ 706). Again, that really can't be
called low latency to me.
> Comparisons:
[...]
> Apple has been doing it since the first iPhone
> without problem, and that is now only a fraction of the speed of the
> current batch of hardware.
From what I read, you can get 5ms on the iPhone. That's certainly why serious
audio software companies such as iZotope are targeting this platform. Check
their iDrum app to understand what can't currently be done on Android.
[...]
> When I run my own mixer, I've found that running 44.1khz stereo gives
> the lowest latency via AudioTrack for most devices. It's because the
> smallest allowable buffer size is the same for stereo and mono, so
> obviously stereo will be half the latency. Still, the results I get
> are unsuitable for most real-time games and ensuring a 45ms buffer
> would _barely_ get in the ballpark of what would look like a realistic
> sound response time for what you're seeing on-screen. I can't imagine
> it would work well enough for a good rhythm game or a drum synth.
I agree.
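For reference, here's a rough sketch (plain SDK Java, assuming 16-bit stereo PCM) of the
application-side latency floor the quoted post is measuring; whatever AudioFlinger and the
hardware add comes on top of this:

    import android.media.AudioFormat;
    import android.media.AudioTrack;

    public class MinBufferLatency {
        // Rough application-side floor for AudioTrack output latency, in ms.
        // Assumes 16-bit stereo PCM (4 bytes per frame); AudioFlinger and the
        // HAL add their own buffering on top of this.
        public static double minBufferMs(int sampleRate) {
            int minBufBytes = AudioTrack.getMinBufferSize(
                    sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT);
            if (minBufBytes <= 0) {
                return -1;                 // getMinBufferSize reported an error
            }
            int frames = minBufBytes / 4;  // 2 channels * 2 bytes per sample
            return 1000.0 * frames / sampleRate;
        }
    }

Whatever figure that returns is only the minimum the SDK will let you request; as the quoted
post says, the end-to-end result is still unsuitable for a rhythm game or a drum synth.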
Is there an OS design problem?
--
Olivier
If you take the iPhone iDrum app which I mentioned, most of it can actually be
implemented on Android. The only problematic feature is the pad, which lets you play over,
or record, a drum sequence in real time.
That's quite a central feature, but the app would be usable without it. Now, whether it
would be popular is another question... I suppose it would.
Olivier
This CDD makes it hard to deal with device fragmentation.
It says that with 45ms or less, the device can be considered to feature
android.hardware.audio.low_latency.
Therefore, you may have some devices that support much lower latency, suitable
for advanced games and audio apps. But other devices may sit right around the 45ms limit
and still report themselves as featuring low latency.
In this situation, the corresponding <uses-feature> tag is unreliable.
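For completeness, the runtime counterpart of that <uses-feature> tag is the same feature
string queried through PackageManager. This is only a sketch, and the point stands: a
positive answer still means anything from genuinely low latency up to the 45ms ceiling.

    import android.content.Context;
    import android.content.pm.PackageManager;

    public class LowLatencyCheck {
        // Runtime counterpart of the <uses-feature> declaration. The string is
        // the same as PackageManager.FEATURE_AUDIO_LOW_LATENCY (API 9+).
        // "true" only means the device claims to meet the CDD figure, i.e.
        // anything up to 45ms of continuous output latency.
        public static boolean claimsLowLatencyAudio(Context context) {
            PackageManager pm = context.getPackageManager();
            return pm.hasSystemFeature("android.hardware.audio.low_latency");
        }
    }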
I believe that things should be called by their proper names, and 45ms is not low
latency. Maybe it was in the '50s, but I can't tell; I wasn't born yet.
10ms sounds ok to me. You can't expect a professional zero-latency DAW anyway.
--
Olivier
Yeah, that's what I mean. A compromise, but written as "10ms or less". Maybe then some
capable manufacturers would provide true low latency. I don't know what's under OpenSL,
so I can't tell.
With the Java API, latency is apparently caused by extra buffers and IPC in the layers
between the app and the hardware. I'm not sure, but I think that, because it relates to
telephony, sound is a critical part of the platform. Whereas you get fairly direct access
to the GPU, the audio subsystem seems over-encapsulated.
Plus, last time I checked, patches were not accepted for the audio stack
(critical again). So commenting on this CDD is about the only thing we can do
currently.
Apart maybe from releasing an Android variant where all audio is handled by
JACK. That would be something :)
--
Olivier
Otherwise we are designing in a need for a rather silly
"android.hardware.audio.actually_low_latency" so app authors can
differentiate genuinely low latency audio devices from these 45ms
ones.
How was the current 45ms threshold chosen?
Choosing the limit around some human perception threshold, rather than by
comparison to competing platforms, seems the right thing to do - since
then the classes of features it enables will stay constant over time,
regardless of hardware improvements.
- Gus
Agreed. And aside from that, Apple has been making critical professional audio
hardware and software for decades. They clearly have the skills.
There are equally skilled people among free audio software developers, but
for some reason Android doesn't benefit from them.
> I don't think it's so much a problem of over-encapsulation as it is a
> poor choice of abstractions and layering.
I agree.
> One thing I've noticed, is that when application programmers start
> learning how to do real-time audio programming on normal (non hard-
> real-time) OSes there is a steep learning curve because they don't
> understand what's required for real time code (no locks, no memory
> allocation, no blocking apis, etc). I've been there, I've seen this on
> many mailing lists on many platforms (ALSA, JACK, CoreAudio,
> PortAudio, etc)... everyone goes through that stage.
That's correct. I once submitted a JACK-related patch to the FFmpeg project. It
was accepted, but the FFmpeg devs, although really good at what they do, had
quite a lot of trouble understanding the requirements and semantics of realtime
audio: no memory allocations, lock-free ringbuffers, and so on.
What's confusing is that they are audio codec experts, but that doesn't
make them application-level realtime audio devs.
And I'm afraid that such skills seem to be missing in the Android teams.
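For anyone hitting that learning curve, here is a minimal sketch of what "lock-free
ringbuffer" means in practice. The class and method names are mine, not from any Android
API; it's the usual single-producer/single-consumer pattern, where the control thread
writes and the audio thread reads without ever taking a lock or allocating:

    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal single-producer/single-consumer ring buffer: one control thread
    // writes, one audio thread reads. No locks, no allocation after construction.
    // Capacity must be a power of two. (Illustrative code, not an Android API.)
    public final class SpscRingBuffer {
        private final float[] buffer;
        private final int capacity;
        private final AtomicInteger writeIndex = new AtomicInteger(0);
        private final AtomicInteger readIndex = new AtomicInteger(0);

        public SpscRingBuffer(int capacityPowerOfTwo) {
            capacity = capacityPowerOfTwo;
            buffer = new float[capacity];
        }

        // Producer (control/UI) thread only: never blocks, drops when full.
        public boolean offer(float value) {
            int w = writeIndex.get();
            if (w - readIndex.get() == capacity) {
                return false;                       // full
            }
            buffer[w & (capacity - 1)] = value;
            writeIndex.lazySet(w + 1);              // publish the slot
            return true;
        }

        // Consumer (audio) thread only: never blocks, returns NaN when empty.
        public float poll() {
            int r = readIndex.get();
            if (r == writeIndex.get()) {
                return Float.NaN;                   // empty
            }
            float value = buffer[r & (capacity - 1)];
            readIndex.lazySet(r + 1);               // free the slot
            return value;
        }
    }

The important property is on the consumer side: when the buffer is empty the audio thread
gets an immediate answer instead of blocking, so it can fill its output with silence and
carry on.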
> Of course, you can sidestep all this if you defined "low-latency" as
> 45ms ;)
Reading Glenn's answer about the difference between "warm" and "continuous"
latency, it seems that 35ms of those 45ms come from software layers, AudioFlinger and
the like.
> Sorry for another long post but I think Android is important enough
> for this not to get f**kd up yet again...
I agree, it's really time to improve this poor audio situation, but the longer this goes
on, the more I think there are critical design flaws in the OS.
Also, I looked at the OpenSL API and I clearly don't understand why so much work
is being put into bells and whistles such as reverb, while the bare
minimum, reliable low-latency PCM input/output, is not provided.
I'm sorry if I'm a bit harsh, but I've been working with Android audio APIs for
over a year now, and I feel like telling the truth.
That said, happy holidays to everyone!
--
Olivier
> I am alarmed to see the following in the above cited document:
So am I!
> But this point is asynchronous with respect to the application. Thus
> you should use a mutex or other synchronization mechanism to control
> access to any variables shared between the application and the
> callback handler. In the example code, such as for buffer queues, we
> have omitted this synchronization in the interest of simplicity.
> However, proper mutual exclusion would be critical for any production
> code.
> <<<<
Yeah, of course, locking a mutex in an audio process callback. This is a newbie
audio development mistake.
> Employing mutexes will almost certainly cause priority inversion at
> some stage and glitch audio rendering. The usual (and simplest) safe
> technique to communicate with asynchronous audio callbacks are lock-
> free fifo command queues. Try-locks may also be an option in some
> cases. I've already mentioned the learning curve involved in writing
> real-time audio software on non-real time OSes.. clearly whoever wrote
> that document hasn't traversed it.
I completely agree. This situation is not professional; it's 100% amateurism.
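To make the "try-locks may also be an option" remark concrete, here is a small sketch
(again, the names are mine): the control thread locks normally, while the audio side only
*tries* the lock and keeps its last known value if the lock is contended, so it never blocks.

    import java.util.concurrent.locks.ReentrantLock;

    // Try-lock pattern for sharing a parameter with an audio callback.
    public final class SynthParams {
        private final ReentrantLock lock = new ReentrantLock();
        private float sharedGain = 1.0f;   // written by the control thread
        private float audioGain = 1.0f;    // private copy used by the audio thread

        // Control (UI) thread: may block briefly, which is fine here.
        public void setGain(float gain) {
            lock.lock();
            try {
                sharedGain = gain;
            } finally {
                lock.unlock();
            }
        }

        // Audio thread: never blocks; keeps the old value if the lock is busy.
        public float gainForThisBuffer() {
            if (lock.tryLock()) {
                try {
                    audioGain = sharedGain;
                } finally {
                    lock.unlock();
                }
            }
            return audioGain;
        }
    }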
--
Olivier
Okay, I'm being a bit harsh here, sorry. But really, none of this is serious.
--
Olivier
> Regarding "patches not accepted", I'm not aware of any previous
> policy, so I can't comment on that.
Here we go:
http://groups.google.com/group/android-ndk/msg/9f6e46b39f4f1fae
> However, to quote a well-worn phrase, the current state is "quality
> patches welcomed :-)". Generally "quality" means good code that is
> easy to review, well tested, won't break the platform portability or
> compatibility, etc.
Thanks for the update. That's interesting. I know we're not on android-contrib,
but is the audio stack git head up-to-date?
--
Olivier
I didn't mean to focus on the Java vs native discussion. It's OT here IMO.
About the bells and whistles in OpenSL, effect libraries, and so on: I do agree with you.
All of these application-level features are out of scope when the current audio
API fails to provide reliable core I/O functionality.
I am sure that working with so many manufacturers is a big challenge, and in
this context all efforts should IMO focus on providing reliable audio
input/output, instead of bringing extra complexity and high-level features to
the audio stack.
Olivier