There seems to be a callback interface, see AAudioStreamBuilder_setDataCallback(), which might be closer to how Jack et al. work. I can't find anything about it in the docs, nor in the code samples. Could you please elaborate on that?
1. API DESIGN
AAudio seems simpler than OpenSL ES and that is very welcome. I really like to call simple C functions. I also like the fact that AAudio doesn't force us to learn new abstract concepts, such as graphs and flows. Awesome!
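(For reference, a minimal blocking-write output stream with the AAudio C API looks roughly like this. This is an untested sketch against the current NDK headers; some setter/getter names changed between previews, e.g. AAudioStream_getChannelCount() vs. the older getSamplesPerFrame().)

#include <aaudio/AAudio.h>
#include <string.h>

// Sketch: open a default output stream, write some bursts of silence, clean up.
static aaudio_result_t play_silence(void) {
    AAudioStreamBuilder *builder = NULL;
    AAudioStream *stream = NULL;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    // Rate and channel count are left unspecified so the device's native values are used.
    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);
    result = AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return result;

    const int32_t burst = AAudioStream_getFramesPerBurst(stream);
    const int32_t channels = AAudioStream_getChannelCount(stream);
    float buffer[burst * channels];               // one burst of interleaved float frames
    memset(buffer, 0, sizeof(buffer));

    AAudioStream_requestStart(stream);
    for (int i = 0; i < 100; i++) {
        // Blocking write; the timeout is 100 ms expressed in nanoseconds.
        AAudioStream_write(stream, buffer, burst, 100 * 1000 * 1000L);
    }
    AAudioStream_requestStop(stream);
    AAudioStream_close(stream);
    return AAUDIO_OK;
}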
2. ROUTING
I understand that audio routing is not planned to be part of AAudio for now; however, it offers to connect to specific audio devices, so some kind of routing is already part of the API. Using AudioManager's getDevices() in Java to discover audio devices, then going to native code using JNI just to select that device, goes against the simplicity goal of AAudio. Please consider implementing at least the same audio routing and device discovery features available in Java, so developers don't need to write Java/JNI code for that.
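(To make the current workflow concrete: the device id has to come from Java, via AudioManager.getDevices() and AudioDeviceInfo.getId(), and then be handed across JNI to native code along these lines. A sketch; the JNI plumbing itself is assumed.)

#include <aaudio/AAudio.h>

// Sketch: open an output stream on a specific device. 'deviceId' is assumed to be
// the AudioDeviceInfo.getId() value discovered in Java and passed down over JNI.
aaudio_result_t open_stream_on_device(int32_t deviceId, AAudioStream **streamOut) {
    AAudioStreamBuilder *builder = NULL;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    AAudioStreamBuilder_setDeviceId(builder, deviceId);  // AAUDIO_UNSPECIFIED selects the primary device
    AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_OUTPUT);

    result = AAudioStreamBuilder_openStream(builder, streamOut);
    AAudioStreamBuilder_delete(builder);
    return result;
}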
3. EXCLUSIVE SHARING MODE
The exclusive audio device sharing mode is interesting.
Question: Does it mean some kind of "pass-through" access in the Android media server, connecting the user app with the audio HAL in a direct way? So developers can directly connect audio input and output in the audio processing thread, eliminating the extra buffering required by the separate audio input and output threads of the current audio stack?
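(As I read the header, exclusive mode is only a request on the builder; the stream may still come back shared. A sketch of asking for it and checking what was actually granted:)

#include <aaudio/AAudio.h>

// Sketch: ask for an exclusive stream, but be prepared to get a shared one.
aaudio_result_t open_exclusive_if_possible(AAudioStream **streamOut) {
    AAudioStreamBuilder *builder = NULL;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE);
    result = AAudioStreamBuilder_openStream(builder, streamOut);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return result;

    if (AAudioStream_getSharingMode(*streamOut) != AAUDIO_SHARING_MODE_EXCLUSIVE) {
        // The device was busy or exclusive mode is unsupported; we got a shared stream instead.
    }
    return AAUDIO_OK;
}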
4. SAMPLE RATE, BUFFER SIZE, FORMAT
It's great finally having native APIs to get the native sample rate and buffer size of the audio device without Java. I understand that the native sample rate and buffer size are the best choice for talking to an audio device to achieve low latency audio.
Question: If a developer chooses a different sample rate or buffer size, will the API try to configure/restart the HAL with those settings? Or will it "just" handle sample rate conversion and buffering before the audio data reaches the HAL, which will always run at its native sample rate and buffer size?
5. THREAD PRIORITY
Question: Since high priority threads are paramount for preventing audio dropouts, why is it possible to run on a non-high-priority thread at all? Perhaps some of the APIs could return errors in this case.
6. STREAM BUFFER FRAME NUMBERS
There are APIs for getting frame counts of internal buffers, such as AAudioStream_getBufferSizeInFrames(). Nice!
Question: In exclusive mode, are they directly connected to the similar properties of the HAL, or are they connected to some kind of internal loop/buffering mechanism (in the media server)?
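(For anyone following along, these are the buffer-related getters in question; a trivial sketch that dumps them for an already opened stream:)

#include <aaudio/AAudio.h>
#include <stdio.h>

// Sketch: print the buffer geometry of an open stream.
void dump_buffer_geometry(AAudioStream *stream) {
    printf("framesPerBurst         = %d\n", AAudioStream_getFramesPerBurst(stream));
    printf("bufferSizeInFrames     = %d\n", AAudioStream_getBufferSizeInFrames(stream));
    printf("bufferCapacityInFrames = %d\n", AAudioStream_getBufferCapacityInFrames(stream));
    printf("xRunCount              = %d\n", AAudioStream_getXRunCount(stream));
}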
7. TUNING FRAME SIZE
Question: Since AAudioStream_getFramesPerBurst() provides the optimal, native number of frames of the audio device for the lowest latency, what's the purpose of "tuning" the frame size as shown in the example app?
8. TIMESTAMPS
Question: Since the reported latency of the media server cannot be trusted, as audio HALs may report false numbers, how does AAudioStream_getTimestamp() know the timestamp?
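(For context, the timestamp is typically used to estimate latency yourself rather than trusting a reported figure. A sketch of that arithmetic for an output stream, assuming the call succeeds:)

#include <aaudio/AAudio.h>
#include <time.h>

// Sketch: estimate output latency from a presentation timestamp.
// framePosition/frameTimeNanos describe a frame that was (or will be) presented at the DAC.
double estimate_output_latency_ms(AAudioStream *stream) {
    int64_t framePosition = 0;
    int64_t frameTimeNanos = 0;
    if (AAudioStream_getTimestamp(stream, CLOCK_MONOTONIC,
                                  &framePosition, &frameTimeNanos) != AAUDIO_OK) {
        return -1.0;  // timestamps not available (yet)
    }
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    int64_t nowNanos = (int64_t)now.tv_sec * 1000000000LL + now.tv_nsec;

    // Extrapolate which frame is playing right now, then compare with how many
    // frames the app has written so far.
    int32_t sampleRate = AAudioStream_getSampleRate(stream);
    int64_t framesPlayedNow = framePosition +
        ((nowNanos - frameTimeNanos) * sampleRate) / 1000000000LL;
    int64_t framesWritten = AAudioStream_getFramesWritten(stream);
    return (double)(framesWritten - framesPlayedNow) * 1000.0 / sampleRate;
}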
9. DATA CALLBACK
Question: Is AAudioStreamBuilder_setDataCallback() some kind of convenience feature for the case when a developer doesn't want to create a custom audio processing loop (as shown in the example app)?
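(For reference, the callback path looks roughly like this in the current headers. A sketch; the render code itself is a placeholder.)

#include <aaudio/AAudio.h>
#include <string.h>

// Sketch: a data callback that fills each burst with silence.
// A real app would render audio into 'audioData' here.
static aaudio_data_callback_result_t my_data_callback(
        AAudioStream *stream, void *userData, void *audioData, int32_t numFrames) {
    int32_t channels = AAudioStream_getChannelCount(stream);
    memset(audioData, 0, (size_t)numFrames * channels * sizeof(float));
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

static aaudio_result_t open_callback_stream(AAudioStream **streamOut) {
    AAudioStreamBuilder *builder = NULL;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);
    AAudioStreamBuilder_setDataCallback(builder, my_data_callback, NULL /* userData */);
    result = AAudioStreamBuilder_openStream(builder, streamOut);
    AAudioStreamBuilder_delete(builder);
    if (result == AAUDIO_OK) {
        result = AAudioStream_requestStart(*streamOut);  // callbacks start firing after this
    }
    return result;
}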
Thank you for catching this. It was an error in the documentation, and is now fixed in the online version.
Phil Burk
> 2. ROUTING
> Please consider implementing at least the same audio routing and device discovery features available in Java, so developers don't need to write Java/JNI code for that.
Thank you. We will consider native device enumeration and selection for a future release.
Note that there is no plan for AAudio to support “auto-routing”. Auto-routing can occur when the “primary” device changes. This could happen, for example, when the user connects a Bluetooth headset. This automatic device switch can cause an irreversible increase in latency that the app is unaware of. The plan for AAudio is to disconnect the stream when auto-routing would normally occur. Then the app has the choice of reconnecting to the primary output device with new settings for latency, etc. What do you think about not supporting auto-routing in AAudio?
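(From the app side, that disconnect model would look something like the sketch below, under the assumption that a routing change surfaces as AAUDIO_ERROR_DISCONNECTED from a blocking write.)

#include <aaudio/AAudio.h>

// Sketch: if the stream gets disconnected (e.g. a headset is plugged in),
// close it and reopen on whatever is now the primary device.
aaudio_result_t write_with_reconnect(AAudioStream **streamRef,
                                     const float *frames, int32_t numFrames) {
    aaudio_result_t result = AAudioStream_write(*streamRef, frames, numFrames,
                                                100 * 1000 * 1000L /* 100 ms timeout */);
    if (result == AAUDIO_ERROR_DISCONNECTED) {
        AAudioStream_close(*streamRef);
        *streamRef = NULL;
        // Reopen with the default (unspecified) device id; the app can re-query
        // sample rate, burst size, etc. and adjust its own latency expectations.
        AAudioStreamBuilder *builder = NULL;
        if (AAudio_createStreamBuilder(&builder) == AAUDIO_OK) {
            result = AAudioStreamBuilder_openStream(builder, streamRef);
            AAudioStreamBuilder_delete(builder);
            if (result == AAUDIO_OK) result = AAudioStream_requestStart(*streamRef);
        }
    }
    return result;
}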
> 3. EXCLUSIVE SHARING MODE
> Does it mean some kind of "pass-through" access in the Android media server, connecting the user app with the audio HAL in a direct way?
That is the idea. We are thinking about providing a more direct path between the application and a HAL level buffer, bypassing the shared mixer stage. Note that this is not implemented in DP1. But we are very interested in feedback on whether this would be useful.
> 4. SAMPLE RATE, BUFFER SIZE, FORMAT
> If a developer chooses a different sample rate or buffer size, will the API try to configure/restart the HAL with those settings? Or will it "just" handle sample rate conversion and buffering before the audio data reaches the HAL?
Ideally the app will use the default AAUDIO_UNSPECIFIED value for sample rate, etc. If the app selects a rate then it will try to configure the stream with that rate, which may fail. AAudio, by itself, will not do sample rate conversion because it adds latency and complexity. It is possible, however, that the layers beneath AAudio may do sample rate conversion. Some drivers, for example, do sample rate conversion. It is just math and can be done more easily at the application level, where the developer can control it.
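(In other words, the recommended pattern seems to be: leave everything unspecified, read back what the stream actually uses, and adapt in the app. A sketch:)

#include <aaudio/AAudio.h>

// Sketch: take whatever the device natively runs at, and resample/convert
// in the app if a different rate or format is needed.
aaudio_result_t open_with_native_settings(AAudioStream **streamOut,
                                          int32_t *actualRate,
                                          aaudio_format_t *actualFormat) {
    AAudioStreamBuilder *builder = NULL;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    // No setSampleRate / setFormat calls: AAUDIO_UNSPECIFIED is the default.
    result = AAudioStreamBuilder_openStream(builder, streamOut);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return result;

    *actualRate = AAudioStream_getSampleRate(*streamOut);
    *actualFormat = AAudioStream_getFormat(*streamOut);
    return AAUDIO_OK;
}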
> 5. THREAD PRIORITY
> Why is there a possibility to run a non-high priority thread at all? Perhaps some of the APIs could return with errors in this case.

There are several ways to reduce the probability of dropouts: use high priority threads, use big buffers, reduce worst-case CPU usage per buffer, etc. Some apps do not need low latency and can live with blocking writes from a normal thread.
> 7. TUNING FRAME SIZE
> Since AAudioStream_getFramesPerBurst() provides the optimal, native number of frames of the audio device for the lowest latency, what's the purpose of "tuning" the frame size as shown in the example app?
“burst” != “buffer”
The tuning process involves setting the size of the larger buffer. For output, the app can write an arbitrary number of frames. But the reader will typically read in a fixed-size burst, which is returned by AAudioStream_getFramesPerBurst(). It is best for apps to write one burst at a time. There can be room for multiple bursts in the large buffer.
If you are running with a moderate load then you may not be able to get by running “single buffered”, i.e. one burst in the buffer. If you want to be “double buffered” then call AAudioStream_setBufferSizeInFrames(stream, 2 * framesPerBurst).
There is a diagram and an explanation here:
https://developer.android.com/ndk/guides/audio/aaudio/aaudio.html#tuning-buffers
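(Translated into code, the tuning described there boils down to something like this sketch of the approach used in the example app, as I understand it; the growth policy is only an illustration.)

#include <aaudio/AAudio.h>

// Sketch: grow the buffer by one burst whenever a new underrun is observed,
// up to the capacity of the stream.
void tune_buffer_size(AAudioStream *stream, int32_t *previousXRuns) {
    int32_t burst = AAudioStream_getFramesPerBurst(stream);
    int32_t size = AAudioStream_getBufferSizeInFrames(stream);
    int32_t capacity = AAudioStream_getBufferCapacityInFrames(stream);
    int32_t xRuns = AAudioStream_getXRunCount(stream);

    if (xRuns > *previousXRuns && size + burst <= capacity) {
        // One more burst of headroom; returns the size actually set (or an error).
        AAudioStream_setBufferSizeInFrames(stream, size + burst);
    }
    *previousXRuns = xRuns;
}

// Typical starting point before the tuning loop runs ("double buffered"):
//   AAudioStream_setBufferSizeInFrames(stream, 2 * AAudioStream_getFramesPerBurst(stream));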
It would be great if AAudio did not do additional buffering and buffer sizes mapped 1:1 to the HAL's buffer size. I don't really get this tuning thing.
Hello Felix,
Please note that the API is not final. There will be changes between DP1 and the next Beta.
I see two use-cases here:
1. Low latency audio requirements, solvable by AAudioStream_setBufferSizeInFrames(stream, AAudioStream_getFramesPerBurst(stream));
2. Low latency audio is not a requirement. Tuning?
Regarding tuning, I see the following problems:
- In order to get a rock-solid buffer size, during the tuning process the user application must set up its audio chain to operate at maximum CPU load (enable all effects and features). All the synchronization problems need to be present too (if the developer was not careful about mutexes, locks and such). I'm not sure most developers will do this extra work.
- If the audio processing doesn't happen on a high priority thread, the tuning process may not catch every scheduling problem in the short time it runs, such as when the app goes into the background.
- On other platforms developers simply set larger buffer sizes for this; 4096 is a popular number on iOS, for example. Much less work and code!
Hello Gábor,
Thanks for the feedback. Responses below...
On Thursday, April 13, 2017 at 7:28:38 AM UTC-7, Gábor Szántó wrote:
> 1. API DESIGN
> AAudio seems simpler than OpenSL ES and that is very welcome. I really like to call simple C functions. I also like the fact that AAudio doesn't force us to learn new abstract concepts, such as graphs and flows. Awesome!

We are glad you like it. If you have specific suggestions for improvements, please let us know ASAP.
Hello Felix,

> Most prominently, I miss bidirectional streams like the ones Portaudio provides and then a callback interface for keeping input and output in sync. Very much the way Portaudio works. Very much the way Jack works.

Bidirectional stream callbacks are a big pain in PortAudio. They are often not implemented correctly. I believe that is a complicated problem that is best solved above the OS level. AAudio supports the use of multiple streams through non-blocking reads and writes.

> I'm even more puzzled now why you have not just settled on using a well established API like Portaudio as Android's audio API

PortAudio is meant to be a wrapper for host APIs. Given that AAudio is very similar to PortAudio, it should be easy to implement PortAudio on top of AAudio. Also note that PortAudio is 20 years old. We had an opportunity to avoid some of the difficult parts of PortAudio. We do not, for example, pass pointers to structures in AAudio. Also, we did not want to be restricted by the PortAudio API. That was a problem with OpenSL ES.

Phil Burk
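(To sketch what "multiple streams through non-blocking reads and writes" could look like in practice: a timeout of zero makes read/write non-blocking. This is an untested sketch; the resampling and drift handling a real app would need are deliberately omitted.)

#include <aaudio/AAudio.h>

// Sketch: one pass of a full-duplex loop built from two independent streams.
// inputStream and outputStream are assumed to be open, started, and to share the
// same sample rate and channel count; 'scratch' holds at least one burst of frames.
void duplex_pass(AAudioStream *inputStream, AAudioStream *outputStream,
                 float *scratch, int32_t burstFrames) {
    // Timeout 0 => non-blocking: returns however many frames are available right now.
    aaudio_result_t framesRead = AAudioStream_read(inputStream, scratch,
                                                   burstFrames, 0 /* ns */);
    if (framesRead > 0) {
        // process(scratch, framesRead);  // hypothetical in-place processing
        AAudioStream_write(outputStream, scratch, framesRead, 0 /* ns */);
    }
    // A real app must also decide what to do when the two streams drift apart
    // (drop or insert frames), which is the hard part being alluded to above.
}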
On Mon, Apr 17, 2017 at 10:58 AM, Felix Homann <showlabor....@gmail.com> wrote:
Hello Phil,

On Friday, April 14, 2017 at 6:55:44 PM UTC+2, Phil Burk wrote:
> Hello Felix,
> Please note that the API is not final. There will be changes between DP1 and the next Beta.

OK, without mentioning it above I was hoping for features in the callback-driven API not present in the rest of the API. Most prominently, I miss bidirectional streams like the ones Portaudio provides, and a callback interface for keeping input and output in sync. Very much the way Portaudio works. Very much the way Jack works.

BTW, I just realized that you're actually the Phil Burk of Portaudio fame, so I guess you're much more familiar with Portaudio than I am ;-) On the other hand, I'm even more puzzled now why you have not just settled on using a well-established API like Portaudio as Android's audio API (I still would have preferred JACK for its interconnection capabilities).
Regards,
Felix
There are very big advantages to adding features above the OS.
- Multiple implementations possible. We don't have to "get it right".
- Fixes can be delivered in days instead of months or years.
- Developers do not have to wait for us.
- There are many more developers outside our Android Audio group than inside.
There has been a recommendation not to "use more than 20% of your callback budget to generate data" with the addition "Higher percentage possible on some devices" (Google I/O 2016/Android high-performance audio). It would be great if there were a way to determine the concrete percentage for a given device.
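(One way to at least measure where you stand on a given device is to time your own render code against the burst duration. A sketch; this is just CLOCK_MONOTONIC timing around a hypothetical render function, nothing AAudio-specific.)

#include <aaudio/AAudio.h>
#include <time.h>

static int64_t now_nanos(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

// Sketch: fraction of one callback period spent rendering 'numFrames' frames.
// render_audio() is a placeholder for the app's own DSP.
double measure_callback_load(AAudioStream *stream, void *audioData, int32_t numFrames) {
    (void)audioData;
    int64_t start = now_nanos();
    // render_audio(audioData, numFrames);   // hypothetical
    int64_t elapsed = now_nanos() - start;

    double periodNanos = (double)numFrames * 1000000000.0 /
                         AAudioStream_getSampleRate(stream);
    return elapsed / periodNanos;            // e.g. 0.2 == 20% of the budget
}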
Hmm, I guess you just made a case for granting access to ALSA APIs on Android devices ;-) That would be great!

Nice try! ;-)
The OS should be thin. But not too thin. Some primary goals for an OS are hardware abstraction, resource management, and security. ALSA is a little too close to the hardware, drivers need to be shared, and there are security issues in some drivers. But my ultimate goal is to make the audio framework seem like a short wire between the app and the HALs.
The issue I have is that your goals for hardware abstraction seemingly do not - in the short run - include making lots of USB audio devices usable at all ;-) So, as long as class-compliant USB audio interfaces won't work with Android devices, because either a) they sport too many channels for Android or b) you don't have access to UAC1/2-compliant mixer controls because there isn't any HAL for that, I'd prefer having access to low-level ALSA any time, maybe at least for external devices.
[...] I feel your pain. But security trumps audio features.