I've also heard that the systrace output only works in Chrome.
Good to know that 1) is not a bug but a "feature". I initially had the same idea to reduce jitter, but it always failed. For larger block sizes this is okay, but not being able to use a higher number of buffers is a pity, because on lower-end devices with a single core I have no chance to compensate for interrupts. A good idea would then in fact be to put the processing into a separate thread and add additional queues, but since I cannot raise the priority of my own thread, I can only choose between two options that are both not optimal.
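Just to illustrate what I mean by not being able to raise the priority: this is roughly all that can be requested from native code (a sketch only; the -16 nice level and the helper name are assumptions, and SCHED_FIFO is not available to normal apps at all):

#include <sys/resource.h>   // setpriority
#include <sys/syscall.h>    // __NR_gettid
#include <unistd.h>         // syscall

// Attempt to move the current thread to the Android "audio" nice level.
// Normal apps cannot request SCHED_FIFO, so a nice value is all that can
// be asked for, and the scheduler is still free to preempt the thread.
static bool tryRaiseAudioPriority()
{
    const int kAudioNice = -16;   // assumed value of ANDROID_PRIORITY_AUDIO
    pid_t tid = static_cast<pid_t>(syscall(__NR_gettid));
    return setpriority(PRIO_PROCESS, tid, kAudioNice) == 0;
}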
What is driving me mad is that my processing is usually not so expensive that it won't fit into the native block size. But since I do not eat up a complete core, the system decides to throttle the core down, and *THAT* gets me into huge trouble because I then exceed the 5 ms budget. Isn't there a way to keep a core at its maximum frequency? I have even thought about spinning after audio generation has completed, just to keep the CPU core busy.
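The spinning idea would look roughly like this (just a sketch; the headroom fraction is an arbitrary assumption, and of course this burns battery):

#include <stdint.h>
#include <time.h>

static int64_t nowNs()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

// Busy-wait for part of the remaining callback period so the core stays
// loaded and the frequency governor has less reason to clock it down.
// periodNs is the buffer duration, e.g. 5 ms for 240 frames at 48000 Hz.
static void spinAfterRendering(int64_t callbackStartNs, int64_t periodNs)
{
    const int64_t deadline = callbackStartNs + (periodNs * 3) / 4;  // keep some headroom
    while (nowNs() < deadline) {
        // intentionally empty: burn cycles to keep the core "busy"
    }
}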
Another thing I noticed is that even though the Nexus 4 has 4 cores, I often run on core 0, together with a bunch of services. If the sensor service, for example, comes into play, the OpenSL callback is called late, which is again a source of glitches.
Setting the thread affinity is unfortunately ignored.
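For reference, the affinity request I am making is essentially the standard one (a sketch; the core index is arbitrary):

#include <sched.h>          // sched_setaffinity, CPU_ZERO, CPU_SET
#include <sys/syscall.h>    // __NR_gettid
#include <unistd.h>         // syscall

// Ask the kernel to keep the current thread on one specific core.
// As noted above, on stock devices the request seems to have no effect:
// the thread is still migrated as the scheduler sees fit.
static bool tryPinCurrentThread(int cpuIndex)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpuIndex, &set);
    pid_t tid = static_cast<pid_t>(syscall(__NR_gettid));
    return sched_setaffinity(tid, sizeof(set), &set) == 0;
}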
Point 2) is clear, of course. Only very raw math processing is done in the callback: a mix of computation together with a good amount of table look-ups :), using lock-free ring buffers for everything...
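The ring buffers are along these lines, a minimal single-producer/single-consumer queue (only a sketch, not my actual code; the power-of-two capacity and the template payload are assumptions):

#include <atomic>
#include <cstddef>

// Minimal single-producer / single-consumer lock-free ring buffer.
// One thread only calls push(), the other only calls pop(); there are no
// locks, so pop() is safe to call from the audio callback.
template <typename T, size_t N>   // N must be a power of two
class SpscRing {
    static_assert(N && (N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(const T& v) {
        size_t w = mWrite.load(std::memory_order_relaxed);
        size_t next = (w + 1) & (N - 1);
        if (next == mRead.load(std::memory_order_acquire))
            return false;                       // full, drop or retry later
        mBuf[w] = v;
        mWrite.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {
        size_t r = mRead.load(std::memory_order_relaxed);
        if (r == mWrite.load(std::memory_order_acquire))
            return false;                       // empty
        out = mBuf[r];
        mRead.store((r + 1) & (N - 1), std::memory_order_release);
        return true;
    }
private:
    T mBuf[N];
    std::atomic<size_t> mWrite{0};
    std::atomic<size_t> mRead{0};
};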
Could the value of 240 for fRdy2 be caused by KitKat? I now have these problems on my Nexus 7 as well (512/44100 native), although this device ran without any trouble before.
The algorithm has been tweaked a lot and has a cycles-per-instruction value of 0.33-0.7. There is a lot of computational work to be done for each sample, but I have my guidelines for producing a good sound and therefore cannot reduce the workload. For example, all modulations and parameter changes are completely smoothed, so an oscillator can change its frequency on every sample; another guideline is that all oscillators are fully band-limited, i.e. alias-free at any frequency they have to produce.
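The smoothing is conceptually something like this one-pole smoother (a sketch only; the coefficient value is an assumption, not what I actually use):

// One-pole smoother: the target may jump arbitrarily (a new oscillator
// frequency, a moved knob), but the value fed to the DSP moves only a
// small step every sample, which avoids zipper noise and clicks.
class SmoothedParam {
public:
    explicit SmoothedParam(float coeff = 0.001f) : mCoeff(coeff) {}
    void setTarget(float t) { mTarget = t; }
    float next() {                         // call once per sample
        mCurrent += mCoeff * (mTarget - mCurrent);
        return mCurrent;
    }
private:
    float mCoeff;
    float mTarget  = 0.0f;
    float mCurrent = 0.0f;
};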
Looking at the load of the core that does the computation, there is, in theory, still room left for more features; reverberation, for example, is planned. But first I want it to be stable on as many devices as possible before adding features.
Thanks for reporting this issue, and for your analysis.
I've prepared a changelist that appears to improve performance for me, but I would also appreciate your testing, if you can build the platform from source (or know someone else who can). Unfortunately, I cannot post binaries.
The source code patch is here: https://android-review.googlesource.com/#/c/71421
I believe that the root causes of this issue are a combination of:
1. In Android 4.4 (“KitKat”), the number of client-to-server buffers for fast tracks changed from 2 to 1, in order to further reduce latency. This is unrelated to the OpenSL ES buffer count.
2. On at least two devices (Nexus 4 and Nexus 5), there is scheduling jitter, e.g. the migration delay that you noted.
3. Some apps themselves contribute timing jitter, and need to be able to tolerate such jitter.
The above patch only addresses #1 by restoring the number of client-to-server buffers for fast tracks back to 2. It does not address #2 or #3.
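Regarding #3, one common app-side mitigation is to queue more than the minimum number of OpenSL ES buffers, so that a single late callback does not immediately turn into an underrun. A rough sketch of the relevant part of the setup (the count of 4 is only an example, not a recommendation, and more buffers means more latency):

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// Declare the buffer queue with a few extra buffers queued ahead of the
// mixer. More buffers means more latency, but a single late callback no
// longer turns straight into an audible glitch.
SLDataLocator_AndroidSimpleBufferQueue locator = {
    SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,
    4   // number of buffers; 2 is common, 4 trades latency for robustness
};
// 'locator' then goes into the SLDataSource passed to CreateAudioPlayer().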
I would appreciate feedback from the original poster or others on the effectiveness of this patch.
If/when the changelist or a derivative is submitted to AOSP, I can't commit to when it would appear in a binary build.
In the longer term, I would like to continue to work together with our OEM and SoC partners to reduce scheduling jitter, and to provide more effective ways for apps to negotiate with the platform what level of app jitter should be tolerated.