Could someone provide an Android native AAudio write sine wave or wav file example


Martin

unread,
Oct 6, 2017, 11:26:26 AM10/6/17
to android-ndk
As you might know, Android O's AAudio is relatively new and there is not much documentation apart from the Android NDK AAudio API reference, which obviously isn't as thorough as the Android SDK documentation. There is also a googlesamples example for AAudio, but it is rather confusing because the AAudio write function is not used. Has anyone done this before who could provide an example of producing output with AAudio's write function, either a sine wave or some file (e.g. .wav)?
Currently I have just created a stream, but there is no output since I haven't managed to use the write function. Thanks!

Phil Burk

unread,
Oct 6, 2017, 11:58:18 AM10/6/17
to andro...@googlegroups.com
Hello Martin,

Thanks for trying AAudio.

The callback technique is normally used instead of the blocking write because it can run at a higher priority. So it is good for low latency applications like keyboard synthesizers or drum pads.

If you do not require low latency then the blocking write is fine to use. Playing a WAV file, for example, or generating a steady test tone, does not require low latency.

But if you do blocking writes then you will need to create a thread that does the writes in a loop.

Here is some code for doing blocking writes. You can do all this in one thread.

const int64_t timeoutNanos = 500000000;

const int numChannels = 1;  // mono, for example
AAudioStreamBuilder *builder;
aaudio_result_t result = AAudio_createStreamBuilder(&builder);

// Setup stream any way you want.
AAudioStreamBuilder_setChannelCount(builder, numChannels);
AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT); // or PCM16

AAudioStream *stream;
result = AAudioStreamBuilder_openStream(builder, &stream);
AAudioStreamBuilder_delete(builder);
if (result != AAUDIO_OK) panic();

// You can write to the stream using any chunk size.
// But it is more efficient to match the burst size of the stream.

int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
int32_t sampleRate = AAudioStream_getSampleRate(stream);

// Allocate a buffer for your audio data
float *audioBuffer = new float[framesPerBurst * numChannels]; // or int16_t

while (stillHavingFun() && result == AAUDIO_OK) {
    // render audio
    float *data = audioBuffer;
    for (int frame = 0; frame < framesPerBurst; frame++) {
        for (int channel = 0; channel < numChannels; channel ++) {
            *data++ = generateTone(sampleRate , channel); // render the data however you want
        }
    }
    // Write the data to the stream. It will block until the write completes or times out.
    result = AAudioStream_write(stream, audioBuffer, framesPerBurst, timeoutNanos);
}

AAudioStream_stop(stream);
AAudioStream_close(stream);
delete[] audioBuffer; // free the buffer allocated above

FLOAT data should be between -1.0 and +1.0.
PCM16 data should be between -32768 and 32767.
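If you render in float but need to feed a PCM16 stream (or the reverse), the conversion is just a scale and clamp. A minimal sketch; the function names are mine, not part of AAudio:

```cpp
#include <algorithm>
#include <cstdint>

// Convert one float sample in [-1.0, +1.0] to PCM16.
// Out-of-range input is clamped rather than allowed to wrap.
int16_t floatToPcm16(float sample) {
    float clamped = std::max(-1.0f, std::min(1.0f, sample));
    return static_cast<int16_t>(clamped * 32767.0f);
}

// And back: PCM16 to float, roughly in [-1.0, +1.0].
float pcm16ToFloat(int16_t sample) {
    return sample / 32768.0f;
}
```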

Let me know if you hit any snags.

Phil Burk




Martin

unread,
Oct 6, 2017, 12:46:16 PM10/6/17
to android-ndk
Thank you very much for your example! I will try it out and let you know how things go and whether I have any more questions.

Martin

unread,
Oct 6, 2017, 1:50:42 PM10/6/17
to android-ndk
Some questions I have (I come from Java, and C++ is kind of new to me):
1. Should I start a thread in the Java layer (where I will call the c++ function via JNI) or start the thread in the c++ code?
2. So the way I understand the while loop is: It loads the audioBuffer with data, and then it will write everything, and then start over. I assume this will run until I stop the while loop?
3. The way you render the data with generateTone: How do you do that if you just want a steady tone or something similar? And how would you do that with a .wav file?
And if a .wav file were to be played, I guess it would just start over and over until stopping it?
4. What is the purpose of timeoutNanos and why the value 500000000? Would this be different if we use a .wav file?
Thanks for answers!


On Friday, October 6, 2017 at 11:58:18 AM UTC-4, Phil Burk wrote:

Phil Burk

unread,
Oct 6, 2017, 2:22:07 PM10/6/17
to andro...@googlegroups.com
Hello Martin,

1. Should I start a thread in the Java layer (where I will call the c++ function via JNI) or start the thread in the c++ code?

Either way is fine, whichever you are more comfortable with.
 
2. So the way I understand the while loop is: It loads the audioBuffer with data, and then it will write everything, and then start over. I assume this will run until I stop the while loop?

Yes. When you want to stop the loop, you could set a variable that tells the loop to exit, and then join the thread.
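That pattern might look like this (a sketch with hypothetical names; the AAudio write call is replaced by a counter so the sketch stands alone):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> keepPlaying{true};
std::atomic<int> burstsWritten{0};

// Stand-in for the blocking-write loop: each iteration would render
// one burst and call AAudioStream_write() on a real stream.
void audioLoop() {
    while (keepPlaying.load()) {
        ++burstsWritten;  // pretend we rendered and wrote one burst
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

To stop: store false into keepPlaying, then join() the thread so the last write finishes before the stream is closed.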
 
3. The way you render the data with generateTone: How do you do that if you just want a steady tone or something similar? And how would you do that with a .wav file?

That is beyond the scope of AAudio.  

But here's a hint. For a tone, calculate a phase that wraps around, for example between 0.0 and 2*PI. Then convert that to a signal. For example:

    phase += frequency * TWO_PI / sampleRate;
    if (phase > TWO_PI) phase -= TWO_PI;  // wrap
    return amplitude * sin(phase);
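Wrapped up as a tiny oscillator, that hint could look like this (a sketch; SineOsc is my name, not an AAudio type). Keeping phase in the struct means it persists across buffers, so the tone stays continuous:

```cpp
#include <cmath>

const float TWO_PI = 2.0f * 3.14159265358979f;

struct SineOsc {
    float phase;      // current phase in [0, TWO_PI], persists across buffers
    float frequency;  // tone frequency in Hz
    float amplitude;  // 0.0 .. 1.0

    // Produce the next sample for a stream running at sampleRate.
    float next(int sampleRate) {
        float value = amplitude * sinf(phase);
        phase += frequency * TWO_PI / sampleRate;
        if (phase > TWO_PI) phase -= TWO_PI;  // wrap
        return value;
    }
};
```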

And if a .wav file were to be played, I guess it would just start over and over until stopping it?

For a WAV file, you can stop at the end of the data or loop back to the beginning. Or whatever you want.
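Decoding the WAV file itself is up to you, but once its PCM samples are in memory, looping is just index arithmetic when filling each burst. A sketch, with fileSamples standing in for the decoded interleaved data:

```cpp
#include <cstddef>
#include <vector>

// Copy `count` samples from decoded WAV data into an output buffer,
// wrapping back to the start when the end of the data is reached.
// Returns the new read position so the caller can resume next burst.
size_t fillFromWav(const std::vector<float>& fileSamples,
                   size_t position, float* out, size_t count) {
    for (size_t i = 0; i < count; i++) {
        out[i] = fileSamples[position];
        position = (position + 1) % fileSamples.size();  // loop playback
    }
    return position;
}
```

To stop at the end instead of looping, break out when position would wrap and zero-fill the rest of the buffer.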
 
4. What is the purpose of timeoutNanos and why the value 500000000? Would this be different if we use a .wav file?


If the stream hangs, for whatever reason, you don't want the write to block forever. This sets the maximum time it will block.
500000000 ns is half a second. That value is arbitrary; it just needs to be longer than the normal wait period. It is not different for a WAV file.

If you are reading and writing multiple streams then you generally block on one stream and use a timeout of 0 so you do not block on the others.

Phil Burk


Martin

unread,
Oct 9, 2017, 9:41:34 AM10/9/17
to android-ndk
So I have now played around with your example, but I am having some issues:
First of all, the thread works fine; it is created in the Java code.
In my C++ code I have this:
...
result = AAudioStream_requestStart(stream);
int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
int32_t sampleRate = AAudioStream_getSampleRate(stream);
float *audioBuffer = new float[framesPerBurst * numChannels];
    playingAAudio = true;
    while (playingAAudio && result == AAUDIO_OK) {
        float *data = audioBuffer;
        int phase = 0;
        for (int i = 0; i < framesPerBurst*numChannels+1; i++) {
            phase = phase + ((2*M_PI*sineFreq)/sampleRate);
            if(phase >2*M_PI) {
                phase + phase - (2*M_PI);
            }
            *data++ = amplitude*sinf(phase);
        }
        result = AAudioStream_write(stream, audioBuffer, framesPerBurst, timeoutNanos);
    }
...
However, I am not getting any sound. I also tried this instead of the phase approach:
*data++ = (sinf(i*2*M_PI*sineFreq/sampleRate))
and it just results in a millisecond of tone, while I want a continuous tone until I stop it.

What should the amplitude be? I tried to set it to 20000 in:
*data++ = 20000*(sinf(i*2*M_PI*sineFreq/sampleRate))
and I almost went deaf lol.

And it never makes more than one pass through the while loop; the result stops being AAUDIO_OK somehow.

Also, sometimes the buffer doesn't get filled. If the buffer size is around 600, it sometimes just stops at 400-something.
Thanks for answers!

Martin

unread,
Oct 9, 2017, 11:38:41 AM10/9/17
to android-ndk
EDIT:
After some more testing:
By declaring a variable, let's say int x = 44100 (which is 1 second at a 44100 Hz sample rate), and using x as the for-loop count, as the buffer size, and in write() instead of framesPerBurst, and by removing the AAUDIO_OK check, I get a continuous tone.
I guess this works, but I am not sure it is the correct approach.
Actually, I do not dare to mess with the parameters that now contain x, because I get extremely loud, weird sounds.

Also, for some reason the app crashes after playing and stopping a couple of times.

Phil Burk

unread,
Oct 9, 2017, 2:40:44 PM10/9/17
to andro...@googlegroups.com
Hello Martin,

On Mon, Oct 9, 2017 at 8:38 AM, Martin <mart...@live.com> wrote:
Also, for some reason the app crashes after playing and stopping a couple of times.

You may be overwriting your audioBuffer. Watch your indices.

On Monday, October 9, 2017 at 9:41:34 AM UTC-4, Martin wrote:
result = AAudioStream_requestStart(stream);

Start the stream right before entering the loop, after allocating the buffer, to prevent underflows.
 
    while (playingAAudio && result == AAUDIO_OK) {
        float *data = audioBuffer;
        int phase = 0;

No. phase should be a float or a double. And declare it outside the loop.
You are always resetting the phase.

        for (int i = 0; i < framesPerBurst*numChannels+1; i++) {

You are spreading one sine wave across multiple channels.
Look at the original example.
 
            if(phase >2*M_PI) {
                phase + phase - (2*M_PI);

That is incorrect. Use:

    phase = phase - (2*M_PI);
 
What should the amplitude be? I tried to set it to 20000 in:

Between 0.0 and 1.0. Start around 0.1.
Try a sineFreq of 440.0.
 
*data++ = 20000*(sinf(i*2*M_PI*sineFreq/sampleRate))
and I almost went deaf lol.

And it never makes more than one pass through the while loop; the result stops being AAUDIO_OK somehow.

Oops. AAudioStream_write() returns the number of frames written. The while loop should be:

    while (playingAAudio && result >= AAUDIO_OK) {
 
Phil Burk

Phil Burk

unread,
Oct 9, 2017, 2:45:38 PM10/9/17
to andro...@googlegroups.com
Here is an updated example:

1) Added AAudioStream_requestStart().
2) Changed AAudioStream_stop() to AAudioStream_requestStop().
3) Fixed loop, now "result >= AAUDIO_OK"

--------------------------------------------------
const int64_t timeoutNanos = 500000000;

const int numChannels = 1;  // mono, for example
AAudioStreamBuilder *builder;
aaudio_result_t result = AAudio_createStreamBuilder(&builder);

// Setup stream any way you want.
AAudioStreamBuilder_setChannelCount(builder, numChannels);
AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT); // or PCM16

AAudioStream *stream;
result = AAudioStreamBuilder_openStream(builder, &stream);
AAudioStreamBuilder_delete(builder);
if (result != AAUDIO_OK) panic();

// You can write to the stream using any chunk size.
// But it is more efficient to match the burst size of the stream.

int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
int32_t sampleRate = AAudioStream_getSampleRate(stream);

// Allocate a buffer for your audio data
float *audioBuffer = new float[framesPerBurst * numChannels]; // or int16_t

result = AAudioStream_requestStart(stream);
while (stillHavingFun() && result >= AAUDIO_OK) {
    // render audio
    float *data = audioBuffer;
    for (int frame = 0; frame < framesPerBurst; frame++) {
        for (int channel = 0; channel < numChannels; channel ++) {
            *data++ = generateTone(sampleRate , channel); // render the data however you want
        }
    }
    // Write the data to the stream. It will block until the write completes or times out.
    // It returns the number of frames written or a negative error.
    result = AAudioStream_write(stream, audioBuffer, framesPerBurst, timeoutNanos);
}

AAudioStream_requestStop(stream);
AAudioStream_close(stream);
delete[] audioBuffer; // free the buffer allocated above

Martin

unread,
Oct 9, 2017, 2:47:05 PM10/9/17
to android-ndk
Thanks!
Everything seems to work. What I wonder though is: is this a correct approach?
int32_t sampleRate = AAudioStream_getSampleRate(stream);
// Allocate a buffer for your audio data.
float *audioBuffer = new float[sampleRate];
playingAAudio = true;
result = AAudioStream_requestStart(stream);
while (playingAAudio) {
    float *data = audioBuffer;
    for (int i = 0; i < sampleRate; i++) {
        *data++ = (sinf(i * 2 * M_PI * sineFreq / sampleRate));
    }
    result = AAudioStream_write(stream, audioBuffer, sampleRate, timeoutNanos);
}
delete[] audioBuffer;
Because here I am not using the phase thing you wrote.

Phil Burk

unread,
Oct 9, 2017, 3:30:34 PM10/9/17
to andro...@googlegroups.com
Hello Martin,

There are still problems with that code.
You use i instead of phase, so you reset the sine signal to phase zero at the beginning of every buffer.
But you are getting sound out, so I think that is all I can help with.
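For completeness, a persistent-phase version of that render loop might look like this (a sketch; the AAudio write call is left out so it stands alone). Because phase survives between calls, consecutive buffers join without a click at the boundary:

```cpp
#include <cmath>

// Persistent oscillator state, declared once, outside the while loop.
static float phase = 0.0f;

// Fill one mono buffer with a sine tone, carrying phase over
// from the previous buffer instead of restarting at zero.
void renderBuffer(float* buffer, int numFrames,
                  float sineFreq, float amplitude, int sampleRate) {
    const float TWO_PI = 2.0f * 3.14159265358979f;
    for (int i = 0; i < numFrames; i++) {
        buffer[i] = amplitude * sinf(phase);
        phase += sineFreq * TWO_PI / sampleRate;
        if (phase > TWO_PI) phase -= TWO_PI;  // wrap
    }
}
```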

Now you need to figure out how to generate the sound. That is outside the scope of this list.

Good luck and have fun. Software synthesis is challenging but really fun.

Phil Burk




Martin

unread,
Oct 9, 2017, 4:32:04 PM10/9/17
to android-ndk
In the write function, shouldn't it be framesPerBurst*numChannels, instead of only framesPerBurst?

Phil Burk

unread,
Oct 9, 2017, 6:48:41 PM10/9/17
to andro...@googlegroups.com
AAudioStream_write() takes numFrames, not numSamples.
That is because you can only write entire frames, not partial frames.

So pass the number of valid frames in your buffer. It does not have to be framesPerBurst, but that is optimal for performance.
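The arithmetic, spelled out (trivial, but this mix-up is common; the helper names are mine):

```cpp
// For an interleaved buffer: total samples = frames * channels.
// AAudioStream_write() wants the frame count, not the sample count.
int framesToSamples(int numFrames, int numChannels) {
    return numFrames * numChannels;
}

int samplesToFrames(int numSamples, int numChannels) {
    return numSamples / numChannels;  // assumes only whole frames are stored
}
```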


