OpenSL ES - Is it possible to set the DataSource as an MP3 buffer?


Dom Bhuphaibool

Oct 22, 2013, 9:57:10 PM10/22/13
to andro...@googlegroups.com
I know this topic has been covered in the past, but I just wanted to double-check. The docs state that when using a buffer queue, only PCM format is supported. I know you can set the DataSource as a URI and have the MP3 converted to PCM. My problem is that I'm reading in a file and extracting an MP3 out of that file. I'm not allowed to write the MP3 to disk. With the MP3 in memory (or portions of it), is there any way to send it to be processed by OpenSL? Can I set the DataSource as an AndroidSimpleBufferQueue with the MP3 data in it? If not, since the docs state that only PCM is supported, is there any workaround (without having to write the MP3 to disk)?

Thanks!

Dom Bhuphaibool

Oct 23, 2013, 6:49:30 PM10/23/13
to andro...@googlegroups.com
So, after digging around some more, I see that you can specify an AndroidBufferQueue as a DataSource, which can take either MPEG-2 TS or AAC ADTS. If I have an MP3 in memory, are there any services to convert it to MPEG-2 TS? I haven't looked at the MPEG-2 TS spec, but it seems like it would just be a matter of chopping the MP3 into chunks and encapsulating them with the appropriate metadata. Is there something in OpenSL or the NDK that would do this for you?

Philippe Simons

Oct 24, 2013, 3:28:15 AM10/24/13
to android-ndk
If your MP3 data is embedded in another file, you can use SLDataLocator_AndroidFD and set the offset from the beginning of that file.

Otherwise you need to use an external MP3 library to decode your stream into PCM and use the buffer queue.



Glenn Kasten

Oct 24, 2013, 10:54:13 AM10/24/13
to andro...@googlegroups.com
One more option, besides Loki's suggestion to use an external open-source decoder with a suitable license, is to use this API: http://developer.android.com/reference/android/media/MediaCodec.html
(I have not personally used MediaCodec, but I understand it's designed for this purpose.)

Dom Bhuphaibool

Oct 24, 2013, 5:22:40 PM10/24/13
to andro...@googlegroups.com
Thanks Glenn! I will look into the MediaCodec option.

Thanks Loki, I knew about the file descriptor option, but my problem is that the MP3 I'm extracting from the blob is not just plain MP3. I have to do some preprocessing to get the MP3 out into a memory buffer, so unfortunately I can't just pass it to OpenSL "as is" with an offset from the beginning of the blob :(

Dom Bhuphaibool

Oct 24, 2013, 8:18:49 PM10/24/13
to andro...@googlegroups.com
Hi Glenn,

I was trying something today... Since OpenSL can take a file descriptor as a DataSource, I attempted to use ashmem to get a file descriptor and then mmap() to write the data into it. Everything was looking okay except that in SfPlayer::setDataSource(fd, offset, length), the code retrieves the size of the "file" via fstat(). This returns 0 for the file descriptor returned from ashmem_create_region(). If I call ashmem_get_size_region(), it returns the correct size. Is there something I can do in user-land to resolve this?

In file android_SfPlayer.cpp, line 141, method SfPlayer::setDataSource(), sb.st_size is 0, so it always produces an error.

Shouldn't ashmem set stat.st_size appropriately when fstat() is called? I feel like if we could just get past this check (since the size of the reserved memory is actually valid), everything would work...

Any insights would greatly be appreciated! Thanks!

Glenn Kasten

Oct 25, 2013, 6:06:59 PM10/25/13
to andro...@googlegroups.com
It sounds like you may have found one or more issues: one in ashmem (not reporting the file size), and a limitation in Stagefright (that it relies on the file size; sockets, for example, would also omit the size).
Can you please file an issue at https://code.google.com/p/android/issues/list ?

As a workaround, can you copy the content to local temporary filesystem and then pass an fd to that file?

Dom Bhuphaibool

Oct 25, 2013, 6:35:03 PM10/25/13
to andro...@googlegroups.com
As a requirement, we're not allowed to write the MP3 file to disk; it's a security issue. If I write to the local temporary filesystem, could someone with a rooted phone access the file? I was assuming that writing to virtual memory would be harder to get at. Could you point me to docs regarding the security of the local temporary filesystem? I'm assuming that in normal operation other apps cannot access it, but I would like to know in what scenarios someone could get access to another app's local temporary filesystem.

Thanks for the feedback on Stagefright and file descriptors. I will file the issue. 

Two more quick questions:
1. Regarding the MediaCodec API you sent me, is it only accessible through the Java layer? Are there any APIs to call it from the NDK?
2. Can I hook into Stagefright directly from the NDK? Or can I hook into the OMX codec stuff from the NDK?

Thanks!

Glenn Kasten

Oct 28, 2013, 5:29:29 PM10/28/13
to andro...@googlegroups.com
1. Use JNI.
2. No and no for portable apps; yes only if you are building an entire platform from source code.
Stagefright is an internal component and subject to change without notice, so it can't be used portably by apps.

Dom Bhuphaibool

Oct 31, 2013, 1:47:29 AM10/31/13
to andro...@googlegroups.com
Thanks Glenn! So, I got MediaCodec to work decoding from a memory buffer. Now I'm wondering, since I'm up in the Java layer anyway: is there any advantage to playing audio via OpenSL ES versus the Java AudioTrack class (in terms of low latency and gapless playback)? Does the Java AudioTrack class eventually call the same media service in Stagefright as the OpenSL ES implementation, or do they access different classes in Stagefright? Any insights would be greatly appreciated!

Thanks!

Glenn Kasten

Oct 31, 2013, 12:08:07 PM10/31/13
to andro...@googlegroups.com
Using Android native audio APIs based on OpenSL ES permits lower output latency,
but there are more requirements on the app: be deterministic in CPU usage, avoid blocking,
use the right buffer size, use the right sample rate, etc. So if you don't need lower
output latency, I recommend you use android.media.AudioTrack. If you do need
the lower output latency, please see the Google I/O video on this,
and also see the links on the left side of https://code.google.com/p/high-performance-audio/

Dom Bhuphaibool

Oct 31, 2013, 2:07:46 PM10/31/13
to andro...@googlegroups.com
Thanks Glenn!!! The video and link were very useful! Much appreciated!

Adam

Oct 31, 2013, 4:14:50 PM10/31/13
to andro...@googlegroups.com
Just be aware that MediaCodec was introduced in API 16, so it won't be available on any pre-4.1 devices.