The NDK docs say that if I use only NDK library headers, then my
application should not break if the user performs an over-the-air
system update. Great! They also say that the only currently supported
instruction set is ARMv5TE. Since all public devices are ARMv5TE,
that's great as well.
However, once other instruction sets are in use, how do people predict
we will go about compiling for them? Ideally the Android NDK will
always ship with the ability to compile across all instruction sets,
and the platform will dynamically determine which native implementation
to use. Is this what people expect?
My guess is people would simply respond "YES" to the above.
However... what cross-platform concerns might I be missing here? If I
code only against the NDK library headers, is there still anything else
I should be concerned about as more instruction sets are introduced?
Sorry to be so vague here. I've never dealt with native
implementations before and want to fully understand potential problems
that might arise.
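To make that concern concrete, here is a rough sketch of the kind of defensive
pattern I imagine on the Java side (the library name "fftnative" is hypothetical;
the idea is simply to fall back to a pure-Java path if no native build matches the
device's instruction set):

public final class FftEngine {
    private static final boolean NATIVE_AVAILABLE;

    static {
        boolean loaded;
        try {
            // "fftnative" is a made-up name; the packaged .so must match the
            // device's instruction set for this load to succeed.
            System.loadLibrary("fftnative");
            loaded = true;
        } catch (UnsatisfiedLinkError e) {
            loaded = false;
        }
        NATIVE_AVAILABLE = loaded;
    }

    // Hypothetical JNI entry point implemented in the native library.
    private static native void nativeFft(short[] samples, int count);

    public static void fft(short[] samples, int count) {
        if (NATIVE_AVAILABLE) {
            nativeFft(samples, count);
        } else {
            javaFft(samples, count); // slower, but works on any instruction set
        }
    }

    private static void javaFft(short[] samples, int count) {
        // Placeholder for an existing pure-Java implementation.
    }
}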
I'm writing a processing-intensive digital sound processing app that
requires numerous (~50,000) Fast Fourier Transforms (FFTs). We are
using the new AudioRecord API (introduced in Android 1.5) to access the
raw PCM data acquired via the phone's microphone. The PCM data is then
transformed into a unique fingerprint that we then use to perform
audio recognition for background song determination.
For usability reasons, we ideally need the sound processing to occur
in the sub-second range. My Java implementation was taking ~10
seconds, which was unacceptable from a user perspective. I've since
begun porting the code to native code via the Android NDK. Initial
tests are promising, so we expect to have a solution performing our
FFTs in ~1 second.
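For context, the capture side looks roughly like this (a minimal sketch; nativeFft()
is our hypothetical JNI entry point, and the sample rate and window size are
illustrative only):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class PcmCapture {
    // Hypothetical JNI method that transforms one window of 16-bit PCM samples.
    private static native void nativeFft(short[] samples, int count);

    public static void captureOneWindow() {
        final int sampleRate = 8000; // illustrative; use whatever the fingerprinting needs
        final int minBuf = AudioRecord.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                sampleRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 2);

        short[] window = new short[1024]; // one FFT window; size is illustrative
        recorder.startRecording();
        try {
            int read = recorder.read(window, 0, window.length);
            if (read > 0) {
                nativeFft(window, read); // hand the raw PCM to the native FFT
            }
        } finally {
            recorder.stop();
            recorder.release();
        }
    }
}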
I have implemented some audio playback functionality for a few games
and ran into some low-latency issues. Pre-loading the sounds resolved
those issues, though. In your case, it appears you want to capture a
touch gesture, compute a sound, and then dynamically generate and play
that sound. I don't have experience there. It would seem the only
challenge would be with playing the generated sound. So am I to
assume that the various sound-generating apps out there (Ocarina,
Band, Sonic Vox) don't require the same audio-generation performance
that you require?
E.g. as Smule says of Sonic Vox: "The Sonic Vox iPhone app from Smule
is a brand new audio engine that allows you to do real-time audio
processing on the Apple cell phone."
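For what it's worth, the pre-loading approach I mentioned was roughly the following
(a minimal SoundPool sketch; R.raw.click is a placeholder resource name):

import android.content.Context;
import android.media.AudioManager;
import android.media.SoundPool;

public class SoundEffects {
    private final SoundPool pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
    private int clickId;

    // Decode the sample up front so later playback is only a mix, not a decode.
    public void preload(Context ctx) {
        clickId = pool.load(ctx, R.raw.click, 1); // R.raw.click is a placeholder
    }

    // Playing an already-loaded sample has much lower latency than MediaPlayer.
    public void playClick() {
        pool.play(clickId, 1.0f, 1.0f, 1, 0, 1.0f);
    }
}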
Ideally we could have a single .apk that supports multiple instruction
sets, or the marketplace would support our uploading an .apk for each
instruction set type.
Is that what the Android team envisions? We don't mind supporting many
instruction sets as long as the creation and distribution aspects are
simplified.
Thanks
Sorry for my confused ramblings above. I code on both the iPhone and
Android, so I had a slight brain fart just now. The apps I listed are
only available on the iPhone. Yes...Android has major limitations
when it comes to accessing audio/video functionality.
I doubt these limitations will change if we continue to rely only on
the official Android team. People like us in the community must start
being more involved in submitting upgrades to the Android project.
That said, as David says, the 'official' Android team (aka Google)
needs to be far more open about what's going on; otherwise, non-official
contributors like us will never be able to fully contribute to the
effort at the level we would like.
David... assuming I am correct and you work for Google (or are part of the Android platform team, based on your email), I understand the need to keep things quiet until they are fleshed out and ready for alpha/beta use.

I would just like to know whether we're going to see something soon that will let us not only use low-latency audio for games and music apps (playback), but also respond to touch input fast enough to play a sound almost immediately. The current crop of apps I've seen have roughly a half-second delay; they are definitely not responsive enough to compare to a program like BeatMaker on the iPhone. I would love to know, at least on the audio front, that sooner rather than later we'll see such capabilities on Android as well, hopefully regardless of device, or at least on those that are 2.0+ compatible. I'm guessing a lot of 1.6 devices may not upgrade to 2.0, possibly due to its larger size and requirements, but it sounds like some may still allow for it.

It would of course be great if there were a high-resolution timer available that we could use for game loops, audio sync and so forth; I'm not sure whether that is possible at the NDK level yet.
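On the timer question: System.nanoTime() is already available from Java, so a fixed-timestep loop for game logic and audio scheduling can be sketched like this (the step size is illustrative, and actual audio latency still depends on the platform's output path):

public class GameLoop implements Runnable {
    private static final long STEP_NS = 16666667L; // ~60 updates/second, illustrative
    private volatile boolean running = true;

    public void run() {
        long next = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            while (now >= next) {
                update();        // advance game/audio state by one fixed step
                next += STEP_NS;
            }
            // Sleep off most of the remaining time to avoid spinning the CPU.
            long sleepMs = (next - now) / 1000000L;
            if (sleepMs > 0) {
                try {
                    Thread.sleep(sleepMs);
                } catch (InterruptedException e) {
                    running = false;
                }
            }
        }
    }

    private void update() {
        // Game state / audio scheduling work goes here.
    }

    public void stop() {
        running = false;
    }
}

You would run it with something like new Thread(new GameLoop()).start(); whether that is precise enough for tight audio sync is exactly the open question.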