How can the performance of Android applications be improved (and
battery consumption reduced at the same time)?
I have seen some graphs/benchmarks where performance on simple
calculations (which can only give a rough idea of overall
performance) was about 20x worse than if the code were written with
the NDK (native code).
That makes me think that to complete a task (in the case of a pure
calculation app) it would drain roughly 20x more battery than the
equivalent native application. Then again, they say JIT compilation
is coming to the Android VM, but even if it improves performance by a
factor of 10 (which is unlikely, at least in the first versions), the
application will still be wasting cycles (and thus draining the
battery faster than it would in an optimal configuration).
That's one advantage I see the iPhone platform having over Android. I
like Android (and don't agree with Apple's closed-platform paradigm),
don't get me wrong, but I would like to know what the engineers at
Google are doing to solve that problem.
Would it be possible, when installing an application, to choose to
store it as native code, bypassing the bytecode and the VM? Compiling
would take a lot of time, but only at install time, and after that I
would have a highly optimised application.
If the hardware were more homogeneous (as on the iPhone platform),
that "compilation" could even be done server-side when the
application is selected for download, but then we would have to
choose the target device (or it could be detected from the "user
agent" automagically).
As you can see, I don't know the answer, but I think things have to
change in that field.
What do you guys think?
Cheers
--
You received this message because you are subscribed to the Google Groups "android-ndk" group.
For example, I have audio resampling code that runs probably 50 to
100 times faster in native code using fixed-point math than it ever
ran in the Dalvik VM.
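Not the code in question, but here's a minimal sketch of the fixed-point technique (a 16.16 phase accumulator with linear interpolation; the class name and sample rates are made up for illustration):

```java
public class FixedPointResample {
    /** Linear-interpolation resampler using a 16.16 fixed-point phase. */
    public static short[] resample(short[] in, int srcRate, int dstRate) {
        int step = (int) (((long) srcRate << 16) / dstRate); // 16.16 increment
        int outLen = (int) (((long) in.length << 16) / step);
        short[] out = new short[outLen];
        int phase = 0; // 16.16 position in the input
        for (int i = 0; i < outLen; i++) {
            int idx = phase >> 16;
            int frac = phase & 0xFFFF;
            int s0 = in[idx];
            int s1 = (idx + 1 < in.length) ? in[idx + 1] : s0;
            // Interpolate between neighbouring samples with no floating point.
            out[i] = (short) (s0 + (int) (((long) (s1 - s0) * frac) >> 16));
            phase += step;
        }
        return out;
    }
}
```

The inner loop is just integer adds, shifts and one multiply, which is exactly the kind of work that also maps well onto native code.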
Also, if you are loading files that were generated on a PC, keep in
mind that the byte order is little-endian, but Java is big-endian! So
if you use the file operations in Java, you have to flip the buffer
data after reading, and that definitely slows things down. If you use
the NDK with fopen() and such, fill a JNI buffer with the data, and
pass it back, it's waaaayyy faster, especially since the native
functions will read in little-endian automatically too.
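The flip doesn't have to be done by hand, byte by byte: for a single 32-bit value read with the wrong byte order, the standard library can swap it. A small sketch (not code from this thread):

```java
public class SwapDemo {
    public static void main(String[] args) {
        // An int stored little-endian on a PC but read big-endian in Java
        // comes out with its bytes reversed; Integer.reverseBytes restores it.
        int misread = 0x78563412;
        int fixed = Integer.reverseBytes(misread);
        System.out.println(Integer.toHexString(fixed)); // prints 12345678
    }
}
```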
That's my experience. It's definitely worth the extra effort.
-niko
On Feb 3, 3:17 pm, David Turner <di...@android.com> wrote:
It would be more accurate to say that some of the Java file I/O
operations are big-endian. Some, like the NIO buffer operations, can
be configured either way.
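As a small illustration of the configurable case (a sketch, assuming a 4-byte little-endian payload read from a PC-generated file):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferOrder {
    public static void main(String[] args) {
        byte[] raw = {1, 0, 0, 0}; // the int 1 as written by a little-endian PC

        // ByteBuffer defaults to big-endian, like DataInputStream:
        int asBig = ByteBuffer.wrap(raw).getInt();

        // but the order can be configured per buffer, no manual flip needed:
        int asLittle = ByteBuffer.wrap(raw)
                .order(ByteOrder.LITTLE_ENDIAN).getInt();

        System.out.println(asBig + " " + asLittle); // prints 16777216 1
    }
}
```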
The VM itself uses native host byte ordering. This is why you don't
have to byte-swap when you get an array with JNI calls like
GetIntArrayElements.
Android isn't defined as solely little- or big-endian, so paying
attention to endianness is always a good idea.
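One way to act on that advice is to check the host order at runtime rather than assume it (a trivial sketch):

```java
import java.nio.ByteOrder;

public class HostOrder {
    public static void main(String[] args) {
        // The VM exposes the host's byte order. On most Android devices
        // (ARM in little-endian mode, x86) this prints LITTLE_ENDIAN,
        // but portable code should not assume either value.
        System.out.println(ByteOrder.nativeOrder());
    }
}
```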