Optimising Android applications' performance?


jfbaro

Feb 3, 2010, 9:24:22 AM
to android-ndk
Hi,

How can the performance of Android applications be improved (and, at
the same time, battery consumption reduced)?
I have seen some graphs/benchmarks where the performance on simple
calculations (which can only give a rough idea of overall performance)
was about 20X worse than if the same code were written with the NDK
(native code).
That makes me think that to complete a task (for a pure calculation
app) it would drain roughly 20X more battery than a native application
would. Then again, they say JIT compilation is coming to the Android
VM, but even if it improves performance 10X (which is unlikely, at
least in the first versions) the application will still be wasting
cycles (and thus draining the battery faster than it would in an
optimal configuration).
That's one advantage I see the iPhone platform has over Android. I like
Android (and don't agree with Apple's closed-platform paradigm), don't
get me wrong, but I would like to know what the engineers at Google
are working on to solve that problem.
Would it be possible, when installing an application, to choose to
store it as native code, bypassing the bytecode and the VM? Compiling
would take a lot of time, but only at install time, and after that I
would have a highly optimised application.

If the hardware were more homogeneous (as on the iPhone platform),
that "compilation" could even be done on the server side, when selecting
the application to be downloaded; we would just have to choose the
target device (or it could be detected from the "user agent"
automagically).

As you can tell, I don't know the answer, but I think things have
to change in that area.

What do you guys think?

Cheers

David Turner

Feb 3, 2010, 4:17:00 PM
to andro...@googlegroups.com
On Wed, Feb 3, 2010 at 6:24 AM, jfbaro <jfb...@gmail.com> wrote:
> Hi,
>
> How can the performance of Android applications be improved (and at
> the same time help to reduce battery consumption)?
> I have seen some graphs/benchmarks where the performance on simple
> calculations (which can just give an idea about the whole performance
> side) was about 20X less effective than if it were written with NDK
> (Native code).

Your mileage may vary; it really depends on what kind of operations your
program is doing. For many programs, execution speed won't be the bottleneck,
or the speed differential will be much smaller.

It's easy to come up with a meaningless benchmark to make a point. It's harder
to get hard data to understand where the real-life bottlenecks are.
 
> That makes me think that to achieve a task (in case it is a pure
> calculation app) it would drain roughly 20X more battery than if it
> were a native application. Then again, they say JIT compilation is
> coming to the Android VM, but even if it increases the performance by
> 10 (which is unlikely, at least in the first versions) the application
> will still be wasting cycles (and thus draining the battery faster
> than it could in an optimal configuration).

Everything wastes CPU cycles unless you're hand-coding in assembly,
and even there cache effects and memory latency are going to bite you
hard.

Keep in mind that the major battery drains are the 3G radio and the
display at this point. That's why an iPhone that runs only native code
doesn't have much better battery life than a typical Android device.
 
> That's one advantage I see Iphone platform has over Android. I like
> Android (and don't agree with Apple closed platform paradigm), don't
> get me wrong, but I would like to know what the engineers at Google
> are working on to solve that problem.

We are constantly working on improving performance. This includes the
JIT work, using better algorithms in core libraries, and sometimes moving
stuff from the Java side to native in the platform libraries to improve
behaviour. However, this certainly doesn't include "compiling all apps to
native code".
 
> Would it be possible to (when installing the application) to choose if
> I can store the application in "Native code" by passing the bytecodes
> and VM? It would take a lot of time for compiling, but just when
> installing and then after that I would have that application highly
> optimised.

Not at the moment, and a profile-based JIT engine will get you speed
boosts where you need them (the hot spots), without eating the vast amounts
of memory and flash storage required to fully translate your VM code to native.
 
> If the hardware were more homogeneous (just like the Iphone platform),
> that "compilation" could be done on the server side, when selecting
> the application to be downloaded, but we should choose which is the
> target device (or it could be captured from the "user agent"
> automagically).


Android is open source; we don't want to introduce dependencies on external
servers to get decent performance on the platform. It would also be fraught with
*lots* of perils, security-wise.
 
> As you already know, I don't know the answer, but I think things have
> to change on that field.


Your contributions are welcome at r.android.com. Be sure that we will benchmark
them against real-world cases when evaluating them.
 
> What do you guys think?
>
> Cheers



niko20

Feb 7, 2010, 9:22:13 AM
to android-ndk
You certainly CAN use the NDK to optimize performance in anything
math- or file-reading-heavy, although yes, it does depend on your situation.

For example, I have audio resampling code that runs probably 50 to 100
times faster in native code using fixed-point math than it ever ran in
Dalvik.

Also, if you are loading files that were generated on a PC, keep in
mind that the byte order is little-endian, but Java is big-endian! So
if you use the file operations in Java, after reading you have to flip
the buffer data! This definitely slows things down! If you use the NDK,
fopen() and friends, and then fill a JNI buffer with the data and
pass it back, it's waaaayyy faster, especially since the native
functions will read in little-endian automatically too.
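To illustrate the flip on the Java side, here is a minimal sketch (the byte values are made up for demonstration) showing that a `ByteBuffer` interprets the same four bytes very differently depending on its configured order; switching the buffer to little-endian recovers the value a PC tool wrote, without any manual swapping:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianRead {
    public static void main(String[] args) {
        // Four bytes as a little-endian (PC) tool would write the int 1.
        byte[] raw = {0x01, 0x00, 0x00, 0x00};

        // ByteBuffer defaults to big-endian, like DataInputStream:
        int bigEndian = ByteBuffer.wrap(raw).getInt();   // 0x01000000 = 16777216

        // Configuring the buffer little-endian reads the value as written:
        int littleEndian = ByteBuffer.wrap(raw)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getInt();                               // 1

        System.out.println(bigEndian + " " + littleEndian);  // prints "16777216 1"
    }
}
```

This does not make the native fopen() path any less attractive for bulk data, but it avoids a hand-written byte flip when staying in Java.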

That's my experience. It's definitely worth the extra effort.

-niko

On Feb 3, 3:17 pm, David Turner <di...@android.com> wrote:


fadden

Feb 8, 2010, 3:37:46 PM
to android-ndk
On Feb 7, 6:22 am, niko20 <nikolatesl...@yahoo.com> wrote:
> Also, if you are loading files that were generated on a PC, keep in
> mind that the byte order is little-endian, but Java is big-endian! So
> if you use the file operations in Java after reading you have to flip
> the buffer data! This definitely slows things down! If you use the NDK
> and use fopen() and such and then fill a JNI buffer with the data and
> pass it back, it's waaaayyy faster, and especially since the native
> functions will read in little-endian automatically too.

It would be more accurate to say that some of the Java file I/O
operations are big-endian. Some, like the NIO buffer operations, can
be configured either way.

The VM itself uses native host byte ordering. This is why you don't
have to byte-swap when you get an array with JNI calls like
GetIntArrayElements.

Android isn't defined as solely little- or big-endian, so paying
attention to endianness is always a good idea.
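One way to act on that advice from Java, rather than hard-coding either order, is to query the platform's byte order at runtime; a small sketch (not Android-specific) assuming a buffer that will be shared with native code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeOrderCheck {
    public static void main(String[] args) {
        // JVM-side buffers default to big-endian regardless of the host CPU.
        System.out.println("default order: " + ByteBuffer.allocate(4).order());

        // ByteOrder.nativeOrder() reports the underlying hardware's byte
        // order; a direct buffer configured this way can be handed to
        // native code without any swapping on either side.
        ByteBuffer shared = ByteBuffer.allocateDirect(4)
                .order(ByteOrder.nativeOrder());
        System.out.println("native order:  " + shared.order());
    }
}
```

The second line's output depends on the device, which is exactly the point: code written this way keeps working whether the platform is little- or big-endian.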
