App lifecycle and native OpenGL


Phil Endecott

unread,
Nov 26, 2010, 10:06:36 AM11/26/10
to android-ndk
Dear Experts,

My native OpenGL app now works reasonably well, until I quit and
restart it.

It seems that when it is restarted it has a new OpenGL context that
doesn't know about the textures that I had loaded in the previous
session. Is this correct, or is there some way to get the same
context that it had before? When it quits, does it properly tidy up
the old textures, or are they leaked? I have the impression that
perhaps they are leaked because repeatedly quitting and restarting
will eventually cause the device to crash hard. But it's difficult
for me to destroy them in onPause because that is not called on the
rendering thread, so it doesn't have an OpenGL context.

I can get sane behaviour by calling _exit(0) in onPause(), but this
feels like a terrible hack. Is there a better way to make the process
terminate when the user quits the app? I wondered if one of the
android:launchMode values would do this, but it seems not. The
overhead of restarting the process is small compared to reloading
textures, and it looks like this has to be done in any case, unless
there is some way of making the OpenGL context persist.
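
For clarity, the hack is literally just this, called from onPause() via a JNI
hook (a sketch; the class and method names here are made up):

#include <jni.h>
#include <unistd.h>

// Called from the Activity's onPause(). _exit() skips C++ destructors and
// atexit() handlers; the doomed GL objects go down with the process anyway.
extern "C" JNIEXPORT void JNICALL
Java_com_example_MyActivity_nativeOnPause(JNIEnv*, jobject)
{
    _exit(0);
}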

How are other people managing this?


Thanks, Phil.

Tim Mensch

unread,
Nov 27, 2010, 11:31:11 PM11/27/10
to andro...@googlegroups.com
On 11/26/2010 8:06 AM, Phil Endecott wrote:

> It seems that when it is restarted it has a new OpenGL context that
> doesn't know about the textures that I had loaded in the previous
> session. Is this correct, or is there some way to get the same
> context that it had before?


That is correct.

> When it quits, does it properly tidy up the old textures, or are
> they leaked?


Yes, it properly tidies up (at least that's what the docs claim); you
are not supposed to attempt to do anything with the now-dead handles.

> I can get sane behaviour by calling _exit(0) in onPause(), but this
> feels like a terrible hack. Is there a better way to make the
> process terminate when the user quits the app?


_exit(0) is a new one on me, but is probably roughly equivalent to the
one I found. In onStop() I'd call:

android.os.Process.killProcess(android.os.Process.myPid());

If there is a way of making the OpenGL context persist, I'd love to hear
about it, but as far as I know, it's simply expected that you lose it.

> The overhead of restarting the process is small compared to
> reloading textures, and it looks like this has to be done in any
> case, unless there is some way of making the OpenGL context persist.


I agree; the Android design pretty much optimizes the part that's fast
at the expense of the part that takes real time. Of course if we could
launch native code instead of going through Java, then they wouldn't
have needed to optimize the loading part at all, but I'll stop now
before I get into a full-fledged anti-Java rant...

Tim

Phil Endecott

unread,
Nov 28, 2010, 8:12:41 AM11/28/10
to android-ndk
Hi Tim,

On Nov 28, 4:31 am, Tim Mensch <tim.men...@gmail.com> wrote:
> On 11/26/2010 8:06 AM, Phil Endecott wrote:
> > I can get sane behaviour by calling _exit(0) in onPause(), but this
> > feels like a terrible hack. Is there a better way to make the
> > process terminate when the user quits the app?
>
> _exit(0) is a new one on me, but is probably roughly equivalent to the
> one I found. In onStop() I'd call:
>
> android.os.Process.killProcess(android.os.Process.myPid());

I call this from the native code. _exit() is like exit() except that
it doesn't invoke destructors. If my destructors are called they try
to delete the textures, which fails because they're called on the
wrong thread which doesn't have the GL context that created them. But
this is OK because they are going to be destroyed by the system when
the process terminates anyway.


Phil.

Tim Mensch

unread,
Nov 30, 2010, 12:03:21 AM11/30/10
to andro...@googlegroups.com
On 11/26/2010 8:06 AM, Phil Endecott wrote:
> My native OpenGL app now works reasonably well, until I quit and
> restart it.
>
> It seems that when it is restarted it has a new OpenGL context that
> doesn't know about the textures that I had loaded in the previous
> session. Is this correct, or is there some way to get the same
> context that it had before?
I just figured out TODAY that you CAN preserve the OpenGL context,
though I'm not sure that it's a good idea. I've already got my own
implementation of GLSurfaceView, and I just changed it so that the
primary EGL data (EGLSurface, EGLDisplay, EGLContext, and EGLConfig) are
kept around as static data (I create them once, and I don't destroy them
in onPause() or even when the surface is destroyed).

I get that this means my game is unfriendly resource-wise (GLSurfaceView
wants to destroy the context in onPause(!!)), but my question is
this: Other than potentially forcing my app to be hard-killed sooner
rather than later (for sitting on an OpenGL context and the associated
memory resources), what are the downsides to being greedy and keeping
the context around like this? I know it's not the "Android Way," but
frankly the Android Way pretty much sucks for any app that has a
non-trivial start-up time.

The user experience is MUCH better this way in the 80th percentile case
(switching between the game and non-GL activities). The game starts
right back up when it's restored, and when you go into another activity
within the same app (Papaya), you can return to the game basically
instantly. Presumably things would be Very Bad if another app tried to
use OpenGL, though (so switching from one game to another wouldn't be
great).

Any way I can find out that another app might want OpenGL? I'd just have
my app commit suicide in that case (part of the point is that it's slow
and somewhat unnatural to get an NDK app ported from iPhone to reload
all its textures), but if someone is trying to play another game (and
not just take a call), it would be totally appropriate to finish the
shut-down of my game.

Thoughts?

Tim

David Given

unread,
Nov 30, 2010, 6:26:59 AM11/30/10
to andro...@googlegroups.com
On 30/11/10 05:03, Tim Mensch wrote:
[...]

> Presumably things would be Very Bad if another app tried to
> use OpenGL, though (so switching from one game to another wouldn't be
> great).

Yes; things are Very Bad, as I discovered when doing exactly this
(although our gaming platform runs as platform SDK code, not as NDK
code). It's sufficiently bad to involve crashes, hangs, phone reboots,
etc. Big chunks of the system use OpenGL, including live wallpapers, so
basically as soon as you start doing stuff like reusing destroyed
contexts, all hell breaks loose.

In general I've found that the OpenGL stacks on Android are exceedingly
unreliable, and we've managed to crash the GPU on a regular basis by
doing stuff that's even slightly exotic --- such as using 32 bit configs
on an HTC Desire!

[...]
> ...part of the point is that it's slow
> and somewhat unnatural to get an NDK app ported from iPhone to reload
> all its textures...

I've never used an iPhone app, but I was under the impression that when
apps get backgrounded on iOS they get destroyed entirely and need to
restart from scratch when the user changes back to them?

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ life←{ ↑1 ⍵∨.^3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵ }
│ --- Conway's Game Of Life, in one line of APL


Olivier Guilyardi

unread,
Nov 30, 2010, 9:52:13 AM11/30/10
to andro...@googlegroups.com
On 11/30/2010 12:26 PM, David Given wrote:
> On 30/11/10 05:03, Tim Mensch wrote:
> [...]
>> Presumably things would be Very Bad if another app tried to
>> use OpenGL, though (so switching from one game to another wouldn't be
>> great).
>
> Yes; things are Very Bad, as I discovered when doing exactly this
> (although our gaming platform runs as platform SDK code, not as NDK
> code). It's sufficiently bad to involve crashes, hangs, phone reboots,
> etc. Big chunks of the system use OpenGL, including live wallpapers, so
> you basically as soon as you start doing stuff like reusing destroyed
> contexts all hell breaks loose.

IIUC, the need to keep the OpenGL context exists when there's a lot of textures.

But have you tried to see where the bottleneck is here? Is it glTexImage2D() or
reading from permanent storage? (the latter can be pretty slow in my experience)

If disk IO (and maybe color conversion) is not negligible, then maybe you
could get quicker texture loading by mmap'ing the texture image data.

That sounds quite a bit better than OpenGL deadlocks...

--
Olivier

Phil Endecott

unread,
Nov 30, 2010, 11:41:23 AM11/30/10
to android-ndk
On Nov 30, 2:52 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
> IIUC, the need to keep the OpenGL context exists when there's a lot of textures.
>
> But have you tried to see where the bottleneck is here? Is it glTexImage2D() or
> reading from permanent storage?

Time is spent in all three of reading from flash, decompressing (JPEG
in my case), and glTexImage2D(). But actually my app starts up
reasonably quickly despite needing to reload the textures. My problem
is not performance, but rather how I am supposed to cope with the
context going away (so it would still be an issue if I had fewer
textures).

I have a class that manages a single texture:

// Note: must be constructed and destroyed on the thread that owns the GL context.
class texture {
    GLuint texture_num;
public:
    texture() {
        glGenTextures(1, &texture_num);
    }
    ~texture() {
        glDeleteTextures(1, &texture_num);
    }
    ....
};

There are two issues:

- I need to create and destroy textures on the thread that has the
OpenGL context i.e. the rendering thread. But the onPause() event
happens on the UI thread, and I don't believe there is any way to
explicitly call something on the rendering thread at that point.

- If I let the system destroy the context, when this destructor is
called the texture has already gone and the duplicate
glDeleteTextures() call will make it misbehave. So I need some way to
make my texture objects be destroyed without having their destructors
called. Yes, there are ways to do that, but they turn a nice simple
design into something really complicated.
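
One of those ways, to make it concrete (a sketch only, not what I actually do):
a global "context is gone" flag that the destructor checks, so that the
glDeleteTextures() call is skipped once the context has been torn down.

extern bool g_gl_context_alive;   // assumption: set to false by the lifecycle
                                  // code before these destructors can run

class texture {
    GLuint texture_num;
public:
    texture()  { glGenTextures(1, &texture_num); }
    ~texture() {
        if (g_gl_context_alive)
            glDeleteTextures(1, &texture_num);
    }
};

Every class that owns a GL object then has to know about that flag, which is
where the complication creeps in.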

Hence my choice to just terminate the process with _exit(0).

Phil Endecott

unread,
Nov 30, 2010, 11:43:53 AM11/30/10
to android-ndk
On Nov 30, 11:26 am, David Given <d...@cowlark.com> wrote:
> I've never used an iPhone app, but I was under the impression that when
> apps get backgrounded on iOS they get destroyed entirely and need to
> restart from scratch when you user changes back to them?

Since iOS 4.0, that's not generally the case; the OpenGL context will be
preserved, complete with all your textures. If the foreground app
needs more memory you will get memory warnings, and eventually be
killed; in that case you need to restart from scratch.

Tim Mensch

unread,
Nov 30, 2010, 12:05:02 PM11/30/10
to andro...@googlegroups.com

On 11/30/2010 4:26 AM, David Given wrote:

> Yes; things are Very Bad, as I discovered when doing exactly this
> (although our gaming platform runs as platform SDK code, not as NDK
> code). It's sufficiently bad to involve crashes, hangs, phone reboots,
> etc. Big chunks of the system use OpenGL, including live wallpapers, so
> you basically as soon as you start doing stuff like reusing destroyed
> contexts all hell breaks loose.


Hmm... Technically the context is never destroyed in this case, but do
you mean that NOT destroying the context in onPause() or onStop() causes
these issues?

Tim

David Given

unread,
Nov 30, 2010, 12:22:18 PM11/30/10
to andro...@googlegroups.com
On 30/11/10 16:41, Phil Endecott wrote:
[...]

> - I need to create and destroy textures on the thread that has the
> OpenGL context i.e. the rendering thread. But the onPause() event
> happens on the UI thread, and I don't believe there is any way to
> explicitly call something on the rendering thread at that point.

You do not need to destroy *anything* --- Android does it for you. In
fact, you can't stop it! The last point you can do anything OpenGL
related is in the SurfaceHolder.Callback.surfaceDestroyed() method,
which is called immediately before the context gets destroyed. As soon
as you return from this method, your OpenGL context is invalid and you
must not touch it (or bad stuff happens).

It would be nice if Android actually caught attempts to access the
context when it was invalid and failed gracefully, but... this does not
seem to happen.

What we do in our system (we make an interface layer allowing people to
write apps using OpenKODE and EGL and OpenGL, on top of various
platforms including Android: see http://antixlabs.com) is that when the
game calls eglSwapBuffers() from the rendering thread it takes and
releases a semaphore. When surfaceDestroyed() is called from the UI
thread we take this semaphore and wait. This causes the rendering thread
to halt at a known point. Once this has happened we change state so that
the rendering thread no longer accesses the OpenGL context, and then
allow Android to proceed to destroy it. It was a pig to get right, but
does actually appear to work.
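
In rough outline it looks something like this (a sketch only; I'm using a
plain pthread mutex to stand in for the semaphore, and draw_scene() is a
placeholder for the game's per-frame GL work):

#include <pthread.h>
#include <EGL/egl.h>

void draw_scene();   // placeholder: the game's GL calls for one frame

static pthread_mutex_t g_gl_gate = PTHREAD_MUTEX_INITIALIZER;
static bool g_surface_valid = true;   // only touched with g_gl_gate held

// Rendering thread, once per frame: all GL work happens inside the gate.
void render_frame(EGLDisplay dpy, EGLSurface surf)
{
    pthread_mutex_lock(&g_gl_gate);
    if (g_surface_valid) {
        draw_scene();
        eglSwapBuffers(dpy, surf);
    }
    pthread_mutex_unlock(&g_gl_gate);
}

// UI thread, from surfaceDestroyed(): blocks until the renderer is parked
// outside the gate, then flips the flag so it stops touching the context.
void on_surface_destroyed()
{
    pthread_mutex_lock(&g_gl_gate);
    g_surface_valid = false;
    pthread_mutex_unlock(&g_gl_gate);
    // Now it's safe to let Android destroy the surface and context.
}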


Tim Mensch

unread,
Nov 30, 2010, 1:20:08 PM11/30/10
to andro...@googlegroups.com

On 11/30/2010 10:22 AM, David Given wrote:

> You do not need to destroy *anything* --- Android does it for you. In
> fact, you can't stop it! The last point you can do anything OpenGL
> related is in the SurfaceHolder.Callback.surfaceDestroyed() method,
> which is called immediately before the context gets destroyed. As soon
> as you return from this method, your OpenGL context is invalid and you
> must not touch it (or bad stuff happens).


That's not my experience, at least on the devices I'm testing on (Droid,
Milestone, G1; we'll test on more today). My OpenGL context is not being
destroyed; it continues to work just fine after the surface is destroyed
and rebuilt.

I can exit and reload multiple times with no issues. I just tried going
into another game that uses OpenGL on the Milestone, and not only did my
game not get killed or crash, it ran the other game just fine. Then I
went back into my game. Using Advanced Task Killer, I verified that both
games are actually "running" simultaneously (though obviously idle), and
I can jump into either one pretty much instantly. Running Angry Birds
kills them both, though, probably because it requests so much memory;
Angry Birds is a bit jerky when it first starts up, but beyond that I'm
not seeing any downside yet.

I assume you're using GLSurfaceView? I started with the 1.6
GLSurfaceView code but then heavily modified it; that's how I know the
GL context isn't being destroyed. I'm no longer destroying it, you see. :)

> What we do in our system (we make an interface layer allowing people to
> write apps using OpenKODE and EGL and OpenGL, on top of various
> platforms including Android: see http://antixlabs.com) is that when the
> game calls eglSwapBuffers() from the rendering thread it takes and
> releases a semaphore.


Huh. If you're calling eglSwapBuffers(), then you must have your own
GLSurfaceView as well. I don't know what you're doing differently, but
at least so far I can't get it to crash, much less any of the other dire
things you described. Maybe I just got lucky and found the right
incantation to make it work?

It sounds like your attempts to keep the context failed in a rather
obvious way, whereas mine at least appears to work on the phones listed
above. Is that correct? In other words, did your approach seem to work
but then fail under hard-to-reproduce circumstances (or on other phones
than I'm testing on right now), or did it always fail spectacularly?

So far I haven't seen a single crash for any reason since I got this
code in. The one problem I've seen is on the G1 where, after
starting an OpenGL game and returning to my current game,
eglMakeCurrent() failed (no crash, no throw, just a failure return
code); under those circumstances it looks like we'll have to fall back
to the "create a new context" approach (along with texture reloads), but
the user experience is so much better on newer phones that the "keep the
context" approach is very tempting.

Your cross-platform toolkit looks nice, by the way. Too bad I didn't
find it before writing my own. :(

Tim

David Given

unread,
Nov 30, 2010, 2:45:36 PM11/30/10
to andro...@googlegroups.com
On 30/11/10 18:20, Tim Mensch wrote:
[...]

> I assume you're using GLSurfaceView? I started with the 1.6
> GLSurfaceView code but then heavily modified it; that's how I know the
> GL context isn't being destroyed. I'm no longer destroying it, you see. :)

We're working directly with the low-level platform SDK, not the NDK, so
we have access to all the evil APIs that the NDK doesn't let you get at
--- so while I'm using a SurfaceView to get surface lifecycle
notification, all drawing happens with the C-side Surface* structure.

Bear in mind that although the application-side context structure may be
preserved, there's also state kept in the GPU and the OpenGL stack work
area, wherever that is. So if Android has discarded the GPU-side data,
and then you try to draw stuff using the application-side context...

> It sounds like your attempts to keep the context failed in a rather
> obvious way, whereas mine at least appears to work on the phones listed
> above. Is that correct? In other words, did your approach seem to work
> but then fail under hard-to-reproduce circumstances (or on other phones
> than I'm testing on right now), or did it always fail spectacularly?

A bit of both. Some devices would obviously lock up or hang. Some would
only do so if running the right set of applications. We've also had
third-party reports of games that appeared to work but would
occasionally show corrupted textures. It's a real here-be-dragons area.

I think this depends heavily on the whims of the OpenGL stack. Mobile
OpenGL stacks tend to be as buggy as hell. It's entirely possible that
one device's stack keeps its state in app memory (so running the stack
in different apps causes problems, as it can't keep things synchronised)
while another keeps state in GPU memory (so multiple apps can share the
GPU in a more sensible way).

[...]


> So far I haven't seen a single crash for any reason since I got this
> code in. So far I've just seen one problem on the G1 where, after
> starting an OpenGL game and returning to my current game,
> eglMakeCurrent() failed (no crash, no throw, just a failure return
> code);

Khronos decrees that EGL is *supposed* to report death-of-context events
with an EGL_CONTEXT_LOST error from eglMakeCurrent() or eglSwapBuffers().
Android kinda supports this but it's incredibly buggy; we've given up
trying to use it and are doing our own thing in the EGL shim layer we're
exposing to games. This might have been fixed in Eclair, when the OpenGL
layer was given a major overhaul, but on Cupcake and Donut if you tried
to touch EGL when the back-end GPU context was lost it would
dynamically patch the EGL interface table to point at an implementation
that just returned EGL_CONTEXT_LOST... and it was then *impossible* to
recover from this, so the only option left was to restart the app
completely!

[...]


> Your cross-platform toolkit looks nice, by the way. Too bad I didn't
> find it before writing my own. :(

<blatant plug>
We're kind of proud of it. We've only just launched, so there's a tiny
selection of Android devices supported, but there's a plugin available
to let you play the games in your web browser:

http://www.antixgames.com

That's running the *same* binary as will run on Android, mind...

Have I mentioned that the SDK is free?
</blatant plug>

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ "There does not now, nor will there ever, exist a programming
│ language in which it is the least bit hard to write bad programs." ---
│ Flon's Axiom


Olivier Guilyardi

unread,
Nov 30, 2010, 3:10:18 PM11/30/10
to andro...@googlegroups.com
On 11/30/2010 05:41 PM, Phil Endecott wrote:
> On Nov 30, 2:52 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
>> IIUC, the need to keep the OpenGL context exists when there's a lot of textures.
>>
>> But have you tried to see where the bottleneck is here? Is it glTexImage2D() or
>> reading from permanent storage?
>
> Time is spent in all three of reading from flash, decompressing (JPEG
> in my case), and glTexImage2D(). But actually my app starts up
> reasonably quickly despite needing to reload the textures. My problem
> is not performance, but rather how I am supposed to cope with the
> context going away (so it would still be an issue if I had fewer
> textures).

Okay, from reading the OP I thought performance was one of your concerns.

For the record, I just did some benchmarks on a Moto Droid (Milestone) with one
of my apps, which loads 6 JPEG textures. Each texture is 512x512px.

It's Java OpenGL. The bitmaps are read from the sdcard and decoded with
BitmapFactory.decodeFile(). Then they are uploaded to the GPU with
GLUtils.texImage2D().

D/Cube ( 4766): Loaded bitmap in 94ms
D/Cube ( 4766): texImage2D in 26ms
D/Cube ( 4766): Loaded bitmap in 88ms
D/Cube ( 4766): texImage2D in 5ms
D/Cube ( 4766): Loaded bitmap in 43ms
D/Cube ( 4766): texImage2D in 28ms
D/Cube ( 4766): Loaded bitmap in 38ms
D/Cube ( 4766): texImage2D in 4ms
D/Cube ( 4766): Loaded bitmap in 60ms
D/Cube ( 4766): texImage2D in 4ms
D/Cube ( 4766): Loaded bitmap in 91ms
D/Cube ( 4766): texImage2D in 4ms

That's 414ms / 485ms = 85% of the time spent reading and decoding. texImage2D only
takes 71ms out of a total of almost half a second.

So, if anyone is concerned with performance, mmap'ing decoded image data could
indeed make sense I think..

--
Olivier


Tim Mensch

unread,
Nov 30, 2010, 3:28:58 PM11/30/10
to andro...@googlegroups.com

On 11/30/2010 1:10 PM, Olivier Guilyardi wrote:

>
> That's 414ms / 485ms = 85% time spent reading and decoding. texImage2d only
> takes 71ms on a total of almost half a second.
>
> So, if anyone is concerned with performance, mmap'ing decoded image data could
> indeed make sense I think..


Awesome info; thanks for that. At the very least we'll need to be
reloading in the case where the context doesn't bind, and PROBABLY we'll
end up taking David's advice and letting the context always die; I'm working
as a contractor on this one, so it's not my call.

Tim

Phil Endecott

unread,
Nov 30, 2010, 5:46:03 PM11/30/10
to android-ndk
On Nov 30, 8:10 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
> So, if anyone is concerned with performance, mmap'ing decoded image data could
> indeed make sense I think..

This is decoded from JPEGs, right? I don't see that mmap()
particularly helps; you just need to keep the decoded data in buffers
in RAM in some convenient way. But doing so is quite costly,
considering how little of the RAM we're supposed to be using.

Tim Mensch

unread,
Nov 30, 2010, 9:16:39 PM11/30/10
to andro...@googlegroups.com

On 11/30/2010 12:45 PM, David Given wrote:

> On 30/11/10 18:20, Tim Mensch wrote:
>> It sounds like your attempts to keep the context failed in a rather
>> obvious way, whereas mine at least appears to work on the phones listed
>> above. Is that correct? In other words, did your approach seem to work
>> but then fail under hard-to-reproduce circumstances (or on other phones
>> than I'm testing on right now), or did it always fail spectacularly?


I take it all back, everyone. David was simply right and doing this is a
Very Bad Idea. It works fine on Milestone and G1 no matter how I test it,
but as soon as I started doing more serious testing on my Droid (with
2.2), it hard locked. Repeatedly.

Thanks for the warnings David; I wouldn't have been able to kill the
idea as easily without verification that it was in fact a bad idea. :)

> <blatant plug>
> We're kind of proud of it. We've only just launched, so there's a tiny
> selection of Android devices supported, but there's a plugin available
> to let you play the games in your web browser:
>
> http://www.antixgames.com
>
> That's running the *same* binary as will run on Android, mind...
>
> Have I mentioned that the SDK is free?
> </blatant plug>

Sounds cool, though I'm looking for broader distribution. And I already
have my own SDK, which I've paid heavily for already in sunk time. ;)

Tim

Dianne Hackborn

unread,
Dec 1, 2010, 1:48:33 AM12/1/10
to andro...@googlegroups.com
Yeah, generally the current OpenGL drivers don't support multiple contexts.  This makes it difficult to do hardware-accelerated drawing, and requires careful management of things like when live wallpapers are running, to ensure they don't conflict with an app that is using OpenGL.

--
Dianne Hackborn
Android framework engineer
hac...@android.com

Note: please don't send private questions to me, as I don't have time to provide private support, and so won't reply to such e-mails.  All such questions should be posted on public forums, where I and others can see and answer them.

Phil Endecott

unread,
Dec 1, 2010, 6:34:10 AM12/1/10
to android-ndk
Hi Dianne,

On Dec 1, 6:48 am, Dianne Hackborn <hack...@android.com> wrote:
> Yeah generally the current OpenGL drivers don't support multiple contexts.

Does that include additional contexts used to e.g. draw into offscreen
framebuffers or to load textures on background threads? Or were you
referring only to multiple contexts drawing to the screen, from
different apps?

(I have some code that would like to load textures on a background
thread and I've now found the EGL function for creating a new context
that shares textures with the existing rendering context, but I've not
tried to use it yet - please let me know if that's doomed!)
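
Concretely, the call I mean is eglCreateContext() with the existing rendering
context passed as share_context; roughly (untested sketch, error checking
omitted, and the loader thread would still need its own eglMakeCurrent() with
some surface, e.g. a small pbuffer):

#include <EGL/egl.h>

// A second context that shares texture objects with the main rendering
// context, for use on a background loader thread.
EGLContext create_loader_context(EGLDisplay dpy, EGLConfig cfg,
                                 EGLContext main_context)
{
    return eglCreateContext(dpy, cfg, main_context, NULL /* attrib_list */);
}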


Phil.

Olivier Guilyardi

unread,
Dec 1, 2010, 7:07:51 AM12/1/10
to andro...@googlegroups.com

mmap() is smart. It will keep the data in RAM as long as memory isn't needed by
something else. You can quite safely mmap large memory regions without being
afraid to overload the system: the kernel will swap pages to the disk as needed.

Also, mmap() is very efficient at reading from permanent storage. It's known to
be generally faster than standard file ops. Plus, the idea is to store the
RGB565 image data. IMO, even if it needs to be entirely reloaded from disk, it
will be very fast using mmap().

I haven't tested that with textures, and there are a few details to get right, but
at the expense of a few MB of cache on the sdcard, I believe you can achieve
nearly 10x the speed of the standard JPEG loading route.
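
To be concrete, the loading side would look something like this (a sketch; it
assumes the texture was pre-decoded once into a raw RGB565 file of exactly
w*h*2 bytes, and that the target texture is already bound on the GL thread):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <GLES/gl.h>

bool load_cached_texture(const char* path, int w, int h)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return false;
    size_t size = (size_t)w * h * 2;              // 2 bytes per RGB565 texel
    void* pixels = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                    // the mapping survives the close
    if (pixels == MAP_FAILED) return false;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
    munmap(pixels, size);
    return true;
}

The first launch still has to decode the JPEGs and write the raw files out;
the win is only on subsequent launches.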

--
Olivier

David Given

unread,
Dec 1, 2010, 7:12:17 AM12/1/10
to andro...@googlegroups.com
On 01/12/10 11:34, Phil Endecott wrote:
[...]

> Does that include additional contexts used to e.g. draw into offscreen
> framebuffers or to load textures on background threads? Or were you
> referring only to multiple contexts drawing to the screen, from
> different apps?

I've never tried this, so I don't know if it works on Android, but in
general I have *never*, *ever* seen an OpenGL ES implementation that did
threading right. Some implementations will even crash if you call a GL
function from a thread other than the UI one; luckily Android doesn't
seem to suffer from this.

(AGP has a chunk of code for delegating GL calls from the game thread to
the UI thread on platforms that require it; we hate it, because there's
a substantial performance hit, but sometimes it's the only option...
*cough*Symbian*cough*)

What you can do, though, is do the heavy lifting of loading the image
from disk and decoding it in a background thread, and then just do the
GPU upload from the render thread. This would lend itself nicely to
approaches where you're going to keep the decoded image data in memory to
allow you to reinitialise the context when backgrounded, too.
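
Something along these lines (a sketch; decode_image() and upload_to_gl() are
placeholders for your own JPEG decode and glTexImage2D code):

#include <pthread.h>
#include <queue>

struct DecodedImage { int w, h; void* pixels; };

DecodedImage decode_image(const char* path);   // placeholder: file I/O + decode
void upload_to_gl(const DecodedImage& img);    // placeholder: glTexImage2D etc.

static std::queue<DecodedImage> g_ready;
static pthread_mutex_t g_ready_lock = PTHREAD_MUTEX_INITIALIZER;

// Background thread: no GL calls at all, just I/O and decoding.
void* decode_worker(void* path)
{
    DecodedImage img = decode_image((const char*)path);
    pthread_mutex_lock(&g_ready_lock);
    g_ready.push(img);
    pthread_mutex_unlock(&g_ready_lock);
    return NULL;
}

// Render thread, once per frame: upload at most one ready image so a burst
// of decodes doesn't cause a visible hitch.
void upload_one_pending_texture()
{
    DecodedImage img;
    bool have = false;
    pthread_mutex_lock(&g_ready_lock);
    if (!g_ready.empty()) { img = g_ready.front(); g_ready.pop(); have = true; }
    pthread_mutex_unlock(&g_ready_lock);
    if (have) upload_to_gl(img);                 // then free img.pixels
}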

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────


Phil Endecott

unread,
Dec 1, 2010, 10:35:58 AM12/1/10
to android-ndk
Hi Olivier,

On Dec 1, 12:07 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
> mmap() is smart. It will keep the data in RAM as long as memory isn't needed by
> something else. You can quite safely mmap large memory regions without being
> afraid to overload the system: the kernel will swap pages to the disk as needed.

Really? Android has swap? That's news to me. Can someone confirm
please? (I don't think you mean "swap pages to disk"; I think you
mean "page parts of the file from disk". Android will certainly do
that.)

> Also, mmap() is very efficient at reading from permanent storage. It's known to
> be generally faster than standard file ops.

Well it will be faster than read() because it avoids one RAM-to-RAM
copy. But I would not expect dramatic differences. Does anyone have
any benchmarks?

> Plus, the idea is to store the
> RGB565 image data.

Hmm, you mean to _permanently_ store the decoded image data? OK, if
you do that (i.e. decode the JPEGs once when the app is first run and
store the decoded data in flash) then you can use mmap() and it will
only page in what is needed - but that's wasting a lot of the user's
flash space.

Ah, maybe you mean to decode from the JPEGs to files in flash when the
app starts, and mmap() them, and to delete them when the app
terminates? (Or in fact you can unlink them immediately and if you're
lucky they might never hit the flash.) That would be a lot closer to
real swap.

> IMO, even if it needs to be entirely reloaded from disk, it
> will be very fast using mmap().

Reading from flash is certainly slower than reading from RAM, to the
extent that reading-and-decompressing can be faster per byte of
decompressed data for some compression formats. The problem is that
implementing your own decompress-when-reading scheme doesn't let you
discard the data when there is memory pressure, as the low memory
warning on Android seems to come far too late for that.

> I haven't tested that with textures, there are a few details to get right, but
> at the expense of a few MB of cache on the sdcard, I believe you can achieve
> near 10x speed when compared to the standard jpeg loading route.

Have you actually measured that?


Regards, Phil.

Phil Endecott

unread,
Dec 1, 2010, 10:41:37 AM12/1/10
to android-ndk
On Dec 1, 12:12 pm, David Given <d...@cowlark.com> wrote:
> I've never tried this, so I don't know if it works on Android, but in
> general I have *never*, *ever* seen an OpenGL ES implementation that did
> threading right.

I've not had any problems on iOS, but I'm only using it for background
texture and vertex buffer loading.

> What you can do, though, is do the heavy lifting of loading the image
> from disk and decoding it in a background thread, and then just do the
> GPU upload from the render thread.

Yes, I've considered this. It's just a matter of the amount of code
re-organisation that would be needed to do it, for not much gain.
Currently I decode and load at most one 256x256 JPEG per rendering
cycle and that doesn't seem to cause any problems.


Phil.

Olivier Guilyardi

unread,
Dec 1, 2010, 2:55:36 PM12/1/10
to andro...@googlegroups.com
On 12/01/2010 04:35 PM, Phil Endecott wrote:

> Hmm, you mean to _permanently_ store the decoded image data?

Either permanently, or temporarily with a reasonably long timeout. Also, the
mmap()ing could be kept alive for some time with a simple Service.

> OK, if you do that (i.e. decode the JPEGs once when the app is first run and
> store the decoded data in flash) then you can use mmap() and it will
> only page in what is needed - but that's wasting a lot of the user's
> flash space.

I don't know what you call a lot. With my previous benchmark, that would be
6x512x512x2 = 3MB. What's that on an 8GB memory card? 0.04%. Place that in
Context.getExternalFilesDir(), and the files will be removed when the app is
uninstalled. That's quite clean.

> Ah, maybe you mean to decode from the JPEGs to files in flash when the
> app starts, and mmap() them, and to delete them when the app
> terminates? (Or in fact you can unlink them immediately and if you're
> lucky they might never hit the flash.) That would be a lot closer to
> real swap.

No, that's not what I mean. With my idea, the initial startup would be slower
than before, because it would involve reading and writing to disk. And even if
you just ftruncate before mmap, it's performed as a write on FAT by the kernel.

So you certainly do not want this to happen every time the app is launched. But
that could be acceptable if the user leaves the app for a long time.

>> I haven't tested that with textures, there are a few details to get right, but
>> at the expense of a few MB of cache on the sdcard, I believe you can achieve
>> near 10x speed when compared to the standard jpeg loading route.
>
> Have you actually measured that?

No, just an intuition. I've been optimizing code full time for about a year on
Android. Of course, I could be wrong.

--
Olivier

David Given

unread,
Dec 1, 2010, 4:15:15 PM12/1/10
to andro...@googlegroups.com
On 01/12/10 15:35, Phil Endecott wrote:
[...]

> Really? Android has swap? That's news to me. Can someone confirm
> please? (I don't think you mean "swap pages to disk"; I think you
> mean "page parts of the file from disk". Android will certainly do
> that.)

It sounds like you know this, but for clarity:

If you mmap() a file without PROT_WRITE, you get a read-only mapping. In
this situation the kernel knows it can discard any page from the mapping
and reload it from disk when needed. So you get the benefits of having
the file in memory, but without the memory pressure.

If you mmap() a file with PROT_WRITE, you get a writable mapping, where
changes to the mapping are written back to disk. This behaves much as
above, but discarding pages becomes a bit slower as the kernel has to
flush changes back to disk before the page can be discarded from RAM.

If you mmap() a file with PROT_WRITE but also with MAP_PRIVATE, then
changes to the mapping will not be written back to the disk. So,
unmodified pages can be discarded and reloaded, but modified pages have
to be kept in RAM, so you get memory pressure. When memory gets low such
pages will be swapped out (if swap is available).

If you mmap() something with MAP_ANONYMOUS, you get a private mapping
that's not backed by a file on disk (the file descriptor is ignored).
This is equivalent to malloc(), but is more efficient for large blocks.
This is particularly valuable as it allows you to release blocks back to
the OS if you don't need them any more. (malloc() may not do this,
depending on the vagaries of the C library.)
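
For concreteness, the anonymous case looks roughly like this (a sketch, with
minimal error handling):

#include <sys/mman.h>

void anonymous_block_example()
{
    size_t size = 8 * 1024 * 1024;                        // e.g. an 8 MB working buffer
    void* block = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   // fd is ignored
    if (block == MAP_FAILED) return;
    // ... use the block ...
    munmap(block, size);   // unlike free(), this hands the pages straight back to the OS
}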

But... this is Android, not Linux, and Android's got its own memory
constraint system. It's entirely possible that absolutely *none* of this
is relevant because by the time memory gets low enough that the kernel
is thinking about discarding pages, the Android OOM system has already
nuked your process.

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────


Olivier Guilyardi

unread,
Dec 1, 2010, 6:36:22 PM12/1/10
to andro...@googlegroups.com
That's a great clarification :)

On 12/01/2010 10:15 PM, David Given wrote:

[...]

> But... this is Android, not Linux, and Android's got its own memory
> constraint system. It's entirely possible that absolutely *none* of this
> is relevant because by the time memory gets low enough that the kernel
> is thinking about discarding pages, the Android OOM system has already
> nuked your process.

That's quite scary and could explain some obscure issues..

digit, hackbod, is what David explains possible?

--
Olivier


Dianne Hackborn

unread,
Dec 1, 2010, 7:47:31 PM12/1/10
to andro...@googlegroups.com
On Wed, Dec 1, 2010 at 3:34 AM, Phil Endecott <spam_fro...@chezphil.org> wrote:
> Does that include additional contexts used to e.g. draw into offscreen
> framebuffers or to load textures on background threads?  Or were you
> referring only to multiple contexts drawing to the screen, from
> different apps?

This is not really in my area of expertise, but from what I have seen I think it is less likely for them to support multiple contexts in different threads of the same process than to support multiple contexts with each context in a different process.

Dianne Hackborn

unread,
Dec 1, 2010, 7:58:43 PM12/1/10
to andro...@googlegroups.com
No, that's not true at all.  One of the core design points of Android's memory management was actually around the issues of a modern kernel like Linux, where "allocated memory" is extremely fuzzy if not outright impossible to accurately define.  Instead of following the traditional embedded approach of committing memory up-front and having hard limits on allocations, Android is designed around taking advantage of being able to overcommit memory and dealing with running low on available RAM as normal operation.

In fact we rely on mmap extensively -- Dalvik uses it to "load" all of your code, the resource system uses it to "load" your resources, etc.  If you look at the virtual (and to a lesser extent resident) size of a regular Android application you will see that it is pretty ridiculously large:

  PID      Vss      Rss      Pss      Uss  cmdline
 4666   22988K   22928K    2378K    1488K  com.android.protips

app_13    4666  4195  113432 22928 ffffffff 8010d0ec S com.android.protips

vss = 22988K
rss = 22928K

Note that the vsize is over twice as large as the total RAM that is available to the kernel on the G1 and original myTouch!

However the actual committed RAM used by the app as defined by pss is much smaller, about 2.3MB.

Android's out of memory killer is an extension to the Linux kernel's cache that kills processes as part of cache eviction when the available RAM gets below various points.  That is why we talk about the process model being in some ways like treating a process as part of the memory cache.

David Turner

unread,
Dec 2, 2010, 7:17:29 AM12/2/10
to andro...@googlegroups.com
On Wed, Dec 1, 2010 at 10:15 PM, David Given <d...@cowlark.com> wrote:
> On 01/12/10 15:35, Phil Endecott wrote:
> [...]
>> Really?  Android has swap?  That's news to me.  Can someone confirm
>> please?  (I don't think you mean "swap pages to disk"; I think you
>> mean "page parts of the file from disk".  Android will certainly do
>> that.)

> It sounds like you know this, but for clarity:
>
> If you mmap() a file without PROT_WRITE, you get a read-only mapping. In
> this situation the kernel knows it can discard any page from the mapping
> and reload it from disk when needed. So you get the benefits of having
> the file in memory, but without the memory pressure.

true
 
> If you mmap() a file with PROT_WRITE, you get a writable mapping, where
> changes to the mapping are written back to disk. This behaves much as
> above, but discarding pages becomes a bit slower as the kernel has to
> flush changes back to disk before the page can be discarded from RAM.

true
 
> If you mmap() a file with PROT_WRITE but also with MAP_PRIVATE, then
> writing to the mapping will not write them back to the disk. So,
> unmodified pages can be discarded and reloaded, but modified pages have
> to be kept in RAM, so you get memory pressure. When memory gets low such
> pages will be swapped out (if swap is available).

true (and there is no swap available on Android)
 
> If you mmap() something with MAP_ANONYMOUS, you get a private mapping
> that's not backed by a file on disk (the file descriptor is ignored).
> This is equivalent to malloc(), but is more efficient for large blocks.
> This is particularly valuable as it allows you to release blocks back to
> the OS if you don't need them any more. (malloc() may not do this,
> depending on the vagaries of the C library.)

To make it clear, MAP_ANONYMOUS starts by simply reserving pages in your
virtual address space. It's only when you touch them for the first time that the kernel
will try to allocate a physical page to back them. This is where Linux's kernel may
start flushing caches or killing other processes to make room for yours.

Which processes are selected for termination under these conditions is, well,
interesting; I don't think I know the details here.

> But... this is Android, not Linux, and Android's got its own memory
> constraint system. It's entirely possible that absolutely *none* of this
> is relevant because by the time memory gets low enough that the kernel
> is thinking about discarding pages, the Android OOM system has already
> nuked your process.

I'm not sure about the details of our OOM implementation, but there is the kernel side,
described above, and there is also something in the VM to limit the size of the object heap.

However, I don't think this impacts anything you can do in native code, which is controlled only
by the kernel. That's why you can typically allocate much more memory from the native side.

David Given

unread,
Dec 2, 2010, 7:50:23 AM12/2/10
to andro...@googlegroups.com
On 02/12/10 00:58, Dianne Hackborn wrote:
[...]

> Android's out of memory killer is an extension to the Linux kernel's
> cache that kills processes as part of cache eviction when the available
> RAM gets below various points. That is why we talk about the process
> model being in some ways like treating a process as part of the memory
> cache.

Good to know --- as you might have gathered I know precisely nothing
about Android's modifications to the kernel. This means that using mmap
in this way will relieve memory pressure in a way that will be useful to
Android, right?

(Although given we're only talking a few megabytes here it's really more a
matter of principle than practice.)

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────


Phil Endecott

unread,
Dec 2, 2010, 12:05:15 PM12/2/10
to android-ndk
Hi Dianne,

On Dec 2, 12:58 am, Dianne Hackborn <hack...@android.com> wrote:
> Instead of following the traditional embedded approach
> of committing memory up-front and having hard limits on allocations, Android
> is designed around taking advantage of being able to overcommit memory and
> deal with running low on available RAM as normal operation.

Can you reconcile that with a previous discussion that we had about
RAM use?
(http://groups.google.com/group/android-ndk/browse_thread/thread/2a054f6a2e0ef448/135112ff5eeff60a)

I wanted to keep using more RAM until the system told me that it was
getting low with the "low memory" warning, at which point I would
discard stuff. You told me that I couldn't do that as the "low
memory" signal doesn't come until all background processes have
already been killed, and that I should instead work inside the
ActivityManager.getMemoryClass() (30 MB on my phone; only 5% of the
device's 512 MB of RAM). That sounds a lot like an "up-front limit",
rather than "dealing with running low ... as normal operation". (By
the way, I tried using twice the limit to see what would happen and I
get periodic 5 second pauses while loading textures; so it really does
seem that the system won't work properly if I try to use more than 5%
of its RAM.)

Regards, Phil.

Olivier Guilyardi

unread,
Dec 2, 2010, 12:59:29 PM12/2/10
to andro...@googlegroups.com
On 12/02/2010 01:17 PM, David Turner wrote:
>
>
> On Wed, Dec 1, 2010 at 10:15 PM, David Given <d...@cowlark.com
> <mailto:d...@cowlark.com>> wrote:
[...]

> But... this is Android, not Linux, and Android's got its own memory
> constraint system. It's entirely possible that absolutely *none* of this
> is relevant because by the time memory gets low enough that the kernel
> is thinking about discarding pages, the Android OOM system has already
> nuked your process.
>
> I'm not sure about the details of our OOM implementation, but. There is
> the kernel side,
> described above, and there is also something in the VM to limit the size
> of the object heap.
>
> However, I don't think this impacts anything you can do in native code,
> which only is controlled
> by the kernel. That's why you can typically allocate much more memory
> from the native side.

Thank you Dianne and David for the explanations. It seems to me that some
uncertainty remains though...

I would quite like to see the results of a test case which: 1. tries to mmap a
large file, say 100MB, 2. reads data from the mapped memory sequentially or
randomly, 3. can run in the foreground or in the background, 4. logs some stuff
and especially low memory warnings, if any.

I think it would be useful to see how that runs (or gets killed..) when
backgrounded while another app needs memory, on a device with about 16MB of
RAM. Maybe I'll find time to write that kind of test, unless someone else
wants to do that.

--
Olivier

Phil Endecott

unread,
Dec 2, 2010, 3:03:48 PM12/2/10
to android-ndk
Hi Olivier,

On Dec 2, 5:59 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
> I would quite like to see the results of a test case which: 1. try and mmap a
> large file, say 100Mb, 2. read data from the mapped memory sequentially or
> randomly 3. can run in the foreground on in the background 4. logs some stuff
> and especially low memory warnings, if any.

I mmap() read-only about 20 files totalling several hundred MB and
read from them
mostly in sequential chunks of a few kB. This all behaves as I
expect, i.e. no
memory warnings or other problems.

> It think that it would be useful to see how that runs (or gets killed..) when
> backgrounded, and that another app needs memory, on a device with about 16Mb of
> RAM.

I can't comment on backgrounding because I just _exit(0) onPause(),
nor can I say
anything about devices with as little as 16 MB of RAM.


Phil.

Olivier Guilyardi

unread,
Dec 2, 2010, 3:22:27 PM12/2/10
to andro...@googlegroups.com
On 12/02/2010 09:03 PM, Phil Endecott wrote:
> Hi Olivier,
>
> On Dec 2, 5:59 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
>> I would quite like to see the results of a test case which: 1. try and mmap a
>> large file, say 100Mb, 2. read data from the mapped memory sequentially or
>> randomly 3. can run in the foreground on in the background 4. logs some stuff
>> and especially low memory warnings, if any.
>
> I mmap() read-only about 20 files totalling several hundred MB and
> read from them
> mostly in sequential chunks of a few kB. This all behaves as I
> expect, i.e. no
> memory warnings or other problems.

Interesting info, thanks.

>
>> It think that it would be useful to see how that runs (or gets killed..) when
>> backgrounded, and that another app needs memory, on a device with about 16Mb of
>> RAM.
>
> I can't comment on backgrounding because I just _exit(0) onPause(),
> nor can I say
> anything about devices with as little as 16 MB of RAM.

Hmm, I confused the app heap limit with available RAM... Does the total size of
your mappings exceed the size of the physical RAM?

About backgrounding, I meant doing this from a Service, while trying to pressure
the memory with some other foreground app.

--
Olivier

Dianne Hackborn

unread,
Dec 3, 2010, 1:33:30 AM12/3/10
to andro...@googlegroups.com
On Thu, Dec 2, 2010 at 4:50 AM, David Given <d...@cowlark.com> wrote:
> Good to know --- as you might have gathered I know precisely nothing
> about Android's modifications to the kernel. This means that using mmap
> in this way will relieve memory pressure in a way that will be useful to
> Android, right?

We love and greatly encourage mmap for reading.  It's even better than having swap space. :)

This is used extensively by various parts of Android to make things effectively much lighter weight -- Dalvik's .odex files specifically designed to be mmapped into a process, .apks are carefully crafted to be able to mmap key pieces like the resource table (and our custom zip reader is built around mmap), etc.

mmap is great.

Dianne Hackborn

unread,
Dec 3, 2010, 1:51:42 AM12/3/10
to andro...@googlegroups.com
On Thu, Dec 2, 2010 at 9:05 AM, Phil Endecott <spam_fro...@chezphil.org> wrote:
> I wanted to keep using more RAM until the system told me that it was
> getting low with the "low memory" warning, at which point I would
> discard stuff.  You told me that I couldn't do that as the "low
> memory" signal doesn't come until all background processes have
> already been killed, and that I should instead work inside the
> ActivityManager.getMemoryClass() (30 MB on my phone; only 5% of the
> device's 512 MB of RAM).  That sounds a lot like an "up-front limit",
> rather than "dealing with running low ... as normal operation".  (By
> the way, I tried using twice the limit to see what would happen and I
> get periodic 5 second pauses while loading textures; so it really does
> seem that the system won't work properly if I try to use more than 5%
> of its RAM.)

It is a guideline.

The basic issue is -- when we did 1.0, we didn't have native code, and took some shortcuts by relying on Dalvik to impose reasonable memory limits on foreground applications.

What to do about foreground applications is one of the trickier things.  I mean, they are in the foreground, so the user clearly cares deeply about them.  So where do you draw the line and say, "this thing is becoming abusive, kill it?"  Currently we are very conservative about it and basically let the app use RAM up until no other user apps (background, services, visible, etc.) can run and it is the last thing that can go.  And we'll probably have started noticeably paging at that point.

That is clearly not a good solution, but as long as apps were limited to reasonable sizes by Dalvik it wasn't an issue.  Next year I have on my list doing something about this, though, probably killing such an app earlier if it is using a significant amount of memory, before it can negatively impact other services.

The general question of "how much memory can I allocate?" is really really hard.  There isn't a good answer.  With how much we rely on over-committing, in many ways there just isn't an answer, especially with the unknown of what needs to be running in the background and that changing over time.  So we have been quite conservative with limits...  quite overly conservative with the N1, but the N1 is on the extreme high end of available RAM for current Android devices, so it is dangerous to make judgments about how much RAM is okay based on just that.

(Of course if all you care about is running on the N1, feel free to have at it and gobble up lots of memory.)

This brings us to another difficult issue -- how to help applications deal with the widely varying amount of RAM available on different devices.  We don't have a great solution for this yet; the solution right now is to be conservative with memory limits, which gives a baseline that developers can trust will always be available.  To be honest, I'm not sure of a way to step outside of that without creating a much much more complicated development environment in dealing with the device variety that would create.  So our current approach is "more RAM == more things that can be running at the same time," not so much "more RAM == bigger apps."

That said, the N1 should work absolutely fine with an app allocating 60MB, even 100MB or more.  The N1 as you have noticed has RAM to spare.  It's really over-endowed in that department. :)  I have no idea what those 5 second pauses would be.

Phil Endecott

unread,
Dec 3, 2010, 6:42:04 AM12/3/10
to android-ndk
On Dec 2, 8:22 pm, Olivier Guilyardi <l...@samalyse.com> wrote:
> Does the total size of
> your mappings exceed the size of the physical RAM?

Not yet, no. But it will do. If it stops working, I'll be sure to
let you all know! (I have done this on other embedded Linux platforms
and I don't envisage any problems; running out of virtual address
space is more serious.)

Phil Endecott

unread,
Dec 3, 2010, 7:34:14 AM12/3/10
to android-ndk
Hi Dianne,

On Dec 3, 6:51 am, Dianne Hackborn <hack...@android.com> wrote:
> What to do about foreground applications is one of the tricker things. I
> mean, they are in the foreground, so the user clearly cares deeply about
> them. So where do you draw the line and say, "this thing is becoming
> abusive, kill it?"

You don't need to worry about that, as long as you've given the app a
better idea about how much memory it can reasonably use earlier on;
then, it won't become "abusive".

In my case, I have OpenGL textures that are decoded from JPEGs that
are in flash. (Think Google Earth.) Some of those textures will be
visible on the screen and I won't want to discard them unless memory
is very short (unless the app is backgrounded), but others will now be
off the screen and can be discarded if necessary. I was expecting
that the "low memory" warning would be a good trigger for this, but it
seems that this comes much too late. If you just send that warning
much earlier, I can keep the cache size under whatever ceiling is
appropriate.

I trust that you're aware of how Apple are dealing with this. There
are two issues: they have clearly thought about this, but more
importantly many of the people writing native code for Android are
porting iOS apps. So, legal issues aside, it would be great to align
with what they've done. Having said that, their implementation is not
ideal; in particular, their "memory warning" doesn't give any clue how
much memory the app should free; if you don't release enough you'll
get killed and if you release too much the user experience is
degraded.

> The general question of "how much memory can I allocate?" is really really
> hard. There isn't a good answer.

So rather than trying to give a static answer to that question, just
give apps dynamic feedback about current memory pressure. Ideally,
the kernel would know how expensive it is to discard pages from the
file cache and could say to apps, "please discard all data that costs
less than X to regenerate".

> So our
> current approach is "more RAM == more things that can be running at the same
> time," not so much "more RAM == bigger apps."

Thinking about how people use their computers and how that has changed
as they have got more powerful, have you seen a big increase in the
number of applications running at the same time? I don't think I
have. I'm not convinced that this is the right choice. The exception
might be in the transition from phone-size to tablet-size displays.

> the N1 is on the extreme high end of available RAM for current
> Android devices

> the N1 should work absolutely fine with an app allocating 60MB,
> even 100MB or more. The N1 as you have noticed has RAM to spare. It's
> really over-endowed in that department. :) I have no idea what those 5
> second pauses would be.

I don't have an N1. I'm seeing the pauses on a Galaxy Tab. I also
have a Toshiba AC100 and a Motorola Defy. All three of these devices
have 512 MB of RAM, the same as the N1. So do other recent devices
including the Samsung Galaxy S, Motorola Droid 2, and HTC Desire. The
HTC Desire HD has 768 MB. The N1 is not an outlier.


Regards, Phil.