
Re: Time to fix the memory allocation on port-amiga


Radosław Kujawa
Dec 22, 2010, 6:37:59 PM

On Dec 21, 2010, at 5:08 AM, John Klos wrote:

> I have to start learning the internals myself if I'm to be useful. Could anyone help me get started, either by suggesting some reading materials or by suggesting a starting place to figure out where to fix these allocation issues? This should be a relatively slow week, and I'd love to get this fixed.

I consider myself a beginner, so I know getting started with kernel development is hard. Even though I learn something new about NetBSD internals every day, there's always much more to learn. I come across many problems I don't comprehend. Often I think I understand how some subsystem works, only to be defeated later by an unexplained crash. NetBSD is a surprisingly big and complex OS. While extreme portability is cool, it sometimes poses additional difficulties for a person trying to understand how NetBSD works (many abstraction layers, APIs trying to be very generic, jumping between machine-independent, architecture-dependent, and machine-dependent source files). We're dealing here with an obsolete (some people prefer "vintage") architecture and a quite modern operating system. This fact alone has implications: you won't find many publications on NetBSD internals, and (perhaps) none about the NetBSD kernel on the Amiga ;).

I'd take a quick look at the maxmem variable (defined in arch/amiga/amiga/amiga_init.c). The comment suggests that maxmem holds the maximum possible process size. On most architectures this variable equals the physical memory size (so you can't have a process larger than your total RAM). But on amiga it is computed in a less obvious way, which I don't even dare to try to understand until I get some sleep ;). I'm not sure whether this is really related to your problem. As far as I remember, Frank once mentioned that we're not using address space beyond 0x1000 0000. This fact may be related. Or not.
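
For readers who haven't opened that file: on most ports the convention described above amounts to a one-line assignment. A hypothetical sketch of that convention (physmem and maxmem are the usual NetBSD names, but this is not the actual amiga_init.c code):

/*
 * Hypothetical sketch of the usual convention, NOT the actual
 * amiga_init.c computation: most ports simply clamp the maximum
 * process size to the physical memory found at boot.
 */
extern int physmem;     /* physical memory size, in pages */
int maxmem;             /* maximum possible process size, in pages */

static void
init_maxmem_sketch(void)
{
	maxmem = physmem;   /* no process larger than total RAM */
}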

If I ever get the time to install NetBSD on my A3000 (260MB of Fast RAM in total), then I'll gladly help debug this problem. However, lack of time is my main problem these days...

--
Best regards,
Radosław Kujawa

Ignatios Souvatzis
Dec 23, 2010, 7:06:22 AM

On Tue, Dec 21, 2010 at 04:08:36AM +0000, John Klos wrote:
> Hi,
>
> For a while I've written about how a 256 meg Amiga system can't
> access enough memory properly. For the purposes of running a bulk

You'll probably have to rewrite the pmap code in the kernel. The
old one could only handle a limited amount of kernel virtual
memory - which is also needed to hold the page tables for user
processes, which can be huge for X-servers operating on more
modern graphics cards.

The old pmap only had a limited number of 2nd-level descriptors,
fitting into one 8k page, with another page left for the first level.

I don't recall whether we relaxed that limitation a while ago.
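
To put a rough number on that limitation: assuming the 68040 layout with 8k pages (4-byte descriptors, 32-entry page tables behind each 2nd-level descriptor; these are my assumptions for illustration, not figures taken from the pmap source), one 8k page of 2nd-level descriptors covers about 512MB. A small C sketch of the arithmetic:

#include <stdio.h>

/*
 * Back-of-the-envelope estimate of the address space one 8k page
 * of 2nd-level descriptors can map on a 68040 with 8k pages.  All
 * constants are assumptions for illustration, not values from the
 * NetBSD pmap source.
 */
#define PAGE_SIZE      8192UL   /* 8k MMU pages */
#define DESC_SIZE      4UL      /* one table descriptor */
#define PTES_PER_TABLE 32UL     /* entries per 040 page table (8k pages) */

int
main(void)
{
	unsigned long descs = PAGE_SIZE / DESC_SIZE;             /* 2048 */
	unsigned long span = descs * PTES_PER_TABLE * PAGE_SIZE;

	/* 2048 descriptors * 32 PTEs * 8k = 512MB of mappable space */
	printf("%lu descriptors -> %luMB\n", descs, span >> 20);
	return 0;
}

For scale, 512MB is at least consistent with the 480MB user address space limit that comes up later in the thread.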

-is
--
seal your e-mail: http://www.gnupg.org/

Frank Wille
Dec 23, 2010, 7:43:53 AM

On Thu, 23 Dec 2010 13:06:22 +0100
Ignatios Souvatzis <i...@netbsd.org> wrote:

> > For a while I've written about how a 256 meg Amiga system can't
> > access enough memory properly. For the purposes of running a bulk
>
> You'll probably have to rewrite the pmap code in the kernel. The
> old one could only handle a limited amount of kernel virtual
> memory - which is also needed to hold the page tables for user
> processes, which can be huge for X-servers operating on more
> modern graphics cards.
>
> The old pmap only had a limited number of 2nd-level descriptors,
> fitting into one 8k page, with another page left for the first level.
>
> I don't recall whether we relaxed that limitation a while ago.

Didn't we switch to m68k/m68k/pmap_motorola.c some months ago, like all
68k-ports did? Or doesn't that matter?

--
Frank Wille

Michael L. Hitch
Dec 23, 2010, 3:33:01 PM

On Thu, 23 Dec 2010, Ignatios Souvatzis wrote:

> You'll probably have to rewrite the pmap code in the kernel. The
> old one could only handle a limited amount of kernel virtual
> memory - which is also needed to hold the page tables for user
> processes, which can be huge for X-servers operating on more
> modern graphics cards.
>
> The old pmap only had a limited number of 2nd-level descriptors,
> fitting into one 8k page, with another page left for the first level.

That limitation is for user pmaps only - the kernel can allocate as much
contiguous memory for the segment tables as desired. I've not seen the
kernel even approach the number that the current pmap can use. There was
an option in the amiga pmap that was used to increase that, but the common
pmap_motorola.c does not have that option, and if I remember correctly, it
allocates quite a few more pages than the amiga pmap did.

Mike

Michael L. Hitch
Dec 23, 2010, 3:22:47 PM

On Tue, 21 Dec 2010, John Klos wrote:

> For a while I've written about how a 256 meg Amiga system can't access
> enough memory properly. For the purposes of running a bulk package build,
> I've compiled a kernel for my A1200 which only uses 128 megs. However, now
> it seems that even 128 meg systems are having problems. At various places
> in a build of lang/php53 on a 127 meg Amiga 4000, I'm seeing "virtual
> memory exhausted: Cannot allocate memory" messages.

How much swap space do you have configured?

> I have to start learning the internals myself if I'm to be useful. Could
> anyone help me get started, either by suggesting some reading materials or
> by suggesting a starting place to figure out where to fix these allocation
> issues? This should be a relatively slow week, and I'd love to get this
> fixed.

You could start with building a kernel with the following options:

options 	USRSTACK=0x1E000000
options 	MAXDSIZ="(32*1024*1024)"
#options 	MAXSSIZ="(16*1024*1024)"

The USRSTACK value should result in the maximum user address space
useable before the 040/060 pmap limitation would cause system panics.
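
For anyone who wants those values in decimal, a quick conversion (my own arithmetic, not output from any kernel tool):

#include <stdio.h>

/* Convert the suggested option values to MB; values from this post. */
int
main(void)
{
	unsigned long usrstack = 0x1E000000UL;
	unsigned long maxdsiz = 32UL * 1024 * 1024;
	unsigned long maxssiz = 16UL * 1024 * 1024;

	printf("USRSTACK: %luMB\n", usrstack >> 20);    /* 480MB */
	printf("MAXDSIZ:  %luMB\n", maxdsiz >> 20);     /* 32MB */
	printf("MAXSSIZ:  %luMB\n", maxssiz >> 20);     /* 16MB */
	return 0;
}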

MAXDSIZ is set to the default data size so that all of the data segment
memory is available by default (a non-kernel alternative is to ulimit -d
to 128MB). The stock defaults mean there are 96MB of virtual memory
assigned to the data segment that would normally not be available.

MAXSSIZ would reduce the maximum stack to half the current value; if
nothing uses more than 16MB of stack, it could be reduced further (the
default stack size is 2MB). Stack space is available only for the stack,
so there's 30MB of virtual space not available for malloc().

So on the current kernel, without raising the data 'limit', there is
going to be at least 126MB of virtual memory a process won't use (the 30MB
of stack space, and the 96MB of data space). There will actually be a bit
more, since the max text size is 6MB, and very few programs will be using
all that space, but it's going to be 'reserved' out of the virtual space
as well. The MAXTSIZ could possibly be reduced, which would make a few
more MB available.
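
The reservations above are easier to see summed up. A small sketch of that arithmetic (the stock maximum data size of 128MB is inferred from the 96MB figure and the ulimit -d remark above; everything else is from this post):

#include <stdio.h>

/* Virtual address space budget described above, in MB. */
#define MAXSSIZ_MB 32    /* maximum stack size */
#define DFLSSIZ_MB 2     /* default stack size */
#define MAXDSIZ_MB 128   /* stock maximum data size (inferred) */
#define DFLDSIZ_MB 32    /* default data size */

int
main(void)
{
	int stack_reserved = MAXSSIZ_MB - DFLSSIZ_MB;   /* 30MB */
	int data_reserved = MAXDSIZ_MB - DFLDSIZ_MB;    /* 96MB */

	/* 30MB + 96MB = 126MB a process can't use without raised limits */
	printf("reserved: %dMB\n", stack_reserved + data_reserved);
	return 0;
}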

Michael L. Hitch
Dec 23, 2010, 3:34:15 PM

On Thu, 23 Dec 2010, Frank Wille wrote:

> Didn't we switch to m68k/m68k/pmap_motorola.c some months ago, like all
> 68k-ports did? Or doesn't that matter?

Yes we did, but that doesn't matter since both have a limitation for
the 040/060 pmap.

Mike

Michael L. Hitch
Dec 28, 2010, 1:53:57 PM

On Tue, 21 Dec 2010, John Klos wrote:

> For a while I've written about how a 256 meg Amiga system can't access
> enough memory properly. For the purposes of running a bulk package build,
> I've compiled a kernel for my A1200 which only uses 128 megs. However, now
> it seems that even 128 meg systems are having problems. At various places
> in a build of lang/php53 on a 127 meg Amiga 4000, I'm seeing "virtual
> memory exhausted: Cannot allocate memory" messages.

I was going to ask what file the compile was failing on, and how much
virtual memory it was using when it failed. I got my A4000 updated to
5.1, updated my pkgsrc, and started building php53. I was watching
process virtual memory usage and noted that a couple of compiles were
reaching around 90MB, until it hit ext/fileinfo/libmagic/apprentice.c.
That one took at least 190MB. Looking at it, I saw that it includes
a data file that appears to be a 1.7MB character array. Later, I saw your
post to tech-pkg, which included the failing command.

The maximum address space for user processes on the amiga has been
224MB, with 32MB for stack and 6MB for text. That leaves 186MB for data
and shared libraries - which isn't going to work.

I have increased the USRSTACK setting to allow 480MB of user address
space (any higher, and a 68040 system will crash when trying to allocate
more than that amount of address space). That will allow php53 to build
on the amiga.

I was searching for information about php failing on that file and found
this amusing little gem:

[2009-07-05 17:08 UTC] ras...@php.net
We know that file takes quite a bit of memory to compile on older versions of gcc.
It should be better in newer versions though. Not much we can do about this. We
aren't going to change perfectly valid code just because some older compilers
have trouble with it.

[2009-07-05 17:24 UTC] ibboard at gmail dot com
"Quite a bit" of memory? That seems like a bit of an understatement when it will
quite happily consume over 350MB of memory on a single file and previous versions
of PHP could be compiled in ~150MB or less (albeit without that extension) :D
Maybe libmagic needs disabling as a default module if it was in PECL before and
is known to causes problems with older compilers? How much memory am I expected
to need to compile it if it fails with 350MB? I've just watched 'top' while the
compile continued and it maxed out at ~120MB without libmagic, which is far more
reasonable.

[2009-07-05 19:48 UTC] ras...@php.net
It is probably up in the 600-700M range. If you are using an older toolchain in
a severely memory-starved environment, you shouldn't expect to be able to compile
everything there. Why not simply cross-compile from a real dev box somewhere
and copy the binaries over? You can install your production OS in a vm slice on
whatever home machine you have and compile there.

[2009-07-05 20:46 UTC] scot...@php.net
What version of gcc were you using? It would be nice to track this where possible.

[2010-06-21 20:32 UTC] mackeul at gmail dot com
Ran into the same problem with GCC 3.4.5 on a "2.6.9-34.EL #1 Fri Feb 24 16:44:51
EST 2006 i686 i686 i386 GNU/Linux" machine with 256MB of memory.

[2010-11-16 07:52 UTC] info at fedushin dot ru
Adding --disable-fileinfo to ./configure solves the problem.
