
Need help


Ask

Nov 28, 2005, 1:31:02 AM
Hi

On a Pentium machine the following program crashes. I was guessing: does it
have something to do with the architecture?

int main()
{
    int ar[530000];
    return 0;
}

Thanks

Terje Mathisen

Nov 28, 2005, 2:23:47 AM
Ask wrote:

You're absolutely right!

Since the Pentium is a P5 class cpu, it cannot work with arrays larger
than 5 * 10^5 (5e5) elements. Trying to allocate 5.3e5 above is what
causes the crash.

Terje

--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"

Peter Dickerson

Nov 28, 2005, 3:22:24 AM
"Ask" <as...@indiatimes.com> wrote in message
news:1133159462....@f14g2000cwb.googlegroups.com...

This looks like a compiler bug to me. There is no reason why such a program
should crash - assuming we are talking about a C or C-like language. It
works for me unless I turn off optimization completely - I think there is a
bug in the compiler when optimization is turned off. Perhaps a bug report to
the compiler vendor is in order.

Peter


Ask

Nov 28, 2005, 4:03:08 AM
> You're absolutely right!
>
> Since the Pentium is a P5 class cpu, it cannot work with arrays larger
> than 5 * 10^5 (5e5) elements. Trying to allocate 5.3e5 above is what
> causes the crash.

It doesn't seem so. The same amount of memory allocated on the heap doesn't
cause a crash. Is it related to the stack size given to the process?

In general, what determines the maximum size of the array for a given
architecture?

Thanks

Ken Hagan

Nov 28, 2005, 5:14:55 AM

It could be. The array is about 2 megabytes. On many (most?) platforms,
there is a maximum stack size that is less than "all available memory".

On Win32, for example, only 1 megabyte of address space is reserved for
the stack of the first thread to grow into, although you can increase
that with a linker option. (Notice that it isn't *memory* that you are
running out of, it is address space.)

Now, to put this thread on-topic for comp.arch we need someone to note
how Linux differs from Windows (not much, I expect) and how the program
runs fine on (ooo, I don't know) an AS-400, and then move on to how it
would have run happily on twenty different systems before the rise of
UNIX bulldozed all the competently designed systems out of the market.

TOUATI Sid

Nov 28, 2005, 6:48:29 AM
This program does not crash on my Pentium machine under Linux.

S

Greg Lindahl

Nov 28, 2005, 1:33:17 PM
In article <dmela1$qp9$1$830f...@news.demon.co.uk>,
Ken Hagan <K.H...@thermoteknix.co.uk> wrote:

>It could be. The array is about 2 megabytes. On many (most?) platforms,
>there is a maximum stack size that is less than "all available memory".

On platforms where Fortran is commonly used, the stack limit isn't
that low. Low stack limits are e-vile.

-- greg

Andy Glew

Nov 28, 2005, 3:19:43 PM
Ken Hagan <K.H...@thermoteknix.co.uk> writes:

> and then move on to how it would have run happily on twenty
> different systems before the rise of UNIX bull-dozed all the
> competently designed systems out of the market).

I was about to do that...

Although my pet peeve is memory mapped files breaking up the address
space.

Andy Glew

Nov 28, 2005, 3:17:46 PM
> It doesn't seem so. The same amount of memory created on heap doesn't
> cause a crash. Is it related to the stack size given to the process ?
>
> In general, what determines the maximum size of the array for a given
> architecture ?

It's not the architecture, it's the OS. And the compiler. And the
linker. And the loader. Usually all of these, by convention.

In theory, an OS could look at what you ask for, and arrange to give
it all to you, so long as it fits inside the 32b or 64b or whatever
limit.

In practice, all UNIX OSes tend to allocate the stack at a fixed
address, and the heap somewhere else. The traditional way is to have
the stack and heap growing from opposite sides of the address space,
so either can use all of the address space. However, that doesn't
work when there are other "segments" (not x86 segments) in the memory
image - typically things like memory mapped files.

If I were really nice I would show you a sample memory map. But I'm
lazy, and am afraid that I would embarrass myself. But that's what you
need to look for - the so-called memory map, the virtual address map,
of a typical process on your OS.

I expect somebody will post the current address maps on their favorite
OS, and I'll learn something.

---

By the way, let me recommend a good book on a related topic:
Linkers and Loaders, by John Levine.

http://www.iecc.com/linker/


Eric P.

Nov 28, 2005, 3:19:20 PM

On a single-threaded OS like old VMS or Unix, the answer was to
assume that any touch of invalid memory was a stack access and
to trigger an automatic stack expansion. Not necessarily a correct
assumption, but one most programs found useful.

However with multi-threading, I think the only solutions are the
between-a-rock-and-a-hard-place types, at least the ones that I
have heard of. There is no place to linearly grow dynamic stacks,
and if you reserve too much space for each thread stack, you can
run out of space and limit the number of threads.
So you pick a number and Windows picked 1 MB as their default.

Another option would be to dynamically allocate the stack in heap
chunks and chain them together. You could estimate the required
storage as the distance from the current stack pointer.
It would make it difficult to catch wild pointers, though,
and you need special exception handlers to unwind the chunk
chain and ensure space is properly recovered, so it gets messy.

Are there any other mechanisms (for traditional languages)?

Eric

Greg Lindahl

Nov 28, 2005, 3:52:36 PM
In article <438B6648...@sympaticoREMOVE.ca>,
Eric P. <eric_p...@sympaticoREMOVE.ca> wrote:

>However with multi-threading, I think the only solutions are the
>between-a-rock-and-a-hard-place types, at least the ones that I
>have heard of.

The common Unix solution is to treat the initial stack specially, so
non-threaded programs behave fairly sanely. And that's the case which
was at question here.

> There is no place to linearly grow dynamic stacks,
> and if you reserve too much space for each thread stack, you can
> run out of space and limit the number of threads.

Apparently you aren't a convert to 64-bit computing. One nice aspect
of it is that it papers over this problem. The OpenMP programming
model involves programs which tend to use lots of stack, and our
implementation Quietly Does The Right Thing for such programs.

-- greg
(employed by, not speaking for, PathScale.)

Andrew Reilly

Nov 28, 2005, 6:16:48 PM
On Mon, 28 Nov 2005 11:14:55 +0000, Ken Hagan wrote:

> Ask wrote:
>> Hi
>>
>> On a Pentium machine the following program crashes. I was guessing: does it
>> have something to do with the architecture?
>>
>> int main()
>> {
>> int ar[530000];
>> return 0;
>> }

> Now, to put this thread on-topic for comp.arch we need someone to note
> how Linux differs from Windows (not much, I expect) and how the program
> runs fine on (ooo, I don't know) an AS-400, and then move on to how it
> would have run happily on twenty different systems before the rise of
> UNIX bull-dozed all the competently designed systems out of the market).

Well, not to go as far as the AS-400, the early ARM personal computers,
under the native RISC-OS (which, like early Mac OS and Windows had all
processes live in one undifferentiated address space) had page-allocated
stacks. This resulted in a wildly hideous C calling convention, where all
sorts of run-time checks and exceptional conditions were required to chain
all of the (discontiguous) chunks together. I'm not sure that this
particular program would have run on such a system at all, though: I can't
remember whether auto (stack) allocations were limited to less than
page-size objects, or whether there was another mechanism to handle large
objects.

Then there are the JVMs, which historically instantiate "activation
records" on the heap, and let the garbage collector clean up the mess
later. This ensures that references to locals returned to callers don't
wind up hanging. Apparently this is such a performance loser (stacks have
nice reference locality properties that work well with caches) that
recent JVMs do "escape analysis" to catch the hanging-return issue and
allocate everything else on a real stack, as the scope syntax might lead
one to expect. This is relevant in the context of the OP because there
exists at least one C compiler that targets the JVM, so there is
at least one C compiler that uses heap-allocated stacks. I expect that
there are others, though.

Cheers,

--
Andrew

Message has been deleted

Dennis M. O'Connor

Nov 28, 2005, 8:22:15 PM
"Ask" <as...@indiatimes.com> wrote ...

> Hi
>
> On a Pentium machine the following program crashes. I was guessing: does it
> have something to do with the architecture?

Nope. Guess again, in another newsgroup.

Greg Lindahl

Nov 28, 2005, 8:36:15 PM
In article <p734q5w...@verdi.suse.de>,
Andi Kleen <fre...@alancoxonachip.com> wrote:

>Modern 32-bit Linux and I believe some other Unixes solve
>the problem of the heap bumping into mmap by placing
>the mmaps down from the bottom of the stack instead of up
>from an arbitrary boundary.

This is a new definition of "solve" I must not be aware of ;-)

>Disadvantage: ulimit -s has to be set properly if you
>want more main program stack than the default (8MB normally)

Many machines running numeric programs have a default of unlimited
stacks, either because the admin or the user upped it. I guess you
mostly run C programs that don't use automatic arrays for temporary
variables.
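Upping the limit is a one-liner in the shell; a hedged sketch (units are KB in most shells, and whether "unlimited" is permitted depends on the hard limit the admin set):

```shell
# Inspect, and where the hard limit permits, raise the soft stack limit
# before running a program with large automatic arrays. The 8 MB default
# mentioned above is common on Linux but not universal.
ulimit -s                                # print the current soft limit
ulimit -s unlimited 2>/dev/null || true  # lift it if the hard limit allows
ulimit -s
```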

Whence my comment about OSes which are e-vile.

-- greg

Rob Warnock

Nov 29, 2005, 4:37:54 AM
Andy Glew <andy...@intel.com> wrote:
+---------------

| If I was really nice I would show you a sample memory map. But I'm
| lazy, and am afraid that I would embarass myself. But that's what you
| need to look for - the so-called memory map, the virtual address map,
| of a typical process on your OS.
|
| I expect somebody will post the current address maps on their favorite
| OS, and I'll learn something.
+---------------

On 32-bit FreeBSD:

$ cat /proc/curproc/map # "cat" itself
0x8048000 0x8057000 15 16 0xdf4cc4ac r-x 2 1 0x0 COW NC vnode
0x8057000 0x8058000 1 0 0xdf6c7730 rw- 1 0 0x2180 COW NNC vnode
0x8058000 0x805a000 2 0 0xdf6d0508 rw- 2 0 0x2180 NCOW NNC default
0x805a000 0x805c000 2 0 0xdf6d0508 rwx 2 0 0x2180 NCOW NNC default
0x28057000 0x28058000 1 0 0xdf4ca114 rwx 1 0 0x2180 NCOW NNC default
0xbfbe0000 0xbfc00000 2 0 0xdf6d32e0 rwx 1 0 0x2180 NCOW NNC default

$ cat /proc/232/map # a Common Lisp process (CMUCL)
0x8048000 0x805e000 22 0 0xdf4cc05c r-x 2 0 0x0 COW NC vnode
0x805e000 0x8069000 7 0 0xdf4ca33c rw- 2 0 0x2180 NCOW NNC default
0x8069000 0x81ed000 388 0 0xdf4ca33c rwx 2 0 0x2180 NCOW NNC default
0x10000000 0x1130f000 2546 0 0xdf4fdbdc rwx 1 0 0x2180 COW NNC vnode
0x1130f000 0x1ffff000 0 0 0xdf4ed114 rwx 1 0 0x2000 NCOW NNC default
0x2805e000 0x28071000 19 0 0xc033250c r-x 101 47 0x0 COW NC vnode
0x28071000 0x28072000 1 0 0xdf4ed398 rw- 1 0 0x2180 COW NNC vnode
0x28072000 0x28074000 2 0 0xdf4ed5c0 rw- 2 0 0x2180 NCOW NNC default
0x28074000 0x2807c000 7 0 0xdf4ed5c0 rwx 2 0 0x2180 NCOW NNC default
0x2807c000 0x28092000 22 0 0xc0334000 r-x 20 12 0x0 COW NC vnode
0x28092000 0x28093000 1 0 0xdf4d7c94 r-x 1 0 0x2180 COW NNC vnode
0x28093000 0x28097000 4 0 0xdf4ec78c rwx 1 0 0x2180 COW NNC vnode
0x28097000 0x28118000 84 0 0xc0332848 r-x 148 94 0x0 COW NC vnode
0x28118000 0x28119000 1 0 0xdf4ed4ac r-x 1 0 0x2180 COW NNC vnode
0x28119000 0x2811e000 5 0 0xdf4ccac8 rwx 1 0 0x2180 COW NNC vnode
0x2811e000 0x2813e000 15 0 0xdf4ed564 rwx 1 0 0x2180 NCOW NNC default
0x28f00000 0x291b3000 691 0 0xdf4ed05c rwx 1 0 0x2180 COW NNC vnode
0x291b3000 0x37fff000 0 0 0xdf4ed508 rwx 1 0 0x2180 NCOW NNC default
0x38000000 0x3ffff000 1 0 0xdf4fa958 rwx 1 0 0x2180 NCOW NNC default
0x40000000 0x40010000 0 0 0xdf4ecac8 r-x 2 0 0x2180 NCOW NNC default
0x40010000 0x47fe2000 11 0 0xdf4ecac8 rwx 2 0 0x2180 NCOW NNC default
0x48000000 0x48001000 1 0 0xdf4ccda8 rwx 1 0 0x2180 COW NNC vnode
0x48001000 0x68000000 3330 0 0xdf4ed0b8 rwx 1 0 0x2180 NCOW NNC default
0xb0000000 0xb0100000 1 0 0xdf4fae60 rwx 1 0 0x2180 NCOW NNC default
0xbfbe0000 0xbfc00000 3 0 0xdf4ece60 rwx 1 0 0x2180 NCOW NNC default
$

On a 64-bit Linux:

$ cat /proc/self/maps # "cat" itself
0000000000400000-0000000000404000 r-xp 0000000000000000 09:01 65643 /bin/cat
0000000000504000-0000000000505000 rw-p 0000000000004000 09:01 65643 /bin/cat
0000000000505000-0000000000526000 rwxp 0000000000000000 00:00 0
0000002a95556000-0000002a9566b000 r-xp 0000000000000000 09:01 524380 /lib64/ld-2.3.2.so
0000002a9566b000-0000002a9566c000 rw-p 0000000000015000 09:01 524380 /lib64/ld-2.3.2.so
0000002a9566c000-0000002a9566d000 rw-p 0000000000000000 00:00 0
0000002a95687000-0000002a957c2000 r-xp 0000000000000000 09:01 196701 /lib64/tls/libc-2.3.2.so
0000002a957c2000-0000002a95887000 ---p 000000000013b000 09:01 196701 /lib64/tls/libc-2.3.2.so
0000002a95887000-0000002a958c7000 rw-p 0000000000100000 09:01 196701 /lib64/tls/libc-2.3.2.so
0000002a958c7000-0000002a958cc000 rw-p 0000000000000000 00:00 0
0000002a958cc000-0000002a97775000 r--p 0000000000000000 09:01 491604 /usr/lib/locale/locale-archive
0000007fbfee9000-0000007fc0000000 rw-p fffffffffff23000 00:00 0
$


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Eric P.

Nov 29, 2005, 8:52:23 AM
Greg Lindahl wrote:
>
> In article <438B6648...@sympaticoREMOVE.ca>,
> Eric P. <eric_p...@sympaticoREMOVE.ca> wrote:
>
> >However with multi-threading, I think the only solutions are the
> >between-a-rock-and-a-hard-place types, at least the ones that I
> >have heard of.
>
> The common Unix solution is to treat the initial stack specially, so
> non-threaded programs behave fairly sanely. And that's the case which
> was at question here.

That does seem the obviously correct approach, but for some bizarre
reason Windows chose not to do that. But there are also many other
problems with how Windows manages thread stacks.

> > There is no place to linearly grow dynamic stacks,
> > and if you reserve too much space for each thread stack, you can
> > run out of space and limit the number of threads.
>
> Apparently you aren't a convert to 64-bit computing.

> <...>

Bludgeon the problem with bits eh? Simple but effective.

Eric

Bernd Paysan

Nov 29, 2005, 10:23:47 AM
Eric P. wrote:
> So you pick a number and Windows picked 1 MB as their default.

Is it one megabyte? IIRC, Windows doesn't let you grow the stack by an
arbitrary amount in one step, but if you grow it step by step you can grow it
much larger than the 1 MB limit above. The stack could grow only by a rather
small amount at a time (I ended up growing it in pages).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Dave Hansen

Nov 29, 2005, 10:47:34 AM
On Tue, 29 Nov 2005 16:23:47 +0100 in comp.arch, Bernd Paysan
<bernd....@gmx.de> wrote:

>Eric P. wrote:
>> So you pick a number and Windows picked 1 MB as their default.
>
>Is it one megabyte? IIRC, Windows doesn't allow to grow the stack by some
>arbitrary constant, but if you grow it step by step, you could grow it much
>larger than the 1 MB limit above. The stack could grow only by a rather
>small amount at a time (I ended up to grow in pages).

AIUI, the 1 MB is the default address space allocation, but at thread
startup, only 1 page is committed. As the stack grows, additional
pages are committed one at a time, but the stack is not allowed to
grow beyond the allocated (e.g., 1 MB) limit.

The 1 MB limit can be overridden, either at link time or run time.
Similarly, the stack commitment default can also be overridden, in
multiples of pages.

This was true of NT4 about 6 years ago. Subsequent versions of
Windoze may be different.

Regards,
-=Dave

--
Change is inevitable, progress is not.

dg...@barnowl.research.intel-research.net

Nov 29, 2005, 11:58:37 AM

Andrew Reilly <andrew-...@areilly.bpc-users.org> writes:
> Then there are the JVMs, which historically instantiate "activation
> records" on the heap, and let the garbage collector clean up the mess
> later. This ensures that references to locals returned to callers don't
> wind up hanging.

I have my doubts that this would be the reason. I hadn't noticed any
facility in Java or the JVM for returning references to locals... And I
can't see any reason why a JVM would need to heap allocate activation
records (it's not like it's going to make implementation easier, AFAICT).

> Apparently this is such a performance loser (stacks have
> nice refrerence locality properties that work well with caches) that
> recent JVMs do "escape analysis" to catch the hanging return issue and
> allocate everything else on a real stack, as the scope syntax might lead
> one to expect.

This sounds like a misunderstanding of an optimisation to stack-allocate
objects which provably don't escape the function that allocates them (or,
some "obvious" enclosing function). A typical example in Java is
Enumeration objects.

> This is relevant in the context of the OP because there
> exists at least one C compiler that targets the JVM, so therefore there is
> at least one C compiler that uses heap allocated stacks. I expect that
> there are others, though.

Can you name a JVM with heap-allocated activation records?

--
David Gay
dg...@acm.org

Eric P.

Nov 29, 2005, 2:39:20 PM

The Reserve value is fixed at link time. Only the Commit value can
be overridden, by specifying it to the CreateThread function.

Eric


Eric P.

Nov 29, 2005, 2:45:58 PM
Bernd Paysan wrote:
>
> Eric P. wrote:
> > So you pick a number and Windows picked 1 MB as their default.
>
> Is it one megabyte? IIRC, Windows doesn't allow to grow the stack by some
> arbitrary constant, but if you grow it step by step, you could grow it much
> larger than the 1 MB limit above. The stack could grow only by a rather
> small amount at a time (I ended up to grow in pages).

It has been a while since I last looked.
This might have been updated since, though I doubt it,
since it has remained unchanged for over 10 years.
In a nutshell...

In Win32, the stack has two values, the Reserved and Commit sizes.
The Reserved size is the number of address bytes allocated
to each thread. The Commit size is the number that are actually
mapped to memory/backing store. Reserve can be larger than Commit
to allow for future expansion without diddling those PTEs.

The stack Reserved value is specified at link time and stored in the
EXE file. The default linker value is 1 MB but can be changed with
the -stack linker option.

The same Reserved value is used for ALL threads created by the
application, even ones created by DLL's that the app knows nothing
about!!! That means if you change the value at link time, then
you have no idea what the consequences will be to library threads!!!

The Commit size is specified as an argument to the CreateThread
function. It specifies the number of page-file-backed demand-zero pages.
This is really pretty useless, since it is the Commit value that
automatically grows toward the Reserve amount.

When a thread is created, the stack is allocated and managed,
not by the kernel, but by the Win32 user mode code. The stack top
and low water mark values are recorded in fields of a user mode data
structure called the Thread Environment Block (TEB).

A stack is allocated covering a Reserved-size address range, and a set
of one or more no-access guard pages is located at the top and bottom of
the stack. A guard page is a special page type, indicated by a bit in the
PTE, that triggers a guard-page fault when touched and resets the guard
bit (a one-shot trigger).
The Commit pages are bound to page file pages (demand zero).
I think the reserved but non-committed pages must also be guards.

If a thread touches the guard page, it gets a guard-page exception
which the kernel(?) handles by extending the stack down and updating
the low-water mark in the TEB. If the thread touches the guard page
at the stack bottom, the documentation CLAIMS it will try to extend
the stack range, but I have never seen it do so. If it cannot,
the thread gets a Stack Overflow exception.

BUT... since the TEB does not contain the lower stack bound, user
mode cannot check it at allocation time. It must rely on touching
the bottom guard page. But where is it? Only the kernel knows.
So it must touch the pages one by one to check.

When the C compiler or its runtime library wants to allocate a
stack block larger than 4 KB, it performs a polling loop touching
each page, one by one, between the current ESP and the potential
new ESP, thereby causing a flood of guard-page faults.
Each fault extends the stack by just one page.
It could at least check the low-water mark to avoid the
polling loop when possible, but doesn't. Go figure.

Eric

ram...@bigpond.net.au

Nov 29, 2005, 9:55:46 PM

On 32 bit Linux

08048000-080d1000 r-xp 00000000 16:03 5097658 /bin/zsh
080d1000-080d5000 rw-p 00088000 16:03 5097658 /bin/zsh
080d5000-08190000 rw-p 080d5000 00:00 0 [heap]
b7cc0000-b7cd5000 rw-p b7cc0000 00:00 0
b7cd5000-b7cdb000 r-xp 00000000 16:03 5101446 /usr/lib/zsh/4.2.5/zsh/zutil.so
b7cdb000-b7cdc000 rw-p 00005000 16:03 5101446 /usr/lib/zsh/4.2.5/zsh/zutil.so
b7cdc000-b7cf8000 r-xp 00000000 16:03 5101459 /usr/lib/zsh/4.2.5/zsh/complete.so
b7cf8000-b7cf9000 rw-p 0001c000 16:03 5101459 /usr/lib/zsh/4.2.5/zsh/complete.so
b7cf9000-b7d23000 r-xp 00000000 16:03 5101465 /usr/lib/zsh/4.2.5/zsh/zle.so
b7d23000-b7d28000 rw-p 0002a000 16:03 5101465 /usr/lib/zsh/4.2.5/zsh/zle.so
b7d32000-b7d35000 rw-p b7d32000 00:00 0
b7d35000-b7d43000 r-xp 00000000 16:03 5101439 /usr/lib/zsh/4.2.5/zsh/computil.so
b7d43000-b7d44000 rw-p 0000e000 16:03 5101439 /usr/lib/zsh/4.2.5/zsh/computil.so
b7d45000-b7d48000 rw-p b7d45000 00:00 0
b7d48000-b7d50000 r-xp 00000000 16:03 8574043 /lib/libnss_files-2.3.5.so
b7d50000-b7d52000 rw-p 00007000 16:03 8574043 /lib/libnss_files-2.3.5.so
b7d52000-b7d5a000 r-xp 00000000 16:03 8574185 /lib/libnss_nis-2.3.5.so
b7d5a000-b7d5c000 rw-p 00007000 16:03 8574185 /lib/libnss_nis-2.3.5.so
b7d5c000-b7d63000 r-xp 00000000 16:03 8575562 /lib/libnss_compat-2.3.5.so
b7d63000-b7d65000 rw-p 00006000 16:03 8575562 /lib/libnss_compat-2.3.5.so
b7d65000-b7d67000 rw-p b7d65000 00:00 0
b7d67000-b7e75000 r-xp 00000000 16:03 8573962 /lib/tls/libc-2.3.5.so
b7e75000-b7e76000 r--p 0010e000 16:03 8573962 /lib/tls/libc-2.3.5.so
b7e76000-b7e79000 rw-p 0010f000 16:03 8573962 /lib/tls/libc-2.3.5.so
b7e79000-b7e7b000 rw-p b7e79000 00:00 0
b7e7b000-b7e9c000 r-xp 00000000 16:03 8571549 /lib/tls/libm-2.3.5.so
b7e9c000-b7e9e000 rw-p 00020000 16:03 8571549 /lib/tls/libm-2.3.5.so
b7e9e000-b7ed8000 r-xp 00000000 16:03 8628335 /lib/libncurses.so.5.5
b7ed8000-b7ee0000 rw-p 00039000 16:03 8628335 /lib/libncurses.so.5.5
b7ee0000-b7ee1000 rw-p b7ee0000 00:00 0
b7ee1000-b7ef2000 r-xp 00000000 16:03 8576480 /lib/libnsl-2.3.5.so
b7ef2000-b7ef4000 rw-p 00010000 16:03 8576480 /lib/libnsl-2.3.5.so
b7ef4000-b7ef6000 rw-p b7ef4000 00:00 0
b7ef6000-b7ef8000 r-xp 00000000 16:03 8575629 /lib/libdl-2.3.5.so
b7ef8000-b7efa000 rw-p 00001000 16:03 8575629 /lib/libdl-2.3.5.so
b7efa000-b7efb000 rw-p b7efa000 00:00 0
b7efe000-b7f04000 r-xp 00000000 16:03 5101456 /usr/lib/zsh/4.2.5/zsh/parameter.so
b7f04000-b7f05000 rw-p 00006000 16:03 5101456 /usr/lib/zsh/4.2.5/zsh/parameter.so
b7f05000-b7f12000 r-xp 00000000 16:03 5101448 /usr/lib/zsh/4.2.5/zsh/compctl.so
b7f12000-b7f13000 rw-p 0000d000 16:03 5101448 /usr/lib/zsh/4.2.5/zsh/compctl.so
b7f13000-b7f17000 rw-p b7f13000 00:00 0
b7f17000-b7f2c000 r-xp 00000000 16:03 8576515 /lib/ld-2.3.5.so
b7f2c000-b7f2d000 r--p 00014000 16:03 8576515 /lib/ld-2.3.5.so
b7f2d000-b7f2e000 rw-p 00015000 16:03 8576515 /lib/ld-2.3.5.so
bfe17000-bfe2c000 rw-p bfe17000 00:00 0 [stack]
ffffe000-fffff000 ---p 00000000 00:00 0 [vdso]

--

Seek simplicity and mistrust it.
Alfred Whitehead

A witty saying proves nothing.
Voltaire

Andrew Reilly

Nov 29, 2005, 11:06:46 PM
On Tue, 29 Nov 2005 09:58:37 -0800, dgay wrote:
> Can you name a JVM with heap-allocated activation records?

I thought that I'd read about one, but now can't find one. It's pretty
easy to find references to Smalltalk and lisp/scheme compilers that do
this, but not JVM ones.

Sorry for the noise. Chalk it up to crossed neurones.

Cheers,

--
Andrew

Andy Glew

Nov 30, 2005, 12:05:19 PM
> > Andrew Reilly <andrew-...@areilly.bpc-users.org> writes:
> > > Then there are the JVMs, which historically instantiate "activation
> > > records" on the heap, and let the garbage collector clean up the mess
> > > later. This ensures that references to locals returned to callers don't
> > > wind up hanging.

> On Tue, 29 Nov 2005 09:58:37 -0800, dgay wrote:

Andrew is not completely wrong. In Java programming you do

public class C {
    public void foo() {
        MyClass var = new MyClass();
        ...
    }
}

whereas in C++ (the assembly of OOP) you can do either:

class C {
public:
    void foo() {
        MyClass var;
        ...
    }
};

or

class C {
public:
    void foo() {
        MyClass* var = new MyClass();
        ...
    }
};


I.e. in C++ you can choose to allocate the object on the heap or on the stack.
In either case the reference or pointer to the object is on the stack.

Whereas in Java the reference is on the stack, but the object has to behave
as if it is on the heap.


It is a big deal and a highly touted feature for a JVM to be able to
allocate some such objects on the stack rather than the heap.

Either it uses static analysis to do so, or it allocates on the
stack and detects when the object should be moved to the heap.

Jeff Kenton

Nov 30, 2005, 6:18:05 PM

You're overrunning the stack limit. Try it this way, with the array allocated
statically instead of on the stack:

int ar[530000];
int main()
{
    return 0;
}

Paolo Molaro

Dec 1, 2005, 4:39:12 AM
On 2005-11-30, Andy Glew <andy...@intel.com> wrote:
> Whereas in Java the reference s on the stack, but the object has to behave
> as if it is on the heap.
>
> It is a big deal and highly touted feature for a JVM to be able to
> allocate some such objects on the stack rather than the heap.
>
> Either it uses static analysis to do so.
>
> Or it allocates on the stack, and detects when it should be moved to the heap.

It would be great if the next-gen x86 (and other architectures) CPUs
could implement some hardware support to help improve this sort of
optimization. The hardware support could likely be built to also improve
pauseless garbage collectors.
The key is to provide hardware support for read and write barriers (as they
are named in GC terminology, not in the memory-coherence sense).
http://www.azulsystems.com apparently already has an implementation of
these concepts.

lupus

--
-----------------------------------------------------------------
lu...@debian.org debian/rules
lu...@ximian.com Monkeys do it better
