
c++ problem with dynamic and static object


aotto1968

Jul 24, 2020, 3:01:53 PM
Hi,

a C++ class can be created on the "stack" or on the "heap"

class A {
int test;
}

// stack
A myA();

// heap
A* myA = new A();


Question:

it is possible (with gcc) to find out if a instance was created on a
"stack" or on a "heap" *


;-)

Mr Flibble

Jul 24, 2020, 3:29:16 PM
On 24/07/2020 20:01, aotto1968 wrote:
> Hi,
>
> a C++ class can be created on the "stack" or on the "heap"

I think you mean objects not classes.

>
> class A {
>   int test;
> }
>
> // stack
> A myA();

This is a function declaration, not an object definition.

>
> // heap
> A* myA = new A();
>
>
> Question:
>
> it is possible (with gcc) to find out if a instance was created on a "stack" or on a "heap" *

Only by comparing the address of the object with the addresses of sentinel objects in automatic storage on either side of the object of interest, but doing things like this means you are Doing It Wrong (TM). Why do you care whether an object is in automatic storage or in the free store?
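In rough code, the sentinel heuristic described above might look like this (a sketch only: the function name and the 64 KB window are my own invention, comparing unrelated pointers is not portable C++, and it assumes a contiguous stack):

```cpp
#include <cstdint>
#include <utility>

// Heuristic: if an address lies near the addresses of two automatic
// (stack) sentinels, it is probably on the stack too. Strictly, relational
// comparison of unrelated pointers is not well-defined, so we compare
// through uintptr_t. A debugging aid at best.
bool probably_on_stack(const void* p) {
    int sentinel_a = 0;                   // automatic storage
    int sentinel_b = 0;                   // automatic storage
    auto lo = reinterpret_cast<std::uintptr_t>(&sentinel_a);
    auto hi = reinterpret_cast<std::uintptr_t>(&sentinel_b);
    if (lo > hi) std::swap(lo, hi);
    // Widen the window: nearby stack frames should fall within a few KB,
    // while heap blocks are typically very far away.
    const std::uintptr_t slack = 64 * 1024;
    auto v = reinterpret_cast<std::uintptr_t>(p);
    return v + slack >= lo && v <= hi + slack;
}
```

As the posts below point out, this breaks down as soon as thread stacks are themselves heap-allocated.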

/Flibble

--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin

“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who doesn’t believe in any God the most. Oh, no..wait.. that never happens.” – Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Barry Schwarz

Jul 24, 2020, 3:32:56 PM
On Fri, 24 Jul 2020 21:01:42 +0200, aotto1968 <aott...@t-online.de>
wrote:
If you tell us why you need to know, we might be able to offer
something meaningful.

In your first example, myA is an object of type A. In your second
example, myA is a pointer to an object of type A. When you write
code, you should know the types of the variables you are using.

If A is not trivial, you might consider evaluating sizeof(myA). For a
pointer, the result should be 4 or 8. For an object, it should be
greater.

--
Remove del for email

Öö Tiib

Jul 24, 2020, 3:41:49 PM
On Friday, 24 July 2020 22:32:56 UTC+3, Barry Schwarz wrote:
> On Fri, 24 Jul 2020 21:01:42 +0200, aotto1968 <aott...@t-online.de>
> wrote:
>
> >Hi,
> >
> >a C++ class can be created on the "stack" or on the "heap"
> >
> >class A {
> > int test;
> >}
> >
> >// stack
> >A myA();
> >
> >// heap
> >A* myA = new A();
> >
> >
> >Question:
> >
> >it is possible (with gcc) to find out if a instance was created on a
> >"stack" or on a "heap" *
>
> If you tell us why you need to know, we might be able to offer
> something meaningful.
>
> In your first example, myA is an object of type A.

No, it is clearly a function without parameters returning A by value.

Keith Thompson

Jul 24, 2020, 4:08:22 PM
Barry Schwarz <schw...@delq.com> writes:
> On Fri, 24 Jul 2020 21:01:42 +0200, aotto1968 <aott...@t-online.de>
> wrote:
>>a C++ class can be created on the "stack" or on the "heap"

An *object* can be created on the stack or on the heap. (Incidentally,
the standard doesn't use those terms.)

>>class A {
>> int test;
>>}
>>
>>// stack
>>A myA();

This declares myA as a function. Drop the parentheses.

>>// heap
>>A* myA = new A();
>>
>>
>>Question:
>>
>>it is possible (with gcc) to find out if a instance was created on a
>>"stack" or on a "heap" *
>
> If you tell us why you need to know, we might be able to offer
> something meaningful.
>
> In your first example, myA is an object of type A. In your second
> example, myA is a pointer to an object of type A. When you write
> code, you should know the types of the variables you are using.
>
> If A is not trivial, you might consider evaluating sizeof(myA). For a
> pointer, the result should be 4 or 8. For an object, it should be
> greater.

There's no reason sizeof(myA) couldn't be the same as the size of a
pointer. sizeof is not, except in some restricted cases, a good way to
distinguish among types.

As far as I know, there's no reliable way to determine whether a given
address is the address of an object allocated on the stack or on the
heap. There may be implementation-specific methods.

It's typically not a good idea to (try to) write code that depends on
this information. I suspect there's an underlying problem that has a
better solution.
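As an illustration of such an implementation-specific method: on Linux one could parse /proc/self/maps and look for the region the kernel labels "[stack]". A sketch only (the function name is mine, it covers just the main thread's stack, and nothing portable should depend on it):

```cpp
#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>

// Linux-only: test whether an address falls inside the mapping that
// /proc/self/maps labels "[stack]" (the main thread's stack). Thread
// stacks and replaced allocators are not covered.
bool in_main_stack(const void* p) {
    std::ifstream maps("/proc/self/maps");
    std::string line;
    auto v = reinterpret_cast<std::uintptr_t>(p);
    while (std::getline(maps, line)) {
        if (line.find("[stack]") == std::string::npos) continue;
        std::uintptr_t lo = 0, hi = 0;
        char dash = 0;
        std::istringstream iss(line);
        iss >> std::hex >> lo >> dash >> hi;   // "lo-hi perms ..."
        return v >= lo && v < hi;
    }
    return false;
}
```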

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */

Öö Tiib

Jul 24, 2020, 4:11:30 PM
On Friday, 24 July 2020 22:01:53 UTC+3, aotto1968 wrote:
> Hi,
>
> a C++ class can be created on the "stack" or on the "heap"

Life is not even near that simple in C++. There is dynamic storage in C++, which the default operator new provides, and it is close to what is often meant by the "global heap". But the user may replace operator new, and then new provides whatever storage the user's version provides.
<https://en.cppreference.com/w/cpp/memory/new/operator_new>

The rest of it is also far more varied than just "the stack". There is static storage, which is global; there is thread-local storage, which is thread-specific; there is unspecified global storage where the objects of thrown exceptions reside; and there is automatic storage, which is certainly not guaranteed to be on a (thread-specific) stack but most often is.

<erasing defective code>

> Question:
>
> it is possible (with gcc) to find out if a instance was created on a
> "stack" or on a "heap" *

This question is unanswerable as posed, because it is based on an oversimplified picture of the kinds of memory in C++ programs.

Vir Campestris

Jul 24, 2020, 4:55:47 PM
On 24/07/2020 20:29, Mr Flibble wrote:
> Only by comparing address of object with the addresses of sentinal
> objects in automatic storage either side of the object of interest but
> doing things like this means you are Doing It Wrong (TM).  Why do you
> care if an object is in automatic storage or in the freestore?

Sentinel objects won't help. There's no reason why the stacks for
threads shouldn't be allocated on the heap - in fact I'd be surprised if
they aren't.

Andy

Paavo Helde

Jul 24, 2020, 5:18:13 PM
24.07.2020 22:01 aotto1968 kirjutas:
> it is possible (with gcc) to find out if a instance was created on a
> "stack" or on a "heap" *

Only heuristically, and only in the "nearby" stack, unless you have recorded this information in the object itself when creating it.

A much more important question is why you need this information. Most probably you are trying to be too clever for no good reason. E.g. there are reasons why a std::bad_weak_ptr exception is thrown from shared_from_this if the original object is not managed by a std::shared_ptr. I suggest not messing with stack/heap detection until you have understood those reasons.

Juha Nieminen

Jul 24, 2020, 6:13:01 PM
Keith Thompson <Keith.S.T...@gmail.com> wrote:
> It's typically not a good idea to (try to) write code that depends on
> this information. I suspect there's an underlying problem that has a
> better solution.

Maybe he just asked out of curiosity rather than need.

Keith Thompson

Jul 24, 2020, 6:27:56 PM
A more answerable question, similar to what the OP asked, might be:

Is it possible, given a pointer value, to determine whether the object
it points to has static, thread, automatic, or dynamic storage duration?

I think the answer is basically "no".

For a given implementation, it's likely to be possible to get *some*
information. For example, on my system an object allocated with
"new" has an address that's displayed as "0x5567e9909eb0", while a
local object has an address that's displayed as "0x7ffcaee7786c".
That's probably good enough to distinguish between them *for
debugging purposes*. But making a program's behavior depend on
such a distinction is likely to be a bad idea.
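That kind of address inspection amounts to nothing more than printing the pointers. A throwaway sketch (the function name is mine; the actual values vary per platform and per run):

```cpp
#include <iostream>
#include <ostream>

// Print the addresses of an automatic and a dynamically allocated int.
// On a typical Linux/x86-64 build the two values look very different
// (heap near 0x5..., stack near 0x7ff...). Useful while debugging only;
// no program logic should depend on it.
void show_addresses(std::ostream& os = std::cout) {
    int local = 0;
    int* heaped = new int(0);
    os << "automatic: " << static_cast<void*>(&local) << '\n';
    os << "dynamic:   " << static_cast<void*>(heaped) << '\n';
    delete heaped;
}
```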

Daniel P

Jul 24, 2020, 6:46:11 PM
On Friday, July 24, 2020 at 3:29:16 PM UTC-4, Mr Flibble wrote:
> On 24/07/2020 20:01, aotto1968 wrote:
> >
> > a C++ class can be created on the "stack" or on the "heap"
>
> I think you mean objects not classes.
>
If A is a class, "an A" is widely used as short for "an object of type A" (in C++) or "an instance of class A" (elsewhere). It's a concession to readability with little danger of being misunderstood. Herb Sutter talks about "a std::vector" this way, as does Scott Meyers.

Daniel

Keith Thompson

Jul 24, 2020, 6:50:33 PM
Informally, talking about "a std::vector" to mean "an object of type std::vector" is fine.

Referring to an object as "a class", in my opinion, is not.

Given
std::vector<int> v;
std::vector is a class.
std::vector is a type.
v is a std::vector.
v is an object.
v is not a class.

Daniel P

Jul 24, 2020, 7:01:30 PM
On Friday, July 24, 2020 at 6:50:33 PM UTC-4, Keith Thompson wrote:
> Daniel P writes:
>
> Informally Talking about "a std::vector" to mean "an object of type
> std::vector" is fine.
>
> Referring to an object as "a class", in my opinion, is not.
>
> Given
> std::vector<int> v;
> std::vector is a class.
> std::vector is a type.
> v is a std::vector.
> v is an object.
> v is not a class.
>
Point taken :-)

Daniel

Öö Tiib

Jul 24, 2020, 8:04:53 PM
Basically I agree with your "no". For example, stacks:

It is commonly possible to inspect the sizes and locations of the stacks of a program's threads in a platform-specific manner. That makes it also possible to find out whether a pointer points into one of those stacks or not. pthreads or boost::thread let you do most of it portably. So in practice it is often possible.

But as the "automatic storage" of C++ has to be implementable on esoteric systems without stacks whatsoever, and so can have whatever unimaginable layout, in theory it is impossible. ;)
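The platform-specific inspection mentioned above can be sketched with glibc's pthread_getattr_np (a GNU extension, not POSIX; the function name below is mine, and on non-glibc systems this won't compile):

```cpp
#include <pthread.h>
#include <cstddef>
#include <cstdint>

// glibc/Linux sketch: query the calling thread's stack extent via
// pthread_getattr_np + pthread_attr_getstack, then test whether an
// address falls inside it. Under glibc this works for the main thread too.
bool in_this_threads_stack(const void* p) {
    pthread_attr_t attr;
    if (pthread_getattr_np(pthread_self(), &attr) != 0) return false;
    void* stack_addr = nullptr;      // lowest address of the stack
    std::size_t stack_size = 0;
    pthread_attr_getstack(&attr, &stack_addr, &stack_size);
    pthread_attr_destroy(&attr);
    auto lo = reinterpret_cast<std::uintptr_t>(stack_addr);
    auto v  = reinterpret_cast<std::uintptr_t>(p);
    return v >= lo && v < lo + stack_size;
}
```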


Keith Thompson

Jul 24, 2020, 9:10:36 PM
Öö Tiib <oot...@hot.ee> writes:
[...]
> But as "automatic storage" of C++ has to be implementable on
> esoteric system without stacks whatsoever and so can have
> whatever unimaginable layout then in theory it is
> impossible. ;)

One source of confusion is that there are (at least) two distinct
meanings of "stack" (and the standard doesn't use either of them,
though it does refer to "stack unwinding").

One is a contiguous region of memory that grows and shrinks, with
new memory allocated and deallocated only at the "top", typically
managed by a "stack pointer" which may or may not be a dedicated
CPU register. The direction in which a stack grows is unspecified.
Most C++ implementations have a "stack" in this sense, but that's
an implementation detail.

Another is a more generic term referring to a data structure with
stack-like last-in/first-out semantics. Objects with automatic
storage duration are allocated and deallocated in a stack-like
manner. Most implementations use a "stack" (in the first sense)
to implement this stack-like behavior. Others might, for example,
allocate activation records for function invocations dynamically
(on the "heap"), resulting in no consistent relationship between
addresses of objects in nested invocations. (And I'm ignoring
threads here.)

People who talk about "the stack" usually refer to the first sense
of the word.

Code that assumes that there's such a thing as "the stack" is likely
to be (a) non-portable to exotic platforms and (b) unnecessarily
low-level.

Scott Newman

Jul 24, 2020, 11:00:22 PM
Use the is_heap-operator.

Mr Flibble

Jul 24, 2020, 11:31:56 PM
On 25/07/2020 04:00, Scott Newman wrote:
> Use the is_heap-operator.

Fuck. Off.

Scott Newman

Jul 25, 2020, 1:47:33 AM
>> Use the is_heap-operator.

> Fuck. Off.

Why? Because I show the real solutions and you don't?

aotto1968

Jul 25, 2020, 1:54:10 AM
OK -> why do I need this info?

I have a method that is called "Delete()".

And now I need this information to decide whether "myA.Delete()" should do a "delete myA" or just release the internal data and keep the outer shell alive.

mfg

Bonita Montero

Jul 25, 2020, 3:05:46 AM
Maybe something like this would help (only suitable for
the thread calling):

#include <intrin.h>

inline
bool in_our_stack( void *addr )
{
    void *stackBottom, *stackTop;
#if defined _MSC_VER
  #if defined(_M_IX86)
    stackBottom = (void *)__readfsdword( 0x04 );
    stackTop    = (void *)__readfsdword( 0x08 );
  #elif defined(_M_X64)
    stackBottom = (void *)__readgsqword( 0x08 );
    stackTop    = (void *)__readgsqword( 0x10 );
  #else
    #error "unsupported MSC-CPU"
  #endif
#else
  #error "unsupported compiler"
#endif
    return addr >= stackBottom && addr < stackTop;
}

Maybe something similar and equally fast is possible on Linux.

Paavo Helde

Jul 25, 2020, 3:16:20 AM
That's what I thought. Anyway, knowing whether the object is allocated on the heap or on the stack would not tell you whether you need to call 'delete this;' or not. Consider:

class B {
    int x;
    A a;
};

auto b = new B;
b->a.Delete();

Here, an A instance resides on heap, yet calling 'delete this;' in
A::Delete() would be very wrong.

One relatively clean way to accomplish this is to record the needed information when you create the object. For dynamic creation of A objects you should have a special method, e.g. a static A::Create() which calls 'new A' and sets a bit marking that a 'delete' call is needed to destroy it. A more general approach would use a customizable deleter.
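That Create() suggestion might look roughly like this (a sketch with illustrative names; the live_count member exists only to make the behaviour observable):

```cpp
// Record the allocation kind at creation time: dynamic instances are made
// only through A::Create(), which marks the object so that Delete() knows
// whether 'delete this' is required.
class A {
public:
    A() { ++live_count; }
    ~A() { --live_count; }

    // The only sanctioned way to create an A dynamically.
    static A* Create() {
        A* a = new A;
        a->dynamic_ = true;
        return a;
    }

    // Releases internal data; additionally frees the object itself
    // if (and only if) it was made via Create().
    void Delete() {
        release_internals();
        if (dynamic_) delete this;
    }

    static int live_count;   // observable count of living A objects

private:
    void release_internals() { /* free internal data, keep the shell */ }
    bool dynamic_ = false;
};
int A::live_count = 0;
```

The scheme still goes wrong if someone writes 'new A' directly, which is why a customizable deleter (as with std::unique_ptr) is the more general tool.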



Bonita Montero

Jul 25, 2020, 3:57:02 AM
I swapped the pointers, now it's right:

#include <intrin.h>

inline
bool in_our_stack( void *addr )
{
    void *stackBottom, *stackTop;
#if defined _MSC_VER
  #if defined(_M_IX86)
    stackBottom = (void *)__readfsdword( 0x08 );
    stackTop    = (void *)__readfsdword( 0x04 );
  #elif defined(_M_X64)
    stackBottom = (void *)__readgsqword( 0x10 );
    stackTop    = (void *)__readgsqword( 0x08 );
  #else
    #error "unsupported MSC-CPU"
  #endif
#else
  #error "unsupported compiler"
#endif
    return addr >= stackBottom && addr < stackTop;
}

#include <iostream>

using namespace std;

int main()
{
    int stacked;
    int *heaped = new int;
    cout << in_our_stack( &stacked ) << endl;
    cout << in_our_stack( heaped ) << endl;
}

David Brown

Jul 25, 2020, 6:03:01 AM
I think it is fair to say that if your code will ever run on an exotic
platform without a standard data stack, you will know about it - your
code will be specifically for that platform. But your second reason is
somewhat valid.

For systems that don't have a normal stack, I think there are perhaps
three main categories. The first is a few ancient systems designed from
before processors settled down to the modern standard form (such as
systems with ones' complement arithmetic, 36-bit ints, and that kind of
thing). You can be confident that your C++ code will not be running on
these systems unless you are employed specifically to write code for
them - people who have such machines don't run new or unknown code on them.

The second is for very limited microcontrollers (like 8051) where normal
stack usage is very inefficient or non-existent (such as some AVR Tiny's
and the smallest PIC microcontrollers). Again, you know you are
programming for these - and it is highly unlikely you are using C++ on them.

The third is for esoteric or specialised processors that are sometimes
used in ASICs or FPGAs, or as coprocessors in electronics. These might
have no stack - or might have more than one. Again, you know you are
using these, and again C++ is very unlikely.


My conclusion is that when you code in C++, you can assume you have a
"stack" in the conventional sense, because you'd know if you didn't have
one. And it is useful to know about it, because it is a lot more
efficient to put objects on the stack than on the heap. But the
/details/ shouldn't matter - such as whether the object is actually on
the stack, or held in registers, or optimised away fully or partially.
And I can think of no good reason why one might want to know if a
particular object is on the stack or on the heap in a context where it
is not obvious (such as in the function that defines the object).

Mr Flibble

Jul 25, 2020, 1:41:54 PM
Troll, fuck off.

Scott Newman

Jul 25, 2020, 1:56:16 PM
>>>> Use the is_heap-operator.

>>> Fuck. Off.

>> Why ? Because I show the real solutions and you don't ?

> Troll,  fuck off.

You are trolling, not I.

Mr Flibble

Jul 25, 2020, 2:18:36 PM
The OP is talking about the heap as in the free store, not the heap as in the data structure. If you are not trolling then you are clueless, as you don't know the difference between the two kinds of heap; but I suspect that you do know the difference and you are trolling. I am basing my opinion of you being a troll not just on these replies but on your replies to other threads in this newsgroup. Troll, fuck off.

Scott Newman

Jul 25, 2020, 2:23:52 PM
> The OP is talking about the heap as in the freestore not as in the heap
> as in the data structure. If you are not trolling then you are clueless
> as you don't know the difference between the two types of heaps but I
> suspect that you do know the difference and you are trolling.  I am
> basing my opinion of you being a troll not just on these replies but
> on your replies to other threads in this newsgroup. Troll, fuck off.

You apparently don't know the difference between stack storage and heap storage!!!

Mr Flibble

Jul 25, 2020, 3:11:43 PM
It is called automatic storage and free store. Now, troll, fuck off.

Keith Thompson

Jul 25, 2020, 6:31:47 PM
You're using "standard" here to mean "conventional". Of course the
language standard says nothing about this.

> For systems that don't have a normal stack, I think there are perhaps
> three main categories. The first is a few ancient systems designed from
> before processors settled down to the modern standard form (such as
> systems with ones' complement arithmetic, 36-bit ints, and that kind of
> thing). You can be confident that your C++ code will not be running on
> these systems unless you are employed specifically to write code for
> them - people who have such machines don't run new or unknown code on them.
>
> The second is for very limited microcontrollers (like 8051) where normal
> stack usage is very inefficient or non-existent (such as some AVR Tiny's
> and the smallest PIC microcontrollers). Again, you know you are
> programming for these - and it is highly unlikely you are using C++ on them.
>
> The third is for esoteric or specialised processors that are sometimes
> used in ASICs or FPGAs, or as coprocessors in electronics. These might
> have no stack - or might have more than one. Again, you know you are
> using these, and again C++ is very unlikely.

As I understand it, there are still mainframe systems that
allocate function call activation records on a heap. When you
call a function, space for its local variables is allocated
by the equivalent of malloc or new, and returning from a
function deallocates space by the equivalent of free or delete.
I wouldn't bet against such a mechanism becoming popular again in
the future. C++ code that doesn't go out of its way to depend on
implementation-specific details shouldn't even notice the difference.

For the second and third, if the system is unable to allocate
space for automatic objects, then the system is non-conforming
(or has absurdly small capacity limitations). Of course you can
usually assume that your C++ implementation is conforming.
Sometimes you might need to work with a subset implementation and
work around missing features. (I suspect that C subsets are more
common than C++ subsets for such systems.)

> My conclusion is that when you code in C++, you can assume you have a
> "stack" in the conventional sense, because you'd know if you didn't have
> one. And it is useful to know about it, because it is a lot more
> efficient to put objects on the stack than on the heap. But the
> /details/ shouldn't matter - such as whether the object is actually on
> the stack, or held in registers, or optimised away fully or partially.
> And I can think of no good reason why one might want to know if a
> particular object is on the stack or on the heap in a context where it
> is not obvious (such as in the function that defines the object).

Sure, it's probably reasonable to assume that allocation and
deallocation is more efficient for objects with automatic storage
duration than for objects allocated via new/malloc. If your code
runs on a system where that isn't true, but the system is still
conforming, your program won't break.

The kind of assumption I was thinking of is, given that x0, x1,
and x2 are objects defined in nested function calls, assuming that
either (&x0 < &x1 < &x2) or (&x0 > &x1 > &x2) (that's pseudo-code;
"<" doesn't chain that way), and that their addresses don't differ
by a whole lot (maybe a few kilobytes for typical functions).
That assumption is going to be valid on a system with a conventional
stack, and invalid on a system that allocates function call
activation frames on a heap. But even if you're certain that your
code will only run on conventional systems, I can't think of any
good reason to make your code rely on those assumptions.

On the other hand, if you're examining object addresses in a
debugger, it's perfectly reasonable to make use of whatever you
know about the system.

Well written C++ code *shouldn't care* whether it's running on
a system with a contiguous stack or not, as long as the system
correctly supports automatic storage allocation and deallocation
in some manner.

Melzzzzz

Jul 26, 2020, 12:29:16 AM
On 2020-07-25, Keith Thompson <Keith.S.T...@gmail.com> wrote:
> Well written C++ code *shouldn't care* whether it's running on
> a system with a contiguous stack or not, as long as the system
> correctly supports automatic storage allocation and deallocation
> in some manner.

One way or another as long as it does not have deep recursion...


--
current job title: senior software engineer
skills: c++,c,rust,go,nim,haskell...

press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Svi smo svedoci - oko 3 godine intenzivne propagande je dovoljno da jedan narod poludi -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala

Scott Newman

Jul 26, 2020, 2:05:14 AM
>> You're apparently not kowning the difference between stack storage and
>> heap storage !!!

> It is called automatic storage and free store. Now, troll, fuck off.

It can be called whatever; hopefully the one using the words knows what it means - but you don't.

Öö Tiib

Jul 26, 2020, 2:23:21 AM
On Saturday, 25 July 2020 04:10:36 UTC+3, Keith Thompson wrote:
>
> Code that assumes that there's such a thing as "the stack" is likely
> to be (a) non-portable to exotic platforms and (b) unnecessarily
> low-level.

The tool behaves as if its "automatic storage" were endless, and does outright anything when it overflows.
So it is somehow the programmer's responsibility to ensure that there is sufficient automatic storage. How can the programmer honor those obligations without going unnecessarily low-level?
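One low-level check that is at least standardized on POSIX systems is querying the stack-size limit (a sketch; the function name is mine, and how much of the budget recursion actually consumes still has to be reasoned about per platform):

```cpp
#include <sys/resource.h>

// POSIX: return the soft limit on the process's stack size in bytes,
// or -1 on error or when the limit is unlimited.
long long stack_soft_limit() {
    rlimit rl{};
    if (getrlimit(RLIMIT_STACK, &rl) != 0) return -1;
    if (rl.rlim_cur == RLIM_INFINITY) return -1;
    return static_cast<long long>(rl.rlim_cur);
}
```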

aotto1968

Jul 26, 2020, 3:45:56 AM
OK - it seems that "c" and "cP" are using the SAME constructor…

"c" is using the STACK memory and "cP" is using the HEAP memory…

Question: can I overload "new"
-> https://www.geeksforgeeks.org/overloading-new-delete-operator-c/
to set a flag that "cP" was created with new?


int MQ_CDECL main(int argc, MQ_CST argv[]) {
    MqC c;
    auto cP = new MqC();
    cP->ObjLog();
    try {
        MqBufferLC args = {argc, argv};

        c.ConfigSetName("MyClient");
        c.LinkCreate(&args);
        c.SendSTART();
        c.SendEND_AND_WAIT("HLWO");
        std::cout << c.ReadC() << std::endl;

    } catch (const std::exception& e) {

        c.ErrorCatch(e);
    }
    c.Exit();
}

David Brown

Jul 26, 2020, 7:02:38 AM
Yes. I hope that didn't cause any confusion.

>
>> For systems that don't have a normal stack, I think there are perhaps
>> three main categories. The first is a few ancient systems designed from
>> before processors settled down to the modern standard form (such as
>> systems with ones' complement arithmetic, 36-bit ints, and that kind of
>> thing). You can be confident that your C++ code will not be running on
>> these systems unless you are employed specifically to write code for
>> them - people who have such machines don't run new or unknown code on them.
>>
>> The second is for very limited microcontrollers (like 8051) where normal
>> stack usage is very inefficient or non-existent (such as some AVR Tiny's
>> and the smallest PIC microcontrollers). Again, you know you are
>> programming for these - and it is highly unlikely you are using C++ on them.
>>
>> The third is for esoteric or specialised processors that are sometimes
>> used in ASICs or FPGAs, or as coprocessors in electronics. These might
>> have no stack - or might have more than one. Again, you know you are
>> using these, and again C++ is very unlikely.
>
> As I understand it, there are still mainframe systems that
> allocate function call activation records on a heap. When you
> call a function, space for its local variables is allocated
> by the equivalent of malloc or new, and returning from a
> function deallocates space by the equivalent of free or delete.

That's my first group. Such mainframes are ancient systems. There are
still some in use - there are perhaps even a few that are still
produced. But they are used in critical systems - systems that must
work correctly without pause over years or decades. Little completely
/new/ software is made for these old systems - primarily code for them
would be maintaining or expanding existing code. And the people
programming or administrating these machines don't use some code they
found off github - the people writing code that runs on these systems
/know/ they are doing that.

More modern mainframes, based on modern ISAs like Power, x86-64, SPARC,
etc., are a different matter. But on these systems you will
typically have conventional stacks. In fact, many new mainframe
installations are primarily as hosts for Linux virtual machines, and
thus the coding is normal Linux programming with conventional stacks.

(Mainframes are not something I work with, so any corrections to what I
write would be valued.)

> I wouldn't bet against such a mechanism becoming popular again in
> the future. C++ code that doesn't go out of its way to depend on
> implementation-specific details shouldn't even notice the difference.

(I agree that C++ code should not see the details of any differences
here, unless it is very specialised for some purpose.)

I'd be surprised to see it become popular again. There are potential
security and reliability benefits from having function stack frames
allocated as separate blocks, as well as the possibility of more
efficient use of memory (especially in multi-threaded or coroutine
environments). But the costs in efficiency of the code are far from
insignificant unless you have dedicated hardware support - and "normal"
processors do not have that. You can get most of the benefits of
growable stacks on most systems by just using the OS's virtual memory
system, which already exists. And if you need something more
sophisticated, split stacks offer a better solution than heap-allocated
function stack frames.

Still, predictions are hard, especially about the future. The computing
world has gone from early days of trying out a wide variety of hardware
systems, to consolidating on "C friendly" architectures. But as such a
high proportion of software these days is written in higher level
languages, the details of the underlying architectures have become less
and less relevant. So maybe new architectures will change this.

>
> For the second and third, if the system is unable to allocate
> space for automatic objects, then the system is non-conforming
> (or has absurdly small capacity limitations). Of course you can
> usually assume that that your C++ implementation is conforming.
> Sometimes you might need to work with a subset implementation and
> work around missing features. (I suspect that C subsets are more
> common than C++ subsets for such systems.)

On some small microcontrollers, automatic storage is implemented
primarily as static addresses in ram. To do this, the toolchain has to
understand the reentrancy of functions to know that the function (or at
least the automatic variables) cannot be "live" more than once at a
time. This is done with a combination of compiler extensions to mark
functions as reentrant or not (often the default is "not") and
whole-program optimisation. Smart optimisation and lifetime analysis
let the compiler/linker overlap these static areas to reduce RAM usage.

Yes, C is usually used on such systems, rather than C++. But sometimes
C++ is used too. And yes, you usually have a variety of limitations and
non-conformity. (For C++, for example, you can be confident that
exceptions and RTTI are not supported.)

Again, the point is that you know you are working on such systems.
Conversely, if you don't know that you are writing code for something
like this, you are not - and can therefore assume you have a more
"normal" target.

>
>> My conclusion is that when you code in C++, you can assume you have a
>> "stack" in the conventional sense, because you'd know if you didn't have
>> one. And it is useful to know about it, because it is a lot more
>> efficient to put objects on the stack than on the heap. But the
>> /details/ shouldn't matter - such as whether the object is actually on
>> the stack, or held in registers, or optimised away fully or partially.
>> And I can think of no good reason why one might want to know if a
>> particular object is on the stack or on the heap in a context where it
>> is not obvious (such as in the function that defines the object).
>
> Sure, it's probably reasonable to assume that allocation and
> deallocation is more efficent for objects with automatic storage
> duration than for objects allocated via new/malloc. If your code
> runs on a system where that isn't true, but the system is still
> conforming, your program won't break.

Yes. (It's better to be potentially a little less efficient than a
little less correct!)

>
> The kind of assumption I was thinking of is, given that x0, x1,
> and x2 are objects defined in nested function calls, assuming that
> either (&x0 < &x1 < &x2) or (&x0 > &x1 > &x2) (that's pseudo-code;
> "<" doesn't chain that way), and that their addresses don't differ
> by a whole lot (maybe a few kilobytes for typical functions).
> That assumption is going to be valid on a system with a conventional
> stack, and invalid on a system that allocates function call
> activation frames on a heap. But even if you're certain that your
> code will only run on conventional systems, I can't think of any
> good reason to make your code rely on those assumptions.

It's surely going to be undefined behaviour on all systems? Of course,
that doesn't mean it won't work "as expected" on most systems, and you
can usually convert the addresses to uintptr_t and do the comparisons.

I agree entirely that your code shouldn't be using code like that - it's
unlikely to make sense, and isn't necessarily going to be stable. The
relationship between source code functions and local variables, and
their actual implementation, is very far from obvious with modern
optimising compilers.

(Implementations of functions like memcpy and memmove can be interested
in this sort of thing, but as standard library functions, these
implementations get to know more about the system details and can
"cheat" as necessary.)

>
> On the other hand, if you're examining object addresses in a
> debugger, it's perfectly reasonable to make use of whatever you
> know about the system.
>

Yes.

> Well written C++ code *shouldn't care* whether it's running on
> a system with a contiguous stack or not, as long as the system
> correctly supports automatic storage allocation and deallocation
> in some manner.
>

Agreed.

Jorgen Grahn

Jul 26, 2020, 9:17:48 AM
My hunch is the OP has a fundamental design error, and if that one is
fixed he doesn't need is_on_heap(), a flag in the object, or a custom
deleter.

Finding out what that error is would be hard, though.

But as a rule, if you look at a pointer or reference to an object, you
should (as the author) be able to tell, statically, who owns the
object. And the owner is responsible for deleting it.

This rules out e.g. functions like foo(Bar*), where foo sometimes
takes over ownership of the Bar, and sometimes not.

Exceptions to that rule are rare IME, and most of those exceptions can
be handled with std::shared_ptr.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Bo Persson

Jul 26, 2020, 10:59:19 AM
On 2020-07-26 at 13:02, David Brown wrote:
> On 26/07/2020 00:31, Keith Thompson wrote:

>> As I understand it, there are still mainframe systems that
>> allocate function call activation records on a heap. When you
>> call a function, space for its local variables is allocated
>> by the equivalent of malloc or new, and returning from a
>> function deallocates space by the equivalent of free or delete.
>
> That's my first group. Such mainframes are ancient systems. They are
> still some in use - there are perhaps even a few that are still
> produced.

The latest of these ancient systems was introduced at the end of 2019:

https://en.wikipedia.org/wiki/IBM_z15_(microprocessor)

> But they are used in critical systems - systems that must
> work correctly without pause over years or decades. Little completely
> /new/ software is made for these old systems - primarily code for them
> would be maintaining or expanding existing code.

"Expanding" existing code includes adding server support for phone apps
and web services. That's totally new code. And lots of it.

I recently worked for a bank, where we were about 500 people doing that.


> And the people
> programming or administrating these machines don't use some code they
> found off github - the people writing code that runs on these systems
> /know/ they are doing that.
>

True, that. :-)

Paavo Helde

Jul 26, 2020, 11:43:39 AM
I suspect he has shared reference counted objects which are accessed via
smart pointers. I suspect what he now wants is to have some objects on
stack, but he still wants to access them via shared pointers, presumably
because all interfaces are using smart pointers.

An easy way to solve this would abandon objects on stack. Allocate them
on heap always and use smart pointers always. Problem solved. Deal with
the lost microseconds on dynamic allocation only when the profiler shows
it's a bottleneck there.



David Brown

Jul 26, 2020, 12:25:22 PM
On 26/07/2020 16:59, Bo Persson wrote:
> On 2020-07-26 at 13:02, David Brown wrote:
>> On 26/07/2020 00:31, Keith Thompson wrote:
>
>>> As I understand it, there are still mainframe systems that
>>> allocate function call activation records on a heap.  When you
>>> call a function, space for its local variables is allocated
>>> by the equivalent of malloc or new, and returning from a
>>> function deallocates space by the equivalent of free or delete.
>>
>> That's my first group.  Such mainframes are ancient systems.  They are
>> still some in use - there are perhaps even a few that are still
>> produced.
>
> The lastest of these ancient systems were introduced at the end of 2019:
>
> https://en.wikipedia.org/wiki/IBM_z15_(microprocessor)
>

Yes, sure - but are these systems using function stack frames that are
allocated in lumps from the heap for each function call, or are they
using conventional stacks?

>>  But they are used in critical systems - systems that must
>> work correctly without pause over years or decades.  Little completely
>> /new/ software is made for these old systems - primarily code for them
>> would be maintaining or expanding existing code.
>
> "Expanding" existing code includes adding server support for phone apps
> and webb services. That's totally new code. And lots of it.
>
> I recently worked for a bank, where we were about 500 people doing that.
>

Also, are these running on mainframe OS's directly, or are they on Linux
virtual machines (or are they on something else) ?

Bo Persson

Jul 26, 2020, 4:14:22 PM
On 2020-07-26 at 18:25, David Brown wrote:
> On 26/07/2020 16:59, Bo Persson wrote:
>> On 2020-07-26 at 13:02, David Brown wrote:
>>> On 26/07/2020 00:31, Keith Thompson wrote:
>>
>>>> As I understand it, there are still mainframe systems that
>>>> allocate function call activation records on a heap.  When you
>>>> call a function, space for its local variables is allocated
>>>> by the equivalent of malloc or new, and returning from a
>>>> function deallocates space by the equivalent of free or delete.
>>>
>>> That's my first group.  Such mainframes are ancient systems.  They are
>>> still some in use - there are perhaps even a few that are still
>>> produced.
>>
>> The lastest of these ancient systems were introduced at the end of 2019:
>>
>> https://en.wikipedia.org/wiki/IBM_z15_(microprocessor)
>>
>
> Yes, sure - but are these systems using function stack frames that are
> allocated in lumps from the heap for each function call, or are they
> using conventional stacks?

The architecture doesn't have a dedicated hardware stack
pointer, or push, pop, and call instructions (unlike the x86). Software
convention uses a register pointing to the current activation record
(more like a bp register). The compiler can set aside an area for these
records, but it is not really a stack.

>
>>>  But they are used in critical systems - systems that must
>>> work correctly without pause over years or decades.  Little completely
>>> /new/ software is made for these old systems - primarily code for them
>>> would be maintaining or expanding existing code.
>>
>> "Expanding" existing code includes adding server support for phone apps
>> and webb services. That's totally new code. And lots of it.
>>
>> I recently worked for a bank, where we were about 500 people doing that.
>>
>
> Also, are these running on mainframe OS's directly, or are they on Linux
> virtual machines (or are they on something else) ?

The webserver runs on a Linux partition; it is not emulated or
anything, but compiled for the z hardware. The advantage of running the
webserver on the mainframe is that it can communicate with the back
office systems and databases extremely fast. No network delays.

Technically, everything is virtualized as you first boot z/VM and then
start any number of z/OS, Linux, and other subsystems from there.


Bo Persson

David Brown

Jul 26, 2020, 4:42:40 PM
On 26/07/2020 22:14, Bo Persson wrote:
> On 2020-07-26 at 18:25, David Brown wrote:
>> On 26/07/2020 16:59, Bo Persson wrote:
>>> On 2020-07-26 at 13:02, David Brown wrote:
>>>> On 26/07/2020 00:31, Keith Thompson wrote:
>>>
>>>>> As I understand it, there are still mainframe systems that
>>>>> allocate function call activation records on a heap.  When you
>>>>> call a function, space for its local variables is allocated
>>>>> by the equivalent of malloc or new, and returning from a
>>>>> function deallocates space by the equivalent of free or delete.
>>>>
>>>> That's my first group.  Such mainframes are ancient systems.  They are
>>>> still some in use - there are perhaps even a few that are still
>>>> produced.
>>>
>>> The lastest of these ancient systems were introduced at the end of 2019:
>>>
>>> https://en.wikipedia.org/wiki/IBM_z15_(microprocessor)
>>>
>>
>> Yes, sure - but are these systems using function stack frames that are
>> allocated in lumps from the heap for each function call, or are they
>> using conventional stacks?
>
> The architecture doesn't have doesn't have a dedicated hardware stack
> pointer, or push, pop, and call instructions (unlike the x86). Software
> convention uses a register pointing to the current activation record
> (more like a bp register). The compiler can set aside an area for these
> records, but it is not really a stack.

I've worked with PowerPC microcontrollers - while not exactly the same
as Power, they share a lot of common ancestry and features. You can say
that is no dedicated stack pointer register - or you can say that /any/
GPR register (except 0) is a stack pointer. "Push" and "pop" are just
loads and stores with pre- or post- increment and decrement, using
whichever "stack pointer" register you want, and with the stack growing
upwards or downwards as you choose. "Call" is "branch and link", with
non-leaf functions starting with a "push link register" sequence. Local
data on the stack can be accessed from the "stack pointer" as just
register + index addressing. Any register can be used as an equivalent
to the BP on x86.

So you can choose to use a stack or not, and can easily have multiple
stacks (that would be nice for Forth). The details are left up to the
ABI, which is effectively an agreement between the OS, libraries, and
compilers. Since I have only used the microcontroller cores, I am only
familiar with the 32-bit EABI. There you have r1 dedicated as the
"stack pointer" with a conventional downward-growing stack. (There are
actually, I think, different variants of the EABI with slightly
different details matching different variants of the PPC cores, but
AFAIK they all use r1 as a stack pointer.)

However, I don't know if this also applies to programming on the Power
on zSeries machines.

>
>>
>>>>   But they are used in critical systems - systems that must
>>>> work correctly without pause over years or decades.  Little completely
>>>> /new/ software is made for these old systems - primarily code for them
>>>> would be maintaining or expanding existing code.
>>>
>>> "Expanding" existing code includes adding server support for phone apps
>>> and webb services. That's totally new code. And lots of it.
>>>
>>> I recently worked for a bank, where we were about 500 people doing that.
>>>
>>
>> Also, are these running on mainframe OS's directly, or are they on Linux
>> virtual machines (or are they on something else) ?
>
> The webserver runs on a Linux partition, but it is not emulated or
> anything, but compiled for the z hardware.

That is what I thought. (No need to emulate it when it is a supported
processor type!)

> The advantage of running the
> webserver on the mainframe is that it can communicate with the back
> office systems and databases extremely fast. No network delays.
>

There will be many other advantages too, as compared to running Linux on
bog-standard x86 boxes (or even a Talos II Power-based server). You get
hot-plug and hot-replacement of everything, including cpus and ram, and
huge scalability. I have also read of records for the speed at which
new Linux partitions can be installed on these things.

> Technically, everything is virtualized as you first boot z/VM and then
> start any number of z/OS, Linux, and other subsystems from there.
>

OK, so z/OS is a guest OS, just like Linux? And the virtualisation is
of the level of, say, VMWare, VirtualBox or KVM, rather than emulation
like QEMU or "super chroot jail" like Linux Containers, Docker, or
Solaris Zones ? That is to say, each partition (if that is the correct
term) has its own kernel, with hardware interaction being handled by the
hypervisor (z/VM)?

(I'm sorry, this is getting a bit off-topic for this group, but it's
nice to learn a little from someone who has used these things.)

Manfred

Jul 26, 2020, 6:33:57 PM
On 7/26/20 9:45 AM, aotto1968 wrote:
> ok - it seems that "c" and "cP" using the SAME constructor…
>
> "c" is using the STACK memory and cP is using the HEAP memory…
>
> Question can I overload "new"
> -> https://www.geeksforgeeks.org/overloading-new-delete-operator-c/
> to set a flag that cP was created with new?

AFAIK you can't.
Overloading operator new is obviously possible, but in the overload you
only have access to the raw memory of the object, /before/ the
constructor is run, and you cannot pass arguments to the constructor
from there - it is an allocation function, the constructor is called by
the language.

Ben Bacarisse

Jul 26, 2020, 8:06:07 PM
David Brown <david...@hesbynett.no> writes:

> On 26/07/2020 16:59, Bo Persson wrote:
>> On 2020-07-26 at 13:02, David Brown wrote:
>>> On 26/07/2020 00:31, Keith Thompson wrote:
>>
>>>> As I understand it, there are still mainframe systems that
>>>> allocate function call activation records on a heap.  When you
>>>> call a function, space for its local variables is allocated
>>>> by the equivalent of malloc or new, and returning from a
>>>> function deallocates space by the equivalent of free or delete.
>>>
>>> That's my first group.  Such mainframes are ancient systems.  They are
>>> still some in use - there are perhaps even a few that are still
>>> produced.
>>
>> The lastest of these ancient systems were introduced at the end of 2019:
>>
>> https://en.wikipedia.org/wiki/IBM_z15_(microprocessor)
>
> Yes, sure - but are these systems using function stack frames that are
> allocated in lumps from the heap for each function call, or are they
> using conventional stacks?

The choice is more between allocating them on the heap for each function
call or statically allocating one for each function. If the compiler
can determine that it's safe (basically no recursive re-entry) the
register save and parameter space can be a statically allocated block.

<cut>
--
Ben.

Bonita Montero

Jul 27, 2020, 1:38:36 AM
For what do you need this ability to decide whether an object has been
allocated on the stack or the heap ? I don't see any sense in this.

Chris M. Thomasson

Jul 27, 2020, 2:13:51 AM
On 7/26/2020 10:38 PM, Bonita Montero wrote:
> For what do you need this ability to decide whether an object has been
> allocated on the stack or the heap ? I don't see any sense in this.


Sorry for interjecting, however, imvho, this is an interesting question.
There is a way to create a full blown memory allocator using memory on
threads' stacks only.

Chris M. Thomasson

Jul 27, 2020, 2:15:49 AM
In my case I did not care where the memory came from. If a thread
frees something it did not itself create, well, it would use an atomic
XCHG, or CAS. The creator thread, in other words, the one that
allocated, would never die until all of its allocations were
deallocated. It used a little "fancy" pointer stealing to store a little
meta data in the atomic pointer swaps. Iirc, it was only a bit.

Bonita Montero

Jul 27, 2020, 2:16:38 AM
> Sorry for interjecting, however, imvho, this is an interesting question.
> There is a way to create a full blown memory allocator using memory on
> threads stacks only.

That's possible without what the OP wanted. Simply open your own
heap-arena on the stack with alloca and divide it into smaller parts
afterwards. But that's also useless.

Bonita Montero

Jul 27, 2020, 2:25:16 AM
I just had the idea that in_our_stack could be applied to the this
pointer on construction, thereby determining whether the object has been
allocated on the stack or on the heap.

Something like this:

#include <intrin.h>

inline
bool in_our_stack( void *addr )
{
    void *stackBottom,
         *stackTop;
#if defined _MSC_VER
    #if defined(_M_IX86)
    stackBottom = (void *)__readfsdword( 0x08 );
    stackTop    = (void *)__readfsdword( 0x04 );
    #elif defined(_M_X64)
    stackBottom = (void *)__readgsqword( 0x10 );
    stackTop    = (void *)__readgsqword( 0x08 );
    #else
        #error "unsupported MSC-CPU"
    #endif
#else
    #error "unsupported compiler"
#endif
    return addr >= stackBottom && addr < stackTop;
}

struct MyClass
{
    MyClass();
    bool isStacked();
private:
    bool m_isStacked;
};

MyClass::MyClass() :
    m_isStacked( in_our_stack( this ) )
{
}

bool MyClass::isStacked()
{
    return m_isStacked;
}

#include <iostream>

using namespace std;

void outStacked( MyClass *mcObj )
{
    cout << (mcObj->isStacked() ? "stacked" : "heaped") << endl;
}

int main()
{
    MyClass stackedObj,
            *heapedObj = new MyClass();
    outStacked( &stackedObj );
    outStacked( heapedObj );
}

Chris M. Thomasson

Jul 27, 2020, 2:33:23 AM
On 7/26/2020 11:16 PM, Bonita Montero wrote:
>> Sorry for interjecting, however, imvho, this is an interesting
>> question. There is a way to create a full blown memory allocator using
>> memory on threads stacks only.
>
> That's possible without what the OP wanted.

True. Shi% happens! ;^o

Bo Persson

Jul 27, 2020, 4:40:53 AM
On 2020-07-26 at 22:42, David Brown wrote:
> On 26/07/2020 22:14, Bo Persson wrote:
>> On 2020-07-26 at 18:25, David Brown wrote:

>>>
>>>>>   But they are used in critical systems - systems that must
>>>>> work correctly without pause over years or decades.  Little completely
>>>>> /new/ software is made for these old systems - primarily code for them
>>>>> would be maintaining or expanding existing code.
>>>>
>>>> "Expanding" existing code includes adding server support for phone apps
>>>> and webb services. That's totally new code. And lots of it.
>>>>
>>>> I recently worked for a bank, where we were about 500 people doing that.
>>>>
>>>
>>> Also, are these running on mainframe OS's directly, or are they on Linux
>>> virtual machines (or are they on something else) ?
>>
>> The webserver runs on a Linux partition, but it is not emulated or
>> anything, but compiled for the z hardware.
>
> That is what I thought. (No need to emulate it when it is a supported
> processor type!)
>
>> The advantage of running the
>> webserver on the mainframe is that it can communicate with the back
>> office systems and databases extremely fast. No network delays.
>>
>
> There will be many other advantages too, as compared to running Linux on
> bog-standard x86 boxes (or even a Talos II Power-based server). You get
> hot-plug and hot-replacement of everything, including cpus and ram, and
> huge scalability. I have also read of records for the speed at which
> new Linux partitions can be installed on these things.

Yes. When I wrote "The advantage", I really meant "One advantage". :-)

In addition to each box being "very redundant" in itself, you can also
cluster partitions from multiple machines to get an active-active load
balancing. Whatever is up handles the load.

At the bank we were supposed to have a 99.9% uptime for each software
system. If you got down to 99.5%, your system would be red in the
monthly report and you had to come up with an improvement plan. :-)

>
>> Technically, everything is virtualized as you first boot z/VM and then
>> start any number of z/OS, Linux, and other subsystems from there.
>>
>
> OK, so z/OS is a guest OS, just like Linux? And the virtualisation is
> of the level of, say, VMWare, VirtualBox or KVM, rather than emulation
> like QEMU or "super chroot jail" like Linux Containers, Docker, or
> Solaris Zones ? That is to say, each partition (if that is the correct
> term) has its own kernel, with hardware interaction being handled by the
> hypervisor (z/VM)?

Correct. I assume z/OS could run either natively or virtualized, but you
really want different copies for development, testing, and production.
Testing includes loading new versions of the OS in a separate partition
before upgrading the production one.

The OS's are aware of being virtualized and so talk to the hypervisor
instead of trying to fiddle with the hardware. IBM has done this since
the 1970's. :-)

Bonita Montero

Jul 27, 2020, 4:48:00 AM
> #include <intrin.h>
> inline
> bool in_our_stack( void *addr )
> {
>     void *stackBottom,
>          *stackTop;
> #if defined _MSC_VER
>     #if defined(_M_IX86)
>     stackBottom = (void *)__readfsdword( 0x08 );
>     stackTop = (void *)__readfsdword( 0x04 );
>     #elif defined(_M_X64)
>     stackBottom = (void *)__readgsqword( 0x10 );
>     stackTop = (void *)__readgsqword( 0x08 );
>     #else
>         #error "unsupported MSC-CPU"
>     #endif
> #else
>     #error "unsupported compiler"
> #endif
>     return addr >= stackBottom && addr < stackTop;
> }

So it's one instruction more but portable to all Windows-platforms,
even ARM.

#if defined _MSC_VER
#include <windows.h>
#endif

inline
bool in_our_stack( void *addr )
{
    void *stackBottom,
         *stackTop;
#if defined _MSC_VER
    void **teb  = (void **)NtCurrentTeb();
    stackBottom = teb[2];
    stackTop    = teb[1];
#else
    #error "unsupported compiler"
#endif
    return addr >= stackBottom && addr < stackTop;
}
Jorgen Grahn

Jul 27, 2020, 8:45:46 AM
On Sun, 2020-07-26, aotto1968 wrote:
> ok - it seems that "c" and "cP" using the SAME constructor…

Of course; it's the same class and you only use one of its
constructors. Constructors turn raw memory into objects, and don't
care what kind of memory it is.
If this is about interfacing to the Go language (like googling
"MQ_CST" indicates) then maybe it's better you ask there. The C APIs
of languages often have very specific lifetime and ownership rules
which we know nothing about.

(Based on my very old experience interfacing to Python. It had
reference counting (with rules for when you could "borrow" an object
without bumping the reference count) and everything had to be
allocated on Python's own heap IIRC. What you ended up writing
wasn't normal C or C++.)

Bonita Montero

Jul 27, 2020, 10:59:00 AM
Now everything works with Windows as well as Linux / POSIX:

#if defined _MSC_VER
#include <windows.h>
#elif defined __unix__
#include <pthread.h>
#endif
#include <cstddef>

inline
bool in_our_stack( void *addr )
{
    void  *stackBottom,
          *stackTop;
#if defined _MSC_VER
    void **teb  = (void **)NtCurrentTeb();
    stackBottom = teb[2];
    stackTop    = teb[1];
#elif defined __unix__
    pthread_attr_t attrs;
    if( pthread_getattr_np( pthread_self(), &attrs ) != 0 )
        throw 123;
    size_t stackSize;
    if( pthread_attr_getstack( &attrs, &stackBottom, &stackSize ) != 0 )
        throw 456;
    stackTop = (char *)stackBottom + stackSize;
#else
    #error "unsupported compiler"
#endif
    return addr >= stackBottom && addr < stackTop;
}

struct MyClass
{
    MyClass();
    bool isStacked();
private:
    bool m_isStacked;
};

MyClass::MyClass() :
    m_isStacked( in_our_stack( this ) )
{
}

bool MyClass::isStacked()
{
    return m_isStacked;
}

#include <iostream>

using namespace std;

void outStacked( MyClass *mcObj )
{
    cout << (mcObj->isStacked() ? "stacked" : "heaped") << endl;
}

int main()
{
    MyClass stackedObj,
            *heapedObj = new MyClass();
    outStacked( &stackedObj );
    outStacked( heapedObj );
}

I don't know how fast the pthread_self-, pthread_getattr_np-
and pthread_attr_getstack-calls are.

Bonita Montero

Jul 27, 2020, 11:16:56 AM
> inline
> bool in_our_stack( void *addr )
> {
>     void  *stackBottom,
>           *stackTop;
> #if defined _MSC_VER
>     void **teb  = (void **)NtCurrentTeb();
>     stackBottom = teb[2];
>     stackTop    = teb[1];
> #elif defined __unix__
>     pthread_attr_t attrs;
>     if( pthread_getattr_np( pthread_self(), &attrs ) != 0 )
>         throw 123;
>     size_t stackSize;
>     if( pthread_attr_getstack( &attrs, &stackBottom, &stackSize ) != 0 )
>         throw 456;
>     stackTop = (char *)stackBottom + stackSize;
>     #else
>     #error "unsupported compiler"
> #endif
>     return addr >= stackBottom && addr < stackTop;
> }
> ...
> I don't know how fast the pthread_self-, pthread_getattr_np-
> and pthread_attr_getstack-calls are.

So this speeds up the code since the stack-boundaries are determined
only once while a thread is running:

inline
bool in_our_stack( void *addr )
{
#if defined __unix__
    thread_local
#endif
    void *stackBottom,
         *stackTop;
#if defined _MSC_VER
    void **teb  = (void **)NtCurrentTeb();
    stackBottom = teb[2];
    stackTop    = teb[1];
#elif defined __unix__
    thread_local
    bool set = false;
    if( !set )
    {
        pthread_attr_t attrs;
        if( pthread_getattr_np( pthread_self(), &attrs ) != 0 )
            throw 123;
        size_t stackSize;
        if( pthread_attr_getstack( &attrs, &stackBottom, &stackSize ) != 0 )
            throw 456;
        stackTop = (char *)stackBottom + stackSize;
        set = true;
    }
#else
    #error "unsupported platform"
#endif
    return addr >= stackBottom && addr < stackTop;
}