
[Nix] another doubt about a possible memory leak problem


Soviet_Mario

Sep 13, 2019, 1:10:28 PM

I am trying to firm up some concepts about "ownership" of
dynamically allocated memory (in C-style libraries, in the
Linux world).


I'm reading the man page for scandir(), a high-level
directory-scanning function which takes care of returning
everything in "one shot", hiding the loops, recursion and all
the rest.

It says it internally allocates (using malloc) an array of
pointers to dirent structures, or perhaps the array and the
entries together, contiguously. I'm not sure, because I see
that the dirent entry preallocates a static name buffer
MAXPATH wide, so in principle it is made up of chunks all the
same size which, again in principle, could be malloc-ed as
one contiguous block rather than as a sparse, jagged array.

The man page explicitly recommends freeing the memory after
use, since scandir no longer cares about it.

Now I'd like to understand how the Linux memory manager
handles memory that was allocated and never released, at
program TERMINATION.

I dare to hope that memory carries some kind of ownership
with it when allocated (if it didn't, how could segfaults be
thrown?). I also dare to think that when the central memory
manager is notified that a process and all its children have
terminated, it can release all the associated resources (file
handles, dynamic memory, pipes, sockets, everything).

Is this assumption ...
1) generally or always false
2) always true (the OS knows and provides post-mortem cleanup)
3) generally but not always true (in that case, when does it
hold and when not?)

I'm also rather unsure about the actual ownership when malloc
is called not in my own code but, as in the scandir case,
inside the LIBRARY. Is the ownership the same or not?

The doubt arises from the possible case of SHARED libraries,
perhaps used by several programs, which do NOT terminate when
a given program does. If the memory is malloc-ed under the
"signature" of the library, nobody knows which particular
client it was allocated for, and it could not be safely freed
when that client terminates.

If instead the memory is somehow attributed to the caller and
not to the library itself, it can be safely freed.

Obviously I will free it manually, but I'd like to understand
resource management better.
TY


--
1) Resist, resist, resist.
2) If everyone pays their taxes, then taxes are paid by everyone
Soviet_Mario - (aka Gatto_Vizzato)

Scott Lurndal

Sep 13, 2019, 1:33:33 PM
Soviet_Mario <Sovie...@CCCP.MIR> writes:
>[...]
>
>Now I'd like to understand how the Linux memory manager
>handles memory that was allocated and never released, at
>program TERMINATION.

Unprivileged applications cannot allocate memory that survives
application (process) termination.

The recommendation should, however, be followed to avoid
unexpected/unplanned growth in the memory requirements of the
application.

Personally, I'll almost always use nftw(3) in such cases, as
scandir can require a considerable amount of memory when the
directory hierarchy is both deep and wide.
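
For illustration, here's a minimal sketch (mine, just an
example, not from the man page) of an nftw(3) walk; the
callback sees one entry at a time, so nothing proportional to
the size of the tree has to stay allocated:

#define _XOPEN_SOURCE 500   /* required for nftw() by strict C compilers */
#include <ftw.h>
#include <stdio.h>

/* Called once per entry; returning 0 tells nftw to keep walking. */
static int visit(const char *fpath, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf)
{
    (void)sb; (void)ftwbuf;
    printf("%s%s\n", fpath, typeflag == FTW_D ? "/" : "");
    return 0;
}

int main(void)
{
    /* 20 = max directory fds held open; FTW_PHYS = don't follow symlinks. */
    if (nftw(".", visit, 20, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    return 0;
}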

Soviet_Mario

Sep 13, 2019, 1:55:26 PM
On 13/09/19 19:33, Scott Lurndal wrote:
> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>> [...]
>
> Unprivileged applications cannot allocate memory that survives
> application (process) termination.

Yes, but is memory allocated by a LIBRARY attributed to the
client program or to the library itself?
The library may well be unprivileged (and its RAM freed on
unloading), but it can have an unpredictable lifetime and thus
survive "the" caller.

In that case, how is the memory managed?

>
> The recommendation should, however, be followed to avoid
> unexpected /unplanned growth in the memory requirements of the
> application.

Yes, sure: in known contexts I try to avoid leaks. In fact
I'm trying to better understand the possible origins of leaks.

>
> Personally, I'll almost always use nftw(3) in such cases, as
> scandir can require a considerable amount of memory when the
> directory heirarchy is both deep and wide.

Yes, the tree is heavy. But I'm willing to delegate to robust
library functions as much as possible. I also hope it will be
fast, even if it consumes a lot.

And anyway I will need to store that information in another
format no matter what. So at the worst moment even a second
copy will be required before the malloc-ed dirent array can
be freed :\

Keith Thompson

Sep 13, 2019, 2:29:31 PM
It's attributed to the process.

C++ itself (or C) doesn't say much about this, but the fact that a
library function can call malloc() and then your program can call
free() to release the allocated memory implies that it's all part
of the same pool. malloc() itself is part of the standard library.
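
A tiny illustration of that point (my sketch, nothing special
about scandir): strdup() calls malloc() inside the C library,
and the caller hands the result straight to free(), precisely
because it all comes out of the process's single heap:

#include <string.h>   /* strdup (POSIX) */
#include <stdio.h>
#include <stdlib.h>   /* free */

int main(void)
{
    /* The allocation happens inside the library (strdup calls malloc)... */
    char *copy = strdup("allocated inside the library");
    if (copy == NULL)
        return 1;
    printf("%s\n", copy);
    /* ...but the caller releases it, with an ordinary free(). */
    free(copy);
    return 0;
}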

Typically if two running programs are both using the same library,
they might share the code that implements the functions in that
library (if it's a shared library -- *.so or *.dll), but not
any memory for objects created via that library. For example,
two simultaneously running processes using a library, even if
they're instances of the same program, generally can't see each
other's memory. Local data in a library function is allocated on
the calling process's stack.

Again, this is more about the OS than about anything specified
by the language, which doesn't even guarantee that you can run
more than one program simultaneously. Any reasonable OS (other
than for small embedded systems) creates a process when you run a
program, protects processes from each other, and cleans up resources
(particularly memory) when a process terminates.

[...]

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */

Scott Lurndal

Sep 13, 2019, 2:36:42 PM
Soviet_Mario <Sovie...@CCCP.MIR> writes:
>On 13/09/19 19:33, Scott Lurndal wrote:
>> Soviet_Mario <Sovie...@CCCP.MIR> writes:

>> Unprivileged applications cannot allocate memory that survives
>> application (process) termination.
>
>yes, but memory allocated by a LIBRARY is attributed to the
>client program or to the library itself ?
>The library can surely be unprivileged (and its ram freed on
>unloading), but can have an unpredictable lifetime and thus
>survive "the" caller.

Keith has addressed this.


>
>>
>> Personally, I'll almost always use nftw(3) in such cases, as
>> scandir can require a considerable amount of memory when the
>> directory heirarchy is both deep and wide.
>
>yes, the tree is heavy. But I'm willing to delegate robust
>library function as much as possible. I also hope that it
>would be fast even if consuming a lot.
>
>And anyway I will need to store such information in another
>formats no matter what. So at a given worst moment even a
>second copy will be required before malloc-ed dirent array
>can be freed :\

If you need to store it in other formats, you might consider using
nftw(3) and you can store the data once, rather than copying what
scandir returns.

Paavo Helde

Sep 13, 2019, 2:37:49 PM
Surprise-surprise, other people have worried about such things in the
past, and have come up with a notion of something called "process",
about 50 years ago.

The memory allocated by malloc() becomes part of the process memory
space and is owned by the process; such memory will be released and
recycled when the OS terminates the process.

The "ownership" of the memory inside the process is not really relevant
here, as all shared libraries loaded into the process see the same
process memory space; if anything, the memory is "owned" by the memory
manager behind malloc(). In unix/Linux there is typically only one such
memory manager in the process, which makes it possible in the first
place to design such interfaces as scandir() (on Windows,
different DLLs can easily contain or access different malloc()
memory managers, making things more interesting).

Soviet_Mario

Sep 13, 2019, 6:51:40 PM
On 13/09/19 20:29, Keith Thompson wrote:
OK, thanks ... very clear up to here.

> Local data in a library function is allocated on
> the calling process's stack.

Here I have a doubt, though. Since we were talking about
malloc and dynamic memory, why the "stack"? The stack is the
area accessed through the BP register for parameters and
return values (and auto variables), isn't it? Many years ago
I used to read about the "heap" for dynamically allocated
memory, like the memory obtained from malloc ...

>
> Again, this is more about the OS than about anything specified


Yes, that's why I put [Nix] in the subject of the question. I
was sure it necessarily had to do with the memory manager of
the OS. In the end a memory leak in the process should bubble
up to it.


But, to sum up: if it uses plain malloc and has no particular
privileges or other techniques (e.g. becoming a daemon and
staying loaded), a process should not be able, willy-nilly,
to produce a memory leak.

> by the language, which doesn't even guarantee that you can run
> more than one program simultaneously. Any reasonable OS (other
> than for small embedded systems) creates a process when you run a
> program, protects processes from each other, and cleans up resources
> (particularly memory) when a process terminates.
>
> [...]
>


--

Soviet_Mario

Sep 13, 2019, 6:59:14 PM
On 13/09/19 20:36, Scott Lurndal wrote:
> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>> On 13/09/19 19:33, Scott Lurndal wrote:
>>> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>
>>> Unprivileged applications cannot allocate memory that survives
>>> application (process) termination.
>>
>> yes, but memory allocated by a LIBRARY is attributed to the
>> client program or to the library itself ?
>> The library can surely be unprivileged (and its ram freed on
>> unloading), but can have an unpredictable lifetime and thus
>> survive "the" caller.
>
> Keith has addressed this.
>

yes, seen

>
>>
>>>
>>> Personally, I'll almost always use nftw(3) in such cases, as
>>> scandir can require a considerable amount of memory when the
>>> directory heirarchy is both deep and wide.
>>
>> yes, the tree is heavy. But I'm willing to delegate robust
>> library function as much as possible. I also hope that it
>> would be fast even if consuming a lot.
>>
>> And anyway I will need to store such information in another
>> formats no matter what. So at a given worst moment even a
>> second copy will be required before malloc-ed dirent array
>> can be freed :\
>
> If you need to store it in other formats, you might consider using
> nftw(3) and you can store the data once, rather than copying what
> scandir returns.


Yes, thanks. I'm reading the man page and I see it allows a
callback to receive EACH entry and do whatever it wants with
it.

To be actually useful for long-term storage, I'd need a way
to at least enumerate, or better, just get the number of
inodes of a given type, so as to know in advance how much
memory to allocate before nftw starts calling the callback.

I'd much prefer a scenario in which I allocate all of the
needed entries (at least at the directory level) or none at
all and fail, rather than allocating one by one and
potentially running out of memory halfway through the scan,
leaving the stored tree in an inconsistent state (and then
having to free the half-stored part, and so on).

I have to find other functions to get the number of files (or
subfolders, separately) in a given folder without actually
"stat"-ing anything.
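
Something like the following first pass is what I have in
mind: a sketch (mine, untested) that counts entries via
dirent's d_type field, so no stat() call is needed. d_type is
a BSD/glibc extension, though, and some filesystems report
DT_UNKNOWN, in which case a stat() fallback would still be
required.

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *dir = opendir(".");
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    long files = 0, dirs = 0, unknown = 0;
    struct dirent *e;
    while ((e = readdir(dir)) != NULL) {
        if (e->d_type == DT_REG)
            ++files;
        else if (e->d_type == DT_DIR)
            ++dirs;           /* note: "." and ".." are counted too */
        else if (e->d_type == DT_UNKNOWN)
            ++unknown;        /* filesystem didn't say: would need stat() */
    }
    closedir(dir);

    printf("files: %ld  dirs: %ld  unknown: %ld\n", files, dirs, unknown);
    return 0;
}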

Keith Thompson

Sep 13, 2019, 8:17:22 PM
Soviet_Mario <Sovie...@CCCP.MIR> writes:
> On 13/09/19 20:29, Keith Thompson wrote:
[...]
>> Local data in a library function is allocated on
>> the calling process's stack.
>
> here I have a doubt instead. As we were talking about malloc
> and dynamic memory, why the "stack" ? The stack is that area
> accessed by bp register for parameters and return values
> (and auto variables), isnt' it ? Many years ago I used to
> read about the "heap" for async. allocated memory, like the
> one got by malloc ...

That was an example. If you call a library function, any local
variables in that function are allocated on the stack associated with
the calling process.

The heap, the region of memory managed by malloc and free or by new and
delete, is handled the same way. It's all associated with the process,
and a new process is created every time you run your program.

(The standard doesn't necessarily refer to "stack" or "heap", but this
is typical for most operating systems.)

As far as memory allocation is concerned (not including space for
executable code), any library functions act just like functions in your
program. If you use static libraries, they're incorporated directly
into your program, as if you had defined them yourself. Dynamic
libraries by design work the same way, except that space for the code is
allocated only once.

>> Again, this is more about the OS than about anything specified
>
> yes, that's why I put [Nix] in the intro of the question. I
> was sure it had necessarily to do with the memory manager of
> the OS. In the end a memory leak in the process should
> bubble up until it.

Ah, I wasn't sure what "[Nix]" meant.

> But, to sum up : a process should not be able, willy nilly,
> if using plain malloc and no particular privilege or other
> tecniques (i.g. to become a daemon and remain loaded), to
> produce a memory leak.

A process can leak memory internally, keeping it allocated (and not
allowing other processes to use it) as long as the process is running.

    while (true) {
        malloc(1000); // don't do this
    }

Operating systems typically place some configurable limit on how much
memory a process can allocate -- but you can bypass that by running
multiple processes. One unprivileged user typically *can* bog down a
system, if not crash it.
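
For what it's worth, that limit can be inspected and lowered
from inside the process with getrlimit/setrlimit; a rough
sketch (RLIMIT_AS is the address-space limit on Linux; the
exact resource to use varies a bit between systems):

#include <sys/resource.h>   /* getrlimit, setrlimit */
#include <stdio.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("current soft limit: %llu bytes\n",
           (unsigned long long)rl.rlim_cur);

    /* Lower the soft limit to ~256 MiB: past that, malloc returns NULL
       (and operator new throws std::bad_alloc) instead of eating the machine. */
    rl.rlim_cur = 256UL * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}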

But when a process terminates, either because it finishes, or it
crashes, or something kills it, all allocated memory (stack and heap)
is released. That's the operating system's job.

Soviet_Mario

Sep 14, 2019, 7:10:21 AM
On 14/09/19 02:17, Keith Thompson wrote:
> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>> On 13/09/19 20:29, Keith Thompson wrote:
> [...]
>>> Local data in a library function is allocated on
>>> the calling process's stack.
>>
>> here I have a doubt instead. As we were talking about malloc
>> and dynamic memory, why the "stack" ? The stack is that area
>> accessed by bp register for parameters and return values
>> (and auto variables), isnt' it ? Many years ago I used to
>> read about the "heap" for async. allocated memory, like the
>> one got by malloc ...
>
> That was an example. If you call a library function, any local
> variables in that function are allocated on the stack associated with
> the calling process.

okay, I just wanted to check the "detail" :)

>
> The heap, the region of memory managed by malloc and free or by new and
> delete, is handled the same way. It's all associated with the process,
> and a new process is created every time you run your program.
>
> (The standard doesn't necessarily refer to "stack" or "heap", but this
> is typical for most operating systems.)

Reasonable: a user program cannot force the host OS to behave
the way the program says; it's the other way around.

>
> As far as memory allocation is concerned (not including space for
> executable code), any library functions act just like functions in your
> program. If you use static libraries, they're incorporated directly
> into your program, as if you had defined them yourself.

Interesting detail, I had totally missed this aspect. Yes, I
would actually link statically, so the question I posed
initially vanishes by itself.

> Dynamic
> libraries by design work the same way, except that space for the code is
> allocated only once.
>
>>> Again, this is more about the OS than about anything specified
>>
>> yes, that's why I put [Nix] in the intro of the question. I
>> was sure it had necessarily to do with the memory manager of
>> the OS. In the end a memory leak in the process should
>> bubble up until it.
>
> Ah, I wasn't sure what "[Nix]" meant.

Sorry ... I seldom use that nickname; I learnt it here on
Usenet.

>
>> But, to sum up : a process should not be able, willy nilly,
>> if using plain malloc and no particular privilege or other
>> tecniques (i.g. to become a daemon and remain loaded), to
>> produce a memory leak.
>
> A process can leak memory internally, keeping it allocated (and not
> allowing other processes to use it) as long as the process is running.
>
>     while (true) {
>         malloc(1000); // don't do this
>     }
>
> Operating systems typically place some configurable limit on how much
> memory a process can allocate -- but you can bypass that by running
> multiple processes. One unprivileged user typically *can* bog down a
> system, if not crash it.
>
> But when a process terminates, either because it finishes, or it
> crashes, or something kills it, all allocated memory (stack and heap)
> is released. That's the operating system's job.

perfect
TY

Soviet_Mario

Sep 14, 2019, 8:05:02 AM
On 13/09/19 19:55, Soviet_Mario wrote:
> On 13/09/19 19:33, Scott Lurndal wrote:
>> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>>> [...]
>>>
>>> It says it internally allocates (using malloc) an array of
>>> pointers to dirent structures, or perhaps the array and the
>>> entries together, contiguously. I'm not sure, because I see
>>> that the dirent entry preallocates a static name buffer
>>> MAXPATH wide, so in principle it is made up of chunks all
>>> the same size which, again in principle, could be malloc-ed
>>> as one contiguous block rather than as a sparse, jagged
>>> array.

About the suppositions above ...

From the man page I got the following usage example:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    struct dirent **namelist;
    int n;

    n = scandir(".", &namelist, NULL, alphasort);
    if (n == -1) {
        perror("scandir");
        exit(EXIT_FAILURE);
    }

    while (n--) {
        printf("%s\n", namelist[n]->d_name);
        free(namelist[n]);      /* free each entry ... */
    }
    free(namelist);             /* ... then the array of pointers */

    exit(EXIT_SUCCESS);
}


Now, looking at the freeing pattern: free() is called for
each dirent entry, and later free() is called again on the
array of pointers.

I'm becoming convinced that internally scandir does not
preallocate a monolithic array but rather a jagged set of
entries plus a further array of pointers (one to each dirent
structure).

Apart from the example, no documentation is given about HOW
to release the resources allocated by scandir.

Initially I had thought of performing just the last,
outermost free(namelist) on the whole array of pointers.

But the question of how to avoid a possibly corrupted
"half-allocated" state arises again if scandir allocates bit
by bit :\

Paavo Helde

Sep 14, 2019, 10:35:04 AM
On 14.09.2019 15:04, Soviet_Mario wrote:
> I'm becoming convinced that internally scandir does not pre-alloc a
> monolythic array but actually a jagged array and a further array of
> pointers (to each dirent structure).
>
> apart from the example, no documentation was reported about HOW to
> release resources allocated by SCANDIR.

The man page for scandir (not SCANDIR!) clearly says:

"Entries [..] are stored in strings allocated via malloc(3) [...] and
collected in array namelist which is allocated via malloc(3)."

That's all the documentation you need to know about how to release the
results, as for each malloc there has to be a corresponding free(). As
scandir() cannot release results by itself before returning them to the
caller, the caller will need to do this by itself.

> Initially I had thought to perform just the last outermost free
> (namelist) on the whole array of pointers.

Releasing an array of raw pointers does not release the memory they are
pointing to, neither in C nor in C++. For starters, the free() function
just takes a 'void*' argument pointing to some memory block, so it has
no idea that what it is releasing is an array of pointers, let alone
whether and when to do something special with those pointers.

> But the question to avoid a possibly corrupted "half allocated" state
> arises again if SCANDIR operates allocating bit by bit :\

That's none of your concern. Presumably, if the memory gets exhausted in
the middle of the operation, scandir() will release everything it has
allocated so far, and return -1 to indicate a failure.

Thankfully, in C++ one does not have to worry about such low-level
issues. One just pushes strings one by one into a local std::vector
object, and if an exception like std::bad_alloc is thrown in the middle
of the operation, the vector and its contents get released automatically.
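
For instance, a rough sketch (the helper name list_dir is just
for illustration, and it uses readdir rather than scandir): if
a push_back throws std::bad_alloc partway through, the vector
and every string already stored in it are released
automatically as the exception propagates.

#include <dirent.h>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Collect the entry names of one directory into a vector of strings.
std::vector<std::string> list_dir(const std::string& path)
{
    DIR* dir = opendir(path.c_str());
    if (dir == NULL)
        throw std::runtime_error("cannot open " + path);

    std::vector<std::string> names;
    try {
        for (struct dirent* e = readdir(dir); e != NULL; e = readdir(dir))
            names.push_back(e->d_name);   // may throw std::bad_alloc
    } catch (...) {
        closedir(dir);                    // don't leak the DIR handle
        throw;                            // names and its strings are freed here
    }
    closedir(dir);
    return names;
}

int main()
{
    try {
        for (const std::string& name : list_dir("."))
            std::cout << name << '\n';
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << '\n';   // nothing left half-allocated
        return 1;
    }
    return 0;
}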


Soviet_Mario

Sep 14, 2019, 2:05:56 PM
On 14/09/19 16:34, Paavo Helde wrote:
> On 14.09.2019 15:04, Soviet_Mario wrote:
>> I'm becoming convinced that internally scandir does not
>> pre-alloc a monolythic array but actually a jagged array and
>> a further array of pointers (to each dirent structure).
>>
>> apart from the example, no documentation was reported about
>> HOW to release resources allocated by SCANDIR.
>
> The man page for scandir (not SCANDIR!)

I used bold :)

> clearly says:
>
> "Entries [..] are stored in strings allocated via malloc(3)
> [...] and collected in array namelist which is allocated via
> malloc(3)."

true, I missed that point :\

>
> That's all the documentation you need to know about how to
> release the results, as for each malloc there has to be a
> corresponding free(). As scandir() cannot release results by
> itself before returning them to the caller, the caller will
> need to do this by itself.
>
>> Initially I had thought to perform just the last outermost
>> free (namelist) on the whole array of pointers.
>
> Releasing an array of raw pointers does not release the
> memory they are pointing to, neither in C nor in C++. For

I know that, but I wrongly thought it would be unnecessary (I
thought that a single big chunk of same-sized entries had been
allocated instead).

> starters, the free() function just takes a 'void*' argument
> to some memory block, so it would not have any idea that it
> is an array of pointers it is releasing, not to speak about
> if and when to do something special about these pointers.

Yes, I know that; I did not expect some smart recursion. I
just thought of a monolithic array instead of a sparse one.

>
>> But the question to avoid a possibly corrupted "half
>> allocated" state arises again if SCANDIR operates allocating
>> bit by bit :\
>
> That's none of your concern. Presumably, if the memory gets
> exhausted in the middle of operation, scandir() will release
> everything what it has allocated so far, and return -1 to
> indicate a failure.

Does "presumably" mean "surely"?

>
> Thankfully, in C++ one does not have to worry about such
> low-level issues. One just pushes strings one-by-one to a
> local std::vector object and if an exception like
> std::bad_alloc is thrown it the middle of the operation, the
> vector and its contents get released automatically.

Yes, for what I code manually I'm doing .push_back on a
std::vector<std::string> and catching exceptions.

But I was not sure about the management. I'll rely on the
fact that scandir does a complete cleanup, should it run
short of RAM halfway through.

Paavo Helde

Sep 14, 2019, 4:15:53 PM
On 14.09.2019 21:05, Soviet_Mario wrote:
> On 14/09/19 16:34, Paavo Helde wrote:
>> On 14.09.2019 15:04, Soviet_Mario wrote:
>>> I'm becoming convinced that internally scandir does not pre-alloc a
>>> monolythic array but actually a jagged array and a further array of
>>> pointers (to each dirent structure).
>>
>> "Entries [..] are stored in strings allocated via malloc(3) [...] and
>> collected in array namelist which is allocated via malloc(3)."
>
> true, I missed that point :\
>
>>
>> That's all the documentation you need to know about how to release the
>> results, as for each malloc there has to be a corresponding free(). As
>> scandir() cannot release results by itself before returning them to
>> the caller, the caller will need to do this by itself.
>>
>>> Initially I had thought to perform just the last outermost free
>>> (namelist) on the whole array of pointers.
>>
>> Releasing an array of raw pointers does not release the memory they
>> are pointing to, neither in C nor in C++. For
>
> I know that, but I wrongly thought it would have been unnecessary (I
> thought that a single big chunk of same-sized entries had been
> allocated, instead)

That would mean copying over and then releasing all the individual
strings allocated by malloc(), meaning that the fact they were initially
allocated by malloc() would not be worth mentioning in the
documentation; secondly, it would reduce performance, which would be a
no-no in such a low-level function.

>
>> starters, the free() function just takes a 'void*' argument to some
>> memory block, so it would not have any idea that it is an array of
>> pointers it is releasing, not to speak about if and when to do
>> something special about these pointers.
>
> yes I know that, I did not expect some smart recursion. I just thought
> of a monolythic array instead of a sparse one.
>
>>
>>> But the question to avoid a possibly corrupted "half allocated" state
>>> arises again if SCANDIR operates allocating bit by bit :\
>>
>> That's none of your concern. Presumably, if the memory gets exhausted
>> in the middle of operation, scandir() will release everything what it
>> has allocated so far, and return -1 to indicate a failure.
>
> presumably means surely ?

Surely, modulo the unlikely bugs in the implementation.


Jorgen Grahn

Sep 16, 2019, 7:51:28 AM
On Sat, 2019-09-14, Soviet_Mario wrote:
> On 14/09/19 02:17, Keith Thompson wrote:
>> Soviet_Mario <Sovie...@CCCP.MIR> writes:
>>> On 13/09/19 20:29, Keith Thompson wrote:
...
>>>> Again, this is more about the OS than about anything specified
>>>
>>> yes, that's why I put [Nix] in the intro of the question. I
>>> was sure it had necessarily to do with the memory manager of
>>> the OS. In the end a memory leak in the process should
>>> bubble up until it.
>>
>> Ah, I wasn't sure what "[Nix]" meant.
>
> sorry ... I seldom use that nickname, I've learnt it here on
> usenet

In some other group, perhaps? I suggest saying "Unix", "POSIX" or
"Linux" instead. The word "Nix" is unfamiliar to me too, and I've
been reading Unix- and C++-related newsgroups since the mid-1990s.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Daniel

Sep 16, 2019, 8:17:38 AM
On Monday, September 16, 2019 at 7:51:28 AM UTC-4, Jorgen Grahn wrote:
> On Sat, 2019-09-14, Soviet_Mario wrote:
> > On 14/09/19 02:17, Keith Thompson wrote:
> >>
> >> Ah, I wasn't sure what "[Nix]" meant.
> >
> > sorry ... I seldom use that nickname, I've learnt it here on
> > usenet
>
> In some other group, perhaps?

No, this one :-) Use of "Nix" is commonplace in this group, used by David Brown, Alf Steinbach, Vir Campestris, and others.

Daniel

James Kuyper

Sep 16, 2019, 10:09:26 AM
I think you may have confused *nix, which is commonplace, with Nix,
which is not. See <https://en.wikipedia.org/wiki/Unix#Branding>:

"Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate
all operating systems similar to Unix. This comes from the use of the
asterisk (*) and the question mark characters as wildcard indicators in
many utilities. This notation is also used to describe other Unix-like
systems that have not met the requirements for UNIX branding from the
Open Group."

Daniel

Sep 16, 2019, 10:36:56 AM
On Monday, September 16, 2019 at 10:09:26 AM UTC-4, James Kuyper wrote:
>
> I think you may have been confused *nix, which is commonplace, with Nix,
> which is not. See <https://en.wikipedia.org/wiki/Unix#Branding>:
>
> "Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate
> all operating systems similar to Unix.

Okay, you can nix my comment.

Daniel

Scott Lurndal

Sep 16, 2019, 10:43:08 AM
James Kuyper <james...@alumni.caltech.edu> writes:
>On 9/16/19 8:17 AM, Daniel wrote:
>> On Monday, September 16, 2019 at 7:51:28 AM UTC-4, Jorgen Grahn wrote:
>>> On Sat, 2019-09-14, Soviet_Mario wrote:
>>>> On 14/09/19 02:17, Keith Thompson wrote:
>>>>>
>>>>> Ah, I wasn't sure what "[Nix]" meant.
>>>>
>>>> sorry ... I seldom use that nickname, I've learnt it here on
>>>> usenet
>>>
>>> In some other group, perhaps?
>>
>> No, this one :-) Use of "Nix" is commonplace in this group, used by David Brown, Alf Steinbach, Vir Campestris, and others.
>
>I think you may have been confused *nix, which is commonplace, with Nix,
>which is not. See <https://en.wikipedia.org/wiki/Unix#Branding>:
>
>"Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate
>all operating systems similar to Unix. This comes from the use of the
>asterisk (*) and the question mark characters as wildcard indicators in
>many utilities.

Technically, no *nix utility knows anything about the asterisk and
question mark as wild card characters. The shell expands (globs)
all wildcards and passes the expanded result(s) in argv[] to the utility.
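
This is easy to see with a trivial program that just echoes
its argv (an illustration only; "showargs" is whatever you
name the binary):

#include <stdio.h>

/* Run it as:  ./showargs *.cpp
   The program never sees the '*': the shell expands the pattern and
   passes the matching names in argv[]. */
int main(int argc, char **argv)
{
    for (int i = 0; i < argc; ++i)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}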

VMS did the wild-carding in each application rather than in DCL.

Manfred

Sep 16, 2019, 11:56:33 AM
On 9/16/2019 4:42 PM, Scott Lurndal wrote:
> Technically, no *nix utility knows anything about the asterisk and
> question mark as wild card characters. The shell expands (globs)
> all wildcards and passes the expanded result(s) in argv[] to the utility.
>
> VMS did the wild-carding in each application rather than in DCL.
So did MS-DOS

Paavo Helde

Sep 16, 2019, 1:21:16 PM
On 16.09.2019 17:42, Scott Lurndal wrote:
> Technically, no *nix utility knows anything about the asterisk and
> question mark as wild card characters.

Except for those that do, e.g.

find . -name '*.cpp'

James Kuyper

Sep 16, 2019, 8:25:29 PM
On 9/16/19 10:42 AM, Scott Lurndal wrote:
> James Kuyper <james...@alumni.caltech.edu> writes:
...
>> I think you may have been confused *nix, which is commonplace, with Nix,
>> which is not. See <https://en.wikipedia.org/wiki/Unix#Branding>:
>>
>> "Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate
>> all operating systems similar to Unix. This comes from the use of the
>> asterisk (*) and the question mark characters as wildcard indicators in
>> many utilities.
>
> Technically, no *nix utility knows anything about the asterisk and
> question mark as wild card characters. The shell expands (globs)
> all wildcards and passes the expanded result(s) in argv[] to the utility.

I believe that what the author meant would have been better expressed
using "shells" rather than "utilities". With that correction, it's an
accurate statement.
