Missing -D_POSIX_SOURCE flag?
--
Gautam
Bzzz !
On MINIX, as mandated by the POSIX standard when read strictly, if you
want to compile POSIX stuff you /need/ to use the correct instructions,
which is to #define _POSIX_SOURCE.
Not doing so is like using the compiler in strict ANSI C mode, and
popen() is not part of the ISO C standard; it is a POSIX "extension".
GCC by default has a different idea of how to use a compiler, and
considers the base mode to be "fully extended, including non-standard
extensions" (i.e., #define _GNU_SOURCE). Issuing gcc -ansi (hence
#defining __STRICT_ANSI__) is a way to be more standards-compliant, and
is recommended practice, particularly if you are using other compilers
besides GCC: conversely, not doing so obviously restricts portability.
As a result, please redo the test using
$ gcc -ansi -D_POSIX_SOURCE -Wall -Wextra <blablabla>
> #define PATH_MAX 255
Hmmmm... _MAX and _MIN are reserved suffixes under the POSIX standard
when the <limits.h> header is included, which can happen indirectly;
as a result, you basically cannot use them for your own macros.
Moreover, with POSIX, PATH_MAX is indeed defined when you #include
<limits.h>: so you really should #include it instead of trying to guess
the value du jour...
Antoine
Why?
You are actually using functions beyond the ISO standard; since MINIX is
a POSIX-compliant system (or aims to be), this works, no surprise
(it won't work as easily on, say, MS-DOS).
For the record: it also works because of the legacy behaviour of
"K&R" C, without prototypes, where calling functions without declaring
them first used to be the way to go. It is now considered bad style,
but it still works (and allows old C programs to be reused seamlessly.)
Antoine
In the 70's and most of the 80's, computer language engineering was
dominated by the principles of typed languages, as typified by Prof.
Niklaus Wirth. Of course, there was an engineering reason too: in the
beginning, hardware was so limited that efficiency was a necessary
requirement, even for compilers.
So was born C, which was created as "B with types", that is, a
derivative using the then up-to-date technology: types. OTOH C was a
"loosely" typed language, and with its pointers it permitted many
acrobatics, which was really needed to write operating systems. As we
all know, this was a huge success, because it struck a good balance and
many people went with it. The success was so great that C compilers
are, with Fortran ones and for the same reasons, still the most
optimising: quite simply, because that is where the market is.
As time passed, hardware got cheaper and cheaper, and programmer time
became the most limiting factor, much more than the efficiency of the
resulting programs. There are many ways to reduce programmers' time.
One is to reuse already existing programs, hence backward compatibility
is a very important goal (hence Fortran and C are still relevant.)
Another is components, and as object-orientation has been declared a
way to achieve that, so came C++, Java and their likes, huge successes
since they capitalize on both points. Making the programmer more
efficient can also be done by using lighter toolchains, for example
interpreters: and here we have Perl, Python, Javascript, PHP, Ruby.
Okay, now you can read theory; I would recommend N. Wirth's
http://www.inf.ethz.ch/personal/wirth/Articles/LeanSoftware.pdf (1995)
Antoine
__________
PS: before someone wants to point it out: languages without
declarations, or OO ones, are not quite new: LISP and Smalltalk
actually predate C.
But they completely lack C compatibility.