On Friday, August 24, 2018 at 7:34:46 AM UTC-4, Bart wrote:
> I had this file lying around:
>
>
> https://pastebin.com/raw/rbfmwDPj
>
> Called lisp.c, I expect it's some sort of Lisp interpreter.
>
> Three of my 7 C Windows compilers manage to build it, including gcc
> (5.1.0, -std=c11 -Wall -Wextra -Wpedantic), MSVC and Tiny C, but all
> with warnings. Four of them fail it with errors.
>
> On godbolt, gcc generally fails it (with one error in a function
> argument for write()). MSVC also fails it, but because there is no
> windows.h.
>
> On my old Linux machine with gcc 4.4.3, it passes (I wish it would tell
> you at the end if it succeeded or failed; instead I have to capture the
> output and look for 'error:').
>
> My question is whether this program should compile or not. Or whether
> that depends purely on what options you give to a compiler.
There's only one feature a program can contain for which the
standard specifies that it should not compile: a #error directive
that survives conditional compilation (6.10.5p1).
If it's important to you to ensure that any fully conforming
implementation of C must fail to compile your program if it
contains errors, then you must include an #error directive that
is unconditionally compiled. The simplest way to do this is to
make it the first line of the file. Note that this only applies
to C99 or later; before C99, the only portable way to guarantee
that your code wouldn't compile was to avoid starting up the
compiler in the first place.
Of course, the problem is that it will also cause the program to
fail to compile even if it doesn't contain any errors at all.
This is typical of the kinds of problems you run into when you
try to use a tool incorrectly. It's just like what happens when
you try to use a hammer to cut paper.
You're not supposed to use the fact that a program compiled
successfully to determine whether it has any errors. Your first
line of defense is to look at the diagnostics that are produced.
And here is the hard part - you need to read, understand, and
think about the diagnostics to figure out whether they're valid.
Some diagnostics are mandatory, some are not. All of the
mandatory diagnostics indicate portability problems, but in many
cases there can be perfectly legitimate reasons for writing code
that triggers them. You need to know whether you have such
reasons (as a rule of thumb, it's unlikely that you have a good
reason unless you're a C expert, in which case there's still a good
chance that you don't have a good reason). Non-mandatory
diagnostics can warn you about anything - and some of those
warnings point to a more serious problem than is revealed by many
of the mandatory diagnostics.
Some diagnostics cause your program to be rejected; these are
generally referred to as "error messages", and this should never
happen if your code is strictly conforming - which isn't much
help, because most useful code isn't strictly conforming. Other
messages allow compilation to complete; these are usually
referred to as "warnings". Just because a diagnostic is "only a
warning" does not justify ignoring it - many warnings point to
problems in code that might (or might not) be quite serious.
> If C, the language, does deem it OK to build as a working program, then
> it does seem astonishingly lax.
If it doesn't contain a #error directive, it's OK for a fully
conforming implementation to produce a working program, no matter
how many other problems it contains. Therefore, by your
standards, the language is indeed astonishingly lax. However, if
it produces warnings, and you choose to ignore those warnings and
execute the resulting program anyway, then by my standards, it's
you who are being astonishingly lax.
> If someone was given this program as a 'black box' (not looking at or
> modifying the contents at all) and told to build it, how would they
> instruct a compiler to do so?
As a "black box", you can't tell. Nowadays, most things of any
complexity come with some system for autoconfiguration, or at
least build instructions. For code that has neither of those,
there is no substitute for reading the code and trying to figure
out which version of which language it was targeted at. This is
really bad code, containing many dodgy practices, but the fact that
it calls "write()" without #including any header which might
contain a declaration for that identifier, implies that it was
targeted at C90, where the implicit int rule allowed such calls
to be compiled, and if you were lucky, they might even link and
execute correctly, if the entire program included at least one
library which contained a definition for write() that was
compatible with those calls.
The implicit int rule was a bad idea, which is why that rule was dropped in C99.
> (Needless to say my compiler doesn't pass it. Furthermore, if I
> concentrate on this detail (the original has no prototype for write):
>
> extern void write(void);
>
> int main(void){
> write();
> }
>
> My product and Pelles C fail to link to write(). Most others do. It
> doesn't exist in msvcrt.dll except as _write(), but nothing inside
> lccwin's headers (where it works) does the usual mapping of "name" to
> "_name".
>
> (I think the prototype for write is usually inside io.h. But that is not
> a standard header, and is anyway not included inside lisp.c. More mystery.)
>
> One further question, if my compiler doesn't build this program (while
> gcc etc manage it), where does the problem lie? Do I need to downgrade
> my error checking a couple more levels?)
It depends upon which version of the C standard you're targeting,
if any. If you're targeting C99 or later, a diagnostic is
mandatory, and completing compilation is optional. If you're
targeting C90 or earlier, the diagnostic is optional, and if
there are no other syntax errors or constraint violations,
completing compilation is mandatory.