
#pragma once in ISO standard yet?


Rick
Dec 12, 2007, 6:57:27 PM
I'm told that "#pragma once" has made it into the ISO standard for
either C or C++. I can't find any reference to that anywhere. If
it's true, do any of you have a reference I can use?

Thanks...

---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.comeaucomputing.com/csc/faq.html ]

Jim Langston
Dec 12, 2007, 8:04:10 PM

I could be wrong, but I think it's making it into C++0x

--
Jim Langston
tazm...@rocketmail.com

Sean Hunt
Dec 12, 2007, 8:07:57 PM
On Dec 12, 4:57 pm, reply.in.newsgr...@spam.no (Rick) wrote:
> I'm told that "#pragma once" has made it into the ISO standard for
> either C or C++. I can't find any reference to that anywhere. If
> it's true, do any of you have a reference I can use?
>
> Thanks...

No. #pragma once has not made it into either standard. Furthermore,
it's not the preferred way to do things. It's better to use #ifndef
guards:

#ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
#define SOME_HEADER_EXCLUSIVE_IDENTIFIER

...

#endif

Alf P. Steinbach
Dec 13, 2007, 12:58:06 AM
* Sean Hunt:

> On Dec 12, 4:57 pm, reply.in.newsgr...@spam.no (Rick) wrote:
>> I'm told that "#pragma once" has made it into the ISO standard for
>> either C or C++. I can't find any reference to that anywhere. If
>> it's true, do any of you have a reference I can use?
>>
>> Thanks...
>
> No. #pragma once has not made it into either standard. Furthermore,
> it's not the preferred way to do things. It's better to do use #ifndef
> guards:
>
> #ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
> #define SOME_HEADER_EXCLUSIVE_IDENTIFIER
>
> .
>
> #endif

The "better" valuation is a bit suspect. For if that were universally
agreed on and context-independent, then no compiler vendor would have
introduced #pragma once. From the existence and widespread use of
#pragma once, and the fact that include guards have been available as a
standard-conforming technique since early C, one may conclude that at
least some people found an advantage to #pragma once in some contexts.

Cheers, & hth.,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Pete Becker
Dec 13, 2007, 9:32:34 AM
On 2007-12-12 20:04:10 -0500, tazm...@rocketmail.com ("Jim Langston") said:

> I could be wrong, but I think it's making it into C++0x

It's not.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Pete Becker
Dec 13, 2007, 12:39:47 PM
On 2007-12-13 00:58:06 -0500, al...@start.no ("Alf P. Steinbach") said:

>
> The "better" valuation is a bit suspect. For if that was universally
> agreed on and context-independent, then no compiler vendor would have
> introduced #pragma once. From the existence and widespread usage of
> #pragma once, and the fact that include guard headers have been around
> as a standard-conforming way since early C, one may conclude that at
> least some people found an advantage to #pragma once in some contexts.
>

GCC now considers #pragma once obsolete. One could conclude that,
despite its flash appeal, it in fact adds complexity in order to solve
a non-problem.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Jack Klein
Dec 13, 2007, 6:43:25 PM
On Thu, 13 Dec 2007 05:58:06 GMT, al...@start.no ("Alf P. Steinbach")
wrote in comp.std.c++:

> * Sean Hunt:
> > On Dec 12, 4:57 pm, reply.in.newsgr...@spam.no (Rick) wrote:
> >> I'm told that "#pragma once" has made it into the ISO standard for
> >> either C or C++. I can't find any reference to that anywhere. If
> >> it's true, do any of you have a reference I can use?
> >>
> >> Thanks...
> >
> > No. #pragma once has not made it into either standard. Furthermore,
> > it's not the preferred way to do things. It's better to do use #ifndef
> > guards:
> >
> > #ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
> > #define SOME_HEADER_EXCLUSIVE_IDENTIFIER
> >
> > .
> >
> > #endif
>
> The "better" valuation is a bit suspect. For if that was universally
> agreed on and context-independent, then no compiler vendor would have
> introduced #pragma once. From the existence and widespread usage of
> #pragma once, and the fact that include guard headers have been around
> as a standard-conforming way since early C, one may conclude that at
> least some people found an advantage to #pragma once in some contexts.

OK, let's say far less complex to determine, if not better.

If 1.c++ contains:

#include "../../fred.h"
#include "a/b/c/d/ethel.h"

...and fred.h contains:

#include "../fred.h"

...is that the same "fred.h" or not?

Presumably, on today's typical file systems, the compiler needs to
convert each filename into some sort of canonical representation,
which it might not need to do to merely open them using the relative
path. Each time it comes across an include directive after the first,
it must convert to the canonical representation and search through a
list of files already included in the translation unit.

Whereas if the first "fred.h" had an include guard:

#ifndef INPUT_FRED_H

...and the second had:

#ifndef OUTPUT_FRED_H

...each could be opened via its relative path without the extra
overhead.

You might comment that having two different include files with the
same base name is very bad practice, and I would certainly agree, but
the compiler must be able to deal with this directly.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html

Bronek Kozicki
Dec 14, 2007, 10:45:02 AM
Sean Hunt <rid...@gmail.com> wrote:
> No. #pragma once has not made it into either standard. Furthermore,
> it's not the preferred way to do things. It's better to do use #ifndef
> guards:

just a note that this statement is an opinion and, as such, some may not
agree with it. Furthermore, I'm not aware of any C or C++ compiler that
does not implement this extension. It offers the benefit of removing a bit
of coupling (the need to provide unique preprocessor symbols for all
header files) that may otherwise be an issue in very large projects.

Personally, I would advise using either #ifndef guards or #pragma once
consistently in a project, depending on specific circumstances and not
on universal advice such as yours.


B.

Michael Aaron Safyan
Dec 14, 2007, 1:48:40 PM
Bronek Kozicki wrote:
> Sean Hunt <rid...@gmail.com> wrote:
>> No. #pragma once has not made it into either standard. Furthermore,
>> it's not the preferred way to do things. It's better to do use #ifndef
>> guards:
>
> just a note that this statement is an opinion and as such, some may not
> agree on this.

One could argue that, because it is not standard, "#pragma once" is
truly not the preferred way.

> Furthermore, I'm not aware of any C or C++ compiler that
> does not implement this extension.

The GNU Compiler Collection, which is used for OS X, Linux, and many
variants of UNIX, as well as its various ports such as the Minimalist
GNU for Windows and the version of GCC distributed with Cygwin.

> It offers a benefit of removing a bit
> of coupling (a need to provide unique preprocessor symbols for all
> header files) that may otherwise be an issue in a very large projects.
>

This is hardly an issue when the preprocessor symbol represents the path
from the root of the project to the current file. For example,
coolproject/feature1/suchandsuch/header.h might have the following
preprocessor symbol:

COOLPROJECT_FEATURE1_SUCHANDSUCH_HEADER_H
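This convention can be generated mechanically. A sketch in Python (illustrative only; the project path and macro name come from the example above):

```python
# Sketch: derive an include-guard macro from the header's path below the
# project root, so guards are unique as long as paths are unique.
import re

def guard_for(relpath):
    """Map e.g. 'coolproject/feature1/suchandsuch/header.h'
    to 'COOLPROJECT_FEATURE1_SUCHANDSUCH_HEADER_H'."""
    # Replace every character that is not legal in a macro name with '_'.
    return re.sub(r"[^A-Za-z0-9]", "_", relpath).upper()

print(guard_for("coolproject/feature1/suchandsuch/header.h"))
# COOLPROJECT_FEATURE1_SUCHANDSUCH_HEADER_H
```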

> Personally, I would advice to use either #ifdef guards or #pragma once
> consistently in a project, depending on specific circumstances and not
> on universal advice such as yours.

Consistency is certainly the best way to go. However, in a
cross-platform project, using preprocessor guards throughout is the best
way (since GCC on Mac OS X and Linux won't like "#pragma once").

Andre Kaufmann
Dec 14, 2007, 5:29:52 PM
Michael Aaron Safyan wrote:
> Bronek Kozicki wrote:
> [...]

>> Furthermore, I'm not aware of any C or C++ compiler that
>> does not implement this extension.
>
> The GNU Compiler Collection, which is used for OS X, Linux, and many
> variants of UNIX, as well as its various ports such as the Minimalist
> GNU for Windows and the version of GCC distributed with Cygwin.
>

I've done a quick test with a rather old version of gcc (3.4.5 / mingw)
and it happily accepted "#pragma once" and compiled correctly, meaning
that it has implemented it rather than merely ignoring the #pragma
statement.

Also, Wikipedia states that gcc has implemented it, though it's now a
deprecated feature, which I haven't yet tested with one of the latest
versions of gcc.

> [...]

Andre

Andre Kaufmann
Dec 14, 2007, 6:08:21 PM
Jack Klein wrote:
> On Thu, 13 Dec 2007 05:58:06 GMT, al...@start.no ("Alf P. Steinbach")
> wrote in comp.std.c++:
> [...]

> .each could be opened via its relative path without the extra
> overhead.

I tend to disagree, because IMHO it's generally more expensive to open
and parse the whole header file than just to add the path and name of
the header file to a list and let the OS compare whether 2 paths are
identical.

> You might comment that having two different include files with the
> same base name is very bad practice, and I would certainly agree, but
> the compiler must be able to deal with this directly.
>

IMHO it's better to have no header files at all in C++.

Pete Becker
Dec 14, 2007, 6:02:32 PM
On 2007-12-14 10:45:02 -0500, br...@spam-trap-cop.net ("Bronek Kozicki") said:

> Sean Hunt <rid...@gmail.com> wrote:
>> No. #pragma once has not made it into either standard. Furthermore,
>> it's not the preferred way to do things. It's better to do use #ifndef
>> guards:
>
> just a note that this statement is an opinion and as such, some may not
> agree on this. Futhermore, I'm not aware of any C or C++ compiler that
> does not implement this extension. It offers a benefit of removing a
> bit of coupling (a need to provide unique preprocessor symbols for all
> header files) that may otherwise be an issue in a very large projects.
>

GCC not only implements it, but declares that it's obsolete! As has
been said before in this thread, the problem with #pragma once is that
the compiler has to get it right, but doesn't have enough information
to do that. That's a classic case for not putting it in the language,
and leaving it to programmers to ensure that what they're doing works
right.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Andre Kaufmann
Dec 14, 2007, 6:08:42 PM
Sean Hunt wrote:
> [...]

> No. #pragma once has not made it into either standard. Furthermore,
> it's not the preferred way to do things. It's better to do use #ifndef
> guards:
>
> #ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
> #define SOME_HEADER_EXCLUSIVE_IDENTIFIER
>
> .
>
> #endif

Since I have already been hit multiple times by weird errors caused by
the same include guards appearing in 2 or more header files, I rather
tend to hate include guards.

Could you please shed some light on why include guards should be so
much better than #pragma once, or than a compiler option to generally
include header files only once?

> [...]

Andre

Bo Persson
Dec 15, 2007, 10:15:01 AM
Andre Kaufmann wrote:

:: Sean Hunt wrote:
::: [...]
::: No. #pragma once has not made it into either standard.
::: Furthermore, it's not the preferred way to do things. It's better
::: to do use #ifndef guards:
:::
::: #ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
::: #define SOME_HEADER_EXCLUSIVE_IDENTIFIER
:::
::: .
:::
::: #endif
::
:: Since I already got hit multiple times by weird errors caused by
:: the same include guards in 2 or more header files, I tend rather
:: to hate include guards.

Yes, it is a problem, but fixable by the programmer.

::
:: Please could you shed some light on it, why include guards should
:: be that better than #pragma once or as a compiler option to
:: generally include header files only once ?

What if you have files mounted from different file systems? Is it the
compiler's responsibility to resolve all the files? How could it?

And if it doesn't work, how are you as a programmer going to fix that?


As a somewhat nasty example, I have a PC at work, which I cannot
configure myself, with

local hard disk
several (non-unique) mounts to Windows servers (group and
departmental)
several SAN mounts
access to a ClearCase mount on a UNIX server
one drive letter mapping to files on a zOS mainframe


How should the C++ standard specify #pragma once, so that it always
works? Or should it be unportable?


Bo Persson

Andre Kaufmann
Dec 15, 2007, 12:16:06 PM
Bo Persson wrote:
> Andre Kaufmann wrote:
> [...]

> And if it doesn't work, how are you as a programmer going to fix that?

1) Change the environment
2) Use both - #pragma once and include guards
3) The programmer is not responsible for the infrastructure -
that's the task of the IT administrator
4) But there are always other solutions - see below

>
> As a somewhat nasty example, I have a PC at work, which I cannot
> configure myself, with
>
> local hard disk
> several (non-unique) mounts to Windows servers (group and
> departmental)
> several SAN mounts
> access to a ClearCase mount on a UNIX server
> one drive letter mapping to files on a zOS mainframe

Copy the sources and directory structure to your local PC and compile
them from there?
Anyway, isn't it better to compile on a local machine than over the
network - or am I totally wrong?

Must any nasty hardware construction/topology be supported, or should
the hardware infrastructure be adapted to the compiler?


> How should the C++ standard specify #pragma once, so that it always
> works? Or should it be unportable?

You can always choose to additionally use include guards - can't you?
And C++ can't be (fully) ported to every platform. If you always
restrict the compiler to the lowest common platform, you will
effectively restrict the language itself.

But anyway, there is always a 100% portable solution besides
additionally using include guards, though it doesn't offer the normal
speed gain of #pragma once.

If the compiler calculates a hash value of the content of a header file,
it can always compare it with the content of another header file.
This solution could be used in non-portable scenarios, or when the
compiler can't detect that the files are identical, or when it can't add
meta information to the code files itself.
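This hashing idea (the poster's proposal, not an existing compiler feature) can be sketched as follows; the header text and names are made up for illustration:

```python
# Sketch: identify a header by a hash of its bytes, so two different paths
# to byte-identical files are treated as one inclusion.
import hashlib

seen = set()  # hashes of header contents already included

def include_once(text):
    """Return True the first time this exact content is seen, else False."""
    key = hashlib.sha256(text.encode()).hexdigest()
    if key in seen:
        return False          # identical content already included: skip it
    seen.add(key)
    return True               # first time: process the header

header = "#define ANSWER 42\n"
print(include_once(header))   # True
print(include_once(header))   # False: same bytes reached via another path
```

Note this treats any two byte-identical headers as the same file, whether or not they actually are; that trade-off comes up later in the thread.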

There are 100 solutions. I know it's not as simple as it seems to be -
but IMHO there's no reason to drop features like #pragma once.

All in all, header files cause too much trouble. So it's best to
get rid of them as soon as possible. They aren't worth the trouble.

> Bo Persson
> [...]

Andre

James Kanze
Dec 15, 2007, 2:01:42 PM

> > >> Thanks...

> > > #ifndef SOME_HEADER_EXCLUSIVE_IDENTIFIER
> > > #define SOME_HEADER_EXCLUSIVE_IDENTIFIER

> > > .

> > > #endif

> If 1.c++ contains:

> #include "../../fred.h"
> #include "a/b/c/d/ethel.h"

> .and fred.h contains:

> #include "../fred.h"

So. That's still rather trivial. There are some perverse
cases, of course, where two different canonical names actually
refer to the same file (and I think that this is the main
argument presented against it). I rather doubt whether they are
relevant in practice, but they certainly make specifying it a
lot more difficult.

> Whereas if the first "fred.h" had an include guard:

> #ifndef INPUT_FRED_H

> .and the second had:

> #ifndef OUTPUT_FRED_H

> .each could be opened via its relative path without the extra
> overhead.

On the other hand, if they were really two different files, but
both used simply FRED_H as an include guard, you'd have problems
as well. Using the full, canonical filename as a basis for an
include guard doesn't work very well when you start moving files
around. The result is that people writing library code (where
they have to deal with unknown conventions elsewhere) end up
having to use all sorts of more or less complicated schemes to
avoid clashes. (Of course, since it's all automated, it's
really no big deal.)

> You might comment that having two different include files with
> the same base name is very bad practice,

In practice, if you're using third party libraries, it's
probably unavoidable. Also, in large projects, it's likely that
several different libraries include the same headers via
completely different paths, either because of symbolic links, or
multiple mounts of the same file system. Both cases cause all
of the implementations of #pragma once to fail, which basically
means that even if it were standard, you couldn't count on it
working portably, and would probably continue to use the include
guards in portable code.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze
Dec 15, 2007, 2:06:51 PM
On Dec 15, 12:08 am, Andre Kaufmann <akfmn...@t-online.de> wrote:
> Jack Klein wrote:
> > On Thu, 13 Dec 2007 05:58:06 GMT, al...@start.no ("Alf P. Steinbach")
> > wrote in comp.std.c++:
> > [...]
> > .each could be opened via its relative path without the extra
> > overhead.

> I tend to disagree, because IMHO it's generally more expensive to open
> and parse the whole header file, than just adding the path and name of
> the header file to a list and let the OS compare 2 paths if they are
> identical.

But modern compilers don't open and parse the whole header file,
in cases where they know that it's the same file. And #pragma
once can't be made to work in the cases where they don't.
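The optimization alluded to above (gcc's handling of fully guarded headers) can be sketched like this; the file name and guard macro are invented for the example, and a real preprocessor does much more:

```python
# Sketch: if a header's entire content is wrapped in #ifndef GUARD ... #endif,
# the compiler records (file, GUARD); on a later #include of the same file it
# only tests whether GUARD is defined and skips reopening the file entirely.
def first_parse(lines):
    """Return the wrapping guard macro if the file is fully guarded, else None."""
    if lines and lines[0].startswith("#ifndef ") and lines[-1].strip() == "#endif":
        return lines[0].split()[1]
    return None

guard_of = {}     # filename -> guard macro, filled on first inclusion
defined = set()   # preprocessor symbols currently defined

def include(name, lines):
    """Process an #include; return False if the reopen could be skipped."""
    if name in guard_of and guard_of[name] in defined:
        return False                      # skip: guard already defined
    g = first_parse(lines)
    if g is not None:
        guard_of[name] = g
        defined.add(g)                    # models the #define inside the guard
    return True

hdr = ["#ifndef FRED_H", "#define FRED_H", "int f();", "#endif"]
print(include("fred.h", hdr))  # True: opened and parsed
print(include("fred.h", hdr))  # False: skipped without reopening
```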

> > You might comment that having two different include files with the
> > same base name is very bad practice, and I would certainly agree, but
> > the compiler must be able to deal with this directly.

> IMHO it's better to have no header files at all in C++.

There are certainly better solutions than textual inclusion to
handle modularization.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze
Dec 15, 2007, 2:06:51 PM
On Dec 14, 7:48 pm, michaelsaf...@aim.com (Michael Aaron Safyan)
wrote:
> Bronek Kozicki wrote:

> > Sean Hunt <ride...@gmail.com> wrote:
> >> No. #pragma once has not made it into either standard. Furthermore,
> >> it's not the preferred way to do things. It's better to do use #ifndef
> >> guards:

> > just a note that this statement is an opinion and as such,
> > some may not agree on this.

> One could argue that, because it is not standard, "#pragma
> once" is truly not the preferred way.

> > Futhermore, I'm not aware of any C or C++ compiler that does
> > not implement this extension.

> The GNU Compiler Collection, which is used for OS X, Linux,
> and many variants of UNIX, as well as its various ports such
> as the Minimalist GNU for Windows and the version of GCC
> distributed with Cygwin.

Gcc has declared it obsolete, but I think it's still in there at
the moment.

> > It offers a benefit of removing a bit of coupling (a need to
> > provide unique preprocessor symbols for all header files)
> > that may otherwise be an issue in a very large projects.

> This is hardly an issue when the preprocessor symbol represents the path
> from the root of project to the current file. For example,
> coolproject/feature1/suchandsuch/header.h might have the following
> preprocessor symbol:

> COOLPROJECT_FEATURE1_SUCHANDSUCH_HEADER_H

You need the project name, plus the full path. Plus some sort
of guarantee that no one else will ever use the same project
name.

Other solutions involve adding the original author's name and
the creation date -- a given author will not create header files
with the same full path in two different projects with the same
name on the same date -- or adding some random junk to the
symbol -- if it's truly random, and there's enough of it, it is
almost certain that you'll never have a collision. (I currently
use something like:

guard1=${prefix}`basename "$filename" | sed -e 's:[^a-zA-Z0-9_]:_:g'`
guard2=`date +%Y%m%d`
guard3=`od -td2 -N 16 /dev/random | head -1 | awk '
BEGIN {
    p = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    m = length( p )
}
{
    for ( i = 2 ; i <= NF ; ++ i ) {
        x = $i
        if ( x < 0 ) x += 65536
        printf( "%c", substr( p, (x%m)+1, 1 ) )
        x = int( x / m )
        printf( "%c", substr( p, (x%m)+1, 1 ) )
        x = int( x / m )
        printf( "%c", substr( p, (x%m)+1, 1 ) )
    }
}
END {
    printf( "\n" )
}'`

guard=${guard1}_${guard2}${guard3}

in the shell script which generates my guards.)

> > Personally, I would advice to use either #ifdef guards or #pragma once
> > consistently in a project, depending on specific circumstances and not
> > on universal advice such as yours.

> Consistency is certainly the best way to go. However, in a
> cross-platform project, using preprocessor guards throughout
> is the best way (since GCC on Mac OS X and Linux won't like
> "#pragma once").

If portability is a concern, today, include guards are the only
way, because they're the only way guaranteed to work according
to the standard. But g++ 4.1.2 under Linux definitely supports
"#pragma once" in the simple cases.

Realistically, of course, "#pragma once" can't be made to work
reliably in large projects, since things like links and multiple
mounts mean that the same file can (and in large projects,
almost certainly will) have different names in different
contexts. So even if the standard did adopt #pragma once, you'd
still need the include guards when writing library code which
was meant to work in such environments.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Pete Becker
Dec 15, 2007, 4:30:34 PM
On 2007-12-15 08:06:51 -0500, James Kanze <james...@gmail.com> said:

>
> Other solutions involve adding the original author's name and
> the creation date -- a given author will not create header files
> with the same full path in two different projects with the same
> name on the same date --, or adding some random junk to the
> symbol -- if it's truely random, and there's enough of it, the
> probability is almost certain that you'll never have a
> collision. (I currently use something like:

Another source of random junk is uuidgen, although you have to replace
the hyphens in the resulting string (I use underbars).
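The uuidgen approach can be sketched in Python (illustrative; the `FRED_H` base name is invented, and `uuid.uuid4()` stands in for the uuidgen tool):

```python
# Sketch: append a random UUID, with hyphens turned into underbars, to the
# guard name so that collisions between guards are negligibly likely.
import uuid

def unique_guard(basename):
    """E.g. 'FRED_H' -> 'FRED_H_<36-char UUID with underbars>'."""
    return basename + "_" + str(uuid.uuid4()).replace("-", "_").upper()

g = unique_guard("FRED_H")
print(g)  # e.g. FRED_H_9F8B... (different on every run)
```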

--
Pete


Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Andre Kaufmann
Dec 15, 2007, 6:03:06 PM
James Kanze wrote:
> [...]

> But modern compilers don't open and parse the whole header file,

I think you mean the optimizations built into gcc, for example?

> in cases where they know that it's the same file. And #pragma
> once can't be made to work in the cases where they don't.

1) As I wrote in my other post, it's the task of the OS to
reliably check whether 2 paths are pointing to the same file.
Otherwise it's IMHO a security risk if this can't be done
reliably by the OS.

2) Even if the OS is incapable of doing that, you could
at least add an additional identifier to the #pragma once:

#pragma once MY_HEADER_FILE

which would replace

#ifndef MY_HEADER_FILE
#define MY_HEADER_FILE

...

#endif // MY_HEADER_FILE


Or simply another new keyword (bad word, I know), instead of #pragma once:

module MY_HEADER_FILE;

> [...]


> There are certainly better solutions than textual inclusion to
> handle modularization.

> [...]

Yes: 1 module, 1 file, 1 compilation ;-)

Andre Kaufmann
Dec 15, 2007, 6:02:54 PM
James Kanze wrote:

>[...]


> multiple mounts of the same file system. Both cases cause all
> of the implementations of #pragma once to fail, which basically
> means that even if it were standard, you couldn't count on it
> working portably, and would probably continue to use the include
> guards in portable code.

If I interpret it correctly, you state that under Linux (I think it is
Linux, since you mentioned mount points) it's not possible to reliably
check whether two file handles or paths are pointing to the same file?

If that is the case, I don't think it's the compiler's fault, or shouldn't be.

Perhaps the OS can't (always) check whether 2 file paths are identical,
but it should be quite simple to check whether the file header on the
device is the same. Same disk, same block - should be the same file?
At least for local files. Network is perhaps a different story.
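On POSIX systems the OS-level check discussed here does exist for local files: a file is identified by its (device, inode) pair, which is what `os.path.samefile` compares. A sketch (the file names are invented for the example):

```python
# Sketch: detect that two differently spelled paths name one file by
# comparing the (st_dev, st_ino) identity POSIX assigns to each file.
import os
import tempfile

def same_file(a, b):
    """True if both paths resolve to the same device and inode."""
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, "fred.h")
    open(real, "w").close()
    os.mkdir(os.path.join(d, "sub"))
    alias = os.path.join(d, "sub", "..", "fred.h")  # other spelling, same file
    print(same_file(real, alias))  # True: one inode, two spellings
```

As the thread notes, this identity can break down across network mounts, which is precisely the hard case for #pragma once.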

Also I tend to use full relative paths when I include header files.

"lib\a\b\c.h"

instead of:

"c.h"

So the risk of 2 files having the same relative path is IMHO very
unlikely, if relative paths are used. (I know the downsides, but I've
got used to them, because I think including libraries this way is safer,
and the paths also represent the hierarchical character of libraries,
just like namespaces.)


> [...]

Andre

Bo Persson
Dec 15, 2007, 6:32:38 PM
Andre Kaufmann wrote:

:: Bo Persson wrote:
::: Andre Kaufmann wrote:
::: [...]
::: And if it doesn't work, how are you as a programmer going to fix
::: that?
::
:: 1) Change the environment
:: 2) Use both - #pragma comment and include guards
:: 3) The programmer is not responsible for the infrastructure
:: that's the task of the it administrator

1) I cannot
2) Doesn't help much, does it? If the #pragma once makes the compiler
NOT include the file, I'm toast!
3) He won't change it, and points me to the company-wide standard for
PC and network configurations.

:: 4) But there are always other solutions - see below

ok.

::
:::
::: As a somewhat nasty example, I have a PC at work, which I cannot
::: configure myself, with
:::
::: local hard disk
::: several (non-unique) mounts to Windows servers (group and
::: departmental)
::: several SAN mounts
::: access to a ClearCase mount on a UNIX server
::: one drive letter mapping to files on a zOS mainframe
::
:: Copy the sources and directory structure to your local PC and
:: compile them from there ?
:: Anyways better to compile on a local machine than compiling over
:: network , or am I totally wrong ?
::
:: Must any nasty hardware construction/topology be supported or
:: should the hardware infrastructure be adopted to the compiler ?

You tell me!

So I go to the corporate IT manager (vice president) and tell him that
the setup of our international WAN is incorrect, because it doesn't
support the #pragma once feature of my C++ compiler.

Do you think he will order a reconfiguration? :-)

::
::
::: How should the C++ standard specify #pragma once, so that it
::: always works? Or should it be unportable?
::
:: You can always choose to additionally use include guards - can't
:: you ? And C++ can't be (fully) ported to every platform. If you
:: restrict the compiler always to the lowest common platform you
:: will effectively restrict the language itself.

Specifying a standard feature that we know cannot be implemented on
some platforms, surely makes the language less useful. The other part
of the problem is that it somehow forces the compiler writer to
consider network topologies. Hardly the right place for that.


:: All in all, header files are causing too much troubles. So it's
:: best to get rid of them as soon as possible. They aren't worth the
:: trouble.

Something better than headers would be nice. We just shouldn't do
it in a way that reduces the number of platforms available to C++.


Bo Persson

Andre Kaufmann
Dec 15, 2007, 7:52:14 PM
Pete Becker wrote:
>[...]

>
> GCC not only implements it, but declares that it's obsolete! As has been
> said before in this thread, the problem with #pragma once is that the
> compiler has to get it right, but doesn't have enough information to do
> that. That's a classic case for not putting it in the language, and
> leaving it to programmers to ensure that what they're doing works right.

And that's a classic example of how it shouldn't be. It's all up to the
programmer to get it right?
Which other language requires the programmer to get it right, regarding
header files? Are all the others doing something wrong?

Dealing with header files costs a lot of time and energy. The compiler
has to compile each header file again and again and again, every time
it's used. Why? It's mainly the same code, apart from, let's say, abuse
of macros. The same project compiles way faster using precompiled header
files than without them.
Modules have already been proposed, but unfortunately are not part of the
new standard. #pragma once helps somewhat, so that the compiler doesn't
have to load and parse the header file again.

And it's the task of the OS to check whether 2 paths point to the same
file - either by marking the file with a hash code or by normalizing the path.

If I wanted to be mean, I would state that C++ remains >the< climate
killer language, because it uses so many CPU cycles for compilation.
But I'm not mean ;-).
I'll just try to push C++ in the right direction, even if I get bashed
because it's only my "right" direction.

Andre

Francis Glassborow
Dec 15, 2007, 7:54:46 PM
Andre Kaufmann wrote:
> James Kanze wrote:
>
> >[...]
>> multiple mounts of the same file system. Both cases cause all
>> of the implementations of #pragma once to fail, which basically
>> means that even if it were standard, you couldn't count on it
>> working portably, and would probably continue to use the include
>> guards in portable code.
>
> When I interpret it correctly you state that under Linux, I think since
> you mentioned mount points it is Linux, it's not possible to reliably
> check if two file handles or paths are pointing to the same file ?
>
Even if you could, how do you propose that we deal with identical header
files? Or should the standard require that only a single copy exist?

James Kanze
Dec 15, 2007, 7:57:19 PM
On Dec 15, 6:16 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> Bo Persson wrote:
> > Andre Kaufmann wrote:
> > [...]
> > And if it doesn't work, how are you as a programmer going to fix that?

> 1) Change the environment
> 2) Use both - #pragma comment and include guards
> 3) The programmer is not responsible for the infrastructure
> that's the task of the it administrator
> 4) But there are always other solutions - see below

> > As a somewhat nasty example, I have a PC at work, which I
> > cannot configure myself, with

> > local hard disk
> > several (non-unique) mounts to Windows servers (group and
> > departmental)
> > several SAN mounts
> > access to a ClearCase mount on a UNIX server
> > one drive letter mapping to files on a zOS mainframe

> Copy the sources and directory structure to your local PC and
> compile them from there ?

I'd rather use an intelligent version control system, rather
than have to be sure of updating manually at the correct moment.
Remember that intelligent version control systems, like
Clearcase, behave as file servers, so that you always see the
version you're supposed to see.

For that matter, I don't think I've ever worked at a place where
the source files were on a local disk. What do you do when you
start discussing them with a colleague, on his machine?

> Anyways better to compile on a local machine than compiling

> over network, or am I totally wrong ?

Totally wrong. Complete builds would take forever on a single
machine. Your local remakes, of course, will all be compiled on
your machine, but the only way to be sure that you've got the
right versions of all of the headers is to go through some sort
of central server.

> Must any nasty hardware construction/topology be supported or
> should the hardware infrastructure be adapted to the compiler?

No. That's exactly why `#pragma once' doesn't work: it would
require some nasty topology to work, rather than using something
sensible.

> > How should the C++ standard specify #pragma once, so that it
> > always works? Or should it be unportable?

> You can always choose to additionally use include guards -
> can't you?

What's the point in having #pragma once, if you also need
include guards?

> And C++ can't be (fully) ported to every platform. If you
> restrict the compiler always to the lowest common platform you
> will effectively restrict the language itself.

> But anyways there is a always 100% portable solution, besides
> using additionally include guards, though it doesn't offer the
> normal speed gain of #pragma once.

#pragma once doesn't offer any speed gain. There is absolutely
no difference in build times with g++ when you use include
guards instead of #pragma once. Including over a slow network.

> If the compiler calculates a hash value of the content of a
> header file it can compare it always with the content of
> another header file. This solution could be used in non
> portable scenarios or when the compiler can't detect the files
> to be identical or if it can't add meta information to the
> code files itself.

> There are 100 solutions, I know it's not that simple as it
> seems to be - but IMHO there's no reason to drop such features
> like #pragma once.

The experience of the people at gcc seems to indicate the
opposite.

> All in all, header files are causing too much troubles. So
> it's best to get rid of them as soon as possible. They aren't
> worth the trouble.

I'm all for getting rid of header files completely. As soon as
we get a replacement for them which works.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

unread,
Dec 15, 2007, 7:57:20 PM12/15/07
to
On Dec 15, 12:02 am, p...@versatilecoding.com (Pete Becker) wrote:
> On 2007-12-14 10:45:02 -0500, b...@spam-trap-cop.net ("Bronek
> Kozicki") said:

> > Sean Hunt <ride...@gmail.com> wrote:
> >> No. #pragma once has not made it into either standard. Furthermore,
> >> it's not the preferred way to do things. It's better to do use #ifndef
> >> guards:

> > just a note that this statement is an opinion and as such,
> > some may not agree on this. Furthermore, I'm not aware of any
> > C or C++ compiler that does not implement this extension. It
> > offers a benefit of removing a bit of coupling (a need to
> > provide unique preprocessor symbols for all header files)
> > that may otherwise be an issue in very large projects.

> GCC not only implements it, but declares that it's obsolete!
> As has been said before in this thread, the problem with
> #pragma once is that the compiler has to get it right, but
> doesn't have enough information to do that. That's a classic
> case for not putting it in the language, and leaving it to
> programmers to ensure that what they're doing works right.

That's correct, as far as it goes. The problem is also that the
programmer doesn't have enough information to get it right
either. (For the programmer, getting it right means
guaranteeing a 0% chance of collision in the include guard
symbol.) Nobody does.

Still, the technique I use for the include guard names (and I'm
almost sure that I'm not alone) guarantees that the probability
of a collision is significantly less than 1 in 2.5E38. Which is
probably less than the chance of a collision in the compiler
generated name of the anonymous namespace. (You're more likely
to draw a royal straight flush of spades four times in a row in
poker, for example.) And of course, the guards are inserted
automatically by the editor, anytime I create a new header file.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Andre Kaufmann

unread,
Dec 16, 2007, 1:06:03 AM12/16/07
to
Bo Persson wrote:
> [...]


> 2) Doesn't help much, does it? If the #pragma once makes the compiler
> NOT include the file, I'm toast!

Simply don't use #pragma once for this project, meaning disable the
extension completely? I think this case is somewhat contrived; if you
are using the same include guards you are toasted too.

And since there are common naming conventions on some platforms for
include guards, you will have a high likelihood of having the same
include guards in 2 files ----> ;-)

And if the #pragma once has skipped the inclusion, it's the OS's fault ;-).

As I wrote in another post, I could live with a combination of include
guards and #pragma once. A named #pragma once

#pragma once(_MY_HEADER_FILE)
#pragma once(_MY_OTHER_HEADER_FILE)

Would save me 2 lines of code ;-), and the compiler could distinguish
header files, even if the OS can't.

> So I go to the corporate IT manager (vice president) and tell him that
> the setup of our international WAN is incorrect, because it doesn't

Please tell me that your compiler doesn't include files directly over
WAN and instead you are using a source code control system, which
synchronizes your local sources so that your compiler uses only local
files and doesn't have any problems with checking if 2 files are
identical ;-)

> support the #pragma once feature of my C++ compiler.
>
> Do you think he will order a reconfiguration? :-)

You should tell him that the current topology doesn't support the latest
(potential) compiler standard and the company will soon be out of
business if the topology isn't changed ;-)

> [...]


> Specifying a standard feature that we know cannot be implemented on
> some platforms, surely makes the language less useful. The other part

Well, the language gets more and more complex and not all C++ compilers
support all features. And on some platforms they never will. Does that
make the language less useful on these platforms?

> of the problem is that it somehow forces the compiler writer to
> consider network topologies. Hardly the right place for that.

> [...]


> Something better than headers would be nice. We just shouldn't do
> it in a way that reduces the number of platforms available to C++.

Agreed, I can live without #pragma once (meaning not having to deal with
macros), when I don't have to use header files.


> Bo Persson
>
> [...]

Andre

Andre Kaufmann

unread,
Dec 16, 2007, 10:16:25 AM12/16/07
to
James Kanze wrote:
> On Dec 15, 6:16 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> [...]

> I'd rather use an intelligent version control system, rather
> than have to be sure of updating manually at the correct moment.
> Remember that intelligent version control systems, like
> Clearcase, behave as file servers, so that you always see the
> version you're supposed to see.

I do use version control systems too. Though I have always a local view
of the source files on my local hard disk.

> For that matter, I don't think I've ever worked at a place where
> the source files were on a local disk. What do you do when you
> start discussing them with a collegue, on his machine?

And what will you do if your colleague changes the source file during
compilation ?

>> Anyways better to compile on a local machine than compiling
>> over network, or am I totally wrong ?
>
> Totally wrong. Complete builds would take forever on a single
> machine. Your local remakes, of course, will all be compiled on
> your machine, but the only way to be sure that you've got the
> right versions of all of the headers is to go through some sort
> of central server.

Perhaps you've got me wrong. The source repository is a central
database. But before you compile you get all your sources from this
central database; distributed builds will do this too. So they are
compiled locally. I can hardly think of multiple developers developing
and compiling directly on a central database, when some of them are
editing the sources.

If you have some kind of distributed build system, to gain some speed,
you will always have some kind of local view too.
Otherwise you would compile directly over the network, which IMHO would
make compilation painfully slow.

>> Must any nasty hardware construction/topology be supported or
>> should the hardware infrastructure be adapted to the compiler?
>
> No. That's exactly why `#pragma once' doesn't work: it would
> require some nasty topology to work, rather than using something
> sensible.

IMHO neither header files nor compiling directly over the network are
sensible. If the central build machine or the developers compiled
over the network directly, they would have to lock the whole source
repository / directories during compilation.

> [...]


> What's the point in having #pragma once, if you also need
> include guards.

That this will also work on hardware topologies where #pragma once
can't be used.
But as I wrote, #pragma once could be extended simply by an
identifier, which should always work as well as header guards.

> [...]


> #pragma once doesn't offer any speed gain. There is absolutely
> no difference in build times with g++ when you use include
> guards instead of #pragma once. Including over a slow network.

If you include the header file only once there can't be a speed gain.
Otherwise g++ will always open the header file multiple times, but why
should it?

> [...]


> The experience of the people at gcc seems to indicate the
> opposite.

As I already wrote, IMHO it's not the fault of the compiler but of the
OS or hardware topology, which makes it impossible to check if 2 file
paths are pointing to the same file.
If the OS can't check this reliably, it's IMHO a security problem in any case.
> [...]

Andre

James Kanze

unread,
Dec 16, 2007, 10:23:46 AM12/16/07
to
On Dec 16, 7:06 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> Bo Persson wrote:
> > [...]

> >2) Doesn't help much does it? If the #pragma once makes the compiler
> >NOT include the file, I'm toast!

> Simply don't use #pragma once for this project, meaning disable the
> extension completely ? I think this case is somewhat constructed, if you
> are using the same include guards you are toasted too.

In which case, you might as well not have it in the language,
since library code can't count on it being activated.

> And since there are common naming conventions on some
> platforms for include guards, you will have a high likelihood
> of having the same include guards in 2 files ----> ;-)

The common naming conventions will normally introduce a large
degree of randomness. To the point where a collision is less
likely than the server being destroyed by a meteorite (which of
course will cause serious compilation problems too).

> And if the #pragma once has skipped the inclusion, it's the OS
> fault. ;-).

And what does that buy me?

> As I wrote in another post, I could live with a combination of
> include guards and #pragma once. A named #pragma once

> #pragma once(_MY_HEADER_FILE)
> #pragma once(_MY_OTHER_HEADER_FILE)

> Would save me 2 lines of code ;-),

2 lines which you neither write, nor ever look at. Most source
code files start with a copyright message, which is a lot longer
than 2 lines. Similarly, my source files also end with
meta-information for the editors. And the editor positions me
after the copyright and the include guard when I open a file.

IMHO, it's just not worth the bother.

> and the compiler could distinguish header files, even if the
> OS can't.

> > So I go to the corporate IT manager (vice president) and
> > tell him that the setup of our international WAN is
> > incorrect, because it doesn't

> Please tell me that your compiler doesn't include files
> directly over WAN and instead you are using a source code
> control system, which synchronizes your local sources so that
> your compiler uses only local files and doesn't have any
> problems with checking if 2 files are identical ;-)

Why should I do something that dumb? The rule is simply: two
identical copies aren't. The version control system is a file
server, which serves up the version you're supposed to be using.
(I've used a lot of different version control systems, using
different models, but this one is an order of magnitude better
than the others for large projects.)

> > support the #pragma once feature of my C++ compiler.

> > Do you think he will order a reconfiguration? :-)

> You should tell him that the current topology doesn't support the latest
> (potential) compiler standard and the company will soon be out of
> business if the topology isn't changed ;-)

More likely, every one will ignore the latest standard, like
they did with "export".:-)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

unread,
Dec 16, 2007, 11:40:40 AM12/16/07
to
On Dec 16, 1:52 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> Pete Becker wrote:
> >[...]

> > GCC not only implements it, but declares that it's obsolete!
> > As has been said before in this thread, the problem with
> > #pragma once is that the compiler has to get it right, but
> > doesn't have enough information to do that. That's a classic
> > case for not putting it in the language, and leaving it to
> > programmers to ensure that what they're doing works right.

> And that's a classic example of how it shouldn't be. Is it all up
> to the programmer to get it right?

Given that the compiler can't. And it's the programmer, or
other tools. It's been years since I've written an include
guard myself.

> Which other language requests the programmer to do it right,
> regarding header files ? Are all the others doing something
> wrong ?

As far as I know, all other languages require the programmer to
do the right thing. For different definitions of the right
thing. (Well, Java forbids the programmer to do the right
thing, since the right thing does require keeping the
implementation and the interface specification in two different
files. But from what I understand, Java programmers have come
up with work-arounds for this defect.)

> Dealing with header files costs a lot of time and energy. The
> compiler has to compile each header file again and again and
> again, every time it's used. Why? It's mainly the same code,
> except, let's say, for abuses of macros. The same project using
> precompiled header files compiles way faster than the
> same project not using them. Modules are already proposed, but
> unfortunately are not part of the new standard. Pragma once
> helps somewhat, so that the compiler doesn't have to load and
> parse the header file again.

As mentioned before: compiling with #pragma once, or with
include guards, makes absolutely no difference in build times,
at least with g++. And that despite having to read the files
over an overloaded network, from a somewhat flaky file server
which sometimes takes its time responding.

Whatever arguments there might be, build times is not one.

> And it's the task of the OS to check if 2 paths point to the
> same file. Either by marking it with a hash code or by
> normalizing the path.

None of the OS's I know can do this. There's no support for it
in either NFS or SMB, so practically, it isn't implementable if
the files are mounted using one of those protocols.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

unread,
Dec 16, 2007, 11:40:19 AM12/16/07
to
On Dec 16, 12:03 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:
> > [...]
> > But modern compilers don't open and parse the whole header file,

> I think you mean the optimizations built in gcc for example ?

> > in cases where they know that it's the same file. And #pragma
> > once can't be made to work in the cases where they don't.

> 1) As I wrote in my other post it's the task of the OS to
> reliably check if 2 paths are pointing to the same file.
> Otherwise it's IMHO a security risk if this can't be done
> by the OS reliably.

It may be a security risk (I've not studied the issue), but
neither Unix nor Windows can do so.

Note too that in the absence of true links, it's not rare under
Windows to have two separate copies of the same file, in
different places. You still want to treat it as the same file.

> 2) Even if the OS is incapable of doing that, you could
> at least add an additional identifier to the the pragma once

> #pragma once MY_HEADER_FILE
>
> which would replace

> #ifndef MY_HEADER_FILE
> #define MY_HEADER_FILE

> .......

> #endif MY_HEADER_FILE

And what does this change?

> Or simply another new (bad word I know) keyword, instead of #pragma once.

> module MY_HEADER_FILE;

> > [...]
> > There are certainly better solutions than textual inclusion to
> > handle modularization.
> > [...]

> Yes 1 module 1 file 1 compilation ;-)

One module one file makes good software engineering more or less
impossible. At the very least, you need two files: one with the
specification, and one with the implementation. Depending on
the case, you may want more than one file for the implementation
as well. (Although you don't say so, I presume you're talking
about source files here.) Alternatively, one could imagine a
system without files at all; all sources, objects, libraries,
etc. are kept in some sort of data base. Would make integrating
with existing systems rather difficult, however.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Dennett

unread,
Dec 16, 2007, 5:07:45 PM12/16/07
to
Andre Kaufmann wrote:
> James Kanze wrote:
>
>>[...]
>> multiple mounts of the same file system. Both cases cause all
>> of the implementations of #pragma once to fail, which basically
>> means that even if it were standard, you couldn't count on it
>> working portably, and would probably continue to use the include
>> guards in portable code.
>
> When I interpret it correctly you state that under Linux, I think since
> you mentioned mount points it is Linux, it's not possible to reliably
> check if two file handles or paths are pointing to the same file ?
>
> If it is the case, I don't think it's the compiler's fault, or shouldn't be.

The C++ standard isn't about "compilers" in isolation -- it's about
"implementations" of C++, effectively the compilers, linkers, runtimes,
and operating systems (host and target) working together.

It is *not* practical in current real-world operating systems to
require that the OS be able to determine if two files are "the
same file" efficiently. In particular, networked I/O is important
for many organisations, using protocols that don't allow for this.

If C++ did have such an unimplementable requirement, it would be
ignored by implementors and by many users. #pragma once has no
realistic chance of being standardized by WG21 as far as I can see.

-- James

Andre Kaufmann

unread,
Dec 16, 2007, 5:08:00 PM12/16/07
to
James Kanze wrote:
> On Dec 16, 1:52 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
>> Pete Becker wrote:
> [...]
>
>
> As far as I know, all other languages require the programmer to
> do the right thing. For different definitions of the right
> thing. (Well, Java forbids the programmer to do the right
> thing, since the right thing does require keeping the
> implementation and the interface specification in two different
> files. But from what I understand, Java programmers have come
> up with work-arounds for this defect.)

Hm, managed languages. Take for example C#. There aren't any header
files and I don't even have to import or include any other source file.
The modules are separated by namespaces, rather than on a per-file basis.
Java IMHO goes somewhat too far, separating all objects into files, IIRC
- I'm no Java expert.

> [...]


>
> Whatever arguments there might be, build times is not one.

There are, at least for dropping header files altogether, which should
boost compilation times a lot. Pragma once would perhaps only have
minor effects.

> [...]


> None of the OS's I know can do this. There's no support for it
> in either NFS nor SMB, so practically, it isn't implementable if
> the files are mounted using one of those protocols.

Well, you could add some kind of hash cookie to a file and mark it, or
add another cookie file in the same directory. NTFS for example supports
multiple streams for a single file. As soon as the file is marked, only
the cookie must be read to check if the 2 file paths are identical.
Another variant would be to lock the files during compilation of a single
module and check if the files are locked by the same process on reopening.
I know these solutions have other (big) downsides; they are only meant as
examples of how it could be solved, if the path really can't be
canonicalized and resolved.

But let's forget #pragma once; better to drop header files altogether.

> [...]

Andre

Pete Becker

unread,
Dec 17, 2007, 1:49:19 AM12/17/07
to
On 2007-12-15 18:02:54 -0500, akfm...@t-online.de (Andre Kaufmann) said:

> James Kanze wrote:
>
> >[...]
>> multiple mounts of the same file system. Both cases cause all
>> of the implementations of #pragma once to fail, which basically
>> means that even if it were standard, you couldn't count on it
>> working portably, and would probably continue to use the include
>> guards in portable code.
>
> When I interpret it correctly you state that under Linux, I think since
> you mentioned mount points it is Linux, it's not possible to reliably
> check if two file handles or paths are pointing to the same file ?
>
> If it is the case, I don't think it's the compiler's fault, or shouldn't be.
>

Engineering is about making things work. If the bridge that you
designed collapses and a hundred people are killed, you don't get a
free pass by claiming that it wasn't your fault, the ground wasn't
stable enough. If #pragma once can't be made to work because of OS
limitations, then it can't be made to work.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference
(www.petebecker.com/tr1book)

Andre Kaufmann

unread,
Dec 17, 2007, 1:52:57 AM12/17/07
to
Francis Glassborow wrote:
> Andre Kaufmann wrote:
>[...]
> Even if you could, how do you propose that we deal with identical header
> files? Or should the standard require that only a single copy exist?

I think I don't know exactly what you mean. If you have 2 identical
header files, it doesn't matter which of them is used by the compiler.

What I really want is a module concept (a single code file without
headers) to be part of the standard as soon as possible, since this
would fix all problems with header files.

> [...]

Andre

Andre Kaufmann

unread,
Dec 17, 2007, 1:49:57 AM12/17/07
to
James Kanze wrote:
> On Dec 16, 7:06 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> [...]

> The common naming conventions will normally introduce a large

From the current gcc source repository:


#ifndef OBJALLOC_H

Guess which header file this guard is supposed to protect from multiple
inclusion.

> degree of randomness. To the point where a collision is less
> likely than the server being destroyed by a meteorite (which of
> course will cause serious compilation problems too).

See above. I tend to bet on the meteorite ;-).

> [...]

> [...]


> 2 lines which you neither write, nor ever look at. Most source
> code files start with a copyright message, which is a lot longer
> than 2 lines. Similarly, my source files also end with
> meta-information for the editors. And the editor positions me
> after the copyright and the include guard when I open file.
>
> IMHO, it's just not worth the bother.

I regularly make a copy of my header files which have gotten too big, to
divide and modularize my code. I regularly forget to rename the header
guards. Guess how much confusion this causes when the code luckily
compiles and 2-3 months later the compiler throws mysterious compilation
errors.

> [...]


> Why should I do something that dumb. The rule is simply: two
> identical copies aren't. The version control system is a file
> server, which serves up the version you're supposed to be using.
> (I've used a lot of different version control systems, using
> different models, but this one is an order of magnitude better
> than the others for large projects.)

Well, maybe for projects that are way too big to be hosted on a single
hard disk. Otherwise I would rather tend to use a source repository
database, which caches the sources locally.
Depends perhaps on the size of the project. But I tend to keep my
projects locally and sync them with the central repositories, just to be
independent of the network.

> [...]


> More likely, every one will ignore the latest standard, like
> they did with "export".:-)

Oh, come on, you don't think #pragma once is as hard to implement as
"export" ;-).

Export doesn't pay off the effort to implement it. The module concept
will address the same problems and solve some others, like this
discussion ;-999999


> [...]

Andre

James Kanze

unread,
Dec 17, 2007, 1:51:48 AM12/17/07
to
On Dec 16, 12:02 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:

> >[...]

> > multiple mounts of the same file system. Both cases cause
> > all of the implementations of #pragma once to fail, which
> > basically means that even if it were standard, you couldn't
> > count on it working portably, and would probably continue to
> > use the include guards in portable code.

> When I interpret it correctly you state that under Linux, I
> think since you mentioned mount points it is Linux, it's not
> possible to reliably check if two file handles or paths are
> pointing to the same file ?

Neither under Linux nor under Windows (nor under Solaris, nor
under any other system I've used which supports distributed
files).

> If it is the case, I don't think it's the compiler's fault, or
> shouldn't be.

Does it matter whose fault it is if the thing doesn't work? The
problem occurs in practice, probably more often than you'd
realize.

> Perhaps the OS can't check (always) 2 file paths being
> identical , but it should be quite simple to check if the file
> header on the device is the same. Same disk, same block -
> should be the same file ? At least for local files. Network is
> perhaps a different story.

For local files, I don't think it's a problem. But I can't
remember the last time I compiled against libraries on a local
disk professionally. (Even at home, when I compile under
Windows, the sources are generally remote mounted on my Linux
server. Trying to keep even two copies in sync is just an
unnecessary complication.)

> Also I tend to use full relative paths when I include header files.

> "lib\a\b\c.h"

> instead of:

> "c.h"

> So the risk of 2 files having the same relative path is IMHO very
> unlikely, if relative paths are used. (I know the downsides, but I've
> got used to them, because I think including libraries this way is more
> safe and they represent also the hierarchical character of libraries,
> just like namespaces ).

I do too (although I'll use / and not \ as a separator---works
better, even under Windows). But I fail to see where that
changes anything.

Using absolute paths might avoid the problem (although you still
have the problem of multiple copies of the same file), but that
has so many other disadvantages that I'd rather not consider it.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Francis Glassborow

unread,
Dec 17, 2007, 1:00:25 PM12/17/07
to
Andre Kaufmann wrote:
>
> Oh, come on you don't think #pragma once is so hard to implement as
> "export" ;-).
>
> Export doesn't pay off the effort to implement it. Module concept will
> address the same problems and solve some others, like this discussion
> ;-999999
>
>
>> [...]
>
> Andre
>
Well, we know that export is implementable; we do not know that #pragma
once is always implementable. Does that mean that the former is easier
than the latter? :-)

Andre Kaufmann

unread,
Dec 17, 2007, 1:05:47 PM12/17/07
to
James Kanze wrote:
> On Dec 16, 12:02 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
>> James Kanze wrote:
>
>> >[...]
>
>
> Does it matter whose fault it is if the thing doesn't work? The

Yes - IMHO. As I already wrote, I would prefer to have no header files
at all for my newly written sources (only for backwards compatibility).

> problem occurs in practice, probably more often than you'd
> realize.

Well, what I really want to say is that it's just typical for C++.
Header files should be included only once in 99.9999% of cases. Why do
I have to protect the header files with include guards or even with
#pragma once?

It feels unnatural for me to use >macros< to accomplish that. For me
that's just a workaround I have to use because there's no other solution.

> [...]


> For local files, I don't think it's a problem. But I can't
> remember the last time I compiled against libraries on a local
> disk professionally. (Even at home, when I compile under
> Windows, the sources are generally remote mounted on my Linux
> server. Trying to keep even two copies in sync is just an
> unnecessary complication.)

Well, the synchronization is done by the source revision system
automatically, and my hard disk is still way faster than the network I'm
using.

> [...]

Andre

Dave Harris

unread,
Dec 17, 2007, 1:12:27 PM12/17/07
to
akfm...@t-online.de (Andre Kaufmann) wrote (abridged):

> I regularly make a copy of my header files, which got too big, to
> divide and modularize my code. I regularly forget to rename the
> header guards.

Have you considered writing a tool to check for this mistake? I'm
guessing it would take less than 20 lines of Perl or Python; maybe even
20 lines of C++. Probably there are Lint tools which already do it.

It is easier to solve in this way because you can arrange your source so
that there is no file aliasing going on. It's harder for the compiler to
be sure of that. And the compiler can't assume the first #define in a
header file is a header guard. That said, there's nothing to stop a
compiler vendor making some assumptions and then incorporating a check
for bad header guards in their product. The standard allows compilers to
issue warnings for anything they see fit.

-- Dave Harris, Nottingham, UK.

Andre Kaufmann

Dec 17, 2007, 1:28:06 PM
James Kanze wrote:
> On Dec 16, 12:03 am, akfmn...@t-online.de (Andre Kaufmann) wrote:

> [...]


> Note too that in the absence of true links, it's not rare under
> Windows to have two separate copies of the same file, in
> different places. You still want to treat it as the same file.

Yes. But if it's the same file, there shouldn't be any problem. Either
with #pragma once or with include guards.

> [...]


>> #endif MY_HEADER_FILE
>
> And what does this change?

You could choose the compiler to handle the #pragma once(HEADER_GUARD)
to operate like a

#pragma once

or

#ifndef HEADER_GUARD
#define HEADER_GUARD

#endif // HEADER_GUARD


depending in which environment you are working.

At least only one line would be needed, and it would be as safe as
include guards. Although, contrary to the current #pragma once, the
compiler would have to open the file anyway to read the identifier.
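The bookkeeping a preprocessor would need for such a `#pragma once(IDENTIFIER)` is tiny; a sketch (function and variable names are mine, purely illustrative) of the seen-identifier check:

```cpp
#include <cassert>
#include <set>
#include <string>

// Returns true the first time an identifier is seen, false afterwards,
// i.e. whether the preprocessor should process the guarded file at all.
// This is the whole state a "#pragma once(ID)" would require.
bool enter_once(const std::string& id, std::set<std::string>& seen) {
    return seen.insert(id).second;  // set::insert reports whether it was new
}
```

The identifier plays exactly the role of the guard macro, but without requiring a matching `#define`/`#endif` pair.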


> [...]


> One module one file makes good software engineering more or less
> impossible. At the very least, you need two files: one with the
> specification, and one with the implementation.

Why ? For example, C# doesn't need a separation between specification and
implementation. The D language doesn't need a separation either.
Why should I keep specification and implementation separated, just to
keep them in sync ?

In C++ I could also write all code inline, though this wouldn't be very
efficient and doesn't work in general, because currently C++ is built on
this separation and requires it for efficient programming - but it
needn't be (in the future).

> Depending on
> the case, you may want more than one file for the implementation
> as well. (Although you don't say so, I presume you're talking
> about source files here.) Alternatively, one could imagine a
> system without files at all; all sources, objects, libraries,
> etc. are kept in some sort of data base. Would make integrating
> with existing systems rather difficult, however.

By module I mean one or multiple classes, constants, etc. which are
logically grouped in a single file and not separated into 2 files. Why
should it be ? Optionally - yes.

> [...]

Andre

Andre Kaufmann

unread,
Dec 17, 2007, 2:05:39 PM12/17/07
to
James Dennett wrote:
> Andre Kaufmann wrote:
> [...]
>
> It is *not* practical in current real-world operating systems to
> require that the OS be able to determine if two files are "the
> same file" efficiently. In particular, networked I/O is important
> for many organisations, using protocols that don't allow for this.

If the OS doesn't know which file is which and where it is located, then
the user effectively doesn't know either, and it's only a matter of time
until the first virus spreading on a source-code basis travels around the
world and gets included automatically everywhere. (This has nothing to do
with #pragma once.)

> If C++ did have such an unimplementable requirement, it would be

You mean like "export", not impossible to implement but .....

> ignored for implementors and by many users. #pragma once has no
> realistic chance of being standardized by WG21 as far as I can see.
>

> [...]

Hans Bos

Dec 17, 2007, 3:20:24 PM
"Francis Glassborow" <francis.g...@btinternet.com> wrote in message
news:3uudnVokr45FL_va...@eclipse.net.uk...

> Andre Kaufmann wrote:
>>
>> Oh, come on, you don't think #pragma once is as hard to implement as "export"
>> ;-).
>>
>> Export doesn't pay off the effort to implement it. The module concept will
>> address the same problems and solve some others, like this discussion
>> ;-)
>>
>>
>>> [...]
>>
>> Andre
>>
> Well, we know that export is implementable; we do not know that #pragma once is
> always implementable. Does that mean that the former is easier than the latter
> :-)

You can define that if a file contains #pragma once, every file (whatever its
name) that is included and has the same contents as the original file is
ignored.

You can implement this on every OS (just compare each file to be included with
all files that were already included and that contain #pragma once).

This can be optimized by calculating a hash value for each file that is included
and comparing the hash values first.
You can also use existing methods that compilers currently use for #pragma once.

Using this scheme you only have to check files for which you can't be certain
they are already included (if you know that a file is the same as an already
included file, you can skip it, just like gcc, and other compilers do right
now).
In the current situation (without #pragma once) these files must also be read
and parsed by the compiler, so there is no big loss in efficiency.
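The content-hash scheme Hans describes can be sketched in a few lines. This is a simplification using `std::hash` (names are mine); as noted in the comments, a real compiler would fall back to a byte-wise comparison on a hash match to rule out collisions:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <set>
#include <string>

// Decide whether an included file can be skipped because a file with
// identical contents (marked #pragma once) was already included.
// Hash collisions are ignored here for brevity; a real implementation
// would compare the full contents when two hashes match.
bool already_included(const std::string& contents,
                      std::set<std::size_t>& seen_hashes) {
    std::size_t h = std::hash<std::string>{}(contents);
    return !seen_hashes.insert(h).second;  // false if this content is new
}
```

Note that this keys on contents rather than pathname, which is exactly what sidesteps the aliased-path problem discussed throughout the thread.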

Steve Clamage

Dec 17, 2007, 5:06:51 PM
On 12/17/2007 10:28 AM, Andre Kaufmann wrote:
>
> At least only one line would be needed and it would be as safe as
> include guards. Although the compiler would have to open the file
> anyways, contrary to the current #pragma once to read the identifier.

Actually, the compiler does not necessarily have to open the same file
more than once when you use include guards.

Sun C++, among others, keeps track of whether an included file has all
of its non-comment code inside include guards, and if so, ignores the
file if that include guard is still true and the file is included again.

If the compiler can't tell that the same file is being included, #pragma
once wouldn't work either.
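The check Steve describes - that all of a header's code sits inside one include guard - can be approximated by inspecting the first and last directives. A rough sketch (my own simplification: comments and nested `#if`/`#endif` pairs are not handled, which a real compiler must track):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Returns true if every non-blank line of the header lies between an
// opening "#ifndef X" / "#define X" pair and a final "#endif".
// Simplified: comments and nested conditionals are ignored here.
bool fully_guarded(const std::string& header_text) {
    std::vector<std::string> lines;
    std::istringstream in(header_text);
    std::string line;
    while (std::getline(in, line))
        if (line.find_first_not_of(" \t") != std::string::npos)
            lines.push_back(line);
    return lines.size() >= 3 &&
           lines.front().compare(0, 8, "#ifndef ") == 0 &&
           lines[1].compare(0, 8, "#define ") == 0 &&
           lines.back().compare(0, 6, "#endif") == 0;
}
```

A header passing this test (and whose guard macro is still defined) can be skipped on re-inclusion without reopening the file, which is the optimization described above.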

---
Steve Clamage
Sun C++ compiler team

James Kanze

Dec 17, 2007, 6:47:27 PM
On Dec 17, 7:05 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> James Kanze wrote:

> Well, what I really want to say is that it's just typical for
> C++. Header files should be included only once in 99.9999% of cases.
> Why do I have to protect the header files with include guards
> or even with #pragma once ?

Because there's no other technical solution.

That's not really true, of course. Modula-2 did it, and so does
Ada. But in a far different context, with a somewhat different
compilation model. It doesn't work with the current compilation
model of C++, and you can't eliminate that. It's possible to
provide an alternative, of course, and there is a proposal for
modules being considered. But to make it work, you need to go
far beyond simply "#pragma once".

> It feels unnatural for me using >macros< to accomplish that.
> For me that's just a workaround I have to use because there's
> no other solution.

I don't think anyone would deny that. It's a hack.

> > [...]
> > For local files, I don't think it's a problem. But I can't
> > remember the last time I compiled against libraries on a local
> > disk professionally. (Even at home, when I compile under
> > Windows, the sources are generally remote mounted on my Linux
> > server. Trying to keep even two copies in sync is just an
> > unnecessary complication.)

> Well the synchronization is done by the source revision system
> automatically and my hard disk is still way faster than the
> network I'm using.

I've never seen a system where the synchronization actually
worked reliably, other than with Clearcase, which uses a file
server model. And of course, other machines can't read your
local disk. What happens when you do a complete rebuild, where
the compiles are distributed throughout the network? What
happens when you want to discuss something with a colleague, at
his desk?

Nothing which you are actually working on should ever be on a
local disk; if you're working on it, you must be able to access
your current, working version, from all of the machines on the
network.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 17, 2007, 6:55:25 PM
On Dec 17, 7:12 pm, brang...@ntlworld.com (Dave Harris) wrote:
> akfmn...@t-online.de (Andre Kaufmann) wrote (abridged):
> > I regularly make a copy of my header files, which got too big, to

> > divide and modularize my code. I regularly forget to rename the
> > header guards.

> Have you considered writing a tool to check for this mistake? I'm
> guessing it would take less than 20 lines of Perl or Python; maybe even
> 20 lines of C++. Probably there are Lint tools which already do it.

I'll admit that it's a case I've never really encountered. It
sounds like it would break all client code, since they'd now
have to include two headers, instead of one. But even if you
wanted to do it, you're going to be doing it in an editor. And
as soon as you create a new header file in the editor, the
include guards are created, with a new, unique name. So you
don't copy/paste them.

Include guards would be a pain if the programmer ever had to
think about them, or look at them. But he doesn't, at least not
with any decent toolset.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 17, 2007, 7:05:17 PM
On Dec 16, 4:16 pm, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:
> > On Dec 15, 6:16 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> > [...]
> > I'd rather use an intelligent version control system, rather
> > than have to be sure of updating manually at the correct moment.
> > Remember that intelligent version control systems, like
> > Clearcase, behave as file servers, so that you always see the
> > version you're supposed to see.

> I do use version control systems too. Though I have always a
> local view of the source files on my local hard disk.

Which means that you can't discuss your work with colleagues.
Not a very good practice.

> > For that matter, I don't think I've ever worked at a place where
> > the source files were on a local disk. What do you do when you
> > start discussing them with a colleague, on his machine?

> And what will you do if your colleague changes the source file
> during compilation ?

If I've got the file checked out, he can't write it. And of
course, for the files I don't have checked out, I see a stable
version.

> >> Anyways better to compile on a local machine than compiling
> >> over network, or am I totally wrong ?

> > Totally wrong. Complete builds would take forever on a single
> > machine. Your local remakes, of course, will all be compiled on
> > your machine, but the only way to be sure that you've got the
> > right versions of all of the headers is to go through some sort
> > of central server.

> Perhaps you've got me wrong. The source repository is a
> central database. But before you compile you get all your
> sources from this central database, and distributed builds will
> do this too. So they are compiled locally. I can hardly
> imagine multiple developers developing and compiling directly
> on a central database while some of them are editing the
> sources.

All I can say is that I've never seen a system with local copies
which worked reliably. You see what the file server decides
that you should see. The file server is the version control
system. As versions evolve, you see newer versions.

Obviously, you are informed that the change will take place.
But it doesn't involve copying some twenty or thirty different
files to over a hundred different machines, hoping that they're
all up at the moment you decide to do the copy, and that nothing
goes wrong.

> If you have some kind of distributed build system, to gain
> some speed, you will always have some kind of local view
> too. Otherwise you would compile directly over the network,
> which IMHO would make compilation painfully slow.

I've rarely compiled against local files, and only in very small
projects. And compilation isn't particularly slow. It is, in
fact, a lot faster than searching down the problems because you
somehow ended up with incompatible versions.

> >> Must any nasty hardware construction/topology be supported
> >> or should the hardware infrastructure be adopted to the
> >> compiler?

> > No. That's exactly why `#pragma once' doesn't work: it
> > would require some nasty topology to work, rather than using
> > something sensible.

> Neither header files nor compiling directly over the network
> are IMHO sensible. If the central build machine or the developers
> compiled over the network directly, they would have to
> lock the whole source repository / directories during
> compilation.

Obviously, you've never used a modern version control system,
which works like a file server. You really should give
Clearcase (or something like it) a try.

> > [...]
> > What's the point in having #pragma once, if you also need
> > include guards.

> That this will also work on hardware topologies, where #pragma
> once can't be used. But as I wrote to #pragma once could be
> extended simply by an identifier, which should work always as
> good as header guards

And I said: why bother? What's the difference?

> > [...]
> > #pragma once doesn't offer any speed gain. There is
> > absolutely no difference in build times with g++ when you
> > use include guards instead of #pragma once. Including over
> > a slow network.

> If you do include the header file only once there can't be a
> speed gain. Otherwise g++ will always open the header file
> multiple times, but why should it ?

It doesn't. No modern compiler does. The only time it has to
open the file more than once is when it is unsure if it is the
same file.

> > [...]
> > The experience of the people at gcc seems to indicate the
> > opposite.

> As I already wrote, IMHO it's not the fault of the compiler but of the
> OS or hardware topology, which makes it impossible to check if
> 2 file paths are pointing to the same file. If the OS can't
> check this reliably, it's IMHO a security problem in itself.

You may think it's a security problem, but it's the way things
work in real life. Neither SMB nor NFS has a means of
asserting whether two pathnames are identical. So "#pragma
once" can't be made to work reliably under Windows or under
Unix. The probability of a feature being adopted which can't be
implemented reliably under Unix or under Windows is pretty
small.
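Part of why pathname identity is so hard: lexical normalization alone cannot solve it. As an illustration, `std::filesystem` (C++17, so long after this thread) can collapse `.` and `..` segments purely textually, but symlinks, hard links and differently-mounted network shares defeat any such comparison - which is exactly the SMB/NFS situation described above. The helper name is mine:

```cpp
#include <cassert>
#include <filesystem>
#include <string>

// Lexically normalize a path: collapses "." and "dir/.." without
// touching the filesystem. Paths that normalize equally name the same
// file - but the converse fails for symlinks, hard links and network
// mounts, so this alone cannot implement a reliable #pragma once.
std::string normalized(const std::string& p) {
    return std::filesystem::path(p).lexically_normal().generic_string();
}
```

Two include paths like `include/../include/foo.h` and `include/foo.h` compare equal after normalization, while two genuinely aliased mount points remain distinct strings.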

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 17, 2007, 8:12:35 PM
On Dec 17, 7:28 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> James Kanze wrote:
> > On Dec 16, 12:03 am, akfmn...@t-online.de (Andre Kaufmann) wrote:

> > [...]


> At least only one line would be needed, and it would be as safe as
> include guards. Although, contrary to the current #pragma once, the
> compiler would have to open the file anyway to read the identifier.

Since the lines are generated by the editor, who cares how many?

> > [...]
> > One module one file makes good software engineering more or less
> > impossible. At the very least, you need two files: one with the
> > specification, and one with the implementation.

> Why ? For example C# doesn't need a separation between
> specification and implementation. The D language, doesn't need
> a separation too.

That probably explains why those languages aren't the language
of choice for large projects. (Actually, of course, a lot of
issues enter into consideration. But good separation of the
specification and the implementation is certainly one.)

> Why should I keep specification and implementation separated,
> just to keep them in sync ?

Because other users don't want you to accidentally change the
specification when you're fixing an implementation bug. The
specification is part of the contract; you shouldn't be able to
change it without the other parties of the contract agreeing to
the change.

On large projects, of course, you can't just change it on a
whim. The normal development programmer doesn't even have write
permissions on the header files.

> In C++ I could also write all code inline, though this
> wouldn't be quite efficient and doesn't work generally because
> currently C++ is built on this separation and requires it for
> efficient programming - but it mustn't (in the future).

Efficient programming requires that the contract be separated
from the implementation. This is a basic principle of software
engineering. Why do you think all of the coding guidelines
forbid implementing the functions in the class definition?

C++ is far from perfect here, but the problem with C++ is that
it still requires some implementation details (e.g. private
data) in the class definition. That it doesn't go far enough in
making this separation. A better example of how it should be
done would be Ada, or the Modula family of languages.

> > Depending on the case, you may want more than one file for
> > the implementation as well. (Although you don't say so, I
> > presume you're talking about source files here.)
> > Alternatively, one could imagine a system without files at
> > all; all sources, objects, libraries, etc. are kept in some
> > sort of data base. Would make integrating with existing
> > systems rather difficult, however.

> With Module I mean one or multiple classes, constants, etc.
> which are logically grouped in a single file and not separated
> in 2 files. Why should it ?

Because it's good software engineering.

> Optionally - yes.

For small, test programs, it's convenient not to need headers.
For those, you can always use Java. (Although with my
environment, it's pretty easy to move into C++ even for smaller
programs. But it's in big programs where C++ really shines.)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Andre Kaufmann

Dec 18, 2007, 1:59:15 AM
Steve Clamage wrote:
> On 12/17/2007 10:28 AM, Andre Kaufmann wrote:
>
> Actually, the compiler does not necessarily have to open the same file
> more than once when you use include guards.
>
> Sun C++, among others, keeps track of whether an included file has all
> of its non-comment code inside include guards, and if so, ignores the
> file if that include guard is still true and the file is included again.

That's fine; the problem is that another layer of complexity is added to
the compiler to get it right, checking whether all code is enclosed in
include guards.

> If the compiler can't tell that the same file is being included, #pragma
> once wouldn't work either.

A pragma with an identifier would work too.

> [...]

Andrei Alexandrescu (See Website For Email)

Dec 18, 2007, 1:59:35 AM
James Kanze wrote:
> You may think it's a security problem, but it's the way things
> work in real life. Neither SMB nor NFS have a means of
> asserting whether two pathnames are identical. So "#pragma
> once" can't be made to work reliably under Windows nor under
> Unix. The probability of a feature being adopted which can't be
> implemented reliably under Unix nor under Windows is pretty
> small.

I guess your argument would be strengthened by you also showing a
significant necessity for projects to use multiple aliased include
search paths that resolve to the same physical directories. In the
projects I've worked on, this was always a project management blunder.

Andrei

Alf P. Steinbach

Dec 18, 2007, 2:19:08 AM
* James Kanze:

> On Dec 17, 7:28 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
>
>> Why should I keep specification and implementation separated,
>> just to keep them in sync ?
>
> Because other users don't want you to accidentally change the
> specification when you're fixing an implementation bug. The
> specification is part of the contract; you shouldn't be able to
> change it without the other parties of the contract agreeing to
> the change.
>
> On large projects, of course, you can't just change it on a
> whim. The normal development programmer doesn't even have write
> permissions on the header files.

I think the point was that the extraction of a tool-compatible version of
an interface, for the purpose of using a module via those tools, is
ideally a tool job, not a programmer job.

Lacking tool support, which is due to lacking language support, one may
end up with draconian rules such as you describe. At least when proper
support for hierarchical division of the work isn't in place. And with
hierarchy support one may end up with something like the now infamous
Microsoft shutdown menu case, where a simple change of layout in a menu
took several man-years, due in large part to the cumulative lags
(physical and organizational) introduced by hierarchical source control.

Context-independent rules restricting what programmers can do
essentially throw the development process back to the time of waterfall
methodology, and treat programming as something that could in principle
be accomplished mechanically, i.e. involving no intelligence but a
simple transcription of design to implementation language. Waterfall
methodology may work well (even splendidly) for certain kinds of
projects, but not in general. When the language support limits the tool
support to such a degree that one must adjust the methodology to 1950's
ways, I think it's time to start asking whether a bit of language
support wouldn't be a Good Thing -- or, some other language... ;-)


[snip]


> C++ is far from perfect here, but the problem with C++ is that
> it still requires some implementation details (e.g. private
> data) in the class definition. That it doesn't go far enough in
> making this separation. A better example of how it should be
> done would be Ada, or the Modula family of languages.

That's one way, and the Eiffel/Java/C# way is another, and IMHO better.
In both cases a compiled version of the interface is generated. But
with Eiffel/Java/C# extracting the /textual specification/ of the
interface is the tool's job, not the programmer's. And if you really
want more waterfall-like methodology and no-interface-change-by-mortals,
then again that's in principle enforceable by tools; Eiffel development
environments had some support for that by "frozen" classes, but as far
as I'm aware (I'm not really up-to-date) such support is lacking for
Java and C#, due to lack of any real demand for the feature.

So I think for a C++ module concept one should look not only to Modula-2
(which as I understand it was the basis for Daveed's proposal), but
toward more modern module implementations that have been based on and
evolved from the early Modula-ideas of Niklaus Wirth.

In short, I don't think it's a good idea to try to incorporate features
of header files in a new C++ module concept. One is free to innovate.
And in particular, I think a crucial concept here is the freedom to
/not/ support the complete language: a module need not be able to export
everything that can be made available via a header file, such as
preprocessor symbols, general templates (experience from Java and .NET
in this regard may be useful), and so on; it doesn't have to Save The
Whole World but only to be immensely useful and productivity-enhancing.

Cheers,

- Alf


--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Andre Kaufmann

Dec 18, 2007, 2:23:06 AM
James Kanze wrote:
> On Dec 17, 7:28 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
>> James Kanze wrote:
>>> On Dec 16, 12:03 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
>
>> > [...]
>> At least only one line would be needed and it would be as save as
>> include guards. Although the compiler would have to open the file
>> anyways, contrary to the current #pragma once to read the identifier.
>
> Since the lines are generated by the editor, who cares how many?

Since when does C++ rely on an editor to do it right ? In previous
discussions I've been told the opposite ;-).

A simple task: check whether all header files in a directory have unique
include guards. It would be simple if there were only one line to search
for.

What's that example good for ? Perhaps nothing. It's only an example of how
C++ makes it more complex for other tools (refactoring) / editors to do
it right.

> [...]


>> Why ? For example C# doesn't need a separation between
>> specification and implementation. The D language, doesn't need
>> a separation too.
>
> That probably explains why those languages aren't the language

Hm, D isn't that old.

> of choice for large projects. (Actually, of course, a lot of
> issues enter into consideration. But good separation of the
> specification and the implementation is certainly one.)

Once again, why ? Regarding separation, I work on a visual layer showing
me only the functions and class hierarchy. It's quite simple to write
such tools in these languages, precisely because specification and
implementation aren't separated.

These languages as well support abstract interfaces. Such an interface
plays the part of separation you are demanding.

>> Why should I keep specification and implementation separated,
>> just to keep them in sync ?
>
> Because other users don't want you to accidentally change the
> specification when you're fixing an implementation bug.

Changing a function signature in a base class will cause a compilation
error too; likewise, when you change an interface base class, all client
code is broken.

> The
> specification is part of the contract; you shouldn't be able to
> change it without the other parties of the contract agreeing to
> the change.

In C# and Java I can't change the "contract" of abstract base classes
without causing compilation errors.

In C++ I can, because the language is still missing an override keyword -
which I've been told is a minor problem here.

So I'm now curious how you would argue in this case: ;-)
You want the specification to be part of the contract, but regarding
overriding virtual functions it won't work in C++ ?

> On large projects, of course, you can't just change it on a
> whim.

Template class + abstract virtual functions + several layers ===
sooner or later booooom in C++.

> The normal development programmer doesn't even have write
> permissions on the header files.

Well, you could do the same with interfaces (abstract base classes) in C#
and D.

But another question:

How do you separate specification and implementation in template classes ?

Export, hm. if supported. Don't use templates - hm ?


> [...]


> Efficient programming requires that the contract be separated
> from the implementation. This is a basic principle of software
> engineering. Why do you think all of the coding guidelines

Because in C++ I have to, in order to separate code. Pimpl, for example.

> forbid implementing the functions in the class definition.


> C++ is far from perfect here, but the problem with C++ is that
> it still requires some implementation details (e.g. private

Not if you use virtual abstract base classes. Interfaces.

> data) in the class definition.

Or use pimpl - though too much editing for such a simple task.
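The pimpl (compilation firewall) idiom referred to here, sketched as one translation unit for brevity; in practice `Widget::Impl` lives in the .cpp so the private data never appears in the header. The class and member names are illustrative, and `std::unique_ptr` is a modern (C++11, post-dating this thread) touch:

```cpp
#include <cassert>
#include <memory>
#include <string>

// --- widget.h: clients see no private data, only an opaque pointer ---
class Widget {
public:
    Widget();
    ~Widget();                    // defined where Impl is a complete type
    void set_name(const std::string& n);
    const std::string& name() const;
private:
    struct Impl;                  // forward declaration only
    std::unique_ptr<Impl> pimpl_;
};

// --- widget.cpp: private data can change without touching the header ---
struct Widget::Impl {
    std::string name;
};

Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default;
void Widget::set_name(const std::string& n) { pimpl_->name = n; }
const std::string& Widget::name() const { return pimpl_->name; }
```

The cost is the extra boilerplate and an allocation plus indirection per object - the "too much editing for such a simple task" complaint above.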

> That it doesn't go far enough in
> making this separation. A better example of how it should be
> done would be Ada, or the Modula family of languages.
>

> [...]


> Because it's good software engineering.

Then should we throw away the STL because it's bad software engineering ?
And never use/add module concepts in/to C++ because there is no
separation into 2 header files ?

>> Optionally - yes.
>
> For small, test programs, it's convenient not to need headers.
> For those, you can always use Java. (Although with my
> environment, it's pretty easy to move into C++ even for smaller
> programs. But it's in big programs where C++ really shines.)

I don't think so. C++ shines mainly because it has the most efficient
compilers and because it supports RAII - stack-allocated objects. A huge
code base and RAII are the only reasons why I still stick with C++.

C++ creates fast code, but currently the other languages can sometimes
outperform C++ without creating optimized code, because they now use
multiple cores more efficiently.
In C++ I have proprietary extensions like OpenMP, but they are not as
simple to deal with as in the other languages.


I don't want to start a huge discussion about the efficiency of C++,
hmpf, I think I have.
No - C++ is still a good language. But that doesn't mean it to be
generally superior to other languages.

> [...]

James Kanze

Dec 18, 2007, 9:39:01 AM
On Dec 18, 8:23 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:
> > On Dec 17, 7:28 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> >> James Kanze wrote:
> >>> On Dec 16, 12:03 am, akfmn...@t-online.de (Andre Kaufmann) wrote:

> >> > [...]
> >> At least only one line would be needed, and it would be as
> >> safe as include guards. Although the compiler would have to
> >> open the file anyway, contrary to the current #pragma once,
> >> to read the identifier.

> > Since the lines are generated by the editor, who cares how many?

> Since when does C++ rely on an editor to do it right ?

Since always:-). Textual inclusion is, well textual inclusion,
and text is generated by an editor. (Not always, and not only,
of course.) In some cases, the "editor" might be something like
Rational Rose, in which drawing diagrams and filling in forms
creates the text, but the principle is the same.

And I think we're in agreement that better solutions exist.

> In previous discussions I've been told the opposite ;-).

> A simple task: Check if all header files in a directory have
> unique include guards. Simple with only one line of code to
> search.

> What's that example good for. Perhaps nothing. It's only an
> example how C++ makes it more complex for other tools
> (refactoring) / editors to do it right.

If the task is so simple, I don't see how you argue that C++
makes it more complex. (In fact, of course, the preprocessor
makes other tools a lot more complex. But adding #pragma once
won't change that.)

> > [...]


> These languages as well support abstract interfaces. Such an
> interface plays the part of separation you are demanding.

It's used for that, the same as textual inclusion is in C++.
See my response to Alf.

[...]


> > The specification is part of the contract; you shouldn't be
> > able to change it without the other parties of the contract
> > agreeing to the change.

> In C# and Java I can't change the "contract" of abstract base
> classes without causing compilation errors.

> In C++ I can, because the language is still missing an
> override keyword, I've been told to be a minor problem here.

> So I'm now excited how you argument in this case: ;-) You want
> the specification be part of the contract, regarding
> overriding virtual functions it won't work in C++ ?

I'm not sure what problem you're talking about here. If you
change the signature of an abstract function in Java, then the
classes extending the abstract class or implementing the
interface will not compile, because the class will not have been
declared abstract, but will fail to implement the abstract
function. This is good. Not modifying the interface to begin
with is better, of course.

IMHO, C++ could benefit with some additional categorization of
member functions: override and final, for example, come to mind.
Whether the benefit is worth the effort is another problem; I'll
admit that in practice, I've not seen a lot of errors due to
this.

But how is this in any way relevant to #pragma once, or the
necessity of separating specification from implementation (which
are also more or less orthogonal, except that if we design a
new, really good way of making the specification, the discussion
could end up moot).

> > On large projects, of course, you can't just change it on a
> > whim.

> Template class + abstract virtual functions + several layers
> === sooner or later booooom in C++.

Unmanaged change, regardless of the language, and sooner or
later boom.

[...]
> But another question:

> How do you separate specification and implementation in
> template classes ?

The implementation is in a .tcc file, the specification in a
.hh.

> Export, hm. if supported. Don't use templates - hm ?

Most large projects do NOT allow defining templates in
application level code. Because of the coupling that they
introduce. (Also because of the instability of implementations,
of course. It's sad to say that even now, 8 years after the
formal adoption of the standard, very few compilers come
anywhere close to implementing it, and all of them
implement something different.)

> > [...]
> > Efficient programming requires that the contract be separated
> > from the interface. This is a basic principle of software
> > engineering. Why do you think all of the coding guidelines

> Since in C++ I have to, to separate code. Pimpl for example.

> > forbid implementing the functions in the class definition?
> > C++ is far from perfect here, but the problem with C++ is that
> > it still requires some implementation details (e.g. private
>
> Not if you use virtual abstract base classes. Interfaces.

> > data) in the class definition.

> Or use pimpl - though too much editing for such a simple task.

On one project I worked on, Rational Rose generated the
compilation firewall idiom automatically in the header (so the
header never needed to be touched by a human, not even to add
private members). But I agree, a lot more tool (or language)
support for this sort of thing is necessary.

> > [...]
> > Because it's good software engineering.

> Then throw away STL because it's bad software engineering ?

Well, it's certainly not an example of good overall design:-).
But there's nothing in its specification which would prevent an
implementation from implementing it cleanly, and the
implementations I've seen (principally g++) do separate
implementation and interface in a lot of cases. (I got the idea
for using .tcc files for template implementations from g++.)

[...]


> No - C++ is still a good language. But that doesn't mean it to
> be generally superior to other languages.

Certainly not. No language is perfect, and the C++ model for
modularization is certainly not one of its strong points.
Compared to, say Ada or Modula-2.

Also, the more a language is used, the more techniques (or
work-arounds) are developed to manage its weak points. You
mentioned using interfaces in Java, to simulate header files.
One of the strong points of C++ is that it is widely used, so
solutions have been found for many of its problems---things like
the compilation firewall idiom. For that matter, I'm convinced
that its wide use has led to many problems being noticed that
often went overlooked in earlier languages which implemented the
feature: C++ was fairly late in its adoption of exceptions, but
most of our understanding of things like exception safety come
from C++, and not from its precursors.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

Dec 18, 2007, 9:51:27 AM
On Dec 18, 8:19 am, "Alf P. Steinbach" <al...@start.no> wrote:
> * James Kanze:
> > On Dec 17, 7:28 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:

> >> Why should I keep specification and implementation separated,
> >> just to keep them in sync ?

> > Because other users don't want you to accidentally change the
> > specification when you're fixing an implementation bug. The
> > specification is part of the contract; you shouldn't be able to
> > change it without the other parties of the contract agreeing to
> > the change.

> > On large projects, of course, you can't just change it on a
> > whim. The normal development programmer doesn't even have
> > write permissions on the header files.

> I think the point was that the extraction of tool-compatible
> version of an interface for the purpose of using a module via
> those tools, is ideally a tool job, not a programmer job.

I'm having trouble parsing that sentence, but I think I agree.

The normal way of developing code, regardless of the size of the
project, is to start with a specification. At some point,
particularly on a large project, that specification becomes more
or less frozen, since other code depends on it. (Nothing is
ever completely frozen, of course. But a change in the
specification costs significantly more than a change in the
implementation.) So you do want to maintain that specification
separately from the implementation. Ideally, too, the compiler
will check use in other modules against the specification, not
the implementation, and will check the implementation itself
against the specification. Practically, of course, there are
some things that the compiler can't check: the specification
might say that your function should pop up a window on the
screen, for example. But C++, like many languages, does provide
a formalism for defining some aspects of the specification,
through its type system, for example.

The mechanism that C++ uses to pass this specification to the
compiler, include files, is extremely primitive, and more of a
hack than anything else. And the include file is probably not
the best level of abstraction for the specification---in the
better run projects, the include files are generally more or
less automatically generated by some tool, like Rational Rose.
In many such projects, the framework of the implementation files
is also tool generated; in all cases, however, you have to "fill
it out" with the actual implementation code, whereas it's usual
that most of the header files never see an editor. In at least
one project, in fact, the header files were managed under
Clearcase as a derived object, and couldn't be edited, only
(re-)generated. (On that project, the implementation files were
not generated at all, not even the framework.)

> Lacking tool support, which is due to lacking language
> support, one may end up with draconian rules such as you
> describe.

I'm not sure that you need direct language support for good tool
support, but it's certain that the preprocessor model used by
C++ makes tool support considerably more difficult. About the
only thing worse is the all-in-one model of Java and C#. (At
least in Java, the usual work-around is to define an interface,
and use it. This takes care of separating the specification
from the implementation to a degree, but you can't specify the
constructors in the interface, and you can't add code which
enforces the contract.)

> At least when proper support for hierarchical division of the
> work isn't in place. And with hierarchy support one may end
> up with something like the now infamous Microsoft shutdown
> menu case, where a simple change of layout in a menu took
> several man-years, due in large part to the cumulative lags
> (physical and organizational) introduced by hierarchical
> source control.

> Context-independent rules restricting what programmers can do
> essentially throw the development process back to the time of
> waterfall methodology, and treat programming as something that
> could in principle be accomplished mechanically, i.e.
> involving no intelligence but a simple transcription of design
> to implementation language.

I'm not sure what your point is. Modifying an interface used by
others has significant cost, and there's nothing worse for
developers than having the interface they are using change
without warning. Good development processes manage change.
Even in a small project, with only one or two developers, there
are distinct roles, and the developers wear different hats at
different times. Even if I'm working alone on a project, I
won't modify an interface which is used throughout the project
without first doing some sort of cost-benefits analysis.

> Waterfall methodology may work well (even splendidly) for
> certain kinds of projects, but not in general. When the
> language support limits the tool support to such a degree that
> one must adjust the methodology to 1950's ways, I think it's
> time to start asking whether a bit of language support
> wouldn't be a Good Thing -- or, some other language... ;-)

I'm not sure what you really mean by "waterfall methodology".
About the only times I've heard the expression used is as a
strawman---the user invents some horrible process methodology
that has never actually been used anywhere, in order to make his
process look good.

C++ could certainly stand some improvement in this area. Have
you seen the papers by Daveed Vandevoorte concerning modules,
for example? (Inspired by, I believe, Modula-2. Which got this
aspect more or less right. Considerably better than C++ or
Java, at any rate.)

The problem is that regardless of what may be adopted, we can't
get rid of the existing technology without breaking code. So
tools supporting C++ will still have to deal with the
preprocessor, with all of the problems that entails.

> [snip]

> > C++ is far from perfect here, but the problem with C++ is that
> > it still requires some implementation details (e.g. private
> > data) in the class definition. That it doesn't go far enough in
> > making this separation. A better example of how it should be
> > done would be Ada, or the Modula family of languages.

> That's one way, and the Eiffel/Java/C# way is another, and
> IMHO better.

Except that it doesn't work in practice. Every large Java
project ends up simulating "include files" with interfaces,
because the separation is more or less necessary.

> In both cases a compiled version of the interface is generated. But
> with Eiffel/Java/C# extracting the /textual specification/ of the
> interface is the tool's job, not the programmer's.

The real problem is that Java (and a lot of other languages and
tools) seem to be attacking the problem in reverse. Using tools
like JavaDoc or Doxygen to extract the user readable
specification from the implementation code. Whereas a good
software developement process would do just the opposite: start
with user readable specifications, and gradually formalize them
to a point where they could be used to automatically generate
the compiler readable interface specification. As I mentioned
somewhere else in this thread, the best run projects I've been
in have used Rational Rose (but I'm sure there are other good
tools as well) and Knuth has written a lot about literate
programming. I'm convinced that the best answer will combine
both somehow. At present, CWeb has no real support for any
graphics (class diagrams, etc.), and Rose is far from ideal when
the problem description is best embedded in plain English text,
rather than diagrams.

In other words, extracting the textual specification of the
interface from the implementation code doesn't work because you
cannot write the implementation code until you have the textual
specification.

(And of course, there are any number of work-arounds. We all
know that a good software process will find a way to use any
language effectively.)

> And if you really want more waterfall-like methodology and
> no-interface-change-by-mortals, then again that's in principle
> enforcable by tools; Eiffel development environments had some
> support for that by "frozen" classes, but as far as I'm aware
> (I'm not really up-to-date) such support is lacking for Java
> and C#, due to lack of any real demand for the feature.

As far as I'm aware, Java and C# aren't used that often on large
projects:-).

Seriously, there is more than one way to skin a cat, and larger
projects in Java---at least the ones I've seen---tend to use
separate interfaces (in the sense of the Java keyword) to
simulate C++ headers. With the plus that you don't have the
private data in them, and the minus that you can't specify the
constructors or implement contract checking code (and the really
big plus that you don't go through the pre-processor). I'm
certain, as well, that some larger projects will have been done
in a language which doesn't even support this---in which the
interface specification was not processed at all by the
compiler. You just need more discipline in such cases: stricter
code reviews, etc. (Projects were generally a lot smaller when
such languages were in common use, however. I seem to recall
hearing of a 100 000 line project in Fortran being considered
really large.)

None of which has anything to do with "waterfall-like
methodology" or "no-interface-change-by-mortals". It's just a
question of how (or if?) you manage change.

> So I think for a C++ module concept one should look not only
> to Modula-2 (which as I understand it was the basis for
> Daveed's proposal), but toward more modern module
> implementations that have been based on and evolved from the
> early Modula-ideas of Niklaus Wirth.

For example? Ada is pretty similar to Modula in its form, I
think. Languages like Java and C# seem to have decided to
ignore the issue completely, and require "hacks" and a lot of
external management.

> In short, I don't think it's a good idea to try to incorporate
> features of header files in a new C++ module concept. One is
> free to innovate. And in particular, I think a crucial
> concept here is the freedom to /not/ support the complete
> language: a module need not be able to export anything and all
> that can be made available via a header file, such as
> preprocessor symbols, general templates (experience from Java
> and .NET in this regard may be useful), and so on; it doesn't
> have to Save The Whole World but only to be immensely useful
> and productivity-enhancing.

The reason one feels the need of modules in C++ is precisely
because header files are a very poor surrogate. They can be
made to work, more or less, with a fair amount of discipline,
but as you say, the preprocessor alone (regardless of what it's
being used for) causes all sorts of problems, the current model
doesn't maintain a strict enough division, etc., etc. An
important feature of a module is (or should be) that the
programmer can specify exactly what is exported (and thus, by
default, what isn't).

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 18, 2007, 11:56:40 AM
On Dec 18, 7:59 am, SeeWebsiteForEm...@erdani.org ("Andrei
Alexandrescu (See Website For Email)") wrote:
> James Kanze wrote:
> > You may think it's a security problem, but it's the way things
> > work in real life. Neither SMB nor NFS have a means of
> > asserting whether two pathnames are identical. So "#pragma
> > once" can't be made to work reliably under Windows nor under
> > Unix. The probability of a feature being adopted which can't be
> > implemented reliably under Unix nor under Windows is pretty
> > small.

> I guess your argument would be strengthened by you also showing a
> significant necessity for projects to use multiple aliased include
> search paths that resolve to the same physical directories. In the
> projects I've worked on, this was always a project management blunder.

On the projects I've worked on, it has generally occurred because
we used third party libraries, which in turn depended on other
libraries. Or because different people's home directories were
mounted on different file systems.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Jean-Marc Bourguet

Dec 18, 2007, 12:12:56 PM
James Kanze <james...@gmail.com> writes:

> That's not really true, of course. Modula-2 did it, and so does
> Ada. But in a far different context, with a somewhat different
> compilation model.

I don't see much relevant differences between the compilation model of Ada
as implemented by GNAT (the Ada compiler of GCC) and the one of C++.

> It doesn't work with the current compilation model of C++, and you can't
> eliminate that. It's possible to provide an alternative, of course, and
> there is a proposal for modules being considered. But to make it work,
> you need to go far beyond simply "#pragma once".

You have to rely on a name to ensure uniqueness. The include guard
relies on macro name. Ada as implemented by GNAT relies on package name.
File identity is not a good way: there are too many corner cases (network
file systems, hard links, symbolic links), and there are cases where you want
copies to be considered as the same header (one copy in an "installed
headers" directory and one in the same directory as the compilation units
implementing the interface is common in my experience, and you don't want
to include it twice just because it's not the same file; this looks to me to
have more impact than having two different paths to the same network file).


On version management systems, I've used both systems where the developer
had to trigger a synchronisation manually and systems where check-ins to
non-locally-modified files were visible as soon as they happened, on projects
having several developers in different sites, working in different
timezones, modifying the same files. I prefer the second, and for sure it is
less error-prone -- i.e. I saw fewer check-ins which broke the code -- but
both systems have their drawbacks, and I know people with the same
experience as me who feel they are more productive with the first kind.

Note that in all cases, we relied on network file servers for some of the
files, some already compiled -- it just doesn't make sense to get
everything locally when the project uses more than 40000 source files.
(There are components for which I have access only to libraries and
headers, others for which I usually use precompiled libraries but have
access to all the code and sometimes recompile it, and obviously others
which I routinely recompile because they are the ones on which I work.)


Yours,

--
Jean-Marc

James Kanze

Dec 18, 2007, 12:15:33 PM
On Dec 16, 11:08 pm, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:
> > On Dec 16, 1:52 am, akfmn...@t-online.de (Andre Kaufmann) wrote:
> >> Pete Becker wrote:
> > [...]
> > As far as I know, all other languages require the programmer
> > to do the right thing. For different definitions of the
> > right thing. (Well, Java forbids the programmer to do the
> > right thing, since the right thing does require keeping the
> > implementation and the interface specification in two
> > different files. But from what I understand, Java
> > programmers have come up with work-arounds for this defect.)

> Hm, managed languages. Take for example C#. There aren't any
> header files and I don't have even to import or include any
> other source file. The modules are separated by name spaces,
> rather than per file basis. Java IMHO goes somewhat too far,
> separating all objects in a file, IIRC - I'm no Java expert.

Where the objects are really isn't the issue (although the Java
model of one dynamically loaded object file per class is a bit
awkward). The issue is really managing change: changing an
interface requires more management than changing an
implementation. After that, different languages require
different techniques to achieve this end, but having the
interface specification in a separate file from the
implementation is certainly an advantage.

> > [...]
> > Whatever arguments there might be, build times is not one.

> There are, at least to drop header files at all, which should
> boost compilation time a lot.

Interestingly enough, I think just the opposite would be the
effect. The compiler must somehow find the interface
specifications, if it is to check them. If you, as a
programmer, specify which files to look at, it only has to look
at those files. If you don't, it may have to look at a lot more
files.

But there are a lot of other factors to consider. (If I were
designing a development system, it would probably cache most of
the "headers" in a precompiled form in a data base. Something
along the lines of what Visual Age does, or did.)

> Pragma once would perhaps only have minor effects.

It has no effect with g++. Even when the headers are served up
over a slow network.

> > [...]
> > None of the OS's I know can do this. There's no support for
> > it in either NFS nor SMB, so practically, it isn't
> > implementable if the files are mounted using one of those
> > protocols.

> Well, you could add some kind of hash cookie to a file and
> mark it, or add another cookie file in the same directory. NTFS
> supports, for example, multiple streams for a single file. As
> soon as the file is marked, only the cookie must be read to
> check if the 2 file paths are identical. Another variant would
> be to lock the files during compilation of a single module and
> check if the files are locked by the same process on
> reopening. I know these solutions have other (big) downsides;
> they should only be an example of how it could be solved, if the
> path couldn't really be canonicalized and resolved.

It would be fairly easy to support uniqueness at the OS level
for local files. And it really wouldn't be that hard to provide
for it in the protocols used for handling remote files. At
present, however, neither does provide for this, and somehow, I
get the feeling that they're not going to change just because
C++ wants them to.

> But let's forget #pragma once, better drop header files at all.

Have a look at David Vandevoorte's proposal for modules.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Andre Kaufmann

Dec 18, 2007, 3:09:15 PM
James Kanze wrote:
>
> [...]
>> These languages as well support abstract interfaces. Such an
>> interface plays the part of separation you are demanding.
>
> It's used for that, the same as textual inclusion is in C++.
> See my response to Alf.

I don't think so. You can't separate specification from implementation
in C++ just by using header files. Impossible.
You will always have the private part - the variables - inside the header
file. Which, I'm just using your words ;-), is bad software engineering,
since this part of the class is implementation-specific and shouldn't be
in the header file -> what can you do in C++ ?

a) Use pimpl
b) Use Interfaces, the workaround needed in other languages !


> [...]


> function. This is good. Not modifying the interface to begin
> with is better, of course.

I implement a new interface and want to override a base class function.
No modification.

> IMHO, C++ could benefit with some additional categorization of
> member functions: override and final, for example, come to mind.
> Whether the benefit is worth the effort is another problem; I'll
> admit that in practice, I've not seen a lot of errors due to
> this.

You won't get an error, that is the problem. As you wrote as an argument
for #pragma once, it might cause trouble more often than you think ;-).
It happens from time to time if you are dealing with template base
classes, to which you pass the abstract interface as a template
parameter. Additionally more often if you are using meta template
programming. I'm not using that combination anymore, because in C++ I
can't simply check whether I have successfully overridden a virtual
function of the base class.


> But how is this in any way relevant to #pragma once, or the

> [...]

Not relevant. I only referred to:

"Because other users don't want you to accidentally change".

If you in C++ change the implementation of an overridden function, the
compiler silently introduces another function if the number of
parameters doesn't fit or the types don't match. Some compilers emit a
diagnostic message, but not in all cases.

> [...]


> The implementation is in a .tcc file, the specification in a
> .hh.

It's IMHO not a true separation. You have to include the .tcc file if
the compiler doesn't support export.

> [...]

Andre

Andre Kaufmann

Dec 18, 2007, 4:18:46 PM
James Kanze wrote:
> [...]

> Interestingly enough, I think just the opposite would be the
> effect. The compiler must somehow find the interface
> specifications, if it is to check them.

Yes and no ;-). The compiler must only compile them once, storing the
interfaces and code in a preparsed binary format, and compile the
implementations (if possible). At least it's done this way by the other
languages. But in the other languages (Java, Delphi) you too specify
which files to import or use. The only exception is C#. And yes, it tends
to compile perhaps more files than the other languages, and maybe it
compiles all files again, if there isn't a smart background compiler
keeping track of the changes.
The trick is that it is still faster compiling all the sources than the
C++ compiler is compiling a single one. ;-)

I admit I haven't checked how C# performs when the sources are located
on a network path or in very large projects. You then perhaps have to
build libraries in binary form.

In C++ the compiler doesn't (normally) precompile any code, but inserts
the header files every time in textual form and recompiles them again and
again. I hope that this could change, when modules are supported in C++.

> If you, as a
> programmer, specify which files to look at, it only has to look
> at those files. If you don't, it may have to look at a lot more
> files.

Agreed. But even then it's commonly faster.

> But there are a lot of other factors to consider. (If I were
> designing a development system, it would probably cache most of
> the "headers" in a precompiled form in a data base. Something
> along the lines of what Visual Age does, or did.)

Yes, this compiler has/had implemented some nice features regarding
compilation speedup and AFAIK some kind of proprietary module concept.

> [...]


> Have a look at David Vandevoorte's proposal for modules.

Yes, I know already. But unfortunately not yet in the upcoming standard.
:-/

Andre Kaufmann

Dec 18, 2007, 4:50:10 PM

===================================== MODERATOR'S COMMENT:

Please do attempt to keep any followups topical for comp.std.c++. It
may be relevant to note the diversity of environments in which C++ is
used.


===================================== END OF MODERATOR'S COMMENT


James Kanze wrote:
> On Dec 17, 7:05 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:

> [...]


>> Well the synchronization is done by the source revision system
>> automatically and my hard disk is still way faster than the
>> network I'm using.
>
> I've never seen a system where the synchronization actually
> worked reliably, other than with Clearcase, which uses a file
> server model. And of course, other machines can't read your
> local disk.

We have a central revision system; it could also be distributed and divided
into several ones. When I want to edit a file it's either manually or
automatically checked out to my local disk. When I'm finished with
developing I check the files back into the source database.

> What happens when you do a complete rebuild, where
> the compiles are distributed through out the network? What

Don't know what you mean. We have a central build system, doing the
builds. Before this system starts to build it downloads the latest
sources from the central revision system.

> happens when you want to discuss something with a collegue, at
> his desk?

Well, I log in to my or his machine, e.g. with VNC, and start discussing.

> [...]

Andre

Andrei Alexandrescu (See Website For Email)

Dec 18, 2007, 6:09:14 PM
James Kanze wrote:
> On Dec 18, 7:59 am, SeeWebsiteForEm...@erdani.org ("Andrei
> Alexandrescu (See Website For Email)") wrote:
>> James Kanze wrote:
>>> You may think it's a security problem, but it's the way things
>>> work in real life. Neither SMB nor NFS have a means of
>>> asserting whether two pathnames are identical. So "#pragma
>>> once" can't be made to work reliably under Windows nor under
>>> Unix. The probability of a feature being adopted which can't be
>>> implemented reliably under Unix nor under Windows is pretty
>>> small.
>
>> I guess your argument would be strengthened by you also showing a
>> significant necessity for projects to use multiple aliased include
>> search paths that resolve to the same physical directories. In the
>> projects I've worked on, this was always a project management blunder.
>
> On the projects I've worked on, it has generally occured because
> we used third party libraries, which in turn depended on other
> libraries.

How does that influence the search paths? A library depends on another,
but it does not require that it's in some specific place.

> Or because different people's home directories were
> mounted on different file systems.

How does that lead to *different* logical search paths resolving to
*identical* physical directories?


Andrei

tba...@gmail.com

Dec 18, 2007, 11:44:39 PM
On Dec 17, 6:05 pm, James Kanze <james.ka...@gmail.com> wrote:
> On Dec 16, 4:16 pm, akfmn...@t-online.de (Andre Kaufmann) wrote:
> > James Kanze wrote:
> > And what will you do if your colleague changes the source file
> > during compilation ?
>
> If I've got the file checked out, he can't write it. And of
> course, for the files I don't have checked out, I see a stable
> version.

What version control system are you using? Most of the ones I've
worked with or looked at (svn, git, mercurial, bazaar) encourage some
form of branching, so multiple users can work in parallel and merge
their work at the end. Thus, if I was working on optimizing "libfoo"
for an extended period (i.e. more than 1 afternoon) all of my source
would be neatly checked into a "libfoo-optimizing" branch in the
source repository. By default the build process (and what everyone
else sees) is the stable "libfoo", but all my co-worker has to do is
check out the branch in order to see my work.

Local compilation (running in my local working copy of the
repository), but global access (via explicit checkouts of branches).
The best of both worlds!

--
Tom

Andre Kaufmann

Dec 19, 2007, 2:28:28 AM
Jean-Marc Bourguet wrote:
> [...]

> Note that in all cases, we relied on network file servers for some of the
> files, some already compiled -- it just doesn't make sense to get
> everything locally when the project use more than 40000 source files.

Sorry, I don't get it. When you compile a project the compiler has to
access a source file somehow. Why can't this source file be cached
locally ?
A huge project surely consists of many "subprojects" - libraries.
So it doesn't make sense to get the whole project just to compile a
single library.

But IMHO it doesn't make sense either for a compiler that includes a
header file 1000 times to transport it over a network every time; better
to fetch it only once and compile it locally.

> [...]

Andre

James Kanze

Dec 19, 2007, 10:09:10 AM
On Dec 19, 12:09 am, SeeWebsiteForEm...@erdani.org ("Andrei
Alexandrescu (See Website For Email)") wrote:
> James Kanze wrote:
> > On Dec 18, 7:59 am, SeeWebsiteForEm...@erdani.org ("Andrei
> > Alexandrescu (See Website For Email)") wrote:
> >> James Kanze wrote:
> >>> You may think it's a security problem, but it's the way things
> >>> work in real life. Neither SMB nor NFS have a means of
> >>> asserting whether two pathnames are identical. So "#pragma
> >>> once" can't be made to work reliably under Windows nor under
> >>> Unix. The probability of a feature being adopted which can't be
> >>> implemented reliably under Unix nor under Windows is pretty
> >>> small.

> >> I guess your argument would be strengthened by you also showing a
> >> significant necessity for projects to use multiple aliased include
> >> search paths that resolve to the same physical directories. In the
> >> projects I've worked on, this was always a project management blunder.

> > On the projects I've worked on, it has generally occured because
> > we used third party libraries, which in turn depended on other
> > libraries.

> How does that influence the search paths? A library depends on another,
> but it does not require that it's in some specific place.

The library provides its own, identical copy. Library A is
designed so that it includes all that it needs of library B, so
you can use it without installing library B. You need library B
otherwise, however, so you have installed it. The #include in
library A finds its copy; the #include in your code finds the
separately installed copy.

> > Or because different people's home directories were
> > mounted on different file systems.

> How does that lead to *different* logical search paths
> resolving to *identical* physical directories?

Through various symbolic links ending up in different file
systems. It's not unusual to have identical versions of more or
less generally used tools (including libraries) installed on
several servers (perhaps by means of mounts, more often with
multiple copies). And to have a common account for files shared
in the group. (E.g. here, my home account is at
/home/team02/jakan, but I often have to link against files in
/home/team02/common. In this case, the mount point is in fact
/home/team02, but in most places I've worked, my own account
would have a separate mount.)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 19, 2007, 12:07:04 PM
On Dec 18, 10:18 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> James Kanze wrote:
> > [...]
> > Interestingly enough, I think just the opposite would be the
> > effect. The compiler must somehow find the interface
> > specifications, if it is to check them.

> Yes and no ;-). The compiler only has to compile them once, storing the
> interfaces and code in a preparsed binary format, and then compile the
> implementations (if possible).

You mean pre-compiled headers:-). In fact, of course, the
pre-processor works against this; the actual meaning of a header
depends on the macros which are defined when it is included.
Otherwise... I'm all for compiling the specification, and using
the compiled version.

> At least that's how it's done in the other
> languages. But in the other languages (Java, Delphi) you also specify
> which files to import or use; the only exception is C#. And yes, it
> perhaps tends to compile more files than the other languages, and it may
> compile all files again if there isn't a smart background compiler
> keeping track of the changes.
> The trick is that it is still faster at compiling all sources than the
> C++ compiler is at compiling a single one. ;-).

That's because they don't support template meta-programming:-).
And their basic IO isn't a template, which needs
instantiation:-). Seriously: the preprocessor doesn't speed
things up, and templates with all of the implementation in the
headers, for something as large as <iostream>, slow things down
significantly, even if you only read the file once. If you're
on a Unix machine, try something like: "CC -E hello.cc | wc -l",
where hello.cc is the classical hello, world program. And if
you can, then go back and try it with a compiler using the
classical iostream (which wasn't templated). I get 12223 lines for
the standard version, 1609 for the one with the classical
iostream. That goes a long way towards explaining the difference in
compile times.

> I admit I haven't checked how C# performs when the sources are located
> on a network path or in very large projects. You then perhaps have to
> build libraries in binary form.

> In C++ the compiler doesn't (normally) precompile any code, but inserts
> the header files every time in textual form and recompiles it again and
> again. I hope that this could change, when modules are supported in C++.

That's part of the purpose of modules, I think. Still...

A lot more could be done than is. There's nothing in the
standard which would preclude a compiler having an option to
"package" a library: wrap it up in a package with all of the
object files AND all of the headers (pre-compiled, of course),
and have the compiler look there when it encounters an include.
In theory, it's difficult because of the pre-processor, and you
might end up needing several versions of the headers, or perhaps
needing some sort of preconditions involving macros, and a copy
of the text to fall back on if the preconditions failed.

The problem, in general, is that in C++, including exactly the
same text doesn't guarantee that you end up with the same thing.
Mainly because of the preprocessor, although you can also play
games with typedef's and other such things. I think, however,
that it is doable, possibly by introducing some restrictions on
how macros are used in the headers. (A compiler could introduce
the restrictions, as long as the older system still worked when
they weren't met.)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

James Kanze

Dec 19, 2007, 12:06:44 PM
On Dec 18, 9:09 pm, akfmn...@t-online.de (Andre Kaufmann) wrote:
> James Kanze wrote:

> > [...]
> >> These languages as well support abstract interfaces. Such an
> >> interface plays the part of separation you are demanding.

> > It's used for that, the same as textual inclusion is in C++.
> > See my response to Alf.

> I don't think so. You can't separate specification from implementation
> in C++ just by using header files. Impossible.
> You will always have the private part - the member variables - inside
> the header file. Which, to use your own words ;-), is bad software
> engineering, since this part of the class is implementation specific and
> shouldn't be in the header file -> so what can you do in C++ ?

> a) Use pimpl
> b) Use Interfaces, the workaround needed in other languages !

It's a hack. I agree. The compilation firewall idiom helps,
but of course, ideally, we wouldn't need it (and there are times
when it really isn't applicable). Using interfaces is a
different hack. It has more or less the opposite problem---you
can't get all of the specifications in the interface, since
constructors, etc. are missing.

On the whole, I prefer the C++ hack, but that's probably at
least partially because I'm used to it, I've been using it for
many years, and I know how to manage it. (I've also worked on a
C++ project where we did use interfaces, rather than the
compilation firewall idiom.)

> > [...]


> > IMHO, C++ could benefit with some additional categorization
> > of member functions: override and final, for example, come
> > to mind. Whether the benefit is worth the effort is another
> > problem; I'll admit that in practice, I've not seen a lot of
> > errors due to this.

> You won't get an error, that is the problem.

Having working code is a problem?

> As you wrote as an argument for #pragma once, it might cause
> trouble more often than you think ;-). It happens from time to
> time if you are dealing with template base classes, to which
> you pass the abstract interface as a template parameter.
> Additionally, more often if you are using template
> meta-programming. I'm not using that combination anymore, because
> in C++ I can't easily verify that I have successfully
> overridden a virtual function of the base class.

As I said, I've never seen this to be a concrete problem in
actual code. It certainly could happen, and I'd certainly
prefer a means of making such errors detectable by the compiler.
Because it's not been an actual problem in my experience,
though, such changes are not high on my list of priorities. Not
making the language more complicated than it already is is high
on my list, so if the proposals to allow these problems to be
errors detected by the compiler increase the complexity of the
language, I'd be sceptical. It's a question of degree, however,
and if the additional complexity were minor, and the additional
checking really robust, why not?

> > [...]
> > The implementation is in a .tcc file, the specification in a
> > .hh.

> It's IMHO not a true separation. You have to include the .tcc file if
> the compiler doesn't support export.

Totally agreed. But it's better than nothing. For the issues
we were discussing, it's sufficient, since the two files are
managed separately by version control, but there's still a lot
of unnecessary compile time coupling.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

---

Jean-Marc Bourguet

Dec 19, 2007, 1:04:45 PM
akfm...@t-online.de (Andre Kaufmann) writes:

> Jean-Marc Bourguet wrote:
> > [...]
> > Note that in all cases, we relied on network file servers for some of the
> > files, some already compiled -- it just doesn't make sense to get
> > everything locally when the project use more than 40000 source files.
>
> Sorry I don't get it. When you compile a project the compiler has to access
> a source file, somehow. Why can't this source file be cached locally ?

Caching is the job of the network file system, IMHO.

> A huge project consists surely out of many "subprojects" - libraries. So
> it doesn't make sense to get the whole project, just to compile a single
> library.

I usually have to access several subprojects. And the set of subprojects I
need varies. And they depend on other subprojects. So you have to be
flexible, and the way we currently achieve this flexibility is by using
symbolic links to a centralized reference build.

Other organisations could be better -- I can certainly think of better ways
of doing some parts of it for a restricted set of goals -- but I've seen far
worse ones. And don't forget that huge projects mean lots of history, lots
of people, lots of conflicting goals, and so lots of inertia.

But my point here was that network file systems are used to access code,
and that won't change anytime soon. (Especially when you consider the need
to build the same set of sources on several platforms, or the advantages of
doing distributed builds.)

Yours,

--
Jean-Marc

Andre Kaufmann

Dec 19, 2007, 2:30:00 PM

> James Kanze wrote:
>
> [...]

> Having working code is a problem?


Small (senseless) example:

#include <iostream>
using std::cout; using std::endl;

class A
{
public: virtual bool Foo(int a) { return false; }
};

template <class T>
class B : public A
{
// For T != int this does not override A::Foo(int); it silently
// declares a new virtual function that hides it.
public: virtual bool Foo(T a) { return true; }
};

int main()
{
B<int> b1; B<unsigned int> b2;
A* p1 = &b1; A* p2 = &b2;

cout << p1->Foo(6) << endl; // prints 1: B<int>::Foo overrides
cout << p2->Foo(6) << endl; // prints 0: A::Foo runs, the override was missed
}

I want the compiler to throw an error (sorry diagnostic message).

E.g. VC has the proprietary extension "override" because it's needed for
C++/CLI:

So replacing class B above with the following code:

template <class T>
class B : public A
{
public: virtual bool Foo(T a) override { return true; }
};

Throws the error:

Error 2 error C3668: 'B<T>::Foo' : method with override specifier
'override' did not override any base class methods
d:\test\parallel\cpptest\cpptest.cpp 30 CppTest


>[...]


> As I said, I've never seen this to be a concrete problem in
> actual code. It certainly could happen,

Not only could it happen - I want to prevent it from happening.

>[...]

Andre

Andre Kaufmann

Dec 19, 2007, 2:30:26 PM
James Kanze wrote:

> On Dec 18, 10:18 pm, Andre Kaufmann <akfmn...@t-online.de> wrote:
> James Kanze wrote:
> [...]

> You mean pre-compiled headers:-).

Well, since these languages don't have headers - ehm - precompiled
units/modules perhaps.

> In fact, of course, the
> pre-processor works against this; the actual meaning of a header
> depends on the macros which are defined when it is included.
> Otherwise... I'm all for compiling the specification, and using
> the compiled version.

Yes, I know. That's why I hope the module concept won't have macros with
such side effects. And IIRC the proposal restricts macros to being
effective only in the current module and not in the imported ones.

> [...]

> That's because they don't support template meta-programming:-).

No, they support directly in the language most of what meta-programming
is used for, e.g. reflection.

> And there basic IO isn't a template, which needs
> instantiation:-)

No, but they also have some kind of templates. And I prefer the
printf-like IO in C# over cout:

Console.WriteLine("A: {0} B: {0} C: {1}", 1, 2) outputs: 1 1 2

same as:

cout << "A:" << 1 << " B:" << 1 << " C:" << 2.

In boost there is something comparable, but I don't know if this will be
part of the standard.

> Seriously: the preprocessor doesn't speed
> things up, and templates with all of the implementation in the
> headers, for something as large as <iostream>, slows things up
> significantly, even if you only read the file once. If you're

IOStreams are IMHO both slow to compile and slow at runtime,
but that's another story.

> [...]


> The problem, in general, is that in C++, including exactly the
> same text doesn't guarantee that you end up with the same thing.

> [...]

Jerry Coffin

Dec 20, 2007, 11:46:11 AM
In article <fk6fs1$doh$00$1...@news.t-online.com>, akfm...@t-online.de
says...

[ ... ]

> If the OS doesn't know which file is which and located where then the
> user effectively doesn't know either and it's only a matter of time till
> the first virus based on source code basis spreads around the world and
> will be included automatically everywhere. (has nothing to do with
> #pragma once)

Nonsense. In a distributed file system, transparency of location is a
very good thing. Dragging "virus" into the discussion sounds to me like
a complete red herring.

Just FWIW, on a distributed file system, opening the same file twice in
succession does NOT normally guarantee that you'll actually open the
same physical copy of the file, nor that the physical path to the file
hasn't changed between one open and the next.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Andre Kaufmann

Dec 20, 2007, 1:51:30 PM

===================================== MODERATOR'S COMMENT:

Please ensure that replies clearly relate to the standardization of C++.


===================================== END OF MODERATOR'S COMMENT

Jean-Marc Bourguet wrote:
> akfm...@t-online.de (Andre Kaufmann) writes:
>
>> Jean-Marc Bourguet wrote:
>>> [...]

> Caching is the job of the network file system, IMHO.

You can't really cache huge projects - IMHO.

>
>> A huge project consists surely out of many "subprojects" - libraries. So
>> it doesn't make sense to get the whole project, just to compile a single
>> library.
>
> I usually have to access several subprojects. And the set of subprojects I
> need vary. And they depend of other subprojects. So you have to be
> flexible and the way we currently achieve this flexibility is by using
> symbolic links to a centralized reference build.

It has pros and cons. Our central build server also has a local copy of
all of our sources. It would be fairly simple to access the sources
directly from another machine. But I think multiple developers starting
compilations would unnecessarily generate a heavy load on the network.
Additionally, compilation IMHO is slower.

> [...]

> But my point here was that network file systems are used to access code,
> and that won't change anytime soon. (Especially when you consider the need
> of building the same set of source on several platforms or advantages of
> doing distributed builds).

Yes, but I simply fetch the part I want to compile - this could be
automated too - and compile the sources locally. If I compile directly
over the network, the compiler will fetch the sources again and again and
again. I don't really see the (true) advantage, and I don't think such
huge projects exist that could fill a (now common) 1 TB HD entirely,
since the generated debug information is commonly much larger than the
sources themselves.

> Yours,

Andre

Jerry Coffin

Dec 20, 2007, 1:52:32 PM
In article <fk7o6o$7vt$02$1...@news.t-online.com>, akfm...@t-online.de
says...

[ ... ]

> That's fine; the problem is that another layer of complexity is added
> to the compiler to get it right, checking whether all code is enclosed
> in include guards.

But it puts the responsibility in the right place and actually stands a
chance of working correctly. "#pragma once" almost certainly canNOT work
correctly, because nobody really even knows what "correctly" means under
all circumstances. Worse, it's polluting the language with something
that's a pure compile-time optimization, and really has nothing to do
with the language itself.

--
Later,
Jerry.

The universe is a figment of its own imagination.

---

Andre Kaufmann

Dec 20, 2007, 1:59:56 PM
Jerry Coffin wrote:
> In article <fk6fs1$doh$00$1...@news.t-online.com>, akfm...@t-online.de
> says...
>
> [ ... ]
>
>> If the OS doesn't know which file is which and located where then the
>> user effectively doesn't know either and it's only a matter of time till
>> the first virus based on source code basis spreads around the world and
>> will be included automatically everywhere. (has nothing to do with
>> #pragma once)
>
> Nonsense. In a distributed file system, transparency of location is a
> very good thing. Dragging "virus" into the discussion sounds to me like
> a complete red-herring.

Does it? Some Open Source servers have already been compromised.
Directly compiling from several mapped network locations simply increases
the risk - or does it lower it? Can a single source server be compromised
too? Sure, but I was referring only to the fact, discussed above, that the
mapping location of a home directory might be unknown.

What would be more effective at spreading a virus in the Open Source world?
One in binary form, or one which exploits a small bug introduced by
changing pieces of code in some central kernel sources?

> Just FWIW, on a distributed file system, opening the same file twice in

Opening the same file twice over a network-based file system is
inefficient - IMHO. And generally a C++ compiler does open header files
multiple times.

Additionally, all the tools - refactoring, code completion, etc. - touch
the same files multiple times: additional network transfer.

And how do I back up such a system and ensure that all the distributed
source servers are properly mapped - and will they still be mapped in 10
years?

> [...]

Andre
