including files using macro for name

Lynn McGuire

Oct 21, 2015, 3:21:09 PM
I need to include the same file in multiple locations. I used the following macro for the filename in an included header file:

#define LATEST_DECODE_METHOD "decodeVer14.h"

...

#include LATEST_DECODE_METHOD

and this worked fine in Microsoft Visual Studio 2005.

The variable is declared in an included source file, which itself contains the include that uses the variable. So this is an included
source code file including another included source code file.

bool all_is_not_ok = false; // outer include file
...
all_is_not_ok = true; // inner include file

I just upgraded to Microsoft Visual Studio 2012 and am now getting an error on a variable declaration in the include file. But, I am
not getting an error when including the file. The error is:

1>c:\sample\decodever14.h(128): error C2065: 'all_is_not_ok' : undeclared identifier

Is including a file using a macro name not legal? Or am I running into a bug in MSVS 2012?

Thanks,
Lynn

Paavo Helde

Oct 21, 2015, 3:46:24 PM
Lynn McGuire <l...@winsim.com> wrote in news:n08oef$ppf$1...@dont-email.me:

> I need to include the same file in multiple locations. I used the
> following macro for the filename in a included header file:
>
> #define LATEST_DECODE_METHOD "decodeVer14.h"
> #include LATEST_DECODE_METHOD
> Is including a file using a macro name not legal? Or am I running
> into a bug in MSVS 2012?

This is called a "computed include" and appears to be pretty legal. Probably
this has nothing to do with the error you see.

> The variable is declared in an included source file itself which has
> the include file using the variable.

Typically a variable is declared in the header file and used in the source
file, not vice versa as you describe. Please check
http://www-igm.univ-mlv.fr/~dr/CPP/c++-faq/how-to-post.html#[5.7]


Cheers
Paavo

Ben Bacarisse

Oct 21, 2015, 3:47:53 PM
Lynn McGuire <l...@winsim.com> writes:

> I need to include the same file in multiple locations. I used the
> following macro for the filename in a included header file:
>
> #define LATEST_DECODE_METHOD "decodeVer14.h"
>
> ...
>
> #include LATEST_DECODE_METHOD

That's fine in standard C++ (and in C for that matter).

> and this worked fine in Microsoft Visual Studio 2005.
>
> The variable is declared in an included source file itself which has
> the include file using the variable. So, this is an included source
> code file including another source code file.
>
> bool all_is_not_ok = false; // outer include file
> ...
> all_is_not_ok = true; // inner include file

I'm a bit lost by the inners and outers here, but you should be able to
construct a two or three file version (each with only one or two lines)
that illustrates the problem. Sometimes just doing that can reveal
something unexpected.
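
For example (file names invented, and I'm only guessing at the
function-scope layout you have), a cut-down version might be as small as
this:

// main.cpp -- hypothetical reduction
void decode()
{
#include "outer.inc"    // the "outer" include file
}

int main() { decode(); }

// outer.inc -- declares the variable, then does the computed include
#define LATEST_DECODE_METHOD "inner.inc"
bool all_is_not_ok = false;
#include LATEST_DECODE_METHOD

// inner.inc -- uses the variable declared in outer.inc
all_is_not_ok = true;

If something like that compiles cleanly in VS2012, the problem is
probably not the computed include itself.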

> I just upgraded to Microsoft Visual Studio 2012 and am now getting an
> error on a variable declaration in the include file. But, I am not
> getting an error when including the file. The error is:
>
> 1>c:\sample\decodever14.h(128): error C2065: 'all_is_not_ok' :
> undeclared identifier
>
> Is including a file using a macro name not legal? Or am I running
> into a bug in MSVS 2012?

It's legal (section 16.2 paragraphs 4 and 8). What's the smallest case
you can make that shows the problem? (If it turns out to be a bug,
you'll need a simple case to report it anyway.)

--
Ben.

Lynn McGuire

Oct 21, 2015, 5:46:11 PM
Sorry, my bad. I thought that the variable was declared in the code, but it is not for one of the exes built using the code. The
large exe compiles and links, the small exe does not. Turns out that I added a variable to the large exe and did not add it to the
small exe. Sigh.

Sincerely,
Lynn


David Brown

Oct 22, 2015, 3:32:03 AM
On 21/10/15 21:20, Lynn McGuire wrote:
> I need to include the same file in multiple locations. I used the
> following macro for the filename in a included header file:
>
> #define LATEST_DECODE_METHOD "decodeVer14.h"
>
> ...
>
> #include LATEST_DECODE_METHOD
>
> and this worked fine in Microsoft Visual Studio 2005.
>

That's legal, but almost certainly a bad idea IMHO. It makes it more
difficult to figure out what is being included and at what point in the
program - you have to trace through other preprocessor stuff such as
macros. That's a pain for human readers - and it could be an issue with
automated tools too since the tools have to run a full cpp preprocessor
in order to identify the include file graph. Obviously compilers will
have no problem, but some editors, documentation tools, automatic
build systems, etc., may have trouble or be less efficient when faced
with computed includes.

Of course I don't know anything about your project except what you have
written here, so maybe there are particular reasons for using this
organisation. But I would be tempted to use something like:

// Files that had #include LATEST_DECODE_METHOD
#include "latest_decode_method.h"


// In latest_decode_method.h
#include "decodeVer14.h"

This keeps things neat and clear in the main code, but lets you easily
change the decoder version in a single place.



Lynn McGuire

Oct 22, 2015, 5:10:28 PM
That would work also. But, I've got 20,000 files in my project. File minimization is a goal of mine. My sandbox is 12 GB and growing.

Thanks,
Lynn

David Brown

Oct 23, 2015, 3:41:26 AM
That change would mean you'd have 20,001 files - that's not a big
difference. But of course, changing all the files to use the new format
could mean a good deal of effort if you use such computed includes a
lot. You might find that the result makes dependency management faster,
which can lead to significantly faster build times on large projects -
but you won't know that unless you try it!


Lynn McGuire

Oct 23, 2015, 3:28:33 PM
The rebuild of my largest exe takes about a minute using MSVS 2005 nowadays. That is about 650K LOC, almost all C++.

The biggest improvement that I have ever gotten on building is moving to an SSD drive. I now have an Intel 480 GB SSD. Fastest thing
that I have ever seen. Jeff Atwood says that the new PCI Express SSD drives are even much faster than SATA SSD drives.
http://blog.codinghorror.com/building-a-pc-part-viii-iterating/

Thanks,
Lynn

David Brown

Oct 23, 2015, 6:25:34 PM
OK, so your 20,000 files are not all for the same build (presumably they
are libraries or other exe files in the same project). That's more
manageable!

>
> The biggest improvement that I have ever gotten on building is moving to
> a SSD drive. I now have a Intel 480 GB SSD. Fastest thing that I have
> ever seen. Jeff Atwood says that the new PCI Express SSD drives are
> even much faster than SATA SSD drives.
> http://blog.codinghorror.com/building-a-pc-part-viii-iterating/
>

It's a pity you are stuck on Windows for MSVC - if you were using Linux,
the disk speed would mean almost nothing at all (at least with a little
care in setup). Windows is poor at cache management, and will keep
re-reading files from disk instead of keeping them in memory, and delays
while writing files instead of using write-back cache. A rebuild should
not involve any noticeable disk activity - but Windows still is not good
at disk handling (rumour has it that Win10 has made some improvements
here, but I have no experience with it).

The biggest improvement I made on compile speed was moving from Windows
to Linux.


Jorgen Grahn

Oct 24, 2015, 1:57:24 AM
On Fri, 2015-10-23, David Brown wrote:
> On 23/10/15 21:28, Lynn McGuire wrote:
>> On 10/23/2015 2:41 AM, David Brown wrote:
>>> On 22/10/15 23:10, Lynn McGuire wrote:
...

>>>> That would work also. But, I've got 20,000 files in my project. File
>>>> minimization is a goal of mine. My sandbox is 12 GB and growing.

I checked: the Linux kernel is around 40,000 files, so you're near
that. On the other hand, few people configure Linux to include all of
those files: many of them are drivers for hardware or features which
you don't want.

>>> That change would mean you'd have 20,001 files - that's not a big
>>> difference. But of course, changing all the files to use the new format
>>> could mean a good deal of effort if you use such computed includes a
>>> lot. You might find that the result makes dependency management faster,
>>> which can lead to significantly faster build times on large projects -
>>> but you won't know that unless you try it!
>>
>> The rebuild of my largest exe takes about a minute using MSVS 2005
>> nowadays. That is about 650K LOC, almost all C++.
>
> OK, so your 20,000 files are not all for the same build (presumably they
> are libraries or other exe files in the same project). That's more
> manageable!

Although, as the author of "Recursive Make Considered Harmful" noted,
what you really want is an efficient way of saying "rebuild everything
that needs to be rebuilt". Having to remember what binaries will be
affected by which change sucks. Been there; done that.

>> The biggest improvement that I have ever gotten on building is moving to
>> a SSD drive. I now have a Intel 480 GB SSD. Fastest thing that I have
>> ever seen. Jeff Atwood says that the new PCI Express SSD drives are
>> even much faster than SATA SSD drives.
>> http://blog.codinghorror.com/building-a-pc-part-viii-iterating/
>
> It's a pity you are stuck on Windows for MSVC - if you were using Linux,
> the disk speed would mean almost nothing at all (at least with a little
> care in setup). Windows is poor at cache management, and will keep
> re-reading files from disk instead of keeping them in memory
...

That matches my experience from a decade ago; are you saying it hasn't
improved?

> The biggest improvement I made on compile speed was moving from Windows
> to Linux.

But I think on your average project, the best improvement you can get
is from getting the build system into shape. Where I've worked, the
main reason for long compilation times has been that things get
rebuilt unnecessarily. And bad build systems seem to be the norm ...

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Ian Collins

Oct 24, 2015, 11:46:17 PM
I discovered this a while back when we were comparing Windows on metal
with Windows in KVM on ZFS. The virtualised system performed better!

> The biggest improvement I made on compile speed was moving from Windows
> to Linux.

Or any UNIX, including a Mac!

--
Ian Collins

David Brown

Oct 25, 2015, 7:12:00 AM
My preference is to always do rebuilds from a single make in the outer
directory. My makefile uses gcc to build dependency files for the C
files, then some sed magic to make even the dependency files have the
correct dependencies, so that they don't need rebuilt unnecessarily either.

Having to manually re-build in different directories in different orders
is a pain and a source of error.

>
>>> The biggest improvement that I have ever gotten on building is moving to
>>> a SSD drive. I now have a Intel 480 GB SSD. Fastest thing that I have
>>> ever seen. Jeff Atwood says that the new PCI Express SSD drives are
>>> even much faster than SATA SSD drives.
>>> http://blog.codinghorror.com/building-a-pc-part-viii-iterating/
>>
>> It's a pity you are stuck on Windows for MSVC - if you were using Linux,
>> the disk speed would mean almost nothing at all (at least with a little
>> care in setup). Windows is poor at cache management, and will keep
>> re-reading files from disk instead of keeping them in memory
> ...
>
> That matches my experience from a decade ago; are you saying it hasn't
> improved?

I gather it has improved, but in my experience (which does not include
Win10), Windows is a long way behind Linux in this area.

>
>> The biggest improvement I made on compile speed was moving from Windows
>> to Linux.
>
> But I think on your average project, the best improvement you can get
> is from getting the build system into shape. Where I've worked, the
> main reason for long compilation times has been that things get
> rebuilt unnecessarily. And bad build systems seem to be the norm ...
>

Agreed.

For example, Eclipse has automatic dependency building in its project
management - when you press "build", it will check through all the
files, see which include files are used by each C file, see which are
changed, and then re-compile only those C files that depend on header
files that have changed, or where the C file itself has changed. That
makes it quick and reliable, without having to handle dependencies
manually or run update passes.

However, it is much faster if you track those dependencies, and only
re-check the C and header files when actually required. So a more
intelligent Makefile can speed up this dependency checking
significantly, and for large projects with regular rebuilds, it is the
dependency calculations that take time.


David Brown

Oct 25, 2015, 7:28:13 AM
On 24/10/15 07:57, Jorgen Grahn wrote:
> On Fri, 2015-10-23, David Brown wrote:
>> On 23/10/15 21:28, Lynn McGuire wrote:
<snip>
>>> The biggest improvement that I have ever gotten on building is moving to
>>> a SSD drive. I now have a Intel 480 GB SSD. Fastest thing that I have
>>> ever seen. Jeff Atwood says that the new PCI Express SSD drives are
>>> even much faster than SATA SSD drives.
>>> http://blog.codinghorror.com/building-a-pc-part-viii-iterating/
>>
>> It's a pity you are stuck on Windows for MSVC - if you were using Linux,
>> the disk speed would mean almost nothing at all (at least with a little
>> care in setup). Windows is poor at cache management, and will keep
>> re-reading files from disk instead of keeping them in memory
> ...
>
> That matches my experience from a decade ago; are you saying it hasn't
> improved?
>
>> The biggest improvement I made on compile speed was moving from Windows
>> to Linux.
>
> But I think on your average project, the best improvement you can get
> is from getting the build system into shape. Where I've worked, the
> main reason for long compilation times has been that things get
> rebuilt unnecessarily. And bad build systems seem to be the norm ...
>

Another point I forgot to mention in the comparison is that Linux (and
other *nix) is /much/ better than Windows at multitasking. Running
"make -j" has lower overheads than in Windows, and the system is better
able to schedule and run multiple compilations in parallel. (Windows is
not bad at multi-threading within a single application, but poor at
multi-tasking.)


Jorgen Grahn

Oct 25, 2015, 1:02:03 PM
On Sun, 2015-10-25, David Brown wrote:
> On 24/10/15 07:57, Jorgen Grahn wrote:
>> On Fri, 2015-10-23, David Brown wrote:
...
>>> OK, so your 20,000 files are not all for the same build (presumably they
>>> are libraries or other exe files in the same project). That's more
>>> manageable!
>>
>> Although like the author of "Recursive Make Considered Harmful" noted,
>> what you really want is an efficient way of saying "rebuild everything
>> that needs to be rebuilt". Having to remember what binaries will be
>> affected by which change sucks. Been there; done that.
>
> My preference is to always do rebuilds from a single make in the outer
> directory. My makefile uses gcc to build dependency files for the C
> files, then some sed magic to make even the dependency files have the
> correct dependencies, so that they don't need rebuilt unnecessarily either.

That's one solution I've encountered. It's sane, and works. I think
the gcc manual more or less describes how to set it up?

> Having to manually re-build in different directories in different orders
> is a pain and a source of error.

Yes; see "sucks" above. But I suspect it's one of those subtle traps:
it doesn't seem all that dangerous to require that, but then the
project grows and more people show up ... and slowly it becomes a
rather high hidden cost.

...
>>> The biggest improvement I made on compile speed was moving from Windows
>>> to Linux.
>>
>> But I think on your average project, the best improvement you can get
>> is from getting the build system into shape. Where I've worked, the
>> main reason for long compilation times has been that things get
>> rebuilt unnecessarily. And bad build systems seem to be the norm ...
>
> Agreed.
>
> For example, Eclipse has automatic dependency building in its project
> management - when you press "build", it will check through all the
> files, see which include files are used by each C file, see which are
> changed, and then re-compile only those C files that depend on header
> files that have changed, or where the C file itself has changed. That
> makes it quick and reliable, without having to handle dependencies
> manually or run update passes.

> However, it is much faster if you track those dependencies, and only
> re-check the C and header files when actually required. So a more
> intelligent Makefile can speed up this dependency checking
> significantly, and for large projects with regular rebuilds, it is the
> dependency calculations that take time.

Fortunately, the things that Make has to do when it already /has/ the
complete set of dependencies don't seem too expensive. It has
to check the modification time on pretty much every file. I suspect
many file systems are optimized for that kind of usage ... especially
since tools like Git also rely heavily on that.

asetof...@gmail.com

Oct 25, 2015, 1:22:31 PM
monofile + monofile header

mark

Oct 25, 2015, 2:08:09 PM
On 2015-10-24 00:25, David Brown wrote:
> It's a pity you are stuck on Windows for MSVC - if you were using Linux,
> the disk speed would mean almost nothing at all (at least with a little
> care in setup). Windows is poor at cache management, and will keep
> re-reading files from disk instead of keeping them in memory, and delays
> while writing files instead of using write-back cache. A rebuild should
> not involve any noticeable disk activity - but Windows still is not good
> at disk handling (rumour has it that Win10 has made some improvements
> here, but I have no experience with it).

Windows >= 7 works perfectly fine, if you have enough RAM. There is very
little disk activity after the first build. IME, there is no difference
in rebuild speed between an SSD and a 'normal' hard drive.

David Brown

Oct 26, 2015, 5:38:37 AM
On 25/10/15 18:01, Jorgen Grahn wrote:
> On Sun, 2015-10-25, David Brown wrote:
>> On 24/10/15 07:57, Jorgen Grahn wrote:
>>> On Fri, 2015-10-23, David Brown wrote:
> ...
>>>> OK, so your 20,000 files are not all for the same build (presumably they
>>>> are libraries or other exe files in the same project). That's more
>>>> manageable!
>>>
>>> Although like the author of "Recursive Make Considered Harmful" noted,
>>> what you really want is an efficient way of saying "rebuild everything
>>> that needs to be rebuilt". Having to remember what binaries will be
>>> affected by which change sucks. Been there; done that.
>>
>> My preference is to always do rebuilds from a single make in the outer
>> directory. My makefile uses gcc to build dependency files for the C
>> files, then some sed magic to make even the dependency files have the
>> correct dependencies, so that they don't need rebuilt unnecessarily either.
>
> That's one solution I've encountered. It's sane, and works. I think
> the gcc manual more or less describes how to set it up?
>

There are common recipes for using gcc "-M" flags to generate dependency
information, and for using them together with sed to make Makefile
dependency files. I think the gnu make manual has examples.

However, the usual process generates dependency files looking like this:

# This is file main.d
main.o : main.c module1.h module2.h module3.h

Typically the file "main.d" is re-built whenever "main.c" changes. But
that would miss the case where "module2.h" was changed to also include "module4.h".
So you either have to re-build your dependency files manually on
occasion, or rebuild them on every change to any header, or (roughly
like Eclipse does) simply rebuild them for every build.

My Makefile has a somewhat more complex sed arrangement, and produces
files like this:

# This is file main.d
main.o main.d : main.c module1.h module2.h module3.h

This means dependency files get rebuilt as and when they are needed.
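
For reference, a fragment along the lines of the recipe in the gnu make
manual (file and variable names here are just placeholders, not my actual
makefile, and recipe lines must be indented with a tab):

SRCS := main.c module1.c module2.c
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# Make the .d file itself a target of the generated rule, so the
# dependency file is regenerated whenever any file it lists changes.
%.d: %.c
	@set -e; rm -f $@; \
	 $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
	 sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
	 rm -f $@.$$$$

include $(SRCS:.c=.d)

Newer versions of gcc can get much the same effect without the sed step,
by adding -MMD -MP to the normal compile command so the .d file is
written as a side effect of compilation.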


>> Having to manually re-build in different directories in different orders
>> is a pain and a source of error.
>
> Yes; see "sucks" above. But I suspect it's one of those suble traps:
> it doesn't seem all that dangerous to require that, but then the
> project grows and more people show up ... and slowly it becomes a
> rather high hidden cost.

Agreed.
Again, you'll find Linux faster for this sort of thing than Windows.
The common Linux filesystems all handle such metadata checking very
quickly, and typically it is all from caches.


David Brown

Oct 26, 2015, 6:03:13 AM
Certainly it is better than with early Windows, and certainly more RAM
typically makes a bigger difference than moving to an SSD. Windows is
gradually catching up with the *nix world here.

(I am not anti-Windows, or at least not /entirely/ anti-Windows.
Windows works fine for many purposes, and can be better than *nix for
some things. My desk has a Windows machine and a Linux machine, and I
use both. But for dealing with lots of files, file metadata, parallel
access to files, multiple processes - *nix is a good deal more efficient
IME. Linux is a far better platform for software development, at least
for the type of development I do and the way I like to work.)
