Building ninja with MSVC


Frances

Dec 28, 2011, 11:08:30 AM
to ninja-build
Hi,

We've mentioned a few times some problems we've had building ninja
with Microsoft Visual C++.
I've made the basics into a pull request - https://github.com/martine/ninja/pull/171.

This mainly covers the string_piece problems
(http://groups.google.com/group/ninja-build/browse_thread/thread/ac4c439a00cbda87)
and the hash_map problems (see
http://groups.google.com/group/ninja-build/browse_thread/thread/32f464653ce0bc03).
We also use a Windows .bat file instead of a .sh file, which I've
included, and finally we have a new header for int64_t types.
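
For context, the hash_map side of this boils down to a compatibility shim of roughly the following shape (a sketch of the idea, not necessarily the exact patch): MSVC ships the pre-standard hash_map in <hash_map> under namespace stdext, while GCC ships it in <ext/hash_map> under __gnu_cxx.

    // Sketch only: pick whichever pre-C++11 hash_map the compiler provides.
    #ifdef _MSC_VER
    #include <hash_map>
    using stdext::hash_map;
    #else
    #include <ext/hash_map>
    using __gnu_cxx::hash_map;
    #endif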

We have a few other patches which make things easier to deal with in
the face of a subprocess crash and tend to speed things up on Windows.
I'll include details of those shortly.

Fran.

Frances

Dec 29, 2011, 10:29:39 AM
to ninja-build


On Dec 28, 4:08 pm, Frances <frances.buonte...@gmail.com> wrote:
> [...]

I've made various comments on the pull request. What would you like me
to do?
bootstrap.py doesn't work as it stands, if you want to invoke cl
instead of g++.
I think the command line is spelled like this

cl /EHsc /GF /GL /MT /Ox /D WIN32 /D NOMINMAX /D NDEBUG /D CONSOLE
  src\getopt.c src\build.cc src\build_log.cc src\clean.cc src\depfile_parser.cc
  src\disk_interface.cc src\edit_distance.cc src\eval_env.cc src\graph.cc
  src\graphviz.cc src\ninja.cc src\parsers.cc src\state.cc
  src\subprocess-win32.cc src\util.cc
  /link /out:ninja.bootstrap.exe /LTCG /OPT:REF /OPT:ICF /SUBSYSTEM:CONSOLE
  /NXCOMPAT /DEBUG

Fran.

Nicolas Desprès

Dec 29, 2011, 11:15:10 AM
to ninja...@googlegroups.com

My approach to porting Ninja to Windows and other compilers (like cl.exe)
would be to use CMake. We could bootstrap it using one of the
existing generators and then use the upcoming Ninja generator to test
ninja by building it with itself.

Compiling Ninja with cl.exe would help to port it to Windows since it
is much easier to debug a Windows program using Visual Studio than
using the mingw toolchain.

Cheers,

--
Nicolas Desprès

Evan Martin

Dec 29, 2011, 1:31:05 PM
to ninja...@googlegroups.com
On Thu, Dec 29, 2011 at 7:29 AM, Frances <frances....@gmail.com> wrote:
> I've made various comments on the pull request. What would you like me
> to do?

It's always easiest for me to pull a branch that is a series of
commits so I can cherry-pick the good ones.
If it's easier for you, I'm happy to do that work.
Your commits currently don't have an author on them:
Author: unknown <Fran@Fran-PC.(none)>
I'd like to give you proper attribution, in the form of
Firstname Lastname <email@address>

> bootstrap.py doesn't work as it stands, if you want to invoke cl
> instead of g++.

Yes, I see. A separate file is ok for now, but I think this code will
probably rot over time.

Some other ideas:
- add more logic to bootstrap.py to make it work with cl.exe
- require users to download a ninja.exe for the first-time bootstrap
- require some additional build system like cmake

Of those, perhaps the second one isn't so bad... ?

Frances

Jan 3, 2012, 7:17:54 AM
to ninja-build


On Dec 29 2011, 6:31 pm, Evan Martin <mart...@danga.com> wrote:
Hi Evan,

I've made another pull request - https://github.com/martine/ninja/pull/177
This time I made sure my user.name was still set - not sure how it got
lost before.
This includes 3 small commits - one to include <algorithm> for
std::find in test.cc, some to sort out hash_map for MSVC, and one to
fix a problem with string_piece using std::string(NULL, 0).
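
For anyone hitting the same assertion, the string_piece fix amounts to a guard of roughly this shape (member names are assumed here; this is a sketch rather than the exact commit). MSVC's debug CRT asserts when std::string is constructed from a NULL pointer, even with length 0.

    // Minimal sketch of the guard: never construct std::string from NULL,
    // even with length 0; return an empty string instead.
    #include <cstddef>
    #include <string>

    struct StringPiece {
      const char* str_;
      size_t len_;

      std::string AsString() const {
        return len_ ? std::string(str_, len_) : std::string();
      }
    };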

I have a few other patches too - I'll make those into different pull
requests.

I looked at your bootstrap.py but it still didn't work with MSVC.
Shame, because I liked that idea. Even if we do download a ninja.exe
for the first-time bootstrap, if we use MSVC to build it we must spell
the options for MSVC correctly.

Cheers,
Fran.

Nicolas Desprès

Jan 3, 2012, 8:41:13 AM
to ninja...@googlegroups.com
On Tue, Jan 3, 2012 at 1:17 PM, Frances <frances....@gmail.com> wrote:
>
[...]

>
> I looked at your bootstrap.py but it still didn't work with MSVC.
> Shame, because I liked that idea. Even if we do download a ninja.exe
> for the first-time bootstrap, if we use MSVC to build it we must spell
> the options for MSVC correctly.
>

I really do think that the easiest way to solve this issue is to use
CMake. It already knows all the options to use with a given
compiler. Although the Ninja generator is not yet merged into CMake's
next branch, we can still start to port Ninja using a Makefile-based
build system.

Cheers,

--
Nicolas Desprès

Jean-Christophe Fillion-Robin

Jan 3, 2012, 9:30:42 AM
to ninja...@googlegroups.com
If it helps, last year, in my spare time, I made an attempt to "CMake-ify" ninja.
See https://github.com/jcfr/ninja/blob/cmakeified-project/CMakeLists.txt

I also pushed a few commits so that I could get further in compiling ninja on Windows. I am sure what I did is now obsolete... but in case you are curious, you can have a look here: https://github.com/jcfr/ninja/commits/fix-windows-build

Jc



Scott Graham

Jan 3, 2012, 5:53:10 PM
to ninja...@googlegroups.com
Attached patch makes bootstrap.py work with MSVC.

Does it seem reasonable to those concerned?

One thing I wasn't sure about is whether the 'win32' block that excludes some sources was supposed to be doing anything before. It looks like the slashes were in the wrong direction anyway?

I haven't changed configure.py yet, so it fails on self build, but it shouldn't be that hard to do.
bootstrap_msvc.patch

Scott Graham

Jan 3, 2012, 6:31:26 PM
to ninja...@googlegroups.com
And here are the changes to configure.py for MSVC, layered on top of the previous patch. (I'm happy to repackage or figure out some github thingy if that's easier than a .patch, but just so people can have a look...)

FWIW, using the chrome ninja files here https://github.com/martine/ninja/downloads and doing the average of 5 runs for "ninja chrome"

mingw binary also from downloads: 3.02s
msvc build: 2.62s

But, I didn't get the mingw version to build locally, so that difference could easily be due to other changes that have landed since then.
configure_msvc.patch

Evan Martin

Jan 3, 2012, 8:05:20 PM
to ninja...@googlegroups.com
Awesome! I landed it with some minor changes, please pull and let me
know if I broke it.

Evan Martin

Jan 3, 2012, 8:12:07 PM
to ninja...@googlegroups.com
On Tue, Jan 3, 2012 at 3:31 PM, Scott Graham <sgr...@gmail.com> wrote:
> FWIW, using the chrome ninja files
> here https://github.com/martine/ninja/downloads and doing the average of 5
> runs for "ninja chrome"
>
> mingw binary also from downloads: 3.02s
> msvc build: 2.62s
>
> But, I didn't get the mingw version to build locally, so that difference
> could easily be due other changes that have landed since then.

I think the most recent change I made was on the order of 5%
improvement in CPU-bound code.
That number is still 2-3x what I'd like it to be.
I will profile it soon!

Scott Graham

Jan 4, 2012, 12:57:11 PM
to ninja...@googlegroups.com
Sorry, I wasn't clear, it needed both patches. Attached patch is vs. current head.

Other than that, just one small error introduced in configure.py.

Thanks!
msvc.patch

Scott Graham

Jan 4, 2012, 1:34:11 PM
to ninja...@googlegroups.com
Agreed :)

> I will profile it soon!

I don't have VTune or fancier profilers installed, but I ran Very Sleepy (a sampling profiler) on the chrome dataset. Based on that, some rough data:

29% is in RealDiskInterface::Stat, so switching to FindFirst/Next to make the stat'ing closer to O(#dirs) rather than O(#files) might be productive. The bigger hammer would be using the USN Journal to get to O(#changes-since-last-run), but that's a bit more work. I didn't confirm... when it errors out with the "sed.sh missing..." has it stat'd everything? If it's only done some of the work then stat may completely dominate the runtime.
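
To illustrate the per-directory idea (a rough sketch, not ninja's actual code): a single FindFirstFile/FindNextFile pass returns the last-write time of every entry in a directory, so the kernel round-trips scale with #dirs instead of #files.

    // Sketch only: enumerate one directory and record every entry's mtime,
    // so later per-file stat queries can be answered from this map.
    #include <windows.h>
    #include <map>
    #include <string>

    void StatAllFilesIn(const std::string& dir,
                        std::map<std::string, FILETIME>* mtimes) {
      WIN32_FIND_DATAA entry;
      HANDLE handle = FindFirstFileA((dir + "\\*").c_str(), &entry);
      if (handle == INVALID_HANDLE_VALUE)
        return;
      do {
        (*mtimes)[entry.cFileName] = entry.ftLastWriteTime;
      } while (FindNextFileA(handle, &entry));
      FindClose(handle);
    }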

13% + 11% + 4% = 28% is in fopen/fread/fclose for RealDiskInterface::ReadFile. Most of those are via LoadDepFile (naturally). So, there might be some gain from using MapViewOfFile instead, or more complicatedly doing some sort of .d clumping that I think someone suggested before.
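
The MapViewOfFile variant would look roughly like this (again only a sketch; error handling and the zero-length-file case, which CreateFileMapping rejects, are omitted):

    // Sketch: map the depfile instead of fopen/fread/fclose and parse it
    // in place.
    #include <windows.h>

    void ReadDepfileMapped(const char* path) {
      HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
      DWORD size = GetFileSize(file, NULL);
      HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
      const char* data =
          (const char*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
      // ... hand (data, size) to the depfile parser here ...
      (void)size;
      UnmapViewOfFile(data);
      CloseHandle(mapping);
      CloseHandle(file);
    }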

The rest of the top:

5.7%: stdext::_Hash via LookupNode
5.16%: operator new, mostly allocating strings
4.25%: stdext::hash_value via LookupNode
4.09%: CanonicalizePath
3.51%: free

and then it trails off a bit.

scott

Scott Graham

Jan 4, 2012, 2:22:47 PM
to ninja...@googlegroups.com
On Wed, Jan 4, 2012 at 10:34 AM, Scott Graham <sgr...@gmail.com> wrote:
> [...]

Reading that post by Jonathon Blow more carefully, it seems like the simplest thing would be to use that method. Since the .d files need to be generated by a /showIncludes wrapper on msvc anyway, we could easily map all the .d into one directory too. e.g.:

    obj/webkit/blob/blob_data.o.d --> obj/dfiles/webkit$blob$blob_data.o.d

(or whatever, maybe reusing $ again isn't the best idea :).

Then there'd only need to be one FindFirst/NextFile loop so it should be pretty much as fast as possible w/o going to USN.
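
A possible flattening helper for that layout (purely illustrative; the separator choice is arbitrary, as noted above):

    // Illustrative only: replace path separators with '$' so all the .d
    // files can live in one flat directory such as obj/dfiles/.
    #include <string>

    std::string FlattenDepfileName(const std::string& path) {
      std::string flat = path;
      for (size_t i = 0; i < flat.size(); ++i) {
        if (flat[i] == '/' || flat[i] == '\\')
          flat[i] = '$';
      }
      return flat;
    }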

 

Petr Wolf

Jan 9, 2012, 5:00:26 PM
to ninja...@googlegroups.com

Hi all,

 

the experience from our Win32 project (roughly the size of Chromium, ~600MB of depfiles) led to the following improvements.

During the build

1] group the .d files into larger .D files (simple concatenation), per "component". There are about 300 components in the project

2] compress the .D files into .D.gz

On ninja startup

1] decompress the .D.gz files (if the depfile attribute points to such)

2] read the individual .d files from it (in memory)

This has delivered about a 5-10x speedup of ninja startup. Still, it takes tens of seconds or minutes to start (before the first action is fired), depending on the PC and the state of the cache.
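
In code, the startup side of this is roughly the following (a sketch using zlib's gzFile API; it assumes the aggregated .D.gz is just the concatenated makefile-style depfiles):

    // Sketch: inflate the aggregated .D.gz into memory; the individual .d
    // sections are then split out and parsed without touching the disk again.
    #include <zlib.h>
    #include <string>

    std::string ReadAggregatedDepfile(const char* path) {
      std::string contents;
      gzFile f = gzopen(path, "rb");
      if (!f)
        return contents;
      char buf[64 * 1024];
      int n;
      while ((n = gzread(f, buf, sizeof(buf))) > 0)
        contents.append(buf, n);
      gzclose(f);
      return contents;
    }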

 

See also two "hotspot" analyses from VTune, using "pure" ninja and individual .d files (i.e. none of the above-listed improvements). One is on a cold disk, the other on a hot one (after running the same thing a couple of times). Clearly, the reading of depfiles (tsopen_nolock) dominates on a cold disk.

Note that VTune works via sampling in this mode, checking the currently active function every 10ms. Very fast methods might therefore escape the metric.


Regards

Petr
hot.csv
cold.csv

Rachel Blum

Jan 9, 2012, 5:12:30 PM
to ninja...@googlegroups.com
Note that it's dominated by tsopen_nolock. It's the _opening_ of files that dominates. IIRC, NTFS has horrible throughput for file-opens if the cache is cold. 

Rachel

Petr Wolf

Jan 9, 2012, 5:27:08 PM
to ninja...@googlegroups.com
Yes, that's right. We tried to gain a speedup from so-called overlapped I/O, where the call to ReadFile() returns immediately and the caller is notified via a callback once the data is available. But it did not help much, because the bottleneck was still in opening the files (which has no non-blocking equivalent).

Evan Martin

Jan 9, 2012, 6:35:37 PM
to ninja...@googlegroups.com
On Mon, Jan 9, 2012 at 2:00 PM, Petr Wolf <petr...@gmail.com> wrote:
> the experience from our Win32 project (roughly the size of Chromium ~ 600MB
> of depfiles) led to the following improvements.
>
> During the build
>
> 1] group the .d files in larger .D files (simple concatenation), per
> "component". There are cca 300 components in the project
>
> 2] compress the .D files into .D.gz
>
>
> On ninja startup
>
> 1] decompress the .D.gz files (if the depfiles attribute points to such)
>
> 2] read the individual .d files from it (in memory)
>
>
> This has delivered cca 5-10x speedup of ninja startup. Still, it takes tens
> of seconds or minutes to start (before the first action is fired), depending
> on a PC and state of the cache.

Thanks for experimenting with this and for providing the profiles,
this is awesome!

When you run cl.exe, do you pass multiple files to it? I was
wondering whether we could build source files in batches (like your
components) and generate the .d file one-to-one with those batches.

Compression is an interesting way to work around a slow file system,
but you trade off CPU for it. Do you know how much additional CPU
time it costs?


I am still really sad and sorry it takes so long to start. I haven't
put nearly as much effort into cold startup behavior because I was
focused on the edit-compile cycle, where everything is hot. What is
the fastest time you've found? If Linux Chrome is ~1s, multiple
minutes would indicate something is more than 60x slower on Windows.

> See also two "hotspot" analysis from VTune, using "pure" ninja and
> individual .d files (i.e. none of the above-listed improvements). One is on
> a cold disk, the other from a hot (after running the same thing a couple of
> times). Clearly, the reading of depfiles (tsopen_nolock) dominates on a cold
> disk.
>
>
> Note that VTune works via sampling in this mode, checking the currently
> active function every 10ms. Very fast methods might therefore escape the
> metric.

Awesome, thanks!

From a glance at the hot profile, it seems that while stat/read
definitely dominate, a lot of additional time is spent across many
other places. I recognize a few of them (like Tokenizer::PeekToken)
as places I've since optimized on trunk, but even without these
optimizations I'm still a little surprised at the numbers.

For example the third-highest entry is hashing a string, and the
fourth is lower_bound() on a hash (which I believe is part of the MSVC
hash_map<> implementation). Even calls to malloc in your hot trace
sum to more than the total time spent on Ninja in my tests. I wonder
if this points at differences in our hardware, which means some of my
recent micro-optimizations may pay off. (For example I have a branch
on my laptop that has eliminated a lot of string copies, which should
save on basic memory operations, and saves an additional 5-10% or so
off of Linux Ninja trunk.)

But in any case, those hash functions together only add up to a
quarter of the time spent in stat/read. You might first start with
cherry-picking (or manually implementing, it is really small) this
patch into your branch to see how much of a benefit it has:
https://github.com/martine/ninja/commit/93c78361e30e33f950eef754742b236251e2c81e

Petr Wolf

Jan 11, 2012, 4:59:15 AM
to ninja-build
> When you run cl.exe, do you pass multiple files to it? I was
> wondering whether we could build source files in batches (like your
> components) and generate the .d file one-to-one with those batches.

No, we use one cl invocation per .cpp file. The aggregated .D files are
created in a special build step that depends on all the .obj files in a
"component", i.e. it runs like a linker step.

>
> Compression is an interesting way to work around a slow file system,
> but you trade off CPU for it. Do you know how much additional CPU
> time it costs?

The decompression cost (paid at startup time) is negligible for us
at the moment (1-2 seconds). The compression is more costly (1-2 minutes),
but that gets paid at build time and sort of "dissolves" in the myriad of
other build tasks. Also, the compression task is easily parallelized.

> I am still really sad and sorry it takes so long to start. I haven't
> put nearly as much effort into cold startup behavior because I was
> focused on the edit-compile cycle, where everything is hot. What is
> the fastest time you've found? If Linux Chrome is ~1s, multiple
> minutes would indicate something is more than 60x slower on Windows.

No need to be sorry! Ninja is doing an excellent job for us. Even with
this startup, it is way better than the native Visual Studio build,
especially when the cache gets warm.

The fastest startup (on a Z600) is about 25s. Our depfile compression
and aggregation changes are responsible for up to 5s of this - we
haven't optimized that at all, as all the focus has been on cold startups,
where we deal with minutes and not seconds.

> [...]
>
> But in any case, those hash functions together only add up to a
> quarter of the time spent in stat/read. You might first start with
> cherry-picking (or manually implementing, it is really small) this
> patch into your branch to see how much of a benefit it has:
> https://github.com/martine/ninja/commit/93c78361e30e33f950eef754742b236251e2c81e

Thanks! We're currently preparing an upgrade to the latest ninja, so
this would go with it. Once that's done, I'll be happy to run the
profiler again to see how the latest optimizations influence the
picture.

Now, moving forward, I would like to push our changes upstream as a
proposal. Let's start with the aggregation first. As we wrote it for
the "old" depfile parser and did not think much about performance with
a hot cache, it needs to be updated first. Now, where would you prefer
to put this - in depfiles or in the new deplists?

Regards
Petr

Evan Martin

Jan 11, 2012, 11:19:20 AM
to ninja...@googlegroups.com
On Wed, Jan 11, 2012 at 1:59 AM, Petr Wolf <petr...@gmail.com> wrote:
> Now, moving forward, I would like to push our changes upstream as a
> proposal. Let's start with the aggregation first. As we wrote it for
> the "old" depfiles parser and did not think much about performance on
> hot cache, it needs to be updated first. Now where would you prefer to
> put this - in the depfiles or in the new deplists?

By aggregation, you mean writing out multiple outputs' worth of
dependencies into a single file?

I think using the deplist format for that is probably a good idea, as
the depfile approach is pretty tailored to mirror gcc's output. We'll
need to extend the format to support more than one file.

In either the deplist or depfile approach, where did you specify the
path to this file such that ninja could load it? I guess you might
have needed some syntax extension?

Petr Wolf

Jan 11, 2012, 11:53:41 AM
to ninja-build
> By aggregation, you mean writing out multiple outputs' worth of
> dependencies into a single file?

Yes. This was achieved by a simple concatenation of individual .d
files.

> I think using the deplist format for that is probably a good idea, as
> the depfile approach is pretty tailored to mirror gcc's output.  We'll
> need to extend the format to support more than one file.

OTOH, the same format is used by other tools, such as SWIG. It is
therefore a convenient standard to stick to.

> In either the deplist or depfile approach, where did you specify the
> path to this file such that ninja could load it?  I guess you might
> have needed some syntax extension?

To keep the scope to a minimum, I simply "overloaded" the depfile field
so that when it contains an expression like
depfile=Big.D:small.obj
ninja opens Big.D and uses the entry for small.obj from it. The rest of
Big.D is kept in memory (as a map<string, string>, filename to raw depfile
contents), so that next time (for a different file), Big.D is not read
from disk again; instead the related element is removed from the map
and parsed.
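
In rough code, the caching scheme described above looks like this (names are invented for illustration, the split helper is only declared rather than shown, and only a single Big.D is handled):

    // Hypothetical sketch of the Big.D cache: the aggregate is read and
    // split once, then each per-object query removes and returns its entry.
    #include <map>
    #include <string>

    // Assumed helper (not shown): reads Big.D and splits the concatenated
    // depfiles into a filename -> raw-depfile-text map.
    void LoadAndSplitBigDepfile(const std::string& big_d_path,
                                std::map<std::string, std::string>* out);

    static bool big_depfile_loaded = false;
    static std::map<std::string, std::string> big_depfile_cache;

    std::string TakeDepfileFor(const std::string& big_d_path,
                               const std::string& obj) {
      if (!big_depfile_loaded) {
        LoadAndSplitBigDepfile(big_d_path, &big_depfile_cache);
        big_depfile_loaded = true;
      }
      std::map<std::string, std::string>::iterator it =
          big_depfile_cache.find(obj);
      if (it == big_depfile_cache.end())
        return std::string();
      std::string deps = it->second;
      big_depfile_cache.erase(it);
      return deps;
    }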

Now, to push it upstream, I would like to suggest something nicer (and
faster, with a smaller memory footprint). With no syntax change in .ninja
files, one would simply point to the same depfile from many nodes. The
parser would allow multiple "sections" (just a concatenation, one depfile
after another) in the depfile and would place the parsed data directly
into all the nodes when processing the depfile for the first time.
Ninja would probably still need to remember which depfiles it has
processed, so that it does not try to open some depfiles more than
once if, e.g., a new cpp file is added to the project pointing to an
existing depfile but having no record in it yet.

There will be a name lookup of the sibling nodes, when a depfile is
processed, but otherwise it should be pretty quick.

Obviously, for the deplists, the binary format would have to be
changed to allow more than one deplist per file.

What do you think about such concept?

Evan Martin

Jan 11, 2012, 12:34:34 PM
to ninja...@googlegroups.com

Wow, on my train ride in I thought more about your prior mail and
concluded more or less exactly what you just wrote: we should
support this in depfiles, and we should cache a loaded depfile so that
we can extract info from it for all build edges that reference it.

So yes, sounds good. :)

I will hold off on landing the deplist code until I'm certain we need
it. I can adjust the format then.

Peter Kümmel

Jan 16, 2012, 5:36:25 PM
to ninja...@googlegroups.com
On 29.12.2011 19:31, Evan Martin wrote:

> Some other ideas:
...


> - require some additional build system like cmake

I have a CMake file [1] which works on Windows and Linux.
It isn't in sync with the compiler flags of configure.py,
but if this one CMakeLists.txt could be added in ninja/misc,
I would bring it up to date and test it with Win(msvc/mingw)/Linux/Mac.

Peter

[1]
https://github.com/syntheticpp/ninja/blob/martine-cmake/misc/CMakeLists.txt

Philip Craig

Jan 17, 2012, 4:28:16 AM
to ninja...@googlegroups.com
It's not the same set of flags to the Windows compiler and linker as the python script on head. Not surprising as nothing keeps them in sync.

Also, be careful of what "building on Windows" might mean. For the python script, currently, it means "build on any of VS2005, VS2008 or VS2010". I don't think this is true of the CMakefile fragment.

At any rate, the decision comes down to:
* either maintain both the python script and the CMake file
* or pick one or the other, and thus require either cmake or python in order to do a ninja build (on Windows or Linux or mingw).

Nicolas Desprès

Jan 17, 2012, 5:46:25 AM
to ninja...@googlegroups.com
On Tue, Jan 17, 2012 at 10:28 AM, Philip Craig <phi...@pobox.com> wrote:
> It's not the same set of flags to the Windows compiler and linker as the
> python script on head. Not surprising as nothing keeps them in sync.
>
> Also, be careful of what "building on Windows" might mean. For the python
> script, currently, it means "build on any of VS2005, VS2008 or VS2010". I
> don't think this is true of the CMakefile fragment.
>

Well, CMake has a different generator for each of them.

> At any rate, the decision comes down to:
> * either maintain both the python script and the cmake
> * or pick one or the other, and thus require either cmake or python in order
> to do a ninja build (on Windows or Linux or mingw).

I agree with your analysis. However, I think even if we choose cmake
we will never totally get rid of the python dependency, because there
are some helper scripts like misc/measure.py that won't be translated
to CMake script.

Cheers,

--
Nicolas Desprès

Peter Kümmel

Jan 17, 2012, 2:16:39 PM
to ninja...@googlegroups.com
On 17.01.2012 10:28, Philip Craig wrote:
> It's not the same set of flags to the Windows compiler and linker as the
> python script on head. Not surprising as nothing keeps them in sync.
>
> Also, be careful of what "building on Windows" might mean. For the python
> script, currently, it means "build on any of VS2005, VS2008 or VS2010". I
> don't think this is true of the CMakefile fragment.

I've tested it with VS2010; the other VS versions will not be a problem.

>
> At any rate, the decision comes down to:
> * either maintain both the python script and the cmake
> * or pick one or the other, and thus require either cmake or python in
> order to do a ninja build (on Windows or Linux or mingw).
>

I would add the cmake file only as an option alongside the officially
supported python-based build system.

The python script is OK for bootstrapping. But the problem with the
python script is that it is not possible to generate project files
for any IDE.

Peter

d3x0r

Jul 6, 2012, 2:53:21 PM
to ninja...@googlegroups.com
I guess your cmake file has disappeared with time?

Would be nice if you already had one; I threw together one which managed to get ninja built on Windows with MinGW (gcc 4.5) and Visual Studio 2010... Can't use OpenWatcom - no hash_map support...

I'm not sure how I would generate browse_py.h.

I've added my quick and dirty cmake script.
 
 
CMakeLists.txt

Peter Kümmel

Jul 9, 2012, 4:10:25 AM
to ninja...@googlegroups.com
On 06.07.2012 20:53, d3x0r wrote:
It was not lost, in some branches there was a CMakeLists ;)

But I cleaned up the branches, and now there is a cmake-support branch
which tracks martine/master with a CMakeLists.txt added.

I also uploaded the CMakeLists.txt as a download:

https://github.com/downloads/syntheticpp/ninja/CMakeLists.txt

Simply save it into /ninja.

Comments and patches are welcome.

Peter

d3x0r

Jul 9, 2012, 1:46:17 PM
to ninja...@googlegroups.com
Okay, thanks. Looks much nicer than mine (of course); I even guessed right at making a static lib and linking the executable against it...

So ninja picked up GCC as the compiler, and it works to build a target, and it is wicked fast when there is nothing to do (basically 0 time).

But it doesn't build the full project, and I'm not sure why...

If I have it print the commands it's doing, here's a sample:
 
cmd.exe /c cd C:\general\build\ninja\sack\debug_solution\core && cmake -G Ninja M:/sack/cmake_all/.. -DCMAKE_BUILD_TYPE=debug -DCMAKE_INSTALL_PREFIX=C:/general/build/ninja/sack/debug_out/core -DBUILD_MONOLITHIC=ON -DNEED_FREETYPE=1 -DNEED_JPEG=1 -DNEED_PNG=1 -DNEED_ZLIB=1 && c:\tools\unix\ninja.exe install
 
But the last step, 'ninja.exe install', doesn't seem to do anything. I added an && echo test before ninja, and it does get that far. If I change the arguments to 'ninja.exe -? install', then ninja logs its usage and exits. I tried adding the '-v -n -d stats -d explain' arguments, but I get no output from ninja.

I'll probably come up with a few cmake scripts that simply replicate the issue... or at least try to.

d3x0r

Jul 9, 2012, 2:00:21 PM
to ninja...@googlegroups.com
 
Hmm, looks like it was using a build of ninja from before I got the right CMakeLists; the new build works. Although all the output is queued internally, and only once it's all done is it sent to the screen...
 
 
 