
"Why I don't spend time with Modern C++ anymore" by Henrique Bucher


Lynn McGuire

May 19, 2016, 9:17:07 PM
"Why I don't spend time with Modern C++ anymore" by Henrique Bucher
https://www.linkedin.com/pulse/why-i-dont-spend-time-modern-c-anymore-henrique-bucher-phd

Interesting and true. We just transitioned from Visual C++ 2005 to Visual C++ 2015. Much improved. And way slower, especially the
linker!

Lynn

Marcel Mueller

May 20, 2016, 4:58:27 AM
Well, VC++ has never been the yardstick by which others are measured.

I am still fine with C++. But I have to admit that I use some of the
modern features carefully. It is still possible to write very fast code
with C++. You can wrap almost any hack in a safe class with almost no
runtime overhead.

At this point I disagree with the referenced article. On the one hand he
complains that speed has decreased and that we should move to VHDL at
best. On the other hand he states that modern hardware is fast enough
and we no longer have to count clock cycles. So what's the message?

OK, with templates there may have been some over-engineering. Variadics
are neat. Perfect forwarding too. But it is not always good advice to
move everything to compile time. The Java concept of type erasure also
has its advantages. First of all it is fast at compile time and keeps
the executables small, which yields higher cache efficiency.
I use this pattern quite often with C++: basically a non-template core
(base class) with a type-safe template wrapper. E.g. containers of
smart-pointer elements reduce to the same class this way.
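
A minimal sketch of that pattern (illustrative names, not Marcel's
actual code): the non-template core is compiled exactly once, and the
thin template wrapper only restores type safety.

#include <cstddef>
#include <memory>
#include <vector>

// Non-template core: the real container logic lives here and is
// compiled once, no matter how many element types are used.
class PtrVectorBase {
protected:
    std::vector<std::shared_ptr<void>> items_;

    void push(std::shared_ptr<void> p) { items_.push_back(std::move(p)); }
    void* at(std::size_t i) const { return items_[i].get(); }

public:
    std::size_t size() const { return items_.size(); }
};

// Thin type-safe wrapper: every PtrVector<T> forwards to the same core,
// so the generated code is shared and the executable stays small.
template <typename T>
class PtrVector : private PtrVectorBase {
public:
    using PtrVectorBase::size;
    void push_back(std::shared_ptr<T> p) { push(std::move(p)); }
    T& operator[](std::size_t i) const { return *static_cast<T*>(at(i)); }
};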

And lambdas (or partly functional programming in general) are always
risky with respect to performance. It is very easy to end up with very
complex common subexpressions that execute the same code over and over,
and this is not always obvious at first glance. Fast code is only
slightly different from amazingly inefficient code.
But this is no property of C++. It is just a consequence of mixing
procedural and OO code with functional code. In fact C#/LINQ shares
exactly the same problem.
If one wants to enter the 'functional world' effectively, then the
functions have to be declared as strictly functional. With lambdas this
comes for free as long as they do not capture anything. But as soon as
you call any ordinary function from this code you are stuck with the
resulting sequence points and possible side effects, which impose
significant restrictions on the optimizer.
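
A small illustration of the trap (my example, not from the post;
score() stands in for any expensive ordinary function):

#include <algorithm>
#include <cmath>
#include <vector>

double score(double x) { return std::exp(std::sin(x)); }  // "expensive"

// Assumes v is non-empty. Each comparison evaluates score() twice, and
// the second pass repeats all of that work. Because exp() may set
// errno, the compiler has to assume side effects and cannot merge the
// calls; caching the scores once per element would be far cheaper.
double spread(const std::vector<double>& v) {
    auto cmp = [](double a, double b) { return score(a) < score(b); };
    double lo = *std::min_element(v.begin(), v.end(), cmp);
    double hi = *std::max_element(v.begin(), v.end(), cmp);
    return score(hi) - score(lo);
}
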
C++ offers the keyword constexpr, which is quite close to the required
constraint for functional programming. A constexpr expression tree has
no sequence points at function calls and can be reordered by the
compiler, e.g. for invariant code motion. But you must not call any
non-functional code. It's a pity that trivial functions like sin() and
cos() do not meet this constraint because of the useless errno C
compatibility, although some platforms provide extensions that remove
this dependency.
Unfortunately constexpr was not originally intended for this purpose and
has many constraints that are due more to current implementations than
to the language itself. OK, with C++14 the situation improved
significantly, but no more, no less.
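
A small C++14 sketch of what that buys (mine, not from the post): a
constexpr function is guaranteed side-effect free, so the compiler may
fold it to a constant or hoist it out of loops.

#include <cstddef>

// constexpr forces this to be a pure function of its arguments:
// no errno, no globals, no I/O. (C++14 allows loops and locals here.)
constexpr double poly(double x) {
    double acc = 0.0;
    for (int k = 0; k < 8; ++k)
        acc = acc * x + 1.0 / (k + 1);
    return acc;
}

static_assert(poly(0.0) > 0.0, "evaluated entirely at compile time");

double weighted_sum(const double* v, std::size_t n) {
    const double c = poly(0.25);  // loop-invariant; folds to a constant
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += c * v[i];
    return s;
}
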
I remember a very old C compiler from Inmos that had a side_effect_free
attribute which did exactly this job. And in fact it had a great
influence on the optimizer even in the old days (Inmos T805 Transputer).


Marcel

Juha Nieminen

May 23, 2016, 2:08:24 AM
Another developer whining about compilation times and number of features.

Whenever someone whines about compilation times, I immediately disregard
the entire thing. It's such a retarded thing to whine about.

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Wouter van Ooijen

May 23, 2016, 2:57:53 AM
On 20-May-16 at 3:16 AM, Lynn McGuire wrote:
> "Why I don't spend time with Modern C++ anymore" by Henrique Bucher
>
> https://www.linkedin.com/pulse/why-i-dont-spend-time-modern-c-anymore-henrique-bucher-phd

Why did that article raise such a storm? There is hardly any real
argument in it.

He complains that the C++ committee steers the language in the wrong
direction. But the committee doesn't steer, it merely filters what it
gets as input. Now if he complained about his proposals being rejected he
would have a good point, but as it is he is in effect complaining that
other people did not write the papers he would like to see. Hey dude, if
you want something done, do it!

Another complaint is that the language is too complex. In that he has a
point. But the alternative is a language that is not backwards compatible
and hence almost unused. Unless someone creates a really good
alternative (there are attempts) I'll stick with what I can use now on
almost every micro-controller I need to program.

Another complaint is that he sees programs (using the new features) that
are too complex and (hence) too slow. OK, so you can use those features
in the wrong way. Tell me something new! That is a complaint about how
those features are used. I mainly program small micro-controllers, and
some of those fancy new features (constexpr! templates, even some form
of lambdas) help me a lot to make my programs smaller and faster.
Especially templates.

Wouter "Objects? No Thanks!" van Ooijen

Wouter van Ooijen

May 23, 2016, 6:42:48 AM
On 23-May-16 at 11:50 AM, Stefan Ram wrote:
> Wouter van Ooijen <wou...@voti.nl> writes:
>> Wouter "Objects? No Thanks!" van Ooijen
>
> Even with templates, usually storage is required too.

"Objects? No Thanks" is of course a teaser - I use classes as
compile-time objects. So in the literal C++ sense I don't use objects
(unless you count the built-in stuff like integers, arrays and
references as objects), but I do use objects in a more abstract sense.
Indeed, something must be stored.

Juha Nieminen

May 24, 2016, 2:09:31 AM
I suspect you are using the term "object" with a different meaning
than I understand it.

In normal parlance "object" is simply the instantiation of a class.
For example:

std::string s = "Hello";

That 's' is an object (and std::string is a class).

But maybe you have a different concept of what "object" means.

Wouter van Ooijen

May 24, 2016, 2:18:27 AM
On 24-May-16 at 8:09 AM, Juha Nieminen wrote:
> Wouter van Ooijen <wou...@voti.nl> wrote:
>> On 23-May-16 at 11:50 AM, Stefan Ram wrote:
>>> Wouter van Ooijen <wou...@voti.nl> writes:
>>>> Wouter "Objects? No Thanks!" van Ooijen
>>>
>>> Even with templates, usually storage is required too.
>>
>> "Objects? No Thanks" is of course a teaser - I use classes as
>> compile-time objects. So in the literal C++ sense I don't use objects
>> (unless you count the built-in stuff like integers, arrays and
>> references as objects), but I do use objects in a more abstract sense.
>> Indeed, something must be stored.
>
> I suspect you are using the term "object" with a different meaning
> than I understand it.
>
> In normal parlance "object" is simply the instantiation of a class.
> For example:
>
> std::string s = "Hello";
>
> That 's' is an object (and std::string is a class).

That is exactly the kind of object that I don't use for my small-systems
compile-time-polymorphism programming style.

Wouter "Objects? No thanks!" van Ooijen

jacobnavia

May 24, 2016, 1:49:56 PM
Sure.

If you are paid per hour, it is very nice to wait :-)
If you are not, it is damned frustrating to wait 3 minutes at each
change, or even more

Juha Nieminen

May 24, 2016, 6:40:29 PM
jacobnavia <ja...@jacob.remcomp.fr> wrote:
> If you are not, it is damned frustrating to wait 3 minutes at each
> change, or even more

I don't think I have had *anything* I have ever done take 3 minutes
to compile, even after a clean.

Juha Nieminen

May 24, 2016, 6:41:44 PM
Why not? Why make your life more difficult than it needs to be?

Jerry Stuckle

May 24, 2016, 6:50:34 PM
You haven't worked on very big projects, then. I've seen compiles take
overnight on a mainframe.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

Wouter van Ooijen

May 25, 2016, 2:32:18 AM
On 25-May-16 at 12:41 AM, Juha Nieminen wrote:
> Wouter van Ooijen <wou...@voti.nl> wrote:
>>> std::string s = "Hello";
>>>
>>> That 's' is an object (and std::string is a class).
>>
>> That is exactly the kind of object that I don't use for my small-systems
>> compile-time-polymorphism programming style.
>
> Why not? Why make your life more difficult than it needs to be?

To make the executable code smaller and faster.
<https://www.youtube.com/watch?v=k8sRQMx2qUw>

Christian Gollwitzer

May 25, 2016, 3:04:34 AM
On 25.05.16 at 00:40, Juha Nieminen wrote:
> jacobnavia <ja...@jacob.remcomp.fr> wrote:
>> If you are not, it is damned frustrating to wait 3 minutes at each
>> change, or even more
>
> I don't think I have had *anything* I have ever done take 3 minutes
> to compile, even after a clean.

I can believe that if you only compile your own code. If you are hacking
on a big project, rebuilding can take a very long time. Imagine patching
OpenOffice. There are build distributions available with object files,
so that you can start hacking on a source file and only need to
recompile the changed bits, because a full build can take days.

I haven't worked on OO myself, but one project of mine required building
a large number of third-party modules. On Linux, the whole build takes <
5 minutes. The same using MinGW on Windows takes almost an hour, which
is extremely annoying.

Christian

Juha Nieminen

May 25, 2016, 4:10:12 AM
Jerry Stuckle <jstu...@attglobal.net> wrote:
> On 5/24/2016 6:40 PM, Juha Nieminen wrote:
>> jacobnavia <ja...@jacob.remcomp.fr> wrote:
>>> If you are not, it is damned frustrating to wait 3 minutes at each
>>> change, or even more
>>
>> I don't think I have had *anything* I have ever done take 3 minutes
>> to compile, even after a clean.
>>
>
> You haven't worked on very big projects, then. I've seen compiles take
> overnight on a mainframe.

But that's my point: Every single time, every single freaking time,
someone complains about C++, the "long compile times" argument is
brought up, as if it were some kind of crucial core flaw that affects
every single C++ programmer.

Well, it doesn't. In my entire professional career it has never been
an issue for me. It is most certainly not a reason to choose some other
language. It would be one of the silliest possible reasons to change
the entire programming language to something completely different and
incompatible.

If you work on multi-million-line codebases that take three weeks
to compile, then fine, change your language if you wish. But don't
be telling others to change their language because *you* have to
work with such codebases. It's not a problem that affects
everybody.

The complaint is idiotic.

Jerry Stuckle

May 25, 2016, 12:00:58 PM
No, it's not the complaint that is idiotic. The complaint is valid.
Just because it doesn't affect YOU doesn't mean it's not a valid
complaint. It's just not valid for YOU.

But it is very valid for a lot of other programmers.

Wouter van Ooijen

May 25, 2016, 1:09:07 PM
On 25-May-16 at 5:43 PM, Stefan Ram wrote:
> Wouter van Ooijen <wou...@voti.nl> writes:
>> Wouter "Objects? No thanks!" van Ooijen
>
> I have not watched the video, but I guess that
> what actually is being intended to convey is:
>
> Wouter "Run-time polymorphism? No thanks!" van Ooijen
>
> . When a user of a vector-graphics editor can add several
> shapes to an image, this is often implemented using a
> collection of all the shapes of the image, and we only know
> at runtime what the actual subclass (circle, triangle, ...)
> of shape is when we loop over all shapes of the collection
> and want to process each shape in turn.
>
> How would someone implement such a loop without run-time
> polymorphism (RTP)?

I never meant to say that RTP is never useful; on the contrary, I just
spent the last few weeks hammering that concept into the heads of a
fresh batch of students.

But when you don't really need RTP, there are alternatives. And when the
speed and size of hardcoded C is your benchmark, it is worth considering
CTP, because you can have *some* of the advantages of RTP without the
costs. And in my field (low-level micro-controller stuff) you often
don't need (and sometimes don't want!) run-time flexibility (for
instance because it makes it a lot harder to calculate the stack size
you need).
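
To make the CTP alternative concrete, here is a minimal sketch in the
spirit of that approach (the pin class, port address and bit number are
hypothetical; this is not Wouter's actual library code):

#include <cstdint>

// A GPIO output pin as a *type*: the port address and bit number are
// template parameters, so the "object" exists only at compile time.
template <std::uintptr_t PortAddr, unsigned Bit>
struct pin_out {
    static void set(bool v) {
        auto* port = reinterpret_cast<volatile std::uint32_t*>(PortAddr);
        if (v) *port |= (1u << Bit);
        else   *port &= ~(1u << Bit);
    }
};

// Compile-time polymorphism: any type with a static set(bool) works.
// No vtable, no stored pointer - the call is resolved (and usually
// inlined) by the compiler, and worst-case stack use stays computable.
template <typename Pin>
void blink_once() {
    Pin::set(true);
    Pin::set(false);
}

using status_led = pin_out<0x40020014, 5>;  // hypothetical address/bit

int main() {
    blink_once<status_led>();
}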

Wouter (still) "Objects? No Thanks!" van Ooijen


>
> Why is this better than RTP?
>
> When a new kind (subclass) of shape is being added to the
> program by the programmer, where are changes necessary when
> RTP is used and where when the alternative is used?
>

Wouter van Ooijen

May 25, 2016, 2:49:47 PM
On 25-May-16 at 7:25 PM, Stefan Ram wrote:
> Wouter van Ooijen <wou...@voti.nl> writes:
>> But when you don't really need RTP, there are alternatives. And when the
>> speed and size of hardcoded C is your benchmark, it is worth considering
>> CTP, because you can have *some* of the advantages of RTP without the
>> costs.
>
> On the famous site
>
> benchmarksgame.alioth.debian.org/u64q/which-programs-are-fastest.html
>
> , C++ still is clearly slower than C.
>
> Like many other people I do not understand why.

Neither do I. What they probably are saying is that a certain set of
features (hopefully the same in both cases, but you can never be sure..)
was implemented by some specific groups of programmers using some
specific way of programming that they regarded as 'proper' for their
language, and the C++ version was slower than the C version. How very
interesting. (And how did the two versions compare on coding cost, bugs,
readability, flexibility, maintainability?)

What I try to achieve with my "Objects? No thanks!" approach (I can
surely think of technically better names, but this one captures the
attention) is some abstraction benefit (call it compile-time
parametrization) without paying a price in performance or size over
(less parametrizable) C.

Wouter "Objects? No Thanks!" van Ooijen

Wouter van Ooijen

May 25, 2016, 3:18:54 PM
On 25-May-16 at 9:01 PM, Stefan Ram wrote:
> That is one, possibly, /the/, strength of C++.

For me, it is /the/ strength of C++. And specifically of the template
mechanism, which is so dreaded by others for causing code bloat (which
is of course perfectly possible to achieve too).

Wouter

Richard

May 25, 2016, 4:59:02 PM
[Please do not mail me a copy of your followup]

Christian Gollwitzer <auri...@gmx.de> spake the secret code
<ni3iip$1t1$1...@dont-email.me> thusly:

>On 25.05.16 at 00:40, Juha Nieminen wrote:
>> jacobnavia <ja...@jacob.remcomp.fr> wrote:
>>> If you are not, it is damned frustrating to wait 3 minutes at each
>>> change, or even more
>>
>> I don't think I have had *anything* I have ever done take 3 minutes
>> to compile, even after a clean.

Then you have only worked on small programs, fast computers or both
:-).

Clearly the compile can take longer than 3 minutes if you simply have
enough code to compile.

>I can believe that if you only compile your own code. If you are hacking
>on a big project, rebuilding can take very long. Imagine patching
>OpenOffice.

Try rebuilding clang-tidy from scratch. It takes me ~15-20 minutes.
I haven't applied various blog post hacks to get that time down to
something less because I'm lazy, but there are various ways to
decrease that time significantly.

>There are build distributions available with object files,
>so that you can start hacking on a source file and only need to
>recompile the changed bits, because a full build can take days.

While clang doesn't take days to build from scratch, the above is
essentially what the blog posts that I read talked about.

>I haven't worked on OO myself, but one project of mine required building
>a large number of third-party modules. On Linux, the whole build takes <
>5 minutes. The same using MinGW on Windows takes almost an hour, which
>is extremely annoying.

It is my understanding that the biggest thing holding back a Windows
build is that NTFS before Windows 8 performed poorly on small files
compared to Linux file systems. With Windows 8, the filesystem
performance improved and coworkers reported decreased build times,
without changing anything other than the filesystem for the build.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Richard

May 25, 2016, 5:00:48 PM
[Please do not mail me a copy of your followup]

Wouter van Ooijen <wou...@voti.nl> spake the secret code
<574546fa$0$22286$e4fe...@newszilla.xs4all.nl> thusly:

>To make the executable code smaller and faster.
><https://www.youtube.com/watch?v=k8sRQMx2qUw>

Ah, thanks for sharing this! One of the members of my user group does
embedded programming in microcontrollers and I bet he will really
enjoy this.

Wouter van Ooijen

May 25, 2016, 5:12:45 PM
On 25-May-16 at 11:00 PM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Wouter van Ooijen <wou...@voti.nl> spake the secret code
> <574546fa$0$22286$e4fe...@newszilla.xs4all.nl> thusly:
>
>> To make the executable code smaller and faster.
>> <https://www.youtube.com/watch?v=k8sRQMx2qUw>
>
> Ah, thanks for sharing this! One of the members of my user group does
> embedded programming in microcontrollers and I bet he will really
> enjoy this.

Let him check http://www.voti.nl/blog/?page_id=144 and see if I have
missed anything important on the C++/small-embedded field. He might
especially like the Kvasir approach.

Wouter van Ooijen

Chris Vine

May 25, 2016, 5:28:24 PM
On Wed, 25 May 2016 20:58:50 +0000 (UTC)
legaliz...@mail.xmission.com (Richard) wrote:
> Try rebuilding clang-tidy from scratch. It takes me ~15-20 minutes.
> I haven't applied various blog post hacks to get that time down to
> something less because I'm lazy, but there are various ways to
> decrease that time significantly.

Or try building the latest webkit-gtk
( http://webkitgtk.org/releases/webkitgtk-2.12.3.tar.xz ) which takes a
little over 1 hour on my 8 core machine when compiled with -j8 (I
daren't say how long it takes on 1 core). This seems entirely down to
its C++ code. And all it is is a javascript/web/html/rendering engine.

webkit generally is a pig.

Richard

May 25, 2016, 5:39:46 PM
[Please do not mail me a copy of your followup]

Wouter van Ooijen <wou...@voti.nl> spake the secret code
<57461530$0$4069$e4fe...@newszilla.xs4all.nl> thusly:

>Let him check http://www.voti.nl/blog/?page_id=144 and see if I have
>missed anything important on the C++/small-embedded field. He might
>especially like the Kvasir approach.

Some interesting stuff there. I was hoping for more from the Dan Saks
presentation; he doesn't get into the embedded stuff until slide 78!
After that it was all good stuff!

David Brown

May 25, 2016, 6:29:12 PM
This is getting a little off-topic, but the filesystem performance on
Windows is only one aspect (though it can be a significant part of the
problem). Linux (and other *nix) is far more efficient at creating
processes than Windows (Windows is quite efficient at handling thread
creation, but not process creation). If you are using "make" and "gcc",
you run a number of processes for every single file you compile. On
Linux, that's no problem and it runs quickly - but on Windows, it can be
a large overhead if your project has lots of files. Windows-only
compilers like MSVC tend to be more monolithic, and perhaps do all the
compilations without restarting the main process. Finally, Linux is
better at scheduling tasks in an SMP system, making better use of all
your cores when compiling. (Win8 has, I believe, improved here.)


Thomas David Rivers

May 26, 2016, 11:14:26 AM
Jerry Stuckle wrote:

>On 5/24/2016 6:40 PM, Juha Nieminen wrote:
>
>>jacobnavia <ja...@jacob.remcomp.fr> wrote:
>>
>>>If you are not, it is damned frustrating to wait 3 minutes at each
>>>change, or even more
>>
>>I don't think I have had *anything* I have ever done take 3 minutes
>>to compile, even after a clean.
>
>You haven't worked on very big projects, then. I've seen compiles take
>overnight on a mainframe.
>
Which is why we provide a cross-compiler for the mainframe;
we can fix that problem... very often this is a side-effect of
an overloaded mainframe.

- Dave Rivers -

--
riv...@dignus.com Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com

Öö Tiib

May 26, 2016, 11:44:16 AM
On Monday, 23 May 2016 09:57:53 UTC+3, Wouter van Ooijen wrote:
> On 20-May-16 at 3:16 AM, Lynn McGuire wrote:
> > "Why I don't spend time with Modern C++ anymore" by Henrique Bucher
> >
> > https://www.linkedin.com/pulse/why-i-dont-spend-time-modern-c-anymore-henrique-bucher-phd
>
> Why did that article raise such a storm? There is hardly any real
> argument in it.

The storm is because they hate that C++ is really just a programming
tool. One can make good programs with it or bad programs with it.
However, neither is what they want to do. They don't want to do *any*
engineering work whatsoever, so they do not need things like C++. The
most they may be fine with is configuring some framework.

>
> He complains that the C++ committee steers the language in the wrong
> direction. But the committee doesn't steer, it merely filters what it
> gets as input. Now if he complained about his proposals being rejected he
> would have a good point, but as it is he is in effect complaining that
> other people did not write the papers he would like to see. Hey dude, if
> you want something done, do it!

That is not what they expect to do. They expect that language designers
do the work and they are the consumers. So, for a (possibly slightly
stretched) example, they need to translate ABNF rules into regular
expressions. What function do they have to call for that? No such
function? That means the C++ committee did bad work: it did not add the
feature they happen to need, and instead there are complexities they
don't need.

>
> Another complaint is that the language is too complex. In that he has a
> point. But the alternative is a language that is not backwards compatible
> and hence almost unused. Unless someone creates a really good
> alternative (there are attempts) I'll stick with what I can use now on
> almost every micro-controller I need to program.

C++ is a very universal tool. The most complex software the average
human uses (the web browser) is programmed in C++, the biggest servers a
human contacts (Facebook, YouTube etc.) are programmed in C++, and the
software of the smallest micro-controllers is also often programmed in
C++. So it is universal, and it can't be the most convenient tool for
every context it is usable in.

>
> Another complaint is that he sees programs (using the new features) that
> are too complex and (hence) too slow. OK, so you can use those features
> in the wrong way. Tell me something new! That is a complaint about how
> those features are used. I mainly program small micro-controllers, and
> some of those fancy new features (constexpr! templates, even some form
> of lambdas) help me a lot to make my programs smaller and faster.
> Especially templates.

I still feel that the author just does not want such tools at all. He
does not want to consider what is done at compile time or at run time,
does not want to care whether some value is reused by move or copied,
likes tight intrusive coupling with abstract base classes (not lambdas),
and so on.

All those optimizations were achievable in C++03. The effect of
'constexpr' was achievable with template meta-programming gibberish,
move had an example implementation ('std::auto_ptr' was quite tricky),
and the typical usage of lambda was achievable with the tricks done
inside 'boost::bind'.
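
For instance (my illustration of that point, not from the post), a
compile-time computation that needs template gibberish in C++03
collapses into an ordinary-looking function with constexpr:

// C++03: compile-time factorial via template meta-programming.
template <unsigned N>
struct factorial { static const unsigned value = N * factorial<N - 1>::value; };
template <>
struct factorial<0> { static const unsigned value = 1; };

// C++11: the same thing as a normal-looking (and testable) function.
constexpr unsigned factorial_cx(unsigned n) {
    return n == 0 ? 1 : n * factorial_cx(n - 1);
}

static_assert(factorial<6>::value == 720, "TMP version");
static_assert(factorial_cx(6) == 720, "constexpr version");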

Jerry Stuckle

May 26, 2016, 5:46:38 PM
On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
> Jerry Stuckle wrote:
>
>> On 5/24/2016 6:40 PM, Juha Nieminen wrote:
>>
>>> jacobnavia <ja...@jacob.remcomp.fr> wrote:
>>>
>>>> If you are not, it is damned frustrating to wait 3 minutes at each
>>>> change, or even more
>>>
>>> I don't think I have had *anything* I have ever done take 3 minutes
>>> to compile, even after a clean.
>>
>> You haven't worked on very big projects, then. I've seen compiles take
>> overnight on a mainframe.
>>
> Which is why we provide a cross-compiler for the mainframe;
> we can fix that problem... very often this is a side-effect of
> an overloaded mainframe.
>
> - Dave Rivers -
>

Not when it's the only thing running on the mainframe. That's why it's
run overnight.

A cross-compiler on a non-mainframe would take much longer.

Scott Lurndal

May 27, 2016, 9:08:10 AM
Jerry Stuckle <jstu...@attglobal.net> writes:
>On 5/26/2016 11:19 AM, Thomas David Rivers wrote:

>
>Not when it's the only thing running on the mainframe. That's why it's
>run overnight.
>
>A cross-compiler on a non-mainframe would take much longer.

30 years ago, maybe. Today? High-end xeon systems are performance
competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
intel CPU's in place of custom CMOS logic.

Jerry Stuckle

May 27, 2016, 10:58:52 AM
But that's not the advantage of mainframes. Mainframe's advantage is
super fast I/O access to huge datasets through multiple simultaneous
channels. Your xeon systems can't compete.

If there were no advantage to mainframes, why would companies spend
millions of dollars on them?

Scott Lurndal

May 27, 2016, 11:51:39 AM
Jerry Stuckle <jstu...@attglobal.net> writes:
>On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>
>>>
>>> Not when it's the only thing running on the mainframe. That's why it's
>>> run overnight.
>>>
>>> A cross-compiler on a non-mainframe would take much longer.
>>
>> 30 years ago, maybe. Today? High-end xeon systems are performance
>> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
>> intel CPU's in place of custom CMOS logic.
>>
>
>But that's not the advantage of mainframes. Mainframe's advantage is
>super fast I/O access to huge datasets through multiple simultaneous
>channels. Your xeon systems can't compete.

For a compile? Yes, the xeon systems can compete. Likely outcompete.

And there are high-end xeon systems with "super fast I/O access".

The SoC I work with has 48 cores, two 40Gbit/s network controllers, along
with 16 SATA controllers and 40 lanes of Gen 3 PCI Express.

IBM is still stuck on 8Gbit/sec fiberchannel.

David Brown

May 27, 2016, 12:10:51 PM
On 27/05/16 16:58, Jerry Stuckle wrote:
> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>
>>>
>>> Not when it's the only thing running on the mainframe. That's why it's
>>> run overnight.
>>>
>>> A cross-compiler on a non-mainframe would take much longer.
>>
>> 30 years ago, maybe. Today? High-end xeon systems are performance
>> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
>> intel CPU's in place of custom CMOS logic.
>>
>
> But that's not the advantage of mainframes. Mainframe's advantage is
> super fast I/O access to huge datasets through multiple simultaneous
> channels. Your xeon systems can't compete.

That may be true - but compilation, even of large programs, does not
need such I/O. If you are working with such large software, and trying
to get as short rebuild times as possible, you would have all the source
and object files in ram. Most build processes are scalable across
multiple cores and multiple machines, at least until the final link
stages. Multi-core x86 systems, or blade racks if you want more power,
will outperform mainframes in compile-speed-per-dollar by an order of
magnitude at least. Even with the most compute-optimised mainframes,
you are paying significantly for the reliability and other mainframe
features that are unnecessary on a build server.

>
> If there were no advantage to mainframes, why would companies spend
> millions of dollars on them?
>

Security, reliability, backwards compatibility, guarantees of long-term
availability of parts, massive virtualisation, etc. There are plenty of
reasons for using mainframes - computational speed, however, is
certainly not one of them.


Jerry Stuckle

May 27, 2016, 12:13:18 PM
On 5/27/2016 11:51 AM, Scott Lurndal wrote:
> Jerry Stuckle <jstu...@attglobal.net> writes:
>> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>>
>>>>
>>>> Not when it's the only thing running on the mainframe. That's why it's
>>>> run overnight.
>>>>
>>>> A cross-compiler on a non-mainframe would take much longer.
>>>
>>> 30 years ago, maybe. Today? High-end xeon systems are performance
>>> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
>>> intel CPU's in place of custom CMOS logic.
>>>
>>
>> But that's not the advantage of mainframes. Mainframe's advantage is
>> super fast I/O access to huge datasets through multiple simultaneous
>> channels. Your xeon systems can't compete.
>
> For a compile? Yes, the xeon systems can compete. Likely outcompete.
>

Not a chance. Compiling is quite I/O intensive, especially when
performing parallel compilations.

> And there are high-end xeon systems with "super fast I/O access".
>

Not even close to mainframes. But I can see you've never worked on
mainframes.

> The SoC I work with has 48 cores, two 40Gbit/s network controllers, along
> with 16 SATA controllers and 40 lanes of Gen 3 PCI Express.
>

So? It doesn't even come close to what a mainframe can handle. It
*might* be able to perform some of the I/O.

But even with your claims, what is the actual throughput on your
40Gbit/s network controllers? Or any of your other controllers?


> IBM is still stuck on 8Gbit/sec fiberchannel.
>

You mean 16 fiber channels (maybe more now, I'm not positive), all able
to run at full speed concurrently. Transfer rates can easily exceed
100 Gbps sustained.

So I ask you again - if your xeon systems are so great, why do companies
spend millions of dollars on mainframes?

Scott Lurndal

May 27, 2016, 12:50:10 PM
Jerry Stuckle <jstu...@attglobal.net> writes:
>On 5/27/2016 11:51 AM, Scott Lurndal wrote:

>> For a compile? Yes, the xeon systems can compete. Likely outcompete.
>>
>
>Not a chance. Compiling is quite I/O intensive, especially when
>performing parallel compilations.

Actually, not so much, with intelligent operating systems using
spare memory as a file cache. Which all of them do.

>
>> And there are high-end xeon systems with "super fast I/O access".
>>
>
>Not even close to mainframes. But I can see you've never worked on
>mainframes.

I designed and wrote mainframe operating systems for fourteen years. I
do know whereof I speak. And yes, in the 80's and 90's, mainframe
I/O capabilities were superior.

>
>But even with your claims, what is the actual throughput on your
>40Gbit/s network controllers? Or any of your other controllers?

Line rate, of course. Our customers would not purchase them otherwise.

>
>
>> IBM is still stuck on 8Gbit/sec fiberchannel.
>>
>
>You mean 16 fiber channels (maybe more now, I'm not positive), all able
>to run full speed concurrently. Transfer rates can easily exceed
>100Gbps maintained.

As does the 2x 40Gbps + 16 6Gbps SATA, even leaving out the PCIe.

>
>So I ask you again - if your xeon systems are so great, why do companies
>spend millions of dollars on mainframes?

They're not _my_ xeon systems.

Mainframes still exist because customers have had them for decades and can't
afford to change them out and move to a different architecture.

The same reason that the Unisys Mainframes are still being built
(albeit emulated using Intel processors).

IBM has sold _VERY FEW_ Z-series to _new_ customers for a decade or two, with
one or two exceptions. IBM's sole advantages are in the robustness
of the hardware(MTBF) and the ability to change out hardware without
massive disruptions to running code (which supports the large MTBF).
System-Z Revenue (mainly from customer refreshes) in 2015 was down 23%
(as per the 2015 annual report).

"Performance in 2014 reflected year-to-year declines related to
the System Z product cycle".

Paavo Helde

May 27, 2016, 2:20:37 PM
On 27.05.2016 19:13, Jerry Stuckle wrote:
>
> So I ask you again - if your xeon systems are so great, why do companies
> spend millions of dollars on mainframes?

Maybe because highly-paid consultants advise them to continue spending
millions of dollars on obsolete technologies? Maybe otherwise their
consulting fees might start sticking out?


Jerry Stuckle

May 27, 2016, 4:37:00 PM
On 5/27/2016 12:49 PM, Scott Lurndal wrote:
> Jerry Stuckle <jstu...@attglobal.net> writes:
>> On 5/27/2016 11:51 AM, Scott Lurndal wrote:
>
>>> For a compile? Yes, the xeon systems can compete. Likely outcompete.
>>>
>>
>> Not a chance. Compiling is quite I/O intensive, especially when
>> performing parallel compilations.
>
> Actually, not so much, with intelligent operating systems using
> spare memory as a file cache. Which all of them do.
>

Sure, once the files are fetched from disk, they can be cached. But not
until. And only when you have sufficient memory available to provide
the cache.

>>
>>> And there are high-end xeon systems with "super fast I/O access".
>>>
>>
>> Not even close to mainframes. But I can see you've never worked on
>> mainframes.
>
> I designed and wrote mainframe operating systems for fourteen years. I
> do know whereof I speak. And yes, in the 80's and 90's, mainframe
> I/O capabilities were superior.
>

And they still are today. That's why mainframes are still popular,
despite the higher cost.

>>
>> But even with your claims, what is the actual throughput on your
>> 40Gbit/s network controllers? Or any of your other controllers?
>
> Line rate, of course. Our customers would not purchase them otherwise.
>

That's not answering the question. No network operates at the
theoretical maximum speed except for possibly short bursts.

>>
>>
>>> IBM is still stuck on 8Gbit/sec fiberchannel.
>>>
>>
>> You mean 16 fiber channels (maybe more now, I'm not positive), all able
>> to run full speed concurrently. Transfer rates can easily exceed
>> 100Gbps maintained.
>
> As does the 2x 40Gbps + 16 6Gbps SATA, even leaving out the PCIe.
>

Wrong. 16 SATA controllers cannot concurrently access memory, much less
along with the network and PCIe.

>>
>> So I ask you again - if your xeon systems are so great, why do companies
>> spend millions of dollars on mainframes?
>
> They're not _my_ xeon systems.
>

No, because they know what their mainframes do, and what your xeon
system does. These companies don't make money by being stupid.

> Mainframes still exist because customers have had them for decades and can't
> afford to change them out and move to a different architecture.
>

Wrong again. Mainframes exist because they are still the fastest
around. And it would be much cheaper in the long run to change to your
xeon architecture if it were even as good. But they know their
mainframes still run rings around your architecture - despite your claims.

> The same reason that the Unisys Mainframes are still being built
> (albeit emulated using Intel processors).
>
> IBM has sold _VERY FEW_ Z-series to _new_ customers for a decade or two, with
> one or two exceptions. IBM's sole advantages are in the robustness
> of the hardware(MTBF) and the ability to change out hardware without
> massive disruptions to running code (which supports the large MTBF).
> System-Z Revenue (mainly from customer refreshes) in 2015 was down 23%
> (as per the 2015 annual report).
>
> "Performance in 2014 reflected year-to-year declines related to
> the System Z product cycle".
>

You mean they have sold Z-series to new customers, right? So much for
your "have had them for decades..." argument. And yes, the robustness
and ability to change out hardware are advantageous - but hardware is
pretty solid overall now at all levels.

And yes, System-Z revenue is down - but that's not just because of them
being mainframes. There are numerous economic reasons all over the
world. Sales of computers in general have been down for the last couple
of years.

Really - you come across as a salesman with only one argument - one that
doesn't recognize the advantages of the competition. Such a position
will NEVER work long term.

Jerry Stuckle

May 27, 2016, 4:37:53 PM
ROFLMAO! That's the best one I've heard all week.

Paavo Helde

May 27, 2016, 4:41:25 PM
On 27.05.2016 23:37, Jerry Stuckle wrote:
> On 5/27/2016 2:20 PM, Paavo Helde wrote:
>> On 27.05.2016 19:13, Jerry Stuckle wrote:
>>>
>>> So I ask you again - if your xeon systems are so great, why do companies
>>> spend millions of dollars on mainframes?
>>
>> Maybe because highly-paid consultants advise them to continue spending
>> millions of dollars on obsolete technologies? Maybe otherwise their
>> consulting fees might start sticking out?
>>
>>
>
> ROFLMAO! That's the best one I've heard all week.
>
I'll take this as a compliment! :-)

Jerry Stuckle

May 27, 2016, 4:42:29 PM
On 5/27/2016 12:10 PM, David Brown wrote:
> On 27/05/16 16:58, Jerry Stuckle wrote:
>> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>>
>>>>
>>>> Not when it's the only thing running on the mainframe. That's why it's
>>>> run overnight.
>>>>
>>>> A cross-compiler on a non-mainframe would take much longer.
>>>
>>> 30 years ago, maybe. Today? High-end xeon systems are performance
>>> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
>>> intel CPU's in place of custom CMOS logic.
>>>
>>
>> But that's not the advantage of mainframes. Mainframe's advantage is
>> super fast I/O access to huge datasets through multiple simultaneous
>> channels. Your xeon systems can't compete.
>
> That may be true - but compilation, even of large programs, does not
> need such I/O. If you are working with such large software, and trying
> to get as short rebuild times as possible, you would have all the source
> and object files in ram. Most build processes are scalable across
> multiple cores and multiple machines, at least until the final link
> stages. Multi-core x86 systems, or blade racks if you want more power,
> will outperform mainframes in compile-speed-per-dollar by an order of
> magnitude at least. Even with the most compute-optimised mainframes,
> you are paying significantly for the reliability and other mainframe
> features that are unnecessary on a build server.
>

Actually, it does. And no matter how much you try, you can't run it
from RAM until you get it into RAM. Plus, unless you have a terabyte or
more of RAM, you aren't going to be able to run multiple compilations and
keep all of the intermediate files in RAM.

Sure, you can do it, when you are compiling the 100-line programs you
write. But you have no idea what it takes to compile huge programs such
as the one I described.

>>
>> If there were no advantage to mainframes, why would companies spend
>> millions of dollars on them?
>>
>
> Security, reliability, backwards compatibility, guarantees of long-term
> availability of parts, massive virtualisation, etc. There are plenty of
> reasons for using mainframes - computational speed, however, is
> certainly not one of them.
>
>

I won't address each one individually - just to say that every one of
your "reasons" is pure hogwash.

Ian Collins

May 27, 2016, 4:46:35 PM
On 05/28/16 02:58, Jerry Stuckle wrote:
> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>
>>>
>>> Not when it's the only thing running on the mainframe. That's why it's
>>> run overnight.
>>>
>>> A cross-compiler on a non-mainframe would take much longer.
>>
>> 30 years ago, maybe. Today? High-end xeon systems are performance
>> competitive with IBM Z-series. Unisys mainframes (Burroughs, Sperry) _are_ now all
>> intel CPU's in place of custom CMOS logic.
>>
>
> But that's not the advantage of mainframes. Mainframe's advantage is
> super fast I/O access to huge datasets through multiple simultaneous
> channels. Your xeon systems can't compete.

Correct - that is why they are used for transaction processing. When
transaction processing is scaled to the extreme (Google or Facebook for
example), custom Xeon-based hardware takes over.

> If there were no advantage to mainframes, why would companies spend
> millions of dollars on them?

There are advantages, but compiling C++ code isn't one of them.

--
Ian

Ian Collins

May 27, 2016, 4:50:10 PM
On 05/28/16 08:42, Jerry Stuckle wrote:
> On 5/27/2016 12:10 PM, David Brown wrote:
>>
>> That may be true - but compilation, even of large programs, does not
>> need such I/O. If you are working with such large software, and trying
>> to get as short rebuild times as possible, you would have all the source
>> and object files in ram. Most build processes are scalable across
>> multiple cores and multiple machines, at least until the final link
>> stages. Multi-core x86 systems, or blade racks if you want more power,
>> will outperform mainframes in compile-speed-per-dollar by an order of
>> magnitude at least. Even with the most compute-optimised mainframes,
>> you are paying significantly for the reliability and other mainframe
>> features that are unnecessary on a build server.
>>
>
> Actually, it does. And no matter how much you try, you can't run it
> from RAM until you get it into RAM. Plus, unless you have a terabyte or
> more of RAM, you aren't going to be able to run multiple compilations and
> keep all of the intermediate files in RAM.

You don't need to keep the intermediate files in RAM, just the generated
objects.

--
Ian

JiiPee

May 27, 2016, 6:12:57 PM
On 25/05/2016 17:00, Jerry Stuckle wrote:
> On 5/25/2016 4:09 AM, Juha Nieminen wrote:
>> The complaint is idiotic.
>>
> No, it's not the complaint that is idiotic. The complaint is valid.
> Just because it doesn't affect YOU doesn't mean it's not a valid
> complaint. It's just not valid for YOU.
>
> But it is very valid for a lot of other programmers.

But "a lot" is not necessary a lot percentage- wise. 100 is "a lot" but
its not a lot if there is 100 for every 10000000.
Thats a general rule anyway: how many it affects *percentage-wise* if
normally or almost always which matter what comes to language-issue. For
example if feature XX helps 200 people in the world (which is "alot")
but it does not help the rest 8999999 , then most surely it is not added
to the language.


jacobnavia

May 27, 2016, 6:30:40 PM
On 25/05/2016 at 10:09, Juha Nieminen wrote:
> But that's my point: Every single time, every single freaking time,
> someone complains about C++, the "long compile times" argument is
> brought up, as if it were some kind of crucial core flaw that affects
> every single C++ programmer.

Excuse me, but given that any C++ programmer must compile his/her code
sometime, it SURELY affects EVERY SINGLE C++ PROGRAMMER.

That this is no reason to switch languages is obvious; most people are
saying that the compile times go up, and even though the machines now are
much more powerful, the compile times still go up.

That is (for you) not a reason for concern. The fact that you report
(that many people complain about compilation times) is not a reason for
you to reflect a bit and notice that the long compilation time is just a
symptom of a more serious condition:

General obesity.

C++ has become too big, too obese. Pushed by a McDonald's industry, every
single feature that anybody can conceive has been added to that
language, and the language is now a vast mass of FAT.

Too complex now even for its own creator: Bjarne acknowledged that he
just could not implement the latest feature he wanted to add. The
concept of what C++ should be has disappeared beyond a confusion of
features piled upon features.

I think a reflection is needed. We could re-create a leaner language
where all the features of C++ could be maintained (and many more added!)
if we open the language and let people program the compiler itself.

I will publish soon a document explaining this in detail.

Ian Collins

May 27, 2016, 6:59:19 PM
On 05/28/16 10:30, jacobnavia wrote:
>
> I think a reflexion is needed. We could re-create a leaner language
> where all the features of C++ could be maintained (and many more added!)
> if we open the language and let people program the compiler itself.

Are you mistaking C++ for Java? What isn't "open" about the C++
development process? Anyone is free to work on one of the open-source
compilers. Is your lcc compiler source freely available for people to
modify?

> I will publish soon a document explaining this in detail.

That would be interesting.

--
Ian

Ian Collins

May 27, 2016, 7:40:50 PM
On 05/28/16 11:28, Stefan Ram wrote:
> jacobnavia <ja...@jacob.remcomp.fr> writes:
>> C++ has become too big, too obese.
>
> I thought so too. The committee even wrote (posted by
> Bjarne Stroustrup):
>
> »C++ is already too large and complicated for our taste«.
>
> (This was when they were still posting to Usenet.)
>
> But C++11, C++14 and C++17 miraculously managed to get
> easier for beginners while getting even larger. The C++
> Core Guidelines might be another step in this direction.
>
> For example, one can compare C++98's
>
> for( ::std::vector< T >::iterator it = jobs.begin(); it != jobs.end(); ++it )
>
> with C++14's
>
> for( auto const & job: jobs )

I agree with you there! I'm currently working with a large code base,
some of which is over 20 years old, and modernising the code definitely
equals simplifying the code.

--
Ian

Jerry Stuckle

May 27, 2016, 8:48:36 PM
It doesn't matter what the "percentage" is. What matters is the real
numbers. And performance is not the same as language issues.

Another troll post.

Jerry Stuckle

May 27, 2016, 8:50:43 PM
The generated objects ARE the intermediate files. But once again you
show your lack of knowledge of anything but the smallest (i.e. MSDOS
1.0) systems.

Jerry Stuckle

May 27, 2016, 8:52:10 PM
On 5/27/2016 4:46 PM, Ian Collins wrote:
> On 05/28/16 02:58, Jerry Stuckle wrote:
>> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>>
>>>>
>>>> Not when it's the only thing running on the mainframe. That's why it's
>>>> run overnight.
>>>>
>>>> A cross-compiler on a non-mainframe would take much longer.
>>>
>>> 30 years ago, maybe. Today? High-end xeon systems are performance
>>> competitive with IBM Z-series. Unisys mainframes (Burroughs,
>>> Sperry) _are_ now all
>>> intel CPU's in place of custom CMOS logic.
>>>
>>
>> But that's not the advantage of mainframes. Mainframe's advantage is
>> super fast I/O access to huge datasets through multiple simultaneous
>> channels. Your xeon systems can't compete.
>
> Correct - that is why they are used for transaction processing. When
> transaction processing is scaled to extreme (Google or Facebook for
> example), custom Xeon based hardware takes over.
>

Try again. It doesn't work.

>> If there were no advantage to mainframes, why would companies spend
>> millions of dollars on them?
>
> There are advantages, but compiling C++ code isn't one of them.
>

Right. So say you.

Ian Collins

May 27, 2016, 8:53:11 PM
On 05/28/16 12:50, Jerry Stuckle wrote:
> On 5/27/2016 4:49 PM, Ian Collins wrote:
>> On 05/28/16 08:42, Jerry Stuckle wrote:
>>> On 5/27/2016 12:10 PM, David Brown wrote:
>>>>
>>>> That may be true - but compilation, even of large programs, does not
>>>> need such I/O. If you are working with such large software, and trying
>>>> to get as short rebuild times as possible, you would have all the source
>>>> and object files in ram. Most build processes are scalable across
>>>> multiple cores and multiple machines, at least until the final link
>>>> stages. Multi-core x86 systems, or blade racks if you want more power,
>>>> will outperform mainframes in compile-speed-per-dollar by an order of
>>>> magnitude at least. Even with the most compute-optimised mainframes,
>>>> you are paying significantly for the reliability and other mainframe
>>>> features that are unnecessary on a build server.
>>>>
>>>
>>> Actually, it does. And no matter how much you try, you can't run it
>>> from RAM until you get it into RAM. Plus, unless you have a terabyte or
>>> more of RAM, you aren't going to be able to run multiple compilations and
>>> keep all of the intermediate files in RAM.
>>
>> You don't need to keep the intermediate files in RAM, just the generated
>> objects.
>>
>
> The generated objects ARE the intermediate files.

Well then, RAM won't be that big a deal, will it?

--
Ian

Jerry Stuckle

May 27, 2016, 9:01:17 PM
So you say. But you obviously have no experience with large
compilations. Probably because your biggest program is around 100 LOC.
ROFLMAO!

JiiPee

May 27, 2016, 9:51:06 PM
It's not a troll post.

Surely it's the relative number which matters. Or is it the case in
voting that if 200 people in a country want something, it will become
law? Surely not, unless there are only 300 people in that country!
(Then it would be 66%.)

The VS team must do many things, so I am sure they prioritize what they
do. If only 0.01% of people need something and 30% of people need
something else, I am sure they would rather do the 30% issue.


JiiPee

May 27, 2016, 9:54:42 PM
I am not saying that fast compilation time is not important... surely it
is, but for most programmers it's not an issue, as it isn't for me.


I guess Microsoft is prioritizing the tasks they improve.

Ian Collins

May 27, 2016, 10:01:33 PM
On 05/28/16 12:52, Jerry Stuckle wrote:
> On 5/27/2016 4:46 PM, Ian Collins wrote:
>> On 05/28/16 02:58, Jerry Stuckle wrote:
>>> On 5/27/2016 9:07 AM, Scott Lurndal wrote:
>>>> Jerry Stuckle <jstu...@attglobal.net> writes:
>>>>> On 5/26/2016 11:19 AM, Thomas David Rivers wrote:
>>>>
>>>>>
>>>>> Not when it's the only thing running on the mainframe. That's why it's
>>>>> run overnight.
>>>>>
>>>>> A cross-compiler on a non-mainframe would take much longer.
>>>>
>>>> 30 years ago, maybe. Today? High-end xeon systems are performance
>>>> competitive with IBM Z-series. Unisys mainframes (Burroughs,
>>>> Sperry) _are_ now all
>>>> intel CPU's in place of custom CMOS logic.
>>>>
>>>
>>> But that's not the advantage of mainframes. Mainframe's advantage is
>>> super fast I/O access to huge datasets through multiple simultaneous
>>> channels. Your xeon systems can't compete.
>>
>> Correct - that is why they are used for transaction processing. When
>> transaction processing is scaled to the extreme (Google or Facebook for
>> example), custom Xeon-based hardware takes over.
>>
> Try again. It doesn't work.

Try what again?

Try reading:

https://code.facebook.com/posts/1538145769783718/open-compute-project-u-s-summit-2015-facebook-news-recap/

http://datacenterfrontier.com/facebook-open-compute-hardware-next-level/

--
Ian

Ian Collins

May 27, 2016, 10:10:11 PM
On 05/28/16 13:54, JiiPee wrote:
>
> I am not saying that fast compilation time is not important... surely it
> is, but for most programmers it's not an issue, as it isn't for me.

It is an issue for those of us who use TDD and therefore build and run
tests very often. The solution is two-fold:

1) throw more hardware at it
2) better modularise the code.

I do both!

Building build farms is part of my day job, so the first is relatively
easy for me, but I appreciate that it isn't an option for everyone.

The second is good engineering practice.

--
Ian

Rosario19

May 28, 2016, 12:37:24 AM
On Sat, 28 May 2016 00:30:29 +0200, jacobnavia wrote:

>C++ has become too big, too obese. Pushed by a MacDonald industry, every
>single feature that anybody can conceive has been added to that
>language, and the language is now a vast mass of FAT.

the compile time is not a problem...
because compilation is a parallelizable process,
so one can imagine even now having 8 CPUs,
where each CPU runs its own compiler instance and compiles one of the
many files it has to compile,
or each CPU gets its own part of a file to compile, etc.

Robert Wessel

May 28, 2016, 5:32:52 AM
On Sat, 28 May 2016 06:37:14 +0200, Rosario19 <R...@invalid.invalid>
wrote:
Given that essentially all of my non-debug builds have LTCG turned on
(at least on those platforms where that's an option), the "link" step
is where quite a large chunk of the CPU time now gets burned, and it
doesn't parallelize.

Jerry Stuckle

May 28, 2016, 8:34:27 AM
On 5/27/2016 9:50 PM, JiiPee wrote:
> On 28/05/2016 01:48, Jerry Stuckle wrote:
>> On 5/27/2016 6:12 PM, JiiPee wrote:
>> It doesn't matter what the "percentage" is. What matters is the real
>> numbers. And performance is not the same as language issues.
>>
>> Another troll post.
>
> It's not a troll post.
>

Sorry, another troll post.

> Surely it's the relative number which matters. Or is it the case in
> voting that if 200 people in a country want something, it will become
> law? Surely not, unless there are only 300 people in that country!
> (Then it would be 66%.)
>

It would be if only 300 people voted.

> The VS team must do many things, so I am sure they prioritize what they
> do. If only 0.01% of people need something and 30% of people need
> something else, I am sure they would rather do the 30% issue.
>
>

That's only a very small part of it. It depends more on what that 30%
wants.

Jerry Stuckle

May 28, 2016, 8:35:31 AM
Citation?

>
> I guess Microsoft is prioritizing tasks what they improve.
>

It's a huge problem for a large number of programmers. Just because it
isn't for YOU does not mean it's not important to OTHERS. YOU ARE NOT
THE WHOLE WORLD.

Jerry Stuckle

May 28, 2016, 8:37:56 AM
Nice ability to Google. Now try some *reliable* references. Then ask
some of those big companies why they use big iron.

David Brown

May 28, 2016, 11:25:14 AM
The entire Debian source code repository is about 1.1 GLoC, perhaps
something like 30 GB total. And that is an absurdly big software
project - /way/ bigger than any individual program or project (even in
the mainframe world). A solid SSD will give you something like 0.5 GB
per second. So reading /all/ of those files takes about a minute - and
that is if you don't want to splash out on a couple of disks in a RAID
system.

The disk speed is pretty much totally irrelevant to large compile times
(unless you are using a poor OS that has slow file access and bad
caching systems, like Windows - though Win8 is rumoured to be better).

The key issue is not the number of lines of code in the files, but the
number of lines of code /compiled/. You only need to read these
multi-MB header files once, but you need to compile them repeatedly.
Thus processor and memory are the issue, not disk speed.

> Plus, unless you have a terabyte or
> more of RAM, you aren't going to be able to run multiple compilations and
> keep all of the intermediate files in RAM.


Libreoffice is an example of a very large C++ project. From a report I
read, it took approximately an hour to build on an 8-core 1.4 GHz
Opteron system with 64 GB ram. Peak memory usage is only about 11 GB,
or 18 GB with link-time optimisation (which requires holding much more
in memory at a time).

The idea that you would need a TB or more of ram is just silly.

>
> Sure, you can do it, when you are compiling the 100 line programs you
> write. But you have no idea what it takes to compile huge programs such
> as the one I described.

You haven't described any programs, huge or otherwise.

(It is true that for the programs I write, compile times are rarely an
issue - in most cases, the code is written by a single person, for a
single purpose on a small embedded system. But sometimes I also have to
compile big code bases for other systems.)

>
>>>
>>> If there were no advantage to mainframes, why would companies spend
>>> millions of dollars on them?
>>>
>>
>> Security, reliability, backwards compatibility, guarantees of long-term
>> availability of parts, massive virtualisation, etc. There are plenty of
>> reasons for using mainframes - computational speed, however, is
>> certainly not one of them.
>>
>>
>
> I won't address each one individually - just to say that every one of
> your "reasons" is pure hogwash.
>

Are you telling me that mainframe customers are not interested in
security, or are you telling me that mainframes have poorer security
than the average Linux/Windows/Mac server? Are you telling me that
banks are not particularly concerned about reliability, or are you
telling me that they pick Dell servers over Z-Series because of Dell's
excellent reliability record?

Or could it be that you are simply talking nonsense again?


David Brown

unread,
May 28, 2016, 12:25:12 PM5/28/16
to
Okay, here are a few links. I did not use Google to find them. Do you
count these as reliable? If not, show us references that /you/ feel are
reliable - especially ones that show that Google and/or Facebook use
mainframes rather than Linux x86 servers for significant parts of their
computer centres, or ones that show anyone choosing a mainframe for a
build platform because of its greater performance. And since you
dislike Googling, I don't expect to see any "let me google that for you"
links - concrete references only.

<http://www.datacenterknowledge.com/archives/2016/04/28/guide-to-facebooks-open-source-data-center-hardware/>

<http://www.businessinsider.com/facebook-open-compute-project-history-2015-6?op=1%3fr=US&IR=T&IR=T>

<http://arstechnica.com/information-technology/2013/07/how-facebook-is-killing-the-hardware-business-as-we-know-it/>


> Then ask
> some of those big companies why they use big iron.
>

Why not ask IBM, since they have something like 90% of the mainframe market?

<https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_whousesmf.htm>

Apparently, long lifetimes, reliability, stability, security,
scalability, and compatibility with previous systems are the main
points, along with the ability to handle huge datasets and high
bandwidth communications.

I believe that list is quite close to the one I gave, which you labelled
as "hogwash".

And I don't see "build server" or "compilation of large C++ programs" on
IBM's list.





Ian Collins

unread,
May 28, 2016, 7:58:50 PM5/28/16
to
So a post by Facebook discussing their own hardware isn't reliable?

--
Ian

Jerry Stuckle

unread,
May 28, 2016, 8:21:59 PM5/28/16
to
That shows how little you know. There are many programs in the
mainframe world which are much bigger. But you've never seen one, so
you think they don't exist.

But then all of those files in the Debian repository are pretty much
separate. They aren't linked into one huge program (or a few very large
programs), are they?

> The disk speed is pretty much totally irrelevant in large compile times
> (unless you are using a poor OS that has slow file access and bad
> caching systems, like Windows - though Win8 is rumoured to be better).
>

Wrong again. I/O speed is quite critical. But you think Debian is the
end-all of programs. It isn't.

> The key issue is not the number of lines of code in the files, but the
> number of lines of code /compiled/. You only need to read these
> multi-MB header files once, but you need to compile them repeatedly.
> Thus processor and memory is the issue, not disk speed.
>

Number of lines of compiled code is only part of the issue.

>> Plus, unless you have a terabyte or
>> more of ram, you aren't going to be able to run multiple compilation and
>> keep all of the intermediate files in ram.
>
>
> Libreoffice is an example of a very large C++ project. From an report I
> read, it took approximately an hour to build on an 8-core 1.4 GHz
> Opteron system with 64 GB ram. Peak memory usage is only about 11 GB,
> or 18 GB with link-time optimisation (which requires holding much more
> in memory at a time).
>
> The idea that you would need a TB or more of ram is just silly.
>

No, Libreoffice is an example of a small to medium sized program. It's
not large at all. But I'm sure it's much bigger than anything you've
ever worked on.

>>
>> Sure, you can do it, when you are compiling the 100 line programs you
>> write. But you have no idea what it takes to compile huge programs such
>> as the one I described.
>
> You haven't described any programs, huge or otherwise.
>
> (It is true that for the programs I write, compile times are rarely an
> issue - in most cases, the code is written by a single person, for a
> single purpose on a small embedded system. But sometimes I also have to
> compile big code bases for other systems.)
>

I am not going to name any names because they are (or have been) my
clients. And those are none of your business.

And if you want large code base - I know of at least one which has
several hundred programmers working for 3 years just on one version.

And they are busy writing code - not drinking coffee. This is what
large programs look like.

>>
>>>>
>>>> If there were no advantage to mainframes, why would companies spend
>>>> millions of dollars on them?
>>>>
>>>
>>> Security, reliability, backwards compatibility, guarantees of long-term
>>> availability of parts, massive virtualisation, etc. There are plenty of
>>> reasons for using mainframes - computational speed, however, is
>>> certainly not one of them.
>>>
>>>
>>
>> I won't address each one individually - just to say that every one of
>> your "reasons" is pure hogwash.
>>
>
> Are you telling me that mainframe customers are not interested in
> security, or are you telling me that mainframes have poorer security
> than the average Linux/Windows/Mac server? Are you telling me that
> banks are not particularly concerned about reliability, or are you
> telling me that they pick Dell servers over Z-Series because of Dell's
> excellent reliability record?
>
> Or could it be that you are simply talking nonsense again?
>
>

I'm saying that every one of your "reasons" is pure hogwash. If you
think mainframes are more secure than your xeon systems, then you don't
know how to properly implement your xeon systems. And I guess your
systems aren't very reliable, either.

You've just told me a lot about why people don't want to buy your
systems. They want ones designed by competent people.

Jerry Stuckle

unread,
May 28, 2016, 8:26:13 PM5/28/16
to
None of which provide support for your claims. Nice try.

>
>> Then ask
>> some of those big companies why they use big iron.
>>
>
> Why not ask IBM, since they have something like 90% of the mainframe
> market?
>
> <https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_whousesmf.htm>
>

I don't need to. I worked for IBM for 13 years - the first 5 being in
mainframe hardware and the last 8 in mainframe software. During that
time I worked with hundreds of IBM mainframe customers. I think I know
them pretty well.

But then I wouldn't expect an IBM site to be promoting Dell computers.

>
> Apparently, long lifetimes, reliability, stability, security,
> scalability, and compatibility with previous systems are the main
> points, along with the ability to handle huge datasets and high
> bandwidth communications.
>
> I believe that list is quite close to the one I gave, which you labelled
> as "hogwash".
>
> And I don't see "build server" or "compilation of large C++ programs" on
> IBM's list.
>

Sure - they're going to promote short lifetimes, no reliability and all
the rest? ROFLMAO! Now you've even gone beyond stoopid.

Jerry Stuckle

unread,
May 28, 2016, 8:26:56 PM5/28/16
to
How unbiased do you think it is? How stoopid are you?

Ian Collins

unread,
May 28, 2016, 8:56:57 PM5/28/16
to
So you now claim that Facebook (and by association the likes of Intel,
Microsoft and Google) are lying about an open project?

--
Ian

Mike Stump

unread,
May 28, 2016, 9:45:11 PM5/28/16
to
In article <niahja$m2$1...@dont-email.me>,
jacobnavia <ja...@jacob.remcomp.fr> wrote:
>I think a reflexion is needed. We could re-create a leaner language
>where all the features of C++ could be maintained (and many more added!)
>if we open the language and let people program the compiler itself.
>
>I will publish soon a document explaining this in detail.

Sounds fun. I think it would be nice to have a C/C++ successor where
more of the features of languages like C and C++ were instead in the
library of the new language. That way, people could evolve the
language merely by selecting which libraries they wanted to use and
refining and extending those libraries.

Jerry Stuckle

unread,
May 28, 2016, 9:47:19 PM5/28/16
to
I said they are biased - not lying. But you're obviously too stoopid to
understand the difference. About what I would expect, however.

Ian Collins

unread,
May 28, 2016, 10:07:30 PM5/28/16
to
Maybe they could call them boost?

--
Ian

Öö Tiib

unread,
May 29, 2016, 6:21:12 AM5/29/16
to
On Saturday, 28 May 2016 01:30:40 UTC+3, jacobnavia wrote:
>
> I think a reflexion is needed. We could re-create a leaner language
> where all the features of C++ could be maintained (and many more added!)
> if we open the language and let people program the compiler itself.
>
> I will publish soon a document explaining this in detail.

Very interesting. With C++ it is actually quite easy to create other
languages. Operator overloading, custom literals, templates and
preprocessor macros result in a great deal of semantic freedom.
It is perhaps more important to ask why such things have still only
reached quite limited or questionable success.
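
For instance (a minimal sketch of mine, with the units picked
arbitrarily), user-defined literals alone are enough for a tiny
embedded "language":

   // C++11: literal operators turn suffixes into a units mini-DSL
   constexpr long double operator"" _km(long double v) { return v * 1000.0L; }
   constexpr long double operator"" _m (long double v) { return v; }

   static_assert(1.5_km + 500.0_m == 2000.0L,
                 "evaluated entirely at compile time");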

IMHO (YMMV) it is because the mutated code is still compiled by a C++
compiler. What does "compiled" mean? A lot of attempts to compile
(possibly the majority) end with the compiler finding something strange
in the code (or even a reason to reject it totally) and giving
diagnostics. With those, all immersion is replaced with headache.

All useful abstractions leak. The C++ compiler does not know the
abstract context of that "leaner language". It is a powerful,
compile-time Turing-complete compiler. Therefore it spits out gibberish
in C++ terms about how it recursively evaluated things, how it went well
for pages and pages, and how it failed in some odd meta-programming
trick somewhere that the user of the "leaner language" is supposed to
know nothing about.
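
The usual damage control is a static_assert at the boundary of the
abstraction, so the first diagnostic the user sees is one written for
him instead of pages of instantiation trace. A minimal sketch (the
names are mine):

   #include <type_traits>

   template <typename T>
   T average(const T* data, int n) {
       static_assert(std::is_arithmetic<T>::value,
                     "average<T>: T must be an arithmetic type");
       T total{};                       // value-initialised accumulator
       for (int i = 0; i < n; ++i) total += data[i];
       return total / static_cast<T>(n);
   }

But that only covers the failures that the library author predicted.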

Do you have ideas how to mitigate that effect?

David Brown

unread,
May 29, 2016, 7:14:56 AM5/29/16
to
Do you really expect anyone to believe that in the mainframe world there
are many monolithic (or near monolithic) programs, written in C++, with
much more than 1,100,000,000 lines of code?

I'd ask for links or references, but I know I'll never see any.


>> The disk speed is pretty much totally irrelevant in large compile times
>> (unless you are using a poor OS that has slow file access and bad
>> caching systems, like Windows - though Win8 is rumoured to be better).
>>
>
> Wrong again. I/O speed is quite critical. But you think Debian is the
> end-all of programs. It isn't.

I just know how compilers work, and what their tasks are.

>
>> The key issue is not the number of lines of code in the files, but the
>> number of lines of code /compiled/. You only need to read these
>> multi-MB header files once, but you need to compile them repeatedly.
>> Thus processor and memory is the issue, not disk speed.
>>
>
> Number of lines of compiled code is only part of the issue.

Certainly it is only part of it, but it is often a reasonable indication
- much of the time-consuming tasks of compilation are correlated with
the amount of code compiled.

>
>>> Plus, unless you have a terabyte or
>>> more of ram, you aren't going to be able to run multiple compilation and
>>> keep all of the intermediate files in ram.
>>
>>
>> Libreoffice is an example of a very large C++ project. From an report I
>> read, it took approximately an hour to build on an 8-core 1.4 GHz
>> Opteron system with 64 GB ram. Peak memory usage is only about 11 GB,
>> or 18 GB with link-time optimisation (which requires holding much more
>> in memory at a time).
>>
>> The idea that you would need a TB or more of ram is just silly.
>>
>
> No, Libreoffice is an example of a small to medium sized program. Its'
> not large at all. But I'm sure it's much bigger than anything you've
> ever worked on.

Yes, Libreoffice is much bigger than anything I have worked on. And no,
it is not a "small to medium sized program". It is a /big/ program.
That does not mean that there are not bigger programs - it means that
the vast majority of programs written are very much smaller.

And because it is a popular program and an open source program, mostly
in C++, it is relatively easy to find information about compilation
resources.


>
>>>
>>> Sure, you can do it, when you are compiling the 100 line programs you
>>> write. But you have no idea what it takes to compile huge programs such
>>> as the one I described.
>>
>> You haven't described any programs, huge or otherwise.
>>
>> (It is true that for the programs I write, compile times are rarely an
>> issue - in most cases, the code is written by a single person, for a
>> single purpose on a small embedded system. But sometimes I also have to
>> compile big code bases for other systems.)
>>
>
> I am not going to name any names because they are (or have been) my
> clients. And those are none of your business.

Ah, we should just take your word for it, since you have such a
fantastic reputation for honesty.

>
> And if you want large code base - I know of at least one which has
> several hundred programmers working for 3 years just on one version.

Programmer man-years does not equate to code size. There are projects
where people write a thousand lines a day, and projects where people
write an average of a few lines a week.

>
> And they are busy writing code - not drinking coffee. This is what
> large programs look like.

And these folks are all generating hundreds of lines of C++ code per
day, every day, for years, all part of the same monolithic program that
must be recompiled as a /single/ linked executable program? Because
that's what's needed to get anything like what you have been claiming -
assuming, of course, that this is merely a new version that builds on an
existing code base several times as large.

>
>>>
>>>>>
>>>>> If there were no advantage to mainframes, why would companies spend
>>>>> millions of dollars on them?
>>>>>
>>>>
>>>> Security, reliability, backwards compatibility, guarantees of long-term
>>>> availability of parts, massive virtualisation, etc. There are plenty of
>>>> reasons for using mainframes - computational speed, however, is
>>>> certainly not one of them.
>>>>
>>>>
>>>
>>> I won't address each one individually - just to say that every one of
>>> your "reasons" is pure hogwash.
>>>
>>
>> Are you telling me that mainframe customers are not interested in
>> security, or are you telling me that mainframes have poorer security
>> than the average Linux/Windows/Mac server? Are you telling me that
>> banks are not particularly concerned about reliability, or are you
>> telling me that they pick Dell servers over Z-Series because of Dell's
>> excellent reliability record?
>>
>> Or could it be that you are simply talking nonsense again?
>>
>>
>
> I'm saying that every one of your "reasons" is pure hogwash. If you
> think mainframes are more secure than your xeon systems, then you don't
> know how to properly implement your xeon systems. And I guess your
> systems aren't very reliable, either.
>
> You've just told me a lot about why people don't want to buy your
> systems. They want ones designed by competent people.
>

You don't have the faintest idea of who I am, what I do for a living,
what my company produces, what part I play in that, who our customers
are, what they are looking for in their purchases, or anything else
about me. The only things you know are the things I have said - such as
that I mainly write code for small embedded systems.

Yet somehow you can conclude that I make insecure and unreliable xeon
systems that people don't want to buy. Is there no end to the drivel
you can invent?

And just for your example, I actually think that it is perfectly
possible to make xeon (or any other cpu) based systems that are as
secure as a mainframe, when dealing with appropriate tasks - security is
a process that depends on the tasks and the threats. But the question
at hand was not whether /I/ could make a secure x86 server, or indeed
whether mainframes are or are not inherently more secure than x86
servers. The question was why people choose mainframes, and one of the
reasons is that some people /believe/ that mainframes are inherently
more secure than x86 servers. It matters not if that belief is true or not.


Jerry Stuckle

unread,
May 29, 2016, 9:59:24 AM5/29/16
to
I really don't care what you believe. You've already proven you are not
only stoopid, but unwilling to learn anything that violates your "truth".

Here are some real truths.

Lines of code is not the only measurement of complexity.
Lines of code is not a good way to predict compilation time.
Lines of code is a measurement used by those who don't know better.

And no, not even Debian has over 1,100,000,000 lines of COMPILABLE CODE.

>
>>> The disk speed is pretty much totally irrelevant in large compile times
>>> (unless you are using a poor OS that has slow file access and bad
>>> caching systems, like Windows - though Win8 is rumoured to be better).
>>>
>>
>> Wrong again. I/O speed is quite critical. But you think Debian is the
>> end-all of programs. It isn't.
>
> I just know how compilers work, and what their tasks are.
>

You only THINK you know how compilers work. That is obvious. You need
to look at how the hardware works, also.

>>
>>> The key issue is not the number of lines of code in the files, but the
>>> number of lines of code /compiled/. You only need to read these
>>> multi-MB header files once, but you need to compile them repeatedly.
>>> Thus processor and memory is the issue, not disk speed.
>>>
>>
>> Number of lines of compiled code is only part of the issue.
>
> Certainly it is only part of it, but it is often a reasonable indication
> - much of the time-consuming tasks of compilation are correlated with
> the amount of code compiled.
>

It is only reasonable to those who don't know better.

>>
>>>> Plus, unless you have a terabyte or
>>>> more of ram, you aren't going to be able to run multiple compilation
>>>> and
>>>> keep all of the intermediate files in ram.
>>>
>>>
>>> Libreoffice is an example of a very large C++ project. From an report I
>>> read, it took approximately an hour to build on an 8-core 1.4 GHz
>>> Opteron system with 64 GB ram. Peak memory usage is only about 11 GB,
>>> or 18 GB with link-time optimisation (which requires holding much more
>>> in memory at a time).
>>>
>>> The idea that you would need a TB or more of ram is just silly.
>>>
>>
>> No, Libreoffice is an example of a small to medium sized program. Its'
>> not large at all. But I'm sure it's much bigger than anything you've
>> ever worked on.
>
> Yes, Libreoffice is much bigger than anything I have worked on. And no,
> it is not a "small to medium sized program". It is a /big/ program.
> That does not mean that there are not bigger programs - it means that
> the vast majority of programs written are very much smaller.
>

Wrong again. In the mainframe world, it would barely be considered a
pimple on your arse.

Number of programs bigger or smaller is not a measure of size. But once
again you show your stoopidity. Libreoffice is 21 MB in 20 files (at
least on the Windows system I'm using now) and another 21 MB in DLLs.
Doesn't even rate a medium sized program.

Although I suspect "Hello World" is a "big program" for you.

> And because it is a popular program and an open source program, mostly
> in C++, it is relatively easy to find information about compilation
> resources.
>

So?

>
>>
>>>>
>>>> Sure, you can do it, when you are compiling the 100 line programs you
>>>> write. But you have no idea what it takes to compile huge programs
>>>> such
>>>> as the one I described.
>>>
>>> You haven't described any programs, huge or otherwise.
>>>
>>> (It is true that for the programs I write, compile times are rarely an
>>> issue - in most cases, the code is written by a single person, for a
>>> single purpose on a small embedded system. But sometimes I also have to
>>> compile big code bases for other systems.)
>>>
>>
>> I am not going to name any names because they are (or have been) my
>> clients. And those are none of your business.
>
> Ah, we should just take your word for it, since you have such a
> fantastic reputation for honesty.
>

No, I'm not going to give you names of my clients. And I really don't
care what an idiot like you thinks about my reputation.

>>
>> And if you want large code base - I know of at least one which has
>> several hundred programmers working for 3 years just on one version.
>
> Programmer man-years does not equate to code size. There are projects
> where people write a thousand lines a day, and projects where people
> write an average of a few lines a week.
>

And in the mainframe world, someone only writing a few lines a week
would not be employed for long. Maybe that's why you can't find a job.

>>
>> And they are busy writing code - not drinking coffee. This is what
>> large programs look like.
>
> And these folks are all generating hundreds of lines of C++ code per
> day, every day, for years, all part of the same monolithic program that
> must be recompiled as a /single/ linked executable program? Because
> that's what's needed to get anything like what you have been claiming -
> assuming, of course, that this is merely a new version that builds on an
> existing code base several times as large.
>

C and C++, yes. And most of it does go into a single program, although
it does load various parts of itself when it starts.
I know what you DON'T do for a living - you don't program. Most likely
the closest you come to programming is emptying wastebaskets of programmers.

> Yet somehow you can conclude that I make insecure and unreliable xeon
> systems that people don't want to buy. Is there no end to the drivel
> you can invent?
>

Just going by your comments here. They say much more than your claims.

> And just for your example, I actually think that it is perfectly
> possible to make xeon (or any other cpu) based systems that are as
> secure as a mainframe, when dealing with appropriate tasks - security is
> a process that depends on the tasks and the threats. But the question
> at hand was not whether /I/ could make a secure x86 server, or indeed
> whether mainframes are or are not inherently more secure than x86
> servers. The question was why people choose mainframes, and one of the
> reasons is that some people /believe/ that mainframes are inherently
> more secure than x86 servers. It matters not if that belief is true or
> not.
>
>

You made the claim. I just followed it to the logical conclusion. And
now you're trying to backpedal as fast as you can. ROFLMAO!

Just another example of your incompetence, David.

Alf P. Steinbach

unread,
May 29, 2016, 10:58:22 AM5/29/16
to
I'm not sure but I think I interviewed with you once, after sending just
a silly short e-mail application. It looked like I would get the job and
I maybe panicked, I don't know. Anyway I interrupted the interviewer
(you?) as he was telling me what a good impression I'd made, and his
face fell flat, and I didn't get the job. :)

Is there by any chance a new Eastern connection to your company?


Cheers!,

- Alf

David Brown

unread,
May 29, 2016, 11:24:03 AM5/29/16
to
I agree entirely - but it is a good measure of "big". You are the one
keen to claim that mainframe programs are so much bigger than anything else.

> Lines of code is not a good way to predict compilation time.

Lines of code /is/ a good way to predict compilation time, but it is not
the only factor. In particular, lines of code /compiled/ is more
important than lines of code in total in the sources. And some code is
more complex than others, and of course there are many other factors.
You might have noticed that I have already mentioned this.

> Lines of code is a measurement used by those who don't know better.

It is a measurement used by many people, and perhaps /misused/ by people
who don't know better. But since you are keen on the "size" of code
bases, and keen to demonstrate that "many" mainframe programs are "much
bigger" than - for example - the entire Debian source base, then lines
of code is an excellent measurement. We could also use total disk
space, which might be more accurate (especially for answering the
question "how much ram do we need to hold it all in memory") if we had
numbers conveniently available. However, the disk usage given in my
reference below also includes non-compilable files, such as graphics
resources.

>
> And no, not even Debian has over 1,100,000,000 lines of COMPILABLE CODE.

My reference, which I really hope you will view as reliable and
accurate, is:

<https://sources.debian.net/stats/>

Sid, the testing/development repository, currently has 1,139,708,723
lines of source code. That covers a number of different programming
languages, and code in Perl, Python, or Bash will not be compilable.
You can see figures further down that page detailing the breakdown by
language - approximately 445 MLoC of C, and 290 MLoC of C++.

But I am not fussy about the exact numbers - I am just giving "Debian"
as an example project with an enormous code base, for which figures are
openly available. And I would like references or links showing some
indication for your claim that /many/ mainframe /programs/ are /much/
bigger than 1.1 GLoC. Note that your claim was for "one or a few huge
programs" - unlike the Debian code, which is obviously spread across a
great many programs.

>
>>
>>>> The disk speed is pretty much totally irrelevant in large compile times
>>>> (unless you are using a poor OS that has slow file access and bad
>>>> caching systems, like Windows - though Win8 is rumoured to be better).
>>>>
>>>
>>> Wrong again. I/O speed is quite critical. But you think Debian is the
>>> end-all of programs. It isn't.
>>
>> I just know how compilers work, and what their tasks are.
>>
>
> You only THINK you know how compilers work. That is obvious. You need
> to look at how the hardware works, also.

I also know how a lot of hardware works. I don't know details of
mainframes, but I have a fair understanding of the principles. (The
details are hard, but the principles are not.) But since I know that
during compilation, I/O speed is not critical, it does not matter how
fast the I/O speed is on your build machine as long as it can get files
on and off the disk fast enough to keep the processors busy.
When someone qualified and believable tells me something about code
sizes in the mainframe world, I'll believe them. But your claims, with
the total lack of any kind of links or references, are not worth the
pixels they are written on.

> Although I suspect "Hello World" is a "big program" for you.
>
>> And because it is a popular program and an open source program, mostly
>> in C++, it is relatively easy to find information about compilation
>> resources.
>>
>
> So?
>
>>
>>>
>>>>>
>>>>> Sure, you can do it, when you are compiling the 100 line programs you
>>>>> write. But you have no idea what it takes to compile huge programs
>>>>> such
>>>>> as the one I described.
>>>>
>>>> You haven't described any programs, huge or otherwise.
>>>>
>>>> (It is true that for the programs I write, compile times are rarely an
>>>> issue - in most cases, the code is written by a single person, for a
>>>> single purpose on a small embedded system. But sometimes I also have to
>>>> compile big code bases for other systems.)
>>>>
>>>
>>> I am not going to name any names because they are (or have been) my
>>> clients. And those are none of your business.
>>
>> Ah, we should just take your word for it, since you have such a
>> fantastic reputation for honesty.
>>
>
> No, I'm not going to give you names of my clients. And I really don't
> care what an idiot like you thinks about my reputation.
>

It is quite obvious that you don't care what /anyone/ thinks of your
reputation here in Usenet.

>>>
>>> And if you want large code base - I know of at least one which has
>>> several hundred programmers working for 3 years just on one version.
>>
>> Programmer man-years does not equate to code size. There are projects
>> where people write a thousand lines a day, and projects where people
>> write an average of a few lines a week.
>>
>
> And in the mainframe world, someone only writing a few lines a week
> would not be employed for long. Maybe that's why you can't find a job.
>

People who write a few lines of code a week do not work on mainframes.
Well, I am glad you find these threads funny. However, I think your
entertainment value as the class clown is running low. Poking you and
watching the ensuing nonsense tumble out is fun for a while, but we all
reach a point where it gets too silly. It is important, I think, that
we make it clear that your posts and claims are mostly completely
spurious and have no backing in reality, in case any innocent viewers of
these threads (now or in the future) mistakenly believe you. But I
think that is already achieved. You have insulted and ridiculed just
about every single person who has disagreed with you in c.l.c and
c.l.c++, though others have presented references, quotations from
standards, logical reasoning, sample code, etc., clearly demonstrating
that you manage to get just about everything wrong.

I wonder why you ever bother posting in these newsgroups at all. Did
you get kicked out of your mythical private discussion groups of "real"
programmers?




David Brown

unread,
May 29, 2016, 3:39:53 PM5/29/16
to
I don't think that was me, though I am involved in technical interviews
of candidates. I've emailed you offline with details.

Best regards,

David

Jerry Stuckle

unread,
May 29, 2016, 9:16:50 PM5/29/16
to
No, it isn't even a measurement of "big".

>> Lines of code is not a good way to predict compilation time.
>
> Lines of code /is/ a good way to predict compilation time, but it is not
> the only factor. In particular, lines of code /compiled/ is more
> important than lines of code in total in the sources. And some code is
> more complex than others, and of course there are many other factors.
> You might have noticed that I have already mentioned this.
>

Not at all. There are many more important factors which determine
compile time than lines of code. But when none of your programs are
more than five lines long, I can see why you think that.

>> Lines of code is a measurement used by those who don't know better.
>
> It is a measurement used by many people, and perhaps /misused/ by people
> who don't know better. But since you are keen on the "size" of code
> bases, and keen to demonstrate that "many" mainframe programs are "much
> bigger" than - for example - the entire Debian source base, then lines
> of code is an excellent measurement. We could also use total disk
> space, which might be more accurate (especially for answering the
> question "how much ram do we need to hold it all in memory") if we had
> numbers conveniently available. However, the disk usage given in my
> reference below also includes non-compilable files, such as graphics
> resources.
>

It is a measurement used by people who don't know any better. It is not
considered very important by experts.

>>
>> And no, not even Debian has over 1,100,000,000 lines of COMPILABLE CODE.
>
> My reference, which I really hope you will view as reliable and
> accurate, is:
>
> <https://sources.debian.net/stats/>
>

Yes, but that is not the whole story. Sorry.

> Sid, the testing/development repository, currently has 1,139,708,723
> lines of source code. That covers a number of different programming
> languages, and code in Perl, Python, or Bash will not be compilable. You
> can see figures further down that page detailing the breakdown by
> language - approximately 445 MLoC of C, and 290 MLoC of C++.
>

Gee, let's see. It includes a number of different programming
languages. But it also contains a lot of platform-dependent code. You
don't compile ARM modules for an Intel platform, for instance. And
there's a fair amount of assembler code in there.

This also includes comments, code split across multiple lines, lines
with single left or right braces on them, and a bunch of other things.

Finally, it's not all one program. No one has everything in the
repository loaded on their machine.

Your numbers are as bogus as you are.

> But I am not fussy about the exact numbers - I am just giving "Debian"
> as an example project with an enormous code base, for which figures are
> openly available. And I would like references or links showing some
> indication for your claim that /many/ mainframe /programs/ are /much/
> bigger than 1.1 GLoC. Note that your claim was for "one or a few huge
> programs" - unlike the Debian code, which is obviously spread across a
> great many programs.
>

For someone not being fussy about exact numbers, you sure do quote a lot
of them.

>>
>>>
>>>>> The disk speed is pretty much totally irrelevant in large compile
>>>>> times
>>>>> (unless you are using a poor OS that has slow file access and bad
>>>>> caching systems, like Windows - though Win8 is rumoured to be better).
>>>>>
>>>>
>>>> Wrong again. I/O speed is quite critical. But you think Debian is the
>>>> end-all of programs. It isn't.
>>>
>>> I just know how compilers work, and what their tasks are.
>>>
>>
>> You only THINK you know how compilers work. That is obvious. You need
>> to look at how the hardware works, also.
>
> I also know how a lot of hardware works. I don't know details of
> mainframes, but I have a fair understanding of the principles. (The
> details are hard, but the principles are not.) But since I know that
> during compilation, I/O speed is not critical, it does not matter how
> fast the I/O speed is on your build machine as long as it can get files
> on and off the disk fast enough to keep the processors busy.
>

Once again you have shown a distinctly limited knowledge of much of
anything. No, you have no idea how mainframes work. You don't even
understand the principles. This is just another example.
Fine. I really don't give a damn what your limited knowledge will
allow you to believe. After all, "Hello World" is a big program to you.
So is LibreOffice.

So much for your "experience". But I also know you'll argue until the
day you die instead of admitting you are wrong - as you have repeatedly
shown here and in c.l.c.

>> Although I suspect "Hello World" is a "big program" for you.
>>
>>> And because it is a popular program and an open source program, mostly
>>> in C++, it is relatively easy to find information about compilation
>>> resources.
>>>
>>
>> So?
>>
>>>
>>>>
>>>>>>
>>>>>> Sure, you can do it, when you are compiling the 100 line programs you
>>>>>> write. But you have no idea what it takes to compile huge programs
>>>>>> such
>>>>>> as the one I described.
>>>>>
>>>>> You haven't described any programs, huge or otherwise.
>>>>>
>>>>> (It is true that for the programs I write, compile times are rarely an
>>>>> issue - in most cases, the code is written by a single person, for a
>>>>> single purpose on a small embedded system. But sometimes I also
>>>>> have to
>>>>> compile big code bases for other systems.)
>>>>>
>>>>
>>>> I am not going to name any names because they are (or have been) my
>>>> clients. And those are none of your business.
>>>
>>> Ah, we should just take your word for it, since you have such a
>>> fantastic reputation for honesty.
>>>
>>
>> No, I'm not going to give you names of my clients. And I really don't
>> care what an idiot like you thinks about my reputation.
>>
>
> It is quite obvious that you don't care what /anyone/ thinks of your
> reputation here in Usenet.
>

I care what knowledgeable people think of my reputation. I don't care
what an idiot like you thinks. You aren't worth it.

>>>>
>>>> And if you want large code base - I know of at least one which has
>>>> several hundred programmers working for 3 years just on one version.
>>>
>>> Programmer man-years does not equate to code size. There are projects
>>> where people write a thousand lines a day, and projects where people
>>> write an average of a few lines a week.
>>>
>>
>> And in the mainframe world, someone only writing a few lines a week
>> would not be employed for long. Maybe that's why you can't find a job.
>>
>
> People who write a few lines of code a week do not work on mainframes.
>

No, they are productive. No wonder you have so much trouble finding work.
You really are funny. I'd feel sorry for you if you weren't so unwilling
to admit when you are wrong.

There is a difference between ignorance and stupidity. Ignorance can be
cured. But you are beyond stupidity.

And no, we still have a good group going. But I like usenet because
there are intelligent people on it, and I don't want new programmers to
get wrong ideas from the likes of you.

Juha Nieminen

unread,
May 30, 2016, 2:27:23 AM5/30/16
to
Jerry Stuckle <jstu...@attglobal.net> wrote:
> Another troll post.

Ah, the ultimate form of concession. Whenever someone disagrees with you,
just accuse them of "trolling". What a great argument.

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Juha Nieminen

unread,
May 30, 2016, 2:31:20 AM5/30/16
to
jacobnavia <ja...@jacob.remcomp.fr> wrote:
> On 25/05/2016 at 10:09, Juha Nieminen wrote:
>> But that's my point: Every single time, every single freaking time,
>> someone complains about C++, the "long compile times" argument is
>> brought up, like it were some kind of crucial core flaw that affects
>> every single C++ programmer.
>
> Excuse me but given that any C++ programmer must compile his/her code
> sometime, it SURELY affects EVERY SINGLE C++ PROGRAMMER.

WTF are you talking about? Where exactly did I say that *compiling* the
program is the issue?

Juha Nieminen

unread,
May 30, 2016, 2:34:20 AM5/30/16
to
Jerry Stuckle <jstu...@attglobal.net> wrote:
> It's a huge problem for a large number of programmers. Just because it
> isn't for YOU does not mean it's not important to OTHERS. YOU ARE NOT
> THE WHOLE WORLD.

"You haven't worked on very big projects, then. I've seen compiles take
overnight on a mainframe."

So, tell me, how many C++ programmers develop "very big projects" that
"take overnight on a mainframe" to compile? In percents, that is.

Again: If you want to change your programming language in your humongous
project, be my guest. But don't be telling people that it's a problem
for *all* C++ programmers, because it isn't. That's just a big fat lie.

Christian Gollwitzer

unread,
May 30, 2016, 2:35:54 AM5/30/16
to
On 30.05.16 at 08:27, Juha Nieminen wrote:
> Jerry Stuckle <jstu...@attglobal.net> wrote:
>> Another troll post.
>
> Ah, the ultimate form of concession. Whenever someone disagrees with you,
> just accuse them of "trolling". What a great argument.

No, it's self-awareness. What is trolling? Posting controversial opinions,
insulting people as "too stooopid to understand their standpoint",
ignoring facts etc. See the pattern?

He still has to practice. Well-trained trolls have a much higher
response-per-troll-post ratio, i.e. they don't need to react that often
to keep it going.

Christian

David Brown

unread,
May 30, 2016, 2:52:31 AM5/30/16
to
On 30/05/16 03:16, Jerry Stuckle wrote:

> And no, we still have a good group going. But I like usenet because
> there are intelligent people on it, and I don't want new programmers to
> get wrong ideas from the likes of you.
>

I fully agree that there are many intelligent people on Usenet,
including many in c.l.c++ and c.l.c. But more important than mere
intelligence, there are many here who are helpful, friendly, honest,
experienced, and interested in sharing knowledge, learning new things,
and engaging in friendly banter. They don't always agree on everything,
but are happy to disagree and discuss in an adult manner (except on
topics of swearing, religion, and sausages - but no one's perfect).

Those are the people who you have, almost without exception, labelled as
"stoopid trolls" and "pigs who can't sing".

It seems you like Usenet because it gives you an opportunity to insult
people.

Öö Tiib

unread,
May 30, 2016, 4:10:36 AM5/30/16
to
A bald assertion of denial?

>
> >> Lines of code is not a good way to predict compilation time.
> >
> > Lines of code /is/ a good way to predict compilation time, but it is not
> > the only factor. In particular, lines of code /compiled/ is more
> > important than lines of code in total in the sources. And some code is
> > more complex than others, and of course there are many other factors.
> > You might have noticed that I have already mentioned this.
> >
>
> Not at all.

A bald assertion of denial?

> There are many more important factors which determine
> compile time than lines of code.

Some indefinite mysterious "factors"?

> But when none of your programs are
> more then five lines long, I can see why you think that.

Some patronizing ad hominem?

>
> >> Lines of code is a measurement used by those who don't know better.
> >
> > It is a measurement used by many people, and perhaps /misused/ by people
> > who don't know better. But since you are keen on the "size" of code
> > bases, and keen to demonstrate that "many" mainframe programs are "much
> > bigger" than - for example - the entire Debian source base, then lines
> > of code is an excellent measurement. We could also use total disk
> > space, which might be more accurate (especially for answering the
> > question "how much ram do we need to hold it all in memory") if we had
> > numbers conveniently available. However, the disk usage given in my
> > reference below also includes non-compilable files, such as graphics
> > resources.
> >
>
> It is a measurement used by people who don't know any better.

Another patronizing insult?

> It is not considered very important by experts.

A bald, groundless assertion of denial?

>
> >>
> >> And no, not even Debian has over 1,100,000,000 lines of COMPILABLE CODE.
> >
> > My reference, which I really hope you will view as reliable and
> > accurate, is:
> >
> > <https://sources.debian.net/stats/>
> >
>
> Yes, but that is not the whole story. Sorry.

A bald assertion of denial?

>
> > Sid, the testing/development repository, currently has 1,139,708,723
> > lines of source code. That covers a number of different programming
> > languages, and code in Perl, Python, or Bash will not be compilable. You
> > can see figures further down that page detailing the breakdown by
> > language - approximately 445 MLoC of C, and 290 MLoC of C++.
> >
>
> Gee, let's see. It includes a number of different programming
> languages. But it also contains a lot of platform-dependent code. You
> don't compile ARM modules for an Intel platform, for instance. And
> there's a fair amount of assembler code in there.

Finally, the only place in the post that indicates some connection with
the subject rather than just mechanical denial or insult. That, however,
is an irrelevant red herring without even the slightest attempt at
quantifying its alleged effect.

>
> This also includes comments, code split across multiple lines, lines
> with single left or right braces on them, and a bunch of other things.

Red herring continues. Are you really trying to say that stripping
comments will somehow affect the order of magnitude of the code base
size? Which code base is that?

>
> Finally, it's not all one program. No one has everything in the
> repository loaded on their machine.

Fighting with straw man you built yourself?

>
> Your numbers are as bogus as you are.

Patronizing insult.

>
> > But I am not fussy about the exact numbers - I am just giving "Debian"
> > as an example project with an enormous code base, for which figures are
> > openly available. And I would like references or links showing some
> > indication for your claim that /many/ mainframe /programs/ are /much/
> > bigger than 1.1 GLoC. Note that your claim was for "one or a few huge
> > programs" - unlike the Debian code, which is obviously spread across a
> > great many programs.
> >
>
> For someone not being fussy about exact numbers, you sure do quote a lot
> of them.

Totally unfair tu quoque? No one can compete with the indefiniteness,
baldness and groundlessness of your postings, Jerry.

>
> >>
> >>>
> >>>>> The disk speed is pretty much totally irrelevant in large compile
> >>>>> times
> >>>>> (unless you are using a poor OS that has slow file access and bad
> >>>>> caching systems, like Windows - though Win8 is rumoured to be better).
> >>>>>
> >>>>
> >>>> Wrong again. I/O speed is quite critical. But you think Debian is the
> >>>> end-all of programs. It isn't.
> >>>
> >>> I just know how compilers work, and what their tasks are.
> >>>
> >>
> >> You only THINK you know how compilers work. That is obvious. You need
> >> to look at how the hardware works, also.
> >
> > I also know how a lot of hardware works. I don't know details of
> > mainframes, but I have a fair understanding of the principles. (The
> > details are hard, but the principles are not.) But since I know that
> > during compilation, I/O speed is not critical, it does not matter how
> > fast the I/O speed is on your build machine as long as it can get files
> > on and off the disk fast enough to keep the processors busy.
> >
>
> Once again you have shown a distinctly limited knowledge of much of
> anything.

Patronizing, groundless insult?

> No, you have no idea how mainframes work.

Patronizing, groundless insult?

> You don't even understand the principles.

Patronizing, groundless insult?

> This is just another example.

It is unclear even, WTF you are talking about here.
Total patronizing bullshit nonsense? Not even wrong, just grotesque?

>
> So much for your "experience". But I also know you'll argue until the
> day you die instead of admitting you are wrong - as you have repeatedly
> shown here and in c.l.c.

Direct, outright lie?
You certainly have a high skill for posting lots of empty bullshit.
Hopefully it entertains you.

>
> >>>>
> >>>> And if you want large code base - I know of at least one which has
> >>>> several hundred programmers working for 3 years just on one version.
> >>>
> >>> Programmer man-years does not equate to code size. There are projects
> >>> where people write a thousand lines a day, and projects where people
> >>> write an average of a few lines a week.
> >>>
> >>
> >> And in the mainframe world, someone only writing a few lines a week
> >> would not be employed for long. Maybe that's why you can't find a job.
> >>
> >
> > People who write a few lines of code a week do not work on mainframes.
> >
>
> No, they are productive. No wonder you have so much trouble finding work.

:D You are apparently mirroring your own issues onto others. Sad.
I replied to the post because it was so devoid of any sparks indicating
consciousness that it somewhat felt like a Turing test. Are you a bot,
Jerry Stuckle? Did someone just write a program that pretends to be a
total asshole, idiot and troll?

Gareth Owen

unread,
May 30, 2016, 6:08:22 AM5/30/16
to
Ian Collins <ian-...@hotmail.com> writes:

>>
>> How unbiased do you think it is?
>
> So you now claim that Facebook (and by association the likes of Intel,
> Microsoft and Google) are lying about an open project?

Reality has an anti-Jerry bias.

Gareth Owen

unread,
May 30, 2016, 6:10:24 AM5/30/16
to
Juha Nieminen <nos...@thanks.invalid> writes:

> Jerry Stuckle <jstu...@attglobal.net> wrote:
>> It's a huge problem for a large number of programmers. Just because it
>> isn't for YOU does not mean it's not important to OTHERS. YOU ARE NOT
>> THE WHOLE WORLD.
>
> "You haven't worked on very big projects, then. I've seen compiles take
> overnight on a mainframe."
>
> So, tell me, how many C++ programmers develop "very big projects" that
> "take overnight on a mainframe" to compile? In percents, that is.

And more importantly, if those "very big projects" were to be written in
C, say, would they cease to take a very long time to compile?

jacobnavia

unread,
May 30, 2016, 8:12:14 AM5/30/16
to
You said:

>>> Every single time, every single freaking time,
>>> someone complains about C++, the "long compile times" argument is
>>> brought up, like it were some kind of crucial core flaw that affects
>>> every single C++ programmer.

Maybe I have forgotten my English, but to me that is clear.

Jerry Stuckle

unread,
May 30, 2016, 10:28:26 AM5/30/16
to
Just because you don't doesn't mean it's not important. But then most
programmers develop programs bigger than "Hello World". And many of us
work on a project basis, where every minute spent compiling is lost
income. Others work on an hourly basis, where every minute spent
compiling is lost productivity.

And I never said it was a problem for *all* C++ programmers. But *YOU*
said it was not a problem for ANY C++ programmers.

Jerry Stuckle

unread,
May 30, 2016, 10:29:32 AM5/30/16
to
Nope, just when they are trolling. Like yours here.

But I know I'm trying to teach the pig to sing.

Jerry Stuckle

unread,
May 30, 2016, 10:33:54 AM5/30/16
to
Wrong, Christian. Trolling is twisting facts and statements to make it
seem like someone said something else. It is also rejecting something
without even examining it and trying to understand it. And there are
several people here who do both. Some have even admitted they are
trolling.

Calling someone "too stoopid to understand..." is just that. It's
something I have observed repeatedly here.

Jerry Stuckle

unread,
May 30, 2016, 10:37:20 AM5/30/16
to
No, there are others here who try to understand, attempt to carry on
intelligent conversations and truly try to be helpful. But then there
are those who are so insecure they would rather try to prove someone
else wrong than admit they may be wrong. They don't try to understand -
they just take the opposite position no matter what. If someone said
the sun rises in the east, they would claim it rises in the west.

Those are the stoopid trolls and the pigs who can't sing.

I'm sorry you have such issues. But they're not *MY* problem.

Jerry Stuckle

unread,
May 30, 2016, 10:52:02 AM5/30/16
to
The truth - known by *real* experts in the field. But those who *think*
they are experts keep quoting it as the final word.

>>
>>>> Lines of code is not a good way to predict compilation time.
>>>
>>> Lines of code /is/ a good way to predict compilation time, but it is not
>>> the only factor. In particular, lines of code /compiled/ is more
>>> important than lines of code in total in the sources. And some code is
>>> more complex than others, and of course there are many other factors.
>>> You might have noticed that I have already mentioned this.
>>>
>>
>> Not at all.
>
> A bald assertion of denial?
>

The truth - known by *real* experts in the field. But those who *think*
they are experts keep quoting it as the final word.

>> There are many more important factors which determine
>> compile time than lines of code.
>
> Some indefinite mysterious "factors"?
>

Code complexity is the main one. In languages like C++, template
complexity and instantiation are others. When using other tools like
some databases, the need to preprocess the code (ahead of the C++
preprocessor) takes time. A whole bunch of things.
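
A tiny illustration of the template effect (my own example): the
compiler must recursively instantiate the whole chain from Fib<40> down
to Fib<0> to evaluate this, and that work has nothing to do with the
line count:

   template <unsigned N>
   struct Fib {
       static const unsigned long value = Fib<N-1>::value + Fib<N-2>::value;
   };
   template <> struct Fib<1> { static const unsigned long value = 1; };
   template <> struct Fib<0> { static const unsigned long value = 0; };

   static_assert(Fib<40>::value == 102334155UL, "computed at compile time");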


>> But when none of your programs are
>> more then five lines long, I can see why you think that.
>
> Some patronizing ad hominem?
>

Just the truth, as indicated by previous posts.

>>
>>>> Lines of code is a measurement used by those who don't know better.
>>>
>>> It is a measurement used by many people, and perhaps /misused/ by people
>>> who don't know better. But since you are keen on the "size" of code
>>> bases, and keen to demonstrate that "many" mainframe programs are "much
>>> bigger" than - for example - the entire Debian source base, then lines
>>> of code is an excellent measurement. We could also use total disk
>>> space, which might be more accurate (especially for answering the
>>> question "how much ram do we need to hold it all in memory") if we had
>>> numbers conveniently available. However, the disk usage given in my
>>> reference below also includes non-compilable files, such as graphics
>>> resources.
>>>
>>
>> It is a measurement used by people who don't know any better.
>
> Another patronizing insult?
>
>> It is not considered very important by experts.
>
> A bald, groundless assertion of denial?
>

Just the truth.

>>
>>>>
>>>> And no, not even Debian has over 1,100,000,000 lines of COMPILABLE CODE.
>>>
>>> My reference, which I really hope you will view as reliable and
>>> accurate, is:
>>>
>>> <https://sources.debian.net/stats/>
>>>
>>
>> Yes, but that is not the whole story. Sorry.
>
> A bald assertion of denial?
>

Just the truth.

>>
>>> Sid, the testing/development repository, currently has 1,139,708,723
>>> lines of source code. That covers a number of different programming
>>> languages, and code in Perl, Python, or Bash will not be compilable. You
>>> can see figures further down that page detailing the breakdown by
>>> language - approximately 445 MLoC of C, and 290 MLoC of C++.
>>>
>>
>> Gee, let's see. It includes a number of different programming
>> languages. But it also contains a lot of platform-dependent code. You
>> don't compile ARM modules for an Intel platform, for instance. And
>> there's a fair amount of assembler code in there.
>
> Finally, the only place in the post that indicates some connection
> with the subject rather than just mechanical denial or insult. That,
> however, is an irrelevant red herring without even the slightest
> attempt at quantifying its alleged effect.
>

That's because I'm tired of having to repeat myself to people with the
inability to understand simple statements. And every bit of it is 100%
relevant as a reply to the post. But you are too stoopid to see even
that simple concept.

>>
>> This also includes comments, code split across multiple lines, lines
>> with single left or right braces on them, and a bunch of other things.
>
> Red herring continues. Are you really trying to say that stripping
> comments will somehow affect the order of magnitude of the code base
> size? Which code base is that?
>

ROFLMAO! Once again you show your inability to understand a simple
concept. Either that or you are arguing just to argue. I'm not sure
which, but I suspect the former.

>>
>> Finally, it's not all one program. No one has everything in the
>> repository loaded on their machine.
>
> Fighting with straw man you built yourself?
>

Just responding to the post. Something you seem incapable of understanding.

>>
>> Your numbers are as bogus as you are.
>
> Patronizing insult.
>

Just the truth, as repeatedly proven by his own statements.

>>
>>> But I am not fussy about the exact numbers - I am just giving "Debian"
>>> as an example project with an enormous code base, for which figures are
>>> openly available. And I would like references or links showing some
>>> indication for your claim that /many/ mainframe /programs/ are /much/
>>> bigger than 1.1 GLoC. Note that your claim was for "one or a few huge
>>> programs" - unlike the Debian code, which is obviously spread across a
>>> great many programs.
>>>
>>
>> For someone not being fussy about exact numbers, you sure do quote a lot
>> of them.
>
> A totally unfair tu quoque? No one can compete with the indefiniteness,
> baldness, and groundlessness of your postings, Jerry.
>

Sorry, wrong again. First he cites numbers, then claims he's not fussy
about them. It doesn't work.

>>
>>>>
>>>>>
>>>>>>> The disk speed is pretty much totally irrelevant in large compile
>>>>>>> times
>>>>>>> (unless you are using a poor OS that has slow file access and bad
>>>>>>> caching systems, like Windows - though Win8 is rumoured to be better).
>>>>>>>
>>>>>>
>>>>>> Wrong again. I/O speed is quite critical. But you think Debian is the
>>>>>> end-all of programs. It isn't.
>>>>>
>>>>> I just know how compilers work, and what their tasks are.
>>>>>
>>>>
>>>> You only THINK you know how compilers work. That is obvious. You need
>>>> to look at how the hardware works, also.
>>>
>>> I also know how a lot of hardware works. I don't know details of
>>> mainframes, but I have a fair understanding of the principles. (The
>>> details are hard, but the principles are not.) But since I know that
>>> during compilation, I/O speed is not critical, it does not matter how
>>> fast the I/O speed is on your build machine as long as it can get files
>>> on and off the disk fast enough to keep the processors busy.
>>>
>>
>> Once again you have shown a distinctly limited knowledge of much of
>> anything.
>
> Patronizing, groundless insult?
>

Just the truth, as repeatedly proven by his own statements.

>> No, you have no idea how mainframes work.
>
> Patronizing, groundless insult?
>

Just the truth, as repeatedly proven by his own statements.

>> You don't even understand the principles.
>
> Patronizing, groundless insult?
>

Just the truth, as repeatedly proven by his own statements.

>> This is just another example.
>
> It is unclear even WTF you are talking about here.
>

Nope. It would take someone with a modicum of intelligence to
understand anything in this discussion. Something you once again prove
you do not have.
Just the truth.

>>
>> So much for your "experience". But I also know you'll argue until the
>> day you die instead of admitting you are wrong - as you have repeatedly
>> shown here and in c.l.c.
>
> Direct, outright lie?
>

Just the truth, as repeatedly proven by his own statements.
You certainly have no skills for understanding simple concepts. But
this is the total truth. And just FYI, I hold you in even lower esteem
than David. This whole post of yours is nothing but trolling.

>>
>>>>>>
>>>>>> And if you want large code base - I know of at least one which has
>>>>>> several hundred programmers working for 3 years just on one version.
>>>>>
>>>>> Programmer man-years does not equate to code size. There are projects
>>>>> where people write a thousand lines a day, and projects where people
>>>>> write an average of a few lines a week.
>>>>>
>>>>
>>>> And in the mainframe world, someone only writing a few lines a week
>>>> would not be employed for long. Maybe that's why you can't find a job.
>>>>
>>>
>>> People who write a few lines of code a week do not work on mainframes.
>>>
>>
>> No, they are productive. No wonder you have so much trouble finding work.
>
> :D You are apparently projecting your own issues onto others. Sad.
>

Not at all. I have plenty of work. And unlike you, mine is real
*programming* work. Not digging ditches or washing dishes.
You replied because you are a stoopid troll, as you have repeatedly
shown. And it's exactly what I would expect coming from you.

JiiPee

unread,
May 30, 2016, 3:09:05 PM5/30/16
to
I guess one question is how much faster other languages compile the same
project (done with the same structures).
Also, I do not think we should consider only one factor (how quickly the
code compiles). We should take ALL factors into consideration, like how
good the language is... how long it takes to develop the code, how easy
the code is to maintain, etc. Compiling is only one factor with regard to
things taking time.

How much faster do other languages compile the same project?


Jerry Stuckle

unread,
May 30, 2016, 6:21:40 PM5/30/16
to
It makes no difference what other languages do. They are not being
used. And yes, compiling is one factor. But it is an important one
because it is wasted programmer time.

How good the language is is pretty unimportant, also. What IS important
is how good the programmer is.

JiiPee

unread,
May 30, 2016, 6:43:24 PM5/30/16
to
On 30/05/2016 23:21, Jerry Stuckle wrote:
> On 5/30/2016 3:08 PM, JiiPee wrote:
>> On 30/05/2016 15:28, Jerry Stuckle wrote:
>>> On 5/30/2016 2:34 AM, Juha Nieminen wrote:
>>> Just because you don't doesn't mean it's not important. But then most
>>> programmers develop programs bigger than "Hello World". And many of us
>>> work on a project basis, where every minute spent compiling is lost
>>> income. Others work on an hourly basis, where every minute spent
>>> compiling is lost productivity.
>>>
>>> And I never said it was a problem for *all* C++ programmers. But *YOU*
>>> said it was not a problem for ANY C++ programmers.
>> I guess one question is how much faster other languages compile the same
>> project (done with the same structures).
>> Also, I do not think we should consider only one factor (how quickly the
>> code compiles). We should take ALL factors into consideration, like how
>> good the language is... how long it takes to develop the code, how easy
>> the code is to maintain, etc. Compiling is only one factor with regard to
>> things taking time.
>>
>> How much faster do other languages compile the same project?
>>
>>
> It makes no difference what other languages do. They are not being
> used. And yes, compiling is one factor. But it is an important one
> because it is wasted programmer time.

I don't think anybody disagrees. But as I said, Microsoft cannot do
everything... they have a limited budget, I think, so they choose what is
more important and how many people it affects. That's how the C++
committee thinks too, as far as I know.

>
> How good the language is is pretty unimportant, also. What IS important
> is how good the programmer is.

In a way true. But on the other hand, if your statement were the whole
truth, then it would not matter whether we use 90s C++ or C++11/14. But I
think there is an essential difference, as C++11 is much safer, etc. So
your code becomes less risky with a better language, I think. So it does
matter, at least somewhat.
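
For example, here is a minimal sketch of the kind of difference I mean
(a hypothetical Widget type, invented just for illustration, not from
any real codebase):

#include <memory>
#include <vector>

struct Widget { int id = 0; };

// 90s style: manual ownership; an early return or an exception
// between new and delete leaks the object.
Widget* make_widget_old() {
    Widget* w = new Widget;   // caller must remember to delete w
    return w;
}

// C++11 style: ownership is explicit in the type and exception-safe.
std::unique_ptr<Widget> make_widget_new() {
    return std::unique_ptr<Widget>(new Widget); // freed automatically
}

int main() {
    auto w = make_widget_new();    // no delete needed anywhere
    std::vector<int> v{1, 2, 3};   // C++11 brace initialization
    for (int x : v)                // range-for: no iterator off-by-one
        w->id += x;
}

Same program, but in the second version a whole class of leaks and
double frees simply cannot be written.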



>

JiiPee

unread,
May 30, 2016, 8:51:07 PM5/30/16
to
Don't we agree that it's the old "the right tool for the right job"?
Sometimes it's VB, sometimes C++.
And there is also the personal-preference issue: I personally could not
imagine creating a complex game library with VB... at least for me it
matters. But maybe for somebody else VB would be better. People are
different... some languages fit my thinking better than others, I think.


JiiPee

unread,
May 30, 2016, 8:58:12 PM5/30/16
to
On 30/05/2016 15:29, Jerry Stuckle wrote:
> On 5/30/2016 2:27 AM, Juha Nieminen wrote:
>> Jerry Stuckle <jstu...@attglobal.net> wrote:
>>> Another troll post.
>> Ah, the ultimate form of concession. Whenever someone disagrees with you,
>> just accuse them of "trolling". What a great argument.
>>
>> --- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---
>>
> Nope, just when they are trolling. Like yours here.
>
> But I know I'm trying to teach the pig to sing.
>

What I am trying to say is that there might be other factors which are
more important than "getting a faster compiler", like these:

http://stackoverflow.com/questions/1073384/what-strategies-have-you-used-to-improve-build-times-on-large-projects

1. Forward declaration
2. pimpl idiom
3. Precompiled headers
4. Parallel compilation (e.g. MPCL add-in for Visual Studio).
5. Distributed compilation (e.g. Incredibuild for Visual Studio).
6. Incremental build
7. Split the build into several "projects" so you do not compile all the
code when it is not needed.


How about concentrating on those? It's possible they shorten compilation
time much, much more than changing languages or compilers. (A small
sketch of items 1 and 2 follows below.)
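
Here is that sketch: a hypothetical Widget class (names invented for
illustration) using a forward declaration plus the pimpl idiom, so that
clients which include widget.h never see, and never recompile against,
the private implementation:

// widget.h - the only header clients include.
#ifndef WIDGET_H
#define WIDGET_H
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                    // defined in widget.cpp, where
                                  // Impl is a complete type
    void draw();
private:
    struct Impl;                  // forward declaration only
    std::unique_ptr<Impl> pimpl;  // opaque pointer to the real state
};
#endif

// widget.cpp - the only file that sees the heavy includes.
#include "widget.h"
#include <vector>                 // heavy dependencies stay out of widget.h

struct Widget::Impl {
    std::vector<int> points;
};

Widget::Widget() : pimpl(new Impl) {}
Widget::~Widget() = default;      // Impl is complete here, so this compiles
void Widget::draw() { /* use pimpl->points */ }

Changing Impl now recompiles widget.cpp only, not every client of
widget.h.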


Mike Stump

unread,
May 30, 2016, 9:30:11 PM5/30/16
to
In article <nihk25$j8p$1...@jstuckle.eternal-september.org>,
Jerry Stuckle <jstu...@attglobal.net> wrote:
>>>>> Lines of code is not a good way to predict compilation time.
>>>>
>>>> Lines of code /is/ a good way to predict compilation time, but it is not
>>>> the only factor.
>
>>> There are many more important factors which determine
>>> compile time than lines of code.
>>
>> Some indefinite mysterious "factors"?
>
>Code complexity is the main one. In languages like C++, template
>complexity and instantiation is another one.

Very true; metaprogramming, whether by cpp usage or by templates, does
have a way of expanding compile times.

Additionally, headers for C++ tend to inflate compile times faster
than plain C. It is hoped for C++ that modules can help improve
compile times here; only time will tell if they do.
http://clang.llvm.org/docs/Modules.html for the interested.
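
As a toy illustration (invented for this post, not measured), one
innocent-looking line can fan out into a long chain of instantiations
that the compiler must build, mangle, and memoize before emitting a
single constant:

#include <cstdio>

// Each use of Fib<N> forces Fib<N-1> and Fib<N-2> to be instantiated
// as distinct types (each one only once, thanks to memoization).
template <unsigned N>
struct Fib {
    static const unsigned long long value =
        Fib<N - 1>::value + Fib<N - 2>::value;
};
template <> struct Fib<1> { static const unsigned long long value = 1; };
template <> struct Fib<0> { static const unsigned long long value = 0; };

int main() {
    // One source line, ~90 template instantiations at compile time.
    std::printf("%llu\n", Fib<90>::value);
}

Multiply that effect by a heavily templated header included into
hundreds of translation units and the build-time cost becomes obvious.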

Jerry Stuckle

unread,
May 30, 2016, 9:41:55 PM5/30/16
to
You seem to be repeatedly disagreeing. And if you don't disagree, why
even bring it up?

>>
>> How good the language is is pretty unimportant, also. What IS important
>> is how good the programmer is.
>
> in a way true. but on the other hand, if your statement is the only
> truth, then that would mean that it does not matter if we use 90 s C++
> or C++11/14. But I think there is an essential difference as C++11 is
> much safer etc. So your code becomes less risky with better language, I
> think. So it does matter, at least somewhat.
>

I said nothing about C++11, C++14 or any other version. It's just
another of your red herrings.

Jerry Stuckle

unread,
May 30, 2016, 9:42:48 PM5/30/16
to
Which is another of your red herrings. None of this has anything to do
with compile time for C++ programs.

Jerry Stuckle

unread,
May 30, 2016, 9:47:25 PM5/30/16
to
On 5/30/2016 8:58 PM, JiiPee wrote:
> On 30/05/2016 15:29, Jerry Stuckle wrote:
>> On 5/30/2016 2:27 AM, Juha Nieminen wrote:
>>> Jerry Stuckle <jstu...@attglobal.net> wrote:
>>>> Another troll post.
>>> Ah, the ultimate form of concession. Whenever someone disagrees with
>>> you,
>>> just accuse them of "trolling". What a great argument.
>>>
>>> --- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---
>>>
>> Nope, just when they are trolling. Like yours here.
>>
>> But I know I'm trying to teach the pig to sing.
>>
>
> What I am trying to say is that there might be other factors which are
> more important than "getting a faster compiler", like these:
>
> http://stackoverflow.com/questions/1073384/what-strategies-have-you-used-to-improve-build-times-on-large-projects
>

No, let's be honest. What you're trying to say is that just because
compile times are not important to YOU, they are not important to anyone
else.

Everything else is red herrings.

>
> 1. Forward declaration
> 2. pimpl idiom
> 3. Precompiled headers
> 4. Parallel compilation (e.g. MPCL add-in for Visual Studio).
> 5. Distributed compilation (e.g. Incredibuild for Visual Studio).
> 6. Incremental build
> 7. Split the build into several "projects" so you do not compile all the
> code when it is not needed.
>
>
> How about concentrating on those? It's possible they shorten compilation
> time much, much more than changing languages or compilers.
>
>

They can, when applied properly. But most programmers use as many of
these as possible already. And we still have problems with compile time.

Intelligent programmers do not compile every module every time. We use
makefiles or equivalent, for instance. We use precompiled headers where
possible. In many cases parallel compilation doesn't offer significant
advantages, and distributed compilation has its own problems - like
spending big bucks on multiple computers just to compile.

Every argument you have brought up is pure horse hockey. You're better
off just admitting that there are many programmers in this world who
work on complicated programs and do have compile time problems. The
fact you don't is really immaterial to the rest of the world.

jacobnavia

unread,
May 31, 2016, 4:30:33 AM5/31/16
to
On 29/05/2016 at 12:20, Öö Tiib wrote:

> All useful abstractions do leak. The C++ compiler does not know abstract
> context of that "leaner language". It is powerful, compile-time
> Turing-complete compiler. Therefore it spits out gibberish in C++ terms
> about how it did recursively evaluate it and how it went well for pages
> and pages and how it failed in some odd meta-programming trick somewhere
> that user of the "leaner language" is supposed to know nothing about.
>
> Do you have ideas how to mitigate that effect?
>
>

Errors

If an error occurs when compiling the template to be executed in the
compiler environment, an error message is issued with the line/reason of
the error. That is why it is better to use precompiled binaries to
associate the compile-time function with an event. That means faster
compile times, since the compiler has less work to do: it just loads
that function.

If an error occurs when compiling the generated code, the error message
can only be

Error when compiling code generated by the "xxx" compile time function.

If the generated code contains lines (and it is not only a huge
expression in a single line), a line number is given relative to the
start of the generated code.


Öö Tiib

unread,
May 31, 2016, 8:30:07 AM5/31/16
to
On Tuesday, 31 May 2016 11:30:33 UTC+3, jacobnavia wrote:
> On 29/05/2016 at 12:20, Öö Tiib wrote:
>
> > All useful abstractions do leak. The C++ compiler does not know abstract
> > context of that "leaner language". It is powerful, compile-time
> > Turing-complete compiler. Therefore it spits out gibberish in C++ terms
> > about how it did recursively evaluate it and how it went well for pages
> > and pages and how it failed in some odd meta-programming trick somewhere
> > that user of the "leaner language" is supposed to know nothing about.
> >
> > Do you have ideas how to mitigate that effect?
> >
> >
>
> Errors
>
> If an error occurs when compiling the template to be executed in the
> compiler environment, an error message is issued with the line/reason of
> the error. That is why it is better to use precompiled binaries to
> associate the compile-time function with an event. That means faster
> compile times, since the compiler has less work to do: it just loads
> that function.

The problem is not so much compile time as the relevance of the
information contained in the error messages produced.

For example "syntax error at line 42" is too few information for me.
I will stare at the line 42 in confusion for a while.

For another example, a 40-page log of how the compiler got stuck after
expanding a pile of macros (not written by me), deducing the arguments
of templates of trait classes (never before seen by me), and resolving
ambiguity between overloads (again not mine) is far too much information
for me. I will also stare in confusion for a while at the line I wrote
that caused it all.

'static_assert' is tricky to put in some places, and when the condition
is complex it makes compilation slower too.
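
Still, where it fits, it turns those 40 pages into one line the user can
actually read. A minimal illustration (invented example, not from any
real codebase):

#include <type_traits>

// Without the assert, passing a non-arithmetic type fails deep inside
// the body with a long instantiation backtrace. With it, the error
// appears at the call boundary, with a message the author wrote.
template <typename T>
T average(T a, T b) {
    static_assert(std::is_arithmetic<T>::value,
                  "average() requires an arithmetic type");
    return (a + b) / 2;
}

int main() {
    average(2, 4);          // fine
    // average("a", "b");   // one readable error instead of pages
}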


>
> If an error occurs when compiling the generated code, the error message
> can only be
>
> Error when compiling code generated by the "xxx" compile time function.
>
> If the generated code contains lines (and it is not only a huge
> expression in a single line), a line number is given relative to the
> start of the generated code.

I do not understand the meaning of the above sentences. Somehow they
feel like partial expressions. There may be something obvious that I am
missing. Can you please try to say the same in other words?