On Thursday, 29 June 2017 04:51:07 UTC+3, woodb...@gmail.com wrote:
> On Wednesday, June 28, 2017 at 6:52:12 PM UTC-5, Öö Tiib wrote:
> > On Wednesday, 28 June 2017 18:18:21 UTC+3, woodb...@gmail.com wrote:
> > > On Wednesday, June 28, 2017 at 1:10:35 AM UTC-5, Öö Tiib wrote:
> > > > On Wednesday, 28 June 2017 06:07:02 UTC+3, woodb...@gmail.com wrote:
> > > > >
> > > > > If I could find a compiler that does better than g++ 7.1.1, g++ 8.0.0
> > > > > and clang 4.0.0, that would be encouraging.
> > > >
> > > > You measure "better" by the size of the executable the compiler
> > > > generates? But that is not so important about software. Most
> > > > important is that it does the correct things. Then it is important
> > > > that it is easy to use. Then it is important that it performs well.
> > > > Size of the executable is typically lower on that list.
> > >
> > > C++ is known for its zero-overhead abstraction. I'm trying to figure
> > > out if there's something I'm doing wrong or if the compilers I've tried
> > > are not able to provide abstraction for free in this area.
> >
> > Huh? I thought I asked why you aim for size over speed, and you
> > answer with something that is not from this world.
>
> There's a link between size and speed:
There is a very rough correlation, but no strict relation.
>
> int main (){return SUCCESS;}
Programs that do nothing useful are not what we are discussing,
I hope.
> >
> > Zero overhead? That means C++ produces ideal assembler? You
> > misunderstood. It is the zero-overhead principle ... an aim ... a goal.
> > Actual compilers cannot translate code into ideal assembler. There is
> > always some overhead.
>
> The question is if there's unnecessary overhead.
Yes, non-ideal assembler always has some unnecessary overhead.
> This is quote from earlier in the thread:
> "And further: What you do use, you couldn’t hand code any better."
That is again a principle, a goal to pursue, a philosophy to follow,
not something that actual C++ compilers have somehow magically
achieved already.
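
To make that concrete: the usual way to see how close a particular
compiler gets to the principle is to compare the assembler it generates
for an abstraction against the hand-written equivalent. A minimal
sketch (the accumulate example is my own illustration, not code from
this thread):

  // zero_overhead_check.cpp -- compile with e.g.
  //   g++ -O2 -S zero_overhead_check.cpp
  // and compare the assembler of the two functions.
  #include <cstddef>
  #include <numeric>
  #include <vector>

  // "Abstracted" version using the standard library.
  int sum_abstracted(const std::vector<int>& v)
  {
      return std::accumulate(v.begin(), v.end(), 0);
  }

  // Hand-written version of the same loop.
  int sum_by_hand(const std::vector<int>& v)
  {
      int total = 0;
      for (std::size_t i = 0; i != v.size(); ++i)
          total += v[i];
      return total;
  }

If the two functions come out the same then the abstraction was free
with that compiler; if not, you have found the overhead you pay.
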
> In one case, my use of variadic templates yields larger results from
> compilers than if I don't use variadics. But in two other cases, compilers
> appear to produce better results.
>
> Supporting both options as you suggested earlier would be a headache
> for me. I don't have resources to support lots of options right now.
The cheapest option is just to do nothing, if you ask me.
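
For what it is worth, the comparison you describe can be reduced to
something small enough to inspect by hand. Your actual code is not
shown in this thread, so the sum example below is purely my own
illustration of a variadic template versus a plain overload doing the
same work:

  // variadic_vs_overload.cpp -- illustration only; build with e.g.
  //   g++ -Os -c variadic_vs_overload.cpp
  // and compare symbol sizes with `nm --size-sort` or `size`.
  #include <cstdio>

  // Variadic template version.
  template <class T>
  T total(T t) { return t; }

  template <class T, class... Ts>
  T total(T t, Ts... ts) { return t + total(ts...); }

  // Plain overload for the same call site.
  inline int total3(int a, int b, int c) { return a + b + c; }

  int main()
  {
      std::printf("%d %d\n", total(1, 2, 3), total3(1, 2, 3));
      return 0;
  }

With such a reduced case it is also easier to see in the assembler
whether the extra size comes from the recursion not being flattened or
from something else entirely.
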
> Rebbe Nachman said, "All the world is just a narrow bridge -- the most
> important thing is not to be afraid." I have to figure out one of these
> options to use for the time being. Is Russia attacking Ukraine with
> cyberattacks? Rebbe Nachman was from Ukraine if I remember right.
To my knowledge the current cryptoworm (from which Ukraine got a lot
of damage) seems to be a modified version of EternalBlue and
EternalRomance. Those two are exploits from a bigger set of hacking
tools that were developed by the National Security Agency of the
United States Department of Defense and that were leaked onto the
Internet by hackers last year. Microsoft has already patched the holes
that those exploits use, but unfortunately in countries like Ukraine
and Russia there is significant usage of pirated versions of Windows.
Anyway, it sounds like nonsense to accuse Russians of those worms.
>
> > Now even if there were a super compiler, the ideally short assembler
> > for most algorithms has some speed overhead compared to the ideally
> > fast assembler for the same algorithm (which in turn has some size
> > overhead). So zero overhead in both senses would still be simply
> > impossible.
> >
> > So why you value size over speed?
>
> Maybe someone with g++ 7.2 or clang 5 could build the two
> approaches and report the results.
So ... why do you ignore speed?
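
If someone does build the two approaches with a newer compiler, it
would be worth reporting timings as well as sizes. A crude timing
sketch (variant_a and variant_b are placeholders for whatever the two
implementations actually are):

  // bench.cpp -- rough timing harness; replace the placeholders
  // with the real variants and build with the same flags as before.
  #include <chrono>
  #include <cstdio>

  int variant_a(int x) { return x * 2; } // placeholder
  int variant_b(int x) { return x + x; } // placeholder

  template <class F>
  long long time_ms(F f)
  {
      using clock = std::chrono::steady_clock;
      volatile int sink = 0;
      auto start = clock::now();
      for (int i = 0; i < 10000000; ++i)
          sink = f(i);
      auto stop = clock::now();
      return std::chrono::duration_cast<std::chrono::milliseconds>(
          stop - start).count();
  }

  int main()
  {
      std::printf("variant_a: %lld ms\n", time_ms(variant_a));
      std::printf("variant_b: %lld ms\n", time_ms(variant_b));
      return 0;
  }

Numbers from such a loop are only indicative, but they at least keep
both halves of the size/speed trade-off visible.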