On 30/08/2019 13:47, Bonita Montero wrote:
>> Usually it is faster when exceptions are not thrown, but a lot slower
>> when they /are/ thrown. But "usually" is not "always". And for some
>> situations, it is the worst case performance - the speed when there is a
>> problem - that is critical.
>
> The case when an exception is thrown isn't performance-relevant
> as this occurs when there's a resource- or I/O-collapse.
As seems to be the case depressingly often, you are over-generalising
from your experience of certain types of software.
For some software, exceptions are ruled out as a technique because they
take too long, and in particular, their timing is very hard to predict.
For real-time programming, it doesn't matter how fast the usual
situation runs. It matters that even in the worst case, you know how
long the code will take. It is a world outside your experience and
understanding, apparently, but it is a vital part of the modern world.
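To give a feel for what that means in practice - a rough sketch only,
with made-up names (read_sensor, SensorError and the rest are invented
for the purpose, not taken from any real system): when errors come back
as codes, every failure path is an ordinary branch whose worst-case
cost can be bounded.

#include <cstdint>

enum class SensorError : std::uint8_t { Ok, Timeout, OutOfRange };

// Stand-in for a hardware read: it reports failure through a code,
// never by throwing, so its worst-case execution time stays easy to
// bound.
SensorError read_sensor(std::uint32_t channel, std::uint32_t &value)
{
    if (channel > 7) {
        return SensorError::OutOfRange;
    }
    value = 42;                      // dummy reading
    return SensorError::Ok;
}

SensorError read_filtered(std::uint32_t channel, std::uint32_t &out)
{
    std::uint32_t raw = 0;
    const SensorError err = read_sensor(channel, raw);
    if (err != SensorError::Ok) {
        return err;                  // failure path is a plain branch
    }
    out = raw / 2 + 100;             // dummy filtering step
    return SensorError::Ok;
}

int main()
{
    std::uint32_t filtered = 0;
    return read_filtered(3, filtered) == SensorError::Ok ? 0 : 1;
}

A timing analysis can walk every branch of that; it has a much harder
job with a throw that unwinds an unknown number of stack frames.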
>
>> Fair enough - though convenience is also a subjective matter.
>
> That's not really subjective here because evaluating return-codes on
> every call-level is a lot of work.
>
Again, you don't see the big picture and think that your ideas cover
everything.
Dealing with return codes on every call level is only a lot of work if
you have lots of call levels between identifying a problem and dealing
with it. And using exceptions makes some kinds of analysis of code and
of possible code flows a /huge/ amount of work. By being explicit and
clearly limited, return-code error handling is far simpler to examine
and understand, and it is far easier to test functions fully when you
don't have to take into account the possibility of an unknown number of
unknown exceptions passing through.
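A small made-up illustration (parse_config and ParseResult are
invented names, nothing more): the complete set of outcomes is spelled
out in the interface, so a test can enumerate them directly.

#include <cassert>
#include <string>

enum class ParseResult { Ok, Empty, BadSyntax };

// Every way this function can fail is listed in its return type.
ParseResult parse_config(const std::string &text, int &value_out)
{
    if (text.empty()) {
        return ParseResult::Empty;
    }
    if (text.find('=') == std::string::npos) {
        return ParseResult::BadSyntax;
    }
    value_out = 1;                   // dummy "parsed" value
    return ParseResult::Ok;
}

int main()
{
    int v = 0;
    // The set of outcomes is closed and stated in the interface, so a
    // test can cover every one of them.
    assert(parse_config("", v) == ParseResult::Empty);
    assert(parse_config("no separator", v) == ParseResult::BadSyntax);
    assert(parse_config("key=1", v) == ParseResult::Ok);
}

Nothing can come out of that function other than what its signature
admits, which is exactly what makes it easy to examine and to test.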
As I have said, different ways of handling errors have their pros and
cons, and there is no one way that is best for all cases.
>
>>> 3. When you come to the exception-handling mechanism like in Java
>>> with checked and unchecked exceptions, you get the best solution.
>>> But for compatibility-reasons with C this isn't possible in C++.
>
>> Checked exceptions - where the type of exceptions that a function may
>> throw or pass on is part of the signature - are safer, clearer and more
>> efficient. (Compilers can use tables or code branches, whichever is
>> most convenient). But they involve more explicit information, which
>> some people like and others dislike. And they can't pass through
>> functions compiled without knowledge of the exceptions - such as C
>> functions compiled by a C compiler.
>
> Can you read what I told above?
I elaborated on what you wrote.
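To make the last point concrete - again only a sketch: a C++
comparison callback handed to the C library's qsort() cannot safely
let an exception propagate back through qsort's own stack frames, so
anything thrown has to be caught at the boundary and turned into
something C understands.

#include <cstdlib>

// Flag used to carry "something went wrong" across the C boundary,
// since qsort() itself knows nothing about C++ exceptions.
static bool g_compare_failed = false;

extern "C" int compare_ints(const void *a, const void *b)
{
    try {
        const int lhs = *static_cast<const int *>(a);
        const int rhs = *static_cast<const int *>(b);
        // Imagine a comparison that could throw; it must be caught
        // here rather than allowed to escape into C code.
        return (lhs > rhs) - (lhs < rhs);
    } catch (...) {
        g_compare_failed = true;     // record the error, don't unwind
        return 0;                    // give qsort a neutral answer
    }
}

int main()
{
    int data[] = { 3, 1, 2 };
    std::qsort(data, 3, sizeof data[0], compare_ints);
    return g_compare_failed ? EXIT_FAILURE : EXIT_SUCCESS;
}

Checked exceptions would at least make that constraint part of the
interface; in C++ it is a convention you simply have to remember.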
>
>>> Almost right, but it doesn't depend on the performance of the CPU.
>>> Even on small CPUs the performance of processing a resource- or I/O
>>> -collapse doesn't count.
>
>> Incorrect.
>
> No, correct.
I'm sorry, but you are making bald statements with apparently no
knowledge or experience in the area.
>
>> Performance under the worst case is often a critical feature of high
>> reliability systems.
>
> When you have a resource or I/O-collapse, you don't have reliability
> anyway.
>
You do understand that people use exceptions for a variety of reasons,
don't you? You do realise that people sometimes write safe and reliable
software that is designed to deal with problems in a reasonable fashion?
If "throw an exception" simply means "there's been a disaster - give
up on reliability, accept that the world has ended and it doesn't matter
how slowly we act as the program is falling apart" then why have an
exception system in the first place? Just call "abort()" and be done
with it.
>
>>> That's true, but then you wouldn't use C++.
>
>> Nonsense. There are many good reasons to use C++, and many features
>> that can be used, without leading to any inefficiency or bloat.
>
> Almost the whole standard-library uses an allocator which might
> throw bad_alloc; not to use the standard-library isn't really C++.
That's bollocks on so many levels that it isn't worth going through
them all. For a start, large parts of the standard library never
allocate at all.
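A trivial sketch of the sort of thing I mean:

#include <algorithm>
#include <array>
#include <string_view>

int main()
{
    // std::array lives on the stack; no allocator, no bad_alloc.
    std::array<int, 5> values{ 4, 2, 5, 1, 3 };

    // The algorithms work in place on the caller's storage.
    std::sort(values.begin(), values.end());

    // std::string_view refers to existing characters; it owns nothing.
    constexpr std::string_view greeting = "no allocation here";

    return (values.front() == 1 && !greeting.empty()) ? 0 : 1;
}

No allocator in sight, no bad_alloc possible, and it is still
perfectly ordinary C++ using the standard library.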