Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

What do people use for automated testing of C++ Windows apps?


Lynn McGuire

Aug 20, 2016, 6:20:36 PM
What do people use for automated testing of C++ Windows apps ?

Our app has 200+ dialogs, a calculation engine, and 600K lines of C++ code.

We have some automated single stage testing now but I would like to add some automated multiple stage testing.

I am digesting a couple of good articles:
http://www.drdobbs.com/testing/dont-develop-gui-tests-teach-your-app-to/240168468
and
http://stackoverflow.com/questions/1287425/automated-testing-for-c-c-gui-applications

Thanks,
Lynn

Öö Tiib

Aug 20, 2016, 6:34:54 PM
On Sunday, 21 August 2016 01:20:36 UTC+3, Lynn McGuire wrote:
> What do people use for automated testing of C++ Windows apps ?

There is a list here:
https://en.wikipedia.org/wiki/List_of_GUI_testing_tools
I have participated in project that used TestComplete.

My impression is that you need to reserve a couple of programmers
full time to script those tests, since the GUI tends to change most
rapidly.

Richard

Aug 21, 2016, 3:16:02 PM
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npal36$v1e$1...@dont-email.me> thusly:

>What do people use for automated testing of C++ Windows apps ?

I suppose it depends on what you mean by automated testing.

The general approach is to test the business logic of the application
with unit tests and acceptance tests (integration tests).

If your business logic is intimately coupled to the UI (i.e. business
logic is embedded in event handlers), then the tight coupling makes it
difficult to write unit tests. Acceptance tests in this situation
must be written as some sort of event synthesis in order to pretend to
be a user because there's no other way to exercise the business logic
than to get the event handlers to fire.

This is fine as long as the UI is stable and what you're after is a
regression test suite that verifies that internal changes haven't
indirectly broken stable functionality.

However, if you're talking about an application that is evolving and
having new features added, then coupling your acceptance tests to the
UI is very fragile IMO. The UI is the part of the application that
changes at the highest frequency. When you couple your acceptance
tests directly to the UI, then your tests are constantly needing to be
updated. However, it is likely that the underlying business logic or
computational process is changing at a much slower rate. Couple your
tests to the business logic and your tests become less of a burden.

For complex UI where there is coupling of behavior between controls
(i.e. disable this input field when that checkbox is checked), you
will want to have some automated test around that logic. If your UI
follows a model-view-controller pattern of separation, you can unit
test the controller. If your code is tightly coupled, with the UI
logic directly in the event handlers so that you can't exercise the
logic without instantiating the GUI itself, then you can refactor it into
a Mediator pattern. This just means extracting the UI logic into a new
class (the mediator) that interacts with the UI through an interface
implemented by your dialog, window, etc. Then you can unit
test the mediator. This is what I do when making new dialogs with
more than trivial behavior (i.e. more than just a bunch of input fields and
OK/Cancel buttons) in our large Qt application.

>Our app has 200+ dialogs, a calculation engine, 600K lines of code of C++,
>
>We have some automated single stage testing now but I would like to add
>some automated multiple stage testing.

For acceptance tests I like FitNesse <http://www.fitnesse.org>. It
lets you express your acceptance tests through wiki pages with
embedded tables of test instructions. This lets you add all kinds of
ancillary documentation to the tests along with the tests. It is
heavily used in the Java and .NET communities for automated acceptance
testing. It is written in Java and has a bridge for executing C/C++
code.

Our application is a large 3D graphics Qt application with ~1M
statements and lots of UI. Fortunately it has also been scriptable
from the beginning with the Qt JavaScript variant. We add unit tests
and acceptance tests covering new features, as well as
broadening our acceptance tests to cover more legacy features. We try
to test at the level of functionality and not GUI (the GUI just drives
the same underlying functionality). With new dialogs we write
mediators and test the logic in the mediator.

I cover the Mediator idea in my set of blog posts on TDD:
<https://legalizeadulthood.wordpress.com/2009/07/05/c-unit-tests-with-boost-test-part-4/>

You can see me refactor some existing C# code into Mediator pattern
here:
<http://confreaks.tv/videos/agileroots2009-test-driven-development-and-refactoring-part-two>
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Lynn McGuire

Aug 21, 2016, 7:30:27 PM
Thanks for the list!

Wow, my shop is me and three programmers, plus other support personnel.
I am going to need something a little less manpower intensive.

Thanks,
Lynn


Lynn McGuire

Aug 21, 2016, 7:48:27 PM
I have three levels of automated tests now. The first level tests our
UI by opening 400+ test files, resaving them, and running our
calculation engine on them; it takes 12 hours to run. The second level
adds extreme regression of our calculation engine with 700+ additional
files and takes another 12 hours to run. The third level adds 13,000+
customer files, with the opening, saving, and execution of the
calculation engine, and takes 6 more days to run. The third level is just a
test of the various combinations of user options to make sure that our
UI and calculation engine do not crash.

We use our own hand-built diagramming toolkit in Win32 C++ dating back
to 1987 or so. I would like to have something like Qt, but that is a project
for another day. Our app is not scriptable, but it is easily programmable.

I would like to find a test tool that allows us to test from the UI
level: something that allows us to test multiple stages of expected
behavior across various dialogs.

Sincerely,
Lynn McGuire

Richard

Aug 22, 2016, 3:15:10 AM
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npdek3$8tk$1...@dont-email.me> thusly:

>I would like to find a test tool that allows us to test from the UI
>level. Something that allows us to test multiple stage of expected
>behavior across various dialogs.

There are a number of tools that can synthesize events and then poke
at the resulting Win32 controls to attempt to verify the results.
However, they are very fragile and labor intensive to both create and
maintain. Sometimes they just don't work at all because the Win32 API
wasn't really designed so that application B can probe all the
internal state of the controls in application A. This is particularly
problematic when you write custom handlers for the low level events.
Also, these scripts are not portable so if you have cross-platform
code and have to care about testing on Mac or Linux, you will end up
needing tests for each platform as the event playback style of testing
is very platform specific.

One option is to build the event recording and playback directly into
your application. I've worked on C++ code bases that did this, and
while it gave them the ability to play back a user's session (valuable
for more things than just regression testing), it was very labor
intensive to create and maintain. Sometimes, this might be the only
reliable way of reproducing certain user issues, however.

For instance, a CAD program is used in sessions lasting hours or
perhaps even days. When the CAD program crashes, there may be no way
to easily reproduce the problem from a few edit steps in a fresh
session. You may need the entire session playback in order to
reproduce the problem. Some games have a record/playback feature to
support debugging of user problems encountered during gameplay for a
similar reason.

You don't state if your goal is to cover new features and enhancements
to existing features or if your goal is to build a regression suite.

As I mentioned earlier, event synthesis/control probing schemes are OK
for regressions. They are still very labor intensive to create and
maintain, but if the UI isn't constantly changing the investment can
be amortized over time.

A scheme with FitNesse tests tied directly to the business logic is
better, however. You really want to make your UI as dumb as possible
so that it can be shown correct by inspection instead of by automated
testing. This is the so-called "Humble Dialog". Put the automated
testing to work on the underlying business logic that produces results
shown by the UI.

Öö Tiib

Aug 22, 2016, 8:21:02 AM
It may depend on how large and friendly a user base you have, but in
general you need at least one engineer fully dedicated to quality
anyway. A focus on quality is very different from a focus on functionality,
and programmers often find it hard to switch between the two. Low-quality
software is considered creepy by a lot of customers.

A test suite that interacts with the GUI (does its drags, taps, scrolls, slides,
pulls, and rotates in a meaningful way) and recognizes the validity of the
outcome cannot be trivial, and so inevitably takes complex scripting.
If you don't have the budget for full-coverage tests, then try to automate
at least regression tests. Places that have ever been broken are
fragile, so regressions are surprisingly common.

Lynn McGuire

Aug 22, 2016, 1:50:00 PM
Interesting article, "Why Are There So Many C++ Testing Frameworks?":
http://googletesting.blogspot.com/2012/10/why-are-there-so-many-c-testing.html

Lynn

Richard

Aug 22, 2016, 3:54:20 PM
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npfdvq$4mn$1...@dont-email.me> thusly:

>Interesting article, "Why Are There So Many C++ Testing Frameworks?":
>
>http://googletesting.blogspot.com/2012/10/why-are-there-so-many-c-testing.html

Without reading the article, my guesses are:

- there wasn't one, so we made our own and kept using it
- open source egoism[*]
- there's no de facto standard like JUnit
- framework PQR doesn't do enough, so I made my own
- framework PQR does too much, so I made my own

Now let's go look at the article and see how I did.

Google says PQR doesn't do enough, so they made their own:

"The short answer is that we couldn't find an existing C++
testing framework that satisfied all our needs."

Which frankly I find hard to believe, because there isn't anything in
GTest that wasn't already in Boost.Test, AFAICT. Maybe it was the use
of exceptions they didn't like; I can't recall whether Boost.Test relies
on exceptions or simply supports testing code that uses them.

[*] "open source egoism" is what I call the phenomenon behind what
motivates people to contribute to open source. It's much sexier to
create your own thing from scratch than it is to fix bugs in an
existing thing or enhance/extend an existing thing. Therefore, open
source software development tends to produce lots of greenfield
projects that do just enough to scratch the author's itch and
don't build a community that extends them further. The result is a
profusion of small greenfield projects, none of which is
sufficiently embraced to gain dominance.

Lynn McGuire

Aug 22, 2016, 4:36:58 PM
And they are Google.

Lynn

Öö Tiib

Aug 22, 2016, 6:58:14 PM
For me it means "a large company that aggressively avoids taxes
worldwide". Why is Google better than the others? Amazon pays taxes, Oracle
pays taxes, Microsoft pays taxes, Apple pays taxes. Google should pay
taxes too.

They indeed have an odd religion against C++ exceptions. So if other
frameworks did not compile with exceptions turned off, then they had
to make their own.

David Brown

Aug 23, 2016, 3:00:00 AM
I agree that Google should pay taxes - but none of these other companies
pays taxes in the way small companies or individuals do. They /all/
have armies of lawyers and accountants whose job is to minimise the
taxes paid, to make sure profits are shuffled around from company to
company and country to country until tax authorities have lost track.
/None/ of them pay an appropriate level of tax in the countries where
they make their money. So while I fully agree that Google is bad at
this, it is no worse than the other big IT companies.

>
> They indeed have a odd religion against C++ exceptions. So if other
> frameworks did not compile with exceptions turned off then they had
> to make their own.
>

Google are not alone in disliking C++ exceptions - it is a better
attitude than that of the many C++ programmers who write code that occasionally
uses exceptions but don't really understand all the details needed to make
properly exception-safe code, or who treat exceptions as just a way of
giving the user an error message when something goes wrong.

Gareth Owen

Aug 23, 2016, 2:05:14 PM
Öö Tiib <oot...@hot.ee> writes:


> Amazon pays taxes

Not round here they don't.

Profits end up at a shell company in Luxembourg.

Lynn McGuire

Aug 23, 2016, 7:53:08 PM
On 8/20/2016 5:20 PM, Lynn McGuire wrote:
We have decided to add more application specific tests for now that we run from the command line.

Thanks,
Lynn

Öö Tiib

Aug 24, 2016, 4:42:22 AM
I had a different impression, but I'm no expert to judge it myself. So maybe
there are only putrid apples. ;)

>
> >
> > They indeed have a odd religion against C++ exceptions. So if other
> > frameworks did not compile with exceptions turned off then they had
> > to make their own.
> >
>
> Google are not alone in disliking C++ exceptions - it is a better
> attitude than many C++ programmers who write code that occasionally uses
> exceptions, but don't really understand all the details needed to make
> properly exception-safe code, or who treat exceptions as just a way of
> giving the user an error message when something goes wrong.

Only some pre-standard versions of C++ did not have exceptions, so
it is not a good tool for people who can't handle them. Also, C++ is
sort of tricky for those who do not follow RAII. There are some
programming languages (unfortunately very few) that lack exceptions,
so perhaps such people should use one of those languages. The existence
of such people is not a good reason to forbid exceptions.

David Brown

Aug 24, 2016, 7:14:15 AM
I am not an expert myself. It could be that Google is worse than others
- the real point is that they are /all/ bad. And of course, how "bad"
they are can depend on your viewpoints. Is a company that dodges taxes
but gives other things to the community "bad"? Is it "bad" if it
provides cheap services to customers but uses bullying (legal or not)
tactics on competitors? Should a company pay taxes in its "mother"
country, or in different countries based on where it gets its income, or
different countries based on where it makes a profit, or in countries
based on where it has employees?

Once you realise that they are all bad in some way, you don't have to
feel guilty about using their services!

>>
>>>
>>> They indeed have a odd religion against C++ exceptions. So if other
>>> frameworks did not compile with exceptions turned off then they had
>>> to make their own.
>>>
>>
>> Google are not alone in disliking C++ exceptions - it is a better
>> attitude than many C++ programmers who write code that occasionally uses
>> exceptions, but don't really understand all the details needed to make
>> properly exception-safe code, or who treat exceptions as just a way of
>> giving the user an error message when something goes wrong.
>
> Only some pre-standard versions of C++ did not have exceptions and
> so it is not good tool for people who can't handle those. Also C++ is
> sort of tricky for those who do not follow RAII. There are some
> programming languages (unfortunately very few) that lack exceptions
> so perhaps such people should use one of those languages. Existence
> of such people is not good reason to forbid exceptions.
>

/If/ you are in a position to use exceptions well, then there are
advantages in using them. But that means that /you/, and all the other
programmers on your team, need to understand them and use them
appropriately. It also means that any libraries or third-party code you
use must be equally good. And since "exception safe" can mean many
different things ("basic" guarantee, "strong" guarantee, "nothrow"
guarantee) you have to make sure you have everything coordinated
appropriately.

RAII is necessary for making code exception-safe. But it is not
sufficient, and you certainly don't need exceptions in order to make
good use of RAII.

The fact is that writing /really/ exception-safe code is often hard
(though modern features such as unique_ptr make it a good deal easier)
and can often lead to verbose or inefficient code. You can quickly find
that you need to make extra little classes and put code in the
constructor and destructor (again, unique_ptr can simplify this a lot).
It is easy to write code that looks safe, but has hidden flaws, as it
can be hard to know where exceptions might be thrown.

People often claim that exception handling is free - noting that with
stack unwind tables, code that uses exceptions typically runs faster
than similar code that has explicit checking for failures. While that
is true as far as it goes, it is far from always the case, for a variety
of reasons. One is that on some targets, it is not as efficient - and
the cost of the space needed for the tables can be significant (think
small embedded systems). Another is that it can hinder optimisations
(by enforcing ordering of operations, or pushing local objects onto the
stack). And if you are trying to get a "strong" exception safe
guarantee, you often need a good deal more code, and a good deal slower
code. A function that modifies an object "x" with the strong guarantee
will typically take a copy "x2" of "x", modify "x2" in a way that might
throw an exception, and then swap "x" and "x2" once everything has
completed safely. That means an extra copy constructor and an extra
destructor - hardly free if "x" is a large data structure.


There are, of course, a great many benefits to be had from using
exceptions and writing exception-safe code. But you need to be clear
that the benefits only come if the code really is exception safe, that
it is realistic for exceptions to occur, and that you have something
useful that you can do /if/ an exception occurs. C++ gives you the
/option/ of using exceptions - it does not force you to do so, and it is
not always the right choice.


Jerry Stuckle

Aug 24, 2016, 11:48:22 AM
In this case I agree completely. Exceptions have their place, and good
C++ programmers know how to use them properly.

But all too often I've seen them misused. A good example would be
throwing an exception for a "NOT FOUND" response from a database query.
That is a normal response which should be expected, and should be
handled appropriately. Exceptions should be used for abnormal responses
and similar unexpected conditions such as "ABORTED CONNECTION".

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

Richard

Aug 24, 2016, 12:30:43 PM
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npinkq$jj1$1...@dont-email.me> thusly:

>We have decided to add more application specific tests for now that we
>run from the command line.

So if I understand you correctly:

- your application can accept inputs from the command-line and write
out results to a file
- your application tests invoke the tool from the command-line and
verify the resulting file(s)

This is a very reasonable way to write acceptance tests and/or
regression tests. FitNesse can be useful in this scenario as well.
The command-line arguments are documented on the FitNesse wiki page
describing the test and the expected file contents are also described
on the same wiki page. You glue them to your application by writing
a FitNesse fixture or using an existing fixture. Once your fixture
builds up enough custom verification primitives, you find that you
can express new tests as new FitNesse wiki pages without having to
write more code.

The idea behind FitNesse originally was to create a way for the
business analysts and product owners to create acceptance criteria in
a way that not only made sense to them but also became an executable
unit that could directly verify the functionality.

Mr Flibble

Aug 24, 2016, 12:44:24 PM
Exceptions can be used for expected conditions too and I would argue
"ABORTED CONNECTION" is probably one such expected condition; invalid
user input is another.

/Flibble


Jerry Stuckle

Aug 24, 2016, 12:53:49 PM
You would. But GOOD C++ programmers know better.

And no, "ABORTED CONNECTION" would not be a condition that would
normally occur, and would not be expected. At least not in any decent
system. Maybe it is on the ones you work with.

And invalid user input is an expected condition and should be handled
appropriately - not with exceptions.

Mr Flibble

Aug 24, 2016, 1:05:45 PM
Your fractal wrongness continues unabated it seems.

Exceptions are not just for *fatal* errors; if that were the case, there
would be little point having the ability to catch exceptions.
Exceptions are also useful for handling non-fatal, i.e. recoverable,
conditions without having to resort to the old-fashioned C way of
returning error codes manually up the call stack to indicate an error.

It is obvious, given your outlook, that you do not know how to design
software properly through the use of layered abstractions, through which
exceptions are an ideal way of transmitting errors of all kinds.

/Flibble

Öö Tiib

Aug 24, 2016, 1:35:43 PM
So part of exception-handling skill is knowing how *not* to throw into legacy
code (a module or library) that expects nothrow but does not constrain
it in its interface, or how to edit the interface to constrain it.

With unskilled people it is a gamble anyway. Can't handle exceptions?
Can they manage resources? Resources besides memory? Can they
handle threads and locks? Etc. C++ without exceptions is in no way a
simple and foolproof tool.

> And since "exception safe" can mean many
> different things ("basic" guarantee, "strong" guarantee, "nothrow"
> guarantee) you have to make sure you have everything coordinated
> appropriately.

These guarantees are trivial and obvious. Let me list them:

* The "nothrow" guarantee is marked with the "noexcept" specifier in C++.
Exceptions won't ever be thrown out of such functions. How could it be
simpler?

* The "basic" guarantee is that all objects remain in a valid invariant state
and there are no resource leaks when an exception is thrown out of a function.
That very guarantee is expected *always* in C++, regardless of whether the
function throws or not. So it is simply essential, and in the case of
exceptions it is achievable with RAII alone.

* The "strong" guarantee means that the function rolls everything back to the
state before the call if it throws. That means special design effort is
used, which can be expensive. That design effort, however, does not
differ from that of a function that has no side effects (rolls everything back)
when it returns an error code. So it is special *functionality* and is
expected to be documented for a function.

>
> RAII is necessary for making code exception-safe. But it is not
> sufficient, and you certainly don't need exceptions in order to make
> good use of RAII.

RAII is sufficient for achieving the basic guarantee. It does not achieve
the roll-back guarantee automatically in the general case, but nothing does.
RAII is essential for other reasons anyway; exactly my point.

>
> The facts are that writing /really/ exception-safe code is often hard
> (though modern features such as unique_ptr makes it a good deal easier),
> and can often lead to verbose or inefficient code. You can quickly find
> that you need to make extra little classes and put code in the
> constructor and destructor (again, unique_ptr can simplify this a lot).
> It is easy to write code that looks safe, but has hidden flaws, as it
> can be hard to know where exceptions might be thrown.

The 'unique_ptr' has been around for years. Before it, there was
a decade when 'boost::scoped_ptr' was available. Efficiency-wise it is
about as good; it just does the essential things. Before that there was a time
when C++ wasn't standardized and both templates and exceptions
were implemented expensively and inconsistently, so I avoided
them. That was two decades ago, however.

>
> People often claim that exception handling is free - noting that with
> stack unwind tables, code that uses exceptions typically runs faster
> than similar code that has explicit checking for failures. While that
> is true as far as it goes, it is far from always the case, for a variety
> of reasons.

Not free, but often cheaper, and that is why it is often faster.

> One is that on some targets, it is not as efficient - and
> the cost of the space needed for the tables can be significant (think
> small embedded systems).

Very small embedded systems and the bad quality of some implementations
are red herrings again. Small systems do trivial things, so usage of most
features of C++ (including exceptions) is likely overengineering there.
Usage of low-quality tools can result in unanticipated expenses regardless
of whether we use exceptions or not. It is the same as with an unskilled
workforce: exceptions are not *the* issue there and don't stand out as
special; something else is the issue.

> Another is that it can hinder optimisations
> (by enforcing ordering of operations, or pushing local objects onto the
> stack). And if you are trying to get a "strong" exception safe
> guarantee, you often need a good deal more code, and a good deal slower
> code. A function that modifies an object "x" with the strong guarantee
> will typically take a copy "x2" of "x", modify "x2" in a way that might
> thrown an exception, and then swap "x" and "x2" once everything is
> completed safely. That means an extra copy-constructor, and an extra
> destructor - that's hardly free if "x" is a large data structure.

What do you mean? We often have to make backup copies of objects when
a failed attempt at an operation may change them (but we guarantee not to),
regardless of whether we throw an exception after failing or return an error
code. What else do you suggest doing in the general case?

>
> There are, of course, a great many benefits to be had from using
> exceptions and writing exception-safe code. But you need to be clear
> that the benefits only come if the code really is exception safe, that
> it is realistic for exceptions to occur, and that you have something
> useful that you can do /if/ an exception occurs. C++ gives you the
> /option/ of using exceptions - it does not force you to do so, and it is
> not always the right choice.

With that I agree. Also, I feel that I have never claimed that usage of
exceptions is the correct choice in every situation. Exceptions are good
only for handling exceptional situations (as the name indicates).
That does not mean that such handling itself is rare in code. In a mature
program over half of the handled situations are exceptional.

Jerry Stuckle

Aug 24, 2016, 4:00:26 PM
I didn't say anything about *fatal* errors. That's YOUR WORDS.

And no, GOOD programmers know exceptions are NOT an "ideal way for
transmitting errors of all kinds". Among other things, the overhead of
exception handling is quite high. And exceptions never were designed to
handle "all kinds of errors".

Good programmers know how to use exceptions effectively. Crappy
programmers misuse them for all kinds of things they weren't meant for.

Mr Flibble

Aug 24, 2016, 5:12:11 PM
Again your ignorance is showing. In a decent implementation the
overhead for exception handling is zero for the case of the exception
not being thrown. For recoverable exceptions the overhead of throwing
them is offset by the fact that they shouldn't be thrown very often
(i.e. not on the common/critical code path), but that doesn't mean you
should only use them for non-recoverable conditions.

>
> Good programmers know how to use exceptions effectively. Crappy
> programmers misuse them for all kinds of things they weren't meant for.

You don't seem to know what exceptions are meant for so according to you
you are a crappy programmer. I wouldn't argue with that.

/Flibble

Jerry Stuckle

Aug 24, 2016, 5:28:36 PM
I agree. Your ignorance is showing. As I said - exceptions are for
"exceptional conditions" - not handling expected conditions such as NOT
FOUND from a database or invalid user input. The cost of unwinding the
stack is very high.

And I said nothing about using them for non-recoverable conditions.
Those are YOUR WORDS.

>>
>> Good programmers know how to use exceptions effectively. Crappy
>> programmers misuse them for all kinds of things they weren't meant for.
>
> You don't seem to know what exceptions are meant for so according to you
> you are a crappy programmer. I wouldn't argue with that.
>
> /Flibble

Oh, I understand completely what they are for. But you once again show
you are a crappy programmer.

No surprise there. And no wonder you can't find a job with a decent
company.

Mr Flibble

Aug 24, 2016, 5:39:46 PM
Whether or not the cost of unwinding the stack is a concern depends
entirely on the situation in question. In the rare case where it is a
concern, a different approach to the particular problem
may be required. You wouldn't throw an exception in order to draw an
individual pixel when rendering a frame buffer, for example, but this is a
straw man, much like your argument. Your words "very high" are entirely
subjective and completely relative.

Your basic problem is that you are generalizing without realizing that
you are generalizing.

>
> And I said nothing about using them for non-recoverable conditions.
> Those are YOUR WORDS.
>
>>>
>>> Good programmers know how to use exceptions effectively. Crappy
>>> programmers misuse them for all kinds of things they weren't meant for.
>>
>> You don't seem to know what exceptions are meant for so according to you
>> you are a crappy programmer. I wouldn't argue with that.
>>
>> /Flibble
>
> Oh, I understand completely what they are for. But you once again show
> you are a crappy programmer.
>
> No surprise there. And no wonder you can't find a job with a decent
> company.

You do know that attacking the person (ad hominem) is a logical fallacy
right? Do you even know what a logical fallacy is? And when you use
one you have effectively lost the argument?

/Flibble


Jerry Stuckle

Aug 24, 2016, 5:58:47 PM
The cost of unwinding a stack is ALWAYS a concern - at least to a
competent programmer. And it is YOU who are generalizing - and trying
to put words into my mouth. It doesn't work, Flibbie.


>>
>> And I said nothing about using them for non-recoverable conditions.
>> Those are YOUR WORDS.
>>
>>>>
>>>> Good programmers know how to use exceptions effectively. Crappy
>>>> programmers misuse them for all kinds of things they weren't meant for.
>>>
>>> You don't seem to know what exceptions are meant for so according to you
>>> you are a crappy programmer. I wouldn't argue with that.
>>>
>>> /Flibble
>>
>> Oh, I understand completely what they are for. But you once again show
>> you are a crappy programmer.
>>
>> No surprise there. And no wonder you can't find a job with a decent
>> company.
>
> You do know that attacking the person (ad hominem) is a logical fallacy
> right? Do you even know what a logical fallacy is? And when you use
> one you have effectively lost the argument?
>
> /Flibble
>
>

In your case it's not an attack. It's only the truth - as you yourself
have proven time and time again. I can't help it if the truth hurts.

Mr Flibble

Aug 24, 2016, 6:41:54 PM
On 24/08/2016 22:58, Jerry Stuckle wrote:
>
> The cost of unwinding a stack is ALWAYS a concern - at least to a
> competent programmer.

No, it isn't. Donald Knuth once said:

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil. Yet we should not pass
up our opportunities in that critical 3%."

So we only need to worry about the overhead of throwing exceptions in
that 3% not that 97%.

Maybe you should take a course in computer programming?

/Flibble

David Brown

Aug 24, 2016, 7:17:36 PM
On 24/08/16 19:35, Öö Tiib wrote:
> On Wednesday, 24 August 2016 14:14:15 UTC+3, David Brown wrote:

>>
>> /If/ you are in a position to use exceptions well, then there are
>> advantages in using them. But that means that /you/, and all the other
>> programmers on your team, need to understand them and use them
>> appropriately. It also means that any libraries or third-party code you
>> use must be equally good.
>
> So part of exception handling skill is how *not* to throw into legacy
> code (module or library) that expects nothrow but does not constrain
> it in interface or how to edit the interface to constrain it.
>
> About unskilled people it is anyway gamble. Can't handle exceptions?
> Can they manage resources? Resources besides memory? Can they
> handle threads and locks? Etc. C++ without exceptions is no way
> simple and foolproof tool.

It is neither simple nor foolproof, whether or not you use exceptions.
Exceptions are just one tool in the C++ toolbox.

>
>> And since "exception safe" can mean many
>> different things ("basic" guarantee, "strong" guarantee, "nothrow"
>> guarantee) you have to make sure you have everything coordinated
>> appropriately.
>
> These guarantees are trivial and obvious. Let me list?
>
> * The "nothrow" guarantee is marked with "noexcept" specifier in C++.
> Exceptions won't be ever thrown out of such functions. How it can be
> simpler?

Note that "noexcept" is from C++11 onwards. The older exception
specifications did not guarantee anything (other than giving the
compiler new ways to make things go wrong for your program if you don't
follow your specifications).

But since noexcept (like throw()) is not checked at compile time, it
means manual work - you have to figure out if your function really does
not throw anything, which means figuring out if any of the operators,
function calls, etc., in the function are definitely noexcept.

Sure, it can be done - but the effort involved is not negligible. If
the end result is better, safer code, then great. If not, the effort is
wasted, and it's another thing for people to get wrong.
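To make that concrete, here is a minimal sketch (the function names are invented for this post, not from anyone's code) of the two situations: stating noexcept by hand on a function you have verified cannot throw, versus computing the guarantee from the operations a template actually uses with a conditional noexcept:

```cpp
#include <cassert>
#include <utility>

// 'add' is trivially non-throwing, so marking it noexcept is safe. The
// compiler will NOT verify this claim at compile time; if an exception
// did escape, std::terminate would be called at run time.
inline int add(int a, int b) noexcept {
    return a + b;
}

// Conditional noexcept: the guarantee is computed from the operations
// used, instead of being figured out (and then maintained) by hand.
template <typename T>
void swap_values(T& a, T& b) noexcept(noexcept(std::swap(a, b))) {
    std::swap(a, b);
}
```

With this, `noexcept(add(1, 2))` evaluates to true at compile time, and `swap_values` is noexcept exactly when `std::swap` is for that type.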

>
> * "Basic" guarantee is that all objects will be in valid invariant and there
> are no resource leaks when exception is thrown out of function. That
> very guarantee is expected *always* in C++ regardless if function does
> throw or not. So it is only essential and is achievable with RAII alone
> on case of exceptions.
>
> * "Strong" guarantee means that function rolls everything back into
> state before call if it throws. That means special design effort is
> used that can be expensive. That design effort however does not
> differ from a function that has no side effects (rolls everything back)
> when it returns an error code. So it is special *functionality* and is
> expected to be documented about a function.
>

Having a strong guarantee on a function can definitely be a useful thing.

But consider what you are protecting against by this "special design
effort". A sizeable percentage of programs are run on PCs, by humans.
They will /never/ run out of memory, assuming you don't have leaks. The
action on failure, from a general "catch" somewhere in or close to
main(), will be to present an error message and then die. In such
cases, all the extra effort in making strong guarantee functions is
wasted. Worse than that - more code means more opportunities for bugs,
and the complications for the strong guarantee can make code much
slower. And all in the name of avoiding something that will never
happen, and minimising the consequences unnecessarily.
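The copy-modify-swap pattern being described can be sketched as follows; `Counters` and `double_all` are invented names, and the extra copy of the whole vector is exactly the strong-guarantee cost under discussion:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

struct Counters {
    std::vector<int> data;

    // Strong guarantee: either every element is doubled, or 'data' is
    // left untouched. The price is one extra copy of the whole vector.
    void double_all() {
        std::vector<int> copy = data;          // extra copy-construction
        for (int& v : copy) {
            if (v > 1000000)
                throw std::overflow_error("value too large to double");
            v *= 2;
        }
        data.swap(copy);                       // nothrow commit point
    }
};
```

If the loop throws partway through, only the scratch copy has been modified, so the caller observes the original state.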

And there are lots of systems where errors are simply not acceptable -
exceptions should not be thrown. Then there is no need to throw or
catch exceptions - at best, they are a tool to help during testing and
development.

Again, I am not saying it is wrong to use exceptions, or wrong to
provide strong guarantees - just that it comes with a cost that is often
not worth paying.

>>
>> RAII is necessary for making code exception-safe. But it is not
>> sufficient, and you certainly don't need exceptions in order to make
>> good use of RAII.
>
> RAII is sufficient for achieving basic guarantee. It does not achieve
> roll-back guarantee automatically on general case, but nothing does
> that. RAII is essential for other reasons anyway, exactly my point.
>

I am quite happy with RAII - it has many uses even without exceptions.

>>
>> The facts are that writing /really/ exception-safe code is often hard
>> (though modern features such as unique_ptr makes it a good deal easier),
>> and can often lead to verbose or inefficient code. You can quickly find
>> that you need to make extra little classes and put code in the
>> constructor and destructor (again, unique_ptr can simplify this a lot).
>> It is easy to write code that looks safe, but has hidden flaws, as it
>> can be hard to know where exceptions might be thrown.
>
> The 'unique_ptr' has been around for years. Before it there was
> a decade when 'boost::scoped_ptr' was available. Efficiency-wise it is
> about as good, it just does essential things. Before that there was time
> when C++ wasn't standardized and both templates and exceptions
> were sort of expensively and inconsistently implemented so I avoided
> those. That was two decades ago however.
>
>>
>> People often claim that exception handling is free - noting that with
>> stack unwind tables, code that uses exceptions typically runs faster
>> than similar code that has explicit checking for failures. While that
>> is true as far as it goes, it is far from always the case, for a variety
>> of reasons.
>
> Not free but often cheaper and that is why it is often faster.
>
>> One is that on some targets, it is not as efficient - and
>> the cost of the space needed for the tables can be significant (think
>> small embedded systems).
>
> Very small embedded systems and bad quality of some implementations
> are red herrings again.

Small embedded systems are programmed in C++ these days. It is a
growing field.

> Small systems do trivial things and so usage of most
> features of C++ (including exceptions) are likely overengineering there.

True enough.

> Usage of low quality tools can result with unanticipated expenses regardless
> if we use exceptions or not. It is same as with unskilled workforce, exceptions
> are not *the* issue there and don't stand out as special, something else is
> the issue.

Lack of support for exceptions, or inefficient implementations, does not
mean low quality tools.

>
>> Another is that it can hinder optimisations
>> (by enforcing ordering of operations, or pushing local objects onto the
>> stack). And if you are trying to get a "strong" exception safe
>> guarantee, you often need a good deal more code, and a good deal slower
>> code. A function that modifies an object "x" with the strong guarantee
>> will typically take a copy "x2" of "x", modify "x2" in a way that might
>> thrown an exception, and then swap "x" and "x2" once everything is
>> completed safely. That means an extra copy-constructor, and an extra
>> destructor - that's hardly free if "x" is a large data structure.
>
> What you mean? We often have to make backup copies of objects when
> failed attempt of operation may change it (but we guarantee not to)
> regardless if we throw exception after we failed or we return error code.
> What else you suggest to do on general case?
>

You may well have to make backup copies if you want to write
trial-and-error code, whether or not you use exceptions. But when you
have exceptions, you have to be aware that just about anything can fail
unexpectedly with an exception. Without exceptions, and using explicit
error handling, you know where things can go wrong.

>>
>> There are, of course, a great many benefits to be had from using
>> exceptions and writing exception-safe code. But you need to be clear
>> that the benefits only come if the code really is exception safe, that
>> it is realistic for exceptions to occur, and that you have something
>> useful that you can do /if/ an exception occurs. C++ gives you the
>> /option/ of using exceptions - it does not force you to do so, and it is
>> not always the right choice.
>
> With that I agree. Also I feel that I have never claimed that usage of
> exceptions is the correct choice in every situation. Exceptions are good
> only for handling exceptional situations (as the name indicates).
> That does not mean that such handling itself is rare in code. In a mature
> program over half of the handled situations are exceptional.
>

My dislike for exceptions is that they hide things that I would prefer
to be out in the open.

If I see a function:

int foo(int x);

I read that as a function that takes an integer "x", foo's it, and
returns an integer result. It is clear and simple, and ready to use.

But with exceptions, foo is now a function that takes an integer "x",
and /either/ foo's it and returns an integer, /or/ it returns a
completely unexpected object of an unknown type that may not even be
known to the user of foo, which will lead to a jump to somewhere else in
code that the user of foo doesn't know about.

If foo is a function that could fail, I prefer to know about it:

tuple<int, bool> foo(int x);


As an aside, I /really/ hope that structured binding makes it into
C++17, so that we can write:

auto [result, success] = foo(123);
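As a sketch of that preferred signature (`checked_div` is a made-up example function), the failure case becomes part of the type, and with C++17 structured bindings the call site reads cleanly:

```cpp
#include <tuple>

// The bool half of the result says whether the int half is meaningful;
// failure is visible right in the signature, not hidden in a throw.
std::tuple<int, bool> checked_div(int num, int den) {
    if (den == 0)
        return {0, false};
    return {num / den, true};
}
```

Used as `auto [result, success] = checked_div(10, 2);` the caller cannot pretend the function never fails without visibly ignoring `success`.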


Jerry Stuckle

Aug 24, 2016, 10:57:41 PM
Right. This from a demonstrated troll who has proven he knows nothing
about programming.

No wonder you can't get a job with a decent company.

Gareth Owen

Aug 25, 2016, 1:15:38 AM
David Brown <david...@hesbynett.no> writes:

> My dislike for exceptions is that they hide things that I would prefer
> to be out in the open.
>
> If I see a function:
>
> int foo(int x);
>
> I read that as a function that takes an integer "x", foo's it, and
> returns an integer result. It is clear and simple, and ready to use.

That assumes that the returned int must always be a valid value
(i.e. there are no domain errors, foo(x) can never fail due to resource
exhaustion etc). Otherwise, some return value(s) must be set aside as
error codes, which you must now check. There's no clue in the function
signature as to whether this is true. Furthermore, if you neglect to check
for these error codes, your program appears to work, but is probably
doing so incorrectly. If I do check, I have to propagate that error up
the call chain, which is probably going to require me to find out the
error-return and return that.

(Boy, I hope the callers all the way up the stack remember to check &
propagate, and clean up their resources correctly).

Conversely, if you know that all return values of int foo(int x) are not
error codes (i.e. foo() can't fail), then you also know that foo is not
going to throw. If you are unaware of this failure case, your program
fails, but it fails early with a (possibly uncaught) exception.

The only question that matters is "Can this function fail and what
happens if it does?"

int foo(int x);

doesn't tell you that.

Gareth Owen

Aug 25, 2016, 1:27:59 AM
Additionally, it's simply wrong to suggest that if I use an
error-return code to signal failure from the point-of-failure to the
point-of-recovery then I am somehow absolved from having to unwind the
stack.

On the contrary, I now have to do it by hand, requiring error-handling
code at multiple places in the call graph, and an API that has error
returns for every function.

David Brown

Aug 25, 2016, 3:58:29 AM
On 25/08/16 07:15, Gareth Owen wrote:
> David Brown <david...@hesbynett.no> writes:
>
>> My dislike for exceptions is that they hide things that I would prefer
>> to be out in the open.
>>
>> If I see a function:
>>
>> int foo(int x);
>>
>> I read that as a function that takes an integer "x", foo's it, and
>> returns an integer result. It is clear and simple, and ready to use.
>
> That assumes that the returned int must always be a valid value
> (i.e. there are no domain errors, foo(x) can never fail due to resource
> exhaustion etc).

Exactly. It's that simple.

If there are domain restrictions on x, then they should be documented as
part of the type of x, the name of the function, or a comment. If code
calls foo with an invalid x, then the calling code is broken. It has a
bug. When implementing foo(), it is helpful to spot invalid x values to
help during debugging, and it is helpful to avoid propagating errors,
but it is /not/ helpful to try to aid recovery in the calling code.

Functions should not fail when they are used correctly. If it might do
something other than its primary function, then that is a secondary
function and should be part of the function's specifications. You can't
have a function that claims to always do "foo", but sometimes does
something else entirely just because of lack of memory. Then it is no
longer the function "foo" - it is the function
"try_to_foo_or_do_bar_instead". And that is absolutely fine - but it is
a different specification.

> Otherwise, some return value(s) must be set aside as
> error codes, which you must now check.

If foo can do something other than its expected job, and the caller
of foo cares about it, then yes - some sort of returned indication is
needed, and some action is needed by the caller. In many cases, the
caller is /not/ interested - how often do people check the return from
printf?

> There's no clue in the function
> signature as to whether this is true.

If the function needs to return a "success/failure" indication as well
as a return value, then it has to return two bits of information. This
needs to be in the function's signature and/or documentation - just like
any other function that returns one or more pieces of information.
Sometimes for efficiency, the error states are indicated by particular
values in the return type - the documentation and commenting needs to be
/really/ clear in such cases.

But with exceptions, you really have no clue as to what is going on -
the specifications are totally missing from the function signature.
Given "int foo(int x);", you don't have a function that returns an int -
you have a function that can return absolutely anything.

> Furthermore, if you neglect to check
> for these error codes, your program appears to work, but is probably
> doing so incorrectly. If I do check, I have to propagate that error up
> the call chain, which is probably going to require me to find out the
> error-return and return that.

Exactly the same applies to exceptions.

Exceptions can make it a little easier to pretend that this doesn't
happen, or that it doesn't matter. But they also mean that you have to
code /everything/ as though an error was propagating through it, because
you never know when that might happen.

And the real meat of the problem - what should code do with the error -
is pretty much the same. Generally, there are four sorts of error that
could be caught. One is a coding bug - this should (at least during
testing/debugging) not be propagated, but lead as quickly as possible to
a helpful error message so that it can be fixed. Another is recoverable
local errors (such as a packet error that triggers a retry). These
should be spotted locally, and handled locally. Thus propagation is not
an issue. Then there are recoverable long-range errors (such as a
connection timeout). The function that can fail in this way is a "best
effort" function, and "failure" is part of its expected normal operation
- there is no error. And then there are unexpected disasters such as
running out of memory. Recovery is not an option, and propagation is
seldom useful or successful (good luck propagating exceptions when your
heap has failed).

>
> (Boy, I hope the callers all the way up the stack remember to check &
> propagate, and clean up their resources correctly).
>
> Conversely, if you know that all return values of int foo(int x) are not
> error codes (i.e. foo() can't fail), then you also know that foo is not
> going to throw. If you are unaware of this failure case, your program
> fails, but it fails early with a (possibly uncaught) exception.
>
> The only question that matters is "Can this function fail and what
> happens if it does?"
>
> int foo(int x);
>
> doesn't tell you that.
>

In my view, it /does/ tell you - functions don't fail if you use them
correctly. But function specifications are often lax about telling you
/exactly/ what they will do in various circumstances (either dependent
on the parameters, or dependent on other resources).

If you write a function like this:

int squareRoot(int x);

described as "returns the integer square root of x", then the problem
lies not with the function, but with the specification of the function.

It should be specified as:

"returns the integer square root of x if x >= 0, and has undefined
behaviour otherwise"

or

"returns the integer square root of x if x >= 0, or -1 otherwise"

or

"returns the integer square root of x if x >= 0, or throws
std::domain_error otherwise"

Code can use the function incorrectly, but the function itself does not
"fail". "Failure modes" are part of the specification of a function,
and thus known and documented behaviour.
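The third specification above, written out as a deliberately naive sketch so the documented "failure mode" is visible in code:

```cpp
#include <stdexcept>

// Returns the integer square root of x if x >= 0, or throws
// std::domain_error otherwise - exactly the documented behaviour,
// so a throw here is specification, not "failure".
int squareRoot(int x) {
    if (x < 0)
        throw std::domain_error("squareRoot: negative argument");
    int r = 0;
    // Find the largest r with r*r <= x; the long long cast avoids
    // overflow of (r+1)*(r+1) for x near INT_MAX.
    while (static_cast<long long>(r + 1) * (r + 1) <= x)
        ++r;
    return r;
}
```

A caller passing a negative x is using the function incorrectly, but the function itself still behaves exactly as specified.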


Scott Lurndal

Aug 25, 2016, 9:03:59 AM
Actually, there is a non-zero cost for that case; the instruction
footprint of the code gets larger due to the added code to handle
stack unwinding when the exception is thrown. This will have a
rather small impact on performance due to I-cache resource scarcity,
which matters in some cases (embedded code, operating systems,
hypervisors, simulators).

$ objdump -d bin/a > /tmp/a.dis
$ grep Unwind_Resume /tmp/a.dis |wc -l
941

I once compared using sigsetjmp/siglongjmp with using Exceptions
in the same codebase. Exceptions were significantly slower. It's
much more difficult to use sigsetjmp/siglongjmp correctly, however,
particularly in multithreaded code. That codebase still uses
sigsetjmp/siglongjmp for performance reasons (and because it was
developed before efficient C++ exceptions were available and in a time when
a 180 MHz Intel P6 was brand new and 4 MB was a lot of memory).

Scott Lurndal

Aug 25, 2016, 9:05:26 AM
Gareth Owen <gwo...@gmail.com> writes:
>David Brown <david...@hesbynett.no> writes:
>
>> My dislike for exceptions is that they hide things that I would prefer
>> to be out in the open.
>>
>> If I see a function:
>>
>> int foo(int x);
>>
>> I read that as a function that takes an integer "x", foo's it, and
>> returns an integer result. It is clear and simple, and ready to use.
>
>That assumes that the returned int must always be a valid value
>(i.e. there are no domain errors, foo(x) can never fail due to resource
>exhaustion etc). Otherwise, some return value(s) must be set aside as

Or you can use 'bool foo (int x, int& rval)' instead.

Jerry Stuckle

Aug 25, 2016, 9:30:56 AM
Unwinding the stack via exception handling is much more CPU intensive
than returning from functions.

David Brown

Aug 25, 2016, 10:15:16 AM
Yes. Or "int foo(int x, bool& success)", or "tuple<int, bool> foo(int
x)", or various other arrangements. You can even use a global "errno"
type value (though it might make some people scream...). Or you can use
exceptions.

The point is that this should all be part of what "foo" does - not a
hidden mechanism.
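For comparison, the out-parameter arrangement from the previous post looks like this in use; `parse_digit` is an invented example:

```cpp
// The bool return carries success/failure; the result travels through
// the reference parameter. Nothing is hidden, but every caller must
// remember to check the return value.
bool parse_digit(char c, int& out) {
    if (c < '0' || c > '9')
        return false;
    out = c - '0';
    return true;
}
```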

Öö Tiib

Aug 25, 2016, 11:01:53 AM
On Thursday, 25 August 2016 02:17:36 UTC+3, David Brown wrote:
> On 24/08/16 19:35, Öö Tiib wrote:
> > On Wednesday, 24 August 2016 14:14:15 UTC+3, David Brown wrote:
>
> >>
> >> /If/ you are in a position to use exceptions well, then there are
> >> advantages in using them. But that means that /you/, and all the other
> >> programmers on your team, need to understand them and use them
> >> appropriately. It also means that any libraries or third-party code you
> >> use must be equally good.
> >
> > So part of exception handling skill is how *not* to throw into legacy
> > code (module or library) that expects nothrow but does not constrain
> > it in interface or how to edit the interface to constrain it.
> >
> > About unskilled people it is anyway gamble. Can't handle exceptions?
> > Can they manage resources? Resources besides memory? Can they
> > handle threads and locks? Etc. C++ without exceptions is no way
> > simple and foolproof tool.
>
> It is neither simple nor foolproof, whether or not you use exceptions.
> Exceptions are just one tool in the C++ toolbox.

So my solution so far has been to give the tool to people who can
handle it, or who at least are excited to learn it. The existence of
people who can't handle it and/or dislike it does not matter.

>
> >
> >> And since "exception safe" can mean many
> >> different things ("basic" guarantee, "strong" guarantee, "nothrow"
> >> guarantee) you have to make sure you have everything coordinated
> >> appropriately.
> >
> > These guarantees are trivial and obvious. Let me list?
> >
> > * The "nothrow" guarantee is marked with "noexcept" specifier in C++.
> > Exceptions won't be ever thrown out of such functions. How it can be
> > simpler?
>
> Note that "noexcept" is from C++11 onwards. The older exception
> specifications did not guarantee anything (other than giving the
> compiler new ways to make things go wrong for your program if you don't
> follow your specifications).

Yes. In C++98 the exceptions were somewhat worse. It is generally considered
good news that things evolve.

>
> But since noexcept (like throw()) is not checked at compile time, it
> means manual work - you have to figure out if your function really does
> not throw anything, which means figuring out if any of the operators,
> function calls, etc., in the function are definitely noexcept.
>
> Sure, it can be done - but the effort involved is not negligible. If
> the end result is better, safer code, then great. If not, the effort is
> wasted, and it's another thing for people to get wrong.

Yes. Exceptions, like everything else, are not a silver bullet. There is
a cost when we code and a run-time cost. Misuse may result in
inefficiency *and* more work. So don't misuse, use. ;-)

For example, the hard drive can become full. For what operations in our
program is that a showstopper issue? Where do we discover it? Where can
we do anything about it? How likely is it to happen? The answers to
those questions likely show that it is a perfect candidate to handle
with an exception, but it is surely not as "cheap" as ignoring the whole
possibility and assuming that the hard drive is *never* full. I remember
Windows NT 3.5 simply died when the system drive was full.

>
> >
> > * "Basic" guarantee is that all objects will be in valid invariant and there
> > are no resource leaks when exception is thrown out of function. That
> > very guarantee is expected *always* in C++ regardless if function does
> > throw or not. So it is only essential and is achievable with RAII alone
> > on case of exceptions.
> >
> > * "Strong" guarantee means that function rolls everything back into
> > state before call if it throws. That means special design effort is
> > used that can be expensive. That design effort however does not
> > differ from a function that has no side effects (rolls everything back)
> > when it returns an error code. So it is special *functionality* and is
> > expected to be documented about a function.
> >
>
> Having a strong guarantee on a function can definitely be a useful thing.

My point was that there has to be such a requirement and that it is
orthogonal to exceptions. Example: in the middle of overwriting a file
we may discover that we lack something to complete the operation. Now we
want the file back how it was before we started to write it. Rolling
that file back makes sense regardless of whether the programming
language supports exceptions, and the programming effort and run-time
cost of it are not somehow magically bigger for languages that have
exceptions.

>
> But consider what you are protecting against by this "special design
> effort". A sizeable percentage of programs are run on PCs, by humans.
> They will /never/ run out of memory, assuming you don't have leaks. The
> action on failure, from a general "catch" somewhere in or close to
> main(), will be to present an error message and then die. In such
> cases, all the extra effort in making strong guarantee functions is
> wasted. Worse than that - more code means more opportunities for bugs,
> and the complications for the strong guarantee can make code much
> slower. And all in the name of avoiding something that will never
> happen, and minimising the consequences unnecessarily.

I do not see what you mean. The strong roll-back guarantee is needed
only in very special cases IMHO. The basic exception guarantee is plenty
in the general case.

>
> And there are lots of systems where errors are simply not acceptable -
> exceptions should not be thrown. Then there is not need to throw or
> catch exceptions - at best, they are a tool to help during testing and
> development.
>
> Again, I am not saying it is wrong to use exceptions, or wrong to
> provide strong guarantees - just that it comes with a cost that is often
> not worth paying.

Again I can't see your point. We can't redefine the universe, which is
full of things that can break down, fill up, or reach limits in various
ways. "Hard drive full" is unacceptable? "Sensor short-circuited" is not
acceptable? Reality has to be acceptable; problems have to be
acceptable. The solution of quitting work when a sensor is
short-circuited may be unacceptable, but that is again a requirement
orthogonal to whether the short-circuited sensor was reported with
exceptions or with error codes. In any case we have to behave
differently with a broken sensor, and that is where the real development
effort goes.


... snip

>
> > Usage of low quality tools can result with unanticipated expenses regardless
> > if we use exceptions or not. It is same as with unskilled workforce, exceptions
> > are not *the* issue there and don't stand out as special, something else is
> > the issue.
>
> Lack of support for exceptions, or inefficient implementations, does not
> mean low quality tools.

Ok, the former is better described as a "non-conforming to
specification" tool and the latter as an "inefficient" tool. Both have
quality-of-implementation issues, IOW are "low quality" tools. The
quality of a tool is not uniform, so some other feature may be
implemented brilliantly, but we still have a fly in the ointment, hence
"low quality".

>
> >
> >> Another is that it can hinder optimisations
> >> (by enforcing ordering of operations, or pushing local objects onto the
> >> stack). And if you are trying to get a "strong" exception safe
> >> guarantee, you often need a good deal more code, and a good deal slower
> >> code. A function that modifies an object "x" with the strong guarantee
> >> will typically take a copy "x2" of "x", modify "x2" in a way that might
> >> thrown an exception, and then swap "x" and "x2" once everything is
> >> completed safely. That means an extra copy-constructor, and an extra
> >> destructor - that's hardly free if "x" is a large data structure.
> >
> > What you mean? We often have to make backup copies of objects when
> > failed attempt of operation may change it (but we guarantee not to)
> > regardless if we throw exception after we failed or we return error code.
> > What else you suggest to do on general case?
> >
>
> You may well have to make backup copies if you want to write
> trial-and-error code, whether or not you use exceptions. But when you
> have exceptions, you have to be aware that just about anything can fail
> unexpectedly with an exception. Without exceptions, and using explicit
> error handling, you know where things can go wrong.

What do you mean by "trial-and-error code"? Doing something may be the
cheapest, and sometimes the only available, proof that it is doable. We
may check that the hard drive has plenty of room beforehand, but a
millisecond later some other process uses it up and our write fails.
Real code is actually full of unhandled exceptional circumstances, with
or without exceptions. Let's take the famous first "trial-and-error" C
program:

/* Hello World program */
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    printf("Hello World!\n");
    return EXIT_SUCCESS;
}

The return value of 'printf' is unhandled, and therefore 'EXIT_SUCCESS'
is a lie in the exceptional case when it fails. If 'printf' threw on
failure, there would at least be a chance that some static analysis tool
would report a potentially unhandled exception. Otherwise it is all the
same old crap.
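For contrast, a checked variant of the same program (with the body moved into a `hello` helper, an invented name, so the status can be examined) that does not lie about success:

```cpp
#include <cstdio>
#include <cstdlib>

// Same "Hello World", but the result of printf is actually examined:
// printf returns a negative value when the write fails, and we report
// that instead of unconditionally claiming success.
int hello(void) {
    if (std::printf("Hello World!\n") < 0)
        return EXIT_FAILURE;
    return EXIT_SUCCESS;
}
```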


>
> >>
> >> There are, of course, a great many benefits to be had from using
> >> exceptions and writing exception-safe code. But you need to be clear
> >> that the benefits only come if the code really is exception safe, that
> >> it is realistic for exceptions to occur, and that you have something
> >> useful that you can do /if/ an exception occurs. C++ gives you the
> >> /option/ of using exceptions - it does not force you to do so, and it is
> >> not always the right choice.
> >
> > With that I agree. Also I feel that I have never claimed that usage of
> > exceptions is the correct choice in every situation. Exceptions are good
> > only for handling exceptional situations (as the name indicates).
> > That does not mean that such handling itself is rare in code. In a mature
> > program over half of the handled situations are exceptional.
> >
>
> My dislike for exceptions is that they hide things that I would prefer
> to be out in the open.
>
> If I see a function:
>
> int foo(int x);
>
> I read that as a function that takes an integer "x", foo's it, and
> returns an integer result. It is clear and simple, and ready to use.
>
> But with exceptions, foo is now a function that takes an integer "x",
> and /either/ foo's it and returns an integer, /or/ it returns a
> completely unexpected object of an unknown type that may not even be
> known to the user of foo, which will lead to a jump to somewhere else in
> code that the user of foo doesn't know about.

Exceptions must be documented like any other potential outcome. If the
exceptions potentially come from injected code, then that must also be
documented. Some of it can also be constrained and handled at compile
time: 'noexcept' can be 'enable_if'd for, and so constrained or
specialized for.

The 'throws' clutter in a Java function's signature does not work well;
programmers weasel out of it by throwing covariants, and it ends up
close to 'public void foo(String myString) throws Throwable' plus the
actual exceptions described in documentation. So it is only good that
C++ deprecated that worthless nonsense bloat.

>
> If foo is a function that could fail, I prefer to know about it:
>
> tuple<int, bool> foo(int x);

I prefer 'boost::optional<int>' there when lack of a return value is
normal, the reason for it is obvious, and the show must go on anyway.
The 'pair<int,bool>' or 'tuple<int,bool>' are less convenient.

However if 'false' is exceptional (say a 1/50000 likelihood) and it
breaks the whole operation during which 'foo' was called, then
handling it in close vicinity just pointlessly clutters up the core
logic.

With exceptions we can deal with it farther away, maybe even up the
stack in a 'catch' block, and there we may also get the phone number
of the grandmother whose fault it was that the hard drive was full and
no 'int' emerged from 'foo', so we could not complete the whole
operation.

>
>
> As an aside, I /really/ hope that structured binding makes it into
> C++17, so that we can write:
>
> auto [result, success] = foo(123);

That is another issue: how to express which parameters are 'in',
'in-out' or 'out'. It is a lot simpler since it is a syntax issue.
Exceptions are not any of those (and may be a rather long list), so it
is a good idea to handle them separately.

Gareth Owen

unread,
Aug 25, 2016, 1:13:43 PM8/25/16
to
sc...@slp53.sl.home (Scott Lurndal) writes:

>>That assumes that the returned int must always be a valid value
>>(i.e. there are no domain errors, foo(x) can never fail due to resource
>>exhaustion etc). Otherwise, some return value(s) must be set aside as
>
> Or you can use 'bool foo (int x, int& rval)' instead.

Of course you can. But if every failable function looks like this, you
end up with a horrible nested mess of error checks and partial cleanup
whenever you try and do something complicated.

Consider a Matrix algebra package that can go wrong in multiple ways
(singular matrices, numeric instability, incompatible dimensions etc).

You can write code that looks like:

try {
Matrix A = B + C + D;
Vector Z;
C = transpose(B) * (inverse(A) * Z);
return C*C + transpose(A *Z);
} catch(const MatrixError&) {
// whatever
}
or you can write


if(Add(tmp,B,C)) goto fail;
if(Add(A,tmp,D)) goto fail;
if(Transpose(tmp1,B)) goto fail;
if(Inverse(tmp2,A)) goto fail;
if(Mult(tmp3,tmp2,Z)) goto fail;
// Well you get the idea... there are, what, ten-fifteen more lines here that
// I can't be bothered to write!
// .
// .
// .

return tmp;

fail:
return -EMATRIX_ERROR;

No thanks.

Gareth Owen

unread,
Aug 25, 2016, 1:20:55 PM8/25/16
to
sc...@slp53.sl.home (Scott Lurndal) writes:

> Mr Flibble <flibbleREM...@i42.co.uk> writes:
>>On 24/08/2016 21:00, Jerry Stuckle wrote:
>
>>> And no, GOOD programmers know exceptions are NOT an "ideal way for
>>> transmitting errors of all kinds". Among other things, the overhead of
>>> exception handling is quite high. And exceptions never were designed to
>>> handle "all kinds of errors".
>>
>>Again your ignorance is showing. In a decent implementation the
>>overhead for exception handling is zero for the case of the exception
>>not being thrown.
>
> Actually, there is a non-zero cost for that case; the instruction
> footprint of the code gets larger due to the added code to handle
> stack unwinding when the exception is thrown.

Is this more or less than the size of the error checking/handling code
that you would have to put in if you chose not to use exceptions?

And its easier for the compiler to put the exception-unwind code in a
relatively remote segment and jump to it at exceptions, rather than
interleaving the error handling in the hot path. Which can actually
improve cache behaviour. And since the compiler knows what the "normal"
path through the code is, that can improve performance of branch
prediction and speculative execution, as well as cache locality.

Mr Flibble

unread,
Aug 25, 2016, 3:00:31 PM8/25/16
to
Indeed, try telling Jerry Stuckle that.

/Flibble


Ian Collins

unread,
Aug 25, 2016, 3:26:38 PM8/25/16
to
Evidence?

--
Ian

Richard

unread,
Aug 25, 2016, 4:01:43 PM8/25/16
to
[Please do not mail me a copy of your followup]

Ian Collins <ian-...@hotmail.com> spake the secret code
<e28v32...@mid.individual.net> thusly:
Every measured comparison I've seen says they come out about the
same.

Of course, if you ignore all the error return codes and don't
properly handle them, that is obviously fewer cycles since you're
comparing not doing error handling to doing error handling with
exceptions.

David Brown

unread,
Aug 25, 2016, 4:09:23 PM8/25/16
to
On 25/08/16 19:13, Gareth Owen wrote:
> sc...@slp53.sl.home (Scott Lurndal) writes:
>
>>> That assumes that the returned int must always be a valid value
>>> (i.e. there are no domain errors, foo(x) can never fail due to resource
>>> exhaustion etc). Otherwise, some return value(s) must be set aside as
>>
>> Or you can use 'bool foo (int x, int& rval)' instead.
>
> Of course you can. But if every failable function looks like this, you
> end up with a horrible nested mess of error checks and partial cleanup
> whenever you try and do something complicated.
>
> Consider a Matrix algebra package that can go wrong in multiple ways
> (singular matrices, numeric instability, incompatible dimensions etc).

In most cases, incompatible dimensions should be caught at compile time.
But the other failures could well happen.

>
> You can write code that looks like:
>
> try {
> Matrix A = B + C + D;
> Vector Z;
> C = transpose(B) * (inverse(A) * Z);
> return C*C + transpose(A *Z);
> } catch(const MatrixError&) {
> // whatever
> }
> or you can write
>
>
> if(Add(tmp,B,C)) goto fail;
> if(Add(A,tmp,D)) goto fail;
> if(Transpose(tmp1,B)) goto fail;
> if(Inverse(tmp2,A)) goto fail;
> if(Mult(tmp3,tmp2,Z)) goto fail;
> // Well you get the idea... there are, what, ten-fifteen more lines here that
> // I can't be bothered to write!
> // .
> // .
> // .
>
> return tmp;
>
> fail:
> return -EMATRIX_ERROR;
>
> No thanks.
>

I also say "no thanks" to a goto "solution". But there are many other
ways to handle this. For example, the Matrix class could hold
information about the validity of its contents and possible errors -
just like NaNs in floating point. You can write your code that uses
Matrix's just the way you want, and at the end (or at any other
convenient points) check for safe results.

And yes, exceptions are a perfectly good way to handle this. I am not
saying exceptions don't have their use or their place - I am just saying
that they are not without their own costs, and are not always the best
way of handling things. In a situation like this, they could give a
very neat and clear way of handling local problems - precisely because
they can be treated locally rather than wandering off through innocent code.

Ian Collins

unread,
Aug 25, 2016, 4:11:54 PM8/25/16
to
On 08/26/16 08:01 AM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Ian Collins <ian-...@hotmail.com> spake the secret code
> <e28v32...@mid.individual.net> thusly:
>
>> On 08/26/16 01:30 AM, Jerry Stuckle wrote:
>>> Unwinding the stack via exception handling is much more CPU intensive
>>> than returning from functions.
>>
>> Evidence?
>
> Every measured comparison I've seen says they come out about the
> same.

Same here along with the measurements I have made.

> Of course, if you ignore all the error return codes and don't
> properly handle them, that is obviously fewer cycles since you're
> comparing not doing error handling to doing error handling with
> exceptions.

That's right. Often in the case where exceptions are used but not
thrown (that is most of the time), using exceptions is slightly faster
than checking error codes.

--
Ian

Jerry Stuckle

unread,
Aug 25, 2016, 4:48:00 PM8/25/16
to
Try it yourself and find out. But I know that is beyond the limited
capabilities of a troll.

Jerry Stuckle

unread,
Aug 25, 2016, 4:55:09 PM8/25/16
to
If you don't believe me, try believing Google. They have a good writeup
on exceptions in their style guide:
https://google.github.io/styleguide/cppguide.html#Exceptions

Although I wouldn't go so far as to say to "never use exceptions", I
would point out one statement in their discussion: "The availability of
exceptions may encourage developers to throw them when they are not
appropriate or recover from them when it's not safe to do so. For
example, invalid user input should not cause exceptions to be thrown."

Exactly the opposite of what you claimed.

Mr Flibble

unread,
Aug 25, 2016, 5:07:51 PM8/25/16
to
If you read the entire section of the Google style guide on exceptions
you would know that they require this prohibition for practical reasons
(to be compatible with their existing code-base). At the end of the
section Google say:

"Things would probably be different if we had to do it all over again
from scratch."

So assuming you are not writing code for a Google open source project
(most likely the case) then, no, the Google style guide is not a good
document to instruct one how to write modern C++ code and whether or not
to use exceptions.

So it seems your fractal wrongness happily continues unabated.

/Flibble

Jerry Stuckle

unread,
Aug 25, 2016, 7:37:21 PM8/25/16
to
Yes, I know why they prohibit it. But if you could READ, you would see
how exceptions are being misused - like you do. And the line I quoted
is a very good example of misuse of exceptions.

The pros and cons are not specific to Google. They are general guides
followed by GOOD C++ programmers - which you are not.

Lynn McGuire

unread,
Aug 25, 2016, 9:00:22 PM8/25/16
to
On 8/24/2016 11:30 AM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Lynn McGuire <lynnmc...@gmail.com> spake the secret code
> <npinkq$jj1$1...@dont-email.me> thusly:
>
>> We have decided to add more application specific tests for now that we
>> run from the command line.
>
> So if I understand you correctly:
>
> - your application can accept inputs from the command-line and write
> out results to a file
> - your application tests invoke the tool from the command-line and
> verify the resulting file(s)
>
> This is a very reasonable way to write acceptance tests and/or
> regression tests. FitNesse can be useful in this scenario as well.
> The command-line arguments are documented on the FitNesse wiki page
> describing the test and the expected file contents are also described
> on the same wiki page. You glue them to your application by writing
> a FitNesse fixture or using an existing fixture. Once your fixture
> builds up enough custom verification primitives, you find that you
> can express new tests as new FitNesse wiki pages without having to
> write more code.
>
> The idea behind FitNesse originally was to create a way for the
> business analysts and product owners to create acceptance criteria in
> a way that not only made sense to them but also became an executable
> unit that could directly verify the functionality.

Thanks for the reference of FitNesse.

We invoke all phases of our testing from the command line and use /COMMANDS for directions. For the first three phases, we
perform a detailed analysis of the output files using a custom-built tool in C++. All 1,100+ of them.

The fourth phase of the testing is just an endurance test using 13,000+ input files to make sure that we do not crash. No analysis
is done of the results.

So, adding more command line options is a natural for us.

Thanks,
Lynn


Lynn McGuire

unread,
Aug 25, 2016, 9:05:03 PM8/25/16
to
On 8/22/2016 2:15 AM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Lynn McGuire <lynnmc...@gmail.com> spake the secret code
> <npdek3$8tk$1...@dont-email.me> thusly:
>
>> I would like to find a test tool that allows us to test from the UI
>> level. Something that allows us to test multiple stage of expected
>> behavior across various dialogs.
>
> There are a number of tools that can synthesize events and then poke
> at the resulting Win32 controls to attempt to verify the results.
> However, they are very fragile and labor intensive to both create and
> maintain. Sometimes they just don't work at all because the Win32 API
> wasn't really designed so that application B can probe all the
> internal state of the controls in application A. This is particularly
> problematic when you write custom handlers for the low level events.
> Also, these scripts are not portable so if you have cross-platform
> code and have to care about testing on Mac or Linux, you will end up
> needing tests for each platform as the event playback style of testing
> is very platform specific.
>
> One option is to build the event recording and playback directly into
> your application. I've worked on C++ code bases that did this and
> while it gave them the ability to playback a user's session (valuable
> for more things than just regression testing), it was very labor
> intensive to create and maintain. Sometimes, this might be the only
> reliable way of reproducing certain user issues, however.
>
> For instance, a CAD program is used in sessions lasting hours or
> perhaps even days. When the CAD program crashes, there may be no way
> to easily reproduce the problem from a few edit steps in a fresh
> session. You may need the entire session playback in order to
> reproduce the problem. Some games have a record/playback feature to
> support debugging of user problems encountered during gameplay for a
> similar reason.
>
> You don't state if your goal is to cover new features and enhancements
> to existing features or if your goal is to build a regression suite.
>
> As I mentioned earlier, event synthesis/control probing schemes are OK
> for regressions. They are still very labor intensive to create and
> maintain, but if the UI isn't constantly changing the investment can
> be ammortized over time.
>
> A scheme like FitNesse tests tied directly to the business logic is
> better, however. You really want to make your UI as dumb as possible
> so that it can be shown correct by inspection instead of by automated
> testing. This is the so-called "Humble Dialog". Put the automated
> testing to work on the underlying business logic that produces results
> shown by the UI.

BTW, we do use a CAD front end that looks kinda like Visio. But, this is ours that we have been writing since 1987. Our calculation
engine dates back to the early 1960s.
https://www.winsim.com/screenshots.html (man, I need to update these !)

Thanks,
Lynn

Gareth Owen

unread,
Aug 26, 2016, 10:04:06 AM8/26/16
to
legaliz...@mail.xmission.com (Richard) writes:

> [Please do not mail me a copy of your followup]
>
> Ian Collins <ian-...@hotmail.com> spake the secret code
> <e28v32...@mid.individual.net> thusly:
>
>>On 08/26/16 01:30 AM, Jerry Stuckle wrote:
>>> Unwinding the stack via exception handling is much more CPU intensive
>>> than returning from functions.
>>
>>Evidence?
>
> Every measured comparison I've seen says they come out about the
> same.

Ditto. Now, how many of our anecdotes do we need to make some data?

Mr Flibble

unread,
Aug 26, 2016, 11:53:17 AM8/26/16
to
Again your fractal wrongness manifests. The line you quoted about not
throwing an exception for invalid user input is simply wrong: invalid
user input is an error condition which typically causes some modal
process to be cancelled and is handled perfectly using exceptions.

>
> The pros and cons are not specific to Google. They are general guides
> followed by GOOD C++ programmers - which you are not.

Nope. The Google style guide is a guide for contributing to Google open
source projects and not a guide for writing modern C++ with good quality.

/Flibble

Richard

unread,
Aug 26, 2016, 12:53:05 PM8/26/16
to
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npo4jl$bq5$1...@dont-email.me> thusly:

>BTW, we do use a CAD front end that looks kinda like Visio. But, this
>is ours that we have been writing since 1987. Our calculation
>engine dates back to the early 1960s.
> https://www.winsim.com/screenshots.html (man, I need to update these !)

Yum, chemical engineering. I love it!

Richard

unread,
Aug 26, 2016, 12:56:37 PM8/26/16
to
[Please do not mail me a copy of your followup]

Lynn McGuire <lynnmc...@gmail.com> spake the secret code
<npo4ap$b7c$1...@dont-email.me> thusly:

>We invoke all phases of our testing from the command line and use
>/COMMANDS for directions. For the first three phases, we
>perform a detailed analysis of the output files using a custom built
>tool in C++. All 1,100+ of them.
>
>The fourth phase of the testing is just an endurance test using 13,000+
>input files to make sure that we do not crash. No analysis
>is done of the results.
>
>So, adding more command line options is a natural for us.

Yeah, you've got enough infrastructure already that adding new
command-line options is a small incremental cost.

Lynn McGuire

unread,
Aug 26, 2016, 2:21:41 PM8/26/16
to
On 8/26/2016 11:56 AM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Lynn McGuire <lynnmc...@gmail.com> spake the secret code
> <npo4ap$b7c$1...@dont-email.me> thusly:
>
>> We invoke the all phases of our testing from the command line and use
>> /COMMANDS for directions. For the first three phases, we
>> perform a detailed analysis of the output files using a custom built
>> tool in C++. All 1,100+ of them.
>>
>> The fourth phase of the testing is just an endurance test using 13,000+
>> input files to make sure that we do not crash. No analysis
>> is done of the results.
>>
>> So, adding more command line options is a natural for us.
>
> Yeah, you've got enough infrastructure already that adding new
> command-line options is a small incremental cost.

One of my programmers is trying to add a new test mode that rolls through all of the dialogs connected to the symbols in a particular
drawing. That should be the test that smokes out a few problems in our new sparse data item structure. We just want to instantiate
each dialog and then click OK. That should be interesting.

Thanks,
Lynn

Gareth Owen

unread,
Aug 26, 2016, 2:29:46 PM8/26/16
to
Mr Flibble <flibbleREM...@i42.co.uk> writes:

> Again your fractal wrongness manifests. The line you quoted about not
> throwing an exception for invalid user input is simply wrong: invalid
> user input is an error condition which typically causes some modal
> process to be cancelled and is handled perfectly using exceptions.

It's also a categorical error.

Gracefully handling (or not handling) an error is an outcome.
Giving the user a helpful message about how to proceed is an outcome.

Throwing an exception is not an outcome, it's a mechanism.

It's a method for getting from the point-of-error to the
point-at-which-the-error-is-handled.

Throwing an exception on invalid input is not an alternative to alerting
the user about invalid input, or gracefully handling it. It's simply one
mechanism by which those outcomes can be achieved. Whether its the best
mechanism, of course, depends on the nature of the application, the
nature of the input, and the nature of the user.

Mr Flibble

unread,
Aug 26, 2016, 2:57:10 PM8/26/16
to
Great description of the issue: I'm sure Stuckle will disagree and call
us both trolls! :D

/Flibble


Jerry Stuckle

unread,
Aug 26, 2016, 3:46:56 PM8/26/16
to
Here are some other quotes from other experts in C++:

http://stackoverflow.com/questions/1744070/why-should-exceptions-be-used-conservatively

From the first answer (#87): "For example, wrong user input does not
count as an exception because you expect this to happen and ready for that."

http://stackoverflow.com/questions/19696442/how-to-catch-invalid-input-in-c

Again, from the first answer (#4): "I recommend not to use the exception
facilities in IOStreams! Input errors are normal and exceptions are for,
well, exceptional errors."

I can find other examples, also. But I know you'll continue to write
crappy code.


As for Ian's stack unwinding not being slower:

http://www.boost.org/community/error_handling.html

Under "Guidelines": " Because actually handling an exception is likely
to be significantly slower than executing mainline code, ..."

I can find other similar statements, also. But what's the use? Crappy
programmers will continue to write crappy code. And trolls will
continue to be trolls.

Mr Flibble

unread,
Aug 26, 2016, 4:50:28 PM8/26/16
to
A contributing factor to your fractal wrongness seems to be your
inability to tell a good source from a bad source, a good quote from a
bad quote, a good website from a bad website, an expert from someone
you merely think is an expert. I am not sure how one would go about
correcting this personality trait.

/Flibble



Jerry Stuckle

unread,
Aug 26, 2016, 7:22:44 PM8/26/16
to
Yes, I know the difference. Anyone who disagrees with your crap
practices is a "bad source" or a "bad quote", and anyone who agrees with
your crap is a "good source" or a "good quote".

The personality trait correction is in YOU. But you are too bull-headed
to learn and too stoopid to realize how wrong you are.

Each of those quotes is from a widely recognized expert in C++ - which
you are definitely NOT.