Why does std::uncaught_exceptions return int?


FrankHB1989

Jun 16, 2015, 6:13:39 AM
to std-dis...@isocpp.org
More specifically, what is the benefit of allowing a negative return value?
A similar question: why does std::shared_ptr::use_count return long?

Fabio Fracassi

Jun 16, 2015, 7:44:27 AM
to std-dis...@isocpp.org
Using unsigned values has fallen out of favor within the committee.
Some of the reasons are nicely summarized here (
http://stackoverflow.com/a/18796546/358277 ).
Additionally, modern compilers can exploit the undefined overflow
behavior of signed integers for extra optimizations that are very hard
to do on unsigned types.

More modern APIs in the standard reflect this thinking, while older ones
(like container .size() returning the unsigned size_t) are considered to
be a mistake (that we unfortunately cannot fix for compatibility reasons).


Tony V E

Jun 16, 2015, 11:45:35 AM
to std-dis...@isocpp.org
where it comes up a few times, and they repeatedly say to use unsigned only for twiddling bits or for that rare case when you really do want modulo arithmetic.

Tony


Jim Porter

Jun 16, 2015, 1:31:15 PM
to std-dis...@isocpp.org
On 6/16/2015 6:44 AM, Fabio Fracassi wrote:
> More modern APIs in the standard reflect this thinking, while older ones
> (like container .size() returning the unsigned size_t) are considered to
> be a mistake (that we unfortunately cannot fix for compatibility reasons).

I seem to recall there being some discussion about creating new versions
of the standard container types (for the Ranges proposal maybe?). Would
that be a good time to change .size()'s return type?

- Jim


Thiago Macieira

Jun 16, 2015, 6:13:30 PM
to std-dis...@isocpp.org
That raises the question: what to?

a) int
Like the Qt container types. That's easy to use since it neatly avoids
accidental 64-bit truncation, but it also necessarily limits containers to 2 billion
elements (and, if the container is not careful, to 2 GB of memory).

b) ssize_t
The POSIX type that is the signed counterpart to size_t, but it may not necessarily
be std::make_signed<size_t>::type (because long has the same size as either
int or long long). POSIX doesn't define what type it actually is, only that it
shall hold at least the values [-1, SSIZE_MAX] [1], and SSIZE_MAX is defined to
be at least 32767 [2]. That definition actually allows POSIX implementations to
have sizeof(size_t) > sizeof(ssize_t), though that's unlikely.

c) intptr_t
d) ptrdiff_t

I added a size check for size_t and ssize_t in Qt [3] and I did receive a
report once of that assertion triggering, but since I don't recall the
resolution, I assume it must have been a misconfiguration of the user's
toolchain.

[1] http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html
[2] http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html
[3]
http://code.woboq.org/qt5/qtbase/src/corelib/io/qfsfileengine.cpp.html#SignedIOType
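
For reference, a minimal sketch (assuming a POSIX system; this is not the actual Qt
code from [3]) of the kind of compile-time size check described above:

#include <cstddef>      // std::size_t
#include <sys/types.h>  // ssize_t (POSIX)

// If this ever fails, the platform has sizeof(size_t) != sizeof(ssize_t),
// which POSIX permits in principle but which is unlikely in practice.
static_assert(sizeof(std::size_t) == sizeof(ssize_t),
              "size_t and ssize_t are expected to have the same size");
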
--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center
PGP/GPG: 0x6EF45358; fingerprint:
E067 918B B660 DBD1 105C 966C 33F5 F005 6EF4 5358

FrankHB1989

Jun 16, 2015, 7:53:50 PM
to std-dis...@isocpp.org


On Tuesday, June 16, 2015 at 7:44:27 PM UTC+8, Fabio Fracassi wrote:
 
Thank you for the answer.
I went through those reasons carefully, and I am sure I had already considered all of them before.
To summarize briefly: plausible, but they do not entirely convince me.
I agree:
0. Obviously, `int` is shorter.
1. The type `int` is "natural" compared with other signed types in general.
(So, again, was it accidental to choose `long` for std::shared_ptr::use_count?)
2. Both of them are needed (Andrei Alexandrescu's point).
3. Signed integers have the advantage of representing negative results naturally, i.e. they can be clearer.
4. Signed integers surprise newbies (or perhaps average users) less in integer arithmetic, as well as in some other arithmetic operations, e.g. pointer subtraction.
5. The performance difference between signed and unsigned arithmetic is generally insignificant for users of modern architectures, even though the cost of the hardware implementation may differ.
6. There is no simple guideline (again Andrei Alexandrescu's point), since these types are more complicated than the pure integer arithmetic most users learned at school, and requirements differ.
7. Be careful with implicit integer conversions, or better, try to avoid them (Bjarne Stroustrup's point). (I almost always use -Wsign-conversion etc. when possible.)
However:
0. Although shorter is generally good, typing less is usually not the first concern. If other things matter more, this can be changed relatively easily.
For example, we once had `main(){}`, and then we forbade the omission of `int`.
1. The situation with unsigned integers is the same. I would be reluctant to use `unsigned long` instead of `unsigned` unless it is really needed (e.g. to prevent `DWORD` pollution).
2. Both of them are needed (Andrei Alexandrescu's point), but usually in different cases.
3. Signed integers may be clearer to use than unsigned integers, but only when negative values are really needed.
4. It is true that average users are unfamiliar with modular arithmetic, but many people lack even the more basic knowledge that "signed integer overflow is undefined behavior" or that "unsigned integers never overflow". Integers in C++ are not the same as integers in mathematics, whether signed or not, since the precision is limited and out-of-range values have to be handled. This innocence is dangerous. Why cheat users with a fictional simplicity?
5. In any case, signed integers are actually more complicated to implement. The sign bit is special; whatever complexity the other circuits have, unsigned integers beat signed ones easily. So at least here I do not think signed integers are superior.
6. Pragmatically, a "use one single type as the default" rule is not enough for API design. To me it sounds like "a common base class for every type is always good" ("by default"), which is not the case. It can leave users like me confused about the reasoning: why use some particular type when at least one other type fits the intent more precisely, and what does the designer of the API want the user to care about? In such cases, signed integers are definitely less clear.
7. Conversions aside, the sign already brings portability traps. Is the representation two's complement, ones' complement, or sign-magnitude? How do you reproduce the strange behavior if values may have overflowed? ...
8. As the name says, "unsigned integers" are for the special case of "unsigned" integer arithmetic, not specifically for bit patterns. Even where the latter is needed, there are clearer ways, though the facilities inherited from C (bit fields) are not ideal. The abstraction of bitmask types may be a good beginning.
Currently we lack more powerful features (e.g. Scheme's numeric tower) that would make it easy to teach users to always "do the right thing", but that should not remain the case forever. Thus we may face more compatibility issues in the future that we could have avoided. Making evolution harder is not the intent, is it?


FrankHB1989

Jun 16, 2015, 8:40:12 PM
to std-dis...@isocpp.org
A few more notes:

1. The type `int` may have been more natural than the others in the 1970s, but it should no longer be so today.
Despite the shorter spelling, `int` is nothing special in nature compared with the other built-in signed types, except for historical artifacts, e.g. the return type of `main`, or the placeholder parameter that marks postfix ++/-- overloads.
Whenever I want a signed integer type, there is almost always a more suitable one, like `signed char`/`int_leastN_t`/`intptr_t` (though the corresponding unsigned types are more often used). I never have trouble deciding which one to pick once I know what I need, and I find no reason to use `int` intentionally. So unless there is a legacy API I cannot avoid interoperating with (e.g. `errno`), I treat `int` merely as an aged, language-mandated placeholder.

2. Even if the fundamental types in C++ are not graceful, they deserve to be used in the most suitable places, esp. by the standard library.
Specifically, I dislike weakening the range of a type, i.e. using a signed integer type instead of an unsigned type merely because one or a few special values are reserved to indicate exceptional states. This can rarely be "natural".
Violating this can be treated as a case against the "no overhead" principle: almost half of the range is dropped (which could be avoided by a different style, e.g. using exceptions instead of error codes), and the result is not obviously beneficial.
`ssize_t` is the outstanding example.
Nevertheless, such types make sense when there can be meaningful negative values. As low-level implementation details they can be tolerated if there is no obvious alternative.

3. I doubt whether a realistic implementation has more opportunity to optimize signed types than unsigned types, once the unsigned types are used properly, considering what would actually be optimized away. Are there any examples?

Kazutoshi Satoda

Jun 16, 2015, 10:08:15 PM
to std-dis...@isocpp.org
On 2015/06/17 7:13, Thiago Macieira wrote:
> On Tuesday 16 June 2015 12:30:52 Jim Porter wrote:
>> On 6/16/2015 6:44 AM, Fabio Fracassi wrote:
>> > More modern APIs in the standard reflect this thinking, while older ones
>> > (like container .size() returning the unsigned size_t) are considered to
>> > be a mistake (that we unfortunately cannot fix for compatibility reasons).
>>
>> I seem to recall there being some discussion about creating new versions
>> of the standard container types (for the Ranges proposal maybe?). Would
>> that be a good time to change .size()'s return type?
>
> That raises the question: what to?

I thought of adding ssize_type ssize(), where ssize_type is a signed
integer type and numeric_limits<ssize_type>::max() >= max_size() holds.

If no such integer type exists, these should not be declared, so that
the use of ssize() itself acts as a compile-time check. Though I think
no such situation (max_size() > numeric_limits<intmax_t>::max()) arises
in practice.

This way there is no compatibility issue, and users can just use
x.ssize() as an easy and safe way to avoid mixed-sign operations. I think
it would at least be a better situation than now, where discussions
like https://stackoverflow.com/questions/275853/ arise and people
diverge from each other, finding nit-picks in every solution.
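
A minimal sketch of the idea, written as a free function for illustration
(the name ssize and the constraints here are just a sketch, not proposed wording):

#include <limits>
#include <type_traits>

template <class Container>
auto ssize(const Container& c)
    -> typename std::make_signed<decltype(c.size())>::type
{
    using R = typename std::make_signed<decltype(c.size())>::type;
    // A conforming member ssize() would only be declared when
    // numeric_limits<R>::max() >= c.max_size() is guaranteed; here we just
    // assume the size fits, as it does for every practical container.
    return static_cast<R>(c.size());
}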

--
k_satoda

David Krauss

Jun 16, 2015, 10:25:42 PM
to std-dis...@isocpp.org
On 2015–06–17, at 8:40 AM, FrankHB1989 <frank...@gmail.com> wrote:

A few more notes:

1. The type `int` may have been more natural than the others in the 1970s, but it should no longer be so today.

int is effectively 16-bit on “small” systems, 32-bit on the majority of systems with virtual memory, and perhaps 64-bit on some systems that don’t support smaller datatypes very well. The landscape really hasn’t changed much, except that small systems are now less often overextended to do big jobs.

Perhaps you can’t draw very strong conclusions from sizeof(int), but it’s still a good enough proxy for the complexity of programs that the machine is likely to run. In the case of uncaught_exceptions, a system with 16-bit int is very likely to hit some other limit before getting to 30,000 nested unwindings, but a system with 32-bit int might not.

2. Even if the fundamental types in C++ are not graceful, they deserve to be used in the most suitable places, esp. by the standard library.
Specifically, I dislike weakening the range of a type, i.e. using a signed integer type instead of an unsigned type merely because one or a few special values are reserved to indicate exceptional states. This can rarely be "natural".
Violating this can be treated as a case against the "no overhead" principle: almost half of the range is dropped (which could be avoided by a different style, e.g. using exceptions instead of error codes), and the result is not obviously beneficial.

Because you plan to use 2 billion nested unwindings… but not 5 billion?

Really, 30,000 is enough for anyone. It already indicates that the program is crashing. The issue at hand is how to represent a small number with minimal constraints. The answer is int.

3. I doubt whether a realistic implementation has more opportunity to optimize signed types than unsigned types, once the unsigned types are used properly, considering what would actually be optimized away. Are there any examples?

The usage of uncaught_exceptions is too simple for the instructions to be optimized. It simply reads a thread-local global. You store that in a structure. Later you call it again and compare to the previous result. The comparison feeds into a conditional branch or move. Done.
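
A minimal sketch of that protocol, assuming the C++17-style std::uncaught_exceptions() and a hypothetical transaction class (this illustrates the usage pattern, not a proposed facility):

#include <exception>

class transaction {
    int unwind_count_ = std::uncaught_exceptions();  // read once, at construction
public:
    ~transaction()
    {
        // Read it again and compare: a larger count means a new exception is
        // currently unwinding through this object's scope.
        if (std::uncaught_exceptions() > unwind_count_)
            rollback();
        else
            commit();
    }
private:
    void commit() {}    // stand-ins for the real operations
    void rollback() {}
};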

However, it does get stored in a structure in the meantime. It would be nice to maximize the chance of uncaught_exceptions fitting into padding bytes, for example in a class that would otherwise be empty. I think that makes a decent argument for some flavor of char. But…



The elephant in the room, which I can’t believe hasn’t been pointed out among so many software engineering experts, is that uncaught_exceptions is obviously missing encapsulation. It requires the user to follow a protocol which is far from obvious. What users need is an object which stores the unwinding level at its construction and then later tells you whether the unwinding level is still the same. Nobody should hand-code this process as boilerplate! Moreover, there is no other valid usage of uncaught_exceptions, and any attempt at cleverness is certain to end in disaster. An integer of any type is the wrong interface.

Brittleness is the other consequence of poor encapsulation. Exception processing can get complicated and we don’t have a theory as to why uncaught_exceptions is the ultimate answer to the problem. It’s just a variable that was conveniently lying around. Are we sure that the implementation will never, ever want to extend it? For example,

1. Keeping a pointer to the current unwinding exception would allow the debugger to provide insight into transactions initiated by error handling.

2. It’s strictly limited to RAII classes. uncaught_exceptions assumes that the transaction is created on the stack, so it must be destroyed before unwinding crosses the scope of its construction. What if its lifetime extends further, because it can’t certainly be committed in RAII fashion? What about a server that creates transactions on one thread and retires them on another? It should be possible to transplant the exception processing state by catching the exception and re-throwing it elsewhere. std::exception_ptr allows this but uncaught_exceptions does not. Even worse, comparing small integers provides arbitrary results in such a situation. If this ever leads to a security hole in an application, standard library implementations won’t be able to fix it until ISO revises the API.

By far the sensible thing to do is to introduce a class:

class unwinding_state {
    // Implementation-defined nonstatic member(s).
    // (Let an int storing uncaught_exceptions() be a conforming implementation.)
public:
    unwinding_state() noexcept; // Initialize with the current state.
    unwinding_state( unwinding_state const & ) noexcept;
    unwinding_state & operator = ( unwinding_state const & ) noexcept;

    enum class status_type {
        normal,
        exceptional
    };

    status_type status() const;

    operator bool () const
        { return status() == status_type::normal; }
};

We use C++ because it provides this sort of abstraction with zero overhead. If you want to code without abstraction, instead always obsessing over interfaces hardcoded to int and worrying whether everything is simultaneously micro-optimized and future-proofed, use C.

David Krauss

Jun 16, 2015, 11:57:27 PM
to std-dis...@isocpp.org
Also: since the whole purpose is to allow the user to write throwing destructors, the encapsulation class should declare a noexcept(false) destructor. Its noexcept-specification will propagate to the transaction class destructor.


On 2015–06–17, at 10:25 AM, David Krauss <pot...@gmail.com> wrote:

public:
    unwinding_state() noexcept; // Initialize with the current state.
    unwinding_state( unwinding_state const & ) noexcept;
    unwinding_state & operator = ( unwinding_state const & ) noexcept;

    ~ unwinding_state() noexcept(false);


When you consider what uncaught_exceptions is intended to enable, it seems like the proposal should have been more controversial.

Ville Voutilainen

Jun 17, 2015, 12:32:53 AM
to std-dis...@isocpp.org
On 17 June 2015 at 06:57, David Krauss <pot...@gmail.com> wrote:
> Also: since the whole purpose is to allow the user to write throwing
> destructors, the encapsulation class should declare a noexcept(false)

That's not at all the purpose of uncaught_exceptions. The purpose is to be able
to reliably rollback sub-transactions when unwinding is in progress,
not to be able
to throw from a destructor.

David Krauss

Jun 17, 2015, 12:38:29 AM
to std-dis...@isocpp.org
OK, then I retract the last message. The rest still stands though.

Ville Voutilainen

Jun 17, 2015, 1:00:49 AM
to std-dis...@isocpp.org
On 17 June 2015 at 07:38, David Krauss <pot...@mac.com> wrote:
>> That's not at all the purpose of uncaught_exceptions. The purpose is to be able
>> to reliably rollback sub-transactions when unwinding is in progress,
>> not to be able
>> to throw from a destructor.
> OK, then I retract the last message. The rest still stands though.


The wrapper you outline doesn't magically solve the problem of moving a
transaction to a different lifetime scope. When a move operation occurs,
the user still needs to signal to the transaction whether it was or was not
moved to a different scope. And the user still needs to "conform to the
protocol", as in initialize the wrapper at the beginning of a transaction
and check it at the end.

David Krauss

Jun 17, 2015, 2:47:22 AM
to std-dis...@isocpp.org
On 2015–06–17, at 1:00 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

The wrapper you outline doesn't magically solve the problem of moving a
transaction to a different lifetime scope. When a move operation occurs,
the user still needs to signal to the transaction whether it was or was not
moved to a different scope. And the user still needs to "conform to the
protocol", as in initialize the wrapper at the beginning of a transaction
and check it at the end.

The class (not really a wrapper, it’s opaque) is intended to be a nonstatic member of the transaction object. It will be automatically initialized when the transaction is created. It will also be moved and assigned together with the enclosing transaction, if that is movable or assignable.

The only time the unwinding_state subobject would typically be mentioned at all, after its declaration, is to “check it at the end.” If the user doesn’t do that, then it’s obviously been left dangling.

I never said anything about helping to define a move constructor or assignment operator. A transaction that isn’t expected to complete in the current scope would probably be created on the heap in the first place anyway.

No magic, just sensible encapsulation of the intended use of a bare integer.
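
A minimal sketch of the usage described above, assuming the unwinding_state class outlined earlier (the transaction class and its commit/rollback members are hypothetical):

class transaction {
    unwinding_state unwind_;   // nonstatic member, captured at construction
public:
    ~transaction()
    {
        // "Check it at the end": commit on normal exit, roll back if this
        // destructor is running because an exception is unwinding the scope.
        if (unwind_)
            commit();
        else
            rollback();
    }
private:
    void commit() {}           // stand-ins for the real operations
    void rollback() {}
};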

Thiago Macieira

Jun 17, 2015, 3:08:59 AM
to std-dis...@isocpp.org
On Tuesday 16 June 2015 17:40:12 FrankHB1989 wrote:
> 3. I doubt whether a realistic implementation has more opportunity to
> optimize signed types than unsigned types, once the unsigned types
> *are used properly*, considering *what would actually be optimized away*.
> Are there any examples?

Your doubt is unfounded. Clearly and provably the compiler can perform more
optimisations on int than on unsigned.

That's also the reason why GCC needs to get rid of the warning that says
"assuming signed overflow does not occur when assuming that (X + c) < X is
always false". That warning is triggered when GCC *did* optimise the code,
which often is the intended behaviour. Getting rid of the warning by changing
the code implies pessimising the code by switching to an unsigned type: the
compiler will now need to check for overflow.

The warning was introduced when GCC started doing the optimisation and someone
complained (loudly) in a bug report that the compiler shouldn't behave like
that.
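
A minimal illustration of the difference (hypothetical functions, not taken from any real code base):

bool wraps_signed(int x)
{
    // Signed overflow is undefined behavior, so the compiler may assume that
    // x + 1 never overflows and fold this whole test to the constant false.
    return x + 1 < x;
}

bool wraps_unsigned(unsigned x)
{
    // Unsigned arithmetic wraps modulo 2^N, so x + 1 < x is true exactly when
    // x is the maximum value; the comparison must actually be performed at run time.
    return x + 1u < x;
}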

David Krauss

Jun 17, 2015, 3:27:15 AM
to std-dis...@isocpp.org
Actually, are you sure that this use-case doesn’t imply having a throwing destructor? See page 24 (slide 39) of N4152. Also, the Facebook Folly implementation, which seems closely related to the proposal, does run the SCOPE_SUCCESS case in a destructor which is, for that case, noexcept(false).

A transaction with a non-throwing destructor cannot be committed from that destructor. Instead it must be committed “manually” before being destroyed. If the destructor observes a non-committed state, then it infers unwinding and performs rollback. This is the safe status quo and there’s no need for uncaught_exceptions.
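
A minimal sketch of that status-quo pattern (the names here are hypothetical):

class transaction {
    bool committed_ = false;
public:
    void commit() { /* publish results, may throw */ committed_ = true; }
    ~transaction()
    {
        // No query of the unwinding state: a destructor that runs before
        // commit() was called is taken to mean the scope is unwinding.
        if (!committed_)
            rollback();
    }
private:
    void rollback() {}   // stand-in for the real rollback, must not throw
};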

If you don’t want to write commit() at the end of each scope, you need throwing destructors. That is what this is all about, since the beginning, no?

FrankHB1989

Jun 17, 2015, 3:51:41 AM
to std-dis...@isocpp.org


On Wednesday, June 17, 2015 at 10:25:42 AM UTC+8, David Krauss wrote:

On 2015–06–17, at 8:40 AM, FrankHB1989 <frank...@gmail.com> wrote:

A few more notes:

1. The type `int` may have been more natural than the others in the 1970s, but it should no longer be so today.

int is effectively 16-bit on “small” systems, 32-bit on the majority of systems with virtual memory, and perhaps 64-bit on some systems that don’t support smaller datatypes very well. The landscape really hasn’t changed much, except that small systems are now less often overextended to do big jobs.

Perhaps you can’t draw very strong conclusions from sizeof(int), but it’s still a good enough proxy for the complexity of programs that the machine is likely to run. In the case of uncaught_exceptions, a system with 16-bit int is very likely to hit some other limit before getting to 30,000 nested unwindings, but a system with 32-bit int might not.

No; at the very least we did not have 64-bit platforms then, although the fact that predicting the sizes of these built-in types is never reliable has not changed. And now we have the more explicit `int_fastN_t` types. It sounds like `int` is just `int_fast16_t` with a vaguer name and extra built-in support.
 
2. Even if the fundamental types in C++ are not graceful, they deserve to be used in the most suitable places, esp. by the standard library.
Specifically, I dislike weakening the range of a type, i.e. using a signed integer type instead of an unsigned type merely because one or a few special values are reserved to indicate exceptional states. This can rarely be "natural".
Violating this can be treated as a case against the "no overhead" principle: almost half of the range is dropped (which could be avoided by a different style, e.g. using exceptions instead of error codes), and the result is not obviously beneficial.

Because you plan to use 2 billion nested unwindings… but not 5 billion?

It is true that we do not have to worry much about this particular defect in practice, but at least it is not consistent.
 
Really, 30,000 is enough for anyone. It already indicates that the program is crashing. The issue at hand is how to represent a small number with minimal constraints. The answer is int.

What constraints? Not using extra headers or builtins?
3. I doubt whether a realistic implementation has more opportunity to optimize signed types than unsigned types, once the unsigned types are used properly, considering what would actually be optimized away. Are there any examples?

The usage of uncaught_exceptions is too simple for the instructions to be optimized. It simply reads a thread-local global. You store that in a structure. Later you call it again and compare to the previous result. The comparison feeds into a conditional branch or move. Done.

However, it does get stored in a structure in the meantime. It would be nice to maximize the chance of uncaught_exceptions fitting into padding bytes, for example in a class that would otherwise be empty. I think that makes a decent argument for some flavor of char. But…



The elephant in the room, which I can’t believe hasn’t been pointed out among so many software engineering experts, is that uncaught_exceptions is obviously missing encapsulation. It requires the user to follow a protocol which is far from obvious. What users need is an object which stores the unwinding level at its construction and then later tells you whether the unwinding level is still the same. Nobody should hand-code this process as boilerplate! Moreover, there is no other valid usage of uncaught_exceptions, and any attempt at cleverness is certain to end in disaster. An integer of any type is the wrong interface.

Brittleness is the other consequence of poor encapsulation. Exception processing can get complicated and we don’t have a theory as to why uncaught_exceptions is the ultimate answer to the problem. It’s just a variable that was conveniently lying around. Are we sure that the implementation will never, ever want to extend it? For example,

1. Keeping a pointer to the current unwinding exception would allow the debugger to provide insight into transactions initiated by error handling.

2. It’s strictly limited to RAII classes. uncaught_exceptions assumes that the transaction is created on the stack, so it must be destroyed before unwinding crosses the scope of its construction. What if its lifetime extends further, because it can’t certainly be committed in RAII fashion? What about a server that creates transactions on one thread and retires them on another? It should be possible to transplant the exception processing state by catching the exception and re-throwing it elsewhere. std::exception_ptr allows this but uncaught_exceptions does not. Even worse, comparing small integers provides arbitrary results in such a situation. If this ever leads to a security hole in an application, standard library implementations won’t be able to fix it until ISO revises the API.

By far the sensible thing to do is to introduce a class:

class unwinding_state {
    // Implementation-defined nonstatic member(s).
    // (Let an int storing uncaught_exceptions() be a conforming implementation.)
public:
    unwinding_state() noexcept; // Initialize with the current state.
    unwinding_state( unwinding_state const & ) noexcept;
    unwinding_state & operator = ( unwinding_state const & ) noexcept;

    enum class status_type {
        normal,
        exceptional
    };

    status_type status() const;

    operator bool () const
        { return status() == status_type::normal; }
};

We use C++ because it provides this sort of abstraction with zero overhead. If you want to code without abstraction, instead always obsessing over interfaces hardcoded to int and worrying whether everything is simultaneously micro-optimized and future-proofed, use C.

Basically I agree with this point.
Perhaps the major problem is the difficulty of standardization. Even if your proposed interface is more usable for ordinary users, someone may still need the portable underlying low-level interface. And there might be several different models of higher-level abstraction. Taking all of this into account, the discussion, wording, balloting ... takes time. Anyway, it is more subtle and complicated than choosing between signed and unsigned types. The latter should have been obvious in most cases.

 

Fabio Fracassi

Jun 17, 2015, 4:20:08 AM
to std-dis...@isocpp.org


On 17.06.2015 01:53, FrankHB1989 wrote:

Thank you for the answer.
I went through those reasons carefully, and I am sure I had already considered all of them before.
To summarize briefly: plausible, but they do not entirely convince me.

It is perfectly fine to be unconvinced by the reasoning, but you asked why it is the way it is, and this is your answer.
The reasons on both sides are well known, and the debate has been had multiple times.

When the design (of uncaught_exceptions) was reviewed this was debated again (shortly) and voted on.
Only very few people voted for unsigned.





FrankHB1989

Jun 17, 2015, 4:20:38 AM
to std-dis...@isocpp.org


On Wednesday, June 17, 2015 at 3:08:59 PM UTC+8, Thiago Macieira wrote:
On Tuesday 16 June 2015 17:40:12 FrankHB1989 wrote:
> 3. I doubt whether a realistic implementation has more opportunity to
> optimize signed types than unsigned types, once the unsigned types
> *are used properly*, considering *what would actually be optimized away*.
> Are there any examples?

Your doubt is unfounded. Clearly and provably the compiler can perform more
optimisations on int than on unsigned.

I think I did not express my opinion on this topic clearly enough, sorry.
I do not deny that in real implementations some optimizations can be performed for signed integer operations and not for unsigned ones.
What I meant was: should the stuff that gets optimized away really be written in the first place?
Using unsigned types properly means I generally use them only when I am sure negative values are not needed. Thus I can intentionally omit some checks even before any optimization occurs.
Consider the following case:

https://github.com/devkitPro/libfat/pull/1

In fact GCC did warn that there was an unnecessary comparison. Since `size_t` is used, the compiler can easily detect it. If a signed integer were used instead, could the compiler still find this redundant code and optimize it as easily?
I don't think so.
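
A minimal illustration of the kind of redundant check meant here (this is not the actual libfat code from the pull request above):

#include <cstddef>

void fill(char* buffer, std::size_t length)
{
    // With an unsigned length this comparison is always false, and GCC can
    // warn about it and drop it; with a signed type the check would be
    // meaningful and would have to remain in the generated code.
    if (length < 0)
        return;

    for (std::size_t i = 0; i != length; ++i)
        buffer[i] = 0;
}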

That's also the reason why GCC needs to get rid of the warning that says
"assuming signed overflow does not occur when assuming that (X + c) < X is
always false". That warning is triggered when GCC *did* optimise the code,
which often is the intended behaviour. Getting rid of the warning by changing
the code implies pessimising the code by switching to an unsigned type: the
compiler will now need to check for overflow.

The warning was introduced when GCC started doing the optimisation and someone
complained (loudly) in a bug report that the compiler shouldn't behave like
that.
--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
   Software Architect - Intel Open Source Technology Center
      PGP/GPG: 0x6EF45358; fingerprint:
      E067 918B B660 DBD1 105C  966C 33F5 F005 6EF4 5358

I have actually seen some related reports on the GCC Bugzilla. GCC does the right thing here. But for my part, I tend to write code naturally (without any particular manual optimization effort) using unsigned types, and the resulting code quality is usually close to (if not better than) the signed-integer code after optimization. The fact that no warnings appear and no further optimization is performed here does not mean the code is poorer. On the other hand, coders who keep none of these facts in mind often trigger the (unnecessary) undefined behavior. So this is just not an excuse to use signed integers wherever possible.


 

Ville Voutilainen

Jun 17, 2015, 5:29:24 AM
to std-dis...@isocpp.org
On 17 June 2015 at 09:47, David Krauss <pot...@mac.com> wrote:
> The class (not really a wrapper, it’s opaque) is intended to be a nonstatic
> member of the transaction object. It will be automatically initialized when
> the transaction is created. It will also be moved and assigned together with
> the enclosing transaction, if that is movable or assignable.

Will it reset the state on move?

> No magic, just sensible encapsulation of the intended use of a bare integer.

So it encapsulates storing of the uncaught_exceptions value and provides
encapsulation for checking the current count against that. That's
nice, but hardly
crucial.

David Krauss

Jun 17, 2015, 5:30:28 AM
to std-dis...@isocpp.org, Nevin Liber
/cc Nevin because he’s shown an interest in throwing destructors.

On 2015–06–17, at 3:51 PM, FrankHB1989 <frank...@gmail.com> wrote:

Perhaps the major problem is the difficulty of standardization. Even if your proposed interface is more usable for ordinary users, someone may still need the portable underlying low-level interface.

No, the high-level interface is fundamentally more portable.

And there might be several different models of higher-level abstraction. Taking all of this into account, the discussion, wording, balloting ... takes time.

Laziness is not an excuse for standardizing the wrong interface.

I’m starting to get the impression that uncaught_exceptions is not intended to support transaction objects at all, but only scope guards encapsulating success and failure cases. So the best interface would rather be a scope guard facility, but that hasn’t happened yet, probably because 1) it would need anonymous guard variables which the core language still doesn’t have, and 2) the proposals so far haven’t really been palatable.

For what it’s worth, I do use scope guards occasionally, but seldom need to cancel one. If the need arises, I just declare a local bool cancel = false; capture it into the guard, set it manually, and use if (cancel) in the guard destructor.
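
A minimal sketch of that pattern, assuming a hand-rolled guard type (scope_guard and make_guard here are hypothetical helpers, not a standard or Boost facility):

#include <utility>

void do_work() {}                       // stand-ins for the real operations
void rollback() {}

template <class F>
struct scope_guard {
    F f;
    explicit scope_guard(F fn) : f(std::move(fn)) {}
    ~scope_guard() { f(); }             // runs the captured action at scope exit
};

template <class F>
scope_guard<F> make_guard(F f) { return scope_guard<F>(std::move(f)); }

void example()
{
    bool cancel = false;
    auto guard = make_guard([&] { if (!cancel) rollback(); });
    do_work();                          // may throw; rollback then runs during unwinding
    cancel = true;                      // reached only on success: suppresses the rollback
}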

If I want to run cleanup only on unwinding, I just write try { risky_business(); } catch (...) { cleanup(); throw; }. No guard, no boilerplate, no gimmicks. The strange thing about SCOPE_SUCCESS and SCOPE_FAIL is that they’re redundant with adding something to the end of the scope, and to a catch(...) block, respectively. The only reason to use them is aesthetic preference for grouping the success, failure, and cleanup blocks. But… aesthetically shuffling blocks of text is a job for the preprocessor, and these libraries already need the preprocessor to declare the anonymous variable and to save the user from writing the parens of a lambda expression.

Boost.ScopeExit does not offer success or failure cases. It only generates a straightforward destructor. They also recommend capturing a local variable if cancellation is desired.


Tallying the final score —

Gains: The parens of conforming scope guard macros only need to enclose the success, failure, and cleanup blocks, not the guarded scope. Existing libraries in the niche, like Folly ScopeFail() and ScopeSuccess(), become conforming. That’s all — no new semantic capability is intended over classic catch(...).

Costs: uncaught_exceptions appears to be suitable for more general use. Users will create value-semantic transaction classes with throwing destructors (and other things, like parsers and serializers needing a terminal token), which will inevitably end up inside unique_ptr or a container.

Ville Voutilainen

Jun 17, 2015, 5:33:41 AM
to std-dis...@isocpp.org
On 17 June 2015 at 10:26, David Krauss <pot...@mac.com> wrote:
> That's not at all the purpose of uncaught_exceptions. The purpose is to be
> able
> to reliably rollback sub-transactions when unwinding is in progress,
> not to be able
> to throw from a destructor.
>
>
> Actually, are you sure that this use-case doesn’t imply having a throwing
> destructor? See page 24 (slide 39) of N4152 Also, the Facebook Folly

Yes, I am.

> A transaction with a non-throwing destructor cannot be committed from that
> destructor. Instead it must be committed “manually” before being destroyed.

Such transactions are not the purpose of uncaught_exceptions. If your
commit/rollback
operations throw, they still shouldn't throw out of destructors.

> If the destructor observes a non-committed state, then it infers unwinding
> and performs rollback. This is the safe status quo and there’s no need for
> uncaught_exceptions.

I fail to follow that logic. There are cases where nested transactions
want to commit
even during unwinding, and there are cases where they want to rollback during
unwinding, and uncaught_exceptions provides a way to reliably detect unwinding,
so the various strategies can actually be built on top of it.

> If you don’t want to write commit() at the end of each scope, you need
> throwing destructors. That is what this is all about, since the beginning,
> no?

No. I don't need throwing destructors for automatic commit/rollback.

David Krauss

Jun 17, 2015, 6:05:07 AM
to std-dis...@isocpp.org
On 2015–06–17, at 5:29 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

Will it reset the state on move?

No. Why should it? A moved-from transaction should be empty, should have nothing to roll back, and ideally shouldn’t query the unwinding state in the first place. However, it would be more straightforward to wrap any persistent transaction object in a smart pointer and never move it.

So it encapsulates storing of the uncaught_exceptions value and provides
encapsulation for checking the current count against that. That's
nice, but hardly
crucial.

It provides separation of concerns and encapsulation. It prevents the user from doing something like if (uncaught_exceptions() > 0) which is IMHO very likely to happen in the wild. Please address my arguments for encapsulation rather than dismissing it as mere sugar.


On 2015–06–17, at 5:33 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

Such transactions are not the purpose of uncaught_exceptions. If your
commit/rollback
operations throw, they still shouldn't throw out of destructors.

Rollback typically doesn’t throw. Commitment typically does.

If the destructor observes a non-committed state, then it infers unwinding
and performs rollback. This is the safe status quo and there’s no need for
uncaught_exceptions.

I fail to follow that logic.

My point is only that you can detect unwinding without the help of uncaught_exceptions if you require a manual commit().

There are cases where nested transactions
want to commit
even during unwinding,

Such a case doesn’t care about uncaught_exceptions, the destructor would just commit either way. Such “commitment” can’t consist of much more than moving state to another variable, though.

and there are cases where they want to rollback during
unwinding,

“Rollback during unwinding” seems to be code for “commit in the destructor when not unwinding.”

and uncaught_exceptions provides a way to reliably detect unwinding,
so the various strategies can actually be built on top of it.

It’s as reliable as uncaught_exceptions is, which means no support for rethrowing an exception_ptr. See my original message.

If you don’t want to write commit() at the end of each scope, you need
throwing destructors. That is what this is all about, since the beginning,
no?

No. I don't need throwing destructors for automatic commit/rollback.

You need a destructor which throws if commitment does. I’m not familiar with this non-throwing commit strategy, and I’m not seeing it in the literature. Do you have a reference?

Ultimately the transaction needs to produce some externally-observable effects, or it’s not much of a transaction.

Ville Voutilainen

Jun 17, 2015, 6:31:36 AM
to std-dis...@isocpp.org
On 17 June 2015 at 13:04, David Krauss <pot...@mac.com> wrote:
> Will it reset the state on move?
>
> No. Why should it? A moved-from transaction should be empty, should have
> nothing to roll back, and ideally shouldn’t query the unwinding state in the
> first place. However, it would be more straightforward to wrap any
> persistent transaction object in a smart pointer and never move it.

Because there are cases where the state needs to be reset and cases where
it doesn't, so there is more than one kind of such wrapper that
domain-specific users want to write.

> So it encapsulates storing of the uncaught_exceptions value and provides
> encapsulation for checking the current count against that. That's
> nice, but hardly
> crucial.
>
> It provides separation of concerns and encapsulation. It prevents the user
> from doing something like if (uncaught_exceptions() > 0) which is IMHO very
> likely to happen in the wild. Please address my arguments for encapsulation
> rather than dismissing it as mere sugar.

Well, again, that's nice, but hardly crucial. Also, if you want to
propose such a wrapper
to be part of the standard library, by all means go ahead. If you want
to change uncaught_exceptions
itself, you'll need stronger rationale.

> Such transactions are not the purpose of uncaught_exceptions. If your
> commit/rollback
> operations throw, they still shouldn't throw out of destructors.
>
> Rollback typically doesn’t throw. Commitment typically does.

Even so, commits shouldn't throw out of destructors, even with
uncaught_exceptions.

> If the destructor observes a non-committed state, then it infers unwinding
> and performs rollback. This is the safe status quo and there’s no need for
> uncaught_exceptions.
>
> I fail to follow that logic.
>
> My point is only that you can detect unwinding without the help of
> uncaught_exceptions if you require a manual commit().

Except that such manual commits can happen during unwinding.

> There are cases where nested transactions
> want to commit
> even during unwinding,
>
> Such a case doesn’t care about uncaught_exceptions, the destructor would
> just commit either way. Such “commitment” can’t consist of much more than
> moving state to another variable, though.
>
> and there are cases where they want to rollback during
> unwinding,
>
>
> “Rollback during unwinding” seems to be code for “commit in the destructor
> when not unwinding.”

Well, having the capability to know when unwinding is in progress for
the current
scope allows making decisions based on it. I fail to see what's so
hard to understand
about that.

> and uncaught_exceptions provides a way to reliably detect unwinding,
> so the various strategies can actually be built on top of it.
>
> It’s as reliable as uncaught_exceptions is, which means no support for
> rethrowing an exception_ptr. See my original message.

How does this wrapper help with that?

> No. I don't need throwing destructors for automatic commit/rollback.
>
> You need a destructor which throws if commitment does. I’m not familiar with

No I don't. I can design the system in a fashion somewhat similar to
what streams
do, so that the engine that executes transactions reports an error
state if it has
failed a transaction.

> this non-throwing commit strategy, and I’m not seeing it in the literature.
> Do you have a reference?

I don't have a public reference, but I have implemented such systems.

> Ultimately the transaction needs to produce some externally-observable
> effects, or it’s not much of a transaction.

That has absolutely nothing to do with whether a transaction throws from its
destructor.

FrankHB1989

Jun 17, 2015, 7:19:50 AM
to std-dis...@isocpp.org


On Wednesday, June 17, 2015 at 4:20:08 PM UTC+8, Fabio Fracassi wrote:


On 17.06.2015 01:53, FrankHB1989 wrote:

Thank you for the answer.
I went through those reasons carefully, and I am sure I had already considered all of them before.
To summarize briefly: plausible, but they do not entirely convince me.

It is perfectly fine to be unconvinced by the reasoning, but you asked why it is the way it is, and this is your answer.
The reasons on both sides are well known, and the debate has been had multiple times.

Yes, I am aware of this situation. However, it remains debatable until all users know what they should know.
 
When the design (of uncaught_exceptions) was reviewed this was debated again (shortly) and voted on.
Only very few people voted for unsigned.

I admit this is the expected answer, though I am more interested in EWG/LEWG's official responses on this topic in general.

FrankHB1989

Jun 17, 2015, 7:36:22 AM
to std-dis...@isocpp.org, ne...@eviloverlord.com


On Wednesday, June 17, 2015 at 5:30:28 PM UTC+8, David Krauss wrote:
/cc Nevin because he’s shown an interest in throwing destructors.

On 2015–06–17, at 3:51 PM, FrankHB1989 <frank...@gmail.com> wrote:

Perhaps the major problem is the difficulty of standardization. Even if your proposed interface is more usable for ordinary users, someone may still need the portable underlying low-level interface.

No, the high-level interface is fundamentally more portable.
And there might be several different models of higher-level abstraction. Taking all of this into account, the discussion, wording, balloting ... takes time.

Laziness is not an excuse for standardizing the wrong interface.

It is not as simple as laziness. More resources are needed to get the work done, even half done.
 
I’m starting to get the impression that uncaught_exceptions is not intended to support transaction objects at all, but only scope guards encapsulating success and failure cases. So the best interface would rather be a scope guard facility, but that hasn’t happened yet, probably because 1) it would need anonymous guard variables which the core language still doesn’t have, and 2) the proposals so far haven’t really been palatable.

For what it’s worth, I do use scope guards occasionally, but seldom need to cancel one. If the need arises, I just declare a local bool cancel = false; , capture it into the guard, set it manually, and use if (cancel) in the guard destructor.

If I want to run cleanup only on unwinding, I just write try { risky_business(); } catch (...) { cleanup(); throw; }. No guard, no boilerplate, no gimmicks. The strange thing about SCOPE_SUCCESS and SCOPE_FAIL is that they’re redundant with adding something to the end of the scope, and to a catch(...) block, respectively. The only reason to use them is aesthetic preference for grouping the success, failure, and cleanup blocks. But… aesthetically shuffling blocks of text is a job for the preprocessor, and these libraries already need the preprocessor to declare the anonymous variable and to save the user from writing the parens of a lambda expression.

C++ does not do well at aesthetic things. The preprocessor (lacking hygienic macros, etc.) is too weak for syntax transformation. The only things I expect here are clarity and ease of writing. Preprocessor tricks are the workarounds.

David Krauss

Jun 17, 2015, 8:18:12 AM
to std-dis...@isocpp.org
On 2015–06–17, at 6:31 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

Because there are cases where the state needs to be reset and cases where
it doesn't need to be, so there are more than one kind of such wrappers that
domain-specific users want to write.

The moved-from state of standard library classes is unspecified by default. Users should follow best practices and reinitialize by assigning from a default-constructed object.

Even so, commits shouldn't throw out of destructors, even with
uncaught_exceptions.

That’s not what my earlier citation from the proposal seems to say:

See page 24 (slide 39) of N4152  

Even ignoring that, “shouldn’t” is often hard to teach.

Except that such manual commits can happen during unwinding.

Putting a non-throwing commit in a scope guard sounds plausible, no big deal. However, it all but precludes the subsequent destructor from doing anything with the unwinding state, because the destructor has no significant work left to do. The destructor can still ask about unwinding when no manual commit happened, but regardless, manual commits on unwind should not disturb using an "already-committed" test as a proxy for uncaught_exception(s).

For uncaught_exceptions to add something to the manual commit scenario: The user says commit() but it’s only deferred, and then the destructor checks uncaught_exceptions and decides to rollback instead.

“Rollback during unwinding” seems to be code for “commit in the destructor
when not unwinding.”

Well, having the capability to know when unwinding is in progress for
the current
scope allows making decisions based on it. I fail to see what's so
hard to understand
about that.

Because rollback is easy and commitment is hard. Commitment can fail, and the premise is to do it in a destructor. Given an unsurprising idea and a surprising one which are married together, it’s odd to refer only to the unsurprising one. Also, rollback on unwind is what happens even if you don’t commit on non-unwind, so there’s an ambiguity.

Not hard to comprehend, just seems like a circumlocution.

It’s as reliable as uncaught_exceptions is, which means no support for
rethrowing an exception_ptr. See my original message.

How does this wrapper help with that?

It helps by allowing the implementation to return a status based on the identity of the exception object being unwound, not the size of a stack which may be different each time that same object is rethrown.

Even if that’s not the right solution, other solutions may exist. What we can say for sure is that uncaught_exceptions does not support exception_ptr rethrowing because the stack depth is too volatile.

Regardless of exception_ptr, as I said in the original message, WG21 generally prefers to avoid overspecification and to allow implementation latitude for extensions and QOI. Limiting the amount of state to the one int is contrary to the usual, best practices.

No. I don't need throwing destructors for automatic commit/rollback.

You need a destructor which throws if commitment does. I’m not familiar with

No I don't. I can design the system in a fashion somewhat similar to
what streams
do, so that the engine that executes transactions reports an error
state if it has
failed a transaction.

So, commitment failure throws unless it happens during unwinding, in which case some state, say an exception_ptr, is set.

Then unwinding continues none the wiser. If the next commit-on-unwind fails for the same reason as the first, another exception_ptr needs to be queued.

At best, there are two completely different but perfectly equivalent error handling mechanisms. At worst, a random subset of the commits (or other side effects) will succeed and the committed output is senseless. (And there’s a lot of room for behavior in between those extremes.) If further commits aren’t even attempted when the flag or exception_ptr is set, then that represents at least a little overhead, and yucky global state.

I don’t think this boils down to a teachable or idiomatic architecture. Iostreams lets the user select the error handling style. It doesn’t switch dynamically so it’s not comparable.

C++ users deserve a comprehensive solution. If encapsulation of uncaught_exceptions even has a chance of enabling an improvement, that should be a strong argument for it.

Ville Voutilainen

Jun 17, 2015, 8:40:34 AM
to std-dis...@isocpp.org
On 17 June 2015 at 15:18, David Krauss <pot...@mac.com> wrote:
> Because there are cases where the state needs to be reset and cases where
> it doesn't need to be, so there are more than one kind of such wrappers that
> domain-specific users want to write.
>
> The moved-from state of standard library classes is unspecified by default.
> Users should follow best practices and reinitialize by assigning from a
> default-constructed object.

I'm talking about the target of the move, not the source of the move.

>
> Even so, commits shouldn't throw out of destructors, even with
> uncaught_exceptions.
>
>
> That’s not what my earlier citation from the proposal seems to say:
>
> See page 24 (slide 39) of N4152

That's not part of the proposed semantics, that's reference material.
uncaught_exceptions
is certainly not intended to encourage anyone to throw from a destructor.

> It’s as reliable as uncaught_exceptions is, which means no support for
> rethrowing an exception_ptr. See my original message.
>
> How does this wrapper help with that?
>
> It helps by allowing the implementation to return a status based on the
> identity of the exception object being unwound, not the size of a stack
> which may be different each time that same object is rethrown.

Has that "status based on the identity of the exception" been implemented?

> Even if that’s not the right solution, other solutions may exist. What we
> can say for sure is that uncaught_exceptions does not support exception_ptr
> rethrowing because the stack depth is too volatile.

Please show an example of what you mean by uncaught_exceptions not supporting
exception_ptr rethrowing, and how this wrapper of yours will support it better.

> Regardless of exception_ptr, as I said in the original message, WG21
> generally prefers to avoid overspecification and to allow implementation
> latitude for extensions and QOI. Limiting the amount of state to the one int
> is contrary to the usual, best practices.

That's news to me. I don't quite see how providing a simple int for the basic
facility is contrary to any usual best practices. It's standardizing simple and
palatable parts of existing practice.

> No I don't. I can design the system in a fashion somewhat similar to
> what streams
> do, so that the engine that executes transactions reports an error
> state if it has
> failed a transaction.
>
> So, commitment failure throws unless it happens during unwinding, in which
> case some state, say an exception_ptr, is set.

You are making the assumption that commit failure throws. That's not
what I said.

> At best, there are two completely different but perfectly equivalent error
> handling mechanisms. At worst, a random subset of the commits (or other side
> effects) will succeed and the committed output is senseless. (And there’s a
> lot of room for behavior in between those extremes.) If further commits
> aren’t even attempted when the flag or exception_ptr is set, then that
> represents at least a little overhead, and yucky global state.

There is no need to make that state global.

> C++ users deserve a comprehensive solution. If encapsulation of
> uncaught_exceptions even has a chance of enabling an improvement, that
> should be a strong argument for it.

I suppose a wrapper would be a useful addition, perhaps it should be proposed.

When it comes to being able to inspect the identity of the exception(s) in flight,
that seems like a considerably more complex facility. The support library of
libstdc++ would certainly allow doing that without too much trouble; I don't
know about others.

Nevin Liber

Jun 17, 2015, 11:20:14 AM
to std-dis...@isocpp.org
On 17 June 2015 at 06:36, FrankHB1989 <frank...@gmail.com> wrote:
On Wednesday, June 17, 2015 at 5:30:28 PM UTC+8, David Krauss wrote:
/cc Nevin because he’s shown an interest in throwing destructors.

Eh?  Do you have a reference message number for that?  I'd like to see what I said...

My interest is in avoiding them, as any code base which uses them is much harder to deal with.

FWIW, I'm in general agreement with Ville on this thread.
--
Nevin ":-)" Liber  <mailto:ne...@eviloverlord.com>  (847) 691-1404