Standardized SafeInt?


Ben Craig

Nov 7, 2012, 9:59:49 PM
to std-pr...@isocpp.org
SafeInt information can be found here <http://safeint.codeplex.com/>.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft: a "drop-in" replacement for integer types that throws an exception whenever an integer overflow occurs.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low-effort addition, considering the "proof of concept" is already widely used within Microsoft.
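For readers unfamiliar with the library, a minimal sketch of the intended usage (assuming the codeplex distribution's SafeInt.hpp and its default, exception-throwing error policy; the function here is illustrative, not from the library):

#include "SafeInt.hpp"  // codeplex distribution; default policy throws SafeIntException
#include <cstddef>

std::size_t alloc_size(std::size_t count, std::size_t elem_size)
{
    // Drop-in: the arithmetic reads exactly as it would with raw integers,
    // but an overflowing multiply or add throws instead of silently wrapping.
    SafeInt<std::size_t> n(count);
    return n * elem_size + sizeof(std::size_t);
}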

Jens Maurer

Nov 10, 2012, 2:55:07 PM
to std-pr...@isocpp.org
On 11/08/2012 03:59 AM, Ben Craig wrote:
> SafeInt information can be found here <http://safeint.codeplex.com/>.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft: a "drop-in" replacement for integer types that throws an exception whenever an integer overflow occurs.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low-effort addition, considering the "proof of concept" is already widely used within Microsoft.

Probably a good idea.

This would need a proposal, i.e. a paper essentially showing a bit of rationale
plus (eventually) the full text that would go into the standard. Unless someone
from the committee itself has enough interest to invest the substantial effort
to write that, it's not going to happen with a one-line post like this.

Or, you could write the proposal yourself and submit it (as a first step,
only show the motivation and rationale plus a rough outline; unless encouraged
by WG21 to do the full thing, don't write the full standard text just yet;
it might turn out to be a waste of time).

Jens

Beman Dawes

Nov 10, 2012, 3:33:18 PM
to std-pr...@isocpp.org
On Sat, Nov 10, 2012 at 2:55 PM, Jens Maurer <Jens....@gmx.net> wrote:
> On 11/08/2012 03:59 AM, Ben Craig wrote:
>> SafeInt information can be found here <http://safeint.codeplex.com/>.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft: a "drop-in" replacement for integer types that throws an exception whenever an integer overflow occurs.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low-effort addition, considering the "proof of concept" is already widely used within Microsoft.
>
> Probably a good idea.

Yes, although with no docs at the above link, and no examples in your
query, it is not possible to give a very complete answer.

> This would need a proposal, i.e. a paper essentially showing a bit of rationale
> plus (eventually) the full text that would go into the standard. Unless someone
> from the committee itself has enough interest to invest the substantial effort
> to write that, it's not going to happen with a one-line post like this.
>
> Or, you could write the proposal yourself and submit it (as a first step,
> only show the motivation and rationale plus a rough outline; unless encouraged
> by WG21 to do the full thing, don't write the full standard text just yet;
> it might turn out to be a waste of time).

As Jens suggests, writing a query proposal is really the only way to
know how the LWG will react.

Since there are other "safe integer" libraries in use, a proposal
document should comment on the differences between the proposal and
other similar libraries. It always rattles me when a proposal appears
that doesn't seem to be aware of the work of others, or of which
approaches have been successful with users.

--Beman

Marc

Nov 11, 2012, 6:34:49 AM
to std-pr...@isocpp.org
On Thursday, November 8, 2012 3:59:50 AM UTC+1, Ben Craig wrote:
SafeInt information can be found here <http://safeint.codeplex.com/>.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft: a "drop-in" replacement for integer types that throws an exception whenever an integer overflow occurs.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low-effort addition, considering the "proof of concept" is already widely used within Microsoft.

I don't think adding an isolated checked int makes much sense (I haven't looked at SafeInt). I believe it would make more sense as part of a generic integer type for which you can specify many overflow (and division by 0) policies: undefined, wrap, saturate, set a flag, trap/throw, etc. (we could start with a few policies).

Michał Dominiak

Nov 11, 2012, 2:58:13 PM
to std-pr...@isocpp.org
Yay for that. Currently, if you want to check for overflow and other situations like that, you need to either write weird tests (result + b < a, for example, huh) or rely on assembly (like `jc` on x86). Creating some reasonable way to access such a flag, or to throw on carry, etc., would be great.
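For concreteness, here is what those "weird tests" look like in portable C++ (the helper names are illustrative, not from any library):

#include <limits>
#include <stdexcept>

// Unsigned wraparound is well defined, so overflow can be detected
// after the fact: if a + b wrapped, the result is smaller than a.
unsigned checked_add(unsigned a, unsigned b)
{
    unsigned r = a + b;
    if (r < a)
        throw std::overflow_error("unsigned addition overflowed");
    return r;
}

// Signed overflow is undefined behavior, so the test must come first.
int checked_add(int a, int b)
{
    if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
        (b < 0 && a < std::numeric_limits<int>::min() - b))
        throw std::overflow_error("signed addition overflowed");
    return a + b;
}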

Now (following the x86 example), throwing on carry is trivial: just emit `jc` and make it jump to throwing code. Accessing the flag would be different, and harder. It would require either a construct similar to the assembly `jc` (`if [[overflow]]`? looks weird) or keeping an additional flag for each variable, which would kill the point of doing it without a library construct such as `std::safe<int>`.

Just throwing some thoughts.

dvd_l...@yahoo.com

Nov 13, 2012, 3:31:48 PM
to std-pr...@isocpp.org
Hi - I was notified of this thread on Sunday, and am happy to answer questions about my library.
 
A few comments I can make on some of what is here so far:
 
1) Rationale - integer checking is hard - much more difficult than it would seem at first. Many people can't get the math right to start with, and then get the checks wrong when they do try (e.g., if ( a * b < 0 ) for signed multiplication). There are also very subtle problems involving code that isn't quite standards-compliant (e.g., signed integer rollover). There's a need for a library that can just be dropped into existing code which does not change the current logic, and which will help solve this problem. The SafeInt library works nicely to solve these issues, as well as difficult things, like checking to see if a cast on entry into a function results in information loss.
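To illustrate the point about the sign test: with 32-bit ints, 65536 * 65537 wraps to the small positive value 65536, so if ( a * b < 0 ) never fires. A correct check must be made before the multiplication, e.g. in a wider type (a sketch with illustrative names, not SafeInt's internals):

#include <cstdint>
#include <limits>

bool mul_would_overflow(std::int32_t a, std::int32_t b)
{
    // Compute in 64 bits, where the product of two 32-bit values always fits.
    std::int64_t wide = std::int64_t(a) * std::int64_t(b);
    return wide > std::numeric_limits<std::int32_t>::max()
        || wide < std::numeric_limits<std::int32_t>::min();
}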
 
2) I have not extensively reviewed other libraries. The one that I'm aware of which came before my work (first released in 2005) was the Microsoft IntSafe library. It is written in C, and is easy to use incorrectly. SafeInt takes a more robust C++ approach. I believe other libraries have been created after mine was released, and I'm largely aware of them because John Regehr of the University of Utah did some nice work to find a small number of flaws in SafeInt. His team looked at other libraries and found much more substantial flaws. I can take a look at the other libraries, but it would really be better for an unbiased 3rd party to do the comparison. Other than IntSafe, I am not aware of any libraries that pre-dated SafeInt.
 
3) There are two versions of SafeInt that are public - one ships in Visual Studio, and is only expected to be correct with that compiler. The other is at www.codeplex.com/SafeInt; it should be thoroughly standards-compliant (if not, let me know - I will consider it a bug and fix it), and is known to work correctly with a number of compilers.
 
4) Just checking the carry flag is insufficient. For example, adding two 16-bit shorts in a 32-bit register will never trip the carry flag, but if the result is assigned to a short, it may represent an overflow. It is ironic that if sizeof(short) < sizeof(int), signed short addition overflow is defined by the standard (due to integral promotion), but signed int overflow is not - these are the sorts of oddities that make the problem so difficult for most programmers.
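A short illustration of that promotion effect (the values are illustrative):

#include <cstdint>

void promoted_short_add()
{
    std::int16_t a = 30000, b = 30000;
    // Both operands are promoted to int, so the hardware performs a 32-bit
    // add that never sets the carry flag; 60000 is a well-defined result.
    int wide = a + b;
    // The "overflow" only appears when narrowing back to 16 bits.
    std::int16_t narrow = static_cast<std::int16_t>(a + b); // wraps to -5536
    (void)wide;
    (void)narrow;
}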
 
5) The library has had significant adoption within Microsoft, and has proven very useful in mitigating this class of problem, which often manifests as security bugs. It has been downloaded over 5000 times, but I'm not aware of what other products it may have ended up in.
 
I'll be happy to answer any questions and help with this process. If there's significant interest, I can work with the Microsoft standards team to do the full write-up needed. I've participated in other standards bodies, but not this one.
 
Thanks to all for the kind comments thus far -

Beman Dawes

Nov 13, 2012, 7:29:40 PM
to std-pr...@isocpp.org
Sounds like an excellent candidate for standardization. Please do
consider making a formal proposal. Stephan T. Lavavej (aka STL) is the
Microsoft person you probably want to talk to about getting together a
formal proposal.

--Beman

robertmac...@gmail.com

Nov 21, 2012, 9:25:32 PM
to std-pr...@isocpp.org


On Wednesday, November 7, 2012 6:59:50 PM UTC-8, Ben Craig wrote:
SafeInt information can be found here.  ...

I also found myself interested in this code.  When I got around to looking at it, I felt it could be improved.  Being a "booster", I wanted to leverage Boost to make a simpler implementation.  The result of my efforts can be found at http://rrsd.com/blincubator.com/bi_library/safe-numerics/

Main differences are:
a) follows Boost/C++ library conventions as to naming, formatting, etc.
b) is many, many lines shorter due to heavier use of metaprogramming techniques in the implementation.
c) includes an exhaustive set of tests covering all combinations of corner cases - generated with the Boost Preprocessor library.
d) defines concepts for "numeric" types and includes concept checking via the Boost Concept Check library.
e) includes documentation more in line with the currently recommended "formal" style.

Robert Ramey

Fernando Cacciola

Nov 22, 2012, 8:45:00 AM
to std-pr...@isocpp.org
I would most definitely like to have a standardized interface to handle numerical overflow.

I do however have some reservations on doing it by means of an integer wrapper.

First, I think we need such "safe numeric operations" to handle floating point values just as well. While the mechanisms are different, the requirements are exactly the same (and floating point does overflow, no matter how big its range is). In fact, I think we need it for arbitrary numeric types, including rationals, complex, user-defined extended precision, interval, etc. (and consistently defined even for unlimited-precision numeric types, even if the overflow test always yields false).

Second, I wonder if a wrapper is the best interface. Since this is a "security" utility, it needs to be consistently applied *all over* the relevant source code, otherwise it is not really effective.
A lot, if not most, critical and complex numeric code (where this is most needed) depends on third-party libraries, and in the numerical community there is a huge base of reusable code that is not (yet) sufficiently generic to let you get away with simply using a wrapper type at the highest level.
In other words, you might use SafeInt in your own code, but it is not easy to have third-party code use it as well.

Since the overflow is always a side effect of operations, and operations are expressed in C++ via operators (and functions), I wonder if the library shouldn't be in a form of "safe operators and functions" as opposed to "safe numeric type".

IIUC, the current operator overloading mechanism should allow something like the following:

#include <safe_numeric_operations>

// This "activates" the overflow detection on any supported type, including builtin types
using std::safe_numeric_operations ;

void test()
{
  try
  {
     int a = std::numeric_limits<int>::max();

     // std::safe_numeric_operations::operator + (int,int) being called here
     int r = a + a ;
  }
  catch ( std::bad_numeric_conversion const& ) { }
}




--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com

Nicol Bolas

Nov 22, 2012, 10:58:42 AM
to std-pr...@isocpp.org


On Thursday, November 22, 2012 5:45:42 AM UTC-8, Fernando Cacciola wrote:
I would most definitely like to have a standardized interface to handle numerical overflow.

I do however have some reservations on doing it by means of an integer wrapper.

First, I think we need such "safe numeric operations" to handle floating point values just as well. While the mechanisms are different, the requirements are exactly the same (and floating point does overflow, no matter how big its range is). In fact, I think we need it for arbitrary numeric types, including rationals, complex, user-defined extended precision, interval, etc. (and consistently defined even for unlimited-precision numeric types, even if the overflow test always yields false).

Second, I wonder if a wrapper is the best interface. Since this is a "security" utility, it needs to be consistently applied *all over* the relevant source code, otherwise it is not really effective.

Don't forget the essential C++ maxim: pay only for what you use.

There is a lot of code that's just fine with non-secure integers. It has no need for overflow checks every time it loops from 0 to 240; it isn't going to overflow its loop counter. There are many such occurrences in a code base where the structure and nature of the code mean that overflow is just not a realistic possibility.

This means that whatever we adopt has to be an opt-in mechanism, not an opt-out.
 
A lot, if not most, critical and complex numeric code (where this is most needed) depends on third-party libraries, and in the numerical community there is a huge base of reusable code that is not (yet) sufficiently generic to let you get away with simply using a wrapper type at the highest level.
In other words, you might use SafeInt in your own code, but it is not easy to have third-party code use it as well.

Since the overflow is always a side effect of operations, and operations are expressed in C++ via operators (and functions), I wonder if the library shouldn't be in a form of "safe operators and functions" as opposed to "safe numeric type". 

IIUC, the current operator overloading mechanism should allow something like the following:

#include <safe_numeric_operations>

// This "activates" the overflow detection on any supported type, including builtin types
using std::safe_numeric_operations ;

void test()
{
  try
  {
     int a = std::numeric_limits<int>::max();

     // std::safe_numeric_operations::operator + (int,int) being called here
     int r = a + a ;
  }
  catch ( std::bad_numeric_conversion const& ) { }
}

Ignoring the fact that a recompile would suddenly cause massive amounts of code to potentially throw std::bad_numeric_conversion without the knowledge of that code's owners (i.e., we can't do that), there's also the practical issue that this won't magically affect any C code that people often use.

But even more importantly, I'm pretty sure you can't overload operator+(int, int). Besides being practically infeasible by creating the possibility of exceptions in potentially exception-unsafe code, it's simply not possible.
 

Marc Thibault

Nov 22, 2012, 11:35:26 AM
to std-pr...@isocpp.org
I would like to have a standard that deals with overflows. I think the best way would be an arbitrary precision class used for internal linkage, plus the right cast functions to create objects with external linkage. I think dynamic_cast could in theory be used to throw on overflow. I would also like a saturate_cast, which would convert to the nearest valid value of the output type. The class definition could also look like this:

#include <cstdint>

template<std::int64_t MIN, std::int64_t MAX> // minimum and maximum runtime value that can be stored
class integer
{
    std::int64_t value; // placeholder: the smallest storage type large enough for MIN..MAX
public:
    template<std::int64_t OMIN, std::int64_t OMAX>
    integer<MIN + OMIN, MAX + OMAX> operator+(integer<OMIN, OMAX> i) const;
};
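Hypothetical usage of the sketch above, showing how the bounds would propagate:

integer<0, 100>  a;
integer<-50, 50> b;
auto c = a + b;  // decltype(c) is integer<-50, 150>: the sum cannot overflow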

Now, there is a limit to how much precision the MIN and MAX could have. So it would still be interesting to have a safeintmax_t and safeuintmax_t.

Other options that could be interesting:

A third template parameter being a common divisor. If you make an integer from a long*, the alignment can guarantee a minimum common divisor among all results.

An arbitrary precision floating point class which knows its min and max to some extent.

A cast that lowers precision so all preceding operations could be replaced by less precise and faster operations.

robertmac...@gmail.com

Nov 22, 2012, 11:38:28 AM
to std-pr...@isocpp.org


On Thursday, November 22, 2012 5:45:42 AM UTC-8, Fernando Cacciola wrote:
I would most definitely like to have a standardized interface to handle numerical overflow.

I do however have some reservations on doing it by means of an integer wrapper.

First, I think we need such "safe numeric operations" to handle floating point values just as well. While the mechanisms are different, the requirements are exactly the same (and floating point does overflow, no matter how big its range is). In fact, I think we need it for arbitrary numeric types, including rationals, complex, user-defined extended precision, interval, etc. (and consistently defined even for unlimited-precision numeric types, even if the overflow test always yields false).

Note that my proposed implementation defines a concept "Numeric" for all types which look like a number and are supported via std::numeric_limits. I've implemented this for safe<int>, safe<unsigned int>, etc.  All of the other types listed above would fit into this scheme.  (Though implementing this for floating point types would be an effort.)
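A minimal sketch of such a concept check, assuming (as described) that qualifying types specialize std::numeric_limits; the trait name is illustrative:

#include <limits>

template<class T>
struct is_numeric
{
    // A type "looks like a number" if numeric_limits is specialized for it.
    static const bool value = std::numeric_limits<T>::is_specialized;
};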

Second, I wonder if a wrapper is the best interface. Since this is a "security" utility, it needs to be consistently applied *all over* the relevant source code, otherwise it is not really effective.

I totally disagree with this.  It must be optional on the part of the user; we can't really know what the user wants to do.  Maybe he's just fine with high-order bits being lost.  So if you applied this "everywhere" you'd have to supply an "escape hatch", which would complicate its usage.  Furthermore, some users would want to use this only for debug builds and perhaps define away the functionality for release builds.  Finally, using it "everywhere" would mean mucking with the C++ language - a language which is already too complex for anyone to write a correct compiler for.
 
A lot, if not most, critical and complex numeric code (where this is most needed) depends on third-party libraries, and in the numerical community there is a huge base of reusable code that is not (yet) sufficiently generic to let you get away with simply using a wrapper type at the highest level.
In other words, you might use SafeInt in your own code, but it is not easy to have third-party code use it as well.

Which is as it must be.  We can't fix the whole world - only our own code. But this shouldn't prevent us from doing that.
 
IIUC, the current operator overloading mechanism should allow something like the following:

#include <safe_numeric_operations>

// This "activates" the overflow detection on any supported type, including builtin types
using std::safe_numeric_operations ;

void test()
{
  try
  {
     int a = std::numeric_limits<int>::max();

     // std::safe_numeric_operations::operator + (int,int) being called here
     int r = a + a ;
  }
  catch ( std::bad_numeric_conversion const& ) { }
}

This is pretty much exactly what my reference implementation does!

Robert Ramey

robertmac...@gmail.com

Nov 22, 2012, 11:47:15 AM
to std-pr...@isocpp.org


On Thursday, November 22, 2012 8:35:26 AM UTC-8, Marc Thibault wrote:

template<std::int64_t MIN, std::int64_t MAX> // minimum and maximum runtime value that can be stored

...

Note that my reference implementation includes safe_signed_range<min, max> and safe_unsigned_range<min, max>.  In fact, safe<int> etc. are implemented in terms of these more general templates - something like

template<typename T>
struct safe : public safe_range<std::numeric_limits<T>::min(), std::numeric_limits<T>::max()>
{};

So you get this functionality for free.


Robert Ramey

Fernando Cacciola

Nov 22, 2012, 12:01:58 PM
to std-pr...@isocpp.org
OK, I agree with your observations about silently overloading operators.

OTOH, I still think there is something to be considered in the fact that it is the operations that overflow (and have other unwanted side effects), not the numbers themselves, so I would propose a layer of overloaded functions that can be applied to any type, including built-in types, and on top of that a higher-level wrapper (or wrappers, as in Robert's library).

The advantage of doing so, following your own observations, is that in most numeric code you don't really need to check all operations. A performance-critical algorithm might not be able to just use the wrapper as a drop-in replacement for int, yet it could still be made secure by adding checked operations where specifically necessary.

Having said all that, I would like to at least mention that there is a larger picture to consider here, in which integer overflow is just one case. I'm referring to the domain of numerical robustness in general; and while comprehensive work on that is way out of scope for a relatively simple proposal like this one, the big picture should still be constantly present and considered.

From the more general point of view of numerical robustness, throwing an exception when a numerical computation could not be performed (such as in the case of integer overflow) is the most basic building block, but it is still too low-level. There is a lot of work in the research community on how to truly handle numerical limitations, as opposed to merely detecting them and bailing out (which is what a safe-int does). Unfortunately, most of that work is specific to an application domain and a given numeric type or types, so it's hard to extrapolate general idioms and utilities. However, there are two basic elements that are usually found in most of them:

First, there is the general mechanism of using a proper type to perform an operation - for instance, an int64 to operate on two int32s. This is the mechanism used by SafeInt and Robert's code, but it's actually used in a lot of places dealing with integer numerics. So we should standardize that, such that it is generally available to any programmer to do something like:

auto r = (secure)a + b;

here (secure) would bootstrap an operator+ which selects the proper return type to guarantee the addition does not overflow (or you could just have secure_add(a,b) or whatever)

Second, there is the general requirement to test whether the result of operations are reliable, beyond just overflow.
One idiom that has emerged in recent years, and which I find intuitive enough to be general, is the concept of "certified arithmetic" (usually implemented with interval floating-point numbers). The idea is that, on the one hand, the result of an operation carries a certificate (explicit or not) indicating whether it succeeded, and on the other hand, triboolean logic is used throughout, made general and easy.  In terms of safe_int this would mean that instead of throwing, the result is just flagged as overflowed, much like a floating point saturates to +-INF; but when you try to compare it, the answer is "maybe" (and if you try to convert it to plain int, it throws).

The general form of the idiom would be something like this:

auto e = ( a + b == c + d ) ;
if ( certainly(e) )
{
}
else if ( certainly_not(e) )
{
}
else
{
  // can't tell: it is not possible to compare the numbers
}

Here, if a, b, c, d are integers, and either addition overflows, the code flows to the last else, without any exception being thrown.
The trick is that operator== returns a tribool (or its generalization, uncertain<bool>).
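A minimal sketch of that idea (hypothetical names, not from an existing proposal):

// Comparisons of possibly-overflowed values return a three-state result
// instead of throwing; "maybe" means an operand had been flagged as overflowed.
enum certainty { certainly_false, certainly_true, maybe };

struct uncertain_bool { certainty value; };

inline bool certainly(uncertain_bool e)     { return e.value == certainly_true; }
inline bool certainly_not(uncertain_bool e) { return e.value == certainly_false; }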


Best

robertmac...@gmail.com

Nov 22, 2012, 3:57:25 PM
to std-pr...@isocpp.org


On Thursday, November 22, 2012 9:02:42 AM UTC-8, Fernando Cacciola wrote:
OK, I agree with your observations about silently overloading operators.

OTOH, ....

Well, one can make this very elaborate.  When I was doing this I experimented with keeping track of the number of bits at compile time, so that operations which could never overflow would generate no runtime overhead.  There are lots of things one could do along this line - especially now that we have something like auto.

But in the end, I decided to keep it as simple as I could. The whole library can be summarized in one sentence.

"any operation involving a safe numeric type will produce the expected mathematical result or throw an exception"

Making it do more or less than this would severely hamper its utility.

Robert Ramey

 

Fernando Cacciola

Nov 22, 2012, 4:32:52 PM
to std-pr...@isocpp.org
I didn't intend to suggest that a library should do less or more than proposed here. I suggested that in order to prepare and evaluate such a proposal, the bigger picture should be taken into account.

For instance, both SafeInt and your code *require* a primitive operation that returns the result in a larger type such that it is known not to overflow. The overflow detection is just an extra step, and it's only required because (or rather IFF) the result needs to be converted into the same type as the operands. That means that both libraries under consideration are really the composition of two layers: the bottom layer that performs the operation in a way that its result is known to be correct, and the layer that performs the conversion to a given receiving type but doing a proper range check first.

Then I'm saying that, since the bottom layer is a requirement for these libraries, and it's also useful--and used--in several other numerical techniques, it should be standardized as such (and not hidden as an implementation detail as is proposed).

Furthermore, with such a decomposition in place, the top layer can be *totally* given by a general numeric_cast<> utility, such as the one proposed in n1879
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1879.htm)

But that's not all.

I'm also saying that the bigger picture could critically affect design decision even within the scope proposed. I gave the example of certified arithmetic because that would suggest that a given safe_int should not necessarily just throw right up front, but it could instead flag the overflow and throw only when necessary, such as when casting to int. This is important because if you are going to compare the overflowed result, you can just return "maybe" from the comparison without throwing an exception. This not only allows the client code to be *elegantly* explicit about the way overflowed results affect the computation results, but it also allows the library to be used in contexts where exceptions are to be avoided when possible.


Best

robertmac...@gmail.com

Nov 22, 2012, 5:25:01 PM
to std-pr...@isocpp.org


On Thursday, November 22, 2012 1:33:38 PM UTC-8, Fernando Cacciola wrote:
On Thu, Nov 22, 2012 at 5:57 PM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 9:02:42 AM UTC-8, Fernando Cacciola wrote:
OK, I agree with your observations about silently overloading operators.

OTOH, ....

Well, one can make this very elaborate.  When I was doing this I experimented with keeping track of the number of bits at compile time, so that operations which could never overflow would generate no runtime overhead.  There are lots of things one could do along this line - especially now that we have something like auto.

But in the end, I decided to keep it as simple as I could. The whole library can be summarized in one sentence.

"any operation involving a safe numeric type will produce the expected mathematical result or throw an exception"

Making it do more or less than this would severely hamper its utility.

 
 
I didn't intend to suggest that a library should do less or more than proposed here. I suggested that in order to prepare and evaluate such a proposal, the bigger picture should be taken into account.

For instance, both SafeInt and your code *require* a primitive operation that returns the result in a larger type such that it is known not to overflow.

I don't think that's true.  That's not part of the interface or type requirements. It's merely an implementation feature of the library, because it's the fastest way to do this.  It still checks for overflows on int64 operations even though there is no int128 type - it just has to use a much slower method.

 
The overflow detection is just an extra step, and it's only required because (or rather IFF) the result needs to be converted into the same type as the operands.

The result is calculated to whatever precision is required to avoid overflow. When the result is assigned somewhere, the overflow is caught.  So:

safe<int16> a, b;
safe<int16> x = a + b; // could trap on overflow, but
safe<int32> y = a + b; // would never trap

An interesting side issue is that there is no runtime overhead on the second
statement, because template metaprogramming is used to determine that
there can never, ever be an overflow. Finally,

auto x = a + b; will result in x being of type safe<int32> and will never, ever trap (actually, I have to double-check this).

So to my mind this is exactly what is desired.
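A sketch of the metaprogramming behind that claim (a hypothetical trait; the real library derives the result type from the operands' ranges):

#include <cstdint>
#include <type_traits>

// One extra bit always suffices for addition, so safe<int16> + safe<int16>
// can simply produce the next wider integer and skip the runtime check.
template<class T> struct add_result;
template<> struct add_result<std::int16_t> { typedef std::int32_t type; };
template<> struct add_result<std::int32_t> { typedef std::int64_t type; };
// std::int64_t has no wider builtin type, so a runtime check remains necessary.

static_assert(std::is_same<add_result<std::int16_t>::type, std::int32_t>::value,
              "int16 + int16 always fits in int32");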

That means that both libraries under consideration are really the composition of two layers: the bottom layer that performs the operation in a way that its result is known to be correct, and the layer that performs the conversion to a given receiving type but doing a proper range check first.

In my implementation, the two layers are there - but can't be separated.  The "lower" layer happens on the + operator while the "upper" layer happens on the = operator.  There is only one library though.

 
Then I'm saying that, since the bottom layer is a requirement for these libraries, and it's also useful--and used--in several other numerical techniques, it should be standardized as such (and not hidden as an implementation detail as is proposed)

I'm guessing that anyone wanting to do this could just overload the = operator, or use safe<?> as a constructor argument to his own special type.


But that's not all.

I'm also saying that the bigger picture could critically affect design decision even within the scope proposed. I gave the example of certified arithmetic because that would suggest that a given safe_int should not necessarily just throw right up front, but it could instead flag the overflow and throw only when necessary, such as when casting to int. This is important because if you are going to compare the overflowed result, you can just return "maybe" from the comparison without throwing an exception. This not only allows the client code to be *elegantly* explicit about the way overflowed results affect the computation results, but it also allows the library to be used in contexts where exceptions are to be avoided when possible.

As a practical matter, I don't see how you can "throw up front" until you actually do an assignment or explicit cast.  Take the following assignments.

safe<int16> a;
safe<int32> b;

auto x = a + b;  // what is the type of x?
auto y = b + a;  // is the type of y the same as the type of x?

Questions like these made me decide to divide the task as you suggested
a) do the calculation resulting in a type which can hold the true result
b) on assignment or cast (safe_cast ...) trap the attempt to save a value which loses information.

Robert Ramey


Fernando Cacciola

Nov 22, 2012, 5:51:41 PM
to std-pr...@isocpp.org
On Thu, Nov 22, 2012 at 7:25 PM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 1:33:38 PM UTC-8, Fernando Cacciola wrote:
On Thu, Nov 22, 2012 at 5:57 PM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 9:02:42 AM UTC-8, Fernando Cacciola wrote:
OK, I agree with your observations about silently overloading operators.

OTOH, ....

Well, one can make this very elaborate.  When I was doing this I experimented with keeping track of the number of bits at compile time, so that operations which could never overflow would generate no runtime overhead.  There are lots of things one could do along this line - especially now that we have something like auto.

But in the end, I decided to keep it as simple as I could. The whole library can be summarized in one sentence.

"any operation involving a safe numeric type will produce the expected mathematical result or throw an exception"

Making it do more or less than this would severely hamper its utility.

 
 
I didn't intend to suggest that a library should do less or more than proposed here. I suggested that in order to prepare and evaluate such a proposal, the bigger picture should be taken into account.

For instance, both SafeInt and your code *require* a primitive operation that returns the result in a larger type such that it is known not to overflow.

I don't think that's true.  That's not part of the interface or type requirements. It's merely an implementation feature of the library, because it's the fastest way to do this.  It still checks for overflows on int64 operations even though there is no int128 type - it just has to use a much slower method.

I haven't checked your code for this corner case, but SafeInt uses a form of 128-bit type (when configured as such).

Since it is always true that the result of an integer operation between two N-bit integers fits perfectly in a 2*N-bit integer, it is entirely possible to generalize the pattern even for 64 bits by providing a library-based 128-bit integer.
In fact, we've been doing that for 64 bits on 32-bit hardware and OSes for a long long time, so it wouldn't be anything new.

 
 
The overflow detection is just an extra step, and it's only required because (or rather IFF) the result needs to be converted into the same type as the operands.

The result is calculated to whatever precision is required to avoid overflow. When the result is assigned somewhere, the overflow is caught.  So:

safe<int16> a, b;
safe<int16> x = a + b; // could trap on overflow, but
safe<int32> y = a + b; // would never trap

An interesting side issue is that there is no runtime overhead on the second
statement, because template metaprogramming is used to determine that
there can never, ever be an overflow. Finally,

auto x = a + b; will result in x being of type safe<int32> and will never, ever trap (actually, I have to double-check this).


So what's the signature of operator+ ( safe<int64>, safe<int64> ) ?
 
So to my mind this is exactly what is desired.

So we are mostly agreeing on what is useful, only not on the interface.

I still prefer something more general that would allow me to do:

int16 a,b ;
int32 x = safe_add(a,b);

and entirely bypass the wrapper, though, especially as a std utility (which is what we are discussing here)
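One possible spelling of that free function, shown only to make the shape concrete:

#include <cstdint>

// The widened return type guarantees the addition itself cannot overflow.
inline std::int32_t safe_add(std::int16_t a, std::int16_t b)
{
    return std::int32_t(a) + std::int32_t(b);
}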
 

That means that both libraries under consideration are really the composition of two layers: the bottom layer that performs the operation in a way that its result is known to be correct, and the layer that performs the conversion to a given receiving type but doing a proper range check first.

In my implementation, the two layers are there - but can't be separated.  The "lower" layer happens on the + operator while the "upper" layer happens on the = operator.  There is only one library though.


Notice that this is not the case with SafeInt (and SafeInt does not allow me to just get the correct, non-overflowed result as we did above).

 
Then I'm saying that, since the bottom layer is a requirement for these libraries, and it's also useful--and used--in several other numerical techniques, it should be standardized as such (and not hidden as an implementation detail as is proposed)

I'm guessing that anyone wanting to do this could just overload the = operator, or use safe<?> as a constructor argument to his own special type.


But that's not all.

I'm also saying that the bigger picture could critically affect design decision even within the scope proposed. I gave the example of certified arithmetic because that would suggest that a given safe_int should not necessarily just throw right up front, but it could instead flag the overflow and throw only when necessary, such as when casting to int. This is important because if you are going to compare the overflowed result, you can just return "maybe" from the comparison without throwing an exception. This not only allows the client code to be *elegantly* explicit about the way overflowed results affect the computation results, but it also allows the library to be used in contexts where exceptions are to be avoided when possible.

As a practical matter, I don't see how you can "throw up front" until you actually do an assignment or explicit cast.  Take the following assignments.

Indeed my point was that you shouldn't, but you certainly could (and naturally would) in a design that does not separate the two layers.

The SafeInt code, for example (and AFAICT), does just that, since it doesn't produce a 2N-bit integer for an operation between two N-bit integers. So it checks and throws right in the operators (or, to be precise, in the functions that produce the results).


 
safe<int16> a;
safe<int32> b;

auto x = a + b;  // what is the type of x?

should be safe<int64>
 
auto y = b + a;  // is the type of y the same as the type of x?

yes
 
Questions like these made me decide to divide the task as you suggested
a) do the calculation resulting in a type which can hold the true result
b) on assignment or cast (safe_cast ...) trap the attempt to save a value which loses information.


So we are almost on the same page.

We differ in how to propose the two layers to the std.

Ben Craig

Nov 22, 2012, 9:10:24 PM
to std-pr...@isocpp.org, robertmac...@gmail.com


On Thursday, November 22, 2012 2:57:26 PM UTC-6, robertmac...@gmail.com wrote:
Well, one can make this very elaborate.  When I was doing this I experimented with keeping track of the number of bits at compile time, so that operations which could never overflow would generate no runtime overhead.  There are lots of things one could do along this line - especially now that we have something like auto.
 
That would likely turn into a fixed-point arithmetic library.

robertmac...@gmail.com

Nov 23, 2012, 1:45:09 AM
to std-pr...@isocpp.org, robertmac...@gmail.com


On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:

 
That would likely turn into a fixed-point arithmetic library.

I believe that a multiprecision library has recently been submitted to Boost by John Maddock.  It would be my hope that safe<T> would work with any type which has std::numeric_limits<T> implemented with the right definitions in its members.  I haven't spent any time considering whether this is possible though.  I did define and implement a "Numeric" concept, which includes all the integers and the safe versions of the same, in order to support such an idea.  But I don't know whether this will really pan out.  My main motivation was to make a "boostified" version of SafeInt.

Off topic, but a related note: in the course of doing this, I spent time investigating the Boost conversion library, which I believe you wrote.  I really couldn't understand how to use it from the documentation, and filed a Trac issue to that effect.  To my knowledge it's still a pending issue.  I think some more effort in this area would be very helpful.

Robert Ramey
 

Fernando Cacciola

Nov 23, 2012, 7:25:27 AM
to std-pr...@isocpp.org
On Fri, Nov 23, 2012 at 3:45 AM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:

 
That would likely turn into a fixed-point arithmetic library.


I would rather say that a fixed-point library, as well as a rationals library, would be alongside (or maybe one level above) safe-int, as opposed to a generalization of it.
 
I believe that a multiprecision library has recently been submitted to Boost by John Maddock.  It would be my hope that safe<T> would work with any type which has std::numeric_limits<T> implemented with the right definitions in its members.  I haven't spent any time considering whether this is possible though.  I did define and implement a "Numeric" concept, which includes all the integers and the safe versions of the same, in order to support such an idea.  But I don't know whether this will really pan out.  My main motivation was to make a "boostified" version of SafeInt.

I still prefer a traits-and-functions-only lower layer, because it would be naturally used in the implementation of a fixed_point<T> and a rational<T>; but I admit that they could just as well use safe_int<T>, as has been presented.

Here's one example, off the top of my head, of why I would prefer a lower layer independent of safe_int<>:

Both fixed_point<T> and rational<T> are useful as long as the values do not overflow. However, and this is particularly true in the case of rationals, there are several application domains where computations can easily, and often, overflow. In these cases, one must use a big_int (unlimited precision integer) as T (whether for a fixed_point or rational)
But that is pessimistically inefficient, so one would consider a non-template "exact" numeric type which would have a dynamic internal representation of rational<T>, but which would automatically switch from T to 2T (meaning a twice-as-big integer), until it reaches big_int.

Now, if rational<T> uses safe<T> internally, how would we determine the next representation to use when overflow is detected? Would it do something like

rational< typename decltype( std::declval<safe<T>>() + std::declval<safe<T>>() )::internal_type > ?

that'd work, but it starts getting cumbersome.

If, OTOH, safe<T>, rational<T>, fixed_point<T> and exact all use the same general lower layer, with elements such as:

exact_integer_result_type<T,T>::type  // given int32,int32 yields int64

safe_add(T,T) -> exact_integer_result_type<T,T>::type

etc...

then the promoted rational to use within exact would be

rational< typename exact_integer_result_type<T,T>::type >

which looks a lot cleaner to me.

That's all off the top of my head, but I hope it illustrates my point.


 
Off topic, but a related note: in the course of doing this, I spent time investigating the Boost conversion library, which I believe you wrote.  I really couldn't understand how to use it from the documentation, and filed a Trac issue to that effect.  To my knowledge it's still a pending issue.  I think some more effort in this area would be very helpful.


Absolutely. Fixing that documentation is on my near-term TODO list.

Best
 

robertmac...@gmail.com

Nov 23, 2012, 12:10:31 PM
to std-pr...@isocpp.org


On Friday, November 23, 2012 4:26:08 AM UTC-8, Fernando Cacciola wrote:

On Fri, Nov 23, 2012 at 3:45 AM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:

 
That would likely turn into a fixed-point arithmetic library.


I would rather say that a fixed-point library, as well as a rationals library, would be alongside (or maybe one level above) safe-int, as opposed to a generalization of it.


I'm not sure I followed your suggestion, but somehow I think I might be in agreement. My view would be that the "safe" idea is orthogonal to the "numeric" idea.  So

int, rational<T>, multiprecision<N>, ... would have std::numeric_limits<T> defined. std::numeric_limits includes members that define the min/max values and a lot of other features of numeric types.

This would make them fulfill the "Numeric" concept (type requirements).

safe<T> would be defined for any type T fulfilling the "Numeric" concept.

And safe<T> would also have std::numeric_limits implemented, so it would fulfill the "Numeric" concept as well.

This would separate the "safe" idea from the "numeric" idea and permit users to use the "safe" version if and only if actually desired.  It would also clarify what "safe" does as opposed to what "numeric" does.  My proposal implements this idea for the current integer types.  It hasn't really been investigated whether this extends well to all types for which std::numeric_limits is implemented.  Anyway, this is far into the future.
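A sketch of that closing step - giving safe<T> its own numeric_limits so it satisfies the same concept as T (the forwarding shown here is simplified; a full version would override members one by one):

#include <limits>

template<class T> class safe;  // the wrapper, declared elsewhere

namespace std {
    // Inherit T's traits; a complete implementation would override members
    // such as is_modulo, since safe<T> traps instead of wrapping.
    template<class T>
    class numeric_limits< ::safe<T> > : public numeric_limits<T> {};
}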

Robert Ramey

Marc Thibault

Nov 25, 2012, 11:59:26 AM
to std-pr...@isocpp.org
A few facts I wanted to share:

uint32 a = random<uint32>();
uint32 b = random<uint32>();
bool carry = random<bool>();
uint33 c = a + b + carry; // this cannot overflow: the maximum is (2^32-1) + (2^32-1) + 1 = 2^33 - 1

In this example, the integer type that stores automatic results must know the exact maximum, not just the number of bits of the sub-result. Also, the multiplication of two N-bit numbers cannot reach the maximum of a 2*N-bit number.

It might be possible to define a 128-bit integer as a pair of low and high bits, and then generalize to N*64-bit numbers. All of those numbers could know their exact minimum and maximum at compile time. Multiplication algorithms for large numbers often add several results of multiplications of two machine words; many overflow checks could be removed if the integer type knows its exact minimum and maximum.
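A sketch of the pair-of-words idea for addition (the type name and carry handling are illustrative):

#include <cstdint>

struct uint128 { std::uint64_t lo, hi; };

inline uint128 add(uint128 a, uint128 b)
{
    uint128 r;
    r.lo = a.lo + b.lo;                          // may wrap: well defined for unsigned
    r.hi = a.hi + b.hi + (r.lo < a.lo ? 1 : 0);  // propagate the carry manually
    return r;
}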

robertmac...@gmail.com

Nov 25, 2012, 12:31:31 PM
to std-pr...@isocpp.org


On Sunday, November 25, 2012 8:59:27 AM UTC-8, Marc Thibault wrote:
A few facts I wanted to share:...


Many overflow checks could be removed if the integer type knows its exact minimum and maximum.

The library I cited - http://rrsd.com/blincubator.com/bi_library/safe-numerics/ - does exactly that.  Using TMP techniques, it elides the overflow check if it can be determined that the result can never overflow.

Robert Ramey

Marc Thibault

Nov 25, 2012, 1:31:18 PM
to std-pr...@isocpp.org, robertmac...@gmail.com
I also believe that safe-numerics is very good. I think the problem of integers that grow at runtime has to be handled in a different library. Also, there is the independent problem of creating efficient accumulators.

safe_int<int64_t> i;
i.accumulate(first, last); // first, last are uint32_t*

The number of overflow checks could be greatly reduced in the above code.

robertmac...@gmail.com

Nov 25, 2012, 3:09:26 PM
to std-pr...@isocpp.org, robertmac...@gmail.com


On Sunday, November 25, 2012 10:31:18 AM UTC-8, Marc Thibault wrote:
I also believe that safe-numerics is very good. I think the problem of integers that grow at runtime has to be handled in a different library. Also, there is the independent problem of creating efficient accumulators.

safe_int<int64_t> i;
i.accumulate(first, last); // first, last are uint32_t*

The number of overflow checks could be greatly reduced in the above code.

Neither SafeInt nor safe<int64> has an accumulate function.  They are meant to be drop-in replacements for the base type int64, and adding a new interface would be a whole new kettle of fish.

Besides, I don't think anything more is needed:

safe<int64> total = 0;

for (...) {
    int32 x = ...;
    total += x;
}

would work just fine, and throws if the total were to overflow.

Robert Ramey

Vicente J. Botet Escriba

Nov 25, 2012, 4:25:07 PM
to std-pr...@isocpp.org
Le 25/11/12 19:31, Marc Thibault a écrit :
> I also believe that safe-numerics is very good. I think the problem of
> integers that grow at runtime has to be handled in a different library.
> Also, there is the independent problem of creating efficient accumulators.
>
> safe_int<int64_t> i;
> i.accumulate(first, last); // first, last are uint32_t*
>
> The number of overflow checks could be greatly reduced in the above
> code.

Well, all this depends on the initial value of i. If it is 0, I agree
that an accumulate member function could avoid most of the overflow checks.
I would suggest then that the accumulate function be a free function
returning just an int64_t, so that the initialization of the result is
controlled by the algorithm.

-- Vicente

Vicente J. Botet Escriba

Nov 25, 2012, 4:39:54 PM
to std-pr...@isocpp.org
Le 08/11/12 03:59, Ben Craig a écrit :
SafeInt information can be found here <http://safeint.codeplex.com/>.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft: a "drop-in" replacement for integer types that throws an exception whenever an integer overflow occurs.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low-effort addition, considering the "proof of concept" is already widely used within Microsoft.
Hi,

have you taken a look at "C++ Binary Fixed-Point Arithmetic", http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html ?

It seems to address most of the problems (if not all) that you are trying to solve.

Could you comment on which features you want that this proposal doesn't cover?

-- Vicente

robertmac...@gmail.com

unread,
Nov 25, 2012, 5:07:17 PM11/25/12
to std-pr...@isocpp.org


On Sunday, November 25, 2012 1:39:58 PM UTC-8, viboes wrote:
Le 08/11/12 03:59, Ben Craig a écrit :

Hi,

have you taken a look at "C++ Binary Fixed-Point Arithmetic", http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html ?

It seems to address most of the problems (if not all) that you are trying to solve.

Could you comment on which features you want that this proposal doesn't cover?

Very interesting.  I don't know whether or not I saw this when I made my version of safe<T>.

In general, I see it as very similar.  But there are some notable differences.

"The fixed-point library contains four class templates. They are cardinal and integral for integer arithmetic, and nonnegative and negatable for fractional arithmetic."

a) I use the integer traits defined by std::numeric_limits<T> to determine whether the type is signed/unsigned, the max/min, etc. So instead of having a variety of templates, there is only one, using TMP to determine the traits.

b) I don't address fractions or rounding at all. 

c) I considered a policy parameter of some sort to indicate how to handle overflows - they use an enum to select.  But I rejected the idea for two reasons

i) more complexity with very little value.
ii)  I couldn't see how to do it with something like

safe<int32> x;
safe<int16> y;
safe<int32> z;
z = x + y;  // where would the policy parameter go?

In the end, I re-focused on a drop-in replacement for all integer types.  "All integer types" means all types - intrinsic and user-defined - for which std::numeric_limits<T>::is_integer is true.

To summarize, SafeInt and safe<T> are different from the proposal in the paper above.  But I believe they are better.

Robert Ramey


-- Vicente

Vicente J. Botet Escriba

Nov 26, 2012, 2:17:01 AM
to std-pr...@isocpp.org
Le 25/11/12 23:07, robertmac...@gmail.com a écrit :


On Sunday, November 25, 2012 1:39:58 PM UTC-8, viboes wrote:
Le 08/11/12 03:59, Ben Craig a écrit :

Hi,

have you taken a look at "C++ Binary Fixed-Point Arithmetic", http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html ?

It seems to address most of the problems (if not all) that you are trying to solve.

Could you comment on which features you want that this proposal doesn't cover?

Very interesting.  I don't know whether or not I saw this when I made my version of safe<T>.
My question was addressed to the OP. Anyway ...


In general, I see it as very similar.  But there are some notable differences.

"The fixed-point library contains four class templates. They are cardinal and integral for integer arithmetic, and nonnegative and negatable for fractional arithmetic."

a) I use the integer traits defined by std::numeric_limits<T> to determine whether the type is signed/unsigned, the max/min, etc. So instead of having a variety of templates, there is only one, using TMP to determine the traits.
I don't see an added value at the user level. I'm open to variations.


b) I don't address fractions or rounding at all. 
Well, this is outside the OP's subject, so no matter. I can understand that we might want safe integers independently.


c) I considered a policy parameter of some sort to indicate how to handle overflows - they use an enum to select.  But I rejected the idea for two reasons

i) more complexity with very little value.
ii)  I couldn't see how to do it with something like

safe<int32> x;
safe<int16> y;
safe<int32> z;
z = x + y;  // where would the policy parameter go?
With Lawrence's proposal you don't have overflow as long as you don't lose range. So the natural place is to check only on assignment.

integral<31,OV1> x;
integral<15,OV2> y;
integral<31,OV3> z;

z = x + y;

Check for overflow only when doing the assignment, using the overflow policy OV3.

Putting the overflow policy on the type makes the code lighter weight.

We can also have a base class for integral that has no overflow policy and forbids assignments that can lose information:

basic_integral<31> x;
basic_integral<15> y;
basic_integral<31> z;
z = x + y; // fails to compile

The user would need to use a number_cast conversion, as in

z = number_cast<OV3>(x + y);



In the end, I re-focused on a drop-in replacement for all integer types.  "All integer types" means all types - intrinsic and user-defined - for which std::numeric_limits<T>::is_integer is true.

To summarize, SafeInt and safe<T> are different from the proposal in the paper above.  But I believe they are better.
Hmm, I need then to read both ;-) But do the OP's proposal or your safe library allow saturating on overflow, asserting, ignoring, ...?

Best,
Vicente

Vicente J. Botet Escriba

Nov 26, 2012, 2:42:03 AM
to std-pr...@isocpp.org
Le 26/11/12 08:17, Vicente J. Botet Escriba a écrit :
Where can I get the documentation of the OP's proposal?

I wanted to add: safe<T> could be defined as an alias of the fixed-point class.

-- Vicente

Ben Craig

Nov 26, 2012, 11:59:58 AM
to std-pr...@isocpp.org
Some large use cases for SafeInt are computing sizes for buffers and array indices.  In these situations, you probably want to throw an exception, or terminate the program.  Saturation, assertions in debug mode, and ignoring aren't very good options from a security standpoint.

Are the saturate, debug assert, and ignore cases useful in some other domain?  I can see some use for saturation in some cases where a (possibly implicit) range check was already present.  For instance, if I'm doing some calculations with the volume on my speakers, I may have an acceptable range of 0 - 65,535.  If my math comes up with a volume of 100,000, then setting the volume to 65,535 may be reasonable.  I don't know how often this kind of thing comes up though.
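For illustration, a saturating clamp of that kind is a one-liner (the helper name is mine, not from any proposal):

#include <algorithm>
#include <cstdint>

std::uint16_t saturate_volume(std::int32_t v)
{
    // Clamp to the speaker's acceptable range 0 - 65,535.
    return static_cast<std::uint16_t>(
        std::min<std::int32_t>(std::max<std::int32_t>(v, 0), 65535));
}
// saturate_volume(100000) == 65535; saturate_volume(-3) == 0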

If you stick to "drop-in" replacements, then I'm not sure if there's any benefit to an "ignore" policy.  Ignore is equivalent to the raw type.  Fixed-point libraries could make use of it though.  Debug assertions aren't any better from a security standpoint, but they could help discover bugs in non-security critical code.

robertmac...@gmail.com

Nov 26, 2012, 12:45:00 PM
to std-pr...@isocpp.org


On Sunday, November 25, 2012 11:17:04 PM UTC-8, viboes wrote:


c) I considered a policy parameter of some sort to indicate how to handle overflows - they use an enum to select.  But I rejected the idea for two reasons

i) more complexity with very little value.
ii)  I couldn't see how to do it with something like

safe<int32> x;
safe<int16> y;
safe<int32> z;
z = x + y;  // where would the policy parameter go?
With Lawrence's proposal you don't have overflow as long as you don't lose range. So the natural place is to check only on assignment.

It's exactly the same with safe<T> - EXCEPT in the case of safe<int64> + safe<int64>, where there is no larger type to hold the intermediate result. That is, the overflow can occur before the assignment, and in this case it throws immediately. I struggled with how to avoid this, but the solutions seemed more and more heavyweight.


integral<31,OV1> x;
integral<15,OV2> y;
integral<31,OV3> z;
z = x + y;

Check for overflow only when doing the assignment, using the overflow policy OV3.

I suppose this could be made to work - except for the note above.

Putting the overflow policy on the type makes the code lighter weight.

We can also have a base class for integral that has no overflow policy and forbids assignments that can lose information:

basic_integral<31> x;
basic_integral<15> y;
basic_integral<31> z;
z = x + y; // fails to compile

The user would need to use a number_cast conversion, as in

z = number_cast<OV3>(x + y);

I wanted to avoid this so as to keep the "drop-in" replacement functionality. I want people to take a program with bugs in it, change all their "int"s to safe<int>, etc., run the program, and trap the errors. This makes the library much, much more attractive to use.
 
But do the OP's proposal or your Safe library allow saturating on overflow, asserting, ignoring, ...?

safe<T> throws an exception.  Users would have to trap the exception if they want to handle it in some specific way.  I considered alternatives, but I felt that if something traps it's almost always an unrecoverable error - i.e. a programming error - so making it any fancier than necessary would be counterproductive - also refer to the note above regarding trying to specify a policy.

The thread started with the proposal to consider SafeInt for inclusion in the standard library.  safe<T> should be considered in this light.

Other proposals - modular arithmetic, decimal integers, etc should be considered separately.

Note that safe<T> uses the traits defined in std::numeric_limits, so it should be possible to apply safe<T> to any type which has std::numeric_limits<T> implemented.  (I doubt it's possible now - but I believe it can be made to work.)  This would leave the concept of "safe" orthogonal to the "number type", which is also where I believe we would want to be.

Robert Ramey.


robertmac...@gmail.com

unread,
Nov 26, 2012, 12:49:16 PM11/26/12
to std-pr...@isocpp.org


I wanted to add: safe<T> could be defined as an alias of the fixed-point class.

I would like to keep the "safe" concept/idea orthogonal to the "numeric" concept/idea.

Robert Ramey


robertmac...@gmail.com

unread,
Nov 26, 2012, 12:57:59 PM11/26/12
to std-pr...@isocpp.org


On Monday, November 26, 2012 8:59:59 AM UTC-8, Ben Craig wrote:
Some large use cases for SafeInt are computing sizes for buffers and array indices.  In these situations, you probably want to throw an exception, or terminate the program.  Saturation, assertions in debug mode, and ignoring aren't very good options from a security standpoint.

Agreed.  But note that SafeInt or safe<unsigned int> isn't that useful for things like buffer sizes or array indices unless the buffer happens to be exactly max<unsigned int> long, or something like that.

However, it turns out there is a very slick solution here.  When I started this effort, what I really wanted to make was "safe_range<min, max>" for just this purpose.  I made this, and since it depends upon std::numeric_limits, it could use the equivalent of

template<typename T>
class safe : public safe_range<std::numeric_limits<T>::min(), std::numeric_limits<T>::max()> {
};

to make safe<int> etc.

So to trap array and buffer overflows one would already have (for free)

char a[12345];
safe_unsigned_range<0, sizeof(a)> aindex;

and you're in business. It's actually a side effect of the way it's implemented.
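For readers following along, a minimal sketch of the idea (illustrative only; the actual library differs in details):

// A range-checked integer: assignment outside [Min, Max] throws.
#include <stdexcept>

template <long Min, long Max>
class safe_unsigned_range {
    long value_ = Min;
public:
    safe_unsigned_range() = default;
    safe_unsigned_range& operator=(long v)
    {
        if (v < Min || v > Max)
            throw std::range_error("safe_unsigned_range: out of range");
        value_ = v;
        return *this;
    }
    operator long() const { return value_; }
};

char a[12345];
safe_unsigned_range<0, sizeof(a)> aindex;   // as in the usage above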
If you stick to "drop-in" replacements, then I'm not sure if there's any benefit to an "ignore" policy.  Ignore is equivalent to the raw type. 

+1
 
Fixed-point libraries could make use of it though.  Debug assertions aren't any better from a security standpoint, but they could help discover bugs in non-security critical code.

I would like to avoid "feature creep"

Robert Ramey
 

Lawrence Crowl

unread,
Nov 26, 2012, 5:59:40 PM11/26/12
to std-pr...@isocpp.org
On 11/23/12, Fernando Cacciola <fernando...@gmail.com> wrote:
> On Nov 23, 2012 <robertmac...@gmail.com> wrote:
> > On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:
> > > That would likely turn into a fixed-point arithmetic
> > > <http://en.wikipedia.org/wiki/Fixed-point_arithmetic> library.
>
> I would rather say that a fixed-point library, as well as a
> rationals library, would be alongside (or maybe one level above)
> safe-int, as opposed to a generalization of it.

The committee has a Study Group 6 addressing issues of number
representation. One of the tasks is figuring out the logical
relationships and implementation relationships between various
number types.

We welcome contributions.

> Both fixed_point<T> and rational<T> are useful as long as the
> values do not overflow. However, and this is particularly true in
> the case of rationals, there are several application domains where
> computations can easily, and often, overflow. In these cases, one
> must use a big_int (unlimited precision integer) as T (whether for
> a fixed_point or rational) But that is pessimistically inefficient,
> so one would consider a non-template "exact" numeric type which
> would have a dynamic internal representation of rational<T>
> but which would automatically switch from T to 2T (meaning a two
> times bigger integer), until it reaches big_int.

The study group has reached a tentative conclusion that a rational
should be based either on a bigint, or do rounding when the
representation is not sufficient for the true result. The former
would be useful in computational geometry. The latter would be
useful in music.

Fixed point is useful even in the presence of overflow, but the
method for dealing with the overflow may vary depending on the
application. In these cases, I think it generally best to give
programmers the tools they need, rather than pick a single solution.
(Plain unsigned ints picked one solution, and it is inappropriate
in a number of cases.)

--
Lawrence Crowl

Lawrence Crowl

unread,
Nov 26, 2012, 6:09:45 PM11/26/12
to std-pr...@isocpp.org
On 11/25/12, Vicente J. Botet Escriba <vicent...@wanadoo.fr> wrote:
> On 25/11/12 23:07, robertmac...@gmail.com wrote:
> > On Sunday, November 25, 2012 1:39:58 PM UTC-8, viboes wrote:
> > > On 08/11/12 03:59, Ben Craig wrote:
> > > > have you taken a look at "C++ Binary Fixed-Point Arithmetic"
> > > > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html ?
> > > >
> > > > It seems to respond to most of the problems (if not all)
> > > > you try to solve.
> > > >
> > > > Could you comment on features you want that this proposal
> > > > doesn't cover?
> >
> > Very interesting. I don't know whether or not I saw this when
> > I made my version of safe<T>.
>
> My question was addressed to the OP. Anyway ...
>
> > In general, I see it as very similar. But there are some
> > notable differences.
> >
> > "The fixed-point library contains four class templates. They
> > are cardinal and integral for integer arithmetic, and
> > nonnegative and negatable for fractional arithmetic."
> >
> > a) I use the integer traits defined by std::numeric_limits<T> to
> > determine whether the type is signed/unsigned, the max/min,
> > etc. So instead of having a variety of templates, there is only
> > one using TMP to determine the traits.
>
> I don't see an added value at the user level. I'm open to
> variations.

I am the author of that proposal. The difference is that my proposal
is parameterized by the number of bits, not by a representation type.
The implementation is responsible for picking a representation
suitable to the number of bits and the sign. So, in my proposal,
there is no type parameter available to extract that information.
That was part of the feedback from the committee's first review
of the proposal.

> > In the end, I re-focused on a drop-in replacement for all integer
> > types. "All integer types" means all types - intrinsic and
> > user-defined - for which std::numeric_limits<T>::is_integer is true.
> >
> > To summarize, SafeInt and safe<T> are different from the proposal
> > in the paper above. But I believe they are better.
>
> Hmm, then I need to read both ;-) But do the OP's proposal or your
> Safe library allow saturating on overflow, asserting, ignoring, ...?

A safe integer proposal and my fixed-point proposal do serve
different purposes. My proposal works best when the range and
precision are derivable from the data, e.g. pixel values. A safe
integer type works best when the range is derivable from the machine,
e.g. indexing.

So, I think there is room for a safe integer proposal. I would,
however, like to see it conceptually integrated with my proposal
so that we have one mechanism for specifying overflow behavior, etc.

--
Lawrence Crowl

Lawrence Crowl

unread,
Nov 26, 2012, 6:17:54 PM11/26/12
to std-pr...@isocpp.org
On 11/26/12, Ben Craig <ben....@gmail.com> wrote:
> Some large use cases for SafeInt are computing sizes for buffers
> and array indices. In these situations, you probably want
> to throw an exception, or terminate the program. Saturation,
> assertions in debug mode, and ignoring aren't very good options
> from a security standpoint.

Not necessarily true. An exception could be vulnerable to a
denial-of-service attack, whereas saturation may just deliver
fewer widgets than desired.

> Are the saturate, debug assert, and ignore cases useful in some
> other domain? I can see some use for saturation in some cases
> where a (possibly implicit) range check was already present.
> For instance, if I'm doing some calculations with the volume
> on my speakers, I may have an acceptable range of 0 - 65,535.
> If my math comes up with a volume of 100,000, then setting the
> volume to 65,535 may be reasonable. I don't know how often this
> kind of thing comes up though.

It comes up often in signal processing applications. The C standard
has an option for fixed-point arithmetic, and it allows saturated
arithmetic for exactly this reason.

> If you stick to "drop-in" replacements, then I'm not sure if
> there's any benefit to an "ignore" policy. Ignore is equivalent to
> the raw type. Fixed-point libraries could make use of it though.
> Debug assertions aren't any better from a security standpoint,
> but they could help discover bugs in non-security critical code.

Can we be clear on what "ignore" means? I have two policies in mind,
both of which allow the implementation to ignore overflow.

"I have done a mathematical proof that overflow cannot occur.
Code reviewers please check my proof."

"I have really low reliability constraints, and do not have the need
or time to make the code correct in all circumstances. Don't bother
me with overflow."

--
Lawrence Crowl

Lawrence Crowl

unread,
Nov 26, 2012, 6:28:42 PM11/26/12
to std-pr...@isocpp.org
On 11/26/12, robertmac...@gmail.com
<robertmac...@gmail.com> wrote:
> On Sunday, November 25, 2012 11:17:04 PM UTC-8, viboes wrote:
> > > c) I considered a policy parameter of some sort to indicate
> > > how to handle overflows - they use an enum to select. But I
> > > rejected the idea for two reasons
> > >
> > > i) more complexity with very little value.
> > > ii) I couldn't see how to do it with something like
> > >
> > > safe<int32> x;
> > > safe<int16> y;
> > > safe<int32> z;
> > > z = x + y; // where would the policy parameter go?
> >
> > With the Lawrence proposal you don't have overflow as long as
> > you don't lose range. So the natural place is to apply the
> > policy only on assignment.
>
> It's exactly the same with safe<T> - EXCEPT for the case of
> safe<int64> + safe<int64>, where there is no larger type to
> hold the intermediate result. That is, the overflow can occur
> before the assignment. In this case, it throws immediately.
> I struggled with how to avoid this - but the solutions seemed
> more and more heavyweight.

Given that expressions can have an arbitrary number of operators,
how do you handle other intermediate results that might overflow?

> > integral<31,OV1> x;
> > integral<15,OV2> y;
> > integral<31,OV3> z;
> > z = x + y;
> >
> > Check for overflow only when doing the assignment using the
> > overflow OV3 policy.
>
> I suppose this could be made to work - except for the note above.

The intent in the fixed-point library is that the intermediate type
would switch to a multi-precision implementation. An alternative
is to cause a compilation error.

> > Adding the overflow policy to the type makes the code
> > lighter weight.
> >
> > We can also have a base class for integral that has no overflow
> > policy and forbids assignments that can lose information
> >
> > basic_integral<31> x;
> > basic_integral<15> y;
> > basic_integral<31> z;
> > z = x + y; // compile-time error
> >
> > The user will need to use a number_cast conversion like in
> >
> > z = number_cast<OV3>(x + y);
>
> I wanted to avoid this so as to keep the "drop-in" replacement
> functionality. I want people to take a program with bugs in it,
> change all their "int" to safe<int>, etc., run the program, and
> trap errors. This makes the library much, much more attractive to use.

I agree that this approach will produce a valuable tool.

> > But do the OP's proposal or your Safe library allow saturating
> > on overflow, asserting, ignoring, ...?
>
> The safe<T> throws an exception. Users would have to trap
> the exception if they want to handle it in some specific way.
> I considered alternatives, but I felt that if something traps it's
> almost always an unrecoverable error - ie a programming error - so
> making it any fancier than necessary would be counter productive -
> also refer to the note above regarding trying to specify a policy.
>
> The thread started with the proposal to consider SafeInt for
> inclusion in the standard library. safe<T> should be considered
> in this light.
>
> Other proposals - modular arithmetic, decimal integers, etc should
> be considered separately.

I agree that a safe-int proposal makes sense, but we want the proposal
to integrate with the language and library as a whole. We have a
study group to avoid unintended incompatibilities within the C++
library. So, all these proposals should be considered together.

> Note that safe<T> uses the traits defined in std::numeric_limits so
> that it should be possible to apply safe<T> to any type which has
> std::numeric_limits<T> implemented. (I doubt it's possible now - but I
> believe it can be made to work). This would leave the concept of
> "safe" as orthogonal to the "number type" which is also where I
> believe we would want to be.

I have less confidence that we can have one definition of "safe",
let alone apply it uniformly to all numbers.

--
Lawrence Crowl

Vicente J. Botet Escriba

unread,
Nov 26, 2012, 6:32:07 PM11/26/12
to std-pr...@isocpp.org
On 23/11/12 18:10, robertmac...@gmail.com wrote:


On Friday, November 23, 2012 4:26:08 AM UTC-8, Fernando Cacciola wrote:

On Fri, Nov 23, 2012 at 3:45 AM, <robertmac...@gmail.com> wrote:


On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:

 
That would likely turn into a fixed-point arithmetic library.


I would rather say that a fixed-point library, as well as a rationals library, would be alongside (or maybe one level above) safe-int, as opposed to a generalization of it.


I'm not sure I followed your suggestion, but somehow I'm thinking I might be in agreement. My view would be that the "safe" idea is orthogonal to the "numeric" idea.  So

int, rational<T>, multiprecision<N>, ... would have std::numeric_limits<T> defined. std::numeric_limits includes members to define min/max values and a lot of other features of numeric types.

This would make them fulfill the "Numeric" concept (type requirements).

safe<T> would be defined for any type T fulfilling the "Numeric" concept.
safe and integers could be orthogonal, but I don't see how a safe<T> class could provide safety to rational, as overflow is avoided by using gcd while doing the addition of two rationals, and many other tricks.
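(For context, the gcd trick mentioned here looks roughly like the following; a sketch of the classic technique, using C++17's std::gcd for brevity, not any particular library's code:)

// Rational addition with gcd reduction: intermediate products stay
// smaller than in the naive n1*d2 + n2*d1 over d1*d2 formula.
#include <numeric>   // std::gcd

struct rat { long num, den; };

rat add(rat a, rat b)
{
    long g  = std::gcd(a.den, b.den);
    long n  = a.num * (b.den / g) + b.num * (a.den / g);
    long d  = (a.den / g) * b.den;
    long g2 = std::gcd(n, d);   // reduce the result again
    return { n / g2, d / g2 };
}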


and safe<T> would also have std::numeric_limits implemented, so it would fulfill the "Numeric" concept as well.

This would separate the "safe" idea from the "numeric" idea and permit users to use the "safe" version if and only if actually desired.  It would also clarify what "safe" does as opposed to what "numeric" does.  My proposal implements this idea for current integer types. 
So maybe your class should be renamed as safe_int ;-)


It hasn't really been investigated whether this extends well to all integer types for which std::numeric_limits is implemented.  Anyway, this is far into the future.


I guess you meant any number type.

-- Vicente

robertmac...@gmail.com

unread,
Nov 26, 2012, 9:16:54 PM11/26/12
to std-pr...@isocpp.org


On Monday, November 26, 2012 3:32:09 PM UTC-8, viboes wrote:
On 23/11/12 18:10, robertmac...@gmail.com wrote: safe and integers could be orthogonal, but I don't see how a safe<T> class could provide safety to rational, as overflow is avoided by using gcd while doing the addition of two rationals, and many other tricks.

then safe<rational> wouldn't throw on overflow - because it can't happen.  Same with safe<multiprecision>.  But currently safe<T> throws on an attempt to divide by zero.  So the concept is easily extended to other numeric types.  I can easily imagine a safe<float> and safe<double> which could overflow, underflow, and divide by zero.  Basically safe<T> would throw any time an operation on T doesn't yield the expected mathematical result.
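(A sketch of the divide-by-zero check described here - illustrative only, not the actual safe<T> code:)

// safe<T>-style division: trap the case where the mathematical
// result does not exist.  (A full implementation would also trap
// the INT_MIN / -1 overflow case.)
#include <stdexcept>

int checked_divide(int a, int b)
{
    if (b == 0)
        throw std::domain_error("safe<T>: divide by zero");
    return a / b;
}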

Note that I'm getting ahead of myself here.  safe<T> is only implemented for types which are integers according to std::numeric_limits<T>.  For other types it will trip a compile-time assert.
 

Robert Ramey

Fernando Cacciola

unread,
Nov 27, 2012, 8:25:25 AM11/27/12
to std-pr...@isocpp.org
On Sun, Nov 25, 2012 at 6:39 PM, Vicente J. Botet Escriba <vicent...@wanadoo.fr> wrote:
On 08/11/12 03:59, Ben Craig wrote:
SafeInt information can be found here.  Basically, it is an open source library authored by security expert David LeBlanc of Microsoft.  It is basically a "drop-in" replacement for integer types, and will throw an exception whenever integer overflows occur.  I believe that getting this added to the standard would be a boon to C++ and secure code.  It should be a relatively low effort addition considering the "proof-of-concept" is already widely used within Microsoft.
--
 
 
 
Hi,

have you taken a look at "C++ Binary Fixed-Point Arithmetic" http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html?

I've been wanting to read that proposal, but never did before.

It looks to me like a very good proposal, BTW.

Fernando Cacciola

unread,
Nov 27, 2012, 8:32:05 AM11/27/12
to std-pr...@isocpp.org
On Mon, Nov 26, 2012 at 1:59 PM, Ben Craig <ben....@gmail.com> wrote:
Some large use cases for SafeInt are computing sizes for buffers and array indices.

Hmmm.
Can you elaborate on how SafeInt is safe when it comes to an array index in the case of an array of runtime size? Which is, I believe, by far the most common type of array (buffer, container, etc.)

For *that* purpose I think the proper tool is an integer with runtime boundaries (a number of these have been implemented and proposed to, for instance, Boost)

 
  In these situations, you probably want to throw an exception, or terminate the program. 

This is off-topic, but IMO you never ever want to terminate a program when you use a language that supports exceptions, like C++

 
Saturation, assertions in debug mode, and ignoring aren't very good options from a security standpoint.


Maybe. It depends on how you define security.
But for sure those are good options in several application domains, and a standard C++ library facility should consider them all (as much as possible)
 

Fernando Cacciola

unread,
Nov 27, 2012, 8:44:02 AM11/27/12
to std-pr...@isocpp.org
On Mon, Nov 26, 2012 at 2:45 PM, <robertmac...@gmail.com> wrote:


On Sunday, November 25, 2012 11:17:04 PM UTC-8, viboes wrote:


c) I considered a policy parameter of some sort to indicate how to handle overflows - they use an enum to select.  But I rejected the idea for two reasons

i) more complexity with very little value.
ii)  I couldn't see how to do it with something like

safe<int32> x;
safe<int16> y;
safe<int32> z;
z = x + y;  // where would the policy parameter go?
With the Lawrence proposal you don't have overflow as long as you don't lose range. So the natural place is to apply the policy only on assignment.

It's exactly the same with safe<T> - EXCEPT for the case of safe<int64> + safe<int64>, where there is no larger type to hold the intermediate result. That is, the overflow can occur before the assignment.  In this case, it throws immediately.  I struggled with how to avoid this - but the solutions seemed more and more heavyweight.

Hmm, why isn't it as simple as using a software-based int128 number type?
 

integral<31,OV1> x;
integral<15,OV2> y;
integral<31,OV3> z;
z = x + y;

Check for overflow only when doing the assignment using the overflow OV3 policy.

I suppose this could be made to work - except for the note above.

Adding the overflow policy to the type makes the code lighter weight.

We can also have a base class for integral that has no overflow policy and forbids assignments that can lose information

basic_integral<31> x;
basic_integral<15> y;
basic_integral<31> z;
z = x + y; // compile-time error

The user will need to use a number_cast conversion like in

z = number_cast<OV3>(x + y);

I wanted to avoid this so as to keep the "drop-in" replacement functionality. I want people to take a program with bugs in it, change all their "int" to safe<int>, etc., run the program, and trap errors.

Imagine you have a program and you get a division-by-zero exception. What do you do?
You change the program so that, instead of attempting a computation that cannot be performed, it detects the case and handles the "exceptional case" manually in a different way.
You could just let the exception abort the current execution block, but that's NOT the way division by zero is handled.

Now suppose you change all the 'int's to safe<int> and you discover a point where overflow is occurring. What do you do?
By analogy to the div-by-zero case, and IME (and I do have experience writing algorithms that need to handle overflow, both integer and floating-point), you also want to detect the case beforehand so you can do something else.
Simply changing a program so that it throws instead of hitting UB is a step ahead, and it surely works for general application code. But if you are writing a numerical algorithm, OTOH, you need, for example, a clean and simple way to handle the overflow and do something else *without* aborting the computation.
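(A sketch of what such a non-aborting interface could look like - hypothetical names, one of several possible designs:)

// Checked addition that reports failure instead of throwing,
// so a numerical algorithm can branch to a fallback path.
#include <limits>

bool try_add(int a, int b, int& out)
{
    if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
        (b < 0 && a < std::numeric_limits<int>::min() - b))
        return false;          // caller chooses what to do on overflow
    out = a + b;
    return true;
}

// int sum;
// if (!try_add(x, y, sum)) { /* e.g. switch to a wider type */ }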

 
 
But do the OP's proposal or your Safe library allow saturating on overflow, asserting, ignoring, ...?

safe<T> throws an exception.  Users would have to trap the exception if they want to handle it in some specific way.  I considered alternatives, but I felt that if something traps it's almost always an unrecoverable error - i.e. a programming error

Wait.
Numerical overflow is *rarely* a programming error. It's the consequence of finite precision when crunching numbers.

Fernando Cacciola

unread,
Nov 27, 2012, 8:50:21 AM11/27/12
to std-pr...@isocpp.org
On Mon, Nov 26, 2012 at 7:59 PM, Lawrence Crowl <cr...@googlers.com> wrote:
On 11/23/12, Fernando Cacciola <fernando...@gmail.com> wrote:
> On Nov 23, 2012 <robertmac...@gmail.com> wrote:
> > On Thursday, November 22, 2012 6:10:24 PM UTC-8, Ben Craig wrote:
> > > That would likely turn into a fixed-point arithmetic
> > > <http://en.wikipedia.org/wiki/Fixed-point_arithmetic> library.
>
> I would rather say that a fixed-point library, as well as a
> rationals library, would be alongside (or maybe one level above)
> safe-int, as opposed to a generalization of it.

The committee has a Study Group 6 addressing issues of number
representation.  One of the tasks is figuring out the logical
relationships and implementation relationships between various
number types.

We welcome contributions.

How do I join Study Group 6?

I would very much like to contribute.
 
> Both fixed_point<T> and rational<T> are useful as long as the
> values do not overflow. However, and this is particularly true in
> the case of rationals, there are several application domains where
> computations can easily, and often, overflow. In these cases, one
> must use a big_int (unlimited precision integer) as T (whether for
> a fixed_point or rational) But that is pessimistically inefficient,
> so one would consider a non-template "exact" numeric type which
> would have a dynamic internal representation of rational<T>
> but which would automatically switch from T to 2T (meaning a two
> times bigger integer), until it reaches big_int.

The study group has reached a tentative conclusion that a rational
should be based either on a bigint, or do rounding when the
representation is not sufficient for the true result.  The former
would be useful in computational geometry.  The latter would be
useful in music.

I agree. Well almost.
There are significant efficiency concerns when using bigint, and I think that in C++ it's possible to overcome them by means of a mechanism that promotes to "the next most efficient integer type needed" as computations are performed.
Something of this form has been done, sort of manually, over the past decade or more, in different unrelated projects, and it would be fantastic to capture some of that within standard C++.
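(A sketch of the promotion idea - a hypothetical trait, stopping at 64 bits where a real library would continue into a bigint:)

// Map each integer type to the next wider one, and use the wider
// type for results so the operation itself cannot overflow.
#include <cstdint>

template <class T> struct next_wider;   // primary template left undefined
template <> struct next_wider<std::int16_t> { using type = std::int32_t; };
template <> struct next_wider<std::int32_t> { using type = std::int64_t; };

template <class T>
typename next_wider<T>::type wide_add(T a, T b)
{
    using W = typename next_wider<T>::type;
    return static_cast<W>(a) + static_cast<W>(b);
}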


Fixed point is useful even in the presence of overflow, but the
method for dealing with the overflow may vary depending on the
application.  In these cases, I think it generally best to give
programmers the tools they need, rather than pick a single solution.
(Plain unsigned ints picked one solution, and it is inappropriate
in a number of cases.)

Agreed.

Ben Craig

unread,
Nov 27, 2012, 9:54:34 AM11/27/12
to std-pr...@isocpp.org
On Tue, Nov 27, 2012 at 7:32 AM, Fernando Cacciola <fernando...@gmail.com> wrote:

On Mon, Nov 26, 2012 at 1:59 PM, Ben Craig <ben....@gmail.com> wrote:
Some large use cases for SafeInt are computing sizes for buffers and array indices.

Hmmm.
Can you elaborate on how SafeInt is safe when it comes to an array index in the case of an array of runtime size? Which is, I believe, by far the most common type of array (buffer, container, etc.)

For *that* purpose I think the proper tool is an integer with runtime boundaries (a number of these have been implemented and proposed to, for instance, Boost)

Without a facility like SafeInt, a developer would likely write (buggy) code like this:
size_t index = user_controlled_x+user_controlled_y;
if(index > array_size)
   throw std::range_error("");

With SafeInt, you get something like this:
SafeInt<size_t> index = SafeInt<uint32_t>(user_controlled_x)+SafeInt<uint32_t>(user_controlled_y);
if(index > array_size)
   throw std::range_error("");

SafeInt doesn't do everything for you, but it gets the difficult checks out of the way, and lets the programmer handle the easy, program-specific checks.  A more fully range-checked class might be appropriate, but there isn't as much field experience with that.

 
  In these situations, you probably want to throw an exception, or terminate the program. 

This is off-topic, but IMO you never ever want to terminate a program when you use a language that supports exceptions, like C++

If you hit undefined behavior, you should probably terminate the program as quickly as possible.  This is the approach that "stack canary" implementations take.  However, since we are talking about C++ spec work here, we should probably only talk about things with defined behavior.  For those, I agree with you that exceptions are the better approach.
 
Saturation, assertions in debug mode, and ignoring aren't very good options from a security standpoint.
Maybe. It depends on how you define security.
But for sure those are good options in several application domains, and a standard C++ library facility should consider them all (as much as possible)
 
Let's take it from a code review standpoint.  If SafeInt (or some alternative) is used, then any place where C++ "a+b" is not the same as arithmetic "a+b", you should get an exception, or there should be something explicit in the code indicating that a different policy is required at that point.  Maybe the "drop-in" stuff only allows exceptions, but saturation and modulo arithmetic get free functions?  "short myVal = saturation_cast<short>(x, 0, 100);"
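(A sketch of that free function - signature guessed from the example above, illustrative only:)

// Explicit, reviewable saturation: clamp into [lo, hi], then convert.
#include <algorithm>

template <class To, class From>
To saturation_cast(From x, From lo, From hi)
{
    return static_cast<To>(std::min(std::max(x, lo), hi));
}

// short myVal = saturation_cast<short>(x, 0, 100);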

Olaf van der Spek

unread,
Nov 27, 2012, 12:24:59 PM11/27/12
to std-pr...@isocpp.org
On Tuesday, November 27, 2012 3:54:38 PM UTC+1, Ben Craig wrote:
Without a facility like SafeInt, a developer would likely write (buggy) code like this:
size_t index = user_controlled_x+user_controlled_y;
if(index > array_size)
   throw std::range_error("");

With SafeInt, you get something like this:
SafeInt<size_t> index = SafeInt<uint32_t>(user_controlled_x)+SafeInt<uint32_t>(user_controlled_y);
if(index > array_size)
   throw std::range_error("");

I assume you meant index >= array_size? :p 

Fernando Cacciola

unread,
Nov 27, 2012, 12:29:18 PM11/27/12
to std-pr...@isocpp.org
On Mon, Nov 26, 2012 at 11:16 PM, <robertmac...@gmail.com> wrote:


On Monday, November 26, 2012 3:32:09 PM UTC-8, viboes wrote:
On 23/11/12 18:10, robertmac...@gmail.com wrote: safe and integers could be orthogonal, but I don't see how a safe<T> class could provide safety to rational, as overflow is avoided by using gcd while doing the addition of two rationals, and many other tricks.

then safe<rational> wouldn't throw on overflow - because it can't happen. 

Strictly speaking, you can end up with an irreducible fraction. In that case, operating with such a fraction can overflow.

OTOH, gcd is time consuming, so it is sometimes more efficient to just use big enough integers as long as you can, for which you need to have a form of overflow management.


Olaf van der Spek

unread,
Nov 27, 2012, 12:33:09 PM11/27/12
to std-pr...@isocpp.org
On Tuesday, November 27, 2012 2:33:03 PM UTC+1, Fernando Cacciola wrote:
This is off-topic, but IMO you never ever want to terminate a program when you use a language that supports exceptions, like C++

Why not? The difference between aborting and throwing is basically whether you can catch and continue (safely).
For the majority of apps, aborting is fine for certain errors, and exceptions aren't without cost and can't be used everywhere.

Fernando Cacciola

unread,
Nov 27, 2012, 12:36:58 PM11/27/12
to std-pr...@isocpp.org
On Tue, Nov 27, 2012 at 11:54 AM, Ben Craig <ben....@gmail.com> wrote:


On Tue, Nov 27, 2012 at 7:32 AM, Fernando Cacciola <fernando...@gmail.com> wrote:

On Mon, Nov 26, 2012 at 1:59 PM, Ben Craig <ben....@gmail.com> wrote:
Some large use cases for SafeInt are computing sizes for buffers and array indices.

Hmmm.
Can you elaborate on how SafeInt is safe when it comes to an array index in the case of an array of runtime size? Which is, I believe, by far the most common type of array (buffer, container, etc.)

For *that* purpose I think the proper tool is an integer with runtime boundaries (a number of these have been implemented and proposed to, for instance, Boost)

Without a facility like SafeInt, a developer would likely write (buggy) code like this:
size_t index = user_controlled_x+user_controlled_y;
if(index > array_size)
   throw std::range_error("");

With SafeInt, you get something like this:
SafeInt<size_t> index = SafeInt<uint32_t>(user_controlled_x)+SafeInt<uint32_t>(user_controlled_y);
if(index > array_size)
   throw std::range_error("");

SafeInt doesn't do everything for you, but it gets the difficult checks out of the way, and lets the programmer handle the easy, program specific checks.  A more fully range checked class might be appropriate, but there isn't as much field experience with that.

OK, I see the motivation: making sure you don't accidentally compute an index value the wrong way because of the current limitations of integer types.
 

 
  In these situations, you probably want to throw an exception, or terminate the program. 

This is off-topic, but IMO you never ever want to terminate a program when you use a language that supports exceptions, like C++

If you hit undefined behavior, you should probably terminate the program as quickly as possible.  This is the approach that "stack canary" implementations take.  However, since we are talking about C++ spec work here, we should probably only talk about things with defined behavior.  For those, I agree with you that exceptions are the better approach.
 
Saturation, assertions in debug mode, and ignoring aren't very good options from a security standpoint.
Maybe. It depends on how you define security.
But for sure those are good options in several application domains, and a standard C++ library facility should consider them all (as much as possible)
 
Let's take it from a code review standpoint.  If SafeInt (or some alternative) is used, then any place where C++ "a+b" is not the same as arithmetic "a+b", you should get an exception, or there should be something explicit in the code indicating that a different policy is required at that point.  Maybe the "drop-in" stuff only allows exceptions, but saturation and modulo arithmetic get free functions?  "short myVal = saturation_cast<short>(x, 0, 100);"

I agree that a utility intended to be a drop-in replacement for 'int' should simply throw in the case of overflow (when it is assigned, or in corner cases).
I also agree that such a utility might be useful.

Having said that, I still think we need a lower layer, used by such a utility, that would also be used by other similar utilities (like fixed_int<>). That lower layer would provide general building blocks, with your saturation_cast<short> being a possible one.

Fernando Cacciola

unread,
Nov 27, 2012, 12:42:46 PM11/27/12
to std-pr...@isocpp.org

I'm tempted to respond, but I think this thread is not the correct place (and I'm not sure where would be).

Anyway, let me just rephrase that: IMO, the *end user* never ever wants you, the programmer, to structure the program in such a way that the *entire application* must abort, as opposed to the current "task" (task from the end user's POV).

 

Vicente J. Botet Escriba

unread,
Nov 27, 2012, 1:05:25 PM11/27/12
to std-pr...@isocpp.org
On 27/11/12 00:28, Lawrence Crowl wrote:
> On 11/26/12, robertmac...@gmail.com
> <robertmac...@gmail.com> wrote:
>> I wanted to avoid this so as to keep the "drop-in" replacement
>> functionality. I want people to take a program with bugs in it,
>> change all their "int" to safe<int>, etc., run the program, and trap
>> errors. This makes the library much, much more attractive to use.
> I agree that this approach will produce a valuable tool.
>
>
I don't know if I understand your use case: you want to use safe as a
debug tool? Then IMHO the best approach would be to terminate the program,
wouldn't it?

I don't know if your SafeInt/safe<T> prevents assignments of
signed values to unsigned types, but this could also be useful.
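(A sketch of the kind of check meant here - illustrative, not SafeInt's actual code:)

// Reject a negative signed value instead of silently wrapping
// when it is assigned to an unsigned target.
#include <stdexcept>

unsigned checked_to_unsigned(int v)
{
    if (v < 0)
        throw std::range_error("negative value assigned to unsigned");
    return static_cast<unsigned>(v);
}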

-- Vicente

Vicente J. Botet Escriba

unread,
Nov 27, 2012, 1:08:22 PM11/27/12
to std-pr...@isocpp.org
On 27/11/12 03:16, robertmac...@gmail.com wrote:


On Monday, November 26, 2012 3:32:09 PM UTC-8, viboes wrote:
On 23/11/12 18:10, robertmac...@gmail.com wrote: safe and integers could be orthogonal, but I don't see how a safe<T> class could provide safety to rational, as overflow is avoided by using gcd while doing the addition of two rationals, and many other tricks.

then safe<rational> wouldn't throw on overflow - because it can't happen. 
Sorry, I wanted to say "to avoid intermediate overflows (before normalization)". To compute {n1/d1}+{n2/d2}, the rational library performs intermediate operations on the rational's value type (products, additions, and divisions) that can overflow. But nothing ensures that the template parameter T of rational is safe.
Same with safe<multiprecision>.  But currently safe<T> throws on an attempt to divide by zero.  So the concept is easily extended to other numeric types.  I can easily imagine a safe<float> and safe<double> which could overflow, underflow, and divide by zero.  Basically safe<T> would throw any time an operation on T doesn't yield the expected mathematical result.

Maybe you are right. We need a concrete proposal (only for integral types or for numbers in general) to continue the discussion.

--Vicente

Marshall Clow

unread,
Nov 27, 2012, 2:44:31 PM11/27/12