
is_modulo could be more clear

Chris Jefferson
Oct 30, 2006, 12:43:19 PM
I've recently been writing some maths code, which I can make more
efficient in the case that types are modulo, and therefore I decided to
optimise for types that have numeric_limits::is_modulo true.

Just out of interest, I went to see what the standard said about the
definition of modulo. It says:

"A type is modulo if it is possible to add two positive numbers and
have a
result that wraps around to a third number that is less."

This seems far too weak. I would expect is_modulo to mean something
like the following (this isn't suggested replacement wording, just the
aim of what I'd want):

Any addition, subtraction or multiplication between pairs of elements
is well-defined, and satisfies the condition that if A op B = C, then C
differs from the "true" value by a multiple of "MAX_VAL - MIN_VAL".

Basically, the traditional mathematical definition of modulo. Is this
the intended meaning? The definition as it stands seems effectively
useless, as it doesn't even appear to require that A + B is
well-defined for all A and B, just for at least one pair whose sum
wraps around...
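
To give the flavour of what I'm after, here is a minimal sketch (not
my real code; checked_add is just an illustrative name) of the kind of
dispatch I would like is_modulo to justify:

#include <limits>

// Overflow-checked addition. On a type that genuinely wraps we can
// detect overflow after the fact; otherwise we must test beforehand.
template <typename T>
bool checked_add(T a, T b, T& out)
{
    if (std::numeric_limits<T>::is_modulo)
    {
        out = a + b;                          // wraps, by assumption
        if (b > 0 && out < a) return false;   // wrapped past max
        if (b < 0 && out > a) return false;   // wrapped past min
        return true;
    }
    if (b > 0 && a > std::numeric_limits<T>::max() - b) return false;
    if (b < 0 && a < std::numeric_limits<T>::min() - b) return false;
    out = a + b;
    return true;
}

The fast path is only valid if is_modulo really promises wrapping for
every pair of operands, not just for one.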

---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.comeaucomputing.com/csc/faq.html ]

Maarten Kronenburg
Oct 30, 2006, 2:31:30 PM
Chris,
The mathematical meaning of modulo is:
x mod y = x - y * [ x / y ]
where [ ] truncates toward minus infinity (that is always downward).
This is also used in my infinite precision integer tr2 proposal:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2020.pdf
Probably the standard numeric_limits does not use this definition
because in hardware it usually works a bit differently, perhaps also
differently between processors.
The x86 unsigned int, for example, is indeed modular in this sense
with mod 2^32 (or 2^64), but the x86 signed int is of course
not modular in this sense (because of the sign bit).
For a mathematical modular integer, after some basic operation, the sign
(if not zero) always remains identical. Clearly for the signed x86 int
this is not the case, and probably neither for the signed int on other
processors.
Regards, Maarten.


"Chris Jefferson" <4zum...@gmail.com> wrote in message
news:1162206396....@e3g2000cwe.googlegroups.com...

Chris Jefferson
Oct 30, 2006, 4:37:07 PM

"Maarten Kronenburg" wrote:
> Chris,
> The mathematical meaning of modulo is:
> x mod y = x - y * [ x / y ]
> where [ ] truncates toward minus infinity (that is always downward).

That looks more to me like the traditional definition in terms of
programming and/or C. I always thought the mathematical definition of
modulo is that it splits your ring or field into a set of equivalence
classes, and it's usual to choose a single member from each equivalence
class to represent it. Under this definition 2's complement signed
integers behave perfectly well. The intention of the standard clearly
appears to be that signed integers "are modulo", as that's what it says.

I don't really want to argue over which definition is "right", like
many things in maths each word has many definitions, which often turn
out to be almost or even entirely equivalent under close inspection.
What I'm interested in is exactly what the C++ standard wants.

What I suspect it wants is that given any expression involving +,- or *
(I'm not positive what to do about division, the only case which
matters is either TYPE_MAX/-1 or TYPE_MIN/-1), the definition which
should be used is that you get the answer you would get by adding or
subtracting a multiple of "TYPE_MAX - TYPE_MIN" to get inside the range
[TYPE_MIN, TYPE_MAX]. I say that should be the definition mainly
because that is what I want to use.

One BIG thing I think we should have is that if any arithmetic
expression leads to undefined behaviour on any kind of overflow, then
is_modulo should definitely be false, which is probably more about what
optimisation opportunities the compiler takes than about how the
underlying type works. After that there are probably only a small number
of possible sensible definitions, and which one is in operation could
be checked.

Chris

Gennaro Prota
Oct 30, 2006, 5:06:21 PM
On Mon, 30 Oct 2006 19:31:30 GMT, "Maarten Kronenburg" wrote:

>Chris,
>The mathematical meaning of modulo is:
>x mod y = x - y * [ x / y ]
>where [ ] truncates toward minus infinity (that is always downward).

That is, (assuming that x and y are integers) the remainder of their
integer division. But that's one of the meanings, more common in
computing than maths.

In maths the word is used with many meanings and at various levels of
abstraction (integer arithmetic, ring algebra, set theory, mere
equivalence relation, and others); the best-known meaning is probably
that used in conjunction with the congruence notion: given three
integers x, y and n, x is congruent to y modulo n if and only if x and
y yield the same remainder when divided by n (the notion is due to
Gauss and his Disquisitiones Arithmeticae).

--
Gennaro Prota
[To mail me, remove any 'u' from the provided address]

Maarten Kronenburg
Oct 30, 2006, 7:50:36 PM

===================================== MODERATOR'S COMMENT:
Please trim appropriately and do not top-post.


===================================== END OF MODERATOR'S COMMENT
Gennaro,
No, the remainder is
x rem y = x - y * trunc( x / y )
where the trunc truncates toward zero.
This is not always equal to the mod.
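For example (a small sketch; the mod helper is my own, and note that
the sign of % with a negative operand is implementation-defined in
C++98, though truncation toward zero is what current hardware does):

#include <iostream>

// The built-in % is the remainder (truncates toward zero); the
// mathematical mod truncates toward minus infinity. They differ
// when the operands have opposite signs.
int mod(int x, int y)   // mathematical mod; assumes y > 0
{
    int r = x % y;
    return r < 0 ? r + y : r;
}

int main()
{
    std::cout << (-7 % 3)  << '\n';   // -1 on a truncating implementation
    std::cout << mod(-7, 3) << '\n';  //  2 (mathematical mod)
}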
Regards, Maarten.

"Gennaro Prota" <geunnar...@yahoo.com> wrote in message
news:52nck2hn0najvg2jf...@4ax.com...

Maarten Kronenburg
Oct 30, 2006, 7:49:56 PM
Chris,
Indeed any simple mathematical operation on base type operands
results in a result within the range of the base type. The question is how
this is accomplished.
For unsigned base types it is specified in 3.9.1 item 4,
that is take the "mathematical" mod 2^n.
But for signed integers it is not specified in the standard,
although your idea of doing it seems logical.
Divisions always get smaller results, so there you don't have this problem.
But I agree with your idea of also putting the possible behaviours
of signed base types into numeric limits (18.2.1), so that at least
the programmer can know how the signed "wrapping around" works,
or, when all processors behave identically, put it in 3.9.1.
Probably they had a reason not to, but I don't know that reason.
Regards, Maarten.

"Chris Jefferson" <4zum...@gmail.com> wrote in message

news:1162242968....@f16g2000cwb.googlegroups.com...

Francis Glassborow
Oct 31, 2006, 10:01:50 AM
In article <4546822b$0$425$19de...@news.inter.NL.net>, Maarten
Kronenburg <M.Kron...@inter.nl.net> writes

>Chris,
>Indeed any simple mathematical operation on base type operands
>results in a result within the range of the base type. The question is how
>this is accomplished.
>For unsigned base types it is specified in 3.9.1 item 4,
>that is take the "mathematical" mod 2^n.
>But for signed integers it is not specified in the standard,
>although your idea of doing it seems logical.
>Divisions always get smaller results, so there you don't have this problem.
>But I agree with your idea of also putting the possible behaviours
>of signed base types into numeric limits (18.2.1), so that at least
>the programmer can know how the signed "wrapping around" works,
>or when all processors behave identical, put it in 3.9.1.
>Probably they had a reason not to, but I don't know that reason.
>Regards, Maarten.


The undefined behaviour on overflow of signed integer arithmetic has
long been an irritant to me. I would like to see a way by which
programmers can program safely and with confidence on systems where
signed integers wrap (the commonest behaviour). It would also help a
little if they could identify systems on which they saturate.

The reason for the undefined behaviour is that some hardware provides
direct support for signed integer arithmetic and traps on overflow. My
personal opinion is that we should not be distorting the language
because such hardware exists. (C does not distort itself just because
there is hardware that has bit registers and supports bit variables;
the specialist compilers for such embedded systems provide support.)

Even on systems where signed integer overflow traps there are ways to
emulate non-trapping wrap behaviour; the problem is that the result is
not efficient. I could argue, and perhaps should do so, that making
signed integer overflow undefined behaviour is a premature optimisation
:-)

--
Francis Glassborow ACCU
Author of 'You Can Do It!' and "You Can Program in C++"
see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Pete Becker
Oct 31, 2006, 12:59:11 PM
Francis Glassborow wrote:
>
> I would like to see a way by which
> programmers can program safely and with confidence on systems where
> signed integers wrap (the commonest behaviour).

If you know that that's your system's behavior, you can program safely
and with confidence. You don't need the standard to bless this. You do
need compiler documentation.

> It would also help a
> little if they could identify systems on which they saturate.
>

Compiler documentation ought to tell you what behavior you can rely on.

> The reason for the undefined behaviour is that some hardware provides
> direct support for signed integer arithmetic and traps on overflow.

The reason for undefined behavior is that there are many hardware
approaches, and neither C nor C++ wanted to be in the business of
specifying particular behaviors.

Saying that the behavior of some code construct is undefined does not
mean that the behavior must be irrational. Implementors are allowed to
do something that's appropriate on their target hardware, and if they
don't, they won't last long in a free market.

Porting code with deliberate integer overflows is, indeed, hazardous.
Knowing whether overflows wrap is not sufficient. Unlike unsigned
arithmetic, which is required to wrap to zero (i.e. incrementing
UINT_MAX produces 0) on all systems, signed arithmetic, when it wraps,
wraps to a potentially different value on different systems, so the
programmer has to deal with the compiler-specific minimum value for the
type. That value may well be different from the value given by the
xxx_MIN macro for the type, so specifying what the wrapping behavior is
would require changing the requirements for all of the _MIN macros and
the corresponding numeric_traits, or adding new ones.

The only argument I have seen in support of standardizing this behavior
is for post hoc error checking, which is rather limited in what it can
actually find, and simply isn't necessary. There are better ways to
minimize the impact of integer overflows, and they can be used without
language and library changes.

--

-- Pete

Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." For more information about this book, see
www.petebecker.com/tr1book.

Chris Jefferson
Oct 31, 2006, 1:20:44 PM

Pete Becker wrote:
> Francis Glassborow wrote:
> ...

> Porting code with deliberate integer overflows is, indeed, hazardous.
> Knowing whether overflows wrap is not sufficient. Unlike unsigned
> arithmetic, which is required to wrap to zero (i.e. incrementing
> UINT_MAX produces 0) on all systems, signed arithmetic, when it wraps,
> wraps to a potentially different value on different systems, so the
> programmer has to deal with the compiler-specific minimum value for the
> type. That value may well be different from the value given by the
> xxx_MIN macro for the type, so specifying what the wrapping behavior is
> would require changing the requirements for all of the _MIN macros and
> the corresponding numeric_traits, or adding new ones.
>

I'm happy to agree with this. My problem is that as it stands,
is_modulo is effectively useless, and that I've had to replace its
usage as a base with just is_unsigned, and then some extra checks for
systems I have access to, to see if it can be generalised.

I have no problem with compilers deciding to set is_modulo to false on
signed types. My problem is that many set it to true, and I'm curious
about exactly what, if any, "promise" they are making by doing that?

Chris

Maarten Kronenburg
Oct 31, 2006, 2:18:11 PM

===================================== MODERATOR'S COMMENT:
Please quote appropriately and remove moderation footers when replying.


===================================== END OF MODERATOR'S COMMENT

"Pete Becker" <peteb...@acm.org> wrote in message
news:k4mdnfy5ut3l5drY...@giganews.com...


> Francis Glassborow wrote:
> >
> > I would like to see a way by which
> > programmers can program safely and with confidence on systems where
> > signed integers wrap (the commonest behaviour).
>
> If you know that that's your system's behavior, you can program safely
> and with confidence. You don't need the standard to bless this. You do
> need compiler documentation.
>
> > It would also help a
> > little if they could identify systems on which they saturate.
> >
>
> Compiler documentation ought to tell you what behavior you can rely on.
>
> > The reason for the undefined behaviour is that some hardware provides
> > direct support for signed integer arithmetic and traps on overflow.
>
> The reason for undefined behavior is that there are many hardware
> approaches, and neither C nor C++ wanted to be in the business of
> specifying particular behaviors.

Why does the standard in 3.9.1 item 4 do it then for unsigneds?
There is only one logical way to do it for signed base types:
Define the range as: delta = x_max - x_min + 1
and do the "mathematical" mod, but from x_min:
x smod y = x - delta * [ ( x - x_min ) / delta ]
where [ ] truncates downward.
This wrap around always brings the value back within range.
After some checks the x86 signed int seems to work this way,
but I will test it thoroughly on x86.
In fact one could make a program to check if any signed base type
on any system complies with this wrap around.
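A sketch of such a check (the names are mine; it assumes a type wider
than int, such as long long, is available so the reference value can
be computed without overflow):

#include <limits>

// Reference wrap of an exact sum into the range of int, computed in
// long long so the computation itself cannot overflow.
int wrap_into_range(long long x)
{
    const long long lo    = std::numeric_limits<int>::min();
    const long long hi    = std::numeric_limits<int>::max();
    const long long delta = hi - lo + 1;
    long long r = (x - lo) % delta;   // remainder; may be negative
    if (r < 0) r += delta;            // turn it into a true mod
    return static_cast<int>(lo + r);
}

// Empirical probe: does the hardware's wrapped result match?
// (The addition below is exactly the operation the standard leaves
// undefined; that is what is being probed.)
bool wraps_as_expected(int a, int b)
{
    int hw = a + b;
    return hw == wrap_into_range(static_cast<long long>(a) + b);
}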
Probably the only alternative is to trap the overflow, which could
also be mentioned in the standard.

P.J. Plauger
Oct 31, 2006, 2:55:49 PM
"Maarten Kronenburg" <M.Kron...@inter.nl.net> wrote in message
news:45479b35$0$429$19de...@news.inter.NL.net...

> .....


>> The reason for undefined behavior is that there are many hardware
>> approaches, and neither C nor C++ wanted to be in the business of
>> specifying particular behaviors.
>
> Why does the standard in 3.9.1 item 4 do it then for unsigneds?

Because it's well defined. In C, "unsigned" really means "modulus",
and C++ inherits the same arithmetic rules.

> There is only one logical way to do it for signed base types:

Nope. There are several ways. You could trap on arithmetic
overflow, or wrap on a different range than is apparent
from the published limits, as Pete Becker suggested, to name
just two.

> Define the range as: delta = x_max - x_min + 1
> and do the "mathematical" mod, but from x_min:
> x smod y = x - delta * [ ( x - x_min ) / delta ]
> where [ ] truncates downward.
> This wrap around always brings the value back within range.
>
> After some checks the x86 signed int seems to work this way,
> but I will test it thoroughly on x86.

Right. Many computers do indeed provide quiet wraparound on
signed overflow, but not all. The C Standard intentionally
permits other behavior, under the rubric of "undefined".

> In fact one could make a program to check if any signed base type
> on any system complies with this wrap around.

If it doesn't trap out, yes.

> Probably the only alternative is trap the overflow, which could
> also be mentioned in the standard.

It could, but it's not the *only* alternative. And what would the
standard say (other than UB)? "It always does this unless it does
that, or maybe something else"?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Pete Becker
Oct 31, 2006, 4:09:09 PM
Maarten Kronenburg wrote:
> "Pete Becker" <peteb...@acm.org> wrote in message
> news:k4mdnfy5ut3l5drY...@giganews.com...
>>
>>> The reason for the undefined behaviour is that some hardware provides
>>> direct support for signed integer arithmetic and traps on overflow.
>> The reason for undefined behavior is that there are many hardware
>> approaches, and neither C nor C++ wanted to be in the business of
>> specifying particular behaviors.
>
> Why does the standard in 3.9.1 item 4 do it then for unsigneds?

Because the behavior of unsigned overflow is easy to specify, easy to
implement, and easy to use. And it's actually useful.

> There is only one logical way to do it for signed base types:
> Define the range as: delta = x_max - x_min + 1
> and do the "mathematical" mod, but from x_min:
> x smod y = x - delta * [ ( x - x_min ) / delta ]
> where [ ] truncates downward.
> This wrap around always brings the value back within range.
> After some checks the x86 signed int seems to work this way,
> but I will test it thoroughly on x86.
> In fact one could make a program to check if any signed base type
> on any system complies with this wrap around.
> Probably the only alternative is trap the overflow, which could
> also be mentioned in the standard.
>

Could be. What real-world problem does it solve?

Maarten Kronenburg
Oct 31, 2006, 4:10:49 PM

"P.J. Plauger" wrote

> >> The reason for undefined behavior is that there are many hardware
> >> approaches, and neither C nor C++ wanted to be in the business of
> >> specifying particular behaviors.
> >
> > Why does the standard in 3.9.1 item 4 do it then for unsigneds?
>
> Because it's well defined. In C, "unsigned" really means "modulus",
> and C++ inherits the same arithmetic rules.
>
> > There is only one logical way to do it for signed base types:
>
> Nope. There are several ways. You could trap on arithmetic
> overflow, or wrap on a different range than is apparent
> from the published limits, as Pete Becker suggested, to name
> just two.

The trap I mentioned below. What is the purpose of wrapping on
a different range? Then the result would surely not fall within
the published range, and the published range would be false.

>
> > Define the range as: delta = x_max - x_min + 1
> > and do the "mathematical" mod, but from x_min:
> > x smod y = x - delta * [ ( x - x_min ) / delta ]
> > where [ ] truncates downward.
> > This wrap around always brings the value back within range.
> >
> > After some checks the x86 signed int seems to work this way,
> > but I will test it thoroughly on x86.
>
> Right. Many computers do indeed provide quiet wraparound on
> signed overflow, but not all. The C Standard intentionally
> permits other behavior, under the rubric of "undefined".
>
> > In fact one could make a program to check if any signed base type
> > on any system complies with this wrap around.
>
> If it doesn't trap out, yes.
>
> > Probably the only alternative is trap the overflow, which could
> > also be mentioned in the standard.
>
> It could, but it's not the *only* alternative. And what would the
> standard say (other than UB)? "It always does this unless it does
> that, or maybe something else"?
>

That would be better than the situation now, where the programmer
does not have a clue what happens at overflow of signed base types,
or what is_modulo means in the numeric_limits.
But of course I don't know from hard fact how many processors follow
the formula I gave for signed base type wrap around, and how many
trap the overflow, and how many do something else (if so, what?).
Regards, Maarten.

James Kanze
Nov 1, 2006, 3:22:34 PM
Chris Jefferson wrote:
> Pete Becker wrote:
> > Francis Glassborow wrote:
> > ...
> > Porting code with deliberate integer overflows is, indeed, hazardous.
> > Knowing whether overflows wrap is not sufficient. Unlike unsigned
> > arithmetic, which is required to wrap to zero (i.e. incrementing
> > UINT_MAX produces 0) on all systems, signed arithmetic, when it wraps,
> > wraps to a potentially different value on different systems, so the
> > programmer has to deal with the compiler-specific minimum value for the
> > type. That value may well be different from the value given by the
> > xxx_MIN macro for the type, so specifying what the wrapping behavior is
> > would require changing the requirements for all of the _MIN macros and
> > the corresponding numeric_traits, or adding new ones.

> I'm happy to agree with this. My problem is that as it stands,
> is_modulo is effectively useless, and that I've had to replace it's
> usage as a base with just is_unsigned, and then some extra checks for
> systems I have access to to see if it can be generalised.

> I have no problem with compilers deciding to set is_modulo to false on
> signed types. My problem is that many set it to true, and I'm curious
> about exactly what, if any, "promise" they are making by doing that?

Presumably, that numeric_limits<T>::max() + 1 ==
numeric_limits<T>::min().

Not that I see any use in that sort of a guarantee. I think
Pete pretty much explained the issue correctly. I might add
that perhaps part of the reason for making it undefined behavior
was to encourage implementations to trap, or to do something
sensible, rather than to silently generate a value of no use to
anyone.

--
James Kanze (Gabi Software) email: james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze
Nov 1, 2006, 3:22:42 PM
"Maarten Kronenburg" wrote:
> Chris,

> Indeed any simple mathematical operation on base type operands
> results in a result within the range of the base type.

Not in the case of overflow. The issue wasn't clear in C90 (on
which C++98 is based), but the C99 standard makes it more
explicit: the "results" can be the raising of an implementation
defined signal.

--
James Kanze (Gabi Software) email: james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

P.J. Plauger
Nov 1, 2006, 7:30:16 PM
""Maarten Kronenburg"" <M.Kron...@inter.nl.net> wrote in message
news:4547b0ec$0$427$19de...@news.inter.NL.net...

> "P.J. Plauger" wrote
>> >> The reason for undefined behavior is that there are many hardware
>> >> approaches, and neither C nor C++ wanted to be in the business of
>> >> specifying particular behaviors.
>> >
>> > Why does the standard in 3.9.1 item 4 do it then for unsigneds?
>>
>> Because it's well defined. In C, "unsigned" really means "modulus",
>> and C++ inherits the same arithmetic rules.
>>
>> > There is only one logical way to do it for signed base types:
>>
>> Nope. There are several ways. You could trap on arithmetic
>> overflow, or wrap on a different range than is apparent
>> from the published limits, as Pete Becker suggested, to name
>> just two.
>
> The trap I mentioned below.

But only after saying that there is only one logical way.

> What is the purpose of wrapping on
> a different range? Then the result would surely not fall within
> the published range, and the published range would be false.

An implementation might choose to represent integer NaN as -2^31,
so INT_MIN is -(2^31 - 1) and INT_MAX is 2^31 - 1. This published
range isn't "false", just not honored by an unchecked overflow that
quietly wraps around like unsigned.

> .....


>> > Probably the only alternative is trap the overflow, which could
>> > also be mentioned in the standard.
>>
>> It could, but it's not the *only* alternative. And what would the
>> standard say (other than UB)? "It always does this unless it does
>> that, or maybe something else"?
>>
>
> That would be better than the situation now, where the programmer
> does not have a clue what happens at overflow of signed base types,
> or what is_modulo means in the numeric_limits.

Is it really better? The C Standard is careful to distinguish
undefined behavior, which can be anything (useful or otherwise)
from unspecified behavior, which can be something valid from a
limited choice. Unspecified makes sense only when *any* of the
choices makes sense -- as in the order of evaluation of the
arguments on a function call. It makes way less sense if you
don't know whether an operation might trap or not.

> But of course I don't know from hard fact how many processors follow
> the formula I gave for signed base type wrap around, and how many
> trap the overflow, and how many do something else (if so, what?).

Doesn't matter how many. What matters is the range of conforming
behavior.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Maarten Kronenburg
Nov 2, 2006, 2:52:43 PM

""P.J. Plauger"" wrote

>
> > "P.J. Plauger" wrote
> >> >> The reason for undefined behavior is that there are many hardware
> >> >> approaches, and neither C nor C++ wanted to be in the business of
> >> >> specifying particular behaviors.
> >> >
> >> > Why does the standard in 3.9.1 item 4 do it then for unsigneds?
> >>
> >> Because it's well defined. In C, "unsigned" really means "modulus",
> >> and C++ inherits the same arithmetic rules.
> >>
> >> > There is only one logical way to do it for signed base types:
> >>
> >> Nope. There are several ways. You could trap on arithmetic
> >> overflow, or wrap on a different range than is apparent
> >> from the published limits, as Pete Becker suggested, to name
> >> just two.
> >
> > The trap I mentioned below.
>
> But only after saying that there is only one logical way.
>
> > What is the purpose of wrapping on
> > a different range? Then the result would surely not fall within
> > the published range, and the published range would be false.
>
> An implementation might choose to represent integer NaN as 2^-31,
> so INT_MIN is -(2^-31 - 1) and INT_MAX is 2^31 - 1. This published
> range isn't "false", just not honored by an unchecked overflow that
> quietly wraps around like unsigned.
>

An implementation may of course do anything if it is not defined in
the standard (and this is really strange).
What is the numeric_limits flag for this?
The standard should define the base types so that these strange
and very unusual implementations are prevented.
That is what the standard is supposed to do, because no programmer
can be expected to take these strange implementations into account.
The standard is supposed to make the "undefined" as small as possible.

> > .....
> >> > Probably the only alternative is trap the overflow, which could
> >> > also be mentioned in the standard.
> >>
> >> It could, but it's not the *only* alternative. And what would the
> >> standard say (other than UB)? "It always does this unless it does
> >> that, or maybe something else"?
> >>
> >
> > That would be better than the situation now, where the programmer
> > does not have a clue what happens at overflow of signed base types,
> > or what is_modulo means in the numeric_limits.
>
> Is it really better? The C Standard is careful to distinguish
> undefined behavior, which can be anything (useful or otherwise)
> from unspecified behavior, which can be something valid from a
> limited choice. Unspecified makes sense only when *any* of the
> choices makes sense -- as in the order of evaluation of the
> arguments on a function call. It makes way less sense if you
> don't know whether an operation might trap or not.

Yes it makes sense to distinguish between modulo wrap around
and trap overflow, which are flags in numeric_limits.
But it also makes sense then to define what modulo wrap around
exactly means, so that a programmer can rely on this,
and so that implementations are not allowed to do strange things with it
that are very hard for programmers to deal with.

>
> > But of course I don't know from hard fact how many processors follow
> > the formula I gave for signed base type wrap around, and how many
> > trap the overflow, and how many do something else (if so, what?).
>
> Doesn't matter how many. What matters is the range of conforming
> behavior.

My suggestion is to add to 3.9.1 item 4:
"Signed integers may obey the laws of arithmetic modulo, when their
numeric_limits is_modulo flag is true, or may trap on overflow,
when their numeric_limits traps flag is true (see 18.2.11).
The exact meaning of arithmetic modulo is defined there (see 18.2.11)."
and in 18.2.11 item 57, delete the line "A type is modulo if...", and add:
"A type is modulo if an arithmetic result that is not within the range
between x_min and x_max inclusive is wrapped around this range with:
x = x_min + ( ( x - x_min ) mod ( x_max - x_min + 1 ) ),
where x mod y = x - y * [ x / y ], where [ ] truncates downward,
so that the result is again within the range."
Then at least programmers know what modulo wrap around
means by the standard, and implementations are discouraged
from doing strange things as above, which are then "undefined".
Regards, Maarten.

kuy...@wizard.net
Nov 2, 2006, 3:31:07 PM
"Maarten Kronenburg" wrote:
.
> The standard is supposed to make the "undefined" as small as possible.

No, the standard is supposed to make the "undefined" as clear as
possible. You can make "undefined" arbitrarily small; just specify
precisely the behavior of C++ as it is actually implemented by a
particular compiler on a particular platform. Making some things
implementation-defined, unspecified, or undefined is essential to allow
for efficient implementations on a wide variety of platforms (and, in
some cases, for backwards compatibility). The point of the standard is
to make it clear which things fall into each category, not to hamstring
implementors by making those categories as small as possible.

mark
Nov 3, 2006, 12:23:12 PM

Pete Becker wrote:
> Francis Glassborow wrote:
> >
> > I would like to see a way by which
> > programmers can program safely and with confidence on systems where
> > signed integers wrap (the commonest behaviour).
>
> If you know that that's your system's behavior, you can program safely
> and with confidence. You don't need the standard to bless this. You do
> need compiler documentation.

Which isn't going to tell you, because it's undefined behavior, not
implementation defined.

>
> Compiler documentation ought to tell you what behavior you can rely on.

It ought to if the behavior were implementation defined. As it is, I don't
know of any compiler whose documentation makes any promises about this
behavior.

>
> > The reason for the undefined behaviour is that some hardware provides
> > direct support for signed integer arithmetic and traps on overflow.
>
> The reason for undefined behavior is that there are many hardware
> approaches, and neither C nor C++ wanted to be in the business of
> specifying particular behaviors.

Which is why the behavior should be implementation defined.

>
> Saying that the behavior of some code construct is undefined does not
> mean that the behavior must be irrational. Implementors are allowed to
> do something that's appropriate on their target hardware, and if they
> don't, they won't last long in a free market.

I know several compilers which use the undefined behavior for
optimization purposes. The most interesting was one (for a platform
where signed arithmetic wrapped in the hardware) which optimized
"(a < 0) && (b < 0) && (a + b >= 0)" to "false".

>
> Porting code with deliberate integer overflows is, indeed, hazardous.
> Knowing whether overflows wrap is not sufficient. Unlike unsigned
> arithmetic, which is required to wrap to zero (i.e. incrementing
> UINT_MAX produces 0) on all systems, signed arithmetic, when it wraps,
> wraps to a potentially different value on different systems, so the
> programmer has to deal with the compiler-specific minimum value for the
> type. That value may well be different from the value given by the
> xxx_MIN macro for the type, so specifying what the wrapping behavior is
> would require changing the requirements for all of the _MIN macros and
> the corresponding numeric_traits, or adding new ones.

But because the behavior is undefined, porting such code doesn't make
much sense - unless you find a compiler whose documentation really does
make guarantees for signed integer overflow. The correct thing to do
(with the standard the way it is today) is to rewrite it using unsigned
arithmetic, and take care of the details yourself.
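
For example, a minimal sketch of such a rewrite (the names are mine;
converting the wrapped unsigned sum back to int is
implementation-defined rather than undefined when the value doesn't
fit, and yields the expected two's-complement result on every compiler
I am aware of):

#include <climits>

// Portable pre-check: would a + b overflow int? Nothing here can
// itself overflow, so the behavior is fully defined.
bool add_would_overflow(int a, int b)
{
    if (b > 0) return a > INT_MAX - b;
    if (b < 0) return a < INT_MIN - b;
    return false;
}

// "Wrapping" signed addition emulated with unsigned arithmetic,
// which the standard does require to be modulo 2^N.
int wrapping_add(int a, int b)
{
    return static_cast<int>(static_cast<unsigned int>(a)
                            + static_cast<unsigned int>(b));
}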

>
> The only argument I have seen in support of standardizing this behavior
> is for post hoc error checking, which is rather limited in what it can
> actually find, and simply isn't necessary. There are better ways to
> minimize the impact of integer overflows, and they can be used without
> language and library changes.

Who said anything about standardizing behavior?

Implementation defined would be all that's needed. Then everything you
wrote above would be correct.

Mark Williams

Pete Becker
Nov 3, 2006, 11:11:37 PM
mark wrote:
> Pete Becker wrote:
>> Francis Glassborow wrote:
>>> I would like to see a way by which
>>> programmers can program safely and with confidence on systems where
>>> signed integers wrap (the commonest behaviour).
>> If you know that that's your system's behavior, you can program safely
>> and with confidence. You don't need the standard to bless this. You do
>> need compiler documentation.
>
> Which isnt going to tell you, because its undefined behavior, not
> implementation defined.
>

The language definition does not prohibit compiler vendors from telling
you what their compiler does with code whose behavior is undefined.

--

-- Pete

Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." For more information about this book, see
www.petebecker.com/tr1book.


Seungbeom Kim
Nov 3, 2006, 11:29:42 PM
mark wrote:
> Pete Becker wrote:
>> Francis Glassborow wrote:
>>> I would like to see a way by which
>>> programmers can program safely and with confidence on systems where
>>> signed integers wrap (the commonest behaviour).
>> If you know that that's your system's behavior, you can program safely
>> and with confidence. You don't need the standard to bless this. You do
>> need compiler documentation.
>
> Which isnt going to tell you, because its undefined behavior, not
> implementation defined.
>
>> Compiler documentation ought to tell you what behavior you can rely on.
>
> It ought if the behavior were implementation defined. As it is I dont
> know of any compiler whose documentation makes any promises about this
> behavior.

An implementation can define a behaviour that is even *undefined* (not
only implementation-defined) in the standard, and as long as you stick
to that implementation, you're safe.
Multithreading is a good example: its behaviour is undefined in the
standard, but no one fears the danger of undefined behaviour on POSIX or
Win32 on which it is defined.

>>> The reason for the undefined behaviour is that some hardware provides
>>> direct support for signed integer arithmetic and traps on overflow.
>> The reason for undefined behavior is that there are many hardware
>> approaches, and neither C nor C++ wanted to be in the business of
>> specifying particular behaviors.
>
> Which is why the behavior should be implementation defined.

If it's left undefined, an implementation is given the freedom to ignore
the issue completely, and possibly it may have a better chance of
optimization. If it's implementation-defined, an implementation should
pick one behaviour and document it.

--
Seungbeom Kim

kuy...@wizard.net
Nov 3, 2006, 11:29:57 PM
mark wrote:
> Pete Becker wrote:
> > Francis Glassborow wrote:
> > >
> > > I would like to see a way by which
> > > programmers can program safely and with confidence on systems where
> > > signed integers wrap (the commonest behaviour).
> >
> > If you know that that's your system's behavior, you can program safely
> > and with confidence. You don't need the standard to bless this. You do
> > need compiler documentation.
>
> Which isnt going to tell you, because its undefined behavior, not
> implementation defined.

Which means that documenting the behavior is not mandatory; however,
it's still permitted. And if your compiler chooses not to document that
behavior, I would strongly recommend not writing code that depends upon
it.

> > Compiler documentation ought to tell you what behavior you can rely on.
>

> It ought if the behavior were implementation defined. ...

No - it MUST tell you if the behavior is implementation defined. It
ought to tell you what behavior you can rely on, as a matter of QoI,
regardless of whether or not such documentation is mandatory. An
implementation that chooses not to document a particular kind of
behavior is implicitly telling you - "Don't rely on this behavior".

> ... As it is I dont


> know of any compiler whose documentation makes any promises about this
> behavior.

Then don't write code that depends upon it.

James Kanze
Nov 6, 2006, 12:01:44 AM
"Maarten Kronenburg" wrote:
> ""P.J. Plauger"" wrote

[...]


> > > What is the purpose of wrapping on
> > > a different range? Then the result would surely not fall within
> > > the published range, and the published range would be false.

> > An implementation might choose to represent integer NaN as 2^-31,
> > so INT_MIN is -(2^-31 - 1) and INT_MAX is 2^31 - 1. This published
> > range isn't "false", just not honored by an unchecked overflow that
> > quietly wraps around like unsigned.

> An implementation may of course do anything if it is not defined in
> the standard (and this is really strange).

What's so strange about it? The standard is designed to be part
of a contract between users and compiler vendors. What's
strange about leaving part of the contract up to each individual
vendor and its users. Or even saying that some things won't be
contractually guaranteed.

> What is the numeric_limits flag for this?

There isn't. Practically speaking, there cannot be a
numeric_limits flag for everything.

> The standard should define the base types so that these strange
> and very unusual implementations are prevented.

And thus forbid the use of C++ on certain machines. Or forbid
implementations that some user might consider useful.

The basic rule is that numeric overflow in a signed integral
type is undefined behavior. Don't do it, and you don't care
what the implementation does in such cases.

In practice, it's the approach you'll want to follow anyway,
because almost no implementation does anything useful (like
trapping).

> That is what the standard is supposed to do, because no programmer
> can be expected to take these strange implementations into account.

I fail to see where there is a problem. All of my programs
avoid overflow in signed integer arithmetic. If not, I don't
consider them correct. So where is the problem with taking such
"strange implementations" into account.

> The standard is supposed to make the "undefined" as small as possible.

Not in the case of C++. That was the policy that Java pursued.
The result is that there is hardware on which it isn't
practical to implement Java. The policy of C++ is that an
efficient implementation should be possible on all hardware. In
practice, with regard to this one specific point, I don't see
what that buys the user.

> > > .....
> > >> > Probably the only alternative is trap the overflow, which could
> > >> > also be mentioned in the standard.

> > >> It could, but it's not the *only* alternative. And what would the
> > >> standard say (other than UB)? "It always does this unless it does
> > >> that, or maybe something else"?

> > > That would be better than the situation now, where the programmer
> > > does not have a clue what happens at overflow of signed base types,
> > > or what is_modulo means in the numeric_limits.

> > Is it really better? The C Standard is careful to distinguish
> > undefined behavior, which can be anything (useful or otherwise)
> > from unspecified behavior, which can be something valid from a
> > limited choice. Unspecified makes sense only when *any* of the
> > choices makes sense -- as in the order of evaluation of the
> > arguments on a function call. It makes way less sense if you
> > don't know whether an operation might trap or not.

> Yes it makes sense to distinguish between modulo wrap around
> and trap overflow, which are flags in numeric_limits.

There's no flag that I can see to indicate trap overflow. And
the utility of the is_modulo flag would seem fairly limited to
me.

> But it makes also sense then to define what modulo wrap around
> exactly means, so that a programmer can rely on this,
> and so that implementations are not allowed to do strange things with it
> that are very hard for programmers to deal with.

> > > But of course I don't know from hard fact how many processors follow
> > > the formula I gave for signed base type wrap around, and how many
> > > trap the overflow, and how many do something else (if so, what?).

> > Doesn't matter how many. What matters is the range of conforming
> > behavior.

> My suggestion is to add to 3.9.1 item 4:
> "Signed integers may obey the laws of arithmetic modulo, when its
> numeric_limits is_modulo flag is true, or may trap overflow,
> when its numeric_limits traps flag is true (see 18.2.11).

Isn't that sort of making numeric_limits::traps a bit ambiguous?
As it is currently defined, it more or less means that there
are trapping NaN's.

Personally, I don't see much interest in the is_modulo
flag---the standard requires unsigned integral types to be
modulo, and I've never needed any other type of modulo.

--
James Kanze (Gabi Software) email: james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Maarten Kronenburg
Nov 6, 2006, 2:11:43 PM

"James Kanze" wrote

> "Maarten Kronenburg" wrote:
> > ""P.J. Plauger"" wrote
>
> [...]
> > > > What is the purpose of wrapping on
> > > > a different range? Then the result would surely not fall within
> > > > the published range, and the published range would be false.
>
> > > An implementation might choose to represent integer NaN as 2^-31,
> > > so INT_MIN is -(2^-31 - 1) and INT_MAX is 2^31 - 1. This published
> > > range isn't "false", just not honored by an unchecked overflow that
> > > quietly wraps around like unsigned.
>
> > An implementation may of course do anything if it is not defined in
> > the standard (and this is really strange).
>
> What's so strange about it? The standard is designed to be part
> of a contract between users and compiler vendors. What's
> strange about leaving part of the contract up to each individual
> vendor and its users. Or even saying that some things won't be
> contractually guaranteed.

When too much is undefined, the implementations will tend to differ
more and more, which I think is in general not good.
But sometimes it may be unavoidable. In this case however I think
it is avoidable.

>
> > What is the numeric_limits flag for this?
>
> There isn't. Practically speaking, there cannot be a
> numeric_limits flag for everything.
>
> > The standard should define the base types so that these strange
> > and very unusual implementations are prevented.
>
> And thus forbid the use of C++ on certain machines. Or forbid
> implementations that some user might consider useful.
>
> The basic rule is that numeric overflow in a signed integral
> type is undefined behavior. Don't do it, and you don't care
> what the implementation does in such cases.

Then is_modulo should not have been provided at all.
But I think providing is_modulo was a good thing,
and specifying its exact meaning also is.

>
> In practice, it's the approach you'll want to follow anyway,
> because almost no implementation does anything useful (like
> trapping).
>
> > That is what the standard is supposed to do, because no programmer
> > can be expected to take these strange implementations into account.
>
> I fail to see where there is a problem. All of my programs
> avoid overflow in signed integer arithmetic. If not, I don't
> consider them correct. So where is the problem with taking such
> "strange implementations" into account.


I think programmers want to know what is_modulo means,
because preventing overflow may not be as easy as it seems.
You may consider overflow incorrect, but the standard considers it correct.

> Isn't that sort of making numeric_limits::traps a bit ambiguous?
> As it is currently defined, it more or less means that there
> are trapping NaN's.

That seems to be the has_quiet_NaN and has_signaling_NaN flags.
But I agree that the traps flag definition in the standard is also
a bit unclear to me. Perhaps someone else can comment on this.

>
> Personally, I don't see much interest in the is_modulo
> flag---the standard requires unsigned integral types to be
> modulo, and I've never needed any other type of modula.
>
> --

Regards, Maarten.

mark
Nov 6, 2006, 3:57:39 PM

kuy...@wizard.net wrote:
> mark wrote:

> > ... As it is I dont
> > know of any compiler whose documentation makes any promises about this
> > behavior.
>
> Then don't write code that depends upon it.

I don't, and I don't know anybody that does - because no compilers that I
am aware of document it, and most perform optimizations based on the
assumption that undefined behavior does not occur, thus breaking any
assumptions the programmer might make about how the underlying hardware
handles signed overflow.

If you follow the thread, Francis said he wanted to see things change
so that programmers could determine whether or not they were in an
environment where it was safe to make assumptions about how signed
overflow would be handled - presumably he wanted implementation defined
behavior.

Pete asserted that no changes to the standard were necessary because
any decent implementation would match the behavior of the underlying
hardware, and document the behavior as such, or "they won't last long
in a free market".

I merely pointed out that that is not the case.

Mark Williams

James Kanze
Nov 7, 2006, 12:54:09 PM
"Maarten Kronenburg" wrote:
> "James Kanze" wrote
> > "Maarten Kronenburg" wrote:
> > > ""P.J. Plauger"" wrote

> > [...]
> > > > > What is the purpose of wrapping on
> > > > > a different range? Then the result would surely not fall within
> > > > > the published range, and the published range would be false.

> > > > An implementation might choose to represent integer NaN as 2^-31,
> > > > so INT_MIN is -(2^-31 - 1) and INT_MAX is 2^31 - 1. This published
> > > > range isn't "false", just not honored by an unchecked overflow that
> > > > quietly wraps around like unsigned.

> > > An implementation may of course do anything if it is not defined in
> > > the standard (and this is really strange).

> > What's so strange about it? The standard is designed to be part
> > of a contract between users and compiler vendors. What's
> > strange about leaving part of the contract up to each individual
> > vendor and its users. Or even saying that some things won't be
> > contractually guaranteed.

> When too much is undefined, the implementations will tend to differ
> more and more, which I think is in general not good.
> But sometimes it may be unavoidable. In this case however I think
> it is avoidable.

Avoidable, maybe. If you decide that C++ can only be
implemented on a certain subset of machines. If you decide that
you know better than the users what safety/performance
trade-offs should be used, and forbid "safe"
implementations. (A safe implementation will, of course, trap
on overflow. Or failing that, saturate. Both behaviors seem
preferable to the most frequent behavior today. Both have
notable runtime overhead on most platforms, however, and many
applications are not willing to pay that overhead.)

If you want to allow all reasonable implementations on all
reasonable platforms, you've got to allow some freedom.

> > > What is the numeric_limits flag for this?

> > There isn't. Practically speaking, there cannot be a
> > numeric_limits flag for everything.

> > > The standard should define the base types so that these strange
> > > and very unusual implementations are prevented.

> > And thus forbid the use of C++ on certain machines. Or forbid
> > implementations that some user might consider useful.

> > The basic rule is that numeric overflow in a signed integral
> > type is undefined behavior. Don't do it, and you don't care
> > what the implementation does in such cases.

> Then is_modulo should not have been provided at all.
> But I think providing is_modulo was a good thing,
> and specifying its exact meaning also is.

Well, I'll admit that I don't see much use in it. I do agree
that if it is provided, its exact meaning should be specified.
But I think, based on the footnote, that the intent is clear:
is_modulo <==> max() + 1 == min(). Guaranteed by the
implementation. (An implementation for Intel, for example,
might decide to define it as false for the integral types, to
reserve the right to generate an INTO instruction after each
arithmetic operation.)

> > In practice, it's the approach you'll want to follow anyway,
> > because almost no implementation does anything useful (like
> > trapping).

> > > That is what the standard is supposed to do, because no programmer
> > > can be expected to take these strange implementations into account.

> > I fail to see where there is a problem. All of my programs
> > avoid overflow in signed integer arithmetic. If not, I don't
> > consider them correct. So where is the problem with taking such
> > "strange implementations" into account.

> I think programmers want to know what is_modulo means,

I think that most programmers have never heard of it, and so
aren't asking what it means:-).

> because preventing overflow may not be as easy as it seems.

It's part of writing a correct program. Writing correct
programs is certainly harder than writing incorrect ones.

> You may consider overflow incorrect, but the standard
> considers it correct.

If you consider undefined behavior correct.

Note that in practice, the implementations I'm familiar with
don't do anything useful. If you have overflow, you get wrong
results. Which is generally worse than crashing; at least when
the program crashes, you know that it did something wrong.

[...]


> > > My suggestion is to add to 3.9.1 item 4:
> > > "Signed integers may obey the laws of arithmetic modulo, when its
> > > numeric_limits is_modulo flag is true, or may trap overflow,
> > > when its numeric_limits traps flag is true (see 18.2.11).

> > Isn't that sort of making numeric_limits::traps a bit ambiguous?
> > As it is currently defined, it is more or less means that there
> > are trapping NaN's.

> That seems to be the has_quiet_NaN and has_signaling_NaN flags.
> But I agree that the traps flag definition in the standard is also
> a bit unclear to me. Perhaps someone else can comment on this.

>From what the standard says, it basically means that you cannot
be assured of being able to read an uninitialized value of the
type without trapping. I think the difference with regards to
has_signaling_NaN is that a signaling_NaN can be generated as
the result of an arithmetic operation, but that this is not
required of a trapping representation. A signaling NaN is
really only one possible form of "traps". Presumably, if
has_signaling_NaN is true, traps must be true, but the reverse
is not guaranteed. (There's also the fact that
has_signaling_NaN is only meaningful for floating point types.
traps is meaningful for all types.)

--
James Kanze (GABI Software) email: james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Robert Mabee
Nov 7, 2006, 2:47:44 PM
James Kanze wrote:
> "Maarten Kronenburg" wrote:
>>When too much is undefined, the implementations will tend to differ
>>more and more, which I think is in general not good.
>>But sometimes it may be unavoidable. In this case however I think
>>it is avoidable.
>
> Avoidable, maybe. If you decide that C++ can only be
> implemented on a certain subset of machines. If you decide that
> you know better than the users what safety/performance
> trade-offs are should be used, and forbid "safe"
> implementations. (A safe implementation will, of course, trap
> on overflow. Or failing that, saturate. Both behaviors seem
> preferable to the most frequent behavior today. Both have
> notable runtime overhead on most platforms, however, and many
> applications are not willing to pay that overhead.)

By all means, expose what the hardware does efficiently, but don't
claim is_modulo if the arithmetic doesn't meet a reasonable
definition of modular. The spec needs a better definition to help
implementors determine whether they should set this true.

C. M. Heard
Nov 9, 2006, 12:22:31 PM
[ With apologies for the late reply ]

Pete Becker <peteb...@acm.org> wrote:


> Maarten Kronenburg wrote:
> > There is only one logical way to do it for signed base types:
> > Define the range as: delta = x_max - x_min + 1
> > and do the "mathematical" mod, but from x_min:
> > x smod y = x - delta * [ ( x - x_min ) / delta ]
> > where [ ] truncates downward.
> > This wrap around always brings the value back within range.
> > After some checks the x86 signed int seems to work this way,
> > but I will test it thoroughly on x86.
> > In fact one could make a program to check if any signed base type
> > on any system complies with this wrap around.
> > Probably the only alternative is trap the overflow, which could
> > also be mentioned in the standard.
> >
>
> Could be. What real-world problem does it solve?

Here is a real-world example. For the purposes of this discussion,
assume that route_cache_entry::expiration_time, sys_ticks(), and
the constant route_cache_timeout are of the same signed base type T
and that the implementation guarantees that signed arithmetic wraps
as described above. Assume further that route_cache_timeout is a
positive constant. If the maximum positive value of the signed type
T is Tmax, then the following code will do its job as long as the
program arranges to scan each route cache entry within Tmax ticks of
when its expiration time is set.

I always felt that it was a shame that this idiom was non-portable.

/*
 * Add a new destination to the route cache.
 */
static void
add_to_route_cache (unsigned int dest_ip_address,
                    unsigned int gateway_address)
{
    route_cache_entry *this_entry = allocate_route_cache_entry();
    if (this_entry != NULL)
    {
        /* Link the new entry in at the tail of the circular list,
           which is thereby kept ordered by expiration time. */
        this_entry->prev_entry = route_cache_anchor.prev_entry;
        route_cache_anchor.prev_entry->next_entry = this_entry;
        this_entry->next_entry = &route_cache_anchor;
        route_cache_anchor.prev_entry = this_entry;
        this_entry->expiration_time = sys_ticks() + route_cache_timeout;
        this_entry->route_key = dest_ip_address;
        this_entry->gateway_ip_address = gateway_address;
    }
}

/*
 * Restart the timeout for a route cache entry.
 */
void
restart_cache_entry_timeout (route_cache_entry *this_entry)
{
    /* Unlink the entry and re-insert it at the tail of the list. */
    this_entry->prev_entry->next_entry = this_entry->next_entry;
    this_entry->next_entry->prev_entry = this_entry->prev_entry;
    this_entry->prev_entry = route_cache_anchor.prev_entry;
    route_cache_anchor.prev_entry->next_entry = this_entry;
    this_entry->next_entry = &route_cache_anchor;
    route_cache_anchor.prev_entry = this_entry;
    this_entry->expiration_time = sys_ticks() + route_cache_timeout;
}

/*
 * Remove expired route cache entries from the active list.
 */
static void
delete_expired_route_cache_entries (void)
{
    route_cache_entry *this_entry, *next_entry;

    next_entry = route_cache_anchor.next_entry;
    /* The subtraction relies on the signed wrapping assumption
       above: an entry is expired when the difference is <= 0. */
    while ((next_entry != &route_cache_anchor)
           && ((next_entry->expiration_time - sys_ticks()) <= 0))
    {
        this_entry = next_entry;
        next_entry = this_entry->next_entry;
        free_route_cache_entry(this_entry);
    }
}

//cmh

Maarten Kronenburg
Nov 9, 2006, 1:56:11 PM

""C. M. Heard"" wrote in message

> [ With apologies for the late reply ]
>

This is indeed an important application of a modulo wrapped-around
signed integer base type, because it wraps around and a negative
difference exists (in contrast with the unsigned base type).
I will add this to my little is_modulo spec proposal.
Thanks, Maarten.

> /*
> * Add a new destination to the route cache.
> */

> ...
