
Re: Division by zero


Mr Flibble

Oct 29, 2016, 6:01:50 PM
On 29/10/2016 22:59, Stefan Ram wrote:
> C++ says:
>
> »If during the evaluation of an expression, the result
> is not mathematically defined or not in the range of
> representable values for its type, the behavior is
> undefined.«, C++ 2016, 5p4 and
>
> »If the second operand of / or % is zero the behavior is
> undefined.«, C++ 2016, 5.6p4.
>
> Does it say anywhere that a division by zero /is/ allowed
> (is not undefined behavior) in floating-point divisions?

Division by zero is undefined in C++ and undefined in mathematics so not
much more needs to be said really.

/Flibble


Alf P. Steinbach

Oct 29, 2016, 7:51:54 PM
On 30.10.2016 00:46, Stefan Ram wrote:
> r...@zedat.fu-berlin.de (Stefan Ram) writes:
>> Does it say anywhere that a division by zero /is/ allowed
>> (is not undefined behavior) in floating-point divisions?
>
> In practice, it seems to be true what is written in the Web:
>
> »if std::numeric_limits<T>::is_iec559 == true, which it
> normally is for T = float and T = double, the behavior
> is overridden by IEC 559 aka IEEE 754«,
>
> however, the standard does nowhere say that it does not have
> UB anymore in this case (as far as I can [not ]find it). Of
> course, evaluating to "Inf" is subsumed by UB, but when the
> programmer has no guarantee for that result, it is not worth
> that much.

And there is the issue that both g++ and Visual C++ can be told to
ignore certain aspects of the IEEE 754 standard to gain more speed, in
which case they don't adjust `numeric_limits` accordingly.

I do know that they can ignore the semantics of NaN.

I'm not sure about infinities, but the short of it is, what you get
depends in practice not only on `numeric_limits`, but also on the
compiler and options employed, and this is very compiler-specific. :(


Cheers!,

- Alf

Marcel Mueller

Oct 30, 2016, 11:44:04 AM
On 30.10.16 00.01, Mr Flibble wrote:
> Division by zero is undefined in C++ and undefined in mathematics so not
> much more needs to be said really.

According to the usual IEEE 754 float encoding, 0./0. is the value NaN.
But C requires neither IEEE floats nor NaN support.

So I would rephrase the question: do platforms which claim to have
IEEE float support (see numeric_limits) have defined behavior for 0./0.?


Marcel

Mr Flibble

Oct 30, 2016, 1:27:42 PM
In mathematics dividing by zero is undefined, ergo you SHOULD NOT do it in
code, so who cares what the result is?

/Flibble

Paavo Helde

Oct 30, 2016, 1:51:17 PM
On 30.10.2016 19:27, Mr Flibble wrote:
> On 30/10/2016 15:43, Marcel Mueller wrote:
>> On 30.10.16 00.01, Mr Flibble wrote:
>>> Division by zero is undefined in C++ and undefined in mathematics so not
>>> much more needs to be said really.
>>
>> According to the usual IEEE 754 float encoding 0./0. is the value NaN.
>> But C does not require IEEE floats neither NaN support.
>>
>> So I would rephrase the question whether platforms which claim to have
>> IEEE float support (see numeric_limits) have defined behavior at 0./0.
>
> In mathematics dividing by zero is undefined

No, there are mathematics where this is well defined, for example
https://en.wikipedia.org/wiki/Projectively_extended_real_line


Mr Flibble

Oct 30, 2016, 2:24:13 PM
Nonsense.

/Flibble


Paavo Helde

Oct 30, 2016, 3:33:37 PM
"Nonsense" is a word of no importance in mathematics. "Consistent" is, OTOH.




Rick C. Hodgin

Oct 30, 2016, 3:57:21 PM
Division by zero has case-by-case definitions where its actual value would
be valid given the limits approached from both sides. It has other cases
where it is invalid, and other cases where it is clearly +infinity,
-infinity, or the entire number line.

I think C and C++ have it wrong to disallow division by zero (or to enter
into UB when encountered). I think they should both define and allow for
hardware which traps the condition to allow explicit and specific code to
address it. On modern hardware that would be generic OS-level code which
can then signal an exception or some other application-defined handler for
those kinds of conditions. But I think more generally a new approach should
be taken such that real application code can be created to handle the division
by zero case, such that in various scenarios it can return a valid value. In
fact, it might even be worth defining the value so that it computes normally
when a hard value is known for a given formula, for example. But in the case
of not having that handler, then it would also fall back to a generic
division by zero handler.

I will incorporate these abilities into both integer and fp on my Arxoda CPU
design, and support for both into my CAlive compiler.

Best regards,
Rick C. Hodgin

David Brown

Oct 30, 2016, 4:01:40 PM
No, it is not nonsense - it is perfectly reasonable mathematics.

However, it is not directly applicable here - standard floats in C++ do
not cover the kinds of numbers needed for mathematics on a projective
plane. So normal C++ floats should not need to support division by zero
any more than they should support the square root of negative numbers.
But it would be wrong to say that "in mathematics, taking the square
root of negative numbers is undefined" - it is merely undefined over the
real numbers.


David Brown

Oct 30, 2016, 4:15:49 PM
On 30/10/16 20:57, Rick C. Hodgin wrote:
> On Sunday, October 30, 2016 at 1:27:42 PM UTC-4, Mr Flibble wrote:
>> On 30/10/2016 15:43, Marcel Mueller wrote:
>>> On 30.10.16 00.01, Mr Flibble wrote:
>>>> Division by zero is undefined in C++ and undefined in mathematics so not
>>>> much more needs to be said really.
>>>
>>> According to the usual IEEE 754 float encoding 0./0. is the value NaN.
>>> But C does not require IEEE floats neither NaN support.
>>>
>>> So I would rephrase the question whether platforms which claim to have
>>> IEEE float support (see numeric_limits) have defined behavior at 0./0.
>>
>> In mathematics dividing by zero is undefined ego you SHOULD NOT do it in
>> code so who cares what the result is?
>
> Division by zero has case-by-case definitions where it's actual value would
> be valid given the approaching limits from both sides. It has other cases
> where it is invalid. And other cases where it's clearly +infinity,
> -infinity, or the entire number line.
>

That's all true - but in simple arithmetic of real numbers (which is
what C++ standards-defined floats and doubles approximate), it is always
undefined.

You can write:

y = lim x->0 ( (sin x) / x )

y is then 1, even though you are taking a limit of a division by 0.

But you cannot find any "r" for which "r / 0" is 1. When you take any
real number r, and divide it by 0 over the set of real numbers, the
result mathematically is undefined.

All division by 0 with the mathematical systems supported directly by
C++ is mathematically undefined - it makes perfect sense for it to be
undefined in the language.

(Division by 0 /has/ a defined value in IEEE 754 floating point, giving
either a NaN or +/-Inf depending on the circumstances. But C and C++ do
not specify IEEE behaviour. An implementation may, if it wishes, give
division by 0 a defined behaviour - it is always free to give
definitions to things the standards leave undefined.)

> I think C and C++ have it wrong to disallow division by zero (or to enter
> into UB when encountered). I think they should both define and allow for
> hardware which traps the condition to allow explicit and specific code to
> address it.

Lots of hardware has no efficient way of making such traps.

> On modern hardware that would be generic OS-level code which
> can then signal an exception or some other application-defined handler for
> those kinds of conditions.

Lots of systems don't have an OS - and certainly not one that wants to
bother with handling something that makes no sense in a program.

> But I think more generally a new approach should
> be taken such that real application code can be created to handle the division
> by zero case, such that in various scenarios it can return a valid value. In
> fact, it might even be worth definining the value so that it computes normally
> when a hard value is known for a given formula, for example.

That's what strict IEEE compliance does - it gives a hard value (NaN or
+/-Inf) that you can test later. But that's up to an implementation to
decide - it is /not/ part of the standards.

> But in the case
> of not having that handler, then it would also fall back to a generic
> division by zero handler.
>
> I will incorporate these abilities into both integer and fp on my Arxoda CPU
> design, and support for both into my CAlive compiler.
>

In virtually every case, a division by 0 is a bug in the code. For some
types of code, it might be acceptable to continue calculations
regardless, and check for a NaN at the end of the string of operations -
perhaps that is more efficient than checking for 0 before attempting the
division. Compilers can support that. Making the compiler trap in some
clear way on division by zero can also be useful - it will help
programmers find their bugs faster, and that is always a good thing.
Again, nothing in the C or C++ standards prevents a compiler doing that.
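That "carry on and test once at the end" style can be sketched as follows, assuming IEEE semantics (is_iec559 == true, no fast-math flags); the helper names are hypothetical, not from the thread:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper: sum of a[i] / b[i], with no per-division zero check.
double ratio_sum(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += a[i] / b[i];   // x/0 -> +/-Inf, 0/0 -> NaN; both propagate
    return sum;
}

// One check at the end catches both Inf and NaN outcomes.
bool ratio_sum_valid(double x) { return std::isfinite(x); }
```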

But pretending it is possible to define a sensible numerical value for
division by 0 is just silly - it makes no sense mathematically, and is
of no use in code.

Mr Flibble

Oct 30, 2016, 4:36:23 PM
But in mathematics division by zero is undefined so you are talking
bollocks.

/Flibble


Alf P. Steinbach

Oct 30, 2016, 4:41:53 PM
I agree with Leigh, it's nonsense.

Defined as, makes no sense.

It seems to be self-consistent nonsense, but nonsense.

Sometimes people who make some unwarranted assertion, some fact that
isn't a fact or some conclusion that doesn't follow from anything, start
spouting techno-babble to defend it. Often the techno-babble is
self-consistent, in a way, but it's meaningless, irrelevant, of no
relevance other than associatively: it can convince associative
non-technical people reading the exchange. The projectively extended
real number line seems to be of that kind; call it math-babble.


Cheers!,

- Alf

Alf P. Steinbach

Oct 30, 2016, 4:42:37 PM
On 30.10.2016 21:01, David Brown wrote:
> On 30/10/16 19:24, Mr Flibble wrote:
>> On 30/10/2016 17:51, Paavo Helde wrote:
>>>
>>> No, there are mathematics where this is well defined, for example
>>> https://en.wikipedia.org/wiki/Projectively_extended_real_line
>>
>> Nonsense.
>>
>
> No, it is not nonsense - it is perfectly reasonable mathematics.

Is it? What's its use, then?

Cheers!,

- Alf

Rick C. Hodgin

Oct 30, 2016, 5:04:48 PM
The idea is that the software developer working on the larger algorithm,
which in and of itself must then be broken down into many constituent
parts, would be able to apply something like a cask to indicate that, in
a particular portion, a given value should be the result should it
encounter a division by zero condition.

> All division by 0 with the mathematical systems supported directly by
> C++ is mathematically undefined - it makes perfect sense for it to be
> undefined in the language.

Yes, because that's the way it is today. I'm suggesting it shouldn't be
like that, and with some redesign, wouldn't need to be like that.

We live in the age of advanced and mature design toolsets which would
allow for us to do things more easily in the 2010s that we couldn't do
in the 1990s.

> (Division by 0 /has/ a defined value in IEEE 754 floating point, giving
> either a NaN or +/-Inf depending on the circumstances. But C and C++ do
> not specify IEEE behaviour. An implementation may, if it wishes, give
> division by 0 a defined behaviour - it is always free to give
> definitions to thinks the standards leave undefined.)

IEEE 754 also defines signaling and non-signaling forms, so it's at least
incorporated into fp that the possibility of legitimate recovery exists,
though it will trap to an OS-level handler.

My proposal is that it should be to a local code handler so that a
developer working on a particular problem could have it handled in the
special cases without the extra overhead of trapping to the OS, noting
the trap, redirecting to an app handler, returning to the OS, then
finally returning back to the original instruction stream.

> > I think C and C++ have it wrong to disallow division by zero (or to enter
> > into UB when encountered). I think they should both define and allow for
> > hardware which traps the condition to allow explicit and specific code to
> > address it.
>
> Lots of hardware has no efficient way of making such traps.

That's true today. I'm talking about looking to the future.

> > On modern hardware that would be generic OS-level code which
> > can then signal an exception or some other application-defined handler for
> > those kinds of conditions.
>
> Lots of systems don't have an OS - and certainly not one that wants to
> bother with handling something that makes no sense in a program.
>
> > But I think more generally a new approach should
> > be taken such that real application code can be created to handle the division
> > by zero case, such that in various scenarios it can return a valid value. In
> > fact, it might even be worth definining the value so that it computes normally
> > when a hard value is known for a given formula, for example.
>
> That's what strict IEEE compliance does - it gives a hard value (NaN or
> +/-Inf) that you can test later. But that's up to an implementation to
> decide - it is /not/ part of the standards.

There are defined traps as well which are maskable (and typically masked)
which allow those values to come through. However, what I see being
needed is the next level above that. Such an ability gives developers
more control over the machine for those times when that control would
be beneficial.

> > But in the case
> > of not having that handler, then it would also fall back to a generic
> > division by zero handler.
> >
> > I will incorporate these abilities into both integer and fp on my Arxoda CPU
> > design, and support for both into my CAlive compiler.
>
> In virtually every case, a division by 0 is a bug in the code.

That's true today. :-) I'm talking about looking to the future. :-)

> For some
> types of code, it might be acceptable to continue calculations
> regardless, and check for a NaN at the end of the string of operations -
> perhaps that is more efficient than checking for 0 before attempting the
> division. Compilers can support that. Making the compiler trap in some
> clear way on division by zero can also be useful - it will help
> programmers find their bugs faster, and that is always a good thing.
> Again, nothing in the C or C++ standards prevents a compiler doing that.
>
> But pretending it is possible to define a sensible numerical value for
> division by 0 is just silly - it makes no sense mathematically, and is
> of no use in code.

You must say, "It is not applicable in raw fundamental operations, but
within the context of a larger known formula, which must be broken out
into discrete operations, it can make sense, and in many cases it
does" (or words to that effect).

Paavo Helde

Oct 30, 2016, 5:26:28 PM
Why should there be any use? That's not the point of mathematics.
Besides, you never know what may appear useful in the future.

Riemannian manifolds were also considered absolutely useless a couple of
centuries ago, not to speak of number theory (aka cryptography, these
days).

See e.g.
"http://mathoverflow.net/questions/116627/useless-math-that-became-useful"




Alf P. Steinbach

Oct 30, 2016, 5:39:18 PM
On 30.10.2016 22:04, Rick C. Hodgin wrote:
>
> IEEE 754 also defines signaling and non-signaling forms, so it's at least
> incorporated into fp that the possibility of legitimate recovery exists,
> though it will trap to an OS-level handler.
>
> My proposal is that it should be to a local code handler so that a
> developer working on a particular problem could have it handled in the
> special cases without the extra overhead of trapping to the OS, noting
> the trap, redirecting to an app handler, returning to the OS, then
> finally returning back to the original instruction stream.
>

Check out C#'s `checked` and `unchecked`. IIRC.


Cheers!,

- Alf


Rick C. Hodgin

Oct 30, 2016, 7:07:12 PM
As I understand it, C# is a managed language. The conditions it provides
for are signaled to it, as through the flow circuit I outlined: hardware
to OS, OS to manager, manager to handler, handler to manager, manager to
OS, OS back to hardware, which then continues on with the original
instructions.

What I propose is new hardware, and an extension through something I call
casks (which are arbitrary injections of source code in the middle of
otherwise syntactically correct statements), which handle local cases of
that condition when it is encountered, and directly in local code without
the route through the OS and back.
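In ordinary C++ (with no new hardware), the local-handler idea can at least be approximated; a sketch with a hypothetical helper name, the cask syntax itself being, of course, the proposed extension:

```cpp
#include <functional>

// Hypothetical approximation of a "cask": the caller supplies a local
// recovery routine that is invoked on a zero denominator, with no round
// trip through an OS-level trap handler.
double div_with_handler(double x, double y,
                        const std::function<double(double)>& on_zero) {
    return (y == 0.0) ? on_zero(x)   // local recovery path
                      : x / y;       // the ordinary case
}
```

For example, `div_with_handler(3.0, 0.0, [](double){ return 0.0; })` yields 0.0, while nonzero denominators divide normally.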

Juha Nieminen

Oct 31, 2016, 4:19:22 AM
Stefan Ram <r...@zedat.fu-berlin.de> wrote:
> Does it say anywhere that a division by zero /is/ allowed
> (is not undefined behavior) in floating-point divisions?

It's UB. In some systems you might get the IEEE floating point value
of "infinity". In other systems you will get a system signal (which
will abort your program unless you catch it with a system-specific
method). Even in the former case the compiler may optimize the
code in such a manner that it does something other than give you inf
as the result.

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

mark

Oct 31, 2016, 5:25:58 AM
On 2016-10-30 00:46, Stefan Ram wrote:
> r...@zedat.fu-berlin.de (Stefan Ram) writes:
>> Does it say anywhere that a division by zero /is/ allowed
>> (is not undefined behavior) in floating-point divisions?
>
> In practice, it seems to be true what is written in the Web:
>
> »if std::numeric_limits<T>::is_iec559 == true, which it
> normally is for T = float and T = double, the behavior
> is overridden by IEC 559 aka IEEE 754«,
>
> however, the standard does nowhere say that it does not have
> UB anymore in this case (as far as I can [not ]find it). Of
> course, evaluating to "Inf" is subsumed by UB, but when the
> programmer has no guarantee for that result, it is not worth
> that much.

C++14 standard:
<<<
static constexpr bool is_iec559;
True if and only if the type adheres to IEC 559 standard.
>>>

IEC559 standard:
<<<
7.3 Division by zero
The divideByZero exception shall be signaled if and only if an exact
infinite result is defined for an operation on finite operands. The
default result of divideByZero shall be an ∞ correctly signed according
to the operation:
...
>>>


So I don't see how anyone could argue that the behavior is undefined,
since that would directly contradict IEC 559 adherence.

David Brown

Oct 31, 2016, 5:45:00 AM
You can look up projective planes, projectively extended real lines,
etc., on Wikipedia. The most useful variation, I believe, is the
Riemann sphere, which is the complex numbers with an infinity added.
Imagine taking the complex plane as a sheet of rubber and wrapping it
around a sphere, stretching and squashing as necessary (I don't know
your mathematical background, but you will probably have heard of
topology being described as "rubber sheet geometry"). You can make it
all fit as neatly as you want, but there is still a tiny hole left. The
Riemann sphere has that hole filled in with "infinity". As well as
being useful in mathematics, it turns out to be helpful in quantum
physics, I believe.

Of course, once you start adding a single infinity and the result of
division by zero to be that infinity, you no longer have a field and
lose some mathematical properties while gaining others. And in the
Riemann sphere (or the projectively extended real line, which is a sort
of circular version) then 0/0 and inf/inf are still undefined.

It can also be useful to use vectors of the form (x, y, z, u) in
3-dimensional geometry, where your points are (x/u, y/u, z/u) for any
non-zero u. Zero u represents directions, but not points.

So there are many situations where you might want infinities, or
divisions by zero (the two are not the same - hyperreals have
infinities, but no division by zero). But division by zero does not
make sense within the mathematics directly supported by C and C++ -
integer arithmetic, modulo arithmetic, and floating point approximations
to reals and complex numbers.
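As a toy illustration of the rules just described (a hypothetical type, nothing from the standard library): on the projectively extended real line there is a single unsigned infinity, r/0 is that infinity for r != 0, and 0/0 and inf/inf remain undefined.

```cpp
#include <optional>

// Hypothetical toy type: a real value plus a single point at infinity.
struct ProjReal {
    double v;       // finite value; meaningful only when !is_inf
    bool is_inf;    // the one unsigned infinity of the projective line
};

// Division on the projectively extended real line. std::nullopt marks
// the cases that stay undefined even here: 0/0 and inf/inf.
std::optional<ProjReal> proj_div(ProjReal a, ProjReal b) {
    if (a.is_inf && b.is_inf) return std::nullopt;  // inf/inf: undefined
    if (a.is_inf) return ProjReal{0.0, true};       // inf/r  = inf
    if (b.is_inf) return ProjReal{0.0, false};      // r/inf  = 0
    if (b.v == 0.0) {
        if (a.v == 0.0) return std::nullopt;        // 0/0: undefined
        return ProjReal{0.0, true};                 // r/0    = inf
    }
    return ProjReal{a.v / b.v, false};              // ordinary division
}
```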


David Brown

Oct 31, 2016, 5:48:22 AM
Ah, the old "proof by repeated assertion" - reinforced by gratuitously
colourful language. (I have nothing against a bit of swearing where
appropriate - but it does not help when you are wrong.)

Mathematics is more than the arithmetic you learned at school - and more
than the arithmetic supported directly by C++. There are useful
mathematical structures in which division by zero /is/ defined (though
usually excluding 0/0). You might not be familiar with them - most
people are not - but they still exist.


David Brown

Oct 31, 2016, 6:09:34 AM
It is very rare that there is any sensible answer for the result, except
an error indicator. If you have an algorithm that may end up doing
division by zero, then spotting that error and trapping it in some way
could be a good idea. Whether you call that a NaN, a trap, an exception
or a cask is just details. And if it helps the software developer write
better code, or spot their mistakes faster, great.

However, I would say that if an algorithm can try to execute a division
by zero, the algorithm is broken. At best, it is going to be wildly
unstable somewhere - your tools may trap division by zero, but what
about division by 0.00000000000000000000000000000001 that is likely to
be just as wrong?

>
>> All division by 0 with the mathematical systems supported directly by
>> C++ is mathematically undefined - it makes perfect sense for it to be
>> undefined in the language.
>
> Yes, because that's the way it is today. I'm suggesting it shouldn't be
> like that, and with some redesign, wouldn't need to be like that.

The mathematics of integers, reals and complex numbers don't change with
the times. If you want to include some support for division by zero
(other than tracking errors), you are no longer trying to approximate
real number arithmetic with your floating point.

>
> We live in the age of advanced and mature design toolsets which would
> allow for us to do things more easily in the 2010s that we couldn't do
> in the 1990s.
>
>> (Division by 0 /has/ a defined value in IEEE 754 floating point, giving
>> either a NaN or +/-Inf depending on the circumstances. But C and C++ do
>> not specify IEEE behaviour. An implementation may, if it wishes, give
>> division by 0 a defined behaviour - it is always free to give
>> definitions to thinks the standards leave undefined.)
>
> IEEE 754 also defines signaling and non-signaling forms, so it's at least
> incorporated into fp that the possibility of legitimate recovery exists,
> though it will trap to an OS-level handler.

No, NaNs are not there for recovery. They exist in IEEE to help track
errors and find bugs (such as dropping to a debugger when a signalling
NaN is generated), and to allow code to know that an incorrect
calculation has occurred. They are not of any use in trying to correct
the calculation or generate a correct output given incorrect input or an
incorrect algorithm.

>
> My proposal is that it should be to a local code handler so that a
> developer working on a particular problem could have it handled in the
> special cases without the extra overhead of trapping to the OS, noting
> the trap, redirecting to an app handler, returning to the OS, then
> finally returning back to the original instruction stream.
>

That's fine - I have nothing against such an idea. C++ supports it
manually:

#include <stdexcept>

double safediv(double x, double y) {
    if (y == 0) throw std::overflow_error("Division by zero");
    return x / y;
}

For your language, you would use casks, and you can make it automatic if
you like (as Java does). It adds a small run-time overhead to do the
checking, but it is your decision whether you want the feature or not.

>>> I think C and C++ have it wrong to disallow division by zero (or to enter
>>> into UB when encountered). I think they should both define and allow for
>>> hardware which traps the condition to allow explicit and specific code to
>>> address it.
>>
>> Lots of hardware has no efficient way of making such traps.
>
> That's true today. I'm talking about looking to the future.

I look to the future, and for every processor made that has efficient
traps on division by zero, there will be thousands made that don't have
such traps. There will even be thousands made that have no division
instructions at all. And there is no reason why that should ever
change.

It is, of course, fine for you to say that your language will require
such hardware support in its target processors. That won't fly for C or
C++, but it's fine for /your/ language to make such choices.

Again, basic mathematics won't change in the future. Unless you are
doing maths with structures (such as the Riemann sphere) that support
division by 0 in a consistent and sensible way, then dividing by zero is
usually a bug in the code. It might turn up due to incorrect input
(garbage in, garbage out) - an error in whatever is feeding the
algorithm, rather than in the algorithm itself. And NaNs can be a
convenient way of spotting this without having to explicitly check for
division by 0 in the code.

>
>> For some
>> types of code, it might be acceptable to continue calculations
>> regardless, and check for a NaN at the end of the string of operations -
>> perhaps that is more efficient than checking for 0 before attempting the
>> division. Compilers can support that. Making the compiler trap in some
>> clear way on division by zero can also be useful - it will help
>> programmers find their bugs faster, and that is always a good thing.
>> Again, nothing in the C or C++ standards prevents a compiler doing that.
>>
>> But pretending it is possible to define a sensible numerical value for
>> division by 0 is just silly - it makes no sense mathematically, and is
>> of no use in code.
>
> You must say, "It is not applicable in raw fundamental operations, but
> within the context of a larger known formula, which must be broken out
> to into discrete operations, it can make sense, and in many cases it
> does" (or words to that effect).
>

No, I don't have to say that.



Real Troll

Oct 31, 2016, 9:01:31 AM
On 29/10/2016 22:59, Stefan Ram wrote:
> C++ says:
>
> »If during the evaluation of an expression, the result
> is not mathematically defined or not in the range of
> representable values for its type, the behavior is
> undefined.«, C++ 2016, 5p4 and
>
> »If the second operand of / or % is zero the behavior is
> undefined.«, C++ 2016, 5.6p4.
>
> Does it say anywhere that a division by zero /is/ allowed
> (is not undefined behavior) in floating-point divisions?
>

A number is a number whether an integer or a decimal number. Therefore,
division by zero is not defined and one should introduce error checking
to avoid this problem.


leigh.v....@googlemail.com

Oct 31, 2016, 9:09:57 AM
You are simply wrong. Division by zero is undefined in mathematics.

Rick C. Hodgin

Oct 31, 2016, 9:28:59 AM
On Monday, October 31, 2016 at 9:09:57 AM UTC-4, leigh.v....@googlemail.com wrote:
> You are simply wrong. Division by zero is undefined in mathematics.

You are stuck on one point, Leigh, and can't break past it. That doesn't
change the fact that there's more to the explanation of division by zero
than that one position.

If you pursue the truth, you'll come to realize that there's more than
you previously thought (about a great many things ... including faith in
Jesus Christ).

David Brown

Oct 31, 2016, 9:37:25 AM
On 31/10/16 14:08, leigh.v....@googlemail.com wrote:
> You are simply wrong. Division by zero is undefined in mathematics.
>

Please update Wikipedia, based on your new-found expertise in
mathematics. To get started, edit this page:

<https://en.wikipedia.org/wiki/Riemann_sphere#Arithmetic_operations>

Then move on to
<https://en.wikipedia.org/wiki/Projectively_extended_real_line>

<https://en.wikipedia.org/wiki/Wheel_theory> should be deleted altogether.


Those of us who know a little more mathematics, will continue to say
that division by zero is undefined for integers, real numbers, and any
other fields, as well as computer-based approximations to those fields -
but that division by zero /is/ defined for certain other mathematical
structures.


It is fine to say that you are never going to need these structures.
/I/ am highly unlikely to ever need them, despite knowing about them.
And most people will never know of or care about their existence. But
repeating your ignorance does not make you look any smarter.

Mr Flibble

Oct 31, 2016, 1:35:24 PM
Those three "mathematical structures" are gibberish; yes the articles
should be deleted. Again: division by zero is undefined in kosher
mathematics.

/Flibble


Gareth Owen

Oct 31, 2016, 1:56:43 PM
Mr Flibble <flibbleREM...@i42.co.uk> writes:

> But in mathematics division by zero is undefined so you are talking
> bollocks.

Depends which field you're in. Real or complex numbers?
Yeah definitely undefined.

But mathematicians - devious bastards that we are - have created domains
in which there is a zero element with defined division.

The fact that you don't know enough advanced mathematics to appreciate the
truth of that statement is a shame; that you assert your ignorance in
denying it is embarrassing.

Gareth Owen

Oct 31, 2016, 1:57:19 PM
Mr Flibble <flibbleREM...@i42.co.uk> writes:

> Those three "mathematical structures" are gibberish; yes the articles
> should be deleted. Again: division by zero is undefined in kosher
> mathematics.

Wow. That's ... Stucklesque.

Mr Flibble

Oct 31, 2016, 1:59:32 PM
Nope. Anyone can make up any old bollocks in any field as is the case
here. Division by zero is undefined, end of.

/Flibble


Mr Flibble

Oct 31, 2016, 2:02:06 PM
And whilst we are on this subject: there is no such thing as negative
zero in mathematics. IEEE floating point is wrong in including support
for negative zero.

/Flibble


Gareth Owen

Oct 31, 2016, 2:11:39 PM
Mr Flibble <flibbleREM...@i42.co.uk> writes:

> And whilst we are on this subject: there is no such thing as negative
> zero in mathematics

This is true (to the best of my knowledge). Stopped clocks, and all that.

Rick C. Hodgin

Oct 31, 2016, 2:22:21 PM
On Monday, October 31, 2016 at 1:56:43 PM UTC-4, gwowen wrote:
> Mr Flibble <flibbleREM...@i42.co.uk> writes:
>
> > But in mathematics division by zero is undefined so you are talking
> > bollocks.
>
> Depends which field you're in. Real or complex numbers?
> Yeah definitely undefined.
>
> But mathematicians - devious bastards that we are - have created domains
> in which there is a zero element with defined division.

It's worse than that:

1 + 2 + 3 + 4 ... = -1/12
https://www.youtube.com/watch?v=w-I6XTVZXww

> The fact that you don't know enough advanced mathematics to appreciate the
> truth of that statement is a shame; that you assert your ignorance in
> denying it is embarrassing.

All truly advanced math people are obvious lunatics. It's so easy to
see from the nonsensical stuff they come up with. So, it's not fair
to judge Leigh on standards where they think an infinite sum of ever-
increasing positive integers is actually -1/12.

:-)

Best regards
Rick C. Hodgin

Melzzzzz

Oct 31, 2016, 2:26:05 PM
On 2016-10-31, Rick C. Hodgin <rick.c...@gmail.com> wrote:
> All truly advanced math people are obvious lunatics.

Bullshit. Math is a rational science. You can't get more rational than in
math...


>
> Best regards
> Rick C. Hodgin


--
press any key to continue or any other to quit

Rick C. Hodgin

Oct 31, 2016, 2:29:08 PM
On Monday, October 31, 2016 at 2:26:05 PM UTC-4, Melzzzzz wrote:
> On 2016-10-31, Rick C. Hodgin <rick.c...@gmail.com> wrote:
> > All truly advanced math people are obvious lunatics.
> ... Math is a rational science. You can't get more rational than in
> math...

Psssst. Hey... come over here, Melzzzzz. I'll let you in on a little
secret... I was kidding.

:-)

Melzzzzz

Oct 31, 2016, 2:50:00 PM
On 2016-10-31, Rick C. Hodgin <rick.c...@gmail.com> wrote:
> On Monday, October 31, 2016 at 2:26:05 PM UTC-4, Melzzzzz wrote:
>> On 2016-10-31, Rick C. Hodgin <rick.c...@gmail.com> wrote:
>> > All truly advanced math people are obvious lunatics.
>> ... Math is a rational science. You can't get more rational than in
>> math...
>
> Psssst. Hey... come over here, Melzzzzz. I'll let you in on a little
> secret... I was kidding.
>
>:-)

Oh!

>
> Best regards,
> Rick C. Hodgin


Alf P. Steinbach

Oct 31, 2016, 2:51:22 PM
Uhm, none of the above seems at all useful to me.

I once (1986) defined a kind of division by zero that was useful for
Dempster-Shafer evidence combination formulas. Essentially it treats A*B
as a multiset {A, B}, and then A/B is defined in practice for B=0 by
allowing a negative count of a factor in the set. It all reduces to
simple operations on pairs for the cases of practical interest.

This allowed simple general code (it was in Modula-2) without
special-casing all over the place.

But there is no infinity in such a scheme, and I'd better mention, in
order to not give people wrong ideas, that apparently it can't
reasonably support addition or subtraction of pairs other than those
that have the same count of 0. So it's a scheme that is practical for
certain programming tasks, but as far as I can see not for mathematics.
Unless mathematicians can enjoy the elegance of the code.

The infinity of the Riemann sphere and of the whatever-projected real
line seems to be nothing but a device to allow vertical lines to be
expressed as y = ax + b, with a as infinity. That “explains” why
-infinity = +infinity in such a scheme: the vertical line can be viewed as
going up, or down, at will. But it's utter nonsense: it's a kludge, at a
high cost, to support a single case, for which there is no need.

So, I think Leigh is entirely right about the x/0 that yields infinity.

It's just idiocy. :)


Cheers!,

- Alf

Gareth Owen

Oct 31, 2016, 3:43:46 PM
"Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:

> Uhm, none of the above seems at all useful to me.

You should write a letter to the people for whom it is useful, and let
them know they're wrong, then. It'll be a shame for all the
undergraduates using Mobius transformation groups to solve 2D
incompressible flow problems.

Alf P. Steinbach

Oct 31, 2016, 4:26:53 PM
On 31.10.2016 20:43, Gareth Owen wrote:
> "Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:
>
>> Uhm, none of the above seems at all useful to me.
>
> You should write a letter to the people for whom it is useful, and let
> them know they're wrong, then.

Well, that's a provably false assertion. A so-called straw man argument,
which is a classic fallacy. I have not said they're wrong.

But regarding the utility of the (IMHO) idiocy, if you'd like to discuss
that, the burden of proof is on you, sir.


> It'll be a shame for all the
> undergraduates using Mobius transformation groups to solve 2D
> incompressible flow problems.

Do explain how that (if it has some meaning) is relevant to the current
discussion.

I posit that if you can't explain it in a way that I understand, then as
far as being an argument made by you it's invalid.

Thank you.


Cheers!,

- Alf

Jerry Stuckle

Oct 31, 2016, 6:23:31 PM
Wrong. I go by the facts - not what someone who claims to be an expert
because he read an article in Wikipedia says.

And yes, division by zero is valid in some mathematics.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

David Brown

Nov 1, 2016, 5:06:31 AM
On 31/10/16 23:23, Jerry Stuckle wrote:
> On 10/31/2016 1:57 PM, Gareth Owen wrote:
>> Mr Flibble <flibbleREM...@i42.co.uk> writes:
>>
>>> Those three "mathematical structures" are gibberish; yes the articles
>>> should be deleted. Again: division by zero is undefined in kosher
>>> mathematics.
>>
>> Wow. That's ... Stucklesque.
>>
>
> Wrong. I go by the facts - not what someone who claims to be an expert
> because he read an article in Wikipedia says.
>
> And yes, division by zero is valid in some mathematics.
>

The Wikipedia articles were quoted as references for division by zero in
some mathematics. Wikipedia is not always correct or complete, but it
is often not bad - and its maths articles are usually accurate (though
sometimes almost incomprehensible, even to people familiar with the topic).

But the point is that Wikipedia articles on mathematical structures like
the Riemann sphere are vastly better references than Mr. Flibble's
ignorance backed up by absolutely nothing.

(For the record, I have not claimed to be an expert on these sorts of
mathematical structures. I learned about them - and forgot most of them
- before Wikipedia was invented.)

David Brown

Nov 1, 2016, 5:12:27 AM
In most mathematical structures with a zero, the zero is unique - that is
generally one of the fundamental useful properties of many structures.

However, it is sometimes useful to distinguish "approaching a limiting
value of 0 from the negative side" and "approaching a limiting value of
0 from the positive side". It may also be useful to distinguish between
"positive infinitesimal" and "negative infinitesimal" (as is done in the
field of hyperreals).

But while there is no "negative zero" or "positive zero" in the real
numbers, IEEE floating point is not an exact representation of the real
numbers - and it includes extra features. "Negative zero" can be used
to mean "too small to represent in this format, but known to be
negative". Is it a useful feature that makes sense in algorithms? I
don't know - I have never had a use for it myself, but maybe someone
else has.

David Brown

Nov 1, 2016, 5:47:28 AM
As noted before, no one looks at your youtube links.

Mathematicians don't think that you can sum the positive integers and
get the result -1/12 (assuming we are talking about the field of
integers, rationals or reals here, rather than a different algebraic
structure designed to give that specific result).

But contrary to popular opinion, some mathematicians have a sense of
humour and like to make up "proofs" of obviously incorrect statements.
It's a sort of game - the challenge is to spot the flaw in the "proof".
Here is an example, the "Cheese Sandwich Theorem" (named after the very
real and important "Ham Sandwich Theorem") :

1. Nothing is better than complete happiness.
2. A cheese sandwich is better than nothing.
3. Therefore, a cheese sandwich is better than complete happiness.

I can also "prove" that everyone in a class gets the same result in any
exam.


David Brown

Nov 1, 2016, 6:02:32 AM
You are missing the point here.

No one is arguing that dividing by zero in the real numbers, or C++
floats or doubles, is a good idea - it will /always/ give an incorrect
answer and will not help anyone (even if they are trying to simulate
incompressible flows, or solve quantum mechanics problems). The IEEE
defined values of NaN and infinity are not real numbers - they are error
indicators that can help you spot what went wrong in the code, and also
to make sure that you can see that something went wrong. Since "x / 0"
is undefined by the C++ standards (and only defined by implementations,
such as by adding IEEE requirements), "x / 0" could return 3.12 and your
program would not realise there was a mistake.


As a more general point, however, there are mathematical structures for
which division by zero /is/ defined. These are structures that have
definite usage - even though /you/ may not need them (nor have I), and
ignoramuses like Mr. Flibble may deny their existence. They cannot be
represented directly by a single C++ float or double - and division
operations are not conducted by a single C++ floating point division.


I find it hard to comprehend that people are arguing about this in this
thread. Are people so convinced that they know /all/ about everything
in mathematics that they can deny the existence of other mathematical
structures that they are not familiar with? Are people so convinced
that they know /all/ about physics and how everything is calculated, so
that they can deny that such mathematics has use? Are people so
convinced that they know everything, that they insist on concrete proof
and detailed descriptions of such usage, rather than just accepting
people's word that such usage exists?



Rick C. Hodgin

Nov 1, 2016, 6:40:02 AM
David Brown wrote:
> Rick C. Hodgin wrote:
> > 1 + 2 + 3 + 4 ... = -1/12
> > https://www.youtube.com/watch?v=w-I6XTVZXww
> > ...
> > All truly advanced math people are obvious lunatics. It's so easy to
> > see from the nonsensical stuff they come up with. So, it's not fair
> > to judge Leigh on standards where they think an infinite sum of ever-
> > increasing positive integers is actually -1/12.
>
> As noted before, no one looks at your youtube links.

How exactly would you prove that claim, D.B.? Oh that's right ... you
can't. Well, I do apologize. I must've forgotten momentarily when I
first asked. Either that, or I was distracted by a thing. In either case,
my bad. (Or ... is it your bad? Hmmm...)

> Mathematicians don't think that you can sum the positive integers
> and get the result -1/12 (assuming we are talking about the
> field of integers, rationals or reals here, rather than a different
> algebraic structure designed to give that specific result).

I've heard it said, "Ignorance is bliss." Tell me, Mr. Brown, is it true?

Alf P. Steinbach

Nov 1, 2016, 7:08:00 AM
On 01.11.2016 10:12, David Brown wrote:
> "Negative zero" can be used
> to mean "too small to represent in this format, but known to be
> negative". Is it a useful feature that makes sense in algorithms? I
> don't know - I have never had a use for it myself, but maybe someone
> else has.

I once saw some graphing examples where it really mattered.

The negative zero in essence remembers to what side of something
something is.

Wikipedia's article about signed zero gives a reference to an article by
William Kahan, who created IEEE 754 (sort of). It's behind a pay wall,
but Mr. Google found a PDF. Here: <url:
https://people.freebsd.org/~das/kahan86branch.pdf>.

It looks pretty inaccessible to me; I can't follow the reasoning just
by skimming it. At the end he has some weird-looking conclusions, e.g.

“Somewhat less clear are the signs of results like [formulas garbled in
the PDF extraction: products and powers of ±0.5, ±2, 0 and ±∞, all
coming out as ±∞]. It is possible to argue that all these results
should be assigned + signs in real arithmetic on any North American
computer;”

“North American”? WTF?

Cheers!,

- Alf

David Brown

Nov 1, 2016, 7:19:55 AM
On 01/11/16 11:39, Rick C. Hodgin wrote:
> David Brown wrote:
>> Rick C. Hodgin wrote:
>>> 1 + 2 + 3 + 4 ... = -1/12
>>> https://www.youtube.com/watch?v=w-I6XTVZXww
>>> ...
>>> All truly advanced math people are obvious lunatics. It's so easy to
>>> see from the nonsensical stuff they come up with. So, it's not fair
>>> to judge Leigh on standards where they think an infinite sum of ever-
>>> increasing positive integers is actually -1/12.
>>
>> As noted before, no one looks at your youtube links.
>
> How exactly would you prove that claim, D.B.? Oh that's right ... you
> can't. Well, I do apologize. I must've forgotten momentarily when I
> first asked. Either that, or I was distracted by a thing. In either case,
> my bad. (Or ... is it your bad? Hmmm...)

I should apologise - I came across a bit rougher than I intended here.
And you are correct that it is unreasonable for me to claim that /no
one/ would look at that link, even though I think very few people would
bother. Youtube videos are simply an extremely inconvenient and
time-consuming way of having a quick look at something, or of studying
something in detail - they can be a useful addition to written
references, but are not alternatives.

If you had used a link such as
<https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF> or
<http://math.stackexchange.com/questions/39802/why-does-123-cdots-frac112>,
it would have been a lot more useful.

>
>> Mathematicians don't think that you can sum the positive integers
>> and get the result -1/12 (assuming we are talking about the
>> field of integers, rationals or reals here, rather than a different
>> algebraic structure designed to give that specific result).
>
> I've heard it said, "Ignorance is bliss." Tell me, Mr. Brown, is it true?
>

I don't know :-)

I do know that the Riemann zeta function at -1 is -1/12, and I know that
zeta(s) = sum(n^-s) for s with real part greater than 1. I know that
people who don't really understand this think that it means it is a
proof that 1 + 2 + 3 + ... is -1/12. And I know that mathematicians do
not think that is the case (except perhaps in specific mathematical
structures).


Ben Bacarisse

Nov 1, 2016, 8:18:57 AM
David Brown <david...@hesbynett.no> writes:

> On 31/10/16 14:08, leigh.v....@googlemail.com wrote:
>> You are simply wrong. Division by zero is undefined in mathematics.
>
> Please update Wikipedia, based on your new-found expertise in
> mathematics. To get started, edit this page:
>
> <https://en.wikipedia.org/wiki/Riemann_sphere#Arithmetic_operations>
>
> Then move on to
> <https://en.wikipedia.org/wiki/Projectively_extended_real_line>
>
> <https://en.wikipedia.org/wiki/Wheel_theory> should be deleted altogether.
>
> Those of us who know a little more mathematics, will continue to say
> that division by zero is undefined for integers, real numbers, and any
> other fields, as well as computer-based approximations to those fields -
> but that division by zero /is/ defined for certain other mathematical
> structures.
>
> It is fine to say that you are never going to need these structures.

You are very likely to. Division by zero is defined in some programming
languages (for example ECMAScript) and IEEE floating point also defines
it (though a little less strictly). It's ironic that some of the most
widely encountered cases where division by zero is well-defined are in
the field of computing.

<snip>
--
Ben.

Rick C. Hodgin

Nov 1, 2016, 8:24:53 AM
I don't typically rely on Wikipedia, though at times it's helpful. Stack
Exchange has been helpful in finding solutions to computer questions as
there are so many contributors. It never occurred to me to use it for
math-related questions. Learn something new every day.

I also have dyslexia and have a very difficult time reading. I try and
choose channels on YouTube which provide teaching and not just fluff.
These give big picture ideas typically which, if I'm interested in more
details, then I can go deeper in some other forum.

I refer to those videos because they convey things in a way I can more
easily understand (compared to reading, which takes much more effort, is
much slower, and is prone to weird errors).

> >> Mathematicians don't think that you can sum the positive integers
> >> and get the result -1/12 (assuming we are talking about the
> >> field of integers, rationals or reals here, rather than a different
> >> algebraic structure designed to give that specific result).
> >
> > I've heard it said, "Ignorance is bliss." Tell me, Mr. Brown, is it true?
>
> I don't know :-)
>
> I do know that the Riemann zeta function at -1 is -1/12, and I know that
> zeta(s) = sum(n^-s) for s with real part greater than 1.

I have no knowledge of that apart from having read it previously. It was
also mentioned on an episode of Numb3rs that I remember.

> I know that
> people who don't really understand this think that it means it is a
> proof that 1 + 2 + 3 + ... is -1/12. And I know that mathematicians do
> not think that is the case (except perhaps in specific mathematical
> structures).

The people on the Numberphile YouTube channel are PhD mathematicians, and
they make videos related to unusual math traits.

In the video I linked above they also cite this proof:

S1 = 1 - 1 + 1 - 1 ... = 1/2
[Grandi's Series: https://www.youtube.com/watch?v=PCu_BNNI5x4]

S2 = 1 - 2 + 3 - 4 ... = 1/4
S3 = 1 + 2 + 3 + 4 ... = -1/12

If you watch the video, they take you through it step-by-step (and
do not use Riemann). The Brady Haran blog fills in some additional
details via written words, as does the follow-on video to the
1 + 2 + 3 + 4 ... = -1/12 video:

Sum of Natural Numbers (second proof (Euler) and extra footage):
https://www.youtube.com/watch?v=E-d9mgo8FGk

Why -1/12 is a gold nugget:
https://www.youtube.com/watch?v=0Oazb7IWzbA

NY Times article on the subject:
www.nytimes.com/2014/02/04/science/in-the-end-it-all-adds-up-to.html

General blog post regarding the videos:
http://www.bradyharanblog.com/blog/2015/1/11/this-blog-probably-wont-help

The value turns up in physics, even being cited in the physics book:

String Theory, Volume 1: An Introduction to the Bosonic String
by Joseph Polchinski, page 22:

https://www.amazon.com/String-Cambridge-Monographs-Mathematical-Physics/dp/0521672279

sum(n, n = 1 .. infinity) --> -1/12

Daniel

Nov 1, 2016, 9:27:31 AM
On Tuesday, November 1, 2016 at 6:02:32 AM UTC-4, David Brown wrote:
>
> I find it hard to comprehend that people are arguing about this in this
> thread.

Mr Flibble doesn't argue, David. Mr Flibble makes statements.

Daniel

David Brown

unread,
Nov 1, 2016, 10:12:38 AM11/1/16
to
Just remember, when you post in a newsgroup you are not posting for
/you/. You are posting for everyone else. A youtube video will have a
very specific viewpoint, presented in a specific way by a specific
person. For the most part, no one has any idea of its level of
authority or correctness. Wikipedia is not always authoritative and not
always correct, but it is peer-reviewed and people can judge its
quality. It is not merely "some guy off the internet". Stack exchange
and similar sites also have at least a basis of peer review in their
comments. Most people can quickly and easily scan a web page to get an
idea if it is of interest and understandable to them - they can go at
the speed /they/ want, and the depth /they/ want, not just the speed and
depth the video presenter wants. Add to that that this is an
international newsgroup - reading English is often a lot easier for
non-natives than listening to some presenter, and it is often possible to
translate web pages.


>
>>>> Mathematicians don't think that you can sum the positive integers
>>>> and get the result -1/12 (assuming we are talking about the
>>>> field of integers, rationals or reals here, rather than a different
>>>> algebraic structure designed to give that specific result).
>>>
>>> I've heard it said, "Ignorance is bliss." Tell me, Mr. Brown, is it true?
>>
>> I don't know :-)
>>
>> I do know that the Riemann zeta function at -1 is -1/12, and I know that
>> zeta(s) = sum(n^-s) for s with real part greater than 1.
>
> I have no knowledge of that apart from having read it previously. It was
> also mentioned on an episode of Numb3rs that I remember.

zeta(-1) is the basis of the -1/12 value. Interestingly, it also turns
up in some physics problems where it looks like you have 1 + 2 + 3 +
..., and the "infinite bit" cancels out with other effects leaving you a
neat -1/12 result.

Basically, zeta(s) is defined for Re(s) > 1 to be sum(n ^ -s). It
converges nicely for any complex number s with real part greater than 1.
Then you take the analytic extension of it in the complex number field
to define it for other values of s (except s == 1). It turns out to be
a really useful function with very interesting properties - but it only
actually matches sum(n ^ -s) for Re(s) > 1. So you can calculate
zeta(-1) to get -1/12, but that does not mean that the sum
representation gives -1/12.

>
>> I know that
>> people who don't really understand this think that it means it is a
>> proof that 1 + 2 + 3 + ... is -1/12. And I know that mathematicians do
>> not think that is the case (except perhaps in specific mathematical
>> structures).
>
> The people on the Numberphile YouTube channel are PhD mathematicians, and
> they make videos related to unusual math traits.
>
> In the video I linked above they also cite this proof:
>
> S1 = 1 - 1 + 1 - 1 ... = 1/2
> [Grandi's Series: https://www.youtube.com/watch?v=PCu_BNNI5x4]
>
> S2 = 1 - 2 + 3 - 4 ... = 1/4
> S3 = 1 + 2 + 3 + 4 ... = -1/12

And again, none of these sequences converge in the conventional sense.
You can, in a sense, "pretend" that these work - and you can define
other forms of infinite sums that allow you to assign a value to a sum
in a way that could be useful.

So if you take 1, subtract 1, add 1, and keep going, you will /never/
get 1/2. You will never get close to it.

If you take the series of partial sums in pairs, you get a sequence 1,
0, 1, 0, 1, 0, ... This clearly has an average value of 1/2, getting
closer to 1/2 as you go further along. So you can say that in this
sense (known as the Cesàro sum, which is a well-defined concept) the
sequence S1 "sums" to 1/2. That has plenty of practical uses - it is
how you set the backlight on your mobile to half-strength, for example.
But it is /not/ the limit of the normal summation of the sequence.

S1 = 1 - 1 + 1 - 1 + 1 - 1 + 1 ...
so S1 = 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + ...
= 1 + 0 + 0 + 0 + ....
= 1.
Also S1 = (1 - 1) + (1 - 1) + (1 - 1) + ...
= 0 + 0 + 0 ...
= 0.

It's easy to play games like this with infinite sequences - because the
rules of normal arithmetic do not apply. But they are great for looking
impressive, especially when the audience (or sometimes the presenter!)
is not aware of the subtleties involved and unable to see the flaw - the
undefined behaviour, in C++ terms.

Rick C. Hodgin

Nov 1, 2016, 10:26:23 AM
You're free to not watch anything I post, David. It will be to your loss
however, as I don't post random things. The things I post are specific
and relate to whatever the topic is I have. In fact, the things I post
are typically the most concise and accurate summation of whatever the
things is I'm trying to convey.

> >>>> Mathematicians don't think that you can sum the positive integers
> >>>> and get the result -1/12 (assuming we are talking about the
> >>>> field of integers, rationals or reals here, rather than a different
> >>>> algebraic structure designed to give that specific result).
> >>>
> >>> I've heard it said, "Ignorance is bliss." Tell me, Mr. Brown, is it true?
> >>
> >> I don't know :-)
> >>
> >> I do know that the Riemann zeta function at -1 is -1/12, and I know that
> >> zeta(s) = sum(n^-s) for s with real part greater than 1.
> >
> > I have no knowledge of that apart from having read it previously. It was
> > also mentioned on an episode of Numb3rs that I remember.
>
> zeta(-1) is the basis of the -1/12 value. Interestingly, it also turns
> up in some physics problems where it looks like you have 1 + 2 + 3 +
> ..., and the "infinite bit" cancels out with other effects leaving you a
> neat -1/12 result.

There are other solutions which only involve known series. They explain
it in the videos, which is why I posted them.

It has to do with the fact that it's an infinite series, and not a finite
series, and there are several interesting ways they've discovered to
handle infinite series:

Here's a more detailed explanation:
https://www.youtube.com/watch?v=jcKRGpMiVTw

When you begin to look at divergent infinite series, there are some
things which can be observed: patterns and traits which do not exist
in finite series, or in convergent infinite series. As a result, this
new type of math ability comes into play. Kind of like how we have
i = sqrt(-1).

> If you take the series of partial sums in pairs, you get a sequence 1,
> 0, 1, 0, 1, 0, ... This clearly has an average value of 1/2, getting
> closer to 1/2 as you go further along. So you can say that in this
> sense (known as the Cesàro sum, which is a well-defined concept) the
> sequence S1 "sums" to 1/2. That has plenty of practical uses - it is
> how you set the backlight on your mobile to half-strength, for example.
> But it is /not/ the limit of the normal summation of the sequence.
>
> S1 = 1 - 1 + 1 - 1 + 1 - 1 + 1 ...
> so S1 = 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + ...
> = 1 + 0 + 0 + 0 + ....
> = 1.
> Also S1 = (1 - 1) + (1 - 1) + (1 - 1) + ...
> = 0 + 0 + 0 ...
> = 0.

They mention those two solutions in the video. They also explain why the
more generally accepted answer is now 1/2.

You know, you're very closed-minded, David, for someone who is so open
minded.

> It's easy to play games like this with infinite sequences - because the
> rules of normal arithmetic do not apply. But they are great for looking
> impressive, especially when the audience (or sometimes the presenter!)
> is not aware of the subtleties involved and unable to see the flaw - the
> undefined behaviour, in C++ terms.

You dismiss them summarily because you ascribe attention-seeking motives.
These theories were proposed back in the 1700s, 1800s, 1900s, and more
recently, by hundreds of mathematicians. Were they all fame seekers? No.
Many of them were absolutely astounded at what they discovered, and didn't
understand even how it could be that way. They spent decades trying to
solve it only to have someone later come along and solve it for them. And
there remains much unsolved today.

I've had the theory that in infinite series there is a wrapping that takes
place in a Mandelbrot set, such that you start up or down from 0 and land
on a destination, which is the first value in the series. You are always
constrained in how far you can move based on the height of the bounds of
the shape described by the Mandelbrot set. You go up or down as far as
you can, and then wrap. That lands you on a destination point. You then
go to the next value in the series. That lands you on a new destination
point. You continue on with each value in the sequence, and eventually
you'll find a preponderance of points which converge on the value. And
if series values are finalized on a number outside of that range, it's
likely some scaled value within that range, which can be extrapolated.

That is my personal "gut instinct" as to what's going on (or something
similar to it), and I could be wrong. Where to begin on the x range of
the Mandelbrot set image ... that's another question. I think answers
will fall into ranges. Those that have one solution, those that have
multiple solutions, and those that have no solutions. But, we may see
in time if anyone studies it.

David Brown

Nov 1, 2016, 11:28:17 AM
I /know/ it is a different sort of maths. I /know/ you can use
different ways of defining "infinite sum" and look at limits in
different ways, in order to get other useful properties. This is not
something new to me.

But you achieve this by introducing new features and tools to your maths
- not by claiming things that are clearly incorrect. When you work with
the real numbers, sqrt(-1) makes no sense - you either accept that, or
work with an extension (the complex numbers) for which it /does/ make sense.

The sum 1 + 2 + 3... has no answer - it is a divergent sequence, and no
matter how far you go, you will never get any closer to -1/12.

You can take that sequence of numbers and use it to derive a value of
-1/12 in a variety of ways. Ramanujan summation is one way, as is the
use of the zeta function. These are /not/ the result of doing an
infinite number of additions on the integers - they are the result of
properties of the sequence or the partial sums along the way. It is a
different kind of feature - it is useful maths, but it is easy for
non-mathematicians to get confused and think it means that if you take
all the positive integers and add them up, you get -1/12.

>
>> If you take the series of partial sums in pairs, you get a sequence 1,
>> 0, 1, 0, 1, 0, ... This clearly has an average value of 1/2, getting
>> closer to 1/2 as you go further along. So you can say that in this
>> sense (known as the Cesàro sum, which is a well-defined concept) the
>> sequence S1 "sums" to 1/2. That has plenty of practical uses - it is
>> how you set the backlight on your mobile to half-strength, for example.
>> But it is /not/ the limit of the normal summation of the sequence.
>>
>> S1 = 1 - 1 + 1 - 1 + 1 - 1 + 1 ...
>> so S1 = 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + ...
>> = 1 + 0 + 0 + 0 + ....
>> = 1.
>> Also S1 = (1 - 1) + (1 - 1) + (1 - 1) + ...
>> = 0 + 0 + 0 ...
>> = 0.
>
> They mention those two solutions in the video. They also explain why the
> more generally accepted answer is now 1/2.

No, 1/2 is /not/ a "more generally accepted answer". The sums I showed
above demonstrate that there is no "answer" - it makes no mathematical
sense. The paragraph above (where I mentioned Cesàro summation, in case
you want to look it up) shows how you can take a different view of the
sequence and get a different result. Various methods of summation, such
as Cesàro summation, Abel summation, Ramanujan summation, and "normal"
infinite summation (Cauchy convergence) are possible. When these exist,
they agree in value - but sometimes they do not all exist. When only
some of these sorts of summations exist, the result will be consistent
with certain properties, and is perhaps useful - but it would be wrong
to say "this is the infinite sum".

>
> You know, you're very closed-minded, David, for someone who is so open
> minded.

You simply misunderstand the situation here. That is no disrespect -
these kind of summations are pretty advanced mathematics, and you are
unlikely to cover them in a normal university mathematics degree. It is
very difficult for most people to separate different ways of handling
sequences, especially infinite sequences, and to understand when you can
and cannot generalise and extrapolate.

>
>> It's easy to play games like this with infinite sequences - because the
>> rules of normal arithmetic do not apply. But they are great for looking
>> impressive, especially when the audience (or sometimes the presenter!)
>> is not aware of the subtleties involved and unable to see the flaw - the
>> undefined behaviour, in C++ terms.
>
> You dismiss them summarily because you ascribe attention-seeking motives.

No - I assume that it is done for fun. That is the primary motivation
for most mathematics, especially for demonstrating "weird maths".
Certainly that is why /I/ posted the cheese sandwich theorem. Or it was
done because the results are useful in the correct context (which is
/not/ standard arithmetical summation).

> These theories were proposed back in the 1700s, 1800s, 1900s, and more
> recently, by hundreds of mathematicians. Were they all fame seekers? No.
> Many of them were absolutely astounded at what they discovered, and didn't
> understand even how it could be that way. They spent decades trying to
> solve it only to have someone later come along and solve it for them. And
> there remains much unsolved today.

What they discovered was that:

a) If you pretend that infinite sequences can be manipulated like finite
ones, it is easy to end up with silly, inconsistent or clearly incorrect
results - as well as results that look interesting but you cannot
rigorously prove.

b) If you form rigorous and formal definitions for ways of handling
infinite sequences, you can do very interesting things with them and
produce fascinating and useful results - but you are no longer just
doing simple addition.

>
> I've had the theory that in infinite series there is a wrapping that takes
> place in a Mandelbrot set, such that you start up or down from 0 and land
> on a destination, which is the first value in the series. You are always
> constrained in how far you can move based on the height of the bounds of
> the shape described by the Mandelbrot set. You go up or down as far as
> you can, and then wrap. That lands you on a destination point. You then
> go to the next value in the series. That lands you on a new destination
> point. You continue on with each value in the sequence, and eventually
> you'll find a preponderance of points which converge on the value. And
> if series values are finalized on a number outside of that range, it's
> likely some scaled value within that range, which can be extrapolated.
>
> That is my personal "gut instinct" as to what's going on (or something
> similar to it), and I could be wrong. Where to begin on the x range of
> the Mandelbrot set image ... that's another question. I think answers
> will fall into ranges. Those that have one solution, those that have
> multiple solutions, and those that have no solutions. But, we may see
> in time if anyone studies it.

I'm sorry, I can't figure out what you are trying to describe here - so
I can't tell you if I think your gut instinct is right or wrong (or even
if I know enough maths to have an opinion on your gut instinct here).

Jerry Stuckle

Nov 1, 2016, 11:38:55 AM
I never said you claimed to be an expert on these sorts of mathematical
structures.

And there are facts, and there is Wikipedia. Sometimes the two agree.
But I do agree, advanced math articles are typically written by people
who know what they are talking about. Unlike many Wikipedia articles.

I've even seen Dubai claimed as the capital of the UAE (it's Abu Dhabi).
How could someone get THAT one wrong?

Jerry Stuckle

Nov 1, 2016, 11:45:24 AM
x = 1                  // Beginning assumption
x^2 = x                // Multiply both sides by x
x^2 - 1 = x - 1        // Subtract 1 from both sides
(x+1)*(x-1) = 1*(x-1)  // Factor
x + 1 = 1              // Cancel out duplicate terms
1 + 1 = 1              // Substitute for x
2 = 1                  // Voila

Rick C. Hodgin

Nov 1, 2016, 12:09:40 PM
I'm only repeating what the PhD mathematician said:

Solution S-1 = ?, results in S = 1/2:
https://www.youtube.com/watch?v=PCu_BNNI5x4&t=2m41s

"A lot of people think that the best answer is a half":
https://www.youtube.com/watch?v=PCu_BNNI5x4&t=3m45s

You're arguing against the mathematicians in the videos, David. You're
not alone. Many people in the comments make various arguments.

Good luck with your attempts to debunk them.

Mr Flibble

Nov 1, 2016, 3:40:56 PM
You obviously have a different idea as to what mathematics actually is
compared to the rest of us. Dividing by zero is undefined, period.

/Flibble


David Brown

Nov 1, 2016, 4:56:40 PM
On 01/11/16 16:38, Jerry Stuckle wrote:
> On 11/1/2016 5:06 AM, David Brown wrote:
>> On 31/10/16 23:23, Jerry Stuckle wrote:
>>> On 10/31/2016 1:57 PM, Gareth Owen wrote:
>>>> Mr Flibble <flibbleREM...@i42.co.uk> writes:
>>>>
>>>>> Those three "mathematical structures" are gibberish; yes the articles
>>>>> should be deleted. Again: division by zero is undefined in kosher
>>>>> mathematics.
>>>>
>>>> Wow. That's ... Stucklesque.
>>>>
>>>
>>> Wrong. I go by the facts - not what someone who claims to be an expert
>>> because he read an article in Wikipedia says.
>>>
>>> And yes, division by zero is valid in some mathematics.
>>>
>>
>> The Wikipedia articles were quoted as references for division by zero in
>> some mathematics. Wikipedia is not always correct or complete, but it
>> is often not bad - and its maths articles are usually accurate (though
>> sometimes almost incomprehensible, even to people familiar with the topic).
>>
>> But the point is that Wikipedia articles on mathematical structures like
>> the Riemann sphere are vastly better references than Mr. Flibble's
>> ignorance backed up by absolutely nothing.
>>
>> (For the record, I have not claimed to be an expert on these sorts of
>> mathematical structures. I learned about them - and forgot most of them
>> - before Wikipedia was invented.)
>>
>
> I never said you claimed to be an expert on these sorts of mathematical
> structures.
>

I didn't say you said anything like that - that's why I put in brackets
"for the record", in case /anyone/ had thought I was an expert on these
points. We have had enough arguments and disagreements in the past -
this was not intended to be one - I agree with you here!

> And there are facts, and there is Wikipedia. Sometimes the two agree.
> But I do agree, advanced math articles are typically written by people
> who know what they are talking about. Unlike many Wikipedia articles.
>

Yes, but in this case the articles are fine (though they are not
designed to be maths tutorials for people new to these topics).

I agree that sometimes Wikipedia articles can be wrong - and sometimes
surprisingly so even on simple factual matters. It all depends on who
wrote the article, how careful and serious they are, whether they have
some extra agendas or biases, and how many others read, edit and correct
the article. Usually the more technical articles are quite accurate,
while things like political topics are more likely to start out as
opinion pieces.


> I've even seen Dubai claimed as the capital of the UAE (it's Abu Dhabi).
> How could someone get THAT one wrong?
>

That sounds like a genuine mistake - and not an uncommon one. If it was
on an article about the UAE, politics or geography, then it should
definitely not have occurred. It is a little more excusable if it were
merely mentioned in passing in another article.

But there is no doubt that mistakes like this happen in Wikipedia, and
one must be aware of the possibility when using it as a reference or guide.

It is also interesting sometimes to look at the same article in
different languages. I looked up "The Battle of Largs" once. This was
a fairly sizeable battle in which the Scottish king finally beat the
Vikings and forced them out of Scotland (or at least, forced them out of
power in Scotland). The English language version was quite extensive,
and described it as a large and important battle with serious political
effects on the history of Scotland. The Norwegian version, on the other
hand, described it as a minor scuffle of no importance. The contrast
between the viewpoints of the winners and the losers was clear!


Vir Campestris

Nov 1, 2016, 4:58:41 PM
On 31/10/2016 10:09, David Brown wrote:
> I look to the future, and for every processor made that has efficient
> traps on division by zero, there will be thousands made that don't have
> such traps. There even will be thousands made that don't even have
> instructions for division.

I consider that unlikely. Once upon a time I used 8 bit processors that
didn't have a divide; these days I use 32 bit ones with a divide, and
they only cost a couple of bucks.

Andy

David Brown

Nov 1, 2016, 5:03:18 PM
To prove that everyone in the class gets the same result, we use general
induction. That is, we prove a starting case, and we prove the
induction step that the theorem holds for size "n" assuming that it
holds for all sizes less than "n".

First, the starting case. In a class of size 1, everyone gets the same
results.

For a class of size n, take one person P out of the class. Divide the
remaining pupils into two groups A and B, each roughly half of n in size
(the precise split does not matter). Then consider A + P to be a class.
It is smaller than n (since it is missing everyone in B), so by our
induction hypothesis, we can assume everyone in A + P gets the same
results. Similarly, everyone in B + P gets the same results. But since
P is in both groups, everyone in A, B and P - i.e., the whole class of
size n - has the same results.

By general induction, therefore, everyone in a class gets the same
results in any exam.

David Brown

Nov 1, 2016, 5:13:08 PM
There are a great many microcontrollers - including 32-bit ones - that
don't have hardware division instructions. Fast division is big (and
therefore expensive) to make in hardware, and relatively uncommon in
code in microcontrollers. Often when you do have division, it is by a
constant and can be turned into a multiplication (or even a shift). So
omitting hardware division is a reasonable tradeoff for the smallest and
cheapest devices. In terms of the number of devices produced, these are
the sizeable majority.

It is changing, and a larger proportion of devices made and sold now
have hardware division than just a few years ago, but I still expect
that for every "big" cpu (like an x86) made with traps on divide by
zero, there are vast numbers of small microcontrollers with no division
support. Maybe "thousands" is an exaggeration - but certainly "lots".

IIRC, hardware division was dropped from some members of the 68k family
because someone noticed that a software division loop was faster than
the hardware instruction!


Paavo Helde

Nov 1, 2016, 5:57:16 PM
On 1.11.2016 21:40, Mr Flibble wrote:
>
> You obviously have a different idea as to what mathematics actually is
> compared to the rest of us. Dividing by zero is undefined, period.

Count me out of the "rest of us", please. Studied theoretical physics
too long. Thanks!

Paavo


Rick C. Hodgin

Nov 1, 2016, 6:17:18 PM
On Tuesday, November 1, 2016 at 5:13:08 PM UTC-4, David Brown wrote:
> On 01/11/16 21:58, Vir Campestris wrote:
> > On 31/10/2016 10:09, David Brown wrote:
> >> I look to the future, and for every processor made that has efficient
> >> traps on division by zero, there will be thousands made that don't have
> >> such traps. There even will be thousands made that don't even have
> >> instructions for division.
> >
> > I consider that unlikely. Once upon a time I used 8 bit processors that
> > didn't have a divide; these days I use 32 bit ones with a divide, and
> > they only cost a couple of bucks.
> >
>
> There are a great many microcontrollers - including 32-bit ones - that
> don't have hardware division instructions. Fast division is big (and
> therefore expensive) to make in hardware, and relatively uncommon in
> code in microcontrollers. Often when you do have division, it is by a
> constant and can be turned into a multiplication (or even a shift). So
> omitting hardware division is a reasonable tradeoff for the smallest and
> cheapest devices. In terms of the number of devices produced, these are
> the sizeable majority.

I disagree with that goal. The software written for these devices is
typically not very complex or large (due to the constraints of the
devices). On the other hand, Android-like devices and larger have many
Gigabytes of memory and storage, and run much larger much more powerful
applications.

It should be expected for any language designer to target a growing set
of internal abilities exposed to it through the ISA, not less, and not
a decreasing set.

Hardware is very cheap these days. The fact that it's still cheaper in
the smaller form factors is of very little overall consequence because
they have significantly lesser demands of a language, even enough that
many of them could be handled with simple assemblers with supporting
function call libraries provided for by the manufacturers just as
easily as relying upon something as complex as a C standard library.
In fact, I'd wager that in such applications the C standard library
would be a huge hindrance due to the special needs of limited hardware
for optimization.

> It is changing, and a larger proportion of devices made and sold now
> have hardware division than just a few years ago, but I still expect
> that for every "big" cpu (like an x86) made with traps on divide by
> zero, there are vast numbers of small microcontrollers with no division
> support. Maybe "thousands" is an exaggeration - but certainly "lots".
>
> IIRC, hardware division was dropped from some members of the 68k family
> because someone noticed that a software division loop was faster than
> the hardware instruction!

I see the future as an increasing set of abilities exposed to the
software developer, up to a point, but always including the base and
fundamental operations in integer and fp.

David Brown

Nov 2, 2016, 4:05:42 AM
Don't worry, I for one know that Mr. Flibble is using the "royal we".


David Brown

Nov 2, 2016, 4:56:03 AM
On 01/11/16 23:16, Rick C. Hodgin wrote:
> On Tuesday, November 1, 2016 at 5:13:08 PM UTC-4, David Brown wrote:
>> On 01/11/16 21:58, Vir Campestris wrote:
>>> On 31/10/2016 10:09, David Brown wrote:
>>>> I look to the future, and for every processor made that has efficient
>>>> traps on division by zero, there will be thousands made that don't have
>>>> such traps. There even will be thousands made that don't even have
>>>> instructions for division.
>>>
>>> I consider that unlikely. Once upon a time I used 8 bit processors that
>>> didn't have a divide; these days I use 32 bit ones with a divide, and
>>> they only cost a couple of bucks.
>>>
>>
>> There are a great many microcontrollers - including 32-bit ones - that
>> don't have hardware division instructions. Fast division is big (and
>> therefore expensive) to make in hardware, and relatively uncommon in
>> code in microcontrollers. Often when you do have division, it is by a
>> constant and can be turned into a multiplication (or even a shift). So
>> omitting hardware division is a reasonable tradeoff for the smallest and
>> cheapest devices. In terms of the number of devices produced, these are
>> the sizeable majority.
>
> I disagree with that goal.

What "goal" do you mean? I haven't mentioned any goals here, so I don't
know what you are referring to. The nearest I can think of is the goal
of making microcontrollers as small, cheap and low-power as practically
possible without compromising required functionality. And for most
small microcontrollers, that includes not adding a hardware divider.

Or do you mean that your goal for your language does not include support
for smaller devices (or even many big ARM application processors), since
they don't have hardware division instructions? And many processors
that /have/ hardware division instructions don't have traps on divide by
zero (because it is a mess to implement in hardware) - are they also to
be excluded?


> The software written for these devices is
> typically not very complex or large (due to the constraints of the
> devices). On the other hand, Android-like devices and larger have many
> Gigabytes of memory and storage, and run much larger much more powerful
> applications.

Software on small microcontrollers can be surprisingly complex. As for
"large", that depends on how you measure it - the binary image is
usually far smaller than on a PC application, but the lines of code
written for the program (excluding libraries and third-party components)
is a lot more comparable (to the extent that there is a "typical" PC
application and "typical" microcontroller application).

And yes, Android devices have bigger processors - they are not
microcontrollers, but processors. And many of them lack hardware
division instructions.

>
> It should be expected for any language designer to target a growing set
> of internal abilities exposed to it through the ISA, not less, and not
> a decreasing set.
>

No, it should be expected for a language designer to make their language
run well on any appropriate target, and even better on the more capable
targets.

So a language designed today with an aim for speed would have native
support for vector operations so that programmers can easily take
advantage of SIMD units on many larger processors. If its typical
targets included DSPs and control systems, then it might have native
support for saturating arithmetic which can be implemented very
efficiently in most DSPs and microcontrollers such as Cortex M devices.
But it would be a bad move to make saturated arithmetic the default if
the language was also to be used on x86 processors.

It is fine to take advantage of the newer features of the latest
processors - but you should think long and hard before making your
language /require/ processor features that are not universal.

And remember, the processor world spreads out in many directions - it is
not just "bigger and faster". Nor is the processor world consolidating,
and it is certainly not converging on the feature set of the x86
devices you are familiar with. New processor
architectures are developed all the time, and they don't follow the
legacy ideas of x86. Traps are a fine example of something you
definitely don't want to require, because they are often not supported.

It is your decision what features you will want and what targets you
expect to use, of course.


> Hardware is very cheap these days. The fact that it's still cheaper in
> the smaller form factors is of very little overall consequence because
> they have significantly lesser demands of a language, even enough that
> many of them could be handled with simple assemblers with supporting
> function call libraries provided for by the manufacturers just as
> easily as relying upon something as complex as a C standard library.
> In fact, I'd wager that in such applications the C standard library
> would be a huge hindrance due to the special needs of limited hardware
> for optimization.

I'm sorry, I can't see what you are getting at here. Maybe I haven't
had enough coffee yet.

>
>> It is changing, and a larger proportion of devices made and sold now
>> have hardware division than just a few years ago, but I still expect
>> that for every "big" cpu (like an x86) made with traps on divide by
>> zero, there are vast numbers of small microcontrollers with no division
>> support. Maybe "thousands" is an exaggeration - but certainly "lots".
>>
>> IIRC, hardware division was dropped from some members of the 68k family
>> because someone noticed that a software division loop was faster than
>> the hardware instruction!
>
> I see the future as an increasing set of abilities exposed to the
> software developer, up to a point, but always including the base and
> fundamental operations in integer and fp.
>

Well, that is definitely /not/ the future of processors and processing
units - just a future of one part of it. For higher performance, wider
is the future - not bigger, faster or more complex. Vector and SIMD
operations are getting more common - these are often simpler than the
kind of "do everything" instructions on an x86. For example, SIMD
floating point operations will often not be IEEE compliant, and they
certainly won't trap - they expect to give you sensible results for
sensible inputs, with little regard for what comes out when you put in
bad inputs. There are many processor architectures where speed comes
from tightly-coupled parallel processing - inter-process and
inter-processor communication is key. On the other hand, floating point
performance is relatively low priority (or missing altogether) in many
architectures.



Rick C. Hodgin

Nov 2, 2016, 8:21:48 AM
I think we're basically saying the same thing, just in different ways. You
place a greater emphasis on certain parts, or word things a particular way,
but I think we generally agree.

And, enjoy your coffee. Personally I never touch the stuff. I like things
that (generally speaking) taste like they smell. :-)

woodb...@gmail.com

Nov 2, 2016, 2:49:59 PM
That would be news to me.

Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net

Daniel

Nov 2, 2016, 5:00:52 PM
On Tuesday, November 1, 2016 at 3:40:56 PM UTC-4, Mr Flibble wrote:
>
> You obviously have a different idea as to what mathematics actually is
> compared to the rest of us. Dividing by zero is undefined, period.
>
Flibblesticks. I'm assuming your university math background ended before
the real analysis or abstract algebra courses.

Daniel

Jerry Stuckle

Nov 2, 2016, 10:20:55 PM
Actually, I think his math background ended with 2+2=4. And he's not
sure about that.

Rick C. Hodgin

Dec 12, 2016, 9:59:32 PM
I saw a video recently which conveys this concept graphically. You say
you don't watch any of the links I post ... so ... it can be for other
people to examine:

https://www.youtube.com/watch?v=sD0NjbwqlYw

And specifically at this part:

https://www.youtube.com/watch?v=sD0NjbwqlYw&t=18m49s