Almost any calculation involves more than one operation, so the rounding problem turns the entire chain of calculations into meaningless NaRs. This gives the programmer a headache: the simulated system is no longer in a measurable state, and its data is corrupted by NaRs.
I propose a different behavior for indeterminate forms (a sketch in code follows the list):
- If a function does not accept arguments in the range `(-oo; a)` and `x` falls in this range, then `x` is taken to be `a`
- If a function does not accept arguments in the range `(b; +oo)` and `x` falls in this range, then `x` is taken to be `b`
- If a function does not accept arguments in the range `(c; d)` and `x` falls in this range, then `x` is taken to be `c` if `x < (c + d) / 2`, otherwise `d`
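A minimal sketch of this clamping policy in Python (the helper names `clamp_low`, `clamp_high`, and `clamp_gap` are mine, used only for illustration):

```python
import math

def clamp_low(x, a):
    """Domain excludes (-oo; a): values below a are pulled up to a."""
    return a if x < a else x

def clamp_high(x, b):
    """Domain excludes (b; +oo): values above b are pulled down to b."""
    return b if x > b else x

def clamp_gap(x, c, d):
    """Domain excludes (c; d): values inside the gap snap to the nearer edge."""
    if c < x < d:
        return c if x < (c + d) / 2 else d
    return x

# With this policy, sqrt never sees a slightly negative argument:
def safe_sqrt(x):
    return math.sqrt(clamp_low(x, 0.0))

print(safe_sqrt(-1e-12))  # 0.0 instead of a NaR
```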
As a result, calculations cannot create a NaR. NaR becomes a kind of null: if at least one argument of a function is null, the result is null.
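To make the null-propagation rule concrete, here is a toy model in Python that uses `None` as the null value; this is only an illustration of the rule, not a proposed representation:

```python
def lift(f):
    """Wrap a function so that a null (None) in any argument yields null."""
    def wrapped(*args):
        if any(a is None for a in args):
            return None
        return f(*args)
    return wrapped

add = lift(lambda a, b: a + b)
print(add(1.0, 2.0))   # 3.0
print(add(1.0, None))  # None: null propagates, nothing else is corrupted
```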
Division by zero, likewise, yields the maximum value. And this is reasonable: how exactly was that zero obtained? Through a chain of rounded calculations that produced something near zero. It does not matter which particular number from that neighborhood of zero reached the division operation; what matters is that the set of such numbers stays within certain expectations. Landing in a small range of indeterminate forms is an error of rounded calculations; landing in a large range of indeterminate forms is something the programmer will not allow. NaR is not needed.
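A sketch of that division rule, with `sys.float_info.max` standing in for the format's maximum value (sign handling and the `0 / 0` case are left open here, since the text does not fix them):

```python
import sys

MAX_VALUE = sys.float_info.max  # stand-in for the maximum representable value

def safe_div(a, b):
    """Division by zero yields the maximum value instead of a NaR."""
    if b == 0.0:
        return MAX_VALUE
    return a / b

print(safe_div(1.0, 0.0))  # 1.7976931348623157e+308
```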