On 7/24/2021 3:36 AM, Thomas Koenig wrote:
> BGB <
cr8...@gmail.com> schrieb:
>> On 7/23/2021 12:45 PM, Quadibloc wrote:
>>> On Friday, July 23, 2021 at 9:30:20 AM UTC-6, MitchAlsup wrote:
>>>
>>>> No, doubling the widths just hides the problem and makes the actual
>>>> problem harder to find. You can take the point of view that that is
>>>> enough, but it is not in reality.
>>>
>>> It's enough to get one Moon lander on the surface. But solving the
>>> real problem is still better - but it's nice if there's a way to meet
>>> deadlines and the like.
>>>
>>
>> I would expect that for a physical system, the inherent "noisiness" of
>> physical reality would matter a lot more for the outcome than any
>> bit-exact arithmetic would help make the landing.
>>
>> Some expertly crafted and computed flight plan could be thrown off by
>> much larger factors, like which direction the wind was blowing during
>> liftoff, solar wind or flares blowing the spacecraft slightly off course,
>> collisions with space dust, ...
>
> One major issue was gravitational anomalies of the Moon, which are
> big enough to have a significant impact on the orbit of spacecraft.
>
Makes sense, but yeah, this factors further into the point that
unpredictable physical variables are a lot more likely to determine
mission success or failure than whether or not floating-point operations
have exact rounding.
In cases where it matters, it is usually more a case of wanting multiple
implementations to give results which are consistent, rather than
necessarily maximizing accuracy.
One could have a floating point definition which specifies the use of
truncation, and potentially how the results are truncated internally
during operations, rather than one which assumes an infinitely precise
result.
For example, what if we defined double-precision FADD/FSUB relative to
the behavior of a 64-bit twos complement integer mantissa?...
Or, FMUL relative to a triangular multiplier which multiplies two 56-bit
inputs and produces a 56-bit (truncated) output?...
Not necessarily as a "universal standard", but rather as a target which
can:
Be implemented bit-exact in hardware relatively cost-effectively;
Be mostly emulated in software using 64-bit integer operations (*1).
*1: Well, sorta; FMUL would need special treatment to be bit-exact. One
would need to make some extra effort to approximate the behavior of a
truncated multiplier (naively using a 64*64->128 bit widening multiplier
and discarding the low bits would not produce bit-exact results).
Bit-exact would require getting the same results for the bits "hanging
off the bottom", with such a multiplier probably being defined relative
to the behavior of a collection of 16*16->32 sub-multipliers or similar.
There are other options, such as approximating a "smooth bottom"
truncated result, but this is not likely to be cost-effective relative
to the more "jagged" version.
...
Whether this is worthwhile is also debatable, as an FPU built to
support, say, an FP96 format would not produce bit-identical results
with one built to only support Binary64/Double, without special-case
logic to artificially truncate or discard bits in the intermediate
results, ...
Granted, an "FP standard" whose definitions depend on the size of the
largest output format supported by the FPU in question seems "kinda
useless" on this front.
The "cheaper" alternative is to not make any requirements here, and give
rounding behavior in terms of a probability.
As can be noted though, integer multiply via FP generally survives such
a multiplier because inputs which produce an in-range result will have
zeroes in the low-order parts of the mantissa, and thus all the
sub-multiplies which would have "fallen off the bottom" would have
contained multiplies against zero (and thus not had a visible effect on
the result).
>> Presumably these missions have some sort of closed-loop control, such as
>> course-correction, ability to detect the distance from the target, ...
>
> Having a spacecraft travel around in space is an engineering problem.
> You always have error bars on every measurement and assumption -
> mass of your spacecraft, duration and efficiency of a burn, actual
> mass expended on a burn, orientation, ...
>
> Any of the above uncertainties is _much_ higher than the floating
> point precision of a 32-bit real, let alone of a 64-bit real.
>
> Of course, if you subtract two numbers of almost equal magnitude,
> that could be much different...
>
> This is why space missions have course corrections somewhere in
> the middle of their trajectory. After sufficient time has elapsed
> to gather data on the velocity and position of the spacecraft, but
> not yet enough that a course correction would be too expensive,
> you alter the spacecraft's velocity by a relatively small amount
> of delta v.
>
Makes sense.
>> Granted, this is assuming it is using sufficient numerical precision.
>> Trying to do long range navigation using binary32 or similar is probably
>> just asking for it to crash into the surface (or miss the moon entirely,
>> say because jitter in the math miscalculated the location of the moon by
>> like 150km or something, ...).
>
> A few years ago (pre-Corona) I visited the Space Center in Houston.
> A tour guide for the Saturn-V on exhibition there told the group
> that he had worked on the courses for Apollo, and that they had
> actually mostly used analytical methods. I was a bit surprised
> at that, I would have thought numerical solution of ODEs would
> have been employed more.
>
IME:
Most of what one might want to calculate in practice can be done using
algebra.
If it can't be done directly, one can subdivide it into smaller
timesteps and do it incrementally. At small enough timesteps, pretty
much everything becomes linear.
Similarly, a lot of stuff one could do with ODEs could instead be done
using B-splines or similar, ...
Also, divide is one of those operations one generally wants to avoid
when possible, not just for speed reasons, but because it is more prone
to adding instability: if you divide by a number which happens to
approach 0, the numbers involved can get huge. Calculations which give
wonky results, or which spit out Inf or NaN or raise a fault when given
certain inputs, are not desirable.
It is usually better, when possible, to find an alternative which
avoids the use of division, or at least to eliminate cases where
divide-by-zero exists as a possibility.
Though, this does seem to be one of those points of divergence between
doing math on a computer and traditional mathematics, which likes to
throw in divide operators all over the place.