To ojdkbuild,
Java's floating point errors arise from a representation issue: the IEEE 754 binary floating point format is used for Java's decimal floating point types, namely float and double.
Because floating point errors in Java occur at the decimal unit in the last place and the hexadecimal unit in the last place, and because such spurious results break the contiguous range property of a base 10 or base 16 number line, by an unknown magnitude above or below the accurate value, it is hard to see who could meaningfully need or rely on such an inconsistent and chaotic floating point error situation. This is an error, and a bug, that should be corrected by a Java language vendor.
Java JDK/JRE runtimes generate denormal and pronormal values from floating point arithmetic and from java.lang.StrictMath calls, given Java source code like the following, both in the default base 10 mode and when hexadecimal notation for base 16 numbers is used. These are all error phenomena which can, and ought to, be fixed:
// The Java Language. Arithmetic only, no comparisons.
import static java.lang.System.*;

public class Start
{
    public static void main(String... args)
    {
        out.println("Program has started...");
        out.println();

        float a = 0.1F;
        float b = 0.1F;
        float c = a*b;
        out.println(c);

        double d = 0.1D;
        double e = 0.1D;
        double f = d*e;
        out.println();
        out.println(f);

        out.println();
        out.println("Program has Finished.");
    }
}
Program has started...
0.010000001
0.010000000000000002
[expected: 0.01]
Program has Finished.
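Where those trailing digits come from can be shown with the standard java.math.BigDecimal(double) constructor, which exposes the exact binary value stored behind the literal 0.1 (a small sketch for illustration, separate from the program above):

```java
import java.math.BigDecimal;

public class Exact
{
    public static void main(String... args)
    {
        // The BigDecimal(double) constructor preserves the exact binary
        // value that the double literal 0.1 is actually stored as,
        // rather than the rounded form that println displays.
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

The literal 0.1 is therefore already inexact before any arithmetic is performed; the multiplication only makes the error visible.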
The standard that Java turns to, IEEE 754, says nothing specific about the base 10 or base 16 digit degradation that occurs at the right hand end of floating point variable data. Historically, the view that floating point is an approximation for range accuracy and nothing more is not the only view, and it is not a requirement; other, mathematically accurate implementations exist.
The primary understanding of numbers and arithmetic is that binary is for computers and denary is for human beings. An approach that mixes the two without maintaining separation between these concerns only leads to logic confusion and errors. In the OpenJDK, float and double are the main offenders, although a similar problem occurs with java.lang.StrictMath method calls. Most importantly this happens in base 10, but base 16 has the same problem. All examples of this are logic errors that need to be repaired, either by default or by option, simply because denary accuracy is required of a compiled and running Java program, whether at its conclusion, at some mid point, or from the start. The hardware the writer of this letter has in mind is the desktop PC, running any ubiquitous operating system, or configured as a database or internet server.
Workarounds exist in Java, namely BigInteger, BigDecimal, and big-math. They introduce an entire extra tier of software that would not be needed were it not covering for floating point errors. BigInteger and BigDecimal are slow, occupy more RAM, and do not allow arithmetic operator syntax in source code, to say nothing of the absence of an included accurate calculator class. These are things that developers and their programs need.
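For comparison, the BigDecimal workaround for the multiplication above looks like this; note the verbose method call where an arithmetic operator would otherwise be written (a sketch using only java.math.BigDecimal from the standard library):

```java
import java.math.BigDecimal;

public class Workaround
{
    public static void main(String... args)
    {
        // The String constructor avoids inheriting the binary error
        // already present in the double literal 0.1.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.1");
        BigDecimal c = a.multiply(b);   // no '*' operator is available
        System.out.println(c);          // prints 0.01
    }
}
```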
Floating point correction does not mean that only the default behaviour must change; it can be done compatibly. There could be a patch program, separate from the main Java download. Without changing how Java compiles, there could be a runtime switch that can be wrapped around previous mode Java code. There could be an overarching static class whose methods apply to code spaces, or to all code inside a main method, a Thread start method, Runnables, Futures, or similar. Something comparable could be possible using annotations for the runtime, if that can remain efficient enough. So long as the contents of a float and a double can read and write over one another, both modes of operation could be used at the same time, provided things were set up consistently and threads of execution were synchronised and accessible enough. All of that presumes that the bolder step of a total default change is not taken.
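To make the annotation idea concrete, here is a minimal sketch. The @DenaryAccurate name and its semantics are entirely hypothetical, invented here for illustration; no Java runtime honours such a marker today, and the program still prints the erroneous value:

```java
import java.lang.annotation.*;

// Hypothetical marker only: a runtime honouring the proposal could
// round arithmetic in annotated scopes to the denary-accurate range.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface DenaryAccurate { }

public class Proposal
{
    @DenaryAccurate   // hypothetical: has no effect in any current JVM
    public static void main(String... args)
    {
        // Today this still prints 0.010000000000000002
        System.out.println(0.1D * 0.1D);
    }
}
```

Because the annotation is retained at runtime, existing code would compile and run unchanged, and only an opted-in runtime would alter the arithmetic.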
The IEEE should include or state something new in standard 754 to encourage software language vendors to implement floating point arithmetic more completely, but if it does not, vendors are left to act on their own. Oracle and the Java Community Process have not taken up repeated bug reports, and have chosen not to act despite multiple discussion requests about the needs involved. While it would be most appropriate for the upstream vendors to implement this change, in the face of their ongoing refusal the best remaining option is to inquire of other vendors, which is the purpose of this message.
The following is a version of the IEEE 754 floating point binary number formula:
n2 = (-1)^s * m2 * 2^(e2 - (L - 1))
It is defined by the sign s of the number (positive or negative); the mantissa of the register in binary, m2, which holds the digits of the number from denary; and the exponent of the register in binary, e2, which gives the position of the radix point; for a 32 bit, 64 bit or, rarely, a 128 bit hardware register of width L.
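The three fields can be inspected directly in Java with the standard Float.floatToIntBits method; a small sketch, using the standard IEEE 754 binary32 layout (1 sign bit, 8 exponent bits with bias 127, 23 stored mantissa bits):

```java
public class Decode
{
    public static void main(String... args)
    {
        int bits = Float.floatToIntBits(0.1F);  // raw binary32 bits
        int s = bits >>> 31;                    // sign bit
        int e = (bits >>> 23) & 0xFF;           // biased exponent (bias 127)
        int m = bits & 0x7FFFFF;                // 23 stored mantissa bits
        // prints: 0 123 10011001100110011001101
        System.out.println(s + " " + e + " " + Integer.toBinaryString(m));
    }
}
```

The repeating 1001 pattern in the mantissa is the binary expansion of 0.1, which never terminates; truncating it to 23 bits is the origin of the error discussed above.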
The nearest correction approach is to augment the present circumstances with available hardware bit registers and more implementation logic: that is, introducing two range limits each for float and double, a value range limit and a right hand side consideration range limit for decimals and their relationship to binary, leading to one more limit per type, three in each. This has been possible since the introduction of SSE CPU registers and their descendants. SSE or an equivalent exists in the vast majority of relevant, ubiquitous, compatible desktop PC CPU hardware today (2022). Those registers can be leveraged to solve these value error problems. In this way, the accuracy of all possible decimal and hexadecimal values can be maintained, and degradation of the decimal digits can be kept outside the decimal value range limit, with no errors inside that range at the decimal end. Just as the floating point equation has an asymptote that can be manipulated, the decimal place floating point range asymptotes towards truncated range accuracy, which can be upheld within a range by SSE registers and enhanced arithmetic and function logic.
Consider what GNU C++ has done with SSE and floating point decimal accuracy:
//The C++ Language. Arithmetic only, no comparisons.
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    cout << "Program has started..." << endl;

    // setprecision is a stream manipulator: it must be streamed into
    // cout, and its argument is the number of significant digits.
    cout << setprecision(9);
    float a = 0.1F;
    float b = 0.1F;
    float c = a*b;
    cout << endl << c << endl;

    cout << setprecision(18);
    double d = 0.1;
    double e = 0.1;
    double f = d*e;
    cout << endl << f << endl;

    cout << endl;
    cout << "Program has Finished." << endl;
    return 0;
}
Without using setprecision to scale the view, the default float and double display range is visibly within that of Java. That visible range amounts to a stronger range limit, leaving the two situations very nearly equivalent.
The most compatible approach is to use SSE and additional bit registers to perform further correct calculation and to offset the range of binary to decimal degradation, the way GNU C++ has achieved.
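Java can imitate that default C++ view with a format string; a minimal sketch, assuming only String.format and java.util.Locale from the standard library (%.6g matches iostream's default of six significant digits):

```java
import java.util.Locale;

public class ViewRange
{
    public static void main(String... args)
    {
        double f = 0.1D * 0.1D;  // stored as 0.010000000000000002
        // Six significant digits, as iostream prints by default;
        // Locale.ROOT fixes '.' as the decimal separator.
        System.out.println(String.format(Locale.ROOT, "%.6g", f));
        // prints 0.0100000
    }
}
```

This only narrows the displayed view, of course; the stored value is unchanged, which is the letter's point about the two situations being equivalent.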
Is the ojdkbuild Java team able to update its JDK and JRE offerings for all platforms, to repair these floating point logic errors, either by default or as a capability?
I would be thrilled to hear a positive response!
Yours Sincerely,
Sergio Minervini.
S.M.