Incidentally, as we discuss the term 'decimal' versus 'decimal point', there is a linguistic confusion that I myself have participated in. The decimal number system can simply be our presumed mod-ten whole numbers, if you wish, as a first number system, whereas the term is too easily confused with numbers that contain a 'fractional' part, or decimal point. Clearly it will not be proper under this interpretation to claim the terminology of 'real' value either, since that obviously carries all the baggage. The essence of this interpretation is a rejection of the real value as it is taught, and yet the usage of the value nearly as in scientific notation is agreeable. However, in scientific notation the ability to apply negative or positive direction to unity (a signed system) is not actually required to perform the duties we see here. As well, by incorporating polysign the numeric sign is generalized.
Clearly the structure of values such as
- 1.234 , + 34.56 , - 0.00001
is undergoing scrutiny here. Stripping away signs and decimal points, we see natural values: one of them with some leading zeros. Large values can be expressed as well, such as
+ 1000000000000.0
and so the coding of the decimal point as a positional thing, counted from the right of the string of digits, is direct and requires nearly no translation as a natural value. Likewise sign, once generalized, is yet another constrained natural value. To even require this language of 'unsigned' to describe sign is an obvious ambiguity, and yet this is just how standard mathematics proceeds, with numerous papers discussing R+ and zero or some such, as if R were primitive. R is not primitive. R, the reals, are two-signed numbers. Their continuous nature as proven through epsilon/delta can hold generally across the whole lot, and this sort of universality, this sort of simplification, ought to be appealing; yet we know mathematicians pride themselves on the levels of complexity that they can achieve.
The full format of number is actually
s x e
where s is sign, x is a natural value, and e is the unit position (decimal point). Because this interpretation is so ordinary, the latter part can be abbreviated to
s x
where x is simply a magnitude that incorporates the decimal point. Certainly, with the sign divorced from the x portion, we will not be stating that x is in R+ or some such contorted language. In some ways it has always been my burden to address that x portion, though the productive work was done in the s portion. It seems I have arrived at that interpretation here, and it does include significant detail.
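As a sketch of the s x e format above (the function name here is my own hypothetical choice, not established notation), a signed decimal string can be split into a sign, a natural value, and a unit position counted from the right of the digit string:

```python
# A minimal sketch of the s x e format: s is a sign, x is a natural
# value (the bare digit string), and e is the unit position (decimal
# point) counted from the right.  parse_sxe is a hypothetical name.

def parse_sxe(text):
    """Split a signed decimal string into (sign, natural value, point position)."""
    s, rest = (text[0], text[1:]) if text[0] in "+-" else ("+", text)
    if "." in rest:
        whole, frac = rest.split(".")
        return s, int(whole + frac), len(frac)  # e digits lie right of the point
    return s, int(rest), 0                      # plain naturals have e = 0

print(parse_sxe("-1.234"))            # ('-', 1234, 3)
print(parse_sxe("+1000000000000.0"))  # ('+', 10000000000000, 1)
```

With e stripped away (e = 0) the format collapses to s x, a signed natural, exactly the abbreviation above.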
The idea that operators and values are distinct concepts is somehow not discussible by mathematicians. I have attempted to engage them on this topic many times and been dodged regularly. Such a simple concept somehow gives offense within the standard curriculum. Particularly the division operation, which is arguably not even fundamental, since it is the reversal of the more fundamental product operation, is embedded in the rational value.
The bridge that is the operator as in
Value1 Operation1 Value2 = Value3
is not seriously adopted by the mathematician. It goes avoided in abstract algebra, with its polynomials that refuse to perform such evaluation.
This becomes an undiscussible topic. Believe me, I have tried.
I see the sum as more fundamental than the product, while advanced mathematics attempts to blur the two, a la group and ring theory. Meanwhile the language of a 'successor' hides addition. Here it seems they abide by a principle of avoiding operators in the definition of their number system, whereas within the rational value they don't mind mixing it up. There is such strong physical correspondence in the sum as fundamental, as in the consideration of the quantity of objects about you in sum, or superposition in space. Of course the integral is a sum. Here too we land in criticism of the sum defined as a binary operator. Clearly the n-ary form of the sum is superior. This then as well gets us the singleton value as the first form
a
a+b
a+b+c
a+b+c+d
...
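The n-ary sum above can be sketched as a single operation over any count of terms, with the singleton as its first, trivial form:

```python
# A sketch of the n-ary sum: one operation over any number of terms,
# rather than a binary operator chained repeatedly.  The singleton
# (a single term) is the first form in the progression above.

def nsum(*terms):
    total = 0
    for t in terms:
        total += t
    return total

print(nsum(5))           # singleton form: 5
print(nsum(1, 2, 3, 4))  # a+b+c+d: 10
```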
The usage of a nonzero sign (sign two) for the sum, as is done above, is problematic notation anywhere other than in P2. Within polysign this is remedied with the implementation of a zero sign '@', which maintains its position as an identity sign, neutral in product. Here again, with a nose for operator theory, we have to ask whether convention and notation are ambiguous. It is not pretty rewriting convention, yet it does seem that this one works out, and again we have that physical concept of superposition as well.
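One possible reading of that remedy, sketched below: each component of a Pn value carries its own sign symbol, with '@' reserved for the identity position, so that '+' never has to double as the sum operator; superposition is expressed by juxtaposition alone. The helper name and the sign symbols beyond '-' and '+' are my own placeholder choices, not established notation.

```python
# A speculative sketch of rendering a Pn value with '@' as the zero
# (identity) sign.  Index 0 is '@'; the remaining symbols are my own
# placeholders for signs one, two, and beyond.
SIGNS = "@-+*#"

def render(components):
    """components[k] is the magnitude carried by sign k; zeros are omitted."""
    return " ".join(f"{SIGNS[k]}{m}" for k, m in enumerate(components) if m)

print(render([5, 0, 0]))  # @5     : an identity-signed value in P3
print(render([0, 1, 2]))  # -1 +2  : superposition with no '+' operator needed
```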
All of these details are only loosely woven, but they do fit together. There is only a loose need to confront the rational value, yet it has occurred through the lens of polysign number theory. This is because upon generalizing sign we are forced to bump into operator theory. Sum and product as fundamental rings true from polysign, yet their inversions do not ring so true. The field requirements of mathematics are wrong. They are not general. Abstract algebra seems to have the right beginning, yet they put their foot in their mouth just as soon as they arrive at the polynomial form. Polysign naturally develops the form they were after.
Our digital system of base ten (base 10?) goes ignored and presumed.
Meanwhile every radix system is base 10.
This is how rapidly ambiguity can creep in, and it certainly goes avoided.
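The claim that every radix system is base 10 can be checked directly: rendered in its own digits, any base b writes itself as "10". A small sketch (digits limited to 0..9 for brevity, so b <= 10 here):

```python
# Every radix system writes its own base as "10": convert a natural
# number into its digit string for an arbitrary base b.

def to_digits(value, b):
    """Render a natural value in base b as a digit string (b <= 10)."""
    if value == 0:
        return "0"
    digits = ""
    while value:
        digits = str(value % b) + digits
        value //= b
    return digits

for b in (2, 3, 8, 10):
    print(b, "in base", b, "is", to_digits(b, b))  # always "10"
```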
To realize that sign is modulo behaved is not much of a step, yet humans have avoided that step until the advent of polysign.
Why is this? The extension to modulo three sign works perfectly. Modulo n sign works perfectly.
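The modulo behavior can be sketched concretely. Index the n signs of Pn as 0..n-1 (the labeling is my own illustration), with index 0 as the identity sign; under product the sign indices simply add modulo n:

```python
# A sketch of sign as modulo arithmetic: the n signs of Pn are indexed
# 0..n-1, with index 0 the identity sign, and the product of two signs
# adds their indices modulo n.

def sign_product(a, b, n):
    """Product of two sign indices in Pn: addition modulo n."""
    return (a + b) % n

# P2: take '-' as index 1 and '+' (the identity) as index 0.
print(sign_product(1, 1, 2))  # 0 : minus times minus is plus
# P3: the same rule extends perfectly, e.g. sign 1 times sign 2 is sign 0.
print(sign_product(1, 2, 3))  # 0
# The identity sign (index 0) is neutral in product for every n.
print(all(sign_product(0, k, n) == k for n in range(2, 9) for k in range(n)))  # True
```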
As I research the history of mathematics it's almost as if the two-signed reals were not taken seriously.
It's more like they crept their way in.
"The English mathematician, John Wallis (1616 - 1703) is credited with giving some meaning to negative numbers by inventing the number line, and in the early 18th century a controversy ensued between Leibniz, Johan Bernoulli, Euler and d'Alembert about whether log(−x) was the same as Log(x)." -
https://nrich.maths.org/5961
"Negative numbers appear for the first time in history in the Nine Chapters on the Mathematical Art (Jiu zhang suan-shu), which in its present form dates from the period of the Han Dynasty (202 BC – AD 220), but may well contain much older material." -
https://en.wikipedia.org/wiki/Negative_number
Until you arrive at minus one times minus one is plus one, the modulo behavior of sign is not established. These early forms are not that. Even the real value takes its prominence on claims of the continuum, which need not really include the negative half. It's pretty clear that mathematics is abiding accumulation. The rational value as a subset of a continuum is a perfect instance. Rational values as taught in modernity form a re-radix system, and their more natural counterparts would be those originating radix systems; not the base 1234567890 system. The ordinary decimal usage works, but this does not mean that it is fundamental. Certainly these values are computable and can abide by epsilon/delta.