Sep 19, 2023, 10:38:04 AM

We are fond of our set theoretic mathematical number systems.

These are formal mathematics. Closure is simply the observation that, for instance, the sum of two values in a set is again a value in that set. Therefore the sum obeys closure, and while it can be observed readily, it is not an axiom but rather a property. We cannot enforce it, but we can measure it. Indeed it doesn't take long to break closure, for the inverse operation to summation, known as subtraction, gets us there quickly.

So then, does the breakage of closure under subtraction of, say, the naturals generate the negative values? They form a break with intuition. Upon allowing a free-standing -5, for instance, we meet the puzzle of operator versus value in the interpretation of this '-' sign: we are clearly demanding that it be a pure value now, in light of the attempt to maintain closure, though this is being done via the observed breakage of closure on the original natural value.

This is the first introduction to sign, and as one who has managed to generalize sign I ought to take a deep interest here. Our notation has managed to jam together operators and values with no regard, and seemingly no conflict. Of course the interpretation that regards the minus sign as an operator still living within that value seems possible as a result of our habituated training, yet to claim that an operator is a value is a direct conflict; an exact contradiction of a distinction that ought to abide. If you were to confuse your drinking with your beverage of choice; well, possibly it can happen in the human mind. Could it be that we humans have some weaknesses within our linguistic abilities that are normalized so that we may proceed into the great accumulation that lies in the stacks of university libraries? I can only hover on this idea momentarily, but evolutionarily speaking we are likely an early form, and just how our FOXP2 (that's forkhead box protein P2) does its thing, while extraordinary, could still have its limitations. As social animals, no doubt the complexities of getting along in our clan would explain extreme abilities to handle exceptions. Yet as we adopt these exceptions within our mathematics, then arguably it is not mathematics at all. To confuse operators with values seems to me a very clean rendition of this problem.

From another angle, closure is not actually guaranteed by computing hardware. A 32-bit machine gives rise to a 64-bit machine in this way: addressable random access memory actually exceeded the 32-bit space, necessitating the move, though they might well have gone to a five-byte machine rather than an eight-byte one. Maybe some of the byte-style processing works out better staying in powers of two. All sorts of issues arise in computing hardware, but the point really is that addition on the 32-bit accumulator will break when you add large enough values. Yes, a carry flag allows for further processing, but as to whether you are still in the same elemental set: here it is easy to interpret that you are not. The 32-bit value is fundamental to the machine; it is elemental. And once again, upon breaking closure, the cause to preserve it will generate a new form.

It seems fairly factual to admit that the forms which actually preserve closure are modulo-behaved. They care not how many times they've wound around; they lose that data. This then allows them to remain in a small space. So it is not as if this closure breakage is universally felt by large numbers. What I think may be felt by a sensitive reader is that our claims of infinitely large number systems make somewhat of an artifice of closure. This is more readily explained by the product, where the problem is far larger than the carry flag: when we multiply two ten-digit numbers we need twenty digits for the result. This breakage of closure is stronger than the sum breakage. Now looking at the inverse operation, the result in preserving closure is stronger as well, for even from the naturals come the rationals via traditional mathematical progression. Now we've birthed not just the negative numbers, but the source of the continuum!

Still, just as the sign became augmented onto the natural value, so too can a decimal point become augmented, and this form arguably has performed the operation known as division and arrived at a pure value. There is less confusion, it seems, when we engage the decimal point as to whether it is value or operator, and of course we now have to care about the digits of the value slightly more. It is a pastime of mathematicians to wipe away the details of the digits, yet here, by amplifying our awareness of them, we arrive at quite a lively interpretation.

Of course, my computer says that 1/3 = 0, staying in the integers, say, so another avenue lies that way. To challenge the logic of a hardware engineer, whose duty is to perform such operations; well, division does just that. As users we are handed division on a platter, by a button that is barely different from the multiply button.

Yet another avenue of falsification takes the physical-to-mathematical transition and exposes a mathematical fraud. If there are nine apples on a small apple tree then we readily confess this to be a natural value. Plucking off the leaves of the tree we find there are 126 of them, the apples very well exposed now, and the poor tree a bit nude. These natural values will add readily together (135) by the mathematician's reckoning of elemental set theory. Yet to the physicist a stark mistake is made with that addition, for apples and leaves are not the same unit. The product only amplifies the situation, and even remaining in apples alone, should we have three apples in our left hand and six in our right, while the product is eighteen, we will still have only nine apples. So the abuse of natural numbers ought to be readily felt by the physicist at least, while the mathematician completely escapes from reality via his denial mechanism.

Backing out of that hole, perhaps we could meet at the base of the tree; philosophers, physicists, and mathematicians, and admit that the division into these ranks is a bit rank itself. In this modern age of compiler-level integrity there will be talk of units, types, and set theoretic principles that even the physicist might obey. As to what is elemental: this is actually quite a serious task, and if along the way in our dissection a few details have gone wrong yet unnoticed, as happens within an invalid assumption; well, the consequences could, would, and should be mind-altering.


Sep 20, 2023, 11:08:39 AM

Along another line, and already some sense of discrete versus continuous is on the slate: that partition of space which arguably takes a continuous form as well embodies a discrete property of three-dimensionality, or two-dimensionality, I suppose, if you've settled into a paper version. In that geometry is already imbued with the physical sensibility of the natural number, to what degree will a recovery of 3D space via its offspring be valid? Could it be invalid? Isn't our 3D real-valued space (x,y,z) a bit too black and too white? Certainly it is. Upon taking a solid-analysis view of the situation, which again holds onto physical awareness, the means by which we arrive at three-dimensional space as the space we inhabit, well, no matter how large or small you make a solid object, (x,y,z) will not be sufficient, for that solid will carry an orientation as well. You may as well stay at finite size and do the analysis; three more parameters will be necessary, and they come in a dimensionally diminishing form. This somewhat tracks the realization of additional characteristics of the electron. To what degree is the static situation even valid? Certainly as an early form it must be helpful, yet objects in motion are actually what we need to resolve. The point as fundamental relaxes us back to an overly simplistic black-and-white version of reality, the (x,y,z) Cartesian real-valued system as RxRxR, whereas the alternative is a bit overwhelming. Ordinarily non-Euclidean tends to imply curvature or some non-flat paradigm, but here it appears as though we can break through to a non-Euclidean version of space that is still flat. We simply have to challenge the cartoon image of a black-and-white space, as we've worked on paper to construct in our training. That the spatial representation involves a structural integrity hidden by the (x,y,z) basis is not helpful, and habituation of students onto this form may be misleading.
Maintaining stronger physical correspondence may be helpful. This reflects onto set theory as well; a selection of black points in an otherwise white space is not actually so worthy as a basis, yet this is what we are taught.

I suppose we'd have to arrive at atomic theory and molecular theory not much farther along from there, if physical correspondence is to hold up. It is bewildering to make such a claim from my own abilities, but perhaps some young mind will find a way.

Sep 20, 2023, 4:11:41 PM

So unfortunately, never, since most of them are simply like calculators, where they think that if they can calculate and approximate then they are mathematicians.

Of course not, since before the ability of calculating and approximating comes the true understanding, which they usually miss 🤔!

For illustration, the following numerical expression (7 = 7) is of course a very meaningless expression, where if it is represented for them in another deformed shape like (7 - 7 = 0), then they would never be able to understand that a big cheating is going on to brainwash them, by an invalid operation like the negative or by a fictional number like zero.

It is too hard, mainly for mathematicians, to understand anything of what you are going to educate them with, for sure.

BKK

Sep 21, 2023, 10:39:59 AM

I get caught in this morass with the empirical view attacking the theoretical view, then flipping it around, and so forth: it may be that my arguments here are confused, but at least they are reaching. I see the natural value argument that I make, while I still do perceive that natural value just as I was trained: as the simplest form of number. Well: unity is ultimately the simplest form of number, and unification may as well be renormalized to unitification, and possibly all would be well. The universe is avoided in our awareness, and yet, as we confess that every practical count requires that a partition of that universe be cleanly secured, the very stability of our situation in arriving at the natural value is fortunate. Dragons do not fly out of cans of coffee when you open them up in the morning, do they? I guess it's bags of coffee for most people these days, but I buy the cheap stuff.

I suppose at some level, and this involves stepping out away from the morass, we can witness that we are involved in a progression; that this progression is not necessarily straightforward, and that our involvement in the progression is to find our contribution to it. I think all who bother to post here hope to do just this. Quite a few of us wind up finding the need to tear quite a lot down in the process. Facing the accumulation I find this entirely acceptable, but our constructions have to stand after the fact as well, or else we've merely burned the thing down.

That's really going too far, and while it may seem that the thing burns too well, possibly another approach is to disentangle the parts. It is bizarre to me that functional analysis takes a seat in the basis beneath values and operators, as is the formal language today. I say that without values and operators, what can a function even portray? As to which is elemental, and which is built upon those elements: this inversion at least deserves scrutiny. This is a compiler-level error which Peano committed, and later the abstract algebraists, too. As to who is being destructive to mathematics: I could readily point my finger at them rather than at myself. Have I failed to appreciate the mathematician's function? Am I too entrained on my computer's version? The fact that we can substantiate such a complete structural inversion within human thought is very troubling to me.

Now, taking the number and its hardware form more seriously, we witness several formal syntax usages which deserve structural scrutiny. One is the sign, and the other is the decimal point. Sure enough, our hardware versions do bother to keep track of these augmentations to the natural value. Meanwhile the mathematician views them as evolutionary set theory, somewhat through the closure argument that I've opened with here. Staying in the digital view, which is consistent with modern computing hardware, we see that we'd certainly like closure to hold up, but it does not universally. In this moment I am brought to a new name: should we augment the notion of the digit and allow that these elemental conglomerations have achieved a singular larger digit? A mudigit, for instance? I don't feel settled on that, but upon tying those binary bits together in hardware such that they do indeed spit out another in the same form, then we do in fact validate this molecular form. In a sense this brings us around to taking the original digit more seriously as well, which is anathema to Peano's way.

As to finding order in the universe: pretty clearly our immediate form is troubled at our scale of existence. All that we can do is concern ourselves with some partition of it; for instance, a bean pod containing five fertile beans introduces some semblance of order, whereas the quantity and varieties of beans in the universe is unknown. This value likely goes gray as refinements occur. The very definition of bean will need to be reformed; refined. At the human scale we will not achieve elemental status this way; the bean is not fundamental. Even the atom is just a stop along the way. Enter the muon. A return to the classical exposes so many contradictory constructions along the way that the very mathematics in use becomes open.

I am going to stick to my claim that the trifurcation into physics, mathematics, and philosophy is invalid. Heck, these days your king is a queen, right?

Whether philosophy is just a jack, or rather, an ace: this has to be determined. It depends upon the philosophy, you see? My hope is that a semi-classical approach will work; that we can make sense of reality. The alternative is; well; it will go dark before our eyes. I don't really want to say it, but the analysis speaks for itself practically: quantum physicists are nihilists. I suppose that should be another topic of conversation, and I don't mean to place it too seriously, but from the semi-classical approach there are plenty of quotes direct from their mouths to prove it.
