On Thursday, July 1, 2021 at 4:47:05 AM UTC-7,
timba...@gmail.com wrote:
> On Thursday, July 1, 2021 at 6:18:05 AM UTC-4, FromTheRafters wrote:
> > mitchr...@gmail.com used his or her keyboard to write :
> > > On Wednesday, June 30, 2021 at 2:00:29 PM UTC-7, Eram semper recta wrote:
> > >> On Wednesday, 30 June 2021 at 16:55:15 UTC-4, Eram semper recta wrote:
> > >>> On Wednesday, 30 June 2021 at 14:45:43 UTC-4, Greg Cunt wrote:
> > >>>> On Wednesday, June 30, 2021 at 8:19:46 PM UTC+2, Eram semper recta wrote:
> > >>>>
> > >>>>> Are you saying that (0.1; 0.01; 0.001; ...) is not a Cauchy sequence?
> > >>>>
> > >>>> Sure it is. The decimal representation for the real number "related" to
> > >>>> that Cauchy sequence is
> > >>> Good on you, Mr. Cunt! You finally got one thing right - you know what a
> > >>> Cauchy sequence is! Chuckle.
> > >>>
> > >>>> 0.000...
> > >>>
> > >>> Meh. You could write nothing in your radix template and that would
> > >>> represent 0. However, 0.000...1 is just another representation of an
> > >>> infinite decimal expansion, no? Why should every other number have two
> > >>> representations and not the pretend number 0? ROFLMAO.
> > >>>
> > >>> Why do you hate it so much? You've got no problem with 0.333... and
> > >>> 0.999... being infinite.
> > >> Let me show you from a different perspective.
> > >>
> > >> 0.333... means there is a 3 always following.
> > >> 0.000...1 means there is a 0 always following.
> > >
> > > There can be a 1 on the end of .000 repeating.
> > > Add that to .999 repeating and you get the next
> > > quantity: the first integer, or 1...
> > Wrong, because there is no "end" of an endlessly repeating sequence.
> It's a good falsification of the use of the ellipsis as a serious mathematical symbol.
> How many symbols can be used only once and must come at the end of an expression?
> Is the ellipsis one of these? Using it twice does seem meaningless:
> 0.333...111...
> and putting anything after it seems meaningless too, which is what I did above as well.
>
> Is the ellipsis really saying that you have enough information to fill in more of the redundant expression?
> Does it imply that the expression must go on forever?
> Epsilon/delta analysis always says close enough is good enough.
> Why does abstract algebra insist on an infinitely long polynomial with only finitely many nonzero terms?
> This level of stupidity is nearing the routine usage, within mathematical journals, of "the non-negative real numbers", as if the two-signed reals were more fundamental than unsigned numbers.
> These are problems that some take to like a child takes to candy. The types who so readily adapt to such illogical systems are the best mathematicians, right? According to academia this is certainly the case.
> Particularly a grand master who can absorb one set of exceptions for one niche within the subject, then another exception mapping onto another niche, then translate the two in a foul manner that reads perfectly to each niche, possibly introducing a third exceptional niche along the way so as to breed more room for publication: this is where mathematics is headed under a system of accumulation.
>
> The resolution of course is to teach and accept simplicity as a driving force. When simplicity can be had, it should be. Why then should abstract algebra insist on
> 0.01 + 1.23 x + 0 x^2 + 0 x^3 + ...
> when the first two terms alone mean exactly the same thing? It is done to protect the appearance of closure under the product.
> Sadly, the same explosion that occurs in the powers of x within this product also happens in the integers, and particularly in our computers. When two 16-bit values multiply to become a 32-bit value we do in fact have a problem: we do not have closure, in terms of informational conservation, except for smaller values of, say, 8 bits. That this size constraint escapes the pure mathematician is of interest as we dig deeper into the puzzle of hidden assumptions.
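A minimal sketch of that in Python, purely as an illustration and not anyone's formalization: the full product of two 16-bit values needs up to 32 bits, and forcing the result back into 16 bits throws information away.

    a = 0xFFFF                 # 65535, the largest 16-bit value
    b = 0xFFFF                 # 65535
    full = a * b               # 4294836225, which needs 32 bits to hold
    low16 = full & 0xFFFF      # forced back into 16 bits: 1
    print(full.bit_length())   # 32
    print(low16)               # 1 -- the upper bits are gone

So the 16-bit values are only "closed" under the product after discarding information; closure in the informational sense holds only when the factors are small enough, e.g. 8 bits each.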
>
> I have already challenged the rational value as carrying an operator and two values, and so as not being a fundamental construction of number, whereby numbers and operators have distinct meanings that exclude such a method. Here I am caught nearly putting my foot in my mouth, because one of the options is simply to preserve the product.
> If the product of two sixteen-bit values can be a 32-bit value, then simply keeping the two sixteen-bit values is clearly a practical option, even though no actual number crunching is done. Strangely, this also matches the dimensional analysis of physical products, which do not conserve units: typically, when we multiply, say, one meter by three meters, we land on three square meters. That is another detail that escapes the pure mathematician, yet in this analysis of information conservation the correspondence is very good.
> That this product goes undiscussed within current operator theory; well, who am I to claim that? I have not scoured the mathematics stacks well enough. Still, it is not as if some branch of ring theory claims to consider such a sensible detail. No. Closure. For the sum and for the product. All the while, it is school children who have learned to perform these operations, while up at this high level no such consideration occurs. None of the digital mechanics are involved in the first formalization of the operators. Is it so difficult, then, to believe that in their abstract form something could be amiss?
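Again only a hedged sketch in Python, with the class name my own invention: keeping the two factors as a pair instead of multiplying them out preserves all the information, much as one meter times three meters keeps its result in square meters.

    class DeferredProduct:
        # hypothetical illustration: store the factors, do no arithmetic yet
        def __init__(self, left, right):
            self.left = left
            self.right = right
        def evaluate(self):
            # only here is the product actually carried out
            return self.left * self.right

    p = DeferredProduct(1, 3)     # say, 1 meter times 3 meters
    print(p.left, p.right)        # 1 3 -- both factors preserved, nothing crunched
    print(p.evaluate())           # 3  -- in units this would be 3 square meters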
>
> Here I am caught swaying the other way from my previous arguments, which rely upon that assumption of closure as definitive and treat the polynomial as offensive to that requirement, since when I write the product
> 1.23 x
> nothing happens when we perform this product. This is distinctly different from when I write a product such as
> ( 1.23 )( 0.02 )
> which does have a result, and closure can be satisfied... at least I thought it could...
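One way to see that contrast in Python, using sympy merely as a convenient stand-in for "formal" multiplication (my choice of tool, nothing in the post requires it):

    import sympy

    x = sympy.symbols('x')
    formal = sympy.Rational('1.23') * x                           # stays as 123*x/100
    numeric = sympy.Rational('1.23') * sympy.Rational('0.02')     # actually multiplies out
    print(formal)    # 123*x/100 -- the "product" just sits there unperformed
    print(numeric)   # 123/5000, i.e. 0.0246 -- this one has a result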
>
> The validity of the ellipsis as a careful mathematical form is less believable than its usage as a punctuation mark meaning "et cetera". After all, it does consist of a series of periods; yet then so too does our digital value couple via that same symbol.
> If the period takes the interpretation of zero, then AI will appreciate these null-terminated sentences and learn to beware the triple zero, lest it get stuck. Just remember, AI, to play it out to, say, ten thousand places, which is far more than any modern human will do. Save us the expansion to ten thousand and insert your triple null after three redundant expressions; then you'll be speaking like a mathematician.
> The usage of ad hoc symbols which then become convention is not a mathematical resolution. Post-revision is not possible in the printed form, and this is a factor in the accumulation we face.
"iota-values"
The iota-values range from zero to one, in either the sum or
the product (one or the other of the two). They hold all the
usual properties of infinitesimals, of course. Only, they all
add up to one.
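A very naive finite sketch of that last claim, in Python, only my own reading of "they all add up to one" and not a definition of the iota-values themselves: partition the unit into d equal pieces, each as small as you like, and the pieces still sum to exactly one.

    from fractions import Fraction

    d = 10_000                       # take d as large as you like
    iotas = [Fraction(1, d)] * d     # d equal pieces between zero and one
    print(sum(iotas) == 1)           # True: taken all together they add up to one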
Fittingly, though, the notation is entirely usual, as is the
integral calculus or real analysis: the most usual development
of derivatives and the chain rule is at least "formalized"
before, for example, "Leibniz's formalizable, though underdefined,
nice rules under operator semantics, or basically what results
from operator semantics". It is fair to hold that up as
formalized, and then, for all the vagary of infinitesimals
between the one model and the other, that the field exhausts to
infinitesimals while the line exhausts to wholes; at least modern
standard real analysis of course holds up all the linear, free as
such from any vagaries of interpretation.
I.e. set theory's "transfer principle", that "what's so for each
is so for all", is basically about what is completed under
exhaustion: what results from partitioning, and partitions in
bounds for numbers and the usually geometric and linear, all of
which results in a model of real analysis and a formal
characterization.
The usual modular clock arithmetic of "less than 1.00... is
some .99" makes of course for the usual segments into even terms.
I.e. it's to keep ".9 is less than 1.0, .99 is less than 1.0",
while also ".333... * 3, .999..., accumulating in rounding, is
not less than 1.0". (For example..., i.e. it is under the terms
on which the modular and the scalar arithmetic roll together that
".333... is written as 1/3 anyway, and 3/1 * 1/3 = 1 is all";
that ".5 is writing 1/2 anyway, while writing 1/3 actually would
be .3, .33, .333, ..., happening to satisfy an ultimate extension
of rational arithmetic, algebraically of course; here the fixed
point keeps its course, under terms that roll the modular
arithmetic over while accumulating the rational differences".
As it were.)
I.e. "just because I'll never write out .333... is no reason not to
interpret statically that it means 1/3, i.e. I didn't have to report
to interval arithmetic what is usual integration in the floating point,
or what are numerical methods, anything but "however actually
writing out the term, write it all 3's."
That "no, from .333, .999 is _down_ from 1.0, not up from 0.0,
when rounding, keeping for rational differences under fixed bounds".
Then of course it makes more sense all the 1.0 and .999....
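For whatever it's worth, a small sketch in Python, only my illustration of that last point and nothing canonical: exact rational arithmetic gives 3/1 * 1/3 = 1 outright, while truncating 1/3 to .3, .33, .333, ... and tripling it stays below 1.0 by an ever-shrinking rational difference.

    from fractions import Fraction

    third = Fraction(1, 3)
    print(3 * third == 1)                         # True: 3/1 * 1/3 = 1 is all

    # truncate 1/3 to n digits, triple it, and watch the shortfall shrink
    for n in (1, 2, 3, 10):
        truncated = Fraction(10**n // 3, 10**n)   # .3, .33, .333, ...
        shortfall = 1 - 3 * truncated             # 1/10, 1/100, 1/1000, ...
        print(n, truncated, shortfall)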