
A spiral space-filling curve as a natural continuum


Ross A. Finlayson

Sep 17, 2012, 3:38:17 PM

In the forward notion of establishing mutual results in Euclidean
geometry and Finlaysonian geometry it is to where Euclid defines
geometry, in terms of points and lines, and here the spiral space-
filling curve from the natural continuum defines the geometry in
points and space.

Euclid gives us four or five rules of geometry.

Two points define a line.

(Two non-parallel lines define a point, if they are not skew.)

There are points all along the line, finite constructions are in the
straightedge and compass, Euclid gives us this.  Then, space is naturally
defined as area with the products of functions in geometry. Here he
starts with the 2-D case.

Then, it is convenient to establish, for example, that from the unit
line segment and circle and disc about the origin, there is defined
affine geometry. (Here in the well in Euclidean geometry.)

Then, that's convenient to establish coverage or the space of the
functions that are then generally defined in for example three
orthonormal vector bases. This establishment of the Euclidean
geometry and the mutually consistent co-existence of a spiral space-
filling curve with these compatible properties, establishes the
general canon.

Then, for the polydimensional perspective, we see the line segment
from the disc. We see the origin from the line segment and disc.
Look, there is the origin, it begins the spiral, space-filling curve
of the continuum. To then maintain this generally, for the line
segment it is that and for the disc it is that.

Then, we see this adds power to the theory, simply from that general
Euclidean geometry is maintained. Here it is from that, you can
translate from the disc to the well for Euclidean results, parallel
lines from the unit disc, here for affine geometry.

Regards,

Ross Finlayson

William Elliot

Sep 17, 2012, 10:54:39 PM
On Mon, 17 Sep 2012, Ross A. Finlayson wrote:

> In the forward notion of establishing mutual results in Euclidean
> geometry and Finlaysonian geometry it is to where Euclid defines
> geometry, in terms of points and lines, and here the spiral space-
> filling curve from the natural continuum defines the geometry in points
> and space.

So spakest the random though generator.

Ross A. Finlayson

Sep 18, 2012, 11:53:35 AM
On Sep 17, 7:54 pm, William Elliot <ma...@panix.com> wrote:
> On Mon, 17 Sep 2012, Ross A. Finlayson wrote:
> > In the forward notion of establishing mutual results in Euclidean
> > geometry and Finlaysonian geometry it is to where Euclid defines
> > geometry, in terms of points and lines, and here the spiral space-
> > filling curve from the natural continuum defines the geometry in points
> > and space.
>
> So spakest the random though generator.
>
>
>

Heh, the monkeys. Now in diapers!

Pretty direct for random.

Sometimes, not so much.

Well I guess it's more in context among the threads named "A spiral
space-filling curve as a natural continuum". Here that's plainly what
it is and to be the article title. Elliot - that's real direct for
random.

Ah then, points and lines or points and space: geometry. Is it, no
longer geometry? Here if points and lines is geometry, and that of
shapes, then of points and space is geometry.

I turn to Hardy and for example the real numbers are point and number,
geometric interpretation, numeric interpretation, point, number, and
vector. For Hardy's "Pure Mathematics" or "the text" of the day,
features of the real numbers are features in geometry and vice versa.

Basically, I'm to use language tools to extract the semantic contents,
that would be the "though" generator.

What's cheaper, five grad students or a beowulf cluster that imagines them?

Then, for the features of the natural continuum, here as defined with
its simple properties in the thread, I'd be interested to know what
you think of fundamental results in the real numbers and as well in
geometry in the real numbers, with for example transforms among the
unit disc, segment, square, and quadrant and plane.
R(-1,1) -> R[0,1) -> R[0,1] x R[0,1] -> R[0,oo) x R[0,oo) -> R x R

R(-1,1) -> R x R[0,1) -> R x R

For centralized components:

R(-1,-1) -> R x R
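(For reference, and not specific to this construction: standard explicit maps
realize the one-dimensional steps in these chains, for example x -> tan(pi x / 2)
carries R(-1,1) onto R, x -> x/(1-x) carries R[0,1) onto R[0,oo), and
interleaving digits gives an injection R[0,1) x R[0,1) -> R[0,1), which with
care about repeating-9 expansions becomes a bijection, though not a continuous
one.)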

Then, the idea of these simple transform diagrams is, in terms of the
space, then to indicate, for example regions of resources, in the real
numbers or quantities represented in them, to work from the transforms
and how they maintain the data, to see in the general plan the Hilbert
course.

Here,
N -> R[0,1]
or sometimes
N -> R[0,1)

As a route for application, it is as simple as the logic of the data.

Then,
N x N -> R[0,oo)
but not
N -> R[0,oo)
except
N -> N x N .
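For the last step, N -> N x N, a standard witness is the Cantor pairing
function; a minimal sketch in C++ follows (the function names are illustrative,
not from the thread):

#include <cmath>
#include <cstdint>
#include <utility>

// Cantor pairing: a bijection N x N -> N, witnessing the N -> N x N step above.
uint64_t cantorPair(uint64_t x, uint64_t y) {
    return (x + y) * (x + y + 1) / 2 + y;
}

// Inverse: recover (x, y) from z by locating its anti-diagonal.
std::pair<uint64_t, uint64_t> cantorUnpair(uint64_t z) {
    uint64_t w = (uint64_t)((std::sqrt(8.0 * (double)z + 1.0) - 1.0) / 2.0);
    uint64_t t = w * (w + 1) / 2;   // first index on diagonal w
    uint64_t y = z - t;
    return { w - y, y };
}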

Thanks,

Ross Finlayson

William Elliot

Sep 19, 2012, 3:20:17 AM
On Tue, 18 Sep 2012, Ross A. Finlayson wrote:
> On Sep 17, 7:54 pm, William Elliot <ma...@panix.com> wrote:
> Then, for the features of the natural continuum, here as defined with
> its simple properties in the thread, I'd be interested to know what
> you think of fundamental results in the real numbers and as well in
> geometry in the real numbers, with for example transforms among the
> unit disc, segment, square, and quadrant and plane.
> R(-1,1) -> R[0,1) -> R[0,1] x R[0,1] -> R[0,oo) x R[0,oo) -> R x R
>
> R(-1,1) -> R x R[0,1) -> R x R
>
> For centralized components:
>
> R(-1,-1) -> R x R

Pray tell, oh divine Jabberwokie, what be
R(0,1), transformed and centralized, and by
what tragic magic thou doesth N -> R(0,1),
thy fiat, begging for sense and meaning.

> Then, the idea of these simple transform diagram, is in terms of the
> space, then to indicate, for example regions of resources, in the real
> numbers or quantities represented in them, to work from the transforms
> and how they maintain the data, to see in the general plan the Hilbert
> course.
>
> Here,
> N -> R[0,1]
> or sometimes
> N -> R[0,1)
>
> As a route for application, it is as simple the logic of the data.
>
> Then,
> N x N -> R[0,oo)
> but not
> N -> R[0,oo)
> except
> N -> N x N .
>
> Thanks,
>
> Ross Finlayson
>
>
> > > Euclid gives us four or five rules of geometry.
> > > Two points define a line.
> >
> > > (Two non-parallel lines define a point, if they are not skew.)
> >
> > > There are points all along the line, finite constructions are in the
> > > edge and compass, Euclid gives us this.  Then, space is naturally
> > > defined as area with the products of functions in geometry.  Here he
> > > starts with the 2-D case.
> >
> > > Then, it is convenient to establish, for example, that from the unit
> > > line segment and circle and disc about the origined, there is defined
> > > affine geometry.  (Here in the well in Euclidean geometry.)
> >
> > > Then, that's convenient to establish coverage or the space of the
> > > functions that are then generally defined in for example three
> > > orthonormal vector bases.  This establishment of the Euclidean
> > > geometry and the mutually consistent co-existence of a spiral space-
> > > filling curve with these compatible properties, establishes the
> > > general canon.
> >
> > > Then, for the polydimensional perspective, we see the line segment
> > > from the disc.  We see the origin from the line segment and disc.
> > > Look, there is the origin, it begins the spiral, space-filling curve
> > > of the continuum.  To then maintain this generally, for the line
> > > segment it is that and for the disc it is that.
> >
> > > Then, we see this adds power to the theory, simply from that general
> > > Euclidean geometry is maintained.  Here it is from that, you can

Ross A. Finlayson

Sep 20, 2012, 3:41:21 AM
On Sep 19, 12:20 am, William Elliot <ma...@panix.com> wrote:
> On Tue, 18 Sep 2012, Ross A. Finlayson wrote:
> > On Sep 17, 7:54 pm, William Elliot <ma...@panix.com> wrote:
> > Then, for the features of the natural continuum, here as defined with
> > its simple properties in the thread, I'd be interested to know what
> > you think of fundamental results in the real numbers and as well in
> > geometry in the real numbers, with for example transforms among the
> > unit disc, segment, square, and quadrant and plane.
> >    R(-1,1) -> R[0,1) -> R[0,1] x R[0,1] -> R[0,oo) x R[0,oo) -> R x R
>
> >    R(-1,1) -> R x R[0,1) -> R x R
>
> > For centralized components:
>
> >    R(-1,-1) -> R x R
>
> Pray tell, oh divine Jabberwokie, what be
> R(0,1), transformed and centralized, and by
> what tragic magic thou doesth N -> R(0,1),
> thy fiat, begging for sense and meaning.
>
>
>


What with apparently having one right here, I say I rather feel it is
fiat than make it. I certainly believe it.

Simply, when 0 counts up to infinity, that's enough to completely
divide an amount, or here the amount. Then, via deduction, the
complete ordered field is made from that.

N * 1/N = 1.0

N * 1.0 = R^dots+ = R+

R^bar+ = R+

Jabberwock: the gyres and gimbles in the wabes, mimsy were the
borogoves, and the momeraths outgrabe, slithy toves: snicker-snack.
Snicker-snack. (shudder). (Here the Jabberwock recounts the dreadful
events.)

Building the complete ordered field is from making the real numbers
first, then simply establishing the elements of the ordered field in
that; then, to be complete, the reals are defined by the complete
ordered field, the closure of properties and operations on them.

Z = N + -N
Z * 1.0 = R^dots = Rbardots
...
R^bar = Rbardots

Q e Rbardots, AnE n x N e Rbardots,
R^bar = AnE n x N e Rbardots,
Rbar = Rbardots

Here, that says because all the expansions are in R bar dots, that is
enough for Rbar and standard R.

Here then the reals are not constructed in the theory to have
approximations except to rationals, in R^bar. That's suitable and
efficient, where, all the approximations can be constructed together
and are countable, because they're co-defined and they're defined.

Alright then, I'll go ask the 8-Ball, Ouija, and Guatemalan family
living in my closet, the Jabberwock, how to arrive at a function from
N onto 0.0 to 1.0: most directly.

Here "most directly" is one for each of them, 1/N. Basically this is
that division exists in the natural integers and it exists for all of
them or a completed infinity, and N/N = 1 and here 1.0, establishing
topological properties in the integer space.

Then if for Maxwell's demons we could have these jabberwocks that
basically make sense of the poem, isn't it because we can already make
sense of the poem? Basically figuring if the demon has to work
letting in or not hot and cold molecules, here the jabberwock has that
if it's a real defined from the line it's not as if it's defined from
the complete ordered field, where both share properties of the reals.
Here though there's no need to employ jabberwocks for this: simply it
is maintained in the definition.

And, yes, I've gone on and developed why the standard has that there
is countable additivity in the unit measure because the reals have
countable additivity in the unit measure, for a standard non-standard
measure theory, yes the standard and classical.

Basically there's nothing different here than the standard.

R^bar = R

Regards,

Ross Finlayson



> > Then, the idea of these simple transform diagram, is in terms of the
> > space, then to indicate, for example regions of resources, in the real
> > numbers or quantities represented in them, to work from the transforms
> > and how they maintain the data, to see in the general plan the Hilbert
> > course.
>
> > Here,
> >    N -> R[0,1]
> > or sometimes
> >    N -> R[0,1)
>
> > As a route for application, it is as simple the logic of the data.
>
> > Then,
> >    N x N -> R[0,oo)
> > but not
> >    N -> R[0,oo)
> > except
> >    N -> N x N .
>
> > Thanks,
>
> > Ross Finlayson
>
> > > > Euclid gives us four or five rules of geometry.
> > > > Two points define a line.
>
> > > > (Two non-parallel lines define a point, if they are not skew.)
>
> > > > There are points all along the line, finite constructions are in the
> > > > edge and compass, Euclid gives us this.  Then, space is naturally
> > > > defined as area with the products of functions in geometry.  Here he
> > > > starts with the 2-D case.
>
> > > > Then, it is convenient to establish, for example, that from the unit
> > > > line segment and circle and disc about the origined, there is defined
> > > > affine geometry.  (Here in the well in Euclidean geometry.)
>
> > > > Then, that's convenient to establish coverage or the space of the
> > > > functions that are then generally defined in for example three
> > > > orthonormal vector bases.  This establishment of the Euclidean
> > > > geometry and the mutually consistent co-existence of a spiral space-
> > > > filling curve with these compatible properties, establishes the
> > > > general canon.
>
> > > > Then, for the polydimensional perspective, we see the line segment
> > > > from the disc.  We see the origin from the line segment and disc.
> > > > Look, there is the origin, it begins the spiral, space-filling curve
> > > > of the continuum.  To then maintain this generally, for the line
> > > > segment it is that and for the disc it is that.
>
> > > > Then, we see this adds power to the theory, simply from that general
> > > > Euclidean geometry is maintained.  Here it is from that, you can

Virgil

Sep 20, 2012, 3:58:20 AM
In article
<3adbabc3-8d4e-4fe3...@u2g2000pbl.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:

> Simply, when 0 counts up to infinity, that's enough to completely
> divide an amount, or here the amount. Then, via deduction, the
> complete ordered field is made from that.

GIGO!
--


William Elliot

Sep 20, 2012, 4:04:42 AM
On Thu, 20 Sep 2012, Ross A. Finlayson wrote:
> On Sep 19, 12:20 am, William Elliot <ma...@panix.com> wrote:
> Simply, when 0 counts up to infinity, that's enough to completely
> divide an amount, or here the amount. Then, via deduction, the
> complete ordered field is made from that.
>
Define your terms:
N, 1/N, R^dots, R+, R^bar, R^bar+, Rbardots,

and explain what "Q e", "AnE n x N", and R^bar mean.

> N * 1/N = 1.0
> N * 1.0 = R^dots+ = R+
> R^bar+ = R+
>
> Building the complete ordered field is from making the real numbers
> first then simply establishing the elements of the ordered field in
> that, then to be complete the reals are defined by the complete
> ordered field, the closure of properties and operations on them.
>
> Z = N + -N
> Z * 1.0 = R^dots = Rbardots
> ...
> R^bar = Rbardots
>
> Q e Rbardots, AnE n x N e Rbardots,
> R^bar = AnE n x N e Rbardots,
> Rbar = Rbardots
>
> Here, that says because all the expansions are in R bar dots, that is
> enough for Rbar and standard R.
>
> Here then the reals are not constructed in the theory to have
> approximations except to rationals, in R^bar. That's suitable and
> efficient, where, all the approximations can be constructed together
> and are countable, because they're co-defined and they're defined.
>
> Here "most directly" is one for each of them, 1/N. Basically this is
> that division exists in the natural integers and it exists for all of
> them or a completed infinity, and N/N = 1 and here 1.0, establishing
> topological properties in the integer space.
>

Ross A. Finlayson

Sep 20, 2012, 4:23:45 AM
(Notation: R^bar is R written with an overbar, R^dots is R written with
dots over it, and R^bar^dots is R written with both the bar and the dots.)

Basically from N is constructed the ratios and fractions of N, or the
positive rationals Q+ or n/n, from that 1/N, that defines 1.0.

Q is the rationals, R is the real numbers, A and E are for any and
each, and n x N is an element of all sequences of n-sets, or here
simply the sets of those.

A n_i e n X N, E n_i

Then, that's simply stating they exist, and also, that each exists or
is constructible. Here that is where these are of the rationals. The
rationals do approximate each. Each as constructions are in R^dots as
it exhausts all the finite approximations, but, only in R bar dots
from here R being R bar.

Conveniently the modern standard treatment is in R bar, it's called R
in modern standard real analysis that uses set theory: the set of
real numbers.

Convenient for me!

Regards,

Ross Finlayson

Ross A. Finlayson

Sep 20, 2012, 4:37:16 AM
On Sep 20, 1:23 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
wrote:
For real analysts, who wouldn't have to translate their proofs, it's
convenient for them, too.

R: set of real numbers

Regards,

Ross Finlayson

Tim Golden BandTech.com

Sep 20, 2012, 7:59:05 AM
On Thursday, September 20, 2012 4:37:16 AM UTC-4, Ross A. Finlayson wrote:
> On Sep 20, 1:23 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
> > On Sep 20, 1:04 am, William Elliot <ma...@panix
> > > On Thu, 20 Sep 2012, Ross A. Finlayson wrote:>
> > > > On Sep 19, 12:20 am, William Elliot <ma...@panix.com> wrote:
>
> > > > Simply, when 0 counts up to infinity, that's enough to completely
>
> > > > divide an amount, or here the amount.  Then, via deduction, the
>
> > > > complete ordered field is made from that.

I've messed around with similar constructions. Another nearby one is to consider just the spherical shell in such terms. I have an algorithm that graphs an n-dimensional shell this way, but the data is so dense that it is graphically useless.

By mapping an n-dimensional space to a one-dimensional path you really have to admit the compromises that occurred. I don't believe that informationally you are free to pull, say, five theoretically pure real values from one theoretically pure real value. This just breaks information theory and we would have a lack of conservation of information. Well, there is rather a lot of information already, but if you want to work with a number that does not contain an infinity of numbers within it, it might be wise to address the grievances, and this then lands one in some modulo behaviors. Well, the concrete form of the real number itself (the radix ten representation that we use today as in 1.2345) contains modulo behavior, so there are games that can be played in terms of digital interpretation. To play these games, however, one will be conserving information.

Here is one from my site:
http://bandtech.com/PolySigned/Deformation/AxisDualDeformStudy.gif
This is a 3D sphere which is surveyed with a one dimensional path. It is plenty good. Still, the limitations are clear.
From a physical point of view this tool would be most interesting if it carried relative behaviors, but it gets pretty awful so far as I can see, especially if two observers at differing origins happen to select different scales for their survey. I know you just care about the math.

Keep up the good work. I believe it is a good awareness but I don't think it needs to be whitewashed either.

- Tim



Ross A. Finlayson

Sep 20, 2012, 12:23:44 PM
On Sep 20, 4:59 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:
> On Thursday, September 20, 2012 4:37:16 AM UTC-4, Ross A. Finlayson wrote:
> > On Sep 20, 1:23 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
> > > On Sep 20, 1:04 am, William Elliot <ma...@panix
> > > > On Thu, 20 Sep 2012, Ross A. Finlayson wrote:>
> > > > > On Sep 19, 12:20 am, William Elliot <ma...@panix.com> wrote:
>
> > > > > Simply, when 0 counts up to infinity, that's enough to completely
>
> > > > > divide an amount, or here the amount.  Then, via deduction, the
>
> > > > > complete ordered field is made from that.
>
> I've messed around with similar constructions. Another nearby one is to consider just the spherical shell in such terms. I have an algorithm that graphs an n-dimensional shell this way, but the data is so dense that it is graphically useless.
>
> By mapping an n-dimensional space to a one dimensional path you really have to admit the compromises that occurred. I don't believe that informationally you are free to pull say five theoretically pure real values from one theoretically pure real value. This just breaks information theory and we would have a lack of conservation of information. Well, there is rather a lot of information already, but if you want to work with a number that does not contain an infinity of numbers within it it might be wise to address the grievances, and this then lands one in some modulo behaviors. Well, the concrete form of the real number itself (the radix ten representation that we use today as in 1.2345) contains modulo behavior so there are games that can be played in terms of digital interpretation. To play these games however one will be conserving information.
>
> Here is one from my site:
>    http://bandtech.com/PolySigned/Deformation/AxisDualDeformStudy.gif
> This is a 3D sphere which is surveyed with a one dimensional path. It is plenty good. Still, the limitations are clear.
> sphere
> From a physical point of view this tool would be most interesting if it carried relative behaviors, but it gets pretty awful so far as I can see, especially if two observers at differing origins happen so select different scales for their survey. I know you just care about the math.
>
> Keep up the good work. I believe it is a good awareness but I don't think it needs to be white washed either.
>
>  - Tim
>
>
>

There are ways and ways. Here if R is still equivalent to Rdots and
it to N, it is that the function between, or the injection either way
comprising the function, is a composition with the equivalency
function. Then, N x N -> R, but it is not clear that N -> R, where it
is clear that N -> 1.0. Here it is that N isn't "just" an inductive
set, for example if it were deductively there are its infinite
ordinals and there's induction through that. Here, then it is
sufficient to complete the natural integers, that it contains an
infinite element, where then the statements of induction that define
the existence of the natural integers and here that they index the
rational partitions between zero and one, that for day omega or 1/N,
it has infinite, constant partitions.

What may be a useful notion is that the integer space already defines
what will be the real-valued space. Then, the logic to establish a
function from integers to values between zero and one in the integer
space, here to corresponding values 0.0 through 1.0 in the real-valued
space establishes the density and completeness, or the gaplessness
property, with the nilpotent infinitesimal (square is zero, though
cube and fourth not), these iota-values that establish the unit course-
of-passage for a treatment of induction. Here there is the general
consideration that in the complete ordered field, there is guaranteed
existence of elements, between any two elements from the complete
ordered field. Here in the unit ring or here simply the partitioned
unit, those elements from the complete ordered field are infinitely
far apart in the contiguous sequence, where, given their construction,
it is perfect the ratio between those two values in R^bar, and their
translate in R_dots. So, while still it is a feature of R^dots that
all of R^bar is in it, there exists an infinite ordinal up to w^w that
maintains, up to the precision or denominator of the two real values,
their difference in contiguous scale, in R^dots.

There are more reals in the reals than integers, sure. There are more
even numbers than multiples of three in their supersets, more quarters
than halves, in theirs, in given ranges of their supersets
establishing for example an arbitrary uniform stride:  there are more
because a fairly random method to find each returns more of the
regular than the less. Here then basically I'm working to that a well-
ordering of the reals has some interesting properties in ZF or here
ZFC. (ZF + "Reals are well-ordered" is equivalent to ZFC) Because,
using infinite ordinals for reals, their well-ordering's ordinal is
consistently >= Aleph_1 (or 2 for reals) in ZFC, and, keeping the
construction out of the arithmetic (or here arithmetical) is the goal
then, to keep definitions beyond the countable: predicative, in
ordinals. This is with the transfinite induction established
literally, as so for each the expansion exists as does a predicative
ordinal.

For still the construction is in the ordinals, here then functions on
R still have these ordinals but they are simply impredicative, why?

Here, it is from that the reals already existed, values with the
properties between zero and one, yet still there is a general means.
Here, it is that the order of the real number is not simply maintained
in the predicate, that from its place on the line, it is defined by
its ratio, except how it is defined by its ratio. Here the product of
their ordinals maintaining the medium and maintaining the field, is
greater than either's recursive construct. Then, in as to cardinals,
still if the results vary as to the reals, as a set, then they vary as
to the theory, for sets. Or here, it is to the functions are as to
ordinals, and they simply establish their cardinal value, erasing
their ordinal value.

Then, that goes to that the elements of the set are as well ordering-
sensitive, here R as countable continuum and R as complete ordered
field, their orderings to best enumerate them, having inductive and
arithmetical structure respectively. Yes, this then leaves general
results in ZFC so in the real numbers and cardinal arithmetic.

Borel vs. Combinatorics, anyone?

Regards,

Ross Finlayson

Tim Golden BandTech.com

Sep 21, 2012, 8:08:13 AM
On Sep 20, 12:23 pm, "Ross A. Finlayson" <ross.finlay...@gmail.com>
wrote:
This condition 'in their supersets' is wheedling, isn't it?
Formally, for every d which is a second multiple of b there exists c
which is the third multiple of b.
These are the real numbers of which I speak, and not a subset
thereof.

You are trying to play beyond the edge I think. The most practical
implementation will be on the rationals, and for instance any argument
which relies upon a random real number

> more quarters
> than halves, in theirs, in given ranges of their supersets
> establishing for example an abitrary uniform stride:  there are more
> because a fairly random method to find each returns more of the
> regular than the less.  Here then basically I'm working to that a well-
> ordering of the reals has some interesting properties in ZF or here
> ZFC.  (ZF + "Reals are well-ordered" is equivalent to ZFC)  Because,
> using infinite ordinals for reals, their well-ordering's ordinal is
> consistently >= Aleph_1 (or 2 for reals) in ZFC, and, keeping the
> construction out of the arithmetic (or here arithmetical) is the goal
> then, to keep definitions beyond the countable: predicative, in
> ordinals.  This is with the transfinite induction established
> literally, as so for each the expansion exists as does a predicative
> ordinal.
>
> For still the construction is in the ordinals, here then functions on
> R still have these ordinals but they are simply impredicative, why?

You have selected a subset of the reals above here and now generalize
that to the reals, which is not coherent, is it? You've completely
ignored my informational argument. There are errors in your thinking,
or at least that is my position. You can have your cake and eat it
too, but there will be consequences.

- Tim

Ross A. Finlayson

Sep 21, 2012, 12:00:23 PM
On Sep 21, 5:08 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
I'll admit that the notion that the real numbers have this foundation
that defines them as the contiguous points only between integers, and
that somehow this definition supports their definition as complete
ordered field, is tough to reconcile and mathematics has been working
on it for thousands of years.

That either way they're the same elements, there are though various
properties of them as sets that differ. Basically it goes to the
construction of the items. Here there are two stores: a unit segment
store that has divisibility through the infinite maintaining constant
differences that "subsume the ratio" or here define as continuous the
segment, and its corresponding index by Z of the line, and the line
store, which is built from ratios or expansions.

Then, as above where I wrote "R = R bar", there's a lot built into the
expression, that all of R's properties are satisfied by being R bar.
That's not necessarily set identity, where sets are defined by their
elements.  Rather it is that the placeholder R could be replaced with R
bar, and the topological properties of its elements and existence and
uniqueness of elements in R bar, are as in R. This still reflects
that R bar is also R dots, or rather, here also R bar dots, and that,
is also R dots. And, there is a rather restricted transfer principle
between them.

Then, basically R dots and R bar are the constructions, within a
framework that defines the properties of R or the continuous line
through the integers, for each of R dots and R bar. Then, defined by
their elements, the elements are of a type, where the elements of R
can be defined many ways to establish the properties of the set, if
not each element, here of numbers, not sets. There are a variety of
reasonable ways to build numbers from sets, and vice versa. Then,
here, toward understanding why the continuous real line has these
properties of being the shortest distance and maintaining all the
fields, is that it is and does.

Regards,

Ross F.

Virgil

Sep 21, 2012, 6:26:40 PM
In article
<f892b5a9-b29c-4ae0...@t2g2000pbt.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:

A complete ordered field of real numbers has essentially one of two
standard constructions from the field of rational numbers, in terms of
Cauchy sequences or in terms of Dedekind cuts, but however constructed,
to be valid must produce something that is isomorphic as a complete
ordered field to one of them, as all models of the reals are isomorphic
as complete ordered fields.

So any structure that is not complete-ordered-field-isomorphic to
either the Cauchy or Dedekind versions is a false model of the reals.
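(For reference, the two constructions named here can be stated in a line each:
a Dedekind cut is a nonempty, downward-closed proper subset of Q with no
greatest element, and R is the set of cuts ordered by inclusion; the Cauchy
construction takes equivalence classes of Cauchy sequences of rationals, with
(a_n) ~ (b_n) when a_n - b_n -> 0. Any two complete ordered fields are
isomorphic by a unique order-preserving field isomorphism, which is the sense
in which there is essentially one construction.)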
--


Ross A. Finlayson

Sep 21, 2012, 7:35:03 PM
On Sep 21, 3:26 pm, Virgil <vir...@ligriv.com> wrote:
>
> A complete ordered field of real numbers has essentially one of two
> standard constructions from the field of rational numbers, in terms of
> Cauchy sequences or in terms of Dedekind cuts, but however constructed,
> to be valid must produce something that is isomorphic as a complete
> ordered field to one of them, as all models of the reals are isomprphic
> as complete ordered fields.
>
> So anything structure that is not complete-ordered-field-isomorphic to
> either the Cauchy or Dedekind versions is a false model of the reals.
> --
>
>


Yes, that's so (for a reasonable definition of "complete-ordered-field-
isomorphic"). Here R bar is a model of R, R dots is not, except that
it models R bar dots which models R bar. Here, "model" then is under
other conditions. Some theorems true of R aren't true of R dots, for
example "there's no smallest positive real". Yet, the reals of R bar
are defined as, at least, being infinitely along the line of R dots.
They so define and are so defined, representative elements of the
complete ordered field, satisfying standard theorems for R, and Q.

Then basically gaplessness (the "complete" part of the complete
ordered field) is free because it is R dots.

To standard analysis above measure theory enough to discount for the
countable in additivity, this construct satisfies being a structure of
R, or rather, that is its goal.

Yes that's quite so.

Regards,

Ross Finlayson

Virgil

Sep 21, 2012, 10:28:32 PM
In article
<07f09283-69fb-464a...@kg10g2000pbc.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:

> On Sep 21, 3:26 pm, Virgil <vir...@ligriv.com> wrote:
> >
> > A complete ordered field of real numbers has essentially one of two
> > standard constructions from the field of rational numbers, in terms of
> > Cauchy sequences or in terms of Dedekind cuts, but however constructed,
> > to be valid must produce something that is isomorphic as a complete
> > ordered field to one of them, as all models of the reals are isomprphic
> > as complete ordered fields.
> >
> > So anything structure that is not complete-ordered-field-isomorphic to
> > either the Cauchy or Dedekind versions is a false model of the reals.
> > --
> >
> >
>
>
> Yes, that's so (for a reasonable definition of "complete-ordered-field-
> isomorphic").

All the definitions of "complete-ordered-field-isomorphic" that I am
aware of are reasonable and Archimedean, though I would not be a bit
surprised that Ross has access to all sorts of unreasonable definitions,
including some for "complete-ordered-field-isomorphic".
--


Tim Golden BandTech.com

Sep 22, 2012, 9:17:20 AM
On Sep 21, 10:28 pm, Virgil <vir...@ligriv.com> wrote:
> In article
> <07f09283-69fb-464a-b75f-f9c067988...@kg10g2000pbc.googlegroups.com>,
I tend to think that the rationals are plenty good enough, and will
forgo a formal definition of plenty good enough. Settling back to this
position then we could proceed with the spiral analysis, but instead
we stay within the gobbledy gook domain of wheedling an encryption of
n values into one value for free. Upon settling for a concrete form of
the construction what we will observe is that the one value must
contain n times as much information, or else it suffers a uniqueness
problem. Beyond this lie many other problems. Some awareness of
whether these problems are general dimensional, or for instance whether a
special case exists in the 2D form, is of interest, and these could be
valuable extensions from the base construction. Particularly the
practical usage of such a coordinate system ought to cover relative
coordinates, and this is going to be a bear in general dimension with
general resolution. It is a very pretty thing to collapse n dimensions
to one dimension, but better would be to find out why wee humans can
stop our physical representations at three dimensions. If another
means fell out of this math then that could be meaningful. I do know
that an emergent spacetime form does already exist in pure math, and
so it ought to be recoverable under several formats. That this spiral
math is general dimensional is true, but let's try to see if it is
useful.

We know that in the calculus sense we are free to settle upon a
resolution of error so long as it is freely expressed, and it is so
under this construction. This principle is what ought to be relied
upon to proceed. This step of raising the resolution is not very
pretty, but still has some forgiving properties. For instance in the
physical domain at a location O particulate points nearest to the
location will preserve their order so long as each is a unique
distance from O, and the resolution is sufficient. Are these
restrictions practical? To some I think they will be. For others not,
but I think that they would see the reasoning easily enough.

The worst part of this math is when we attempt to compare two such
representations O1 and O2 (two unique centers of spiral) in the same
(static) physical environment. Clearly ordering has turned to
disordering here, but still given that O1 has O2's position within its
own series and O2 has O1's position within its series then there is no
reason to believe that the communication of these perhaps along with a
few other positions that O1 and O2 can agree upon as references would
suffice to translate such that O2's perspective can be predicted by
O1. This method would be the natural form of representation rather
than relying upon a cartesian basis. Still, how one goes about denying
the cartesian representation altogether seems very difficult.

The resolutions get pretty crazy pretty quickly, but what the heck,
there are guys in physics playing around with figures like 10^10^100
and so forth, so I wouldn't dismiss the large value problem
altogether. Also there are alternate constructions of physicality, one
of which is to impose a unit shell as the substrate of existence, and
here I think is a good medium to work this representation on.

Blah de blah de blah

- Tim


Ross A. Finlayson

Sep 22, 2012, 11:54:13 AM
On Sep 22, 6:17 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:
Among reasons it makes sense that if there could be a line drawn from
zero through one that there could be a curve drawn through each of the
dimensions up to 1 = (1,0,0,0,0, ...), then there is that same line
from the origin through one.

http://groups.google.com/group/sci.math/browse_thread/thread/3dcb42cab227de36/d842ed6145d06971

Then, questions are along the lines of: how to get from a circle to a
square?  Most area for a perimeter, or most area for n sides?  (This is
with ran(EF) x ran(EF).)  Here uniform squares pack but uniform
circles don't pack; packing the plane with circles has that there are
their square-co-sited centers or their offset centers, then circles in
each of the remaining area, here greedily as the area is reduced and
divided, deterministically, with the next.

Then, that is a question:  how is the density of these points uniform
in the space?  This is where, if there is only one revolution about
the axis, the circle about 1.0 and the one about 2.0 have perimeters
that double; if a random selection was first either 1.0 or 2.0 and then
one of the points on the circle, then any point on the 2.0 circle has only
half the probability of being selected, as a spot on the 1.0 circle.
Then, is the 2.0 circle simply to be weighted twice as much as the 1.0
circle in selection, so that, however far along in all the
dimensions it is, being at the second integer point simply
defines where the probability is twice as likely as at the first
integer point 1.0?
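(One way to formalize the weighting being asked about: the circumference is
C(r) = 2 pi r, so for points spread uniformly over the disc of radius R the
radial density is f(r) = 2 r / R^2; the ring at radius 2.0 carries exactly
twice the weight of the ring at radius 1.0, which is the factor the selection
has to apply.)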

Here, there is an ordering of the curve, that, for any two points from
R bar, it is known then their ordering in R dots. For example 1.0 and
2.0 have the same ordering. Yet, to build R^2, there is a
consideration that it would be reasonable to build R^2 = R x R, or
R[0,oo) = R+ squared (here including zero in R+).  Then the Argand
and roots of unity aren't built until that is built into R^2.

For something like working out that approximations exhaust, consider
the circle.  There is, for each n >= 2, a regular n-gon
with the angle and endpoint formulas.  For example the equilateral
triangle, square, or hexagon are regular polygons, here where the square
and hexagon also happen to be the only regular n-gons that pack the plane,
though triangles pack the plane under rotation or flip, making parallelogons
with 60 degree acute angles with the side length equal to the base
length (those have a name).  So, starting with 100 then 1000 then
million-gons out through n-gons, that stellation when connected has
less and less area between the circles that circumscribe and, what is
that, inscribe it, drawn inside and outside the polygon.  Eventually,
for a given n, it is beyond the measurement
distance, which may have upper and lower bounds, and there is no
distinction of the n-gon and circle.

A method of integration can be developed for functions that describe
polygons and are circumscribed. There is where the area of the circle
is pi r^2. If the function of the area between the sides of the
polygon and the circle goes to zero as the number of points in the
polygon increases, then the area of the polygon goes to pi r^2. Here
that is a surface-filling integral. Here something like molecules in
a balloon describe that, here with pressure laws.
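(A concrete form of that exhaustion: the regular n-gon inscribed in a circle
of radius r has area (n/2) r^2 sin(2 pi / n), and the circumscribed one has
area n r^2 tan(pi / n); both tend to pi r^2 as n grows, so the area trapped
between the polygons and the circle goes to zero, which is the surface-filling
integral being described.)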

Then, for the rationals, they already offer the properties of being a
field, and an ordered field.  Basically in defining division, the
closure of the integers to division by the integers is the rationals,
without zero, as dividing by zero results in a quantity that in
memoryless systems of computation is undefined. So, except for zero,
then, there is this ordered field. Here with Q+ being the positive
rationals and Q++ being the products N/N or Z+/Z+.

So, now here what happens when the rationals are stellated where they
define that there are not infinitely many points in the circle about
the origin but only one then two then three or four and so on. As
that number of points increases without bound, here there is whether
that each successive circle also goes as being the same to the
previous or more, in that: it is expected to go 1, 2, 3, .... This
would be where, each circle starts the same distance, along the
translated line, from its previous as each of its previous is of it.
For any, say, powers of two N-gon, rotated to maintain offset, that
adds to a value that averages over the sweep, an example of fairly
integrating the spiral space-filling curve, to get results on the
plane.

Then, how does that define pi?  Here, the notion is to establish pi as
a structural feature, numerically in R dots, and establish its
feature.

Regards,

Ross Finlayson

Ross A. Finlayson

Sep 22, 2012, 1:44:02 PM
On Sep 22, 8:54 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
> http://groups.google.com/group/sci.math/browse_thread/thread/3dcb42ca...
Here, you can see there are regular notions then of defining the
continuum, from what we would know about dimensions. This is where,
it is as simple for the line to go from the beginning through the
point and the points to a point and through, as for to go from the
beginning through the point and the points to a point and through,
where the point will still be as close to the origin as it can be,
going in a circle first to fill the circle and then to find the line,
in the circle, as a ray.

Then, here it is the same notion as you address on how to address N
many points on a line. Here for example a 2-D addressing scheme might
have a virtual row of a given width and a column, for example a fixed
array of given width and height, here conveniently with as many rows
and columns of the matrix of these things, which is here built. Then
enumeration is familiar as scanlines. To establish distance and
rotation in maintaining that the next point is as close to the origin
as it can be and as close to the previous point as it can be, and here
different than each, it would go back and forth in diagonals from the
origin at (0,0), here not the main diagonal we know as each (n,n) or
(0,0) <-> (n,n) but the other diagonal that is

(0,0)
(1,0) <-> (0,1)
(2,0) <-> (0,2)
(3,0) <-> (0,3)

or

(n,0) <-> (0,n),

yet while (n,n) is easily defined as a row, column, or the main
diagonal, this opposite diagonal then has its computation of points as
modeled by a line traversing the elements, while the main diagonal
modeling to either side is modeled by translates or shifts.

For example one to the left of the main diagonal is (n+1,n), or here
in the notation the order can be fixed for the shift of the form of
the coordinates of the item in the addressing scheme, one to the right
is (n,n-1), and for the other to start and reflect the other, it is
(n,-n), where the input to arithmetic will always be at least as large as n
instead of averaging half of n.  Then, for the elements to be in their
natural order on that line, the opposite diagonal instead of the main
diagonal in the addressing scheme, is whether the addressing is more
simple for the main or opposite diagonal.  Here it is simpler
for the square, compared to the general rectangular or block
matrix.

This is where, to traverse to (x,y) in a space filling scheme,
generally in scanlines (vis-a-vis, spiral) is with that x * MAX(x) + y
is computed, this value represents (x,y). Here in the spiral, it is

while (diagonal contains component less than (x,y)) {
    sum length of diagonal
    next diagonal
}
from that, sum the remaining; that is the value for (x,y)

Here, "contains component less than" is that (x,y) either has an x
component greater than n, a y component greater than n. Then, there
is arithmetically a different cost for computing the offset (into here
the sequence of values of the coordinates) from the coordinates, for
the scanlines case and the spiral case, here in the discrete.
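A minimal sketch of that computation in C++, with the diagonals traversed back
and forth as in the (n,0) <-> (0,n) listing above (the boustrophedon choice and
the names are illustrative assumptions, not fixed by the post):

#include <cstdint>

// Offset of (x, y) under anti-diagonal ("spiral") addressing of the quadrant.
uint64_t diagonalOffset(uint64_t x, uint64_t y) {
    uint64_t sum = 0;
    // Sum the lengths of every diagonal strictly before the one holding (x, y).
    for (uint64_t d = 0; d < x + y; ++d)
        sum += d + 1;               // diagonal d holds d + 1 lattice points
    // Within diagonal x + y, alternate the direction of travel.
    uint64_t within = ((x + y) % 2 == 0) ? y : x;
    return sum + within;
}

// Scanline offset for comparison: row-major layout of a fixed-width space.
uint64_t scanlineOffset(uint64_t x, uint64_t y, uint64_t width) {
    return x * width + y;
}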

Then, working up what those concretely represent, that storing an area
in a matrix costs less at some level in the spiral than the scanlines,
it is for the variable bounds, those that average having bounds less
than the overall bounds, where in the scanlines case, on average half
the bounds are the upper bounds.

Now, these are in the discrete, working up for how the simple
properties would define features in the continuous, here these points
of the geometry, then what is let is to be let yet what can be kept:
is, from the discrete to the continuous, and vice versa.

Regards,

Ross Finlayson

Ross A. Finlayson

Sep 22, 2012, 2:52:48 PM
On Sep 22, 10:44 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
Then, starting with a 2-D addressing scheme, there can be various
compositions of this spiral and scanline addressing, in the discrete.
Here, the default addressing scheme we are familiar with from
programming is the scanline addressing.  The scanline or spiral is
readily composed in the traversals of the virtual coordinate space, or
here default layouts for the ready computation, and retrieval, of
coordinate access.

For example, the usual scanline case is to scan each line, here row,
or scanrow, to scan lines that are of a fixed size in the forward
default traversal or layout of the elements.  This could be "spiral"
by returning from the end of each line to the beginning of the
next (for each), with the non-existent line before each returning to
the origin. Then it is a space-filling curve, instead of that it is a
composition of space-filling curves, where curves are continuous or
here contiguous. The scanlines each fill the space of their row, then
generally it is that their beginnings each also fill the space here of
the columns (row-major). Then, the function defining the possible
coordinate values is space filling in the space of the coordinates.
Then, where the composition is in scan lines, there is maintenance of
the entire square in the possible space of the inputs. Yet, in
addressing components from the origin, on average the upper bound of
arithmetic input is half of the upper bound of the entire space, while
in the scanline case, the arithmetic input is on average half the
upper bound of the entire space. (Here, that there's a difference
between "on average" that here varies on the averaging here generally
of the area.) So, given that relevant values from the origin might
average, for example, close to the origin, those two data sets of
clustered and uniform have plainly different reasonable description of
different features, in the asymptotic, here for the characteristics of
fundamental algorithms, here for for the cases at each bound for
scanning back to the beginning of the previous component , or
spiralling without starting a new component.

Obviously enough, these addressings can even be combined within the
space, here constructed for maze-finding. Yet, then their maintenance
itself is at cost in the arbitrarily defined schemes that here define
paths in the integers that cover coordinate spaces. Here that there
is simplicity in the rules that define the paths, in the range and
bounds of the space, there is simply the least amount of maintenance
in a general purpose algorithm, for its support of the entire space
here basically where there will be an input for each item in the
space. As an example of that different containers for clustered and
uniform data have different runtime characteristics (over the space of
all data) has then that addressing schemes, are fundamental in them as
the description of the simplest rule to define the generation of their
components.

Here, for example, a line fills space: it fills the space of its
points. Then the consideration of the scanlines case for the reals
has that the second component after drawing the first edge of the unit
square to begin, would begin in the scanlines case at the next
component after the origin in y, or here that it would spiral back to
the origin here as the previous component. Each new point would
extend the component, and also extend the boundary about an earlier
point in the component, being the or among the nearest points so
established.  Basically the scanline rule doesn't always have the next
in distance along the line and next in distance from the origin, where
the spiral case does. Then, where that's a feature of the
construction, and it generally follows first principles of the items
or individua of the continuum itself, that's a feature of all
arithmetic inputs.

Then, where the continuum is of the real numbers, the structural forms
of among the simplest ways to define the continuum, establish from how
objects differ, that they form structure.

Then, to get to the complete ordered field (here the reals vis-a-vis,
say, C, Argand's complex plane) from the spiral space-filling curve,
it is established the uniform in density that is established about the
origin. Then, (all) the algorithms change to so reflect that.
Uniform in density, the complete ordered field as constructed for
example over the denominators in Q or enumerating Q here with some
obvious rules bounding the distance of the construction from the
origin, but here over N/N with that for a notation for an iteration of
the construct, that the constructs establish here coordinates, then
the various fundamental arithmetic generators, that define sufficient
structures here throughout ordinals to support operations on the
space, would be where to look for then why having this definition of
the real numbers in the spiral space-filling curve, would well
complement the definition as the complete ordered field: as real
numbers, of the continuum, of real numbers.

Regards,

Ross Finlayson

Tim Golden BandTech.com

Sep 23, 2012, 1:37:36 AM

On Sep 22, 11:54 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
> http://groups.google.com/group/sci.math/browse_thread/thread/3dcb42ca...
Alright Ross. I've read your other posts here briefly. It sounds like
you are really into this. The code to do such is not that grotesque.
Consider that there does exist a unital value within this scheme, and
that the dimensions can be treated as a decomposition... Well here I
should just post some code:


// Minimal includes for this fragment; Cartesian, Poly, and Projection are
// Tim's own types defined elsewhere in his codebase, and pi is assumed to be
// a double constant defined there as well.
#include <cmath>
#include <vector>
using std::vector;

class UnitShell
{
    // traces points on a shell of unity radius.
    // This Sphere shell is a series of latitudinal rings.
public:
    int dim;               // number of dimensions
    vector <double> a;     // angles in rotations ( 0 to 1 )
    vector <double> da;    // delta angles in rotations
    int color;
    int count;             // zero if initialized
    Cartesian cpos;
    Cartesian clast;
    Poly pos;
    Poly last;
    //Projection axisRotation;  // rotate the sphere by this
    //Cart axisRef;             // orientation for axisRotation
    UnitShell( int dimensions, double latitudePoints, double harmonic );
    UnitShell( int dim );
    ~UnitShell();
    virtual int Next();
    virtual void UpdatePos();
    virtual int Color();
    //virtual double AngleLimit( int i );
    //void SetAxis( const nSigned & axis );
};

void UnitShell::UpdatePos()
{
    clast = cpos;
    last = pos;
    // Start every coordinate at 1, then build it up as a product of
    // sines and cosines of the stored angles.
    for( int i = 0; i < dim; i++ )
        cpos[i] = 1;
    for( int i = 0; i < dim; i++ )
    {
        for( int j = i; j < dim; j++ )
        {
            if( j == i )
                cpos[j] *= cos( 2 * pi * a[i] );
            if( j > i )
                cpos[j] *= sin( 2 * pi * a[i] );
        }
    }
    // axisRotation.Project( cShell, cpos );
    Color();
    pos = cpos;
}
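Reading off what the nested loop computes: coordinate j comes out as
cpos[j] = cos(2 pi a[j]) * product over i < j of sin(2 pi a[i]), the familiar
hyperspherical pattern of one cosine times a prefix of sines. The standard
parameterization of the unit (d-1)-sphere uses d-1 angles, with the last
coordinate carrying only sine factors, so presumably the final angle here is
what the latitudinal-ring sweep steps through.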

This is the shell and is fairly skeletal. Your methods can be firmed
up and rewarding. Still, in high dimension the data density is
frustrating. This code does work in high dimension, though I am open
to falsification. In low dimension I suspect that the UnitShell is the
true continuous form. This would mean that some of what we look at is
reflections from the past. At least I am willing to consider this
perspective. When we view the universe from the Hubble are we looking
at the universe in uniquity, or is it possible that some of these
images have travelled harmonically to us? Well, this is one way of
livening up the conversation, and as soon as you mathematical purists
come into the physical fold then maybe we'll get some realistic
theories. We need pure theory and the physicists have already
forgone their ideals.

- Tim

Tim Golden BandTech.com

Sep 23, 2012, 2:14:16 AM
On Sep 22, 11:54 am, "Ross A. Finlayson" <ross.finlay...@gmail.com>
> http://groups.google.com/group/sci.math/browse_thread/thread/3dcb42ca...
Well, signons pack n-dimensional space.
http://bandtech.com/PolySigned/Lattice/Lattice.html
In the case of 2D this is the hexagon that you mention; not the
triangle, and in this distinction comes the calculus sense of things.
Yes , that hexagon is composed of six triangles, but they (the
triangles) are not regular from an iterated standpoint, whereas the
hexagons are. This is general dimensional and turns into rhombic
dodecahedra in 3D and so forth into nD. These are what I call the
signa, which is the plural of signon. They do orderly pack space like
the familiar cubic, though they do have their own quirks. If you
iterate the space fully as a series of for loops you will cover the
same locations multiple times since you could hit
( 2, 2, 2 )
and
( 1, 1, 1 )
which are actually the same location within polysign. These happen to
be the origin
( 0, 0, 0 )
within P3 and likwise
( 1, 2, 3 )
is the same location as
( 0, 1, 2 )
within polysign. This is due to the ultimate balancing act
( 1, 1, 1 ) = 0
which even the real numbers maintain since
- 1 + 1 = 0
which is just
( 1, 1, ) = 0.
Still, this redundancy can be remedied through a careful usage of for
loops which do not look like the cubic version.
In P5 we are already up to a 4D medium and physically we regard space
as just a 3D medium. In the UnitShell medium that I am supporting at
the moment we should need just 4D, so P5 could suffice. So the work is
cut out for you in general dimension, but not so for physical systems,
or at least that is the observer's perspective.
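A minimal sketch of that identification, under the reading that the canonical
representative subtracts the smallest component (the function name is an
illustrative assumption, not Tim's API):

#include <algorithm>
#include <vector>

// Reduce a polysign tuple using (1,1,...,1) = 0: subtract the minimum
// component, so (2,2,2) -> (0,0,0) and (1,2,3) -> (0,1,2).
std::vector<double> reducePolysign(std::vector<double> v) {
    if (v.empty()) return v;
    double m = *std::min_element(v.begin(), v.end());
    for (double &x : v) x -= m;
    return v;
}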

This doesn't really relate to the spiral analysis directly but since
you've triple posted I thought I'd take another moment to double
post... No really I know that I can be perceived as offensive, but you
maintain an even temperament. Anyway content is content and rhetoric
is rhetoric.

- Tim

Ross A. Finlayson

Sep 23, 2012, 3:21:15 PM
On Sep 22, 11:14 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
> > polygons and are circumscribed.  There is where the area of...


Ah that said, then in for example the coordinate space, sometimes here
we see there is redundant information for example that there are two
formulas to arrive at a coordinate in signons or N+1-D hedra with
sides of squares or hexagons that pack the space, then for the 2D
(square) case and the hexagonal case, of the side, where cubes are
examples of signons or signa, those are as clock arithmetic. Here a
point is only in one signa, just like it is only in its own
neighborhood, or just like it has an integer value in R^2 those being
its coordinates (here of lines through the spaces).

Then, for the polysigned number, it could be added to four, more, any
number of coordinates (or bases in the coordinate space) with the plan
divided into any number of segments about the origin, that many signs
in the coordinate or dimensions in the coordinate space (here that we
can write in 2-D). I am thinking the n-signed number is this way.
Then, working up they are n-k or k-n-signa or n-signa ^ k.

Here for example rays from the origin could have any number of
coordinates to represent the plane. Then, the relevance of the 2-
signed and 3-signed coordinates is that given another origin of a 2-
or 3-signed number they pack the plane, on the line through the
origins. A 4-signed coordinate is too convex, as is any n-signed (or
here m-signed) for n > 4. As regular polygons those pack the plane
from lattice symmetries.

Then, to for example, fill squares or hexagons with regular
stellations from the center, then those pack the plane, or from the
circle that fills the square. Yet, to sum their area, that they
represent in 2-D, it is for example, for each of the objects with the
value at that coordinate. Then it's simple or convenient to attribute
the space, for example an integer lattice space of fixed bounds. Here
each of the integer points has a polysigned attribute. Here we see
that each 3-signed number writes out, for example, in the
hexagonal placement, with up to bounds stitching, the most convenient
notation for going from the center of one hexagon to the center of the
next, incrementing the distance, for example, or working back to
planar coordinates. Then, working back to the average of one and
the square's root that connects two side lengths and fits in the box
of the side length, the average of one and the square root of two,
here I have not defined average yet it is about that, as the number of
partitions of the circle that fit in the regular polygon or here n-gon
increases, the average distance between the radius and edge, where the
circle doesn't always meet the polygon at the endpoints. For example
using a multiple of two, a 2n-signed number, the radius meets at the
midpoint of the edges. Then, the average is under rotations. For
example dividing the circle into any prime number greater than the
number of sides or here signs, there is always an edge with more than
one radius through it. Then it is the average through those. Here
that is working up from arithmetic progression the distribution of
segments and how many points of the polysigned attribute stellate it,
the contribution of that to distance, in computing area.

The distribution through segments basically indicates the directions
that edges face, that they could pack. For example, a randomly facing
five-star (of rays) fits to face in a square or 2-gon, in the plane,
the 2-2-hedron, and 2^2 = 4. Then, the five star has one of the sides
with two points, so, it might as well be 1/4 of the time the edge
facing with the next square, to compute the area, as to where to put
the center on computing the distance, where otherwise it would be the
corner or point to fill the space for a square, where the shapes fill
to pack or placed to the corner with squares. Here to work up planar
area in the polysigned, where the computation of the area is of the
polysigned components, then going from the 2-signed to the 3-signed,
that has in the algorithms that the attributes of the element of the
area, define its planar area. Here, whether the edges and faces
match, or the corners match, each contributes to defining
statistically, means of general integration of polygonal areas.

Then, for integer lattices, we have some very regular methods for
defining planar area in integer coordinate terms. The area is the
simple product of the integers. We know the area of the circle is pi
r^2: pi, approximately 3.14 or 3.1415926, times the square of the
radius. Then, the radius of each of the rays, those go to that ratio, or
the 4 or 2x2 square, and 3, reducing from 4 - pi. Twice the square
would be where the circle circumscribes the square, where its diagonal
length is 2, not the square root of two, where its side length is the
square root of two. Then that's (2 + root pi)(2 - root pi).
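
A quick numeric check of those quantities (nothing assumed beyond the
unit circle sitting in its 2x2 bounding square):

import math

r = 1.0                      # unit circle
box = (2 * r) ** 2           # the 2x2 square, area 4
disc = math.pi * r ** 2      # area pi
left_over = box - disc       # 4 - pi, about 0.858

# 4 - pi factors as (2 + root pi)(2 - root pi), as stated above.
factored = (2 + math.sqrt(math.pi)) * (2 - math.sqrt(math.pi))
assert abs(left_over - factored) < 1e-12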

Then, here the notion is to write generating structures of the area,
for each of the polygons inside and outside the circle, including
matching edges, corners, and faces. For example, here the cases for
2, 3, and 5 build tools for each of the prime numbers; they have
convenient forms, for example as expansions, in their base.

For example, to represent two and the multiples of two, it is as
simple to store numbers
.1_2
.2_3
instead of
.1_2
.10101010101010.._2 .
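
One way to read that in code is to keep the exact ratio (or the
digit-and-base pair) rather than a cut-off repeating binary expansion;
a sketch using Python's Fraction, purely as an illustration:

from fractions import Fraction

half = Fraction(1, 2)         # .1_2, exact
two_thirds = Fraction(2, 3)   # .2_3, exact

# Any finite cutoff of .101010..._2 only approximates 2/3 from below,
# while the stored ratio stays exact.
cutoff = sum(Fraction(1, 2 ** k) for k in (1, 3, 5, 7, 9, 11, 13))
assert cutoff < two_thirds
assert half + two_thirds == Fraction(7, 6)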

Then, here representing the exact paths as rays from the origin,
it works through representations that, maintaining that systematically,
then maintain bounds and area.

Here, then the coordinates you are using for polysign, or location,
that is what I would understand compared to working up here these
other quantities. Here, there is much about the polysigned that I
read into the definition from representing the coordinate space in the
location, here in the polysigned it is also the winding rule or
reversibility.
http://bandtech.com/PolySigned/PolySigned.html

Here then I am looking more at the properties of the poly-signed or here n-
signed numbers, and what rules they would have, for example, in product spaces?

http://bandtech.com/PolySigned/Deformation/DeformationUnitSphereP4.html
"The four-signed numbers (P4) fail to conserve magnitude when their
product is taken."
Here, then I hope you would further define the useful properties of P3
for that.

http://bandtech.com/PolySigned/FourSigned.html

Heh, you can be perceived as offensive? I had no idea.


Da, Uff Da. I don't know what that means in Swedish, but I like it.

Regards,

Ross Finlayson

Ross A. Finlayson

unread,
Sep 24, 2012, 3:08:34 PM9/24/12
to

Here, then, a consideration is how to bring this to the applied and
directly.

One way would be to define an approximative method, here for example
working toward defining how various simple iteration methods over the
square array see particular operations in the array as naturally
constant-cost (for example in drawing lines). Then, where in the system those
are modeled as constant, the line operation, then the array is of
lines besides points. Here, that is a convenient way, organizing each
of the rows, columns, and the diagonal of a square array, then here
the diagonals through, for example the reverse, and THEN the rotation
and combination of each line through.
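
A minimal sketch of that organization, treating a square array as rows,
columns, and the two diagonals so that a line is one unit of access;
the decomposition is illustrative, not a fixed data structure:

# Decompose a square array into its "lines": rows, columns, the main
# diagonal, and the reverse diagonal.
def lines_of(square):
    n = len(square)
    rows = [list(r) for r in square]
    cols = [[square[i][j] for i in range(n)] for j in range(n)]
    diag = [square[i][i] for i in range(n)]
    anti = [square[i][n - 1 - i] for i in range(n)]
    return rows + cols + [diag, anti]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
assert lines_of(grid)[-2] == [1, 5, 9]   # the diagonal
assert lines_of(grid)[-1] == [3, 5, 7]   # the reverse diagonal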

For example, for a table column, in the editor, adjust the column and
have it adjust the column value for that. That's application: re-
using the column value here could simply be saved.

Then, defining computer data structures for the program that, for
example, only work as if they are points and lines in space, here for
example as constant cost and constantly partitioned lines as accessed,
then to their constant enumeration and maintaining that cost, these
ideas about the mathematical objects in the theory, have programs to
show their behavior here in the digital and discrete.

The main point from that is designing that the combinatorial
enumeration, which has a combinatorial cost in the computation, is
modeled for the theory that it has instead the much lower or constant
access of the geometry to working resources. For bounded resources,
this can be accomplished as their products are complete, here the
products can be enumerated then just read out.

Then, with the idea that establishing that some expectation exists for
then as the resources are unbounded, then showing in the real world
experiments of the expectation, working toward defining the points and
line and points of space, of the continuum, here has the reasonable
expectation that, as far as whether two points are more clustered or
uniform, for example, an avenue for application or novel application is
in the simple acknowledgment that courtesy the construction, the
approximations are maintained for the theory: for the theory.

Here, there is general interest in deriving an approximative form for
pi, in this otherwise system of rational construction. For the unit
circle in the 2x2 box, then the hexagon, here I'm trying to understand
what use there is of the features of the one and the other. Basically
the idea here is to approximate the perimeter of a circle from the
regular polygons. Here, it is in a way where, for example, besides
that the square has diagonal, not side, length of two, in from the
quadrilateral to the 5-lateral, 6-lateral (pentagon,
hexagon)..., each centered at the origin and with unit length to the
corner, that average to the unit circle or has as the limit, also it
has side length 2n when the unit circle is in it. Then, it's one
thing to work up the circle between polygons (that have the sum of
their side lengths or perimeter go to 2pi), it's another to work up
each of the polygons between circles. As well, there's a general
notion to, for any two (regular) polygons with the same center, work
out the distance between them and area, under rotation. It's another
to work up each of the polygons between circles, or here squares or
hexagons or as symmetric about the origin, squaring the circle.
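
A small sketch of working up the circle between polygons: a regular
n-gon inscribed in the unit circle has perimeter 2n sin(pi/n), a
circumscribed one has perimeter 2n tan(pi/n), and the two bracket 2 pi
from below and above:

import math

def inscribed_perimeter(n):
    return 2 * n * math.sin(math.pi / n)

def circumscribed_perimeter(n):
    return 2 * n * math.tan(math.pi / n)

for n in (4, 6, 12, 96):
    low = inscribed_perimeter(n)
    high = circumscribed_perimeter(n)
    assert low < 2 * math.pi < high
# n = 96 is the classical bracket, roughly 6.2821 < 2 pi < 6.2855.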

Then it's generally known that squaring the circle is impossible (in
finite constructions). Yet, it's simple enough to define the
functions that establish the expectations of the space, from here
finite (or bounded) constructions of a sort, that are themselves
simply defined recursively in those terms. Here it works to symmetry
of the space, and the mutual properties that the continuum establishes
a sequence, and the complete ordered field. Here, with the natural
numbers as founding the continuum, and thus that many establishing a
continuum between zero and one, then it not being its own foundation
the continuum of continuums to define the (here the product space
besides the value space, N x N for N * N) perspective law, has that
defining expectations, as well as that the complete ordered field,
with its perspective law, defines expectations. Now it's funny because
here the simple uniform in expectations is maintained simply by EF in
its properties in real function. It is how to define expectations
from the rational approximations in the complete ordered field, for
continuous distributions, (here for example in computing area) where
here these constructive expectations found statistical expectations.

So, for constants like e and pi, or the Catalan constant or Euler's constant
and so on, these have quite varied sources and are these fundamental
ratios of sorts that arise: functions. The idea here is to define
these constructions of geometry that there are established simple
restatements of classical proofs non-classically. Then, that is not
always direct, sometimes the inductive case establishes a different
conclusion, than the deduction and inference from otherwise
established expectations in the space. Induction still holds, the
case changes. Then, from that can be inferred other conditions that
would see that, in establishing here the expectations of the micro and
macro, but generally the expectations of conflicting odds, that
generally there's an exact balance between them.

Then, what we know as fundamental in physics as laws of conservation
and symmetry, these are so in mathematics. Simply in defining work
mathematically the measurement of an algorithm is its cost. Defining
work mathematically, it is where there are simple forces in the
mathematics as the physics, here, generally inductive forces. Then
the work is to put against that the distance. Here for example in
generally reducing distance, the work is to put the edges together to
see if they face. Then, work is minimized. To accomplish that in
algorithms, the relevant work is to work up the cases where they
face. Then, that work is minimized, accomplishes a relevant concern
of here how they fill, and pack, here the structures that fill and
pack the plane. Here, the difference between fill and pack, is as to
where the shapes are completely filling the plane, that is a packing
of the plane of those shapes, that each shape is filled. Here there
is a random collection of shapes, say bounded by regular polygons.
They can align and pack in any of the symmetries of the alignment and
packing of cubes or hexagons or other shapes that here pack the
space. For example the square has two major and two minor symmetries
through it, the hexagon three or six. Then, working out how that is
for all the other shapes, then for some given expectation of what a
random shape is, off of its simplest numeric generators, is a route to
build a calculus, of those things, for alternative methods, for
application.

Then, the idea is to work out which inductive cases are the most
natural, or how they combine forms: here we see that these notions of
symmetry principles to found deduction and deductive inference,
complement induction and forward inference. Then, if there is a force
in information in nature it is entropy, how forms combine. Here the
symmetry in the unit line segment is that both from one to zero and
from zero to one there is the symmetry, so the form of "from zero"
goes to one as does from it. Then, instead of getting more to the
applied, this is instead general.

Here it is to apply to the containers of the items or the array, the
very algorithms on it that would have its effects there being the same
as the combined, maintained effects of there being all items of the
array (establishing existence criteria). Here, from that each of the
shapes, for a given bounded area of space, has a given amount of the
bounds, in evolving from the point to the bounds. This evolution,
with the geometric mutation in the large and small, is that while the
point is round- or square-like that the bounds might be spherical or
the box. Then, for shapes, as their boundaries are most conveniently
and directly represented in the minimal space terms possible as they
reflect less work, random assemblages of shapes have statistical
properties established on covering, here of the analog areas from the
discrete terms.

Then, for "when a pentagon grows to fill a square, what is the
probability of it having a side facing any side" where here it was
established that for any star in the center that at least one side of
the square would have two rays of the 5-star for a side between them,
has as well the combinatorial description, of a directional problem,
the distribution is built around the origin. Then, to build for each
of the regular polygons is one thing, here building that function for
the square and hexagon in the plane, the function uniform over the
disc of each of the arcs facing a given side and thus establishing
area disjoint, should see more trivial solutions for cases like packing
squares into integer lattices than for coverings of circles and
squares (that, these would establish expectations for partitions by
the complete ordered field in applications for uniformity, and
clustering, and for partitions by the integral continuum). (Vis-a-vis
the integrital and integratal.)
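
A quick illustrative check of the five-star claim above: five rays from
the center of a square must send at least two rays toward the same
side, since there are only four sides. The sampling below assumes a
regular star at a random rotation:

import math, random

# Index the side of the square a ray from the center points toward:
# 0 right, 1 top, 2 left, 3 bottom.
def side_faced(angle):
    return int(((angle + math.pi / 4) % (2 * math.pi)) // (math.pi / 2))

random.seed(1)
for _ in range(10000):
    base = random.uniform(0, 2 * math.pi)
    rays = [(base + 2 * math.pi * k / 5) % (2 * math.pi) for k in range(5)]
    hits = [0, 0, 0, 0]
    for a in rays:
        hits[side_faced(a)] += 1
    # pigeonhole: five rays into four sides
    assert max(hits) >= 2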

Here is a way to consider it: dropping circles or squares from an
arbitrary distance and defining what their distributions would be.
For example the circles might expect to form a line and be shallow
while the squares were deep and centralized. Dropping a square might
be that, dropping it to the line, here for packing the square in the
integer space, is to the square at the origin. Then, the line through
x = -y or the reverse diagonal, is the boundary where the dropped or
moving square would stop, so imagine dropping squares that are turned
diagonally, that fall to a line that is really all the angles exposed
in the packing, here for squares or hexagons. Then, when they land,
though they don't bounce, the second one lands directly on the first
and slides to the left or right. After starting that first pile, it
is wedge-shaped and centralized. Yet, if it was into circles then the
second square could knock the first one over leaving them both flat.
Here you can see that then in defining all those cases, as the half of
the square is built in packing the integer lattice with squares
diagonally, has that circles build the line and squares build area.
Then, in partitions, the squares build the line and the circles build
area. Then, for the areas of integration, they are likely squares as
circles.

Then, the circle and square are the same on the line of their own,
they pack on the first one as the pile, compared to the pyramid which
would be built from the bottom up. On the flat line, the squares
would stack up and the circles would fall to a line. The edge at the
top of the circle has no balance, points have no balance. Then, about
building the centralized distribution, it builds the normal
distribution. But, why not the half-square or diamond distribution?
Here, there is some simple tendency that, for example, alternating
drops go right and left, barely right or left of the balance point,
then the diamond would result, compared to then, landing on the first
diamond, ah, here that case is then it would still have the partial
component from the diagonal, here in the original +- x. Then,
dropping diamond would see that to build another item to the left on
the line, it would have built what it has on the right of the line,
for the diamond to fall on a diamond and go either left or right,
going in the direction the faces match, and not a normal
distribution. Then, a square falling might have the first one land,
and, if it is the first of the next line also, the next diamond might
slide off and skip or here to half a skip. Here, this generally is
toward establishing what happens with given shapes, like Tetris, where
the shapes are square or round and they fall on the corners. Then, it
changes whether they land on a line, or on a line or fracture through the
greater shape reflected in the line's shape (here for example the edge
of a larger square), where the fracture would be to the lesser shape,
where the shapes combine with the other shapes, the boundary is all
edges, points, and corners. For squares, it is edges, points, and
corners, for circles edges or faces and corners (between circles).
Then, the simple deterministic play in the game is defined as
constantly dropping the next piece.
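
A toy version of constantly dropping the next piece, with diamonds
only: each piece is dropped above the origin and slides strictly
downhill, left or right at random, until no neighbour column is lower.
This model is an illustration of the description, under those
assumptions, not a fixed rule from the text:

import random
from collections import defaultdict

def drop_diamonds(count, seed=0):
    rng = random.Random(seed)
    height = defaultdict(int)           # column -> pile height
    for _ in range(count):
        x = 0
        while True:
            lower = [d for d in (-1, 1) if height[x + d] < height[x]]
            if not lower:
                break
            x += rng.choice(lower)      # slide downhill at random
        height[x] += 1
    return height

pile = drop_diamonds(2000)
assert sum(pile.values()) == 2000
assert pile[0] == max(pile.values())    # the pile stays centered on the drop column

Swapping in a different rule for circles, say rolling past matched
faces, would then change the profile, which is the comparison the text
is after.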

Then, yes that does seem quite direct. Building the statistical
expectations into the model then refers to here that using geometry,
statistics can be built. This is a useful feature then for statistics
in area and cross-correlating across population means, working up
effects in area, and joint centralized and distributed collection of
bounds.

Here, the idea is to extend this game to where circles and diamonds
have different properties as they fill the pack grating. The circles
could gain extra or go beyond the existing stack, while the diamonds on
the stack drop and match faces and stop, the circle might, given how
many it fell on the stack, go that many beyond the stack. Then, when
it was reached, in regular diamond dropping, the diamonds would still
fill the half diamond. It would matter the number of circles for the
height of the diamond for how it builds area beyond the half diamond
of the diamonds themselves onto the diamond grating. Then, enough
circles could move the center of the pile. For example a 2-high pile
could build; the circle lands on it, slides down, and goes over one
spot. Then, for it to get more than one spot over, diamonds would
have built the pile that high in the center already to get there,
while the shape dropped on the pile can go either left or right where
there is a minima and the grating is full of minima, here of the
height of the pile, until, circles built up a pile the same height.
Then, it would be when the circle was dropped, right after the diamond
dropped that filled the half-diamond. (Here the goal is to build the
normal curve or semi-circle.) That could then follow, with enough
circles to fill the original distance that the circle went. Then,
alternating diamonds and circles could see as long as circles fell
after completing each half-diamond until one went that way, would
further build the second layer of the line in circles instead of
diamonds in the half-plane grating.

Basically the difference between circles and diamonds then seems to be
that the circles land on an edge or face while the diamonds land on
the corner. Then, going left or right, both the diamond and circle
match faces to the corner, then a circle matches a face for each face
it passed (to the diamond or circle in what is there). Here while
dropping the shapes are always between shapes, the original drop is on
top of the first shape, it drops randomly or in an order or under a
condition to the left or right. The diamond always stops next to a
shape and above and between two shapes, here the circle might be
ascribed to "roll" or that circle-circle point contact isn't enough
for face contact for where the grating alternates, that the circle
rolls, here from where it was placed in the middle of the diamond.

Then, the question is, how is the boundary grating grated or the flat
line. Here basically the idea is to integrate these shapes. Then, the
squares fall not to the line, but to the corner. As they fall filling
the pyramid, that naturally fills the well. Yet, it doesn't fill the
line it builds the pile. With circles then, those could fill as they
would fill each diagonal line from the corner. The pile would always
be higher than two, but the pile, besides being a half of the semi-
square would again have that varying the ratios or conditions of
circles and squares changes how fill would start, to that the circles
could start fill back up the side, for the circle that matched faces
to the edge of the pile, where the diamond is always centered on the
line through the corner. Then, the other piles it represents or here
the boundaries of the space filled with these diamonds and circles, if
they generally had their own drop on each drop of a shape to the
origin, then that would describe how the shapes integrate. Basically
into pels and voxels, here for circles and diamonds or circles and
hexagons, the point is to work out what happens in the discrete that
defines the continuous.
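
A sketch of the discrete defining the continuous in the simplest case:
count the unit pels whose centers fall inside a circle of radius r and
compare with pi r^2. The tolerance in the check is deliberately
generous; the actual discrepancy is much smaller:

import math

def pel_count(r):
    count = 0
    rng = range(-int(r) - 1, int(r) + 2)
    for i in rng:
        for j in rng:
            if (i + 0.5) ** 2 + (j + 0.5) ** 2 <= r * r:
                count += 1
    return count

for r in (5, 20, 80):
    assert abs(pel_count(r) - math.pi * r * r) < 8 * r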

Then, when the shapes are any polygon, then the idea is to work out,
from enumerating families of those, regular polygons that fill and
pack the plane, how all the polygons fill the plane and what is left.

Then, for establishing surface effects like friction, tension,
catching, rest, and otherwise establishing continuous effects from
discrete principles, these are defined on the mathematical abstract
points here as squares, circles, and hexagons, for the computation of
the accumulation of point effects.

Regards,

Ross Finlayson

Virgil

unread,
Sep 24, 2012, 6:05:54 PM9/24/12
to
In article
<e188e8ad-bf87-47b3...@k13g2000pbq.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:

> Here, then, a consideration is how to bring this to the applied and
> directly.

What does this mean in Standard English, rather than the Rossian it is
'expressed' in above?
--


Ross A. Finlayson

unread,
Sep 25, 2012, 10:52:17 AM9/25/12
to
On Sep 24, 3:05 pm, Virgil <vir...@ligriv.com> wrote:
> In article
> <e188e8ad-bf87-47b3-94f4-413c39c8c...@k13g2000pbq.googlegroups.com>,
>  "Ross A. Finlayson" <ross.finlay...@gmail.com> wrote:
>
> > Here, then, a consideration is how to bring this to the applied and
> > directly.
>
> What does this mean in Standard English, rather than the Rossian it is
> 'expressed' in above?
> --


"Here, then, a consideration is how to bring this to the applied and
directly (to the applied)."

also

"Here, then, a consideration is how to bring this (directly) to the
applied."

Standard, yes, these mean the same thing.

"Here then a consideration is how to bring this directly to the
applied, in mathematics".

Then, here when I say "in mathematics", I mean "in mathematics".

Then, if your question is actually about meaning in natural language,
this is a discussion about foundations, in mathematics.

Or, "Good luck with that."

Regards,

Ross Finlayson

Virgil

unread,
Sep 25, 2012, 3:44:56 PM9/25/12
to
In article
<a0061172-82d4-4c9a...@kr1g2000pbb.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:

> On Sep 24, 3:05 pm, Virgil <vir...@ligriv.com> wrote:
> > In article
> > <e188e8ad-bf87-47b3-94f4-413c39c8c...@k13g2000pbq.googlegroups.com>,
> >  "Ross A. Finlayson" <ross.finlay...@gmail.com> wrote:
> >
> > > Here, then, a consideration is how to bring this to the applied and
> > > directly.
> >
> > What does this mean in Standard English, rather than the Rossian it is
> > 'expressed' in above?
> > --
>
>
> "Here, then, a consideration is how to bring this to the applied and
> directly (to the applied)."
>
> also
>
> "Here, then, a consideration is how to bring this (directly) to the
> applied."
>
> Standard, yes, these mean the same thing.

NOT in standard English, though only Ross can speak for what it means in
Rossian.
>
> "Here then a consideration is how to bring this directly to the
> applied, in mathematics".
>
> Then, here when I say "in mathematics", I mean "in mathematics".

But in your original statement, the one I questioned, you did not say
"in mathematics".
>
> Then, if your question is actually about meaning in natural language,
> this is a discussion about foundations, in mathematics.

Apparently expressed in the UNnatural language of Rossian.
>
> Or, "Good luck with that."
>
> Regards,
>
> Ross Finlayson
--


Ross A. Finlayson

unread,
Sep 26, 2012, 6:43:49 PM9/26/12
to
On Sep 25, 12:44 pm, Virgil <vir...@ligriv.com> wrote:
> In article
> <a0061172-82d4-4c9a-a543-0740add86...@kr1g2000pbb.googlegroups.com>,
What, they do (have the same meaning in standard English, the natural
language). Here just because something is brought directly doesn't
mean it doesn't arrive directly. Just because a different word
wouldn't be adverb to both the main verb and the object of a
preposition, directly as adverb here has these each meaning the same
thing.

Or, I could see "Here then a consideration is in as to how", to bring
this directly to the applied, in mathematics.

Here I'll express interest in why you see them as different but really
don't expect much. I see your reply as simply contrarian where here
it is rather simple the mathematics under discussion.

Eh, alt.usage.english this isn't. Here if you'd care to scan the
typeset, dog-eared rules of the day, you might so enjoin to crusade
for the preservation of the English language, as you seem compelled in
mathematics, many could use help. But, not here.

Correctness and unambiguity in natural language proofs is as relevant
as in symbolic proofs, and symbolic proofs can be read out in natural
language.

There's a lot of opportunity in the above for mathematical
development.

English?

Modern standard American English?

In mathematics?

Here I'd recommend Tao's shining blog http://terrytao.wordpress.com ,
and to read mathematics from there, also Math Overflow the complement
to Stack Overflow, http://mathoverflow.net, there's plainly lots
better reading than sci.math. http://mathoverflow.net/questions/108138/how-to-tell-a-paradox-from-a-paradox

Here in the simple anaphora, that includes all well-formed sentences,
it is particular that "directly", adverb, establishes for the verb,
and for the object, here of the verb.

endophora
anaphora
cataphora

Then, where all meaningful and well-formed statements have infinitely
many corresponding statements with the same meaning, and for each
statement in an anaphoric reference it can be made in a cataphoric
reference, so the simple anaphora contains the meaning of each well-
formed (if, poorly styled) sentence or collection of sentences, here
there is much to make with the foundations of language as there is
with the foundations of mathematics. And, being linguists, they often
have good words for these things.

So, I can well prove that those natural language statements have their
meaning.

English, Virgil: English.

Virgil

unread,
Sep 27, 2012, 12:35:01 AM9/27/12
to
In article
<0f10043b-e506-48d0...@kg10g2000pbc.googlegroups.com>,
"Ross A. Finlayson" <ross.fi...@gmail.com> wrote:


> English, Virgil: English.

Let me know when you have learned it!
--


Ross A. Finlayson

unread,
Sep 27, 2012, 2:01:31 PM9/27/12
to

English is here our common language and you're grumpy. Grumpy old
man, this is a mathematics forum (of dubious quality). Now I'd like
to thank the other posters for writing in English as my command of
European and Asiatic languages doesn't match the simple correspondence
in English.

Now, this is a discussion (in part) about a theory with geometry so
defined toward establishing the classical theorems of Euclidean
geometry, from alternate primary objects here of the points in space
instead of Euclidean construction of points and line.

Then, that this geometry is consistent with the super definition of
the reals as R^bar^dots, and the proviso of the natural/unit
equivalency function, is explored. This builds a variety of results,
in the post-modern, in mathematics: to whit, novelty.

Then, where it well appears that these structural implications follow
establishing the possibility of nonstandard, and true results of
systems of numbers, for application of this mathematics, the
conscientious mathematician is interested in that, for mathematical
physics.

So, you won't hear that as you stridently declare, others do, and I'm
interested in their constructive opinion (or of course in any true
mathematical statements in the area under discussion). With these
notions being plainly accessible to the interested reader, the idea
that this new theory has fewer axioms and decides "stronger" theories,
for example establishing measure 1.0, these are novel foundations with
enough suggestion of their consistency that, again, the conscientious
mathematician wouldn't ignore it, instead revel in it.

So, I'm interested in your opinion, in mathematics, and would
appreciate that you took time to read and extract the salient critical
points that should well allow you to put it together yourself. That
of course is addressed to you, the reader, not Virgil, he who won't:
as he won't.

Thanks, I feel great and the (re-)discovery of these features in
mathematics is an honor.

Regards,

Ross Finlayson

Ross A. Finlayson

unread,
Aug 13, 2019, 5:22:34 AM8/13/19
to
For the areas of integration, they are likely squares as circles.

Chris M. Thomasson

unread,
Aug 13, 2019, 6:17:55 AM8/13/19
to
For some reason, this makes me think of filling any fractal with
circles. Think of a means of telling if _any_ point of the circle is
"outside" of the fractal _or_ intersects with any other circle. I have
did this, and its great fun. Its simple, if any point on a circle
escapes, its out of the fractal. If any circle intersects with any other
circle, its invalid. The result is an infinite circle packing of any
fractal.

Simple example:

https://www.facebook.com/photo.php?fbid=162231644935842

can you see the image? Here are some more examples:

http://paulbourke.net/fractals/filling

:^)
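
A rough sketch of the test described, assuming the Mandelbrot set as
the fractal and a coarse sample of boundary points; the pages linked
above have the real renders:

import cmath, math

def escapes(c, limit=200):
    z = 0j
    for _ in range(limit):
        z = z * z + c
        if abs(z) > 2.0:
            return True
    return False

def circle_fits(center, radius, accepted, samples=64):
    # invalid if any sampled boundary point escapes (lies outside the set)
    for k in range(samples):
        p = center + radius * cmath.exp(2j * math.pi * k / samples)
        if escapes(p):
            return False
    # invalid if it intersects any previously accepted circle
    return all(abs(center - c2) >= radius + r2 for c2, r2 in accepted)

accepted = []
# the disc of radius 0.3 about -0.15 sits inside the main cardioid,
# so this candidate is accepted
if circle_fits(-0.15 + 0j, 0.3, accepted):
    accepted.append((-0.15 + 0j, 0.3))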



Transfinite Numbers

unread,
Aug 25, 2019, 12:59:10 AM8/25/19
to
You didn't take your pills herpes boy, right?

On Monday, September 17, 2012 at 9:38:17 PM UTC+2, Ross A. Finlayson wrote:
> In the forward notion of establishing mutual results in Euclidean
> geometry and Finlaysonian geometry it is to where Euclid defines
> geometry, in terms of points and lines, and here the spiral space-
> filling curve from the natural continuum defines the geometry in
> points and space.
>
> Euclid gives us four or five rules of geometry.
>
> Two points define a line.
>
> (Two non-parallel lines define a point, if they are not skew.)
>
> There are points all along the line, finite constructions are in the
> edge and compass, Euclid gives us this. Then, space is naturally
> defined as area with the products of functions in geometry. Here he
> starts with the 2-D case.
>
> Then, it is convenient to establish, for example, that from the unit
> line segment and circle and disc about the origined, there is defined
> affine geometry. (Here in the well in Euclidean geometry.)
>
> Then, that's convenient to establish coverage or the space of the
> functions that are then generally defined in for example three
> orthonormal vector bases. This establishment of the Euclidean
> geometry and the mutually consistent co-existence of a spiral space-
> filling curve with these compatible properties, establishes the
> general canon.
>
> Then, for the polydimensional perspective, we see the line segment
> from the disc. We see the origin from the line segment and disc.
> Look, there is the origin, it begins the spiral, space-filling curve
> of the continuum. To then maintain this generally, for the line
> segment it is that and for the disc it is that.
>
> Then, we see this adds power to the theory, simply from that general
> Euclidean geometry is maintained. Here it is from that, you can
> translate from the disc to the well for Euclidean results, parallel
> lines from the unit disc, here for affine geometry.
>
> Regards,
>
> Ross Finlayson

Ross A. Finlayson

unread,
Sep 10, 2021, 9:26:26 PM9/10/21
to
On Thursday, September 20, 2012 at 12:41:21 AM UTC-7, Ross A. Finlayson wrote:
> On Sep 19, 12:20 am, William Elliot <ma...@panix.com> wrote:
> > On Tue, 18 Sep 2012, Ross A. Finlayson wrote:
> > > On Sep 17, 7:54 pm, William Elliot <ma...@panix.com> wrote:
> > > Then, for the features of the natural continuum, here as defined with
> > > its simple properties in the thread, I'd be interested to know what
> > > you think of fundamental results in the real numbers and as well in
> > > geometry in the real numbers, with for example transforms among the
> > > unit disc, segment, square, and quadrant and plane.
> > > R(-1,1) -> R[0,1) -> R[0,1] x R[0,1] -> R[0,oo) x R[0,oo) -> R x R
> >
> > > R(-1,1) -> R x R[0,1) -> R x R
> >
> > > For centralized components:
> >
> > > R(-1,-1) -> R x R
> >
> > Pray tell, oh divine Jabberwokie, what be
> > R(0,1), transformed and centralized, and by
> > what tragic magic thou doesth N -> R(0,1),
> > thy fiat, begging for sense and meaning.
> >
> >
> >
> What with apparently having one right here I say I rather feel it is
> fiat than make it. I certainly believe it.
>
> Simply, when 0 counts up to infinity, that's enough to completely
> divide an amount, or here the amount. Then, via deduction, the
> complete ordered field is made from that.
>
> N * 1/N = 1.0
>
> N * 1.0 = R^dots+ = R+
>
> R^bar+ = R+
>
> Jabberwock: the gyres and gimbles in the wabes, mimsy were the
> borogoves, and the momeraths outgrabe, slithy toves: snicker-snack.
> Snicker-snack. (shudder). (Here the Jabberwock recounts the dreadful
> events.)
>
> Building the complete ordered field is from making the real numbers
> first then simply establishing the elements of the ordered field in
> that, then to be complete the reals are defined by the complete
> ordered field, the closure of properties and operations on them.
>
> Z = N + -N
> Z * 1.0 = R^dots = Rbardots
> ...
> R^bar = Rbardots
>
> Q e Rbardots, AnE n x N e Rbardots,
> R^bar = AnE n x N e Rbardots,
> Rbar = Rbardots
>
> Here, that says because all the expansions are in R bar dots, that is
> enough for Rbar and standard R.
>
> Here then the reals are not constructed in the theory to have
> approximations except to rationals, in R^bar. That's suitable and
> efficient, where, all the approximations can be constructed together
> and are countable, because they're co-defined and they're defined.
>
> Alright then, I'll go ask the 8-Ball, Ouija, and Guatemalan family
> living in my closet, the Jabberwock, how to arrive at a function from
> N onto 0.0 to 1.0: most directly.
>
> Here "most directly" is one for each of them, 1/N. Basically this is
> that division exists in the natural integers and it exists for all of
> them or a completed infinity, and N/N = 1 and here 1.0, establishing
> topological properties in the integer space.
>
> Then if for Maxwell's demons we could have these jabberwocks that
> basically make sense of the poem, isn't it because we can already make
> sense of the poem? Basically figuring if the demon has to work
> letting in or not hot and cold molecules, here the jabberwock has that
> if it's a real defined from the line it's not as if it's defined from
> the complete ordered field, where both share properties of the reals.
> Here though there's no need to employ jabberwocks for this: simply it
> is maintained in the definition.
>
> And, yes, I've gone on and developed why the standard has that there
> is countable additivity in the unit measure because the reals have
> countable additivity in the unit measure, for a standard non-standard
> measure theory, yes the standard and classical.
>
> Basically there's nothing different, here than the standard.
> _
> R = R
>
> Regards,
>
> Ross Finlayson
> > > Then, the idea of these simple transform diagram, is in terms of the
> > > space, then to indicate, for example regions of resources, in the real
> > > numbers or quantities represented in them, to work from the transforms
> > > and how they maintain the data, to see in the general plan the Hilbert
> > > course.
> >
> > > Here,
> > > N -> R[0,1]
> > > or sometimes
> > > N -> R[0,1)
> >
> > > As a route for application, it is as simple the logic of the data.
> >
> > > Then,
> > > N x N -> R[0,oo)
> > > but not
> > > N -> R[0,oo)
> > > except
> > > N -> N x N .
> >
> > > Thanks,
> >
> > > Ross Finlayson
Pow, "here then the standard".

