
May 28, 2006, 11:04:46 AM

This thread is motivated by a discussion at:

http://groups.google.com/group/sci.physics.relativity/msg/c551631f5c8b61f6

.

We are getting into pure math and this is a clean start on the problem

that will hopefully draw in some feedback from you.

The polysign construction is covered at:

http://bandtechnology.com/PolySigned/PolySigned.html

We would like a matrix expression that will do the polysign product. I am slightly reluctant about this approach because of the simplicity of the native math that we are about to garble, but since the lingua franca is block language it must be done.

We can take two Cartesian values x1, x2 of similar dimension and

transform them to polysigned values. We can then take their product in

the polysigned domain. We can then transform this back to a Cartesian value x3.

So we know that this product

x1 x2 = x3

is possible and exists in any whole dimension. The challenge is

expressing it purely in the matrix system. Rather than use P4 I think

we should use P3 as an example since the number of terms will be

simplified. Since the process is general it will extend naturally.

The following notation will be used:

An explicit Cartesian value will be represented in the form:

[ Cn X0, X1, X2, ..., X(n-1) ]

where X are real values and are the usual vector form in n dimensions.

An explicit polysigned value will be represented in the form:

[ Pm S0, S1, S2, ..., S(m-1) ]

where S are magnitudes that inherently hold their sign as their

position.

S0 is the identity sign, S1 is first sign, etc.

If we instantiate two 2D Cartesian x's x1 and x2 we'll show their

components as:

x1 = [ C2 x10, x11 ]

and

x2 = [ C2 x20, x21 ]

so the last index is the component and the first index is an

identifier.

This applies similarly for the polysigned domain.

The transform from Cartesian to polysigned is resolved:

http://bandtechnology.com/PolySigned/CartesianTransform.html

so we can simply take

s1 = x1 t23

s2 = x2 t23

where x1 and x2 are 2D Cartesian, t23 is the 2 x 3 transform matrix,

and s1, s2 are in P3. There is a slight ambiguity in that the s values

could come out negative in the math of the transform. If this happens

they can be normalized. The details of this are not problematic and can

be discussed if necessary.

So we have two three-signed values s1 and s2 that came from their

Cartesian counterparts x1 and x2. Their product is defined as:

s1 s2 = s3

where

s30 = s10 s20 + s11 s22 + s12 s21

s31 = s11 s20 + s10 s21 + s12 s22

s32 = s12 s20 + s10 s22 + s11 s21 .

This is covered in slightly different form at:

http://bandtechnology.com/PolySigned/PolySigned.html .

Here the + operator means summation and should not be confused with the

polysign symbolic representation. All of the above values are

magnitudes being multiplied and summed.
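The three defining lines transcribe directly into C++. This is just a sketch of the formula as written, with non-negative magnitude triplets in and out:

```cpp
#include <cassert>

// Direct transcription of the P3 product definition above.
// Inputs and outputs are magnitude triplets; the sign of each term
// is carried entirely by its index.
void p3Product( const double s1[3], const double s2[3], double s3[3] )
{
    s3[0] = s1[0]*s2[0] + s1[1]*s2[2] + s1[2]*s2[1];
    s3[1] = s1[1]*s2[0] + s1[0]*s2[1] + s1[2]*s2[2];
    s3[2] = s1[2]*s2[0] + s1[0]*s2[2] + s1[1]*s2[1];
}
```

Since S0 is the identity sign, [ P3 1, 0, 0 ] should act as a multiplicative identity, which makes a quick sanity check.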

The matrix challenge is embodied in the expression of s3 above.

Upon declaring that in matrix notation the desired x3 is

simply:

x3 = s3 t32

where t32 is the inverse transform matrix.

The expression is rotationally variant but allows the product and sum

to take on arithmetic qualities so that the usual distributive,

associative, and commutative laws of the real numbers apply in any

whole dimension.

Does the rotational variance necessarily deny the usage of tensor

notation? In that case the algorithmic matrix operation that is needed

may be new. It has to spin the source vectors around to produce s3.

When that operation is generalized to any dimension the matrix

definition will be completed.

I hope that a matrix guru can get this notation easily. It's not

necessarily embodied by an existing matrix operation.

It really boils down to a matter of notation. This operation is

embodied in the three lines that are the polysigned product. You can

break it out in two ways:

s3 = [ P3 s10s20, s11s20, s12s20 ]

+ [ P3 s12s21, s10s21, s11s21 ]

+ [ P3 s11s22, s12s22, s10s22 ]

or alternatively

s3 = [ P3 s10s20, s10s21, s10s22 ]

+ [ P3 s11s22, s11s20, s11s21 ]

+ [ P3 s12s21, s12s22, s12s20 ] .

They are symmetrical extensible representations. I am unaware of the

symbolic representation of such an operation in matrix form. If it

exists already please comment. If not I guess we'll have to name such

an operation and attempt to give it a symbol and definition. I would

call it a spin or flip form. But I am no matrix master so please

comment if you are.

-Tim

May 28, 2006, 7:46:37 PM

Timothy Golden BandTechnology.com wrote:

> This thread is motivated by a discussion at:

>

> http://groups.google.com/group/sci.physics.relativity/msg/c551631f5c8b61f6

> .

I'll fill in some of what I thought were major details from that

discussion.

Tim wrote:

> Spoonfed wrote:

> > Tim wrote:

> <Snipping away old stuff.>

> > > Here is the general n-signed product in C++ code:
> > >
> > > for( i = 0; i < n; i++ )
> > > {
> > >    for( j = 0; j < n; j++ )
> > >    {
> > >       k = (i+j)%n;
> > >       x[ k ] += s1.x[i] * s2.x[j];
> > >    }
> > > }

Note: (i+j)%n means (i+j) modulo n.
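For reference, here is a self-contained version of that loop; the only addition is zero-initializing the output, which the quoted fragment assumes:

```cpp
#include <cassert>
#include <vector>

// Runnable form of the quoted n-signed product.  Sign i times sign j
// contributes to sign (i+j) mod n; x must start out all zero.
std::vector<double> polyProduct( const std::vector<double>& s1,
                                 const std::vector<double>& s2 )
{
    int n = (int)s1.size();
    std::vector<double> x( n, 0.0 );
    for( int i = 0; i < n; i++ )
        for( int j = 0; j < n; j++ )
            x[ (i + j) % n ] += s1[i] * s2[j];
    return x;
}
```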

This particular piece of code makes the n-signed numbers

indistinguishable from particular sets of complex numbers. However,

there is another piece to the definition.

By definition, P4 represents four vectors lying at angles of

Pi-arccos(1/3) radians from each other. These four vectors cannot lie

in the same plane, thus are not described by any set of four complex

numbers.

In my understanding, this is generalizable, so in one dimension, P2,

you have two vectors lying 180 degrees from one-another,

In 2 dimensions, P3 is three vectors lying 120 degrees from

one-another.

In 3 dimensions, P4 is four vectors lying 109.47 degrees from each

other.

In n-1 dimensions, Pn is n vectors lying 180-arccos(1/(n-1)) degrees from one another.

Now the final key was that Tim said that multiplication was possible in

any level of dimension. Not cross-product, not dot product, but simple

multiplication. I have never heard of such a thing in 3D, but he's

given a general, reasonable set of rules for simple multiplication in

his C++ code, which reproduces multiplication in 1D and 2D, and seems

to flow into a pattern generalizable to any dimension.

> i={1,0,0}

> j={-1/3, sqrt(8/9),0}

> k={-1/3,sqrt(2/9),sqrt(2/3)}

> l={-1/3,sqrt(2/9),-sqrt(2/3)}

> i+j+k+l=0

> Op0=Transpose[{i,(3j+i)/sqrt(8),(2k+j+i)/sqrt(8/3)}]

> ={{1,0,0},{0,1,0},{0,0,1}}

> Op1=Transpose[{j,(3k+j)/sqrt(8),(2l+k+j)/sqrt(8/3)}]

> ={{-1/3,-sqrt(2/9),-sqrt(2/3)},{sqrt(8/9),-1/6,-sqrt(1/12)},{0,sqrt(3/4),-1/2}}

> Op2=Transpose[{k,(3l+k)/sqrt(8),(2i+l+k)/sqrt(8/3)}]

> ={{-1/3,-sqrt(2/9),sqrt(2/3)},{-sqrt(2/9),-2/3,-sqrt(1/3)},{sqrt(2/3),-sqrt(1/3),0}}

> Op3=Transpose[{l,(3i+l)/sqrt(8),(2j+i+l)/sqrt(8/3)}]

> ={{-1/3,sqrt(8/9),0},{-sqrt(4/9),-1/6,sqrt(3/4)},{-sqrt(2/3),-sqrt(1/12),-1/2}}

> I have verified this in Mathematica, but I can't copy and paste into a

> text editor, so errors may have cropped up in the re-typing.

> Anyway, I verified with these operators that

> Op0.i=i

> Op0.j=j

> Op0.k=k

> Op0.l=l

> Op1.i=j

> Op1.j=k

> Op1.k=l

> Op1.l=i

> Op2.i=k

> Op2.j=l

> Op2.k=i

> Op2.l=j

> Op3.i=l

> Op3.j=i

> Op3.k=j

> Op3.l=k

> Det[Op0]=Det[Op2]=1

> Det[Op1]=Det[Op3]=-1

Op0 is the identity, corresponding to multiplication by #1 in Tim's P4 notation.

Op1 (multiplying by -1) and Op3 (multiplying by *1) involve reflection, rotation, and scaling, while Op2 (multiplying by +1) is a proper transformation involving rotation and scaling, but no reflection.

Jun 1, 2006, 6:55:10 AM

I'm unburying this thread in the hopes that you will respond.

The form:

s30 = s10 s20 + s11 s22 + s12 s21

s31 = s11 s20 + s10 s21 + s12 s22

s32 = s12 s20 + s10 s22 + s11 s21 .

is extensible to any dimension and provides a commutative product

definition.

Is there an existing definition of this matrix form?

The sources s1 and s2 are native to the polysigned domain so that their

unit vectors form a nonorthogonal n-1 dimensional simplex coordinate

system subject to the following extensible rule:

[ 1, 1, 1 ] = 0

In the three component (P3) form above this system generates the

complex numbers.

In P2 it generates the real numbers. P4 and above are poorly

understood.

In trying to talk matrix language the system suffers a breakage in P4

where rotational variance affects the product so that the math is not

tensor compatible. Yet this simple patterned and extensible expression

looks very much like a matrix form.

What is it?


Jun 4, 2006, 11:04:22 AM

I only understand a little of your terminology, but I think you are

asking for these.

/s1-s3\ = /1 1/(2 Sin[th2])\ /r1\
\s2-s3/   \0 2/(2 Sin[th2])/ \r2/

and

/r1\ = /1 Cos[th2]\ /s1-s3\
\r2/   \0 Sin[th2]/ \s2-s3/

Where th2 = Pi - ArcCos[1/2]

And once we get in the {s1-s3,s2-s3} domain:

i = /1 0\
    \0 1/

j = /0 -1\
    \1 -1/

k = /-1  0\   (k won't be used in the solution, but it's here for reference.)
    \-1  1/

        / /1  0\  \
{i,j} = | \0  1/  |
        |         |
        | /0 -1\  |
        \ \1 -1/  /

So multiplying {s00,s01,s02} by {s10,s11,s12} is the same as

multiplying

{s00-s02,s01-s02,0} by {s10-s12,s11-s12,0}.

({i,j}.{s00-s02,s01-s02}).{s10-s12,s11-s12}

(Note you can switch the order of the 2X1 matrices, but the 2X2X2

matrix must be multiplied first)

{i,j}.{s00-s02,s01-s02} =

(s00-s02) * /1 0\   +   (s01-s02) * /0 -1\
            \0 1/                   \1 -1/

= /s00-s02  s02-s01\
  \s01-s02  s00-s01/

Then we dot this matrix with {s10-s12,s11-s12} to get...

(s00-s02)(s10-s12)+(s02-s01)(s11-s12)+(s01-s02)(s10-s12)+(s00-s01)(s11-s12)

simplifying to

s00 (s10+s11-2 s12)+s01 (s10-2 s11+s12)+s02 (-2 s10+s11+s12)

...at which point I need to take a break. A check is still needed--How

does this compare to answers achieved with Polysigned method or by

complex notation? I see I've failed to keep the indexes the same, for

one thing.

Jun 5, 2006, 5:28:11 PM

Spoonfed wrote:

>

> I only understand a little of your terminology, but I think you are

> asking for these.

>

> /s1-s3\ = /1 1/(2 Sin[th2])\/r1\

> \s2-s3/ \0 2/(2 Sin[th2])/\r2/


I don't understand the notational meaning of '/' and '\' in your

equations.

We're working in P3 or 2D so I'm guessing that you're getting your own

proof for complex equivalence here. That should come out of your new

angled Cartesian system here as well.

So I'm guessing and seeing that's what you're doing. Good idea. I

suppose I should be able to decipher the notation but I'm going to let

you tell me.

I'll try to do this your way (or should I say my version of your way)

and see if there is some pattern matching.

So in P3 if we have two points [ P3 a', b', c' ] and [ P3 d', e', f' ]

We force the last component to zero and enter the SP3 domain of real

valued components

These become

[ SP3 a, b, 0 ] and [ SP3 d, e, 0 ]

where a=a'-c', b=b'-c', d=d'-f', e=e'-f' .

Alternatively this will become

[SC2 a, b ] and [SC2 d, e]

where SC2 is a special nonorthogonal Cartesian (is that an oxymoron?) 2D space with a specific angle of sixty degrees between the axes.

So the product of these two points which have now been expressed in

three subtly different coordinate systems is:

[ SP3 ad, ae + bd, be ]

which is

[ SC2 ad - be, ae + bd - be ] .

So if you slant the coordinate system to sixty degrees instead of ninety, the coordinate system will yield the usual complex number product using this last formula. The positive real axis is the first coordinate axis and the positive imaginary is thirty degrees past the second.
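That claim is easy to spot-check numerically. The sketch below assumes the first SC2 axis maps to the complex number 1 and the second to w = exp(2 pi i/3); that particular embedding is my guess at the intended orientation, but it is the one that makes the formula agree with the ordinary complex product.

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Spot check of [ SC2 ad - be, ae + bd - be ] against the ordinary complex
// product.  Assumed embedding: first SC2 axis -> 1, second -> w = e^(2*pi*i/3).
bool checkSc2Product( double a, double b, double d, double e )
{
    const double pi = std::acos( -1.0 );
    std::complex<double> w = std::polar( 1.0, 2.0 * pi / 3.0 );
    std::complex<double> lhs = ( a + b * w ) * ( d + e * w );
    std::complex<double> rhs = ( a * d - b * e ) + ( a * e + b * d - b * e ) * w;
    return std::abs( lhs - rhs ) < 1e-9;
}
```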

To resolve the components you need to use a parallelogram resolution, NOT the perpendicular resolution, which is inconsistent.

It is possible to check this by doing a few computations like ( i )( i ) but I haven't yet, so there is no verification except algebra. I started checking (0+1i)(0+1i) but got tired at the reverse transform out of SC2. It would be nice to demonstrate this. Can you do all this in Mathematica?

It seems very round about compared to my proof:

http://bandtechnology.com/PolySigned/ThreeSignedComplexProof.html

but if you will be convinced probably others will be too so it would be

a nice step and would verify your approach.

>

> and

>

> /r1\ = /1 Cos[th2]\/s1-s3\

> \r2/ \0 Sin[th2]/\s2-s3/

>

> Where th2 = Pi - ArcCos[1/2]

>

> And once we get in the {s1-s3,s2-s3} domain:

>

> i=/1 0\

> \0 1/

>

> j=/0 -1\

> \1 -1/

>

> k=/-1 0\ (k won't be used in the solution, but it's here for

> reference.)

> \-1 1/

> / \

> {i,j} = | / 1 0\ |

> | \ 0 1/ |

> | |

> | / 0 -1\ |

> | \ 1 -1/ |

> \ /

>

> So multiplying {s00,s01,s02} by {s10,s11,s12} is the same as

> multiplying

> {s00-s02,s01-s02,0} by {s10-s12,s11-s12,0}.

>

> ({i,j}.{s00-s02,s01-s02}).{s10-s12,s11-s12}

>

I'm still confused by the notation with / and \.

> (Note you can switch the order of the 2X1 matrices, but the 2X2X2

> matrix must be multiplied first)

>

> {i.j}.{s00-s02,s01-s02}=

> (s00-s02) */1 0\

> \0 1/

> +(s01-s02)*/0 -1\

> \1 -1/

>

> =/s00-s02 s02-s01\

> \s01-s02 s00-s01/

>

> Then we dot this matrix with {s10-s12,s11-s12} to get...

>

> (s00-s02)(s10-s12)+(s02-s01)(s11-s12)+(s01-s02)(s10-s12)+(s00-s01)(s11-s12)

>

> simplifying to

>

> s00 (s10+s11-2 s12)+s01 (s10-2 s11+s12)+s02 (-2 s10+s11+s12)

>

> ...at which point I need to take a break. A check is still needed--How

> does this compare to answers achieved with Polysigned method or by

> complex notation? I see I've failed to keep the indexes the same, for

> one thing.

The equation above looks about right except for some typos. You've

shifted the ID's down to 0 and 1 from 1 and 2 and there is a '-2' where

there should be a '+' in each parenthetical. Also the indices are not

stated so if these are all magnitudes you'd have just a single value at

the evaluation of your one line equation. But the form is there.

The product is really very simple and straightforward in its native

format. We are complicating it quite a bit but if you are able to do

these problems in Mathematica under the shifted Cartesian pretext then

it is worthy. We'd have some verification ability since your software

and mine are independent. Does Mathematica handle the nonorthogonal

coordinate system well? What about the product definition? I haven't

tried to see if there is a general product definition coming out of the

forms that we've made in SC but if it exists and can be coded in

Mathematica then you'd have a reasonable system.

-Tim

Jun 6, 2006, 10:09:40 AM

Does anyone know of an existing matrix definition for the following

matrix form?


s30 = s10 s20 + s11 s22 + s12 s21

s31 = s11 s20 + s10 s21 + s12 s22

s32 = s12 s20 + s10 s22 + s11 s21 .

It is commutative so that

s1 s2 = s2 s1 .

It is extensible to any dimension.

When the identity

( x, x, ... ) = 0

is invoked the system drops one dimension and derives the real numbers

and complex numbers. It also generates weird high dimensional spaces.

The well behaved members of the family are congruent with spacetime.

The system defines a zero-dimensional space that has time

correspondence.

-Tim

Jun 7, 2006, 7:42:52 PM

Timothy Golden BandTechnology.com wrote:

> Spoonfed wrote:

> >

> > I only understand a little of your terminology, but I think you are

> > asking for these.

> >

> > /s1-s3\ = /1 1/(2 Sin[th2])\/r1\

> > \s2-s3/ \0 2/(2 Sin[th2])/\r2/

>

> I don't understand the notational meaning of '/' and '\' in your

> equations.

This is supposed to be a linear algebra expression, with / and \

denoting parentheses on more than one line.

Let me try to give you some pseudocode you can translate into C++

sa = r1 + r2/(2*Sin[th2])

sb = r2/Sin[th2]

s3 = Max(-sa, -sb, 0)

s1=sa+s3

s2=sb+s3

sa=s1-s3

sb=s2-s3

r1=sa + sb*Cos[th2]

r2=sb*Sin[th2]

Wow, I really screwed that one up. 2X2 matrix times 2X1 vector should

give a 2X1 vector. In the end it looks like garbage no matter how you

write it, so have a look at the pseudocode written below. Hopefully a

little less bewildering.
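That pseudocode translates almost line for line into C++. Two notes on the sketch below: the forward line for sb is taken as sb = r2/Sin[th2], which is what the reverse line r2 = sb*Sin[th2] implies, and the Max includes 0 so the components stay non-negative (as in the later Max[-a2,-b2,0]).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// th2 = Pi - ArcCos[1/2] = 120 degrees.
static const double th2 = std::acos( -1.0 ) - std::acos( 0.5 );

// Forward: 2D Cartesian (r1,r2) -> non-negative P3 components (s1,s2,s3).
void cartesianToS( double r1, double r2,
                   double& s1, double& s2, double& s3 )
{
    double sa = r1 + r2 / ( 2.0 * std::sin( th2 ) );
    double sb = r2 / std::sin( th2 );         // from r2 = sb*Sin[th2]
    s3 = std::max( { -sa, -sb, 0.0 } );       // keep all components >= 0
    s1 = sa + s3;
    s2 = sb + s3;
}

// Reverse: P3 components -> 2D Cartesian.
void sToCartesian( double s1, double s2, double s3,
                   double& r1, double& r2 )
{
    double sa = s1 - s3;
    double sb = s2 - s3;
    r1 = sa + sb * std::cos( th2 );
    r2 = sb * std::sin( th2 );
}
```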

> >

> > ...at which point I need to take a break. A check is still needed--How

> > does this compare to answers achieved with Polysigned method or by

> > complex notation? I see I've failed to keep the indexes the same, for

> > one thing.

'Define two numbers in P3 as ordered triplets {s00,s01,s02} and

{s10,s11,s12}

'Then define ordered pairs {a0,b0} and {a1,b1} based on them as

follows:

a0=s00-s02

b0=s01-s02

a1=s10-s12

b1=s11-s12

'Then we want to form a 2X2 matrix {{m00,m01},{m10,m11}} from a0*i

+b0*j as follows:

'(This is the section of code where we are multiplying a 1X2 by a 2X(2X2) matrix to get a 1X(2X2) matrix, more commonly known as a 2X2 matrix.)

m00=a0

m01=-b0

m10=b0

m11=a0-b0

'Now take this new matrix M and multiply it by {a1,b1} to get {a2,b2}

a2=m00*a1+m01*b1

b2=m10*a1+m11*b1

'And to get {s20,s21,s22}

s22=Max[-a2,-b2,0]

s20=a2+s22

s21=b2+s22
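In C++ the same steps look like this (a sketch; the last assignment uses b2, by symmetry with the s20 line). Note the result comes out in reduced form, i.e. with at least one zero component:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// P3 product via the 2x2 matrix M = a0*i + b0*j, following the pseudocode:
// reduce each triplet to an ordered pair, multiply, then lift back.
void p3ProductViaMatrix( const double s0[3], const double s1[3], double s2[3] )
{
    double a0 = s0[0] - s0[2], b0 = s0[1] - s0[2];
    double a1 = s1[0] - s1[2], b1 = s1[1] - s1[2];
    double m00 = a0,  m01 = -b0;          // M = a0*i + b0*j
    double m10 = b0,  m11 = a0 - b0;
    double a2 = m00 * a1 + m01 * b1;
    double b2 = m10 * a1 + m11 * b1;
    s2[2] = std::max( { -a2, -b2, 0.0 } );
    s2[0] = a2 + s2[2];
    s2[1] = b2 + s2[2];                   // b2, by symmetry with the s20 line
}
```

As a check, multiplying {1,0,0} (the identity sign unit) by {2,3,4} yields {0,1,2}, which is {2,3,4} minus the identity amount [2,2,2].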

>

> The equation above looks about right except for some typos. You've

> shifted the ID's down to 0 and 1 from 1 and 2

Sorry about that.

Jun 7, 2006, 8:41:54 PM

I was introduced the other day to GF[4] which bears some vague

resemblance to P3. The multiplication table is the same as for unit

vectors in P3, but the addition properties are a little off.

Jun 8, 2006, 7:40:39 AM

As I look at the tables at the bottom of that wiki page I don't see
much alignment except that GF(3) addition table looks like P3

multiplication table. The reliance on primes seems to be basic to

Galois Fields. In polysign the prime sign systems are very flat in

terms of symmetry of the signs as operators. But there is no

restriction about primes.

P4 and all higher even sign systems are not fields since division does

not necessarily work. The last requirement of the mathematical field

is:

Existence of multiplicative inverses:

For every a ≠ 0 belonging to F, there exists an element
a^-1 in F,
such that
a * a^-1 = 1.

In P4 if we multiply a value by

+ 1 # 1

we will get another value on that same ray. I call this the identity

axis. It exists in every even signed system. In effect what was a zero

becomes a zero axis to the field definition. The dimensionality does

not allow the strict field definition to apply. This does not trouble

me. The polysigned numbers challenge existing mathematics on many

levels. It's because existing mathematics is built upon the real

numbers. The singular zero of this field requirement becomes an axis in

general. I can't change the definition of a field. To quibble over this

as a stopping point is not relevant. The polysigned system behaves on

its own terms.

You've reminded me of

http://groups.google.com/group/sci.math/msg/11ceb7ea329e2a97

Roger Beresford's terplex product exactly matches so he must have the

same matrix form.

It's readily apparent when you look at his unit vector definition.

But again without the identity which drops the actual dimensionality by

one.

We hashed it out on that thread but I have failed to recall it as a

resource. It does exist in other usage and sci.math just isn't a very

good place to query for the info. He has code too:

http://library.wolfram.com/infocenter/MathSource/4894/

His stuff deals in orthogonal bases.

Augmented with the forward and reverse Cartesian transforms it would be

sufficient to do general polysigned math.

This approach would skip over your latest move of SC slanted space. The

identity is somewhat optional to the system. You won't need it except

that equivalent values don't necessarily look the same until reduced.

The inverse transform will take care of that for you as long as you

look at your values in Cartesian. Keep in mind that above P3 the

solutions become rotationally variant so what was isotropic becomes

anisotropic yet maintains linearity. I'm not really comfortable with

the word isotropic. It seems to carry a haze around with it.

In effect your generic Cartesian product is a function that looks like:

Cartesian Product( Cartesian c1, Cartesian c2 )
{  // this does the polysigned product.
   // caution: the transform orientation matters.
   // |AB| = |A||B| breaks above 2D.
   Cartesian s1( c1.dim + 1 ) = PolySignXform( c1 );
   Cartesian s2( c2.dim + 1 ) = PolySignXform( c2 );
   Cartesian s3( c1.dim + 1 ) = TerplexProduct( s1, s2 );
   Cartesian c3( c1.dim ) = InversePolySignXform( s3 );
   return c3;
}

Even without the identity function this will work fine. In effect the

identity has been embodied by the transform. The usual vector sum is

the same in either space so there is no trouble there. This should be a

general solution upon generalizing the transform and if you are willing

to work in a particular sign like P4 that part can be hard coded for

now. This might help turn others on to the system.

-Tim

Jun 8, 2006, 7:43:26 AM

> ninety the coordinate system will yield the usual complex number

> product using this last formula.The positive real axes is the first

> coordinate axis and the positive real is thirty degrees past the

> second.

than the second SC axis.

Jun 8, 2006, 8:14:31 AM

You are a bit hush about where you are going. I'm seeing you plugging

into a Lorentz style transform inherently at the specific angle. Out

come the complex numbers. Then what?

On to higher signs? Perhaps you can respond on the other tendril and

let this one die.

Jun 10, 2006, 10:17:04 PM

Look closer. The multiplication table for GF(3) corresponds to P2.

GF(4) corresponds to P3.

> P4 and all higher even sign systems are not fields since division does

> not necessarily work. The last requirement of the mathematical field

> is:

>

If multiplication in P4 can be represented by matrix multiplication,

division should be possible through multiplying by the inverse matrix.

> Existence of multiplicative inverses:

>

> For every a ≠ 0 belonging to F, there exists an element
> a^-1 in F,
> such that
> a * a^-1 = 1.

>

> In P4 if we multiply a value by

> + 1 # 1

> we will get another value on that same ray. I call this the identity

> axis.

I thought that #1 and (-1+1*1) identified the identity axis... But

then again, any real-valued eigenvalue associated with a real valued

eigenvector should have this property. I posted a list of these

somewhere either in this or the other thread.

> It exists in every even signed system. In effect what was a zero

> becomes a zero axis to the field definition. The dimensionality does

> not allow the strict field definition to apply. This does not trouble

> me. The polysigned numbers challenge existing mathematics on many

> levels. It's because existing mathematics is built upon the real

> numbers. The singular zero of this field requirement becomes an axis in

> general. I can't change the definition of a field. To quibble over this

> as a stopping point is not relevant. The polysigned system behaves on

> its own terms.

>

Alright, since you brought it up, let's quibble. Perhaps some insight

might be gained. Have you invented a new math, or have you simply

found some interesting properties of the old math?

If the polysigned system behaves on its own terms, then we should

highlight those differences. Nothing to be gained by avoiding the

stopping points; we should approach them directly and stop at them.

To represent zero as a half dimension doesn't make any sense. You say

there is some fundamental difference in P2 between +7-5 and +2. To me

they are simply different representations of the same number.

> You've reminded me of

> http://groups.google.com/group/sci.math/msg/11ceb7ea329e2a97

> Roger Beresford's terplex product exactly matches so he must have the

> same matrix form.

> It's readily apparent when you look at his unit vector definition.

> But again without the identity which drops the actual dimensionality by

> one.

> We hashed it out on that thread but I have failed to recall it as a

> resource. It does exist in other usage and sci.math just isn't a very

> good place to query for the info. He has code too:

> http://library.wolfram.com/infocenter/MathSource/4894/

> His stuff deals in orthogonal bases.

> Augmented with the forward and reverse Cartesian transforms it would be

> sufficient to do general polysigned math.

Cool.

> This approach would skip over your latest move of SC slanted space. The

> identity is somewhat optional to the system.

What are you referring to here?

> You won't need it except

> that equivalent values don't necessarily look the same until reduced.

> The inverse transform will take care of that for you as long as you

> look at your values in Cartesian. Keep in mind that above P3 the

> solutions become rotationally variant so what was isotropic becomes

> anisotropic yet maintains linearity.

i.e. There ARE special directions. Multiplying by #1 maintains the

shape, but multiplying by *1 inverts everything.

> I'm not really comfortable with

> the word isotropic. It seems to carry a haze around with it.

>

> In effect your generic Cartesian product is a function that looks like:

>

> Cartesian Product( Cartesian c1, Cartesian c2 )

> { // this does the polysigned product.

> // caution: the transform orientation matters.

> // |AB| = |A||B| breaks above 2D.

> Cartesian s1( c1.dim + 1 ) = PolySignXform( c1 );

> Cartesian s2( c2.dim + 1 ) = PolySignXform( c2 );

> Cartesian s3( c1.dim + 1 ) = TerplexProduct( s1, s2 );

> Cartesian c3( c1.dim ) = InversePolySignXform( s3 );

> return c3;

> }

>

Hmmm.

> Even without the identity function this will work fine. In effect the

> identity has been embodied by the transform.

Still not quite sure what you mean by the identity.

> The usual vector sum is

> the same in either space so there is no trouble there. This should be a

> general solution upon generalizing the transform and if you are willing

> to work in a particular sign like P4 that part can be hard coded for

> now. This might help turn others on to the system.

>

> -Tim

Whether I'm willing to work is a key question. ;-) At this stage, I

can see it might be worthwhile to put forth some effort into developing

P6, P10, P12, P16, etc. Not the primes--which you've mentioned are

flat, but those with prime-1 signs corresponding to the prime number

based Galois Fields. Just a thought, and it might be totally wrong.

Also, see if you can convince me or anybody that P1 is a construction

worthy of notice. A clue might lie in GF[2] (or might not.)

Jun 11, 2006, 8:39:38 PM

Spoonfed wrote:

> Timothy Golden BandTechnology.com wrote:


> > Spoonfed wrote:

> > > http://en.wikipedia.org/wiki/Galois_field

> > As I look at the tables at the bottom of that wiki page I don't see

> > much alignment except that GF(3) addition table looks like P3

> > multiplication table. The reliance on primes seems to be basic to

> > Galois Fields. In polysign the prime sign systems are very flat in

> > terms of symmetry of the signs as operators. But there is no

> > restriction about primes.

> >

>

> Look closer. The multiplication table for GF(3) corresponds to P2.

> GF(4) corresponds to P3.


Got it. OK. Nice pattern recognition. But I still see a wide divide

between the constructions. I suppose as a cross-study it is relevant to

compare the two. I can see that treating the signs as discrete

operators sends one toward this domain. But let's not forget that the

polysigned system is a continuum. I suppose one could ponder how to

jump to a continuum construction from a finite construction and I

suppose adjacency would be an appropriate tool. As we begin counting we

then realize that scale can be useful and that there is no need to

limit oneself to unit values, though D-theorists seem to be forgoing

that scalar concept:

http://koti.mbnet.fi/mpelt/tekstit/dtheory.htm

There are some surprising congruencies to the polysigned lattice though

the constructions are motivated differently.

What I have absorbed so far of the Galois construction is that it is

very primitive. More so than the polysigned numbers. Its motivation and consequences are not very clear to me. Also that zero product that

wipes everything away seems pretty destructive. In polysign math the

notion of a zero sign is just the identity sign, which preserves the

state of the operand.

The Galois sum miffs me. Why would like values sum to zero? If

summation is superposition shouldn't inverse values sum to zero? The

construction will not yield dimensionality.

The polysigned numbers require cancellation to exist via equal amounts

in each orientation. Symmetry and superposition together form the heart

of the polysigned numbers and generate dimensionality.

> > P4 and all higher even sign systems are not fields since division does

> > not necessarily work. The last requirement of the mathematical field

> > is:

> >

>

> If multiplication in P4 can be represented by matrix multiplication,

> division should be possible through multiplying by the inverse matrix.

I'll try to study this approach. The product is not actually a matrix

multiplication though is it?

I'm not seeing it for the moment. I see you have a matrix in the

nonorthogonal Cartesian system. But will the inverse work there?

>

> > Existence of multiplicative inverses:

> >

> > For every a â‰ 0 belonging to F, there exists an element

> > a^âˆ’1 in F,

> > such that

> > a * a^âˆ’1 = 1.

> >

> > In P4 if we multiply a value by

> > + 1 # 1

> > we will get another value on that same ray. I call this the identity

> > axis.

>

> I thought that #1 and (-1+1*1) identified the identity axis... But

> then again, any real-valued eigenvalue associated with a real valued

> eigenvector should have this property. I posted a list of these

> somewhere either in this or the other thread.

Let's try an example in P4:

( - 1 + 2 ) ( - 1 + 1 # 1 )

= + 1 * 1 - 1 * 2 # 2 + 2

= - 1 + 3 * 3 # 2

= + 2 * 2 # 1 .

Now using the identity axis:

( - 1 + 2 ) ( + 1 # 1 )

= * 1 - 1 # 2 + 2

= + 1 # 1 .

In general the value won't come to +1 # 1. It might be + 1.5 # 1.5 or

it could be - 3.2 * 3.2 but it will always be on this axis. The name

identity axis may be a bad name choice. The word identical is

appropriate but perhaps it should be called the null axis. Any value

multiplied by a value on this axis will come out on this axis. In P6

the values (1,0,1,0,1,0) and (0,1,0,1,0,1) form the identity axis.

This axis is very much like a real line since one side of it has an

inverting property so that in P4:

( + 1 # 1 ) ( + 1 # 1 ) = + 2 # 2 ,

( - 1 * 1 ) ( - 1 * 1 ) = + 2 # 2 ,

( - 1 * 1 ) ( + 1 # 1 ) = - 2 * 2 ,

just as in P2:

( + 1 )( + 1 ) = + 1 ,

( - 1 )( - 1 ) = + 1 ,

( - 1 )( + 1 ) = - 1 .
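
These identity-axis products can be checked mechanically. Here is a minimal sketch (mine, not from the original posts), assuming the convention that the identity sign '#' of P4 sits at component 0, with '-', '+', '*' at components 1, 2, 3, and that sign indices add mod 4 under the product:

```python
def pprod(a, b):
    # polysign product: sign indices add modulo n (a cyclic convolution)
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

def cancel(v):
    # cancellation law: equal magnitudes in every sign sum to zero
    m = min(v)
    return [x - m for x in v]

# component order assumed here: [#, -, +, *]
print(cancel(pprod([1, 0, 1, 0], [1, 0, 1, 0])))  # (+1 # 1)(+1 # 1) -> [2, 0, 2, 0], i.e. + 2 # 2
print(cancel(pprod([0, 1, 0, 1], [0, 1, 0, 1])))  # (-1 * 1)(-1 * 1) -> [2, 0, 2, 0], i.e. + 2 # 2
print(cancel(pprod([0, 1, 0, 1], [1, 0, 1, 0])))  # (-1 * 1)(+1 # 1) -> [0, 2, 0, 2], i.e. - 2 * 2
```

The three printed results reproduce the three P4 products listed above.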

> > It exists in every even signed system. In effect what was a zero

> > becomes a zero axis to the field definition. The dimensionality does

> > not allow the strict field definition to apply. This does not trouble

> > me. The polysigned numbers challenge existing mathematics on many

> > levels. It's because existing mathematics is built upon the real

> > numbers. The singular zero of this field requirement becomes an axis in

> > general. I can't change the definition of a field. To quibble over this

> > as a stopping point is not relevant. The polysigned system behaves on

> > its own terms.

> >

>

> Alright, since you brought it up, let's quibble. Perhaps some insight

> might be gained. Have you invented a new math, or have you simply

> found some interesting properties of the old math?

>

> If the polysigned system behaves on its own terms, then we should

> highlight those differences. Nothing to be gained by avoiding the

> stopping points; we should approach them directly and stop at them.

>

> To represent zero as a half dimension doesn't make any sense. You say

> there is some fundamental difference in P2 between +7-5 and +2. To me

> they are simply different representations of the same number.

I would not word it quite that way. It's a matter of the operations

that we perform arithmetically. One of them is optional. It is embodied

by the identity

- x + x = 0 (P2) .

The second to last standard field law is:

Existence of additive inverses

For every a belonging to F, there exists an element -a in F, such

that a + (-a) = 0.

If we forgo this law we can still do any arithmetic to the point of an

expression like +7-5.

When we generalize sign we see this odd thing down at P1:

- x = 0 .

By my argument we can still do arithmetic in P1. It's just that when we

go to perform this evaluation we get zero. P1 is zero dimensional and

these qualities are defining what zero dimensional means. The

correspondence to time is remarkable.

As we quibble over this I am claiming a conflict with the flat

definition of a field. I am not about to argue with mathematicians that

they are wrong. That would be futile. Mathematics as a free language

says that you can build whatever you want. As long as you follow your

rules the results become a measure of the value of those rules.

If the polysigned numbers could speak for themselves in the court of

fields they would say that the inverse field law should be viewed as an

operation that has a more general form than a strict binary inverse.

They also say that the multiplicative inverse may need to exclude more

than just zero. I'm just happy to look at them and see what they do

without bothering to worry about these problems. Traditional

mathematics is eccentric to the real numbers. How many times have I

seen people try to define the polysigned numbers as built from the real

numbers? A lot. Yet it is the polysigned construction that builds the

real numbers and the complex numbers from the same simple rules. They

develop higher dimension systems with a sum and a product that behave

just as they do for the real numbers. That is all new.

I know that you won't find this convincing. We've already been through

this without the field law references. I guess I'm willing to discuss

it endlessly but it sure would be nice to find a nifty way to convince

you. I've given you arithmetic examples. The graphical representation

says a lot. This math is easy to learn, but is not fully developed and

won't be without a lot of work. Still the sum and the product are

complete and they are the grounds for other operations. I still have

not found a general conjugate. It exists in P3 but I can't find it in

P4 or anywhere higher up. Division becomes a lot of work using linear

equations and I haven't gotten far with it. Maybe your inverse matrix

approach will work.

>

> > You've reminded me of

> > http://groups.google.com/group/sci.math/msg/11ceb7ea329e2a97

> > Roger Beresford's terplex product exactly matches so he must have the

> > same matrix form.

> > It's readily apparent when you look at his unit vector definition.

> > But again without the identity which drops the actual dimensionality by

> > one.

> > We hashed it out on that thread but I have failed to recall it as a

> > resource. It does exist in other usage and sci.math just isn't a very

> > good place to query for the info. He has code too:

> > http://library.wolfram.com/infocenter/MathSource/4894/

> > His stuff deals in orthogonal bases.

> > Augmented with the forward and reverse Cartesian transforms it would be

> > sufficient to do general polysigned math.

>

> Cool.

>

> > This approach would skip over your latest move of SC slanted space. The

> > identity is somewhat optional to the system.

>

> What are you referring to here?

SC being 'special Cartesian' where the last component is forced to

zero.

The approach above would simply do the orthogonal Cartesian transform.

It would use the transform vectors at the bottom of:

http://groups.google.com/group/sci.physics.relativity/msg/e4b45e2de0541d59

>

> > You won't need it except

> > that equivalent values don't necessarily look the same until reduced.

> > The inverse transform will take care of that for you as long as you

> > look at your values in Cartesian. Keep in mind that above P3 the

> > solutions become rotationally variant so what was isotropic becomes

> > anisotropic yet maintains linearity.

>

> i.e. There ARE special directions. Multiplying by #1 maintains the

> shape, but multiplying by *1 inverts everything.

I prefer 'rotates' to 'inverts'. I have yet to prove that the sign unit

vectors are the only preservative factors. But I'm pretty sure that is

true. e.g. in P4 any object multiplied by *1 will be a rotated version

of itself with no morphing but a unit vector in the *1-0.2 direction

will rotate and morph the object according to the P4 product

deformation study.

> > I'm not really comfortable with

> > the word isotropic. It seems to carry a haze around with it.

> >

> > In effect your generic Cartesian product is a function that looks like:

> >

> > Cartesian Product( Cartesian c1, Cartesian c2 )

> > { // this does the polysigned product.

> > // caution: the transform orientation matters.

> > // |AB| = |A||B| breaks above 2D.

> > Cartesian s1( c1.dim + 1 ) = PolySignXform( c1 );

> > Cartesian s2( c2.dim + 1 ) = PolySignXform( c2 );

> > Cartesian s3( c1.dim + 1 ) = TerplexProduct( s1, s2 );

> > Cartesian c3( c1.dim ) = InversePolySignXform( s3 );

> > return c3;

> > }

> >

>

> Hmmm.

>

> > Even without the identity function this will work fine. In effect the

> > identity has been embodied by the transform.

>

>

> Still not quite sure what you mean by the identity.

I just mean the cancellation law:

- x + x * x # x = 0 (P4) .

>

> > The usual vector sum is

> > the same in either space so there is no trouble there. This should be a

> > general solution upon generalizing the transform and if you are willing

> > to work in a particular sign like P4 that part can be hard coded for

> > now. This might help turn others on to the system.

> >

> > -Tim

>

> Whether I'm willing to work is a key question. ;-) At this stage, I

> can see it might be worthwhile to put forth some effort into developing

> P6, P10, P12, P16, etc. Not the primes--which you've mentioned are

> flat, but those with prime-1 signs corresponding to the prime number

> based Galois Fields. Just a thought, and it might be totally wrong.

>

> Also, see if you can convince me or anybody that P1 is a construction

> worthy of notice. A clue might lie in GF[2] (or might not.)

I understand. You've certainly spent plenty of time on this

construction and I've learned some things through that process. If it

weren't for the time correspondence I'd have very little to say about

P1 but you can also use it as a starting point to get the whole deal.

So a naive person says, "Wait a minute. Time only goes one way. The

real numbers go two ways. If I just hack off half of them I'll get

time." Then this naive person goes on to wonder about adding a sign to

the real numbers. The puzzle when solved leads to the polysigned

numbers. They are a trivial solution. But now taking what one has

learned about sign in general and applying it back at P1 takes on a

surprise that is congruent with the paradox of time and allows time to

be placed structurally and symmetrically with space. No rod or ruler

will measure time directly as they can measure space. It is not

geometrically possible.

Until a physics model arises that gets even more I accede that the

spacetime congruence may be only coincidental. To take this gamble is a

personal choice. The probability of failure is quite high but the

construction is so suggestive to me that I have no trouble with that.

When I found out that P4 and up were broken that was enough for me. I'd

been looking for it and it came out in such a simple principle of

distance. So now the product is the focus of my study since it is the

product that begets the spacetime claim. We see classical particle

models operating under a product relationship (albeit in a reciprocal

space) and so I try to model the space we are in as a particle product

space rather than a Cartesian product space. That takes me up to where

we started. All along the way there is a lot to be filled in. Other

arguments are coming along also like the relative 2D generic particle

scenario. I am fairly comfortable with the notion of an axis being an

inherent quality of a particle for purely geometrical reasons. In a 2D

(P3) particle product scene one is left with the horrible notion of

these angles adding to each other via the product and everything stops

making sense. Particles from far away are modifying the axis just as

much as those nearby. So something is needed just like the reciprocal

space and I have no idea what it is.

Probably that's too cryptic but that is the top of my stack. Down lower

there are the basic math possibilities that may lead somewhere too. To

argue about field laws will not be productive. The raw math is

straightforward. I am interested in convincing anyone to spend time

studying the construction in the hopes that it will prove useful.

Unfortunately it is difficult for people to understand the simplicity

of it. I believe that this is due to the real numbers being the footing

for their internal model.

-Tim


Jun 12, 2006, 4:31:10 AM6/12/06

to

Timothy Golden BandTechnology.com wrote:

> I'm unburying this thread in the hopes that you will respond.

> The form:

> s30 = s10 s20 + s11 s22 + s12 s21

> s31 = s11 s20 + s10 s21 + s12 s22

> s32 = s12 s20 + s10 s22 + s11 s21 .

> is extensible to any dimension and provides a commutative product

> definition.

If we define addition and the scalar product in the obvious way, this

defines a commutative, associative, and distributive real algebra. So

you know it can only be either

the direct sum of three real factors, or a real and a complex. It turns

out to be the latter. If you take w to be a root of unity, then the

basis elements are {1,w,w^2} and the elements of the algebra can be

equated with a + bw + cw^2. The "a" part, from the 1, is the real

factor and the {w,w^2} the complex factor in R+C.

Jun 12, 2006, 2:19:19 PM6/12/06

to

So instead of a 3x3 you could do a 5x5.

The form is extensible.

It seems like the form in general should have been developed and named

some time in the past. Beresford does use it as something he calls

terplex and applies prolifically. I have used it for polysigned numbers

to define their arithmetic product. In the case of polysigned numbers

the actual dimension is dropped one by the law:

( 1, 1, ... ) = 0 .

This makes the coordinate system follow a simplex geometry. The s

positions are literally signs in this construction and their values are

magnitudes. This allows a product and sum definition in any whole

dimension that is commutative, distributive, and associative.

The general product format codes very simply as a software algorithm.

It is a commutative operator that should have been named generically.
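
As a sketch of that algorithm (my code, not Tim's): component k of the product collects every pair of components whose sign indices sum to k mod n, i.e. a cyclic convolution, and the same few lines serve any number of signs:

```python
import random

def pprod(a, b):
    # general n-sign product: cyclic convolution of the component lists
    n = len(a)
    c = [0.0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

random.seed(1)
x, y, z = ([random.random() for _ in range(5)] for _ in range(3))

# commutative and associative in any dimension (here n = 5, up to rounding)
assert all(abs(p - q) < 1e-12 for p, q in zip(pprod(x, y), pprod(y, x)))
assert all(abs(p - q) < 1e-12
           for p, q in zip(pprod(pprod(x, y), z), pprod(x, pprod(y, z))))
print("commutative and associative for n = 5")
```

Taking n = 3 recovers exactly the s30, s31, s32 formulas quoted earlier in this message.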

-Tim

Jun 12, 2006, 3:09:17 PM6/12/06

to

Timothy Golden BandTechnology.com wrote:

> Gene Ward Smith wrote:

> > If we define addition and the scalar product in the obvious way, this

> > defines a commutative, associative, and distributive real algebra. So

> > you know it can only be either

> > the direct sum of three real factors, or a real and a complex. It turns

> > out to be the latter. If you take w to be a root of unity, then the

> > basis elements are {1,w,w^2} and the elements of the algebra can be

> > equated with a + bw + cw^2. The "a" part, from the 1, is the real

> > factor and the {w,w^2} the complex factor in R+C.

> What happens when you raise the dimension?

> So instead of a 3x3 you could do a 5x5.

For any odd prime p, you can take a primitive p-th root of unity and

construct a similar

algebra over the rationals Q which is Q+cyclotomic field. Over the

reals, the cyclotomic field part breaks apart into (p-1)/2 copies of C.

> The form is extensible.

> It seems like the form in general should have been developed and named

> some time in the past.

Why? I don't understand what the use of it is. Mostly, fields are

preferred over commutative algebras which are not fields, because the

latter can be understood in terms of the former. Noncommutative

algebras, such as central simple algebras, are another story with a

very rich theory.

Jun 12, 2006, 4:28:51 PM6/12/06

to

fields. I didn't mean to restrict the possibility to primes. For

instance a 4x4 could be considered as well. Anyhow you are saying that

C (the complex numbers) will pop out again almost in a raw form.

However the product will wind up being defined across these. I have

seen this in P4 (the four-signed numbers, the 4x4 product form which

comes down to 3D) yet the product is slightly different than the

orthogonal product of its parts. In effect P4 looks like R x C or P2 x

P3, but I don't quite have it yet. There is a small error between the

two and I think there must be some dimensional mixing. But the

resemblance is certainly there.

There is a parity being expressed in the polysigned family in adjacent

dimensions that may be producing something close to cyclotomic

behavior.

You ask why. One simple reason is that I can generate the complex

numbers and the real numbers from a few simple rules using this

product. I can claim that it is the most compact definition, getting

both in one fell swoop and a step. You say commutative algebras are not

fields but I suppose you mean that they are not necessarily fields.

That they may be fields is still a possibility.

I am aware that quaternions are noncommutative and support your very

rich theory statement. They enforce:

| A B | = | A || B |

whereas P4 breaks this conservation law yet allows distributive,

associative and commutative properties just like real number algebra.

The entire polysigned family is very close to satisfying the field

requirements, but some of the statements don't make sense. Like when

the multiplicative inverse excepts zero the polysigned family would

like to except an axis of solutions in the high even signs. Finding

these multiplicative inverses are challenging and have yet to be

resolved as doable or not doable. So far I only know that they are not

doable on the identity axes.

Do you believe the field laws to be perfect? They seem to be eccentric

to the real numbers.

-Tim

Jun 12, 2006, 5:06:55 PM6/12/06

to

Timothy might be happy I mentioned his polysigned numbers in another topic ? :-)

further to answer a physics question

where did the antimatter go , when the universe was created ??

simple

the big bang is simply an antimatter-black-hole

and bye absorbing antimatter , the matter got repelled ,and still is ...Hubbles constant

what do you think about my critical torus Anthony ?

post there if ya want

greetz

tommy

Jun 12, 2006, 5:29:11 PM6/12/06

to

Timothy Golden BandTechnology.com wrote:

> I'm only partially absorbing what you have said about cyclotomic

> fields. I didn't mean to restrict the possibility to primes. For

> instance a 4x4 could be considered as well. Anyhow you are saying that

> C (the complex numbers) will pop out again almost in a raw form.

There's a structure theorem for real algebras which will always make

this likely. Real algebras can have nilpotent elements, like the "dual

numbers", which are R[e] with

e^2=0. There are also infinite-dimensional algebras, like the algebra

of polynomials in n indeterminates, or "symmetric algebra", R[x1, ...,

xn]. However, if your real algebras are finite-dimensional and have no

nilpotent elements, like the ones you are considering, then they are

isomorphic to n copies of the reals plus m copies of the complex

numbers, though the isomorphism is one you have to find.

> However the product will wind up being defined accross these. I have

> seen this in P4(the four-signed numbers, the 4x4 product form which

> comes down to 3D) yet the product is slightly different than the

> orthogonal product of its parts.

Exactly; you have to find out what corresponds to the square root of

minus one in each of these. For instance, if w is a third root of

unity, it's a root of w^2+w+1=0 and (w-w^2)/sqrt(3) will give a square

root of minus one. That tells you how to make the polysign numbers for

n=3 look like real+complex numbers, with the complex numbers written in

the usual way.
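
A quick numerical check of that remark (my example, taking w = exp(2*pi*i/3) as the primitive third root of unity):

```python
import cmath, math

w = cmath.exp(2j * math.pi / 3)      # primitive third root of unity
assert abs(w**2 + w + 1) < 1e-12     # w is a root of w^2 + w + 1 = 0

u = (w - w**2) / math.sqrt(3)
assert abs(u**2 + 1) < 1e-12         # u is a square root of minus one
print(u**2)                          # approximately (-1+0j)
```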

> You ask why. One simple reason is that I can generate the complex

> numbers and the real numbers from a few simple rules using this

> product.

Indeed you can, since the 3-dimensional polysign numbers are

Reals+Complexes.

> I can claim that it is the most compact definition, getting

> both in one fell swoop and a step.

That I don't see. You are assuming you already know the reals to define

the polysign numbers. You'd have to do something like define them over

the rationals and then define how to complete them.

> You say commutative algebras are not

> fields but I suppose you mean that they are not necessarily fields.

> That they may be fields is still a possibility.

Yes, but the only real algebras which are fields are the real and

complex numbers.

> The entire polysigned family is very close to satisfying the field

> requirements, but some of the statements don't make sense.

Yes, they satisfy the commutative, associative, and distributive laws

for both addition and multiplication, and have no nilpotent elements.

> Like when

> the multiplicative inverse excepts zero the polysigned family would

> like to except an axis of solutions in the high even signs. Finding

> these multiplicative inverses is challenging and they have yet to be

> resolved as doable or not doable.

I'm afraid the inverse does not always exist; the trouble is you have

zero divisors, which are two nonzero numbers which when multiplied are

zero. An example with the 3-polysign numbers is [1,1,1] x [-2,1,1] =

[0,0,0].

> Do you believe the field laws to be perfect? They seem to be eccentric

> to the real numbers.

I have no idea what that means.

Jun 12, 2006, 6:08:53 PM6/12/06

to

Gene Ward Smith wrote:

> I'm afraid the inverse does not always exist; the trouble is you have

> zero divisors, which are two nonzero numbers which when multiplied are

> zero. An example with the 3-polysign numbers is [1,1,1] x [-2,1,1] =

> [0,0,0].

I should add that if we set e1 = [1,1,1]/3 and e2=[2,-1,-1]/3, then not

only do we have

e1 x e2 = 0, we have e1^2 = e1 and e2^2 = e2. Therefore, e1 and e2 are

what are called orthogonal idempotents, and can be used to split the 3D

polysign numbers into their real and complex parts:

real part = e1 x number

complex part = e2 x number
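
These claims check out numerically, treating 3-polysign values as plain triples under the cyclic-convolution product (a sketch of mine, using exact fractions to avoid rounding):

```python
from fractions import Fraction

def pprod(a, b):
    # product of 3-polysign values as a cyclic convolution
    n = len(a)
    c = [Fraction(0)] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

e1 = [Fraction(1, 3)] * 3                                # [1,1,1]/3
e2 = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)]  # [2,-1,-1]/3

assert pprod(e1, e1) == e1           # e1 is idempotent
assert pprod(e2, e2) == e2           # e2 is idempotent
assert pprod(e1, e2) == [0, 0, 0]    # orthogonal: their product is zero
print("e1 and e2 are orthogonal idempotents")
```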

Jun 12, 2006, 6:16:41 PM6/12/06

to

Gene Ward Smith wrote:

I should also add that if you want to pursue this topic in the

mathematical literature, look under "group algebra". The traditional

name being the "cyclic group algebra" R[Cn], where Cn is the cyclic

group of order n.

Jun 13, 2006, 7:21:06 AM6/13/06

to

No. Well, maybe this is true but what I meant is that P2 are the real

numbers and P3 are the complex numbers. The only difference between the

two constructions is a natural number n.

This comes from the product rule this thread discusses and the

cancellation law. Those are the two rules. Superposition may also need

to be defined. This construction does assume magnitudes (scalars) to

exist so may be construed by traditional mathematics as relying upon

the real numbers. I disagree with this thinking. Magnitude is

fundamental. In this regard the real numbers are built by allowing two

signs to be combined with the scalar concept. Increasing to three signs

generate the complex numbers. The rules are the same for both systems.

They can be extended upward and downward. The family is called the

polysigned numbers. So sign becomes a general discrete entity attached

to scalars. Dimensionality falls out directly. The geometry is unusual.

It is nonorthogonal and perfectly symmetrical. When one considers a

lattice structure stepping around on the lattice is a unidirectional

process. The option to travel back along a segment does not exist above

P2. Instead a loop must be formed if one wishes to return to the same

location. I mention it to help distinguish the geometry from the usual

2D space. When the system is continuous this phenomenon vanishes.

>

> I can claim that it is the most compact definition, getting

> > both in one fell swoop and a step.

>

> That I don't see. You are assuming you already know the reals to define

> the polysign numbers. You'd have to do something like define them over

> the rationals and then define how to complete them.

>

> You say commutative algebras are not

> > fields but I suppose you mean that they are not necessarily fields.

> > That they may be fields is still a possibility.

>

> Yes, but the only real algebras which are fields are the real and

> complex numbers.

>

> > The entire polysigned family is very close to satisfying the field

> > requirements, but some of the statements don't make sense.

>

> Yes, they satisfy the commutative, associative, and distributive laws

> for both addition and multiplication, and have no nilpotent elements.

> Like when

> > the multiplicative inverse excepts zero the polysigned family would

> > like to except an axis of solutions in the high even signs. Finding

> these multiplicative inverses is challenging and they have yet to be

> > resolved as doable or not doable.

>

> I'm afraid the inverse does not always exist; the trouble is you have

> zero divisors, which are two nonzero numbers which when multiplied are

> zero. An example with the 3-polysign numbers is [1,1,1] x [-2,1,1] =

> [0,0,0].

You've got some misunderstanding here. The three-signed numbers are the

complex numbers so will work. [1,1,1] is zero by definition. So you are

using zero which is excluded from the inverse relationship. n-signed

numbers are n-1 dimensional due to this relation. Each component of a

number is a scalar. So up in 3D we have four-signed numbers which will

suffer this problem:

( - 2 + 2 ) ( + 3 # 3 ) = 0 .

Neither of these values is zero. However, one of them does lie on the

identity axis.

The polysigned numbers rely upon equal magnitudes in every sign

yielding zero:

P2: - x + x = 0 (the real numbers)

P3: - x + x * x = 0 (the complex numbers)

P4: - x + x * x # x = 0 (start of anisotropic members)
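
The P4 zero divisor above can be verified mechanically. A sketch (mine), assuming component order [#, -, +, *] with sign indices adding mod 4, and applying the P4 cancellation law to reduce the result:

```python
def pprod(a, b):
    # polysign product: sign indices add modulo n (a cyclic convolution)
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

def cancel(v):
    # - x + x * x # x = 0: subtract the common magnitude from every sign
    m = min(v)
    return [x - m for x in v]

a = [0, 2, 2, 0]   # - 2 + 2
b = [3, 0, 3, 0]   # + 3 # 3
print(pprod(a, b))          # [6, 6, 6, 6]: equal magnitude in every sign
print(cancel(pprod(a, b)))  # [0, 0, 0, 0]: the product reduces to zero
```

Neither factor is zero, yet the raw product - 6 + 6 * 6 # 6 cancels to zero, exactly as claimed.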

>

> > Do you believe the field laws to be perfect? They seem to be eccentric

> > to the real numbers.

>

> I have no idea what that means.

Math as a religion. The book has been written. One must preserve the

book.

There are few constructions that can go below the existing foundation.

I know it is asking a lot for you to go there. Must a scalar be built

from the real numbers? Which is more fundamental? Certainly the scalar

is.

-Tim

Jun 13, 2006, 7:23:04 AM6/13/06

to

tommy1729 wrote:

> Timothy might be happy I mentioned his polysigned numbers in another topic ? :-)

I'll post there.

-Tim

Jun 14, 2006, 10:59:16 PM6/14/06

to

Timothy Golden BandTechnology.com wrote:

> Spoonfed wrote:

> > Timothy Golden BandTechnology.com wrote:

> > > Spoonfed wrote:

> > > > http://en.wikipedia.org/wiki/Galois_field

> > > As I look at the tables at the bottom of that wiki page I don't see

> > > much alignment except that GF(3) addition table looks like P3

> > > multiplication table. The reliance on primes seems to be basic to

> > > Galois Fields. In polysign the prime sign systems are very flat in

> > > terms of symmetry of the signs as operators. But there is no

> > > restriction about primes.

> > >

> >

> > Look closer. The multiplication table for GF(3) corresponds to P2.

> > GF(4) corresponds to P3.

>

> Got it. OK. Nice pattern recognition.

I was lucky enough to have to do a few dozen multiplication and

addition problems in GF(4) for homework, so it didn't take too long to

see the similarity.

> But I still see a wide divide

> between the constructions.

One being finite fields where only 0, -1, +1, *1 exist and the other

being a field containing -x, +y, *z, and all combinations of the three

where x, y, and z are nonnegative real numbers. It is a wide divide.

> I suppose as a cross-study it is relevant to

> compare the two. I can see that treating the signs as discrete

> operators sends one toward this domain. But let's not forget that the

> polysigned system is a continuum. I suppose one could ponder how to

> jump to a continuum construction from a finite construction and I

> suppose adjacency would be an appropriate tool.

> As we begin counting we

> then realize that scale can be useful and that there is no need to

> limit oneself to unit values, though D-theorists seem to be forgoing

> that scalar concept:

> http://koti.mbnet.fi/mpelt/tekstit/dtheory.htm

Interesting, but most of this goes off in a wild angle from my most

successful attempts to understand what little I know of reality.

> There are some surprising congruencies to the polysigned lattice though

> the constructions are motivated differently.

> What I have absorbed so far of the Galois construction is that it is

very primitive. More so than the polysigned numbers. Its motivation

> and consequences are not very clear to me. Also that zero product that

> wipes everything away seems pretty destructive. In polysign math the

> notion of a zero sign is just the identity sign, which preserves the

> state of the operand.

The textbook I'm reading is called "Fundamentals of Error Correcting

Codes," and I realized I'd better look a little deeper. Found in

chapter 3.3.2, it says "Theorem 3.3.2 The elements of (a base q field)

are precisely the roots of x^q - x."

> The Galois sum miffs me. Why would like values sum to zero? If

> summation is superposition, shouldn't inverse values sum to zero? The

> construction will not yield dimensionality.

I'd been trying to picture some sort of modular arithmetic with vectors

inside a unit circle, where a point on one side of the circle maps to a

point on the opposite side. I just sketched it, and it seems to work

for every sum. I'll bet it works this way with every one of the Galois

Fields. Which brings us right back to where we started, breaking free

of a two-dimensional circle and out into arbitrary-dimensional space.

>

> The polysigned numbers require cancellation to exist via equal amounts

> in each orientation. Symmetry and superposition together form the heart

> of the polysigned numbers and generate dimensionality.

>

> > > P4 and all higher even sign systems are not fields since division does

> > > not necessarily work. The last requirement of the mathematical field

> > > is:

> > >

> >

> > If multiplication in P4 can be represented by matrix multiplication,

> > division should be possible through multiplying by the inverse matrix.

>

> I'll try to study this approach. The product is not actually a matrix

> multiplication though is it?

> I'm not seeing it for the moment. I see you have a matrix in the

> nonorthogonal Cartesian system. But will the inverse work there?

>

Oh my. Remind me when I have time. I think so. We found a way to

represent multiplication by any given Polysign as a matrix. I'm pretty

sure it was an invertible matrix. And I'm pretty sure that each matrix

can be mapped back to a unique (or as unique as possible--considering

that by your concept, different representations of the same polysigned

number actually represent different numbers...) polysigned number.

All matrices with nonzero determinants have inverses. However, the

identity associated with the multiplicative inverse (the way I

calculated them) in this case will be #1 in P4, or *1 in P3, etc. You

define the identity to be a different axis below.

ASIDE: I think the null axis would be a good name for the points lying

along what I consider to be equal coordinates, such as -3+0*0#0 =

-2+1*1#1 = -1+2*2#2

> Any value

> multiplied by a value on this axis will come out on this axis.

OH!!!! I'm starting to get a feel for what you mean now. That's

called an EIGENVECTOR! Well, okay, an eigenvector is a vector whose

direction is not changed when it is multiplied by a quantity called the

eigenvalue which is sometimes real, and sometimes complex. In this

case, you are looking for something that may not be quite the same as

an eigenvector or eigenvalue, (especially since both vector and value

will have the same form) but it's really similar.

I think in P3, (*1)(*1) = *1,

and in P4 (#1)(#1)=#1,

so these would have a similar property

I recommend putting this in a prominent location on your website. Kind

of reminds me of getting rid of the parallel lines postulate for

non-Euclidean geometry.
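
Those two identities can be confirmed with a small convolution sketch (mine, not from the thread); the identity sign ('*' in P3, '#' in P4) is assumed to sit at component 0, so its unit vector is idempotent under the product:

```python
def pprod(a, b):
    # polysign product: sign indices add modulo n (a cyclic convolution)
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

assert pprod([1, 0, 0], [1, 0, 0]) == [1, 0, 0]           # P3: (*1)(*1) = *1
assert pprod([1, 0, 0, 0], [1, 0, 0, 0]) == [1, 0, 0, 0]  # P4: (#1)(#1) = #1
print("the identity-sign units are idempotent")
```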

> As we quibble over this I am claiming a conflict with the flat

> defintion of a field. I am not about to argue with mathematicians that

> they are wrong. That would be futile. Mathematics as a free language

> says that you can build whatever you want. As long as you follow your

> rules the results become a measure of the value of those rules.

>

Well, I read a little bit of the chapter of my textbook on finite

fields, but it has been taking me about two or three readings before it

begins to make sense to me. From the feel of it, it seems like the

Galois field comes more from a set of precisely defined terms and

solutions to polynomial equations.

> If the polysigned numbers could speak for themselves in the court of

> fields they would say that the inverse field law should be viewed as an

> operation that has a more general form than a strict binary inverse.

> They also say that the multiplicative inverse may need to exclude more

> than just zero. I'm just happy to look at them and see what they do

> without bothering to worry about these problems. Traditional

> mathematics is eccentric to the real numbers. How many times have I

> seen people try to define the polysigned numbers as built from the real

> numbers? A lot. Yet it is the polysigned construction that builds the

> real numbers and the complex numbers from the same simple rules. They

> develop higher dimension systems with a sum and a product that behave

> just as they do for the real numbers. That is all new.

>

> I know that you won't find this convincing. We've already been through

> this without the field law references. I guess I'm willing to discuss

> it endlessly but it sure would be nice to find a nifty way to convince

> you. I've given you arithmetic examples. The graphical representation

> says a lot. This math is easy to learn, but is not fully developed and

> won't be without a lot of work. Still the sum and the product are

> complete and they are the grounds for other operations. I still have

> not found a general conjugate. It exists in P3 but I can't find it in

> P4 or anywhere higher up. Division becomes a lot of work using linear

> equations and I haven't gotten far with it. Maybe your inverse matrix

> approach will work.

>

Well, I'm certainly intrigued by the idea, and it will be running

around in my head if I see something else to link to it.

I think some of them might rotate, while others invert. I noticed that

in your sphere animation the swirls get flat and then expand again. I

assume that is going from positive to 0 to negative, thus inverting.

But it also rotated.

Ah, good. I can respect that. Declare your intention to look for

further links in this direction. Some people say that a scientist

should never do this. But that's silly. I remember when I was in Jr.

High and with every problem I hoped against hope that the answer would

come out to be positive instead of negative. It didn't keep me from

getting the right answer, whichever way it turned out to be.

> When I found out that P4 and up were broken that was enough for me. I'd

> been looking for it and it came out in such a simple principle of

> distance. So now the product is the focus of my study since it is the

> product that begets the spacetime claim. We see classical particle

> models operating under a product relationship (albeit in a reciprocal

> space) and so I try to model the space we are in as a particle product

> space rather than a Cartesian product space.

I know that the word Hermitian comes up a lot in discussion of GF[4]

and in quantum mechanics. Maybe I'll eventually see how this all links

together. If I remember right it's a variation on the theme of complex

conjugate. Of course P4 is analogous to GF[5] and I haven't seen much

of that.

> That takes me up to where

> we started. All along the way there is a lot to be filled in. Other

> arguments are coming along also like the relative 2D generic particle

> scenario. I am fairly comfortable with the notion of an axis being an

> inherent quality of a particle for purely geometrical reasons. In a 2D

> (P3) particle product scene one is left with the horrible notion of

> these angles adding to each other via the product and everything stops

> making sense. Particles from far away are modifying the axis just as

> much as those nearby. So something is needed just like the reciprocal

> space and I have no idea what it is.

Reminds me vaguely of discussions of chromodynamic forces. Hmmm, as a

matter of fact it reminds me a LOT of discussion of chromodynamic

forces. Red, Green and Blue quarks, attracted or repelled by a force

which is constant over distance. Green and Blue are equivalent to

anti-red, Green and Red are equivalent to anti-blue, etc.

Okay, that's my trademark pattern match for today.

Jun 14, 2006, 11:38:33 PM

to

I know it seems important to Tim to have [1,1,1] not equal to zero, and

not allow negative values into his setup. But I think Timothy has come

up with a field which is neither a set of complex numbers nor a set of

real numbers.

But at the very least, Tim has put together a well defined addition,

additive identity, multiplication, and multiplicative identity for

an arbitrary length set of real numbers, i.e. P2 is a group in {R}, P3

is a group in {R,R}, P4 is a group in {R,R,R}, etc. where R is the

reals.

Up until now, the only vector multiplication I knew of that could be

done in arbitrary dimension was the dot product. I don't know whether

this is new to the world, but it is new to me.
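The structure described here, a sum rule and a product rule that run in any dimension n, can be sketched in a few lines. The definitions are my reading of the thread (componentwise sum; sign rule s_i * s_j = s_(i+j) mod n), not code from the website:

```cpp
#include <vector>
#include <cstddef>

using Poly = std::vector<double>;  // component k = magnitude carried by sign s_k

// Componentwise sum: magnitudes on like signs accumulate.
Poly polySum(const Poly& a, const Poly& b) {
    Poly c(a.size());
    for (std::size_t k = 0; k < a.size(); ++k) c[k] = a[k] + b[k];
    return c;
}

// Product: sign rule s_i * s_j = s_{(i+j) mod n}; the same loops work in any n.
Poly polyProduct(const Poly& a, const Poly& b) {
    std::size_t n = a.size();
    Poly c(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            c[(i + j) % n] += a[i] * b[j];
    return c;
}
```

In P2, with s0 = '+' (the identity) and s1 = '-', the value -3 is {0, 3} and +5 is {5, 0}; polyProduct returns {0, 15}, i.e. -15, so P2 reproduces real multiplication, and the same two loops run unchanged in P3, P4, and beyond.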

Jun 15, 2006, 5:08:14 PM

to

Hi Jon. Nice to hear from you. This has been one of the most productive

threads to date for me. I get very little work done so communicating

here is an important part of my process. I appreciate that you have

been willing to explore this system. Most people can't do it. The need

to cast judgement and the consequences of that judgement are too much.

If they judge me right or wrong they themselves might be right or wrong

and so the probability of getting success drops to 25%. Thus far no

refutation has denied any of the basics. Just the same there has been

little verification. You've independently demonstrated that the P4

product does not conserve magnitude so that is a nice step. My computer

tells me that it is true and I can verify the math on a piece of paper

to a point but to bother with all of the forward and inverse

transformation is a lot of work. I have been introduced to the term

'cyclotomic' but it seems again to be about finite fields. And the

notion that R and C will keep popping out of the higher spaces does

seem reasonable. So the notion of translating the P4 product over to

( R x C )

is still a going concept. The catch is that the product definition is a

mystery in that space. If that matches some existing mathematics then

the tie could be a nice bonus, but either way the P4 product is the P4

product on its own terms.

I'll try to look into the quark interaction. I made a little mental

step on this angle problem recently. Remember that the product is a

force relationship so the result is an acceleration. The same concept

in angle land is actually what is inferred. So the angular product

result would hopefully just be a differential adjustment to the angle.

There is still plenty of haze there but the horror may be a mental

malfunction. The distance notion still is independent of it when the

product is taken literally, so I am still excited about your quark

color statement.

-Tim

Jun 15, 2006, 8:04:26 PM

to

Timothy Golden BandTechnology.com wrote:

> Most people can't do it. The need

> to cast judgement and the consequences of that judgement are too much.

Here's a free clue: on average, mathematicians probably understand

mathematics better than you do, so if this is directed at the sci.math

readership I think it's a misguided missile.

> So the notion of translating the P4 product over to

> ( R x C )

> is still a going concept. The catch is that the product definition is a

> mystery in that space.

I've already explained exactly how to do it. There is no catch. It's a

done deal.

> If that matches some existing mathematics then

> the tie could be a nice bonus, but either way the P4 product is the P4

> product on its own terms.

I've already explained exactly how this ties into existing mathematics;

if you are really interested you can find out.

Jun 15, 2006, 10:22:43 PM

to

...or they may have something else at the top of their stack.

> The need

> to cast judgement and the consequences of that judgement are too much.

> If they judge me right or wrong they themselves might be right or wrong

> and so the probability of getting success drops to 25%. Thus far no

> refutation has denied any of the basics. Just the same there has been

> little verification. You've independently demonstrated that the P4

> product does not conserve magnitude so that is a nice step.

> My computer

> tells me that it is true and I can verify the math on a piece of paper

> to a point but to bother with all of the forward and inverse

> transformation is a lot of work. I have been introduced to the term

> 'cyclotomic' but it seems again to be about finite fields.

From what I can figure, I would say that

*real numbers is to binary

as

*(reduced) polysigned is to Galois Fields.

As for the P4 product not conserving magnitude, I would encourage you

look at your own animation again, and check how multiplying sets of

points on the unit sphere by vectors on the unit sphere yield

quantities inside the unit sphere (and thereby the product has smaller

magnitude than its factors.) Judging strictly from that animation, it

seems to me that the magnitude of the product will always be less than

or equal to the product of the magnitudes. Also the vectors where the

sphere maintains its original shape and the vectors where the sphere is

flat as a pancake are probably worthy of identifying and naming.

> And the

> notion that R and C will keep popping out of the higher spaces does

> seem reasonable.

Now analyzing my own silly motivations for pursuing this whereas nobody

else does. I hate C. Historically there was a pretty big debate about

whether imaginary and complex numbers should be admitted into formal

mathematics. But imaginary numbers had too many good applications. So

even if we have something *completely* equivalent to the set of complex

numbers, and adding absolutely nothing to any branch of physics or

mathematics, I'd still be happy just to know there was a less ephemeral

explanation than complex numbers.
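For what it's worth, the equivalence can be checked concretely: sending sign s_k to the cube root of unity e^(2*pi*i*k/3) turns the P3 product into plain complex multiplication. A sketch assuming the mod-3 sign rule (the choice of primitive root only fixes the orientation; toComplex is my name, not the website's):

```cpp
#include <vector>
#include <complex>
#include <cmath>
#include <cstddef>

using Poly = std::vector<double>;
using C = std::complex<double>;

// P3 product: sign rule s_i * s_j = s_{(i+j) mod 3}.
Poly p3Product(const Poly& a, const Poly& b) {
    Poly c(3, 0.0);
    for (std::size_t i = 0; i < 3; ++i)
        for (std::size_t j = 0; j < 3; ++j)
            c[(i + j) % 3] += a[i] * b[j];
    return c;
}

// Evaluate a P3 value at a primitive cube root of unity.
// The cancellation -1+1*1 = 0 maps to 1 + w + w^2 = 0.
C toComplex(const Poly& a) {
    const double pi = std::acos(-1.0);
    const C w = std::polar(1.0, 2.0 * pi / 3.0);
    return a[0] + a[1] * w + a[2] * w * w;
}
```

toComplex(p3Product(a, b)) matches toComplex(a) * toComplex(b) to rounding error, which is the sense in which P3 "is" the complex numbers.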

> So the notion of translating the P4 product over to

> ( R x C )

> is still a going concept. The catch is that the product definition is a

> mystery in that space. If that matches some existing mathematics then

> the tie could be a nice bonus, but either way the P4 product is the P4

> product on its own terms.

I've got a philosophy that before you can come up with a good answer,

you've got to have a good question. First of all you should consider

whether (RXC) is a useful construction or even a usable construction.

Secondly, if you've already found a one-to-one and onto mapping into R

X R X R, will you benefit from finding a mapping into R X C? And

finally, if you think you will benefit, can you take the mapping from R

X R X R into R X C?

> I'll try to look into the quark interaction. I made a little mental

> step on this angle problem recently. remember that the product is a

> force relationship so the result is an acceleration. The same concept

> in angle land is actually what is inferred. So the angular product

> result would hopefully just be a differential adjustment to the angle.

> There is still plenty of haze there but the horror may be a mental

> malfunction. The distance notion still is independent of it when the

> product is taken literally, so I am still excited about your quark

> color statement.

>

> -Tim

The wind is rushing past my ears. You need to find a good college

library, and find a book aimed at somewhere around your level. I still

think what you're saying vaguely resembles quantum chromodynamics in that

they both involve three values and their inverses, and also they

resemble each other in that both are concealed by a black haze of

complete incomprehensibility.

Jun 16, 2006, 8:27:31 AM

to

Right. I've done some of this and I guess I'll publish some graphics

that explore the puzzle.

The real axis is the identity axis or the null axis as we are possibly

renaming it.

The complex plane is exhibited as normal to this line at the origin and

by using a squaring technique I get:

const Poly & P4SecondaryAxis()
{   // this is the proposed P3 '*' axis embedded in P4, found in CheckP3P2Relation
    static Poly s(4);   // four-signed value; components start at zero
    s[0] = 1.5;         // # (identity) component
    s[1] = 0.75;        // - component
    s[3] = 0.75;        // * component; s[2] (the + component) stays zero
    s.Unitize();        // scale to unit magnitude
    return s;
}

where s[0] is the identity sign and s[1] is the - sign, etc.

This vector is used to build the transforms that allow comparison of

the product in one space versus the other. The graphics are not enough

since they look so much alike.

I'm going to add a meridian to the shell algorithm and see what that

exposes. The spiral shape may be getting a twist (P4) along the real

axis (T3) that is hidden. I'll publish the comparisons so you can see

for yourself the coherent difference. It's like the T3 and P4 are twins

with a mini-me between them. Geeze wouldn't that be something if an

iterative product cancelled it out.

What you say about getting some hard max and min type of values is

true. I once did some of this for the P4 cone but this is more general.

Also the unitary values are in the same class of analysis so all of

that will fall out.

As to the QCD below here I urge you to consider two generic points in a

2D space. There is no need to use the polysign domain. Invoking

relativity on these two generic points will yield the features. That is

to say that by granting each point an independent referential

coordinate system additional qualities are exhibited by generic points

that are consistent with known point particles. No product operation is

needed to get this. Not much at all is needed to do the construction

but a definition of 2D. Relativity does the rest. Each point particle

is its own origin. Each point particle has its own referential zero

angle or unit ray or whatever you want to call it. The information

exhibited jumps to 3D since each particle exhibits independence in this

realm. So a two body problem that is traditionally a one dimensional

solution due to the unitary distance is much more when the

dimensionality is interpreted this way. The congruence to spin axes is

obvious.

Doing the same in one dimension yields a binary that is suggestive of

charge. There is also at least one more bit up in 2D if one allows left

and right handedness. In effect the 1D bit is also handedness. It could

also be construed as a 2-bit solution. Disambiguating the asymmetry of

charge is a puzzle. One would hope that the system would yield a heavy

proton and a light electron but instead these charges look

indistinguishable so I don't think this is a convincing model yet. P1

can only be one-handed so may allow some trickery with its vanishing

act.

Again for me the ultimate topology of spacetime is:

0D + 1D + 2D ... or P1 + P2 + P3 ...

This is the substrate that we (or our particles) are products of under

this model.

The breakpoint at P3 is natural and what is exhibited on the other

side of it is bizarre.

Whether the bizarre is eliminated or not does not have to be dealt with

yet.

This is really a classical model taken to a new level.

-Tim

> > And the

> > notion that R and C will keep popping out of the higher spaces does

> > seem reasonable.

>

> Now analyzing my own silly motivations for pursuing this whereas nobody

> else does. I hate C. Historically there was a pretty big debate about

> whether imaginary and complex numbers should be admitted into formal

> mathematics. But imaginary numbers had too many good applications. So

> even if we have something *completely* equivalent to the set of complex

> numbers, and adding absolutely nothing to any branch of physics or

> mathematics, I'd still be happy just to know there was a less ephemeral

> explanation than complex numbers.

>

> > So the notion of translating the P4 product over to

> > ( R x C )

> > is still a going concept. The catch is that the product definition is a

> > mystery in that space. If that matches some existing mathematics then

> > the tie could be a nice bonus, but either way the P4 product is the P4

> > product on its own terms.

>

> I've got a philosophy that before you can come up with a good answer,

> you've got to have a good question. First of all you should consider

> whether (RXC) is a useful construction or even a usable construction.

> Secondly, if you've already found a one-to-one and onto mapping into R

> X R X R, will you benefit from finding a mapping into R X C? And

> finally, if you think you will benefit, can you take the mapping from R

> X R X R into R X C?

Good point. I'm not even sure what the motivation of all of this is

other than translating the polysign language to an existent language.

Either way the polysign language can speak in its own terms. That

another language might accelerate the development is great. Perhaps

that is what Gene is doing, but I'm not convinced that this other

language is as fundamental as the polysign. It's like he's got a Swiss
Army knife and I've got a chisel.

-Tim

Jun 16, 2006, 8:44:40 AM

to

Timothy Golden BandTechnology.com wrote:

> As to the QCD below here I urge you to consider two generic points in a

> 2D space. There is no need to use the polysign domain. Invoking

> relativity on these two generic points will yield the features. That is

> to say that by granting each point an independent referential

> coordinate system additional qualities are exhibited by generic points

> that are consistent with known point particles. No product operation is

> needed to get this. Not much at all is needed to do the construction

> but a definition of 2D. Relativity does the rest. Each point particle

> is its own origin. Each point particle has its own referential zero

> angle or unit ray or whatever you want to call it. The information

> exhibited jumps to 3D since each particle exhibits independence in this

> realm. So a two body problem that is traditionally a one dimensional

> solution due to the unitary distance is much more when the

> dimensionality is interpreted this way. The congruence to spin axes is

> obvious.

> Doing the same in one dimension yields a binary that is suggestive of

> charge. There is also at least one more bit up in 2D if one allows left

> and right handedness. In effect the 1D bit is also handedness. It could

> also be construed as a 2-bit solution. Disambiguating the asymmetry of

> charge is a puzzle. One would hope that the system would yield a heavy

> proton and a light electron but instead these charges look

> indistinguishable so I don't think this is a convincing model yet. P1

(-)(-) = + and (+)(+) = + .

Can the two resultant plusses be distinguished? No, but they came from

different things.

Well the - - is two and the + + is four in non-mod form.

That sort of suggests stepping up to a four-signed system where the -

sign doesn't exist.

Abstract but perhaps worth pondering.

That product results are in the same space as the original is an

arithmetic concept. The Force product is an entirely different

resultant, just as meters times meters is square meters, but even more

extreme according to the classical force equations.

-Tim

Jun 16, 2006, 1:10:16 PM

to

I've put a page on my website demonstrating the discrepancy in the flat

product of RxC compared to P4:

http://bandtechnology.com/PolySigned/Deformation/P4T3Comparison.html

There is a fair amount of transformation to get the comparison. I have

verified that the forward and reverse yield the identity so there is no

conflict there. Just the same I cannot guarantee there to be no math

error.

How's this for a start of the discussion:

The flat RxC product is orthogonal, and since its component parts
conserve magnitude in their products, so should the result. Now we have
a conflict between what the graph shows and this statement: all of the
sources for the product are unit-length vectors, yet they yield
non-unit-length vectors. Just as a starting point it may be useful.

Resolving this conflict should help to understand what is the nature of

the R x C product and its look-alike the P4 product and the mini-me

difference that they generate.
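One way to pin the conflict down: in the flat R x C product the two factors conserve magnitude separately, but the combined Euclidean length sqrt(r^2 + |z|^2) does not. A sketch assuming the componentwise product (flatProduct and magnitude are my names, not the website's):

```cpp
#include <complex>
#include <cmath>

using C = std::complex<double>;

struct RC { double r; C z; };  // a point of R x C

// Flat product: the R parts and the C parts multiply independently.
RC flatProduct(const RC& a, const RC& b) {
    return { a.r * b.r, a.z * b.z };
}

// Combined Euclidean magnitude of the R x C point (3 real dimensions).
double magnitude(const RC& a) {
    return std::sqrt(a.r * a.r + std::norm(a.z));
}
```

A unit vector concentrated in the R slot times a unit vector concentrated in the C slot comes out as (0, 0): each component's magnitude did multiply (1*0 and 0*1), yet the unit sphere lands strictly inside itself, which looks like what the animation shows.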

For more background on these graphs please see:

http://bandtechnology.com/PolySigned/Deformation/DeformationUnitSphereP4.html

and

http://bandtechnology.com/PolySigned/FourSigned.html

The unit shell representation is a survey of the P4 product in its

entirety.

I'll try to get a meridian into the shell algorithm so that any

weirdness there will be exposed.

Jun 18, 2006, 5:10:04 PM

to

This isn't at all clear to me. It looks like you are defining a

specific Polysigned number to be the secondary Axis. I'm not familiar

enough with C++ to know what represents inputs and outputs of this

code.

>

> where s[0] is the identity sign and s[1] is the - sign, etc.

> This vector is used to build the transforms that allow comparison of

> the product in one space versus the other. The graphics are not enough

> since they look so much alike.

>

Are you saying the product in P4 is different in some other system?

You would need to define that system because as far as I know there is

no such product in any other system.

> I'm going to add a meridian to the shell algorithm and see what that

> exposes. The spiral shape may be getting a twist (P4) along the real

> axis (T3) that is hidden. I'll publish the comparisons so you can see

> for yourself the coherent difference. It's like the T3 and P4 are twins

> with a mini-me between them. Geeze wouldn't that be something if an

> iterative product cancelled it out.

>

Don't know what meridian means. Have never seen "T3" before.

> What you say about getting some hard max and min type of values is

> true. I once did some of this for the P4 cone but this is more general.

> Also the unitary values are in the same class of analysis so all of

> that will fall out.

>

To the best of my knowledge, that's one of the reasons why eigenvectors

and eigenvalues are of interest.

> As to the QCD below here I urge you to consider two generic points in a

> 2D space. There is no need to use the polysign domain. Invoking

> relativity on these two generic points will yield the features. That is

> to say that by granting each point an independent referential

> coordinate system additional qualities are exhibited by generic points

> that are consistent with known point particles.

In relativity the word "point" is ambiguous and should be avoided. If

you would like a description of relativity theory, I can try to help

you with that. You can speak of objects and their paths, and you can

talk about events and collisions, but "point" presumes a reference

frame.

> No product operation is needed to get this.

> Not much at all is needed to do the construction

> but a definition of 2D. Relativity does the rest. Each point particle

> is its own origin. Each point particle has its own referential zero

> angle or unit ray or whatever you want to call it. The information

> exhibited jumps to 3D since each particle exhibits independence in this

> realm. So a two body problem that is traditionally a one dimensional

> solution due to the unitary distance is much more when the

> dimensionality is interpreted this way. The congruence to spin axes is

> obvious.

I don't know what problem you're trying to solve.

> Doing the same in one dimension yields a binary that is suggestive of

> charge. There is also at least one more bit up in 2D if one allows left

> and right handedness. In effect the 1D bit is also handedness. It could

> also be construed as a 2-bit solution. Disambiguating the asymmetry of

> charge is a puzzle. One would hope that the system would yield a heavy

> proton and a light electron but instead these charges look

> indistinguishable so I don't think this is a convincing model yet. P1

> can only be one-handed so may allow some trickery with its vanishing

> act.

Slow down.

> Again for me the ultimate topology of spacetime is:

> 0D + 1D + 2D ... or P1 + P2 + P3 ...

> This is the substrate that we (or our particles) are products of under

> this model.

> The breakpoint at P3 is natural and what is exhibitied on the other

> side of it is bizarre.

> Whether the bizarre is eliminated or not does not have to be dealt with

> yet.

> This is really a classical model taken to a new level.

>

Take 20 steps back. Ask simple questions. Give simple answers.

When people with Swiss Army knives come along, don't point your chisel

at them.

Jun 18, 2006, 6:09:24 PM

to

Timothy Golden BandTechnology.com wrote:

> I've put a page on my website demonstrating the discrepancy in the flat

> product of RxC compared to P4:

> http://bandtechnology.com/PolySigned/Deformation/P4T3Comparison.html

I'm afraid this web page crashed Firefox. Can you simply give an

algebraic expression for whatever product it is you want to talk about?

Jun 18, 2006, 9:14:26 PM

to

Spoonfed wrote:

> >

> > const Poly & P4SecondaryAxis()

> > { // this is the proposed P3* embedded in P4 found in CheckP3P2Relation

> > static Poly s(4);

> > s[0] = 1.5;

> > s[1] = 0.75;

> > s[3]= 0.75;

> > s.Unitize();

> > return s;

> > }

>

> This isn't at all clear to me. It looks like you are defining a

> specific Polysigned number to be the secondary Axis. I'm not familiar

> enough with C++ to know what represents inputs and outputs of this

> code.

Sorry. I'll take us through the whole thing again here but focus on

this vector. We want to compare the product in P4 versus T3, which is R

x C in polysign. Well, it's really P1xP2xP3, but the P1 part doesn't do

anything. So just think of T3 as P2 x P3 which is R x C. In order to

perform a comparison of these products we have to find the transform

between P4 and T3. We can make an educated guess that the null axis (or

identity axis +1#1) is P2. We know that P3 is orthogonal to P2, but we

don't know how it is oriented. We can rotate through a full revolution

and also change its handedness. Having made the assumption that there

is a P3 we can take any vector in it (but over in P4) and square it:

s2 = s1 s1 .

We have chosen s1 to be an arbitrary vector perpendicular to the null

axis ( +1#1 )

Since we are assuming a complex plane we know that the angle of s2 has

doubled from the +real or *(P3) axis. We simply reflect s2 back from s1

and this vector will be the * axis of the embedded P3 plane. So now we

have defined the plane and it's positive real axis ( or * axis in P3 )

is:

# 1.5 - 0.75 * 0.75

Converted to a unit vector this value becomes a means of transforming

from P4 to T3 along with

# 1 + 1

which is the embedded P2 orientation.

I could print out the actual vectors of P4ToT3() and T3toP4() but I'm

not sure that will be helpful. Let me know if you want them. Anyhow

these two routines embody what has been described and allow the

transform. So now when we multiply a point by another point in P4 we

can also transform to T3 and perform the multiplication in the T3

domain. At this point we can take the results and look in either

system. In this case we are looking in T3, so we take the P4 result and

transform it over to T3. Now, if these two values are equal then the

systems are in agreement. But they are not as evidenced by the

graphics, which take this differencing concept and apply it to the unit

shell. Strangely the difference of the two is another smaller image of

the original product of either space. If you look at the sizes of the

P4 and the T3 you can see that the P4 is larger. Scaling the result

however will not eliminate the error. I can set a scale that will set

the error to zero for a specific point product, but the scale is not

universal. So the error is still exhibited even after scaling.
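For a cross-check on this procedure, here is one specific transform under which the two products agree exactly: evaluate the four components at the fourth roots of unity, with x = -1 giving the R part and x = i the C part (presumably the tie-in Gene refers to; rPart and cPart are my names). If the products match identically under this map, any remaining mini-me difference may trace to the geometric transform rather than to the products themselves. A sketch, not the website's code:

```cpp
#include <vector>
#include <complex>
#include <cstddef>

using Poly = std::vector<double>;
using C = std::complex<double>;

// P4 product: sign rule s_i * s_j = s_{(i+j) mod 4}.
Poly p4Product(const Poly& a, const Poly& b) {
    Poly c(4, 0.0);
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            c[(i + j) % 4] += a[i] * b[j];
    return c;
}

// Evaluate at x = -1: the R factor of R x C.
double rPart(const Poly& a) { return a[0] - a[1] + a[2] - a[3]; }

// Evaluate at x = i: the C factor of R x C.
C cPart(const Poly& a) { return C(a[0] - a[2], a[1] - a[3]); }
```

Under this map the reduction direction -1+1*1#1 evaluates to zero in both parts, so adding it to a value changes neither image, and rPart and cPart each carry the P4 product to the ordinary product in their factor.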

> >

> > where s[0] is the identity sign and s[1] is the - sign, etc.

> > This vector is used to build the transforms that allow comparison of

> > the product in one space versus the other. The graphics are not enough

> > since they look so much alike.

> >

>

> Are you saying the product in P4 is different in some other system?

> You would need to define that system because as far as I know there is

> no such product in any other system.

Right, but it looks an awful lot like the R x C independent product.

And Gene has reinforced this prediction so I guess that is why we are

here. His R x C and repetitive claim for higher dimensions is of

interest. You can see that the results are very similar. So no doubt

there is some connection, though it may not be a straightforward one.

Still, I think you are right. As I look around for R x C I find very

little. Supposedly there are methods to make the two come in line. We

have discussed this product comparison back a while ago. I don't know

how important it is to force these things to mesh. Probably it's an

interesting problem.

>

> > I'm going to add a meridian to the shell algorithm and see what that

> > exposes. The spiral shape may be getting a twist (P4) along the real

> > axis (T3) that is hidden. I'll publish the comparisons so you can see

> > for yourself the coherent difference. It's like the T3 and P4 are twins

> > with a mini-me between them. Geeze wouldn't that be something if an

> > iterative product cancelled it out.

> >

>

> Don't know what meridian means. Have never seen "T3" before.

>

> > What you say about getting some hard max and min type of values is

> > true. I once did some of this for the P4 cone but this is more general.

> > Also the unitary values are in the same class of analysis so all of

> > that will fall out.

> >

>

> To the best of my knowledge, that's one of the reasons why eigenvectors

> and eigenvalues are of interest.

>

> > As to the QCD below here I urge you to consider two generic points in a

> > 2D space. There is no need to use the polysign domain. Invoking

> > relativity on these two generic points will yield the features. That is

> > to say that by granting each point an independent referential

> > coordinate system additional qualities are exhibited by generic points

> > that are consistent with known point particles.

>

> In relativity the word "point" is ambiguous and should be avoided. If

> you would like a description of relativity theory, I can try to help

> you with that. You can speak of objects and their paths, and you can

> talk about events and collisions, but "point" presumes a reference

> frame.

That's exactly the point. Take what you have just said literally. The

point particles inherently have an angular measure to the other

particle. There is a conflict here and the angular momentum of point

particles meshes with this angular characteristic. There is no direct

proof. It is anecdotal evidence. Whether you personally reject this

notion is fine. I like it. To me it is a bridge between relativity and

quantum and classical through dimensionality. I predict that the 2D

(P3) product will yield a model of spin. Anyhow I am open to being

wrong but the path is there to explore.

> > Again for me the ultimate topology of spacetime is:

> > 0D + 1D + 2D ... or P1 + P2 + P3 ...

> > This is the substrate that we (or our particles) are products of under

> > this model.

> > The breakpoint at P3 is natural and what is exhibited on the other

> > side of it is bizarre.

> > Whether the bizarre is eliminated or not does not have to be dealt with

> > yet.

> > This is really a classical model taken to a new level.

> >

>

> Take 20 steps back. Ask simple questions. Give simple answers.

Sorry. There are a few too many issues going on in one thread.

This is just a summary of where I am at. It's not necessary to go

through it all again. You have no need to be burdened with it. Chances

are good that there are errors.

> When people with swiss army-knives come along don't point your chisel

> at them.

That's a good idea.

I see a process of debate. Attacks are acceptable in this domain, no

different than on a chess board. Here an effective attack should be

waged upon the polysign system by others.

As I defend them I learn. And as someone else attacks them they learn

as well. There is no death in this game. We just walk away and call it

a game. Either side is free to walk at any time. This real number issue

does seem to be important. I can hardly believe that people don't see

magnitude as fundamental, and no one seems willing to address the point

except for me. You just keep claiming that the polysign system is built

from the reals, and I keep refuting this. I suppose we are weary of

that. I think the answers are obvious, yet I must address your claim.

Under the algebraic definition there is no need for a Cartesian

product. That is probably the strongest reason to avoid all of the

(x,y,z) type of stuff.

Anyhow the subject of this thread is about the matrix form of the

polysign product.

Now we're off on all sorts of tangents. The P4 product is of interest

since it is a potentially new space so I suppose that is the one to

form a new thread on.

-Tim

Jun 19, 2006, 6:01:21 AM

Sorry about that. There are three animations on that page. They work on

my Firefox but load it pretty badly. The individual images should

probably work OK for you:

The difference:

http://bandtechnology.com/PolySigned/Deformation/T3P4DifferenceStudy.gif

The P4 product:

http://bandtechnology.com/PolySigned/Deformation/AxisDualDeformStudy.gif

The flat R x C product:

http://bandtechnology.com/PolySigned/Deformation/AxisDualTatrixStudy.gif

Anyhow the algebraic will need two transforms, one from P4 to R x C

which I'll call T3 and one inverse of this. So we could call these

P4OfT3 and T3OfP4.

Since we have to compare in one domain we choose T3 arbitrarily.

Instantiate two points in T3: t1, t2 (actually rtu1pos etc. in the

code)

Transform these over to P4: s1 = P4OfT3(t1), s2 = P4OfT3(t2) (actually

rsu1pos etc in code)

Take products:

t3 = t1 t2 (actually t1 in the code... sorry to confuse, but here

these names are good)

s3 = s1 s2 (s1 in code)

Now take s3 back into T3:

ts3 = T3OfP4( s3 )

And finally note the difference:

t3 - ts3

which is nonzero.
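For reference, here is a minimal sketch of the polysign leg of this pipeline in Python (not the production code). It assumes the sign rule s_i * s_j = s_((i+j) mod n) from the polysign pages, so the product is a circular convolution of the magnitude lists; the P4OfT3 and T3OfP4 transforms are deliberately left out.

```python
def polysign_product(a, b):
    # a, b: Pn values as lists of n magnitudes (index = sign, s0 first).
    # Sign rule s_i * s_j = s_((i+j) mod n) makes this a circular convolution.
    n = len(a)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] += a[i] * b[j]
    return out

def reduce_zero(v):
    # The n unit sign vectors sum to zero, so subtracting the common
    # component from every sign leaves the value unchanged (canonical form).
    m = min(v)
    return [x - m for x in v]
```

As a sanity check, P2 reproduces the signed reals ((-2)(-3) = +6, with s0 the identity "+"), and squaring the value # 1.5 - 0.75 * 0.75 mentioned below, taken as (#, -, +, *) = (1.5, 0.75, 0, 0.75), preserves its direction after reduction.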

This algebra relies on P4OfT3 and T3OfP4 being chosen carefully since

these systems are anisotropic. Even though the unit shell is

symmetrical its product results are not. The fact that the spiral shell

appears similarly in both domains is a strong indication that the

choice is correct. The positive Real is:

+ 1 # 1 (in P4)

The positive real within the complex plane is:

# 1.5 - 0.75 * 0.75 (in P4)

The transforms can be gotten from these. I look at them as projections

from 3D to 3D. This is done in the Cartesian domain. I have verified

that their product yields the identity matrix.

These products are quantified by their behavior over the unit shell.

All other values are scalings of these unit vector results. So the

graphics are providing a survey of the product.

-Tim

Jun 19, 2006, 10:24:16 PM

Hi Jon. Here is the result for the product:

http://bandtechnology.com/tmp/T3P4DifferenceStudySpoonfed.gif

This link may go stale or change meaning as we iterate so not too much

description.

Here are some test values for the product we discussed:

[T3 [C1 0 ][C2 0, 0 ]] TIMES [T3 [C1 0 ][C2 0, 0 ]] = [T3 [C1 0 ][C2 0, 0 ]]

[T3 [C1 1 ][C2 1, 0 ]] TIMES [T3 [C1 0 ][C2 1, 0 ]] = [T3 [C1 0 ][C2 2, 0 ]]

[T3 [C1 0 ][C2 0, 1 ]] TIMES [T3 [C1 0 ][C2 0, 1 ]] = [T3 [C1 0 ][C2 -1, 0 ]]

[T3 [C1 1 ][C2 0, 1 ]] TIMES [T3 [C1 1 ][C2 0, 1 ]] = [T3 [C1 1 ][C2 -1, 2 ]]

If you want more values let me know.

They look good to me.

The notation is a T3 tatrix whose contents are in their Cartesian form.

This means R x C where [C1 x ] is the R and [C2 y, z] is the C as y +

iz.

The graph now has larger error.

I'll try another def if you like.

The result is interesting, but we want to see nothing in the difference

graph.

Oh, here is the product image (no differencing):

http://bandtechnology.com/tmp/T3SpoonfedProduct.gif

General background:

http://bandtechnology.com/PolySigned/FourSigned.html

-Tim

Jun 20, 2006, 11:31:57 PM

Timothy Golden BandTechnology.com wrote:

> Hi Jon. Here is the result for the product:

> http://bandtechnology.com/tmp/T3P4DifferenceStudySpoonfed.gif

> This link may go stale or change meaning as we iterate so not too much

> description.

> Here are some test values for the product we discussed:

>

> [T3 [C1 0 ][C2 0, 0 ]] TIMES [T3 [C1 0 ][C2 0, 0 ]] = [T3 [C1 0 ][C2 0, 0 ]]

> [T3 [C1 1 ][C2 1, 0 ]] TIMES [T3 [C1 0 ][C2 1, 0 ]] = [T3 [C1 0 ][C2 2, 0 ]]

> [T3 [C1 0 ][C2 0, 1 ]] TIMES [T3 [C1 0 ][C2 0, 1 ]] = [T3 [C1 0 ][C2 -1, 0 ]]

> [T3 [C1 1 ][C2 0, 1 ]] TIMES [T3 [C1 1 ][C2 0, 1 ]] = [T3 [C1 1 ][C2 -1, 2 ]]

>

> If you want more values let me know.

> They look good to me.

> The notation is a T3 tatrix whose contents are in their Cartesian form.

> This means R x C where [C1 x ] is the R and [C2 y, z]is the C as y +

> iz.

As far as I know, there is no such product defined in traditional math

for such ordered pairs. Any choice you make as to how to convolve the

pairs is a random decision, and I would encourage you to stop worrying

about this. I want you to look at the eigenvectors and eigenvalues,

which I have given below in R3; I believe you have a transformation

available to convert between R3 and P4.

> The graph now has larger error.

> I'll try another def if you like.

> The result is interesting, but we want to see nothing in the difference

> graph.

> Oh, here is the product image (no differencing):

> http://bandtechnology.com/tmp/T3SpoonfedProduct.gif

>

These are interesting, but they do far too much to make it clear. You

have the path of the red dot moving around in a helical pattern around

the sphere. Instead of doing this, move the red dot straight from the

origin outward along each of the four directions. If I am not

mistaken, moving the dot along the #1 vector (violet), you will have

the sphere simply expand or contract. But I would like to see what

happens as you move the dot straight outward along the -1, +1, and *1

vectors.

The eigenvectors for multiplying by

#1 are {{0, 0, 1}, Eigenvalue=1

{0, 1, 0}, Eigenvalue=1

{1, 0, 0}}, Eigenvalue=1

#1 should preserve the shape of the sphere in all dimensions.

{{1, 1, 1}, {-1, I, -I}, {-1, -1, 1}, {-1, I, -I}}

-1 {{Sqrt[2/3], -(1/Sqrt[3]), 1}, Eigenvalue -1

{(-1 + I)*Sqrt[2/3], (1 + 2*I)/Sqrt[3], 1}, Eigenvalue I

{(-1 - I)*Sqrt[2/3], (1 - 2*I)/Sqrt[3], 1}}, Eigenvalue -I

-1 should reflect the sphere along the plane perpendicular to the first

axis, and I don't know what to do with the complex valued eigenvectors

and eigenvalues. Perhaps it would become apparent if you slid the red

dot along the -1 axis and watched what happened to the sphere.

+1: {{-Sqrt[3/2], 0, 1}, Eigenvalue -1

{1/Sqrt[2], 1, 0}, Eigenvalue -1

{Sqrt[2/3], -(1/Sqrt[3]), 1}}, Eigenvalue +1

This reflects the sphere along the plane perpendicular to the first two

axes and should preserve the coordinate along the third axis.

*1: {{Sqrt[2/3], -(1/Sqrt[3]), 1}, Eigenvalue -1

{(-1 - I)*Sqrt[2/3], (1 - 2*I)/Sqrt[3], 1}, Eigenvalue I

{(-1 + I)*Sqrt[2/3], (1 + 2*I)/Sqrt[3], 1}} Eigenvalue -I

Again, complex eigenvalues and eigenvectors. I believe it has

something to do with rotation.

So my expectation is to see as the red dot moves along the #1 axis, the

sphere will simply expand. For the +1 axis, the sphere will expand in

a somewhat different manner. But as the red dot moves along the -1 and

*1 axes, we should see some rotation.
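Those eigenvalue lists can be cross-checked cheaply. On the magnitude components, multiplying by a pure sign s_k is just a cyclic shift by k (sign rule s_i s_j = s_((i+j) mod n)), so each operator's eigenvalues are 4th roots of unity; the root +1 carried by the all-ones (identity-axis) direction drops out of the reduced 3D picture, leaving the values quoted above. A sketch, assuming NumPy is available:

```python
import numpy as np

def sign_operator(k, n=4):
    # Matrix of "multiply by unit sign s_k" in Pn: component i goes to
    # component (i + k) mod n, i.e. a cyclic-shift permutation matrix.
    M = np.zeros((n, n))
    for i in range(n):
        M[(i + k) % n, i] = 1.0
    return M

# Eigenvalues of the four P4 sign operators; each is a 4th root of unity.
eigs = {k: np.linalg.eigvals(sign_operator(k)) for k in range(4)}
```

Dropping the +1 eigenvalue on (1,1,1,1) gives -1, I, -I for -1 (k=1) and *1 (k=3), and -1, -1, +1 for +1 (k=2), matching the listings above.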

My coordinates are set up so that vectors are represented (x,y,z). I

just transformed them into modified P4 {#,-,+}. You can see

{Sqrt[3/2], 0, Sqrt[3/2]} appears as an eigenvector for -1, +1, and *1

but not for #1. You have been referring to the vector +1 #1 as an

identity axis. This is that axis, for what it's worth.

{{

{Sqrt[3/2]/2, Sqrt[3/2]/2, Sqrt[3/2]}, {1/(2*Sqrt[2]), 3/(2*Sqrt[2]),

0}, {1, 0, 0}},

{{Sqrt[3/2], 0, Sqrt[3/2]}, {I*Sqrt[3/2], (1 + I)*Sqrt[3/2],

Sqrt[3/2]}, {(-I)*Sqrt[3/2], (1 - I)*Sqrt[3/2], Sqrt[3/2]}},

{{-Sqrt[3/2]/2, Sqrt[3/2]/2, Sqrt[3/2]}, {3/(2*Sqrt[2]),

3/(2*Sqrt[2]), 0}, {Sqrt[3/2], 0, Sqrt[3/2]}},

{{Sqrt[3/2], 0, Sqrt[3/2]}, {(-I)*Sqrt[3/2], (1 - I)*Sqrt[3/2],

Sqrt[3/2]}, {I*Sqrt[3/2], (1 + I)*Sqrt[3/2], Sqrt[3/2]}}}

> General background:

> http://bandtechnology.com/PolySigned/FourSigned.html

>

> -Tim

I am extremely surprised that squaring each point in a sphere yields a

cone. I wish to point out, however, that #1 * #1 = #1 and #1 should be

a point on the circle that maps to itself when it is squared. You do

not have the cone intersecting #1 in your diagram. Perhaps you have

mislabeled your axis or I misunderstood which one is the identity.

Jun 21, 2006, 11:19:45 AM

I have produced this:

http://bandtechnology.com/tmp/SignVectorStudy.gif

Nice basic step. Good idea.

Perhaps I'll add some random vectors to this to demonstrate the scaling

effect.

Any vector travelling from the origin outward will provide the scaling

effect. The morphing is constant along these.

Nice catch! You are quite right. It's the zero sign quagmire. The red

axis is always the zero sign which is the identity sign. So the color

mnemonic will be extensible on the color spectrum as ROYGBIV. In P4 red

is #, green is -, blue is +, and violet is *.

I'm still correcting this on my website. The graphs are correct and the

text is wrong. The graphs follow the color mnemonic.

The cone: If you look at the parametric equations for the sphere and

the cone they are not very far off from each other. The two halves of

the original sphere are occupying the same space of the cone, no

different than the squares of the real numbers fold over to the

positive half of the real line. The even signs possess this feature. I

have yet to see it happen on the odd signs. It is related to the

identity axis.

eigenvectors: I'm still trying to grasp the value of them. Do you want

me to transform some of your values? I am happy to do some work with

them.

There are three distinct values in P4 that will preserve themselves

under the self product. There may be more, but the ones that I know of

are:

#1 ,

# 1 + 1 ,

# 1.5 - 0.75 * 0.75 .

The latter two need to be scaled to unity magnitude for them to hold

their magnitude steady under the self product. I'm not sure how this

relates to your eigenvector study. I can hunt for more of these and get

the angles between these three if you think it is important.

The latter two were used to get the R x C transformations which I am

willing to stop worrying about for the moment though I won't be

surprised if that gets awakened again at some point.

The latter two are orthogonal.

Have you studied Clifford algebras? I think that is the closest I can

get to existing continuum products, but they are only associative

algebras. The quaternions are in there. There is no definition for 3D

in them as far as I can gather from:

http://en.wikipedia.org/wiki/Clifford_algebra

and its linked pages. Certainly the polysign construction is very

different from these in any high dimension (3D+). This has been my

understanding and I don't know what to make of the claims from Gene in

this regard. Just wait and see if he comes back with anything I guess.

Anyhow, whatever product anyone wishes to compare P4 to I am happy to

graph out more.

-Tim

Jun 21, 2006, 12:37:20 PM

I was expecting to see rotation or skewing of the figure as we moved

along -x, +x, or *x; I guess I shouldn't have expected this. However,

I think one more change to the animation would help. On the left hand

side, you show a yellow end and a black end of the coil, but on the

right, the coil is all black. To see which direction the coil is

pointing we need it on both sides, and then we can tell whether the end

product has been mirrored by whether the coil has switched handedness.

Eigenvectors and eigenvalues are a really good shortcut to getting a

quick visual concept of a linear transformation if you know how to use

them. Unfortunately, I'm still learning how to use them. I already

transformed them to see what they were in polysigned. It's given me an

idea that we have not chosen the most convenient translation between R3

and P4. Instead, the #1 and +1 vectors should lie in the xy plane so

that the +1#1 vector lies on the x axis, and the -1 and *1 vectors

should lie in the xz plane so the -1*1 vector lies on the -x axis.

This would probably yield much more symmetrical looking operators.

> There are three distinct values in P4 that will preserve themselves

> under the self product. There may be more, but the ones that I know of

> are:

> #1 ,

> # 1 + 1 ,

> # 1.5 - 0.75 * 0.75 .

> The latter two need to be scaled to unity magnitude for them to hold

> their magnitude steady under the self product.

The unit sphere in P4 is, I believe the set (a,b,c,d) where

a^2 + b^2 + c^2 + d^2- 2*(ab+bc+cd+de+ac+bd)/3 =1

This puts (sqrt(3/4),0,sqrt(3/4),0) on the surface of the sphere, and I

believe it maps to the vertex of the cone (3/4,0,3/4,0) when you square

it.

> I'm not sure how this

> relates to your eigenvector study. I can hunt for more of these and get

> the angles between these three if you think it is important.

> The latter two were used to get the R x C transformations which I am

> willing to stop worrying about for the moment though I won't be

> surprised if that gets awakened again at some point.

> The latter two are orthogonal.

> Have you studied Clifford algebras? I think that is the closest I can

> get to existing continuum products, but they are only associative

> algebras. The quaternions are in there. There is no definition for 3D

> in them as far as I can gather from:

> http://en.wikipedia.org/wiki/Clifford_algebra

> and its linked pages.

> Certainly the polysign construction is very

> different from these in any high dimension (3D+). This has been my

> understanding and I don't know what to make of the claims from Gene in

> this regard. Just wait and see if he comes back with anything I guess.

> Anyhow, whatever product anyone wishes to compare P4 to I am happy to

> graph out more.

>

> -Tim

Well, if you believed you'd find something and didn't, I probably won't

find it, believing I won't.

Jun 22, 2006, 10:43:30 AM

Spoonfed wrote:

> > Any vector travelling from the origin outward will provide the scaling

> > effect. The morphing is constant along these.

> >

>

> I was expecting to see rotation or skewing of the figure as we moved

> along -x, +x, or *x, I guess I shouldn't have expected this. However,

> I think one more change to the animation would help. On the left hand

> side, you show a yellow end and a black end of the coil, but on the

> right, the coil is all black. To see which direction the coil is

> pointing we need it on both sides, and then we can tell whether the end

> product has been mirrored by whether the coil has switched handedness.

>

I've written code to graph what I call a 'keyed' sphere.

It will have rings where the original sign vector positions were.

I concur on the operand sphere coloring also.

I want to rush an image but really there is some work of verification

to do.

Rather than rush I figure I'll get out a reply here to alleviate one

pressure.

The handedness issue may not become apparent in the graphs.

My graphing is non-occluding so everything is see through. The last

pixel written is on top so any path intersections are lying as to their

depth. When you look through the sphere your eye allows it to have two

representations. With no good 3D projection these qualities are going

to be more difficult to expose graphically.

I think we will find that when the sphere crosses over itself in the

coherent projection along the identity axis that that is when the

handedness flips. So half of the space should possess this handedness

flip, and that half is determined by the identity axis. Any point

resolving on the negative half (negative in the real number line sense

of this peculiar axis) of the identity axis will flip handedness. That

would mean that both -1 and *1 have flipped handedness. This agrees

with your previous results:

http://groups.google.com/group/sci.physics.relativity/msg/84a59e9d01e1d8ab

So when we see the sphere cross itself we have its rotation pinned

orthogonal to the identity axis and are seeing it turn inside out while

it spins.

How do we measure handedness? I want to go to a physical pair of

tetrahedrons, start with identical signs and do the sign product

manually. But that will only get us the discrete sign operations. Again

I go back to matching up the vertices so that they contact. When they

are out of hand the last two will be four units apart; when they are in

hand the last vertices will be in contact.

Yet I don't have a simple way to detect that algorithmically. To

literally implement that in code could be a bear. It's a neat

fundamental puzzle.
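For what it's worth, a standard way to detect this algorithmically in 3D (not from the thread, just a sketch) is the orientation determinant: take the three edge vectors out of one vertex of the ordered tetrahedron and read off the sign of their determinant. Mirror images get opposite signs, so no vertex matching is needed:

```python
import numpy as np

def handedness(p0, p1, p2, p3):
    # Orientation of an ordered tetrahedron in 3D: +1 or -1 for the two
    # mirror classes, 0 if the points are coplanar (degenerate).
    m = np.array([np.subtract(p1, p0),
                  np.subtract(p2, p0),
                  np.subtract(p3, p0)], dtype=float)
    d = float(np.linalg.det(m))
    return (d > 0) - (d < 0)
```

Reflecting any one coordinate flips the result, which is exactly the out-of-hand case described above.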

> > The cone: If you look at the parametric equations for the sphere and

> > the cone they are not very far off from each other. The two halves of

> > the original sphere are occupying the same space of the cone, no

> > different than the squares of the real numbers fold over to the

> > positive half of the real line. The even signs posess this feature. I

> > have yet to see it happen on the odd signs. It is related to the

> > identity axis.

> >

> > eigenvectors: I'm still trying to grasp the value of them. Do you want

> > me to transform some of your values? I am happy to do some work with

> > them.

>

> Eigenvectors and eigenvalues are a really good shortcut to getting a

> quick visual concept of a linear transformation if you know how to use

> them. Unfortunately, I'm still learning how to use them. I already

> transformed them to see what they were in polysigned. It's given me an

> idea that we have not chosen the most convenient translation between R3

> and P4. Instead, the #1 and +1 vectors should lie in the xy plane so

> that the +1#1 vector lies on the x axis, and the -1 and *1 vectors

> should lie in the xz plane so the -1*1 vector lies on the -x axis.

> This would probably yield much more symmetrical looking operators.

Yuk. I suppose I could implement this. I don't think you can make the

#1 and +1 go in the same plane when the + 1 # 1 value has been fixed.

Maybe the angles work out though.

Anyhow composite projections are not a problem so that whatever we

graph can be reoriented via a 3 to 3 orthogonal projection.

>

> > There are three distinct values in P4 that will preserve themselves

> > under the self product. There may be more, but the ones that I know of

> > are:

> > #1 ,

> > # 1 + 1 ,

> > # 1.5 - 0.75 * 0.75 .

> > The latter two need to be scaled to unity magnitude for them to hold

> > their magnitude steady under the self product.

>

> The unit sphere in P4 is, I believe the set (a,b,c,d) where

> a^2 + b^2 + c^2 + d^2- 2*(ab+bc+cd+de+ac+bd)/3 =1

>

> This puts (sqrt(3/4),0,sqrt(3/4),0) on the surface of the sphere, and I

> believe it maps to the vertex of the cone (3/4,0,3/4,0) when you square

> it.

This would mean that you have found the native distance function!

And it looks extensible!

Why isn't ad in there? Oh I see it. You have de where ad is.

I have not tried to check this yet but that will be very easy to do.

And in general dimension too!

This would be a very important piece since a native distance function

alleviates the need to enter the Cartesian domain.

> > I'm not sure how this

> > relates to your eigenvector study. I can hunt for more of these and get

> > the angles between these three if you think it is important.

> > The latter two were used to get the R x C transformations which I am

> > willing to stop worrying about for the moment though I won't be

> > surprised if that gets awakened again at some point.

> > The latter two are orthogonal.

> > Have you studied Clifford algebras? I think that is the closest I can

> > get to existing continuum products, but they are only associative

> > algebras. The quaternions are in there. There is no definition for 3D

> > in them as far as I can gather from:

> > http://en.wikipedia.org/wiki/Clifford_algebra

> > and its linked pages.

> > Certainly the polysign construction is very

> > different from these in any high dimension (3D+). This has been my

> > understanding and I don't know what to make of the claims from Gene in

> > this regard. Just wait and see if he comes back with anything I guess.

> > Anyhow, whatever product anyone wishes to compare P4 to I am happy to

> > graph out more.

> >

> > -Tim

>

> Well, if you believed you'd find something and didn't, I probably won't

> find it, believing I won't.

The funny thing with P3 is that if you aren't given the facts of the

handedness you can't detect them either. When you square a value you

could be on either side. The only detectable reference is a singular

axis. This has implications on the relative point particle model and

how many bits can be gotten from P3. It argues for bilateral symmetry

at an abstract level.

In P4 there is the same effect for - and *, but + and # are detectable.

Even having detected + and # the - and * still go in perfect symmetry

so there is a three-way classification where one of the elements has

twice the population. In P5 we have only one detectable and it doesn't

do anything. The prime signed spaces are transparent in this regard.

The effect still creeps in on the others, P4 being a good example. Is

this related to handedness? I don't see it. P3 doesn't change

handedness under any sign vector product. It may be useful to

investigate your operator determinant on P5 if we don't find a way to

detect handedness natively. Probably the answer is that even-signed

spaces change handedness under product by their identity axis component

and that odd-signed spaces never change handedness.
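For the pure sign vectors this guess is easy to probe numerically: multiplication by s_k cyclically shifts the magnitude components, and the determinant of that permutation records the orientation flip (the identity axis carries eigenvalue +1, so the full-dimension determinant equals the reduced one). A sketch assuming NumPy; it agrees with the guess, with P3 and P5 never flipping and the odd shifts of P4 (-1 and *1) flipping:

```python
import numpy as np

def sign_operator(k, n):
    # "Multiply by unit sign s_k" in Pn as a cyclic-shift permutation matrix.
    M = np.zeros((n, n))
    for i in range(n):
        M[(i + k) % n, i] = 1.0
    return M

# det = -1 marks a handedness flip for that pure-sign product.
dets = {n: [round(float(np.linalg.det(sign_operator(k, n)))) for k in range(n)]
        for n in (3, 4, 5)}
```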

-Tim

Jun 23, 2006, 10:40:44 PM

Oh, well, that would be alright I guess. I was thinking in terms of

doing it myself if I ever had the time and inclination and you thought

it would be helpful. This is just for going back and forth between

polysigned and cartesian coordinates anyway. Right now it's not really

broke, but I was just tempted to fix it anyway.

> Anyhow composite projections are not a problem so that whatever we

> graph can be reoriented via a 3 to 3 orthogonal projection.

>

> >

> > > There are three distinct values in P4 that will preserve themselves

> > > under the self product. There may be more, but the ones that I know of

> > > are:

> > > #1 ,

> > > # 1 + 1 ,

> > > # 1.5 - 0.75 * 0.75 .

> > > The latter two need to be scaled to unity magnitude for them to hold

> > > their magnitude steady under the self product.

> >

> > The unit sphere in P4 is, I believe the set (a,b,c,d) where

> > a^2 + b^2 + c^2 + d^2- 2*(ab+bc+cd+de+ac+bd)/3 =1

> >

> > This puts (sqrt(3/4),0,sqrt(3/4),0) on the surface of the sphere, and I

> > believe it maps to the vertex of the cone (3/4,0,3/4,0) when you square

> > it.

>

> This would mean that you have found the native distance function!

> And it looks extensible!

> Why isn't ad in there? Oh I see it. You have de where ad is.

Heh. Yes, and as is my typical behavior, I've spread the derivation

around the house on three or four sheets of looseleaf paper. It's not

pretty, but it definitely forms a pattern.

If you want the general idea, consider that the cosine of A in the x

direction is 1, while the cosine of B, C, and D is -1/3, so the x value

is x = A - (B + C + D)/3.

The cosine of B in the y direction is sqrt(8/9) while the cosine of C

and D is -sqrt(2/9), so y = sqrt(8/9)*(B - (C + D)/2).

Finally, z = sqrt(8/9)*sqrt(3/4)*(C - D).

To find the Cartesian length, just take sqrt(x^2+y^2+z^2).
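This derivation is easy to spot-check numerically: build the R3 projection from those cosines and compare sqrt(x^2 + y^2 + z^2) against the proposed quadratic form (with ad in place of the de typo). A sketch:

```python
import math
import random

def cartesian(a, b, c, d):
    # R3 projection of a P4 value using the tetrahedral cosines above:
    # A along x; B, C, D at mutual cosine -1/3.
    x = a - (b + c + d) / 3.0
    y = math.sqrt(8.0 / 9.0) * (b - (c + d) / 2.0)
    z = math.sqrt(8.0 / 9.0) * math.sqrt(3.0 / 4.0) * (c - d)
    return x, y, z

def native_magnitude(a, b, c, d):
    # Candidate native P4 distance: squares minus 2/3 of every pairwise product.
    q = (a * a + b * b + c * c + d * d
         - 2.0 * (a * b + a * c + a * d + b * c + b * d + c * d) / 3.0)
    return math.sqrt(q)

# Random magnitudes agree to rounding error, and (sqrt(3/4), 0, sqrt(3/4), 0)
# sits on the unit shell as claimed.
random.seed(0)
for _ in range(200):
    v = [random.uniform(0.0, 2.0) for _ in range(4)]
    x, y, z = cartesian(*v)
    assert abs(math.sqrt(x * x + y * y + z * z) - native_magnitude(*v)) < 1e-9
```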

In doing this problem I began to think that a different choice of

coordinate system might make the problem easier to tackle. For

instance, if A and B were both in the xy plane, and C and D were both

in the xz plane, it would be easier to calculate.

> I have not tried to check this yet but that will be very easy to do.

> And in general dimension too!

> This would be a very important piece since a native distance function

> alleviates the need to enter the Cartesian domain.

>

I was even wondering about the set of points in P4 which map to themselves

when squared, and whether they form a surface. For instance, this

spherical set, when squared, seems to map to a cone. Another reason

for wanting to change the coordinate system is that it appears from

your diagram that a full sphere maps to a cone all to one side of an

equator of the sphere. I think that the plane cutting that sphere

marks the dividing plane between negative-like and positive-like

numbers in P4.

I think I follow you up to here. Given two numbers to multiply in P3,

you can rotate around the axis. The direction of +1 is important

because you have to reference your angle from here to figure out how

far to rotate. If you turn the paper over, you still have the +1

vector labeled the same way, but the rotation is reversed. It doesn't

matter, though; it gives the same answer; there is no back or front of

the page, only a line and an angle.

In three dimensions, though, you can tell which side of the paper

you're looking at, and you can tell whether you're looking at a

positive angle on the front of a page, or a negative angle at the back

of a page, or if you're looking at the paper in the mirror, because the

ink is on the front and the letters are all backwards.

> This has implications on the relative point particle model and

> how many bits can be gotten from P3. It argues for bilateral symmetry

> at an abstract level.

That's three things that I don't know the meaning of, right in a row.

> In P4 there is the same effect for - and *, but + and # are detectable.

Which effect is this?

> Even having detected + and # the - and * still go in perfect symmetry

> so there is a three-way classification where one of the elements has

> twice the population.

I don't think I've yet perceived the effect you are talking about here.

Is this anything to do with the negative-like and positive-like

numbers I made up earlier?

>In P5 we have only one detectable and it doesn't

> do anything. The prime signed spaces are transparent in this regard.

> The effect still creeps in on the others, P4 being a good example. Is

> this related to handedness? I don't see it.

Still not sure, but notice the use of imaginary numbers in physics

involves quantities that have some mathematical calculability, but tend

to disappear when measurements are made. I still haven't made any

connection to P4 with anything definitely physical, but you seem to be

making the analogy of + and # to real numbers and - and * to imaginary

numbers?

> P3 doesn't change

> handedness under any sign vector product. It may be useful to

> investigate your operator determinant on P5 if we don't find a way to

> detect handedness natively. Probably the answer is that even-signed

> spaces change handedness under product by their identity axis component

> and that odd-signed spaces never change handedness.

>

> -Tim

Could be. Perhaps Sunday I will spend an hour or two on this.

Jun 25, 2006, 10:07:49 AM

Spoonfed wrote:

> Timothy Golden BandTechnology.com wrote:

> > Spoonfed wrote:

> > > The unit sphere in P4 is, I believe the set (a,b,c,d) where

> > > a^2 + b^2 + c^2 + d^2- 2*(ab+bc+cd+de+ac+bd)/3 =1

> > >

> > > This puts (sqrt(3/4),0,sqrt(3/4),0) on the surface of the sphere, and I

> > > believe it maps to the vertex of the cone (3/4,0,3/4,0) when you square

> > > it.

> >

> > This would mean that you have found the native distance function!

> > And it looks extensible!

> > Why isn't ad in there? Oh I see it. You have de where ad is.

>

> Heh. Yes, and as is my typical behavior, I've spread the derivation

> around the house on three or four sheets of looseleaf paper. It's not

> pretty, but it definitely forms a pattern.

> If you want the general idea consider that the cosine of A in the x

> direction is 1, while the cosine of B, C, and D is -1/3, so the x value

> is x=A-1/3(B+C+D)

> The cosine of B in the y direction is sqrt(8/9) while the cosine of C

> and D is -sqrt(2/9) so y=sqrt(8/9)(B-(C+D)/2)

> Finally, z=sqrt(8/9)*sqrt(3/4)*(C-D)

It has the necessary symmetry and as I look upon it it seems to be the

only sensible form.

What will the factor do when we change dimension? It doesn't just go

1/2, 2/3, 3/4, ...

does it? Well, I haven't even verified it yet for P4. I've been

immersed in trying to get the KeyedSphere cleaned up and applied to the

website. I'll verify your distance function next.

It's the most interesting thing that I'll get to do for some time so I

suppose I'm savoring the moment.

>

> To find the Cartesian length, just take sqrt(x^2+y^2+z^2).

>

> In doing this problem I began to think that a different choice of

> coordinate system might make the problem easier to tackle. For

> instance, if A and B were both in the xy plane, and C and D were both

> in the xz plane, it would be easier to calculate.

>

> > I have not tried to check this yet but that will be very easy to do.

> > And in general dimension too!

> > This would be a very important piece since a native distance function

> > alleviates the need to enter the Cartesian domain.

> >

>

> I was even wondering if the set of points in P4 which map to themselves

> when squared and whether they formed a surface. For instance, this

> spherical set, when squared, seems to map to a cone. Another reason

> for wanting to change the coordinate system is that it appears from

> your diagram that a full sphere maps to a cone all to one side of an

> equator of the sphere. I think that the plane cutting that sphere

> marks the dividing plane between negative-like and positive-like

> numbers in P4.

Right, and that plane is orthogonal to the identity axis.

>

> >

> > The funny thing with P3 is that if you aren't given the facts of the

> > handedness you can't detect them either. When you square a value you

> > could be on either side. The only detectable reference is a singular

> > axis.

>

> I think I follow you up to here. Given two numbers to multiply in P3,

> you can rotate around the axis. The direction of +1 is important

> because you have to reference your angle from here to figure out how

> far to rotate. If you turn the paper over, you still have the +1

> vector labeled the same way, but the rotation is reversed. It doesn't

> matter, though; it gives the same answer; there is no back or front of

> the page, only a line and an angle.

>

> In three dimensions, though, you can tell which side of the paper

> you're looking at, and you can tell whether you're looking at a

> positive angle on the front of a page, or a negative angle at the back

> of a page, or if you're looking at the paper in the mirror, because the

> ink is on the front and the letters are all backwards.

See if you like

http://bandtechnology.com/tmp/SignVectorStudy.gif

The handedness flip is apparent, though only by eye; a mathematical

measure should still be defined. The Cartesian cross

product comes to mind but that is stuck in 3D.

The handedness is not a matter of paper games. It is an argument about

observation. So we start out claiming not to know the identity or

position of the sign vectors. Instead we have only our operators which

in this case are sum, product, and render. I am arguing that you will

never detect any difference between the - vector and the + vector in

P3. Also that you will never detect a difference between the - and *

vector in P4. The detection of the identity vector is possible in all

cases. The detection of the + vector in P4 is also possible, since its

square will be the # vector. Under these conditions the symmetries are

strict and true. The labels are arbitrary. We can just flip the - and +

sign vector labels in P3 and never know the difference, whether we look

at

a + b,

a a,

a b,

a( b + c ),

etc.

The graphs will be identical.

It is a parallel concept with handedness. Yet of these two systems one

flips handedness( P4 ) and the other( P3 ) does not. P2 also flips

handedness, and P1 does not.

-Tim

Jun 25, 2006, 7:06:18 PM

Congratulations Jon !!!

I've verified the distance function and found its general form.

The factors go

2/1, 2/2, 2/3, 2/4, ...

for n = 2, 3, 4, 5, ..., i.e. 2/(n-1).

The following code has verified this out to large sign:

double SpoonfedDistanceFunction( Poly s )
{
    if( s.n == 1 ) return s[0];
    double d = 0;
    // A pair of distinct sign vectors in Pn has dot product -1/(n-1).
    double fac = -1.0 / ( s.n - 1 );
    for( int i = 0; i < s.n; i++ )
    {
        for( int j = 0; j < s.n; j++ )
        {
            if( i == j )
                d += s[i] * s[j];
            else
                d += fac * s[i] * s[j];
        }
    }
    return sqrt( d );
}

void testSpoonfedDistanceFunction()
{
    for( int j = 2; j < 100; j++ )
    {
        Poly s( j );
        int i = 0;
        double d;
        while( i++ < 100 )
        {
            s.Randomize();            // random unit vector in Pj
            s = s * ((double)(i));    // scale so the length should equal i
            d = SpoonfedDistanceFunction( s );
            cout << i << " : " << d << s << "\n";
            // fabs, not abs: integer abs would truncate the double.
            if( fabs( d - i ) > 0.00000001 ) cerr << "\nGot a mismatch.\n";
        }
    }
}

Randomize generates a random unit vector, which is then scalar

multiplied by an index so that the vector length equals the index. The

lengths match from P2 to P100. With your distance function the

polysigned code will be much improved: it currently relies on the

Cartesian domain, and so performs a transform each time a distance is

required. The double loop obscures the equation slightly: since each

off-diagonal combination is hit twice, the 1/(n-1) becomes the

2/(n-1) of your equation.

This is really great. I've been wondering about this for some time:

the symmetry of the equation.

-Tim
