
Advice for math phD programs


mickeykollins

unread,
Dec 8, 2010, 3:46:40 AM12/8/10
to
I'm applying to grad schools this fall, in physics and applied math. For the applied math, I would like to apply to programs that combine pure and applied math, rather than just separate applied math programs. But since I decided to apply to grad schools just a couple of months ago, I registered for just the physics GRE, not the math subject test, and it's now too late to take it. This turned out to be a huge mistake: I have no chance at my dream schools, such as Berkeley, Texas Austin, and Michigan. Also, the only proof-based math courses I've taken are linear algebra, analysis, and Fourier analysis. No abstract algebra or topology.

Here are my stats:
- physics GPA: 3.64, applied math GPA: 3.93, from a top state school
- I just took the PGRE and expect to score around the 80th percentile
- I've done two different research projects, but no publications.
- All my LORs will come from physics, not math, professors

Applied math programs I'm thinking of applying to (since these don't require the math GRE, though they usually strongly recommend it):
Maryland, Cornell, Brown, NYU, and maybe Arizona and UC Davis

Because I have a strong feeling that I'll get rejected from all of those schools (I only have a strong desire to attend 4 of them, and I lack the math GRE), I've been thinking about what to do afterwards.


I completed my undergrad at UCLA, which offers an "Extension" program whereby graduates can enroll in regular classes without formally applying to the university. "Extension" students are graded according to the same standards as regular degree-seeking students.
This seems like an ideal way to fill in the gaps in my undergraduate background. I currently live closer to UCI, so I could go there instead.

Would it greatly help me in the future if I took the math classes that I currently lack, such as abstract algebra, topology, or the 2nd quarter of analysis? If I do this, should I enroll in the winter quarter (starts in Jan 2011) or spring (April)? I don't think I'll hear back from NYU, Brown, etc. until March at the earliest.

X

unread,
Dec 8, 2010, 3:52:32 AM12/8/10
to
Why not talk with someone at UCLA, from where you
got your degree? They usually have advisers who
are well-qualified to help students in your position.
Good luck!

Gerry

unread,
Dec 8, 2010, 6:08:19 AM12/8/10
to

What he said.
--
GM

Tim Golden BandTech.com

unread,
Dec 8, 2010, 9:28:41 AM12/8/10
to

If you do get to taking abstract algebra, try to keep an eye on the
polynomial development, where products and sums like
a0 + a1 X + a2 X X
are developed, claimed to be ring behaved, yet the product
a1 X
does not obey the ring definition, for a1 and X are not in the same
set. X is undefined and a1 is real. Abstract algebra is the bastard
child of modern mathematics. The academic programming is not a mind
opening experience as far as I can tell. The programming is strictly
mimicry, and your own ability to challenge a flawed but accepted
construction is nil. You will be assimilated.

- Tim

Marshall

unread,
Dec 8, 2010, 9:54:13 AM12/8/10
to
On Dec 8, 6:28 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

Also watch out for cranks.


Marshall

Jesse F. Hughes

unread,
Dec 8, 2010, 10:05:16 AM12/8/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> writes:

> On Dec 8, 6:08 am, Gerry <gerry.myer...@mq.edu.au> wrote:
>> On Dec 8, 7:52 pm, X <X...@sdc.com> wrote:
>>
>> > Why not talk with someone at UCLA, from where you
>> >  got your degree.? They usually have advisers who
>> >  are well-qualified to help students in your position.
>> >  Good Luck.!
>>
>> What he said.
>> --
>> GM
>
> If you do get to taking abstract algebra, try to keep an eye on the
> polynomial development, where products and sums like
> a0 + a1 X + a2 X X
> are developed, claimed to be ring behaved, yet the product
> a1 X
> does not obey the ring definition, for a1 and X are not in the same
> set. [...]

Geez Louise, you are an unhelpful attention seeker.

"Yes, that's an interesting issue you raise, but hey! Over here! Look
at me!! Won't someone pay attention to MEEEEEEEEE????!!!!"

--
Jesse F. Hughes
"His name is Crap Talker and he's a bad guy because he doesn't listen.
And he has three faces."
--Quincy P. Hughes (age 5) invents a new super-villain.

Jesse F. Hughes

unread,
Dec 8, 2010, 10:16:59 AM12/8/10
to
"Jesse F. Hughes" <je...@phiwumbda.org> writes:

> Geez Louise, you are an unhelpful attention seeker.
>
> "Yes, that's an interesting issue you raise, but hey! Over here! Look
> at me!! Won't someone pay attention to MEEEEEEEEE????!!!!"
>
> --
> Jesse F. Hughes
> "His name is Crap Talker and he's a bad guy because he doesn't listen.
> And he has three faces."
> --Quincy P. Hughes (age 5) invents a new super-villain.

Wow. That has to be the best .sig serendipity in *years*.

In fact, the below randomly-selected .sig is a pretty good match, too.
I'd put my .sig random selector against Herc's silly Bible games any
day.

--
Jesse F. Hughes
"Why do the dirty villains always have to tie your hands *behind* ya?"
"That's what makes them villains."
--Adventures by Morse (old radio show)

quasi

unread,
Dec 8, 2010, 1:27:03 PM12/8/10
to
On Wed, 8 Dec 2010 06:28:41 -0800 (PST), "Tim Golden BandTech.com"
<tttp...@yahoo.com> wrote:

>If you do get to taking abstract algebra, try to keep an eye on the
>polynomial development, where products and sums like
> a0 + a1 X + a2 X X
>are developed, claimed to be ring behaved, yet the product
> a1 X
>does not obey the ring definition, for a1 and X are not in the same
>set. X is undefined and a1 is real.
>
>Abstract algebra is the bastard child of modern mathematics.

Ignore the above. Here's a counter-opinion.

Abstract Algebra is a clean, elegant, highly applicable, and
ultimately unifying branch of mathematics. The proofs are clearly
proofs, the exercises are often challenging and fun. The hodge-podge
of subtheories (Group Theory, Ring Theory, Field Theory, etc) evolved
naturally in an attempt to abstract the common features of important
pre-existing structures. All in all, Abstract Algebra is a beautiful
child of Modern Math, and strives to stay beautiful (as it evolves,
it's always looking in the mirror).

>The academic programming is not a mind opening experience as far as
>I can tell.

Apparently one mind did not get opened.

>The programming is strictly mimicry,

If that's all you can do, yes.

Perhaps your objection to "mimicry" is really an objection to the fact
that there are rules (definitions, axioms, etc) for each subtheory and
that, within that subtheory, you have to play by the rules, even if
you don't like them.

>and your own ability to challenge a flawed but accepted
>construction is nill.

Flawed logic can always be challenged.

Flawed choice of definitions and axioms, less so, except in the event
of exposing a logical inconsistency. The reason that won't happen in
pretty much any subtheory of Abstract Algebra is that each subtheory
typically has various concrete models for which the axioms are clearly
satisfied.
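To make "concrete models" concrete: for a small finite model such as
Z/6Z the ring axioms can be checked exhaustively. A minimal Python
sketch of such a check (my own illustration; any modulus would do):

n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Brute-force check of the ring axioms for Z/6Z
# (closure is automatic, since every result is reduced mod n).
for a in R:
    assert add(a, 0) == a                          # additive identity
    assert any(add(a, b) == 0 for b in R)          # additive inverse exists
    assert mul(a, 1) == a                          # multiplicative identity
    for b in R:
        assert add(a, b) == add(b, a)              # commutativity of +
        for c in R:
            assert add(add(a, b), c) == add(a, add(b, c))          # associativity of +
            assert mul(mul(a, b), c) == mul(a, mul(b, c))          # associativity of *
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity

print("Z/6Z satisfies the ring axioms")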

>You will be assimilated.

By being won over by the overpowering beauty and elegance of the
subject, not by force.

quasi

Tim Golden BandTech.com

unread,
Dec 8, 2010, 3:29:30 PM12/8/10
to
On Dec 8, 1:27 pm, quasi <qu...@null.set> wrote:
> On Wed, 8 Dec 2010 06:28:41 -0800 (PST), "Tim Golden BandTech.com"
>
> <tttppp...@yahoo.com> wrote:
> >If you do get to taking abstract algebra, try to keep an eye on the
> >polynomial development, where products and sums like
> >   a0 + a1 X + a2 X X
> >are developed, claimed to be ring behaved, yet the product
> >   a1 X
> >does not obey the ring definition, for a1 and X are not in the same
> >set. X is undefined and a1 is real.
>
> >Abstract algebra is the bastard child of modern mathematics.
>
> Ignore the above. Here's a counter-opinion.
>
> Abstract Algebra is a clean, elegant, highly applicable, and
> ultimately unifying branch of mathematics. The proofs are clearly
> proofs, the exercises are often challenging and fun. The hodge-podge
> of subtheories (Group Theory, Ring Theory, Field Theory, etc) evolved
> naturally in an attempt to abstract the common features of important
> pre-existing structures.

It is fundamental to these definitions (both group and ring) that the
elements that these operators work upon are of the same set, and that
the result of that operation is in that same set. Thus the
construction
a1 X
where a1 is real and X is not real is inconsistent with the very
definition of the product that is in use.

That this math has gone to the trouble of formally defining these
operators, and then goes on to use them in contradictory terms is
unmathematical. That it does not care to discuss its own conflicted
usage, well, this is the state of mimicry within academia whereby your
degrees become digressions. There is no choice but to swallow the
bait; otherwise you will fail. You will be assimilated.

My mathematical statement is merely a two liner, and here we see
several posters incapable of invalidating my falsification. It is such
a simple matter. So simple as to go unseen?
I'm not the one to make the final judgement, so in the meanwhile I'm
not afraid to use all the rhetoric I want. The fact is that nobody is.
There is no finality within these subjects. They are open and always
will be so. The mimicry paradigm is well supported right here on this
thread. Let's see content. I've provided some. The straight A's are
the best mimics, for that is their training. Yes, it's a bit too
harsh, but there needs to be some rhetoric along with the content to
get any response.

> All in all, Abstract Algebra is a beautiful
> child of Modern Math, and strives to stay beautiful (as it evolves,
> it's always looking in the mirror).
>
> >The academic programming is not a mind opening experience as far as
> >I can tell.
>
> Apparently one mind did not get opened.
>
> >The programming is strictly mimicry,
>
> If that's all you can do, yes.
>
> Perhaps your objection to "mimicry" is really an objection to the fact
> that there are rules (definitions, axioms, etc) for each subtheory and
> that, within that subtheory, you have to play by the rules, even if
> you don't like them.
>
> >and your own ability to challenge a flawed but accepted
> >construction is nill.
>
> Flawed logic can always be challenged.

And so I have. A math which defines product and sum operations cannot
freely use those operations in conflict with its own definition;
particularly not without discussing the misuse, and so I stand by
AA (abstract algebra) as a bastard child.

- Tim

quasi

unread,
Dec 8, 2010, 4:07:16 PM12/8/10
to

Sorry, I don't have time to debunk your obviously crankish and
somewhat trollish objections, so I'll leave this subthread, but I'll
make one last simple observation ...

Polynomials with real coefficients existed way before Abstract Algebra
was born, and no one objected then.

quasi

Tim Golden BandTech.com

unread,
Dec 8, 2010, 5:25:33 PM12/8/10
to

Ahh, but then X was real. X is no longer real within the polynomial
construction of AA.

Now, I should defend myself, for here we see the discontent of
rhetoric, like accusations of 'troll' and 'crank', without an ounce of
content to back them up. My objection is roughly a one-liner:

The ring definition is broken within the polynomial construction.

To expand this single line we need a portion of the definition of
ring:
Closure under addition:
  For all a, b in R, the result of the operation a + b is also in R.
Closure under multiplication:
  For all a, b in R, the result of the operation a · b is also in R.
These are directly from
http://en.wikipedia.org/wiki/Ring_mathematics#Formal_definition

Now, as we study the polynomial construction we see real a(n) mixed in
product and sum with X, whose nature is poorly defined. Wouldn't you
think that, having carefully constructed a product and a sum, the
restrictions would be followed, and if not, that some qualitative
discussion would take place?

I happen to like the ring definition, though its agreement with
physical concepts is poor, for while we may be accustomed to using
real numbers in products within physics they are not generally
accepted as belonging to the same set when units are applied to them.
So for instance with Ohm's law
V = I R
we cannot accept that Volts are in the same set as Amperes or Ohms,
nor that Amperes or Ohms are in the same set, though when the units
are lost the ring definition does ding a bit, though a change in units
would lead to arbitrary results due to the variable nature of unity.
More than just AA can be attacked, but that is too tangential.

More conflict with the ring definition can be found in the usage of
the standard complex value
a + b i
where a and b are real and i is not real. The complex numbers are
claimed to be ring behaved even while their standard definition is in
direct conflict, just as the polynomial is. There are cover-ups in
some texts and you will not see the conflict exposed. The answer as to
why these discussions do not exist in the texts is bound to be
interesting, though highly unmathematical.
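For comparison, the standard pair construction of the complex numbers
keeps everything inside the single set R x R, with "i" as a name for
the pair (0, 1). A minimal Python sketch, assuming only the usual
componentwise definitions (an illustration of that textbook
construction, nothing more):

# Complex numbers as ordered pairs of reals: one set, two closed operations.
def c_add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def c_mul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)                               # "i" is just a name for this pair
print(c_mul(i, i))                           # (-1.0, 0.0), i.e. -1
a, b = 3.0, 2.0                              # "a + b i" is shorthand for the pair (a, b)
print(c_add((a, 0.0), c_mul((b, 0.0), i)))   # (3.0, 2.0)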

- Tim

Arturo Magidin

unread,
Dec 8, 2010, 8:31:54 PM12/8/10
to
On Dec 8, 4:25 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> > Polynomials with real coefficients existed way before Abstract Algebra
> > was born, and no one objected then.
>
> Ahh, but then X was real. X is no longer real within the polynomial
> construction of AA.

Your knowledge of history is matched only by your knowledge of
mathematics. The depth of your knowledge of history is matched by the
depth of your concern that when you fail to understand something, it
might possibly be your fault and not the rest of the world's.

You are little more than a crank with delusions of adequacy, ready to
blame the world for your intellectual shortcomings. Which are legion.

--
Arturo Magidin

amzoti

unread,
Dec 8, 2010, 11:43:02 PM12/8/10
to
On Dec 8, 6:28 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

Is this James?

Arturo Magidin

unread,
Dec 9, 2010, 1:03:29 AM12/9/10
to
On Dec 8, 10:43 pm, amzoti <amz...@gmail.com> wrote:

[.snip.]

> Is this James?

No, though he shares the same arrogance based on arguments from
personal incredulity and ignorance.

--
Arturo Magidin

Bill Dubuque

unread,
Dec 9, 2010, 6:27:50 PM12/9/10
to

That's not accurate historically. For one example see my post [1]
from one of the prior Tim Golden threads on this topic. It includes
a jaw-dropping exchange between Cauchy and Hankel on the topic of
the syntax and semantics of polynomial (quotient) rings.

--Bill Dubuque

[1] http://groups.google.com/groups?selm=y8zhbxkjhil.fsf%40nestle.csail.mit.edu

Message has been deleted

quasi

unread,
Dec 10, 2010, 1:10:56 AM12/10/10
to
On Thu, 9 Dec 2010 18:32:47 -0800 (PST), Arturo Magidin
<mag...@member.ams.org> wrote:

>On Dec 9, 5:27 pm, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
>
>> > "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
>> >
>> > Polynomials with real coefficients existed way before
>> >Abstract Algebra was born, and no one objected then.
>>
>>That's not accurate historically.
>

>Of course not! Just like math is not the way it actually
>is but rather the way Timmy *thinks* it is, history is not
>about what actually happened, but rather what Timmy thinks
>should have happened.

Except that the references got mangled here.

The reply claiming that polynomials with real coefficients existed way
before abstract algebra was from me, not from Tim.

I'm not really sure of the history, but I assumed that polynomials
with real coefficients existed at least as far back as Descartes, and
probably even earlier, whereas I would guess that the initial
development of Abstract Algebra (but again, I'm not sure) could be
regarded as originating with Galois.

quasi

Arturo Magidin

unread,
Dec 10, 2010, 1:53:52 AM12/10/10
to
On Dec 10, 12:10 am, quasi <qu...@null.set> wrote:
> The reply claiming that polynomials with real coefficients existed way
> before abstract algebra was from me, not from Tim.

Quite; I misread it, thinking that Bill was replying to the ludicrous
claim that the "x" was "a real number" and not an indeterminate.

> I'm not really sure of the history, but I assumed that polynomials
> with real coefficients existed at least as far back as Descartes, and
> probably even earlier, whereas I would guess that the initial
> development of Abstract Algebra (but again, I'm not sure) could be
> regarded as originating with Galois.

Look at the link Bill provides; the issue was not whether they existed
or not, but whether "no one objected to them." There *were* some
discussions about it, all of which were *completely resolved and
clarified* by abstract algebra, contra Timmy. And contra Timmy, the
indeterminate was not "a real number". That would have made all sorts
of solutions computed by Euler and others nonsense, since from
a_n x^n + ... + a_0 = b_n x^n + ... + b_0 in polynomials you can
conclude a_i = b_i for all i, but such an assertion when x stands "for
a real number" would be false.

--
Arturo Magidin

scaaahu

unread,
Dec 10, 2010, 2:29:37 AM12/10/10
to
On Dec 10, 2:10 pm, quasi <qu...@null.set> wrote:
> On Thu, 9 Dec 2010 18:32:47 -0800 (PST), Arturo Magidin
>

Back in 1991, there was somebody asking people on sci.math about
good beginning level abstract algebra books, somebody recommended
Elements of Abstract Algebra by Allan Clark. I saw that post. The
next time I was in a book store, I saw the book and bought it.

I like it very much, especially the first introductory section. It
tells me a lot of things I didn't know. "The word algebra comes
from an Arabic word meaning reduction or restoration. It first
appeared ... about the year 825 A.D.", etc.

There is a paragraph around the end of the first section, "Group
theory was called the theory of substitutions until 1854 when the
English mathematician Arthur Cayley(1821-1895) introduced the
concept of abstract group."

I am particularly interested in this saying because I have had a
question about Cayley's theorem in group theory. I found this
theorem in every abstract algebra/group theory book I own except
Lang's Algebra. There is no mention of this theorem in the book.
Does anybody know why?

Arturo Magidin

unread,
Dec 10, 2010, 12:12:52 PM12/10/10
to
On Dec 10, 1:29 am, scaaahu <scaa...@gmail.com> wrote:
>
> I am particularly interested in this saying because I have had a
> question about Cayley's theorem in group theory. I found this
> theorem in every abstract algebra/group theory books I own except
> Lang's Algebra. There is no mention of this theorem in the book.
> Does anybody know why?

Cayley's Theorem was very important historically, as a connection
between the new formalism and the old intuition, but it has less
importance now. (The idea of the proof has some interesting
applications, but they may be introduced elsewhere).

See for example the discussion in

http://math.stackexchange.com/questions/10029/importance-of-cayleys-theorem
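For readers who have not met it, Cayley's theorem says that every
group embeds into the symmetric group on its own elements via left
multiplication. A small Python sketch for the Klein four-group (my
own example, not taken from the linked discussion):

from itertools import product

G = list(product((0, 1), repeat=2))      # Klein four-group, written additively
op = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def left_mult(g):
    # The permutation of G given by left multiplication by g.
    return tuple(op(g, x) for x in G)

perms = {g: left_mult(g) for g in G}

# Injective: distinct elements give distinct permutations.
assert len(set(perms.values())) == len(G)

# Homomorphism: the permutation for g*h is the composite of the permutations.
for g, h in product(G, repeat=2):
    composed = tuple(perms[g][G.index(x)] for x in perms[h])
    assert composed == perms[op(g, h)]

print("left multiplication embeds G into Sym(G)")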

--
Arturo Magidin

scaaahu

unread,
Dec 10, 2010, 8:29:51 PM12/10/10
to
> http://math.stackexchange.com/questions/10029/importance-of-cayleys-t...
>
> --
> Arturo Magidin

Thanks to Dr. Magidin for the link.

I was not questioning the importance of Cayley's theorem.
My point was, why is it not in an Algebra book? If I were
to write a paper in group theory, I would think that I don't
need to mention it even if I used it. The assumption would be
that the readers would know the theorem by default.

However, if I were to write an Algebra book, I would
mention it, at least to pay respect to Cayley. The book
is over 900 pages thick with many elementary results
mentioned.

Arturo Magidin

unread,
Dec 10, 2010, 9:45:32 PM12/10/10
to
On Dec 10, 7:29 pm, scaaahu <scaa...@gmail.com> wrote:
> On Dec 11, 1:12 am, Arturo Magidin <magi...@member.ams.org> wrote:
>
>
>
> > On Dec 10, 1:29 am, scaaahu <scaa...@gmail.com> wrote:
>
> > > I am particularly interested in this saying because I have had a
> > > question about Cayley's theorem in group theory. I found this
> > > theorem in every abstract algebra/group theory books I own except
> > > Lang's Algebra. There is no mention of this theorem in the book.
> > > Does anybody know why?
>
> > Cayley's Theorem was very important historically, as a connection
> > between the new formalism and the old intuition, but it has less
> > importance now. (The idea of the proof has some interesting
> > applications, but they may be introduced elsewhere).
>
> > See for example the discussion in
>
> >http://math.stackexchange.com/questions/10029/importance-of-cayleys-t...
>
> > --
> > Arturo Magidin
>
> Thanks to Dr. Magidin for the link.
>
> I was not questioning the importance of Cayley's theorem.
> My point was, why it is not in an Algebra book?

And the answer is precisely what I indicated: that while Cayley's
theorem was very important historically, *today* its importance is
much lessened. One can certainly do a lot of basic group theory
without every worrying about Cayley's theorem, and using instead the
notions of group actions, completely bypassing Cayley's.

> If I were
> to write a paper in group theory, I would think that I don't
> need to mention it even I used it. The assumption would be
> the readers would know the theorem by default.

Actually, I cannot think of a single paper I've read in group theory,
written in the last 50 years, in which whether the reader knows or
does not know Cayley's theorem would make *any* difference whatsoever.

Now, Sylow's theorems, on the other hand...

> However, if I were to write an Algebra book, I would
> mention it, at least to pay respect to Cayley.

Fair enough; the author of the book obviously disagrees. Doesn't seem
to me like it is being "disrespectful", or is necessarily missing
anything important because of that omission.

--
Arturo Magidin

scaaahu

unread,
Dec 10, 2010, 10:37:10 PM12/10/10
to

Lang did not explain why he omitted that theorem, to the best of my
knowledge. Dummit and Foote explained the importance of the notion
of "action". The word action appears all over the places in their
book. Even that, they mentioned Cayley's theorem. I am not saying
which book is better or worse than others. I just want to know if
Lang had good reasons to ignore Cayley's theorem.

Arturo Magidin

unread,
Dec 11, 2010, 2:12:15 AM12/11/10
to
On Dec 10, 9:37 pm, scaaahu <scaa...@gmail.com> wrote:

> Lang did not explain why he omitted that theorem, to the best of my
> knowledge. Dummit and Foote explained the importance of the notion
> of "action". The word action appears all over the places in their
> book. Even that, they mentioned Cayley's theorem. I am not saying
> which book is better or worse than others. I just want to know if
> Lang had good reasons to ignore Cayley's theorem.

Lang's algebra is not meant to be a comprehensive book on group
theory; it misses a lot of things about group theory. It is meant to be
more encyclopedic for *other* areas of algebra; Dummit and Foote, by
comparison, is written at a lower level, aimed at a different
audience.

As to why Lang would make an idiosyncratic choice in writing a book, I
think that the answer is quite simply: "Because he was Lang." He was
always a bit strange insofar as writing went.

--
Arturo Magidin

scaaahu

unread,
Dec 11, 2010, 3:45:48 AM12/11/10
to

I agree with your answer. I wish Lang were alive today so I could
ask him why he made that choice.

Frederick Williams

unread,
Dec 11, 2010, 9:33:19 AM12/11/10
to
Arturo Magidin wrote:
>
> [...] He [Lang] was

> always a bit strange insofar as writing went.

Why do you say that? We used his texts on complex analysis and many
real variables. They both seemed clear but otherwise unremarkable to
me.

--
http://indology.info/papers/gombrich/uk-higher-education.pdf

quasi

unread,
Dec 11, 2010, 9:46:34 AM12/11/10
to
On Sat, 11 Dec 2010 14:33:19 +0000, Frederick Williams
<freddyw...@btinternet.com> wrote:

>Arturo Magidin wrote:
>>
>> [...] He [Lang] was
>> always a bit strange insofar as writing went.
>
>Why do you say that? We used his texts on complex analysis and many
>real variables. They both seemed clear but otherwise unremarkable to
>me.

I second that.

Lang's texts seem to be characterized by a desire to "say it right",
cleanly, with a minimum of fuss.

What would be an example of his "strange writing"?

quasi

Arturo Magidin

unread,
Dec 11, 2010, 3:23:58 PM12/11/10
to
On Dec 11, 8:46 am, quasi <qu...@null.set> wrote:
> On Sat, 11 Dec 2010 14:33:19 +0000, Frederick Williams
>
> <freddywilli...@btinternet.com> wrote:
> >Arturo Magidin wrote:
>
> >> [...] He [Lang] was
> >> always a bit strange insofar as writing went.
>
> >Why do you say that?  We used his texts on complex analysis and many
> >real variables.  They both seemed clear but otherwise unremarkable to
> >me.
>
> I second that.
>
> Lang's texts seem to be characterized by a desire to "say it right",
> cleanly, with a minimum of fuss.
>
> What would be an example of his "strange writing"?

His Calculus textbook is infamous for seeming like it was written on a
weekend (legend has it that it was, on a dare), and his undergraduate
books are not considered that good (and I join that opinion on the
ones I've seen). His Algebra is generally considered a good
encyclopedic resource, but not the best textbook to learn the material
for the first time. The second edition of the Algebra was infamous for
the chapter on homological algebra, which he included essentially
"under protest": he did not believe homological algebra was important
or hard, and although the chapter contained several theorems, it
contained almost no proofs. One of the exercises read: "Go to the
library, check out any book on homological algebra, and prove all the
theorems looking only at the statements and not the proofs."

At the time, there was essentially only one textbook on homological
algebra (Cartan-Eilenberg). The next edition of that book contained
one new exercise: "Go to the library, check out any book by Serge
Lang, and find all the errors."

When I took graduate abstract algebra, using Lang (the edition just
before he moved to Springer) as the textbook, about half the homeworks
contained a problem that was essentially "Theorem XX.xx in the
textbook is incorrect. Find a counterexample, explain where the error
in the proof is, and give an extra hypothesis that will make the
statement correct."

Lang was a bit of an eccentric, both personally and in his writing.
I'm not saying his books are not worth it: they usually are; but they
are also not always the best resource for learning the material for
the first time, in my experience. His "Algebraic Number Theory" and
his book on Diophantine analysis likewise are not, in my opinion, good
places to start because of the way they are written and the selection
of material, though they are great resources once you know the basics.

--
Arturo Magidin

scaaahu

unread,
Dec 11, 2010, 9:26:22 PM12/11/10
to
On Dec 11, 10:33 pm, Frederick Williams

Would you mind pointing out the relevancy of the
document "British Higher Education Policy in the
last Twenty Years: The Murder of a Profession" to
Lang's books?

scaaahu

unread,
Dec 11, 2010, 11:40:32 PM12/11/10
to
On Dec 11, 10:46 pm, quasi <qu...@null.set> wrote:
> On Sat, 11 Dec 2010 14:33:19 +0000, Frederick Williams
>
> <freddywilli...@btinternet.com> wrote:
> >Arturo Magidin wrote:
>
> >> [...] He [Lang] was
> >> always a bit strange insofar as writing went.
>
> >Why do you say that?  We used his texts on complex analysis and many
> >real variables.  They both seemed clear but otherwise unremarkable to
> >me.
>
> I second that.
>
> Lang's texts seem to be characterized by a desire to "say it right",
> cleanly, with a minimum of fuss.
>
> What would be an example of his "strange writing"?
>
> quasi

I own Lang's Algebra, 3rd edition, 2nd printing.
On Page 6,
"If R is a commutative ring, we shall deal with
multiplicative subsets S, that is subsets containing
the unit element, and such that if x, y in S then
xy in S. Such subsets are monoids."

Yes, commutative monoids are monoids. But, not all
monoids are commutative monoids. How much difference is there
between abelian groups and non-abelian groups?

I happen to study semigroups a lot. That's how I spotted
this fuss! Does this three-liner "say it right" cleanly?

Transfer Principle

unread,
Dec 11, 2010, 11:51:45 PM12/11/10
to
> Is this James?

At first, I avoided this thread, since the original question about
math PhD programs doesn't apply to me. But now I see fit to enter.

Posters here at sci.math regularly insult each other all the time, but
rarely do I see so many insults directed at one poster in such a short
period of time -- except for the aforementioned JSH, of course.

So Tim Golden questioned abstract algebra, and within 15 hours, he
received a five-letter insult from Spight, a .sig insult from Hughes, a
five-letter insult from Magidin, a JSH comparison from amzoti. Even
quasi, whom I usually consider to be more tolerant, saw fit to dish out
a pair of five-letter insults. All simply because Golden's opinion of
algebra isn't the majority opinion.

Meanwhile, I must commend Bill Dubuque for giving one of the very few
helpful responses to Golden, without any insults.

My hope is that both sides can learn from each other and learn how to
accept each other's ideas. I hope that Golden can accept the polynomial
ring R[x] while the others can accept Golden's polysigned numbers Pn.

First, I present a Dubuque-like construction of Golden's Pn:

We begin with the set P of nonnegative reals ("magnitudes"), which we
obtain by performing the first two steps of the construction of R as
given by David Ullrich, which takes us from N to Q+ via equivalence
classes of ordered pairs (fractions) and Q+ to P via Dedekind cuts (or
Cauchy sequences). We leave out the last step taking us from P to R,
or more precisely, we generalize the last step, as follows:

To form the n-signed numbers, we start with the set P^n of ordered
n-tuples in P. We write each n-tuple as:

(a_0, a_1, a_2, ..., a_(n-1))

We define an addition on this monoid as componentwise addition. Then
we mod out the submonoid generated by:

(1, 1, 1, ..., 1)

This gives us a group, which we make into a ring by defining a
multiplication as cyclical convolution. One can prove that addition
and multiplication on the equivalence classes are well-defined.

-- If n=1, one obtains the trivial ring.
-- If n=2, the resulting ring is a field isomorphic to R.
-- If n=3, the resulting ring is a field isomorphic to C.
-- The resulting ring is an (n-1)-dimensional vector space over R.
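A small Python sketch of this construction as I read it (n-tuples with
componentwise addition and cyclic-convolution multiplication, with
(t, t, ..., t) identified with zero), plus a numerical check of the n=3
claim via the map sending the k-th sign to the k-th cube root of unity;
that check map is my own choice of witness for the isomorphism:

import cmath, random

def conv(p, q):
    # Cyclic convolution of two n-tuples: the proposed multiplication.
    n = len(p)
    return tuple(sum(p[i] * q[(k - i) % n] for i in range(n)) for k in range(n))

def canonical(p):
    # Canonical representative of the class of p: subtract the componentwise minimum.
    m = min(p)
    return tuple(x - m for x in p)

def to_complex(p):
    # Send the k-th sign to exp(2*pi*i*k/n); used only to check the n=3 claim.
    n = len(p)
    return sum(a * cmath.exp(2j * cmath.pi * k / n) for k, a in enumerate(p))

for _ in range(100):
    p = tuple(random.random() for _ in range(3))
    q = tuple(random.random() for _ in range(3))
    # Multiplication agrees with complex multiplication ...
    assert abs(to_complex(conv(p, q)) - to_complex(p) * to_complex(q)) < 1e-9
    # ... and identifying (t, t, t) with zero does not change the image.
    assert abs(to_complex(canonical(p)) - to_complex(p)) < 1e-9

assert abs(to_complex((1.0, 1.0, 1.0))) < 1e-12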

The name "polysigned" comes from the fact that for n=2:

+1 corresponds to (1,0)
-1 corresponds to (0,1)

So for n=3, one can assume that there's a third "sign," and write:

*1 corresponds to (1,0,0)
-1 corresponds to (0,1,0)
+1 corresponds to (0,0,1)

A few other interesting notes:

-- If one performs the Golden construction on N rather than P (i.e.,
omitting both Ullrich steps), then N2 yields the integers Z and N3
gives the Eisenstein integers.
-- If one performs the Golden construction on Q+ rather than P (i.e.,
omitting the second Ullrich step), then Q2 yields the rationals Q
and Q3 gives the cyclotomic field of order 3.

Second, I perform a Golden-like construction to obtain the field C of
complex numbers, since Golden questioned the use of the symbol "i" in
the standard construction of C.

We notice that Golden, in his construction of Pn, points out that the
signs multiply similar to addition mod n. Thus we have:

In P2, {-1,+1} is isomorphic to Z/2Z.
In P3, {-1,+1,#1} is isomorphic to Z/3Z.
In Pn, the signs are isomorphic to Z/nZ.

But there's no reason that the signs must be isomorphic to a _cyclic_
group like Z/nZ. So let's come up with a polysigned ring
where the signs are isomorphic to a group other than Z/nZ.

The simplest group that isn't cyclic is the Klein 4-group (Z/2Z)^2. So
let's call the four signs +, -, |, and !, and let's give the table for
this polysigned group. Since the Klein 4-group is often written as the
symbol V, we'll call the corresponding polysigned ring PV.

(Note: this table requires a fixed-width font to read.)

+-|!
-+!|
|!+-
!|-+

As we can see, the choice of + and - as two of the signs is hardly
coincidental, for they act just like + and - in P2 and R.

Let's try something here. (Here we write @ as the addition operation
in PV, as Golden often does for Pn.)

(|1)(!1)(|1@!1) = |1@!1 (since (|1)(!1) = +1 multiplicative identity)

but

(|1)(!1)(|1@!1) = (|1)(|1@!1)(!1) (commutativity)
= (-1@+1)(!1) (distributivity plus the chart)
= 0(!1) (since -1 and +1 sum to 0 as in P2/R)
= 0

Thus |1 and !1 sum to 0 as well. So we can write:

|2@!1 = (|1@|1)@!1
= |1@(|1@!1) (associativity)
= |1@0 (from above)
= |1 (additive identity)

Finally, a multiplication:

(+2@|1)(+3@|3) = (+2)(+3)@(+2)(|3)@(|1)(+3)@(|1)(|3)
= +6 @ |6 @ |3 @ -3
= +6 @ -3 @ |3 @ |6
= +3 @ |9

And therein lies the kicker -- the ring PV is actually isomorphic
to the field C of complex numbers, where +1 and -1 correspond to
the expected real numbers, and |1 and !1 actually correspond to
the _imaginary_ numbers i and -i.

So I hope that I've presented the complex numbers i and -i in a
manner acceptable to Golden. The numbers i and -i are actually
two new signs, |1 and !1, in a polysigned ring with four signs,
except instead of mod 4, we use Klein 4-group to multiply signs.

Trying to have Golden accept polynomial rings is trickier. As it
turns out, we would need a polysigned ring with _infinitely_ many
signs, instead of finitely many. We would have to apply Golden's
construction to the infinite set w of nonnegative integers. Since
w is an (additive) monoid rather than a group, division isn't
always possible in the resulting polysigned ring Pw (but we
already know that division of polynomials isn't always possible).

scaaahu

unread,
Dec 11, 2010, 11:56:58 PM12/11/10
to
On Dec 12, 12:40 pm, scaaahu <scaa...@gmail.com> wrote:
> I own Lang's Algebra, 3rd edition, 2nd printing.
> On Page 6,
> "If R is a commutative ring, we shall deal with
> multiplicative subsets S, that is subsets containing
> the unit element, and such that if x, y in S then
> xy in S. Such subsets are monoids."
>
> Yes, commutative monoids are monoids. But, not all
> monoids are commutative monoids. How much difference
> between abelian groups and non-abelian groups?
>
> I happen to study semigroups a lot. That's how I spotted
> this fuss! Is this three-liner "say it right" cleanly?
>
I missed a word in my previous post,
The book I own is a Revised 3rd edition

Jesse F. Hughes

unread,
Dec 12, 2010, 8:41:04 AM12/12/10
to
Transfer Principle <lwa...@lausd.net> writes:

> So Tim Golden questioned abstract algebra, and within 15 hours, he
> received a five-letter insult from Spight, a .sig insult from Hughes,
> a five-letter insult from Magidin, a JSH comparison from amzoti. Even
> quasi, whom I usually consider to be more tolerant, saw fit to dish
> out a pair of five-letter insults. All simply because Golden's opinion
> of algebra isn't the majority opinion.

No, not all because his opinion isn't a majority opinion.

First, as far as being quoted in my .sigs: it is, of course, complete
coincidence that his quote was selected for a recent post. The
selection process is random, and that particular quote has been in
rotation since... er...

I can't seem to find Tim Golden in my .sig collection at all. I imagine
I'm overlooking something, but could you please tell me what .sig insult
I intentionally hurled by random selection?

In any case, the other responses aren't due to mere differences of
opinion either. Tim has a basic misunderstanding. For instance, he
claims that A[X] can't be a ring, since it involves multiplication of an
element of A with X, and they aren't elements of the same set. But this
is silly, partly because one needn't regard aX as multiplication at all,
but also because A u {X} *is a set*. There is no issue here at all!
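To make that concrete, here is a minimal Python sketch of the usual
textbook construction of polynomials as finite coefficient lists; it is
one standard way of setting things up, not necessarily the one Jesse
has in mind. "X" is just the list [0, 1], and "a X" is notation for
[0, a]; no mixed-set multiplication is ever performed.

def p_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def p_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

X = [0, 1]
# a0 + a1 X + a2 X X with a0, a1, a2 = 1, 2, 3:
print(p_add(p_add([1], p_mul([2], X)), p_mul([3], p_mul(X, X))))   # [1, 2, 3]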

--
Jesse F. Hughes
"Well, I guess that's what a teacher from Oklahoma State University
considers proper as Ullrich has said it, and he is, in fact, a teacher
at Oklahoma State University." -- James S. Harris presents a syllogism

Jesse F. Hughes

unread,
Dec 12, 2010, 8:42:25 AM12/12/10
to
Transfer Principle <lwa...@lausd.net> writes:

> My hope is that both sides can learn from each other and learn how to
> accept each other's ideas. I hope that Golden can accept the
> polynomial ring R[x] while the others can accept Golden's polysigned
> numbers Pn.

What does "accept" mean?

I haven't looked at his polysigned numbers, but assuming that they are a
well-defined mathematical structure, then I guess I "accept" them. I
wouldn't want my daughter to marry a polysigned number, but some of my
best friends are polysigned numbers.

--
Jesse F. Hughes
"I already have major discoveries, which mathematicians have simply
avoided bothering to inform the public about, so I'll solve the
factoring problem, and that will end." JSH: A Man with a Plan!

Jesse F. Hughes

unread,
Dec 12, 2010, 8:57:17 AM12/12/10
to
"Jesse F. Hughes" <je...@phiwumbda.org> writes:

> Transfer Principle <lwa...@lausd.net> writes:
>
>> So Tim Golden questioned abstract algebra, and within 15 hours, he
>> received a five-letter insult from Spight, a .sig insult from Hughes,
>> a five-letter insult from Magidin, a JSH comparison from amzoti. Even
>> quasi, whom I usually consider to be more tolerant, saw fit to dish
>> out a pair of five-letter insults. All simply because Golden's opinion
>> of algebra isn't the majority opinion.
>
> No, not all because his opinion isn't a majority opinion.
>
> First, as far as being quoted in my .sigs: it is, of course, complete
> coincidence that his quote was selected for a recent post. The
> selection process is random, and that particular quote has been in
> rotation since... er...
>
> I can't seem to find Tim Golden in my .sig collection at all. I imagine
> I'm overlooking something, but could you please tell me what .sig insult
> I intentionally hurled by random selection?

Ah, yes! I recall now (thanks to Google).

A poster honestly asked for advice regarding a PhD program. This is a
serious, important issue for the OP. Tim's response was to ignore the
substance of the question and instead talk about his own odd views about
polynomial rings.

Tim was being an asshole, plain and simple.

As it happens, the .sig from my response (found at
http://science.niuz.biz/re-t388536.html?s=7288ce145a6c4d5c80be882537e8e9e3&amp;)
was apt, and I commented on that fact.

Notice that I didn't comment on Tim's claims at all? In fact, until the
above post, I have not (as I recall) made any comment on the correctness
of Tim's assertion that A[X] is not a ring[1]. Instead, I told him that
he was being a jerk for using the OP's thread to attract attention
instead to his own pet rant.

It would not matter whether he was right or wrong, by the way. I'd make
the exact same response in either case. He was being a jerk.

Footnotes:
[1] He's wrong, and it would be very silly for you to claim that what
he means is really best interpreted in some other, not yet specified,
theory. He's talking about algebra as it is taught in graduate courses.

--
Jesse F. Hughes
"Women aren't that unpredictable."
"Well, I can't guess what you're getting at, honey."
-- Hitchcock's _Rear Window_

Frederick Williams

unread,
Dec 12, 2010, 10:09:22 AM12/12/10
to

There is none. It's in my sig, which Google Groups misformats.

Two dashes, a space and newline mark the start of a sig.

--
http://indology.info/papers/gombrich/uk-higher-education.pdf

quasi

unread,
Dec 12, 2010, 12:15:37 PM12/12/10
to

He didn't just question it.

He called it "the bastard child of modern mathematics".

>and within 15 hours, he received a five-letter insult from Spight,
> a .sig insult from Hughes, a five-letter insult from Magidin,
>a JSH comparison from amzoti. Even quasi, whom I usually consider
>to be more tolerant, saw fit to dish out a pair of five-letter insults.

But note again the vulgar slur "bastard child" hurled by Tim.

Besides, Tim is at a level where he should understand the constant use
in mathematics of "identification by injection" where an object in one
set is identified with an object in another set, often with no change
in notation, the distinction being clear from the context.
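A familiar instance of that identification, sketched in Python (the
integers sitting inside the rationals via n -> n/1; identifying a
coefficient ring A with the constant polynomials in A[X] works the
same way):

from fractions import Fraction

def embed(n):
    # The injection Z -> Q sending n to n/1.
    return Fraction(n, 1)

for a in range(-5, 6):
    for b in range(-5, 6):
        assert embed(a + b) == embed(a) + embed(b)
        assert embed(a * b) == embed(a) * embed(b)

print("the embedding respects both operations")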

If he can't understand that simple idea, then sorry, he really is a
crank, and if he does understand it, but is just trying to be
objectionable, then he is a troll. I suspect a lot of one, a little of
the other.

quasi

Marshall

unread,
Dec 12, 2010, 12:29:58 PM12/12/10
to

FIRST!


> Even quasi, whom I usually consider to be more tolerant, saw
> fit to dish out a pair of five-letter insults. All simply because
> Golden's opinion of algebra isn't the majority opinion.

No, that's not why. It's because he's an arrogant idiot. (Count
the letters!) Arrogant, because no matter how many people
explain his mistake to him, he continues to assume everyone
else is wrong, and he's some kind of intrepid discoverer of
a "flaw" in mathematics.


Marshall

Jesse F. Hughes

unread,
Dec 12, 2010, 2:40:07 PM12/12/10
to
quasi <qu...@null.set> writes:

>>and within 15 hours, he received a five-letter insult from Spight,
>> a .sig insult from Hughes, a five-letter insult from Magidin,
>>a JSH comparison from amzoti. Even quasi, whom I usually consider
>>to be more tolerant, saw fit to dish out a pair of five-letter insults.
>
> But note again the vulgar slur "bastard child" hurled by Tim.

Eleven letters. Doesn't count.

--
"There are some dark forces among you though[...] They are, quite
simply, anti-mathematicians. They pretend to be mathematicians, but
show their true colors when discovery is about, as their role is to
block human progress!!!" -- James Harris on the beast of the number.

Bruce Wheeler

unread,
Dec 13, 2010, 7:23:17 AM12/13/10
to
On Wed, 08 Dec 2010 03:46:40 EST, mickeykollins
<mickey...@yahoo.com> wrote:

>I'm applying for grad schools this fall, in physics and applied math. For the applied math, I would like to apply to programs that combine pure and applied math, rather than just separate applied math programs. But since I decided to apply to grad schools just a couple months ago, I registered for just the physics GRE, not the math subject. Its too late to take the math subject. This turned out to be a huge mistake. Thus, I have no chance at my dream schools, such as Berkeley, Texas Austin, and Michigan. Also, the only proof-based math courses I've taken are linear algebra, analysis and fourier analysis. No abstract algebra or topology
>
>Here's my stats:
>- physics gpa: 3.64, applied math: 3.93 from a top state school
>- I just took the PGRE and expect to get around 80th percentile
>- I've done two different research projects, but no publications.
>- All my LORs will come from physics, not math, professors
>
>Applied math programs I'm thinking of applying to (since these don't require (but usually strongly recommend) math GRE):
>Maryland, Cornell, Brown, NYU, and maybe Arizona and UC Davis
>
>Because I have a strong feeling that I'll get rejected from all of those schools because I only have a strong desire for 4 of those schools, and I lack the math GRE, I've been thinking about what to do afterwards
>
>
>I completed my undergrad at UCLA, which offers an "Extension" program whereby graduates can enroll in regular classes without formally applying to the university. "Extension" students are graded according to the same standards as regular degree-seeking students.
>This seems like an ideal way to fill in the gaps in my undergraduate background. I currently live closer to UCI, so I could go there instead
>
>Would it greatly help me for the future if I take the math classes that I currently lack, such as abstract algebra, topology, or 2nd quarter of analysis. If I do this, should I enroll in the winter (starts in Jan 2011) or spring (April)? I don't think I'll hear back from NYU, Brown, etc until March at the earliest

Since none of the regulars has responded, here are a few things
(basically just common sense) to think about:
1) Talk to someone at UCLA or UCI about how you should plan your
future.
2) Get some recommendations from math professors. If you have a 3.93
GPA in applied math, that should be possible.
3) Look into taking the math GRE as soon as you can.
4) Yes, you should take some extra math classes if possible. You may
need to take these classes as part of your graduate program anyway,
and it's better to take them ahead of time. This is especially true of
analysis.
5) Decide what area of applied math specifically interests you (it's a
big area, with many unrelated subjects), and look for professors you
may want to work for, rather than 'dream schools'.
6) Think about getting an MS at one school, and transferring to
another 'dream school' after the MS, if you want to get a PhD. Maybe
apply to UCLA or UCI (or another UC campus) for an MS program.
7) Think about putting off applying for a year, take relevant courses
this year, and take the GRE next fall.
8) Apply to programs for the winter quarter/semester next year instead
of next fall. This would also allow time for better preparation.
9) Write to, or visit, math programs which interest you to find out
whether they could be flexible relative to the GRE requirement (maybe
letting you take it later or not at all), and also what classes you
may need to take, such as analysis, before taking the relevant
graduate classes. Demonstrate that you are really interested in
pursuing this, and they may be flexible.

Regards,
Bruce Wheeler

Pubkeybreaker

unread,
Dec 13, 2010, 10:36:35 AM12/13/10
to
> Marshall


This describes virtually all of the cranks who post here.

Tim Golden BandTech.com

unread,
Dec 16, 2010, 12:45:07 PM12/16/10
to
On Dec 12, 8:41 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:

> In any case, the other responses aren't due to mere differences of
> opinion either.  Tim has a basic misunderstanding.  For instance, he
> claims that A[X] can't be a ring, since it involves multiplication of an
> element of A with X, and they aren't elements of the same set.  But this
> is silly, partly because one needn't regard aX as multiplication at all,
> but also because A u {X} *is a set*.  There is no issue here at all!

I believe that this is the strongest defense that you state above.
It was stated on or near the thread that Dubuque mentions as well.
Note that while Dubuque has strong research I believe he remains
agnostic on the formalities of this discussion. I tried to pin him
down before when he admitted ambiguity, but was unable to coax a clear
statement. Those links that he gave were excellent, and most of his
content is as well. Nicely he found where Hamilton declared i to be a
sign. If only he had truly generalized sign we'd be farther along
today. Simple concepts can lay buried in false assumptions, and we
humans must consider that we work backward toward the truth. The
fundamentals are not all done. They must be approached as open
problems even still after centuries or millennia of development.

The formal construction of the set A Union X is nonexistent.
What if we were to remove A from this set? What is left?
This set A Union X can be regarded as the standard notation
A[X]
within AA. But what it contains other than real values is ill defined,
because X is ill defined. This is sheer silliness, and if I were to
attempt the same I would be laughed at. Well, I am already laughed at,
and I laugh back. Not only must we declare X to be in that set, but we
must manually declare its offspring XX, XXX, etc. to be in that set.
What a joke it is, for the notation is identical to polysign numbers,
especially when the modding of the ideal is set up, though that bit of
gobbledy gook is incomprehensible to me. The real value already had a
modulo two component built into it. That portion is generalizable and
is a clean construction; lacking the conflicts of the polynomial
theory within the ring formalism.

One attack on your defense is that if it were true that the product
a X
were ring defined on the set A[X] then we could replace this value
with a value
b = a X
which is just as the ring definition implies. This is clearly not
intended. The values are forced to preserve X as a counting mechanism.
These parts do not combine, and since operators have been formally
declared the proper way out is to declare this
a X
operation a new operator. It is clearly along the lines of a unit
vector, and that makes this portion of abstract algebra a
multidimensional affair, but because it is possible to wrap the
dimensional device then the math is building low dimensional spaces
from high dimensional spaces, which is a poorer method than building
higher dimensional spaces from lower or nondimensional space. This
dimensional view of the polynomial construction is in line with the
polysign construction, where rotational features of the product are
maintained. Modulo mathematics is in the core of polysign. Abstract
algebra is a farce from the polysign perspective, especially for the
reliance upon the real number as fundamental within AA.

It also happens that compact representation of physical formulas are
possible within this modulo interpretation, and so its value is
already apparent. The idea that reality inherently has these discrete
modulo behaviors is new, but that space carries discrete dimensions is
not. Still the awareness needs more nurturing.

The title of this thread is 'Advice for math phD programs' and so I
have given some advice here to those programs. I like the variation of
TP's Klein four group though I don't see it as a general phenomenon.
Polysign presents spacetime support from pure arithmetic, and I will
quibble about P1 being a trivial ring, for the cancellation
(x) = 0 (P1)
(x,x) = 0 (P2)
(x,x,x) = 0 (P3)
...
can be interpreted as an operator, and that operator is not
necessarily exercised when sum and product operations are. Indeed the
operation is exactly that of geometrical rendering, and this is so
deep a statement that the cartesian thinkers of this day will not
appreciate it. The geometry of polysign is inherently implied and
symmetrically balanced. The real number is not fundamental. That time
and P1 have such strong correspondence is not a mistake. That
electromagnetism is a property of spacetime is likewise carrying a
polysign correspondence, though it has not yet been formalized.

- Tim

Jesse F. Hughes

unread,
Dec 16, 2010, 3:33:48 PM12/16/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> writes:

> On Dec 12, 8:41 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
>
>> In any case, the other responses aren't due to mere differences of
>> opinion either.  Tim has a basic misunderstanding.  For instance, he
>> claims that A[X] can't be a ring, since it involves multiplication of an
>> element of A with X, and they aren't elements of the same set.  But this
>> is silly, partly because one needn't regard aX as multiplication at all,
>> but also because A u {X} *is a set*.  There is no issue here at all!
>
> I believe that this is the strongest defense that you state above.
> It was stated on or near the thread that Dubuque mentions as well.
> Note that while Dubuque has strong research I believe he remains
> agnostic on the formalities of this discussion.

If you're suggesting that Bill Dubuque is not sure whether A[X] is a
ring or not (given that A is a ring), I am confident that you're mighty
mistaken.

> I tried to pin him down before when he admitted ambiguity, but was
> unable to coax a clear statement. Those links that he gave were
> excellent, and most of his content is as well. Nicely he found where
> Hamilton declared i to be a sign. If only he had truly generalized
> sign we'd be farther along today. Simple concepts can lay buried in
> false assumptions, and we humans must consider that we work backward
> toward the truth. The fundamentals are not all done. They must be
> approached as open problems even still after centuries or millennia of
> development.
>
> The formal construction of the set A Union X is nonexistent.

More nonsense from Tim. Sorry, that's the old way of speaking about
people that don't understand mathematics, yet talk about it anyway.
L. Walker has taught me the new way.

Tim has different beliefs about mathematics. These beliefs are, of
course, simple theorems in some unspecified theory. Tim has every right
to his beliefs and it is rude to use real mathematics to show that he is
wrong.

> What if we were to remove A from this set? What is left?
> This set A Union X can be regarded as the standard notation
> A[X]
> within AA. But what it contains other than real values is ill defined,

> because X is ill defined. This is sheer silliness [...]

No doubt.

Anyway, keep at it, Tim. I am sure that you alone have grasped the fact
that algebra is a farce, a mere hoax perpetrated by so-called
mathematical experts. Congrats!

In the meantime, I see no reason to try to disabuse you of your
remarkable insights.

--
Jesse F. Hughes
"Social castigation. Their pictures in the papers. Reporters hounding
them with hard questions. And it won't end during their lifetimes."
-- Oppose James S. Harris and you get post-mortem hardball interviews

Arturo Magidin

unread,
Dec 16, 2010, 3:56:27 PM12/16/10
to
On Dec 16, 2:33 pm, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:

> "Tim Golden BandTech.com" <tttppp...@yahoo.com> writes:
>
> > On Dec 12, 8:41 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
>
> >> In any case, the other responses aren't due to mere differences of
> >> opinion either.  Tim has a basic misunderstanding.  For instance, he
> >> claims that A[X] can't be a ring, since it involves multiplication of an
> >> element of A with X, and they aren't elements of the same set.  But this
> >> is silly, partly because one needn't regard aX as multiplication at all,
> >> but also because A u {X} *is a set*.  There is no issue here at all!
>
> > I believe that this is the strongest defense that you state above.
> > It was stated on or near the thread that Dubuque mentions as well.
> > Note that while Dubuque has strong research I believe he remains
> > agnostic on the formalities of this discussion.
>
> If you're suggesting that Bill Dubuque is not sure whether A[X] is a
> ring or not (given that A is a ring), I am confident that you're mighty
> mistaken.

Bill noted that *historically* there was indeed some ambiguity about
polynomial rings and just what X was or was not, but that modern
algebra had in fact *removed* that ambiguity and replaced it with
solid definitions and meanings.

This is in complete opposition to Timmy's version of history, in which
polynomials were perfectly fine and understood creatures up until the
advent of modern algebra, which proceeded to make them
incomprehensible.

In short, while Timmy's objections might have had a place two
centuries ago, they were long since solved by precisely the ideas that
Timmy rejects.

It would be as if Timmy were raising objections to Weierstrass's
version of calculus and saying that back when Newton and Leibnitz were
doing calculus with infinitesimals, everything was clear and fine but
that Weierstrass and his limits made it nonsensical. Then someone
would bring up Bishop Berkeley's objections to infinitesimals and
calculus to show that in fact the problem was the other way around,
and Timmy would consider this vindication of his complaints about
limits.

--
Arturo Magidin

Tim Golden BandTech.com

unread,
Dec 17, 2010, 10:40:57 AM12/17/10
to
On Dec 16, 3:56 pm, Arturo Magidin <magi...@member.ams.org> wrote:
> On Dec 16, 2:33 pm, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
> > "Tim Golden BandTech.com" <tttppp...@yahoo.com> writes:
> > > On Dec 12, 8:41 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
> > >> In any case, the other responses aren't due to mere differences of
> > >> opinion either.  Tim has a basic misunderstanding.  For instance, he
> > >> claims that A[X] can't be a ring, since it involves multiplication of an
> > >> element of A with X, and they aren't elements of the same set.  But this
> > >> is silly, partly because one needn't regard aX as multiplication at all,
> > >> but also because A u {X} *is a set*.  There is no issue here at all!
> > > I believe that this is the strongest defense that you state above.
> > > It was stated on or near the thread that Dubuque mentions as well.
> > > Note that while Dubuque has strong research I believe he remains
> > > agnostic on the formalities of this discussion.
> > If you're suggesting that Bill Dubuque is not sure whether A[X] is a
> > ring or not (given that A is a ring), I am confident that you're mighty
> > mistaken.
> Bill noted that *historically* there was indeed some ambiguity about
> polynomial rings and just what X was or was not, but that modern
> algebra had in fact *removed* that ambiguity and replaced it with
> solid definitions and meanings.

I am stating that the polynomial construction is ambiguous with
respect to the closure property of the ring operators, especially if
a X
contains a product, for a is real and X is not real, in direct
contradiction to the operators which are claimed to be in use. Really
it is so simple and here you have not continued the discussion.
Instead you have merely snubbed it by not replying directly to my
criticisms(both Jesse and Arturo), whereas I have come straight to
your strongest point (Jesse's). Your methods are not yielding a
falsification of my statements. My point is so simple that there is
not much room. A falsification should be straightforward, but instead
we see a roundabout sort of defense of the polynomial (Jesse's) which
fails, and then is ignored by the supporters of the status quo.

The extensions of the argument and the interpretation of X as a unit
vector is appropriate to me, and this is the means by which I try to
understand the polynomial construction, but in the definition of the
ideal and the modding operation '/' there is much to complain about.
The trouble is that in this area the arguments are not so straight
forward. Still, that a flawed construction would turn out this way is
predictable.

I guess the easiest way to form the modding interpretation that I
offer is to use actual instances. To move on into the ideal we could
select a member of A[X] such as
p = 1.0 + 2.0 X + 3.0 X X + 4.0 X X X
and consider the operation
p / ( X X + 1 )
which is supposed to generate a complex number. The square and cube
portions must mod down to lower dimension. Now I know that this is not
satisfactory language to you, but this is the modding that has to take
place, and this interpretation leaves us building a two dimensional
space from an infinite dimensional space. The sensibility of this as a
universal means is fairly obvious since from an infinite dimensional
space much is possible, but to admit this as a truth is not such a
powerful truth. Why such instantiation as I offer above does not take
place I am not sure. Perhaps it is because it further exposes
conflicts within the construction.
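
For concreteness, the reduction in this instance is easy to carry through; a
minimal sketch (the helper name is illustrative only) that simply substitutes
-1 for every occurrence of X X:

# Reduce a polynomial, given as its coefficient list [a0, a1, a2, ...],
# modulo X X + 1 by replacing each occurrence of X X with -1.
def reduce_mod_xx_plus_1(coeffs):
    out = [0.0, 0.0]  # [constant term, coefficient of X]
    for i, a in enumerate(coeffs):
        # X^i = (X X)^(i//2) * X^(i%2), and each X X contributes a factor -1
        out[i % 2] += a * (-1) ** (i // 2)
    return out

print(reduce_mod_xx_plus_1([1.0, 2.0, 3.0, 4.0]))  # [-2.0, -2.0]

So the cubic p above reduces to -2.0 - 2.0 X, which is then read as the
complex number -2 - 2i.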

As to limit theory in calculus I would argue that treating the problem
as an open problem would be the first step in any sincere discussion
of the topic. That there might be other means to construct a calculus
would then be a possible offshoot of such a conversation. This is not
the religious view that Arturo will take. I ask any onlooker to please
consider the content beyond the rhetoric, which is very thin on the
side which supports the standard abstract algebra construction; an
area which I rhetorically call the bastard child of modern
mathematics. The idea that mathematicians can make mistakes is
entirely acceptable. Mimicry is prescribed into them for they are
humans. To ask that they break out of mimicry, well, this is merely
asking for variations, and perhaps a bit more than that, for when a
falsification is offered then the human social order within their own
cult is challenged. Here we have it in roughly one line:

The expression ( a X ) cannot be ring behaved if a is real and X is
not real.

- Tim

Jesse F. Hughes

unread,
Dec 17, 2010, 10:55:29 AM12/17/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> writes:

> I am stating that the polynomial construction is ambiguous with
> respect to the closure property of the ring operators, especially if
> a X
> contains a product, for a is real and X is not real, in direct
> contradiction to the operators which are claimed to be in use. Really
> it is so simple and here you have not continued the discussion.
> Instead you have merely snubbed it by not replying directly to my
> criticisms(both Jesse and Arturo), whereas I have come straight to
> your strongest point (Jesse's). Your methods are not yielding a
> falsification of my statements.

Perhaps I haven't been explicit enough.

You are clearly incapable of understanding the algebraic structures
because you don't have the mathematical background for it. I'm sure
that you disagree and will view this reply as a cowardly avoidance of
your arguments, but I just don't care. I am not about to waste my time
vainly trying to teach algebra to a person who already has so many
misconceptions and ridiculous opinions on its validity.

Perhaps L. Walker can help you out. He has more patience than I.

--
Jesse F. Hughes

"If the above is not true, it could have been."
-- Bart Goddard offers the perfect .sig.

Arturo Magidin

unread,
Dec 17, 2010, 12:55:25 PM12/17/10
to
On Dec 17, 9:40 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

Perhaps I did not make myself clear:

You don't know what you are talking about. You think that your own
ignorance represents a failure of mathematics. You are unwilling to
listen and instead prefer to bray your ignorance at the top of your
lungs, and I will not waste any more time trying to educate you out of
your wallowing ignorance. You enjoy wallowing in the filth of your own
ignorance; good for you, I have no intention to lower myself to spend
time there.

Also, perhaps I did not make myself clear: you completely, and
utterly, misunderstood Bill Dubuque's comments. You did so because you
are unwilling (or incapable) to look past the end of your nose, and
think everything is about you.

Go stick it up your ass; it will keep your head good company.

--
Arturo Magidin

Aatu Koskensilta

unread,
Dec 17, 2010, 12:55:50 PM12/17/10
to
Arturo Magidin <mag...@member.ams.org> writes:

> Also, perhaps I did not make myself clear: you completely, and
> utterly, misunderstood Bill Dubuque's comments. You did so because you
> are unwilling (or incapable) to look past the end of your nose, and
> think everything is about you.

In all fairness, good old-fashioned incompetence is another plausible
explanation.

--
Aatu Koskensilta (aatu.kos...@uta.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Tim Golden BandTech.com

unread,
Dec 17, 2010, 7:29:21 PM12/17/10
to
On Dec 17, 12:55 pm, Aatu Koskensilta <aatu.koskensi...@uta.fi> wrote:

> Arturo Magidin <magi...@member.ams.org> writes:
> > Also, perhaps I did not make myself clear: you completely, and
> > utterly, misunderstood Bill Dubuque's comments. You did so because you
> > are unwilling (or incapable) to look past the end of your nose, and
> > think everything is about you.
>
>   In all fairness, good old-fashioned incompetence is another plausible
> explanation.
>
> --
> Aatu Koskensilta (aatu.koskensi...@uta.fi)

>
> "Wovon man nicht sprechen kann, darüber muss man schweigen"
>   - Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Well, here we have replies without a single falsification; merely
hurling insult after insult. This is highly unmathematical behavior.

Now, at some point we should anticipate that all mathematical
construction should be clean, and in this regard it does mean that it
should be straightforward. I do take a straightforward approach.
Abstract Algebra does not.

For all of your disagreement there is no specific content that I can
discuss. All of your own counterattacks here have zero content. For
the sake of providing some small amount of content I again summarize
my complaint, for it is so brief as to be nearly costless to jot down:

The polynomial construction, as well as the traditional development
of the complex number, while claimed to be ring behaved cannot be so,
for they contain products and sums which do not satisfy the closure
requirement of those operators. The lack of discussion of this
fundamental sort is bewildering. A math which bothers to formally
define its operators, and then go on to use those same operators
without compatibility is a contemptible construction.

Your own contempt of myself is irrelevant. I discuss a topic within
modern mathematics and claim a falsification. You (Jesse) provide one
brief formal statement, which I work off of above here, proving your
interpretation to be poor, and there the discussion ends as far as the
topic itself is concerned. You seem to prefer an emotional basis,
whereas I am attempting a mathematical basis, with some emotional
epithets. One should try to keep the topic interesting. You all are
weak here.

If I am so wrong then why not point out the exact spot where I am
wrong, just as I do for you? This is such a basic part of discussion
that is lacking here, and here on a mathematics group. I surmise that
this is one exposure of human social behavior, and further that the
ability of an academically trained person to challenge their own
teachings is severely limited by the en masse belief, especially
within a subject whose perfection is claimed to be beyond a doubt. Yet
none are willing to even doubt. This is a serious problem within
mathematical thinking for the human. How many opportunities were you
given to decide whether you believe what you were taught? Very few. If
you did not swallow quickly what was taught and mimic it properly then
you were not a good enough student. In this way academia is rewarding
mimicry and lacks analytical training of the sort that is necessary to
uncover past mistakes and possible reworks. How much of academia will
be valid in one hundred years? How much accumulation can humans
stomach? Especially poor accumulation will have to be dealt with, and
so clean replacements are sought. Abstract Algebra claims to be one of
these sorts of universal disciplines, but I do not see it this way.
Yes, the ring definition is fairly pristine, but then it goes broken
almost immediately by its own followers. Formalizing operators is
good, but then the formality must be followed.

- Tim

Arturo Magidin

unread,
Dec 17, 2010, 7:34:29 PM12/17/10
to
On Dec 17, 6:29 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> On Dec 17, 12:55 pm, Aatu Koskensilta <aatu.koskensi...@uta.fi> wrote:
>
> > Arturo Magidin <magi...@member.ams.org> writes:
> > > Also, perhaps I did not make myself clear: you completely, and
> > > utterly, misunderstood Bill Dubuque's comments. You did so because you
> > > are unwilling (or incapable) to look past the end of your nose, and
> > > think everything is about you.
>
> >   In all fairness, good old-fashioned incompetence is another plausible
> > explanation.
>
> > --
> > Aatu Koskensilta (aatu.koskensi...@uta.fi)
>
> > "Wovon man nicht sprechen kann, darüber muss man schweigen"
> >   - Ludwig Wittgenstein, Tractatus Logico-Philosophicus
>
> Well, here we have replies without a single falsification; merely
> hurling insult after insult. This is highly unmathematical behavior.

You were given all the necessary information to "falsify" your
assertions, as well as your fake history, long ago. You chose to
reject it either sight-unseen, or on the grounds that you were
slightly confused at one point and decided it was the fault of
thousands of mathematicians making things complicated so that *you*
wouldn't get them.

So don't complain now that nobody wastes the time trying to educate
you. You already exhibited ample "highly unmathematical behaviour",
and have proven beyond a doubt that you are unteachable. You refuse to
*think*, and think everything should be at the level of the pablum
that you are capable of understanding without work, and reject
anything that does not meet your low standards.

--
Arturo Magidin

Rotwang

unread,
Dec 17, 2010, 7:58:41 PM12/17/10
to
On 18/12/2010 00:29, Tim Golden BandTech.com wrote:
>
> [...]

Because it's pointless. The exact spot where you are wrong has been
pointed out many times before, for example in the post linked by Bill
earlier in this thread:

http://groups.google.com/group/sci.math/msg/f7817717c3a5c162

Or my own post from a few days earlier:

http://groups.google.com/group/sci.math/msg/a5ff65ad35503079

Or many posts by Arturo and others in the same thread. Your replies to
those posts made it perfectly clear that you simply hadn't made a good
faith effort to understand the answers you were given. Take for example
your reply to the post of mine:

http://groups.google.com/group/sci.math/msg/7d24b6c2301f7230

There you stated that

The only unique new element in A[X] is X, other than the elements
in A.

which is plainly false to everyone who has digested the simple
definition of A[X] given by Arturo and others. You also wrote

somehow you all are happy working with X as an unknowable.

despite my having given an explicit definition of X in the very material
you quoted (a standard definition that had already been given by several
others); I don't believe you even read my post before writing your
reply. The problem is that you simply aren't interested in any answer to
your complaint other than the (wrong) answer upon which you had already
decided before asking the question. People can and did tell you the
correct answer, but you just didn't listen. Luckily for mathematics,
this is a problem for nobody but you.

Marshall

unread,
Dec 17, 2010, 11:43:32 PM12/17/10
to
On Dec 17, 4:29 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:
>

> If I am so wrong then why not point out the exact spot where I am
> wrong, just as I do for you? This is such a basic part of discussion
> that is lacking here, and here on a mathematics group. I surmise that
> this is one exposure of human social behavior, and further that the
> ability of an academically trained person to challenge their own
> teachings is severely limited by the en masse belief, especially
> within a subject whose perfection is claimed to be beyond a doubt. Yet
> none are willing to even doubt.

Academically, I'm a total outsider: no PhD, no university affiliation,
no one cares whether I write any papers. I don't buy into any
claims of "perfection ... beyond a doubt." I don't actually know
very much math.

But even I can see you're a fruit loop.


Marshall

Marshall

unread,
Dec 17, 2010, 11:45:25 PM12/17/10
to
On Dec 17, 9:55 am, Aatu Koskensilta <aatu.koskensi...@uta.fi> wrote:

> Arturo Magidin <magi...@member.ams.org> writes:
> > Also, perhaps I did not make myself clear: you completely, and
> > utterly, misunderstood Bill Dubuque's comments. You did so because you
> > are unwilling (or incapable) to look past the end of your nose, and
> > think everything is about you.
>
>   In all fairness, good old-fashioned incompetence is another plausible
> explanation.

I don't quite see how, unless you mean the sort of incompetence that
leads one to believe one is right even when the rest of the world
lines
up to point out one is wrong.


Marshall

Brian Chandler

unread,
Dec 18, 2010, 12:58:51 AM12/18/10
to

Perhaps not literally "the rest of the world", but the original thread
was notable for the number of people who made completely independent
attempts to penetrate TG's confusion. They all failed, with amazing
consistency.

Brian Chandler

Pubkeybreaker

unread,
Dec 18, 2010, 10:03:20 AM12/18/10
to

Dunning and Kruger

Tim Golden BandTech.com

unread,
Dec 19, 2010, 3:38:52 PM12/19/10
to
On Dec 17, 7:58 pm, Rotwang <sg...@hotmail.co.uk> wrote:
> On 18/12/2010 00:29, Tim Golden BandTech.com wrote:
> Because it's pointless. The exact spot where you are wrong has been
> pointed out many times before, for example in the post linked by Bill
> earlier in this thread:
>
> http://groups.google.com/group/sci.math/msg/f7817717c3a5c162
>
> Or my own post from a few days earlier:
>
> http://groups.google.com/group/sci.math/msg/a5ff65ad35503079

Certainly I would rather argue your post with you than argue Dubuque's
with you.
In that terribly long thread you said:

"That's because the + and juxtaposition that appear
in (2) aren't really addition and multiplication; they are JUST
NOTATION, which are used to denote exactly the same element of R[X] as
the notation (1). "
in reference to
"P = (a0, a1, ... an, 0, 0, ...) (1)
P = a0 + a1 X + a2 X^2 + ... an X^n. (2)" .

This above statement is tantamount to admitting that there is a
conflict with this math. You suggest that removing this notation is a
fix to the analysis. In the paragraph following this statement you
say:
"The reason is that one can write expressions just like (2) in
which
the + and juxtaposition really do denote addition and multiplication,
albeit in the ring R[X], not R."

I agree with this latter statement, though it does conflict with the
first quote, and it seems that here is the junction to Jesse's claim
that X and a(n) are in the same set. The question then remains what is
left in that set, for we see that the real numbers are in that set,
and X and its integer exponents exist in that set, but as to what X
is, well, to find out we can remove the reals from this set, and
attempt to discover what is left.

The tuple notation that you use is consistent with my own high
dimensional interpretation of the polynomial construction as a
rotational multidimensional (cartesian) math.
I see that the freedom with which the tuple notation is used is quite
loose, as we can readily represent a multidimensional coordinate, a
polynomial within abstract algebra, or even a polysign value, all with
the same notation. This must mean that the tuple notation requires
qualification. That qualification lays back in the polynomial format,
and they are each considered equals within the subject as far as I can
tell.

To say that this is just notation, well, this is a copout, for all of
math can be regarded as 'just notation' so long as it is in a
printable format. That the operators themselves are notation I do
accept. Upon formalizing those operators within the notation then
conflicted usage does deserve attention, especially in an area of
mathematics that is regarded as pristine. Therefore it follows that
abstract algebra is not pristine, and that it is merely an attempt at
something whose next form we have yet to witness. This leaves the
subject open, as it should be left, and this means that criticisms of
the existing math should be of interest. Especially within the
academic system the notion that prior minds have validated a
mathematics and so the validation should not be resumed by every
individual is not satisfactory.

In our day we have compilers which operate upon strict type
information. I had an exchange with good old galathaea on this and I
do believe that the mathematicians typology is too loose. It is just
as I've written in my one liner: the ring definition, having been
constructed is nearly immediately broken, for the multiplication of a
variable defined as real is not compatible in product or in sum with a
value that is not real. If it is true that the set A[X] contains
singular elements, then what on earth are we doing with infinite
dimensional elements as a basis? These are not even elementary, and
the cartesian product is in usage here in two differing methods; one
for the operators, and one for the tuple. It is not true that
1.23 = ( 1.23, 0, 0, 0, ... )
for on the left we have a one dimensional value, whereas on the right
we have an infinite dimensional value. The meaningless zeros become
another vector of thought on this issue, but formalities will not hold
up to the dropped zeros. This was my first falling out with Arturo,
who spent much time carefully rendering a polynomial construction in
that past thread. As I recall Dubuque or some other smarty pants
admitted this. I do not believe, Rotwang, that you have taken your own
writing as seriously as I have here, though I admit that my remarks
here are different than in that old thread. That thread was too long
and I was tired, just as my message reads about retiring the thread.

I do scrutinize your statement on
a(0) + a(1) X + a(2) X X + ...
as 'simply notation' and that these sums and products are not actually
sums and products. I seriously doubt if any professor has ever
presented this detail to his students, for if they are not sum and
product operators, then what are they? Having gone to the trouble of
formally defining a sum and a product on a type then your own
statement is a falsification of the integrity of abstract algebra. You
have essentially admitted that the above form is invalid, and that
form is in standard usage. I am amazed that you cannot see such a
simple thing. This is the way it goes for humans: it is the simplest
things that we overlook. This is a great hope also; a hope that some
simple things remain to be discovered which will lead to a simpler
system, for the gyrations of the existent system are not
straightforward.
This idea that something straightforward could exist is repugnant to a
mathematician whose prowess comes in his ability to mimic complexity.
That we may be operating still upon false assumptions must become a
valid perspective for there to be fundamental gains.

- Tim

Arturo Magidin

unread,
Dec 19, 2010, 3:44:46 PM12/19/10
to
On Dec 19, 2:38 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> On Dec 17, 7:58 pm, Rotwang <sg...@hotmail.co.uk> wrote:
>
> > On 18/12/2010 00:29, Tim Golden BandTech.com wrote:
> > Because it's pointless. The exact spot where you are wrong has been
> > pointed out many times before, for example in the post linked by Bill
> > earlier in this thread:
>
> >http://groups.google.com/group/sci.math/msg/f7817717c3a5c162
>
> > Or my own post from a few days earlier:
>
> >http://groups.google.com/group/sci.math/msg/a5ff65ad35503079
>
> Certainly I would rather argue your post with you than argue Dubuque's
> with you.
> In that terribly long thread you said:
>
>    "That's because the + and juxtaposition that appear
> in (2) aren't really addition and multiplication; they are JUST
> NOTATION, which are used to denote exactly the same element of R[X] as
> the notation (1). "
> in reference to
>   "P = (a0, a1, ... an, 0, 0, ...)        (1)
>    P = a0 + a1 X + a2 X^2 + ... an X^n.   (2)" .
>
> This above statement is tantamount to admitting that there is a
> conflict with this math.

No. It's a statement that says that the symbols are notational and
purely notational.

The only conflict is between these simple facts and your prejudices
and ignorance.

--
Arturo Magidin

Tim Golden BandTech.com

unread,
Dec 19, 2010, 5:04:24 PM12/19/10
to

Wow. I don't think you folks are actually able to follow a logical
argument.
You have been assimilated. It's the same old, but here Arturo I
believe we could take a third party with no knowledge of the subject
of even of mathematics of this type; so long as they have integrity
they will see the flaw in your group's argumentation. Your statement
on notation suggests that notation can be as loose as we wish, and
this is a highly unmathematical statement. Nay, that is the
pre-abstract-algebra notion that you are expressing. This is the first
time that operators are formally defined. They have a familiar feel,
but the definition, which is a careful notation, is very strict. It
requires that each component in a sum or in a product belong to the
same set, and that the result is likewise in that same set, as a
singular element of that set. You ignore the consideration of


1.23 = ( 1.23, 0, 0, 0, ... )

just as in your original recitation, where you loosely allow this, but
then formalize to the RHS. But the LHS is a valid representation
within the polynomial form, for if it weren't then the reals cannot be
admitted into the set A[X]. If this is acceptable to you then much of
what came before is totally invalid, as far as argumentation goes. You
see, a fixup in one spot brings us back to whack-a-mole, which is an
indication of a poor construction. We could travel the loop, and it
likely is wise to touch upon the different contexts, but we should not
endlessly go about. Instead we should focus on a strict disagreement,
so that a clear conflict is exposed. To loosen the argument out to a
claim that mathematical notation does not really matter is about as
weak an argument as is possible, especially given the spirit of the
ring definition.

Arturo, does the notation
a(1) X
have meaning? Is it an operator? What is this thing if as Rotwang
states it is not a product, oh, but this depends upon which statement
from Rotwang we are working with. He produced an inherently conflicted
argument, which I addressed, and which you seem to be in support of,
without ever discussing an ounce of the content. Let's face it, if you
had a clear falsification we should have seen it by now. Instead you
will hinge your latest argument on freedom of notation, with no clear
limit placed upon that freedom. This is an absurd stance and I will
respect if you choose to withdraw this statement.

- Tim

Arturo Magidin

unread,
Dec 19, 2010, 6:23:10 PM12/19/10
to
On Dec 19, 4:04 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>

It's been clear that you are incapable of recognizing a logical
argument, and that what you post is nothing but an extended Argument
From Personal Ignorance mixed in with an Argument From Personal
Incredulity.

Kindly do not project your own ignorance on your betters, Timmy.

> This is an absurd stance

I agree: You take nothing but absurd stands.

> and I will
> respect if you choose to withdraw this statement.

There is nothing for me to withdraw, and you demonstrate that you
neither understand nor care to understand anything that you are told.

You are a liar, when you claim that I never "discussed an ounce of the
content", since I did so way back when, when you demonstrated that you
have neither the interest nor the capacity for rational discussion.
Your "presentation" is false, incoherent, and derives that incoherence
not from incoherence in the original, but from the vacuum between your
ears.

So you demonstrate that in addition to lacking intelligence, you also
lack integrity and honesty.

Again, stick it up your ass; it might keep your head company.

And try not to ever again presume to state what I did or did not do.
You lack the integrity, the honesty, or the intelligence to even
attempt it.

--
Arturo Magidin

Arturo Magidin

unread,
Dec 19, 2010, 6:26:56 PM12/19/10
to
On Dec 19, 4:04 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> Let's face it, if you
> had a clear falsification we should have seen it by now.

Let's face it: WE DID. You ignored it, and pretended it did not exist.
Then you repeat that there has been "no falsification."

The entire thing was explained to you from the ground up. You ignored
that explanation and rejected it sight unseen.

You will accept nothing short of an acclamation of how clever you are.
Guess what? You aren't: you are just a big ignoramus who decided long
ago that he is so smart that anything he does not understand must be
nonsense.

You are nothing but an idiot and a bad liar, Timmy.
--
Arturo Magidin

Tim Golden BandTech.com

unread,
Dec 20, 2010, 11:59:39 AM12/20/10
to
On Dec 19, 6:23 pm, Arturo Magidin <magi...@member.ams.org> wrote:
> There is nothing for me to withdraw, and you demonstrate that you
> neither understand nor care to understand anything that you are told.

Ah, but I do. I take deep interest in this subject, and see a problem
with it. This is a sincere belief that I express. I come from a
software background; typesafe and structured. Compilers will not
accept arguments of a type that do not match their definition.
Operators are much the same, though they are not generalized fully,
even in C++. I recently learned that Haskell has this ability, but
this is beside the point, for we can construct the operators with
functions. In AA this attempt will fail, for X is not a type. Most
importantly, if we wish to build a 'real coefficient', where coefficient
implies product, then according to AA we must have

real ProductOperator( real a, real b )
{ return a * b ; }

The idea that a product such as
Polynomial ProductOperator( real a, Polynomial b )
{ return a * b ; }
exists is in direct contradiction to the ring operator.
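
For comparison, the reading given later in this thread first promotes the
real operand to a constant polynomial, so that both factors lie in one set
before the ring product is applied; a minimal sketch, with illustrative
names only:

# Sketch only: the real a enters the product after being embedded as the
# constant polynomial (a, 0, 0, ...); the product is then taken in R[X].
def embed(a):
    return [a]

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def scale(a, p):
    return poly_mul(embed(a), p)

print(scale(2.0, [1.0, 3.0]))  # [2.0, 6.0], i.e. 2 + 6 X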

Your own ease of taking notation loosely while developing such a
technical subject as abstract algebra is bewildering to me. The
extensions of my way of thinking do extend into other math areas. For
instance the claim that the real numbers are a subset of the complex
numbers is readily accepted by most, and I believe this is another
context within this topic that was discussed back in that 500 message
long thread. For instance
1.23 = ( 1.23, 0 ).
This above is a mapping that must be explicitly stated, and the fact
that plenty of other mappings exist means that it is not the only
mapping. I admit that this is the only mapping that preserves the
operators of sum and product through the ring definition, but this
does not mean that there is an exact equivalence of the two
representations. If we regard each component as carrying one chunk of
information for the magnitude, plus one bit for sign, then we see that
the two-tuple carries twice the data of the one-tuple. The freedom
with which the modern mathematician changes dimensions so readily is a
sort of carryover of the development of the real number from the
natural number on up, which then allows the development of the complex
number as if it were next in line, especially within the subsetting
structure. Is a one dimensional space a subset of a two dimensional
space? Yes, I would accept that it can be, but as to which one
dimension we are discussing of the two, or which projection of
numerous other possibilities may arise, this must be explicitly stated
if we are to go further.
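
For reference, the operator preservation mentioned above is the direct
computation
( a, 0 ) + ( b, 0 ) = ( a + b, 0 )
( a, 0 )( b, 0 ) = ( a b - 0*0, a*0 + 0*b ) = ( a b, 0 )
using the componentwise sum and the usual complex product rule.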

Next, when we declare a variable x to be real then we clearly did not
declare x to be complex. We are not free to assign
1.0 + 2.0 i
to x, though some seem to be happy to consider x to be complex as well
as being real. This bit of set theory must be considered seriously,
and when types of variables are specified with respect to operators
then we might consider the problem of a real x multiplied by a complex
z
x z
which to most (including me) does have a clear resolution, but the
idea that it may have several resolutions is the more general
thinking, and the thinking that is more consistent with the spirit of
the ring definition, though this expression xz is outside the ring
definition. While the standard interpretation would yield a concrete
instance
x = 2.0 , z = 3.0 + 4.0 i
xz = 6.0 + 8.0 i
we can likewise consider
xz = 6
so that the least common denominator is upheld. This is more apparent
within polysign, where we could consider the product of a P5 and a P6
value, which are merely siblings of the R and C(which are merely P2
and P3 in polysign). The idea that
(P5)(P6) = P6
is not an obvious statement, yet the math will work out consistently
with the above instance of a complex and a real in product. Here of
course is my own branching off point to considering physical meaning
of the math, and interestingly in keeping with the ring definition we
can simply deny that these products exist. What this means is that
within the representation unlike types need not resolve and so admit
their own form of multidimensional behavior. This principle can be
used within polysign or within ordinary mathematics, but it would mean
that the product xz(above) simply holds its form; becoming a three
dimensional structured entity. Nicely this marries with emergent
spacetime within the polysign progression taken literally as a
product:
P1 P2 P3 | P4 ...
where | indicates a behavioral breakpoint, though the upper terms
remain ring behaved. The most analogous thing to this natural
progression that I could declare in familiar terms would be
t x z
where t represents time, x represents a real value, and z represents a
complex value. This is a more refined spacetime representation, for it
inherently carries structure and the additional geometry that may make
electromagnetics expressible more naturally; those properties being
inherent to spacetime itself.

This is a bit tangential, but the last expression
t x z
can be taken as a product notation which does not resolve itself,
consistent with the ring definition, which requires like types to
condense to a single element. Really we are discussing computational
activity and it becomes apparent that this new structured entity
T = ( t, x, z ) (the tatrix; here a T3 tatrix)
does now allow a computable product
(t1,x1,z1)(t2,x2,z2) = (t3,x3,z3)
such that
t1 t2 = t3, x1 x2 = x3, z1 z2 = z3.
However the physical meaning of this, while products are common
knowledge within the classical force equations, is not automatically
satisfactory using the metrics that are most familiar. No,
accelerations are second derivatives, and here we have only a pure
geometry, whose products seem deprived of physics.

I believe that it is true that the failings of the existing math run
much deeper than AA. The cartesian product is likewise a weak and
perhaps misused constructor. The most obvious instance is to consider
that AA uses a cartesian product to declare its operators while the
cartesian product is likewise used to build higher dimensional spaces.
Let's face it, the elements of the operators are from the same
singular space.

The higher dimensional space can be built through other means e.g.
within polysign, and then the spatial products like
P2 P3
are naturally formed since they are unique types which simply will not
resolve. So I see that modern mathematics has run amuck at a very
fundamental level. As humans continue to treat the real number as
fundamental the sight which they have is lacking, as can be seen in
our recent discussion of projections, whose definition PP = P is
absurd. These effects no doubt are confusions built out of false
assumptions, such as the necessity of orthogonality, but more remains
to be exposed.

- Tim

Arturo Magidin

unread,
Dec 20, 2010, 12:09:45 PM12/20/10
to
On Dec 20, 10:59 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> On Dec 19, 6:23 pm, Arturo Magidin <magi...@member.ams.org> wrote:
>
> > There is nothing for me to withdraw, and you demonstrate that you
> > neither understand nor care to understand anything that you are told.
>
> Ah, but I do. I take deep interest in this subject

Oh, but you don't. You have your prejudice, arrived at through
ignorance and stubbornness. Your only "interest" is in having other
people praise you for it.

> and see a problem with it.

There *is* a problem. It's been pointed out several times.

The problem, Dear Boy, lies entirely within you.

The rest is just you trying to justify your inexcusable ignorance and
prejudices. Probably to yourself. And failing miserably.

--
Arturo Magidin

Rotwang

unread,
Dec 20, 2010, 4:38:44 PM12/20/10
to
On 19/12/2010 20:38, Tim Golden BandTech.com wrote:
> On Dec 17, 7:58 pm, Rotwang<sg...@hotmail.co.uk> wrote:
>> On 18/12/2010 00:29, Tim Golden BandTech.com wrote:
>> Because it's pointless. The exact spot where you are wrong has been
>> pointed out many times before, for example in the post linked by Bill
>> earlier in this thread:
>>
>> http://groups.google.com/group/sci.math/msg/f7817717c3a5c162
>>
>> Or my own post from a few days earlier:
>>
>> http://groups.google.com/group/sci.math/msg/a5ff65ad35503079
>
> Certainly I would rather argue your post with you than argue Dubuque's
> with you.
> In that terribly long thread you said:
>
> "That's because the + and juxtaposition that appear
> in (2) aren't really addition and multiplication; they are JUST
> NOTATION, which are used to denote exactly the same element of R[X] as
> the notation (1). "
> in reference to
> "P = (a0, a1, ... an, 0, 0, ...) (1)
> P = a0 + a1 X + a2 X^2 + ... an X^n. (2)" .
>
> This above statement is tantamount to admitting that there is a
> conflict with this math.

No it isn't.


> You suggest that removing this notation is a
> fix to the analysis. In the paragraph following this statement you
> say:
> "The reason is that one can write expressions just like (2) in
> which
> the + and juxtaposition really do denote addition and multiplication,
> albeit in the ring R[X], not R."
>
> I agree with this latter statement, though it does conflict with the
> first quote,

No it doesn't. The fact that an expression P = a0 + a1 X + ... an X^n
may either be taken as notation for P = (a0, a1, ..., an, 0, 0, ...) or
shorthand for P = /a0/ & /a1/#X & /a2/#X#X & ... /an/#X#X#...#X (using
Arturo's notation) leads to no conflict, because both non-shorthand
expressions evaluate to the same element P of R[X], as is very easy to
prove.


> and it seems that here is the junction to Jesse's claim
> that X and a(n) are in the same set. The question then remains what is
> left in that set, for we see that the real numbers are in that set,
> and X and its integer exponents exist in that set, but as to what X
> is, well, to find out we can remove the reals from this set, and
> attempt to discover what is left.

No, to find out what X is you need only look at the definition of X that
you have been given repeatedly.


> The tuple notation that you use is consistent with my own high
> dimensional interpretation of the polynomial construction as a
> rotational multidimensional (cartesian) math.
> I see that the freedom with which the tuple notation is used is quite
> loose, as we can readily represent a multidimensional coordinate, a
> polynomial within abstract algebra, or even a polysign value, all with
> the same notation. This must mean that the tuple notation requires
> qualification. That qualification lays back in the polynomial format,
> and they are each considered equals within the subject as far as I can
> tell.
>
> To say that this is just notation, well, this is a copout, for all of
> math can be regarded as 'just notation' so long as it is in a
> printable format.

Nonsense. How is the fundamental theorem of algebra 'just notation', for
example?


> That the operators themselves are notation I do
> accept. Upon formalizing those operators within the notation then
> conflicted usage does deserve attention,

There is no "conflicted usage", for the reason given above.


> especially in an area of
> mathematics that is regarded as pristine. Therefore it follows that
> abstract algebra is not pristine, and that it is merely an attempt at
> something whose next form we have yet to witness. This leaves the
> subject open, as it should be left, and this means that criticisms of
> the existing math should be of interest. Especially within the
> academic system the notion that prior minds have validated a
> mathematics and so the validation should not be resumed by every
> individual is not satisfactory.
>
> In our day we have compilers which operate upon strict type
> information. I had an exchange with good old galathaea on this and I
> do believe that the mathematicians typology is too loose. It is just
> as I've written in my one liner: the ring definition, having been
> constructed is nearly immediately broken, for the multiplication of a
> variable defined as real is not compatible in product or in sum with a
> value that is not real. If it is true that the set A[X] contains
> singular elements,

What is a "singular element"? Who claimed that the set A[X] contains
singular elements? If A[X] does not contain singular elements, how is
this a problem?


> then what on earth are we doing with infinite
> dimensional elements as a basis? These are not even elementary, and
> the cartesian product is in usage here in two differing methods; one
> for the operators, and one for the tuple. It is not true that
> 1.23 = ( 1.23, 0, 0, 0, ... )

Who claimed that 1.23 = (1.23, 0, 0, 0, ...)? How is the fact that this
is false a problem?


> for on the left we have a one dimensional value, whereas on the right
> we have an infinite dimensional value. The meaningless zeros become
> another vector of thought on this issue, but formalities will not hold
> up to the dropped zeros. This was my first falling out with Arturo,
> who spent much time carefully rendering a polynomial construction in
> that past thread. As I recall Dubuque or some other smarty pants
> admitted this. I do not believe, Rotwang, that you have taken your own
> writing as seriously as I have here, though I admit that my remarks
> here are different than in that old thread. That thread was too long
> and I was tired, just as my message reads about retiring the thread.
>
> I do scrutinize your statement on
> a(0) + a(1) X + a(2) X X + ...
> as 'simply notation' and that these sums and products are not actually
> sums and products. I seriously doubt if any professor has ever
> presented this detail to his students,

This detail has been presented to you, many times. Upon picking up an
abstract algebra book from my shelf and looking at the definition of
polynomial, I see that this is explained perfectly well. It's also
explained in the Wikipedia article on "Polynomial":

A polynomial f in one variable X over a ring R is defined to be a
formal expression of the form

f = a_n X^n + a_{n - 1} X^{n - 1} + ... + a_1 X^1 + a_0 X^0

where n is a natural number, the coefficients a_0, ..., a_n are
elements of R, and X is a formal symbol, whose powers X^i are just
placeholders for the corresponding coefficients a_i, so that the given
formal expression is just a way to encode the sequence (a_0, a_1, ...),
where there is an n such that a_i = 0 for all i > n.

[...]

These polynomials can be added by simply adding corresponding
coefficients (the rule for extending by terms with zero coefficients
can be used to make sure such coefficients exist). Thus each
polynomial is actually equal to the sum of the terms used in its
formal expression, if such a term a_i X^i is interpreted as a polynomial
that has zero coefficients at all powers of X other than X^i.

As you can see, the above quote clearly distinguishes between the sum of
polynomials and the "+" which appears in the formal expression defining
f; indeed it defines the former /after/ already using the latter (and
then goes on to explain why no ambiguity arises from using the same
symbol for both). This is completely standard. So why on Earth do you
doubt that any professor has ever presented this detail to his students?


> for if they are not sum and
> product operators, then what are they?

They are NOTATION. The reason mathematicians use this notation is because
they find it simple and intuitive. Clearly, though, you do not. So, if
you wish to understand the polynomial ring construction then there's a
very simple solution to the problems you have with this notation: don't
use it. You don't need to use the notation to define R[X] at all, nor do
you need it to prove anything about R[X]. Once again, the definition of
R[X] is as follows: R[X] is the set of all sequences
(a_i | i in N) such that each a_i is in R and only a finite number of
a_i are different from 0. Given two polynomials (a_i | i in N) and
(b_i | i in N) their sum is defined by

(a_i | i in N)&(b_i | i in N) = (a_i + b_i | i in N)

and their product is defined by

(a_i | i in N)#(b_i | i in N) = (sum_{j = 0}^i a_j*b_{i - j} | i in N).

I claim that, with the above definitions, (R[X], &, #) is a ring. Do you
disagree? If so, which part of the definition of a ring does it fail to
satisfy? If not, what is your objection? Note that your objection should
refer only to the construction I give immediately above, since that
construction is exactly what is meant by R[X]. Anything else is just
notation that mathematicians use to talk about R[X].
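
For a concrete instance of these formulas, take the sequences
(1, 2, 0, 0, ...) and (0, 3, 4, 0, 0, ...). The sum is computed entrywise,

(1, 2, 0, ...)&(0, 3, 4, 0, ...) = (1, 5, 4, 0, ...),

and the product formula gives c_0 = 1*0 = 0, c_1 = 1*3 + 2*0 = 3,
c_2 = 1*4 + 2*3 + 0*0 = 10, c_3 = 1*0 + 2*4 + 0*3 + 0*0 = 8, with
c_i = 0 for all i > 3, so

(1, 2, 0, ...)#(0, 3, 4, 0, ...) = (0, 3, 10, 8, 0, ...).

Both results are again sequences with only finitely many nonzero entries,
so the operations land back inside R[X].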

Rotwang

unread,
Dec 20, 2010, 5:15:28 PM12/20/10
to
On 19/12/2010 22:04, Tim Golden BandTech.com wrote:
>
> [...]
>
> Arturo, does the notation
> a(1) X
> have meaning?

It's truly bizarre that you ask this after the number of times you've
been told exactly what a(1) X means. It means the polynomial (a_i | i in
N), where

a_i = a(1) if i = 1, and
a_i = 0 if i =/= 1.


> Is it an operator?

No.


> What is this thing

It is the polynomial whose coefficients a_i are given by

a_i = a(1) if i = 1, and
a_i = 0 if i =/= 1.


> if as Rotwang
> states it is not a product, oh, but this depends upon which statement
> from Rotwang we are working with.

My two statements do not contradict one another. It happens that a(1) X
is equal to the product of the polynomial /a(1)/ with the polynomial X.
Recall that /a(1)/ is the polynomial whose coefficients b_i are given by

b_i = a(1) if i = 0, and
b_i = 0 if i =/= 0

and that X is the polynomial whose coefficients c_i are given by

c_i = 1 if i = 1, and
c_i = 0 if i =/= 1.

In other words, using the same notation introduced by Arturo,

a(1) X = /a(1)/ # X.
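
This equality can be checked directly from the product formula: the i-th
coefficient of /a(1)/ # X is sum_{j = 0}^{i} b_j * c_{i - j}, which works
out to b_0 * c_1 = a(1) when i = 1 and to 0 for every other i, i.e. exactly
the coefficients listed for a(1) X at the top of this post.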

See, a(1) X does not denote a product, it denotes a single polynomial.
However, it turns out to be the case that the polynomial it denotes is
equal to the product of two other polynomials. If that is too
complicated for you to understand, perhaps the following analogy will
prove useful:

6 = 2*3

Now, you see the character "6" on the left? It is not an operator. It
does not denote a product. Instead, it denotes a number (namely the
number 6). On the other hand, the "*" on the right denotes a product, in
this case the product of the two numbers denoted by the characters "2"
and "3". Although the right hand side of the equation involves a
product, and the left hand side does not, it is nonetheless true that
both expressions evaluate to the same number. Does that make sense?

quasi

unread,
Dec 20, 2010, 5:46:26 PM12/20/10
to
On Mon, 20 Dec 2010 21:38:44 +0000, Rotwang <sg...@hotmail.co.uk>
wrote:

Well done!

Of the many valiant attempts to break through Tim Golden's failure to
comprehend, I think the above is the clearest yet. I hope Tim puts the
same effort into trying to understand your explanation as you put into
assembling it.

Based on past evidence, Tim will probably close his mind to the ideas
you've presented, but who knows, maybe the above explanation will
break through. In any case, even if not, the clarity of your
explanation will hopefully be sufficient to defend some "innocents"
against regarding similar non-dilemmas as signs that modern math is
somehow fatally flawed.

quasi

Rotwang

unread,
Dec 20, 2010, 6:22:50 PM12/20/10
to
On 20/12/2010 22:46, quasi wrote:
>
> [...]

>
> Well done!
>
> Of the many valiant attempts to break through Tim Golden's failure to
> comprehend, I think the above is the clearest yet. I hope Tim puts the
> same effort into trying to understand your explanation as you put into
> assembling it.

Thanks! Though I can't take too much credit - Tim has already been given
explanations that were at least as detailed and lucid, especially those
of Arturo (from which I took the &, # and // notation). It didn't work
then and I doubt it will work now, but it's good to know that someone
appreciates the effort.

Tim Golden BandTech.com

unread,
Dec 21, 2010, 12:37:14 PM12/21/10
to

That's a fair analysis above here. I have not gone back to see what my
response was to Arturo's presentation, which you have transcribed, but
needless, without any analysis we see the presentation of new formal
operators, which is exactly what my analysis describes as one way out
of the situation. I thank you for providing a confirming construction,
but see that this detail is left out of all mainstream approaches to
the subject, and if it were inserted, then the motivations whereby it
becomes necessary are only more clouded, for you have yet to formally
construct these operators, whose definition will be in contradiction
to the ring definition.

You see, the details which make you a strong mathematician are those
that are difficult to follow. These are your training, and when such a
simple argument presents itself the ability to train upon such a
slender thing is very difficult. I suppose it is difficult for all of
us, but the modern software developer is actually capable of strict
interpretations. There is no compiler analyzing the polynomial
construction, but if there were then it would be barfing on your
undefined operators, and previously to that it would be barfing on the
attempt to apply the formal ring operators to the polynomial
construction.

The easiest way out is to construct finite spaces, instead of the
infinite progression that will hang any modern computer. These finite
spaces can wrap (modulo behaved) and what we will witness is that a
finite polynomial such as


1.0 + 2.0 X + 3.0 X X

can be interpreted. The next stage is to admit that the real number
already carries modulo two behavior within its sign, and then witness
the natural unfolding of this construction due to the real behavior
- 1 + 1 = 0
which can be scaled or simply stated
- x + x = 0
whose generalization to a modulo three construction will be
- x + x * x = 0
where * is a new sign. This modulo three construction happens to be
the complex numbers; falling out not from an arbitrarily chosen
function such as
X X + 1 = 0
which essentially imposes the modulo behavior. With the generalization
of sign the complex number is more fundamental, and oddly enough the
polysign construction is quite adjacent to the polluted mathematics of
abstract algebra, though it is much more pristine in terms of abiding
by the ring definition. Further the modulo one numbers match the
behaviors of time. Further than that support for emergent spacetime
from pure arithmetic is possible, and that structure


P1 P2 P3 | P4 ...

carries a natural breakpoint(product distance conservation) and
provides an inherent geometry that appears to be consistent with
electromagnetism. These details mean so much more to me than a ring
definition, and I see that products within physics carry so little
correspondence to the AA interpretation, and yet the polysign
construction runs fairly parallel with the AA construction. I can't
help but suppose that we are missing something fairly fundamental that
will aid both sides and form a bridge between the gap.

Nicely Rotwang has rebridged exactly where Arturo did and so we have
two supporters of traditional AA(Arturo and Rotwang) providing
corrections to the topic while being in complete denial of what they
have done. Note how little address this statement will receive in
their responses.

- Tim

Rotwang

unread,
Dec 21, 2010, 6:58:26 PM12/21/10
to
Rotwang wrote:
>>> Tim Golden BandTech.com wrote:
>>>>
>>>> [...]
>>>>
>>>> I agree with this latter statement, though it does conflict with the
>>>> first quote,
>>>
>>> No it doesn't. The fact that an expression P = a0 + a1 X + ... an X^n
>>> may either be taken as notation for P = (a0, a1, ..., an, 0, 0, ...) or
>>> shorthand for P = /a0/ & /a1/#X & /a2/#X#X & ... /an/#X#X#...#X (using
>>> Arturo's notation) leads to no conflict, because both non-shorthand
>>> expressions evaluate to the same element P of R[X], as is very easy to
>>> prove.

Do you have a response to this, Tim?


>>>> and it seems that here is the junction to Jesse's claim
>>>> that X and a(n) are in the same set. The question then remains what is
>>>> left in that set, for we see that the real numbers are in that set,
>>>> and X and its integer exponents exist in that set, but as to what X
>>>> is, well, to find out we can remove the reals from this set, and
>>>> attempt to discover what is left.
>>>
>>> No, to find out what X is you need only look at the definition of X that
>>> you have been given repeatedly.

How about this?


>>> [...]


>>>
>>>> In our day we have compilers which operate upon strict type
>>>> information. I had an exchange with good old galathaea on this and I
>>>> do believe that the mathematicians typology is too loose. It is just
>>>> as I've written in my one liner: the ring definition, having been
>>>> constructed is nearly immediately broken, for the multiplication of a
>>>> variable defined as real is not compatible in product or in sum with a
>>>> value that is not real. If it is true that the set A[X] contains
>>>> singular elements,
>>>
>>> What is a "singular element"? Who claimed that the set A[X] contains
>>> singular elements? If A[X] does not contain singular elements, how is
>>> this a problem?

Do you have an answer to this question?


>>>> then what on earth are we doing with infinite
>>>> dimensional elements as a basis? These are not even elementary, and
>>>> the cartesian product is in usage here in two differing methods; one
>>>> for the operators, and one for the tuple. It is not true that
>>>> 1.23 = ( 1.23, 0, 0, 0, ... )
>>>
>>> Who claimed that 1.23 = (1.23, 0, 0, 0, ...)? How is the fact that this
>>> is false a problem?

Or this one?


>>> [...]

Or this one?


>>>> for if they are not sum and
>>>> product operators, then what are they?
>>>
>>> They are NOTATION. The reason mathematicians use this notation is because
>>> they find it simple and intuitive. Clearly, though, you do not. So, if
>>> you wish to understand the polynomial ring construction then there's a
>>> very simple solution to the problems you have with this notation: don't
>>> use it. You don't need to use the notation to define R[X] at all, nor do
>>> you need it to prove anything about R[X]. Once again, the definition of
>>> R[X] is as follows: R[X] is the set of all sequences
>>> (a_i | i in N) such that each a_i is in R and only a finite number of
>>> a_i are different from 0. Given two polynomials (a_i | i in N) and
>>> (b_i | i in N) their sum is defined by
>>>
>>> (a_i | i in N)&(b_i | i in N) = (a_i + b_i | i in N)
>>>
>>> and their product is defined by
>>>
>>> (a_i | i in N)#(b_i | i in N) = (sum_{j = 0}^i a_j*b_{i - j} | i in N).
>>>
>>> I claim that, with the above definitions, (R[X], &, #) is a ring. Do you
>>> disagree? If so, which part of the definition of a ring does it fail to
>>> satisfy? If not, what is your objection?

Or this one?


Tim Golden BandTech.com wrote:
> On Dec 20, 6:22 pm, Rotwang <sg...@hotmail.co.uk> wrote:
>

> [...]


>
> That's a fair analysis above here. I have not gone back to see what my
> response was to Arturo's presentation, which you have transcribed, but
> needless, without any analysis we see the presentation of new formal
> operators,

"New" to whom? The exact same formal operators were defined for your
benefit early in that thread from last June. They had already been
defined elsewhere decades ago.


> which is exactly what my analysis describes as one way out
> of the situation. I thank you for providing a confirming construction,
> but see that this detail is left out of all mainstream approaches to
> the subject,

The first mainstream account of the subject I checked, namely
Cameron's "Introduction to Algebra", gave the exact same construction
that I gave in this thread. So it's absolutely false that this detail
is left out of all mainstream approaches to the subject. What grounds
did you have for believing otherwise?


> and if it were inserted, then the motivations whereby it
> becomes necessary are only more clouded, for you have yet to formally
> construct these operators, whose definition will be in contradiction
> to the ring definition.

I asked you which part of the ring definition the construction I gave
failed to satisfy. You didn't answer. How exactly will the operators'
definition be in contradiction with the ring definition?


> You see, the details which make you a strong mathematician are those
> that are difficult to follow. These are your training, and when such a
> simple argument presents itself the ability to train upon such a
> slender thing is very difficult. I suppose it is difficult for all of
> us, but the modern software developer is actually capable of strict
> interpretations. There is no compiler analyzing the polynomial
> construction, but if there were then if would be barfing on your
> undefined operators,

What "undefined operators"?


> and previously to that it would be barfing on the
> attempt to apply the formal ring operators to the polynomial
> construction.

Just for you, here is an implementation of the polynomial ring Z[X] as
a class in Python:

class poly:
    def __init__(self, coeffs):
        # coeffs is the list of coefficients [a_0, a_1, ..., a_n]
        if not isinstance(coeffs, list):
            print "no"
        if not all([isinstance(a, int) or isinstance(a, long) for a in coeffs]):
            print "no"
        else:
            self.coeffs = coeffs[:]
            # strip trailing zeros so that deg is the honest degree
            while len(self.coeffs) > 0 and self.coeffs[-1] == 0:
                self.coeffs = self.coeffs[:-1]
            self.deg = len(self.coeffs) - 1

    def __call__(self, n):
        # the coefficient of X^n; zero beyond the stored list
        return self.coeffs[n] if len(self.coeffs) > n else 0

    def __eq__(self, other):
        return all([self(n) == other(n) for n in range(max(self.deg, other.deg) + 1)])

    def __repr__(self):
        out = "("
        for n in range(self.deg + 1):
            out += str(self(n)) + ", "
        out += "0, ...)"
        return out

    def __str__(self):
        if self.deg == -1:
            return "0"
        else:
            out = ""
            for n in range(self.deg + 1):
                if self(n) != 0:
                    if n == 0:
                        out += str(self(n))
                    else:
                        out += str(self(n)) + " " if self(n) != 1 else ""
                        if n == 1:
                            out += "X"
                        else:
                            out += "X^" + str(n)
                    out += " + "
            return out[:-3]

    def __add__(self, other):
        # coefficientwise sum; an integer is promoted to a constant polynomial
        if isinstance(other, poly):
            return poly([self(n) + other(n) for n in range(max(self.deg, other.deg) + 1)])
        elif isinstance(other, int) or isinstance(other, long):
            return self + poly([other])

    def __radd__(self, other):
        return self + other

    def __mul__(self, other):
        # convolution product: the X^n coefficient is sum_m self(m) * other(n - m)
        if isinstance(other, poly):
            out = []
            for n in range(self.deg + other.deg + 1):
                out.append(sum([self(m) * other(n - m) for m in range(n + 1)]))
            return poly(out)
        elif isinstance(other, int) or isinstance(other, long):
            return self * poly([other])

    def __rmul__(self, other):
        return self * other

    def __pow__(self, n):
        out = poly([1])
        for i in range(n):
            out *= self
        return out


Note 1: although only a finite number of coefficients are stored by
the computer for each polynomial, the __call__() method makes each
poly instance into a function which is defined for /all/ natural
numbers (at least, all those that your computer can handle). Since
integer-valued sequences are exactly the same thing as integer-valued
functions of the natural numbers, this class (assuming no coding
mistakes on my part) really does implement the definition I gave
above, at least insofar as the interpreter really implements integer
arithmetic. Note 2: As with the notational shorthand I mentioned
before, if you attempt to multiply an integer with or add an integer
to a polynomial, the relevant method will replace the integer n with
the corresponding polynomial /n/. Let's try it out (the following is
pasted from IDLE):

>>> X = poly([0,1]) # This defines X as the polynomial (0, 1, 0, 0, ...)
>>> p = 1 + 2*X
>>> q = 3*X + 4*X**2
>>> p
(1, 2, 0, ...)
>>> q
(0, 3, 4, 0, ...)
>>> p + q
(1, 5, 4, 0, ...)
>>> p * q
(0, 3, 10, 8, 0, ...)
>>> p + q == 1 + 5*X + 4*X**2
True
>>> p + q == 1 + 5*X + 4*X**2 + X**3
False


Huh. How about that. No sign of the interpreter "barfing"; in fact it
works just how it's supposed to. Why not give it a try, to see for
yourself whether it satisfies the ring axioms?

Note 3, for people other than Tim: if for some strange reason you wish
to test the above code, I included a __str__() method to display
polynomials in the standard format that's easier to read but
apparently causes Tim so much confusion. For example, if we ask the
intepreter to print p * q we get this:

>>> print p * q
3 X + 10 X^2 + 8 X^3


> [...]


>
> Nicely Rotwang has rebridged exactly where Arturo did and so we have
> two supporters of traditional AA(Arturo and Rotwang) providing
> corrections to the topic while being in complete denial of what they
> have done.

We provided no "corrections"; that the definitions we gave are
standard may be seen in any number of sources, for example Cameron's
book.


> Note how little address this statement will receive in
> their responses.

Note the many points in my earlier post which you failed to address.
How about doing so next time?

Jesse F. Hughes

unread,
Dec 21, 2010, 7:42:35 PM12/21/10
to
Rotwang <sg...@hotmail.co.uk> writes:

> Tim Golden BandTech.com wrote:
>> On Dec 20, 6:22 pm, Rotwang <sg...@hotmail.co.uk> wrote:
>>
>> [...]
>>
>> That's a fair analysis above here. I have not gone back to see what my
>> response was to Arturo's presentation, which you have transcribed, but
>> needless, without any analysis we see the presentation of new formal
>> operators,
>
> "New" to whom? The exact same formal operators were defined for your
> benefit early in that thread from last June.

A year ago last June, I think.

--
Jesse F. Hughes
-- A lesson in meta-honesty --
Baba: Thanks for being honest.
Quincy (age 7): I won't be honest next time. And that's more honesty.

Rotwang

unread,
Dec 22, 2010, 5:08:34 AM12/22/10
to
On Dec 22, 12:42 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
> Rotwang <sg...@hotmail.co.uk> writes:
> > Tim Golden BandTech.com wrote:
> >> On Dec 20, 6:22 pm, Rotwang <sg...@hotmail.co.uk> wrote:
>
> >> [...]
>
> >> That's a fair analysis above here. I have not gone back to see what my
> >> response was to Arturo's presentation, which you have transcribed, but
> >> needless, without any analysis we see the presentation of new formal
> >> operators,
>
> > "New" to whom? The exact same formal operators were defined for your
> > benefit early in that thread from last June.
>
> A year ago last June, I think.

Yes, thanks.

Tim Golden BandTech.com

unread,
Dec 22, 2010, 10:16:40 AM12/22/10
to
On Dec 21, 6:58 pm, Rotwang <sg...@hotmail.co.uk> wrote:
> Rotwang wrote:
> >>> Tim Golden BandTech.com wrote:
> >>>> I agree with this latter statement, though it does conflict with the
> >>>> first quote,
>
> >>> No it doesn't. The fact that an expression P = a0 + a1 X + ... an X^n
> >>> may either be taken as notation for P = (a0, a1, ..., an, 0, 0, ...) or
> >>> shorthand for P = /a0/ & /a1/#X & /a2/#X#X & ... /an/#X#X#...#X (using
> >>> Arturo's notation) leads to no conflict, because both non-shorthand
> >>> expressions evaluate to the same element P of R[X], as is very easy to
> >>> prove.
>
> Do you have a response to this, Tim?

Yes, and it was in my last post, and is the crux of my statement
below.
You have caved in here by posing a new operational symbol.

> > Nicely Rotwang has rebridged exactly where Arturo did and so we have
> > two supporters of traditional AA(Arturo and Rotwang) providing
> > corrections to the topic while being in complete denial of what they
> > have done.
>
> We provided no "corrections"; that the definitions we gave are
> standard may be seen in any number of sources, for example Cameron's
> book.

I don't have access to Cameron's book. The correction that I discuss I
will discuss further below.


>
> > Note how little address this statement will receive in
> > their responses.
>
> Note the many points in my earlier post which you failed to address.
> How about doing so next time?

I am sorry, but my fundamental complaint is that the product
a1 X
is not a ring product, because a1 and X are not in the same set.
a1 is real, while X is not real. In effect all that you have done is
to work up a lot of detail without ever addressing the fundamental
point. A simple way of describing the polynomial would be:

A polynomial is a set composed of products and sums that are not ring
behaved, but their products and sums are ring behaved.

This line above is inherently conflicted, and matches your earlier
statement. To construct the polynomial requires the usage of
incompatible products and sums. The incompatibility is all about the
closure axioms, which feel familiar, but when formalized expose that
the usage of any unit vector like
a i + b j
or
1.0 + 2.0 X
do not conform to the ring definition, which requires products of
similar type source yielding results of that same type. Clearly the
unit vector notation does alter the meaning of its component, and this
product relationship is much as I have described one general
resolution of unlike types under product: they simply do not resolve.
This then becomes a means of dimensional representation, but the types
of those 'components' within this notation must be of unique type;
otherwise they will compute down to a condensed value. This is exactly
the behavior that allows the polynomial form, and it is not at all the
ring operations that are in use.

In what way does your work address this? And now you ask me to address
all of your work?
I did enter your python code, and I believe it is the first python
that I have used. In terms of coding the trouble that I laid out is
on typed languages. I do see that python does type checking somewhat,
but not like the ring operators do, or as it would be in a strongly
typed language as I've laid out previously.

I do see that you are sincere and are expending energy here, but I
don't see that you are being very direct. I would ask that you somehow
return back to the crux in order to explain how you've come to the
&,#,// notation that you use. As far as I can tell it is the latter
symbol
/a(n)/
which is the new fixup which you deny having made. This appears to me
as a typecast which actually does
1.23 -> ( 1.23, 0, 0, 0, ... ) .
This is where my awareness is in the moment, and I am open to
correction, but for instance you've already admitted that


1.23 = ( 1.23, 0, 0, 0, ... )

is false.

Lastly though, if I were to convince you that the polynomial
construction is not pristine, then you could perhaps come to see how
just nearby the same rotational principle can play out on the sign of
the real number, causing a new fundamental representation, but without
the reliance upon infinite series to get a clean theoretical
construction. Incidentally, this may be a criticism of the code you
supplied, where we can ask whether
( 1.1, 2.2, 3.3 ), ( 1.1, 2.2, 3.3, 4.4 )
are in the same set. The loose usage of the tuple notation within this
subject may allow you to answer 'yes' to the above, but similar to the
1.23 instance you should answer 'no'.

There are further criticisms nearby this topic, as for instance if we
were to consider the space CxC, which is fairly naturally occurring up
in higher dimension. This is the complex numbers as C, and now when we
introduce a value in the reals in product with CxC then we must make a
choice as to which C we will cast it into, and here we see that the
interpretation of R as a subset of C can be broken when generality of
dimension is imposed. As to what this full generality or stricture is
with regard to the tuple, well, this can be left open here, but the
topic is right nearby what we are discussing, and the idea that the
tuple form can take precedence over the polynomial form is not
acceptable, for it is a very lax format. I'm sorry to put in sideways
content, but it is connected, and gets toward the root of the topic
which I am attempting to address, whereas you and your buddy Arturo
are only interested in upholding the status quo.

- Tim

Arturo Magidin

unread,
Dec 22, 2010, 6:04:52 PM12/22/10
to
On Dec 22, 9:16 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> I'm sorry to put in sideways
> content, but it is connected,

It's there because your only out is to change the subject.

> and gets toward the root of the topic
> which I am attempting to address, whereas you and your buddy Arturo
> are only interested in upholding the status quo.

Bullshit. You are only interested in claiming that you are keen and
insightful and that everyone else has no clue.

Too bad it's all a bunch of lies you tell yourself to be able to sleep
at night. You are nothing but an ignoramus who works really hard at
remaining as ignorant as you can.

--
Arturo Magidin

Rotwang

unread,
Dec 22, 2010, 6:39:13 PM12/22/10
to
On Dec 22, 3:16 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

> On Dec 21, 6:58 pm, Rotwang <sg...@hotmail.co.uk> wrote:
>
>> Rotwang wrote:
>>>>> Tim Golden BandTech.com wrote:
>>>>>> I agree with this latter statement, though it does conflict with the
>>>>>> first quote,
>
>>>>> No it doesn't. The fact that an expression P = a0 + a1 X + ... an X^n
>>>>> may either be taken as notation for P = (a0, a1, ..., an, 0, 0, ...) or
>>>>> shorthand for P = /a0/ & /a1/#X & /a2/#X#X & ... /an/#X#X#...#X (using
>>>>> Arturo's notation) leads to no conflict, because both non-shorthand
>>>>> expressions evaluate to the same element P of R[X], as is very easy to
>>>>> prove.
>
>> Do you have a response to this, Tim?
>
> Yes, and it was in my last post, and is the crux of my statement
> below.
> You have caved in here by posing a new operational symbol.

What "new operational symbol"? I don't believe I've introduced any
operational symbols that I didn't already use 18 months ago.


>>> Nicely Rotwang has rebridged exactly where Arturo did and so we have
>>> two supporters of traditional AA(Arturo and Rotwang) providing
>>> corrections to the topic while being in complete denial of what they
>>> have done.
>
>> We provided no "corrections"; that the definitions we gave are
>> standard may be seen in any number of sources, for example Cameron's
>> book.
>
> I don't have access to Cameron's book. The correction that I discuss I
> will discuss further below.
>
>
>>> Note how little address this statement will receive in
>>> their responses.
>
>> Note the many points in my earlier post which you failed to address.
>> How about doing so next time?
>
> I am sorry, but my fundamental complaint is that the product
> a1 X
> is not a ring product,

That's right, it isn't, as I've already stated several times
(including my first post in the thread you started June of last year).
It is notation for the polynomial whose coefficients a_i are equal to
a1 when i = 1, and 0 otherwise. That polynomial happens to be equal to
the product of the two polynomials /a1/ (whose coefficients a_i are
equal to a1 when i = 0, and 0 otherwise) and X (whose coefficients a_i
are equal to 1 when i = 1, and 0 otherwise). Note that both of those
two things are polynomials, so the only product anywhere in sight is a
product of two things which are in the same set, namely the ring of
polynomials. I've explained this many times, and your only response to
this explanation is to repeat your original "complaint" in such a way
that completely ignores the explanations you have already been given.
Why are you bothering to ask if you have no intention of listening to
the answer?


> because a1 and X are not in the same set.
> a1 is real, while X is not real. In effect all that you have done is
> to work up a lot of detail without ever addressing the fundamental
> point. A simple way of describing the polynomial would be:
>
> A polynomial is a set composed of products and sums that are not ring
> behaved, but their products and sums are ring behaved.

No, that is not a simple way of describing the polynomial. That is
nonsense. Look, you stated many times in the other thread that you do
not understand the definition of the polynomial ring. I do understand
it. I've given you the definition, and it looks nothing whatsoever
like the garbled non-definition you write above. Why on Earth are you
presuming to tell me what a polynomial is, when I know and you don't?


> This line above is inherently conflicted,

Right, because the line in question is some nonsense that you've made
up. If the random guesses you make up seem to conflict with the things
that people who know about the subject are telling you, don't you
think it more likely that that's because your guesses are wrong (they
are) than because the subject you know nothing about is flawed?


> and matches your earlier statement.

No, it does not match any of my earlier statements. Once again, the
definition I gave of the polynomial ring A[X] is as follows: its
underlying set consists of sequences (a_i | i in N) such that a_i is
in A for each i in N, and such that only a finite number of a_i's are
non-zero. Given two polynomials (a_i | i in N_ and (b_i | i in N), we
define their sum by:

(a_i | i in N)&(b_i | i in N) = (a_i + b_i | i in N)

and their product by:

(a_i | i in N)#(b_i | i in N) = (sum_{j = 0}^i a_j*b_{i - j} | i in N)

Do you notice that the above definitions make no reference at all to
anything of the form a1 X? And that any objection to the above
construction that refers to something of the form a1 X therefore makes
no sense whatsoever?
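
For what it's worth, those two operations can be transcribed directly
into code on finitely-supported coefficient lists; a rough sketch (the
function names are purely illustrative):

def poly_add(a, b):
    # (a_i | i in N) & (b_i | i in N) = (a_i + b_i | i in N)
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [a[i] + b[i] for i in range(n)]

def poly_mul(a, b):
    # (a_i | i in N) # (b_i | i in N) = (sum_{j = 0}^i a_j*b_{i - j} | i in N)
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i in range(len(a)):
        for j in range(len(b)):
            out[i + j] += a[i] * b[j]
    return out

For example, poly_mul([1, 2], [0, 3, 4]) returns [0, 3, 10, 8], matching
the interpreter session earlier in the thread, and nothing of the form
a1 X appears anywhere in either function.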


> To construct the polynomial requires the usage of
> incompatible products and sums.

Where in the above construction of the polynomial ring are there any
incompatible products or sums?


> The incompatibility is all about the
> closure axioms, which feel familiar, but when formalized expose that
> the usage of any unit vector like
> a i + b j

a_i + b_j is not a "unit vector".


> or
> 1.0 + 2.0 X
> do not conform to the ring definition,

Where in the above construction of the polynomial ring are there any
expressions that look like that?


> which requires products of
> similar type source yielding results of that same type. Clearly the
> unit vector notation does alter the meaning of its component, and this
> product relationship is much as I have described one general
> resolution of unlike types under product: they simply do not resolve.
> This then becomes a means of dimensional representation, but the types
> of those 'components' within this notation must be of unique type;
> otherwise they will compute down to a condensed value. This is exactly
> the behavior that allows the polynomial form, and it is not at all the
> ring operations that are in use.
>
> In what way does your work address this?

The way that my work addresses this is that my "work" (which is
actually just stating elementary facts about polynomial rings that
you've been told many times before) simply makes no reference to
products of real numbers with things that aren't real numbers, or unit
vectors, or any of the other things you complain about, so that your
complaints make no sense at all.


> And now you ask me to address all of your work?
> I did enter your python code, and I believe it is the first python
> that I have used. In terms of coding the trouble that I laid out is
> on typed languages. I do see that python does type checking somewhat,
> but not like the ring operators do, or as it would be in a strongly
> typed language as I've laid out previously.
>
> I do see that you are sincere and are expending energy here, but I
> don't see that you are being very direct.

I don't know how I can be more direct than by giving the definition of
the polynomial ring and asking where in that definition these alleged
products of two things from different sets are supposed to occur. What
would be more direct on your part is if you would simply answer the
question.


> I would ask that you somehow
> return back to the crux in order to explain how you've come to the
> &,#,// notation that you use.

These notations were introduced by Arturo AFAIK. The same things are
usually denoted by different symbols, but since those symbols were
causing confusion on your part it was apparently necessary to
introduce new ones.


> As far as I can tell it is the latter symbol
> /a(n)/
> which is the new fixup which you deny having made.

Yes, I deny having made this because I didn't make it.


> This appears to me
> as a typecast which actually does
> 1.23 -> ( 1.23, 0, 0, 0, ... ) .

Yes, exactly right. The thing on the left is a real number. The thing
on the right is a polynomial. The function // takes real numbers (or
elements of your ring A if you're interersted in some ring other than
R) to polynomials. The latter are in the same set as the polynomial X
(whose coefficients a_i, recall, are equal to 1 when i = 1 and 0
otherwise) and therefore it makes sense to multiply /a/ with X, since /
a/ and X are both elements of the ring A[X].
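
In terms of the poly class posted earlier in the thread, here is a
hedged sketch of that same point (the name embed is purely illustrative,
and integer coefficients are used since that class accepts nothing else):

X = poly([0, 1])          # the polynomial (0, 1, 0, 0, ...)

def embed(a):             # the map written /a/ above: a -> (a, 0, 0, 0, ...)
    return poly([a])

# embed(3) and X are both poly instances, so their product is an ordinary
# product inside the ring of polynomials; it is the polynomial usually
# written "3 X".
print embed(3) * X == poly([0, 3])    # should print True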


> This is where my awareness is in the moment, and I am open to
> correction, but for instance you've already admitted that
> 1.23 = ( 1.23, 0, 0, 0, ... )
> is false.

Your use of the word "admitted" borders on dishonest. I don't believe
I've ever written anything that would suggest otherwise. Do you now
admit that it's wrong to beat your wife?


> Lastly though, if I were to convince you that the polynomial
> construction is not pristine, then you could perhaps come to see how
> just nearby the same rotational principle can play out on the sign of
> the real number, causing a new fundamental representation, but without
> the reliance upon infinite series to get a clean theoretical
> construction. Incidentally, this may be a criticism of the code you
> supplied, where we can ask whether
> ( 1.1, 2.2, 3.3 ), ( 1.1, 2.2, 3.3, 4.4 )
> are in the same set.

Not that I see what this has to do with the code, but the question
makes no sense. ( 1.1, 2.2, 3.3 ) and ( 1.1, 2.2, 3.3, 4.4 ) are not
just in one set each, they are in many sets. For example, they are
both in the set {( 1.1, 2.2, 3.3 ), ( 1.1, 2.2, 3.3, 4.4 )}. One of
them is in the set {( 1.1, 2.2, 3.3 )}, but the other isn't.


> The loose usage of the tuple notation within this
> subject may allow you to answer 'yes' to the above, but similar to the
> 1.23 instance you should answer 'no'.


Wrong, in fact the same answer applies.

You still didn't answer most of my questions, by the way.

Arturo Magidin

unread,
Dec 22, 2010, 10:35:24 PM12/22/10
to
On Dec 22, 5:39 pm, Rotwang <sg...@hotmail.co.uk> wrote:

[...]

> You still didn't answer most of my questions, by the way.

When I presented polynomials from the ground up, I stopped right after
defining the addition and product of polynomials. I did not introduce
any notion of "degree", I had not introduced the usual shorthand,
nothing. I finished asking if he was "with me so far."

He replied he was, and then proceeded to ignore everything I had said
and start complaining about degrees and dimensions of polynomials and
vectors, saying I had said that "0s are meaningless", and all sorts of
things. Essentially, just repeating the original muddled complaints,
ignoring everything that had been written. Then he claims nobody has
"falsified" his "objections" or his "claims of error."

Hasn't changed.

He's not interested in listening. What he wants is a pat on the head,
or probably closer to the point, for the masses to tell him how
impressed they are with his keenness. This is not about mathematics,
it's about his ego. He's not going to answer *any* of your questions,
except by misrepresentations and dishonesty. That's all he's ever
done.

--
Arturo Magidin

Bill Dubuque

unread,
Dec 23, 2010, 2:01:41 AM12/23/10
to
quasi <qu...@null.set> wrote:
>On Mon, 20 Dec 2010 21:38:44 +0000, Rotwang <sg...@hotmail.co.uk> wrote:
>>On 19/12/2010 20:38, Tim Golden BandTech.com wrote:
>>
>>> for if they are not sum and product operators, then what are they?
>>
>>They are NOTATION. The reason mathematicians use this notation is because
>>they find it simple and intuitive. Clearly, though, you do not. So, if
>>you wish to understand the polynomial ring construction then there's a
>>very simple solution to the problems you have with this notation: don't
>>use it. You don't need to use the notation to define R[X] at all, nor do
>>you need it to prove anything about R[X]. Once again, the definition of
>>R[X] is as follows: R[X] is the set of all sequences
>>(a_i | i in N) such that each a_i is in R and only a finite number of
>>a_i are different from 0. Given two polynomials (a_i | i in N) and
>>(b_i | i in N) their sum is defined by
>>
>>(a_i | i in N)&(b_i | i in N) = (a_i + b_i | i in N)
>>
>>and their product is defined by
>>
>>(a_i | i in N)#(b_i | i in N) = (sum_{j = 0}^i a_j*b_{i - j} | i in N).
>>
>>I claim that, with the above definitions, (R[X], &, #) is a ring. Do you
>>disagree? If so, which part of the definition of a ring does it fail to
>>satisfy? If not, what is your objection? Note that your objection should
>>refer only to the construction I give immediately above, since that
>>construction is exactly what is meant by R[X]. Anything else is just
>>notation that mathematicians use to talk about R[X].
>
> Well done!
>
> Of the many valiant attempts to break through Tim Golden's failure to
> comprehend, I think the above is the clearest yet. I hope Tim puts the
> same effort into trying to understand your explanation as you put into
> assembling it.

This is just the standard construction of R[X] - which I pointed out
to Tim in the 9'th message [1] in the prior thread on Jun 5 2009,
8 hours after he posted questions on it. In the following 660 posts
over a month's time we elaborated at length on this construction.
Alas, apparently he couldn't understand any of those explanations.

--Bill Dubuque

[1] http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.mit.edu

Tim Golden BandTech.com

unread,
Dec 23, 2010, 11:12:30 AM12/23/10
to

Thanks for the link back to that Bill:
"Then X = (0,1,0,0,0...), and X^n is the sequence having 1 in the
n'th place and 0 elsewhere; r = (r,0,0,0...) for constants r in R.
Now the question "what is X?" has a clear and rigorous answer."

If we were to ask for the construction of a polynomial, would it be
fair to construct the polynomial from polynomials? This is essentially
what you and Rotwang have done. By disregarding the multiterm sum of
products which is where the meaning of the word comes from and
replacing that meaning with tuples you somehow feel satisfied, yet
you've broken the construction and have built a circular argument. The
polynomial consists of polynomials. Great. All the while with real
coefficients the idea that the workspace is
R^inf
goes ignored. Having specified one singular value the proper form is
not to equate but to formally map a value r into that multidimensional
space. There are numerous such mappings.

My own investigation of X as a unit vector whose products are
rotational will require one more formal product definition, but the
results of the whole subject look more dismal when we see that the
results after posing the confusing ideal are that lower dimensional
spaces are constructed from infinite dimensional spaces.
The result could be built from low dimension upward instead, thus
engaging modulo behavior as a root rather than twisting it around an
infinite dimensional construction.

Of course I would prefer the conversation to go in a different
direction. I do consider these topics to be open and that superior
constructions may exist. Most importantly the abstract algebra
approach is not the pristine thing that mathematicians will claim it
to be.

There is another interesting offshoot to the polysign approach that
some onlooker might appreciate. The standard polynomial requires zeros
on the tail end outward to infinity. Under the polysign approach it is
possible to relax this requirement to a convergence to any constant
value; with some approximation possible for continuously defined
sequences. So for instance within polysign we could have
( 2.0, 1.5, 1.25, 1.125, ..., 1, 1, 1, ... )
as a workable sequence. The reason is simply that the above sequence
is equivalent to
( 1.0, 0.5, 0.25, 0.125, ..., 0, 0, 0, ... ).
It would also be possible to have a nonconvergent sequence that could
balance out as say a sinewave taken in say fifths on a P4 space. No
doubt this sort of thing maps over to your own polynomial theory such
that some ideals work well with some sequences.

I am not entirely opposed to the infinite dimensional construction,
but I do believe it is wise to allow the buildup to that infinite
dimensional construction, rather than start from it. Polysign does
this, and it is easy to regard the lower members as consistent with
that infinite dimensional version.

I would point out to you Bill that at the time that you answered my
question I was actually in the process of studying abstract algebra
since Hagman's(?) rebuttal to polysign hinges upon abstract algebra. I
was having great difficulty with the language of the quotient ring and
the ideal as well. It turns out that those difficulties spring from
previous difficulties that I point out here. The ring definition is
clear: elements satisfying it are within the same set. It becomes
immediately conflicted to construct 'polynomials with real
coefficients'. I am sorry to see you maintain your attachment so
closely to the subject. You have been assimilated. This is nearly a
religious issue, and I concede that there are plenty of highly
intelligent priests whose beliefs are not so impressive as their raw
intelligence. They will simply choose to follow the book... One must
preserve the book...

The real number in no way implies itself to be
a = ( a, 0, 0, 0, ... ) .
The LHS is one dimensional, and the RHS is infinite dimensional. The
amount of information in the LHS is infinitessimal in relation to the
amount of information in the RHS, and so no direct equation can be
possible. This formality is lacking in AA, and it may seem a small
complaint, yet the dimensional interpretation is fairly offensive
isn't it?

- Tim

> In the following 660 posts
> over a month's time we elaborated at length on this construction.
> Alas, apparently he couldn't understand any of those explanations.
>
> --Bill Dubuque
>

> [1]http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.m...

Tim Golden BandTech.com

unread,
Dec 23, 2010, 11:38:45 AM12/23/10
to
On Dec 22, 6:39 pm, Rotwang <sg...@hotmail.co.uk> wrote:
> On Dec 22, 3:16 pm, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
(snip)

> > As far as I can tell it is the latter symbol
> >    /a(n)/
> > which is the new fixup which you deny having made.
>
> Yes, I deny having made this because I didn't make it.
>
> > This appears to me
> > as a typecast which actually does
> >    1.23 -> ( 1.23, 0, 0, 0, ... ) .
>
> Yes, exactly right. The thing on the left is a real number. The thing
> on the right is a polynomial. The function // takes real numbers (or
> elements of your ring A if you're interested in some ring other than
> R) to polynomials. The latter are in the same set as the polynomial X
> (whose coefficients a_i, recall, are equal to 1 when i = 1 and 0
> otherwise) and therefore it makes sense to multiply /a/ with X, since /
> a/ and X are both elements of the ring A[X].
(snip)

> You still didn't answer most of my questions, by the way.

Well, I am sorry to snip so much, but I think it is wiser to take one
point at a time, and for now I would rather just consider the above
argument, which we actually seem to be in agreement upon. You have
copied Arturo's notation, and that is fine, but when you post it and I
call it yours, then I see no need to quibble over that detail. It is
good of you to credit Arturo.

As I've said before, this can become a game of whack-a-mole, but I am
growing tired of that.
So I will just point out as I've already done that by introducing the
operator
/ a(n) /
you've accepted one piece of my criticism and provided a remedy, all
the while in denial.

- Tim

Arturo Magidin

unread,
Dec 23, 2010, 11:58:06 AM12/23/10
to
On Dec 23, 10:38 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

No. He's pointed out that this was solved long ago by the same subject
that you decry in exactly the way he is describing, something that you
ignore.

You keep missing the point: your "criticism" had a place 200 years
ago, BEFORE abstract algebra came into place. It was abstract algebra
that SOLVED the ambiguity about what X was or was not. Before they
did, things were not on solid ground. Abstract algebra solved this
problem IN EXACTLY THE WAY WE ARE DESCRIBING TO YOU.

Instead, your version of "history" is that polynomials made sense
BEFORE abstract algebra, and that abstract algebra made them nonsense.
You seem to think that Rotwang, or I, are inventing new notation and
new ideas to cover that up in response to your criticism. THAT IS
STUPID NONSENSE. All we are doing is reporting to you what Abstract
Algebra did over 100 years ago; the notation chosen is ad-hoc because
ASCII does not provide the necessary typography that algebraists had
in blackboards and books 100 years ago when they solved the problem,
but the report is verbatim what has been done for over 100 years to
define polynomials.


You are quite simply addressing your criticisms to the wrong
historical period, and blaming those that *solved* those problems for
problems that no longer exist.

What you fail to acknowledge, Timmy, is REALITY.

--
Arturo Magidin

master1729

unread,
Dec 23, 2010, 2:31:55 PM12/23/10
to
Timothy Golden wrote :

> On Dec 17, 12:55 pm, Aatu Koskensilta


> <aatu.koskensi...@uta.fi> wrote:
> > Arturo Magidin <magi...@member.ams.org> writes:
> > > Also, perhaps I did not make myself clear: you
> completely, and
> > > utterly, misunderstood Bill Dubuque's comments.
> You did so because you
> > > are unwilling (or incapable) to look past the end
> of your nose, and
> > > think everything is about you.
> >
> >   In all fairness, good old-fashioned incompetence
> is another plausible
> > explanation.
> >

> > --
> > Aatu Koskensilta (aatu.koskensi...@uta.fi)
> >
> > "Wovon man nicht sprechen kann, darüber muss man
> schweigen"
> >   - Ludwig Wittgenstein, Tractatus
> Logico-Philosophicus

i didnt read that thread yet , maybe i should.


>
> Well, here we have replies without a single
> falsification; merely
> hurling insult after insult. This is highly
> unmathematical behavior.

welcome to sci.math.

i agree with you that it is an often made mistake that every number system must be based upon the reals.

as P3 demonstrates.

your polysigns are " group rings " , that is its true terminology.

apart from that exception , i dont see anything wrong with abstract algebra.

i know you dont like to consider polynomials as rings , and i can understand that since there is no " reduction rule " nevertheless i dont object to it.


>
> Now, at some point we should anticipate that all
> mathematical
> construction should be clean, and in this regard it
> does mean that it
> should be straightforward. I do take a
> straightforward approach.
> Abstract Algebra does not.

why not ? apart from the 2 cases mentioned above , i think it does.

>
> For all of your disagreement there is no specific
> content that I can
> discuss. All of your own counterattacks here have
> zero content. For
> the sake of providing some small amount of content I
> again summarize
> my complaint, for it is so brief as to be nearly
> costless to jot down:
>
> The polynomial construction, as well as the
> the traditional development
> of the complex number, while claimed to be ring
> behaved cannot be so,
> for they contain product and sums which do not
> satisfy the closure
> requirement of those operators.

what ? what products and sums do not satisfy the closure ?

> The lack of
> discussion of this
> fundamental sort is bewildering. A math which bothers
> to formally
> define its operators, and then go on to use those
> same operators
> without compatibility is a contemptible construction.

discuss what ?


>
> Your own contempt of myself is irrelevant. I discuss
> a topic within
> modern mathematics and claim a falsification.

what falsification ?


> You
> (Jesse) provide one
> brief formal statement, which I work off of above
> here, proving your
> interpretation to be poor, and there the discussion
> ends as far as the
> topic itself is concerned. You seem to prefer an
> emotional basis,
> whereas I am attempting a mathematical basis, with
> some emotional
> epithets. One should try to keep the topic
> interesting. You all are
> weak here.
>
> If I am so wrong then why not point out the exact
> spot where I am
> wrong, just as I do for you? This is such a basic
> part of discussion
> that is lacking here, and here on a mathematics
> group. I surmise that
> this is one exposure of human social behavior, and
> further that the
> ability of an academically trained person to
> challenge their own
> teachings is severely limited by the en masse belief,
> especially
> within a subject whose perfection is claimed to be
> beyond a doubt. Yet
> none are willing to even doubt.

im always willing to doubt.

but i do not doubt everything.

i see no reason to attack abstract algebra in its whole.

just promote " group rings " like polysigned more.

> This is a serious
> problem within
> mathematical thinking for the human. How many
> opportunities were you
> given to decide whether you believe what you were
> taught? Very few. If
> you did not swallow quickly what was taught and mimic
> it properly then
> you were not a good enough student.

educated guy hmm :)

> In this way
> academia is rewarding
> mimicry and lacks analytical training of the sort
> that is necessary to
> uncover past mistakes and possible reworks.


yeah.

but i dont think there are many ' past mistakes ' in abstract algebra ...


> How much
> of academia will
> be valid in one hundred years?

after the apocalypse ? :)

> How much accumulation
> can humans
> stomach? Especially poor accumulation will have to be
> dealt with, and
> so clean replacements are sought. Abstract Algebra
> claims to be one of
> these sorts of universal disciplines, but I do not
> see it this way.

correction : set theory claims to be the universal foundation.

i say its number theory instead.

history agrees with me.


> Yes, the ring definition is fairly pristine, but then
> it goes broken
> almost immediately by its own followers. Formalizing
> operators is
> good, but then the formality must be followed.
>
> - Tim

'pristine' ? my english is not that good srr.

regards

tommy1729

master1729

unread,
Dec 23, 2010, 3:01:05 PM12/23/10
to
Timothy Golden wrote :


no.

kinda consider + as a vector sum , just as a + bi can be seen as a vector sum in the complex plane.


(snip)

hoping it helped.

tommy1729

master1729

unread,
Dec 23, 2010, 5:16:37 PM12/23/10
to
timothy golden wrote :


>
> I am sorry, but my fundamental complaint is that the
> product
> a1 X
> is not a ring product, because a1 and X are not in
> the same set.
> a1 is real, while X is not real.

then pi * i is invalid too ??

pi is real and i is not , just like a1 and X !

hence a + b i is not an element of the ring of complex numbers ?

see where you went wrong ??

tommy1729

Rotwang

unread,
Dec 23, 2010, 7:45:16 PM12/23/10
to

Except that I didn't "introduce" the aforementioned operator, and
neither did Arturo. Look, the only two undergraduate-level algebra books
I have on my shelf are Cameron's /Introduction to Algebra/ and Stewart's
/Galois Theory/. Let's see what they have to say about polynomial rings.
From Cameron:

A /polynomial/ over a ring R is an infinite sequence (a0, a1, a2, ...)
of elements of R, indexed by the non-negative integers, with the
property that there exists an integer n such that ai = 0 for all
i > n. In accordance with the usual notation, we write the sequence
(a0, a1, a2, ...) as a0 + a1 x + a2 x^2 + ..., or (if n is as in the
definition) as Sigma_{i = 0}^n ai x^i.

[...]

A /constant polynomial/ is a polynomial Sigma ai x^i with ai = 0 for
i > 0. In other words, the constant polynomials are the zero
polynomial and the polynomials whose degree is zero. They form a
subring of R[x] isomorphic to R. Often, we don't distinguish
carefully between the ring element r and the constant polynomial
r = Sigma ai x^i with a0 = r and ai = 0 for i > 0.

As you can see, Cameron explicitly refers to the polynomial Sigma ai x^i
with a0 = r and ai = 0 for i > 0 (which is the polynomial (r, 0, 0, ...)
written in the Sigma notation) associated to an element r of R. This is
exactly what Arturo and I have called /r/; Cameron simply uses the same
character r instead of the notation /r/ to denote this polynomial. From
Stewart:

[Exercise] 2.2 A set theorist would define C[t] as follows [ed note:
C is written in blackboard bold to denote the ring of complex
numbers]. Consider the set S of all infinite sequences

(an)_{n in N} = (a0, a1, ..., an, ...)

where an in C for all n in N, and such that an = 0 for all but a
finite set of n.

[...]

Define the map

theta: C to S
theta(k) = (k, 0, 0, 0, ...)

and prove that theta(C) c S is isomorphic to C.

As you can see, the map theta defined by Stewart is exactly the map
which Arturo and I have called //. Both of these quotes demonstrate that
I did not, in fact, introduce this operator in response to your
criticism. Rather, this operator had already been introduced long before
you came up with your silly criticism, and was well known among
everybody who actually understands the definition of the polynomial
ring. You too would have known about it within three days of starting
the "Understanding the quotient ring nomenclature" thread, if only you
had bothered to actually read the replies you received.
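
Stewart's exercise can even be spot-checked mechanically with the poly
class from earlier in the thread (a rough sketch only, restricted to
integer inputs because that class rejects anything else; theta is the
map quoted above):

def theta(k):
    # theta(k) = (k, 0, 0, 0, ...)
    return poly([k])

# theta respects the ring operations, so its image is a subring
# isomorphic to the coefficient ring:
print theta(2 + 3) == theta(2) + theta(3)    # should print True
print theta(2 * 3) == theta(2) * theta(3)    # should print True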

Anyway, I would still appreciate an answer to the following questions
from my last post:

Tim Golden BandTech.com

unread,
Dec 26, 2010, 8:34:00 AM12/26/10
to
On Dec 23, 5:16 pm, master1729 <tommy1...@gmail.com> wrote:
> timothy golden wrote :
>
>
>
> > I am sorry, but my fundamental complaint is that the
> > product
> >    a1 X
> > is not a ring product, because a1 and X are not in
> > the same set.
> > a1 is real, while X is not real.
>
> then pi * i is invalid too ??

If the operator * as you use it above cannot be a ring operator, for
if it were then these two constant values would resolve to a singular
value.

>
> pi is real and i is not , just like a1 and X !
>
> hence a + b i is not an element of the ring of complex numbers ?

I would state this question differently, and point out that there are
two operations in the expression
a + b i ;
one is a sum and one is a product, and if these do not combine into a
singular value as is stated within the ring definition then there is a
conflict. Clearly the a+bi notation is intended to not combine or
compute such that it maintains two dimensional structure. Thus the
operations cannot be ring behaved. This is not discussed in any
abstact algebra text, and here we see people issuing the tuple as the
means of conflict resolution, which is to say that they half accept my
criticism, without admitting it.


>
> see where you went wrong ??

No Tommy. I have pointed out here a subtle difference in context. It
is to observe the two operations within
a + b i
and see that they are not ring behaved, while mathematics teaches us
that this thing is ring behaved. One logical resolution is to declare
two more formal operators, but this would be very messy. Polysign
takes a different way out.

Roughly the same argument applies to the polynomial construction of
AA.

- Tim

>
> tommy1729

Tim Golden BandTech.com

unread,
Dec 26, 2010, 9:28:31 AM12/26/10
to

No he doesn't. You are grovelling here. Also the above never considers
a real value r or a.
It is upon assigning a real coefficient that the trouble begins, and
so your own defense is not even present above, though it reads nicely
enough. No, you and Arturo have placed a real value a in brackets
/ a /
to explicitly turn it into a polynomial, which it is not when it
stands freely without these brackets. Nowhere in the above discussion
do I see any of this content, and really your poor argumentation here
is proof either of your own lack of understanding of the position that
I take, or evidence that you see my criticism and have no actual
defense. For the sake of understanding I again present a short claim
of a conflict within abstract algebra:
The polynomial with real coefficients cannot be ring behaved, for
if it were then we would allow
a0 + a1 X + a2 X X = a3 .
This is a variation of my original, and can't help but reach into the
same area of awareness. That real valued a and nonreal valued X can
combine in products and sums is in conflict with the ring definition.

> From
> Stewart:
>
>    [Exercise] 2.2  A set theorist would define C[t] as follows [ed note:
>    C is written in blackboard bold to denote the ring of complex
>    numbers]. Consider the set S of all infinite sequences
>
>      (an)_{n in N} = (a0, a1, ..., an, ...)
>
>    where an in C for all n in N, and such that an = 0 for all but a
>    finite set of n.
>
>    [...]
>
>    Define the map
>
>      theta: C to S
>      theta(k) = (k, 0, 0,  0, ...)
>
>    and prove that theta(C) c S is isomorphic to C.
>
> As you can see, the map theta defined by Stewart is exactly the map
> which Arturo and I have called //. Both of these quotes demonstrate that
> I did not, in fact, introduce this operator in response to your
> criticism. Rather, this operator had already been introduced long before
> you came up with your silly criticism, and was well known among
> everybody who actually understands the definition of the polynomial
> ring. You too would have known about it within three days of starting
> the "Understanding the quotient ring nomenclature" thread, if only you
> had bothered to actually read the replies you received.

Well, this theta() mapping does seem to come close, though it is not
used in the general polynomial expression. It seems as if your own
statement above is to say that I have been correct all along, and that
modern abstract algebra interpretations which overlook this grimy
detail are getting a bit too glossy. Anyway, we do not see any actual
discussion of the problem of potential conflict with the ring
operators, which would be to admit that
c0 + c1 X + c2 XX
where c(n) are complex and X is not complex cannot be ring behaved. No
doubt your author goes on to use this standard notation, for if he did
not then you would have quoted that portion of his discussion.

Your arguments here are weak, though on the surface they appear very
strong. The discussion for me will always come back to the simple
operation
a1 X
where a1 is of instantiable quality, such as
1.234 X
such that we have a real value in product with a value that is not
real and is hardly defined at all, other than as a dimensional
construct, which does correctly align with the tuple usage, which
would then be consistent with building low dimensional structures from
infinite dimensional ones.

The sign of the real number already has modulo two behavior, and this
is the correct taking off point for natural dimensional increase. The
modulo three form is two dimensional and is the complex numbers,
though in a different format. Within the real numbers we see the
behavior


- x + x = 0

and this symmetry extends to the three-signed form


- x + x * x = 0

where '*' is a new sign. Interestingly, this law of the signs and
their cancellation is not actually a portion of the sum and product
operations. It is instead tied more to the geometry, which is implied
by the symmetrical balance. We should attempt to build dimension from
simpler things than the infinite dimensional form. The ring behavior
of the polysign numbers is stronger than with the standard complex
number, but it is true that if we regard
- 1.0 + 2.0 * 3.0
as a sum that this sum will not condense any further than
+ 1.0 * 2.0
and so general dimension is born. The ring definition becomes most
adequate when we regard these dimensional values by a representative z
which is unitary and hides internal sums and products. For some this
is JUST NOTATION, but this is the only way to get
z1 z2 = z3, z1 + z2 = z4
ring behaved operations. Same type in and same type out. They must be
unitary to do this. In some regards this interpretation denies the
plausibility of a real value in product with a complex value. Instead
an explicit map must be made, consistent with the concept of
projection, though there is often a default projection which seems
most sensible, and this is how we come to think of the real number as
a subset of the complex number.

>
> Anyway, I would still appreciate an answer to the following questions
> from my last post:
>
> >> Do you notice that the above definitions make no reference at all to
> >> anything of the form a1 X? And that any objection to the above
> >> construction that refers to something of the form a1 X therefore
> >> makes no sense whatsoever?

Here again Rotwang is very weak argumentation. You have severed your
own argument from the polynomial representation and so claim the tuple
form is a clean resolution, and now you are back to quoting from your
predecessors who use the polynomial form freely. I simply argue that
your own declaration here of the polynomial form as inadequate is
consistent with my own statements of a conflict in abstract algebra.

- Tim

Arturo Magidin

unread,
Dec 26, 2010, 8:46:03 PM12/26/10
to
On Dec 26, 8:28 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:

None so blind as those who will not open their eyes.

> You are grovelling here.

None so stupid as those who use words whose meaning they do not
understand.

> Also the above never considers
> a real value r or a.

None so ignorant as Timmy.

Just say it, Timmy: You don't care. You will stick by your "criticism"
and your invented "history" so long as you draw breath, because you
rather pretend to yourself to have half a clue than admit that you are
an idiot.

Too bad that it doesn't take your acceptance of the fact. An idiot you
remain, whether you admit it or not.

--
Arturo Magidin

Rotwang

unread,
Dec 26, 2010, 8:48:19 PM12/26/10
to

Truly bizarre. He does, right there in the material you quoted: he
writes "the constant polynomial r = Sigma ai x^i with a0 = r and ai = 0
for i> 0", having earlier written "we write the sequence (a0, a1, a2,
...) as [something else or] Sigma_{i = 0}^n ai x^i." Are you really
incapable of putting those two statements together?


> You are grovelling here. Also the above never considers
> a real value r or a.

Of course not, since R in the quoted material denotes an arbitrary ring.


> It is upon assigning a real coefficient that the trouble begins, and
> so your own defense is not even present above, though it reads nicely
> enough. No, you and Arturo have placed a real value a in brackets
> / a /
> to explicitly turn it into a polynomial, which it is not when it
> stands freely without these brackets. Nowhere in the above discussion
> do I see any of this content, and really your poor argumentation here
> is proof either of your own lack of understanding of the position that
> I take, or evidence that you see my criticism and have no actual
> defense. For the sake of understanding I again present a short claim
> of a conflict within abstract algebra:
> The polynomial with real coefficients cannot be ring behaved, for
> if it were then we would allow
> a0 + a1 X + a2 X X = a3 .

If a3 is supposed to be an element of R in the above, then no, we
wouldn't allow that. The left hand side is an element of R[X], but not of R.
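
To make that concrete with the poly class from earlier in the thread
(X = poly([0, 1]) as in the interpreter session; purely illustrative):

>>> a0, a1, a2 = 1, 2, 3
>>> isinstance(a0 + a1*X + a2*X*X, poly)
True
>>> isinstance(a0 + a1*X + a2*X*X, int)
False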

It doesn't merely "come close", it is *exactly the same*.


> though it is not
> used in the general polynomial expression.

There is no reason why it should be.


> It seems as if your own
> statement above is to say that I have been correct all along, and that
> modern abstract algebra interpretations which overlook this grimy
> detail are getting a bit too glossy. Anyway, we do not see any actual
> discussion of the problem of potential conflict with the ring
> operators, which would be to admit that
> c0 + c1 X + c2 XX
> where c(n) are complex and X is not complex cannot be ring behaved.

Yes we do, you just keep pretending it isn't there.


> No
> doubt your author goes on to use this standard notation, for if he did
> not then you would have quoted that portion of his discussion.

Yes, of course he does. This makes no difference.


> Your arguments here are weak, though on the surface they appear very
> strong. The discussion for me will always come back to the simple
> operation
> a1 X

That isn't an "operation", as people have pointed out over and over
again. You keep simply ignoring this fact.


> where a1 is of instantiable quality, such as
> 1.234 X
> such that we have a real value in product with a value that is not
> real

No, there is no such product, as people have pointed out over and over
again. You keep simply ignoring this fact, and repeating the above
falsehood.


> and is hardly defined at all,

You have been given the definition of X over and over again. You keep
simply ignoring it, and claiming that X is undefined.


> other than as a dimensional
> construct, which does correctly align with the tuple usage, which
> would then be consistent with building low dimensional structures from
> infinite dimensional ones.
>

> [...]


>
>>
>> Anyway, I would still appreciate an answer to the following questions
>> from my last post:
>>
>>>> Do you notice that the above definitions make no reference at all to
>>>> anything of the form a1 X? And that any objection to the above
>>>> construction that refers to something of the form a1 X therefore
>>>> makes no sense whatsoever?
>
> Here again Rotwang is very weak argumentation. You have severed your
> own argument from the polynomial representation and so claim the tuple
> form is a clean resolution,

What you call the "tuple form" is the *definition* of a polynomial, as
demonstrated by the above two quotes from standard sources. I didn't
cherry-pick those sources; like I said, they are the only two relevant
textbooks to which I have access. You keep pretending that the
definition of R[X] is something else and then complaining that that
something else is flawed. If the *actual* definition of the polynomial
ring were flawed then you could point out the flaw. But it isn't, so you
can't, so instead you waffle endlessly about a ridiculous strawman of
your own creation.

Anyway, it's clear that you're no more willing to actually listen to
what people who know what they're talking about have to tell you than
you were last year, so I'm giving up. I'll leave you with this link,
which succinctly explains the problems you're having with the polynomial
ring definition:

http://www.youtube.com/watch?v=VqaPB3zcuEI

Brian Chandler

unread,
Dec 26, 2010, 11:58:53 PM12/26/10
to
Tim Golden BandTech.com wrote:
> On Dec 23, 5:16 pm, master1729 <tommy1...@gmail.com> wrote:
> > timothy golden wrote :
> >
> >
> >
> > > I am sorry, but my fundamental complaint is that the
> > > product
> > >    a1 X
> > > is not a ring product, because a1 and X are not in
> > > the same set.
> > > a1 is real, while X is not real.
> >
> > then pi * i is invalid too ??
>
> If the operator * as you use it above cannot be a ring operator, for
> if it were then these two constant values would resolve to a singular
> value.

General comment: your posts are generally much too long, so I doubt if
anyone bothers to read them in full. If you want people to read what
you write, check that you are writing (at least!) grammatical sense.
Do this carefully on the first paragraph or two for a start.

I suppose you are claiming that this product pi * i cannot be a
product in a ring? Well, you're wrong.

For the product of pi and i to be a ring product, pi and i have to be
members of the ring set, which cannot therefore be the integers
(neither is a member), the irrationals (i is not a member), nor the
Gaussian integers (pi is not a member), but certainly can be the
complex numbers (both i and pi are members), or an unlimited number of
other things (obviously beyond you at this stage, but e.g. the
polynomials in pi over the Gaussian integers, to give an example which
does not have the complex numbers as a subset).

So do you disagree with any of these statements:

(1) The complex numbers form a ring, under normal complex addition (+)
and product (*)

(2) pi and i are both elements of the complex numbers

(3) The product of pi and i (normally written outside the ASCII
character set, but I'll write ipi (or "eye-pie")) is an element of the
complex numbers.

Of course I don't understand the latter part of your complaint:


> if it were then these two constant values would resolve to a singular
> value.

"Resolve"? "Singular value"? Can you supply definitions for these
terms?

i is a constant (in the context of a ring, _everything_ is a
constant); pi is a constant; eye-pie is a constant, isn't it? This is
_exactly_ the behaviour of a ring: two elements multiplied together
give another element of the ring.
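
To make the closure point concrete, a small illustration using Python's
built-in complex type (nothing more than an illustration):

import math

z = math.pi * 1j                      # "eye-pie", the product of pi and i
print isinstance(z, complex)          # True: the product lands back in the same set
print isinstance(z + z, complex)      # True: so does any further sum ...
print isinstance(z * z, complex)      # ... or product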

Brian Chandler

Tim Golden BandTech.com

unread,
Dec 27, 2010, 9:34:17 AM12/27/10
to

Right, and when this R becomes the real numbers then the conflict is
exposed.
It is the idea that a real value a1 in product with X is not a ring
operation that I have always been focused on. You are not addressing
the fundamental complaint. Mathematicians tend to avoid instantiation,
and often when their constructions do become instantiated what is
actually present is far less than the language seems to present. Your
own denial throughout this discussion is evidence. Especially when you
provide a fix for the problem, then you should concede the conflict.

>
> > It is upon assigning a real coefficient that the trouble begins, and
> > so your own defense is not even present above, though it reads nicely
> > enough. No, you and Arturo have placed a real value a in brackets
> >     / a /
> > to explicitly turn it into a polynomial, which it is not when it
> > stands freely without these brackets. Nowhere in the above discussion
> > do I see any of this content, and really your poor argumentation here
> > is proof either of you own lack of understanding of the position that
> > I take, or evidence that you see my criticism and have no actual
> > defense. For the sake of understanding I again present a short claim
> > of a conflict within abstract algebra:
> >     The polynomial  with real coefficients cannot be ring behaved, for
> > if it were then we would allow
> >     a0 + a1 X + a2 X X = a3 .
>
> If a3 is supposed to be an element of R in the above, then no, we
> wouldn't allow that. The left hand side is an element of R[X], but not of R.

Here you have a version of the conflict, one which is exactly at odds
with the ring definition. The ring definition states that combinations
of products and sums will resolve to a singular element, and further
that those source elements are in the same set. What I have written
above is consistent with the ring definition, but is regarded as
invalid.

You see, the products and sums of the polynomial must be maintained in
order for the polynomial to hold up. This is inherently an anti-ring
concept. That the result is a multidimensional interpretation is very
clear, but then this context likewise carries a round of whack-a-mole
attacks, which will land us with you insisting that the expression
above carry an infinite number of zero terms. I'd just as soon dodge
that ridiculous discussion, which Arturo and I have already covered.

Your poor logic here is exposed, and I've already gone into it a bit
further up.
You already introduced the usage of
/a/
as a means of fixing the conflict. You now claim that there is no
reason why this notation should be used. This is just as when you
claimed that there were a product and sum but that they weren't really
a product and sum.

It's pretty clear that you will carry on in defense of the standard
abstract algebra and that I will continue my criticisms. The utility
of this conversation is that you are a new person to debate the topic
with, and in the small variations that come about in the arguments
that we provide.

I don't believe that this is so much a personal issue as Arturo would
make it. Still we are people discussing a topic. You are on the
mainstream side and I am somewhat an underdog. Still, such discussions
should nail down clean falsifications. I have caught too many
self-contradictions from your presentation to bother proceeding with you.
Whether these contradictions indicate a weakness of the subject or a
weakness of the person is blurred, but I do believe that you are a
strong thinker, who has attached to a subject that is not so strong.
When will people such as yourself give themselves enough credit to
reject the en masse accepted mathematics? Without full scrutiny and
self empowered judgement we will propagate misinformation as easily as
we will propagate good information.

Here you have legitimated my criticisms, all the while denying their
validity, and now have gone to quoting prior works in a half way,
claiming to have their full way. I don't mean you any personal harm,
and appreciate that you have managed to keep the content flowing.
Thank you very much Rotwang for having this discussion. You have
managed some small variations on the prior conversation, and I admit
that the complaints which I have are slim, yet the contradictions that
they pose fly in the face of the basis that the subject claims to
compose itself from. It does not help that the concepts have
tremendous familiarity, so that one can nearly breeze over the ring
definition without ever looking back. Then the familiar polynomial
with a new twist, but still familiar, whose products and sums do not
evaluate. No student will be free to stop there, and as they continue
to have this shit shovelled down their throat at or beyond their
capacity to absorb it, mimicry is the only way through. False belief
systems do propagate, regardless of the raw intelligence of the
members, especially under a formally enforced system.

- Tim

Marshall

unread,
Dec 27, 2010, 10:18:38 AM12/27/10
to
On Dec 27, 6:34 am, "Tim Golden BandTech.com" <tttppp...@yahoo.com>
wrote:
>

> You are on the mainstream side and I am somewhat an underdog.

The word is "crank."


> When will people such as yourself give themselves enough credit
> to reject the en masse accepted mathematics?

You mean, when will they become cranks, abandon math, and
start making stuff up.


> Without full scrutiny and self empowered judgement
> we will propagate misinformation as easily as
> we will propagate good information.

By "self-empowered judgment" evidently you mean
discounting any and all input.


> Here you have legitimated my criticisms, all the while
> denying their validity

So even when he says you're wrong what you hear
is that you're right.


Marshall

Bill Dubuque

unread,
Dec 28, 2010, 12:42:25 AM12/28/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> wrote:

> It is the idea that a real value a1 in product with X
> is not a ring operation that I have always been focused on.

Perhaps some analogies will help:

2*X means (2,0..)*(0,1,0..) in Z[X]
2*i means (2+0i) *(0+i) in C
2*(3/4) means (2/1)*(3/4) in Q
2*(3,4) means (2,2)*(3,4) in Z^2
2*(3+4Z) means (2+4Z)*(3+4Z) in Z/4Z = Z (mod 4)

In all cases the LHS is merely a convenient abuse of notation
for the fully-specified ring operation denoted by the RHS.
If this notational abuse bothers you then simply replace it
by its fully-specified form on the RHS.

--Bill Dubuque
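
A minimal Python sketch of the first analogy above (not from the
original post; encoding polynomials as coefficient tuples is one
convenient representation, assumed here for illustration):

    # Polynomials over Z as tuples of coefficients, constant term first;
    # the ring product is the usual convolution of coefficients.
    def poly_mul(p, q):
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return tuple(out)

    two = (2,)     # the constant polynomial 2, i.e. (2, 0, 0, ...)
    X = (0, 1)     # the polynomial X, i.e. (0, 1, 0, ...)
    print(poly_mul(two, X))   # (0, 2): the polynomial 2X, an element of Z[X]

Here both factors and the product live in the same set, which is the
point of reading 2*X as (2,0,..)*(0,1,0,..).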

Bill Dubuque

unread,
Dec 28, 2010, 1:05:32 AM12/28/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> wrote:
> On Dec 23, 2:01 am, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
>>
>> This is just the standard construction of R[X] - which I pointed out
>> to Tim in the 9'th message [1] in the prior thread on Jun 5 2009,
>> 8 hours after he posted questions on it.
>> [1] http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.mit.edu

>
> Thanks for the link back to that Bill:
> "Then X = (0,1,0,0,0...), and X^n is the sequence having 1 in the
> n'th place and 0 elsewhere; r = (r,0,0,0...) for constants r in R.
> Now the question "what is X?" has a clear and rigorous answer."
>
> If we were to ask for the construction of a polynomial, would it be
> fair to construct the polynomial from polynomials? This is essentially
> what you and Rotwang have done.

No the underlying set of R[X] is the set of eventually-null infinite
sequences of elements of R, i.e. a subset of R^N. Analogously the
underlying set of C is R^2, where (a,b) is the formal object that
corresponds to the informal notation a+bi, just as (a,b,0...)
corresponds to a+bx.

--Bill Dubuque
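
A minimal Python sketch of the analogous reduction (not from the
original post; the function name is invented for illustration): C with
underlying set R^2, where the product is defined directly on pairs.

    # Pairs (a, b) stand for a + bi; the product rule acts on pairs,
    # so nothing outside R^2 is ever needed.
    def c_mul(z, w):
        a, b = z
        c, d = w
        return (a * c - b * d, a * d + b * c)

    print(c_mul((0.0, 1.0), (0.0, 1.0)))   # (-1.0, 0.0), i.e. i*i = -1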

Tim Golden BandTech.com

unread,
Dec 28, 2010, 8:36:07 AM12/28/10
to
On Dec 28, 12:42 am, Bill Dubuque <w...@shaggy.csail.mit.edu> wrote:

Thank you Bill. There are extensions to what you have written here,
but I'll spare those details, for this much of an admission is about
as much as I can hope for.
So it was true that you were admitting an ambiguity; admittedly a
fairly slender one, but one which goes directly against the grain of
the ring definition.

- Tim

Tim Golden BandTech.com

unread,
Dec 28, 2010, 8:49:30 AM12/28/10
to
On Dec 28, 1:05 am, Bill Dubuque <w...@shaggy.csail.mit.edu> wrote:

> "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
>
> > On Dec 23, 2:01 am, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
>
> >> This is just the standard construction of R[X] - which I pointed out
> >> to Tim in the 9'th message [1] in the prior thread on Jun 5 2009,
> >> 8 hours after he posted questions on it.
> >> [1]http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.m...

>
> > Thanks for the link back to that Bill:
> >  "Then  X = (0,1,0,0,0...), and X^n is the sequence having 1 in the
> >   n'th place and 0 elsewhere; r = (r,0,0,0...) for constants r in R.
> >   Now the question "what is X?" has a clear and rigorous answer."
>
> > If we were to ask for the construction of a polynomial, would it be
> > fair to construct the polynomial from polynomials? This is essentially
> > what you and Rotwang have done.
>
> No the underlying set of R[X] is the set of eventually-null infinite
> sequences of elements of R, i.e. a subset of R^N. Analogously the
> underlying set of C is R^2, where (a,b) is the formal object that
> corresponds to the informal notation a+bi, just as (a,b,0...)
> corresponds to a+bx.
>
> --Bill Dubuque

Under the guise of abstract algebra's construction of the complex
field we will see that the complex value
a + b i
maps to
( a, b, 0, 0, 0, ... ),
( a/2, b/2, a/2, b/2, 0, 0, 0, ... ),
( a/3, b/3, a/3, b/3, a/3, b/3, 0, 0, 0, ... ),
...
and many more possible values. I don't know why I am bringing this up,
because I would like to be done with this topic for now, but you've
gone into the complex number as some sort of distraction from building
the polynomial from the polynomial, so that I return your volley with
another branch of argumentation. As you fail to mention the
construction of the polynomial and instead go on to an argument on the
complex numbers, I will wrap around to a criticism of the AA complex
number form (built from the reals, though to Bill here these are not
reals but are instead

"eventually-null infinite sequences of elements"

whose type remain undisclosed, though the ordinary sequence of events
is to later set those elements to real values). As to what is
elemental, well, the series of elements is elemental, isn't it? I have
to get a chuckle in here somehow, and I know that Bill will handle
this razzing much more strongly than Arturo will.

Thanks Bill for your feedback. I am pushing the envelope above but
believe that I can defend by what I've written. The result is a
ghostly image of abstract algebra.

- Tim

Bill Dubuque

unread,
Dec 28, 2010, 2:00:15 PM12/28/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> wrote:
> On Dec 28, 12:42 am, Bill Dubuque <w...@shaggy.csail.mit.edu> wrote:
>> "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
>>>
>>> It is the idea that a real value a1 in product with X
>>> is not a ring operation that I have always been focused on.
>>
>> Perhaps some analogies will help:
>>
>> 2*X means (2,0..)*(0,1,0..) in Z[X]
>> 2*i means (2+0i) *(0+i) in C
>> 2*(3/4) means (2/1)*(3/4) in Q
>> 2*(3,4) means (2,2)*(3,4) in Z^2
>> 2*(3+4Z) means (2+4Z)*(3+4Z) in Z/4Z = Z (mod 4)
>>
>> In all cases the LHS is merely a convenient abuse of notation
>> for the fully-specified ring operation denoted by the RHS.
>> If this notational abuse bothers you then simply replace it
>> by its fully-specified form on the RHS.
>
> Thank you Bill. There are extensions to what you have written here,
> but I'll spare those details, for this much of an admission is about
> as much as I can hope for.
> So it was true that you were admitting an ambiguity; admittedly a
> fairly slender one, but one which goes directly against the grain of
> the ring definition.

There is no "ambiguity". Given the obvious implicit context, every
competent mathematician correctly (subconsciously!) parses those
notations on the LHS to their intended RHS interpretations.

As I said in prior posts, if you don't like the overloaded notation
on the LHS then simply used the non-overloaded form on the RHS.
Your failure to correctly comprehend overloaded notation does not
amount to any ill-definition of the underlying algebraic structures.

--Bill Dubuque

Bill Dubuque

unread,
Dec 28, 2010, 2:05:57 PM12/28/10
to
"Tim Golden BandTech.com" <tttp...@yahoo.com> wrote:
> On Dec 28, 1:05 am, Bill Dubuque <w...@shaggy.csail.mit.edu> wrote:
>> "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
>>> On Dec 23, 2:01 am, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
>>
>>>> This is just the standard construction of R[X] - which I pointed out
>>>> to Tim in the 9'th message [1] in the prior thread on Jun 5 2009,
>>>> 8 hours after he posted questions on it.
>>>> [1]http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.mit.edu

>>
>>> Thanks for the link back to that Bill:
>>>  "Then  X = (0,1,0,0,0...), and X^n is the sequence having 1 in the
>>>   n'th place and 0 elsewhere; r = (r,0,0,0...) for constants r in R.
>>>   Now the question "what is X?" has a clear and rigorous answer."
>>
>>> If we were to ask for the construction of a polynomial, would it be
>>> fair to construct the polynomial from polynomials? This is essentially
>>> what you and Rotwang have done.
>>
>> No the underlying set of R[X] is the set of eventually-null infinite
>> sequences of elements of R, i.e. a subset of R^N. Analogously the
>> underlying set of C is R^2, where (a,b) is the formal object that
>> corresponds to the informal notation a+bi, just as (a,b,0...)
>> corresponds to a+bx.
>
> Under the guise of abstract algebra's construction of the complex
> field we will see that the complex value
> a + b i
> maps to
> ( a, b, 0, 0, 0, ... ),
> ( a/2, b/2, a/2, b/2, 0, 0, 0, ... ),
> ( a/3, b/3, a/3, b/3, a/3, b/3, 0, 0, 0, ... ),
> ...
> and many more possible values.

That makes no sense. In the standard construction that I mentioned,
the underlying set of C is R^2, i.e. pairs of reals. Even more simply
you could consider the ring of Gaussian integers Z[i] = m + n i
whose underlying set is Z^2, i.e. pairs of integers.

> As you fail to mention the construction of the polynomial

I mentioned the same standard construction that I did when you posed
your question years ago. Your failure to comprehend this construction
does not imply a failure to mention it.

> then go on to an argument on the complex numbers

I mentioned them because I thought that perhaps a simpler analogous
set-theoretical reduction might help you to understand precisely how
set-theoretical constructions serve to provide rigorous definitions.

> so I will wrap around to a criticism of the complex number form

> (built from the reals, though to Bill here these are not reals
> but are instead "eventually-null infinite sequences of elements"

No, you're confusing the standard polynomial construction with that
for complex numbers.

> whose type remain undisclosed

Perhaps you confuse set-theory with a strongly-typed programming language?

> though the ordinary sequence of events is to later set those elements
> to real values.

"later set those elements to real values" has no mathematical meaning.

> As to what is elemental, well, the series of elements is elemental isn't it?

The word "elemental" has no mathematical meaning.

You seem to lack any firm understanding of what set theory is and how
it serves to provide a rigorous foundation for mathematics. In particular
you seem to be confusing set theoretical constructions with constructions
in some programming language. Perhaps you might find it helpful to study
the foundations of mathematics.

--Bill Dubuque

master1729

unread,
Dec 28, 2010, 4:59:43 PM12/28/10
to
Brian Chandler wrote :

well said.

Tim apparently isn't convinced that two elements multiplied together give another element of the ring.

even though 'his' 'polysigned' (group ring) has this property too.

informally written

P3 => f(a,bX,cY) mod (X + Y + 1) mod (X^2 - Y) mod (Y^2 - X) mod (XY - 1)

regards

tommy1729

master1729

unread,
Dec 28, 2010, 5:09:23 PM12/28/10
to
Timothy Golden wrote :

that makes no sense.

you need to reduce everything and then it's just (a,b).

in polynomial form

poly(a + bi) => poly(a + bX) mod (X^2 + 1)

which generalizes to general functions of a complex number :

f(a + bi) => f(a + bX) mod (X^2 + 1)


tommy1729
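
A minimal Python sketch of this reduction (not from the original post;
the helper names are invented for illustration): multiply a + bX as a
polynomial, reduce the result mod X^2 + 1, and compare with ordinary
complex arithmetic.

    # Coefficient lists are constant-term first.
    def poly_mul(p, q):
        out = [0] * (len(p) + len(q) - 1)
        for i, x in enumerate(p):
            for j, y in enumerate(q):
                out[i + j] += x * y
        return out

    def rem_mod_x2_plus_1(p):
        # Polynomial long division by X^2 + 1: fold the leading term
        # down via X^n = -X^(n-2) until only degree < 2 remains.
        p = list(p)
        while len(p) > 2:
            c = p.pop()
            p[-2] -= c
        return p

    a, b = 3, 4
    print(rem_mod_x2_plus_1(poly_mul([a, b], [a, b])))   # [-7, 24]
    print(complex(a, b) ** 2)                            # (-7+24j)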

master1729

unread,
Dec 28, 2010, 5:15:54 PM12/28/10
to
i wrote :

>
> f(a + bi) => f(a + bX) mod (X^2 + 1)
>

typical for a number theory fanatic not :)

tommy1729

Tim Golden BandTech.com

unread,
Dec 28, 2010, 6:08:41 PM12/28/10
to
On Dec 28, 2:05 pm, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
> "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
>
>
>
> > On Dec 28, 1:05 am, Bill Dubuque <w...@shaggy.csail.mit.edu> wrote:
> >> "Tim Golden BandTech.com" <tttppp...@yahoo.com> wrote:
> >>> On Dec 23, 2:01 am, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
>
> >>>> This is just the standard construction of R[X] - which I pointed out
> >>>> to Tim in the 9'th message [1] in the prior thread on Jun 5 2009,
> >>>> 8 hours after he posted questions on it.
> >>>> [1]http://groups.google.com/groups?selm=y8zbpp29yqb.fsf%40nestle.csail.m...

>
> >>> Thanks for the link back to that Bill:
> >>>  "Then  X = (0,1,0,0,0...), and X^n is the sequence having 1 in the
> >>>   n'th place and 0 elsewhere; r = (r,0,0,0...) for constants r in R.
> >>>   Now the question "what is X?" has a clear and rigorous answer."
>
> >>> If we were to ask for the construction of a polynomial, would it be
> >>> fair to construct the polynomial from polynomials? This is essentially
> >>> what you and Rotwang have done.
>
> >> No the underlying set of R[X] is the set of eventually-null infinite
> >> sequences of elements of R, i.e. a subset of R^N. Analogously the
> >> underlying set of C is R^2, where (a,b) is the formal object that
> >> corresponds to the informal notation a+bi, just as (a,b,0...)
> >> corresponds to a+bx.
>
> > Under the guise of abstract algebra's construction of the complex
> > field we will see that the complex value
> >    a + b i
> > maps to
> >    ( a, b, 0, 0, 0, ... ),
> >    ( a/2, b/2, a/2, b/2, 0, 0, 0, ... ),
> >    ( a/3, b/3, a/3, b/3, a/3, b/3, 0, 0, 0, ... ),
> >    ...
> > and many more possible values.
Sorry that should have read

( a, b, 0, 0, 0, ... ),
( a/2, b/2, - a/2, - b/2, 0, 0, 0, ... ),
( a/3, b/3, - a/3, - b/3, a/3, b/3, 0, 0, 0, ... ),
...

> That makes no sense. In the standard construction that I mentioned,
> the underlying set of C is R^2, i.e. pairs of reals. Even more simply
> you could consider the ring of Gaussian integers  Z[i] = m + n i
> whose underlying set is Z^2, i.e. pairs of integers.

No, I am discussing the quotient/ideal construction of the complex
number as
R[X] / ( X X + 1 )
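
A minimal Python check (not from the original post; the function name
and the concrete values a = 6, b = 12 are chosen only for illustration):
evaluating each of the sequences listed above as a polynomial at X = i,
where X^2 + 1 vanishes, sends them all to the same complex value, i.e.
they lie in the same coset of the ideal ( X X + 1 ).

    # Coefficient lists are constant-term first; i is Python's 1j.
    def eval_at_i(coeffs):
        return sum(c * (1j ** k) for k, c in enumerate(coeffs))

    print(eval_at_i([6, 12]))               # (6+12j)
    print(eval_at_i([3, 6, -3, -6]))        # (6+12j)
    print(eval_at_i([2, 4, -2, -4, 2, 4]))  # (6+12j)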

>
> > As you fail to mention the construction of the polynomial
>
> I mentioned the same standard construction that I did when you posed
> your question years ago. Your failure to comprehend this construction
> does not imply a failure to mention it.

Come on Bill, you know this math is weak. Look at the reliance on an
infinite progression. It does not work with a limit n, yet it could.
AA relies on an infinite dimensional construction to recover the
simplest things, such as the complex numbers. The structure which you
rely upon
( a0, a1, a2, ..., an, 0, 0, 0, ... )
is R^inf; not even R^n since products will not work out. Had they
developed the modulo space then this subject would be in stronger
territory, for then we could have low n, and no need for an infinite
length sequence.

>
> > then go on to an argument on the complex numbers
>
> I mentioned them because I thought that perhaps a simpler analogous
> set-theoretical reduction might help you to understand precisely how
> set-theoretical constructions serve to provide rigorous definitions.

Quite so, and the ring definition carries strong set requirements.
The very phrase
"polynomials with real coefficients"
is a flawed statement by the definition of ring. Likewise the
construction
a + b i
cannot be ring behaved, for b and i are not in the same set. What you
call rigorous in one moment you've already admitted is less than
rigorous in another moment. That a one-dimensional set can be confused
for an infinite-dimensional set within abstract algebra is hardly a
point that can be so easily overlooked, yet mathematicians have been
doing so for quite some time.

I agree with your comments elsewhere that I am from a software
typology system, and that the ring definition is compatible with that
software context, but that the standard usage in the polynomial
construction is not compatible with that context.

You are wavering in and out Bill, just as Rotwang and Arturo have
been.

>
> > so I will wrap around to a criticism of the complex number form
> > (built from the reals, though to Bill here these are not reals
> > but are instead "eventually-null infinite sequences of elements"
>
> No, you're confusing the standard polynomial construction with that
> for complex numbers.
>
> > whose type remain undisclosed
>
> Perhaps you confuse set-theory with a strongly-typed programming language?
>
> > though the ordinary sequence of events is to later set those elements
> > to real values.
>
> "later set those elements to real values" has no mathematical meaning.
>
> > As to what is elemental, well, the series of elements is elemental isn't it?
>
> The word "elemental" has no mathematical meaning.

Jeeze, you're really being a dick here. It is obvious that the word
elemental pertains to elements, as in elements of a set. This forms a
fine attack on AA, where we see the ring operation requires that
a0, a1, X, X X, etc.
are all elements, by the ring definition whose operators ought to be
applied to these expressions, whereas for you what is now elemental is
( a0, a1, a2, a3, a4, ... , an, 0, 0, 0, ... )
due to my own criticism.

"later set those elements to real values" refers to the discrepancy
between the abstract form of a ring R versus the ring of real numbers.
For instance within the polynomial theory the often discussed (though
invalid) 'polynomials with real coefficients' have inherently set the
abstracted ring R to a specific type. I find it difficult to believe
that my own words could be construed as anything other than this, and
so I feel comfortable calling you a dick here. As to what is
meaningful I find your own confession of abuse of notation to be
worthy, and now you seem to backpedal away from that again, and so
you are wavering in and out; why did you state that the notation is
abusive? This forms a crux, and exposes a contamination of the ring
requirements, which explains how you all flock to the tuple format and
have divorced from the polynomial format. The two are interchangeable
within this subject as far as I can tell, and the fact that you and
others head away from the polynomial form is admission enough.

- Tim

Tim Golden BandTech.com

unread,
Dec 28, 2010, 6:36:33 PM12/28/10
to

No Tommy. The elements a and b are real-valued. The value i is not
real-valued. That i is in product with b, and this product is not
compatible with the ring definition, which requires that its elements
belong to the same set, and that the result likewise belongs in that
same set.

Another way to state this is simply that the expression
a + b i
does not resolve to a single element, which is as it should be, but is
in conflict with the ring definition. The above expression contains
three elements, but does not resolve to a single element c. It claims
to be that single element even while it is composed of three elements,
or two if you use the tuple notation.

Is R X R (the cartesian product) two elements or one element? Clearly
the ring definition believes it to be two elements, and so when these
jokers go to an infinite dimensional notation the cost of their
brilliance ought to be weighed. Abstract Algebra and its constructions
stink, and I seem to be the only one who smells it.

In terms of physical correspondence, the notion of product as
symmetrical to sum with regard to this set-theoretic stricture is not
believable. For instance Ohm's law states that
V = I Z
but we certainly should not confuse Volts for Amperes. This same
distinction must be made for geometry as well, so that when we
multiply
( 5 meters )( 5 meters )
we do not yield 25 meters. We instead yield 25 square meters, which
are in a distinct set from the source figures. This consideration can
wander off onto the assumption of orthogonality and so forth, but
regardless the thinking would pose the ring definition as askance to
physical correspondence, even while quantum physics carries ties to
group theory.

- Tim
