On Wed, Jun 12, 2002 at 06:32:04PM -0700, Hal Finney wrote:
> For example, if I am offered a choice between running a very pleasant
> experience twice identically, or running the very pleasant experience
> once and a slightly less pleasant experience once, which should I pick?
> On the one hand, the average pleasantness is higher in the first
> alternative. On the other hand, I have two different experiences in
> the second alternative. I don't know which is better, so I don't have
> an initial subjective preference between them.
Earlier (at http://www.lucifer.com/exi-lists/extropians/2612.html) I
argued that preference between the two choices is subjective (i.e. depends
on your utility function). I now realize this implies that the
self-sampling assumption (or SSA, the idea that you should reason as if
you were a random sample from the set of all observers, see
http://www.anthropic-principle.com/index.html for more details) cannot be
applied universally, because it implies that only choosing the two
identical experiences is rational.
Here's a demonstration of this. Suppose you've agreed to participate in
the following experiment. First you're copied. The original will observe
while the copy (named A1) is told the following. A1 will be copied into
A2, B1 and B2. All four will be run on separate and identical computers.
A1 and A2 will be shown a number equal to the millionth bit in the binary
expansion of PI. B1 and B2 will both be shown a number equal to 1 minus
that bit. All four will be asked to guess the millionth bit of PI. (Assume
you have no idea what the millionth bit is.) If A1 guesses correctly, it
will experience a very pleasant experience (call this experience E1). Same
applies for B1 and B2, each of whom will also have E1 if he guesses
correctly. If A2 guesses correctly however, he will experience a slightly
less pleasant experience E2. If anyone guesses incorrectly, he's halted
immediately. In any event all four copies are halted at the end of the
experiment. (The setup can be changed so that the four runs are done
sequentially instead of in parallel. I don't think that affects my
argument at all.)
Now put yourself in the position of A1 before he's been further copied,
trying to devise a strategy for guessing the millionth bit of PI. Let's
call that bit X and the number you'll be shown Y, and consider the two
strategies A) guess Y, and B) guess 1-Y. It should be obvious at this
point that if you prefer to have two identical very pleasant experiences
you'll select strategy B, and if you prefer to have one very pleasant
experience and one slightly less pleasant experience you'll select
strategy A. However according to the SSA only strategy B is rational.
Here's how I would analyze the situation given the SSA. After being shown
Y, there's 1/4 probability that I'm A1, 1/4 probability that I'm A2, 1/4
probability that I'm B1, and 1/4 probability that I'm B2. So if I guess Y,
there's 1/4 probability that I cause a copy of me to experience E1 and 1/4
probability that I cause a copy of me to experience E2, therefore my
expected utility is U(A) = 1/4*U(E1) + 1/4*U(E2). If I guess 1-Y instead,
there's 1/4 probability that I cause a copy of me to experience E1 and
another 1/4 probability that I cause a copy of me to experience E1, so my
expected utility is U(B) = 1/2*U(E1). Since U(E1) > U(E2), U(B) > U(A).
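The SSA arithmetic above can be checked in a few lines of Python. This is only a sketch: the numeric utility values are illustrative assumptions, not anything given in the thought experiment.

```python
# SSA analysis: after seeing Y, I am A1, A2, B1, or B2 with probability 1/4 each.
# Guessing Y is correct exactly when I am A1 or A2; guessing 1-Y, when I am B1 or B2.
U_E1 = 10.0  # utility of the very pleasant experience E1 (illustrative)
U_E2 = 8.0   # utility of the slightly less pleasant experience E2 (illustrative)

# Strategy A (guess Y): as A1 I cause E1 (prob 1/4), as A2 I cause E2 (prob 1/4).
U_A = 0.25 * U_E1 + 0.25 * U_E2

# Strategy B (guess 1-Y): as B1 I cause E1 (prob 1/4), as B2 I cause E1 (prob 1/4).
U_B = 0.25 * U_E1 + 0.25 * U_E1  # = 1/2 * U(E1)

# Whenever U(E1) > U(E2), the SSA calculation favors strategy B.
assert U_E1 > U_E2 and U_B > U_A
```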
Here's my proposed non-SSA way of analyzing the situation. After being
shown Y, I consider myself to be A1, A2, B1, and B2 "simultaneously". If I
guess 1-Y, there's probability of 1 that I cause two copies of me to
experience E1 (call this {E1,E1}). If I guess Y, there's probability of 1
that I cause one copy of me to experience E1 and one copy of me to
experience E2 (call this {E1,E2}). Now which strategy I should choose
depends on whether U({E1,E1}) > U({E1,E2}), which can be independent of
whether U(E1) > U(E2).
So my position is that rather than being a principle of correct reasoning,
the status of the SSA should be reduced to that of an approximation useful
when one's utility function is close to satisfying certain constraints
(for example U({E1,E2})=U(E1)+U(E2) for all E1, E2). More general
principle(s) need to be worked out that subsumes the SSA as a special
case.
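One way to make the non-SSA analysis concrete is to let utility be a function of the whole multiset of resulting experiences rather than a sum over copies. In the sketch below (all numbers are illustrative assumptions), a "diversity bonus" makes U({E1,E2}) exceed U({E1,E1}) even though U(E1) > U(E2), while a zero bonus recovers the additive constraint U({E1,E2}) = U(E1) + U(E2) under which the SSA's recommendation is correct:

```python
def additive_U(experiences, U):
    # The constraint U({X,Y}) = U(X) + U(Y): the SSA-compatible special case.
    return sum(U[e] for e in experiences)

def diversity_U(experiences, U, bonus=3.0):
    # Illustrative non-additive utility: all-distinct experiences earn a bonus.
    base = sum(U[e] for e in experiences)
    if len(set(experiences)) == len(experiences):
        base += bonus
    return base

U = {"E1": 10.0, "E2": 8.0}  # U(E1) > U(E2); values are assumptions

# Strategy B yields {E1, E1}; strategy A yields {E1, E2}.
# Under additivity, B wins (matching the SSA analysis):
assert additive_U(["E1", "E1"], U) > additive_U(["E1", "E2"], U)
# With a taste for diversity, A can rationally win instead:
assert diversity_U(["E1", "E2"], U) > diversity_U(["E1", "E1"], U)
```

The point of the sketch is only that the comparison U({E1,E1}) vs. U({E1,E2}) can come apart from the comparison U(E1) vs. U(E2) once additivity is dropped.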
What about this variant on the experiment (the full experiment is below).
Instead of B1 and B2 both getting E1, let B1 get E1 and B2 get E1'.
E1' is another experience than E1 that is just about as good.
U(E1) > U(E2) and U(E1') > U(E2). The idea is that this eliminates
possible issues regarding whether two people (B1 & B2) who get exactly
the same experience should count twice.
> Now which strategy I should choose
> depends on whether U({E1,E1}) > U({E1,E2}), which can be independent of
> whether U(E1) > U(E2).
We can change this to whether U({E1,E1'}) > U({E1,E2}) in the modified
form.
It does seem that the SSA pretty much implies that if U(E1') > U(E2) then
U({E1,E1'}) > U({E1,E2}). Is it really rational for this to be otherwise?
We know that rationality puts some constraints on the utility function.
We can't have cyclicity in the utility preference graph, for example.
But in the case above, where U({X,Y}) means the utility of having two
different independent experiences X and Y, maybe it does follow that
U({X,Y}) and U({X,Z}) must compare the same as U(Y) and U(Z). You don't
have any choice but to accept the equivalence. As Lewis Carroll wrote,
"Then Logic would take you by the throat, and FORCE you to do it!"
(http://www.mathacademy.com/pr/prime/articles/carroll/index.asp)
Hal
I find it very hard to see how U({E1,E2}) is anything other than
p(E1)*U(E1)+p(E2)*U(E2) in this sort of experiment.
In the leadup to the discussion, Wei was suggesting that having two
different experiences may be better than repeating the same
experience. Surely this can only be true if you get to keep the first
experience when you experience the second, a situation that is false
in the current setup, since E2 is only experienced if you haven't
experienced E1.
Keep trying, but at this stage the argument against the SSA is not
compelling.
Cheers
----------------------------------------------------------------------------
A/Prof Russell Standish Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax 9385 6965, 0425 253119 (")
Australia R.Sta...@unsw.edu.au
Room 2075, Red Centre http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02
----------------------------------------------------------------------------
The issue of whether substitution effects can apply to experiences of
copies is of independent interest, so my original response still has a
point.
On Fri, Jun 14, 2002 at 07:42:27PM -0700, Hal Finney wrote:
> What about this variant on the experiment (the full experiment is below).
> Instead of B1 and B2 both getting E1, let B1 get E1 and B2 get E1'.
> E1' is another experience than E1 that is just about as good.
> U(E1) > U(E2) and U(E1') > U(E2). The idea is that this eliminates
> possible issues regarding whether two people (B1 & B2) who get exactly
> the same experience should count twice.
I think in that case it's still possible for U({E1,E1'}) < U({E1,E2}), if
for example E1 and E1' are very similar.
> It does seem that the SSA pretty much implies that if U(E1') > U(E2) then
> U({E1,E1'}) > U({E1,E2}). Is it really rational for this to be otherwise?
Yes, I believe it can be. If you believe otherwise you have to convince me
why it's impossible to value diversity of experience in your copies, or
why having that value would lead to absurd consequences.
We all know the law of diminishing marginal utility, which says that the
marginal utility of a good decreases as more of that good is consumed, and
the existence of substitution effects, where the marginal utility of one
good decreases when another similar good is consumed. I suggest there is
no reason to assume that the value of experiences of one's copies cannot
exhibit similar cross-dependencies. Actually I think the reason that
we have diminishing marginal utility and substitution effects, namely that
they provide an evolutionary advantage, also applies to the value of
experiences of copies.
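The two economic effects invoked here can be sketched with a concave utility function. The square-root form and the similarity weight below are purely illustrative assumptions, chosen only to exhibit the shape of the argument:

```python
import math

def utility(apples):
    # Concave utility: each additional apple adds less than the last
    # (diminishing marginal utility).
    return math.sqrt(apples)

marginal = [utility(n + 1) - utility(n) for n in range(5)]
assert all(marginal[i] > marginal[i + 1] for i in range(len(marginal) - 1))

def utility_with_substitute(apples, pears, similarity=0.5):
    # Substitution effect: pears partially satisfy the same want,
    # lowering the marginal utility of further apples.
    return math.sqrt(apples + similarity * pears)

# The first apple is worth less at the margin once a pear has been consumed.
assert (utility_with_substitute(1, 1) - utility_with_substitute(0, 1)
        < utility(1) - utility(0))
```

The suggestion in the text is that the same cross-dependencies could hold between the experiences of one's copies, not just between goods consumed by one person.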
> We know that rationality puts some constraints on the utility function.
> We can't have cyclicity in the utility preference graph, for example.
Our normative theories of rationality (i.e. decision theories) do put
constraints on preferences, but the history of decision theory has been
one of recognizing and removing unnecessary constraints, so that it can be
used by wider classes of people. The earliest decision theories, for
example, were stated in terms of maximizing expected money payoffs rather
than expected utility, which implicitly assumes that utility is a linear
function of money. Today, of course, we recognize that utility can be any
function of money, even a decreasing one. Another example is the move from
objective probabilities to subjective probabilities.
> But in the case above, where U({X,Y}) means the utility of having two
> different independent experiences X and Y, maybe it does follow that
> U({X,Y}) and U({X,Z}) must compare the same as U(Y) and U(Z). You don't
> have any choice but to accept the equivalence. As Lewis Carroll wrote,
> "Then Logic would take you by the throat, and FORCE you to do it!"
> (http://www.mathacademy.com/pr/prime/articles/carroll/index.asp)
But remember that we choose the axioms. Logic doesn't tell us which
axioms to use.
Let me start with this. If U(E1) > U(E2), then would a rational person
have to pick E1 over E2? What if he were someone who were contrary?
Or someone who preferred lesser utility? I think we can rule these
cases out by properly defining utility. With the proper definition it
will always be the case that if U(E1) > U(E2), he picks E1.
Now consider a single-universe model. He can choose one of two
alternatives. In one alternative he is guaranteed to get E1, and in the
other alternative he has a 50-50 chance of getting E1 or E2. Is there
a rational way to prefer the second alternative? That is, can it be
better to have a chance of getting E2 rather than the certainty of E1?
I would like to rule this out for rational choosers, but I'm not 100%
sure. Some people seek risk, although a risk which has only a down side
still seems irrational.
Suppose we could be confident that choosing a certainty of E1 was always
better than a 50-50 chance of E1 or E2. Translating into a MWI model
we have something closer to the scenario Wei originally presented: a
choice between a state where the universe splits (or duplicates) and two
copies both get E1, and a state where one copy gets E1 and one gets E2.
The question is whether it would be rational to have different preferences
in the multiverse case than in the single-universe case.
There is an argument that there should be no differences, because the
information available in any sub-part of the multiverse is the same as in
the single universe case. In fact maybe we can never tell which theory
is correct, therefore the differences are entirely hypothetical. If
we accept this then what is irrational in the single universe case is
also irrational in the MWI.
The final scenario is to go from a multiverse setting to a case of actual
duplication as Wei presents. Instead of universes splitting, we have
people being duplicated. But whether the rest of the universe splits
or not should probably not affect our decision about which experiences
are best. So we should still get the same answer, and preferring E1+E2
is therefore irrational.
Granted this argument has a lot of steps and not all of them have
been fleshed out very well here. The common element is that the
E1/E2 experiences are mutually exclusive. In the first case they
are literally exclusive; in the second case they are in separate,
completely independent universes; and in Wei's original scenario they
are in separate and independent copies. It seems that in all cases
where we have completely independent and exclusive outcomes that utility
should be strictly additive. If there can be no interactions between the
experiences of E1 and E2, they might as well be in separate universes,
or they might as well be logically exclusive alternatives. Therefore
they can only add.
> We all know the law of diminishing marginal utility, which says that the
> marginal utility of a good decreases as more of that good is consumed, and
> the existence of substitution effects, where the marginal utility of one
> good decreases when another similar good is consumed. I suggest there is
> no reason to assume that the value of experiences of one's copies cannot
> exhibit similar cross-dependencies. Actually I think the reason that
> we have diminishing marginal utility and substitution effects, namely that
> they provide an evolutionary advantage, also applies to the value of
> experiences of copies.
I don't think diminishing utility applies when the experiences are with
mutually exclusive alternatives. The 20th apple is worth less to me
than the first because I'm sick of apples by the time I get to the 20th.
But in 20 different universes, if apples are my favorite fruit, I do best
to eat apples in every one. That maximizes my total sensory enjoyment
and nutritional gain, which is what gives apples the highest utility.
Maybe you could expand on your argument about how diminishing utility
relates to evolutionary advantage across copies; I'm not sure what you
are getting at there. I see the reason higher quantities have less
marginal value to you as because of how they interact with each other
and with you; putting them all into separate universes would eliminate
the effects which I see as causing diminishing marginal value.
Hal
Yes, this is part of the definition of utility.
> Now consider a single-universe model. He can choose one of two
> alternatives. In one alternative he is guaranteed to get E1, and in the
> other alternative he has a 50-50 chance of getting E1 or E2. Is there
> a rational way to prefer the second alternative? That is, can it be
> better to have a chance of getting E2 rather than the certainty of E1?
> I would like to rule this out for rational choosers, but I'm not 100%
> sure. Some people seek risk, although a risk which has only a down side
> still seems irrational.
I agree with you here, because if he prefers the second alternative, he
should not prefer E1 to E2. If faced with a choice between E1 and E2 he
would do better to throw a mental coin and decide between them randomly.
> There is an argument that there should be no differences, because the
> information available in any sub-part of the multiverse is the same as in
> the single universe case. In fact maybe we can never tell which theory
> is correct, therefore the differences are entirely hypothetical. If
> we accept this then what is irrational in the single universe case is
> also irrational in the MWI.
I disagree with you here. Although we have no direct sensory information
about what happens in other branches of the multiverse, theory gives
information about what happens in them, and that can be sufficient to
change what we value in this branch.
> Maybe you could expand on your argument about how diminishing utility
> relates to evolutionary advantage across copies; I'm not sure what you
> are getting at there. I see the reason higher quantities have less
> marginal value to you as because of how they interact with each other
> and with you; putting them all into separate universes would eliminate
> the effects which I see as causing diminishing marginal value.
You're right, the evolutionary advantage argument only applies to copies
within a universe, not across universes (or non-interacting branches). The
idea is that if you are content to have experiences similar to your
copies, then the collection of your copies as a whole will contain less
information (i.e. knowledge and skills) than the copies of someone who
wants to have experiences different from his copies. So if your copies
were to compete with his copies you would be at a disadvantage.
P.S. I retract my claim that the self-sampling assumption is incorrect. I
think I was just using it incorrectly. More on this in another post.
About books. Concerning the provability logics I have always mentioned
Boolos 1993 (or even his lovely, lighter Boolos 1979), but I would
like to mention also the book "Self-Reference and Modal Logic" by
Smorynski. The only problem is its very small characters; I should go
to the oculist! :0 But he has a nice chapter on the algebraic models
of provability logic: the so-called diagonalisable algebras and
fixed point algebras. As you know, the Z logics I got are so weak that
they lose (like G*) Kripke semantics or even Scott-Montague
(sort of topological) semantics. So we need some algebraic move.
Note that the G/G* story *began* with those diagonalisable algebras
through the work of Magari (Italy).
But, perhaps more importantly at this stage I must recall the book
"Mathematics of Modality" by Robert Goldblatt. It contains fundamental
papers on which my "quantum" derivation relies. I mentioned it a lot
some time ago.
And now that I am speaking of Goldblatt (because of Tim May, who dares
to refer to algebra, categories and toposes!), I want to mention that
Goldblatt wrote an excellent introduction to toposes: "Topoi". (One of the
big problems in topos theory is which plural to choose for the word
"topos". There are two schools: topoi (like Goldblatt), and toposes (like
Barr and Wells). :)
Goldblatt's book on topoi has been heavily attacked by purely
categorically minded algebraists like Johnstone, for example, because
there is a remnant smell of set theory in it. That is true, but it really
helps for an introduction. So, if you want to be introduced to topos
theory, Goldblatt's "Topoi", North Holland, 19? (I will look at home), is
perhaps the one.
-Bruno
PS I got your questions. I will think a little bit before answering.
Thanks to Tim for Egan's excerpt.
Yes, this is an excellent book. It has more of an expositional style
than many books on category and topos theory. It's out of print and
Amazon has been looking for months for a used copy for me. (Amazon can
search for books which become available. I also have them searching for
a copy of Mac Lane and Moerdijk's book on sheaves, logic, and toposes,
also out of print.)
Fortunately, I live near UC Santa Cruz, which has an excellent science
library.
The category and topos theory books I actually _own_ (bought through
Amazon) are:
* Cameron, Peter, "Sets, Logic and Categories," 1998. An undergraduate
level primer on these topics. One chapter on categories. (By the way,
most modern algebra books, e.g., Lang's "Algebra," Fraleigh, Dummit and
Foote, etc. have introductory chapters on category theory, as this is
the "language" of modern abstract algebra.)
* Lawvere, F. William, Schanuel, Stephen H., "Conceptual Mathematics: A
first introduction to categories," 1997. This is a fantastic
introduction to categorical thinking. The authors are pioneers in topos
theory, but the presentation is suitable for any bright person. There is
not much on applications, and certainly no mention of quantum mechanics
a la Isham, Markopoulou, etc. But the conceptual ideas are profound.
(And this should be read before tackling the "formalistic" presentations
in other books.)
* Pierce, Benjamin, "Basic Category Theory for Computer Scientists,"
1991. A thin (80 pages) book which outlines the basics. Includes
material on compilers, the "Effective" topos of Hyland and others,
cartesian closed categories, etc.
* Mac Lane, Saunders, "Categories for the Working Mathematician, Second
Edition," 1971, 1998. Wow. A dense book by the co-founder of category
theory. As someone said, reading along at 10% comprehension is better
than reading other books at full comprehension. I find the book sort of
dry and short on historical and conceptual motivations, but Mac Lane has
written many longer expositions in MAA collections of reminiscences...I
just wish mathematicians would do more of what John Baez does in his
papers: show the reader the motivations.
(Much of mathematical writing came out of the tradition of "lecture
notes." In fact, the leading publisher of mathematics, Springer-Verlag,
calls their series "Lecture Notes," or, more recently, "Graduate Texts."
Brilliant mathematicians like Emil Artin and Emmy Noether would have
their lectures on algebra transcribed by grad students or post-docs,
like Van der Waerden, who would then republish the notes as "Moderne
Algebra," the first of the "groups-rings-fields" modern algebra books.
Which is why one of E. Artin's students, Lang, writes so many dry books!
These books are often very short on pictures or diagrams, very short on
segues and motivations. It's as if all of what a good teacher would do
in class, with drawings on blackboards, with historical asides, with
mentions of how material ties in with material already covered, with
mention of open research problems and unexplored territory...it's as if
all this material is just left out of these texts. Too bad.)
* Lambek, J., Scott, P.J., "Introduction to higher order categorical
logic," 1986. Way too advanced for me at this point. So no comments on
content. But it's useful to glance at topics so as to get some idea of
where things are going (part of the issue of motivation I raised above).
* Taylor, Paul, "Practical Foundations of Mathematics," 1999. Another
advanced book, covering logic, recursive function theory, cartesian
closed categories, and a lot of the second half I can't comment on. A
wonderful browsing book, as he has lots of tidbits and asides.
There are 3-4 other books I'd like to get, including the Goldblatt book
(he is giving permission to Xerox his book, so I may do that), the Mac
Lane and Moerdijk book, and a few others. Peter Johnstone wrote the
defining book on toposes in 1977...long-since out of print and
long-since overtaken by newer results. Ah, but he is about to have his
massive 3-volume set of books on topos theory published:
Here's John Baez's summary in his Week 180 column:
"2) Peter Johnstone, Sketches of an Elephant: a Topos Theory Compendium,
Cambridge U. Press. Volume 1, comprising Part A: Toposes as Categories,
and Part B: 2-categorical Aspects of Topos Theory, 720 pages, to appear
in June 2002. Volume 2, comprising Part C: Toposes as Spaces, and Part
D: Toposes as Theories, 880 pages, to appear in June 2002. Volume 3,
comprising Part E: Homotopy and Cohomology, and Part F: Toposes as
Mathematical Universes, in preparation.
"I can't wait to dig into this. A topos is a kind of generalization of
the universe of set theory that we all know and love, but topos theory
is really a wonderful way to unify and generalize vast swathes of
mathematics - you could say it's the way that logic and topology merge
when you take category theory seriously. I've really just begun to get a
glimmering of what it's all about, so I'm curious to see Johnstone's
overall view of the subject. "
(end of John Baez's comments)
The first two volumes are due this month or next, according to Oxford
University Press (_not_ Cambridge!) and Amazon. Cost for the two is a
whopping $295. But the books are 750 and 850 pages, respectively.
I am steeling myself to buy them. A lot of money, but this stuff is
more entertaining to me than spending the same amount for 1-2 nights in
a hotel, or lots of other things people spend their money on. And
obviously 1500 pages is a lot of reading!
And to better understand these things, I've been brushing up on my math
background. A lot of algebra texts (mentioned above), topology (Munkres,
Hocking and Young, Alexandroff, various Dover editions), and algebraic
topology (Massey, Bredon, Fulton, etc.). My background is mostly
physics, but I fortunately had some good exposure to analysis and
measure theory, the stuff that can provide the assumed "mathematical
maturity" for further study. I wish I'd spent more time studying this
stuff...but wishing about changes in the past is pointless.
I'm here now, in my one and only present, and this category and topos
theory is turning out to be enjoyable and stimulating as a goal unto
itself, and as a tool for, I think, better understanding things I want
to understand.
--Tim May
(.sig for Everything list background)
Corralitos, CA. Born in 1951. Retired from Intel in 1986.
Current main interest: category and topos theory, math, quantum reality,
cosmology.
Background: physics, Intel, crypto, Cypherpunks
Oops! I left out one of the most important and accessible of the books I
have and recommend:
* McLarty, Colin, "Elementary Categories, Elementary Toposes," 1992. An
intermediate-level, moderate-length book. Covers a lot of interesting
material.
Here's what Baez says:
"3) John Baez, Topos theory in a nutshell,
http://math.ucr.edu/home/baez/topos.html
and then try the books I recommended in "week68", along with this one:
4) Colin McLarty, Elementary Categories, Elementary Toposes, Oxford
University Press, Oxford, 1992.
which I only learned about later, when McLarty sent me a copy. I wish
I'd known about it much sooner: it's very nice! It starts with a great
tour of category theory, and then it covers a lot of topos theory,
ending with a bit on various special topics like the "effective topos",
which is a kind of mathematical universe where only effectively
describable things exist - roughly speaking. "
(end of Baez comments)
By the way, the Web is a great resource for finding online books. Barr
and Wells, who Bruno referred to, have put an updated version of their
book "Toposes, Triples and Theories" online in PDF form. Search for it
in the usual way.
I remember your post on the cypherpunks list about category theory, but I
have to admit I didn't pay it much attention since it didn't seem very
relevant at the time. I guess this is my second chance to learn about
category theory, so there are some questions for you.
Suppose I had the time for only one book, which would you recommend? Also,
can you elaborate a bit more on the motivation behind category theory? Why
was it invented, and what problems does it solve? What's the relationship
between category theory and the idea that all possible universes exists?
Does it help understand or formalize the notion of "all possible
universes"? I know in logic there is the concept of a categorical theory
meaning all models of the theory are isomorphic. Does that have anything
to do with category theory?
> Hi Tim, it's really interesting to see you here. (For those who don't
> know, I knew Tim from the cypherpunks mailing list. Hal Finney was an
> active member of the list as well. See
> http://www.activism.net/cypherpunk/crypto-anarchy.html if you're
> wondering
> what a cypherpunk is.) Two of the most prominent cypherpunks I know are
> now on my Everything mailing list. I wonder what that means... Anyway,
> welcome!
Thanks. One of my motivations here is to a) learn, b) have a chance to
explain what I have learned (which helps to learn), and c) avoid
politics and policies and similar issues completely. (Unless there is
talk of funding a National Everything Initiative, I think it's safe to
say that politics won't enter!) The first couple of years, especially
the first year, of Cypherpunks was heavy on learning and explaining, but
politics was of course important. The last several years have been
recyclings of older ideas...this can happen to any list, of course.
I've known of the Everything list, and of course the
Everett/DeWitt/Niven/Wheeler/Egan/Tegmark ideas for a while...I remember
reading Larry Niven's "All the Myriad Ways" around 1970 or so. We used
to sit around in the early 70s debating the Everett model, which a
couple of science fiction writers were making much of, and which had
gained new popularity after Bryce DeWitt dusted off the idea and began
publishing a lot on it. (DeWitt assigned much credit to his student, RN
Graham, and even called the MWI the "Everett-Wheeler-Graham" theory. Way
too charitable to Graham, I think.)
I learned of your list a while back, but wasn't super-interested in what
I thought (and partially still think) is a fanciful idea, akin to the
late David Lewis' "plurality of worlds" philosophy (that everything we
can imagine must have reality). Saul Kripke's "possible worlds" work was
more interesting, as it is closely linked to linguistics and AI and
predictions about the future ("If Oracle were to announce bad earnings
tomorrow, then this is what would probably happen," a possible worlds
"story" which is of course very close to some discussions of alternate
_presents_. In fact, no different, except more practical.)
Enough of this digression. I'll answer your question:
>
> I remember your post on the cypherpunks list about category theory,
> but I
> have to admit I didn't pay it much attention since it didn't seem very
> relevant at the time.
I posted to that list just to let folks know I was exploring a new area.
And because there may actually be implications for areas of interest...I
mentioned these in that post.
> I guess this is my second chance to learn about
> category theory, so there are some questions for you.
>
> Suppose I had the time for only one book, which would you recommend?
I would start with Lee Smolin's "Three Roads to Quantum Gravity." This
is not about category theory, but provides a lot of the motivation. He
discusses many of the points that I allude to. In bullet form:
-- that we are embedded in the universe, that there is no "omniscient
observer who can see all of space-time"
-- that certain pieces of information are forever beyond our ability to
see. These turn out to be important for why accelerating objects see a
"temperature" of space, isomorphic to the temperature seen by a
spacecraft hovering over a black hole expending fuel at the same rate.
(I can explain separately if there's interest.)
-- that space appears to be discrete at the Planck scale (a la the "It
from bit" discreteness outlined by Wheeler 30 years ago, later by
Fredkin, Toffoli, and others, and more recently by Wolfram)--this
discreteness has not been proved, and may not be testable for many
decades, but all three of the routes Smolin outlines essentially predict
that space-time is not an infinitely differentiable (smooth) manifold
made of real numbers: the holographic/Crane/Susskind route, the loop
gravity/spin foam route, and the currently much popularized
string/M-brane Schwarz/Witten/etc. route.
-- that _relationships_ are more important than objects (which fits very
closely with category theory, as I'll explain below)
-- that the proper logic of understanding cosmology, given some or all
of the above points, is not the "omniscient" logic of
Aristotelian/Boolean logic ("A or not-A, nothing in between"), but a
more general form of Intuitionist (bad political connotations for many!)
Brouwer/Heyting logic, which also happens to be the natural logic of a
topos, a category with certain properties of logic attached to it.
Smolin spends a few pages discussing topos theory, but in such light
detail that anyone not tuned in to spotting the phrase "topos theory"
might even miss it. He refers to Fotini Markopoulou, a Greek woman who
collaborates a lot with him and with the other biggies of the "loop
gravity/spin foam" school, e.g., Rovelli, Ashtekar, Baez, etc.
If you want a straight intro to category theory, with none of the stuff
about cosmology, space-time, and such, the Lawvere and Schanuel book
"Conceptual Mathematics: A first introduction to categories" is the book
I recommend.
But perhaps before reading a book, surfing the Web is better. There are
many tutorials and primers, and one can read a bunch of them to get the
lay of the land. I've mentioned John Baez's site a couple of times.
Chris Hillman has a good "Categorical Primer," but it's incomplete. Some
authors, like Barr and Wells, have even put their books up on the Web.
Reading any of the main books will not show many _apparent_ links to
Tegmark, Schmidhuber, etc. The same applies to reading Kleene's 1952
opus, "Metamathematics," or Lang's "Algebra," or any math book. None of
these will appear to be closely related to theories of everything, or
multiverses. One has to read between the lines. For example, when
reading about categories, think of Tegmark's diagram showing links
between "Abelian Fields," "Manifolds with Tensors Fields," etc.
A categorical approach is cleaner and shows the morphisms more clearly.
Take a look at category theory for about 2 days (of 2-3 hours a day) and
then look at Tegmark's paper again.
(This is just for the formalisms he talks about, not the underlying
philosophical point.)
> Also,
> can you elaborate a bit more on the motivation behind category theory?
> Why
> was it invented, and what problems does it solve?
OK, on to some history.
Mathematics had two great waves of consolidation and formalization in
the past century. Around 1920 much had been done with symmetries,
embodied in group theory. Much was known about integers and even the
real number line, embodied in the work of Richard Dedekind in the 19th
century. Also, G. Peano, etc. This was mostly the algebra of rings and
fields. (On mostly separate tracks, a lot of work in topology, logic,
set theory.)
Two associates/students of David Hilbert largely gave us our modern
outlook on math in the 1920s: Emil Artin and Emmy Noether. In
influential lectures in Germany during the 1920s they consolidated all
of the bits and pieces about groups, rings, fields, modules, and vector
spaces into a coherent, axiomatized whole. Others in this milieu were
Hermann Weyl, John von Neumann (different country, but still the same
milieu), George David Birkhoff (in the U.S.), and others. One of those who
transcribed Artin's and Noether's lectures was van der Waerden, who
wrote an enormously influential book, "Moderne Algebra," published
around 1930-32 in German. Van der Waerden established the
"groups-rings-fields" structure of nearly all of the abstract algebra
books to come later, such as Garrett Birkhoff (G. D. Birkhoff's son) and Mac Lane's
1941 text in English, Jacobson's early 50s text, Mac Lane and Birkhoff's
more advanced text, Herstein's "Topics in Algebra," and so on. One of
Emil Artin's students (both Artin and Noether moved to the U.S. in the
early 1930s, part of the enormous wave of refugees from Germany and
Europe) was Serge Lang, author of a dozen or more "classics." (Warning:
Lang's books are dry, but complete.)
OK, so that was the first great wave of consolidation in "real"
mathematics. (By "real" I mean the stuff mathematicians actually use
every day...there were of course consolidations in foundational areas
happening at about the same time, via Russell, Whitehead, Godel, Curry,
Church, Kleene, etc.)
This was the "abstract math" we all know and love.
Around 1940 there was much work on applying algebra to topology, e.g.,
counting holes in surfaces, seeing which loops drawn on a surface could
be contracted to points and which could not. Two of those working on
this were Samuel Eilenberg, a young emigre from Poland (dubbed
"S-squared, P-squared" for "Smart Sammy the Polish Prodigy") and an
equally young Saunders Mac Lane, the same as the author of the Birkhoff
and Mac Lane basic text on algebra.
The combination of algebra and topology is of course algebraic topology.
What Eilenberg and Mac Lane realized is that certain _structures_ were
present in both algebra and topology, and that _structure-preserving
maps_ were showing up all over the place. For example, deformations of
an object would have the same structure as algebraic transformations.
Even examples like counting holes in surfaces were in a deep sense
isomorphic to counting things in purely symbolic or algebraic structures.
They began thinking about what the abstract ideas were, and came up
with the core ideas:
-- that maps, or arrows, or morphisms go between objects
-- that the arrows themselves can be studied, and that maps between
whole systems of objects and arrows are interesting to look at. (The
maps between categories are called "functors," and the maps between
functors are called "natural transformations.")
(Digression: For example, look at _languages_. Words in English are
mapped into their plural form in various ways, usually by adding an "s."
Sometimes in other ways ("oxen"). The morphisms between words can be
compared (mapped) to morphisms between words in other languages, e.g.,
Chinese, German. There are functors going between plural formation in
English and plural formation in German. And between other structures in
each of these languages. In this sense, the "structures" of different
languages can be diagrammed and compared in a more interesting way than
simply talking about them or just by compiling descriptions.)
-- that we can often take the set of objects and morphisms and view them
as a "picture" (a model, sort of) of relationships
-- that "natural transformations" are ways of "sliding" this picture
into other categories, other domains
-- critical to much of category theory is the analysis of diagrams,
showing arrows from A to B, or A --> B, arrows from C to D, etc., and
then figuring out what is needed to make such diagrams commute (as an
example). This leads to things I won't try to explain here (for space,
and because I'm not the best person to look to for such explanations)
such as "pushouts" and "pullbacks." And it turns out that these diagrams
are closely linked to physics issues (!). (If you turn on your morphing
engines and begin playing with the words, you probably are already
speculating that maybe Feynman diagrams are a kind of category theory
diagram. And you would be right. See Baez and Dolan's "From Finite Sets
to Feynman Diagrams" for more.)
(Digression: By talking to you folks, by attempting to explain these
ideas, I am essentially trying to take the picture which is inside _my_
head and plant some semblance of it in _your_ head. This is a kind of
natural transformation, albeit much more complicated and fuzzier than
most nice, formal, pristine examples in math!)
So in 1945, delayed for 3 years by the war, Eilenberg and Mac Lane
published their influential and very long paper on what they called
"categories." A category was like a set, except made up of objects and
arrows between the objects. Ordinary sets formed the category SET.
Vector spaces formed the category VECT, and so on, for essentially all
objects and relationships known to mathematics. And of course there are
then ways to map one category into another, or to compare two categories
(actually mapping and asking about the mapping are of course essentially
the same thing, with just a change of direction or emphasis).
Eilenberg and Mac Lane used category theory as a concise and unifying
notation for talking about mathematics, especially for algebraic
topology. Use of category theory for algebraic topology (particularly an
important branch called "homology theory") became widespread in the late
40s and ever since.
Aside: They coined the term "category." Perhaps they should have called
them something like "structures." Then category theory would be
understood at a glance (via those morphisms in our everyday language,
those patterns!) as "structure theory." Or as "pattern theory." And then
probably more people would have understood why it was likely to be
very important.
But much more was to come.
In the late 1950s a couple of important extensions (obscure pun not
intended) of Eilenberg and Mac Lane's "notational language" happened.
Daniel Kan discovered the "Kan extension" and introduced the notion of
adjoint functors. Now physicists have used "adjoint" and
"self-adjoint" concepts for years. When the category theory and algebra
people showed the power of their diagrams, physicists began to take
notice.
And at around that time the great mathematician Grothendieck developed
some notions which led to a particular kind of category called a topos.
(No space in this already too-long article to try to explain what a
topos is.) William (Bill) Lawvere was a young student of Eilenberg's. He
was convinced that topos theory could provide a foundation for
mathematics equivalent in power to that of set theory. Instead of sets
consisting of points and axioms about inclusion we would have objects
(not necessarily consisting of points) and arrows (morphisms). According
to the story, Mac Lane warned him away from attempting this. The story
goes that Eilenberg and Mac Lane were flying down to consult for the
Pentagon in 1963 and Eilenberg handed Mac Lane the recently completed
thesis of Lawvere. Bill Lawvere had largely established that toposes
could form the basis of all of mathematics. (Some work needed to be
completed, and this was largely done by 1970.)
Meanwhile, a couple of major results of the 1960s became closely linked
to category theory: the Weil Conjectures and Paul Cohen's "forcing" proof
of the independence of the continuum hypothesis.
Let me give a description I found of just one course in topos theory.
This will mention a lot of buzzwords, including folks like Saul Kripke
(logician, "possible worlds"), Grothendieck, etc. Don't expect to grok
this from a course summary (!), but I include it to show some of the
links:
http://www.math.uu.se/~palmgren/topos-eng.html
"Topos Theory, spring term 1999
"A graduate course (6 course points) in mathematical logic.
------------------------------------------------------------------------
"Topos theory grew out of the observation that the category of sheaves
over a fixed topological space forms a universe of "continuously
variable sets" which obeys the laws of intuitionistic logic. These sheaf
models, or Grothendieck toposes, turn out to be generalisations of
Kripke and Beth models (which are fundamental for various non-classical
logics) as well as Cohen's forcing models for set theory. The notion of
topos was subsequently extended and given an elementary axiomatisation
by Lawvere and Tierney, and shown to correspond to a certain higher
order intuitionistic logic. Various logics and type theories have been
given categorical characterisations, which are of importance for the
mathematical foundations for programming languages. One of the most
interesting aspects of toposes is that they can provide natural models
of certain theories that lack classical models, viz. synthetic
differential geometry.
"This graduate course offers an introduction to topos theory and
categorical logic. In particular the following topics will be covered:
Categorical logic: relation between logics, type theories and
categories. Generalised topologies, including formal topologies.
Sheaves. Pretoposes and toposes. Beth-Kripke-Joyal semantics. Boolean
toposes and Cohen forcing. Barr's theorem and Diaconescu covers.
Geometric morphisms. Classifying toposes. Sheaf models of infinitesimal
analysis.
"We will assume some familiarity with basic category theory, such as is
obtained in courses in domain theory or algebra. The course will be
given in English, in case someone requests this."
(Sorry for the quoted material, but it sometimes helps to see someone
else talking about something. At least I find it does.)
I'd best move on to answering or commenting on the rest of Wei Dai's
questions:
> What's the relationship
> between category theory and the idea that all possible universes exists?
I believe it provides a natural language for talking about time-varying
sets and "sheaves" (Jargon Alert: the "sheaf" in mathematics may not
match exactly what one pictures a "sheaf of histories" to look like. But
mathematicians usually pick names that bear _some_ resemblance to
ordinary things, e.g., fibers, fiber bundles, sheaves, pre-sheaves,
partially ordered sets, spaces, etc.)
Personally, I'm not yet "taking seriously" either the David Lewis
"plurality of worlds" or the Max Tegmark "everything" or the Greg Egan
"all topologies model" ideas. I need to learn a lot more of the language
first.
(Without a good language, we end up just _talking_ and _speculating_.
I'm finding my category theory reading is giving me a welcome new
perspective on issues I've long been fascinated with, e.g., if a single
atom were to have been moved on Sirius a million years ago, would the
world around us be different? And how much different? The cascade of
changes has what topology? Is it the Pachinko topology?)
My strong hunch is that the universe at these levels (nature of
space-time) will need all of the mathematical tools we can muster. Now,
I'm not saying one needs to be an expert at group theory, for example.
Or that one needs to know all of algebraic topology. But it's pretty
clear to me that mathematics has historically been our most important
tool for all of the interesting branches of physics. No one can get far
in relativity or quantum mechanics without mastering some interesting
math.
For me, category theory has been a stunning window on things. I regret
that it is not taught, or even mentioned, to most physicists. (This may
be changing, with the books by folks like Robert Geroch, who starts with
category theory and covers much of modern physics from this perspective,
and the progress by the string theory and quantum gravity communities.)
> Does it help understand or formalize the notion of "all possible
> universes"? I know in logic there is the concept of a categorical theory
> meaning all models of the theory are isomorphic. Does that have anything
> to do with category theory?
Yes, there are deep and important connections. Models form a category.
The book I mentioned by Paul Taylor, "Practical Foundations of
Mathematics," is very good on these issues.
In my view, category theory (and topos theory) represents the "modern"
way of looking at a lot of seemingly unrelated areas.
As we know, as Hal Finney and several of us used to discuss about ten
years ago on the Extropians list, Chaitin's formulation of algorithmic
information theory gives us a much more comprehensible
proof of Godel's Theorem than Godel himself gave! (For
the best explanation of this, and why this is so, either see Greg
Chaitin's own papers and books or the wonderful summaries by Rudy Rucker
in his "Mind Tools" book.)
These modern viewpoints are much more comprehensible than the classics.
Which is not surprising. Shoulders of giants and all that.
The same is true of category theory. It's a relentlessly modern approach
to seeing the similarities that pervade mathematics and physics. Whether
it answers the question about whether lots of other universes exist is
doubtful...I'm not convinced we'll know the answer to that question in
the year 3000.
But it's a powerful and elegant approach, with perhaps a slightly
misleading name, and it looks to me to be the right language for talking
about the world around us and possibly the worlds we cannot directly see.
I agree with Lee Smolin that topos logic is not just the logic of
cosmology, but also the logic of our everyday world of limited
information, bounded rationality, Bayesian decision making, and
information horizons. Even if this is not useful for answering questions
about "the Everything theory," because we may need to wait 600 or 6000
years for experimental tests to become feasible, I believe this outlook
will be of great utility in many areas.
I'll keep you all posted!
--Tim May
Wei Dai:
>Suppose I had the time for only one book, which would you recommend?
I think you (Wei) decided to look for the book by Lawvere. Good choice,
but you should know it is just an introduction. Now, that book is useful
even for learning algebra. Someone who knows algebra (i.e., groups, rings,
fields, topological spaces, and more importantly their morphisms) could
perhaps look for more advanced material.
>Also,
>can you elaborate a bit more on the motivation behind category theory? Why
>was it invented, and what problems does it solve? What's the relationship
>between category theory and the idea that all possible universes exists?
Tim makes a very pertinent remark (but he writes so much I fear it has
gone unnoticed!). He said: read Tegmark (the Everything paper), then learn
category theory, then read Tegmark again. Indeed I would say category
theory emerged from the realisation that mathematical structures are
themselves mathematically structured. Categorists apply the
every-structure principle to each structure. Take all groups, and all
morphisms between groups: you get the category of groups. It is one
mathematical structure, a category (with objects = groups and arrows =
homomorphisms) which, in some sense, captures the essence of "group".
Note that the category of groups is too large to be defined in
Zermelo-Fraenkel set theory (like almost any so-called large category).
The usual trick of categorists is to invoke the von Neumann-Bernays-Godel
theory of sets, which has classes (collections of sets which are not
themselves sets). A more modern view is to do category theory in a well
chosen topos!
Note that some common mathematical structures *are* categories. A group
is a category with one object, whose arrows are the group elements
(acting on the group itself), composed by the group operation. Partially
ordered sets (sets with a transitive, reflexive, and antisymmetric
relation) give other simple examples: the objects of the category are the
elements of the partially ordered set, and the unique (here) arrow
between two objects is the order relation. So Boolean algebras, but also
Heyting lattices, etc., are automatically categories.
Of course a mere set can be made into a trivial category.
Other categories live in between groups and lattices: they have lots of
objects and lots of arrows between objects.
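(A concrete toy version of the poset example, taking divisibility on {1, 2, 3, 6} as a hypothetical partial order: the objects are the elements, and there is exactly one arrow x -> y whenever x divides y. Only an illustrative sketch.)

```python
# A finite poset as a category: one arrow x -> y exactly when x <= y.
# Reflexivity supplies the identity arrows; transitivity supplies composition.

elements = [1, 2, 3, 6]

def leq(x, y):
    return y % x == 0            # the divisibility order on {1, 2, 3, 6}

arrows = {(x, y) for x in elements for y in elements if leq(x, y)}

# Identity arrows exist (reflexivity): one loop per object.
assert all((x, x) in arrows for x in elements)

# Arrows compose (transitivity): x -> y and y -> z force x -> z.
assert all((x, z) in arrows
           for (x, y1) in arrows for (y2, z) in arrows if y1 == y2)

print(len(arrows))  # 9 arrows for these four objects
```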
Categories arise naturally when mathematicians realise that many proofs
look alike, so that it is easier to abstract a new
structure-of-structures, make proofs in it, and then apply the abstract
proof in each structure you want. So they define universal constructions
in a category (like the "product"), which correspond automatically to
- "and" in Boolean algebra
- "and" in Heyting algebra
- the group product in the category of groups
- the topological product in the category of topological spaces
- the Lie product in the category of Lie groups, etc.
So category theory helps you make a big economy of work ... once you
invest in it, if you are using algebra. It saves you time.
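(The "product" can be checked by brute force in the category of finite sets, where it is the Cartesian product with its two projections. The sets and maps below are hypothetical examples of my own; the point is the universal property: any pair of maps into A and B factors uniquely through A x B.)

```python
from itertools import product as cartesian

# Categorical product of finite sets A and B: the Cartesian product A x B
# with projections p1, p2.  Universal property: for any set X with maps
# f: X -> A and g: X -> B there is a *unique* map m: X -> A x B with
# p1 . m == f and p2 . m == g, namely m(x) = (f(x), g(x)).

A, B, X = {'a', 'b'}, {0, 1}, {'u', 'v', 'w'}
f = {'u': 'a', 'v': 'b', 'w': 'a'}   # X -> A
g = {'u': 0,  'v': 0,  'w': 1}       # X -> B

AxB = set(cartesian(A, B))
p1 = {pair: pair[0] for pair in AxB}
p2 = {pair: pair[1] for pair in AxB}

m = {x: (f[x], g[x]) for x in X}      # the mediating map

assert all(p1[m[x]] == f[x] and p2[m[x]] == g[x] for x in X)

# Uniqueness: the value of any map agreeing with both projections is
# forced componentwise; check it explicitly for each x.
for x in X:
    candidates = [pair for pair in AxB
                  if p1[pair] == f[x] and p2[pair] == g[x]]
    assert candidates == [m[x]]

print("universal property holds")
```

(Exactly the same diagram, interpreted in the category of groups, forces the direct product; in topological spaces, the product topology; and so on. That is the "economy of work.")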
But, to come back to Tim's remark, it hints that a giant part of the
whole of mathematics is naturally mathematically structured, and this
should be taken into account. Also, as I have explained before, the whole
of math cannot be entirely mathematically structured in any consistent
way (it's too big). This can be shown with logic, but categories can give
you a concrete feeling of the bigness and endlessness of such an
enterprise.
Another motivation for categories is the realisation that the elements of
structures are not necessary for defining those structures. An object's
behavior is defined (up to isomorphism) by its relationships (arrows)
with other objects. That's a sort of functional or relational philosophy
not so different from comp. As an exercise, you could try to define
injections and surjections between sets without mentioning the elements!
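(The classical answer to this exercise: f is injective exactly when it is left-cancellable, a "monomorphism", i.e. f . g = f . h forces g = h; dually, surjective means right-cancellable, an "epimorphism". Here is a brute-force check of the injective half over small, hypothetical finite sets.)

```python
from itertools import product

# Element-free injectivity: f: A -> B is a monomorphism when, for every
# "test" set T and every pair of maps g, h: T -> A, the equation
# f . g == f . h forces g == h.  For finite sets this can be brute-forced.

def maps(dom, cod):
    """All functions dom -> cod, each represented as a dict."""
    dom, cod = list(dom), list(cod)
    return [dict(zip(dom, images)) for images in product(cod, repeat=len(dom))]

def is_mono(f, dom, T):
    """True iff f is left-cancellable with respect to the test set T."""
    return all(g == h
               for g in maps(T, dom) for h in maps(T, dom)
               if all(f[g[t]] == f[h[t]] for t in T))

A, T = [0, 1], ['t']
inj     = {0: 'x', 1: 'y'}   # injective into {'x', 'y', 'z'}
non_inj = {0: 'x', 1: 'x'}   # collapses 0 and 1

print(is_mono(inj, A, T), is_mono(non_inj, A, T))  # True False
```

(Note that nothing in `is_mono` looks at the elements of A directly; it only composes arrows and compares them, which is the point of the exercise.)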
>
>Does it help understand or formalize the notion of "all possible
>universes"?
I don't think categories can help in any direct way, although I doubt
indeed that we can live without categories (or without logic-modalities)
in the long run. You could try to define the category of
(multi)universes. What would be a morphism between universes?
Note that Lawvere tried to provide a foundation for math through the
category of all categories (with functors as arrows), but this did not
succeed. He discovered toposes instead.
Note that categories are difficult to marry with ... recursion theory.
(Despite so-called Dominical Categories, which do the job, but that is
too heavy math for me ...).
>I know in logic there is the concept of a categorical theory
>meaning all models of the theory are isomorphic. Does that have anything
>to do with category theory?
Not really. It is one of the reasons it is better to use the adjective
"categorial" in algebra and "categorical" in logic. But not all
scientists follow this custom.
Bruno
> Tim makes a very pertinent remark (but he writes so much I fear it has
> gone unnoticed!).
True enough...I write a lot! (The old joke applies: "I don't have enough
time to write a short letter.")
> He said: read Tegmark (Everything paper), then learn
> category, then read again Tegmark.
Well, I didn't actually say "then learn category (theory)." I said spend
enough time looking at category theory to get the gist of what categories
are. A couple of days, for example.
Reading styles differ, but I have come to favor the "hawk spiral." I see
hawks spiralling in the thermals near my house, and this is how I like
to learn. I read something from one book, think, read from another,
think, try to compare what the authors are saying, read from another, go
back to the earlier book and read more, and so on. A helix, covering the
same material many times.
> Indeed I would say category theory
> emerged from the realisation that mathematical structures are
> themselves mathematically structured. Categorists apply the
> every-structure principle to each structure. Take all groups, and all
> morphisms between groups: you
> get the category of groups. It is one mathematical structure, a category
> (with objects = groups and arrows = homomorphisms) which, in some
> sense, captures the essence of "group".
Exactly. A very nice explanation.
And much of what Tegmark outlines in his large chart can be dramatically
simplified and abstracted. Crane, Baez, Dolan, and others call this the
"categorification" process. Robert Geroch's textbook, "Mathematical
Physics," uses categories and functors throughout as a unifying (and
intuition-increasing) tool.
Hey, let me be very clear about something: I don't know what the
categorification of Tegmark's ideas are!
Categories and toposes are not a magic bullet.
But I know that getting lost in the swamps of mathematical structures
is a real danger, and that mathematicians have found certain unifying
symmetries, structures, parallels which simplify things dramatically.
Category theory is a lot like finding metaphors and parallels. (We use
the term "isomorphism" almost in everyday language, so the leap to all
sorts of morphisms is not great.)
>
> Categories arise naturally when mathematicians realise that many proofs
> look alike, so that it is easier to abstract a new
> structure-of-structures,
> make proofs in it, and then apply the abstract proof in each structure
> you want. So they define universal constructions in a category (like the
> "product"), which correspond automatically to
> - "and" in Boolean algebra
> - "and" in Heyting algebra
> - the group product in the category of groups
> - the topological product in the category of topological spaces
> - the Lie product in the category of Lie groups, etc.
> So category theory helps you make a big economy of work ... once you
> invest in it, if you are using algebra. It saves you time.
Exactly. Another good explanation here.
And it's more than just a notational convenience. Proofs in one area,
such as some branch of topology, can be transformed into proofs in other
areas.
(Aside: I believe this is a big part of what thinking is about: applying
thoughts/concepts/morphisms/etc. from one area to another. Perhaps
category theory will push AI in new ways. Perhaps the "frame problem"
will be solvable with new tools.)
> But, to come back to Tim's remark, it hints that a giant part of the
> whole of mathematics is naturally mathematically structured, and this
> should be taken
> into account.
I first heard of category theory about 10 years ago. A friend of mine
was working for a company in Palo Alto which was using category theory
to model economic databases (such as petroleum reserves, ports in
different countries, etc. ...very probably CIA-related, now that I think
about it). He didn't have the interest or insight to explain why
category theory was so cool.
I asked a mathematician friend of mine (Eric H., for Hal and WD) about
it and he said it was about what mathematicians do when they draw
diagrams on blackboards. It didn't sound very interesting. It sounded
like some variant of denotational semantics.
But when the light bulbs went off this spring, when I dug into the
writings of Baez, Hillman, Markopoulou, and the books of Lawvere, Mac
Lane (difficult), and others, I had a major epiphany, a real "Ah ha!"
experience.
Everything seemed directly related to problems which had fascinated me
for decades. Some of these issues I hope I have hinted at here.
It was almost as if category and topos theory had been invented just for
me....an exaggeration, but it captures my sense of wonder. I haven't
been this excited about a new area in more than a decade. I expect I'll
be doing something in this area for at least the _next_ decade.
My apologies if this explanation of enthusiasm is too personal for you
the reader, but I think enthusiasm is a good thing.
I have always read books in this way (except courses and novels).
In each field I'm interested in I have master books I read and reread
and satellites I consult and reconsult.
Sometimes I am not sure whether I exist only as a slave of dormant ideas
in books, manipulating me so they can spread in some way ...
My books are like butterflies jumping from tables to sofas, following
me everywhere, trying to exchange ideas, linking notions, etc. I am
only a humble servant ... :)
>And much of what Tegmark outlines in his large chart can be
>dramatically simplified and abstracted.
And we need to do that! Because we are distributed in the Tegmark
big structure in such a way that, from our local views, the globally
accessible view is mathematically richer than the big structure!
(Well, more on that later.)
>(Aside: I believe this is a big part of what thinking is about:
>applying thoughts/concepts/morphisms/etc. from one area to another.
>Perhaps category theory will push AI in new ways.
Part of it, sure. Most "explanations" are morphisms in known structures.
>Everything seemed directly related to problems which had fascinated
>me for decades. Some of these issues I hope I have hinted at here.
>
>It was almost as if category and topos theory had been invented just
>for me....an exaggeration, but it captures my sense of wonder. I
>haven't been this excited about a new area in more than a decade. I
>expect I'll be doing something in this area for at least the _next_
>decade.
>
>My apologies if this explanation of enthusiasm is too personal for
>you the reader, but I think enthusiasm is a good thing.
Me too. Now, I feel almost like you about ... knot theory.
And this fits well with your cat-enthusiasm, for knot theory is
a reservoir of beautiful and TOE-relevant categories
(the monoidal ones). I've just
ordered Yetter's book, "Functorial(*) Knot Theory." It is number 24
in Kauffman's series on Knots and Everything (sic) at World
Scientific Publishing. A series which could be a royal series for this list ...
May I recommend no. 1, by Louis Kauffman himself: "Knots and Physics"?
A must for the (quantum) TOEs, and (I speculate now) the comp TOE too!
I knew Yetter's work a long time ago, when I read his paper on
the semantics of noncommutative Girard linear logic.
Unfortunately, later, the Z logics gave me a weakening of
quantum logic (the von Neumann one), which
Yetter dismisses in that paper, so I dismissed Yetter ...
Now I know the Z logics really should have "tensorial semantics",
a sort of many related (glued) von Neumann types of logic (which are
themselves atlases of Boolean logics).
But where (in the Z logics) do those damned tensorial categories come from???
Knots give hints!!! This would explain the geometrical appearance
of realities.
Bruno
(*) For the others: "functorial" really means categorial. Functors are
the morphisms between categories. The first chapter of Yetter's book
is an intro to category theory, the second one an intro to knot theory, ...
I've looked at some of the knot series books, but have put them off for
now.
A good book to prepare for these books is Colin Adams, "The Knot Book:
An Elementary Introduction to the Mathematical Theory of Knots," 1994.
Whether knots are the key to physics, I can't say. Certainly there are
suggestive notions that particles might be some kind of knots in
spacetime (of some dimensionality)...a lot of people have played with
knots, loops, kinks, and braids for the past century.
One thing that Tegmark got right, I think, is the notion that a lot of
branches of mathematics and a lot of mathematical structures probably go
into making up the nature of reality.
This is at first glance dramatically at odds with the ideas of Fredkin,
Toffoli, Wheeler (circa 1970), and Wolfram on the generation of reality
from simple, local rules. Wolfram has made a claim in interviews, and perhaps
somewhere in his new book, that he thinks the Universe may be generated
by a 6-line Mathematica program!
However, while I am deeply skeptical that a 6-line Mathematica program
underlies all of reality, enormous complexity, including conceptual
complexity, can emerge from very simple rules. A very simple example of
this is the game of Go. From extremely simple rules played with two
types of stones on a 19 x 19 grid we get "emergent concepts" which exist
in a very real sense. For example, a cluster of stones may have
"strength" or "influence." Groups of stones develop properties which
individual stones don't have. Abstraction hierarchies abound. The
Japanese have hundreds of names for these emergent, higher-order
structures and concepts. All out of what is essentially a cellular
automaton.
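(The "complexity from simple rules" point is easy to demonstrate directly. Below is a minimal elementary cellular automaton, Rule 110, which is known to be Turing-complete; it is a toy sketch of my own, not Wolfram's claimed 6-line program.)

```python
# Rule 110: a cell's next state depends only on itself and its two
# neighbors, yet the global pattern it generates is famously intricate.

RULE = 110                      # the rule number encodes all 8 local cases

def step(row):
    """One synchronous update, with wrap-around neighbors."""
    n = len(row)
    return [(RULE >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 64
row[32] = 1                     # start from a single live cell

for _ in range(16):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

(One update rule, three bits of input per cell; yet the triangle-within-triangle patterns it prints never settle into anything simple, which is the Go point in miniature.)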
So even if our universe is a program running as a screen saver on some
weird alien's PC, all sorts of complexity can emerge.
Getting down to earth, most of this complexity is best seen as
mathematics, I think.
I expect to take a closer look at knots after I get more math under my
belt.
> ....Now I know the Z logics really should have "tensorial semantics",
> a sort of many related (glued) von Neumann types of logic (which are
> themselves atlases of Boolean logics).
> But where (in the Z logics) do those damned tensorial categories come from???
> Knots give hints!!! This would explain the geometrical appearance
> of realities.
>
> Bruno
>
> (*) For the others: "functorial" really means categorial. Functors are
> the morphisms between categories. The first chapter of Yetter's book
> is an intro to category theory, the second one an intro to knot theory, ...
Exciting stuff.
Or even, perhaps, that "I am experiencing" all instantiations at once.
Eventually it will be the relative proportion of differentiating
(or bifurcating) history-instantiations which should count.
>My consciousness, in that sense, spans many
>parts of the multiverse, and the question of "which universe am I in"
>has no unique answer.
I would even say that the question is meaningless. It is not clear that
all "my" possible experiences can be associated with a "well defined
universe". I think I agree with Hal Finney. Hal, do you defend this
position, or was it only for the purpose of the discussion? Do you have a
definite opinion?
==========
Wei, I hope my way of talking yesterday didn't seem too rude. I am really
trying hard to understand what you don't understand about the necessity
of taking into account the comp 1-indeterminacy in a TOE, once comp is
postulated (comp = Church thesis + a minimal amount of arithmetical
realism + there is a level of self-description such that my private
experience doesn't change under functional substitutions made at that
level).
Bruno
So to elaborate, the reason applying the SSA seemed to lead to a bad
result is that I forgot to apply game theory, which is required here
because we have more than one person making decisions that interact with
each other (even though they're copies of one original person and have the
same preferences).
Here's a simplified thought experiment that illustrates the issue. Two
copies of the subject S, A and B, are asked to choose option 1 or option
2. If A chooses 1, S wins a TV (TV), otherwise S wins a worse TV (TV2). If
B chooses 1, S wins a stereo, otherwise S wins TV. S prefers TV to TV2 to
stereo, but would rather have a TV and a stereo than two TVs. The copies
have to choose without knowing whether they are A or B.
According to my incorrect analysis, SSA would imply that you choose option
2, because that gives you .5*U(TV2) + .5*U(TV) > .5*U(TV) + .5*U(stereo)
since U(TV2) > U(stereo). I argued that you should consider yourself A and
B simultaneously so you could rationally choose option 2, because
U({TV,stereo}) > U({TV2, TV}). However taking both SSA and game theory
into account implies that option 2 is rational. Furthermore, my earlier
suggestion leads to unintuitive results in general, when the two players
do not share the same utility function.
The game theoretic analysis goes like this. There are two possible
outcomes with pure strategies (I'll ignore mixed strategies for now).
Either A and B both choose 1, or they both choose 2. The first one is a
Nash equilibrium, the second may or may not be. To understand what this
means, suppose you are one of the players in this game (either A or B but
you don't know which) and you expect the other player to choose option 1.
Then your expected utility if you choose option 1 is .5*U({TV,stereo}) +
.5*U({TV,stereo}). If you choose option 2, the expected utility is
.5*U({TV2,stereo}) + .5*U({TV,TV}) which is strictly less. So you have no
reason not to choose option 1 if you expect the other player to choose
option 1. Whether or not the second possible outcome is also a Nash
equilibrium depends on whether U({TV2,TV}) > .5*U({TV2,stereo}) +
.5*U({TV,TV}). But even if it is, the players can just coordinate ahead of
time (or implicitly) to choose option 1 and obtain the better equilibrium.
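To make the equilibrium check concrete, here is a small Python sketch. The utility numbers are made up; their only constraint is the stated preference ordering (TV > TV2 > stereo singly, but a TV plus a stereo beats two TVs).

```python
def bundle(a_choice, b_choice):
    """Prizes S wins, given A's and B's choices (per the rules above)."""
    prize_from_a = "TV" if a_choice == 1 else "TV2"
    prize_from_b = "stereo" if b_choice == 1 else "TV"
    return tuple(sorted([prize_from_a, prize_from_b]))

# Hypothetical bundle utilities, consistent with the stated preferences.
U = {
    ("TV", "stereo"): 10,
    ("TV", "TV"): 8,
    ("TV", "TV2"): 7,
    ("TV2", "stereo"): 5,
}

def eu(my_choice, other_choice):
    # SSA: with probability 1/2 I'm A and the other player is B,
    # and vice versa.
    return 0.5 * U[bundle(my_choice, other_choice)] \
         + 0.5 * U[bundle(other_choice, my_choice)]

def is_nash(choice):
    # A symmetric profile (choice, choice) is a Nash equilibrium iff
    # no unilateral deviation raises expected utility.
    other = 2 if choice == 1 else 1
    return eu(choice, choice) >= eu(other, choice)

print(is_nash(1))  # True: both choosing 1 is an equilibrium
print(is_nash(2))  # with these particular numbers, also True, but inferior
```

With these numbers both symmetric profiles happen to be equilibria, which is exactly the case where the players should coordinate on option 1, the better one.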
Now I want to show that while my earlier suggestion to consider yourself
to be both A and B works in this case, it doesn't work in general when A
and B have different utility functions. Consider the following game, which
we can call Amnesiac Prisoner's Dilemma. Players A and B can choose
Cooperate or Defect. If they both Defect they both get sentenced to 5
years in prison. If they both Cooperate they both get 1 year. If one
Cooperates and the other Defects they get 3 months and 10 years
respectively. The twist is that you have no memory of who you are and
don't know whether you're A or B. If you consider yourself to be both A
and B, then you would choose Cooperate, which contradicts our intuition. A
game theoretic analysis with the SSA would go like this. Suppose you
expect the other player to Cooperate. Then EU(Cooperate) = .5*EU(Cooperate
| I'm A) + .5*EU(Cooperate | I'm B) = .5*U(I'm A and A gets 1 year and B
gets 1 year) + .5*U(I'm B and A gets 1 year and B gets 1 year) <
EU(Defect) = .5*EU(Defect | I'm A) + .5*EU(Defect | I'm B) = .5*U(I'm A
and A gets three months and B gets 10 years) + .5*U(I'm B and A gets 10
years and B gets three months). So Cooperate is not a Nash equilibrium.
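A minimal sketch of that comparison, assuming each player's utility is simply minus his own sentence (in years; "three months" = 0.25):

```python
# The 0.5 weights are the SSA probabilities of being A or B; with a
# symmetric, selfish payoff the two conditional terms coincide, but
# they are written out to mirror the analysis above.
SENTENCE = {  # (my_move, other_move) -> my sentence in years
    ("C", "C"): 1,
    ("C", "D"): 10,
    ("D", "C"): 0.25,
    ("D", "D"): 5,
}

def eu(my_move, other_move):
    u = -SENTENCE[(my_move, other_move)]
    return 0.5 * u + 0.5 * u  # EU(... | I'm A) + EU(... | I'm B)

# Expecting the other player to Cooperate:
print(eu("C", "C"))  # -1.0
print(eu("D", "C"))  # -0.25: strictly greater, so mutual Cooperation
                     # is not a Nash equilibrium
```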
Whether knots are the key to physics, I can't say. Certainly there are suggestive notions that particles might be some kind of knots in spacetime (of some dimensionality)...
I am confused about the relation of S to A and B. Did S go into a
copying machine and get two copies, A and B made, in addition to S?
And now A and B are deciding what S will win?
Why should they care? If S gets a TV that does not benefit them. Is it
just that they are similar to S, being recent copies of him, so they
have a sort of brotherly fondness for him and would like to see him happy?
I thought the earlier experiments had a more direct connection, where
the people making the decisions were the same ones who were getting
the reward (or at least, future copies of themselves).
> Now I want to show that while my earlier suggestion to consider yourself
> to be both A and B works in this case, it doesn't work in general when A
> and B have different utility functions. Consider the following game, which
> we can call Amnesiac Prisoner's Dilemma. Players A and B can choose
> Cooperate or Defect. If they both Defect they both get sentenced to 5
> years in prison. If they both Cooperate they both get 1 year. If one
> Cooperates and the other Defects they get 3 months and 10 years
> respectively. The twist is that you have no memory of who you are and
> don't know whether you're A or B. If you consider yourself to be both A
> and B, then you would choose Cooperate, which contradicts our intuition. A
> game theoretic analysis with the SSA would go like this. Suppose you
> expect the other player to Cooperate. Then EU(Cooperate) = .5*EU(Cooperate
> | I'm A) + .5*EU(Cooperate | I'm B) = .5*U(I'm A and A gets 1 year and B
> gets 1 year) + .5*U(I'm B and A gets 1 year and B gets 1 year) <
> EU(Defect) = .5*EU(Defect | I'm A) + .5*EU(Defect | I'm B) = .5*U(I'm A
> and A gets three months and B gets 10 years) + .5*U(I'm B and A gets 10
> years and B gets three months). So Cooperate is not a Nash equilibrium.
I'm not sure I understand this; since the payoff matrix is symmetric it
doesn't matter if you are A or B so I don't see what the point is of
introducing amnesia. Would there be any cases with symmetric payoffs
where an amnesiac would behave differently than someone who knew whether
he was A or B?
Tangentially, it seems that the PD is a case where the "evidential"
vs "causal" principles of decision theory would show a difference.
The evidentialist would argue that by cooperating, it increases the
chance that the other person will cooperate (perhaps to a certainty, in
some versions), hence cooperating can be justified. This is basically
Hofstadter's principle of super-rationality. The causalist would reject
the possible correlation of choices and choose the dominant strategy
of defecting. Does that seem correct?
What about this: you are going to be copied, and your two copies are going
to play a PD game. You know this ahead of time and so you can decide
on whatever strategies you intend to follow during this time before the
copying occurs. Do you think in that case it would be rational to firmly
decide beforehand to cooperate?
And would "amnesia" make a difference? We might arrange for amnesia by
having the duplicates immediately play the game, without any knowledge
of which they are; and remove the amnesia by simply telling them that
one is A and one is B, before they play. Again, with a symmetric game
I don't see how the amnesia or its absence would be relevant. Maybe I
am misunderstanding that aspect.
Hal Finney
Yes, and yes.
> Why should they care? If S gets a TV that does not benefit them. Is it
> just that they are similar to S, being recent copies of him, so they
> have a sort of brotherly fondness for him and would like to see him happy?
Yes.
> I thought the earlier experiments had a more direct connection, where
> the people making the decisions were the same ones who were getting
> the reward (or at least, future copies of themselves)
The same reasoning applies to the earlier experiments as well. I thought
this one was simpler because it removes the issue of whether the value of
an experience depends on the experiences of one's copies.
BTW, evolution programmed us to value certain experiences, even if those
experiences no longer reliably indicate increases in inclusive fitness.
This is not going to last very long if we remain in an evolutionary
regime. Those who value only experiences that reliably indicate increases
in inclusive fitness will have an evolutionary advantage. That means the
same experience will have very different values depending on the subject's
background knowledge. For example the experience of eating a delicious
meal would not be valued if it's known that the experience is a
computer-generated illusion and no actual nutrition is being gained. In my
previous thought experiments, the experiences that were considered rewards
did not indicate increases in inclusive fitness. That doesn't mean it's
irrational to value them, just that most people in the future probably
will not value them.
Similarly, if copying becomes possible, then people who care greatly about
their copies will also have an evolutionary advantage.
> I'm not sure I understand this; since the payoff matrix is symmetric it
> doesn't matter if you are A or B so I don't see what the point is of
> introducing amnesia. Would there be any cases with symmetric payoffs
> where an amnesiac would behave differently than someone who knew whether
> he was A or B?
(*) I think they *shouldn't* behave differently. But if you consider
yourself to be both A and B when you don't know whether you are A or B,
then you *would* behave differently and choose Cooperate instead of
Defect. That's why I think it's wrong to consider yourself to be both A
and B.
> Tangentially, it seems that the PD is a case where the "evidential"
> vs "causal" principles of decision theory would show a difference.
> The evidentialist would argue that by cooperating, it increases the
> chance that the other person will cooperate (perhaps to a certainty, in
> some versions), hence cooperating can be justified. This is basically
> Hofstadter's principle of super-rationality. The causalist would reject
> the possible correlation of choices and choose the dominant strategy
> of defecting. Does that seem correct?
There is some literature on this connection between PD and
evidential vs causal decision theory. For example:
Lewis, D.: 1979, 'Prisoner's Dilemma is a Newcomb's Problem'. In
Campbell and Sowden: 251-255. Originally in Philosophy and Public Affairs
8, 235-240.
and
NEWCOMB'S PROBLEM AND REPEATED PRISONERS' DILEMMAS
Christoph Schmidt-Petri
http://logica.rug.ac.be/censs2002/abstracts/Schmidt-Petri.htm
Personally I think they are separate issues. Causal vs evidential is about
one-agent decision theory, and PD is about multi-agent decision theory
(i.e. game theory). It doesn't make sense to use one-agent decision theory
to analyze PD.
BTW, I'm now having doubts about causal decision theory. Perhaps the extra
generality isn't really needed. See Huw Price's AGENCY AND PROBABILISTIC
CAUSALITY at
http://www.usyd.edu.au/philosophy/price/preprints/AgencyPC.pdf for an
argument against causal decision theory. I will try to summarize my own
thoughts on this matter in another post.
> What about this: you are going to be copied, and your two copies are going
> to play a PD game. You know this ahead of time and so you can decide
> on whatever strategies you intend to follow during this time before the
> copying occurs. Do you think in that case it would be rational to firmly
> decide beforehand to cooperate?
I guess you're assuming that your copies are not going to care about each
other, but that it's possible to commit yourself to cooperate before
you're copied? In that case I think it would be rational to make this
commitment.
> And would "amnesia" make a difference? We might arrange for amnesia by
> having the duplicates immediately play the game, without any knowledge
> of which they are; and remove the amnesia by simply telling them that
> one is A and one is B, before they play. Again, with a symmetric game
> I don't see how the amnesia or its absence would be relevant. Maybe I
> am misunderstanding that aspect.
See the paragraph marked (*) above.
OK, I understand now that the utilities below are the utilities for A
and B when S gets the various items. So U(TV) is the utility for A for
S to get a TV, which is the same as the utility for B since they are
identical copies.
> According to my incorrect analysis, SSA would imply that you choose option
> 2, because that gives you .5*U(TV2) + .5*U(TV) > .5*U(TV) + .5*U(stereo)
> since U(TV2) > U(stereo). I argued that you should consider yourself A and
> B simultaneously so you could rationally choose option 2, because
> U({TV,stereo}) > U({TV2, TV}).
Yes, that makes sense.
> However taking both SSA and game theory
> into account implies that option 2 is rational. Furthermore, my earlier
> suggestion leads to unintuitive results in general, when the two players
> do not share the same utility function.
I know you meant to write that game theory implies that option 2 is
irrational.
> The game theoretic analysis goes like this. There are two possible
> outcomes with pure strategies (I'll ignore mixed strategies for now).
> Either A and B both choose 1, or they both choose 2. The first one is a
> Nash equilibrium, the second may or may not be. To understand what this
> means, suppose you are one of the players in this game (either A or B but
> you don't know which) and you expect the other player to choose option 1.
> Then your expected utility if you choose option 1 is .5*U({TV,stereo}) +
> .5*U({TV,stereo}). If you choose option 2, the expected utility is
> .5*U({TV2,stereo}) + .5*U({TV,TV}) which is strictly less. So you have no
> reason not to choose option 1 if you expect the other player to choose
> option 1. Whether or not the second possible outcome is also a Nash
> equilibrium depends on whether U({TV2,TV}) > .5*U({TV2,stereo}) +
> .5*U({TV,TV}). But even if it is, the players can just coordinate ahead of
> time (or implicitly) to choose option 1 and obtain the better equilibrium.
If option 2 is also a Nash equilibrium, that is better than option 1,
right? This is why option 2 was preferred under the first analysis.
However I see that under this reasoning there are utility assignments
which make option 1 be a Nash equilibrium while option 2 is not, hence
option 1 would be preferred in those cases, despite the earlier reasoning
which would choose option 2.
I have a problem with this application of game theory to a situation where
A and B both know that they are going to choose the same thing, which I
believe is the case here. Let me make this more specific by assuming that
A and B (and S) are deterministic computational systems. Their needs
for randomness are met by an internal pseudo random number generator.
When S is duplicated to form A and B, the PRNG state is duplicated as
well, so that A and B are running exactly the same deterministic program.
This is the situation which most sharply appeals to the intuition that A
and B should be thought of as "the same person". They are two instances
of the same deterministic calculation, with exactly the same steps being
executed for both.
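The point about duplicating the PRNG state can be sketched in a few lines (the seed and the choice rule are arbitrary placeholders):

```python
# Duplicating S's generator state gives A and B identical "random"
# streams, so their choices cannot diverge.
import random

s = random.Random(42)        # S's internal PRNG (seed is arbitrary)
state = s.getstate()         # state captured at duplication time

a, b = random.Random(), random.Random()
a.setstate(state)            # copy A's PRNG
b.setstate(state)            # copy B's PRNG

choices = [("C" if r.random() < 0.5 else "D") for r in (a, b)]
print(choices[0] == choices[1])  # True: same state, same choice
```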
Under these circumstances, I don't see how considerations of Nash
equilibria can arise. These require implicitly assuming that the other
side may choose a different value than yourself. But with the setup I
give, it is physically impossible for that to happen. The other player
has no more freedom to behave differently than does an image in a mirror.
Likewise with the amnesiac prisoner's dilemma, if the amnesia is provided
in the manner I have described, so that both parties are running exactly
the same program and both know that they are doing so, it seems perfectly
reasonable to choose to cooperate. There are actually fewer degrees
of freedom than the game matrix implies; only two possible outcomes,
rather than four. And the best of the two possible outcomes is when
both parties cooperate rather than defect.
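A tiny sketch of the reduced outcome space, using the sentences from the Amnesiac Prisoner's Dilemma above:

```python
# With two copies of one deterministic program, only the diagonal
# outcomes of the PD matrix are reachable.
SENTENCE = {("C", "C"): 1, ("C", "D"): 10, ("D", "C"): 0.25, ("D", "D"): 5}

# Ordinary game: four outcomes. Identical deterministic copies: two.
reachable = [(m, m) for m in ("C", "D")]
best = min(reachable, key=lambda outcome: SENTENCE[outcome])
print(best)  # ('C', 'C'): of the two reachable outcomes, mutual
             # cooperation gives the shorter sentence
```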
This approach suggests a question with regard to the causal interpretation
of Newcomb's paradox. First, as something of a digression, suppose
it turns out that the experimenter's eerie accuracy in the Newcomb
setup is because he has a time machine. After the subject's decision
is made to choose one or two boxes, the experimenter goes back in time
and fills the boxes appropriately. In this case, it seems to me that
the causalist may decide that taking one box is the preferred outcome,
because his choice does *cause* the filling of the boxes. The effect
takes place earlier in time, but given that there is a time machine in
the picture, we have to accept reversed causality. OTOH the causalist may
reject this reasoning, arguing along his usual lines that the boxes have
already been filled, and taking two has to give him more than taking one.
I don't know which conclusion he would choose.
But more relevantly, suppose that the experimenter's secret is as
follows. Let the subject be a deterministic computational system as in
the APD and other examples above. What the experimenter does is to run
the computation forward until it makes a choice. Then he rewinds the
computation to the state it was in at the beginning, and fills the boxes.
Now he runs the computation forward again, where it will make the same
decision (being deterministic) and so the "prediction" is always correct.
Of course, this description is not much different from standard variants
where the experimenter is an alien with a perfect grasp of human
psychology, or God, able to predict with perfection what people will do.
But by making it concrete in terms of deterministic computations, it
allows for a different view from the causal perspective.
Specifically, when the subject is asked to make his decision, he knows
that he will be put into this state twice; once when the experimenter
was running him to find out what he'd do, and again when the actual
choice was made. From the point of view of shared minds, the subject
must view himself as being in a superposition of these two states.
The point is that in one of those two states, his decision does in
fact have a causal effect on the outcome. It is the direct effect of
his decision that lets the experimenter fill the boxes. So from his
subjective perspective, where he doesn't know if this is the first or
second run, he can at least figure that there is a 50% chance that his
decision has a causal effect on the outcome. It seems to me that this
might be enough to justify choosing one box even from a causal analysis.
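A sketch of that rewound-computation setup, using the standard Newcomb payoffs (the dollar amounts are my assumption, not from the text): $1,000 in the visible box, $1,000,000 placed in the opaque box iff the prediction run chose one box. Determinism makes both runs choose alike:

```python
def payoff(choice):
    """choice: 1 = take only the opaque box, 2 = take both boxes."""
    predicted_one_box = (choice == 1)   # prediction run = same program
    opaque = 1_000_000 if predicted_one_box else 0
    return opaque if choice == 1 else opaque + 1_000

print(payoff(1))  # 1000000
print(payoff(2))  # 1000: two-boxing causes, via the prediction run,
                  # an empty opaque box
```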
Hal Finney
Yes.
> > According to my incorrect analysis, SSA would imply that you choose option
> > 2, because that gives you .5*U(TV2) + .5*U(TV) > .5*U(TV) + .5*U(stereo)
> > since U(TV2) > U(stereo). I argued that you should consider yourself A and
> > B simultaneously so you could rationally choose option 2, because
^
I meant "option 1" here -----------------------------------^
> > U({TV,stereo}) > U({TV2, TV}).
>
> Yes, that makes sense.
>
> > However taking both SSA and game theory
> > into account implies that option 2 is rational. Furthermore, my earlier
^--- this should be "1" as well
> > suggestion leads to unintuitive results in general, when the two players
> > do not share the same utility function.
>
> I know you meant to write that game theory implies that option 2 is
> irrational.
Yes. I meant to write "option 1" in two places where I actually
wrote "option 2". Sorry!
> If option 2 is also a Nash equilibrium, that is better than option 1,
> right?
No, option 1 is better than option 2, because S prefers a TV and a stereo
to two TVs.
> This is why option 2 was preferred under the first analysis.
> However I see that under this reasoning there are utility assignments
> which make option 1 be a Nash equilibrium while option 2 is not, hence
> option 1 would be preferred in those cases, despite the earlier reasoning
> which would choose option 2.
No, I think you got confused because of my typos.
I'll answer the rest of your post later. I want to resolve this
misunderstanding ASAP.
I've been reading _Conceptual Mathematics_ but so far have not seen many
connections with topics I'm most interested in learning right now (logic,
recursion theory, decision theory). Perhaps category theory is more
relevant in physics, or I should move on to topos theory.
Topos theory seems to be motivated by intuitionistic logic, which is
considered the logical basis of constructive mathematics (according to
http://plato.stanford.edu/entries/logic-intuitionistic/). Does that mean I
should learn something about intuitionistic logic and constructivism first
before trying to tackle topos theory?
I notice the book "Constructivism in Mathematics" by Troelstra and van Dalen.
Has anyone here read it, or can anyone recommend another book?
In my analysis I did not make the assumption that all of the copies are
deterministic and have no access to independent random numbers, so game
theoretic considerations do apply. In the first scenario where your copies
are trying to win prizes for you, game theory led to the same
conclusion. In the Amnesiac Prisoner's Dilemma, the assumption is that
the players are different people who have suffered temporary amnesia, not
copies of one original person, so they can't be identical deterministic
computations.
On Wed, Jul 17, 2002 at 06:49:04PM -0700, Hal Finney wrote:
> The point is that in one of those two states, his decision does in
> fact have a causal effect on the outcome. It is the direct effect of
> his decision that lets the experimenter fill the boxes. So from his
> subjective perspective, where he doesn't know if this is the first or
> second run, he can at least figure that there is a 50% chance that his
> decision has a causal effect on the outcome. It seems to me that this
> might be enough to justify choosing one box even from a causal analysis.
Again, I think you need to think of yourself as both runs, so there is
probability 1 that your decision has a causal effect on the amount of
money in box 2. Otherwise, how do you compute the expected payoff of
choosing only one box?
Several points:
1. Perhaps you need to give it more time, as Lawvere and Schanuel is not
a quick read. (It's at about the same level as Raymond Smullyan's recent
books on logic, or Hofstadter's "Godel, Escher, Bach": not dense texts,
but also not light reading.)
2. Or perhaps you can do what I do, which is to read several related
books at the same time, the various books illuminating the topic in
various ways.
3. As for fundamental relevance, I see it as important for mathematics
and computer science, as well as for certain (mostly frontier) issues in
physics. Not everyone does, even in math. You mention logic and
recursion theory, so you might want to do some Web searches on some of
these names: Martin Hyland (the effective topos, Eff), Joyal, Bell (not
the QM Bell), in connection with topos theory. The Paul Taylor book I
mentioned a while back has much material on logic and computability, with
lots of connections shown with category and topos theory.
(But, "moving on" to topos theory without knowing some category theory
is no more advisable than "moving on" to quantum theory without knowing
a fair amount of classical physics would be. Nothing wrong with dabbling
in several areas--QM, GR, classical--at the same time, using the "MBI"
(Many Books Interpretation).)
4. I find category and topos theory to be refreshing, stimulating, and
foundational. The Lawvere and Schanuel is a leisurely,
lots-of-exposition-and-motivation treatment which requires almost no
prerequisites, which is why both Bruno and myself recommend it. It is
definitely _not_ a "Why category theory is really important!" sort of
book.
5. Your mileage may vary. It may never be as interesting to you as it is
to me. That's a feature, not a bug. Or you may come back to it after a
while.
(Looking through my copy of Pearl's "Causality," simple graph theory
would be more immediately useful as background prep.)
> Topos theory seems to be motivated by intuitionistic logic, which is
> considered the logical basis of constructive mathematics (according to
> http://plato.stanford.edu/entries/logic-intuitionistic/). Does that
> mean I
> should learn something about intuitionistic logic and constructivism
> first
> before trying to tackle topos theory?
Yes, I think so. Doesn't mean you need to read an entire book on it,
though.
Smullyan's "Forever Undecided," 1987, is a good survey of parts of
modern logic. Not much on intuitionistic logic, but still good
preparation. A good chapter on possible worlds and Kripke's
contributions. (Which has strong connections to topos theory, as I've
mentioned.)
> Hal Finney wrote:
> > Why should they care? If S gets a TV that does not benefit them. Is it
>> just that they are similar to S, being recent copies of him, so they
>> have a sort of brotherly fondness for him and would like to see him happy?
>
>Yes.
But this confuses the 1 and 3 person points of view.
So, either you are a zombie (which I doubt), or you hide the "mind data"
under the rug, or I miss something.
If you have problems with the 1/3 distinction, perhaps you could take
a look at the book edited by Casti, and in particular at Rossler on
"endophysics", for more physical motivations. Svozil's book even gives
an endophysics related to quantum logics (due basically to Moore and
Conway).
Rossler, O.E., "Endophysics", in Real Brains, Artificial Minds, J.L.
Casti and A. Karlqvist (eds), North-Holland, New York, 1987.
Rossler makes a link between the 1-3 distinction and covariance. He even
attributes the link to Boscovich in "Boscovich Covariance", J.L. Casti
and A. Karlqvist (eds), CRC Press, 1991.
Bruno
Bruno, which of the Tegmark's 'Everything papers' did you have in your mind?
> emerged from the realisation that mathematical structures are themselves
> mathematically structured. The categorist applies the every-structure
> principle to each structure. Take all groups, and all morphisms between
> groups: you get the category of groups. It is one mathematical structure,
> a category (with objects = groups and arrows = homomorphisms) which, in
> some sense, captures the essence of "group".
Cheers,
mirek
>
> Bruno Marchal in an older post wrote:
>>> Also,
>>> can you elaborate a bit more on the motivation behind category
>>> theory?
>>> Why
>>> was it invented, and what problems does it solve? What's the
>>> relationship
>>> between category theory and the idea that all possible universes
>>> exists?
>>
>>
>> Tim makes a very genuine remark (but he writes so much I fear that
>> has
>> been unnoticed!). He said: read Tegmark (Everything paper), then
>> learn
>> category, then read again Tegmark. Indeed I would say category
>> theory has
>
> Bruno, which of the Tegmark's 'Everything papers' did you have in
> your mind?
I guess it is this one:
http://space.mit.edu/home/tegmark/index.html
But it looks like the paper is alive and evolving. I was thinking of its
diagram of mathematical structures.
Category theory puts a "natural" order on mathematical theories.
But recursion theory is a sort of obstacle. Category theory works
well for a sort of first-person recursion theory (like with
realizability, typed lambda calculus/combinators, etc.).
Then category theory is a must for knots and geometry ...
Welcome back,
Bruno